Planet Russell


Sociological Images: Explaining Trump

Originally posted at Made in America.

Explaining how such an unfit candidate and such a bizarre candidacy succeeded has become a critical concern for journalists and scholars. Through sites like Monkey Cage, Vox, and 538, as well as academic papers, we can watch political scientists in real time try to answer the question, “What the Hell Happened?” (There are already at least two catalogs of answers, here and here, and a couple of college-level Trump syllabi.) Although a substantial answer will not emerge for years, this post is my own morning-after answer to the “WTHH?” question.

I make three arguments: First, Trump’s electoral college victory was a fluke, a small accident with vast implications, but from a social science perspective not very interesting. Second, the deeper task is to understand who were the distinctive supporters for Trump, in particular to sort out whether their support was rooted mostly in economic or in cultural grievances; the evidence suggests cultural. Third, party polarization converted Trump’s small and unusual personal base of support into 46 percent of the popular vote.

Explaining November 8, 2016

Why did Donald Trump, an historically flawed candidate even to many of those who voted for him, win? With his margin resting on a small number of votes in three states (about 100,000, strategically located), many explanations can all be true:

* Statistical fluke: Trump won 2.1 percentage points less of the popular vote than did Clinton, easily the largest negative margin of an incoming president in 140 years. (Bush was only 0.5 points behind Gore in 2000.) Given those numbers, Trump’s electoral college win was like getting two or three snake-eye dice rolls in a row. Similarly, political scientists’ structural models–which assume “generic” Democratic and Republican candidates and predict outcomes based on party incumbency and economic indicators–forecast a close Republican victory. “In 2012, the ‘fundamentals’ predicted a close election and the Democrats won narrowly,” wrote Larry Bartels. “In 2016, the ‘fundamentals’ predicted a close election and the Republicans won narrowly. That’s how coin tosses go.” But, of course, Donald Trump is far from a generic Republican. That’s what energizes the search for a special explanation.

* FBI Director Comey’s email announcement in the closing days of the election appeared to sway enough undecided voters to easily make the 100,000-vote difference.

* Russian hacks plus Wikileaks.

* The Clinton campaign. Had she visited the Rust Belt more, embraced Black Lives Matter less (or more), or used a slogan that pointed to economics instead of diversity… who knows? Pundits have been mud-wrestling over whether her campaign was too much about identity politics or whether all politics is identity politics. Anyway, surely some tweak here would have made a difference.

* Facebook and fake news.

* The weather. It was seasonably mild with only light rain in the upper Midwest on November 8. Storms or snow would probably have depressed rural turnout enough to make Clinton president.

* The Founding Fathers. They meant the electoral college to quiet vox populi (and so it worked in 1824, when John Quincy Adams defeated Andrew Jackson despite trailing him by about 10 points in the popular vote).

* Add almost anything you can imagine that could have moved less than one percent of the PA/MI/WI votes or of the national vote.

* Oh, and could Bernie have won? Nah, no way, no how. [1]

Small causes can have enormous consequences: the precise flight of a bullet on November 22, 1963; missed intelligence notes about the suspicious student pilots before the 9/11 attacks; and so on. Donald Trump’s victory could become extremely consequential, second only to Lincoln’s in 1860, argues journalist James Fallows, [2] but the margin that created the victory was very small, effectively an accident. From an historical and social science point of view, there is nothing much interesting in Trump’s electoral college margin.

Trump’s Legions

More interesting is Trump’s energizing and mobilizing so many previously passive voters, notably during the primaries. What was that about?

One popular answer is that Trump’s base is composed of people, particularly members of the white working class (WWC), who are suffering economic dislocation. Because their suffering has not been addressed, they rallied to a jobs champion.

Another answer is that Trump’s core is composed of people, largely but not only WWC, with strong cultural resentments. While often suffering economically and voicing economic complaints, they are mainly distinguished by holding a connected set of racial, gender, anti-immigrant, and class resentments–resentments against those who presumably undermined America’s past “greatness,” resentments which tend to go together with tendencies toward authoritarianism (see this earlier post).

The empirical evidence so far best supports the second account. Indicators of cultural resentment better account for Trump support than do indicators of economic hardship or economic anxiety. [3]

In-depth, in-person reports have appeared that flesh out these resentments in ways that survey questions only roughly capture. They describe feelings of being pushed out of the way by those who are undeserving, by those who are not really Americans; feelings of being neglected and condescended to by over-educated coastal elites; feelings of seeing the America they nostalgically remember falling away. [4]

Defenders of the economic explanation would point to the economic strains and grievances of the WWC. Those difficulties and complaints are real–but they are not new. Less-educated workers have been left behind for decades now; the flat-lining of their incomes started in the 1970s, with a bit of a break in the late 1990s. Moreover, the economy has been in an upswing in the last few years; the unemployment rate was about 8 percent when Obama was re-elected in 2012, but about half of that when Trump was elected. Economic conditions do not explain 2016.

Nor are complaints about economic insecurity new. For example, the percentage of WWC respondents to the General Social Survey who said that they were dissatisfied with their financial situations has varied around 25 percent (+/- 5 points) over the last 30 years. The percentage dissatisfied did hit a high in the early years of the Great Recession (34 percent in 2010), but it dropped afterwards (to 31% in 2012 when Obama was re-elected and 29% in 2014). Economic discontent has been trending down, not up. [5] That only one-fifth of Trump voters supported raising the minimum wage to $15 further undercuts the primacy of economic motives.

To be sure, journalists can find and record the angry voices of economic distress; they do so virtually every election year. (Remember the painful stories about the foreclosure crisis and about lay-offs during the Great Recession?). There was little distinctive about either the economic distress or the economic anxiety of 2016 to explain Trump.

Some have noted, however, what appear to be a significant number of voters who supported Obama in 2008 or in 2012 and seem to have switched to Trump in 2016 (e.g., here). Do these data not undermine a cultural, specifically a racial, explanation for Trump? No. In 2008, Obama received an unusual number of WWC votes because of the financial collapse, the Iraq fiasco, and Bush’s consequent unpopularity. These immediate factors overrode race for many in the WWC. But WWC votes for Obama dropped in 2012 despite his being the incumbent. Then, last November, the WWC vote for a Democratic candidate reverted back to its pre-Great Recession levels. [6] Put another way, Clinton’s support from the WWC was not especially low; Obama’s was unusually high for temporary reasons.

What was special about 2016 was the candidate: Donald Trump explicitly and loudly voiced the cultural resentments and authoritarian impulses of many in the WWC (and some in the middle class, too) that had been there for years but had had no full-throated champion–not Romney, not McCain, not the Bushes, probably not even Reagan–since perhaps Richard Nixon. What changed was not the terrain for a politics of resentment but the arrival of an unusual tiller of that soil, someone who brought out just enough of these voters to win his party’s nomination and to boost turnout in particular places for the general election. As one analyst wrote, “Trump repeatedly went where prior Republican presidential candidates were unwilling to go: making explicit appeals to racial resentment, religious intolerance, and white identity.”

But this is still less than half the story.

Party Polarization

To really understand how Trump could get 46 percent of the vote, it takes more than identifying the distinctive sorts of people whom Trump attracted, because they are not that numerous. Trump won only a minority of the primary votes and faced intense opposition within his party. In the end, however, almost all Republicans came home to him–even evangelicals, to whose moral standards Trump is a living insult. The polarization of American politics in recent years was critical. Party ended up mattering more to college-educated, female, and suburban Republicans than whatever distaste they had for Trump the man.

Consider how historically new this development is. In 1964, the Republican nominee, Barry Goldwater, was considered to be at the far right end of the political spectrum. About 20 to 25% of Republicans crossed over and voted for Democrat Lyndon Johnson. (This crossover was mirrored by Democrats in the 1972 election. [7]) In 2016, by contrast, fewer than 10% of Republicans abandoned Trump–a far more problematic candidate than Goldwater–so much has America become polarized by party in the last couple of decades. [8]


Readings of the 2016 election as the product of a profound shift in American society or politics are overblown. In particular, notions that the WWC’s fortunes or views shifted so substantially in recent years as to account for Trump seem wrong.

What about the argument that the Trump phenomenon is part of a general rise across the western world of xenophobia? I don’t see much evidence outside of the Trump case itself for that in the United States. Long-term data suggest a decline–too slowly, for sure–rather than an increase in such attitudes.[9] And let’s not forget: Hillary Clinton won the popular vote.

The best explanation of why Trump got 46% of the ballots: Advantages for the out party in a third-term election + Trump’s unusual cultural appeal to a minority but still notable number of Americans + historically high party polarization. That Trump actually won the electoral college as well is pretty much an accident, albeit a hugely consequential one.



[1] Basically no one, including Trump, said anything bad about Bernie Sanders from the moment it became clear that Sanders would lose the primaries to Clinton. Had he been nominated, that silence would have ended fast and furiously. Moreover, as the always astute Kevin Drum pointed out, Sanders is much too far to the left to get elected, even way to the left of George McGovern, who got creamed in 1972. Finally, the “Bernie Brothers” who avoided Clinton would have been more than outnumbered by Hillary’s pissed-off sisters if she had been once again displaced by a man.

[2] On the other hand, economist-blogger Tyler Cowen is skeptical: If the doomsayers are right, why aren’t investors dumping equities, shorting the market, or fleeing to safer commodities?

[3] See these sources: 1, 2, 3, 4, 5, 6.

[4] For examples: 1, 2, 3, 4.

[5] My analysis of the GSS through 2014. White working class is defined as whites who have not graduated college.

[6] Again, I used the GSS. In 2000 and 2004, the Democratic candidates, Gore and Kerry, got about 35 percent of the WWC vote, about what Bill Clinton got in his first run in 1992. Obama got substantially more, 48%, in 2008, then somewhat less, 42%, in 2012. Hillary Clinton got, according to a very different sort of survey, the exit polls, 29% of the WWC, but it is hard to compare the two methods. Note that the GSS reports of who people voted for in the previous election tend to skew toward the winners, but the point still stands that Obama’s jump in support from the WWC, especially in 2008, was quite unusual, not Hillary Clinton’s apparent slump in support.

[7] According to Gallup’s last poll before the 1964 election, 20% of Republicans were going to vote for Johnson. According to my analysis of the American National Election Survey, which is retrospective, 26% actually did. In 1972, the Democrats nominated the most left-leaning candidate of the postwar era. According to Gallup data, 33% of Democrats crossed over to vote for Nixon. ANES data suggest that about 40 percent did. Whatever the specifics, there was much more cross-over voting 40 to 50 years ago, even under milder provocation.

[8] On the decline of ticket-splitting, see here.

[9] For example, one of the longest-running items in the GSS is the question, “I’d like you to tell me whether you think we’re spending too much money … too little money, or about the right amount … improving the conditions of Blacks.” In the 1970s, 28% of whites said too much; in the 2000s, 19% did. Another question was asked only through 2002: “Do you agree or disagree… (Negroes/blacks/African-Americans) shouldn’t push themselves where they’re not wanted?” In the 1970s, 74% of whites agreed; from 1990 to 2002, 15% did. More striking, in the 1970s, 11% of whites “strongly disagreed”; from 1990 to 2002, 32% did. On immigrants: David Weakliem has graphed responses to a recurrent Gallup Poll question, “Should immigration be kept at its present level, increased or decreased?”. From 1965 to the mid-1990s, the trend was strongly toward “decreased,” but since then the trend has strongly been toward “increased” (although that’s still a minority view).

Claude S. Fischer, PhD is a sociologist at UC Berkeley and is the author of Made in America: A Social History of American Culture and Character. This post originally appeared at his blog, Made in America.

Worse Than Failure: Error'd: Nicht Gesprochen

"I can't read German, but that doesn't look like glowing praise," writes Bruno G.


TC wrote, "This is a thank-you email from Oracle after signing up to their forum. Personally I usually go by just 36db for short. 36bd5a41-416f-438c-93e0-d4dd04bf860e is my father's name!"


"I've heard FSX addons are expensive, but I'll pass on this one anyway," writes Stephan.


"This Windows Embedded installation is provided with the best license I have ever found on a software package. Forget about Linux, this is freedom!" Arrigo M. wrote.


Finlay writes, "You know, I appreciate how Xcode likes to provide extra technical information in the small print."


"Yeah...whole lotta crashing going on here. Time to go read a book and let my computer think about what it's done," wrote Frankie.


Will H. writes, "You know, if they know there is going to be an error, why not FIX IT instead of warning me?!"



Planet Linux Australia: David Rowe: OQPSK Modem Simulation

A friend of mine is developing a commercial OQPSK modem and was a bit stuck. I’m not surprised as I’ve had problems with OQPSK in the past as well. He called to run a few ideas past me and I remembered I had developed a coherent GMSK modem simulation a few years ago. Turns out MSK and friends like GMSK can be interpreted as a form of OQPSK.

A few hours later I had a basic OQPSK modem simulation running. At that point we sat down for a bottle of Sparkling Shiraz and some curry to celebrate. The next morning, slightly hung over, I spent another day sorting out the diabolical phase and timing ambiguity issues to make sure it runs at all sorts of timing and phase offsets.

So oqsk.m is a reference implementation of an Offset QPSK (OQPSK) modem simulation, written in GNU Octave. It’s complete, including timing and phase offset estimation, and phase/timing ambiguity resolution. It handles phase, frequency, timing, and sample clock offsets. You could run it over real world channels.

It’s performance is bang on ideal for QPSK:

I thought it would be useful to publish this blog post as OQPSK modems are hard. I’ve had a few run-ins with these beasts over the years and had headaches every time. This business about the I and Q arms being half a symbol offset from each other makes phase synchronisation very hard and does your head in. Here is the Tx waveform; you can see the half symbol time offset at the instants where the I and Q symbols change:

As this is unfiltered OQPSK, the Tx waveform is just the Tx symbols passed through a zero-order hold. That’s a fancy way of saying we keep the symbol values constant for M=4 samples then change them.
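The simulation itself is GNU Octave; purely as a hedged illustration, here is a minimal Python sketch of that kind of zero-order hold OQPSK transmitter. The function name, bit-to-symbol mapping, and padding are my own assumptions, not taken from oqsk.m:

```python
import numpy as np

def oqpsk_tx(bits, M=4):
    """Generate an unfiltered OQPSK waveform.

    Even-indexed bits map to I, odd-indexed bits to Q; each +/-1
    symbol is held constant for M samples (zero-order hold), and the
    Q arm is delayed by half a symbol (M/2 samples) relative to I.
    """
    symbols = 1 - 2 * np.asarray(bits)           # bit 0 -> +1, bit 1 -> -1
    i_sym, q_sym = symbols[0::2], symbols[1::2]
    i = np.repeat(i_sym, M).astype(float)        # zero-order hold
    q = np.repeat(q_sym, M).astype(float)
    q = np.concatenate([np.zeros(M // 2), q])    # half-symbol offset on Q
    i = np.concatenate([i, np.zeros(M // 2)])    # pad I to the same length
    return i + 1j * q

tx = oqpsk_tx([0, 1, 1, 0, 0, 0, 1, 1], M=4)
```

Because of the half-symbol offset, only one arm can change at any instant, which is what limits the envelope excursions relative to plain QPSK.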

There are very few complete reference implementations of high quality modems on the Internet, so it’s become a bit of a mission of mine. By “complete” I mean pushing past the textbook definitions to include real world synchronisation. By “high quality” I mean tested against theoretical performance curves with different channel impairments. Or even tested at all. OQPSK is a bit obscure and it’s even harder to find any details of how to build a real world modem. Plenty of information on the basics, but not the nitty gritty details like synchronisation.

The PLL and timing loop simultaneously provides phase and timing estimation. I derived it from a similar algorithm used for the GMSK modem simulation. Unusually for me, the operation of the timing and phase PLL loop is still a bit of a mystery. I don’t quite fully understand it. Would welcome more explanation from any readers who are familiar with it. Parts of it I understand (and indeed I engineered) – the timing is estimated on blocks of samples using a non-linearity and DFT, and the PLL equations I worked through a few years ago. It’s also a bit old school; I’m used to feed-forward type estimators, not something this “analog”. Oh well, it works.

Here is the phase estimator PLL loop doing its thing. You can see the Digital Controlled Oscillator (DCO) phase tracking a small frequency offset in the lower subplot:

Phase and Timing Ambiguities

The phase/timing estimation works quite well (great scatter diagram and BER curve), but can sync up with some ambiguities. For example the PLL will lock on the actual phase offset plus integer multiples of 90 degrees. This is common with phase estimators for QPSK and it means your constellation has been rotated by some multiple of 90 degrees. I also discovered that combinations of phase and timing offsets can cause confusion. For example a 90 degree phase shift swaps I and Q. As the timing estimator can’t tell I from Q it might lock onto a sequence like …IQIQIQI… or …QIQIQIQ…. leading to lots of pain when you try to de-map the sequence back to bits.

So I spent a Thursday exploring these ambiguities. I ended up correlating the known test sequence with the I and Q arms separately, and worked out how to detect IQ swapping and the phase ambiguity. This was tough, but it’s now handling the different combinations of phase, frequency and timing offsets that I throw at it. In a real modem with unknown payload data a Unique Word (UW) of 10 or 20 bits at the start of each data frame could be used for ambiguity resolution.
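For illustration only, here is a toy Python sketch of the correlation idea for detecting IQ swapping. This is my own simplified version of the concept, not the ambiguity-resolution logic in oqsk.m; the function name and test sequences are invented:

```python
import numpy as np

def detect_iq_swap(rx_i, rx_q, known_i, known_q):
    """Detect whether a 90-degree rotation has swapped the I and Q
    arms, by correlating each received arm against the known test
    sequences and comparing straight vs swapped hypotheses."""
    straight = abs(np.dot(rx_i, known_i)) + abs(np.dot(rx_q, known_q))
    swapped  = abs(np.dot(rx_i, known_q)) + abs(np.dot(rx_q, known_i))
    return swapped > straight

rng = np.random.default_rng(1)
ki = rng.choice([-1.0, 1.0], 100)   # known I test sequence
kq = rng.choice([-1.0, 1.0], 100)   # known Q test sequence

assert detect_iq_swap(kq, ki, ki, kq)       # swapped arms flagged
assert not detect_iq_swap(ki, kq, ki, kq)   # straight-through passes
```

In a real modem the same correlations against a Unique Word could also resolve the sign of each arm, covering the full set of 90-degree ambiguities.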

Optional Extras

The modem lacks an initial frequency offset estimator, but the PLL works OK with small freq offsets like 0.1% of the symbol rate. It would be useful to add an outer loop to track these frequency offsets out.

As it uses feedback loops it’s not super fast to sync, and is best suited to continuous rather than burst operation.

The timing recovery might need some work for your application, as it just uses the nearest whole sample. So for a small over-sample rate M=4, a timing offset of 2.7 samples will mean it chooses sample 3, which is a bit coarse, although given our BER results it appears unfiltered PSK isn’t too sensitive to timing errors. Here is the timing estimator tracking a sample clock offset of 100ppm, you can see the coarse quantisation to the nearest sample in the lower subplot:

For small M, a linear interpolator would help. If M is large, say 10 or 20, then using the nearest sample will probably be good enough.
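As a sketch of the linear interpolator idea (assumed details, not code from the modem), weighting the two neighbouring samples by the fractional timing estimate looks like this:

```python
def interp_sample(x, tau):
    """Linearly interpolate x at fractional index tau.

    For example, a timing estimate of 2.7 gives 0.3*x[2] + 0.7*x[3],
    rather than simply picking the nearest sample x[3]."""
    n = int(tau)
    frac = tau - n
    return (1.0 - frac) * x[n] + frac * x[n + 1]

x = [0.0, 1.0, 2.0, 3.0, 4.0]
y = interp_sample(x, 2.7)   # lands between x[2] and x[3]
```

For a ramp like this the interpolated value is just tau itself, which makes the weighting easy to check by hand.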

This modem is unfiltered PSK, so it has broad lobes in the transmit spectrum. Here is the Tx spectrum at Eb/No=4dB:

The transmit filter is just a zero-order hold and the receive filter an integrator. Raised cosine filtering could be added if you want a narrower bandwidth. This will probably make it more sensitive to timing errors.

Like everything with modems, test it by measuring the BER. Please.


oqsk.m – GNU Octave OQPSK modem simulation

GMSK Modem Simulation blog post that was used as a starting point for the OQPSK modem.

Planet Linux Australia: David Rowe: Codec 2 700C

My endeavor to produce a digital voice mode that competes with SSB continues. For a big chunk of 2016 I took a break from this work as I was gainfully employed on a commercial HF modem project. However since December I have once again been working on a 700 bit/s codec. The goal is voice quality roughly the same as the current 1300 bit/s mode. This can then be mated with the coherent PSK modem, and possibly the 4FSK modem for trials over HF channels.

I have diverged somewhat from the prototype I discussed in the last post in this saga. Lots of twists and turns in R&D, and sometimes you just have to forge ahead in one direction leaving other branches unexplored.


Sample 1300 700C
hts1a Listen Listen
hts2a Listen Listen
forig Listen Listen
ve9qrp_10s Listen Listen
mmt1 Listen Listen
vk5qi Listen Listen
vk5qi 1% BER Listen Listen
cq_ref Listen Listen

Note the 700C samples are a little lower in level, an artifact of the post filtering as discussed below. What I listen for is intelligibility: how easy is the sample to understand compared to the reference 1300 bit/s samples? Is it muffled? I feel that 700C is roughly the same as 1300. Some samples are a little better (cq_ref), some (ve9qrp_10s, mmt1) a little worse. The artifacts and frequency response are different. But close enough for now, and worth testing over air. And hey – it’s half the bit rate!

I threw in a vk5qi sample with 1% random errors, and it’s still usable. No squealing or ear damage, but perhaps more sensitive than 1300 to the same BER. Guess that’s expected – every bit means more at a lower bit rate.

Some of the samples like vk5qi and cq_ref are strongly low pass filtered, others like ve9qrp are “flat” spectrally, with the high frequencies at about the same level as the low frequencies. The spectral flatness doesn’t affect intelligibility much but can upset speech codecs. Might be worth trying some high pass (vk5qi, cq_ref) or low pass (ve9qrp_10s) filtering before encoding.


Below is a block diagram of the signal processing. The resampling step is the key: it converts the time varying number of harmonic amplitudes to a fixed number (K=20) of samples. They are sampled using the “mel” scale, which means we take more finely spaced samples at low frequencies, with coarser steps at high frequencies. This matches the log frequency response of the ear. I arrived at K=20 by experiment.
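The mel spacing can be sketched in a few lines of Python. The mel mapping used here is the common 2595*log10(1 + f/700) formula; the frequency range and the exact formula used inside Codec 2 700C are my assumptions for illustration, only K=20 comes from the text:

```python
import numpy as np

def mel_sample_freqs(K=20, f_low=200.0, f_high=3700.0):
    """Return K frequencies spaced uniformly on the mel scale:
    finer steps at low frequencies, coarser steps at high ones."""
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_inv = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    m = np.linspace(mel(f_low), mel(f_high), K)   # uniform in mel
    return mel_inv(m)                             # back to Hz

f = mel_sample_freqs()
```

Sampling the harmonic amplitude envelope at these K frequencies is what turns a variable-length set of harmonics into a fixed-length vector the VQ can handle.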

The amplitudes and even the Vector Quantiser (VQ) entries are in dB, which is very nice to work in and matches the ear’s logarithmic amplitude response. The VQ was trained on just 120 seconds of data from a training database that doesn’t include any of the samples above. More work required on the VQ design and training, but I’m encouraged that it works so well already.

Here is a 3D plot of amplitude in dB against time (300 frames) and the K=20 frequency vectors for hts1a. You can see the signal evolving over time, and the low levels at the high frequency end.

The post filter is another key step. It raises the spectral peaks (formants) and lowers the valleys (anti-formants), greatly improving the speech quality. When the peak/valley ratio is low, the speech takes on a muffled quality. This is an important area for further investigation. Gain normalisation after post filtering is why the 700C samples are lower in level than the 1300 samples. Need some more work here.

The two stage VQ uses 18 bits, energy 4 bits, and pitch 6 bits, for a total of 28 bits every 40ms frame. Unvoiced frames are signalled by a zero value in the pitch quantiser, removing the need for a voicing bit. It doesn’t use differential-in-time encoding, which makes it more robust to bit errors.
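The frame budget above is easy to sanity check with a little arithmetic:

```python
# Codec 2 700C bit allocation per 40 ms frame, as described above
bits_vq, bits_energy, bits_pitch = 18, 4, 6
frame_bits = bits_vq + bits_energy + bits_pitch

frames_per_s = 25                    # one frame every 40 ms
bit_rate = frame_bits * frames_per_s # 28 * 25 = 700 bit/s
```

So 28 bits every 40 ms lands exactly on the 700 bit/s target of the mode’s name.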

Days and days of very careful coding and checks at each development step. It’s so easy to make a mistake or declare victory early. I continually compared the output speech to a few Codec 2 1300 samples to make sure I was in the ball park. This reduced the subjective testing to a manageable load. I used automated testing to compare the reference Octave code to the C code, porting and testing one signal processing module at a time. Sometimes I would just printf rows of vectors from two versions and compare the two, old school but quite effective at spotting the step where the bug crept in.

Command line

The Octave simulation code can be driven by the scripts newamp1_batch.m and newamp1_fby.m, in combination with c2sim.

To try the C version of the new mode:

codec2-dev/build_linux/src$ ./c2enc 700C ../../raw/hts1a.raw - | ./c2dec 700C - -| play -t raw -r 8000 -s -2 -

Next Steps

Some thoughts on FEC. A (23,12) Golay code could protect the most significant bits of the 1st VQ index, pitch, and energy. The VQ could be organised to tolerate errors in a few of its bits by sorting to make an error jump to a ‘close’ entry. The extra 11 parity bits would cost 1.5dB in SNR, but might let us operate at significantly lower SNR on an HF channel.
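One way to arrive at that roughly 1.5 dB figure, assuming the 11 parity bits simply dilute the per-bit energy of the 28-bit frame in a fixed power/time budget:

```python
import math

# Sending 28 + 11 = 39 bits in the same energy budget as 28 bits
# reduces Eb by the ratio 39/28, i.e. 10*log10(39/28) dB
data_bits, parity_bits = 28, 11
cost_db = 10.0 * math.log10((data_bits + parity_bits) / data_bits)
```

This gives about 1.44 dB, consistent with the roughly 1.5 dB quoted above, before counting the coding gain the Golay code buys back.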

Over the next few weeks we’ll hook up 700C to the FreeDV API, and get it running over the air. Release early and often – let’s find out if 700C works in the real world and provides a gain in performance on HF channels over FreeDV 1600. If it looks promising I’d like to do another lap around the 700C algorithm, investigating some of the issues mentioned above.

Planet Linux Australia: David Rowe: Physics of Road Rage

A few days ago while riding my bike I was involved in a spirited exchange of opinions with a gentleman in a motor vehicle. After said exchange he attempted to run me off the road, and got out of his car, presumably with intent to assault me. Despite the surge of adrenaline I declined to engage in fisticuffs, dodged around him, and rode off into the sunset. I may have been laughing and communicating further with sign language. It’s hard to recall.

I thought I’d apply some year 11 physics to see what all the fuss was about. I was in the middle of the road, preparing to turn right at a T-junction (this is Australia remember). While his motivations were unclear, his vehicle didn’t look like an ambulance. I am assuming he was not an organ courier, and that there probably wasn’t a live heart beating in an icebox on the front seat as he raced to the transplant recipient. Rather, I am guessing he objected to me being in that position, as that impeded his ability to travel at full speed.

The street in question is 140m long. Our paths crossed half way along at the 70m point, with him traveling at the legal limit of 14 m/s, and me a sedate 5 m/s.

Let’s say he intended to brake sharply 10m before the T junction, so he could maintain 14 m/s for at most 60m. His optimal journey duration was therefore 4 seconds. My monopolization of the taxpayer funded side-street meant he was forced to endure a 12 second journey. The 8 second difference must have seemed like an eternity, no wonder he was angry, prepared to risk physical injury and an assault charge!
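The year 11 physics above fits in a few lines, using only the figures already given:

```python
# Timing of the contested journey, from the figures above
distance_m = 60.0   # 70 m to the junction, minus 10 m of braking
car_ms     = 14.0   # legal limit, roughly 50 km/h
bike_ms    = 5.0    # sedate cycling pace

t_unimpeded = distance_m / car_ms    # ~4.3 s, rounded to 4 s in the text
t_behind    = distance_m / bike_ms   # 12 s stuck behind the bike
delay       = t_behind - t_unimpeded # the ~8 s of outrage
```

Eight seconds, for the record, is about the time it takes to get out of a car.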

Planet Linux Australia: David Rowe: Horus 39 – Fantastic High Speed SSDV Images

A great result from our high speed SSDV image (Wenet) system, which we flew as part of Horus 38 on Saturday Dec 3. A great write up and many images on the AREG web site.

One of my favorite images below, just before impact with the ground. You can see the parachute and the tangled remains of the balloon in the background, the yellow fuzzy line is the nylon rope close to the lens.

Well done to the AREG club members (in particular Mark) for all your hard work in preparing the payloads and ground stations.

High Altitude Ballooning is a fun hobby. It’s a really nice day out driving in the country with nice people in a car packed full of technology. South Australia has some really nice bakeries that we stop at for meat pies and donuts on the way. Yum. It was very satisfying to see High Definition (HD) images immediately after take off as the balloon soared above us. Several ground stations were collecting packets that were re-assembled by a central server – we crowd sourced the image reception.

Open Source FSK modem

Surprisingly we were receiving images while mobile for much of the flight. I could see the Eb/No move up and down about 6dB over 3 second cycles, which we guess is due to rotation or swinging of the payload under the balloon. The antennas used are not omnidirectional so the change in orientation of tx and rx antennas would account for this signal variation. Perhaps this can be improved using different antennas or interleaving/FEC.

Our little modem is as good as the Universe will let us make it (near perfect performance against theory) and it lived up to the results predicted by our calculations and tested on the ground. Bill, VK5DSP, developed a rate 0.8 LDPC code that provides 6dB coding gain. We were receiving 115 kbit/s data on just 50mW of tx power at ranges of over 100km. Our secret is good engineering, open source software, $20 SDRs, and a LNA. We are outperforming commercial chipsets with open source.

The same modem has been used for low bit rate RTTY telemetry and even innovative new VHF/UHF Digital Voice modes.

The work on our wonderful little FSK modem continues. Brady O’Brien, KC9TPA has been refactoring the code for the past few weeks. It is now more compact, has a better command line interface, and most importantly runs faster, so we’re getting close to running high speed telemetry on a Raspberry Pi and fully embedded platforms.

I think we can get another 4dB out of the system, bringing the MDS down to -116dBm – if we use 4FSK and lose the RS232 start/stop bits. What we really need next is custom tx hardware for open source telemetry. None of the chipsets out there are quite right, and our demod outperforms them all so why should we compromise?
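As a back-of-envelope check on those MDS numbers: starting from the -174 dBm/Hz thermal noise floor, the sensitivity scales with bit rate, receiver noise figure, and the Eb/No the demod needs. The NF and required Eb/No below are illustrative values I've chosen to land near the -112 dBm bench figure, not measured numbers:

```python
import math

def mds_dbm(bit_rate, nf_db, ebno_req_db):
    """Minimum detectable signal: thermal noise floor scaled to the
    bit rate, plus receiver noise figure and required Eb/No."""
    return -174.0 + 10.0 * math.log10(bit_rate) + nf_db + ebno_req_db

# Illustrative only: 115.2 kbit/s with an assumed ~4 dB NF and
# ~7.4 dB required Eb/No gives roughly -112 dBm
mds = mds_dbm(115200, 4.0, 7.4)
```

On this model, shaving 4 dB off the required Eb/No (4FSK, dropping the start/stop bit overhead) moves the MDS from about -112 to -116 dBm, matching the target above.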

Recycled Laptops

The project has had some interesting spin offs. The members of AREG are getting really interested in SDR on Linux resulting in a run on recycled laptops from ASPItech, a local electronics recycler!


Balloon meets Gum Tree
Horus 37 – High Speed SSTV Images
High Speed Balloon Data Link
All Your Modem are Belong To Us
FreeDV 2400A and 2400B Demos
Wenet Source Code
Nov 2016 Wenet Presentation

Planet Linux Australia: David Rowe: Balloon Meets Gum Tree

Today I attended the launch of Horus 38, a high altitude balloon flight carrying 4 payloads, one of which was the latest version of the SSDV system Mark and I have been working on.

Since the last launch, Mark and I have put a lot of work into carefully integrating a rate 0.8 LDPC code developed by Bill, VK5DSP. The coded 115 kbit/s system is now working error free on the bench down to -112dBm, and can transfer a new hi-res image in just a few seconds. With a tx power of 50mW, we estimate a line of sight range of 100km. We are now out-performing commercial FSK telemetry chip sets using our open source system.

However disaster struck soon after launch at Mt Barker High School oval. High winds blew the payloads into a tree and three of them were chopped off, leaving the balloon and a lone payload to continue into the stratosphere. One of the payloads that hit the tree was our SSDV, tumbling into a neighboring back yard. Oh well, we’ll have another try in December.

Now I’ve been playing a lot of Kerbal Space Program lately. It’s got me thinking about vectors: for example, in Kerbal I learned how to land two spacecraft at exactly the same point on the Mun (Moon) using vectors and some high school equations of motion. I’ve also taken up sailing – more vectors are involved in how sails propel a ship.

The high altitude balloon consists of a latex, helium filled weather balloon a few meters in diameter. Strung out beneath that on 50m of fishing line is a series of “payloads”, our electronic gizmos in little foam boxes. The physical distance helps avoid interference between the radios in each box.

While the balloon was held near the ground, it was keeled over at an angle:

It’s tethered and not moving, but is acted on by the lift from the helium and the drag from the wind. These forces pivot the balloon through an arc whose radius is the tether length. If the two forces were equal the balloon would sit at 45 degrees. Today it was lower, perhaps 30 degrees.

When the balloon is released, it is accelerated by the wind until it reaches a horizontal velocity that matches the wind speed. The payloads will also reach wind speed and eventually hang vertically under the balloon due to the force of gravity. Likewise the lift accelerates the balloon upwards, which is balanced by drag to reach a vertical velocity (the ascent rate). The horizontal and vertical velocity components will vary over time, but let’s assume they are roughly constant over the duration of our launch.

Today the wind speed was 40 km/hr, just over 10 m/s. Mark suggested a typical balloon ascent rate of 5 m/s. The high school oval was 100m wide, so the balloon would take 100/10 = 10s to traverse the oval from one side to the gum tree. In 10 seconds the balloon would rise 5×10 = 50m, approximately the length of the payload string. Our gum tree, however, rises to a height of 30m, and reached out to snag the lower 3 payloads…
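The arithmetic above can be written out as a tiny sketch (using the rounded 10 m/s figure from the text):

```python
wind_speed = 10.0     # m/s (40 km/hr, rounded as in the text)
ascent_rate = 5.0     # m/s, Mark's typical figure
oval_width = 100.0    # m, launch point to the gum tree
string_length = 50.0  # m of fishing line below the balloon
tree_height = 30.0    # m

time_to_tree = oval_width / wind_speed        # 10 s to cross the oval
balloon_height = ascent_rate * time_to_tree   # 50 m when it arrives

# Payloads hang from just below the balloon down to near ground level,
# so any part of the string below the tree top is at risk of snagging:
lowest_payload_height = balloon_height - string_length   # ~0 m
at_risk_span = tree_height - lowest_payload_height       # ~30 m of string

print(time_to_tree, balloon_height, at_risk_span)
```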

Planet Linux AustraliaDavid Rowe: Horus 37 – High Speed SSTV Images

Today I was part of the AREG team that flew Horus 37 – a High Altitude Balloon flight. The payload included hardware sending Slow Scan TV (SSTV) images at 115 kbit/s, based on the work Mark and I documented in this blog post from earlier this year.

It worked! Using just 50mW of transmit power and open source software we managed to receive SSTV images at bit rates of up to 115 kbit/s:

More images here.

Here is a screen shot of the Python dashboard for the FSK demodulator that Mark and Brady have developed. It gives us some visibility into the demod state and signal quality:

(View Image in your browser to get a larger version)

The Eb/No plot shows the signal strength moving up and down over time, probably due to motion of our car. The Tone Frequency Estimate shows a solid lock on the two FSK frequencies. The centre of the Eye Diagram looks good in this snapshot.

Octave and C LDPC Library

There were some errors in received packets, which appear as stripes in the images:

On the next flight we plan to add an LDPC FEC code to protect against these errors and allow the system to operate at signal levels about 8dB lower (more than doubling our range).
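As a sanity check on the range claim (standard free-space reasoning, not a measured result): free-space path loss grows as 20·log10(d), so an 8dB margin translates directly into a range factor:

```python
import math

coding_gain_db = 8.0
# Free-space path loss goes as 20*log10(d), so the extra margin buys:
range_factor = 10 ** (coding_gain_db / 20)
print(f"{range_factor:.2f}x range")  # ~2.5x, i.e. more than double
```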

Bill, VK5DSP, has developed a rate 0.8 LDPC code designed for the packet length of our SSTV software (2064 bits/packet including checksum). This runs with the CML library – C software designed to be called from Matlab via the MEX file interface. I previously showed how the CML library can be used in GNU Octave.

I like to develop modem algorithms in GNU Octave, then port to C for real time operation. So I have put some time into developing Octave/C software to simulate the LDPC encoded FSK modem in Octave, then easily port exactly the same LDPC code to C. For example the write_code_to_C_include_file() Octave function generates a C header file with the code matrices and test vectors. There are test functions that use an Octave encoder and C decoder and compare the results to an Octave decoder. It’s carefully tested and bit exact to 64-bit double precision! Still a work in progress, but has been checked into codec2-dev SVN:

ldpc_fsk_lib.m Library of Octave functions to support LDPC over FSK modems
test_ldpc_fsk_lib.m Test and demo functions for Octave and C library code
mpdecode_core.c CML MpDecode.c LDPC decoder functions re-factored
H2064_516_sparse.h Sample C include file that describes Bill’s rate 0.8 code
ldpc_enc.c Command line LDPC encoder
ldpc_dec.c Command line LDPC decoder
drs232_ldpc.c Command line SSTV deframer and LDPC decoder

This software might be useful for others who want to use LDPC codes in their Matlab/Octave work, then run them in real time in C. With the (2064,516) code, the decoder runs at about 500 kbit/s on one core of my old laptop. I would also like to explore the use of these powerful codes in my HF Digital Voice work.
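For readers curious what generating a C include file from matrix data involves, here is a hedged sketch in Python. The function name mirrors write_code_to_C_include_file(), but this is my simplified illustration, not the codec2-dev implementation, and the matrix is a tiny made-up example rather than Bill's rate 0.8 code:

```python
def write_code_to_c_include(name, h_rows, path):
    """Dump a matrix as a static C array into a header file, so the C
    decoder is built from exactly the same data the simulation used."""
    lines = [f"#define {name}_ROWS {len(h_rows)}",
             f"#define {name}_COLS {len(h_rows[0])}",
             f"static const int {name}[] = {{"]
    for row in h_rows:
        lines.append("    " + ", ".join(str(v) for v in row) + ",")
    lines.append("};")
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Tiny made-up parity check matrix, purely for illustration:
write_code_to_c_include("H_DEMO", [[1, 0, 1], [0, 1, 1]], "h_demo.h")
```

The real Octave function also emits test vectors alongside the matrices, which is what makes the "bit exact" cross-checking between Octave and C possible.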

SSTV Hardware and Software

Mark did a fine job putting the system together and building the payload hardware and its enclosure:

It uses a Raspberry Pi, with an FSK modulator we drive from the Pi’s serial port. The camera aperture is just visible at the front. Mark has published the software here. The tx side is handled by a single Python script. Here is the impressive command line used to start the rx side running:

#	Start RX using a rtlsdr. 
python & 
rtl_sdr -s 1000000 -f 441000000 -g 35 - | csdr convert_u8_f | csdr bandpass_fir_fft_cc 0.1 0.4 0.05 | csdr fractional_decimator_ff 1.08331 | csdr realpart_cf | csdr convert_f_s16 | ./fsk_demod 2XS 8 923096 115387 - - S 2> >(python --wide) | ./drs232_ldpc - - | python --partialupdate 16

We have piped together a bunch of command line utilities on the Linux command line. A hardware analogy is a bunch of electronic boards on a work bench connected via coaxial jumper leads. It works quite well and allows us to easily prototype SDR radio systems on Linux machines, from a laptop to an RPi. However, down the track we need to get it all “in one box”: a single, cross platform executable anyone can run.

Next Steps

We did some initial tests with the LDPC decoder today but hit integration issues that flatlined our CPU. Next steps will be to investigate these issues and try LDPC encoded SSTV on the next flight, which is currently scheduled for the end of October. We would love to have some help with this work, e.g. optimizing and testing the software. Please let us know if you would like to help!

Mark’s blog post on the flight
AREG blog post detailing the entire flight, including set up and recovery
High Speed Balloon Data Link – Development and Testing of the SSTV over FSK system
All your Modems are belong to Us – The origin of the “ideal” FSK demod used for this work.
FreeDV 2400A – The C version of this modem developed by Brady and used for VHF Digital Voice
LDPC using Octave and CML – using the CML library LDPC decoder in GNU Octave

Planet Linux AustraliaSimon Lyall: 2017 – Friday – Lightning Talks

Use #lcapapers to tell what you want to see in 2018

Michael Still and Michael Davies get the Rusty Wrench award

Karaoke – Jack Skinner

  • Talk with random slides

Martin Krafft

  • Matrix
  • End to end encrypted communication system
  • No entity owns your conversations
  • Bridge between walled gardens (eg IRC and Slack)
  • In very late beta, 450K user accounts
  • Run or Write your own servers or services or client

Cooked – Pete the Pirate

  • How to get into Sous Vide cooking
  • Create home kit
  • Beaglebone Black
  • Rice cooker, fish tank air pump.
  • Also use to germinate seeds
  • Also use this system to brew beer

Emoji Archeology 101 – Russell Keith-Magee

  • 1963 Happy face created
  • 🙂 invented
  • later 🙁 invented
  • Only those emotions imposed by the Unicode consortium can now be expressed

The NTPsec Project – Mark Atwood

  • Since 2014
  • Forked from the parent ntp project and moved to git in 2015
  • 1.0.0 release soon
  • Removed 73% of lines from classic
    • Removed commandline tools
    • Got rid of stuff for old OSes
    • Changed to POSIX and modern coding
    • removed experiments
  • Switch to git and bugzilla etc
  • Fun not painful
  • Welcoming community, not angry

National Computer Science Summer School – Katie Bell

  • Running for 22 years
  • Web stream, Embedded Stream
  • Using BBC Microbit
  • Lots of projects
  • Students in grade 10-11
  • Happens in January
  • Also a 5 week long online programming competition, the NCSS Competition.

Blockchain – Rusty Russell

  • Blockchain
  • Blockchain
  • Blockchain

Go to Antarctica – Jucinter Richardson

  • Went Twice
  • Go by ship
  • No rain
  • Nice and cool
  • Join the government
  • Positions close
  • Go while it is still there

Cool and Awesome projects you should help with – Tim Ansell

  • Tomu Boards
  • MicroPython on FPGAs
  • Python Devicetree – needs a good library
  • QEMU for LiteX / MiSoC
  • NuttX for LiteX / MiSoC
  • QEMU for Tomu
  • Improving LiteX / MiSoc
  • Cypress FX2
  • Linux to LiteX / MiSoC

LoRa TAS – Paul Neumeyer

  • long range (2-3km urban 10km rural)
  • low power (battery ~5 years)
  • Unlicensed radio spectrum, 915-928 MHz band (AUS)
  • LoRaWAN is an open standard
  • Ideal for IoT applications (sensing, preventative maintenance, smart)

Roan Kattatow

  • Different languages mix dots and commas and spaces etc to write numbers

ZeroSkip – Ron Gondwana

  • Crash safe embedded database
  • Not fast enough
  • Zeroskip
  • Append only database file
  • Switch files now and then
  • Repack old files together

PyCon Au – Richard Jones

  • Python Conference Australia
  • 7th in Melbourne in Aug 2016 – 650 people, 96 presentations
  • In Melb on 308 of August on 2016

Buying a Laptop built for Linux – Paul Wayper

  • Bought from System76
  • Designed for Linux

openQA – Aleksa Sarai

  • Life is too short for manual testing
  • Perl based framework that lets you emulate a user
  • Runs from console, emulates keyboard and mouse
  • Has screenshots
  • Used by SUSE, openSUSE and Fedora
  • Fuzzy comparison, using regular expressions

South Coast Track – Bec, Clinton and Richard

  • What I did in the Holidays
  • 6 day walk in southern Tasmania
  • Lots of pretty photos


Planet Linux AustraliaSimon Lyall: 2017 – Friday – Session 2

Continuously Delivering Security in the Cloud – Casey West

  • This is a talk about operational excellence
  • Why are systems attacked? Because they exist
  • Resisting Change to Mitigate Risk – It’s a trap!
  • You have a choice
    • Going fast with unbounded risk
    • Going slow to mitigate risk
  • Advanced Persistent Threat (APT) – The breach that lasts for months
  • Successful attacks have
    • Time
    • Leaked or misused credentials
    • Misconfigured or unpatched software
  • Changing very little slowly helps all three of the above
  • A moving target is harder to hit
  • Cloud-native operability lets platforms move faster
    • Composable architecture (serverless, microservices)
    • Automated Processes (CD)
    • Collaborative Culture (DevOps)
    • Production Environment (Structured Platform)
  • The 3 Rs
    • Rotate
      • Rotate credentials every few minutes or hours
      • Credentials will leak, Humans are weak
      • “If a human being generates a password for you then you should reject it”
      • Computers should generate it, every few hours
    • Repave
      • Repave every server and application every few minutes/hours
      • Implies you have things like LBs that can handle servers adding and leaving
      • Container lifecycle
        • Built
        • Deploy
        • Run
        • Stop
        • Note: No “change” step
      • A server that doesn’t exist isn’t being compromised
      • Regularly blow away running containers
      • Repave ≠ Patch
      • uptime <= 3600
    • Repair
      • Repair vulnerable runtime environments every few minutes or hours
      • What stuff will need repair?
        • Applications
        • Runtime Environments (eg rails)
        • Servers
        • Operating Systems
      • The Future of security is build pipelines
      • Try to put in credential rotation and upstream imports into your builds
  • Embracing Change to Mitigate Risk
  • Less of a Trap (in the cloud)
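The “rotate” rule above (“computers should generate it, every few hours”) can be sketched in a few lines; this is my illustration, not anything from the talk:

```python
import secrets

def fresh_credential(nbytes=32):
    # Machine-generated and high-entropy: never typed or remembered by
    # a human, and meant to be replaced on a short timer by the
    # platform rather than stored forever.
    return secrets.token_urlsafe(nbytes)

print(fresh_credential())
```

A rotation scheme like this only works if the platform can also distribute the new credential to every consumer automatically, which is where the automated processes and structured platform points above come in.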


Planet Linux AustraliaSimon Lyall: 2017 – Friday – Session 1

Adventures in laptop battery hacking – Matthew Chapman

  • Lenovo Thinkpad X230T
    • Bought Aug 2013
    • Original capacity 62 Wh – 5 hours at 12W
    • Capacity down to 1.9Wh – 10 minutes
  • 45N1079 replacement bought
    • DRM on laptop claimed it was not genuine and refused to recharge it.
  • Batteries talk SBS protocol to laptop
  • SMBus port and SMClock port
    • sniffed the port with logic analyser
    • Using I2C protocol
    • Looked at spec to see what it means
    • Challenge-response authentication
  • Options
    1. Throw Away
    2. Replace Cells
      • Easy to damage
      • Might not work
    3. Hack firmware on battery
      • Talk at DEFCON 19
      • But this is a different model from that one
      • Couldn’t work out how to get to firmware
    4. Added something in between
    5. Update the firmware on the machine
      • Embedded Controller (EC)
      • MEC1619
  • Looking through the firmware for Battery Authentication
    • Found a routine that looked plausible
    • But other stuff was encrypted
  • EC Update process
    • BIOS update puts EC update in spare flash memory area
    • Afterwards the BIOS grabs that and applies the update
  • Pulled apart the BIOS, found the EcFwUpdateDxe.efi routine that updates the EC
    • Found that stuff sent to the EC is still encrypted.
    • Decryption is done by the flasher program
  • Flasher program
    • Encrypted itself (decrypted by the current firmware)
    • JTAG interface for flashing debug
  • JTAG
    • Physically difficult to get to
    • Luckily Russian Hackers have already grabbed a copy
  • The Decryption function in the Flasher program
    • Appears to be blowfish
    • Found the key (in expanded form) in the firmware
    • Enough for the encryption and decryption
  • Checksums
    • Outer checksum checked by the BIOS
    • Post-decryption sum – checked by the flasher (bricks the EC if bad)
    • Section checksums (also brick the EC)
  • Applying
    • noop the checks in code
    • noop another check that sometimes failed
    • Different error message
  • Found a second authentication process
    • noop out the 2nd challenge in the BIOS
  • Works!
  • Posted writeup, posted to hacker news
    • 1 million page views
  • Uploaded code to github
    • Other people doing stuff with the embedded controller
    • No longer works on latest laptops, EC firmware appears to be signed
  • Anything can be broken with physical access and significant determination

Election Software – Vanessa Teague

  • Australian Elections use a lot of software
    • Encoding and counting preferential votes
    • For voting in polling places
    • For voting over the internet
  • How do we know this software is correct
  • The paper ballot box is engineered around a series of problems
    • In the past people brought their own voting paper
    • The Australian Ballot is used in many places (eg NZ)
    • The French use a different method with envelopes and glass boxes
    • The US has had lots of problems and different ways
  • Four case studies in Aus
  • vVote: Victoria
    • Vic state election 2014
    • 1121 votes for overseas Australians voting in Embassies etc
    • Based on Pret a Voter
    • You can verify that what you voted was what went through
    • Source code on bitbucket
    • Crypto signed, verified, open source, etc
    • Not going forward
    • Didn’t get the electoral commission’s input and buy-in.
    • A little hard to use
  • iVote: NSW and WA
    • 280,000 votes over the Internet in the 2015 NSW state election (around 5-6% of total votes)
    • Vote on a device of your choosing
    • Vote encrypted and sent over the Internet
    • Get receipt number
    • Exports to a verification service. You can telephone them, give them your number and they will read back your votes
    • Website used 3rd-party analytics provider with export-grade crypto
      • Vulnerable to injection of content, votes could be read or changed
      • Fixed (after 66k votes cast)
    • NSW iVote really wasn’t verifiable
    • About 5000 people called into service and successfully verified
    • How many tried to verify but failed?
    • Commission said 1.7% of electors verified and none identified any anomalies with their vote (Mar 2015)
    • How many tried and failed? “in the 10s” (Oct 2015)
    • Parliamentary inquiry: how many failed? Seven or 5 (Aug 2016)
    • How many failed to get any vote? 627 (Aug 2016)
    • This is a failure rate of about 10%
    • It is believed it was around 200 unique (later in 2016)
  • Vote Counting software
  • Errors in NSW counting
    • In NSW legislative council counting, redistributed votes are selected at random
    • No source code for this
    • Use same source code for lots of other elections
    • Re-ran some of the votes, found the randomness could change results. Found one case that most likely cost somebody a seat, but not till 4 years later.
  • Recommended
    • Generate the random key publicly
    • Open up the source code
    • The electoral people didn’t want to do this.
  • In the 2016 local govt count we found 2 more bugs
    • One candidate should have won with 54% probability but didn’t
  • The Australian Senate Count
  • The AEC consistently refuses to reveal the source code
  • The Senate data is released; you can redo the count yourself and any bugs will become evident
  • What about digitising the ballots?
    • How would we know if that wasn’t working?
    • Only by auditing the paper evidence
  • Auditing
    • The Americans have a history of auditing the paper ballots
    • But the Australian vote is a lot more complex, so everything is not 100% there yet
    • Stuff is online
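The “generate the random key publicly” recommendation boils down to making the random sampling reproducible by any observer. A minimal sketch (my own construction with a hypothetical beacon value, not the commission's procedure):

```python
import hashlib
import random

# Hypothetical published value; in practice this would be committed to
# in advance and published by the electoral commission.
public_beacon = "nsw-count-2016-round-1"
seed = int.from_bytes(hashlib.sha256(public_beacon.encode()).digest(), "big")

rng = random.Random(seed)
ballots = list(range(1000))      # stand-in for ballot IDs
sample = rng.sample(ballots, 5)  # anyone can re-run this and check
print(sample)
```

Because the seed is derived deterministically from a published value, every observer who repeats the steps gets the same sample, which removes the hidden-randomness problem described above.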



Planet Linux AustraliaPia Waugh: Retiring from GovHack

It is with a little sadness, but a lot of pride, that I announce my retirement from GovHack, at least retirement from the organising team :) It has been an incredible journey with a lot of amazing people along the way and I will continue to be its biggest fan and supporter. I look forward to actually competing in future GovHacks and just joining in the community a little more than is possible when you are running around organising things! I think GovHack has grown up and started to walk, so as any responsible parent, I want to give it space to grow and evolve with the incredible people at the helm, and the new people getting involved.

Just quickly, it might be worth reflecting on the history. The first “GovHack” event was a wonderfully run hackathon by John Allsopp and Web Directions as part of the Gov 2.0 Taskforce program in 2009. It was small with about 40 or so people, but extremely influential and groundbreaking in bringing government and community together in Australia, and I want to thank John for his work on this. You rock! I should also acknowledge the Gov 2.0 Taskforce for funding the initiative, Senator at the time Kate Lundy for participating and giving it some political imprimatur, and early public servants who took a risk to explore new models of openness and collaboration such as Aus Gov CTO John Sheridan. A lot of things came together to create an environment in which community and government could work together better.

Over the subsequent couple of years there were heaps of “apps” competitions run by government and industry. On the one hand it was great to see experimentation however, unfortunately, several events did silly things like suing developers for copyright infringement, including NDAs for participation, or setting actual work for development rather than experimentation (which arguably amounts to just getting free labour). I could see the tech community, my people, starting to disengage and become entirely and understandably cynical of engaging with government. This would be a disastrous outcome because government need geeks. The instincts, skills and energy of the tech community can help reinvent the future of government so I wanted to right this wrong.

In 2012 I pulled together a small group of awesome people, some from that first GovHack event, some from BarCamp, some I just knew, and we asked John if we could use the name (thank you again John!) and launched a voluntary, community run, annual and fun hackathon, by hackers for hackers (and if you are concerned by that term, please check out what a hacker is). We knew if we did something awesome, it would build the community up, encourage governments to open data, show off our awesome technical community, and provide a way to explore tricky problems in new and interesting ways. But we had to make it an awesome event for people to participate in.

It worked.

It has been wonderful to see GovHack grow from such humble origins to the behemoth it is today, whilst also staying true to the original purpose, and true to the community it serves. In 2016 (for which I was on maternity leave) there were over 3000 participants in 40 locations across two countries with active participation by Federal, State/Territory and Local Governments. There are always growing pains, but the integrity of the event and commitment to community continues to be a huge part of the success of the event.

In 2015 I stepped back from the lead role onto the general committee, and Geoff Mason did a brilliant job as Head Cat Herder! In 2016 I was on maternity leave and watched from a distance as the team and event continued to evolve and grow under the leadership of Richard Tubb. I feel now that it has its own momentum, strong leadership, an amazing community of volunteers and participation and can continue to blossom. This is a huge credit to all the people involved, to the dedicated national organisers over the years, to the local organisers across Australia and New Zealand, and of course, to all the community who have grown around it.

A few days ago, a woman came up to me and told me about how she had come to Australia not knowing anyone, gone to GovHack after seeing it advertised at her university, and made all her friends and relationships there and is so extremely happy. It made me teary, but it also was a timely reminder. Our community is amazing. And initiatives like GovHack can be great enablers for our community, for new people to meet, build new communities, and be supported to rock. So we need to always remember that the projects are only as important as how much they help our community.

I continue to be one of GovHack’s biggest fans. I look forward to competing this year and seeing where current and future leadership takes the event and they have my full support and confidence. I will be looking for my next community startup after I finish writing my book (hopefully due mid year :)).

If you love GovHack and want to help, please volunteer for 2017, consider joining the leadership, or just come along for fun. If you don’t know what GovHack is, I’ll see you there!

Planet Linux AustraliaSimon Lyall: 2017 – Friday Keynote – Robert Lefkowitz

Keeping Linux Great

  • Previous keynotes have posed questions; I’ll pose answers
  • What is the future of open source software? It has no future
  • FLOSS is yesterday’s gravy
    • Based on where the technology is today. How would FLOSS work with punch cards?
    • Other people have said similar things
    • Software, Linux and similar all going down in google trends
    • But “app” is going up
  • Lithification
    • Small pieces loosely joined
    • Linux used to be great because you could pipe stuff to little programs
    • That is what is happening to software
    • Example – share a page to another app in a mobile interface
    • Apps no longer need to send mail themselves, they just have to talk to the mail app
  • So What should you do?
    • Vendor all your dependencies: just copy everyone else’s code into your repo (and list their names if it is BSD) so you can ship everything in one blob (eg Android)
      • Components become >5 million or even >20 million LOC; only a handful of them
      • At the other end apps are smaller since they can depend on the OS or other Apps for lots of functionality so they don’t have to write it themselves.
      • Example node with thousands of dependencies
  • App Freedom
    • “Advanced programming environments conflate the runtime with the devtime” – Bret Victor
    • Open Source software rarely does that
    • “It turns out that Object Orientation didn’t work out, it is another legacy with are stuck with”
    • Having the source code is nice but it is not a requirement. Access to the runtime is what you want. You need to get it where people are using it.
  • Liberal Software
  • But not everybody wants to be a programmer
    • 75% comes from 6 generic web applications (collection, storage, reservation, etc)
  • A lot of functionality requires big data or huge amounts of machines or is centralised so open sourcing the software doesn’t do anything useful
  • If it was useful it could be patented, if it was not useful but literary then it was just copyright



Planet DebianMatthew Garrett: Android apps, IMEIs and privacy

There's been a sudden wave of people concerned about the Meitu selfie app's use of unique phone IDs. Here's what we know: the app will transmit your phone's IMEI (a unique per-phone identifier that can't be altered under normal circumstances) to servers in China. It's able to obtain this value because it asks for a permission called READ_PHONE_STATE, which (if granted) means that the app can obtain various bits of information about your phone including those unique IDs and whether you're currently on a call.

Why would anybody want these IDs? The simple answer is that app authors mostly make money by selling advertising, and advertisers like to know who's seeing their advertisements. The more app views they can tie to a single individual, the more they can track that user's response to different kinds of adverts and the more targeted (and, they hope, more profitable) the advertising towards that user. Using the same ID between multiple apps makes this easier, and so using a device-level ID rather than an app-level one is preferred. The IMEI is the most stable ID on Android devices, persisting even across factory resets.

The downside of using a device-level ID is, well, whoever has that data knows a lot about what you're running. That lets them tailor adverts to your tastes, but there are certainly circumstances where that could be embarrassing or even compromising. Using the IMEI for this is even worse, since it's also used for fundamental telephony functions - for instance, when a phone is reported stolen, its IMEI is added to a blacklist and networks will refuse to allow it to join. A sufficiently malicious person could potentially report your phone stolen and get it blocked by providing your IMEI. And phone networks are obviously able to track devices using them, so someone with enough access could figure out who you are from your app usage and then track you via your IMEI. But realistically, anyone with that level of access to the phone network could just identify you via other means. There's no reason to believe that this is part of a nefarious Chinese plot.

Is there anything you can do about this? On Android 6 and later, yes. Go to settings, hit apps, hit the gear menu in the top right, choose "App permissions" and scroll down to phone. Under there you'll see all apps that have permission to obtain this information, and you can turn them off. Doing so may cause some apps to crash or otherwise misbehave, whereas newer apps may simply ask you to grant the permission again and refuse to run if you don't.
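On a machine with the Android debug tools installed, the same permission can also be revoked from the command line. The package name below is a stand-in, and the command is printed as a dry run since actually running it needs a connected device:

```shell
# Hypothetical package name; substitute the app you want to restrict.
PKG="com.example.selfieapp"

# Revoking READ_PHONE_STATE cuts off IMEI access. Remove the echo to
# actually run it against a connected Android 6+ device.
echo adb shell pm revoke "$PKG" android.permission.READ_PHONE_STATE
```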

Meitu isn't especially rare in this respect. Over 50% of the Android apps I have handy request your IMEI, although I haven't tracked what they all do with it. It's certainly something to be concerned about, but there are plenty of big-name apps that do exactly the same thing. There's a legitimate question over whether Android should be making it so easy for apps to obtain this level of identifying information without more explicit informed consent from the user, but until Google does anything to make it more difficult, apps will continue making use of this information. Let's turn this into a conversation about user privacy online rather than blaming one specific example.


Rondam RamblingsDead nation walking

I had a very vivid dream last night.  I was a passenger in the back seat of a small four-seat airplane that was careening wildly out of control and diving towards the ground.  I kept yelling at the pilot to pull up, pull up, but he wasn't paying attention.  I don't think it's a coincidence that I had this dream a day before Donald Trump is going to be inaugurated as president. Like I said in my

Planet DebianDaniel Pocock: Which movie most accurately forecasts the Trump presidency?

Many people have been scratching their heads wondering what the new US president will really do and what he really stands for. His alternating positions on abortion, for example, suggest he may simply be telling people what he thinks is most likely to win public support from one day to the next. Will he really waste billions of dollars building a wall? Will Muslims really be banned from the US?

As it turns out, several movies provide a thought-provoking insight into what could eventuate. What's more, two of them bear a creepy resemblance to the Trump phenomenon and many of the problems in the world today.

Countdown to Looking Glass

On the classic cold war theme of nuclear annihilation, Countdown to Looking Glass is probably far more scary to watch on Trump eve than in the era when it was made. Released in 1984, the movie follows a series of international crises that have all come to pass: the assassination of a US ambassador in the middle east, a banking crisis and two superpowers in an escalating conflict over territory. The movie even picked a young Republican congressman for a cameo role: he subsequently went on to become speaker of the house. To relate it to modern times, you may need to imagine it is China, not Russia, who is the adversary but then you probably won't be able to sleep after watching it.

cleaning out the swamp?

The Omen

Another classic is The Omen. The star of this series of four horror movies, Damien Thorn, appears to have a history that is eerily reminiscent of Trump: born into a wealthy family, a series of disasters befall every honest person he comes into contact with, he comes to control a vast business empire acquired by inheritance and as he enters the world of politics in the third movie of the series, there is a scene in the Oval Office where he is flippantly advised that he shouldn't lose any sleep over any conflict of interest arising from his business holdings. Did you notice Damien Thorn and Donald Trump even share the same initials, DT?

CryptogramHeartbeat as Biometric Password

There's research in using a heartbeat as a biometric password. No details in the article. My guess is that there isn't nearly enough entropy in the reproducible biometric, but I might be surprised. The article's suggestion to use it as a password for health records seems especially problematic. "I'm sorry, but we can't access the patient's health records because he's having a heart attack."

I wrote about this before here.
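A back-of-envelope comparison (my numbers, purely illustrative) shows why the entropy worry is plausible:

```python
import math

# A 10-character password drawn from the 94 printable ASCII symbols:
password_bits = 10 * math.log2(94)

# A generous hypothetical biometric: 20 reproducible heartbeat
# features, each distinguishable into 4 levels:
biometric_bits = 20 * math.log2(4)

print(f"password: {password_bits:.1f} bits, "
      f"biometric: {biometric_bits:.1f} bits")
```

Even under those generous assumptions the biometric falls well short of an ordinary password, and real reproducible heartbeat features are likely far noisier than this.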

Worse Than FailureUnstructured Data

Alex T had hit the ceiling with his current team, in terms of career advancement. He was ready to be promoted to a senior position, but there simply wasn’t room where he was- they were top-heavy as it was, and there were whispers among management of needing to make some cuts from that team. So Alex started looking for other openings.

There was another team at his company which had just lost all of its senior developers to other teams. Alex knew that was a bad sign, but in general, climbing the career ladder was a one-way street. Once he had a senior position, even if it was terrible, he could transfer to another team in a few months, keeping his senior title and salary.

Perry was the team’s technical director. “I’ve been laying out the TPM architecture for years,” Perry explained, “and you are going to be part of implementing my vision.” That vision was an Internal Framework called “Total Process Management”, which, as the name implied, was a flexible business rules engine that would manage all of their business processes, from HR to supply chain to marketing; it would do everything. “We’re bringing the latest technologies to bear, it’ll be based on RESTful microservices with a distributed backend. But we need to staff up to achieve this, so we’re going to be doing a lot of interviews over the next few months, you and me.”

Alex knew he could apply for another internal transfer after six months. He already saw this was a disaster; the only question was how disastrous it would be.

While the code Perry had him writing was an overcomplicated mess of trendy ideas badly implemented, the worst part was doing the interviews. Perry sat in on every phase of the interview, and had Opinions™ about everything the candidate had on their resume.

“You used Angular for that?” he demanded from one candidate, sneering, and drawing a bright red “X” on their resume. He criticized another for using a relational database when they could have used MongoDB. One interview ended early when the candidate admitted that they didn’t spend their nights and weekends hacking at personal projects.

The worst part, for Alex, was his role in the technical screens. He’d read about the failures of white-board programming, the uselessness of asking trivia questions: “How do you reverse a linked-list?” wasn’t exactly a great interview question. He’d planned out a set of questions he thought would be better, and even some hands-on coding, but Perry nixed that.

“I want you to build a test with an answer key,” Perry said. “Because at some point, we may want to have non-technical people doing a first-pass screening as our team grows and more people want to join it. Use that in the technical portion of the interview.”

Interviews turned into days, days turned into weeks, weeks into months, and eventually Perry brought in Jack. Jack had worked at Google (as an intern), and Perry loved that. In fact, through the whole interview, Perry and Jack got on like a house on fire, smiling, laughing, happily bashing the same technologies and waxing rhapsodic over the joys of using Riak (Mongo was so last year, they were junking all of their database access to use Riak now).

Eventually, Perry left and it was Alex’s turn to recite his test, and compare the results against his answer key. “What’s a linked-list?” he asked, dying on the inside.


“It’s a navigation widget on websites.”

Alex blinked, but continued. “How does a linked-list differ from a doubly-linked-list?”

“A doubly-linked list has a pop-up menu so you can have more links in the list,” Jack said.

For the first time since he’d written his test, Alex was actually excited to see the results. Jack wasn’t just wrong, he was finding incredible new ways to be wrong. He claimed a binary-tree was a kind of legacy hard-drive. Or RAM, perhaps, it wasn’t really clear from his answer. Design Patterns were templates you could use… in Photoshop.

Alex thanked Jack for his time, sent him on his way, and then went to compare notes with Perry.

Perry was positively beaming. “I think we found a really great candidate,” he said. “Jack’s sharp as a tack, and is definitely a culture fit. What did you think?”

“Well,” Alex started, and then stopped. Perry was difficult to handle, so Alex decided that he should be as diplomatic as possible. “It started pretty well, but when we started talking about data structures, he was really weak. It’s a bad sign. We should pass.”

“That’s probably not a big deal,” Perry said, “I don’t care if he knows Oracle or not. We use unstructured data.”
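For the record, the trivia had simple answers. A linked list is a chain of nodes, each holding a value and a reference to the next node; a doubly-linked list adds a reference back to the previous node, so the chain can be walked in either direction. A minimal Python sketch:

```python
class Node:
    """A node in a doubly-linked list: a value plus links in both directions."""
    def __init__(self, value):
        self.value = value
        self.next = None   # a singly-linked list has only this pointer
        self.prev = None   # the "doubly" part: a link back to the previous node

# Build a three-node list: a <-> b <-> c
a, b, c = Node(1), Node(2), Node(3)
a.next, b.prev = b, a
b.next, c.prev = c, b

# Walk it forwards from the head...
forwards = []
node = a
while node:
    forwards.append(node.value)
    node = node.next

# ...and backwards from the tail, which only works because of .prev
backwards = []
node = c
while node:
    backwards.append(node.value)
    node = node.prev

print(forwards, backwards)  # [1, 2, 3] [3, 2, 1]
```

Reversing a singly-linked list, the other classic interview question, is just re-pointing each `next` reference as you walk the chain.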

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet Linux AustraliaSimon Lyall: linux.conf.au 2017 – Thursday – Session 3

Open Source Accelerating Innovation – Allison Randal

  • Story of Stallman and the printer
  • The story is usually told without its context
    • Stallman was living in a free software domain, proprietary software was creeping in
    • Software only became subject to copyright in early 80s
  • First age of software – 1940s – 1960s
    • Software was low value
    • Software was all free and open, given away
  • Precursor – The 1970s
  • Middle Age of Software – 1980s
    • Start of Windows, Mac, Oracle and other big software companies
    • Also start of GNU and BSD
    • Who Leads?
      • Proprietary software was seen as the innovator, and it was assumed it always would be
      • Free Software was seen as always chasing after Windows
  • The 2000s
    • Free Software caught up with Proprietary
    • Used by big companies
    • “Open Source” name adopted
    • dot-com bubble had burst
    • Web 2.0
    • Economic necessity, everyone else getting it for free
    • Collaborative Process – no silver bullet but a better chance
    • Innovations lead by open source
  • Software Freedoms
    • About Control over our material environment
    • If you don’t have these freedoms then you don’t have a free society
  • Modern Age of Software
    • Accelerating
    • Companies: in 2010, 42% used OS software; in 2015, 78%
    • Using Open Source is now just table stakes
    • Competitive edge for companies is participating in OS
    • More participation pushes innovation even faster
  • Now What?
    • The New innovative companies
      • Amazing experiences
      • Augment Workers
      • Deliver cool stuff to customers
      • Use Network effects, Brand names
    • Businesses making contribution to society
    • Need to look at software that doesn’t just cover commercial use cases.
  • Next Phase
    • Diversity
    • Myopic monocultures are a risk because they miss the dangers
    • Empowered to change the rules for the better

Surviving the Next 30 Years of Free Software – Karen M. Sandler

  • We’re not getting any younger
  • Software Relicensing
    • Need to get approval of authors to re-license
    • Has had to contact surviving spouse and get them to agree to re-license the code
    • One survivor wanted payment. They didn’t understand that the code would simply be written out of the project.
  • There are surely other issues that we have not considered
  • Copyright Assignment is a way around it
    • But not everybody likes that.
  • Bequeathment doesn’t work
    • In some jurisdictions copyrights have to be assessed for their value before being transferred. Taxes could be owed
  • Who is your next of Kin?
    • They might not share your OS values, or even think of them
  • Need perpetual care of copyrights
    • Debian Copyright Aggregation Projects
  • A Trust
    • Assign copyrights today; the trust gives you back the rights you want, but these expire on your death
    • Would be a registry for free software
    • Companies could participate too
  • Recognize the opportunity with age
    • A lot of people with a lot of spare time



Planet Linux AustraliaStewart Smith: Books referenced in my Organizational Change talk at LCA2017

All of these are available as Kindle books, but I’m sure you can get 3D copies too:

The Five Dysfunctions of a Team: A Leadership Fable by Patrick M. Lencioni
Leading Change by John P. Kotter
Who Says Elephants Can’t Dance? by Louis V. Gerstner Jr.
Nonviolent Communication: A Language of Life by Marshall B. Rosenberg and Arun Gandhi

Planet Linux AustraliaSimon Lyall: linux.conf.au 2017 – Thursday – Session 2

Content as a driver of change: then and now – Lana Brindley

  • Humans have always told stories
  • Cave Drawings
    • Australian Indigenous art is the oldest continuous art in the world
    • Stories of extinct mega-fauna
    • Stories of morals but sometimes also funny
  • Early Written Manuals
    • We remember the Eureka
  • Religious Leaders
    • Gutenberg
    • The Bible was the only widely distributed book, and it was restricted to clergy
  • Fairy Tales
    • Charles Perrault versions.
    • Brothers Grimm
    • Cautionary tales for adults
    • Very gruesome in the originals and many versions
    • Easiest and entertaining way for illiterate people to share moral stories
  • Master and Apprentice
    • Cheap Labour and Learn a Trade
  • Journals and Letters
    • In the early 19th century letter writing started happening
    • Recipe Books


  • Recently
  • Paper Manuals
    • Traditionally the proper method for technical docs
  • Whitepapers
    • Printed version will probably go away
    • Digital form may live on
  • Training Courses
    • Face to face training has its benefits
    • Online is where technical stuff is moving
  • Online Books
    • Online version of a printed book
    • Designed to be read from beginning to end, TOC, glossary, etc


  • Today
    • Quite common
  • Data Typing (DITA)
    • Break down the content into logical pieces
    • Store in a database
    • Mix on the fly
    • Doing this sort of thing since the 1960s and 1970s
  • Single Sourcing
    • Walked away from old idea of telling a story
    • Look at how people consumed and learnt difficult concepts
    • Deliver the same content many ways (beginner user, advanced, reference)
    • Chunks of information we can deliver however we like
  • User-Side Content Curation
    • Organised like a wikipedia article
    • Imagine a site listing lots of cars for sale; the filters curate the content
  • What comes next?
    • Large datasets and let people filter
    • Power going from producers to consumers
    • Consumers want to filter themselves, not leave the producers to do this
  • References and further reading for talk

I am your user. Why do you hate me? Donna Benjamin

  • Free and open source software suffers from poor usability
  • We’ve struggled with open source software, heard devs talk about users with contempt
  • We define users by what they can’t do
  • How do I hate thee? Let me count the ways
    • Why were we being made to feel stupid when we used free software
    • Software is “made by me for me”, just for brainiac me
    • Lots of stories about stupid users. Should we be calling our users stupid?
    • We often talk/draw about users as faceless icons
    • Take pride in having prickly attitudes
  • Users
    • Whiney, entitled and demanding
    • We wouldn’t want some of them as friends
    • Not talk about those sort of users
  • Lets Chat about chat
    • Slack – used by OS projects; not the freest, it’s proprietary
    • Better in many ways: less friction
  • Steep Learning curves
    • How long does it take to get to the level of (a) stop hating it? (b) kicking ass?
    • How do we get people over that level as quickly as possible
    • They don’t want to be badass at using your tool. They want you to be badass at what using your tool allows them to do
    • Badass: Making Users Awesome – Kathy Sierra
  • Perfect is the enemy of the good
  • Understand who your users are; see them as people like your friends and colleagues; not faceless icons



Planet Linux AustraliaSimon Lyall: linux.conf.au 2017 – Thursday – Session 1

The Vulkan Graphics API, what it means for Linux – David Airlie

  • What is Vulkan
    • Not OpenGL++
    • From Scratch, Low Level, Open Graphics API
    • Stack
      • Loader (Mostly just picks the driver)
      • Layers (sometimes optional) – Separate from the drivers.
        • Validation
        • Application Bug fixing
        • Tracing
        • Default GPU selection
      • Drivers (ICDs)
    • Open Source test Suite. ( “throw it over the wall Open Source”)
  • Why a new 3D API
    • OpenGL is old, from 1992
    • OpenGL Design based on 1992 hardware model
    • State machine has grown a lot as hardware has changed
    • Lots of stuff in it that nobody uses anymore
    • Some ideas were not so good in retrospect
      • Single context makes multi-threading hard
      • Sharing context is not reliable
      • Orientated around windows, off-screen rendering is a bolt-on
      • GPU hardware has converged to just 3-5 vendors with similar hardware, so there is not as much need to hide things
    •  Vulkan moves a lot of stuff up to the application (or more likely the OS graphics layer like Unity)
    • Vulkan gives applications access to the queues if they want them.
    • Shading Language – SPIR-V
      • Binary format, separate from Vulkan, also used by OpenGL
      • Write shaders in HLSL or GLSL and they get converted to SPIR-V
    • Driver Development
      • Almost no error checking needed in the driver, since that is done in the validation layer
      • Simpler to explicitly build command stream and then submit
    • Linux Support
      • Closed source Drivers
        • Nvidia
        • AMD (amdgpu-pro) – promised open source “real soon now … a year ago”
      • Open Source
        • Intel Linux (anv)
          • Shipped on release day; 3.5 people over 8 months
          • SPIR-V -> NIR
          • Vulkan X11/Wayland WSI
          • anv Vulkan <– Core driver, not sharable
          • NIR -> i965 gen
          • ISL Library (image layout/tiling)
        • radv (for AMD GPUs)
          • Dave has been working on it since early July 2016 with one other guy
          • End of September Doom worked.
          • One Benchmark faster than AMD Driver
          • Valve hired someone to work on the driver.
          • Similar model to Intel anv driver.
          • Works on the few Vulkan games, working on SteamVR


Building reliable Ceph clusters – Lars Marowsky-Brée

  • Ceph
    • Storage Project
    • Multiple front ends (S3, Swift, Block IO, iSCSI, CephFS)
    • Built on RADOS data store
    • Software Defined Storage
      • Commodity servers + ceph + OS + Mngt (eg Open Attic)
      • Makes sense at 4+ servers with 10 drives each
      • Metadata service
      • CRUSH algorithm to spread out the data; no centralised table (client goes directly to data)
    • Access Methods
      • Use only what you need
      • RADOS Block devices   <– most stable
      • S3 (or Swift) via RadosGW  <– Mature
      • CephFS  <— New and pretty stable; avoid metadata-intensive workloads
    • Introducing Dependability
      • Availability
      • Reliability
        • Durability
      • Safety
      • Maintainability
    • Most outages are caused by Humans
    • At Scale everything fails
      • The Distributed systems are still vulnerable to correlated failures (eg same batch of hard drives)
      • Advantage of heterogeneity – everything breaks differently
      • Homogeneity is not sustainable
    • Failure is inevitable; suffering is optional
      • Prepare for downtime
      • Test if system meets your SLA when under load and when degraded and during recovery
      • How much availability do you need?
      • An extra nine will double your price
  • A Bag full of suggestions
    • Embrace diversity
      • Auto recovery requires a >50% majority
      • 3 suppliers?
      • Mix arch and stuff between racks/pods and geography
      • Maybe you just go with manually added recovery
    • Hardware Choices
      • Vendors have reference architectures
      • Hard to get vendors to mix; they don’t like that, and there are fewer docs.
      • Hardware certification reduces the risk
      • Small variations can have huge impact
        • A customer bought a network card and switch one step up from the reference architecture: six months of problems until a firmware bug was fixed.
    • How many monitors do I need?
      • Not performance critical
      • 3 is usually enough as long as well distributed
      • Big envs maybe 5 or 7
      • Don’t converge (as VMs) these with other types of nodes
    • Storage
      • Avoid Desktop Disks and SSDs
    • Storage Node sizing
      • A single node should not be more than 10% of your capacity
      • You need spare capacity at least as big as a single node (to recover after a failure)
    • Durability
      • Erasure encoding gives more durability and a higher percentage of disk used
      • But recovery is a lot slower, higher overhead, etc
      • Different strokes for different pools
    • Network cards: different types, cross connect, use last year’s cards
    • Gateways: tests okay under failure
    • Config drift: Use config mngt (puppet etc)
    • Monitoring
      • Perf as system ages
      • SSD degradation
    • Updates
      • Latest software is always the best
      • Usually good to update
      • Can do rolling upgrades
      • But still test a little on a staging server first
      • Always test on your system
        • Don’t trust metrics from vendors
        • Test updates
        • test your processes
        • Use OS to avoid vendor lock in
    • Disaster will strike
      • Have backups and test them and recoveries
    • Avoid Complexity
      • Be aggressive in what you test
      • Be conservative in what you deploy: only what you need
    • Q: Minimum size?
    • A: Not if you can fit on a single server
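The "extra nine" trade-off above is easy to quantify: each additional nine of availability divides the permitted downtime by ten, even as (per the talk) it roughly doubles the price. A quick sketch of the arithmetic:

```python
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def downtime_minutes(nines: int) -> float:
    """Allowed downtime per year for an availability of e.g. 99.9% (nines=3)."""
    availability = 1 - 10 ** -nines
    return MINUTES_PER_YEAR * (1 - availability)

for nines in (2, 3, 4, 5):
    print(f"{nines} nines: {downtime_minutes(nines):8.2f} min/year")
```

Three nines allows roughly 525 minutes of downtime a year; four nines allows only about 53, which is why that last step up in the SLA gets so expensive.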



Planet Linux AustraliaSimon Lyall: linux.conf.au 2017 – Thursday Keynote – Nadia Eghbal

Consider the Maintainer

  • Is it alright to compromise or even deliberately ignore the happiness of maintainers so that we can enjoy free software?
  • Huge growth in usage and downloads of Open Source software
  • 2/3s of popular open source projects on github are maintained by one or two people
  • Why so few?
    • Style has changed, lots of smaller projects
    • Being a maintainer isn’t glamorous or fun most of the time
    • 1% are creating the content that 99% of people consume
  • “Rapid evolution [..] poses the risk of introducing errors faster than people can fix them”
  • Consumption scales for most things, but not for open source, because it creates more work for the maintainer
  • “~80% of contributors on github don’t know how to solve a merge conflict”
  • People see themselves as users of OS software, not potential maintainers – examples of rants by users against maintainers and the software
  • “Need maintainers, not contributors”
  • “Helping people over their first pull request, not helping them triage issues”
  • Why are we not talking about this?
  • Lets take a trip back in History
    • Originally Stallman said Free software was about freedom, not popularity. eg “as is” disclaimer of warranty
    • Some people create software sometimes.
    • Debian Social Contract, 4 freedoms, etc places [OS / Free] software and users first, maintainers often not mentioned.
    • Orientated around the user not the producer
  • Four Freedoms of OS producers
    • Decide to participate
    • Say no to contributions or requests
    • Define the priorities and policies of the project
    • Step down or move on
  • Other Issues maintainers need help with
    • Community best practices
    • Project analytics
    • Tools and bots for maintainers (especially for human coordination)
    • Conveying support status ( for contributors, not just user support )
    • Finding funding
  • People have talked about this before, mostly they concentrated on a few big projects like Linux or Apache (and not much written since 2005)
    • Doesn’t reflect the ecosystem today, thousands of small projects, github, social media, etc
    • Open source today is not what open source was 20 years ago
  • Q&A
    • Q: What do you see as the responsibility and potential for orgs like Github?
    • A: Joined github to help with this. Hopes that github can help with tools.
    • Q: How can we get metrics on real projects, not just playthings on github?
    • A: People are using stars on github, which is useless. One idea is to look at dependencies. Hope for better metrics.
    • Q: Is it all agile programmings fault?
    • A: Possibly; people these days are learning to code but the average level is lower and they don’t know what is under the hood. Pretty good in general, but “under the hood it is not just a hammer, it is a human being”
    • Q: Your background is in funding; how does the transition work when a project or some people on it start getting money?
    • A: It is complicated, need some guidelines. Some projects have made it work well ( “jsmobile” I think she said ). Need best practice and to keep things transparent
    • Q: How do we get out to the public (even programmers/tech people at tech companies) what OS is really like these days?
    • A: Example of Rust. Maybe some outreach and general material
    • Q: Is Patreon or other crowd-funding a good way to fund projects?
    • A: Needs a good target; requires a huge following, which is hard for people who are not good at marketing. Better for one-time than recurring. Hard to decide exactly what money should be used for




CryptogramBrian Krebs Uncovers Mirai Botnet Author

Really interesting investigative story.

Planet DebianReproducible builds folks: Reproducible Builds: week 90 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 8 and Saturday January 14 2017:

Upcoming Events

  • The Reproducible Build Zoo will be presented by Vagrant Cascadian at the Embedded Linux Conference in Portland, Oregon, February 22nd

  • Dennis Gilmore and Holger Levsen will present about "Reproducible Builds and Fedora" on February 27th.

  • Introduction to Reproducible Builds will be presented by Vagrant Cascadian at Scale15x in Pasadena, California, March 5th

Reproducible work in other projects

Reproducible Builds have been mentioned in the FSF high-priority project list.

The F-Droid Verification Server has been launched. It rebuilds apps from source that were built by f-droid.org and checks that the results match.

Bernhard M. Wiedemann did some more work on reproducibility for openSUSE.

A new website (unfortunately no HTTPS yet) was launched after the initial work was started at our recent summit in Berlin. This is another topic related to reproducible builds and both will be needed in order to perform "Diverse Double Compilation" in practice in the future.

Toolchain development and fixes

Ximin Luo researched data formats for SOURCE_PREFIX_MAP and explored different options for encoding a map data structure in a single environment variable. He also continued to talk with the rustc team on the topic.
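To see why the encoding question is non-trivial, here is a naive sketch (purely illustrative; not the format that discussion settled on) that joins key=value pairs with ':'. It breaks as soon as a path contains '=' or ':', which is exactly the problem a robust format has to solve with escaping or length-prefixing:

```python
import os

def encode_prefix_map(mapping: dict) -> str:
    """Serialise {old_prefix: new_prefix} into one string for an env var."""
    # Naive format: pairs joined by ':', with '=' between old and new.
    # Real paths may contain both characters, hence the need for a
    # more careful design in the actual SOURCE_PREFIX_MAP discussion.
    return ":".join(f"{old}={new}" for old, new in mapping.items())

def decode_prefix_map(value: str) -> dict:
    """Inverse of encode_prefix_map for ':'- and '='-free paths."""
    return dict(pair.split("=", 1) for pair in value.split(":") if pair)

os.environ["SOURCE_PREFIX_MAP"] = encode_prefix_map(
    {"/home/user/build": "/usr/src/pkg"}
)
decoded = decode_prefix_map(os.environ["SOURCE_PREFIX_MAP"])
print(decoded)  # {'/home/user/build': '/usr/src/pkg'}
```

A compiler honouring such a variable would rewrite any recorded path starting with the old prefix to the new one, making embedded build paths reproducible across machines.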

Daniel Shahaf filed #851225 ('udd: patches: index by DEP-3 "Forwarded" status') to make it easier to track our patches.

Chris Lamb forwarded #849972 upstream to yard, a Ruby documentation generator. Upstream has fixed the issue as of release 0.9.6.

Alexander Couzens (lynxis) has made mksquashfs reproducible and is looking for testers. It compiles on BSD systems such as FreeBSD, OpenBSD and NetBSD.

Bugs filed

Chris Lamb:

Lucas Nussbaum:

Nicola Corna:

Reviews of unreproducible packages

13 package reviews have been added and 13 have been removed in this week, adding to our knowledge about identified issues.

1 issue type has been added:

Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (3)
  • Lucas Nussbaum (11)
  • Nicola Corna (1)

diffoscope development

Bugs in diffoscope in the last year

Many bugs were opened in diffoscope during the past few weeks, which probably is a good sign as it shows that diffoscope is much more widely used than a year ago. We have been working hard to squash many of them in time for Debian stable, though we will see how that goes in the end…

reproducible-website development

  • Ximin Luo and Holger Levsen worked on stricter tests to check that /dev/shm and /run/shm are both mounted with the correct permissions. Some of our build machines currently still fail this test, and the problem is probably the root cause of the FTBFS of some packages (which fails with issues regarding sem_open). The proper fix is still being discussed in #851427.

  • Valerie Young worked on creating and linking autogenerated schema documentation for our database used to store the results.

  • Holger Levsen added a graph with diffoscope crashes and timeouts.

  • Holger also further improved the daily mail notifications about problems.


This week's edition was written by Ximin Luo, Chris Lamb and Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianJan Wagner: Migrating Gitlab non-packaged PostgreSQL into omnibus-packaged

With the release of Gitlab 8.15 it was announced that PostgreSQL needs to be upgraded. As I migrated from a source installation I used to have an external PostgreSQL database instead of using the one shipped with the omnibus package.
So I decided to do the data migration into the omnibus PostgreSQL database now which I skipped before.

Let's have a look into the databases:

$ sudo -u postgres psql -d template1
psql (9.2.18)  
Type "help" for help.

template1=# \l  
                                             List of databases
         Name          |       Owner       | Encoding | Collate |  Ctype  |        Access privileges
-----------------------+-------------------+----------+---------+---------+--------------------------
 gitlabhq_production   | git               | UTF8     | C.UTF-8 | C.UTF-8 |
 gitlab_mattermost     | git               | UTF8     | C.UTF-8 | C.UTF-8 |
template1=# \q  

Dump the databases and stop PostgreSQL. You may need to adjust database names and users for your setup.

$ su postgres -c "pg_dump gitlabhq_production -f /tmp/gitlabhq_production.sql" && \
su postgres -c "pg_dump gitlab_mattermost -f /tmp/gitlab_mattermost.sql" && \  
/etc/init.d/postgresql stop

Activate PostgreSQL shipped with Gitlab Omnibus

$ sed -i "s/^postgresql\['enable'\] = false/#postgresql\['enable'\] = false/g" /etc/gitlab/gitlab.rb && \
sed -i "s/^#mattermost\['enable'\] = true/mattermost\['enable'\] = true/" /etc/gitlab/gitlab.rb && \  
gitlab-ctl reconfigure  

Testing if the connection to the databases works

$ su - git -c "psql --username=gitlab  --dbname=gitlabhq_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)  
Type "help" for help.

gitlabhq_production=# \q  
$ su - git -c "psql --username=gitlab  --dbname=mattermost_production --host=/var/opt/gitlab/postgresql/"
psql (9.2.18)  
Type "help" for help.

mattermost_production=# \q  

Ensure pg_trgm extension is enabled

$ sudo gitlab-psql -d gitlabhq_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'
$ sudo gitlab-psql -d mattermost_production -c 'CREATE EXTENSION IF NOT EXISTS "pg_trgm";'

Adjust the permissions in the database dumps. Please verify that the users and databases match your setup; they might need to be adjusted too.

$ sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/gitlabhq_production.sql && \
sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlabhq_production.sql  
$ sed -i "s/OWNER TO git;/OWNER TO gitlab_mattermost;/" /tmp/gitlab_mattermost.sql && \
sed -i "s/postgres;$/gitlab-psql;/" /tmp/gitlab_mattermost.sql  
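The sed rewrites above are plain text substitutions on the dump files. A self-contained demonstration of their effect on a made-up dump fragment (the file name and table are invented for illustration):

```shell
#!/bin/sh
# Create a tiny fake dump fragment to show what the rewrites do.
cat > /tmp/demo_dump.sql <<'EOF'
ALTER TABLE public.users OWNER TO git;
GRANT ALL ON SCHEMA public TO postgres;
EOF

# Same substitutions as in the migration: change the table owner,
# and repoint end-of-line grants from "postgres" to "gitlab-psql".
sed -i "s/OWNER TO git;/OWNER TO gitlab;/" /tmp/demo_dump.sql
sed -i "s/postgres;$/gitlab-psql;/" /tmp/demo_dump.sql

cat /tmp/demo_dump.sql
# ALTER TABLE public.users OWNER TO gitlab;
# GRANT ALL ON SCHEMA public TO gitlab-psql;
```

Running something like this on a throwaway copy first is a cheap way to confirm the patterns match before touching the real dumps.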

(Re)import the data

$ sudo gitlab-psql -d gitlabhq_production -f /tmp/gitlabhq_production.sql
$ sudo gitlab-psql -d gitlabhq_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
sudo gitlab-psql -d gitlabhq_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'  
$ sudo gitlab-psql -d mattermost_production -f /tmp/gitlab_mattermost.sql
$ sudo gitlab-psql -d mattermost_production -c 'REVOKE ALL ON SCHEMA public FROM "gitlab-psql";' && \
sudo gitlab-psql -d mattermost_production -c 'GRANT ALL ON SCHEMA public TO "gitlab-psql";'  

Make use of the shipped PostgreSQL

$ sed -i "s/^gitlab_rails\['db_/#gitlab_rails\['db_/" /etc/gitlab/gitlab.rb && \
sed -i "s/^mattermost\['sql_/#mattermost\['sql_/" /etc/gitlab/gitlab.rb && \  
gitlab-ctl reconfigure  

Now you should be able to connect to all the Gitlab services again.

Optionally remove the external database

apt-get remove postgresql postgresql-client postgresql-9.4 postgresql-client-9.4 postgresql-client-common postgresql-common  

Maybe you also want to purge the old database content

apt-get purge postgresql-9.4  

Krebs on SecurityWho is Anna-Senpai, the Mirai Worm Author?

On September 22, 2016, this site was forced offline for nearly four days after it was hit with “Mirai,” a malware strain that enslaves poorly secured Internet of Things (IoT) devices like wireless routers and security cameras into a botnet for use in large cyberattacks. Roughly a week after that assault, the individual(s) who launched that attack — using the name “Anna-Senpai” — released the source code for Mirai, spawning dozens of copycat attack armies online.

After months of digging, KrebsOnSecurity is now confident that it has uncovered Anna-Senpai’s real-life identity, and the identity of at least one co-conspirator who helped to write and modify the malware.

The Hackforums post that includes links to the Mirai source code.

Mirai co-author Anna-Senpai leaked the source code for Mirai on Sept. 30, 2016.

Before we go further, a few disclosures are probably in order. First, this is easily the longest story I’ve ever written on this blog. It’s lengthy because I wanted to walk readers through my process of discovery, which has taken months to unravel. The details help in understanding the financial motivations behind Mirai and the botnet wars that preceded it. Also, I realize there are a great many names to keep track of as you read this post, so I’ve included a glossary.

The story you’re reading now is the result of hundreds of hours of research.  At times, I was desperately seeking the missing link between seemingly unrelated people and events; sometimes I was inundated with huge amounts of information — much of it intentionally false or misleading — and left to search for kernels of truth hidden among the dross.  If you’ve ever wondered why it seems that so few Internet criminals are brought to justice, I can tell you that the sheer amount of persistence and investigative resources required to piece together who’s done what to whom (and why) in the online era is tremendous.

As noted in previous KrebsOnSecurity articles, botnets like Mirai are used to knock individuals, businesses, governmental agencies, and non-profits offline on a daily basis. These so-called “distributed denial-of-service” (DDoS) attacks are digital sieges in which an attacker causes thousands of hacked systems to hit a target with so much junk traffic that it falls over and remains unreachable by legitimate visitors. While DDoS attacks typically target a single Web site or Internet host, they often result in widespread collateral Internet disruption.

A great deal of DDoS activity on the Internet originates from so-called ‘booter/stresser’ services, which are essentially DDoS-for-hire services which allow even unsophisticated users to launch high-impact attacks.  And as we will see, the incessant competition for profits in the blatantly illegal DDoS-for-hire industry can lead those involved down some very strange paths, indeed.


The first clues to Anna-Senpai’s identity didn’t become clear until I understood that Mirai was just the latest incarnation of an IoT botnet family that has been in development and relatively broad use for nearly three years.

Earlier this summer, my site was hit with several huge attacks from a collection of hacked IoT systems compromised by a family of botnet code that served as a precursor to Mirai. The malware went by several names, including “Bashlite,” “Gafgyt,” “Qbot,” “Remaiten,” and “Torlus.”

All of these related IoT botnet varieties infect new systems in a fashion similar to other well-known Internet worms — propagating from one infected host to another. And like those earlier Internet worms, sometimes the Internet scanning these systems perform to identify other candidates for inclusion into the botnet is so aggressive that it constitutes an unintended DDoS on the very home routers, Web cameras and DVRs that the bot code is trying to subvert and recruit into the botnet. This kind of self-defeating behavior will be familiar to those who recall the original Morris Worm, NIMDA, CODE RED, Welchia, Blaster and SQL Slammer disruptions of yesteryear.

Infected IoT devices constantly scan the Web for other IoT things to compromise, wriggling into devices that are protected by little more than insecure factory-default settings and passwords. The infected devices are then forced to participate in DDoS attacks (ironically, many of the devices most commonly infected by Mirai and similar IoT worms are security cameras).

Mirai’s ancestors had so many names because each name corresponded to a variant that included new improvements over time. In 2014, a group of Internet hooligans operating under the banner “lelddos” very publicly used the code to launch large, sustained attacks that knocked many Web sites offline.

The most frequent targets of the lelddos gang were Web servers used to host Minecraft, a wildly popular computer game sold by Microsoft that can be played from any device and on any Internet connection.

The object of Minecraft is to run around and build stuff, block by large pixelated block. That may sound simplistic and boring, but an impressive number of people positively adore this game – particularly pre-teen males. Microsoft has sold more than 100 million copies of Minecraft, and at any given time there are over a million people playing it online. Players can build their own worlds, or visit myriad other blocky realms by logging on to their favorite Minecraft server to play with friends.



A large, successful Minecraft server with more than a thousand players logging on each day can easily earn the server’s owners upwards of $50,000 per month, mainly from players renting space on the server to build their Minecraft worlds, and purchasing in-game items and special abilities.

Perhaps unsurprisingly, the top-earning Minecraft servers eventually attracted the attention of ne’er-do-wells and extortionists like the lelddos gang. Lelddos would launch a huge DDoS attack against a Minecraft server, knowing that the targeted Minecraft server owner was likely losing thousands of dollars for each day his gaming channel remained offline.

Adding urgency to the ordeal, many of the targeted server’s loyal customers would soon find other Minecraft servers to patronize if they could not get their Minecraft fix at the usual online spot.

Robert Coelho is vice president of ProxyPipe, Inc., a San Francisco company that specializes in protecting Minecraft servers from attacks.

“The Minecraft industry is so competitive,” Coelho said. “If you’re a player, and your favorite Minecraft server gets knocked offline, you can switch to another server. But for the server operators, it’s all about maximizing the number of players and running a large, powerful server. The more players you can hold on the server, the more money you make. But if you go down, you start to lose Minecraft players very fast — maybe for good.”

In June 2014, ProxyPipe was hit with a 300 gigabit per second DDoS attack launched by lelddos, which had a penchant for publicly taunting its victims on Twitter just as it began launching DDoS assaults at the taunted.

The hacker group "lelddos" tweeted at its victims before launching huge DDoS attacks against them.

The hacker group “lelddos” tweeted at its victims before launching huge DDoS attacks against them.

At the time, ProxyPipe was buying DDoS protection from Reston, Va.-based security giant Verisign. In a quarterly report published in 2014, Verisign called the attack the largest it had ever seen, although it didn’t name ProxyPipe in the report – referring to it only as a customer in the media and entertainment business.

Verisign said the 2014 attack was launched by a botnet of more than 100,000 servers running on SuperMicro IPMI boards. Days before the huge attack on ProxyPipe, a security researcher published information about a vulnerability in the SuperMicro devices that could allow them to be remotely hacked and commandeered for these sorts of attacks.


Coelho recalled that in mid-2015 his company’s Minecraft customers began coming under attack from a botnet made up of IoT devices infected with Qbot. He said the attacks were directly preceded by a threat made by a then-17-year-old Christopher “CJ” Sculti, Jr., the owner and sole employee of a competing DDoS protection company called Datawagon.

Datawagon also courted Minecraft servers as customers, and its servers were hosted on Internet space claimed by yet another Minecraft-focused DDoS protection provider — ProTraf Solutions.


Christopher “CJ” Sculti, Jr.

According to Coelho, ProTraf was trying to woo many of his biggest Minecraft server customers away from ProxyPipe. Coelho said in mid-2015, Sculti reached out to him on Skype and said he was getting ready to disable Coelho’s Skype account. At the time, an exploit for a software weakness in Skype was being traded online, and this exploit could be used to remotely and instantaneously disable any Skype account.

Sure enough, Coelho recalled, his Skype account and two others used by co-workers were shut off just minutes after that threat, effectively severing a main artery of support for ProxyPipe’s customers – many of whom were accustomed to communicating with ProxyPipe via Skype.

“CJ messaged me about five minutes before the DDoS started, saying he was going to disable my skype,” Coelho said. “The scary thing about when this happens is you don’t know if your Skype account has been hacked and under control of someone else or if it just got disabled.”

Once ProxyPipe’s Skype accounts were disabled, the company’s servers were hit with a massive, constantly changing DDoS attack that disrupted ProxyPipe’s service to its Minecraft server customers. Coelho said within a few days of the attack, many of ProxyPipe’s most lucrative Minecraft servers had moved over to servers protected by ProTraf Solutions.

“In 2015, the ProTraf guys hit us offline tons, so a lot of our customers moved over to them,” Coelho said. “We told our customers that we knew [ProTraf] were the ones doing it, but some of the customers didn’t care and moved over to ProTraf anyway because they were losing money from being down.”

I found Coelho’s story fascinating because it eerily echoed the events leading up to my Sept. 2016 record 620 Gbps attack. I, too, was contacted via Skype by Sculti — on two occasions. The first was on July 7, 2015, when Sculti reached out apropos of nothing to brag about scanning the Internet for IoT devices running default usernames and passwords, saying he had uploaded some kind of program to more than a quarter-million systems that his scans found.

Here’s a snippet of that conversation:

July 7, 2015:

21:37 CJ:
21:37 CJ: vulnerable routers are a HUGE issue
21:37 CJ: a few months ago
21:37 CJ: I scanned the internet with a few sets of defualt logins
21:37 CJ: for telnet
21:37 CJ: and I was able to upload and execute a binary
21:38 CJ: on 250k devices
21:38 CJ: most of which were routers
21:38 Brian Krebs: o_0

The second time I heard from Sculti on Skype was Sept. 20, 2016 — the day of my 620 Gbps attack. Sculti was angry over a story I’d just published that mentioned his name, and he began rather saltily maligning the reputation of a source and friend who had helped me with that story.

Indignant on behalf of my source and annoyed at Sculti’s rant, I simply blocked his Skype account from communicating with mine and went on with my day. Just minutes after that conversation, however, my Skype account was flooded with thousands of contact requests from compromised or junk Skype accounts, making it virtually impossible to use the software for making phone calls or instant messaging.

Six hours after that Sept. 20 conversation with Sculti, the huge 620 Gbps DDoS attack commenced on this site.


Coelho said he believes the main members of the lelddos gang were Sculti and the owners of ProTraf. Asked why he was so sure of this, he recounted a large lelddos attack in early 2015 against ProxyPipe that coincided with a scam in which large tracts of Internet address space were temporarily stolen from the company.

According to ProxyPipe, a swath of Internet addresses was hijacked from the company by FastReturn, a cloud hosting firm. Dyn, a company that closely tracks which blocks of Internet addresses are assigned to which organizations, confirmed the timing of the Internet address hijack that Coelho described.

A few months after that attack, the owner of FastReturn — a young man from Dubai named Ammar Zuberi — went to work as a software developer for ProTraf. In the process, Zuberi transferred the majority of Internet addresses assigned to FastReturn over to ProTraf.

Zuberi told KrebsOnSecurity that he was not involved with lelddos, but he acknowledged that he did hijack ProxyPipe’s Internet address space before moving over to ProTraf.

“I was stupid and new to this entire thing and it was interesting to me how insecure the underlying ecosystem of the Internet was,” Zuberi said. “I just kept pushing the envelope to see how far I could get with that, I guess. I eventually realized though and got away from it, although that’s not really much of a justification.”

According to Zuberi, CJ Sculti Jr. was a member of lelddos, as were the two co-owners of ProTraf. This is interesting because not long after the September 2016 Mirai attack took this site offline, several sources who specialize in lurking on cybercrime forums shared information suggesting that the principal author of Bashlite/Qbot was a ProTraf employee: A 19-year-old computer whiz from Washington, Penn. named Josiah White.

White’s profile on LinkedIn lists him as an “enterprise DDoS mitigation expert” at ProTraf, but for years he was better known to those in the hacker community under the alias “LiteSpeed.”

LiteSpeed is the screen name White used on Hackforums[dot]net – a sprawling English-language marketplace where mostly young, low-skilled hackers can buy and sell cybercrime tools and stolen goods with ease. Until very recently, Hackforums also was the definitive place to buy and sell DDoS-for-hire services.

I contacted White to find out if the rumors about his authorship of Qbot/Bashlite were true. White acknowledged that he had written some of Qbot/Bashlite’s components — including the code segment that the malware uses to spread the infection to new machines. But White said he never intended for his code to be sold and traded online.

White claims that a onetime friend and Hackforums member nicknamed “Vyp0r” betrayed his trust and forced him to publish the code online by threatening to post White’s personal details online and to “swat” his home. Swatting is a potentially deadly hoax in which an attacker calls in a fake hostage situation or bomb threat at a residence or business with the intention of sending a team of heavily-armed police officers to the target’s address.

“Most of the stuff that I had wrote was for friends, but as I later realized, things on HF [Hackforums] tend to not remain private,” White wrote in an instant message to KrebsOnSecurity. “Eventually I learned they were reselling them in under-the-table deals, and so I just released everything to stop that. I made some mistakes when I was younger, and I realize that, but I’m trying to set my path straight and move on.”



White’s employer ProTraf Solutions has only one other employee – 20-year-old President Paras Jha, from Fanwood, NJ. On his LinkedIn profile, Jha states that “Paras is a passionate entrepreneur driven by the want to create.” The profile continues:

“Highly self-motivated, in 7th grade he began to teach himself to program in a variety of languages. Today, his skillset for software development includes C#, Java, Golang, C, C++, PHP, x86 ASM, not to mention web ‘browser languages’ such as Javascript and HTML/CSS.”

Jha’s LinkedIn page also shows that he has extensive experience running Minecraft servers, and that for several years he worked for Minetime, one of the most popular Minecraft servers at the time.

After first reading Jha’s LinkedIn resume, I was haunted by the nagging feeling that I’d seen this rather unique combination of computer language skills somewhere else online. Then it dawned on me: The mix of programming skills that Jha listed in his LinkedIn profile is remarkably similar to the skills listed on Hackforums by none other than Mirai’s author — Anna-Senpai.

Before leaking the Mirai source code on Hackforums at the end of September 2016, Anna-Senpai used the majority of his posts there to taunt other hackers on the forum who were using Qbot to build DDoS attack armies.

The best example of this is a thread posted to Hackforums on July 10, 2016 titled “Killing All Telnets,” in which Anna-Senpai boldly warns forum members that the malicious code powering his botnet contains a particularly effective “bot killer” designed to remove Qbot from infected IoT devices and to prevent systems infected with his malware from ever being reinfected with Qbot again.


Anna-Senpai warns Qbot users that his new worm (relatively unknown by its name “Mirai” at the time) was capable of killing off IoT devices infected with Qbot.

Initially, forum members dismissed Anna’s threats as idle taunts, but as the thread continues for page after page we can see from other forum members that his bot killer is indeed having its intended effect. [Oddly enough, it’s very common for the authors of botnet code to include patching routines to protect their newly-enslaved bots from being compromised by other miscreants.  Just like in any other market, there is a high degree of competition between cybercrooks who are constantly seeking to add more zombies to their DDoS armies, and they often resort to unorthodox tactics to knock out the competition.  As we’ll see, this kind of internecine warfare is a major element in this story.]

“When the owner of this botnet wrote a July 2016 Hackforums thread named ‘Killing all Telnets’, he was right,” wrote Allison Nixon and Pierre Lamy, threat researchers for New York City-based security firm Flashpoint. “Our intelligence around that time reflected a massive shift away from the traditional gafgyt infection patterns and towards a different pattern that refused to properly execute on analysts’ machines. This new species choked out all the others.”

It wasn’t until after I’d spoken with Jha’s business partner Josiah White that I began re-reading every one of Anna-Senpai’s several dozen posts to Hackforums. The one that made Jha’s programming skills seem familiar came on July 12, 2016 — a week after posting his “Killing All Telnets” discussion thread — when Anna-Senpai contributed to a Hackforums thread started by a hacker group calling itself “Nightmare.”

Such groups or hacker cliques are common on Hackforums, and forum members can apply for membership by stating their skills and answering a few questions. Anna-Senpai posted his application for membership into this thread among dozens of others, describing himself thusly:

Age: 18+

Location and Languages Spoken: English

Which of the aforementioned categories describe you the best?: Programmer / Development

What do you Specialize in? (List only): Systems programming / general low level languages (C + ASM)

Why should we choose you over other applicants?: I have 8 years of development under my belt, and I’m very familiar with programming in a variety of languages, including ASM, C, Go, Java, C#, and PHP. I like to use this knowledge for personal gain.

The Hackforums post shows that Jha and Anna-Senpai list the exact same programming skills. Additionally, according to an analysis of Mirai by security firm Incapsula, the malicious software used to control a botnet powered by Mirai is coded in Go (a.k.a. “Golang”), a somewhat esoteric programming language developed at Google beginning in 2007 that saw a surge in popularity in 2016. Incapsula also said the malcode that gets installed on IoT bots is coded in C.



I began to dig deeper into Paras Jha’s history and footprint online, and discovered that his father in October 2013 registered a vanity domain for his son. That site is no longer online, but a historic version of it cached by the indispensable Internet Archive includes a resume of Jha’s early work with various popular Minecraft servers. Here’s an autobiographical snippet from the site:

“My passion is to utilize my skills in programming and drawing to develop entertaining games and software for the online game ‘Minecraft.’ Someday, I plan to start my own enterprise focused on the gaming industry targeted towards game consoles and the mobile platform. To further my ideas and help the gaming community, I have released some of my code to open source projects on websites centered on public coding under the handle dreadiscool.”

A Google search for this rather unique username “dreadiscool” turns up accounts by the same name at dozens of forums dedicated to computer programming and Minecraft. In many of those accounts, the owner is clearly frustrated by incessant DDoS attacks targeting his Minecraft servers, and appears eager for advice on how best to counter the assaults.

From Dreadiscool’s various online postings, it seems clear that at some point Jha decided it might be more profitable and less frustrating to defend Minecraft servers from DDoS attacks, as opposed to trying to maintain the servers themselves.

“My experience in dealing with DDoS attacks led me to start a server hosting company focused on providing solutions to clients to mitigate such attacks,” Jha wrote on his vanity site.

Some of the more recent Dreadiscool posts date to November 2016, and many of those posts are lengthy explanations of highly technical subjects. The tone of voice in these posts is far more confident and even condescending than that of the Dreadiscool of years earlier, covering a range of subjects from programming to DDoS attacks.


Dreadiscool’s account on Spigot Minecraft forum since 2013 includes some interesting characters photoshopped into this image.

For example, Dreadiscool has been an active member of the Spigot Minecraft forum since 2013. This user’s avatar (pictured above) is an altered image taken from the 1994 Quentin Tarantino cult hit “Pulp Fiction,” specifically from a scene in which the gangster characters Jules and Vincent are pointing their pistols in the same direction. However, the heads of both actors have been digitally altered to show other people’s faces.

Pasted over the head of John Travolta’s character (left) is a real-life picture of Vyp0r — the Hackforums nickname of the guy that ProTraf’s Josiah White said threatened him into releasing the source code for Bashlite. On the shoulders of Samuel L. Jackson’s body is the face of Tucker Preston, co-founder of BackConnect Security — a competing DDoS mitigation provider that also has a history of hijacking Internet address ranges from other providers.

Pictured below and to the left of Travolta and Jackson’s characters — seated on the bed behind them — is “Yamada,” a Japanese animation (“anime”) character featured in the anime series B Gata H Kei.

Turns out, there is a Dreadiscool user on a site where members proudly list the various anime series they have watched. Dreadiscool says B Gata H Kei is one of nine anime series he has watched. Among the other eight? The anime series Mirai Nikki, from which the Mirai malware derives its name.

Dreadiscool’s Reddit profile also is very interesting, and most of the recent posts there relate to major DDoS attacks going on at the time, including a series of DDoS attacks on Rutgers University. More on Rutgers later.


At around the same time as the record 620 Gbps attack on KrebsOnSecurity, French Web hosting giant OVH suffered an even larger attack — launched by the very same Mirai botnet used to attack this site. Although this fact has been widely reported in the news media, the reason for the OVH attack may not be so well known.

According to a tweet from OVH founder and chief technology officer Octave Klaba, the target of that massive attack also was a Minecraft server (although Klaba mistakenly called the target “mindcraft servers” in his tweet).


A tweet from OVH founder and CTO, stating the intended target of Sept. 2016 Mirai DDoS on his company.

Turns out, in the days following the attack on this site and on OVH, Anna-Senpai had trained his Mirai botnet on Coelho’s ProxyPipe, completely knocking his DDoS mitigation service offline for the better part of a day and causing problems for many popular Minecraft servers.

Unable to obtain more bandwidth and unwilling to sign an expensive annual contract with a third-party DDoS mitigation firm, Coelho turned to the only other option available to get out from under the attack: Filing abuse complaints with the Internet hosting firms that were responsible for providing connectivity to the control server used to orchestrate the activities of the Mirai botnet.

“We did it because we had no other options, and because all of our customers were offline,” Coelho said. “Even though no other DDoS mitigation company was able to defend against these attacks [from Mirai], we still needed to defend against it because our customers were starting to move to other providers that attracted fewer attacks.”

After scouring a list of Internet addresses tied to bots used in the attack, Coelho said he was able to trace the control server for the Mirai botnet back to a hosting provider in Ukraine. That company — BlazingFast[dot]io — has a reputation for hosting botnet control networks (even now, Spamhaus is reporting an IoT botnet controller running out of BlazingFast since Jan. 17, 2017).

Getting no love from BlazingFast, Coelho said he escalated his complaint to Voxility, a company that was providing DDoS protection to BlazingFast at the time.

“Voxility acknowledged the presence of the control server, and said they null-routed [removed] it, but they didn’t,” Coelho said. “They basically lied to us and didn’t reply to any other emails.”

Undeterred, Coelho said he then emailed the ISP that was upstream of BlazingFast, but received little help from that company or the next ISP further upstream. Coelho said the fifth ISP upstream of BlazingFast, however — Internet provider Telia Sonera — confirmed his report, and promptly had the Mirai botnet’s control server killed.

As a result, many of the systems infected with Mirai could no longer connect to the botnet’s control servers, drastically reducing the botnet’s overall firepower.

“The action by Telia cut the size of the attacks launched by the botnet down to 80 Gbps,” well within the range of ProxyPipe’s in-house DDoS mitigation capabilities, Coelho said.
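Some quick arithmetic puts that reduction in perspective. Assuming attack bandwidth scales roughly with the number of bots still able to reach a command-and-control server, and using the botnet’s earlier ~620 Gbps peak against this site as a baseline:

```python
# Back-of-the-envelope estimate: how much firepower the C2 takedown removed,
# assuming attack bandwidth is roughly proportional to the number of bots
# still able to reach a control server. The 620 Gbps baseline is the botnet's
# earlier peak against KrebsOnSecurity; 80 Gbps is the post-takedown figure.
peak_gbps = 620
post_takedown_gbps = 80

fraction_cut = 1 - post_takedown_gbps / peak_gbps
print(f"~{fraction_cut:.0%} of the botnet's firepower cut off")
```

Under those assumptions, a single well-placed abuse complaint chain stranded roughly seven-eighths of the botnet, without patching a single infected device.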

Incredibly, on Sept. 28, Anna-Senpai himself would reach out to Coelho via Skype. Coelho shared a copy of that chat conversation with KrebsOnSecurity. The log shows that Anna correctly guessed ProxyPipe was responsible for the abuse complaints that kneecapped Mirai. Anna-Senpai said he guessed ProxyPipe was responsible after reading a comment on a KrebsOnSecurity blog post from a reader who shared the same username as Coelho’s business partner.

In the following chat, Coelho is using the Skype nickname “katie.onis.”

[10:23:08 AM] live:anna-senpai: ^
[10:26:08 AM] katie.onis: hi there.
[10:26:52 AM] katie.onis: How can I help you?
[10:28:06 AM] live:anna-senpai: hi
[10:28:45 AM] live:anna-senpai: you know i had my suspicions, but this one was proof [this is a benign/safe link to a screenshot of some comments on]

[10:28:59 AM] live:anna-senpai: don’t get me wrong, im not even mad, it was pretty funny actually. nobody has ever done that to my c2 [Mirai “command and control” server]
[10:29:25 AM] live:anna-senpai: (goldmedal)
[10:29:29 AM] katie.onis: ah you’re mistaken, that’s not us.
[10:29:33 AM] katie.onis: but we know who it is
[10:29:42 AM] live:anna-senpai: eric / 9gigs
[10:29:47 AM] katie.onis: no, 9gigs is erik
[10:29:48 AM] katie.onis: not eric
[10:29:53 AM] katie.onis: different people
[10:30:09 AM] live:anna-senpai: oh?
[10:30:17 AM] katie.onis: yep
[10:30:39 AM] live:anna-senpai: is he someone related to you guys?
[10:30:44 AM] katie.onis: not related to us, we just know him
[10:30:50 AM] katie.onis: anyway, we’re not interested in any harm, we simply don’t want attacks against us.
[10:31:16 AM] live:anna-senpai: yeah i figured, i added you because i wanted to tip my hat if that was actually you lol
[10:31:24 AM] katie.onis: we didn’t make that dumb post
[10:31:26 AM] katie.onis: if that is what you are asking
[10:31:30 AM] katie.onis: but yes, we were involved in doing that.
[10:31:47 AM] live:anna-senpai: so you got it nulled, but some other eric is claiming credit for it?
[10:31:52 AM] katie.onis: seems so.
[10:31:52 AM] live:anna-senpai: eric with a c
[10:31:56 AM] live:anna-senpai: lol
[10:32:17 AM] live:anna-senpai: can’t say im surprised, tons of people take credit for things that they didn’t do if nobody else takes credit for
[10:32:24 AM] katie.onis: we’re not interested in taking credit
[10:32:30 AM] katie.onis: we just wanted the attacks to get smaller


One reason Anna-Senpai may have been enamored of Coelho’s approach to taking down Mirai is that Anna-Senpai had spent the previous month doing exactly the same thing to criminals running IoT botnets powered by Mirai’s top rival — Qbot.

A month before this chat between Coelho and Anna-Senpai, Anna was busy sending abuse complaints to various hosting firms, warning them that they were hosting huge IoT botnet control channels that needed to be shut down. This was clearly just part of an extended campaign by the Mirai botmasters to eliminate other IoT-based DDoS botnets that might compete for the same pool of vulnerable IoT devices. Anna confirmed this in his chat with Coelho:

[10:50:36 AM] live:anna-senpai: i have good killer so nobody else can assemble a large net
[10:50:53 AM] live:anna-senpai: i monitor the devices to see for any new threats
[10:51:33 AM] live:anna-senpai: and when i find any new host, i get them taken down

The ISPs or hosting providers that received abuse complaints from Anna-Senpai were all encouraged to reply to the email address for questions and/or confirmation of the takedown. ISPs that declined to act promptly on Anna-Senpai’s Qbot email complaints soon found themselves on the receiving end of enormous DDoS attacks from Mirai.

Francisco Dias, owner of hosting provider Frantech, found out firsthand what it would cost to ignore one of Anna’s abuse reports. In mid-September 2016, Francisco accidentally got into an Internet fight with Anna-Senpai. The Mirai botmaster was using the nickname “jorgemichaels” at the time — and Jorgemichaels was talking trash on LowEndTalk, a discussion forum for vendors of low-cost hosting.

Specifically, Jorgemichaels takes Francisco to task publicly on the forum for ignoring one of his Qbot abuse complaints. Francisco tells Jorgemichaels to file a complaint with the police if it’s so urgent. Jorgemichaels tells Francisco to shut up, and when Francisco is silent for a while Jorgemichaels gloats that Francisco learned his place. Francisco explains his further silence on the thread by saying he’s busy supporting customers, to which Jorgemichaels replies, “Sounds like you just got a lot more customers to help. Don’t mess with the underworld francisco or it will harm your business.”

Shortly thereafter, Frantech is systematically knocked offline after being attacked by Mirai. Below is a fascinating snippet from a private conversation between Francisco and Anna-Senpai/Jorgemichaels, in which Francisco kills the reported Qbot control server to make Anna/Jorgemichaels call off the attack.

Using the nickname "jorgemichaels" on LowEndTalk, Anna-Senpai reaches out to Francisco Dias after Dias ignores Anna's abuse complaint. Francisco agrees to kill the Qbot control server after being walloped with Mirai.

Using the nickname “jorgemichaels” on LowEndTalk, Anna-Senpai reaches out to Francisco Dias after Dias ignores Anna’s abuse complaint. Francisco agrees to kill the Qbot control server only after being walloped with Mirai.

Back to the chat between Anna-Senpai and Coelho at the end of Sept. 2016. Anna-Senpai tells Coelho that the attacks against ProxyPipe aren’t personal; they’re just business. Anna says he has been renting out “net spots” — sizable chunks of his Mirai botnet — to other hackers who use them in their own attacks for pre-arranged periods of time.

By way of example, Anna brags that as he and Coelho are speaking, the owners of a large Minecraft server were paying him to launch a crippling DDoS against Hypixel, currently the world’s most popular Minecraft server. KrebsOnSecurity confirmed with Hypixel that they were indeed under a massive attack from Mirai between Sept. 27 and 30.

[12:24:00 PM] live:anna-senpai: right now i just have a script sitting there hitting them for 45s every 20 minutes
[12:24:09 PM] live:anna-senpai: enough to drop all players and make them rage

Coelho told KrebsOnSecurity that the on-again, off-again DDoS method that Anna described using against Hypixel was designed not just to cost Hypixel money. The purpose of that attack method, he said, was to aggravate and annoy Hypixel’s customers so much that they might take their business to a competing Minecraft server.

“It’s not just about taking it down, it’s about making everyone who is playing on that server crazy mad,” Coelho explained. “If you launch the attack every 20 minutes for a short period of time, you basically give the players just enough time to get back on the server and involved in another game before they’re disconnected again.”
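The arithmetic behind that tactic is striking. A short sketch using the 45-seconds-every-20-minutes pattern Anna-Senpai described:

```python
# The 45s-on / 20-minutes-off burst pattern Anna-Senpai described: total
# downtime is tiny, but every burst disconnects every player on the server.
burst_seconds = 45
cycle_seconds = 20 * 60          # one burst every 20 minutes

downtime_fraction = burst_seconds / cycle_seconds
kicks_per_day = 24 * 3600 // cycle_seconds

print(f"Server down only {downtime_fraction:.2%} of the time")   # 3.75%
print(f"...but players are kicked {kicks_per_day} times per day")  # 72
```

In other words, the attacker spends less than four percent of his rented botnet time on the target while still delivering dozens of rage-inducing disconnections a day.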

Anna-Senpai told Coelho that paying customers also were the reason for the 620 Gbps attack on KrebsOnSecurity. Two weeks prior to that attack, I published the results of a months-long investigation revealing that “vDOS” — one of the largest and longest-running DDoS-for-hire services — had been hacked, exposing details about the service’s owners and customers.

The story noted that vDOS earned its proprietors more than $600,000 and was being run by two 18-year-old Israeli men who went by the hacker aliases “applej4ck” and “p1st0”. Hours after that piece ran, Israeli authorities arrested both men, and vDOS — which had been in operation for four years — was shuttered for good.

[10:47:42 AM] live:anna-senpai: i sell net spots, starting at $5k a week
[10:47:50 AM] live:anna-senpai: and one client was upset about applejack arrest
[10:48:01 AM] live:anna-senpai: so while i was gone he was sitting on them for hours with gre and ack
[10:48:14 AM] live:anna-senpai: when i came back i was like oh fuck
[10:48:16 AM] live:anna-senpai: and whitelisted the prefix
[10:48:24 AM] live:anna-senpai: but then krebs tweeted that akamai is kicking them off
[10:48:31 AM] live:anna-senpai: fuck me
[10:48:43 AM] live:anna-senpai: he was a cool guy too, i like his article

[SIDE NOTE: If true, it’s ironic that someone would hire Anna-Senpai to attack my site in retribution for the vDOS story. That’s because the firepower behind applej4ck’s vDOS service was generated in large part by a botnet of IoT systems infected with a Qbot variant — the very same botnet strain that Anna-Senpai and Mirai were busy killing and erasing from the Internet.]

Coelho told KrebsOnSecurity that if his side of the conversation reads like he was being too conciliatory to his assailant, that’s because he was wary of giving Anna a reason to launch another monster attack against ProxyPipe. After all, Coelho said, the Mirai attacks on ProxyPipe caused many customers to switch to other Minecraft servers, and Coelho estimates the attack cost the company between $400,000 and $500,000.

Nevertheless, about halfway through the chat Coelho gently confronts Anna on the consequences of his actions.

[10:54:17 AM] katie.onis: People have a genuine reason to be unhappy though about large attacks like this
[10:54:27 AM] live:anna-senpai: yeah
[10:54:32 AM] katie.onis: There’s really nothing anyone can do lol
[10:54:36 AM] live:anna-senpai: 😛
[10:54:38 AM] katie.onis: And it does affect their lives
[10:55:10 AM] live:anna-senpai: well, i stopped caring about other people a long time ago
[10:55:18 AM] live:anna-senpai: my life experience has always been get fucked over or fuck someone else over
[10:55:52 AM] katie.onis: My experience with [ProxyPipe] thus far has been
[10:55:54 AM] katie.onis: Do nothing bad to anyone
[10:55:58 AM] katie.onis: And still get screwed over
[10:55:59 AM] katie.onis: Haha

The two even discussed anime after Anna-Senpai guessed that Coelho might be a fan of the genre. Anna-Senpai says he watched the anime series “Gate,” a reference to the above-mentioned B Gata H Hei that Dreadiscool included in the list of anime film series he’s watched. Anna also confirms that the name for his bot malware was derived from the anime series Mirai Nikki.

[5:25:12 PM] live:anna-senpai: i rewatched mirai nikki recently
[5:25:22 PM] live:anna-senpai: (it was the reason i named my bot mirai lol)


Coelho said when Anna-Senpai first reached out to him on Skype, he had no clue about the hacker’s real-life identity. But a few weeks after that chat conversation with Anna-Senpai, Coelho’s business partner (the Eric referenced in the first chat segment above) said he noticed that some of the code in Mirai looked awfully similar to code that Dreadiscool had posted to his Github account.

“He started to come to the conclusion that maybe Anna was Paras,” Coelho said. “He gave me a lot of ideas, and after I did my own investigation I decided he was probably right.”

Coelho said he’s known Paras Jha for more than four years, having met him online when Jha was working for Minetime — which ProxyPipe was protecting from DDoS attacks at the time.

“We talked a lot back then and we used to program a lot of projects together,” Coelho said. “He’s really good at programming, but back then he wasn’t. He was a little bit behind, and I was teaching him most everything.”

According to Coelho, as Jha became more confident in his coding skills, he also grew more arrogant, belittling others online who didn’t have as firm a grasp on subjects such as programming and DDoS mitigation.

“He likes to be recognized for his knowledge, being praised and having other people recognize that,” Coelho said of Jha. “He brags too much, basically.”

Coelho said not long after Minetime was hit by a DDoS extortion attack in 2013, Paras joined Hackforums and fairly soon after stopped responding to his online messages.

“He just kind of dropped off the face of the earth entirely,” he said. “When he started going on Hackforums, I didn’t know him anymore. He became a different person.”

Coelho said he doesn’t believe his old friend wished him harm, and that Jha was probably pressured into attacking ProxyPipe.

“In my opinion he’s still a kid, in that he gets peer-pressured a lot,” Coelho said. “If he didn’t [launch the attack] not only would he feel super excluded, but these people wouldn’t be his friends anymore, they could out him and screw him over. I think he was pretty much in a really bad position with the people he got involved with.”


On Dec. 16, security vendor Digital Shadows presented a Webinar that focused on clues about the Mirai author’s real life identity. According to their analysis, before the Mirai author was known as Anna-Senpai on Hackforums, he used the nickname “Ogmemes123123” (this also was the alias of the Skype username that contacted Coelho), and the email address (recall this is the same email address Anna-Senpai used in his alerts to various hosting firms about the urgent need to take down Qbot control servers hosted on their networks).

Digital Shadows noted that the Mirai author appears to have used another nickname: “OG_Richard_Stallman,” a likely reference to the founder of the Free Software Foundation. The nickname also was used to register a Facebook account in the name of OG_Richard_Stallman.

That Facebook account states that OG_Richard_Stallman began studying computer engineering at New Brunswick, NJ-based Rutgers University in 2015.

As it happens, Paras Jha is a student at Rutgers University. This is especially notable because Rutgers has been dealing with a series of DDoS attacks on its network since the fall semester of 2015 — more than a half dozen incidents in all. With each DDoS, the attacker would taunt the university in online posts and media interviews, encouraging the school to spend the money to purchase some kind of DDoS mitigation service.


Using the nicknames “og_richard_stallman,” “exfocus” and “ogexfocus,” the person who attacked Rutgers more than a half-dozen times took to Reddit and Twitter to claim credit for the attacks. Exfocus even created his own “Ask Me Anything” interview on Reddit to discuss the Rutgers attacks.

Exfocus also gave an interview to a New Jersey-based blogger, claiming he got paid $500 an hour to DDoS the university with as many as 170,000 bots. Here are a few snippets from that interview, in which he blames the attacks on a “client” who is renting his botnet:

Are you for real? Why would you do an interview with us if you’re getting paid?

Normally I don’t show myself, but the entity paying me has something against the school. They want me to “make a splash”.

Why do you have a twitter account where you publically broadcast patronizing messages. Are you worried that this increases the risk of things getting back to you?

Public twitter is on clients request. The client hates the school for whatever reason. They told me to say generic things like that I hate the bus system and etc.

Have you ever attacked RU before?

During freshman registration the client requested it also – he didn’t want any publicity then though.

What are your plans for the future in terms of DDOSing and attacking the Rutgers cyber infrastructure?

When I stop getting paid – I’ll stop DDosing lol. I’m hoping that RU will sign on some ddos mitigation provider. I get paid extra if that happens.

At some point you said you were at the Livingston student center – outside of Sbarro. In this interview you said that you aren’t affiliated directly with Rutgers, did you lie then?


An online search for the Gmail address used by Anna-Senpai and OG_Richard_Stallman turns up a Pastebin post from July 1, 2016, in which an anonymous Pastebin user creates a “dox” of OG_Richard_Stallman. Doxing refers to the act of publishing someone’s personal information online and/or connecting an online alias to a real life identity.

The dox said OG_Richard_Stallman was connected to an address and phone number of an individual living in Turkey. But this is almost certainly a fake dox intended to confuse cybercrime investigators. Here’s why:

A Google search shows that this same address and phone number showed up in another dox on Pastebin from almost three years earlier — June 2013 — intended to expose or confuse the identity of a Hackforums user known as LiteSpeed. Recall that LiteSpeed is the same alias that ProTraf’s Josiah White acknowledged using on Hackforums.


This OG_Richard_Stallman identity is connected to Anna-Senpai by another person we’ve heard from already: Francisco Dias, whose Frantech ISP was attacked by Anna-Senpai and Mirai in mid-September. Francisco told KrebsOnSecurity that in early August 2016 he began receiving extortion emails from a Gmail address associated with the name OG_Richard_Stallman.

“This guy using the Richard Stallman name added me on Skype and basically said ‘I’m going to knock all of your [Internet addresses] offline until you pay me’,” Dias recalled. “He told me the up front cost to stop the attack was 10 bitcoins [~USD $5,000 at the time], and if I didn’t pay within four hours after the attack started the fee would double to 20 bitcoins.”

Dias said he didn’t pay the demand and eventually OG_Richard_Stallman called off the attack. But he said for a while the attacks were powerful enough to cause problems for Frantech’s Internet provider.

“He was hitting us so hard with Mirai that he was dropping large parts of Hurricane Electric and causing problems at their Los Angeles point of presence,” Dias said. “I basically threw everything behind [DDoS mitigation provider] Voxility, and eventually Stallman buggered off.”

The OG_Richard_Stallman identity also was tied to similar extortion attacks at the beginning of August against one hosting firm that had briefly been one of ProTraf’s customers in 2016. The company declined to be quoted on the record, but said it stopped doing business with ProTraf in mid-2016 because it was unhappy with the quality of service.

The Internet provider said not long after that it received an extortion demand from the “OG_Richard_Stallman” character for $5,000 in Bitcoin to avoid a DDoS attack. One of the company’s researchers contacted the extortionist via the address supplied in the email, posing as someone who wished to hire some DDoS services.

OG_Richard_Stallman told the researcher that he could guarantee 350 Gbps of attack traffic and that the target would go down or the customer would receive a full refund. The price for the attack? USD $100 worth of Bitcoin for every five minutes of attack time.

My source at the hosting company said his employer declined to pay the demand, and subsequently got hit with an attack from Mirai that clocked in at more than 300 Gbps.

“Clearly, the attacker is very technical, as they attacked every single [Internet address] within the subnet, and after we brought up protection, he started attacking upstream router interfaces,” the source said on condition of anonymity.

Asked who they thought might be responsible for the attacks, my source said his employer immediately suspected ProTraf. That’s because the Mirai attack also targeted the Internet address for the company’s home page, but that Internet address was hidden by DDoS mitigation firm Cloudflare. However, ProTraf knew about the secret address from its previous work with the company, the source explained.

“We believe it’s Protraf’s staff or someone related to Protraf,” my source said.

A source at an Internet provider agreed to share information about an extortion demand his company received from OG_Richard_Stallman in August 2016. Here he is contacting the Stallman character directly and pretending to be someone interested in renting a botnet. Notice the source brazenly said he wanted to DDoS ProTraf.


After months of gathering information about the apparent authors of Mirai, I heard from Ammar Zuberi, once a co-worker of ProTraf President Paras Jha.

Zuberi told KrebsOnSecurity that Jha admitted he was responsible for both Mirai and the Rutgers DDoS attacks. Zuberi said when he visited Jha at his Rutgers University dorm in October 2015, Paras bragged to him about launching the DDoS attacks against Rutgers.

“He was laughing and bragging about how he was going to get a security guy at the school fired, and how they raised school fees because of him,” Zuberi recalled.  “He didn’t really say why he did it, but I think he was just sort of experimenting with how far he could go with these attacks.”

Zuberi said he didn’t realize how far Jha had gone with his DDoS attacks until he confronted him about it late last year. Zuberi said he was on his way to see his grandmother in Arizona at the end of November 2016, and he had a layover in New York. So he contacted Jha and arranged to spend the night at Jha’s home in Fanwood, New Jersey.

As I noted in Spreading the DDoS Disease and Selling the Cure, Anna-Senpai leaked the Mirai code on a domain name (santasbigcandycane[dot]cx) that was registered via Namecentral, an extremely obscure domain name registrar which had previously been used to register fewer than three dozen other domains over a three-year period.

According to Zuberi, only five people knew about the existence of Namecentral: himself, CJ Sculti, Paras Jha, Josiah White and Namecentral’s owner Jesse Wu (19-year-old Wu features prominently in the DDoS Disease story linked in the previous paragraph).

“When I saw that the Mirai code had been leaked on that domain at Namecentral, I straight up asked Paras at that point, ‘Was this you?,’ and he smiled and said yep,” Zuberi recalled. “Then he told me he’d recently heard from an FBI agent who was investigating Mirai, and he showed me some text messages between him and the agent. He was pretty proud of himself, and was bragging that he led the FBI on a wild goose chase.”

Zuberi said he hasn’t been in contact with Jha since visiting his home in November. Zuberi said he believes Jha wrote most of the code that Mirai uses to control the individual bot-infected IoT devices, since it was written in Golang and Jha’s partner White didn’t code well in this language. Zuberi said he thought White’s role was mainly in developing the spreading code used to infect new IoT devices with Mirai, since that was written in C — a language White excelled at.

In the time since most of the above occurred, the Internet address ranges previously occupied by ProTraf have been withdrawn. ProxyPipe’s Coelho said it could be that ProTraf simply ran out of money.

ProTraf’s Josiah White explained the disappearance of ProTraf’s Internet space as part of an effort to reboot the company.

“We [are] in the process of restructuring and refocusing what we are doing,” White told KrebsOnSecurity.

Jha did not respond to requests for comment.

Update: Jan. 19, 10:51 a.m. ET: Jha responded to my request for comment. His first comment about this story was that I had erred in identifying the anime film listed on one of the dreadiscool profiles mentioned above. When asked directly about his alleged involvement with Mirai, Jha said he did not write Mirai and was not involved in attacking Rutgers.

“The first time it happened, I was a freshman, and living in the dorms,” Jha said. “At the culmination of the attacks near the end of the year, I was without internet for almost a week, along with the rest of the student body. I couldn’t register for classes, and had a host of issues dealing with it. This semester and the previous semester were the reasons I moved to commute, because of these problems that I frankly don’t have time to deal with.”

Jha said Zuberi did spend the night at his house last year but he denied admitting anything to Zuberi. He acknowledged hearing from an FBI agent investigating Mirai, but said “no comment” when asked if he’d heard from that FBI agent since then.

“I don’t think there are enough facts to definitively point the finger at me,” Jha said. “Besides this article, I was pretty much a nobody. No history of doing this kind of stuff, nothing that points to any kind of sociopathic behavior. Which is what the author is, a sociopath.”

Original story:

Rutgers University did not respond to requests for comment.

FBI officials could not be immediately reached for comment.

A copy of the entire chat between Anna-Senpai and ProxyPipe’s Coelho is available here.

Planet DebianMichael Stapelberg: manpages.debian.org has been modernized

manpages.debian.org has been modernized! We have just launched a major update to our manpage repository. What used to be served via a CGI script is now a statically generated website, and is therefore blazingly fast.
While we were at it, we have restructured the paths so that we can serve all manpages, even those whose name conflicts with other binary packages (e.g. crontab(5) from cron, bcron or systemd-cron). Don’t worry: the old URLs are redirected correctly.

Furthermore, the design of the site has been updated and now includes navigation panels that allow quick access to the manpage in other Debian versions, other binary packages, other sections and other languages. Speaking of languages, the site serves manpages in all their available languages and respects your browser’s language when redirecting or following a cross-reference.

Much like the Debian package tracker, manpages.debian.org includes packages from Debian oldstable, oldstable-backports, stable, stable-backports, testing and unstable. New manpages should make their way onto the site within a few hours.

The generator program (“debiman”) is open source and can be found on GitHub. In case you would like to use it to run a similar manpage repository (or convert your existing manpage repository to it), we’d love to help you out; just send an email to stapelberg AT debian DOT org.

This effort is standing on the shoulders of giants: see the site for the list of people we thank.

We’d love to hear your feedback and thoughts. Either contact us via an issue on GitHub, or send an email to the debian-doc mailing list.

Planet DebianJonathan Dowland: RetroPie, NES Classic and Bluetooth peripherals

I wanted to write a more in-depth post about RetroPie the Retro Gaming Appliance OS for Raspberry Pis, either technically or more positively, but unfortunately I don't have much positive to write.

What I hoped for was a nice appliance that I could use to play old games from the comfort of my sofa. Unfortunately, nine times out of ten, I had a malfunctioning Linux machine and the time I'd set aside for jumping on goombas was being spent trying to figure out why bluetooth wasn't working. I have enough opportunities for that already, both at work and at home.

I feel a little bad complaining about an open source, volunteer project: in its defence I can say that it is iterating fast and the two versions I tried in a relatively short time span were rapidly different. So hopefully a lot of my woes will eventually be fixed. I've also read a lot of other people get on with it just fine.

Instead, I decided the Nintendo Classic NES Mini was the plug-and-play appliance for me. Alas, it became the "must have" Christmas toy for 2016 and impossible to obtain for the recommended retail price. I did succeed in finding one in stock at Toys R Us online at one point, only to have the checkout process break and my order not go through. Checking Stock Informer afterwards, that particular window of opportunity was only 5 minutes wide. So no NES classic for me!

My adventures in RetroPie weren't entirely fruitless, thankfully: I discovered two really nice pieces of hardware.

ThinkPad Keyboard

ThinkPad Keyboard

The first is Lenovo's ThinkPad Compact Bluetooth Keyboard with TrackPoint, a very compact but pleasant-to-use Bluetooth keyboard that includes a trackpoint. I miss the trackpoint from my days as a Thinkpad laptop user. Having a keyboard and mouse combo in such a small package is excellent. My only two complaints would be the price (I was lucky to get one cheaper on eBay) and the fact that it's Bluetooth-only: there's a micro-USB port for charging, but it would be nice if it could be used as a USB keyboard too. There's a separate, cheaper USB model.

8bitdo SFC30

8bitdo SFC30

The second neat device is a clone of the SNES gamepad by Hong Kong company 8bitdo called the SFC30. This looks and feels very much like the classic Nintendo SNES controller, albeit slightly thicker from front to back. It can be used in a whole range of different modes, including attached USB; Bluetooth pretending to be a keyboard; Bluetooth pretending to be a controller; and a bunch of other special modes designed to work with iOS or Android devices in various configurations. The manufacturer seems to be actively working on firmware updates to further enhance the controller. The firmware is presently closed source, but it would not be impossible to write an open source firmware for it (some people have figured out the basis for the official firmware).

I like the SFC30 enough that I spent some time trying to get it working for various versions of Doom. There are just enough buttons to control a 2.5D game like Doom, whereas something like Quake or a more modern shooter would not work so well. I added support for several 8bitdo controllers directly into Chocolate Doom (available from 2.3.0 onwards) and into SDL2, a popular library for game development, which I think is used by Steam, so Steam games may all gain SFC30 support in the future too.

Sociological ImagesSuper Mario and Cultural Globalization

The 2020 Summer Olympics will be held in Japan.  And when the prime minister of Japan, Shinzo Abe, made this public at the 2016 Olympics in Rio de Janeiro, Brazil, he did so in an interesting way.   He was standing atop a giant “warp pipe” dressed as Super Mario.  I’m trying to imagine the U.S. equivalent.  Can you imagine the president of the United States standing atop the golden arches, dressed as Ronald McDonald, telling the world that we’d be hosting some international event?

Prime minister Abe was able to do this because Mario is a cultural icon recognized around the world.  That Italian-American plumber from Brooklyn created in Japan is truly a global citizen. The Economist recently published an essay on how Mario became known around the world.

Mario is a great example of a process sociologists call cultural globalization.  This is a more general social process whereby ideas, meanings, and values are shared on a global level in a way that intensifies social relations.  And Japan’s prime minister knew this.  Shinzo Abe didn’t dress as Mario to simply sell more Nintendo games.  I’m sure it didn’t hurt sales.  In fact, in the past decade alone, Super Mario may account for up to one third of the software sales by Nintendo.  More than 500 million copies of games in which Mario is featured circulate worldwide.  But, Japan selected Mario because he’s an illustration of technological and artistic innovations for which the Japanese economy is internationally known.  And beyond this, Mario is also an identity known around the world because of his simple association with the same human sentiment—joy.  He intensifies our connections to one another.  You can imagine people at the ceremony in Rio de Janeiro laughing along with audience members from different countries who might not speak the same language, but were able to point, smile, and share a moment together during the prime minister’s performance.  A short, pudgy, mustached, working-class, Italian-American character is a small representation of that shared sentiment and pursuit.  This intensification of human connection, however, comes at a cost.

We may be more connected through Mario, but that connection takes place within a global capitalist economy.  In fact, Wisecrack produced a great short animation using Mario to explain Marxism and the inequalities Marx saw as inherent within capitalist economies.  Cultural globalization has more sinister sides as well, as it also has to do with global cultural hegemony.  Local culture is increasingly swallowed up.  We may very well be more internationally connected.  But the objects and ideas that get disseminated are not disseminated on an equal playing field.  And while the smiles we all share when we connect with Mario and his antics are similar, the political and economic benefits associated with those shared smirks are not equally distributed around the world.  Indeed, the character of Mario is partially so well-known because he happened to be created in a nation with a dominant capitalist economy.  Add to that that the character himself hails from another globally dominant nation–the U.S.  The culture in which he emerged made his a story we’d all be much more likely to hear.

Tristan Bridges, PhD is a professor at The College at Brockport, SUNY. He is the co-editor of Exploring Masculinities: Identity, Inequality, Continuity, and Change with C.J. Pascoe and studies gender and sexual identity and inequality. You can follow him on Twitter here. Tristan also blogs regularly at Inequality by (Interior) Design.


Planet DebianBálint Réczey: My debian-devel pledge

I pledge that before sending each email to the debian-devel mailing list I move forward at least one actionable bug in my packages.

Worse Than FailureCodeSOD: Popping a Plister

We live in a brave new world. Microsoft, over the past few years, has increasingly emphasized a cross-platform, open-source approach. So, for example, if you were developing something in .NET today, it’s not unreasonable that you might want to parse a PList file- the OSX/NextStep/GNUStep configuration file format.

But let’s rewind, oh, say, five years. An Anonymous reader found a third-party library in their .NET application. It never passed through any review or acquisition process; it was simply dropped in by another developer. Despite being a .NET library, it uses PLists as its configuration format, even though .NET offers a perfectly good in-built format. Of course, this C# code isn’t what we’d call good code, and thus one is left with the impression that someone hastily ported an Objective-C library without really thinking about what they were doing.

For example, perhaps you have an object that you want to convert to a binary PList file. Do you, perhaps, use overriding and polymorphism to create methods which can handle this? Do you perhaps use generics? Or do you ignore all of the benefits of a type system and use a case statement and compare against the type of an object as a string?

private static byte[] composeBinary(object obj)
{
    byte[] value;
    switch (obj.GetType().ToString())
    {
        case "System.Collections.Generic.Dictionary`2[System.String,System.Object]":
            value = writeBinaryDictionary((Dictionary<string, object>)obj);
            return value;

        case "System.Collections.Generic.List`1[System.Object]":
            value = composeBinaryArray((List<object>)obj);
            return value;

        case "System.Byte[]":
            value = writeBinaryByteArray((byte[])obj);
            return value;

        case "System.Double":
            value = writeBinaryDouble((double)obj);
            return value;

        case "System.Int32":
            value = writeBinaryInteger((int)obj, true);
            return value;

        case "System.String":
            value = writeBinaryString((string)obj, true);
            return value;

        case "System.DateTime":
            value = writeBinaryDate((DateTime)obj);
            return value;

        case "System.Boolean":
            value = writeBinaryBool((bool)obj);
            return value;

        default:
            return new byte[0];
    }
}

Honestly, the thing that bothers me most here is that they’re both setting the value variable and returning from each branch. Do one or the other, but not both.
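For contrast, the type-based dispatch the article alludes to is easy to sketch. Since the library above is C#, treat the following Python as a language-neutral illustration (the function name, the handlers, and the byte encodings are all mine, not the library's): `functools.singledispatch` selects a serializer from the argument's actual type, with a registered fallback instead of a stringly-typed switch.

```python
from functools import singledispatch

@singledispatch
def compose_binary(obj):
    # Fallback for unhandled types, mirroring the original's `return new byte[0];`
    return b""

@compose_binary.register
def _(obj: dict):
    # Serialize each key/value pair recursively (toy encoding, not real PList)
    return b"".join(compose_binary(k) + compose_binary(v) for k, v in obj.items())

@compose_binary.register
def _(obj: list):
    return b"".join(compose_binary(item) for item in obj)

@compose_binary.register
def _(obj: int):
    return obj.to_bytes(4, "big", signed=True)

@compose_binary.register
def _(obj: str):
    return obj.encode("utf-8")

@compose_binary.register
def _(obj: bool):
    # bool subclasses int, but singledispatch picks the most specific handler
    return b"\x09" if obj else b"\x08"
```

The dispatch table lives in the decorator machinery, so adding a new type means registering one handler rather than editing a fragile string-keyed switch.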


CryptogramA Comment on the Trump Dossier

Imagine that you are someone in the CIA, concerned about the future of America. You have this Russian dossier on Donald Trump, which you have some evidence might be true. The smartest thing you can do is to leak it to the public. By doing so, you are eliminating any leverage Russia has over Trump and probably reducing the effectiveness of any other blackmail material any government might have on Trump. I believe you do this regardless of whether you ultimately believe the document's findings or not, and regardless of whether you support or oppose Trump. It's simple game theory.
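The game-theoretic claim is that leaking dominates sitting on the material whatever the dossier's truth turns out to be. A toy dominance check makes the structure explicit; the payoff numbers below are illustrative assumptions of mine, not anything from the essay:

```python
# Toy strategic-dominance check for the leak decision described above.
# Payoffs are invented for illustration: higher = better for the insider,
# evaluated in two states of the world (dossier true, dossier false).

payoffs = {
    "leak":      (2, 1),  # leverage destroyed if true; little lost if false
    "sit on it": (0, 1),  # Russia keeps its leverage if the material is true
}

def dominates(a, b):
    """Return True if action a is at least as good as b in every state
    of the world, and strictly better in at least one."""
    pa, pb = payoffs[a], payoffs[b]
    return all(x >= y for x, y in zip(pa, pb)) and \
           any(x > y for x, y in zip(pa, pb))

print(dominates("leak", "sit on it"))  # True under these assumed payoffs
```

Under any payoffs with this shape, the leak decision does not depend on which state you believe you are in, which is exactly the essay's point.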

This document is particularly safe to release. Because it's not a classified report of the CIA, leaking it is not a crime. And you release it now, before Trump becomes president, because doing so afterwards becomes much more dangerous.

MODERATION NOTE: Please keep comments focused on this particular point. More general comments, especially uncivil comments, will be deleted.

Planet DebianMike Hommey: Announcing git-cinnabar 0.4.0

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone from, pull from and push to remote Mercurial repositories using git.

Get it on github.
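For readers new to the tool, everyday use is just ordinary git with an `hg::` prefix on the remote URL (the repository URL below is a placeholder of mine, not from these release notes):

```shell
# Clone a Mercurial repository through the git-cinnabar remote helper
git clone hg::https://example.com/some-hg-repo
cd some-hg-repo

# Pull and push then work as with any other git remote
git pull
git push
```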

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.3.2?

  • Various bug fixes.
  • Updated git to 2.11.0 for cinnabar-helper.
  • Now supports bundle2 for both fetch/clone and push.
  • Now supports git credential for HTTP authentication.
  • Now supports git push --dry-run.
  • Added a new git cinnabar fetch command to fetch a specific revision that is not necessarily a head.
  • Added a new git cinnabar download command to download a helper on platforms where one is available.
  • Removed upgrade path from repositories used with version < 0.3.0.
  • Experimental (and partial) support for using git-cinnabar without having mercurial installed.
  • Use a mercurial subprocess to access local mercurial repositories.
  • Cinnabar-helper now handles fast-import, with workarounds for performance issues on macOS.
  • Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
  • Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
  • Various improvements to the rules to build the helper.
  • Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.
  • Fail graft earlier when no commit was found to graft
  • Allow graft to work with git version < 1.9
  • Allow git cinnabar bundle to do the same grafting as git push

Planet Linux AustraliaSimon Lyall: linux.conf.au 2017 – Wednesday – Session 3

Handle Conflict, Like a Boss! – Deb Nicholson

  • Conflict is natural
  • “When they had no outfit for their conflict they turned into Reavers and ate people and stuff”
  • People get caught up in their area not the overall goal for their organisation
  • People associate with a role, don’t like when it gets changed or eliminated
  • Need to go deep, people don’t actually tell you the problem straight away
  • If things get too bad, then go to another project
  • Identify the causes of conflict
  • 3 Styles of handling conflict
    • Avoidance
      • Can let things fester
      • They come across as unconnected
      • Looks like support for the status quo
    • Accommodation
      • Compromise on everything
      • Looks like not taking seriously
    • Assertion
      • Going to wear down everyone else
      • People won’t tell you when things are wrong
  • Going a little deeper
    • People don’t understand history (and why things are weird)
      • go to historical motivations and get buy-in for the strategy that reflects the new reality
    • People are acting to motivations you don’t see
      • Ask about the other persons motivations
    • Fear (often of change)
      • “What is the worse that could happen?”
    • Right Place, wrong time
      • Stuff is going to the wrong person or group
    • Help everyone get perspective
      • Don’t do the same forum, method, people all the time if it always has conflict.
  • What do you do with the Info
    • Put yourself in other persons shoes
    • Find alignment
    • A Word about who is doing this conflict resolution
      • Shouldn’t be just a single person/role
      • Or only women
      • Should be everyone/anyone
      • But if it is within a big org then maybe hire someone
  • Planning for future conflicts
    • Assuming the best
    • No ad hominem (hard to go back)
  • Conflict resolution between groups
    • What could we accomplish if we worked together
    • Doesn’t look good to outsiders
    • More Face-to-Face between projects (towards a common goal)


Open Compute Project Down Under – Andrew Ruthven

  • What is Open Compute
    • Vanity-free computing (remove the pretty bits)
    • Stripped down – nothing we don’t need: no video, minimal extra ports
    • Efficient and easy
      • Maintenance, Air flow, Electricity
    • Came out of Facebook, now a foundation
    • 1/10th the number of techs/server
  • Projects and Technologies
    • 9 main areas, over 4000 people working on it.
    • Design and Specs
  • Recent Hardware
    • Some comes in 19″ racks
    • HPE, Microsoft Project Olympus
  • In Aus / NZ
    • Telstra – 2 rack of OCP Decathleon, Open Networking using Hyper Scalers
    • Rackspace
    • Large Gaming site
    • Catalyst IT
  • Why OCP for Catalyst
    • Very Open source software orientated company
    • Have a Cloud Operation
    • Looking at for a while
    • Finally ordered first unit in 2016 (Winterfell)
    • Cumulus Linux switches from Penguin Computing, which work off 12 V in the Open Rack
  • Issues for Aus / Nz
    • Very small scale, sometimes too small for vendors
    • Supply chain hard, ended up using an existing integrator
    • Hyper Scalers in Aus, will ship to NZ
    • A number of companies sell to NZ
  • Lessons
    • Scale is an issue for failures as well as supply
    • Have >1 power shelf
    • Have at least 2 racks with 4 power shelves
    • Too small for vendors to get certification
    • Trust in new hardware
  • Your Own deployment
    • Green field DC
      • Use DC Designs
      • Allow for 48U racks (2.5 metres tall)
      • 2x or 4x 3-phase circuits per rack
    • Existing DCs
      • Consider modifications
      • 19″ server options
      • 48OU Open rack if you have enough height
      • 22OU if you don’t have enough height
      • Carefully check the specs
    • Open Networking
      • Run collectd etc directly on your switch
    • Supply Chain
    • Community Support
      • OCP has an Aus/NZ mailing list (ocp-anz)
      • Discussion on what is a priority across Aus and NZ


Planet DebianNorbert Preining: Debian/TeX Live January 2017

As the freeze of the next release is closing in, I have updated a bunch of packages around TeX: All of the TeX Live packages (binaries and arch independent ones) and tex-common. I might see whether I get some updates of ConTeXt out, too.

The changes in the binaries are mostly cosmetic: one removal of a file with unclear (possibly non-free) licensing, and several upstream patches were cherry-picked (dvips, tltexjp contact email, upmendex, dvipdfmx). I played around with including LuaTeX v1.0, but that breaks horribly with the current packages in TeX Live, so I refrained from it. The infrastructure package tex-common got a bugfix for updates from previous releases, and for the other packages there is the usual bunch of updates and new packages. Enjoy!

New packages

arimo, arphic-ttf, babel-japanese, conv-xkv, css-colors, dtxdescribe, fgruler, footmisx, halloweenmath, keyfloat, luahyphenrules, math-into-latex-4, mendex-doc, missaali, mpostinl, padauk, platexcheat, pstring, pst-shell, ptex-fontmaps, scsnowman, stanli, tinos, undergradmath, yaletter.

Updated packages

acmart, animate, apxproof, arabluatex, arsclassica, babel-french, babel-russian, baskervillef, beamer, beebe, biber, biber.x86_64-linux, biblatex, biblatex-apa, biblatex-chem, biblatex-dw, biblatex-gb7714-2015, biblatex-ieee, biblatex-philosophy, biblatex-sbl, bidi, calxxxx-yyyy, chemgreek, churchslavonic, cochineal, comicneue, cquthesis, csquotes, ctanify, ctex, cweb, dataref, denisbdoc, diagbox, dozenal, dtk, dvipdfmx, dvipng, elocalloc, epstopdf, erewhon, etoolbox, exam-n, fbb, fei, fithesis, forest, glossaries, glossaries-extra, glossaries-french, gost, gzt, historische-zeitschrift, inconsolata, japanese-otf, japanese-otf-uptex, jsclasses, latex-bin, latex-make, latexmk, lt3graph, luatexja, markdown, mathspec, mcf2graph, media9, mendex-doc, metafont, mhchem, mweights, nameauth, noto, nwejm, old-arrows, omegaware, onlyamsmath, optidef, pdfpages, pdftools, perception, phonrule, platex-tools, polynom, preview, prooftrees, pst-geo, pstricks, pst-solides3d, ptex, ptex2pdf, ptex-fonts, qcircuit, quran, raleway, reledmac, resphilosophica, sanskrit, scalerel, scanpages, showexpl, siunitx, skdoc, skmath, skrapport, smartdiagram, sourcesanspro, sparklines, tabstackengine, tetex, tex, tex4ht, texlive-scripts, tikzsymbols, tocdata, uantwerpendocs, updmap-map, uplatex, uptex, uptex-fonts, withargs, wtref, xcharter, xcntperchap, xecjk, xellipsis, xepersian, xint, xlop, yathesis.

Sky CroeserMotherhood and work

I’ve written this post, in my head, so many times and in so many different forms over the last months.

A version where I wrote about how a particular mix of circumstances (a baby coming, a ‘budget tightening’ at the university, the privilege of a well-paid partner) meant that I would probably be leaving academia, probably for good.

A version where I wrote about my mixed feelings about gaining the security of an ongoing position amidst the insecurity of academia generally, and the job losses at my university specifically.

A version where I ignored all this complexity and just wrote about what I published last year, what I’ll do this year.

I found out that I was pregnant in July, the same week that I put in my application for an ongoing position that I dearly wanted. The two felt deeply linked. Job security would mean maternity leave, a path to come back to.

It was ‘too early’ to tell people about the pregnancy (an idea that felt strange to me: if I had a miscarriage, I wasn’t sure I’d want to carry that sadness and loss in private). I thought I’d wait until I knew whether or not I got the job – I was still in the first trimester anyway, and it felt like so much was at stake, I’d hate to have even a shadow of a doubt that my pregnancy affected the job application.

The application process dragged out. I spent the first trimester exhausted, wanting desperately to lie down on the floor by the middle of each work day, wanting to tell people that I’d like to take on less of that extra work that academia offers in such abundance (especially to women). A tension between knowing the ways in which feminine embodiment is constructed as weakness, and the power of feminist narratives about the importance of acknowledging our embodied selves. (My tired body, not weak, but growing a whole new potential-person.)

I look for spaces in academia to think about and write about the issues that matter to me. I’d been thinking about parenting, and about motherhood specifically, for years before my partner and I decided to try for a baby. Thinking about the ways in which academia is constructed around particular bodies and career patterns: still the norm of a white, cis, heterosexual male with few caring responsibilities, someone who’s cared for (usually invisibly) by others. Others, often women, who do childcare and other reproductive work, who take on more of the caring labour of teaching, who empty the bins and vacuum the offices. Thinking about reproductive choices, and gender inequality, and capitalism, and racism, and this bundle of other factors that structure our experiences of motherhood.

It felt strange to be silent about the ways in which that thinking tied to my own experiences, or at least to only discuss it in much more private spaces. I was thinking a lot about the politics of mothering, particularly as set out in the powerful essays collected in Revolutionary Mothering. I was noticing the split between my experience of pregnancy and my male partner’s, including that he talked to his manager about it very early, and thinking about Andie Fox‘s reflections on the micropolitics of parenting and gender roles. I felt a deep resistance, during those early months, to people calling this spark inside me a ‘baby’ (seeing the ways in which that language is used by people controlling reproductive choice). Having tried to learn more about trans people’s experiences of the world, I looked for ways to push back against assumptions that there would be an easy, definite answer, to the constant question: “Do you know what you’re having?” (A baby, probably. Maybe a kitten. But we’re pretty sure it’ll be a baby.)

All of this felt artificially separated from my academic work. It felt odd to have this area of analysis, something that felt so deeply consuming, and so interesting in so many ways, needing to be put out of the way, particularly given that Internet Studies as a field often seems so given to a certain kind of openness in connecting analysis to personal experience.

I got back from travel, including attending a conference where I felt terribly disconnected (perhaps because of this sense of much of my thought and experience needing to be tucked out of sight), and almost immediately had a meeting where I was told that the fixed-term contract I was on would not be renewed in 2017. I still had no idea whether I’d even made the shortlist for the ongoing position.

I spent more time considering what I would do if I didn’t get the job. With the baby coming, it seemed like the uphill battle of finding job security within academia would be even more challenging, and even more unlikely to be successful. It felt ridiculous to plan research for 2017 not knowing whether I would be in a position to carry on with my work. I felt uncomfortable coming in to work with the awkwardness of the job application looming, and with my pregnancy feeling increasingly visible.

In the end, at the end of the year and with the holiday shut-down rapidly approaching, I did make the shortlist, and did get the job. I’ve spent much of the time since then trying to re-engage with my research, after all that soul-searching about what I might do instead of academia, trying to work out what comes next. It feels hard to plan for the arrival of a new human, who I haven’t met yet, who may sleep or not sleep. It feels hard to know how I’ll feel about mothering.

Many people say that mothering fundamentally transforms you. That you won’t want to be apart from the baby. That you won’t care about work anymore, anyway. That, in fact, you’ll be failing as a mother if you want time apart from the baby. (Oddly enough, I’ve never heard anyone say this about becoming a father.) Other people say you’ll just be the same you, but much more tired.

I don’t know exactly how I’ll feel. I’m excited to meet this new person, and scared, and worried about all the structures that shape how we parent and how children grow up. This experience has already shifted my focus, as I think more about mothering as a political act and the balances we walk between managing within existing institutions and changing them. This post has been the first attempt to start more publicly sorting through this tangle of ideas and experiences: I imagine that there’ll be more, as I go along. And some silences, too, hopefully of the comfortable sort – silences that come not from uncertainty, but from cocooning and growth and taking time to explore new spaces with a new tiny human.

Planet Linux AustraliaSimon Lyall: 2017 – Wednesday – Session 2

400,000 ephemeral containers: testing entire ecosystems with Docker – Daniel Axtens

  • A pretty interesting talk. It was largely a demo so I didn’t grab many notes

Community Building Beyond the Black Stump – Josh Simmons

  • How to build communities when you don’t live in a big city
  • Whats in a meetup?
  • Santa Rosa (Sonoma County), north of San Francisco
    • Not easy to get to SF
    • SF meetups not always relevant
  • After meeting with one other person, created “North Bay Web Professionals”; there were minimal existing groups
  • Multidisciplinary community worked better
    • Designers, Marketers, Web Devs, writers, etc
    • Hired each other
    • Seemed to work better, fewer toxic dynamics
    • Safe space for beginners
  • 23 People at first event (worked hard to tell people)
    • Told everyone that we knew even if not interested
    • Contacted the competitors
    • Contacting firms, schools
    • Co-working spaces (formal or de-facto, like cafes)
    • Other meetup groups, even in unrelated areas.
  • Adapting to the needs of the community
    • You might have a vision
    • But you must adapt to who turns up and what they want/need
  • First meeting
    • Asked people to bring food
    • Fluffy start time so could greet people and mingle
    • Went round room and got people to introduce themselves
      • Intro ended up being a thing they always did
      • Helped people remember names
      • Got everyone to say a little
      • put people in a social mindset
    • Framework for events decided
    • Decided on next meeting date, some prep
    • Ended up going late
      • Format became. Social -> talk -> Social on each night.
  • Tools
    • Used facebook and meetup
    • 1/3 of people came just from meetup promoting automatically
    • Go where people already are
  • Renamed from “North Bay Web Professionals” to “North Bay Web and Interactive Media Professionals”
  • “Ask a person, not a search engine”
  • Hosted over 169 events – Core was the monthly meeting
    • Tried to keep the topics a little broad
    • Often the talk was narrow but compensated with a broad Q&A afterwards
  • Thinking of people as “members” not “attendees” , have to work at getting them come back
  • Also hosted
    • Lunches, rotated all around the region so eventually near everywhere, Casual
    • Unconferences
    • Topical meetups
    • Charity Hackathon, teamed up with students and non-profits to do website for non-profit. Student was an apprentice.
    • Hosted Ag+Tech mixers with local farmers groups
    • Helped local cities put out tech RFPs
  • Q: Success measures? A: Survey of members – things like job referrals, what they have learnt




Planet DebianHideki Yamane: It's all about design

From Arturo's blog
When I asked why not Debian, the answer was that it was very difficult to install and manage.
It's all about design, IMHO.
Installer, website, wiki... It should be "simple", not verbose, not cheap.

Planet Linux AustraliaSimon Lyall: 2017 – Wednesday – Session 1

Servo Architecture: Safety and Performance – Jack Moffitt

  • History
    • 1994 Netscape Navigator
    • 2002 Mozilla Release
    • 2008 multi-core CPUs no longer making Firefox faster
    • 2016 CPUs now have on-chip GPUs
    • Very hard to write multi-threaded C++ to allow Mozilla to take advantage of many cores
  • How to make Servo Faster?
  • Constellation
    • In the past – Monolithic browser engines
      • Single browser engine handling multiple tabs
      • Two process types – a pool of content processes vs the chrome process
        • If one process dies on a page it doesn’t take out the whole browser
      • Sandboxing lets webpage copies have fewer privs
    • Threads
      • Less overhead than whole processes
      • Thread per page
      • More responsive
      • Sandboxing
      • More robust to failure
    • Is this the best we can do?
      • Run JavaScript and layout simultaneously
      • Pipeline splitting them up
      • Child pipelines for inner iframes (eg ads)
  • Constellation
    • Rust can fail better
    • Most failures stop at thread boundaries
    • Still do sandboxing and privileges
    • Option to still have some tabs in multiple processes
  • Webrender
    • Using the GPU
      • Frees up main CPU
      • Are VERY fast at some stuff
      • Easiest place to start is rendering
    • Don’t browsers already use the GPU?
      • Only in a limited way for compositing
    • Key ideas
      • Retain mode not immediate mode (put things in optimal order first)
      • Designed to render CSS content (CSS is actually pretty simple)
      • Draw the whole frame every frame (things are fast enough, simpler to not try to optimise)
    • Pipeline
      • Chop screen into 256×256 tiles
      • Tile assignment
      • Create a big tree
      • merge and assign render targets
      • create and execute batches
    • Text
      • Rasterize on the CPU and upload glyphs to the GPU
      • Paste and shadow using the GPU
  • Project Quantum
    • Taking technology we made in Servo and putting it into Gecko
  • Research in progress
    • Pathfinder – GPU font rasterizer – Now faster than everything else
    • Magic DOM
      • Wins in JS/DOM integration
      • Fusing reflectors and DOM objects
      • Self hosted JS
    • External collaborations: ML, power management, WebBluetooth, etc
  • Get involved
    • Test nightlies
    • Curated bugs for new contributors
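The Webrender tiling step described above (“chop screen into 256×256 tiles”) can be sketched in a few lines. This is a toy illustration in Python, not Servo’s actual Rust code; only the 256×256 tile size comes from the talk.

```python
import math

TILE = 256  # Webrender-style tile size in pixels, per the talk

def chop_into_tiles(width, height, tile=TILE):
    """Return (x, y, w, h) rectangles covering a width x height frame.

    Edge tiles are clipped so the grid exactly covers the frame.
    """
    tiles = []
    for ty in range(math.ceil(height / tile)):
        for tx in range(math.ceil(width / tile)):
            x, y = tx * tile, ty * tile
            tiles.append((x, y, min(tile, width - x), min(tile, height - y)))
    return tiles

# A 1080p frame splits into 8 columns x 5 rows = 40 tiles
print(len(chop_into_tiles(1920, 1080)))  # 40
```

Each tile can then be assigned to a render target and batched, which is where the “merge and assign render targets” step comes in.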

In Case of Emergency: Break Glass – BCP, DRP, & Digital Legacy – David Bell

  • Definitions
    • BCP = Business continuity Plan
    • A process to prevent and recover from threats to business continuity
    • BIP = Business interruption plan
    • BRP = Business recovery plan
    • RPO = Recovery point objective, targeted recovery point (when you last backed up)
    • RTO = Recovery time objective
  • Why?
    • Because things will go wrong
    • Because things should not go even more wrong
  • Create your BCP
    • Brainstorm
    • Identify events that may interrupt, loss access to physical site, loss of staff
    • Backups
      • 3 copies
      • 2 different media/formats
      • 1 offsite and online
      • Check how long it will take to download or fetch
    • Test
    • Who has the Authority
    • Communication chains, phone trees, contact details
    • Practice Early, Practice often
      • Real-world scenarios
      • Measure, measure, measure
      • Record your results
      • Convert your results into action items
      • Have different people on the tests
    • Each Biz Unit or team should have their own BCP
    • Recovery can be expensive, make sure you know what your insurance will cover
  • Breaking the Glass
    • Documentation is the Key
    • Secure credentials super important
    • Shamir secret sharing, need number of people to re-create the share
  • Digital Legacy
    • Do the same for your personal data
    • Document
      • Credentials
      • Services
        • What uses them
        • billing arrangements
        • Credentials
      • What are your wishes for the above.
    • Talk to your family and friends
    • Backups
    • Document backups and backup your documentation
    • Secret sharing, offer to do the same for your friends
  • Other / Questions
    • Think about 2-factor devices
    • Google and some other companies can set up “Next of Kin” contacts
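The Shamir secret sharing mentioned under “Breaking the Glass” hides a secret as the constant term of a random degree k−1 polynomial over a prime field; any k shares recover it by Lagrange interpolation at x = 0. A minimal sketch follows — for real credentials use an audited tool (e.g. `ssss`) rather than hand-rolled code, and note the prime and share counts here are illustrative.

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for a short secret

def make_shares(secret, k, n, prime=PRIME):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(prime) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares, prime=PRIME):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % prime
                den = den * (xi - xj) % prime
        secret = (secret + yi * num * pow(den, -1, prime)) % prime
    return secret

shares = make_shares(42424242, k=3, n=5)
assert recover(shares[:3]) == 42424242  # any 3 of the 5 shares suffice
```

This is exactly the “need a number of people to re-create the share” property from the notes: fewer than k shares reveal nothing about the secret.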




Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.8: Windows support for proto3

Issue ticket #20 demonstrated that we had not yet set up Windows for version 3 of Google Protocol Buffers ("Protobuf") -- while the other platforms support it. So I made the change, and there is release 0.4.8.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding and serialization library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

The NEWS file summarises the release as follows:

Changes in RProtoBuf version 0.4.8 (2017-01-17)

  • Windows builds now use the proto3 library as well (PR #21 fixing #20)

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaSimon Lyall: 2017 – Wednesday Keynote – Dan Callahan

Designing for failure: On the decommissioning of Persona

  • Worked for Mozilla on Persona
  • Persona did authentication on the web
    • You would go to a website
    • Type in your email address
    • Redirects via login page by your email provider
    • You login and redirect back
  • Started centralised, designed to be decentralised as it was taken up
  • Some sites were only offering login via social media
    • Some didn’t offer traditional logins for emails or local usernames
    • Imposes 3rd party between you and your user.
    • Those 3rd parties have their own rules, eg real name requirements
  • Persona Failed
    • Traditional logins now more common
  • Cave Diving
    • Equipment and procedures designed to let you still survive if something fails
    • Training reviews deaths and determines how they can be prevented
    • “5 rules of accident analysis” for cave diving
  • Three weeks ago switched off Persona
    • Encourage others to share mistakes


  • Just having a free license is not enough to succeed
  • Had a built in centralisation point
    • Protocol designed so the browser could eventually implement it natively, but initially a shim was used
    • Relay between provider and website went via Mozilla until browser natively implemented
    • No ability to fork the project
  • Bits rot more quickly online
    • Stuff that is online must be continually maintained (especially for security)
    • Need a way to have software maintained without experts
  • Complexity Limits agency
    • Limits who can run project at all
    • Lots of work for those people who can run it
  • A free license doesn’t further my freedom if we can’t run the software


  • Prolong Your Project’s Life
  • Bad ideas
    • We used popups and people reflexively closed them
    • API wasn’t great
  • Didn’t measure the right thing
    • Is persona product or infrastructure?
    • Treated like a product, not a good fit
  • Explicitly define and communicate your scope
    • “Solves authentication” or “Authenticate email addresses”
    • Broke some sites
    • Got used by FireFoxOS which was not a good fit
  • Ruthlessly oppose complexity
    • Trying to do too much meant it was overly complex
    • Complex hard to maintain and review and grow
    • Hard for newbies to join
    • If it is complex then it is hard to even test that it is working as expected
    • Focus and simplify
    • Almost no outside contributors, which was especially bad when Mozilla dropped it.


  • Plan for Your Projects Failure
  • “Sometimes that [bus failure] is just a commuter bus that picks up that person and takes them to another job”
  • If you know you are dead say it
    • 3 years passed after we pulled people off the project till it was officially killed
    • Might work for local software but services cost money to run
    • The sooner you admit you are dead, the sooner people can plan for your departure
  • Ensure your users can recover without your involvement
    • Hard to do when you think your project is going to save the world
    • Example: Firefox Sync has a copy of the data locally, so even if it dies the user will survive
  • Use standard data formats
    • eg OPML for RSS providers
  • Minimise the harm caused when your project goes away
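The “standard data formats” point (OPML for feed subscriptions) can be illustrated with a tiny exporter. A sketch only: the feed name and URL are illustrative, and a real exporter should follow the full OPML 2.0 spec.

```python
import xml.etree.ElementTree as ET

def feeds_to_opml(feeds, title="My subscriptions"):
    """Serialize a list of (title, xml_url) feed pairs as OPML 2.0."""
    opml = ET.Element("opml", version="2.0")
    head = ET.SubElement(opml, "head")
    ET.SubElement(head, "title").text = title
    body = ET.SubElement(opml, "body")
    for name, url in feeds:
        # Each subscription becomes one <outline> element
        ET.SubElement(body, "outline", type="rss", text=name, xmlUrl=url)
    return ET.tostring(opml, encoding="unicode")

print(feeds_to_opml([("Planet Debian", "https://planet.debian.org/rss20.xml")]))
```

Because OPML is a widely supported standard, users of a dying feed service can take this file to any other reader — exactly the kind of recovery-without-your-involvement the talk advocates.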




Google AdsenseIntroducing “Ad balance” - focus on your best-performing ads

Over the last few weeks you may have noticed the Ad balance subtab under My ads. This new AdSense feature will give you more control to create a great ads experience for your users.

Ad balance lets you reduce the number of ads you show to your users. Finding the right balance between the number of ads you show and the user experience on your site can lead to better overall engagement with your content. With Ad balance, you’re able to see how changes in the volume of ads you show affect your earnings, and find the balance that makes the most sense for you and your users.

By only showing your best-performing ads, you may see a minimal drop in your earnings. However, these changes may result in an overall earnings increase, since an improvement of the user experience often leads to users staying longer on your site and engaging with more of your content*.

Ad balance is the first example of a Lab that's being made available to all publishers. Those of you who had the Show fewer ads lab enabled have been automatically moved over to Ad balance**. Thank you for trying it out!

To learn more about this new feature, please visit the Help Center.

We’d love to hear what you think about Ad balance. Please leave your feedback within your AdSense account by clicking send feedback.

Posted by:
Dongcai Shen, Software Engineer
Rikard Lundmark, Software Engineer
Spandana Raj Babbula, Software Engineer

*We don’t guarantee any specific results. And, just as a reminder, you’re responsible for the content and layout of your site.
**Find out how you turn this feature on and off and make changes to your settings in the AdSense Help Center.

Falkvinge - Pirate PartyThe entire modern copyright was built on one fundamental assumption that the Internet has reversed


Copyright Monopoly: When the copyright monopoly was reinstated in 1710, the justification was that of publishing being many orders of magnitude more expensive than authoring, and so without it, nothing would get published. But the Internet has reversed this assumption completely: publishing is now many orders of magnitude cheaper than writing the piece you want to publish.

The copyright monopoly, as we know, was created on May 4, 1557, when Queen Mary I introduced a complete censorship of dissenting political opinions and prevented them from being printed (and thus the “right to copy” was born as a privilege within a guild, by banning all wrongthinkers of the time from expressing ideas). This stands in contrast to France’s attempt at banning the printing press entirely by penalty of death in at least two aspects: One, England’s suppression was successful, and two, the suppression has survived (albeit mutating) to present day.

After the Glorious Revolution of 1688, which is a point of pride in that no blood was shed (at least none that mattered to the history writers), people were really really really tired of the censorship, and wanted to end it promptly. Thus, the monopoly that was the foundation of copyright – the exclusive right to the London Company of Stationers to print anything in the country, in exchange for letting it pass by the Crown’s censors first – the monopoly of copyright was not renewed as the law required, and lapsed in 1695.

Yes, the copyright monopoly ceased to exist in 1695, after having been in effect since 1557.

The post-revolution British parliament would have none of it.

The formerly very profitable print shops, which had operated under a repressive monopoly upholding political censorship, petitioned Parliament again, and again, and again, to reinstate their lucrative monopoly, but to no avail. Parliament just wouldn’t introduce something like it again. What’s really interesting here isn’t the fact that the printers gathered their families on the steps of Parliament to weep for bread for their children, but the arguments they used, and what didn’t happen:

First, they argued that nothing would get printed if they didn’t get their monopoly back, as they couldn’t make a profit. The extremely noteworthy part of the argument is that they didn’t argue nothing would get created – but that nothing would get printed.

Second, the authors had no interest whatsoever in this construct. The printers and publishers were the ones arguing for the monopoly, claiming to speak on behalf of authors, and presented the idea that authors should “own” their works and have such “ownership” transferrable by contract — knowing full well authors would have no choice but to sign their rights away to the previous vested interest.

The British Parliament bought this line of reasoning, unfortunately, sending us down 300 years (and counting) of suppression of speech by those who have most to profit from suppressing it. This date – the reinstatement of copyright on April 10, 1710 – this is what the copyright industry deceptively calls “the birth of modern copyright”, in an attempt to conceal or dissociate from copyright’s origin as political censorship.

The real meat here lies in understanding that the entire underlying assumption, and justification of this construct, was that publishing was far more expensive than writing. Setting up a print shop required considerable investment and labor in order to distribute works, whereas writing just required pen, paper, and time.

“Far from viewing copying as theft, authors [in 1700] generally regarded it as flattery. The bulk of creative work has always depended, then and now, on a diversity of funding sources: commissions, teaching jobs, grants or stipends, patronage, etc. The introduction of copyright did not change this situation. What it did was allow a particular business model — mass pressings with centralized distribution — to make a few lucky works available to a wider audience, at considerable profit to the distributors.” — Copyright historian Karl Fogel

The Internet has completely reversed this assumption. Thinking in terms of time required, the effort required to publish is now approximately the equivalent effort of writing a few words – here in WordPress, it involves moving the mouse to the upper right corner, placing the cursor over “Publish”, and pressing the left mouse button. Thus, we can observe the following:

Where the reintroduction of the copyright monopoly – the “modern” copyright monopoly – was justified by publishing being several orders of magnitude more expensive than authoring, the Internet has made publishing several orders of magnitude cheaper than authoring, completely reversing the original premise.

Of course, there will be no shortage of people who profit from an artificial limitation, once it is in place. You could easily argue today that X and Y must not change, because A and B profit from the status quo — and so, the copyright industry readily claims that so and so many thousand jobs are upheld (“created”) by this artificial and harmful limit. But really, what kind of an argument is that? Who has the right to prevent the passage of time because they benefit from a lack of change? This is effectively the copyright industry’s single argument today.

And that industry will let nothing stand in its way – in particular not civil liberties such as privacy. They have consistently tried to erode basic freedoms under the guise of preserving the status quo, when what they’re doing is denying our children the liberties that our parents had, such as the ability to send an anonymous letter to somebody.

Further reading: The surprising history of copyright, and the promise of a post-copyright world.

Syndicated article
This article was previously published at Private Internet Access.

(This is a post from Falkvinge on Liberty, obtained via RSS at this feed.)

Planet DebianArturo Borrero González: Debian is a puzzle: difficult


Debian is very difficult, a puzzle. This surprising statement was what I got last week when talking with a group of new IT students (and their teachers).

I would like to write down here what I was able to obtain from that conversation.

From time to time, as part of my job at CICA, we open the doors of our datacenter to IT students from all around Andalusia (our region) who want to learn what we do here and how we do it. All our infrastructure and servers are primarily built using FLOSS software (we have some exceptions, like backbone routers and switches), and the most important servers run Debian.

As part of the talk, when I am in such a meeting with a visiting group, I usually ask about which technologies they use and learn in their studies. The other day, they told me they use mostly Ubuntu and a bit of Fedora.

When I asked why not Debian, the answer was that it was very difficult to install and manage. I tried to obtain some facts about this but I failed, in what seems to be a case of bad fame, a reputation problem which has spread among the teachers and therefore among the students. I didn’t detect any branding bias or the like. It just seems to be lack of knowledge, and a bad Debian reputation.

Using my DD powers and responsibilities, I kindly asked for feedback on how to improve our installer, or on whatever else they may find difficult, but a week later I have received no email so far.

Then, what I obtain is nothing new:

  • we probably need more new-users feedback
  • we have work to do in the marketing/branding area
  • we have very strong competitors out there
  • we should keep doing our best

I myself recently had to use the Ubuntu installer on a laptop, and it didn’t seem that different from the Debian one: the same steps and choices, like in every other OS installation.

Please, spread the word: Debian is not difficult. Certainly not perfect, but I don’t think that installing and using Debian is such a puzzle.

Worse Than Failure: Announcements: Sponsor Announcement: Hired

There are certain tropes that show up in our articles, and judging from our comments section, our readers are well aware of them. For example, if a manager in a story says, “You’re going to love working with $X, they’re very smart,” it’s a pretty clear sign that the character in question is not very smart, and is almost certainly sure to be TRWTF in the story.

Part of this is narrative convenience- we try to keep our articles “coffee-break length”, and dropping a few obvious signals in there helps keep them concise. Most of it, however, really boils down to the fact that reality is full of certain patterns. The world is full of people who aren’t half as smart as they think they are. There are legions of PHBs ready to micromanage even if they haven’t a clue what they’re doing. And there are a lot of employers that can make a terrible job sound really great for the duration of the interview process.

Let’s focus on that last bit: finding a new job is hard. Finding a good job is even harder. At its worst, you end up suffering your way through a horror story that ends up on this site (so hey, Internet “fame”, it’s not all bad, right?). Maybe you just end up trading hours of your life for a paycheck, doing work that you don’t hate but you don’t love. If you’re really lucky, you land something that you really care about doing, and you get paid exactly what you’re worth.

And let’s not even get into the job search process- it’s stressful and eats enough time to be a job in itself. You have to dance around recruiters who just want the commission and don’t care if the job’s a fit for anyone involved. You chuck your resume on job sites, which might as well be a black hole. You can end up investing countless hours into a company’s interview process only to get an offer that isn’t sufficient, or to discover that the company culture isn’t what you were looking for.

Which brings us to our newest sponsor, Hired. Hired flips the script on the traditional job site. Once you fill out a simple application, employers start applying to interview you, instead of you applying for an interview. Whether you’re looking for a full-time or a contract gig, whether you’re looking for engineering, development, design, product management or data-science- Hired will match you up with top employers.

And “top” doesn’t mean “gigantic” or “corporate”. Sure, there are companies like Facebook on there. But in their pool of over 6,000 employers, they have everything from titans of industry to small startups, spread across 17 major cities in North America, Europe, Asia, & Australia.

Okay, sure, there are lots of companies you might work for there, but what does this “apply to interview you” stuff mean? It sounds like marketing copy that Remy just pasted into this article to make the sponsor happy, and you’re right- but it’s also so much more.

Once you have filled out Hired’s application, employers who are interested in your profile will send you a personalized interview request which includes salary information up front. Hired’s going to provide a “talent advocate” who can provide unbiased career advice to help you put the best foot forward. And Hired solves one of the worst problems of the job search: they hide your profile from current and past employers, so your boss will never find out you’re searching for a new job until you’re ready to tell them.

And most important: you’ll never pay a dime for this service. So try it out and plan your next career change.

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet Debian: Ritesh Raj Sarraf: Linux Tablet-Mode Usability

In my ongoing quest to get Tablet-Mode working on my Hybrid machine, here's how I've been living with it so far. My intent is to continue using Free Software for both use cases. My wishful thought is to use the same software under both use cases. 

  • Browser: On the browser front, things are pretty decent. Chromium has good support for Touchscreen input. Most of the Touchscreen use cases work well with Chromium. On the Firefox side, after a huge delay, finally, Firefox seems to be catching up. Hopefully, with Firefox 51/52, we'll have a much more usable Touchscreen browser.
  • Desktop Shell: One of the reasons for migrating to GNOME was its touch support. From what I've explored so far, GNOME is the only desktop shell with native touch support. The feature isn't complete yet, but it is fairly usable.
    • Given that GNOME has native touchscreen support, it makes sense to use the GNOME equivalents of common tools. Most of these tools inherit their touchscreen capabilities from the underlying GNOME libraries.
    • File Manager: Nautilus has decent touch support as a file manager. The only annoying bit is the right-click equivalent, or in touch input terms, a long-press.
    • Movie Player: There's a decent movie player, based on GNOME libs; GNOME MPV. In my limited use so far, this interface seems to have good support. Other contenders are:
      • SMPlayer is based on Qt libs, so the initial expectation would be that Qt-based apps have better touch support. But I'm yet to see any serious Qt application with touch input support. Back to SMPlayer: the dev is pragmatic enough to recognize tablet-mode users, and as such has provided a so-called "Tablet Mode" view for SMPlayer (the tooltip did not get captured in the screenshot).
      • MPV doesn't come with a UI, but has basic management via the OSD. And in my limited usage, the OSD implementation does seem capable of taking touch input.
  • Books / Documents: GNOME Documents/Books is very basic in what it has to offer, to the point that it is not much use. But since it is based on the same GNOME libraries, it enjoys native touch input support. Calibre, on the other hand, is feature rich, but it is based on (Py)Qt. Touch input is said to work on Windows; for Linux, there's no support yet. The good thing about Calibre is that it has its own UI, which is pretty decent in a Tablet-Mode touch workflow.
  • Photo Management: With compact digital devices commonly available, digital content (Both Photos and Videos) is on the rise. The most obvious names that come to mind are Digikam and Shotwell.
    • Shotwell saw its reincarnation in the recent past. From what I recollect, it does have touch support, but lacks quite a bit in terms of features compared to Digikam.
    • Digikam is an impressive tool for digital content management. While Digikam is a KDE project, thankfully it does a great job of keeping its KDE dependencies to a bare minimum. But given that Digikam builds on KDE/Qt libs, I haven't had much success in getting a good touch input solution for Tablet Mode. To make it barely usable in Tablet Mode, one can choose a theme preference with bigger toolbars, labels and scrollbars, which makes for a workable touch-input workaround. As you can see, I've configured the Digikam UI with text alongside icons for easy touch input.
  • Email: The most common use case. With Gmail and friends, many believe standalone email clients are no longer needed. But there will always be users like us who prefer email offline, encrypted email, and their own email domains. Much of this is still doable with free services like Gmail, but still.
    • Thunderbird shows its age at times. And given the state of Firefox in getting touch support (and GTK3 port), I see nothing happening with TB.
    • KMail is something I discontinued while still on KDE. The debacle that KDEPIM was is something I'd always avoid in the future: a complete waste of time/resources in building, testing, reporting and follow-ups.
    • Geary is another email client that recently saw its reincarnation. I explored Geary recently; it enjoys benefits similar to the rest of the applications using GNOME libraries. There was one touch input bug I found, but otherwise Geary's feature set was limited in comparison to Evolution.
    • Migration to Evolution, when migrating to GNOME, was not easy. GNOME's philosophy is to keep things simple and limited; in doing that, they restrict flexibilities that users may find obvious. This design philosophy is easily visible across all applications of the GNOME family, and Evolution is no different. Hence, coming from TB to E was a small unlearning + new-learning curve. And since Evolution uses the same GNOME libraries, it enjoys similar benefits: touch input support in Evolution is fairly good. The missing bit is the new toolbar and menu structure that many have noticed in the newer GNOME applications (Photos, Documents, Nautilus etc.). If only Evolution (and the GNOME family) had options for customization beyond the developer/project's view, there wouldn't be any wishful thoughts.
      • Above is a screenshot of 2 windows of Evolution. In its current form too, Evolution is a gem at times. My RSS feeds are stored in a VFolder in Evolution, so that I can read them when offline; RSS feeds are something I read in Tablet-Mode. On the right is an Evolution window with larger fonts, while on the left, Evolution still retains its default font size. This current behavior helps me get Tablet-Mode touch working to an extent. In my wishful thoughts, I wish Evolution provided the flexibility to change toolbar icon sizes; that'd really help in easily touching the delete button when in Tablet Mode. A simple "Tablet Mode" button, like what SMPlayer has done, would keep users sticking with Evolution.

My wishful thought is that people write (free) software thinking more about usability across toolkits and desktop environments. Otherwise, the year of the Linux desktop/laptop/tablet, in my opinion, is yet to come. And please don't rip apart tools when porting them to newer versions of the toolkits. When you rip apart a tool, you also rip away all the QA, bug reporting and testing that was done over the years.

Here's an example of a tool (Goldendict) that is so well done: written in Qt, running under GNOME, and serving over the Chromium interface.


In this whole exercise of getting a hybrid setup working, I also came to realize that there does not yet seem to be a standardized interface to determine the current operating mode of a running hybrid machine. From what we have explored so far, every product has its own way of doing it. Most hybrids come pre-installed and supported with Windows only, so their mode detection logic seems to be proprietary too. In case anyone is aware of a standard interface, please drop a note in the comments.




Cryptogram: WhatsApp Security Vulnerability

Back in March, Rolf Weber wrote about a potential vulnerability in the WhatsApp protocol that would allow Facebook to defeat perfect forward secrecy by forcibly changing users' keys, allowing it -- or more likely, the government -- to eavesdrop on encrypted messages.

It seems that this vulnerability is real:

WhatsApp has the ability to force the generation of new encryption keys for offline users, unbeknown to the sender and recipient of the messages, and to make the sender re-encrypt messages with new keys and send them again for any messages that have not been marked as delivered.

The recipient is not made aware of this change in encryption, while the sender is only notified if they have opted-in to encryption warnings in settings, and only after the messages have been re-sent. This re-encryption and rebroadcasting effectively allows WhatsApp to intercept and read users' messages.

The security loophole was discovered by Tobias Boelter, a cryptography and security researcher at the University of California, Berkeley. He told the Guardian: "If WhatsApp is asked by a government agency to disclose its messaging records, it can effectively grant access due to the change in keys."

The vulnerability is not inherent to the Signal protocol. Open Whisper Systems' messaging app, Signal, the app used and recommended by whistleblower Edward Snowden, does not suffer from the same vulnerability. If a recipient changes the security key while offline, for instance, a sent message will fail to be delivered and the sender will be notified of the change in security keys without automatically resending the message.

WhatsApp's implementation automatically resends an undelivered message with a new key without warning the user in advance or giving them the ability to prevent it.
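
The difference between the two policies can be captured in a toy model. This is pure illustration, not either app's real code: there is no actual cryptography here ("encryption" is just a tag naming the key a message was sent under), and the class and key names are invented:

```python
# Toy model of the two key-change policies described above.
# "non-blocking" mimics the WhatsApp behavior (silent re-encrypt and
# resend, notify afterwards); "blocking" mimics Signal (fail delivery,
# surface the change, wait for the user).

class Client:
    def __init__(self, policy):
        self.policy = policy          # "blocking" or "non-blocking"
        self.peer_key = "key-v1"      # peer's current identity key
        self.undelivered = []         # list of (plaintext, key_used)
        self.notifications = []

    def send(self, plaintext):
        # "Encrypt" by tagging the message with the current peer key.
        self.undelivered.append((plaintext, self.peer_key))

    def on_key_change(self, new_key):
        """Server reports that the peer now has a new identity key."""
        self.peer_key = new_key
        if self.policy == "non-blocking":
            # Re-encrypt every not-yet-delivered message under the new
            # key and resend; the user is only notified afterwards.
            self.undelivered = [(m, new_key) for m, _ in self.undelivered]
            self.notifications.append("safety number changed (after resend)")
            return list(self.undelivered)
        # Blocking: nothing is resent until the user verifies the key.
        self.notifications.append("safety number changed (delivery blocked)")
        return []

wa = Client("non-blocking")
wa.send("hello")
resent = wa.on_key_change("key-v2")    # silently re-sent under new key

sig = Client("blocking")
sig.send("hello")
blocked = sig.on_key_change("key-v2")  # nothing re-sent automatically
```

The interesting property is visible in the state afterwards: the non-blocking client's queued message is now tagged with the attacker-suppliable new key, while the blocking client's message still sits under the old one.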

Note that it's an attack against current and future messages, and not something that would allow the government to reach into the past. In that way, it is no more troubling than the government hacking your mobile phone and reading your WhatsApp conversations that way.

An unnamed "WhatsApp spokesperson" said that they implemented the encryption this way for usability:

In WhatsApp's implementation of the Signal protocol, we have a "Show Security Notifications" setting (option under Settings > Account > Security) that notifies you when a contact's security code has changed. We know the most common reasons this happens are because someone has switched phones or reinstalled WhatsApp. This is because in many parts of the world, people frequently change devices and Sim cards. In these situations, we want to make sure people's messages are delivered, not lost in transit.

He's technically correct. This is not a backdoor. This really isn't even a flaw. It's a design decision that put usability ahead of security in this particular instance. Moxie Marlinspike, creator of Signal and the code base underlying WhatsApp's encryption, said as much:

Under normal circumstances, when communicating with a contact who has recently changed devices or reinstalled WhatsApp, it might be possible to send a message before the sending client discovers that the receiving client has new keys. The recipient's device immediately responds, and asks the sender to reencrypt the message with the recipient's new identity key pair. The sender displays the "safety number has changed" notification, reencrypts the message, and delivers it.

The WhatsApp clients have been carefully designed so that they will not re-encrypt messages that have already been delivered. Once the sending client displays a "double check mark," it can no longer be asked to re-send that message. This prevents anyone who compromises the server from being able to selectively target previously delivered messages for re-encryption.

The fact that WhatsApp handles key changes is not a "backdoor," it is how cryptography works. Any attempt to intercept messages in transmit by the server is detectable by the sender, just like with Signal, PGP, or any other end-to-end encrypted communication system.

The only question it might be reasonable to ask is whether these safety number change notifications should be "blocking" or "non-blocking." In other words, when a contact's key changes, should WhatsApp require the user to manually verify the new key before continuing, or should WhatsApp display an advisory notification and continue without blocking the user.

Given the size and scope of WhatsApp's user base, we feel that their choice to display a non-blocking notification is appropriate. It provides transparent and cryptographically guaranteed confidence in the privacy of a user's communication, along with a simple user experience. The choice to make these notifications "blocking" would in some ways make things worse. That would leak information to the server about who has enabled safety number change notifications and who hasn't, effectively telling the server who it could MITM transparently and who it couldn't; something that WhatsApp considered very carefully.

How serious this is depends on your threat model. If you are worried about the US government -- or any other government that can pressure Facebook -- snooping on your messages, then this is a small vulnerability. If not, then it's nothing to worry about.

Slashdot thread. Hacker News thread. BoingBoing post. More here.

Worse Than Failure: The 3,000 Mile Commute

A true story, recounted from personal experience by our own Snoofle.

Many decades ago, DefCon Inc, a defense contractor working for the US military, was attempting to win a new contract to build some widget needed for combat. As part of their proposal, they wished to demonstrate that they had the staff available to dedicate to the project. Toward this end, they hired more than 1,000 assorted programmers, project leads, managers and so forth. The military folks evaluating the various proposals saw a slew of new employees completely unfamiliar with the relevant processes, procedures and requirements, and awarded the contract to another firm. In response, the contractor laid off all 1,000 folks.

A few months later, another such contract came up for grabs. Again, they hired 1,000 folks to show that they had the staff. A few months later, that contract was also awarded to another contractor, and again, all 1,000 folks were laid off.

A map showing the routes between Newark Airport and LAX

This repeated a few times over two years.

After all of this, the base of available employees was wise to the very short repeating hire/fire cycle, and the contractor was unable to attract anyone beyond folks fresh out of school. Finally, some C-level executive realized that all of these people just out of school were far cheaper than the experienced developers that were on staff and those that they had previously hired and fired for the potential projects, and so issued an edict that all in-house senior staff was to be cycled into cheap young employees. It took two years, but it happened.

Now that their payroll was drastically reduced, and they had royally pissed off the potential pool of experienced developers, they could increase their permanent headcount without increasing their long term payroll costs - by hiring only young, inexperienced developers - which enabled them to finally get awarded a contract.

Unfortunately, all those junior developers had very little experience, and there was nobody at the firm who had been through the war to guide them. As a result, their two year contract yielded a flaky project that frequently crashed, acted unpredictably and could not be modified. When you're dealing with a system that can shoot at and blow things up, these are not desirable or tolerable attributes.

At some point, some high level exec realized what had happened, and forced the company to stick a crowbar into its pocket and hire some highly paid consultants. Unfortunately, the HPCs remembered the hire/fire cycle and wanted nothing to do with the place. After some time, this led to substantial sweetening of the pot until a few experienced folks finally agreed to come on board as full time employees. This happened in New Jersey.

After management got the new folks up to speed on the project, the new folks said “Hold on; there's a gaping hole in the middle of this project!” Management replied that this part of the project was classified and could only be viewed by folks with secret clearances, and from the facility in California. OK, so relevant clearances were applied for and granted, and the senior folks were assigned to go to the CA facility for two weeks.

Before agreeing to go, the developers wanted some information as to how they'd be able to access this stuff after being familiarized with it since it could only be accessed from CA, and they all lived and worked in NJ. They were told that they'd be advised of the details when they got to CA.

OK, they all fly to the Left Coast, get settled in their hotels and go to the office.

At this point, they were informed about all of the problems that had to be fixed. On Thursday of the second week, it was determined that there was about two years of work to do all of the retrofitting that needed to be done. Again, the developers all asked How will we access this stuff from NJ? The managers replied that it had to be done locally, and that they would all be located locally for the next two years. Starting Monday.

Wait; they don't get the opportunity to discuss it with their spouses? How it might affect the kids to have one parent away 90+% of the time? Would they be willing to live in hotels and airports for two years? Why the F*** didn't they just hire talent at the CA location instead of NJ?

It turns out that because the contractor is based in NJ, the personnel they hired needed to be based there as well. Of course, had any of this been mentioned before people were hired, most (if not all) of the folks they hired wouldn't have accepted the jobs. If they had known, none of the folks would have even gotten on the plane to go for the briefing and ramp-up required to familiarize themselves with the project.

Needless to say, Thursday afternoon was spent with managers barking demands about sacrificing for the company, and developers saying WTF?! Thursday evening was spent with countless phone calls home. Friday morning was spent with everyone resigning and heading for the airport to go home.

The representatives of the military acted as decent folks and were very understanding as to why people wouldn't just leave their homes and families for two years. They were far less sensitive when it came to holding the contractor to their promise of an on-site experienced staff to do the work.

In the end, the contractor was fired and a new one was hired to clean up the mess.

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet Linux Australia: Simon Lyall: 2017 – Tuesday – Session 3

The Internet of Scary Things – tips to deploy and manage IoT safely – Christopher Biggs

  • What you need to know about the Toaster Apocalypse
  • Brought to prominence in late 2016 when major sites were hit by DDoS attacks from compromised devices
  • Risks around grabbing of images
    • Targeted intrusion
    • Indiscriminate harvesting of images
    • Drive-by pervs
    • State actors
  • Unauthorized control
    • Hit traffic lights, doorbells
  • Takeover of entire devices
    • Used for DDOS
    • Demanding payment for the owner to get control of them back.
  • “The firewall doesn’t divide the scary Internet from the safe LAN, the monsters are in the room”


  • Poor Security
    • Mostly just laziness and bad practices
    • Hard for end-users to configure (especially non-techies)
    • Similar to how servers, Internet software and PCs were 20 years ago
  • Low Interop
    • Everyone uses own cloud services
    • Only just started getting common protocols and standards
  • Limited Maintenance
    • No support, no updates, no patches
  • Security is Hard
  • Laziness
    • Threat surface is too large
    • Telnet is too easy for devs
    • Most things don’t need full Linux installs
  • No incentives
    • Owner might not even notice if compromised
    • No incentive for vendors to make them better


  • Examples
    • Cameras with telnet open, default passwords (that cannot be changed)
    • exe to access
    • Send UDP to enable a telnet port
    • Bad Mobile apps


  • Selecting a device
    • Accept you will get bad ones, will have to return
    • Scan your own network, you might not know something is even wifi enabled
    • Port scan devices
    • Stick with the “Big 3” frameworks (Apple, Google, Amazon)
    • Make sure it supports open protocols (indicates serious vendor)
    • Check if open-source firmware or clients exist
    • Check for reviews (especially negative ones) or teardowns
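
The "port scan devices" advice above needs nothing beyond the standard library. A minimal sketch (the host address and the port list in the example comment are placeholders for your own network):

```python
# Minimal TCP port scan of one host, as suggested in the checklist above.
# Pure stdlib; timeout and port selection are arbitrary examples.
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on a successful connection,
            # an errno value otherwise -- no exception handling needed.
            if s.connect_ex((host, port)) == 0:
                found.append(port)
    return found

# Example: a few ports IoT devices commonly leave open
# (23 = telnet, 80/8080 = http, 443 = https). Address is illustrative:
# print(open_ports("192.168.1.50", [23, 80, 443, 8080]))
```

This only covers TCP; services the talk mentions that answer on UDP (such as the trick of enabling telnet via a UDP packet) need a separate UDP probe.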


  • Defensive arch
    • Put it on its own network
    • Turn off or block uPNP opening firewall holes
    • Plan for breaches
      • Firewall rules, rate limited, recheck now and then
    • BYO cloud (don’t use the vendor cloud)
      • HomeBridge
      • Node-RED (Alexa)
      • Zoneminder, Motion for cameras
  • Advice for devs
    • Apple HomeKit (or at least support for Homebridge for less commercial)
    • Amazon Alexa and AWS IoT
      • Protocols open but look nice
    • UCF uPnP and SNP profiles
      • Device discovery and self discovery
      • Reference implementations available
    • NoApp setup as an alternative
      • Have an API
    • Support MQTT
    • Long Term support
      • Put copy of docs in device
      • Decide up front what you will support and for how long, and be up front about it
    • Limit what you put on the device
      • Don’t just ship a Unix PC
      • Take out debug stuff when you ship


  • Trends
    • Standards
      • BITAG
      • Open Connectivity Foundation
      • Regulation?
    • Google Internet of things
    • Apple HomeKit
    • Amazon Alexa
      • Worry about privacy
    • Open Connectivity Foundation – IoTivity
      • Open source etc
      • Linux and Docker based
    • Consumer IDS – FingBox
  • Missing
    • Network access policy framework shipped
    • Initial network authentication
    • Vulnerability alerting
    • Patch distribution

Rage Against the Ghost in the Machine – Lilly Ryan

  • What is a Ghost?
    • The split between the mind and the body (dualism)
    • The thing that makes you you, separate from the meat of your body
  • Privacy
    • Privacy for information, not just the physical
    • The mind has been a private place
    • eg “you might have thought about robbing a bank”
    • The thoughts we express are what is public.
    • Always been private since we never had the technology to get in there
    • Companies and governments can look into your mind via things like your google queries
    • We can emulate the inner person, not just the outer expression
  • How to Summon a Ghost
    • Digital re-creation of a person by a bot or another machine
    • Take information that you post online
    • Likes on facebook, length of time between clicks
  • Ecto-meta-data
    • Take meta data and create something like you that interacts
  • The Smartphone
    • Collects meta-data that doesn’t get posted publicly
    • deleted documents
    • editing of stuff
    • search history
    • pattern of jumping between apps
  • The Public meta-data that you don’t explicitly publish
    • In future, a ghost could emulate you from the sum of your public behaviour
  • What do we do with a ghost?
    • Create chatbots or online profiles that emulate a person
    • Talk to a Ghost of yourself
    • Put a Ghost to work. The 3rd party owns the data
    • Customer service bot, PA
    • Chris Hemsworth could be your PA
    • Money will go to facebook or Google
  • Less legal stuff
    • Information can leak from big companies
  • How to Banish a Ghost
    • Option to donating to the future
    • currently no regulation or code of conduct
    • Restrict data you send out
      • Don’t use the Internet
      • Be anonymous
      • Hard to do when cookies match you across many sites
        • You can install cookie blocker
    • Which networks you connect to
      • eg list of Wifi networks match you with places and people
      • Mobile network streams location data
      • location data reveals not just where you go but what stores, houses or people you are near
      • Turn off wifi, bluetooth or data when you are not using them. Use VPNs
    • Law
      • Lobby and push politicians
      • Push back on companies
    • For technologists
      • Collect the minimum, not the maximum

FreeIPA project update (turbo talk) – Fraser Tweedale

  • Central Identity manager
  • LDAP + Kerberos, CA, DNS, admin tools, client. Hooks into AD
  • Manage via web or client
  • Client is SSSD; used by various distros
  • What is in the next release
    • Sub-CAs
    • Can require 2FA for important services
    • KDC Proxy
    • Network-bound encryption, ie needs to talk to a local server to decrypt a disk
    • User Session recording


Minimum viable magic

Politely socially engineering IRL using sneaky magician techniques – Alexander Hogue

  • Putting things up your sleeve is actually hard
  • Minimum viable magic
  • Miss-direct the eyes
  • Eyes only move in a straight line
  • Exploit pattern recognition
  • Exploit the spot light
  • Your attention is a resource


Planet Linux Australia: Simon Lyall: 2017 – Tuesday – Session 2

Stephen King’s practical advice for tech writers – Rikki Endsley

  • Example What and Whys
    • Blog post, press release, talk to managers, tell devs the process
    • 3 types of readers: Lay, Managerial, Experts
  • Resources:
    • Press: The care and Feeding of the Press – Esther Schindler
    • Documentation: RTFM? How to write a manual worth reading


  • “On Writing: A memoir of the craft” by Stephen King
  • Good writing requires reading
    • You need to read what others in your area or topic or competition are writing
  • Be clear on Expectations
    • See examples
    • Howto Articles by others
    • Writing an Excellent Post-Event Wrap Up report by Leslie Hawthorn
  • Writing for the Expert Audience
    • New Process for acceptance of new modules in Extras – Greg DeKoenigsberg (Ansible)
    • vs Ansible Extras Modules + You – Robyn Bergeron
      • Defines audience in the intro


  • Invite the reader in
  • Opening Line should Invite the reader to begin the story
  • Put in an explicit outline at the start


  • Tell a story
  • That is the object of the exercise
  • Don’t do other stuff


  • Leave out the boring parts
  • Just provide links to the details
  • But sometimes, if people are not experts, you need to provide more detail


  • Sample outline
    • Intro (invite reader in)
    • Brief background
    • Share the news (explain solution)
    • Conclude (include important dates)


  • Sample Outline: Technical articles
  • Include a “get technical” section after the news.
  • Too much stuff to copy all down, see slides


  • To edit is divine
  • Come back and look at it afterwards
  • Get somebody who will be honest to do this


  • Write for


  • Q: How do you deal with skimmers?   A: Structure, headers
  • Q: Pet Peeves?  A: Strong intro, people using “very” or “some”, leaving out important stuff




Planet Linux Australia: Simon Lyall: 2017 – Tuesday Session 1

Fishbowl discussion – GPL compliance Karen M. Sandler

  • Fishbowl format
    • 5 seats at front of the room, 4 must be occupied
    • If person has something to say they come up and sit in spare chair, then one existing person must sit down.
  • Topics
    • Conflicts of Law
    • Mixing licences
    • Implied warranty
    • Corporate Procedures and application
    • Get knowledge of free licences into the law school curriculum
  • “Being the Open Source guy at Oracle has always been fun”
  • “Our large company has spent 2000 hours with a young company trying to fix things up because their license is not GPL compliant”
  • BlackDuck is a commercial company that will review your company’s code looking for GPL violations. Some others too
    • “Not a perfect magical tool by any sketch”
    • Fossology is alternative open tool
    • Whole business model around license compliance, mixed in with security
    • Some of these companies are Kinda Ambulance chasers
    • “Don’t let those companies tell you how to run your business”
    • “Compliance industry complex” , “Compliance racket”
  • At my employer we have a tool that just greps for a “GPL” license in code; better than nothing.
  • Lots of fear in this area over Open-source compliance lawsuits
    • Disagreements in community if this should be a good idea
    • More, Less, None?
    • “As a Lawyer I think there should definitely be more lawsuits”
    • “A lot of large organisations will ignore anything less than [a lawsuit] “
    • “Even today I deal with organisations who reference the SCO period and fear widespread lawsuits”
  • Have Lawsuits chilled adoption?
    • Yes
    • Chilled adoption of free software vs GPL software
    • “Android has a policy of no GPL in userspace” , “they would replace the kernel if they could”
    • “Busybox lawsuits were used as a club to get specs so the kernel devs could create drivers” , this is not really applicable outside the kernel
    • “My goal in doing enforcement was to ensure somebody with a busybox device could compile it”
    • “Lawyers hate any license that prevents them getting future work”
    • “The amount of GPL violations skyrocketed with embedded devices shipping with Linux and GPL software”
  • People are working on a freer (eg “Not GPL”) embedded stack to replace the Android userspace: Toybox, Toolbox. No kernel replacement yet.
  • Employees and Compliance
    • Large company helping out with charities’ systems was unable to put AGPL software from that company on their laptops
    • “Contributing software upstream makes you look good and makes your company look good” , Encourages others and you can use their contributions
    • Work you do on your volunteer days at the company does not fall under the software assignment policy etc, but employees still can’t install random stuff on their machines.
  • Websites often are not GPL compliant: heavy restrictions, users giving up their licenses.
  • “Send your lawyers a video of another person in a suit talking about that topic”
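
The “grep for GPL” approach mentioned in the discussion can be sketched in a few lines of shell – a hypothetical toy, not the actual tool, and far short of what FOSSology or the commercial scanners do:

```shell
# Naive licence scan: list every file in a tree that mentions the GPL,
# so a human can review the licensing. Demo files are created first.
mkdir -p /tmp/gpl-demo
printf 'Licensed under the GNU General Public License\n' > /tmp/gpl-demo/a.c
printf 'MIT licensed\n' > /tmp/gpl-demo/b.c
grep -rl "General Public License" /tmp/gpl-demo
```

This only finds files that literally contain the phrase; it says nothing about actual compliance, which is why it is merely “better than nothing”.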

U 2 can U2F Rob N ★

  • Existing devices are not terribly good, but better than nothing; usability sucks
  • Universal Two-Factor
    • Open Standard by FIDO alliance
    • USB, NFC, Bluetooth
    • Multiple server and host implementations
    • One device, multiple sites
    • Cloning protection
  • Interesting Examples
  • User experience: Login, press the button twice.
  • Under the hood a lot more complicated
    • Challenge from site; the device must sign the challenge (including the website URL, to prevent a phishing site proxying)
    • Multiple keypairs for each website on device
    • Has a login counter on the device included in the signature, so the server can panic when the counter gets out of sync due to a cloned device
  • Attestation Certificate
    • Shared across model or production batch
  • Browserland
    • Javascript
    • Chrome-based browsers’ support is good
    • Firefox via extension (Native “real soon now”)
    • Mobile works on Android + Chrome + Google Authenticator
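
The clone-protection counter described above can be illustrated with a tiny shell function (an illustrative sketch, not part of any real U2F implementation):

```shell
# The server remembers the highest counter seen for each key handle.
# A signature whose counter has not strictly increased suggests a
# cloned device, so the server should reject it ("panic").
check_counter() {
    stored=$1
    reported=$2
    if [ "$reported" -gt "$stored" ]; then
        echo "ok"
    else
        echo "possible-clone"
    fi
}

check_counter 41 42   # counter advanced: ok
check_counter 42 40   # counter went backwards: possible-clone
```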


Rondam RamblingsI'm still waiting to wake up and find it was all just a bad dream

Regular Ramblings readers (how's that for some alliteration?) may have noticed that I have not been posting much lately.  That's partly because I've been on the road, but to a greater degree because I am still feeling shell-shocked from the election.  Part of my brain just refuses to accept that Donald Trump is actually about to be sworn in as president of the United States of America.  Worse,

Planet Linux AustraliaSimon Lyall: 2017 – Tuesday Keynote – Pia Waugh

BTW: Conference Streams are online at

The Future of Humans – Pia Waugh

At a tipping point, we can’t reinvent everything or just do the past with shiny new things.

Started as a Sysadmin, helped her see things as Systems

Trying to make active choices about the future we want.

  • Started building tools, knowledge spread slowly
  • Created cities, people could specialise, knowledge spread faster
  • Surplus created, much went to rulers, sometimes rulers overthrown, but the hierarchy stayed the same
  • More recently the surplus has been given to people
  • Last 250 years, people have seen themselves as having power, able to change their future, not just be a peasant.
  • As resources have increased, power and resources have been distributed more widely
  • This has kept expanding – overthrow your boss at work
  • We are on the cusp of a massive skyrocket in quality of life


  • Citizens now have powers that were previously centralized
  • We are now in a time of surplus, not scarcity
  • Small groups and individuals can now disrupt a country, industry or company
  • We made up all of our society, we can make it again to reflect the present not what was needed in the past.
  • Choose our own adventure or let others choose it for us. We have the option now that we didn’t previously
  • Most people’s eyes glaze over when they hear that.
  • “You can’t do that” say many people when they find out what software can do.
  • People switch off their creativity when they come to work.

How Could the World be better

  • Property
    • 3D printing could print organs, food, just about anything
    • Why are we protecting business models that are already out of date (eg copyright) when we could use these technologies to eliminate scarcity?
  • Work and Jobs
    • Everybody is scared about technology taking jobs
    • Why do we care about the loss of jobs?
    • Why is the value of a person defined by a full-time job?
  • Transhumanism
    • Tattoos and piercings have been around forever
    • Obsession with the human “normal” – is this a recent thing from the media?
    • Society encourages people towards the Norm
    • Internet has demonstrated that not everybody is normal – Rule 34
    • “If you lose a leg, instead of getting a replacement leg, why not have seven legs?”
    • Anyone who doesn’t make our definition of Normal is seen as something less even if they have amazing abilities
  • Spaceships
    • Still takes a day to get around the planet
    • If we are going to set up new worlds how are they going to run?
  • Global Citizenship
    • People are seen through the lens of their national citizenship
    • Governments are not the only representative of our rights


  • “How can we build a better world? Luckily we have git”
  • We have the power and knowledge to do things, but not all people do
  • If you are as powerful as the tools you use, where does that leave people who can’t use computers or program?


  • Systemic Change
    • What does your doctor say about “scratching your itch”?
    • Example: “diversity” – how do we deal with the problems that led us to not having it?
  • Who are you building for? Not building for?
  • What is the default position in society? Is it not to get knowledge, power?
  • What does human mean to you?
  • What do we value?
  • What assumptions and bias do you have?
  • How are you helping non-geeks help themselves?
  • What future do you want to see?


  • How are systems changing? How do our policies, assumptions and laws reflect the older way?
    • Scarcity -> Surplus
    • Close -> Open
    • Centralise -> Distributed
    • Belief -> Rationalism
    • Win/Lose -> Cooperative/competitive
    • Nationalism -> World Citizen
    • Normative Human -> Formative Human
  • I believe the Open Source Culture is a good model for society
  • But in inventing the future we have to be careful not to drag along the legacy systems and values of the past.


Planet DebianElizabeth Ferdman: 6 Week Progress Update for PGP Clean Room

One of the PGP Clean Room’s aims is to provide users with the option to easily initialize one or more smartcards with personal info and pins, and subsequently transfer keys to the smartcard(s). The advantage of using smartcards is that users don’t have to expose their keys to their laptop for daily certification, signing, encryption or authentication purposes.

I started building a basic whiptail TUI that asks users if they will be using a smartcard: on Github

If yes, whiptail provides the user with the opportunity to initialize the smartcard with their name, preferred language and login, and change their admin PIN, user PIN, and reset code.
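
A minimal sketch of what such a dialog can look like with whiptail (hypothetical and simplified – the real TUI is in the GitHub repository):

```shell
# Ask whether a smartcard will be used, then collect the cardholder
# name. The 3>&1 1>&2 2>&3 redirection swaps stdout and stderr so the
# typed answer can be captured (whiptail draws its UI on stderr).
ask_smartcard() {
    if whiptail --title "PGP Clean Room" --yesno \
            "Will you be using a smartcard?" 10 60; then
        name=$(whiptail --inputbox "Cardholder name:" 10 60 3>&1 1>&2 2>&3)
        echo "Initialising smartcard for: $name"
    fi
}
```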

I outlined the commands and interactions necessary to edit personal info on the smartcard using gpg --card-edit and sending the keys to the card with gpg --edit-key <FPR> in smartcard-workflow. There’s no batch mode for smartcard operations and there’s no “quick” command for it just yet (as in --quick-addkey). One option would be to try this out with command-fd/command-file. Currently, python bindings for gpgme are under development so that is another possibility.

We can use this workflow to support two smartcards – one for the primary key and one for the subkeys, a setup that would also support subkey rotation.


Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, December 2016

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In December, about 175 work hours have been dispatched among 14 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours did not increase but a new silver sponsor is in the process of joining. We are only missing another silver sponsor (or two to four bronze sponsors) to reach our objective of funding the equivalent of a full time position.

The security tracker currently lists 31 packages with a known CVE and the dla-needed.txt file lists 27. The situation improved a little bit compared to last month.

Thanks to our sponsors

New sponsors are in bold.


Sociological ImagesShifts in the U.S. LGBT Population

Counting the number of lesbian, gay, bisexual, and transgender people is harder than you might think.  I’ve written before on just how important it is to consider, for instance, precisely how we ask questions about sexuality.  One way scholars have gotten around this is to analytically separate the distinct dimensions of sexuality to consider which dimension they are asking about.  For research on sexuality, this is typically done by considering sexual identities as analytically distinct from sexual desires and sexual behaviors.  We like to imagine that sexual identities, acts, and desires all neatly match up, but the truth of the matter is… they don’t.  At least not for everyone.  And while you might think that gender might lend itself to be more easily assessed on surveys, recent research shows that traditional measures of sex and gender erase our ability to see key ways that gender varies in our society.

Gallup just released a new publication authored by Gary J. Gates.  Gates has written extensively on gender and sexual demography and is responsible for many of the population estimates we have for gender and sexual minorities in the U.S.  This recent publication just examines shifts in the past 5 years (between 2012 and 2016).  And many of them may appear to be small.  But changes like this in a population larger than 300,000,000 people are big shifts, involving huge numbers of actual people.  In this post, I’ve graphed a couple of the findings from the report–mostly because I like to chart changes to visually illustrate findings like this to students.  [*Small note: be aware of the truncated y axes on the graphs.  They’re sometimes used to exaggerate findings.  I’m here truncating the y axes to help illustrate each of the shifts discussed below.]


The report focuses only on one specific measure of membership as LGBT–identity.  And this is significant as past work has shown that this is, considered alongside other measures, perhaps the most conservative measure we have.  Yet, even by that measure, the LGBT population is on the move, increasing in numbers at a rapid pace in a relatively short period of time.  As you can see above, between 2012 and 2016, LGBT-identifying persons went from 3.5% to 4.1% of the U.S. population, which amounts to an estimated shift from 8.3 million people in 2012 to more than 10 million in 2016.


The report also shows that a great deal of that increase can be accounted for by one particular birth cohort–Millennials.  Perhaps not surprisingly, generations have become progressively more likely to identify as LGBT.  But the gap between Millennials and the rest is big and appears to be growing.  But the shifts are not only about cohort effects.  The report also shows that this demographic shift is gendered, racialized, and has more than a little to do with religion as well.

The gender gap in the proportion of the population identifying as LGBT in the U.S. is growing.  The proportion of women identifying as LGBT has jumped almost a full percentage point over this period of time.  And while more men (and a larger share of men) are identifying as LGBT than were in 2012, the rate of increase appears to be much slower.  As Gates notes, “These changes mean that the portion of women among LGBT-identified adults rose slightly from 52% to 55%” (here).


The gap between different racial groups identifying as LGBT has also shifted with non-Hispanic Whites still among the smallest proportion of those identifying.  As you can see, the shift has been most pronounced among Asian and Hispanic adults in the U.S.  Because White is the largest racial demographic group here, in actual numbers, they still comprise the largest portion of the LGBT community when broken down by race.  But the transition over these 5 years is a big deal.  In 2012, 2 of every 3 LGBT adults in the U.S. identified as non-Hispanic White.  By 2016, that proportion dropped to 6 out of every 10. This is big news.  LGBT people (as measured by self-identification) are becoming a more racially diverse group.

They are also diverse in terms of class.  Considering shifts in the proportion of LGBT identifying individuals by income and education tells an interesting story.  As income increases, the proportion of LGBT people decreases.  And you can see that finding by education in 2012 as well–those with less education are more likely to be among those identifying as LGBT (roughly).  But, by 2016, the distinctions between education groups in terms of identifying as LGBT have largely disappeared.  The biggest rise has been among those with a college degree.  That’s big news and could mean that, in future years, the income gap here may decrease as well.

There were also findings in the report to do with religion and religiosity among LGBT-identifying people in the U.S.  But I didn’t find those as interesting.  Almost all of the increases in people identifying as LGBT in recent years have been among those who identify as “not religious,” while those with moderate and high levels of religious commitment haven’t seen any changes in the last five years.  But, among the non-religious, the proportion identifying as LGBT has jumped almost 2 percentage points (from 5.3% in 2012 to 7.0% in 2016).

All of this is big news because it’s a powerful collection of data that illustrate that the gender and sexual demographics of the U.S. are, quite literally, on the move.  We should stand up and pay attention.  And, as Gates notes in the report, “These demographic traits are of interest to a wide range of constituencies.”  Incredible change in an incredibly short period of time.  Let the gender and sexual revolution continue!

Edit (1/17/17): The graph charting shifts by age cohort may exaggerate (or undersell) shifts among Millennials because the data does not exclude Millennials born after 1994.  So, some of those included in the later years here wouldn’t have been included in the earlier years because they weren’t yet 18.  So, it’s more difficult to tell how much of that shift is actually people changing identity for the age cohort as a whole as opposed to change among the youngest Millennials surveyed.

Tristan Bridges, PhD is a professor at The College at Brockport, SUNY. He is the co-editor of Exploring Masculinities: Identity, Inequality, Continuity, and Change with C.J. Pascoe and studies gender and sexual identity and inequality. You can follow him on Twitter here. Tristan also blogs regularly at Inequality by (Interior) Design.


CryptogramCloudflare's Experience with a National Security Letter

Interesting post on Cloudflare's experience with receiving a National Security Letter.

News article.

Worse Than FailureCodeSOD: Eventful Timing

I once built a system with the job of tracking various laboratory instruments, and sending out notifications when they needed to be calibrated. The rules for when different instruments triggered notifications, and when notifications should be sent, and so on, were very complicated.

An Anonymous reader has a similar problem. They’re tracking “Events”- like seminars and conferences. These multi-day events often have an end date, but some of them are actually open ended events. They need to, given an event, be able to tell you how much it costs. And our Anonymous reader’s co-worker came up with this solution to that problem:

public class RunningEventViewModel : SingleDataViewModel<EventData>
{
    private DateTime _now;

    private readonly Timer _timer;

    public RunningEventViewModel(EventData data)
        : base(data)
    {
        _now = DateTime.Now;

        _timer = new Timer(x =>
        {
            _now = DateTime.Now;
            NotifyOfPropertyChange(() => DurationDays);
            NotifyOfPropertyChange(() => DurationHours);
            NotifyOfPropertyChange(() => DurationMinutes);
            NotifyOfPropertyChange(() => DurationSeconds);
            NotifyOfPropertyChange(() => DurationString);
            NotifyOfPropertyChange(() => TotalCost);
        });

        if (!Stop.HasValue)
        {
            _timer.Change(900, 900);
        }
    }

    public int DurationSeconds
    {
        get { return ((Stop ?? _now) - Start).Seconds; }
    }

    public decimal TotalCost
    {
        get
        {
            var end = Stop ?? _now;
            const int SecsPerDay = 3600 * 24;
            decimal days = ((int)(end - Start).Duration().TotalSeconds) / (decimal)SecsPerDay;

            return (DailyCosts.HasValue ? DailyCosts.Value : 0) * days;
        }
    }

    public int DurationDays
    {
        get { return ((Stop ?? _now) - Start).Days; }
    }

    public int DurationHours
    {
        get { return ((Stop ?? _now) - Start).Hours; }
    }

    public int DurationMinutes
    {
        get { return ((Stop ?? _now) - Start).Minutes; }
    }

    public TimeSpan Duration
    {
        get { return (Stop ?? _now) - Start; }
    }

    public string FromTo
    {
        get
        {
            var res = String.Empty;
            if (Data.EventStart.HasValue)
            {
                res += Data.EventStart.Value.ToShortDateString();

                if (Data.EventStop.HasValue)
                {
                    res += " - " + Data.EventStop.Value.ToShortDateString();
                }
            }
            return res;
        }
    }

    public string DurationString
    {
        get
        {
            var duration = ((Stop ?? _now) - Start).Duration();
            var res = new StringBuilder();
            if (DurationDays > 0)
            {
                res.Append(duration.Days).Append(" days ");
            }
            if (DurationHours > 0)
            {
                res.Append(duration.Hours).Append(" hours ");
            }
            if (DurationMinutes > 0)
            {
                res.Append(duration.Minutes).Append(" minutes");
            }
            return res.ToString().TrimEnd(' ');
        }
    }

    public DateTime Start
    {
        get { return Get(Data.EventStart.Value, x => x.ToLocalTime().ToUniversalTime()); }
    }

    public DateTime? Stop
    {
        get { return (DateTime?)null; }
    }

    public decimal? DailyCosts
    {
        get { return Data.DailyCost; }
    }

    private static TResult Get<TItem, TResult>(TItem item, Func<TItem, TResult> resultSelector)
    {
        // this method is actually an extension method from our core framework, it's added here for better understanding the code
        if (Equals(item, null))
        {
            return default(TResult);
        }
        return resultSelector(item);
    }
}
Is null a mistake? Well, it certainly makes this code more complicated. Speaking of more complicated, I like how this class is responsible for tracking a Timer object so that it can periodically emit events. So much for single-responsibility, and good luck during unit testing.

The real WTF here, however, is that .NET has a rich and developer-friendly date-time API, which already has pre-built date difference functions. Code like this block in TotalCost:

            const int SecsPerDay = 3600 * 24;
            decimal days = ((int)(end - Start).Duration().TotalSeconds) / (decimal)SecsPerDay;

… is completely unnecessary. Similarly, the FromTo and DurationString functions could benefit from a little use of the TimeSpan object, and a little String.format. And it’s clear that the developer knows these exist, because they’ve used them, yet in TotalCost, it’s apparently too much to bear.

But the real topper, on this, is the Stop property. Once you look at that, most of the code in this file becomes downright stupid.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Sam VargheseDoes Steve Smith believe that spin can win matches?

As Australia mentally prepares for a gruelling tour of India, one curious characteristic of captain Stephen Smith is being ignored. This is Smith’s attitude towards spin and spinners when it comes to any form of cricket.

In India, any international team that wants to win a Test series must have a decent spin attack. This has become the case in recent years; the last time a team won in India was when England did so in 2012. They had Monty Panesar and Graeme Swann in their ranks.

During the three-Test series against Pakistan that concluded recently, Smith showed a curious reluctance to give the side’s only spinner, Nathan Lyon, a lengthy stint. He mostly depended on the medium-pacers and since Australia won all three Tests there were no questions raised.

His attitude towards spin was underlined in the second one-day game against Pakistan — in which the visitors registered a win at the MCG after 32 years — where he allowed Travis Head, one of the two players expected to make up the spin contingent, just three overs, one of them being the last of the match.

Pakistan bowled first, and 24 of the 50 overs were sent down by spinners. Some of these spin bowlers were part-timers: Mohammad Hafeez, the captain, is also the opening batsman, and Shoaib Malik bats at number five. They managed to contain Australia to 220, on a wicket that had uncertain bounce, but no great degree of turn.

Thus, Smith’s refusal to use spin is rather perplexing, even more so when one considers the fact that Head had bowled 10 overs against Pakistan in the first one-day game and given away just 28 runs.

Head’s first over went for 11 and after that he was kept away from the bowling crease until the 46th over, when it was all over bar the shouting. Pakistan’s winning run came from a wide bowled by Head.

So how will Smith adjust to the reality of spin in India? The Australian squad named for the tour has four spinners in its ranks: Lyon, Steve O’Keefe, Mitchell Swepson and Ashton Agar. How will Smith utilise these resources? He has only three recognised medium-pacers in the team: Mitchell Starc, Josh Hazlewood and Jackson Bird.

The last time Australia toured India in 2013, it was an unmitigated disaster ending in a 4-0 brownwash. But Lyon did take seven wickets in the final Test in Delhi in a relatively low-scoring game. Glenn Maxwell had 4-127 in the second Test which Australia lost by an innings. Xavier Doherty, the other spinner in the ranks, did nothing to set the Yarra on fire.

Will Smith treat the spinners the same way that he has so far in his career? Will he display the same reluctance to bowl Lyon and the others? This is his first tour of India as captain and while he did play in two Tests on the losing 2013 tour, his experience of the country is very limited.

One aspect of the squad which defies explanation is the selection of a leg-spinner. No leggie, not even Shane Warne, has done well in Indian conditions. (Indeed, Warne has never done well against Indian batsmen, no matter the venue.) Then why take a leggie along, especially an uncapped one? Will he be thrown into the cauldron (and in India the use of the word cauldron is apt) and asked to take five wickets in order to keep his place in the side? Will it be another case of a youngster going along for one tour and then being discarded?

We should have answers to these questions by the end of March.

Planet Linux AustraliaSimon Lyall: 2017 SysAdmin Miniconf – Session 3

Turtles all the way down – Thin LVM + KVM tips and Tricks – Steven Ellis

  • ssd -> partition -> encryption -> LVM -> [..] -> filesystem
  • Lots of examples see the online Slides
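
The layering in the first bullet can be sketched with example commands (a hedged outline with illustrative device and volume names; the slides have the full details):

```shell
# ssd -> partition -> encryption -> LVM thin pool -> thin volumes
cryptsetup luksFormat /dev/sda2
cryptsetup open /dev/sda2 crypt0
pvcreate /dev/mapper/crypt0
vgcreate vg0 /dev/mapper/crypt0
lvcreate --size 100G --thinpool tpool vg0                # carve out a thin pool
lvcreate --virtualsize 20G --thin vg0/tpool --name vm1   # over-provisioned VM volume
mkfs.ext4 /dev/vg0/vm1
```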

Samba and the road to 100,000 user – Andrew Bartlett

  • Release cycle is every 6 months
  • Release cycle is every 6 months
  • Samba 4.0 is 4 years old
  • 4.2 and older are out of security support by the Samba team (sometimes supported by distros)
  • Much faster adding users to AD DC. 55k users added in 50 minutes
  • Performance issues, not bugs, are now the biggest area of work
    • Customers deploying Samba at scale
    • What does your busy hour look like?
    • What is the pattern of requests?

The School for Sysadmins Who Can’t Timesync Good and Wanna Learn To Do Other Stuff Good Too – Paul Gear

  • Aim is 1-10ms accuracy
  • Using Standard Linux reference distribution etc
  • Why care
    • Some apps need time sync
    • Log matching
  • Network Time Foundation needs support
  • NTP
    • Not widely understood
    • Unglamorous
    • Daunting documentation
    • old protocol, chequered security history
    • The first Google result may not be accurate
  • Set clock
    • step – jump clock to new time
    • slew – gradually adjust the time
  • NTP Assumption
    • There is one true time – UTC
    • Nobody really has it
    • bad time servers may be present
    • networks change

I ran out of power on my laptop at this point so not many more notes. Paul gave a very good set of recommendations and myth-busting for those running NTP though. His notes will be online on the Sysadmin Miniconf site and he has also posted more detail online.


Planet Linux AustraliaSimon Lyall: 2017 Sysadmin Miniconf – Session 2

Running production workloads in a programmable infrastructure – Alejandro Tesch

Managing performance parameters through systemd – Sander van Vugt

  • Mostly Demos in this talk too.
  • Using the CPUShares parameter as an example
  • systemd-cgtop and systemd-cgls
  • “systemctl show stress1.service” will show available parameters
  • “man 5 systemd.resource-control” gives a lot more details.
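
The demoed parameters can be exercised from the command line roughly like this (a sketch; stress1.service is the demo unit name from the talk):

```shell
# Halve the unit's CPU share relative to the default of 1024,
# then confirm the change and watch per-cgroup usage live.
systemctl set-property stress1.service CPUShares=512
systemctl show stress1.service -p CPUShares
systemd-cgtop
```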

Go for DevOps – Caskey L. Dickson

  • SideBar: The Platform Wars are over
    • Hint: We all won
    • As long as you have an API we are all cool
  • Go always builds statically linked binaries that should work on just about any Linux system. Just one file.
  • Built-in cross compiler (eg for Windows, Mac) via just environment variables: “GOOS=darwin”, and 32-bit x86 via “GOARCH=386”
  • Bash is great, Python is great, Go is better
  • Microservices are Services
  • No Small Systems
    • Our Scripts are no longer dozens of lines long, they are thousands of lines long
    • Need full software engineering
  • Sysops pushing buttons and running scripts are dying
  • Platform Specific Code
    • main_linux.go, main_windows.go and the compiler picks the right one.
    • // +build linux darwin     <– At the top of the file
  • “Once I got my head around channels Go really opened up for me”
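
The cross-compilation and platform-specific points above look roughly like this in practice (a sketch; “app” is a placeholder project):

```shell
# One source tree, three targets, selected purely by environment
# variables; CGO_ENABLED=0 keeps the binaries fully static.
CGO_ENABLED=0 GOOS=linux   GOARCH=amd64 go build -o app-linux .
CGO_ENABLED=0 GOOS=darwin  GOARCH=amd64 go build -o app-darwin .
CGO_ENABLED=0 GOOS=windows GOARCH=386   go build -o app.exe .

# Platform-specific code is picked up by filename (main_linux.go,
# main_windows.go) or by a build-tag comment at the top of a file:
#   // +build linux darwin
```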



Planet Linux AustraliaSimon Lyall: 2017 Sysadmin Miniconf – Session 1

The Opposite of the Cloud – Tom Eastman

  • Koordinates data gateway – an appliance on-site at customers
  • Requirements
    • A bootable image: OVA, AMI/cloud images
    • Needs network access
    • Sounds like an IoT device
  • The opposite of cloud is letting somebody outsource their stuff onto your infrastructure
  • Tom’s job has been making a nice and tidy appliance
  • What does IoT get wrong
    • Don’t do updates, security patches
    • Don’t treat network as hostile
    • Hard to remotely admin
  • How to make them secure
    • no default or static credentials
    • reduce the attack surface
    • secure all networks comms
    • ensure it fails securely
  • Solution
    • Don’t treat appliances like appliances
    • Treat like tightly orchestrated Linux Servers
  • Stick to a conservative architecture
    • Use standard distribution like Debian
    • You can trust the standard security updates
  • Solution Components
    • aspen: A customized Debian machine image built with Packer
    • pando: orchestration server/C&C network
    • hakea: A Django/Rest microservice API in charge
  • saltstack command and control
    • Normal orchestration stuff
    • Can work as distributed command execution
    • The minions on each server connect to the central node, means you don’t need to connect into a remote appliance (no incoming connections needed to appliance)
    • OpenVPN as Internet transport
    • Outgoing just port 443 and openvpn protocol. Everything else via OpenVPN
  • What is the Appliance
    • A lightly mangled Debian Jessie VM image
    • Easy to maintain by customer, just reboot, activate or reinstall to fix any problems.
    • Appliance is running a bunch of docker containers
  • Appliance authentication
    • Needs to connect via 443 with activation code to download VPN and Salt short-lived certificates to get started
    • Auth keys only last for 24 hours.
    • If I can’t reach it it kills itself.
  • Hakea: REST control
    • Django REST framework microservices
    • Self documenting using DRF and CoreAPI Schema
  • DevOps principles apply beyond the cloud
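
The distributed-command-execution role of Salt described above looks roughly like this from the orchestration server (a hedged example; the minion IDs are made up):

```shell
# Minions connect out to the master over the VPN, so no inbound
# connections to the appliances are ever needed; commands are then
# pushed from the central node.
salt 'appliance-*' test.ping            # are the appliances alive?
salt 'appliance-*' state.apply          # converge them to the desired state
salt 'appliance-042' cmd.run 'uptime'   # ad-hoc command on one appliance
```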

Inventory Management with Pallet Jack – Karl-Johan Karlsson

  • Goals
    • Single source of truth
    • Version control
    • Scalable (to around 1000 machines, 10k objects)
  • Stuff stored as just a file structure
  • Some tools to access
  • Tools to export, eg to kea DHCP config
  • Tools as post-commit hooks for git. Pushes out update via salt etc
  • Various Integrations
    • API
    • Salt

Continuous Dashboard – You DevOps Airbag – Christopher Biggs

  • Dashboards traditionally targeted at Ops
  • Also need to target Devs
    • KPIs and
  • Sales and Support need to know everything too
  • Management want reassurance; when shipping a new feature, you have a hotline to the CEO
  • Customer, do you have something you are ashamed of?
    • Take notice of load spikes
    • Assume customers’ errors are being acted on, with the option to notify them when a fix happens
    • What is relevant to a support call: most recent outages affecting this customer
    • Remember recent behaviour of this customer
  • What kinds of data?
    • Traditionally: system load indicators, transaction numbers etc
    • Now: Business Goals, unavoidable errors, spikes of errors, location of errors, user experience metrics, health of 3rd party interfaces, App and product reviews
  • What should I put in dashboards
    • Understand the Status-quo
    • Continuously
    • Look at trends over time and releases
    • Think about features holistically
  • How to get there
    • Like your data as much as your code
    • Experiment with your data
    • tools:,, elastic
  • Insert Dashboards into your dev pipeline
    • Code Review, CI, Unit Test, Confirm that alarms actually work via test errors
    • Automate deployment
  • Tools
    • ELK – off the shelf images, good import/export
    • Node-RED – Flow based data processing, nice visual editor, built in dashboarding
    • Blynk – Nice dashboards in Ios or Android. Interactive dashboard editor. Easy to share
  • Social Media integration
    • Receive from twitter, facebook, apps stores reviews
    • Post to slack and monitoring channels
    • Forward to internal groups

The Sound of Silencing – Julien Goodwin

  • Humans know to ignore “expected” alerts during maintenance
    • Hard to know what is expected vs unexpected
    • Major events can lead to alert overload
  • Level 1 – Turn it all off
    • Can work on small scale
  • Level 2 – Turn off a location while working on it.
    • What if something happens while you are doing the work?
    • May work with single-service deployments
  • Level 3 – Turn off the expected alerts
    • Hard to get exactly right
  • Level 4 – Change management integration
    • Link the generator up to the change management automation system
    • What about changes too small to track?
    • What about changes too big for a simple silence?
  • Level 5 – Inhibiting Alerts
    • Use service level indicators to avoid alerting on expected failures
    • Fire “goes nowhere” alert
  • Level 6 – Global monitoring and preventing over-silencing
    • Alert if too many sites down
    • Need to have explicit alerts to spot when somebody silences “*”
  • How to get there from here
    • Incrementally
    • Choose a bad alert and change it to make it better
    • Regularly
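
A toy combination of Levels 3 and 6 in Python (my own sketch, not from the talk): alerts matching an active silence are suppressed, and a global "*" silence fires its own explicit alert so over-silencing is never invisible.

```python
import fnmatch

def route_alerts(alerts, silences):
    """Return (fired, suppressed) given glob-style silence patterns."""
    fired, suppressed = [], []
    if "*" in silences:
        # Level 6: silencing everything must itself be visible.
        fired.append("meta: a global silence is active!")
    for alert in alerts:
        if any(fnmatch.fnmatch(alert, pattern) for pattern in silences):
            suppressed.append(alert)
        else:
            fired.append(alert)
    return fired, suppressed

fired, suppressed = route_alerts(
    ["syd-router-down", "mel-router-down"], ["syd-*"])
```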



Planet Linux AustraliaSimon Lyall: 2017 – Conference Opening

  • Wear Sunscreen
  • Karen Sandler introduces Outreachy and it is announced as the raffle cause for 2017
  • Overview of people
    • 462 from Australia
    • 43 from NZ
    • 62 from the USA
    • Lots of other countries
    • Gender breakdown: lots of no-answers, so the stats are a bit rough
  • Talks
    • 421 Proposals
    • 80-ish talks and 6 tutorials
    • Questions
      • Please ask questions during the question time
  • Looking for Volunteers – look at a session and click to signup
  • Keynotes – A quick profile
  • All the rooms are booked till 11pm for BoF sessions!
  • Lightning talks, Coffee, Lunch, dinners




Planet DebianMaria Glukhova: APK, images and other stuff.

2 more weeks of my awesome Outreachy journey have passed, so it is time to make an update on my progress.

I continued my work on improving diffoscope by fixing bugs and completing wishlist items. These include:

Improving APK support

I worked on #850501 and #850502 to improve the way diffoscope handles APK files. Thanks to Emanuel Bronshtein for providing a clear description of how to reproduce these bugs and ideas on how to fix them.

And special thanks to Chris Lamb for insisting on providing tests for these changes! That part actually proved to be a little more tricky, and I managed to mess up these tests (extra thanks to Chris for cleaning up the mess I created). Hope that also means I learned something from my mistakes.

Also, I was pleased to see the F-droid Verification Server as a sign of F-droid progress on the reproducible builds effort - I hope these changes to diffoscope will help them!

Adding support for image metadata

That came from #849395 - a request was made to compare image metadata along with image content. Diffoscope has support for three types of images: JPEG, MS Windows Icon (*.ico) and PNG. Among these, PNG already had good image metadata support thanks to the sng tool, so I worked on .jpeg and .ico file support. I initially tried to use exiftool for extracting metadata, but then I discovered it does not handle .ico files, so I decided to use a bigger force - ImageMagick’s identify - for this task. I was glad to see it had that handy -format option I could use to select only the necessary fields (I found its -verbose, well, too verbose for the task) and present them in a defined form, removing the need to filter its output.
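
The -format trick looks roughly like this (a sketch, not diffoscope's actual code; the field list here is my own choice, though %w, %h and %m are standard identify percent-escapes):

```python
def identify_cmd(path, fields=(("Width", "%w"), ("Height", "%h"), ("Format", "%m"))):
    """Build an ImageMagick identify command line that prints only the
    selected fields, one per line, so no output filtering is needed."""
    fmt = "".join(f"{name}: {escape}\\n" for name, escape in fields)
    return ["identify", "-format", fmt, path]

cmd = identify_cmd("icon.ico")
# run with e.g. subprocess.run(cmd, capture_output=True, text=True)
```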

What was particularly interesting and important for me in terms of learning: while working on this feature, I discovered that, at the moment, diffoscope could not handle .ico files at all - the img2txt tool, which was used for retrieving image content, did not support that type of image. But instead of recognizing this as a bug and resolving it, I started to think of a possible workaround, allowing image metadata to be retrieved even after retrieving image content had failed. Definitely not very good thinking. Thanks to Mattia Rizzolo for actually recognizing this as a bug and filing it, and Chris Lamb for fixing it!

Other work

Order-like differences, part 2

In the previous post, I mentioned Lunar’s suggestion to use hashing for finding order-like differences in a wide variety of input data. I implemented that idea, but after discussion with my mentor, we decided it is probably not worth it - this change would alter quite a lot of things in the core modules of diffoscope, and the gain would not be significant.

Still, implementing it was an important experience for me, as I had to hack on the deepest and, arguably, most difficult modules of diffoscope and gained some insight into how they work.
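
The hashing idea itself is simple; a Python illustration (my own, not the actual patch): if the multisets of per-line hashes are equal while the sequences differ, the difference is purely one of ordering.

```python
import hashlib
from collections import Counter

def line_hashes(data):
    """Hash each line of a bytes blob."""
    return [hashlib.sha256(line).hexdigest() for line in data.splitlines()]

def order_only_difference(a, b):
    """True when a and b hold the same lines, just in a different order."""
    ha, hb = line_hashes(a), line_hashes(b)
    return ha != hb and Counter(ha) == Counter(hb)

reordered = order_only_difference(b"x\ny\n", b"y\nx\n")   # order-only
changed = order_only_difference(b"x\ny\n", b"x\nz\n")     # real content change
```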

Comparing with several tools (work in progress)

Although my initial motivation for this idea was flawed (the workaround I mentioned earlier for .ico files), it still might be useful to have a mechanism that runs several commands for finding differences and then gives the output of those that succeed, failing if and only if they have all failed.

One possible case where this might happen is when we use commands coming from different tools and one of them is not installed. It would be nice if we still used the other, rather than the uninformative binary diff (the default fallback for when something goes wrong with a more “clever” comparison). I am still in the process of polishing this change, though, and still in doubt whether it is needed at all.
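
The intended behaviour can be sketched like this (hypothetical Python, not diffoscope's code; the run callback stands in for actual command execution):

```python
def run_all(commands, run):
    """Run every command; return the outputs of those that succeed.
    Raise only if every single command failed."""
    outputs, errors = [], []
    for cmd in commands:
        try:
            outputs.append(run(cmd))
        except Exception as exc:       # e.g. tool not installed
            errors.append((cmd, exc))
    if not outputs:
        raise RuntimeError(f"all commands failed: {errors}")
    return outputs

def fake_run(cmd):                     # toy stand-in for subprocess
    if cmd == "missing-tool":
        raise FileNotFoundError(cmd)
    return f"output of {cmd}"

result = run_all(["missing-tool", "img2txt"], fake_run)
```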

Side note - Outreachy and my university progress

In my Outreachy application, I promised that if I was selected for this round, I would do everything I could to clear the required time period of my university commitments. I did that by moving most of my courses to the first half of the academic year. Now, the main thing left for me to do is my Master’s thesis.

I consulted my scientific advisors at both universities that I am formally attending (SFEDU and LUT - I am in a double degree program), and as a result, they agreed to change my Master’s thesis topic to match my Outreachy work.

Now, that should sound like excellent news - merging these activities actually means I can allocate much more time to my work on reproducible builds, even beyond the actual internship period. It was intended to remove a burden from my shoulders.

Still, I feel a bit uneasy. The drawback of this decision lies in the fact that I have no idea how to write a scientific report based on purely practical work. I know other students at my universities have done such things before, but choosing my own topic means my scientific advisors can’t help me much - this is simply outside their area of expertise.

Well, wish me luck - I’m up to the challenge!


Planet DebianSam Hartman: Musical Visualization of Network Traffic

I've been working on a fun holiday project in my spare time lately. It all started innocently enough. The office construction was nearing its end, and it was time for my workspace to be set up. Our deployment wizard and I were discussing the setup. Normally we stick two high-end monitors on a desk. I'm blind; that seemed silly. He wanted to do something similarly nice for me, so he replaced one of the monitors with excellent speakers. They are a joy to listen to, but I felt like I should actually do something with them. So, I wanted to play around with some sort of audio project.
I decided to take a crack at an audio representation of network traffic. The Solaris version of ping used to have an audio option, which would produce sound for successful pings. In the past I've used audio cues to monitor events like service health and build status.
It seemed like you could produce audio to give an overall feel for what was happening on the network. I was imagining a quick listen would be able to answer questions like:

  1. How busy is the network?

  2. How many sources are active?

  3. Is the traffic a lot of streams or just a few?

  4. Are there any interesting events such as packet loss or congestion collapse going on?

  5. What's the mix of services involved?

I divided the project into three segments, which I will write about in future entries:

  • What parts of the network to model

  • How to present the audio information

  • Tools and implementation

I'm fairly happy with what I have. It doesn't represent all the items above. As an example, it doesn't directly track packet loss or retransmissions, nor does it directly distinguish one service from another. Still, just because of the traffic flow, rsync sounds different from http. It models enough of what I'm looking for that I find it to be a useful tool. And I learned a lot about music and Linux audio. I also got to practice designing discrete-time control functions in ways that brought back the halls of MIT.

Planet DebianDirk Eddelbuettel: Rcpp 0.12.9: Next round

Yesterday afternoon, the ninth update in the 0.12.* series of Rcpp made it to the CRAN network for GNU R. Windows binaries have by now been generated, and the package was updated in Debian too. This 0.12.9 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, and the 0.12.8 release in November --- making it the thirteenth release at the steady bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 906 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by sixty-three packages over the two months since the last release -- or about a package a day!

Some of the changes in this release are smaller and detail-oriented. We did squash one annoying bug (stemming from the improved exception handling) in Rcpp::stop() that hit a few people. Nathan Russell added a sample() function (similar to the optional one in RcppArmadillo); this required a minor cleanup for the small number of other packages which used both namespaces 'opened'. Date and Datetime objects now have format() methods and << output support. We now have coverage reports via covr as well. Last but not least James "coatless" Balamuta was once more tireless on documentation and API consistency --- see below for more details.

Changes in Rcpp version 0.12.9 (2017-01-14)

  • Changes in Rcpp API:

    • The exception stack message is now correctly demangled on all compiler versions (Jim Hester in #598)

    • Date and Datetime object and vector now have format methods and operator<< support (#599).

    • The size operator in Matrix is explicitly referenced, avoiding a g++-6 issue (#607 fixing #605).

    • The underlying date calculation code was updated (#621, #623).

    • Addressed improper diagonal fill for non-symmetric matrices (James Balamuta in #622 addressing #619)

  • Changes in Rcpp Sugar:

    • Added new Sugar function sample() (Nathan Russell in #610 and #616).

    • Added new Sugar function Arg() (James Balamuta in #626 addressing #625).

  • Changes in Rcpp unit tests

    • Added Environment::find unit tests and an Environment::get(Symbol) test (James Balamuta in #595 addressing issue #594).

    • Added diagonal matrix fill tests (James Balamuta in #622 addressing #619)

  • Changes in Rcpp Documentation:

    • Exposed pointers macros were included in the Rcpp Extending vignette (MathurinD; James Balamuta in #592 addressing #418).

    • The file Rcpp.bib moved to the directory bib, which is guaranteed to be present (#631).

  • Changes in Rcpp build system

    • Travis CI now also calls covr for coverage analysis (Jim Hester in PR #591)

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMehdi Dogguy: Debian from 10,000 feet

Many of you are big fans of S.W.O.T analysis, I am sure of that! :-) Technical competence is our strongest suit, but we have reached a size and sphere of influence which requires an increase in organisation.

We all love our project and want to make sure Debian still shines in the next decades (and centuries!). One way to secure that goal is to identify elements/events/things which could put that goal at risk. To this end, we've organized a short S.W.O.T analysis session at DebConf16. Minutes of the meeting can be found here. I believe it is an interesting read and is useful for Debian old-timers as well as newcomers. It helps to convey a better understanding of the project's status. For each item, we've tried to identify an action.

Here are a few things we've worked on:
  • Identify new potential contributors by attending and speaking at conferences where Free and Open Source software is still not very well-known, or where we have too few contributors.

    Each Debian developer is encouraged to identify events where we can promote FOSS and Debian. As DPL, I'd be happy to cover expenses to attend such events.
  • Our average age is also growing over the years. It is true that we could attract more new contributors than we already do.

    We can organize short internships. We should not wait for students to come to us. We can get in touch with universities and engineering schools and work together on a list of topics. It is easy and will give us the opportunity to reach out to more students.

    It is true that we have tried in the past to do that. We may organize a sprint with interested people and share our experience on trying to do internships on Debian-related subjects. If you have successfully done that in the past and managed to attract new contributors that way, please share your experience with us!

    If you see other ways to attract new contributors, please get in touch so that we can discuss!
  • Not easy to get started in the project.

It could be argued that all the information is available, but rather than being easily findable from one starting point, it is scattered over several places (documentation on our website, wiki, metadata on bug reports, etc…).

    Fedora and Mozilla both worked on this subject and did build a nice web application to make this easier and nicer. The result of this is asknot-ng.

A Debian version of this would be wonderful! Any takers? We can help by providing a virtual machine to build this. Being a DD is not mandatory. Everyone is welcome!
  • Cloud images for Debian.

    This is a very important point since cloud providers are now major distributions consumers. We have to ensure that Debian is correctly integrated in the cloud, without making compromises on our values and philosophy.

    I believe this item has been worked on during the last Debian Cloud sprint. I am looking forward to seeing the positive effects of this sprint in the long term. I believe it does help us to build a stronger relationship with cloud providers and gives us a nice opportunity to work with them on a shared set of goals!
During next DebConf, we can review the progress that has been made on each item and discuss new ones. In addition to this session acting as a health check, I see it as a way for the DPL to discuss, openly and publicly, about the important changes that should be implemented in the project and imagine together a better future.

In the meantime, everyone should feel free to pick one item from the list and work on it. :-)

Planet Linux AustraliaBinh Nguyen: Life in Cuba, More Russian Stuff, and More

Given the recent passing of Fidel Castro, it makes sense to take a look at life inside Cuba (and associated aspects of it): Cuban-Americans pour onto the streets of Little Havana after hearing of Castro’s death

CryptogramClassifying Elections as "Critical Infrastructure"

I am co-author on a paper discussing whether elections should be classified as "critical infrastructure" in the US, based on experiences in other countries:

Abstract: With the Russian government hack of the Democratic National Convention email servers, and further leaks expected over the coming months that could influence an election, the drama of the 2016 U.S. presidential race highlights an important point: Nefarious hackers do not just pose a risk to vulnerable companies, cyber attacks can potentially impact the trajectory of democracies. Yet, to date, a consensus has not been reached as to the desirability and feasibility of reclassifying elections, in particular voting machines, as critical infrastructure due in part to the long history of local and state control of voting procedures. This Article takes on the debate in the U.S. using the 2016 elections as a case study but puts the issue in a global context with in-depth case studies from South Africa, Estonia, Brazil, Germany, and India. Governance best practices are analyzed by reviewing these differing approaches to securing elections, including the extent to which trend lines are converging or diverging. This investigation will, in turn, help inform ongoing minilateral efforts at cybersecurity norm building in the critical infrastructure context, which are considered here for the first time in the literature through the lens of polycentric governance.

The paper was speculative, but now it's official. The U.S. election has been classified as critical infrastructure. I am tentatively in favor of this, but what really matters is what happens now. What does this mean? What sorts of increased security will election systems get? Will we finally get rid of computerized touch-screen voting?

EDITED TO ADD (1/16): This is a good article.

Planet Linux AustraliaBlueHackers: BlueHackers session at 2017

If you’re fortunate enough to be in Tasmania for 2017 then you will be pleased to hear that we’re holding another BlueHackers BoF (Birds of a Feather) session on Monday evening, straight after the Linux Australia AGM.

The room is yet to be confirmed, but all details will be updated on the conference wiki at the following address:

We hope to see you there!


Planet DebianMike Gabriel: UIF bug: Caused by flawed IPv6 DNS resolving in Perl's NetAddr::IP

TL;DR; If you use NetAddr::IP->new6() for resolving DNS names to IPv6 addresses, the addresses returned by NetAddr::IP are not what you might expect. See below for details.

Issue #2 in UIF

Over the last couple of days, I tried to figure out the cause of a weird issue observed in UIF (Universal Internet Firewall [1], a nice Perl tool for setting up ip(6)tables based firewalls).

Already a long time ago, I stumbled over a weird DNS resolving issue of DNS names to IPv6 addresses in UIF that I reported as issue #2 [2] against upstream UIF back then.

I happen to be co-author of UIF. So, I felt very ashamed all the time for not fixing the issue any sooner.

As many of us DDs try to get our packages into shape before the next Debian release these days, I find myself doing the same. I started investigating the underlying cause of issue #2 in UIF a couple of days ago.

Issue #119858 on CPAN

Today, I figured out that the Perl code in UIF is not causing the observed phenomenon. The same behaviour is reproducible with a minimal and pure NetAddr::IP based Perl script (reported as Debian bug #851388 [2]). Thanks to Gregor Herrmann for forwarding the Debian bug upstream (#119858 [3]).

Here is the example script that shows the flawed behaviour:


use NetAddr::IP;

my $hostname = "";

my $ip6 = NetAddr::IP->new6($hostname);
my $ip4 = NetAddr::IP->new($hostname);

print "$ip6 <- WTF???\n";
print "$ip4\n";


... gives...

[mike@minobo ~]$ ./
0:0:0:0:0:0:808:808/128 <- WTF???

In words...

So what happens in NetAddr::IP is that with the new6() "constructor" you initialize a new IPv6 address. If the address is a DNS name, NetAddr::IP internally resolves it into an IPv4 address and converts this IPv4 address into some IPv6'ish format. This bogus IPv6 address is not the one matching the given DNS name.

Impacted Software in Debian

Various Debian packages use NetAddr::IP and may be affected by this flaw; here is an incomplete list (use apt-rdepends -r libnetaddr-ip-perl for the complete list):

  • spamassassin
  • postgrey
  • postfix-policyd-spf-perl
  • mtpolicyd
  • xen-tools
  • fwsnort
  • freeipa-server
  • 389-ds
  • uif

Any of the above packages could be affected if NetAddr::IP->new6(<dnsname>) is being used. I haven't checked any of the code bases, but possibly the corresponding maintainers may want to do that.



Planet DebianRuss Allbery: Review: Enchanters' End Game

Review: Enchanters' End Game, by David Eddings

Series: The Belgariad #5
Publisher: Del Rey
Copyright: December 1984
Printing: February 1990
ISBN: 0-345-33871-5
Format: Mass market
Pages: 372

And, finally, the conclusion towards which everything has been heading, and the events for which Castle of Wizardry was the preparation. (This is therefore obviously not the place to start with this series.) Does it live up to all the foreshadowing and provide a satisfactory conclusion? I'd say mostly. The theology is a bit thin, but Eddings does a solid job of bringing all the plot threads together and giving each of the large cast a moment to shine.

Enchanters' End Game (I have always been weirdly annoyed by that clunky apostrophe) starts with more of Garion and Belgarath, and, similar to the end of Castle of Wizardry, this feels like them rolling on the random encounter table. There is a fairly important bit with Nadraks at the start, but the remaining detour to the north is a mostly unrelated bit of world-building. Before this re-read, I didn't remember how extensive the Nadrak parts of this story were; in retrospect, I realize a lot of what I was remembering is in the Mallorean instead. I'll therefore save my commentary on Nadrak gender roles for an eventual Mallorean re-read, since there's quite a lot to dig through and much of it is based on information not available here.

After this section, though, the story leaves Garion, Belgarath, and Silk for nearly the entire book, returning to them only for the climax. Most of this book is about Ce'Nedra, the queens and kings of the west, and what they're doing while Garion and his small party are carrying the Ring into Mordor— er, you know what I mean.

And this long section is surprisingly good. We first get to see the various queens of the west doing extremely well managing the kingdoms while the kings are away (see my previous note about how Eddings does examine his stereotypes), albeit partly by mercilessly exploiting the sexism of their societies. The story then picks up with Ce'Nedra and company, including all of the rest of Garion's band, being their snarky and varied selves. There are some fairly satisfying set pieces, some battle tactics, some magical tactics, and a good bit of snarking and interplay between characters who feel like old friends by this point (mostly because of Eddings's simple, broad-strokes characterization).

And Ce'Nedra is surprisingly good here. I would say that she's grown up after the events of the last book, but sadly she reverts to being awful in the aftermath. But for the main section of the book, partly because she's busy with other things, she's a reasonable character who experiences some actual consequences and some real remorse from one bad decision she makes. She's even admirable in how she handles events leading up to the climax of the book.

Eddings does a good job showing every character in their best light, putting quite a lot of suspense (and some dramatic rescues) into this final volume, and providing a final battle that's moderately interesting. I'm not sure I entirely bought the theological ramifications of the conclusion (the bits with Polgara do not support thinking about too deeply), but the voice in Garion's head continues to be one of the better characters of the series. And Errand is a delight.

After the climax, the aftermath sadly returns to Eddings's weird war between the sexes presentation of all gender relationships in this series, and it left me with a bit of a bad taste in my mouth. (There is absolutely no way that some of these relationships would survive in reality.) Eddings portrays nearly every woman as a manipulative schemer, sometimes for good and sometimes for evil, and there is just so much gender stereotyping throughout this book for both women and men. You can tell he's trying with the queens, but women are still only allowed to be successful at politics and war within a very specific frame. Even Polgara gets a bit of the gender stereotyping, although she remains mostly an exception (and one aspect of the ending is much better than it could have been).

Ah well. One does not (or at least probably should not) read this series without being aware that it has some flaws. But it has a strange charm as well, mostly from its irreverence. The dry wise-cracking of these characters rings more true to me than the epic seriousness of a lot of fantasy. This is how people behave under stress, and this is how quirky people who know each other extremely well interact. It also keeps one turning the pages quite effectively. I stayed up for several late nights finishing it, and was never tempted to put it down and stop reading.

This is not great literature, but it's still fun. It wouldn't sustain regular re-reading for me, but a re-read after twenty years or so was pretty much exactly the experience I was hoping for: an unchallenging, optimistic story with amusing characters and a guaranteed happy ending. There's a place for that.

Followed, in a series sense, by the Mallorean, the first book of which is The Guardians of the West. But this is a strictly optional continuation; the Belgariad comes to a definite end here.

Rating: 7 out of 10

Planet DebianSven Hoexter: moto g falcon reactivation and exodus mod

I started to reactivate my old moto g falcon during the last days of CyanogenMod in December of 2016. First step was a recovery update to TWRP 3.0.2-2 so I was able to flash CM13/14 builds. While the CM14 nightly builds did not boot at all, the CM13 builds did, but up to the last build wifi connections to the internet did not work. I could actually register with my wifi (Archer C7 running OpenWRT) but all apps claimed the internet connection check failed and I was offline. So bummer; without wifi a smartphone is not much fun.

I was pretty sure that wifi worked when I last used that phone about 1.5 years ago with CM11/12, so I started to dive into the forums of xda-developers to look for alternatives. Here I found out about Exodus. I've a bit of trouble trusting stuff from xda-developer forums but what the hell, the phone is empty anyway so nothing to loose and I flashed the latest falcon build.

To flash it I had to clean the whole phone, format all partitions via TWRP and then sideload the zip image file via adb (adb from the Debian/stretch adb package works like a charm, thank you guys!). Booted and bäm, wifi works again! Now Exodus is a really stripped-down mod; to do anything useful with it I had to activate the developer options and allow USB debugging. Afterwards I could install the f-droid and Opera apk via "adb install foo.apk".

Lineage OS

As I could derive from another thread on xda-developers Lineage OS has the falcon still on the shortlist for 14.x nightly builds. Maybe that will be an alternative again in the future. For now Exodus is a bit behind the curve (based on Android 6.0.1 from September 2016) but at least it's functional.

Planet DebianJonathan McDowell: Cloning a USB LED device

A month or so ago I got involved in a discussion on IRC about notification methods for a headless NAS. One of the options considered was some sort of USB attached LED. DealExtreme had a cheap “Webmail notifier”, which was already supported by mainline kernels as a “Riso Kagaku” device but it had been sold out for some time.

This seemed like a fun problem to solve with a tinyAVR and V-USB. I had my USB relay board so I figured I could use that to at least get some code to the point that the kernel detected it as the right device, and the relay output could be configured as one of the colours to ensure it was being driven in roughly the right manner. The lack of a full lsusb dump (at least when I started out) made things a bit harder, plus the fact that the Riso uses an output report unlike the relay code, which uses a control message. However I had the kernel source for the driver and with a little bit of experimentation had something which would cause the driver to be loaded and the appropriate files in /sys/class/leds/ to be created. The relay was then successfully activated when the red LED was supposed to be on.

hid-led 0003:1294:1320.0001: hidraw0: USB HID v1.01 Device [MAIL  MAIL ] on usb-0000:00:14.0-6.2/input0
hid-led 0003:1294:1320.0001: Riso Kagaku Webmail Notifier initialized

I subsequently ordered some Digispark clones and modified the code to reflect the pins there (my relay board used pins 1+2 for USB, the Digispark uses pins 3+4). I then soldered a tricolour LED to the board, plugged it in and had a clone of the Riso Kagaku device for about £1.50 in parts (no doubt much cheaper in bulk). Very chuffed.

In case it’s useful to someone, the code is released under GPLv3+ and is available at;a=summary or on GitHub at I’m seeing occasional enumeration issues on an older Dell machine that only does USB2, but it generally is fine once it gets over that.
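
Once the driver binds, the LEDs are driven through the standard Linux LED class interface, i.e. by writing to a brightness file under /sys/class/leds/. A minimal Python sketch (the LED name below is a placeholder; check ls /sys/class/leds on your own machine):

```python
from pathlib import Path

def set_led(name, brightness, sysfs=Path("/sys/class/leds")):
    """Write the standard LED-class brightness attribute (0 = off)."""
    (sysfs / name / "brightness").write_text(str(brightness))

# Hypothetical name -- the real one depends on the driver:
# set_led("hidled::red", 1)
```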

(FWIW, Jon, who started the original discussion, ended up with a BlinkStick Nano which is a neater device with 2 LEDs but still based on an Tiny85.)

Planet Linux AustraliaOpenSTEM: Getting to know Homo erectus

Homo erectus, Museum of Natural History, Ann Arbor, Michigan (photo: Thomas Roche)

Homo erectus was an ancient human ancestor that lived between 2 million and 100,000 to 50,000 years ago. It had a larger body and bigger brain than most earlier human ancestors. Although recent debates revolve around how we classify these fossils, and whether they should be broken down into lots of smaller sub-groups, it is generally agreed that Australopithecines in Africa pre-dated the advent of the Homo lineage. Predecessors to Homo erectus include Homo habilis (“handy man”), a much smaller specimen.

Compared with modern Homo sapiens, which have only been around for the last 200,000 years, Homo erectus, or “upright man,” was very “successful” in a biological sense and lived on the Earth for 10 – 20 times longer than modern humans have been around.

Fossils of H. erectus show that it was the first human ancestor to live outside of Africa – one of the first fossils found was unearthed in the 19th century in Indonesia – others have been found across Asia, including China, as well as Europe and Africa.

A recent interesting summary of information about Homo erectus can be read online. OpenSTEM also has a PDF resource on Homo erectus (part of our Archaeology Textbook for Senior Secondary).

Get Hands-On!

If you’re in the greater Brisbane area and would like to have your students touch, compare and otherwise explore human ancestor skulls – talk to us! OpenSTEM has a growing range of 3D printed fossil skulls and our resident archaeologist Dr Claire is available for workshops at primary and high school level (such as Introduction to Archaeology and Fossils).

Planet DebianJamie McClelland: What's Up with WhatsApp?

Despite my jaded feelings about corporate Internet services in general, I was surprised to learn that WhatsApp's end-to-end encryption was a lie. In short, it is possible to send an encrypted message to a user that is intercepted and effectively decrypted without the sender's knowledge.

However, I was even more surprised to read Open Whisper Systems' critique of the original story, claiming that it is not a backdoor because the WhatsApp sender's client is always notified when a message is decrypted.

The Open Whisper Systems post acknowledges that the WhatsApp sender can choose to disable these notifications, but claims that is not such a big deal because the WhatsApp server has no way to know which clients have this feature enabled and which do not, so intercepting a message is risky because it could result in the sender realizing it.

However, there is a fairly important piece of information missing, namely: as far as I can tell, the setting to notify users about key changes is disabled by default.

So, using the default installation, your end-to-end encrypted message could be intercepted and decrypted without you or the party you are communicating with knowing it. How is this not a back door? And yes, if the interceptor can't tell whether or not the sender has these notifications turned on, the interceptor runs the risk of someone knowing they have intercepted the message. Great. That's better than nothing. Except that there is strong evidence that many powerful governments on this planet routinely risk exposure in their pursuit of compromising our ability to communicate securely. And... not to mention non-governmental (or governmental) adversaries for whom exposure is not a big deal.

Furthermore, a critical reason for end-to-end encryption is that your provider does not have the technical capacity to intercept your communications. That's simply not true of WhatsApp. It is true of Signal and OMEMO, which require the active participation of the sender to compromise the communication.

Why in the world would you distribute a client that not only has the ability to suppress such warnings, but suppresses them by default?

Some may argue that users regularly dismiss notifications like "fingerprint has changed" and that this problem is the Achilles' heel of secure communications. I agree. But... there is still a monumental difference between a user absent-mindedly dismissing an important security warning and never seeing the warning in the first place.
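To make the difference concrete, here is a minimal sketch (not WhatsApp's actual code; all names are hypothetical) of trust-on-first-use key pinning with a "show security notifications" switch. With the switch off, a server-forced key change is silently accepted and the message goes out re-encrypted to the new key:

```python
# Illustrative sketch of TOFU key pinning with an optional key-change
# notification, mimicking the setting discussed above. Hypothetical names.

class TofuSession:
    def __init__(self, notify_on_change=False):  # off by default, as described
        self.notify_on_change = notify_on_change
        self.pinned = {}       # contact -> first fingerprint seen
        self.warnings = []     # security notices actually shown to the sender

    def send(self, contact, fingerprint):
        pinned = self.pinned.get(contact)
        if pinned is not None and fingerprint != pinned:
            # The server presented a new key: accept it and re-encrypt.
            if self.notify_on_change:
                self.warnings.append(f"key changed for {contact}")
        self.pinned[contact] = fingerprint
        return "sent"  # the message goes out either way
```

With the default settings, the sender's `warnings` list stays empty even after an interceptor swaps the key; only a user who explicitly enabled notifications ever sees the change.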

This flaw in WhatsApp is a critical reminder that secure communications don't just depend on a good protocol or technology, but on trust in the people who design and maintain our systems.


TEDOne TED speaker adorns the walls of the New York City subway, another the walls of a building in Dubai…


As usual, the TED community has lots of news to share this week. Below, some highlights.

A subway line with museum-worthy art. After 45 years of construction and $4.5 billion spent, the first section of New York City’s Second Avenue subway line opened on January 1 with four stations. Maybe the best feature of the new line? The amazing artwork decorating the walls of the new stations, including work by Vik Muniz at the 72nd Street station. Muniz was one of four artists chosen from 300 applicants to turn a station into an art installation. (Watch Vik’s TED Talk)

A silver mural for Dubai. Artist eL Seed is wrapping up work on his first public-art project in the city that he calls home, Dubai. On the walls of the city’s Green Planet Building, the mural is done in his signature calligraphic style using iridescent silver spray paint, so that the color of the mural changes depending on the time of day and angle from which it’s viewed. The work spells out the words of the poem Positive Spirit, written by Sheikh Mohammed bin Rashid, the Vice President and Prime Minister of the United Arab Emirates and Ruler of Dubai, with a message about the importance of faith and the resolve it takes to fulfill your dreams. (Watch eL Seed’s TED Talk)

A mission to asteroids. Dedicated to unlocking mysteries of the solar system through shorter, more focused missions, NASA’s Discovery Program announced on January 4 that they were launching two new missions to asteroids in a search for clues about the early solar system. The projects, Lucy and Psyche, will respectively study the Trojan asteroids behind Jupiter and will send an orbiter to 16 Psyche (hence the name), a massive metallic object in the asteroid belt, as detailed by the Washington Post. According to NASA’s Planetary Science Director and TED speaker Jim Green, these missions will “help us understand how the sun and its family of planets formed, changed over time, and became places where life could develop and be sustained — and what the future may hold.” (Watch Jim’s TED Talk)

“I don’t think we’re free in America.” In order to confront and reclaim this country’s long history of racial violence, the Equal Justice Initiative launched a “Lynching in America” initiative–a comprehensive record of racial terror lynching–and has plans for a memorial in Alabama dedicated to victims of lynching. In an interview in The Intercept, director of the Equal Justice Initiative Bryan Stevenson discusses the urgency of facing this long history of violence in the wake of this country’s civil unrest: “I think we’re all burdened by this history of racial injustice, which has created a narrative of racial difference, which has infected us, corrupted us, and allowed us to see the world through this lens. So it becomes necessary to talk about that history if we want to get free.” (Watch Bryan’s TED Talk)

In search of the perfect surf. Surf photographer Chris Burkard’s upcoming documentary Under an Arctic Sky follows six adventurous surfers who set sail along the frozen shores of Iceland in the midst of the worst storm the country has seen in twenty-five years. The film is due for release in early 2017. (Watch Chris’ TED Talk)

Stem cell science: from bench to bedside. On the 7th and 8th of January, Susan Lim co-chaired the 2nd Nucleus Forum of the International Society for Stem Cell Research. The forum, attended by scientists and business and investor leaders in biotech and healthcare, was a discussion on ways to help bring breakthrough stem cell science from the bench to the bedside. The forum also discussed the new 21st Century Cures Act, signed into law by President Obama on December 13, and brought together the stem cell and gene editing communities, with Lim’s fellow TED speaker and CRISPR pioneer Jennifer Doudna also in attendance. (Watch Susan’s TED Talk and watch Jennifer’s TED Talk)

Empathy, not sympathy. In the aftermath of Donald Trump’s election and the success of Brexit in the United Kingdom, democracies around the world have experienced a populist backlash against politics as usual. However, it would be unfair, Michael Sandel writes in Project Syndicate, to analyze these results as nothing more than racism, xenophobia, or economic discontent. Rather, they come from grievances related to social esteem, and the failure of establishment parties to properly engage with those relevant voters. Sandel argues that, moving forward, progressive parties must “learn from the populist protest that has displaced them – not by emulating its xenophobia and strident nationalism, but by taking seriously the legitimate grievances with which these sentiments are entangled.” (Watch Michael’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

CryptogramFriday Squid Blogging: 1874 Giant Squid Attack

This article discusses a giant squid attack on a schooner off the coast of Sri Lanka in 1874.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Planet Linux AustraliaSilvia Pfeiffer: Annual Release of External-Videos plugin – we’ve hit v1.0

This is the annual release of my external-videos wordpress plugin and with the help of Andrew Nimmolo I’m proud to announce we’ve reached version 1.0!

So yes, my external-videos wordpress plugin is now roughly 7 years old, who would have thought! During the year, I don’t get the luxury of spending time on maintaining this open source love child of mine, but at Christmas, my bad conscience catches up with me – every year! I then spend some time going through bug reports, upgrading the plugin to the latest wordpress version, upgrading to the latest video site APIs, testing functionality and of course making a new release.

This year has been quite special. The power of open source has kicked in and a new developer took an interest in external-videos. Andrew Nimmolo submitted patches over all of 2016. He decided to bring the external-videos plugin into the new decade with a huge update to the layout of the settings pages, general improvements, and an all-round update of all the video site APIs which included removing their overly complex SDKs and going straight for the REST APIs.

Therefore, I’m very proud to be able to release version 1.0 today. Thanks, Andrew!

Enjoy – and I look forward to many more contributions – have a Happy 2017!

NOTE: If you’re upgrading from an older version, you might need to remove and re-add your social video sites because the API details have changed a bit. Also, we noticed that there were layout issues on WordPress 4.3.7, so try and make sure your WordPress version is up to date.

CryptogramFDA Recommendations on Medical-Device Cybersecurity

The FDA has issued a report giving manufacturers of medical devices guidance on computer and network security. There's nothing particularly new or interesting; it reads like standard security advice: write secure software, patch bugs, and so on.

Note that these are "non-binding recommendations," so I'm really not sure why they bothered.

EDITED TO ADD (1/13): Why they bothered.

Planet DebianElena 'valhalla' Grandi: Modern XMPP Server

Modern XMPP Server

I've published a new HOWTO on my website '': already wrote about the Why (and the What, Who and When), so I'll just quote his conclusion and move on to the How.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited, it is federated and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.


I've decided to install prosody, mostly because it was recommended by the RTC QuickStart Guide; I've heard that similar results can be reached with other servers.

I'm also targeting Debian stable (+ backports); as I write this, that is jessie; if there are significant differences I will update this article when I upgrade my server to stretch. Right now, this means that I'm using prosody 0.9 (and that's probably also the version that will be available in stretch).

Installation and prerequisites

You will need to enable the repository and then install the packages prosody and prosody-modules.

You also need to set up some TLS certificates (I used Let's Encrypt) and make them readable by the prosody user; you can see Chapter 12 of the RTC QuickStart Guide for more details.

On your firewall, you'll need to open the following TCP ports:

  • 5222 (client2server)

  • 5269 (server2server)

  • 5280 (default http port for prosody)

  • 5281 (default https port for prosody)

The latter two are needed to enable some services provided via http(s), including rich media transfers.

With just a handful of users, I didn't bother to configure LDAP or anything else, but just created users manually via:

prosodyctl adduser

In-band registration is disabled by default (and I've left it that way, to prevent my server from being used to send spim).

prosody configuration

You can then start configuring prosody by editing /etc/prosody/prosody.cfg.lua and changing a few values from the distribution defaults.

First of all, enforce the use of encryption and certificate checking both for client2server and server2server communications with:

c2s_require_encryption = true
s2s_secure_auth = true

and then, sadly, add to the whitelist any server that you want to talk to but that doesn't support the above:

s2s_insecure_domains = { "" }


For each virtualhost you want to configure, create a file /etc/prosody/conf.avail/ with contents like the following:

VirtualHost ""
enabled = true
ssl = {
key = "/etc/ssl/private/";
certificate = "/etc/ssl/public/";
}

For the domains where you also want to enable MUCs, add the following lines:

Component "" "muc"
restrict_room_creation = "local"

the "local" configures prosody so that only local users are allowed to create new rooms (but then everybody can join them, if the room administrator allows it): this may help reduce unwanted usages of your server by random people.

You can also add the following line to enable rich media transfers via http uploads (XEP-0363):

Component "" "http_upload"

The defaults are pretty sane, but see the module documentation for details on what knobs you can configure for this module.

Don't forget to enable the virtualhost by linking the file inside /etc/prosody/conf.d/.

additional modules

Most of the other interesting XEPs are enabled by loading additional modules inside /etc/prosody/prosody.cfg.lua (under modules_enabled); to enable mod_something just add a line like:

"something";

Most of these come from the prosody-modules package (and thus from ) and some may require changing when prosody 0.10 will be available; when this is the case it is mentioned below.

  • mod_carbons (XEP-0280)
    To keep conversations synchronized while using multiple devices at the same time.

    This will be included by default in prosody 0.10.

  • mod_privacy + mod_blocking (XEP-0191)
    To allow user-controlled blocking of users, including as an anti-spim measure.

    In prosody 0.10 these two modules will be replaced by mod_blocklist.

  • mod_smacks (XEP-0198)
    Allow clients to resume a disconnected session before a customizable timeout and prevent message loss.

  • mod_mam (XEP-0313)
    Archive messages on the server for a limited period of time (default 1 week) and allow clients to retrieve them; this is required to synchronize message history between multiple clients.

    With prosody 0.9 only an in-memory storage backend is available, which may make this module problematic on servers with many users. prosody 0.10 will fix this by adding support for an SQL-backed storage with archiving capabilities.

  • mod_throttle_presence + mod_filter_chatstates (XEP-0352)
    Filter out presence updates and chat states when the client announces (via Client State Indication) that the user isn't looking. This is useful to reduce power and bandwidth usage for "useless" traffic.

@Gruppo Linux Como @LIFO

CryptogramInternet Filtering in Authoritarian Regimes

Interesting research: Sebastian Hellmeier, "The Dictator's Digital Toolkit: Explaining Variation in Internet Filtering in Authoritarian Regimes," Politics & Policy, 2016 (full paper is behind a paywall):

Abstract: Following its global diffusion during the last decade, the Internet was expected to become a liberation technology and a threat for autocratic regimes by facilitating collective action. Recently, however, autocratic regimes took control of the Internet and filter online content. Building on the literature concerning the political economy of repression, this article argues that regime characteristics, economic conditions, and conflict in bordering states account for variation in Internet filtering levels among autocratic regimes. Using OLS-regression, the article analyzes the determinants of Internet filtering as measured by the Open Net Initiative in 34 autocratic regimes. The results show that monarchies, regimes with higher levels of social unrest, regime changes in neighboring countries, and less oppositional competition in the political arena are more likely to filter the Internet. The article calls for a systematic data collection to analyze the causal mechanisms and the temporal dynamics of Internet filtering.
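For readers unfamiliar with the method named in the abstract: ordinary least squares just fits coefficients that minimize squared residuals. A toy pure-Python version with one regressor and invented data (the paper's real analysis regresses ONI filtering scores on several regime-level covariates across 34 autocracies):

```python
# Toy illustration of OLS, the estimation method the paper uses.
# Data and variable names here are invented for demonstration only.

def ols_simple(x, y):
    """Fit y = a + b*x by least squares; return (a, b)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # b = cov(x, y) / var(x); a = mean(y) - b * mean(x)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# e.g. a made-up "social unrest" score vs. a filtering score:
a, b = ols_simple([0, 1, 2, 3], [1.0, 3.1, 4.9, 7.0])
```

The fitted slope and intercept come out near 2 and 1 for this data; the paper does the multivariate analogue with regime-type, unrest, and neighborhood covariates.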

Worse Than FailureError'd: Banking on the Information Super Highway

"Good to see Santander finally embracing modern technology!" writes Sam B.


"I imagine the text could read 'Welcome user! Launch the game since you have no friends anyway and are beyond help'. Yay," writes Ruff.


Alister wrote, "Getting on the wifi with the 'network' cable was a snap but I found the range to be very limited."


"Seriously guys? WTF. They all look defined to me," B.J. wrote.


"Apparently my package had to travel back in time before it could get to me. Did I order a TARDIS by mistake?" writes Patrick.


While standing in line at customer service at Walmart, I spotted this on the customer facing screens on all of the registers at customer service. I wonder if someone wanted to buy tickets, would they be allowed?


Betsy R. writes "I once heard IBM's documentation described as sounding as if it had been translated from a foreign language by a bored high-school student. Maybe that's what happened here?"


[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

CryptogramNSA Given More Ability to Share Raw Intelligence Data

President Obama has changed the rules regarding raw intelligence, allowing the NSA to share raw data with the US's other 16 intelligence agencies.

The new rules significantly relax longstanding limits on what the N.S.A. may do with the information gathered by its most powerful surveillance operations, which are largely unregulated by American wiretapping laws. These include collecting satellite transmissions, phone calls and emails that cross network switches abroad, and messages between people abroad that cross domestic network switches.

The change means that far more officials will be searching through raw data. Essentially, the government is reducing the risk that the N.S.A. will fail to recognize that a piece of information would be valuable to another agency, but increasing the risk that officials will see private information about innocent people.

Here are the new procedures.

This rule change has been in the works for a while. Here are two blog posts from April discussing the then-proposed changes.

From a privacy perspective, this feels like a really bad idea to me.

Google AdsenseMeet the new AdSense user interface

Editor's Note: This post was originally published in October 2016 and has been updated with the most recent product announcement. 

Over the coming weeks, when you log in to your AdSense account, you'll be automatically taken to the new User Interface (UI). You will no longer be able to opt-out of the new UI.

Thank you to the more than 500,000 publishers who switched to the new UI over the last few months. Your feedback has been invaluable in launching the new UI to all of our users.


Posted October 13, 2016

The new AdSense user interface (UI) is here. Over the last year, our product team has been hard at work bringing Material Design principles to AdSense. This new UI highlights the information that’s relevant to you on a personalized homepage and streamlines navigation.

Over the next few weeks we’ll be offering the new UI to AdSense publishers.  All you’ll need to do is opt in when you log in to AdSense: 

What’s new?

  • A fresh new look & feel. We're adopting Material Design principles with a completely redesigned homepage and menu. We’ll roll out further improvements throughout the product soon.
  • A great new homepage. All the information you need, right where you need it. We've organized your homepage into a stream of interactive cards. You can pin your favorites to the top of the stream, and arrange your homepage just the way you’d like.
  • A streamlined new menu. We’ve brought everything together in a new left hand menu.

We’ll continue to improve and refine AdSense over the coming months. While we’re making these improvements, you’ll still be able to find all the content and features that you’re used to–right where you expect them.

Opt in through the AdSense interface to try it for yourself, and let us know what you think in the feedback tool.

Posted by: Andrew Gildfind, Daniel White & Louis Collard
From the AdSense Product Team


Planet DebianBen Hutchings: Debian 8 kernel security update

There are a fair number of outstanding security issues in the Linux kernel for Debian 8 "jessie", but none of them were considered serious enough to issue a security update and DSA. Instead, most of them are being fixed through the point release (8.7) which will be released this weekend. Don't forget that you need to reboot to complete a kernel upgrade.

This update to linux (version 3.16.39-1) also adds the perf security mitigation feature from Grsecurity. You can disable unprivileged use of perf entirely by setting sysctl kernel.perf_event_paranoid=3. (This is the default for Debian "stretch".)
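As a rough map of what each paranoia level means for unprivileged users (levels -1 through 2 are the mainline semantics; level 3 is the Grsecurity-derived addition described above; treat the wording as an informal summary, not kernel documentation):

```python
# Informal summary of kernel.perf_event_paranoid levels as they affect
# unprivileged users. Levels -1..2 follow mainline Linux; level 3 comes
# from the Grsecurity-derived mitigation this update adds.

PERF_PARANOID = {
    -1: "no restrictions",
     0: "disallow raw tracepoint access",
     1: "disallow CPU event access",
     2: "disallow kernel profiling",
     3: "disallow all use of perf",
}

def unprivileged_perf_allowed(level):
    """True if an unprivileged user can still use perf at all."""
    return level < 3
```

So setting the sysctl to 3 is the only level that shuts unprivileged perf off entirely, which is why the mitigation uses it.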

Planet DebianBen Hutchings: Debian LTS work, December 2016

I was assigned 13.5 hours of work by Freexian's Debian LTS initiative and carried over 2 from November. I worked only 10 hours, so I carry over 5.5 hours.

As for the last few months, I spent all of this time working on the linux (kernel) package. I backported several security fixes and did some testing of the more invasive changes.

I also added the option to mitigate security issues in the performance events (perf) subsystem by disabling use by unprivileged users. This feature comes from Grsecurity and has been included in Debian unstable and Android kernels for a while. However, for Debian 7 LTS it has to be explicitly enabled by setting sysctl kernel.perf_event_paranoid=3.

I uploaded these changes as linux 3.2.84-1 and then (on 1st January) issued DLA 722-1.

CryptogramAttributing the DNC Hacks to Russia

President Barack Obama's public accusation of Russia as the source of the hacks in the US presidential election and the leaking of sensitive e-mails through WikiLeaks and other sources has opened up a debate on what constitutes sufficient evidence to attribute an attack in cyberspace. The answer is both complicated and inherently tied up in political considerations.

The administration is balancing political considerations and the inherent secrecy of electronic espionage with the need to justify its actions to the public. These issues will continue to plague us as more international conflict plays out in cyberspace.

It's true that it's easy for an attacker to hide who he is in cyberspace. We are unable to identify particular pieces of hardware and software around the world positively. We can't verify the identity of someone sitting in front of a keyboard through computer data alone. Internet data packets don't come with return addresses, and it's easy for attackers to disguise their origins. For decades, hackers have used techniques such as jump hosts, VPNs, Tor and open relays to obscure their origin, and in many cases they work. I'm sure that many national intelligence agencies route their attacks through China, simply because everyone knows lots of attacks come from China.

On the other hand, there are techniques that can identify attackers with varying degrees of precision. It's rarely just one thing, and you'll often hear the term "constellation of evidence" to describe how a particular attacker is identified. It's analogous to traditional detective work. Investigators collect clues and piece them together with known modes of operation. They look for elements that resemble other attacks and elements that are anomalies. The clues might involve ones and zeros, but the techniques go back to Sir Arthur Conan Doyle.

The University of Toronto-based organization Citizen Lab routinely attributes attacks against the computers of activists and dissidents to particular Third World governments. It took months to identify China as the source of the 2012 attacks against the New York Times. While it was uncontroversial to say that Russia was the source of a cyberattack against Estonia in 2007, no one knew if those attacks were authorized by the Russian government -- until the attackers explained themselves. And it was the Internet security company CrowdStrike, which first attributed the attacks against the Democratic National Committee to Russian intelligence agencies in June, based on multiple pieces of evidence gathered from its forensic investigation.

Attribution is easier if you are monitoring broad swaths of the Internet. This gives the National Security Agency a singular advantage in the attribution game. The problem, of course, is that the NSA doesn't want to publish what it knows.

Regardless of what the government knows and how it knows it, the decision of whether to make attribution evidence public is another matter. When Sony was attacked, many security experts -- myself included -- were skeptical of both the government's attribution claims and the flimsy evidence associated with it. I only became convinced when the New York Times ran a story about the government's attribution, which talked about both secret evidence inside the NSA and human intelligence assets inside North Korea. In contrast, when the Office of Personnel Management was breached in 2015, the US government decided not to accuse China publicly, either because it didn't want to escalate the political situation or because it didn't want to reveal any secret evidence.

The Obama administration has been more public about its evidence in the DNC case, but it has not been entirely public.

It's one thing for the government to know who attacked it. It's quite another for it to convince the public who attacked it. As attribution increasingly relies on secret evidence -- as it did with North Korea's attack of Sony in 2014 and almost certainly does regarding Russia and the previous election -- the government is going to have to face the choice of making previously secret evidence public and burning sources and methods, or keeping it secret and facing perfectly reasonable skepticism.

If the government is going to take public action against a cyberattack, it needs to make its evidence public. But releasing secret evidence might get people killed, and it would make any future confidentiality assurances we make to human sources completely non-credible. This problem isn't going away; secrecy helps the intelligence community, but it wounds our democracy.

The constellation of evidence attributing the attacks against the DNC, and subsequent release of information, is comprehensive. It's possible that there was more than one attack. It's possible that someone not associated with Russia leaked the information to WikiLeaks, although we have no idea where that someone else would have obtained the information. We know that the Russian actors who hacked the DNC -- both the FSB, Russia's principal security agency, and the GRU, Russia's military intelligence unit -- are also attacking other political networks around the world.

In the end, though, attribution comes down to whom you believe. When Citizen Lab writes a report outlining how a United Arab Emirates human rights defender was targeted with a cyberattack, we have no trouble believing that it was the UAE government. When Google identifies China as the source of attacks against Gmail users, we believe it just as easily.

Obama decided not to make the accusation public before the election so as not to be seen as influencing the election. Now, afterward, there are political implications in accepting that Russia hacked the DNC in an attempt to influence the US presidential election. But no amount of evidence can convince the unconvinceable.

The most important thing we can do right now is deter any country from trying this sort of thing in the future, and the political nature of the issue makes that harder. Right now, we've told the world that others can get away with manipulating our election process as long as they can keep their efforts secret until after one side wins. Obama has promised both secret retaliations and public ones. We need to hope they're enough.

This essay previously appeared on

EDITED TO ADD: The ODNI released a declassified report on the Russian attacks. Here's a New York Times article on the report.

And last week there were Senate hearings on this issue.

EDITED TO ADD: A Washington Post article talks about some of the intelligence behind the assessment.

EDITED TO ADD (1/10): The UK connection.

Cory DoctorowWhy the Trump era is the perfect time to go long on freedom and short on surveillance

My new Locus column is “It’s Time to Short Surveillance and Go Long on Freedom,” which starts by observing that Barack Obama’s legacy includes a beautifully operationalized, professional and terrifying surveillance apparatus, which Donald Trump inherits as he assumes office and makes ready to make good on his promise to deport millions of Americans and place Muslims under continuous surveillance.

But Trump supporters shouldn’t get too happy about this: after all, the billions Trump will pour into expanding America’s surveillance apparatus will be inherited by his successor — who may well be a Democrat who uses it for their own political ends.

The expansion of surveillance in the Trump era will create more and more people with direct experience of the perils of mass surveillance — and thus a larger audience for tools, products and services to help them safeguard their privacy. Privacy and surveillance are classic public health problems: because the downsides are so distant from the activity, it’s hard for us to make good judgments about when and how we should trade our privacy away. This is the same pattern that makes smoking so hard to combat.

Just as with smoking, surveillance will eventually reach the point of “peak indifference” – the moment after which the number of people who want to do something about it only goes up. That moment has already passed, and the Trump years will only accelerate the opposition to surveillance.

How can you short the surveillance economy and go long on technological freedom? Personally, you can peruse the easy-to-follow ‘‘Surveillance Self Defense’’ documentation maintained (in 11 languages!) by the Electronic Frontier Foundation (, and get your friends to do the same (remember, privacy is a team sport – it doesn’t matter if you keep your messages secure if your correspondents leave them in plain sight).

But if you’re minded to think about new businesses and business models, get thinking about how you might offer services to protect people from the backdoored, hyper-invasive Internet of Things. What about a Facebook login tool that scrapes all your feeds by clicking everything and downloading it all, then letting you choose what you see without letting Facebook know, depriving Facebook of information about the choices you make and the places you are when you make them? That’ll get you sued by Facebook under the Computer Fraud and Abuse Act, but who knows, maybe a peak-indifference judge will find in your favor. Facebook has a lot of users who like the utility of hanging out with their friends and will increasingly be terrified of the consequences of hemorrhaging their data directly into Mark Zuckerberg’s remorseless, gaping maw.

Think of how you could jailbreak Philips lightbulbs and HP printers and ‘‘smart’’ TVs and games consoles and cable boxes and load them with software that treats your personal data as if it was precious lifeblood, not the consequence-free exhalations of your digital metabolism. That’ll get you sued under Section 1201 of the Digital Millennium Copyright Act, and again, we’ll have to see whether a peak-indifference judge will decide that’s what Congress meant when they passed the DMCA in 1998. But that’s what limited liability companies are for, right?

Most importantly, you short the surveillance economy by investing in the activist groups that are fighting to make it legally safe to command your devices to stop stabbing you in the back and start guarding your back. That’s groups like the Electronic Frontier Foundation (; disclosure, I consult to, but don’t earn money from, the EFF), the American Civil Liberties Union (ACLU), and many, many others.

We’ve got a rough four years ahead of us, and it’s going to get a lot worse before it gets better. But the only thing that could make the privacy catastrophes of the coming years even worse is if we let them go to waste.

It’s Time to Short Surveillance and Go Long on Freedom [Cory Doctorow/Locus Magazine]

(Images: Warded Lock, Thegreenj, CC-BY-SA; Donald Trump, Michael Vadon, CC-BY-SA)

CryptogramTwofish Power Analysis Attack

New paper: "A Simple Power Analysis Attack on the Twofish Key Schedule." This shouldn't be a surprise; these attacks are devastating if you don't take steps to mitigate them.

The general issue is that if an attacker has physical control of the computer performing the encryption, it is very hard to secure the encryption inside the computer. I wrote a paper about this back in 1999.

Worse Than FailureCodeSOD: Extended Conditions

Every programming language embodies in it a philosophy about how problems should be solved. C reduces all problems to manipulations of memory addresses. Java turns every problem into a set of interacting objects. JavaScript summons Shub-Niggurath, the black goat of the woods with a thousand young, to eat the eyes of developers.

Just following the logic of a language can take you a long way toward good results. Popular languages were designed by smart people, who work through many of the problems you might encounter when building a program with their tools. That doesn’t mean that you can’t take things a bit too far and misapply that philosophy, though.

Take this code, sent to us by “Kogad”. Their co-worker understood that objects and interfaces were fundamental to Java programming, so when presented with the challenge of three conditional statements, they created this:

package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;

public interface ProcesSpecification {
        boolean isSatisfiedBy(CustomerRequest req);
}

package com.initrode.account.framework.process.specification;

public abstract class CompositeProcesSpecification implements ProcesSpecification {

        public ProcesSpecification and(ProcesSpecification specification){
                return new AndProcesSpecification(this, specification);
        }

        public ProcesSpecification or(ProcesSpecification specification){
                return new OrProcesSpecification(this, specification);
        }

        public ProcesSpecification not(ProcesSpecification specification){
                return new NotProcesSpecification(specification);
        }
}

package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;

public class NotProcesSpecification extends CompositeProcesSpecification {

        private ProcesSpecification spec;

        public NotProcesSpecification(ProcesSpecification specification) {
                spec = specification;
        }

        public boolean isSatisfiedBy(CustomerRequest req) {
                return !spec.isSatisfiedBy(req);
        }
}


package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;

public class AndProcesSpecification extends CompositeProcesSpecification {
        private ProcesSpecification specOne;
        private ProcesSpecification specTwo;

        public AndProcesSpecification(ProcesSpecification specificationOne, ProcesSpecification specificationTwo) {
                specOne = specificationOne;
                specTwo = specificationTwo;
        }

        public boolean isSatisfiedBy(CustomerRequest req) {
                return specOne.isSatisfiedBy(req) && specTwo.isSatisfiedBy(req);
        }
}

package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;

public class OrProcesSpecification extends CompositeProcesSpecification {
        private ProcesSpecification specOne;
        private ProcesSpecification specTwo;

        public OrProcesSpecification(ProcesSpecification specificationOne, ProcesSpecification specificationTwo) {
                specOne = specificationOne;
                specTwo = specificationTwo;
        }

        public boolean isSatisfiedBy(CustomerRequest req) {
                return specOne.isSatisfiedBy(req) || specTwo.isSatisfiedBy(req);
        }
}

package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;
import com.initrode.account.framework.process.CustomerType;

public class TypeOneProcesSpecification extends CompositeProcesSpecification {

        public boolean isSatisfiedBy(CustomerRequest req) {
                return null != req && CustomerType.ONE == req.getType();
        }
}


package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;
import com.initrode.account.framework.process.CustomerType;

public class TypeTwoProcesSpecification extends CompositeProcesSpecification {

        public boolean isSatisfiedBy(CustomerRequest req) {
                return null != req && CustomerType.TWO == req.getType();
        }
}

package com.initrode.account.framework.process.specification;

import com.initrode.account.framework.process.CustomerRequest;

public class VerifyProcessSpecification extends CompositeProcesSpecification {
        public boolean isSatisfiedBy(CustomerRequest req) {
                return req.hasVerificationCode();
        }
}

// Usage:

public class ActionOne {
        private ProcesSpecification procesSpec;

        protected void postInitialize() {
                setProcesSpecification(new VerifyProcessSpecification().and(new TypeOneProcesSpecification()));
        }

        @Override public boolean canHandle(CustomerRequest req) {
                return procesSpec.isSatisfiedBy(req);
        }

        void doAction(...);
}

public class ActionTwo {
        private ProcesSpecification procesSpec;

        protected void postInitialize() {
                setProcesSpecification(new VerifyProcessSpecification().and(new TypeTwoProcesSpecification()));
        }

        @Override public boolean canHandle(CustomerRequest req) {
                return procesSpec.isSatisfiedBy(req);
        }

        void doAction(...);
}

public class ActionThree {
        private ProcesSpecification procesSpec;

        public void postInitialize() {
                procesSpec = new NotProcesSpecification(new VerifyProcessSpecification().and(
                                new TypeOneProcesSpecification().or(new TypeTwoProcesSpecification())));
        }

        @Override public boolean canHandle(CustomerRequest req) {
                return procesSpec.isSatisfiedBy(req);
        }

        void doAction(...);
}

This is certainly Peak Java™. It’s… extensible, at least. Not that you’d want to. “Kogad” replaced this masterpiece with a much simpler, if less extensible, chain of conditionals that also mapped more closely to the actual requirements.
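Kogad’s actual replacement isn’t shown, but the same three checks collapse to plain conditionals. Here’s a rough sketch (in Python for brevity; the request stand-in and field names are illustrative, not the real `CustomerRequest`):

```python
from collections import namedtuple

# Stand-in for CustomerRequest; the fields are assumptions for illustration.
Request = namedtuple("Request", ["has_verification_code", "customer_type"])

def can_handle_action_one(req):
    # Equivalent of Verify AND TypeOne
    return req is not None and req.has_verification_code and req.customer_type == "ONE"

def can_handle_action_two(req):
    # Equivalent of Verify AND TypeTwo
    return req is not None and req.has_verification_code and req.customer_type == "TWO"

def can_handle_action_three(req):
    # Equivalent of NOT (Verify AND (TypeOne OR TypeTwo))
    return not (req is not None and req.has_verification_code
                and req.customer_type in ("ONE", "TWO"))
```

Seven classes and an interface, versus three boolean expressions you can read at a glance.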

[Advertisement] Otter enables DevOps best practices by providing a visual, dynamic, and intuitive UI that shows, at-a-glance, the configuration state of all your servers. Find out more and download today!

Planet DebianRitesh Raj Sarraf: Laptop Mode Tools 1.71

I am pleased to announce the 1.71 release of Laptop Mode Tools. This release includes some new modules, some bug fixes, and there are some efficiency improvements too. Many thanks to our users; most changes in this release are contributions from our users.

A filtered list of changes is mentioned below. For the full log, please refer to the git repository.

Source tarball, Fedora/SUSE RPM packages available at:

Debian packages will be available soon in Unstable.

Mailing List:


1.71 - Thu Jan 12 13:30:50 IST 2017
    * Fix incorrect import of os.putenv
    * Merge pull request #74 from Coucouf/fix-os-putenv
    * Fix documentation on where we read battery capacity from
    * cpuhotplug: allow disabling specific cpus
    * Merge pull request #78 from aartamonau/cpuhotplug
    * runtime-pm: refactor listed_by_id()
    * wireless-power: Use iw and fall back to iwconfig if it is not available
    * Prefer available AC supply information over battery state to determine ON_AC
    * On startup, we want to force the full execution of LMT.
    * Device hotplugs need a forced execution for LMT to apply the proper settings
    * runtime-pm: Refactor list_by_type()
    * kbd-backlight: New module to control keyboard backlight brightness
    * Include Transmit power saving in wireless cards
    * Don't run in a subshell
    * Try harder to check battery charge
    * New module: vgaswitcheroo
    * Revive bluetooth module. Use rfkill primarily. Also don't unload (incomplete list of) kernel modules


What is Laptop Mode Tools

Description: Tools for Power Savings based on battery/AC status
 Laptop mode is a Linux kernel feature that allows your laptop to save
 considerable power, by allowing the hard drive to spin down for longer
 periods of time. This package contains the userland scripts that are
 needed to enable laptop mode.
 It includes support for automatically enabling laptop mode when the
 computer is working on batteries. It also supports various other power
 management features, such as starting and stopping daemons depending on
 power mode, automatically hibernating if battery levels are too low, and
 adjusting terminal blanking and X11 screen blanking.
 laptop-mode-tools uses the Linux kernel's Laptop Mode feature and thus
 is also used on Desktops and Servers to conserve power.





LongNowJennifer Pahlka Seminar Tickets


The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Jennifer Pahlka presents Fixing Government: Bottom Up and Outside In



Wednesday January 4, 02017 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats, join today! General Tickets $15


About this Seminar:

Jennifer Pahlka is the founder and Executive Director of Code for America. She served as the US Deputy Chief Technology Officer from June 02013 to 02014 and ran the Game Developers Conference, Game Developer magazine, and the Independent Games Festival for many years. Previously, she ran the Web 2.0 and Gov 2.0 events for TechWeb, in conjunction with O’Reilly Media.

Planet DebianSteinar H. Gunderson: 3G-SDI signal support

I had to figure out what kinds of signal you can run over 3G-SDI today, and it's pretty confusing, so I thought I'd share it.

For the reference, 3G-SDI is the same as 3G HD-SDI, an extension of HD-SDI, which is an extension of the venerable SDI standard (well, duh). They're all used for running uncompressed audio/video data over regular BNC coaxial cable, possibly hundreds of meters, and are in wide use in professional and semiprofessional setups.

So here's the rundown on 3G-SDI capabilities:

  • 1080p60 supports 10-bit 4:2:2 Y'CbCr. Period.
  • 720p60/1080p30/1080i60 supports a much wider range of formats: 10-bit 4:4:4:4 RGBA (alpha optional), 10-bit 4:4:4:4 Y'CbCrA (alpha optional), 12-bit 4:4:4 RGB, 12-bit 4:4:4 Y'CbCr or finally 12-bit 4:2:2 Y'CbCr (seems rather redundant).
  • There's also a format exclusively for 1080p24 (actually 2048x1080) that supports 12-bit X'Y'Z'. Digital cinema, hello. Apart from that, it supports pretty much what 1080p30 does. There's also a 2048x1080p30 (no interlaced version) mode for 12-bit 4:2:2:4 Y'CbCrA, but it seems rather obscure.
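The single-link modes above can be flattened into a quick lookup table. This is just a sketch of the list as given; the mode and format strings are my own shorthand, not standard identifiers:

```python
# Single-link 3G-SDI capabilities as listed above, keyed by video mode.
SINGLE_LINK_3G = {
    "1080p60": {"10-bit 4:2:2 Y'CbCr"},
    "720p60": {
        "10-bit 4:4:4:4 RGBA", "10-bit 4:4:4:4 Y'CbCrA",
        "12-bit 4:4:4 RGB", "12-bit 4:4:4 Y'CbCr", "12-bit 4:2:2 Y'CbCr",
    },
}
# 1080p30 and 1080i60 carry the same set of formats as 720p60.
SINGLE_LINK_3G["1080p30"] = SINGLE_LINK_3G["1080i60"] = SINGLE_LINK_3G["720p60"]

def supports(mode, pixel_format):
    """True if single-link 3G-SDI carries this pixel format in this video mode."""
    return pixel_format in SINGLE_LINK_3G.get(mode, set())
```

So `supports("1080p60", "12-bit 4:4:4 RGB")` is False on a single link: you need the dual-link variant below for that.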

And then there's dual-link 3G-SDI, which uses two cables instead of one—and there's also Blackmagic's proprietary “6G-SDI”, which supports basically everything dual-link 3G-SDI does. But in 2015, seemingly there was also a real 6G-SDI and 12G-SDI, and it's unclear to me whether it's in any way compatible with Blackmagic's offering. It's all confusing. But at least, these are the differences from single-link to dual-link 3G-SDI:

  • 1080p60 supports essentially everything that 720p60 supports on single-link: 10-bit 4:4:4:4 RGBA (alpha optional), 10-bit 4:4:4:4 Y'CbCrA (alpha optional), 12-bit 4:4:4 RGB, 12-bit 4:4:4 Y'CbCr and the redundant 12-bit 4:2:2 Y'CbCr.
  • 2048x1080 4:4:4 X'Y'Z' now also supports 1080p25 and 1080p30.

4K? I don't know. 120fps? I believe that's also a proprietary extension of some sort.

And of course, having a device support 3G-SDI doesn't mean at all it's required to support all of this; in particular, I believe Blackmagic's systems don't support alpha at all except on their single “12G-SDI” card, and I'd also not be surprised if RGB support is rather limited in practice.

Planet DebianSven Hoexter: Failing with F5: using experimental mv feature on a pool causes tmm to segfault

Just a short PSA for those around working with F5 devices:

TMOS 11.6 introduced an experimental "mv" command in tmsh. In the last days we tried it for the first time on TMOS 12.1.1. It worked fine for a VirtualServer, but a mv for a pool caused a segfault in tmm. We're currently working with the F5 support to sort it out; they think it's a known issue. Recommendation for now is to not use mv on pools. Just do it the old way: create a new pool, assign the new pool to the relevant VS, and delete the old pool.
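In tmsh terms, the old-way workaround is roughly the following (a sketch from memory; the pool, member, and virtual-server names are placeholders, so verify the exact syntax against your TMOS version):

```
create ltm pool new_pool members add { 10.0.0.1:80 10.0.0.2:80 }
modify ltm virtual my_vs pool new_pool
delete ltm pool old_pool
```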

Possible bug ID at F5 is ID562808. Since I can not find it in the TMOS 12.2 release notes I expect that this issue also applies to TMOS 12.2, but I did not verify that.

Krebs on SecurityAdobe, Microsoft Push Critical Security Fixes

Adobe and Microsoft on Tuesday each released security updates for software installed on hundreds of millions of devices. Adobe issued an update for Flash Player and for Acrobat/Reader. Microsoft released just four updates to plug some 15 security holes in Windows and related software.

Microsoft’s batch includes updates for Windows, Office and Microsoft Edge (Redmond’s replacement for Internet Explorer). Also interesting is that January 2017 is the last month Microsoft plans to publish individual bulletins for each patch. From now on, some of the data points currently in the individual updates will be lumped into a “Security Updates Guide” published with each Patch Tuesday.

This change mirrors a shift in the way Microsoft is deploying updates. Last year Microsoft stopped making individual security updates available for home users, giving those users instead a single monthly security rollup that includes all available security updates.

Windows users and anyone else with Flash installed will need to make sure that Adobe Flash Player is updated (or suitably bludgeoned, more on that in a bit). Adobe’s Flash update addresses 13 flaws in the widely-installed browser plugin. The patch brings Flash to v. for Windows, Mac and Linux users alike.

If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page. But the smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep and update Flash, please do it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (Firefox, Opera, e.g.).

Chrome and IE should auto-install the latest Flash version on browser restart (users may need to manually check for updates in and/or restart the browser to get the latest Flash version). My version of Chrome says it’s the latest one (55.0.2883.87) but the Chrome Releases blog says the latest stable version — 55.0.2883.105 includes the Flash fixes (among other security fixes for Chrome), which isn’t yet being offered. Adobe’s Web site tells me my Flash version is (not the latest).

When in doubt with Chrome, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then. In either case, be sure to restart the browser after installing an update (if it doesn’t do that for you).

As ever, if you experience any issues applying these updates, please don’t hesitate to leave a note about the issue in the comments below. You might help someone else who’s having the same problem!

Planet DebianReproducible builds folks: Reproducible Builds: week 89 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday January 1 and Saturday January 7 2017:

GSoC and Outreachy updates

Toolchain development

  • #849999 was filed: "dpkg-dev should not set SOURCE_DATE_EPOCH to the empty string"

Packages reviewed and fixed, and bugs filed

Chris Lamb:


Reviews of unreproducible packages

13 package reviews have been added, 4 have been updated and 6 have been removed in this week, adding to our knowledge about identified issues.

2 issue types have been added/updated:

Upstreaming of reproducibility fixes



Weekly QA work

During our reproducibility testing, the following FTBFS bugs have been detected and reported by:

  • Chris Lamb (4)

diffoscope development

diffoscope 67 was uploaded to unstable by Chris Lamb. It included contributions from:

[ Chris Lamb ]

* Optimisations:
  - Avoid multiple iterations over archive by unpacking once for an ~8X
    runtime optimisation.
  - Avoid unnecessary splitting and interpolating for a ~20X optimisation
    when writing --text output.
  - Avoid expensive diff regex parsing until we need it, speeding up diff
    parsing by 2X.
  - Alias expensive Config() in diff parsing lookup for a 10% optimisation.

* Progress bar:
  - Show filenames, ELF sections, etc. in progress bar.
  - Emit JSON on the status file descriptor output instead of a custom

* Logging:
  - Use more-Pythonic logging functions and output based on __name__, etc.
  - Use Debian-style "I:", "D:" log level format modifier.
  - Only print milliseconds in output, not microseconds.
  - Print version in debug output so that saved debug outputs can standalone
    as bug reports.

* Profiling:
  - Also report the total number of method calls, not just the total time.
  - Report on the total wall clock taken to execute diffoscope, including

* Tidying:
  - Rename "NonExisting" -> "Missing".
  - Entirely rework diffoscope.comparators module, splitting as many separate
    concerns into a different utility package, tidying imports, etc.
  - Split diffoscope.difference into diffoscope.diff, etc.
  - Update file references in debian/copyright post module reorganisation.
  - Many other cleanups, etc.

* Misc:
  - Clarify comment regarding why we call python3(1) directly. Thanks to Jérémy
    Bobbio <>.
  - Raise a clearer error if trying to use --html-dir on a file.
  - Fix --output-empty when files are identical and no outputs specified.

[ Reiner Herrmann ]
* Extend .apk recognition regex to also match zip archives (Closes: #849638)

[ Mattia Rizzolo ]
* Follow the rename of the Debian package "python-jsbeautifier" to

[ siamezzze ]
* Fixed no newline being classified as order-like difference.

reprotest development

reprotest 0.5 was uploaded to unstable by Chris Lamb. It included contributions from:

[ Ximin Luo ]

* Stop advertising variations that we're not actually varying.
  That is: domain_host, shell, user_group.
* Fix auto-presets in the case of a file in the current directory.
* Allow disabling build-path variations. (Closes: #833284)
* Add a faketime variation, with NO_FAKE_STAT=1 to avoid messing with
  various buildsystems. This is on by default; if it causes your builds
  to mess up please do file a bug report.
* Add a --store-dir option to save artifacts.

Other contributions (not yet uploaded): website development

  • Debian arm64 architecture was fully tested in all three suites in just 15 days. Thanks again to for their support!
  • Log diffoscope profiling info. (lamby)
  • Run pg_dump with -O --column-inserts to make easier to import our main database dump into a non-PostgreSQL database. (mapreri)
  • Debian armhf network: CPU frequency scaling was enabled for three Firefly boards, enabling the CPUs to run at full speed. (vagrant)
  • Arch Linux and Fedora tests have been disabled (h01ger)
  • Improve mail notifications about daily problems. (h01ger)


This week's edition was written by Chris Lamb, Holger Levsen and Vagrant Cascadian, reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Sociological ImagesUS Working People Hurt More By Rising Income Inequality than Slow Economic Growth

Originally posted at Reports from the Economic Front.

Defenders of capitalism in the United States often choose not to use that term when naming our system, preferring instead the phrase “market system.”  Market system sounds so much better, evoking notions of fair and mutually beneficial trades, equality, and so on.  The use of that term draws attention away from the actual workings of our system.

In brief, capitalism is a system structured by the private ownership of productive assets and driven by the actions of those who seek to maximize the private profits of the owners.  Such an understanding immediately raises questions about how some people and not others come to own productive wealth and the broader social consequences of their pursuit of profit.

Those are important questions because it is increasingly apparent that while capitalism continues to produce substantial benefits for the largest asset owners, those benefits have increasingly been secured through the promotion of policies – globalization, financialization, privatization of state services, tax cuts, attacks on social programs and unions – that have both lowered overall growth and left large numbers of people barely holding the line, if not actually worse off.

The following two figures come from a Washington Post article by Jared Bernstein in which he summarizes the work of Thomas Piketty, Emmanuel Saez and Gabriel Zucman. The first set of bars shows the significant decline in US pre-tax income growth.  In the first period (1946-1980), pre-tax income grew by 95 percent.  In the second (1980-2014), it grew by only 61 percent.


This figure also shows that this slower pre-tax income growth has not been a problem for those at the top of the income distribution.  Those at the top more than compensated for the decline by capturing a far greater share of income growth than in the past.  In fact, those in the bottom 50 percent of the population gained almost nothing over the period 1980 to 2014.

The next figure helps us see that the growth in inequality has been far more damaging to the well-being of the bottom half than the slowdown in overall income growth.  As Bernstein explains:

The bottom [blue] line in the next figure shows actual pretax income for adults in the bottom half of the income scale. The top [red] line asks how these folks would have done if their income had grown at the average rate from the earlier, faster-growth period. The middle [green] line asks how they would have done if they experienced the slower, average growth of the post-1980 period.

The difference between the top two lines is the price these bottom-half adults paid because of slower growth. The larger gap between the middle and bottom line shows the price they paid from doing much worse than average, i.e., inequality… That explains about two-thirds of the difference in endpoints. Slower growth hurt these families’ income gains, but inequality hurt them more.


A New York Times analysis of pre-tax income distribution over the period 1974 to 2014 reinforces this conclusion about the importance of inequality.  As we can see in the figure below, the top 1 percent and bottom 50 percent have basically changed places in terms of their relative shares of national income.


The steady ratcheting down in majority well-being is perhaps best captured by studies designed to estimate the probability of children making more money than their parents, an outcome that was the expectation for many decades and that underpinned the notion of “the American dream.”

Such research is quite challenging, as David Leonhardt explains in a New York Times article, “because it requires tracking individual families over time rather than (as most economic statistics do) taking one-time snapshots of the country.”  However, thanks to newly accessible tax records that go back decades, economists have been able to estimate this probability and how it has changed over time.

Leonhardt summarizes the work of one of the most important recent studies, that done by economists associated with the Equality of Opportunity Project. In summary terms, those economists found that a child born into the average American household in 1940 had a 92 percent chance of making more than their parents.  This falls to 79 percent for a child born in 1950, 62 percent for a child born in 1960, 61 percent for a child born in 1970, and only 50 percent for a child born in 1980.

The figure below provides a more detailed look at the declining fortunes of most Americans.   The horizontal axis shows the income percentile a child is born into and the vertical axis shows the probability of that child earning more than their parents.   The drop-off for children born in 1960 and 1970 compared to the earlier decade is significant and is likely the result of the beginning effects of the changes in capitalist economic dynamics that started gathering force in the late 1970s, for example globalization, privatization, tax cuts, union busting, etc.  The further drop-off for children born in 1980 speaks to the strengthening and consolidation of those dynamics.


The income trends highlighted in the figures above are clear and significant, and they point to the conclusion that unless we radically transform our capitalist system, which will require building a movement capable of challenging and overcoming the power of those who own and direct our economic processes, working people in the United States face the likelihood of an ever-worsening future.

Martin Hart-Landsberg, PhD is a professor emeritus of economics at Lewis and Clark College. You can follow him at Reports from the Economic Front.

(View original at

CryptogramLaw Enforcement Access to IoT Data

In the first of what will undoubtedly be a large number of battles between companies that make IoT devices and the police, Amazon is refusing to comply with a warrant demanding data on what its Echo device heard at a crime scene.

The particulars of the case are weird. Amazon's Echo does not constantly record; it only listens for its name. So it's unclear that there is any evidence to be turned over. But this general issue isn't going away. We are all under ubiquitous surveillance, but it is surveillance by the companies that control the Internet-connected devices in our lives. The rules by which police and intelligence agencies get access to that data will come under increasing pressure for change.

Related: A newscaster discussed Amazon's Echo on the news, causing devices in the same room as tuned-in televisions to order unwanted products. This year, the same technology is coming to LG appliances such as refrigerators.

Planet DebianDirk Eddelbuettel: R / Finance 2017 Call for Papers

Last week, Josh sent the call for papers to the R-SIG-Finance list making everyone aware that we will have our ninth annual R/Finance conference in Chicago in May. Please see the call for papers (at the link, below, or at the website) and consider submitting a paper.

We are once again very excited about our conference, thrilled about upcoming keynotes and hope that many R / Finance users will not only join us in Chicago in May 2017 -- but also submit an exciting proposal.

We also overhauled the website, so please see R/Finance. It should render well and fast on devices of all sizes: phones, tablets, desktops with browsers in different resolutions. The program and registration details still correspond to last year's conference and will be updated in due course.

So read on below, and see you in Chicago in May!

Call for Papers

R/Finance 2017: Applied Finance with R
May 19 and 20, 2017
University of Illinois at Chicago, IL, USA

The ninth annual R/Finance conference for applied finance using R will be held on May 19 and 20, 2017 in Chicago, IL, USA at the University of Illinois at Chicago. The conference will cover topics including portfolio management, time series analysis, advanced risk tools, high-performance computing, market microstructure, and econometrics. All will be discussed within the context of using R as a primary tool for financial risk management, portfolio construction, and trading.

Over the past eight years, R/Finance has included attendees from around the world. It has featured presentations from prominent academics and practitioners, and we anticipate another exciting line-up for 2017.

We invite you to submit complete papers in pdf format for consideration. We will also consider one-page abstracts (in txt or pdf format) although more complete papers are preferred. We welcome submissions for both full talks and abbreviated "lightning talks." Both academic and practitioner proposals related to R are encouraged.

All slides will be made publicly available at conference time. Presenters are strongly encouraged to provide working R code to accompany the slides. Data sets should also be made public for the purposes of reproducibility (though we realize this may be limited due to contracts with data vendors). Preference may be given to presenters who have released R packages.

Financial assistance for travel and accommodation may be available to presenters, however requests must be made at the time of submission. Assistance will be granted at the discretion of the conference committee.

Please submit proposals online at

Submissions will be reviewed and accepted on a rolling basis with a final deadline of February 28, 2017. Submitters will be notified via email by March 31, 2017 of acceptance, presentation length, and financial assistance (if requested).

Additional details will be announced via the conference website as they become available. Information on previous years' presenters and their presentations are also at the conference website. We will make a separate announcement when registration opens.

For the program committee:

Gib Bassett, Peter Carl, Dirk Eddelbuettel, Brian Peterson,
Dale Rosenthal, Jeffrey Ryan, Joshua Ulrich

Planet DebianEnrico Zini: Modern and secure instant messaging

Conversations is a really nice, actively developed, up to date XMPP client for Android that has the nice feature of telling you what XEPs are supported by the server one is using:

Initial server features

Some days ago, me and Valhalla played the game of trying to see what happens when one turns them all on: I would send her screenshots from my Conversations, and she would poke at her Prosody to try and turn things on:

After some work

Valhalla eventually managed to get all features activated, purely using packages from Jessie+Backports:

All features activated

The result was a chat system in which I could see the same conversation history on my phone and on my laptop (with gajim), and have it synced even after a device has been offline.

We could send each other rich media like photos, and could do OMEMO encryption (same as Signal) in group chats.

I now have an XMPP setup which has all the features of the recent fancy chat systems, and on top of that it runs, client and server, on Free Software, which can be audited; it is federated, and I can self-host my own server in my own VPS if I want to, with packages supported in Debian.

Valhalla has documented the whole procedure.

If you make a client for a protocol with lots of extensions, do like Conversations and implement a status page with the features you'd like to have on the server, and little green indicators showing which are available: it is quite a good motivator for getting them all supported.
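A checklist like that is trivial to render: given a map from feature name to availability, print one marker per feature. A toy sketch (the feature names are examples, not a complete list of XEPs, and this is not Conversations' actual code):

```python
def feature_report(features):
    """Render one line per server feature, with a green/red-style marker."""
    return "\n".join(
        f"{'✔' if available else '✘'} {name}"
        for name, available in sorted(features.items())
    )

print(feature_report({
    "Message Archive Management": True,
    "HTTP File Upload": True,
    "Client State Indication": False,
}))
```

The real work, of course, is in the service discovery queries that fill in that map; the point is that surfacing the result to users nudges server admins to close the gaps.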

Worse Than FailureA Case of Denial


On his first day at his new job, Sebastian wasn't particularly excited. He'd been around the block enough times to have grown a thick skin of indifference and pessimism. This job was destined to be like any other, full of annoying coworkers, poorly thought out requirements, legacy codebases full of spaghetti. But it paid well, and he was tired of his old group, weary in his soul of the same faces he'd grown accustomed to. So he prepared himself for a new flavor of the same office politics and menial tasks.

It didn't faze him much when he walked into the IT office to pick up his credentials and heard the telltale buzzing and clicking of old Packard Bell servers. He simply adjusted his expectations for his own developer machine downward a few notches and walked back to his new office. Yes, this job came with a private office, and pay to match. For that, he could put up with a lot of BS.

His login worked on the first try, which was pleasantly surprising. He expected Windows XP; when Vista loaded, he wasn't sure if he should be pleased that the OS was newer, or horrified that it was Vista. He could pretend it was 7 for a while at least, once he finished getting admin privileges and nerfing UAC. It'll take more than that to scare me off, he thought to himself as he fired up Outlook.

Already, he had mail: a few welcome messages with new employee information, as well as his first assignment from his manager. Impressed with the efficiency in assigning work, if nothing else, he opened the message from his new boss.

That first email went a little something like this:

Hi Sebastian, welcome to our super-clean environment. We do everything right here. You will use Bonk-Word (an IBM documentation web app) for design documents. Remember to save your work often! If Bonk-Word crashes, you'll need to send an email to IT to have it restarted.

We do design documentation right here. Be sure to write everything in passive voice, use Purple for chapter headings and Green for section headings. We have document review with the company president every day at 9AM so be ready for that. It's a black mark on your permanent record to have any headings wrong.

Please start designing how you are going to fix our 4-year-old Macintosh font issues. We need a six-page design document by 9AM tomorrow. Thanks.

Six pages by tomorrow? worried Sebastian. Maybe I rejoiced too soon on that efficiency thing. Well, at least I won't be bored. He cracked his knuckles, opened Bonk-Word, and set about examining these so-called font issues.

The first thing he learned was that his manager wasn't kidding when he said to save often. By the end of the day, he was mentally betting with himself on which would crash first: Bonk-Word or Vista itself. They both crashed approximately every half hour. But it was somehow soothing to keep count, making tally marks on a post-it note. It reminded him that something in this world still worked. Basic math wasn't impressive, but it was reliable. Steady. Solid.

Maybe he was a little lonely in his office. But it was quiet, and private, and even if the crashing was frustrating, he made progress. He stayed late to turn out his treatise on "Delving into the diverse and varied literature that exists on the subject of font rendering, including but not limited to the Postscript specification, accompanying literature indicating the best practices for the use thereof, and extant informational centers within the World Wide Web that have been created to gather wisdom from the best minds the industry has to offer in a familiar and comforting question-and-answer format." Lather, rinse, and repeat for "Writing a Python program to render each character." He took two pages to explain the fact that he would, essentially, eyeball the results.

If they want six pages, they're getting six pages, he thought.

A strange first day, but Sebastian could see himself sticking this out for a few years at least. He took his time as he walked through the building (which smelled suspiciously of old leather undergarments) to his "free assigned ramp parking spot" (another perk he told himself made the job worth it). Walking slowly was a good idea anyway, as the ramp had terminal rust-rot and there were many places where the concrete had completely fallen off, exposing the rebar in the floors and columns.

The next morning, at 9:00 sharp, Sebastian found himself in his manager's office for his first design review with the company president, held via conference call. Sebastian was uneasy about meeting with the president directly, given the company had sixty employees, but he took it in stride.

I did as they asked, wordy as it was. Probably this is a formality and then I can get to work.

A humiliated, exhausted Sebastian crawled back to his office an hour later, his ears ringing from the nonsensical yet harsh critique he'd received. According to the president, his headings were merely "greenish" instead of the company-mandated Green, and his chapter headings were unforgivably "reddish" rather than the expected Purple. Furthermore, he'd been informed in no uncertain terms that it was "impossible" to debug the font using Python. Instead, he was to work in C++, using the company's "marvelous" software libraries. Sebastian's manager had praised the document while they were waiting for the president, but had failed to utter a single word once the review began, his eyes fixed firmly on the brick wall behind his desk.

Sebastian closed the door to his office, blocking out the rest of the company. He sat in his plush leather chair, staring at the machine that barely worked. He opened his document again, then rebooted his machine once Vista decided to crash. When the machine came back up again, he checked his bank balance, thought of his mortgage, and gritted his teeth.

"All right," he said aloud to his empty office. "Let's see about those libraries."

The first thing he looked for was documentation. Surely, in a company as document-focused as this one, the documentation for the "marvelous" libraries would be exactly the right shade of exactly the right font, with exactly the right chapter headings and section names. Instead, it appeared to be ... missing. There were design documents galore, and their greens were more green and their purples showed far less red. But they only spelled out the methodology behind the development of the library, and said nothing of its proper usage.

Am I going mad? Sebastian asked himself as his machine restarted for the third time. Maybe the code is self-documenting ...

To his horror, but not particularly his surprise, the libraries simply consisted of poorly thought out wrappers around basic string functions from the standard library.

Sebastian gave it his all despite the setbacks. Every day, he was summoned for another round of verbal browbeating. The company had made no progress in the past four years with this font issue, and yet, nothing he did was good enough for the president. Sebastian gave up on the custom library, sticking with the Python he knew; after all, if he was going to be berated anyway, why bother trying to do as he was told? But no matter whether he used his own font tester in Python, or Microsoft's tester, or Apple's, or Adobe's, the font was an absolute mess. 488 intrinsic, unfixable, unkludgable design errors.

The president flatly denied the truth before him. It had to be Sebastian's fault for not using the wonderful C++ libraries.

Out of options, Sebastian left the key to the rusting, collapsing wreck of a garage on his manager's desk along with a letter of resignation. He kissed his lovely office with its decrepit pile of crap they called a machine goodbye. He took a deep breath, letting that disturbing leather smell permeate his nostrils one last time. Then he left, never to return.

Somehow, he doubted he'd miss the place.

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

Planet DebianDirk Eddelbuettel: nanotime 0.1.0: Now on Windows

Last month, we released nanotime, a package to work with nanosecond timestamps. See the initial release announcement for some background material and a few first examples.

nanotime relies on the RcppCCTZ package for high(er) resolution time parsing and formatting: R itself stops a little short of a microsecond. And it uses the bit64 package for the actual arithmetic: time at this granularity is commonly represented as (integer) increments (at nanosecond resolution) relative to an offset, for which the standard epoch of January 1, 1970 is used. int64 types are a perfect match here, and bit64 gives us an integer64. Naysayers will point out some technical limitations with R's S3 classes, but it works pretty much as needed here.
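
The representation itself is simple enough to sketch outside R (an illustration of the idea, not the package's actual code): a signed 64-bit integer of nanoseconds since the epoch covers roughly ±292 years, which is plenty.

```python
# Nanosecond timestamps as 64-bit integers relative to the 1970-01-01 epoch --
# the same representation nanotime gets from bit64, sketched here in Python.
import datetime

EPOCH = datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def to_nanos(dt, extra_ns=0):
    """Whole nanoseconds since the epoch; datetime only carries microseconds,
    so any sub-microsecond detail has to arrive via extra_ns."""
    delta = dt - EPOCH
    micros = (delta.days * 86_400 + delta.seconds) * 1_000_000 + delta.microseconds
    return micros * 1_000 + extra_ns

# One second past the epoch is exactly 1e9 nanoseconds:
one_second = to_nanos(datetime.datetime(1970, 1, 1, 0, 0, 1,
                                        tzinfo=datetime.timezone.utc))
# one_second == 1_000_000_000
```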

The one thing we did not have was Windows support. RcppCCTZ and the CCTZ library it uses need real C++11 support, and the g++-4.9 compiler used on Windows falls a little short, lacking inter alia a suitable std::get_time() implementation. Enter Dan Dillon, who ported this from LLVM's libc++, which led to Sunday's RcppCCTZ 0.2.0 release.

And now we have all our ducks in a row: everything works on Windows too. The lists below summarize the changes for both this release and the initial one last month:

Changes in version 0.1.0 (2017-01-10)

  • Added Windows support thanks to expanded RcppCCTZ (closes #6)

  • Added "mocked up" demo with nanosecond delay networking analysis

  • Added 'fmt' and 'tz' options to output functions, expanded format.nanotime (closing #2 and #3)

  • Added data.frame support

  • Expanded tests

Changes in version 0.0.1 (2016-12-15)

  • Initial CRAN upload.

  • Package is functional and provides examples.

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianBálint Réczey: Debian Developer Game of the Year

I have just finished level one, fixing all RC bugs in packages under my name, even in team-maintained ones. 🙂

Next level is no unclassified bug reports, which is going to be harder since I have just adopted shadow, with 70+ open bugs. :-\

Luckily I can still go for bonus tracks, which means fixing (RC) bugs in others’ packages, but one should not spend all one’s time on those tracks before finishing level 1!

PS: Last time I tried playing a conventional game I ended up fixing it in a few minutes instead.

TEDMeet the 2017 class of TED Fellows and Senior Fellows

TED Fellows and Senior Fellows 2017

Welcome the class of TED2017 Fellows! Representing 12 countries, one tribal nation and an incredible range of disciplines, this year’s Fellows are all leaders in their fields who constantly find new ways to collaborate and bring about positive change. Among those selected are an Ecuadorian neurobiologist working to uncover the neural circuits that connect the gut and the brain, an Afrofuturist filmmaker from Kenya who tells modern stories about Africa, a Chinese entrepreneur and venture capitalist tackling global food system challenges, an Indian investigative journalist exploring discrimination around the world, and many more.

Below, meet the new group of Fellows who will join us at TED2017, April 24-28 in Vancouver, BC.

TED2017 Fellows

Karim Abouelnaga
Karim Abouelnaga (USA)
Education entrepreneur
Founder and CEO of Practice Makes Perfect, a summer school operator, which addresses the summer learning loss in low-income communities by connecting younger students with mentors from their neighborhood for leadership development, academic instruction and career training.

Karim Abouelnaga speaks to a group of students participating in the Practice Makes Perfect summer program.

Christopher Ategeka
Christopher Ategeka (Uganda + USA)
Healthcare entrepreneur
Ugandan founder of Health Access Corps, which is addressing the uneven distribution of health professionals across the African continent by compensating and supporting trained healthcare professionals to stay and serve their local communities.

Diego Bohorquez
Diego Bohorquez (Ecuador + USA)
Gut-brain neurobiologist
Ecuadorian neuroscientist studying the neural pathways linking the brain and the gut, and how these connections affect human behavior and disease, from Parkinson’s to autism.

Rebecca Brachman
Rebecca Brachman (USA)
Neuroscientist + entrepreneur
Neuroscientist studying how the brain, immune system, and stress interact and co-founder of a biotech startup working to develop the first prophylactic drugs to prevent mental illness and increase resilience to stress.

Kayla Briët
Kayla Briët (Prairie Band Potawatomi Nation + USA)
Filmmaker + composer
Mixed-cultural artist infusing her Neshnaabe, Chinese, and Dutch-Indonesian heritage in multiple mediums of storytelling: film, virtual reality, and music – from orchestral to electronic.

Armando Azua-Bustos
Armando Azua-Bustos (Chile)
Chilean astrobiologist studying how microbial life has adapted to survive in the Atacama Desert, the driest place on Earth, and what this means for our search for life on Mars.

The extremely low water availability, high salinity and high UV radiation present in the Atacama Desert make it the closest analog to Mars on Earth. (Photo: Clair Popkin)

Reid Davenport
Reid Davenport (USA)
Documentary filmmaker
Documentary filmmaker focused on telling stories about people with disabilities, who incorporates the physicality of his own disability into his craft.

Damon Davis
Damon Davis (USA)
Interdisciplinary artist
Musician, visual artist and filmmaker working at the intersection of art and activism, exploring the experience of contemporary Black Americans. His documentary Whose Streets, which will premiere at Sundance 2017, tells the story of the 2014 protests in Ferguson, Missouri from the perspective of those who lived it.

Matilda Ho
Matilda Ho (China)
Food entrepreneur + investor
Chinese founder of Bits x Bites, China’s first food tech-focused accelerator VC that invests in startups solving systematic food challenges. She also founded Yimishiji, China’s first online farmers market that has engineered food education and transparency into the entire supply chain and customer experience.

Wanuri Kahiu
Wanuri Kahiu (Kenya)
Kenyan Afro-futurist filmmaker using the science fiction and fantasy genres to tell modern African stories.

Mei Lin Neo
Mei Lin Neo (Singapore)
Marine biologist
Singaporean marine ecologist and conservationist studying the endangered giant clams of the Indo-Pacific, and promoting ways to protect these rare marine species from going extinct.

A giant clam in the wild, which can grow to weigh more than 440 pounds and has an average lifespan of 100 years. Singaporean marine ecologist Mei Lin Neo studies these endangered species in an effort to protect them from going extinct. (Photo: Mei Lin Neo)

Lauren Sallan
Lauren Sallan (USA)
Paleobiologist using the vast fossil record as a deep time database to explore how global events, environmental change and ecological interactions affect long-term evolution. She is particularly interested in what past mass extinctions of fish can tell us about modern climate change.

Anjan Sundaram
Anjan Sundaram (India)
Author + investigative journalist
Author and investigative journalist reporting on 21st century dictatorships, forgotten conflicts and discrimination around the world – from the Democratic Republic of Congo to Rwanda and India.

Stanford Thompson
Stanford Thompson (USA)
Trumpeter + music educator
Founder and CEO of Play on Philly, a music education and social development program that engages underserved Philadelphia youth in ensemble music-making. Stanford is an award-winning trumpeter who has performed and soloed with major orchestras around the world while actively performing chamber music and jazz.

A young student learns to play the flute with Play on Philly, a music education and social development program founded by Stanford Thompson that engages underserved Philadelphia youth in ensemble music-making. (Photo: David DeBalko)

Elizabeth Wayne
Elizabeth Wayne (USA)
Biomedical engineer + STEM advocate
Biomedical engineer working to enhance the ability of immune cells to deliver genetic material to tumors and co-host of PhDivas, a podcast about women in higher education.

2017 Senior Fellows

We’re also excited to share our new class of Senior Fellows for TED2017. We honor our Senior Fellows with an additional two years of engagement in the TED community, offering continued support to their work while they, in turn, give back and mentor new Fellows and enrich the community as a whole. They embody the values of the TED Fellows program.

Laura Boykin
Laura Boykin (USA + Australia)
Computational biologist
Biologist using genomics and supercomputing to combat hunger in sub-Saharan Africa and increase food security. Laura helps smallholder farmers in sub-Saharan Africa control whiteflies and the viruses they transmit which have caused devastation of local cassava crops, a staple food in many countries.

Computational biologist Laura Boykin works with local farmers and scientists in Zambia to study the effects of whiteflies and viruses on cassava crops. Pictured from left to right: Dr. Titus Alicai, Dr. Monica Kehoe, Dr. Joseph Ndunguru, Dr. Peter Sseruwagi, Dr. Laura Boykin, Prof. Elijah Ateka. (Photo: Monica Kehoe)

Jedidah Isler
Jedidah Isler (USA)
Astrophysicist + inclusion activist
Award-winning astrophysicist and advocate for inclusive STEM education. In 2014, she became the first African American woman to receive a PhD in astrophysics from Yale. Jedidah founded VanguardSTEM, a nonprofit committed to creating conversations between emerging and established women of color in STEM.

Amanda Nguyen
Amanda Nguyen (USA)
Founder and president of Rise, a national nonprofit working with state legislatures to implement a Sexual Assault Survivor Bill of Rights. Her bill was recently passed unanimously in Congress, making it only the 21st bill to be passed unanimously in United States history.

Andrew Pelling
Andrew Pelling (Canada)
Scientist + biohacker
Canadian scientist using novel, low-cost, open source materials – such as LEGOs and apples – for next generation medical innovations and founder of pHacktory, a community-driven research lab.

Sarah Sandman
Sarah Sandman (USA)
Artist + designer
Artist and designer creating experiences to amplify messages of social and environmental justice, such as Brick x Brick, a public art performance inspired by the 2016 US election that builds human “walls” against the language of misogyny.

Participants of Brick x Brick, a public art performance co-organized by designer Sarah Sandman, stand outside the Philadelphia Museum of Art in protest to misogynistic language in contemporary US politics. (Photo: Joey Foster Ellis)

Parmesh Shahani
Parmesh Shahani (India)
Writer + LGBTQ activist
Indian writer and founder of the Godrej India Culture Lab – an experimental ideas space that works on innovation and diversity in corporate India.

Trevor Timm
Trevor Timm (USA)
Investigative journalist + free speech advocate
Co-founder and executive director of Freedom of the Press Foundation, a non-profit that supports and defends journalism dedicated to transparency and accountability.

E Roon Kang
E Roon Kang (USA + South Korea)
Graphic designer
Korean graphic designer and artist operating Math Practice, a design and research studio in New York City that explores computational techniques and studies their implications in graphic design. E Roon, whose work In Search of Personalized Time was acquired by LACMA, is an assistant professor at Parsons School of Design.

A collection of custom, personal timekeepers outside LACMA — a part of E Roon Kang’s collaboration project In Search of Personalized Time, aiming to recalibrate time based on personal perceptions. (Photo: E Roon Kang and Taeyoon Choi)

Prumsodun Ok
Prumsodun Ok (Cambodia)
Interdisciplinary artist
Choreographer whose work is dedicated to the ancient art of Cambodian classical dance that was nearly annihilated by the Khmer Rouge. Prumsodun founded Cambodia’s first all-male and gay-identified dance company, whose work merges classical Cambodian and modern dance to subvert gender norms and stereotypes.

Janet Iwasa
Janet Iwasa (USA)
Molecular animator
Biologist and molecular animator at the University of Utah and founder of 1 μm Illustration, Janet uses 3D animation software to create molecular and cellular visualizations – such as how the HIV virus hijacks human cells – used by researchers around the world to visualize, explore and communicate their hypotheses.

Krebs on SecurityExtortionists Wipe Thousands of Databases, Victims Who Pay Up Get Stiffed

Tens of thousands of personal and possibly proprietary databases that were left accessible to the public online have just been wiped from the Internet, replaced with ransom notes demanding payment for the return of the files. Adding insult to injury, it appears that virtually none of the victims who have paid the ransom have gotten their files back because multiple fraudsters are now wise to the extortion attempts and are competing to replace each other’s ransom notes.

At the eye of this developing data destruction maelstrom is an online database platform called MongoDB. Tens of thousands of organizations use MongoDB to store data, but it is easy to misconfigure and leave the database exposed online. If installed on a server with the default settings, for example, MongoDB allows anyone to browse the databases, download them, or even write over them and delete them.
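
To illustrate the fix (a hedged sketch, not something from the article itself): closing that hole takes only a couple of lines in mongod.conf, binding the listener to localhost and requiring authentication instead of accepting the wide-open defaults of older releases:

```yaml
# /etc/mongod.conf -- illustrative hardening excerpt, not a complete config
net:
  bindIp: 127.0.0.1        # listen only on localhost, not on every interface
security:
  authorization: enabled   # require authenticated users for reads and writes
```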

Shodan, a specialized search engine designed to find things that probably won't be picked up by Google, lists the number of open, remotely accessible MongoDB databases available as of Jan. 10, 2017.

This blog has featured several stories over the years about companies accidentally publishing user data via incorrectly configured MongoDB databases. In March 2016, for example, KrebsOnSecurity broke the news that Verizon Enterprise Solutions managed to leak the contact information on some 1.5 million customers because of a publicly accessible MongoDB installation.

Point is, this is a known problem, and almost once a week some security researcher is Tweeting that he’s discovered another huge open MongoDB database. There are simple queries that anyone can run via search engines like Shodan that will point to all of the open MongoDB databases out there at any given time. For example, the latest query via Shodan (see image above) shows that there are more than 52,000 publicly accessible MongoDB databases on the Internet right now. The largest share of open MongoDB databases are here in the United States.

Normally, when one runs a query on Shodan to list all available MongoDB databases, what one gets in return is a list of variously-named databases, and many databases with default filenames like “local.”

But when researcher Victor Gevers ran that same query earlier this week, he noticed that far too many of the database listings returned by the query had names like “readme,” “readnow,” “encrypted” and “readplease.” Inside each of these databases is exactly one file: a database file that includes a contact email address and/or a bitcoin address and a payment demand.

Researcher Niall Merrigan, a solutions architect for French consulting giant Cap Gemini, has been working with Gevers to help victims on his personal time, and to help maintain a public document that’s live-chronicling the damage from the now widespread extortion attack. Merrigan said it seems clear that multiple actors are wise to the scam because if you wait a few minutes after running the Shodan query and then re-run the query, you’ll find the same Internet addresses that showed up in the database listings from the previous query, but you’ll also notice that many now have a different database title and a new ransom note.

Merrigan and Gevers are maintaining a public Google Drive document (read-only) that is tracking the various victims and ransom demands. Merrigan said it appears that at least 29,000 MongoDB databases that were previously published online are now erased. Worse, hardly anyone who’s paid the ransom demands has yet received their files back.

A screen shot of the Google Drive document that Merrigan is maintaining to track the various ransom campaigns. This tab lists victims by industry. As we can see, many have paid the ransom but none have reported receiving their files back.

“It’s like the kidnappers keep delivering the ransom notes, but you don’t know who has the actual original data,” Merrigan said. “That’s why we’re tracking the notes, so that if we see the [databases] are being exfiltrated by the thieves, we can know the guys who should actually get paid if they want to get their data back.”

For now, Merrigan is advising victims not to pay the ransom. He encouraged those inclined to do so anyway to demand “proof of life” from the extortionists — i.e., request that they share one or two of the deleted files to prove that they can restore the entire cache.

Merrigan said the attackers appear to be following the same plan of attack: use Shodan to scan for open MongoDB databases, connect using anonymous access, and then list all available databases. The attacker may or may not download the data before deleting it, leaving in its place a single database file with the extortionist's contact and payment info and ransom note.

Merrigan said it’s unclear what prompted this explosion in extortion attacks on MongoDB users, but he suspects someone developed a “method” for extorting others that was either shared, sold or leaked to other ne’er-do-wells, who then began competing to scam each other — leaving victims in the lurch.

“It’s like the early 1800s gold rush in the United States, everyone is just going west at the same time,” Merrigan said. “The problem is, everyone was sold the same map.”

Zach Wikholm, a research developer at New York City-based security firm Flashpoint, said he’s confirmed that at least 20,000 databases have been deleted — possibly permanently.

“You’re looking at over 20,000 databases that have gone from being useful to being encrypted and held for ransom,” Wikholm said. “I’m not sure the Internet as a whole has ever experienced anything like this at one time. The fact that we can pull down the number of databases that have been compromised and are still compromised is not a good sign. It means that most victims are unaware what has happened, or they’re not sure how it’s happened or what to do about it.”

Normally, I don’t have great timing, but yesterday’s post on Immutable Truths About Data Breaches seems almost prescient given this developing attack. Truth 1: “If you connect it to the Internet, someone will try to hack it.” Truth 2: “If what you put on the Internet has value, someone will invest time and effort to steal it.” Truth 3: “Organizations and individuals unwilling to spend a small fraction of what those assets are worth to secure them against cybercrooks can expect to eventually be relieved of said assets.”

H/T to Graham Cluley for a heads-up on this situation.

Update, 1:55 p.m. ET: Clarified the default settings of a MongoDB installation. Also clarified story to note that Gevers discovered the extortion attempts.

Worse Than FailureCodeSOD: Mapping Every Possibility

Capture all

Today, Aaron L. shares the tale of an innocent little network mapping program that killed itself with its own thoroughness:

I was hired to take over development on a network topology mapper that came from an acquisition. The product did not work except in small test environments. Every customer demo was a failure.

The code below was used to determine if two ports on two different switches are connected. This process was repeated for every switch in the network. As the number of switches, ports, and MAC addresses increased, the run time of the product went up exponentially and typically crashed with an array index out of bounds exception. The code below is neatly presented; the actual code took me over a day of repeatedly saying "WTF?" before I realized the original programmer had no idea what a Map or Set or List was. But even after eliminating the arrays, the flawed matching algorithm was still there, and so shortly all of the acquired code was thrown away and the mapper was rewritten from scratch with more efficient ways of connecting switches.

public class Switch {
    Array[] allMACs = new Array[numMACs];
    Array[] portIndexes = new Array[numPorts];
    Array[] ports = new Array[numPorts];

    public void load() {
        // load allMACs by reading switch via SNMP
        // pseudocode to avoid lots of irrelevant SNMP code
        int portCounter = 0;
        int macCounter = 0;
        for each port {
            ports[portCounter] = port;
            portIndexes[portCounter] = macCounter;
            for each MAC on port {
                allMACs[macCounter++] = MAC;
            }
            portCounter++;
        }
    }

    public Array[] getMACsForPort(int port) {
        int startIndex;
        int endIndex;
        for (int ictr = 0; ictr < ports.length; ictr++) {
            if (ports[ictr] == port) {
                startIndex = portIndexes[ictr];
                endIndex = portIndexes[ictr + 1];
            }
        }
        Array[] portMACS = new Array[endIndex - startIndex];
        int pctr = 0;
        for (int ictr = startIndex; ictr < endIndex - 1; ictr++) {
            portMACS[pctr++] = allMACs[ictr];
        }
        return portMACS;
    }
}

for every switch in the network {
    for every other switch in the network {
        for every port on switch {
            Array[] switchPortMACs = Switch.getMACsForPort(port);
            for every port on other switch {
                Array[] otherSwitchPortMACs = OtherSwitch.getMACsForPort(other port);
                if (intersect switchPortMACs with otherSwitchPortMACs == true) {
                    connect switch.port with otherSwitch.port;
                }
            }
        }
    }
}
[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!

Sam VargheseBig Bash League set for expansion and mediocrity

Cricket Australia is all set to expand the number of Big Bash teams next year – and in the process slowly begin killing the goose that has so far laid many 22-carat eggs.

Now in its sixth year, the BBL had been an overwhelming success until last year, but there are signs that people would prefer that things remain as they are.

For example, the biggest crowd last year was for the clash between the two Melbourne teams, the Renegades and the Stars. A total of 80,883 turned up for the first clash between these two teams in 2015-16.

This year, 2016-17, the crowd for the corresponding game was nearly 10,000 fewer. Should Cricket Australia not take a hint from occurrences like this? Crowds in 2016-17 have, on the whole, been lower than in 2015-16.

As of today, 22 matches have been played; there are another 10 to go before the semi-finals and final. Only in two games have teams been asked to chase 200 or more. That means only two teams, the Brisbane Heat and the Melbourne Stars, have managed to make 200 or more.

Most of the games have been one-sided. Just two games have gone down to the last ball. Not a single century has been scored.

Overall many of the players seem to be jaded. That is not surprising for there are now so many Twenty20 leagues around the world — Pakistan (played in the UAE), the West Indies, New Zealand, Sri Lanka, India, and Bangladesh all have their own leagues — that many players who are now literally T20 mercenaries come to the BBL after having played in at least a few of these competitions.

If they are mentally tired at the end of the year, who can blame them? They are playing as much as they can for it is their livelihood. They have only a few years in which they can earn money from this form of the game.

The TV commentators make the game unwatchable. A host of former Australian players form the commentary team, and to say they are mediocre would be paying them a compliment. T20 cricket already offers heightened action, but these ex-players try to hype up everything. They have limited vocabularies and dumb things down to an incredible level.

Damien Fleming and Adam Gilchrist are horrible at the mic, and it is clear that they are there for the money. Both were competent cricketers but have reached their level of incompetence as commentators. Gilchrist makes one cringe; he cannot speak a sentence without acting as an arse-licker of a very high order.

Some of the other commentators have clear conflicts of interest: Mark Waugh is a national selector and it is unethical for him to sit in the commentary box making comments about players whose futures he could well decide. But then one would recall that he is the same person who took money from a bookmaker when he was a player. The same goes for Ricky Ponting who is now an assistant coach for the national T20 team.

But hey, who gives a flying f*** these days? There’s good money available to these poor-quality commentators so they take it and run. Not that they need it. They lack the integrity to act in an ethical way.

Back to Cricket Australia and its expansion plans. One doubts that its chief executive James Sutherland will bother much about whether crowds grow or whether people watch; after all, CA will make its money before a single ball is bowled. The TV contract will increase, the TV channel in question, Channel 10, will welcome the additional games, and all will be right with the world.

This year there are 32 games; each team will play the others and the two Melbourne teams, the two Sydney teams, Adelaide and Hobart, and Brisbane and Perth, will play each other twice. Once the expansion is complete, that number of games will increase. Do people want to see more and more ordinary games that are won by big margins or do they want to see better games that go down to the wire?

Planet DebianVincent Fourmond: Version 2.1 of QSoas is out

I have just released QSoas version 2.1. It brings a new solve command to solve arbitrary non-linear equations of one unknown (I took advantage of this command to solve the equation shown in the figure). It also provides a new way to reparametrize fits using the reparametrize-fit command, and a new series of fits to model the behaviour of an adsorbed 1- or 2-electron catalyst on an electrode (these fits are discussed in great detail in our recent review, DOI: 10.1016/j.coelec.2016.11.002). There are also improvements in various commands, the possibility to compile using Ruby 2.3 and the most recent version of the GSL library, and sketches for an emacs major mode, which you can activate (for QSoas script files, ending in .cmds) using the following snippet in $HOME/.emacs:

(autoload 'qsoas-mode "$HOME/Prog/QSoas/misc/qsoas-mode.el" nil t)
(add-to-list 'auto-mode-alist '("\\.cmds$" . qsoas-mode))

Of course, you'll have to adapt the path $HOME/Prog/QSoas/misc/qsoas-mode.el to the actual location of qsoas-mode.el.
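The solve command finds a root of a one-unknown equation numerically (QSoas links against the GSL, which provides such root finders). As a rough illustration of the idea only — not QSoas's actual code — here is a bisection sketch in Python; the example equation is arbitrary:

```python
# Illustrative only: a bisection root finder, sketching what a "solve"
# command for one-unknown equations does. QSoas itself relies on GSL's
# solvers; the function below is an arbitrary example.
def bisect(f, lo, hi, tol=1e-12, max_iter=200):
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("f(lo) and f(hi) must bracket a root")
    for _ in range(max_iter):
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if abs(fmid) < tol or (hi - lo) < tol:
            return mid
        if flo * fmid <= 0:
            hi, fhi = mid, fmid
        else:
            lo, flo = mid, fmid
    return 0.5 * (lo + hi)

# Example: solve x**3 - x - 2 = 0 on the bracket [1, 2]
root = bisect(lambda x: x**3 - x - 2, 1.0, 2.0)
```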

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too. Enjoy!

Planet Linux AustraliaMatthew Oliver: Make quasselcore listen on port 443

I use IRC in my day to day job. I am a professional open source developer, so what else would I use?

For the last few years I have been using Quassel, with the core component sitting on a cloud server, which allows me to have clients running on my phone, laptop, desktop… really, wherever. However, sometimes you find yourself at a place with a firewall that port-filters. If you're lucky you might be able to ssh out, and thereby get away with using an ssh tunnel. But I found it much easier to just get the quasselcore to listen on port 443 rather than the default 4242.

Changing the port it listens on is easy. If you're using Debian (or Ubuntu) you just need to change/add /etc/default/quasselcore to have:


But that is only half the battle. 443 is a privileged port, so the default user, quasselcore, doesn't have the rights to bind to that port. So we have two options:

  1. run the daemon as root
  2. Use setcap to allow the daemon to bind to privileged ports.

The first is easy, but a little dirty: simply change the user in the default file or in the init script. But option 2 is much cleaner, and actually not that hard.

First you need to make sure you have setcap installed:

sudo apt-get install libcap2-bin

Now we simply need to bless the quasselcore binary with the required capability:

sudo setcap 'cap_net_bind_service=+ep' /usr/bin/quasselcore

Now when you start quasselcore you’ll see it listening on port 443:

sudo netstat -ntlp |grep quassel


Krebs on SecurityKrebs’s Immutable Truths About Data Breaches

I’ve had several requests for a fresh blog post to excerpt something that got crammed into the corner of a lengthy story published here Sunday: A list of immutable truths about data breaches, cybersecurity and the consequences of inaction.

Here’s the excerpt requested from yesterday’s story:

“There are some fairly simple, immutable truths that each of us should keep in mind, truths that apply equally to political parties, organizations and corporations alike:

-If you connect it to the Internet, someone will try to hack it.

-If what you put on the Internet has value, someone will invest time and effort to steal it.

-Even if what is stolen does not have immediate value to the thief, he can easily find buyers for it.

-The price he secures for it will almost certainly be a tiny slice of its true worth to the victim.

-Organizations and individuals unwilling to spend a small fraction of what those assets are worth to secure them against cybercrooks can expect to eventually be relieved of said assets.”

They may not be complete, but as a set of truisms these tenets probably will age pretty well. After all, taken as a whole they are practically a model Cybercriminal Code of Ethics, or a cybercrook’s social contract.

Nevertheless, these tenets might be even more powerful if uttered in the voice of the crook himself. That may be more in keeping with the theme of this blog overall, which seeks to explain cybersecurity and cybercrime concepts through the lens of the malicious attacker (often this is a purely economic perspective).

So let’s rifle through this ne’er-do-well’s bag of tricks, tools and tells. Let us borrow from his literary perspective. I imagine a Cybercriminal Code of Ethics might go something like this (again, in the voice of a seasoned crook):

-If you hook it up to the Internet, we’re gonna hack at it.

-If what you put on the Internet is worth anything, one of us is gonna try to steal it.

-Even if we can’t use what we stole, it’s no big deal. There’s no hurry to sell it. Also, we know people.

-We can’t promise to get top dollar for what we took from you, but hey — it’s a buyer’s market. Be glad we didn’t just publish it all online.

-If you can’t or won’t invest a fraction of what your stuff is worth to protect it from the likes of us, don’t worry: You’re our favorite type of customer!

Planet DebianSean Whitton: jan17vcspkg

There have been two long threads on the debian-devel mailing list about the representation of the changes to upstream source code made by Debian maintainers. Here are a few notes for my own reference.

I spent a lot of time defending the workflow I described in dgit-maint-merge(7) (which was inspired by this blog post). However, I came to be convinced that there is a case for a manually curated series of patches for certain classes of package. It will depend on how upstream uses git (rebasing or merging) and on whether the Debian delta from upstream is significant and/or long-standing. I still think that we should be using dgit-maint-merge(7) for leaf or near-leaf packages, because it saves so much volunteer time that can be better spent on other things.

When upstream does use a merging workflow, one advantage of the dgit-maint-merge(7) workflow is that Debian’s packaging is just another branch of development.

Now consider packages where we do want a manually curated patch series. It is very hard to represent such a series in git. The only natural way to do it is to continually rebase the patch series against an upstream branch, but public branches that get rebased are not a good idea. The solution that many people have adopted is to represent their patch series as a folder full of .diff files, and then use gbp pq to convert this into a rebasing branch. This branch is not shared. It is edited, rebased, and then converted back to the folder of .diff files, the changes to which are then committed to git.

One of the advantages of dgit is that there now exists an official, non-rebasing git history of uploads to the archive. It would be nice if we could represent curated patch series as branches in the dgit repos, rather than as folders full of .diff files. But as I just described, this is very hard. However, Ian Jackson has the beginnings of a workflow that just might fit the bill.

Falkvinge - Pirate PartyIn science fiction, robot witnesses to crime are seen as normal. Nobody considered the privacy implications for present day.

Robots dressed in business suits

New World: The police want the cooperation of a robotic witness in a murder case, requesting Amazon's help in recalling what the domestic robot “Echo” heard in the room. Robotic witnesses have been a theme in science fiction for a long time — and yet, we forgot to ask the most obvious and most important questions. Maybe we just haven't realized that we're in science-fiction territory, as far as robotic agents go, and explored the consequences of it: which robots have agency, and who can be coerced?

People were outraged that the police would consider asking a robot – the Amazon Echo – what happened in the recent murder case, effectively activating retroactive surveillance. Even more so, people were outraged that the police tried to coerce the robot's manufacturer to provide the data, coercing a third party to command the robot it manufactured, and denying agency to the people searched.

In Isaac Asimov’s The Naked Sun, a human detective is sent off to faraway Solaris to investigate a murder, and has to interview a whole range of robot servants, each with their own perspective, to gradually piece together how the murder took place. A cooking robot knows about the last dinner of the victim, and can provide details only of that, and so on. Still, each and every robot has a perfect recollection of its particular perspective.

When reading this story in my teens, I didn’t reflect at all on the concept of a detective interviewing the victim’s robots. Once robots had data and could communicate, it felt like a perfectly normal view of things to come: they were witnesses to the scene, after all, and with perfect memory and objective recall thereof to boot. If anything, robots were more reliable and more desirable witnesses, because they wouldn’t lie for their own material benefit.

Today, we don’t see a toaster as having the agency required to be a legal witness. We surely don’t see an electronic water meter as being a conscious robot. We don’t consider our television set to have agency, able to answer questions about what it saw us do and heard us say in our living room, the way a robot could be asked in a science fiction novel.

But is checking an electronic water meter’s log file really that different from asking a futuristic gardener robot what happened? And if so, what is the difference, apart from the specific way of asking (reality’s robots aren’t nearly as cool as the science fiction ones, at least not yet)?

Our world is full of sensors. That part can’t be expected to change. On Solaris, there were ten thousand robots per human being. I would not be surprised if there aren’t at least a hundred sensors per household already in the Western world.

It comes down to ownership of – and agency of – these hundreds of sensors. Are they semi-independent? Can they be coerced by a government agency, against their owner’s consent? Can a government coerce a manufacturer to coerce their robots, negating property rights and consent rights?

These questions are fundamental. And their answers have enormous privacy implications. If society decides that today’s sensors-with-some-protointelligence are the equivalent of science fiction’s future Asimovian robots, then we’re already surrounded by hundreds of perfect witnesses to everything we do, all the time.

The science fiction authors wrote stories about how robots obeyed every human command (the “second law”). What the writers don’t seem to have anticipated is that humans have always used tools – and therefore also robots – to project the force of conflicting interests against each other. When one human orders another human’s robot to betray its owner, as in the Amazon Echo case, which human has priority, and why?

Privacy remains your own responsibility.

Syndicated article
This article has previously appeared on Private Internet Access.

(This is a post from Falkvinge on Liberty, obtained via RSS at this feed.)

Planet DebianShirish Agarwal: The Great Indian Digital Tamasha

Indian Railways

This is an extension of last month’s article, where I had shared the changes that had transpired over the previous 2-3 months. Now I am in a position to share the kind of issues a user can go through when looking for support from IRCTC to go cashless. If you are a brand-new user of IRCTC services, you wouldn’t face this trouble.

For those who might have TL;DR issues, it’s about how hard it can become to get digital credentials fixed with IRCTC (Indian Railway Catering and Tourism Corporation) –

a. Two months back the Indian Prime Minister gave a call incentivizing people to use digital means for commercial activities. One of the big organizations taking part is IRCTC, which handles the e-ticketing of millions of rail tickets for common people. In India, a massive percentage of people move by train, as it’s cheaper than going by air.

A typical fare from say Pune – Delhi (capital of India) by second class sleeper would be INR 645/- for a distance of roughly 1600 odd kms and these are monopoly rates, there are no private trains and I’m not suggesting anything of that sort, just making sure that people know.

An economy class ticket by Air for the same distance would be anywhere between INR 2500-3500/- for a 2 hour flight between different airlines. Last I checked there are around 8 mainstream airlines including flag-carrier Air India.

About 30% of the population live on less than a dollar and a half a day, which comes to around INR 100/-.

There was a comment some six months back about getting more people out of poverty. But as there is a lot of manipulation in the numbers defining who counts as above or below the poverty line in India, and a lot of it has to do with politics, it is not something easily fixed.

There are lots to be said in that arena but this article is not an appropriate blog-post for that.

All in all, it’s only 3-5% of the population at most who can travel by air if the situation demands, and around 1-2% who might be frequent business or leisure travellers.

Now while I can thankfully afford an air ticket if the situation demands, my mother gets motion sickness, so together we can only travel by train.

b. With the above background: I had registered with IRCTC a few years ago with another number (a dual-SIM I had purchased), thinking that I would be using it long-term (my first big mistake, in hindsight). This was somewhere in 2006/2007.

c. A few months later I found that the other service provider wasn’t giving good service or was not up to the mark. I was using IDEA (my main mobile operator) throughout those times.

d. As I didn’t need the service that much, I didn’t think to inform them at that point that I wanted to change to another service provider (possibly the biggest mistake, in hindsight).

e. In July 2016 itself, IRCTC cut service fees.

f. This was shared as a NEW news item/policy decision at November-end 2016.

g. While I have done all that irctc-care has asked, I still haven’t got the issues resolved 😦 IRCTC’s e-mail id –

Now in detail –

This is my first e-mail sent to IRCTC in June 2016 –

Dear Customer care,

I had applied and got username and password sometime back . The
number I had used to register with IRCTC was xxxxxxxxxx (BSNL mobile number not used anymore) . My mobile was lost and along with that the number was also lost. I had filed a complaint with the police and stopped that number as well. Now I have an another mobile number but have forgotten both the password and the security answer that I had given when I had registered . I do have all the conversations I had both with the as well as if needed to prove my identity.

The new number I want to tie it with is xxxxxxxxxx (IDEA number in-use for last 10 years)

I see two options :-

a. Tie the other number with my e-mail address

b. Take out the e-mail address from the database so that I can fill in
as a new applicant.

Looking forward to hear from you.

There was lot of back and forth with various individuals on IRCTC and after a lot of back and forth, this is the final e-mail I got from them somewhere in August 2016, he writes –

Dear Customer,

We request you to send mobile bill of your mobile number if it is post paid or if it is prepaid then contact to your service provider and they will give you valid proof of your mobile number or they will give you in written on company head letter so that we may update your mobile number to update so that you may reset your password through mobile OTP.
and Kindly inform you that you can update your profile by yourself also.

1.login on IRCTC website
2.after login successfully move courser on “my profile” tab.
3.then click on “update profile” your password then you can update your profile on user-profile then email id.
6. click on update.

Still you face any problem related to update profile please revert to us with the screen shots of error message which you will get at the time of update profile .

Thanks & Regards

Parivesh Patel
Executive, Customer Care

IRCTC’s response seemed responsible and valid, and I thought it would be a cake-walk, as private providers are supposed to be much more efficient than public ones. The experience proved how wrong I was to trust them to do the right thing –

1. First I tried the IDEA twitter handle, to see how they would respond.

2. The IDEA customer care twitter handle was mild in its response.

3. After some time I realized that the only way out of this quagmire would perhaps be to go to a brick-and-mortar shop and get it resolved face-to-face. I went twice or thrice, but each time something or the other would happen.

On the fourth and final time, I was able to get to the big ‘Official’ shop, only to be told they couldn’t do anything about this and that I would have to go to the appellate body to get a reply.

The e-mail address which they shared (as I found out later) was wrong. I sent a somewhat longish e-mail sharing all the details and got bounce-backs. The correct e-mail address for the IDEA Maharashtra appellate body is –

I searched online and after a bit of hit and miss finally got the relevant address. Then finally on 30th December, 2016 wrote a short email to the service provider as follows –

Dear Sir,
I have been using prepaid mobile connection –

number – xxxxxxx

taken from IDEA for last 10 odd years.

I want to register myself with IRCTC for online railway booking using
my IDEA mobile number.

Earlier, I was having a BSNL connection which I discontinued 4 years back,

For re-registering myself with IRCTC, I have to fulfill their latest
requirements as shown in the email below .

It is requested that I please be issued a letter confirming my
credentials with your esteemed firm.

I contacted your local office at corner of Law College Road and
Bhandarkar Road, Pune (reference number – Q1 – 84786060793) who
refused to provide me any letter and have advised me to contact on the
above e-mail address, hence this request is being forwarded to you.

Please do the needful at your earliest.

Few days later I got this short e-mail from them –

Dear Customer,

Greetings for the day!

This is with reference to your email regarding services.

Please accept our apologies for the inconvenience caused to you and delay in response.

We regret to inform you that we are unable to provide demographic details from our end as provision for same is not available with us.

Should you need any further assistance, please call our Customer Service help line number 9822012345 or email us at by mentioning ten digit Idea mobile number in subject line.

Thanks & Regards,

Javed Khan

Customer Service Team

IDEA Cellular Limited- Maharashtra & Goa Circle.

Now I was almost at my wit’s end. A few days before, I had re-affirmed my e-mail address with IDEA. I went to the IDEA care site and registered with my credentials. The https connection to the page is weak, but let’s not dwell on that atm.

I logged into the site, went through all the drop-down menus, and came across the My Account &gt; Raise a request link, which I clicked on. This led to a page where I could raise requests for various things. One of the options given there was Bill Delivery. As I was a prepaid rather than a postpaid user, I didn’t know whether that would work, but I still clicked on it. It said it would take 4 days. I absently filed it away, somewhat sure that nothing would happen, going by my previous experience with IDEA. But this time the IDEA support staff came through and shared a toll-free SMS number and message format that I could use to generate call details for the last 6 months.

The toll-free number from IDEA is 12345 and the message format is EBILL MON (short form for the month; January would be jan, and so on).

After gathering all the required credentials, I sent my last mail to IRCTC about a week to 10 days back –

Dear Mr. Parivesh Patel,

I was out-of-town and couldn’t do the needful so sorry for the delay.
Now that I’m back in town, I have been able to put together my prepaid
bills of last 6 months which should make it easy to establish my

As had shared before, I don’t remember my old password and the old
mobile number (BSNL number) is no longer accessible so can’t go
through that route.

Please let me know the next steps in correcting the existing IRCTC
account (which I haven’t operated ever) so I can start using it to
book my tickets.

Look forward to hearing from you.

Haven’t heard anything from them since, apart from the auto-generated token number that arrives each time you send a reply. This time it was #4763548.

The whole sequence of events throws a lot of troubling questions –

a. Could IRCTC have done a better job of articulating their needs to me instead of the run-around I was given?

b. Shouldn’t there be a time limit on accounts from which no transactions have been done? I hadn’t done a single transaction since registering. When cell service providers, including BSNL, take a number out of service after a year of disuse, why is that account active for so long?

c. As that account didn’t have OTP at registration, I don’t know whether it’s being used for illegal activities or something.

Update – This doesn’t seem to be unique at all. Just sampling some of the tweets by people at @IRCTC_LTD shows how common the situation really is.

Filed under: Miscellaneous Tagged: #customer-service, #demonetization, #IDEA-aditya birla, #IRCTC, #web-services, rant

Worse Than FailureHealthcare Can Make You Sick

Every industry has information that needs to be moved back and forth between disparate systems. If you've lived a wholesome life, those systems are just different applications on the same platform. If you've strayed from the Holy Path, those systems are written using different languages on different platforms running different operating systems on different hardware with different endian-ness. Imagine some Java app on Safari under some version of Mac OS needing to talk to some version of .NET under some version of Windows needing to talk to some EBCDIC-speaking version of COBOL running on some mainframe.

Long before anyone envisioned the above nightmare, we used to work with SGML, which devolved into XML, which was supposed to be a trivial, or at least tolerable, way to define the format and fields contained in a document, with parsers on every platform, so that information could be exchanged without either end needing to know anything more than the DTD and/or schema for purposes of validation and parsing.

In a hopeful attempt at making this somewhat easier, wrapper libraries were written on top of XML.

Sadly, they failed.

A hand holding a large pile of pills, in front of a background of pills

In the health care industry, some open-source folks created the (H)ealthcare (API), or HAPI project, which is basically an object oriented parser for text-based healthcare industry messages. Unfortunately, it appears they suffered from Don't-Know-When-To-Stop-Syndrome™.

Rather than implementing a generic parser that simply splits a delimited or fixed-format string into a list of text-field-values, the latest version implements 1205 different parsers, each for its own top-level data structure. Most top level structures have dozens of sub-structures. Each parser has one or more accessor methods for each field. Sometimes, a field can be a single instance, or a list of instances, in which case you must programmatically figure out which accessor to use.

That's an API with approximately 15,000 method calls! WTF were these developers thinking?

For example, the class: EHC_E15_PAYMENT_REMITTANCE_DETAIL_INFO can have zero or more product service sections. So right away, I'm thinking some sort of array or list. Thus, instead of something like:

    List<EHC_E15_PRODUCT_SERVICE_SECTION> prodServices = info.getProductServices();
    // iterate

... you need to do one of these:

    // Get sub-structure (accessor names follow HAPI's generated-code pattern)
    EHC_E15_PAYMENT_REMITTANCE_DETAIL_INFO infos =
        message.getPAYMENT_REMITTANCE_DETAIL_INFO();
    // Get embedded product-services from sub-structure

    // ...if you know for certain that there will be exactly one in the message:
    EHC_E15_PRODUCT_SERVICE_SECTION section = infos.getPRODUCT_SERVICE_SECTION();

    // ...if you don't know how many there will be:
    int n = infos.getPRODUCT_SERVICE_SECTIONReps();
    for (int i = 0; i < n; i++) {
        EHC_E15_PRODUCT_SERVICE_SECTION s = infos.getPRODUCT_SERVICE_SECTION(i);
        // use it
    }

    // ...or you can just grab them all and iterate
    for (EHC_E15_PRODUCT_SERVICE_SECTION s : infos.getPRODUCT_SERVICE_SECTIONAll()) {
        // use it
    }
...and you need to call the correct one, or risk an exception. But having multiple ways of accomplishing the same thing via the API leads to multiple ways of doing the same thing in the code that is using the API, which invariably leads to problems.

So you might say, OK, that's not SO bad; you just use what you need. Until you realize that some of these data structures are embedded ten+ levels deep, each with dozens of sub-structures and/or fields, each with multiple accessors. With those really long names. Then you realize that the developers of the HAPI got tired of typing and just started using acronyms for everything, with such descriptive data structure names as: LA1, ILT and PCR.

The API does attempt to be helpful in that if it doesn't find what it's expecting in the field you ask it to parse, it throws an exception and it's up to you to figure out what went wrong. Of course, this implies that you already know what is being sent to you in the data stream.

Anonymous worked in the healthcare industry and was charged with maintaining a library that had been wrapped around HAPI. He was routinely assigned tasks (with durations of several weeks) to simply parse one additional field. After spending far too much time choking down the volumes of documentation on the API, he wrote a generic single-class 300 line parser with some split's, substring's, parseDate's and parseInt's to replace the whole thing.

Now adding an additional field takes all of ten minutes.
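The replacement boils down to treating the message as nested delimited text rather than generating one class per structure. A toy sketch of that idea in Python — the segment and field separators follow the common HL7-style pipe convention, and the sample message and field positions are invented for illustration:

```python
# Toy generic parser: split a pipe-delimited, HL7-style message into
# segments and fields, instead of one generated class per structure.
# The sample message below is made up for illustration.
def parse_message(raw, seg_sep="\r", field_sep="|"):
    segments = {}
    for line in raw.strip().split(seg_sep):
        fields = line.split(field_sep)
        # Key segments by their leading identifier (MSH, PID, OBX, ...)
        segments.setdefault(fields[0], []).append(fields[1:])
    return segments

raw = "MSH|^~\\&|SENDER|RECEIVER\rPID|12345|Doe^John\rOBX|1|NM|GLUCOSE|5.4"
msg = parse_message(raw)
patient_id = msg["PID"][0][0]
glucose = float(msg["OBX"][0][3])
```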


Planet DebianPetter Reinholdtsen: Where did that package go? — geolocated IP traceroute

Did you ever wonder where the web traffic really flows to reach the web servers, and who owns the network equipment it is flowing through? It is possible to get a glimpse of this using traceroute, but it is hard to find all the details. Many years ago, I wrote a system to map the Norwegian Internet (trying to figure out if our plans for a network game service would get low enough latency, and who we needed to talk to about setting up game servers close to the users). Back then I used traceroute output from many locations (I asked my friends to run a script and send me their traceroute output) to create the graph and the map. The output from traceroute typically looks like this:

traceroute to (, 30 hops max, 60 byte packets
 1 (  0.447 ms  0.486 ms  0.621 ms
 2 (  0.467 ms  0.578 ms  0.675 ms
 3 (  0.385 ms  0.373 ms  0.358 ms
 4 (  1.174 ms  1.172 ms  1.153 ms
 5 (  2.627 ms (  3.172 ms (  2.857 ms
 6 (  0.662 ms  0.637 ms (  0.622 ms
 7 (  0.931 ms  0.917 ms  0.955 ms
 8  * * *
 9  * * *

This shows the DNS names and IP addresses of (at least some of) the network equipment involved in getting the data traffic from me to the server, and how long it took in milliseconds for a packet to reach the equipment and return to me. Three packets are sent, and sometimes the packets do not follow the same path. This is shown for hop 5, where three different IP addresses replied to the traceroute request.
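Turning output like the trace above into data is a small parsing job. A minimal sketch in Python; the hostname and address in the sample line are documentation placeholders, since the trace above is anonymized:

```python
import re

# Parse one traceroute hop line of the form:
#   " 5  host.example.net (192.0.2.1)  2.627 ms  3.172 ms  2.857 ms"
# Host and IP here are placeholder documentation values.
HOP_RE = re.compile(
    r"^\s*(?P<hop>\d+)\s+(?P<name>\S+)\s+\((?P<ip>[\d.]+)\)\s+(?P<rest>.*)$")

def parse_hop(line):
    m = HOP_RE.match(line)
    if not m:
        return None  # e.g. " 9  * * *" lines where nothing replied
    rtts = [float(x) for x in re.findall(r"([\d.]+) ms", m.group("rest"))]
    return int(m.group("hop")), m.group("name"), m.group("ip"), rtts

hop = parse_hop(" 5  host.example.net (192.0.2.1)  2.627 ms  3.172 ms  2.857 ms")
```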

There are many ways to measure trace routes. Other good traceroute implementations I use are traceroute (using ICMP packets), mtr (can do ICMP, UDP and TCP) and scapy (a python library with ICMP, UDP and TCP traceroute and a lot of other capabilities). All of them are easily available in Debian.

This time around, I wanted to know the geographic location of different route points, to visualize how visiting a web page spreads information about the visit to a lot of servers around the globe. The background is that a web site today often will ask the browser to fetch, from many servers, the parts (for example HTML, JSON, fonts, JavaScript, CSS, video) required to display the content. This will leak information about the visit to those controlling these servers and to anyone able to peek at the data traffic passing by (like your ISP, the ISP's backbone provider, FRA, GCHQ, NSA and others).

Let's pick an example: the Norwegian parliament web site. It is read daily by all members of parliament and their staff, as well as political journalists, activists and many other citizens of Norway. A visit to the web site will ask your browser to contact 8 other servers. I extracted this list by asking PhantomJS to visit the Stortinget web page and tell me all the URLs PhantomJS downloaded to render the page (in HAR format, using their netsniff example; I am very grateful to Gorm for showing me how to do this). My goal is to visualize network traces to all IP addresses behind these DNS names, to show where visitors' personal information is spread when visiting the page.

map of combined traces for URLs used by using GeoIP

When I had a look around for options, I could not find any good free software tools to do this, and decided I needed my own traceroute wrapper outputting KML based on locations looked up using GeoIP. KML is easy to work with and easy to generate, and is understood by several of the GIS tools I have available. I got good help from my NUUG colleague Anders Einar with this, and the result can be seen in my kmltraceroute git repository. Unfortunately, the quality of the free GeoIP databases I could find (and the for-pay databases my friends had access to) is not up to the task. The IP addresses of central Internet infrastructure are typically placed near the controlling company's main office, and not where the router is really located, as you can see from the KML file I created using the GeoLite City dataset from MaxMind.
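The KML-emitting part of such a wrapper can indeed be tiny. A sketch of the idea — this is not the kmltraceroute code itself, and the hop names and coordinates are placeholders, not real geolocated routers:

```python
# Sketch: write geolocated traceroute hops as KML Placemarks connected
# by a LineString. KML wants coordinates in "lon,lat" order. The hop
# coordinates below are placeholders, not actual router locations.
def hops_to_kml(hops):
    placemarks = []
    coords = []
    for name, lat, lon in hops:
        coords.append(f"{lon},{lat}")
        placemarks.append(
            f"<Placemark><name>{name}</name>"
            f"<Point><coordinates>{lon},{lat}</coordinates></Point></Placemark>")
    path = ("<Placemark><LineString><coordinates>"
            + " ".join(coords)
            + "</coordinates></LineString></Placemark>")
    return ('<?xml version="1.0" encoding="UTF-8"?>'
            '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>'
            + "".join(placemarks) + path + "</Document></kml>")

kml = hops_to_kml([("hop1", 59.91, 10.75), ("hop2", 59.33, 18.07)])
```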

scapy traceroute graph for URLs used by

I also had a look at the visual traceroute graph created by the scapy project, showing IP network ownership (aka AS owner) for the IP addresses in question. The graph displays a lot of useful information about the traceroute in SVG format, and gives a good indication of who controls the network equipment involved, but it does not include geolocation. This graph makes it possible to see that the information is made available to at least UNINETT, Catchcom, Stortinget, Nordunet, Google, Amazon, Telia, Level 3 Communications and NetDNA.

example geotraceroute view for

In the process, I came across the web service GeoTraceroute by Salim Gasmi. Its methodology of combining guesses based on DNS names and various location databases, and finally using latency times to rule out candidate locations, seemed to do a very good job of guessing the correct geolocation. But it could only do one trace at a time, had no sensor in Norway, and did not make the geolocations easily available for postprocessing. So I contacted the developer and asked if he would be willing to share the code (he declined until he has time to clean it up), but he was interested in providing the geolocations in a machine-readable format, and willing to set up a sensor in Norway. So since yesterday it has been possible to run traces from Norway in this service, thanks to a sensor node set up by the NUUG association, and to get the trace in KML format for further processing.

map of combined traces for URLs used by using geotraceroute

Here we can see that a lot of the traffic passes through Sweden on its way to Denmark, Germany, Holland and Ireland: plenty of places where the Snowden revelations confirmed that the traffic is read by various actors who do not have your best interest as their top priority.

Combining KML files is trivial using a text editor, so I could loop over all the hosts behind the URLs imported by and ask GeoTraceroute for the KML file of each, creating a combined KML file with all the traces (unfortunately only one of the IP addresses behind each DNS name is traced this time; to get them all, one would have to request traces from GeoTraceroute using IP numbers instead of DNS names). That might be the next step in this project.
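Mechanically, the merge just moves every Placemark into one Document. A sketch of doing the same with a script rather than a text editor, assuming the simple flat Document layout used for such traces:

```python
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"

def combine_kml(kml_strings):
    """Merge the Placemarks of several KML documents into a single one."""
    ET.register_namespace("", KML_NS)
    combined = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(combined, f"{{{KML_NS}}}Document")
    for text in kml_strings:
        # Move every Placemark from this document into the combined one.
        for mark in ET.fromstring(text).iter(f"{{{KML_NS}}}Placemark"):
            doc.append(mark)
    return ET.tostring(combined, encoding="unicode")
```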

Armed with these tools, I find it a lot easier to figure out where the IP traffic moves and who controls the boxes involved in moving it. And every time the link crosses, for example, the Swedish border, we can be sure the Swedish signals intelligence agency (FRA) is listening, as GCHQ does in Britain and the NSA does in the USA and on cables around the globe. (Hm, what should we tell them? :) Keep that in mind if you ever send anything unencrypted over the Internet.

PS: KML files are drawn using the KML viewer from Ivan Rublev, as it was less cluttered than the local Linux application Marble. There are heaps of other options too.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet Debian: Guido Günther: Debian Fun in December 2016

Debian LTS

November marked the 20th month I contributed to Debian LTS under the Freexian umbrella. I had 8 hours allocated, which I spent on:

  • some rather quiet frontdesk days
  • updating icedove to 45.5.1 resulting in DLA-752-1 fixing 7 CVEs
  • looking whether Wheezy is affected by xsa-202, xsa-203, xsa-204 and handling the communication with credativ for these (update not yet released)
  • Assessing cURL/libcURL CVE-2016-9586
  • Assessing whether Wheezy's QEMU is affected by security issues in the 9pfs "proxy" and "handle" code
  • Releasing DLA-776-1 for samba fixing CVE-2016-2125

Other Debian stuff

Some other Free Software activities

Planet Debian: Riku Voipio: 20 years of being a Debian maintainer

fte (0.44-1) unstable; urgency=low

* initial Release.

-- Riku Voipio Wed, 25 Dec 1996 20:41:34 +0200
Welp, I seem to have spent the holidays of 1996 doing my first Debian package. The process of getting a package into Debian was quite straightforward then: "I have packaged fte, here is my pgp, can I has an account to upload stuff to Debian?" I think the bureaucracy took until the second week of January before I could actually upload the created package.

uid Riku Voipio
sig 89A7BF01 1996-12-15 Riku Voipio
sig 4CBA92D1 1997-02-24 Lars Wirzenius
A few months after joining, someone figured out that for pgp signatures to be useful, keys need to be cross-signed. Hence young me taking a long bus trip from countryside Finland to the capital Helsinki to meet the only other DD in Finland in a cafe. It would still take another two years until I met more Debian people, and it could be proven that I'm not just an alter ego of Lars ;) Much later an alternative process of phone-calling prospective DDs would be added.

Planet Debian: Dirk Eddelbuettel: RcppCCTZ 0.2.0

A new version, now at 0.2.0, of RcppCCTZ is now on CRAN. And it brings a significant change: Windows builds! Thanks to Dan Dillon, who dug deep enough into the libc++ sources from LLVM to port the std::get_time() function that is missing from the 4.* series of g++. With Rtools fixed at g++-4.9.3, this was missing for us here. Now we can parse dates for use by RcppCCTZ on Windows as well. That is important not only for RcppCCTZ but particularly for the one package (so far) depending on it: nanotime.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone database, which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo -- and for which R on Windows ships its own copy we can use. RcppCCTZ connects this library to R by relying on Rcpp.
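As a rough analogue of what the library does (this is Python's zoneinfo module, not CCTZ, shown only to illustrate the civil/absolute split), here is a civil wall-clock time in a named IANA zone converted to the absolute instant it denotes:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # reads the IANA database, e.g. /usr/share/zoneinfo

# Civil time: a human-readable wall-clock time in a named zone.
civil = datetime(2017, 1, 8, 12, 0, tzinfo=ZoneInfo("America/New_York"))

# Absolute time: the same instant, expressed in UTC (EST is UTC-5 in January).
absolute = civil.astimezone(timezone.utc)
print(absolute.isoformat())  # 2017-01-08T17:00:00+00:00
```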

The RcppCCTZ page has a few usage examples, as does the post announcing the previous release.

The changes in this version are summarized here:

Changes in version 0.2.0 (2017-01-08)

  • Windows compilation was enabled by defining OFFSET() and ABBR() for MinGW (#10 partially addressing #9)

  • Windows use completed with backport of std::get_time from LLVM's libc++ to enable strptime semantics (Dan Dillon in #11 completing #9)

  • Timezone information on Windows is supplied via R's own copy of zoneinfo with TZDIR set (also #10)

  • The interface to formatDouble was cleaned up

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Debian: Bits from Debian: New Debian Developers and Maintainers (November and December 2016)

The following contributors got their Debian Developer accounts in the last two months:

  • Karen M Sandler (karen)
  • Sebastien Badia (sbadia)
  • Christos Trochalakis (ctrochalakis)
  • Adrian Bunk (bunk)
  • Michael Lustfield (mtecknology)
  • James Clarke (jrtc27)
  • Sean Whitton (spwhitton)
  • Jerome Georges Benoit (calculus)
  • Daniel Lange (dlange)
  • Christoph Biedl (cbiedl)
  • Gustavo Panizzo (gefa)
  • Gert Wollny (gewo)
  • Benjamin Barenblat (bbaren)
  • Giovani Augusto Ferreira (giovani)
  • Mechtilde Stehmann (mechtilde)
  • Christopher Stuart Hoskin (mans0954)

The following contributors were added as Debian Maintainers in the last two months:

  • Dmitry Bogatov
  • Dominik George
  • Gordon Ball
  • Sruthi Chandran
  • Michael Shuler
  • Filip Pytloun
  • Mario Anthony Limonciello
  • Julien Puydt
  • Nicholas D Steeves
  • Raoul Snyman


Planet Linux Australia: Tim Riley: 2016 in review

I had a good run of year in review posts, but fell off the bandwagon lately. It's time to change that. Before I dive into 2016, here's a recap of the intervening years:

2013: Around-the-world tickets in hand, Misch and I worked, volunteered, and played our way through Japan, Vietnam, Hong Kong, the USA, Finland, Germany, the UK, Spain, and Italy. An amazing time! Attended my first (and only) WWDC and had a blast. Started working on Decaf Sucks 2.0 (you'll hear more about that much later).

2014: Settling back in Canberra and realising we could live for a long time in our (large by world standards) apartment, we renovated a little: new floors, paint, curtains. Made it feel like a whole new place. Misch and I gave birth to Clover, our best and most satisfying team effort yet.

2015: Took our first with-kid overseas trip, and cruised through to Clover's first birthday and our first parenting anniversary (which we celebrated with a giant bánh mì party). Icelab gathered for FarmLab, and we discussed alternatives-to-Rails for the first time. Our grandmothers both passed away, and we spent time with the extended family. Jojo and I held Rails Camp in Canberra in December, where we got to eat cake for Rails' 10th birthday and watch Star Wars with 70 of our friends. Misch and I got pregnant again but sadly lost the little baby at 7 weeks.

Phew. That was some time. Now onto 2016.


At home with the (expanded!) family

Losing a baby at the end of 2015 was a big thing, but thankfully it came at a time of year when work and other demands scale back, so Misch and I spent some good quality time together and could regroup.

We got a couple of big things done in the beginning of the year. First up, we bought a car! After two years of mostly car-free life, it was time for another way to get around the place. Our little Škoda Fabia does just that, and is fun to drive.

Next, we renovated our bathroom! Knowing we'll be living here for many years to come, this was a big and worthwhile upgrade to our home amenity. We splashed out and got a Toto washlet, too. I regret nothing.

And in the last big thing for 2016, we became pregnant again and gave birth to baby Iris Persephone in October. This time around, the room at the Birth Centre was brimming with family. We wanted Clover there, so along came Misch's parents too. Clover's excited cry of "Baby!" upon seeing Iris come into the world is something I'll always remember. Iris' arrival brought another 6 weeks of time at home, which I enjoyed even more now that we're a family of four.

Decaf Sucks 2.0

With Misch's encouragement, I returned to my long-stalled effort to release our all-new 2.0 version of Decaf Sucks. Turned out it didn't need all that much; with just a couple of weeks of effort, Max and I got everything wrapped up and released it to the world. It was a weight off my shoulders and I'm happy to finally have it out there.


dry-rb

2016 brought a seismic shift in how I write Ruby applications. After some experimentation with rom-rb and Piotr Solnica's rodakase experiment late in 2015, I knew this was my future. So I dove in and contributed as much as I could to the fledgling set of libraries now known as dry-rb. And we got a lot done. We released a whole bunch of gems, made things "official" with the launch of a website and discussion forum, and expanded the core team of developers to 5.

Along with sharing code, I wanted to start sharing some of the thinking behind the dry-rb style of Ruby app development, so I set about blogging, and managed to publish once a week for a good few months. This culminated in an introductory talk I gave at RedDotRubyConf in Singapore. This was my first conference talk and I relished the opportunity to really polish a particular message. Luckily, I was able to build upon this with a repeat performance at Rails Camp in Adelaide and at a Ruby community workshop over in Perth. No doubt, you can expect to hear plenty more from me about dry-rb in 2017 :)


Icelab

Icelab kicked off 2016 by celebrating our 10th birthday! I think we've built a remarkable little company and work-home to many good people, and I think the next 10 years will be even better.

For me, most of 2016 at Icelab was spent getting us settled onto dry-rb and rom-rb as our preferred stack for server-side applications. We shipped our first production app with these all the way back in February, launched our new website as an open source example app in June, and we have several more big sites that'll see the light of day early in 2017. It took a little while to get over the knowledge and productivity hump, but I feel we've hit a good rhythm with the stack now, and given we're the long-term maintainers of most of the things we ship, it'll be something that I expect will pay dividends for many years to come.

Open source was another big theme for the year. Along with our ongoing contributions to dry-rb, we took an "open source first" approach to any other standalone, reusable pieces of code we wrote. This small shift was a big help in making better design choices right from the beginning. You'll be able to see some of this bear fruit when we take our advanced form builder to 1.0 next year. It's already been an incredibly useful tool across our client projects.

I'm also proud that Icelab began contributing to the open source infrastructure that powers Ruby apps everywhere through our contributions to Ruby Together, which we joined in 2016 as Australia's first Emerald member.

And all the rest

And now I'll collect everything else I could think of into a few broadly categorised lists:

Computer life:

  • I've removed Twitter apps from all my platforms. It's helped me focus.
  • Said goodbye to, the little Rails app I've been running to email me my Twitter favourites. Now that IFTTT can do that same thing, I'm happy to have one less running thing I have to worry about.
  • Sometime in June I surpassed 250,000 all time Icelab chat messages.
  • Mulled many times with my co-workers on how we could run a better kind of tech meet-up in Canberra. Maybe this year!

Software development:

  • Continued my love/hate relationship with Docker, but I think now I've managed to find the right place for it in our development life: standardised production environments, and local dev only when we have to run something unusual.
  • After uncountable years, I'm finally looking away from Heroku as our production environment of choice.
  • Shipped a production iOS app built using Turbolinks for iOS, and it turned out rather nicely. I'd be happy to play with it some more.
  • We settled on Attache as a standard handler for all our file uploads. I feel it is a smart architectural choice (and I was happy to meet its affable creator Choon Keat in Singapore!)
  • We started to build Danger into our CI builds. It's already helpful, and I think we're just scratching the surface.
  • time_math2 is a great little Ruby library and a wonderful archetype for how "expressive" Ruby libraries can be made without Rails-style monkey patches.

Physical things:

  • The Mizudashi cold brew coffee pot I picked up to celebrate the launch of Decaf Sucks 2.0 makes amazing coffee and I've been putting it to good use ever since the weather warmed up. I'm aiming for 100% uptime of cold brew all summer long.
  • The Minaal daily shipped from their Kickstarter campaign and it immediately became my every day carry. A great companion to their larger carry-on bag.


  • After trying innumerable things and never settling, I've finally found a home for all my writing (pieces long, short, random or otherwise): it's Ulysses. What a great app.
  • Castro 2 came out with an ingenious new mechanic and I'm very happy to continue using it. It's helped my jump onto a few new podcasts without the worry of managing them.
  • CarPlay is great. I'll readily admit this was a deciding factor in our new car choice and I wasn't disappointed.
  • Paw is now my one-stop shop for all my HTTP requestin’. Super polished.
  • I'm back on good old and happy to ignore all the we'll-host-your-mail-and-your-passwords offerings that continue to swirl around.

Books, film, TV, etc.

  • Ripped through quite a bit of fiction as I waited for Clover to sleep (happily now she does this on her own). Highlights: Seveneves, Proxima & Ultima, The Prefect, Aurora and the Wool trilogy.
  • I look at my Letterboxd profile and once again resolve to watch more cinema. Anyway, 2016's highlights were The Big Short, Hunt for the Wilderpeople, Easy A, Arrival, Crazy, Stupid Love and of course Rogue One.
  • Subscribing to Netflix has been great. And fits perfectly well with our no-TV household.


  • Cooked all the Filipino food I could think up. It was great to have this as a motivating theme behind all my cooking.
  • And I tried toast and yoghurt for breakfast for the first time. Guess there's always time for new firsts ¯\_(ツ)_/¯.

Krebs on Security: DNI: Putin Led Cyber, Propaganda Effort to Elect Trump, Denigrate Clinton

Russian President Vladimir Putin directed a massive propaganda and cyber operation aimed at discrediting Hillary Clinton and getting Donald Trump elected, the top U.S. intelligence agencies said in a remarkable yet unshocking report released on Friday.

Russian President Vladimir Putin tours RT facilities. Image: DNI

The 25-page dossier from the Office of the Director of National Intelligence stopped short of saying the Russians succeeded at influencing the outcome of the election, noting that the report did not attempt to make an assessment on that front. But it makes the case that “Russia’s intelligence services conducted cyber operations against targets associated with the 2016 US presidential election, including targets associated with both major US political parties.”

“We assess with high confidence that Russian military intelligence (General Staff Main Intelligence Directorate or GRU) used the Guccifer 2.0 persona and to release US victim data obtained in cyber operations publicly and in exclusives to media outlets and relayed material to WikiLeaks,” the DNI report reads.

The report is a quick and fascinating read. One example: It includes a fairly detailed appendix which concludes that the U.S.-based but Kremlin-financed media outlet RT (formerly Russia Today) is little more than a propaganda machine controlled by Russian intelligence agencies.

“Moscow’s influence campaign followed a Russian messaging strategy that blends covert intelligence operations—such as cyber activity—with overt efforts by Russian Government agencies, state-funded media, third-party intermediaries, and paid social media users or ‘trolls,'” reads the report.

The DNI report is remarkable for several reasons. First, it publicly accuses Russia’s President of trying to meddle with the U.S. election and to hack both political parties. Also, as The New York Times observed, it offers “a virtually unheard-of, real-time revelation by the American intelligence agencies that undermined the legitimacy of the president who is about to direct them.”

However, those who’ve been clamoring for more technical evidence to support a conclusion that Russian intelligence agencies were behind the phishing, malware attacks and email leaks at The Democratic National Committee (DNC) and Clinton campaign likely will be unmoved by this report. Those details will remain safely hidden from public view in the classified version of the report.

Last week, the FBI and Department of Homeland Security issued a joint report (PDF) on some of the malware and Internet resources used in the DNC intrusion. But many experts criticized it as a poorly-written, jumbled collection of threat indicators and digital clues that didn’t all quite lead where they should.

Others were perplexed by the high confidence level the agencies assigned to the findings in their unclassified report, noting that neither the FBI nor DHS examined the DNC hard drives that were compromised in the break-in (that work was done by private security firm Crowdstrike).

Former black-hat hacker turned Wired and Daily Beast contributing editor Kevin Poulsen slammed the FBI/DHS report as “so aimless that it muddies the clear public evidence that Russia hacked the Democratic Party to affect the election, and so wrong it enables the Trump-friendly conspiracy theorists trying to explain away that evidence.”

Granted, trying to reconstruct a digital crime scene absent some of the most important pieces of evidence is a bit like attempting to assemble a jigsaw puzzle with only half of the pieces. But as digital forensics and security expert Jonathan Zdziarski noted via Twitter last night, good old-fashioned spying and human intelligence seem to have played a bigger role in pinning the DNC hack on the Russians.

“The DNI report subtly implied that more weight was put on our intelligence coming from espionage operations than on cyber warfare,” Zdziarski wrote. “As someone who’s publicly called out the FBI over misleading the public and the court system, I believe the DNI report to be reliable. I also believe @CrowdStrike’s findings to be reliable based on the people there and their experience with threat intelligence.”

Key findings from the DNI report.

My take? Virtually nothing in the DNI report is dispositive of anything in the FBI/DHS report. In other words, the DNI report probably won’t change anyone’s minds. I’m sure that many smart U.S. intelligence analysts spent a great deal of time on this, but none of it was particularly surprising at all: The DNI report describes precisely the kind of cloak and dagger stuff that one might expect the Kremlin to be doing to the United States, day-in and day-out.

What makes these kinds of cyber espionage and propaganda campaigns so worthwhile is that even if the Kremlin cannot always get its favorite candidate elected, Moscow may still consider it a success if it can continuously sow doubt in the minds of Americans about the legitimacy of the U.S. election process and other tenets of democracy.

It’s also exactly the sort of thing the U.S. government has been doing to other countries for decades. In fact, the U.S. has done so as many as 81 times between 1946 and 2000, according to a database amassed by political scientist Dov Levin of Carnegie Mellon University, writes Nina Agrawal for The Los Angeles Times.

Anyone shocked by the Kremlin-funded news station RT in all of this probably never heard of Voice of America, a U.S. government-funded news service that broadcast the American response to Soviet propaganda during the Cold War.

President-elect Trump has publicly mocked American intelligence assessments that Russia meddled with the U.S. election on his behalf, and said recently that he doubts the U.S. government can be certain it was hackers backed by the Russian government who hacked and leaked emails from the DNC.

Mr. Trump issued a statement last night only loosely acknowledging Russian involvement, saying that “while Russia, China, other countries, outside groups and people are consistently trying to break through the cyber institutions, businesses and organizations including the Democrat [sic] National Committee, there was absolutely no effect on the outcome of the election including the fact that there was no tampering whatsoever with the voting machines.”

Trump also has called for a review of the nation’s plans to stop cyberattacks, which he said will be completed within 90 days of his taking office on Jan. 20.

“Whether it is our government, organizations, associations or businesses we need to aggressively combat and stop cyberattacks,” Trump said. “I will appoint a team to give me a plan within 90 days of taking office. The methods, tools and tactics we use to keep America safe should not be a public discussion that will benefit those who seek to do us harm. Two weeks from today I will take the oath of office and America’s safety and security will be my number one priority.”

Time will tell if Mr. Trump’s team can do anything to slow the frequency of data breaches in the United States. But I hope we can all learn from this report. It’s open season out there for sure, but there are some fairly simple, immutable truths that each of us should keep in mind, truths that apply equally to political parties, organizations and corporations alike:

-If you connect it to the Internet, someone will try to hack it.

-If what you put on the Internet has value, someone will invest time and effort to steal it.

-Even if what is stolen does not have immediate value to the thief, he can easily find buyers for it.

-The price he secures for it will almost certainly be a tiny slice of its true worth to the victim.

-Organizations and individuals unwilling to spend a small fraction of what those assets are worth to secure them against cybercrooks can expect to eventually be relieved of said assets.

“We assess Moscow will apply lessons learned from its Putin-ordered campaign aimed at the US presidential election to future influence efforts worldwide, including against US allies and their election processes,” the DNI report concludes.

Yeah, no kidding. The question is: Will political and corporate leaders begin applying those lessons to their own operations, and gird themselves for full-on, 24/7 cyberattacks from every direction, before, during and after each election? How many more examples do we need to understand that maybe we’re really not taking this cybersecurity stuff seriously enough given what’s at stake?

The DNI report is available here (PDF).

Chaotic Idealism: Yes, but what kind of hate crime?

Everyone has been talking about the four young black people who filmed themselves kidnapping and then abusing a young white disabled man. They say it's a hate crime, but they can't agree on who it's against. Some people say it's against white people, because that's what the video implies. Others are saying it's an anti-disability hate crime, because the young people chose a disabled man as their victim.

How about this perspective: It's both. The term "intersectionality" has been big lately, and that's exactly what this is. A disabled man is more vulnerable to bullying; the cruel people of the world naturally choose him when they hate white people and he happens to be white. It's both.

I do research on anti-disability hate crimes, and I see it a lot. This is the first time it's been "disabled and white". White people are in the majority and tend to have more power, socially, so being white is a little bit protective if you're disabled. Usually, it's "disabled and Muslim", "disabled and black", "disabled and gay", "disabled and young", "disabled and old", "disabled and poor". Sometimes one thing is primary; sometimes it's the other.

White people may not often be targeted for hate crimes, but they're not immune--especially if they are also in some other less-privileged category, like being disabled. Take this as a call to protect every human being, regardless of the social categories they may belong to. Some categories are more dangerous to be in than others; some people are more vulnerable than others. But even when a powerful group like white people is targeted, it's still wrong.

Maybe this incident will shake awake a few white people who still think hate crimes aren't their problem because they don't participate in them. But they are everybody's problem, white or black, disabled or non-disabled, civilian or cop, child or elder.

Do you need an us-versus-them structure? All right; it's human nature, so I'll give you one. When you see this, don't think "it's black against white", because it isn't. It's "decent people against bigots". You decent people out there--protect your neighbors, your friends, your family. Look out for total strangers if you have to, if they need you. Join together against the people who have given in to hate. Help those who are vulnerable find armor to keep them safe, whether that's practical support like food or shelter, or whether it's social support and morale improvement to keep them from becoming discouraged. Use your hands, your money, your voice, and your vote. Find those bigots who are ignorant and can be educated, who can learn better, and recruit them to help, too. Refuse to hate, no matter how much hate is around you.

There are a lot more decent people out there than bigots; we already outnumber them. We just need to stop being shy about it and start drawing a line in the sand: "If you want to hurt any of us, you're going to have to deal with all of us."

Planet Debian: Steve Kemp: Patching scp and other updates.

I use openssh every day, be it the ssh command for connecting to remote hosts, or the scp command for uploading/downloading files.

Once a day, or more, I forget that scp uses the non-obvious -P flag for specifying the port, not the -p flag that ssh uses.

Enough is enough. I shall not file a bug report against the Debian openssh-client package, because no doubt compatibility with both upstream and other distributions is important. But damnit I've had enough.

apt-get source openssh-client shows the appropriate code:

    fflag = tflag = 0;
    while ((ch = getopt(argc, argv, "dfl:prtvBCc:i:P:q12346S:o:F:")) != -1)
          switch (ch) {
            case 'P':
                    addargs(&remote_remote_args, "-p");
                    addargs(&remote_remote_args, "%s", optarg);
                    addargs(&args, "-p");
                    addargs(&args, "%s", optarg);
                    break;
            case 'p':
                    pflag = 1;
                    break;
Swapping those two flags around, and updating the format string appropriately, was sufficient to do the necessary.

In other news I've done some hardware development, using both Arduino boards and the WeMos D1-mini. I'm still at the stage where I'm flashing lights and doing similarly trivial things.

I have more complex projects planned for the future, but these are on-hold until the appropriate parts are delivered:

  • MP3 playback.
  • Bluetooth-speakers.
  • Washing machine alarm.
  • LCD clock, with time set by NTP, and relay control.

Even with a few LEDs though I've had fun, for example writing a trivial binary display.

Planet Debian: Steinar H. Gunderson: SpeedHQ decoder

I reverse-engineered a video codec. (And then the CTO of the company making it became really enthusiastic, and offered help. Life is strange sometimes.)

I'd talk about this and some related stuff at FOSDEM, but there's a scheduling conflict, so I will be in Ås that weekend, not Brussels.

Planet Debian: Jonas Meurer: debian lts report 2016.12

Debian LTS report for December 2016

December 2016 was my fourth month as a Debian LTS team member. I was allocated 12 hours. Unfortunately it turned out that I had far less time for Debian and LTS work than expected, so I only spent 5.25 of those hours on the following tasks:

  • DLA 732-1: backported CSRF protection to monit 1:5.4-2+deb7u1
  • DLA 732-2: fix a regression introduced in last monit security update
  • DLA 732-3: fix another regression introduced in monit security update
  • Nagios3: port 3.4.1-3+deb7u2 and 3.4.1-3+deb7u3 updates to wheezy-backports
  • DLA-760-1: fix two reflected XSS vulnerabilities in spip

Planet Debian: Keith Packard: embedded-arm-libc

Finding a Libc for tiny embedded ARM systems

You'd think this problem would have been solved a long time ago. All I wanted was a C library to use in small embedded systems -- those with a few kB of flash and even fewer kB of RAM.

Small system requirements

A small embedded system has a different balance of needs:

  • Stack space is limited. Each thread needs a separate stack, and it's pretty hard to move them around. I'd like to be able to reliably run with less than 512 bytes of stack.

  • Dynamic memory allocation should be optional. I don't like using malloc on a small device because failure is likely and usually hard to recover from. Just make the linker tell me if the program is going to fit or not.

  • Stdio doesn't have to be awesomely fast. Most of our devices communicate over full-speed USB, which maxes out at about 1MB/sec. A stdio setup designed to write to the page cache at memory speeds is over-designed, and likely involves lots of buffering and fancy code.

  • Everything else should be fast. A small CPU may run at only 20-100MHz, so it's reasonable to ask for optimized code. They also have very fast RAM, so cycle counts through the library matter.

Available small C libraries

I've looked at:

  • μClibc. This targets embedded Linux systems, and also appears dead at this time.

  • musl libc. A more lively project; still, definitely targets systems with a real Linux kernel.

  • dietlibc. Hasn't seen any activity for the last three years, and it isn't really targeting tiny machines.

  • newlib. This seems like the 'normal' embedded C library, but it expects a fairly complete "kernel" API and the stdio bits use malloc.

  • avr-libc. This has lots of Atmel assembly language, but is otherwise ideal for tiny systems.

  • pdclib. This one focuses on small source size and portability.

Current AltOS C library

We've been using pdclib for a couple of years. It was easy to get running, but it really doesn't match what we need. In particular, it uses a lot of stack space in the stdio implementation as there's an additional layer of abstraction that isn't necessary. In addition, pdclib doesn't include a math library, so I've had to 'borrow' code from other places where necessary. I've wanted to switch for a while, but there didn't seem to be a great alternative.

What's wrong with newlib?

The "obvious" embedded C library is newlib. Designed for embedded systems with a nice way to avoid needing a 'real' kernel underneath, newlib has a lot going for it. Most of the functions have a good balance between speed and size, and many of them even offer two implementations depending on what trade-off you need. Plus, the build system 'just works' on multi-lib targets like the family of cortex-m parts.

The big problem with newlib is the stdio code. It absolutely requires dynamic memory allocation and the amount of code necessary for 'printf' is larger than the flash space on many of our devices. I was able to get a cortex-m3 application compiled in 41kB of code, and that used a smattering of string/memory functions and printf.

How about avr libc?

The Atmel world has it pretty good -- avr-libc is small and highly optimized for Atmel's 8-bit AVR processors. I've used this library with success in a number of projects, although nothing we've ever sold through Altus Metrum.

In particular, the stdio implementation is quite nice -- a 'FILE' is effectively a struct containing pointers to putc/getc functions. The library does no buffering at all. And it's tiny -- the printf code lacks a lot of the fancy new stuff, which saves a pile of space.

However, many of the places where performance is critical are written in assembly language, making it pretty darn hard to port to another processor.
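The callbacks-instead-of-buffers design is easy to sketch. Here is a minimal Python analogue (TinyFile and tiny_puts are hypothetical names for illustration; the real avr-libc FILE is a C struct holding putc/getc function pointers):

```python
# Minimal Python analogue of the avr-libc stdio design described above:
# a "file" is just a pair of callbacks with no buffering in between.
# TinyFile and tiny_puts are illustrative names, not the real avr-libc API.

class TinyFile:
    def __init__(self, put=None, get=None):
        self.put = put   # called once per output character
        self.get = get   # called once per input character

def tiny_puts(f, s):
    """Push a string through the put callback one character at a time."""
    for ch in s:
        f.put(ch)

# Back the "file" with an in-memory list; on a microcontroller the put
# callback would poke a UART or USB register instead.
out = []
console = TinyFile(put=out.append)
tiny_puts(console, "hello")
print("".join(out))  # → hello
```

Because nothing is buffered, the library needs no malloc and almost no state per stream, which is the property that makes the approach attractive on tiny flash budgets.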

Mixing code together for fun and profit!

Today, I decided to try an experiment to see what would happen if I used the avr-libc stdio bits within the newlib environment. There were only three functions written in assembly language, two of them were just stubs while the third was a simple ultoa function with a weird interface. With those coded up in C, I managed to get them wedged into newlib.

Figuring out the newlib build system was the only real challenge; it's pretty awful having generated files in the repository and a mix of autoconf 2.64 and 2.68 version dependencies.

The result is pretty usable though; my STM32L discovery board demo application is only 14kB of flash, while the original newlib stdio bits needed 42kB, and that was still missing all of the 'syscalls', like read, write and sbrk.

Here's gitweb pointing at the top of the tiny-stdio tree:


And, of course you can check out the whole thing

git clone git://

'master' remains a plain upstream tree, although I do have a fix on that branch. The new code is all on the tiny-stdio branch.

I'll post a note on the newlib mailing list once I've managed to subscribe and see if there is interest in making this option available in the upstream newlib releases. If so, I'll see what might make sense for the Debian libnewlib-arm-none-eabi packages.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: Annual Penguin Picnic, January 21, 2017

Jan 21 2017 12:00
Jan 21 2017 18:00

Yarra Bank Reserve, Hawthorn.

The Linux Users of Victoria Annual Penguin Picnic will be held on Saturday, January 21, starting at 12 noon at the Yarra Bank Reserve, Hawthorn.

LUV would like to acknowledge Red Hat for their help in obtaining the Carlton venue and Infoxchange for the Richmond venue.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Planet DebianDirk Eddelbuettel: Rcpp now used by 900 CRAN packages

900 Rcpp packages

Today, Rcpp passed another milestone as 900 packages on CRAN now depend on it (as measured by Depends, Imports and LinkingTo declarations). The graph on the left depicts the growth of Rcpp usage over time.

The easiest way to compute this is to use the reverse_dependencies_with_maintainers() function from a helper script file on CRAN. This still gets one or two false positives from packages declaring a dependency but not actually containing C++ code, and the like. There is also a helper function revdep() in the devtools package, but it includes Suggests:, which does not firmly imply usage and hence inflates the count. I have always opted for the tighter count with corrections.

Rcpp cleared 300 packages in November 2014. It passed 400 packages in June 2015 (when I only tweeted about it), 500 packages in late October 2015, 600 packages last March, 700 packages last July and 800 packages last October. The chart extends to the very beginning via manually compiled data from CRANberries and checked with crandb. The next part uses manually saved entries. The core (and by far largest) part of the data set was generated semi-automatically via a short script appending updates to a small file-based backend. A list of packages using Rcpp is kept on this page.

Also displayed in the graph is the relative proportion of CRAN packages using Rcpp. The four per-cent hurdle was cleared just before useR! 2014, where I showed a similar graph (as two distinct graphs) in my invited talk. We passed five percent in December of 2014, six percent in July of last year, seven percent just before Christmas, eight percent this summer, and nine percent in mid-December.
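As a back-of-envelope check on those proportions (the CRAN total of roughly 10,000 packages in early 2017 is my assumption for illustration, not a figure from the post):

```python
# Rough arithmetic behind the percentages quoted above; the CRAN
# total is an assumed round figure, not an exact count.
rcpp_users = 900
cran_total = 10_000  # assumption: approximate CRAN size in early 2017
share = 100.0 * rcpp_users / cran_total
print(f"{share:.1f}%")  # → 9.0%
```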

900 user packages is a really large number. This puts more than some responsibility on us in the Rcpp team as we continue to keep Rcpp as performant and reliable as it has been.

At the rate things are going, the big 1000 may be hit some time in April.

And with that a very big Thank You! to all users and contributors of Rcpp for help, suggestions, bug reports, documentation or, of course, code.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet DebianLars Wirzenius: Hacker Noir, chapter 1: Negotiation

I participated in Nanowrimo in November, but I failed to actually finish the required 50,000 words during the month. Oh well. I plan on finishing the book eventually, anyway.

Furthermore, as an open source exhibitionist I thought I'd publish a chapter each month. This will put a bit of pressure on me to keep writing, and hopefully I'll get some nice feedback too.

The working title is "Hacker Noir". I've put the first chapter up on

Falkvinge - Pirate PartyOld lady denied exchanging life savings in old banknotes for new issue; could not prove innocence of money; dies

Time money clock dollars

Repression: Ethel Hülst had saved for some old-age luxury all her life, cash-in-mattress style, and wanted to exchange her old-issue-note savings for new-issue banknotes. Faced with demands to prove where her cash came from, she could not produce receipts more than a decade old. The Central Bank denied her the exchange of issue, letting her life savings expire into invalidity.

The Swedish Central Bank is in the middle of an exchange of issue, changing old-issue banknotes and coins for new issue. This is something that happens regularly in most or all monetary systems – an upgrading of the banknotes and coins in circulation, supposedly done via a fair and controlled process.

But when Ethel Hülst, 91, tried to exchange her life savings in cash of 108,450 Swedish krona ($12,000; €11,300), she was denied the new issue in exchange for her old notes. The justification was that she was unable to prove that the money had been earned in an honest way, as defined by the government, with the burden of proof on old Ethel.

These rules against ordinary Joes and Janes are supposed to prevent money laundering and terrorism, but accomplish mostly nothing, while the biggest banks are the biggest perpetrators (on the scale of billions-with-a-B) – the same banks that are supposed to enforce these petty rules on small savers.

Of course, the rules weren’t in place when Mrs. Hülst started her life savings, so how could she possibly know she would have needed receipts from the time in question, twenty or forty or fifty years down the road? That was absolutely inconceivable at the time, that the government would not honor its own cash. (Something that, for one reason or other, has always been inconceivable — despite ample data points to the contrary.)

“She was asked if she’s been laundering money or involved in organized crime. I think our elderly, just like my mother, get rather offended by the government assuming them criminal”, says Anders, her next of kin. “She never afforded herself anything, not even a new hearing aid. Saving what was possible for a rainy day was almost a reflex.”

Sadly, shortly after the bank had refused to honor her life savings, and the administrative court sided with the bank in the matter of refusing her now-invalid banknotes, she passed away.

“The bank doesn’t save statements longer than ten years”, continues Anders, implying that it was a ridiculous rule to retroactively come up with a requirement for twenty-year-old receipts. “When mom was told the bank no longer had any statements from the time in question, she gave up. She felt as though the government was stealing all her life savings, and that was it.”

Her financial privacy was, paradoxically, done right. Saving in cash is not only private: with banks giving you zero interest – nil-and-zero risk premium for having the money in the possibly-insolvent bank – it’s also financially sound to have physical control of your store of value. The key message is instead that central banks can’t and shouldn’t be trusted.

Bitcoin users are not affected. Your privacy, financial and otherwise, is your own responsibility.

Syndicated Article
This article has previously appeared on Private Internet Access.

(This is a post from Falkvinge on Liberty, obtained via RSS at this feed.)

Planet Linux AustraliaSam Watkins: Linux, low power / low heat for summer

Sometimes I play browser games including  This loads the CPU and GPU, and in this summer weather my laptop gets too hot and heats up the room.

I tried using Chrome with the GPU disabled, but the browser games would still cause the GPU to ramp up to full clock rate. I guess the X server was using the GPU.

google-chrome --disable-gpu   # does not always prevent GPU clocking up

So here’s what I did:

For the NVIDIA GPU, we can force the lowest power mode by adding the following to the “Device” section in /etc/X11/xorg.conf:

# Option "RegistryDwords" "PowerMizerEnable=0x0;"
Option "RegistryDwords" "PowerMizerEnable=0x1; PerfLevelSrc=0x3333; PowerMizerLevel=0x3; PowerMizerDefault=0x3; PowerMizerDefaultAC=0x3"

Unfortunately the “nvidia-settings” tool does not allow changing this setting at runtime.  It is necessary to restart the X server in order to change it.  Just swap which line is commented out.

Given that we are keeping the GPU cool like this, Chrome works better with the GPU enabled rather than disabled.

For the CPU, setting “scaling_governor=powersave” does not force the lowest power mode, and the CPU still clocks up and gets hot.  But we can set “scaling_max_freq” to stop Linux from raising the clock speed.  I’m using this shell script “cpu_speed”:

#!/bin/bash
# Default to showing info when run with no argument.
cmd="${1:-info}"
cd /sys/devices/system/cpu
for cpu in cpu[0-9]*; do
 (
 cd $cpu/cpufreq
 case "$cmd" in
 info) echo $cpu `<scaling_cur_freq` `<scaling_min_freq` `<scaling_max_freq` ;;
 slow) cat cpuinfo_min_freq >scaling_min_freq
       cat cpuinfo_min_freq >scaling_max_freq ;;
 fast) cat cpuinfo_min_freq >scaling_min_freq
       cat cpuinfo_max_freq >scaling_max_freq ;;
 esac
 )
done

I can run it with “cpu_speed” to see the current speed, “cpu_speed slow” to fix the clock at the lowest speed, and “cpu_speed fast” to allow the clock to go up to the maximum speed.

This “temperature” script shows the NVIDIA GPUCurrentPerfLevel, GPUCoreTemp, and CPU temperature info:

(
set -a
: ${DISPLAY:=:0.0}
nvidia-settings -q GPUCurrentPerfLevel -q GPUCoreTemp
acpi -t
) 2>/dev/null |
perl -ne 'print "$1 " if /[:,] (\d+)\./'

Finally, I can reduce the screen resolution to decrease the load on the GPU and CPU.  “xrandr” with the NVIDIA driver does not allow me to change the resolution directly, but there is an option to scale the display.  This gives much smoother performance in the browser games, and the lower resolution doesn’t hurt.


# reduce the effective resolution to ease GPU/CPU load
xrandr --output DP-2 --scale 0.5x0.5

# restore full resolution
xrandr --output DP-2 --scale 1x1

Anyway, now I have my laptop set up to run cool by default.  This doesn’t hurt for most things I am doing with it, and I feel it’s less likely to explode and burn down our house.

Planet Linux AustraliaLev Lafayette: Installing R with EasyBuild: Which path to insanity?

There is a wonderful Spanish idiom, "Cada loco con su tema", which is sometimes massacred into the English idiom "To each their own". It is more accurately translated as "Each madman with their topic", which in familiar conversation means the same thing but offers a slightly different, more illustrative angle on the subject. With that in mind, which path to insanity does one take with R libraries and EasyBuild? A similar question can be raised for other languages that have extensions, e.g., Python and Perl.


Planet Linux AustraliaBen Martin: Machine Control with MQTT

MQTT is an open standard for message passing in the IoT. If a device or program knows something interesting it can offer to publish that data through a named message. If things want to react to those messages they can subscribe to them and do interesting things. I took a look into the SmoothieBoard firmware trying to prize an MQTT client into it. Unfortunately I had to back away at that level for now. The main things that I would love to have as messages published by the smoothie itself are the head position, job processing metadata, etc.

So I fell back to polling for that info in a little nodejs server. That server publishes info to MQTT and also subscribes to messages, for example, to "move the spindle to X,Y" or the like. I thought it would be interesting to make a little web interface to all this. Initially I was tempted to roll something over websockets myself, but then discovered that you can speak MQTT right over a websocket to mosquitto. So a bootstrap web interface to the CNC was born.

As you can see, I opted out of the pronterface-style head control. For me, on a touch panel, the "move X by 1" and "move X by 10" controls are just too close in that layout. So I select the dimension in a tab and then the direction with buttons. Far, far less chance of an unintended move.

Things get interesting on the files page. Not only are the files listed, but I can "head" a file, and that becomes a stored message in mosquitto. As the files on the sdcard of the smoothieboard don't change (for me), the head only has to be performed once per file. It's handy because you can see the header comment that the CAM program added to the G-Code, so you can work out what you were thinking at the time you made the gcode. Assuming you put the metadata in, that is.

I know that GCode has provisions for laying out multiple coordinate spaces for a single job, so you can cut 8 of the same thing at a single time from one block of stock. I've been doing 2-4 up manually. So I added a "Saves" tab to be able to snapshot a location and restore it again later. This way you can run a job, move home by 80mm in X, and run the same job again to cut a second item. I have provision for a bunch of saves, but only 1 is shown in the web page below.

This is all backed by MQTT. So I can start jobs and move the spindle from the terminal, a phone, or through the web interface.
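The publish/subscribe flow described above can be sketched with a toy in-process dispatcher (Bus and the topic name are made up for illustration; a real setup would use an MQTT client library talking to a broker such as mosquitto, with wildcards, QoS and retained messages on top):

```python
from collections import defaultdict

# Toy in-process topic dispatcher illustrating the publish/subscribe
# pattern; it only models the topic -> subscribers flow, not MQTT itself.
class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subs[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subs[topic]:
            handler(payload)

bus = Bus()
positions = []
# The web interface subscribes to head-position updates...
bus.subscribe("cnc/head/position", positions.append)
# ...and the polling server publishes them as they change.
bus.publish("cnc/head/position", {"x": 10.0, "y": 2.5})
print(positions)  # → [{'x': 10.0, 'y': 2.5}]
```

The nice property is that publishers and subscribers never know about each other, only about topic names, which is why a terminal, a phone, and a web page can all drive the same machine.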

Planet Linux AustraliaLinux Australia News: Linux Australia 2016 AGM Minutes

Minutes of Linux Australia
Annual General Meeting 2016

Deakin University, Waterfront Campus, Geelong, Victoria
Monday 1st February 2016, Room D2.193 Percy Baxter Theatre

Minutes taken by Ms Sae Ra GERMAINE, Ms Kathy REID.
Collated by Ms Katie McLAUGHLIN

The meeting was opened at 1802 by Mr JOSH HESKETH

Mr HESKETH noted that it was his last AGM as president

MOTION by Mr HESKETH That the minutes of the 2015 AGM are accepted
CARRIED with 1 abstention

Officers Reports

President’s Report (Appendix A)

The President’s report was presented by Mr HESKETH

No comments from members were made on the President’s report
Members thanked the Council members in doing the due diligence
Members thanked the admin team, for auditing

Mr HESKETH noted the following:
Subcommittee Policy
Overseeing the events has been a challenge
The LCA Ghosts allows for the continuation of knowledge
A review of the policy should be undertaken in future years

Advocacy, outreach
LA’s ability to address this relies on member submissions
Submission was made in the previous year of the TPP, software patents and intellectual property.
The Council has an outstanding action item to make contacts to various people that may have some information and legal advice on what we can do.

Membership Platform
The state has not changed; we have a document that shows what we require
The strategy is to rewrite or look for an alternative solution seeking input from volunteers

The need for a name change was noted: “Linux Australia” is no longer accurate, as very little of our work is focussed on Linux itself.

Closing Comments
This is the last term for Mr Josh HESKETH
Mr Josh HESKETH comments that it has been a pleasure and an honour

Questions from the Floor
Mr Craige McWHIRTER comments that GovHack 2016 was not listed as an event

Mr HESKETH replied that there is not a formed subcommittee for GovHack 2016 at this time. They are working on a new policy which will better suit their needs. Council to work through this. Expect it to be a subcommittee.

Mr Peter CHUBB asks what is happening with older subcommittees

Mr HESKETH replies that there are two types of subcommittees: Events Subcommittees and Other Subcommittees. Some subcommittees, such as LUGs and Meetups, were formed under the old policy, but newer events are covered under the newer subcommittee policy. Without enthusiastic volunteers, we won't establish a new committee.

MOTION raised by Mr Mike CARDEN to accept President’s report
CARRIED with one abstention

Inflection Point

Ms Kathy REID initiated the conversation of Inflection Point
Refer to

Ms REID strongly urged the 2016 Council to consider the document

Treasurer’s Report

Presented by Mr Tony BREEDS (via teleconference)

Mr BREEDS apologises for the late delivery of the report, and thanked the 2014 Council for leaving the budget in such a good shape; even though there was an income loss due to LCA2014 not performing quite as expected.

Mr BREEDS notes the financial year for the report is from October 1 to September 30
Mr BREEDS reports a high profit of $143,000 over the last financial year.
Mr BREEDS notes that the profit was due to the success of the LCA2015, DrupalSouth and PyConAU conferences, and thanks those event organisers.

Mr BREEDS notes a small loss from the WordCamp Sydney event, due to one of their sponsorships from WordPress. Linux Australia is working closely with the WordPress Foundation.

Mr BREEDS notes that the suggested improvements from the 2014 Council Treasurer Mr Francois MARIER have all been actioned.

Mr BREEDS notes that of the $5,000 set aside for grants, only half of this was used. This is due to the way grants are counted. The grant for Drupal 8, for example, was handed out of profits of conferences, and appear as Sponsorships rather than Grants

Mr BREEDS notes that the insurance costs for the year were over budget, due to the GovHack event.

Mr BREEDS notes a signed 3 year contract for server maintenance

Mr BREEDS notes that the 2016 budget has not been formally moved to Council. Mr BREEDS suggested that LA increase budget for sponsorship to support organisations such as SFC, EFA and Drupal Foundation.

Mr BREEDS thanks all past Treasurers for their hard work and efforts, specifically Mr RUSSELL STUART and Mr PETER LIEVERDINK

Questions from the Floor

Mr HESKETH notes a profit of $22-23K relating to how the finances worked with the WordPress Foundation. When the matter was resolved, the invoice held for WordPress needed to be cancelled. Overall profit is not used as the measure of success.

MOTION by Mr STEVEN ELLIS to accept the Treasurer’s Report
CARRIED with 1 abstention

Auditor’s Report (Appendix C)

Presented by Mr HESKETH on behalf of the Auditor

Mr HESKETH reported that a Financial Audit has been conducted. The entire report, and all notes, are available online

Questions from the floor

Mr Julian GOODWIN asks whether the holding of large amounts of cash reserves is appropriate

Mr HESKETH replies with an outline of how LA holds cash equivalents and manages cashflow to optimise revenue.

Ms REID asks if the auditor’s report was qualified or unqualified

Mr HESKETH replies that it was an unqualified report

MOTION by Mr Andrew DONELLAN to receive the Auditor’s Report
CARRIED unanimously

Secretary's Report (Appendix D)

Presented by Ms GERMAINE

Questions from the floor

Ms Lin NAH asked a question about the difference between financial and non-financial membership

Ms GERMAINE notes that there is no difference. Ms GERMAINE also notes that donations can be accepted, but not as a financial member. Also noted is that this has been considered in the past but was decided as not something the Council wanted to pursue at the time.

MOTION by Mr Cameron TUDBALL to accept the Secretary’s Report
CARRIED unanimously.


MOTION by Ms REID that the membership approves of the actions of Council
SECONDED by Mr Peter (Surname Missed)
CARRIED with 5 abstentions

MOTION by Ms REID that the Linux Australia community extend their sincere thanks to Mr JOSHUA HESKETH for his exemplary, tireless and sustained efforts as President, Treasurer and Council Member of Linux Australia for the last six years. His affable nature, diplomatic approach, diligence and forethought have served the organisation invaluably.
CARRIED with 1 abstention by Mr HESKETH

MOTION by Ms REID that the Linux Australia community extend their sincere thanks to the Council for 2015: Vice President Mr JOSH STEWART, Secretary Ms GERMAINE, Treasurer Mr BREEDS, Council Members Mr JAMES ISEPPI, Mr McWHIRTER, Mr NEUGEBAUER
CARRIED with 4 abstentions

General Questions from the Floor

Mr ELLIS enquired about the potential for a partnership with the NZ Open Source Society, which is trying to get a lot more events in motion and wants to strengthen its partnership with Linux Australia

MOTION by Ms REID that the community in general support the closer working together of the NZ Open Source Society and Linux Australia
CARRIED unanimously.

Ms DONNA BENJAMIN highlighted the lack of awareness of the Drupal Community. Ms BENJAMIN notes that she is aware that the Drupal Association wants to own the Drupal Events in Australia

Mr HESKETH replies that council have been working with the Drupal Community over the last few months to strengthen the relationship. Acknowledged some miscommunication has occurred with WordPress Foundation and the way that sponsorship occurs. Need to work closely to reduce administrative overhead, and to align goals and interests. Both organisations want to run good open source events.

Ms BENJAMIN asks if there was an expectation that the profits from WordCamp would be returned to the WordPress Foundation

Mr HESKETH replied that No, and the Council would ensure clarity in the future.

Mr Tim (Surname Missed), Lead of WordPress Brisbane, noted that it was their understanding that the financial issues had been resolved, and that the WordPress Foundation is grateful for the services that LA provides and expresses its gratitude. The WordPress community in Australia is willing to work with the Drupal communities and LA to strengthen all communities.

Election of 2016 Council

Mr STEWART SMITH acting as Returning Officer

Mr SMITH notes that the election is run on software he wrote.

Full results

Election Results:
President: Mr HUGH BLEMINGS.
Vice President: Ms KATHY REID
Treasurer: Mr Tony BREEDS

Of note: the Election Software recorded an identical number of votes for Mr McWHIRTER and Mr JAMES ISEPPI. Due to the nature of the program, a ‘coin flip’ of unknown randomisation was used to present either candidate on the page, changing when the page is refreshed.

The Tie Break used was a Physical Coin Flip during the AGM. This was won by Mr McWHIRTER

It was noted that this Council includes the highest number of women ever to serve on a Council, and that women are in the majority on the Council

Mr SMITH thanked those who voted, the outgoing council, and the incoming council.

Mr HESKETH gave a warm welcome to the incoming council

Questions from the floor for the new council

Mr HESKETH notes that the votes in the 2015 election numbered 70, whereas this election, 2016, numbered 112. This is a significant increase.

Mr TENNESSEE LEEUWENBURG asked a question regarding active discussion, new names, directions and strategies

Mr BLEMINGS replied that this was something we need to engage with the council and the broader community.

Address from the Incoming President

Mr BLEMINGS noted he was grateful to serve the community in his new position.

Mr BLEMINGS thanked Mr SMITH as the returning officer

Mr BLEMINGS noted the issue of addressing the membership database, with tooling being but one of the interesting challenges ahead.

Mr BLEMINGS noted the expectation as the council to rely on the community

Mr BLEMINGS opened the floor to further questions


Ms CHERIE ELLIS noted communications with the NZ Open Source Society will be improved

MR JOSH HESKETH officially closing the meeting at 1916 hours

Appendix A: President’s Report
Executive summary

Linux Australia continues to be the peak body for Open Source communities in Australia with a strong year. 2015 saw seven open source conferences run within Australia and New Zealand by volunteers under the auspices of LA. This sustained strength in local events is a testament to the dedication and hard work of our collective and expanding community.

During the year the organisation had to deal with an unfortunate breach of their servers. Thankfully the damage was limited and no personal data is believed to be compromised[0]. Full details were released to the members as soon as it was practical and the overall handling and disclosure of the incident was widely praised. A second potential leak of information later in the year highlighted the need for more volunteer help and efforts in keeping our systems up to date and our data secure[1].

After the financial loss from the previous year, the organisation has managed to return a healthy profit and strengthen its overall position allowing itself to be self insuring against conference losses. This is thanks to the hard work of all the events and volunteers throughout the year.

While a 2016 budget is still being drafted it is the hope of the outgoing council that some of the extra funds will be put into the grants and sponsorship account allowing the organisation to create stronger roots in allied organisations such as the Software Freedom Conservancy and the Electronics Frontiers Australia.

The organisation is at a bit of a crossroads while it looks towards the future. I believe protecting our values[2] in an online-first world will become increasingly important. Software as a service poses significant challenges to open source, open data and privacy. I hope to spend a bit of time thinking about ways in which we can address some of these challenges both as Linux Australia and as an open community.

Kathy Reid kicked off a great inflection point on Linux Australia’s strategic direction, proposing some challenges, options and solutions[3]. Anthony Towns also weighed in with some very pragmatic thoughts that were well received by the members[4]. These discussions are ongoing and anybody interested in weighing in (or even better, volunteering) is encouraged to do so on the linux-aus[5] mailing list (which also contains the relevant archives).

As many are likely aware, I decided early on in the year that this term would be my last. I have been on the council for 6 years now (and involved with LA for even longer) and I think it's time for some fresh blood, so to speak. I can not give enough thanks to all of the members and fellow councilors for their support and hard work during this time. I look forward to welcoming in the new council and wish them all the best.

Events and Conferences

During 2015 there were 7 conferences/events run as part of Linux Australia.

DrupalSouth 2015
PyConAU 2015
OSDC 2015
GovHack 2015
JoomlaDay Brisbane 2015
WordCamp Brisbane 2015

The upcoming events currently being organised as part of Linux Australia:
PyCon AU 2016
WordCamp Sunshine Coast 2016
DrupalSouth 2016
DrupalGov 2016

Reports from the various conferences and their activities can be found at or their individual websites.

A timeline of all of Linux Australia’s events can be found here:

Grants and Sponsorships

Linux Australia has long had a grants programme[6] open to its members for helping fund items that are in alignment with our values[2]. This year the Council approved 2 requests from members and sponsored 3 initiatives.

Contributions to the Drupal8 Acceler8 fund to the value of $7,500
Grant Request from Andrew Donnellan to fund Russell Keith-Magee as a presenter at CompCon 2015 to the value of $1,200.
GovHack award “Open source bounty” for $2,000
Grant request from Donna Benjamin to the value of $1,000 to support the release party for Drupal 8.
DrupalCamp Silver Sponsorship to the value of $500


The current non-conference based sub-committees are:

Admin Team
AV Subcommittee
Mirror Team
Web Team
Sydney Linux Users Group
LOGIN (NewcastleLUG)
Media and Communications Subcommittee

Reports from the various sub-committees and their activities can be found at

Subcommittee policy and procedure updates

During 2014 the council spent a considerable amount of time working on a new subcommittee policy to help with oversight and the longevity of Linux Australia’s various events. The policy has proven to be a success and has ensured that our conferences have the appropriate help and responsibility assigned to them.

While the policy has been very effective in the early stages of a new subcommittee (during formation and initial budgeting), adherence to it has tended to dwindle as events draw closer to their dates. One challenge is finding effective community members to sit on the various subcommittees. Another is the additional bureaucracy that the policy adds.

The 2016 council should pay close attention to this to ensure that events do not become complacent. A review of the policy would also be helpful given the extra data after having used it for over a year to make sure the policy is actually practicable and actionable.

Advocacy, outreach and related activities

Through our Twitter account, we highlighted articles of interest to the Australian Linux community and grew our number of followers.

Outreach relies on our members doing a lot of the leg work. We would like to encourage those interested to take the initiative and reach out to the council for support.

Membership platform

One of our carry-over goals from 2014 we hoped to achieve this year was to update our membership platform (currently memberdb). Unfortunately other priorities and difficulties in infrastructure prevented significant effort being expended on this item.

The current membership platform is frail and in need of updating. We need ways to better manage importing of members from LCA registrations, and better ways of contacting our members who may not be on the mailing lists.
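The report doesn't describe memberdb's actual schema or API, but as a purely illustrative sketch of the kind of import-and-dedupe step described above (the `name`/`email` column names and the dict-based member store are assumptions, not memberdb's real interface), importing members from an LCA registration CSV export might look like:

```python
import csv

def import_registrations(members, csv_path):
    """Merge attendee records from a conference registration CSV export
    into an existing members dict keyed by lower-cased email address.
    Existing members are kept unchanged; new addresses are added."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            email = row.get("email", "").strip().lower()
            if email and email not in members:
                members[email] = {"name": row.get("name", "").strip()}
    return members
```

Keying on a normalised email address is one simple way to avoid duplicate entries when the same person appears in both the existing member list and a conference registration export.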

The Council, thanks to the hard work of Kathy Reid, has put together a list of requirements of a membership platform and will be looking for volunteers to help with shifting to a new system.

Additionally the Council has considered ways to keep the membership list relevant with only active participants. This is a continued discussion that is dependent on a new system to improve communication before any action can be taken.


Linux Australia has a lot of challenges ahead of it for the coming few years.

I would like to see the community thinking about some bigger questions. The organisation has been successful in recent years at running events, but less so at lobbying the government or advocating for policy changes.

Linux and open source are generally well-received technologies and don’t require advocacy in the same way that they did 10-15 years ago. This raises the question of how we stay relevant as Linux Australia. In fact, it is pretty obvious that we aren’t relevant as "Linux" Australia, since we’re much more about being an open source organisation.

A name change for our organisation has been discussed many times before, but I believe it to still be an important discussion. However, extending even further from that are more fundamental questions to the organisation. For example, with open source being so mainstream, what does that mean for us? Or what does the popularity of mobile and web platforms mean for open source? Are there opportunities or a need for advocacy in those areas? How do we extend our ideals to open web, open data, open government, open hardware and open culture? How do we ensure that our values[2] are upheld in our industries?

I would like to encourage and challenge our membership to discuss these types of issues in a big-picture sense and to give thought to how we might be able to address some of them. Clearly these types of questions are very difficult to tackle purely at a Council level - especially when they are concerned with the administration and ongoing running of the organisation - so it is imperative that the community attempts to gain a consolidated voice in these areas.

A lot of these challenges are reflected in the 2013 membership survey[7], where our brand and purpose were often misidentified by members who did not understand what we do. Addressing these systemic questions will help guide the direction of the organisation and also lead towards addressing issues such as our poor communication with the membership.

Closing comments

It has been an honour to be trusted by the community to lead this organisation for such a long time. While I haven't achieved as much as I had planned, it has been a privilege to be involved and to do what I could. I hope that I have been able to improve and continue the organisation's success during this time. Thank you all for this opportunity.

While I have left a large number of proposed, and deliberately unanswered, questions in this report, I hope our members are not discouraged. I believe that we’ve had a very successful year and from an everyday running standpoint we continue to be functional and productive.

However I also believe we will find ourselves at a crossroads where, if these questions are not addressed, over the next year or two we will fail to stay relevant and risk becoming complacent, existing purely to run conferences.

Perhaps that isn’t a bad thing, but it really comes down to our community and members. The Council is not here to drive the organisation but to merely enable its members. As such the direction and outcomes of the organisation will be defined by what we can do collectively, and not by what the Council tries to do.

Of course, with the emphasis on the members, I cannot state highly enough just how much work volunteers put into the organisation. I wish I could thank them all, but I don’t think it is possible. Needless to say, it is through the continued hard work of these individuals that Linux Australia continues to operate, and as such, I wish to say thank you to everybody who has been involved.

Similarly, although I will not be on the 2016 council, that doesn't mean I will be disappearing altogether. I intend to help the new council with these challenges in any way I can. I also want to make sure that I'm available to consult and offer advice where possible should the new council wish to reach out.

Thank you all for a wonderful term. I look forward to watching this organisation continue to grow to its full potential.

Warm Regards,
Joshua Hesketh
- President, Linux Australia
January, 2016


Appendix B: Treasurer’s Report

Appendix C: Secretary’s Report

Appendix D: Auditor’s Report

Appendix E: Record of Attendance

Andrew Donnellan
Andrew McDonnell
Andrew Pollock
Andrew Sands
Andrew Spiers
Andrew Tridgell
Andrew Van Slageren
Angus Cameron
Anthony Towns
Benjamin Ball
Brendan O'dea
Brett James
Brian May
Cameron Tudball
Cherie Ellis
Christopher Neugebauer
Clinton Roy
Craige McWhirter
David Bell
David Tulloh
Dion Hulse
Donna Benjamin
Eloise Macdonald-Meyer
Jack Burton
James Iseppi
James Polley
Jamie Wilkinson
Jared Ring
Jessica Smith
Joel Addison
Joel Shea
John Dalton
John Kristensen
Jonathan Woithe
Jono Bacon
Josh Stewart
Joshua Hesketh
Julian DeMarchi
Julien Goodwin
Kathy Reid
Katie McLaughlin
Leon Wright
Les Kitchen
Lin Nah
Luke Hovington
Michael Cordover
Marco Ostini
Mark Atwood
Mark Ellem
Mark Purcell
Mark Walkom
Matt Cengia
Matthew Franklin
Matthew Oliver
Michael Carden
Michael Ellery
Mike Abrahall
Miles Goodhew
Neill Cox
Paul Del
Paul Fenwick
Paul Foxworthy
Paul Wayper
Peter Chubb
Richard Lemon
Rob Bolin
Russell Coker
Russell Stuart
Ryan Sickle
Ryan Stuart
Sachi King
Stephen Walsh
Steven Ellis
Steven Hanley
Stewart Smith
Tim Ansell
Tim Serong


Sociological ImagesIs Mass Murder Now Part of the Repertoire of Contention?

If there’s one thing Americans can agree upon, it might be that people shouldn’t be indiscriminately firing guns into crowds, no matter how angry they are. The shooting in the Ft. Lauderdale airport is just the latest example. Mass shootings are on the rise and I’m fearful that what we are seeing isn’t just an increase in violence, but the rise of a new habit, a behavior that is widely recognized as a way to express an objection to the way things are.

To register an objection to something about the world, a person or group needs to engage in an action that other people recognize as a form of protest. We know, in other words, what protest looks like. It’s a strike, a rally, a march, a sit-in, a boycott. These are all recognizable ways in which individuals and groups can stake a political claim, whereas other group activities — a picnic, a group bike ride, singing together — are not obviously so. To describe this set of protest-related tools, the sociologist Charles Tilly coined the phrase “repertoire of contention.” Activists have a stock of actions to draw from when they want to make a statement that others will understand.

A culture’s repertoire of contention is in constant evolution. Each tool has to be invented and conceptually linked to the idea of protest before it can play this role. The sit-in, for example, was invented during the early civil rights movement. When African American activists and their allies occupied white-only restaurants, bringing lunch counters to a halt to bring attention to the exclusion of black people, they introduced a new way of registering an objection to the status quo, one that almost anyone would recognize today.

New ways of protesting are being invented every day: the hashtag, the hacktivist, and shutting down freeways are some newer ones. Some become part of the repertoire. Consider the image below by sociologist Michael Biggs, which shows how suicide as a form of protest “caught on”  in the 1960s:


I am afraid that mass murder has become part of the repertoire of contention. This is theoretically tricky – others have fought over what really counts as a social movement action – but it does seem quite clear that mass murder with a gun is a more familiar and more easily conceptualized way of expressing one’s discontent than it was, say, pre-Columbine. If a person is outraged by some state of affairs, mass killing is a readily available way to express that outrage both technically (thanks to lax gun regulation) and cognitively (because it is now part of the recognized repertoire).

Dylann Roof wanted to register his discontent with the place of black people in American society, Robert Lewis Dear stormed a Planned Parenthood with an anti-abortion message, Elliot Rodger was angry about women’s freedom to reject him, Omar Mateen killed dozens to express his (internalized) disgust for homosexuality, Gavin Long communicated his sense of rage and helplessness in the face of black death by killing police. At some point each thought, “What can I do to make a difference?” And mass murder came to mind.

In the aftermath of such events, the news media routinely contributes to the idea that mass murder is a form of protest by searching for an explanation above and beyond the desire to kill. That explanation often positions the rationale for the murder within the realm of politics, whether we call it terrorism, resistance, or prejudice. This further sends the message that mass murder is political, part of the American repertoire of contention.

The terrifying part is that once protest tools become part of the repertoire, they are diffused across movements and throughout society. It’s no longer just civil rights activists who use the sit-in; any and all activists do. Perhaps that’s why we see such a range of motivations among these mass murderers. It has become an obvious way to express an objection that the discontented can be sure others will understand.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


CryptogramFriday Squid Blogging: Simple Grilled Squid Recipe

Easy recipe from America's Test Kitchen.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Falkvinge - Pirate PartyThe Government didn’t install cameras and microphones in our homes. We did.


Global: It begins: Amazon’s constantly-listening robotic home assistant was near a domestic murder case, and now the Police wants access to anything it might have heard. There have been similar cases in the past, but this is where it starts getting discussed: There are now dozens of sensors in our house. Do we still have an expectation of privacy in our home?

A recurring theme in the dystopic fiction of the 1950s was an everpresent government watching everything you did, as witnessed in the infamous Nineteen Eighty-Four and many others. Adding to the dystopia, starting in the 1970s with movies such as Colossus, computers are typically added to the mix of watching everything all the time.

However, these fictional dystopias all got one critical thing wrong in predicting the future: the government never installed cameras and microphones in everybody’s home. We did. We did it ourselves. And we paid good money for them, too. A smart television set — with infrared cameras built in, watching the people watching the television set as well as listening to them — costs good money that we happily paid.

“The television set received and transmitted simultaneously. Any sound that Winston made, above the level of a very low whisper, would be picked up by it, moreover, so long as he remained within the field of vision which the metal plaque commanded, he could be seen as well as heard. There was of course no way of knowing whether you were being watched at any given moment. How often, or on what system, the government plugged in on any individual wire was guesswork. It was even conceivable that they watched everybody all the time. But at any rate they could plug in your wire whenever they wanted to. You had to live–did live, from habit that became instinct–in the assumption that every sound you made was overheard, and, except in darkness, every movement scrutinized.” — 1984

And now, the police wants access to all of it, not unlike in the brilliant short movie Plurality. In news this week, the police has just requested access to the recordings made by an Amazon smart unit in the home in order to solve a murder.


Of course, it always starts like this. A murder case. One murder case. The next time, it’s an assault rape case. The public opinion wants blood, and privacy has no value compared to catching a killer or rapist. So somebody, somewhere with authority, decides that privacy doesn’t apply in cases “like this”. Then, the government notes this mechanism has already been used for “felonies” – severe crime in general – and decides to apply the same rule for tax evasion, a decision which has no support in public opinion, but which is a crime that the government considers severe. A few more years, and the blanket privacy invasion is used to sue teenagers sharing music and to issue the mundanest of parking tickets.

(I want to point out that this ridiculous example of a slippery slope is exactly what happened with the hated mandatory Internet logging laws in Europe. They started out against murder cases and mass-murder terrorism, and before even a decade had passed, the privacy invasions were used against “all crime, including ticket-level misdemeanors”, and the copyright industry had special private access to the surveillance data for the purpose of suing people. This isn’t made up, it’s exactly what happens. The European Supreme Court struck that shit down as utterly unconstitutional, but it took a decade.)

The question is as disturbing as it is important. Legally speaking, do we still have an expectation of privacy in our own home? Especially when we installed equipment for the express purpose of listening to us and watching us?

As the Snowden movie came out, it was highlighted yet again that our mobile phones are constantly-wiretappable microphones, as the movie version of Edward Snowden took everybody’s phones and put them in a Faraday cage in his hotel room. How long until this is an ordinary reflex with ordinary people, and not just the most knowledgeable? “You had to live — did live, from habit that became instinct — in the assumption that every sound you made was overheard…”

Legally speaking, do we still have an expectation of privacy in our own home?

There are dozens of microphones and cameras in an ordinary household today. Not to mention all the other sensors: Wirelessly connected scales, cooking equipment, lighting, cars, toothbrushes, energy sensors, fridges. All connected. All wiretappable. If you haven’t used the “calm” color setting on the lights in your home in a while, the government has the ability to know. If your body fat increases, or if you don’t brush your teeth regularly. If you change your coffee grind, or switch to stronger espresso. If you undercook your meat. The list goes on.

There are five separate and important aspects to this.

The first question is if law enforcement can plant surveillance on suspects of serious crime, using their own equipment. Most people would agree that this is reasonable.

The second question is if law enforcement can retroactively activate surveillance, as in the murder case above. As this requires watching and listening to everybody, all the time, it completely eliminates the concept of privacy (even if, as the police tends to argue, only a small fraction of collected data is used for later investigations: the same was true for letters in East Germany — they were all opened and analyzed, but only a small fraction of them were forwarded for later action).

The third question is if law enforcement can legally use your equipment against you: this requires breaking into your equipment and effectively taking control of it. This is a completely separate topic from the first question, which assumes law enforcement is using (and paying for) its own equipment to violate your privacy. Five years ago, it was uncovered that the German Federal Police had broken into ordinary people’s computers to wiretap people – and with root access comes access to webcams and microphones, too. This is a deeply unsettling concept, one that gives national security employees a dangerous conflict of interest, as they’re supposed to be keeping people safe but can use people’s not-being-safe to make their own job easier, if this is permitted.

The third-and-a-half question is if law enforcement can coerce a third party to wiretap you retroactively, like Amazon or Google, eliminating your agency in the matter.

The fourth question is inter-country espionage, such as when the United States NSA broke into Belgacom (the Belgian national telecom operator) and wiretapped the entire European executive and legislative branches, in addition to Angela Merkel’s personal phone. While outrageous, espionage at this level has always existed and to some degree it’s up to every country to protect its own assets.

The fifth and final aspect is the notorious insecurity of all the connected things. The technology sector has only started to learn how to make secure software, including frequent patches. Other industries who are adding connectivity as a bonus feature – scales, fridges, toasters – will be notoriously insecure, won’t patch, and will be around homes for decades.

This discussion is just getting started. Privacy remains your own responsibility.

Syndicated article
This article has previously appeared on Private Internet Access.

(This is a post from Falkvinge on Liberty, obtained via RSS at this feed.)

LongNowThe Future Will Have to Wait

Eleven years ago this month, Pulitzer Prize winning author Michael Chabon published an article in Details Magazine about Long Now and the Clock.  It continues to be one of the best and most poignant pieces written to date…


The Future Will Have to Wait

Written by Michael Chabon for Details in January of 02006

I was reading, in a recent issue of Discover, about the Clock of the Long Now. Have you heard of this thing? It is going to be a kind of gigantic mechanical computer, slow, simple and ingenious, marking the hour, the day, the year, the century, the millennium, and the precession of the equinoxes, with a huge orrery to keep track of the immense ticking of the six naked-eye planets on their great orbital mainspring. The Clock of the Long Now will stand sixty feet tall, cost tens of millions of dollars, and when completed its designers and supporters, among them visionary engineer Danny Hillis, a pioneer in the concept of massively parallel processing; Whole Earth mahatma Stewart Brand; and British composer Brian Eno (one of my household gods), plan to hide it in a cave in the Great Basin National Park in Nevada [now in West Texas], a day’s hard walking from anywhere. Oh, and it’s going to run for ten thousand years. That is about as long a span as separates us from the first makers of pottery, which is among the oldest technologies we have. Ten thousand years is twice as old as the pyramid of Cheops, twice as old as that mummified body found preserved in the Swiss Alps, which is one of the oldest mummies ever discovered. The Clock of the Long Now is being designed to thrive under regular human maintenance along the whole of that long span, though during periods when no one is around to tune it, the giant clock will contrive to adjust itself. But even if the Clock of the Long Now fails to last ten thousand years, even if it breaks down after half or a quarter or a tenth that span, this mad contraption will already have long since fulfilled its purpose. Indeed the Clock may have accomplished its greatest task before it is ever finished, perhaps without ever being built at all. The point of the Clock of the Long Now is not to measure out the passage, into their unknown future, of the race of creatures that built it. 
The point of the Clock is to revive and restore the whole idea of the Future, to get us thinking about the Future again, to the degree if not in quite the same way that we used to do, and to reintroduce the notion that we don’t just bequeath the future—though we do, whether we think about it or not. We also, in the very broadest sense of the first person plural pronoun, inherit it.

The Sex Pistols, strictly speaking, were right: there is no future, for you or for me. The future, by definition, does not exist. “The Future,” whether you capitalize it or not, is always just an idea, a proposal, a scenario, a sketch for a mad contraption that may or may not work. “The Future” is a story we tell, a narrative of hope, dread or wonder. And it’s a story that, for a while now, we’ve been pretty much living without.

Ten thousand years from now: can you imagine that day? Okay, but do you? Do you believe “the Future” is going to happen? If the Clock works the way that it’s supposed to do—if it lasts—do you believe there will be a human being around to witness, let alone mourn its passing, to appreciate its accomplishment, its faithfulness, its immense antiquity? What about five thousand years from now, or even five hundred? Can you extend the horizon of your expectations for our world, for our complex of civilizations and cultures, beyond the lifetime of your own children, of the next two or three generations? Can you even imagine the survival of the world beyond the present presidential administration?

I was surprised, when I read about the Clock of the Long Now, at just how long it had been since I had given any thought to the state of the world ten thousand years hence. At one time I was a frequent visitor to that imaginary mental locale. And I don’t mean merely that I regularly encountered “the Future” in the pages of science fiction novels or comic books, or when watching a TV show like The Jetsons (1962) or a movie like Beneath the Planet of the Apes (1970). The story of the Future was told to me, when I was growing up, not just by popular art and media but by public and domestic architecture, industrial design, school textbooks, theme parks, and by public institutions from museums to government agencies. I heard the story of the Future when I looked at the space-ranger profile of the Studebaker Avanti, at Tomorrowland through the portholes of the Disneyland monorail, in the tumbling plastic counters of my father’s Seth Thomas Speed Read clock. I can remember writing a report in sixth grade on hydroponics; if you had tried to tell me then that by 2005 we would still be growing our vegetables in dirt, you would have broken my heart.

Even thirty years after its purest expression on the covers of pulp magazines like Amazing Stories and, supremely, at the New York World’s Fair of 1939, the collective cultural narrative of the Future remained largely an optimistic one of the impending blessings of technology and the benevolent, computer-assisted meritocracy of Donald Fagen’s “fellows with compassion and vision.” But by the early seventies—indeed from early in the history of the Future—it was not all farms under the sea and family vacations on Titan. Sometimes the Future could be a total downer. If nuclear holocaust didn’t wipe everything out, then humanity would be enslaved to computers, by the ineluctable syllogisms of “the Machine.” My childhood dished up a series of grim cinematic prognostications best exemplified by the Hestonian trilogy that began with the first Planet of the Apes (1968) and continued through The Omega Man (1971) and Soylent Green (1973). Images of future dystopia were rife in rock albums of the day, as on David Bowie’s Diamond Dogs (1974) and Rush’s 2112 (1976), and the futures presented by seventies writers of science fiction such as John Brunner tended to be unremittingly or wryly bleak.

In the aggregate, then, stories of the Future presented an enchanting ambiguity. The other side of the marvelous Jetsons future might be a story of worldwide corporate-authoritarian technotyranny, but the other side of a post-apocalyptic mutational nightmare landscape like that depicted in The Omega Man was a landscape of semi-barbaric splendor and unfettered (if dangerous) freedom to roam, such as I found in the pages of Jack Kirby’s classic adventure comic book Kamandi, The Last Boy on Earth (1972-76). That ambiguity and its enchantment, the shifting tension between the bright promise and the bleak menace of the Future, was in itself a kind of story about the ways, however freakish or tragic, in which humanity (and by implication American culture and its values) would, in spite of it all, continue. Eed plebnista, intoned the devolved Yankees, in the Star Trek episode “The Omega Glory,” who had somehow managed to hold on to and venerate as sacred gobbledygook the Preamble to the Constitution, norkon forden perfectunun. All they needed was a Captain Kirk to come and add a little interpretive water to the freeze-dried document, and the American way of life would flourish again.

I don’t know what happened to the Future. It’s as if we lost our ability, or our will, to envision anything beyond the next hundred years or so, as if we lacked the fundamental faith that there will in fact be any future at all beyond that not-too-distant date. Or maybe we stopped talking about the Future around the time that, with its microchips and its twenty-four-hour news cycles, it arrived. Some days when you pick up the newspaper it seems to have been co-written by J. G. Ballard, Isaac Asimov, and Philip K. Dick. Human sexual reproduction without male genetic material, digital viruses, identity theft, robot firefighters and minesweepers, weather control, pharmaceutical mood engineering, rapid species extinction, US Presidents controlled by little boxes mounted between their shoulder blades, air-conditioned empires in the Arabian desert, transnational corporatocracy, reality television—some days it feels as if the imagined future of the mid-twentieth century was a kind of checklist, one from which we have been too busy ticking off items to bother with extending it. Meanwhile, the dwindling number of items remaining on that list—interplanetary colonization, sentient computers, quasi-immortality of consciousness through brain-download or transplant, a global government (fascist or enlightened)—have been represented and re-represented so many hundreds of times in films, novels and on television that they have come to seem, paradoxically, already attained, already known, lived with, and left behind. Past, in other words.

This is the paradox that lies at the heart of our loss of belief or interest in the Future, which has in turn produced a collective cultural failure to imagine that future, any Future, beyond the rim of a couple of centuries. The Future was represented so often and for so long, in the terms and characteristic styles of so many historical periods from, say, Jules Verne forward, that at some point the idea of the Future—along with the cultural appetite for it—came itself to feel like something historical, outmoded, no longer viable or attainable.

If you ask my eight-year-old about the Future, he pretty much thinks the world is going to end, and that’s it. Most likely global warming, he says—floods, storms, desertification—but the possibility of viral pandemic, meteor impact, or some kind of nuclear exchange is not alien to his view of the days to come. Maybe not tomorrow, or a year from now. The kid is more than capable of generating a full head of optimistic steam about next week, next vacation, his tenth birthday. It’s only the world a hundred years on that leaves his hopes a blank. My son seems to take the end of everything, of all human endeavor and creation, for granted. He sees himself as living on the last page, if not in the last paragraph, of a long, strange and bewildering book. If you had told me, when I was eight, that a little kid of the future would feel that way—and that what’s more, he would see a certain justice in our eventual extinction, would think the world was better off without human beings in it—that would have been even worse than hearing that in 2006 there are no hydroponic megafarms, no human colonies on Mars, no personal jetpacks for everyone. That would truly have broken my heart.

When I told my son about the Clock of the Long Now, he listened very carefully, and we looked at the pictures on the Long Now Foundation’s website. “Will there really be people then, Dad?” he said. “Yes,” I told him without hesitation, “there will.” I don’t know if that’s true, any more than do Danny Hillis and his colleagues, with the beating clocks of their hopefulness and the orreries of their imaginations. But in having children—in engendering them, in loving them, in teaching them to love and care about the world—parents are betting, whether they know it or not, on the Clock of the Long Now. They are betting on their children, and their children after them, and theirs beyond them, all the way down the line from now to 12,006. If you don’t believe in the Future, unreservedly and dreamingly, if you aren’t willing to bet that somebody will be there to cry when the Clock finally, ten thousand years from now, runs down, then I don’t see how you can have children. If you have children, I don’t see how you can fail to do everything in your power to ensure that you win your bet, and that they, and their grandchildren, and their grandchildren’s grandchildren, will inherit a world whose perfection can never be accomplished by creatures whose imagination for perfecting it is limitless and free. And I don’t see how anybody can force me to pay up on my bet if I turn out, in the end, to be wrong.

CryptogramThe Effect of Real Names on Online Behavior

Good article debunking the myth that requiring people to use their real names on the Internet makes them behave better.

Worse Than FailureError'd: Errors for Everyone!

"All I wanted to do was to unsubscribe from Credit Sesame emails, but instead I got more than I bargained for," writes Shawn A.


Mike R. wrote, "Sure, Simon Rewards, I'll click the button to update GlassWire, but only if there's a reward in it for me."


"This blue screen ad campaign is really convincing. I'm sold!" writes Roger K.


"I hope it's just a quick coffee run," wrote Jeremy E.


Ben writes, "It's good to see Windows XP reliably running ticket machines in the Southern Rail region of the UK (picture credit to Gil Tompkinson from Brighton)"


"Jerusalem Central bus station has installed new automatic ticket machines which is nice," Eugene F. wrote, "I certainly would like the interface to be at least a little bit more descriptive though."


Phil writes, "A colleague was driving behind a van labelled with - so we decided to check out their site. However, based on the 'Latin' descriptions alone it is difficult to tell each meat apart."


[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

Planet Linux AustraliaGlen Turner: Blog moving to Dreamwidth

Getting less and less happy with LiveJournal as a blogging platform: limited input formats, poor presentation, etc. But running your own blogging platform is a nightmare too, as so many of them are written in PHP.

Although it's not really a solution, this blog is moving to Dreamwidth.

Krebs on SecurityStolen Passwords Fuel Cardless ATM Fraud

Some financial institutions are now offering so-called “cardless ATM” transactions that allow customers to withdraw cash using nothing more than their mobile phones. But as the following story illustrates, this new technology also creates an avenue for thieves to quickly and quietly convert stolen customer bank account usernames and passwords into cold hard cash. Worse still, fraudulent cardless ATM withdrawals may prove more difficult for customers to dispute because they place the victim at the scene of the crime.

A portion of the third rejection letter that Markula received from Chase about her $2,900 fraud claim. The bank ultimately reversed itself and refunded the money after being contacted by KrebsOnSecurity, stating that Markula's account was one of several that were pilfered by a crime gang that has since been arrested by authorities.

San Francisco resident Kristina Markula told KrebsOnSecurity that it wasn’t until shortly after a vacation in Cancun, Mexico in early November 2016 that she first learned that Chase Bank even offered cardless ATM access. Markula said that while she was still in Mexico she tried to view her bank balance using a Chase app on her smartphone, but that the app blocked her from accessing her account.

Markula said she thought at the time that Chase had blocked her from using the app because the request came from an unusual location. After all, she didn’t have an international calling or data plan and was trying to access the account via Wi-Fi at her hotel in Mexico.

Upon returning to the United States, Markula called the number on the back of her card and was told she needed to visit the nearest Chase bank branch and present two forms of identification. At a Chase branch in San Francisco, she handed the teller a California driver’s license and her passport. The branch manager told her that someone had used her Chase online banking username and password to add a new mobile phone number to her account, and then move $2,900 from her savings to her checking account.

The manager told Markula that whoever made the change then requested that a new mobile device be added to the account, and changed the contact email address for the account. Very soon after, that same new mobile device was used to withdraw $2,900 in cash from her checking account at the Chase Bank ATM in Pembroke Pines, Fla.

A handful of U.S. banks, including Chase, have deployed ATMs that are capable of dispensing cash without requiring an ATM card. In the case of Chase ATMs, the customer approaches the cash machine with a smart phone that is already associated with a Chase account. Associating an account with the mobile app merely requires the customer to supply the app with their online banking username and password.

Users then tell the Chase app how much they want to withdraw, and the app creates a unique 7-digit code that needs to be entered at the Chase ATM (instead of a numeric code, some banks offering cardless ATM withdrawals have the app display a QR code that is read by a scanner on the ATM). Assuming the code checks out, the machine dispenses the requested cash and the transaction is complete. At no time is the Chase customer asked to enter his or her 4-digit ATM card PIN.
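The code-based withdrawal flow described above can be sketched in miniature. This is a hypothetical illustration, not Chase's actual implementation: the function names, the five-minute expiry, and the in-memory store are all assumptions.

```python
import secrets
import time

# Hypothetical sketch of a one-time withdrawal-code flow. Everything here
# (names, expiry, storage) is an illustrative assumption, not a real bank API.

CODE_TTL_SECONDS = 300          # assume a code expires after five minutes

_pending = {}                   # code -> (account_id, amount, issued_at)

def issue_code(account_id: str, amount: int) -> str:
    """App side: create a unique 7-digit code for a requested withdrawal."""
    code = f"{secrets.randbelow(10**7):07d}"
    _pending[code] = (account_id, amount, time.time())
    return code

def redeem_code(code: str, requested_amount: int):
    """ATM side: validate the code; return the account to debit, or None."""
    entry = _pending.pop(code, None)   # pop makes the code single-use
    if entry is None:
        return None                    # unknown or already-used code
    account_id, amount, issued_at = entry
    if time.time() - issued_at > CODE_TTL_SECONDS:
        return None                    # expired
    if amount != requested_amount:
        return None                    # amount mismatch
    return account_id                  # caller dispenses the cash
```

Note that nothing in this flow requires the customer's 4-digit PIN: whoever holds the online banking username and password can mint a valid code.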

Most financial institutions will limit traditional ATM customers to withdrawing $300-$600 per transaction, but some banks have set cardless transaction limits at much higher amounts under certain circumstances. For example, at the time Markula’s fraud occurred, the limit was set at $3,000 for withdrawals during normal bank business hours and made at Chase ATMs located at Chase branches.

Markula said the bank employees helped her close the account and file a claim to dispute the withdrawal. She said the teller and the bank manager reviewed her passport and confirmed that the disputed transaction took place while she was out of the country, between the dates her passport was stamped by U.S. and Mexican immigration authorities. However, Markula said Chase repeatedly denied her claims.

“We wanted to thank you for providing your information while we thoroughly researched your dispute,” Chase’s customer claims department wrote in the third rejection letter sent to Markula, dated January 5, 2017. “We confirmed that the disputed charges were correct and we will not be making an adjustment to your account.”

Markula said she was dumbfounded by the rejection letter because the last time she spoke with a fraud claims manager at Chase, the manager told her that the transaction had all of the hallmarks of an account takeover.

“I’m pretty frustrated at the process so far,” said Markula, who shared with this author a detailed timeline of events before and after the disputed transaction. “Not captured in this timeline are the countless phone calls to the fraud department which is routed overseas. The time it takes to reach someone and poor communication seems designed to make one want to give up.”

KrebsOnSecurity contacted Chase today about Markula’s case. Chase spokesman Mike Fusco said Markula’s rejection letter was incorrect, and that further investigation revealed she had been victimized by a group of a half-dozen fraudsters who were caught using the above-described technique to empty out Chase bank accounts.

Fusco forwarded this author a link to a Fox28 story about six men from Miami, Fla. who were arrested late last year in Columbus, Ohio in connection with what authorities there called a “multi-state crime spree” targeting Chase accounts.

“We escalated it and reviewed her issue and determined she did have fraud on her account,” Fusco said.  “We’re reimbursing her and we’re really sorry. This small pilot we ran allowed a limited number of customers to access cash at Chase ATMs without a card. During the pilot we detected some fraudulent activity where a group of people were able to go online and change the customer’s information and get the one-time access code, and we immediately notified the authorities.”

Chase declined to say how many customers like Markula were victimized by this gang. Unfortunately, Chase apparently neglected to notify the victims, as Markula's case shows.

“It makes you wonder how many other people didn’t dispute the charges,” she said. “Thankfully, I don’t give up easily.”

Fusco said Chase had made changes to better detect these types of fraudulent transactions going forward, and that it had lowered the withdrawal limit for these types of transactions — although for security reasons Fusco declined to say what the new limit was.

Fusco also said the bank’s system should have sent out an email alert to the original email on file in the event that the email on the account is changed, but Markula said she’s confident no such email ever landed in her inbox.

Avivah Litan, a fraud analyst at Gartner Inc., says many banks see mobile authentication as the way of the future for online banking and ATM transactions. She said most banks would love to be able to move away from physical bank cards, which often need to be replaced several times a year in response to data breaches at various retailers.

“A lot of banks see cardless transactions as a great way to reduce fraud and speed up transactions, but not many are offering it yet as a feature to customers,” Litan said.

Litan said Markula’s case echoes the spike in fraud that some banks saw after Apple debuted its Apple Pay platform. Many banks chose to adopt Apple Pay without also beefing up security around how they validate new customers and new mobile devices. As a result, this allowed fraudsters to take stolen credit card numbers and expiration dates — data that previously was only good for fraudulent online transactions — tie those cards to iPhones, and use the phones to commit card fraud at brick-and-mortar stores that accepted Apple Pay.

“Identity proofing remains the weakest point in mobile banking,” Litan said. “Asking for the customer’s username and password to on-board a new mobile device isn’t enough.”

Litan said Chase should require customers who wish to conduct cardless ATM transactions to enter their PIN in addition to the one-time code. But she said even that was not enough.

Litan said Chase should have flagged the transaction as highly suspicious from the get-go, given that the fraudsters accessed her account from a new location, changed her contact email address, added a new device and withdrew just under the daily maximum — all in a very short span of time.
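The combination of signals Litan lists lends itself to a simple rule-based score. A minimal sketch follows; the weights and threshold are illustrative assumptions on my part, not any bank's actual fraud rules.

```python
# Hedged sketch of rule-based risk scoring over the signals from the
# Markula case: each signal alone is weak, but together inside a short
# time window they should flag the session. Weights/threshold are assumed.

def risk_score(events: dict) -> int:
    """Score an account session on the signals described in the article."""
    score = 0
    if events.get("new_location"):
        score += 1                  # login from an unfamiliar place
    if events.get("email_changed"):
        score += 2                  # contact email swapped out
    if events.get("device_added"):
        score += 2                  # new mobile device enrolled
    if events.get("withdrawal_near_limit"):
        score += 2                  # cash-out just under the daily max
    if events.get("window_minutes", 24 * 60) < 60:
        score += 2                  # everything happened within an hour
    return score

def should_block(events: dict, threshold: int = 5) -> bool:
    """Hold the transaction for review when the combined score is high."""
    return risk_score(events) >= threshold
```

Under these assumed weights, any one signal falls below the threshold, but the full Markula-style sequence trips it easily.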

“ATM transactions should have much stronger fraud controls because consumers don’t have as strong protections as they do with other transactions,” Litan said. “If a customer’s card is used fraudulently at a retailer, for example, the consumer is protected by Visa and MasterCard’s zero liability rule, and they can generally expect to get their money back. But when you withdraw cash from an ATM, you’re not protected by those rules. It’s down to Regulation E and your bank’s policies.”

Under federal Regulation E, if a retail banking customer reports fraud, the bank must investigate activity covered by the first statement showing it, plus 60 days from the date that statement was mailed by the financial institution. Unless the institution can prove the transaction wasn't fraud, it must reimburse the consumer. However, any activity that takes place outside of that timeframe carries unlimited liability for the consumer, as the financial institution may have been able to prevent the loss had it been reported in a timely manner.
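The 60-day window comes down to a date comparison. A minimal sketch of the liability check, assuming the report date is measured against the day the first statement showing the activity was mailed:

```python
from datetime import date, timedelta

# Illustrative sketch of the Regulation E dispute window described above:
# the consumer's liability is limited only if the fraud is reported within
# 60 days of the date the relevant statement was mailed.

def within_reg_e_window(statement_mailed: date, reported: date) -> bool:
    """True if the fraud report falls inside the 60-day protection window."""
    return reported <= statement_mailed + timedelta(days=60)
```

So a statement mailed November 1 leaves until December 31 to report; a report in mid-January would fall outside the window.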

Fusco added that consumers should beware of phishing scams, and consider asking their financial institution to secure their accounts with a special passphrase or code that needs to be supplied when authenticating with the bank over the telephone (a precaution I have long advised).

Also, if your bank offers two-step or two-factor authentication — such as requiring a text message with a one-time code to be sent to your mobile device when someone attempts to log in from an unknown device or location — please take advantage of that feature. A list of banks that offer this additional security feature is available online.

Also, as I hope the Regulation E paragraph above makes clear, do not count on your bank to block fraudulent transfers, and remember that ultimately you are responsible for spotting and reporting fraudulent transactions.

Litan said she won’t be surprised if this incident gives more banks pause about moving to cardless ATM transactions.

“This is the first case I’m aware of in the United States where this type of fraud has been an issue,” she said. “I’m guessing this will slow the banks down a bit in adopting the technology because they’ll see now how easy it is for criminals to take advantage of it.”

Update, Jan. 6, 9:44 a.m. ET: Looks like Chase could have learned from the experience of NatWest, a big bank in the U.K. that experienced much the same fraud five years ago after enabling a cardless “get cash” feature.


Valerie AuroraOne way to resist Trump: become an Ally Skills Workshop teacher

We have a problem in the U.S.: 63 million people who voted for Trump, either despite or because of his record of advocating and practicing racism, sexism, xenophobia, ableism, transphobia, religious hatred, and other cruel and backward beliefs. This election made it clear how important it is for people of good will to learn the skills to stand up for their values, and, when possible, to change the hearts and minds of people who don’t yet understand the implications of supporting someone with these beliefs. You can be a crucial part of changing some of these 63 million minds – keep reading to learn how.

I teach a workshop based on the idea that people who have the most power and influence in society should take on more of the work of fighting systemic discrimination. It’s called the Ally Skills Workshop, and I’ve been teaching it since 2012 along with co-creator Mary Gardiner, Leigh Honeywell, Kendra Albert, Y-Vonne Hutchinson, and many others. In this workshop, I teach people simple, everyday techniques for standing up to systemic oppression as well as making systemic changes to reduce oppression. It teaches people a wide range of responses, from simply saying, “Not cool, dude,” at a party to helping people be heard in a meeting to reforming the way your company interviews new employees. Kendra Albert recently created a version of the workshop specialized for talking to friends and family who support Trump’s policies.

I want the workshop to reach more than a few dozen people a week. That’s why I teach other people to lead the Ally Skills Workshop with a train-the-trainers class. The next train-the-trainers classes are on January 15, 2017 in Oakland, California, and January 22, 2017 through online video. Tickets are priced on a need-based sliding scale, with free tickets available if you email me directly and tell me more about why you’d like to take the training. There’s no fee or charge for teaching the workshop later on – all of the materials are freely reusable and modifiable at no cost.

Teaching the workshop isn’t for everyone. From my experience, here are the three most important qualities for an Ally Skills Workshop teacher to have:

  • A fairly broad understanding of the issues facing a number of different marginalized groups
  • Comfort with speaking extemporaneously in public, including interrupting or confronting people when necessary
  • A strong sense of empathy for a wide range of people (or the ability to turn your empathy up during the workshop)

I often recommend that people teach the Ally Skills Workshop in pairs so that it’s less pressure on one person to be able to answer all the questions or respond appropriately in the moment. (I also teach people how to handle not knowing the answer to a question along with other useful teaching skills.)

If teaching the Ally Skills Workshop isn’t for you, I and many others are willing and able to teach this workshop around the world. Email me at to find out more.

Tagged: ally skills, fascism, politics

Planet Linux AustraliaGabriel Noronha: Charging point connectors & socket outlets.

Mainly for my own reference.

The New Zealand Transport Agency explanation of charging sockets and plugs.

This is pretty comprehensive and easily applicable to Australia. We have some more choice of vehicles not listed there, mainly among the hybrids, but stricter rules for imports, so no second-hand Japanese LEAFs here.

Their recommendation of Type 2 sockets for public AC charging, and CHAdeMO and Type 2 CCS for public DC charging, is also something I agree with.

New Zealand, like Australia, had started to roll out Type 1 CCS, but it looks like they'll be changing all the stations to Type 2 CCS to align with the European charging standard, which makes more technical sense as our power grids are similar in voltage and frequency.

Personally I hope Australia moves to Type 2 CCS like NZ has, but at the moment all the power is in the vehicle manufacturers' hands, and they benefit from Type 1 CCS, as Australia would become the only country in the world with cars that are right-hand drive and Type 1 CCS. That would stop any sort of importation of second-hand electric cars even if the rules were relaxed.

Worse Than FailureA Font of Misery

After his chilling encounter in the company’s IT Cave, new hire George spent some time getting his development workstation set up. Sadly, his earlier hope that the PC in his office was a short-term placeholder until something better came in was dashed to pieces. This PC was a small-form-factor budget system, relying on an old dual-core processor, 2 GB RAM, a 5400 RPM “green” disk drive, and integrated graphics with a single output port, to which was connected an aging 17" LCD monitor with a failing backlight.

A preview of a glitchy font

George got to work installing software packages from a network drive that was presumably clicking itself to death in the dark IT office. With a PC nearly ten years behind the curve, George had plenty of time, several days, in fact, to drink coffee while exploring the building. His unease from the encounter with the IT guy eventually faded as he met other employees who seemed as normal as he was, discovered conference rooms with normal-looking Scrum boards, and found offices and cubicles that would not appear out of place in any modern, successful software company. A friendly member of the Marketing Department even gave him some swag: logo’d pens, stress balls, and notepads. Perhaps the unusual IT guy and his dark, precarious office were just an anomaly in an otherwise excellent organization.

A week into his new job, George finally had his system set up enough to look around at the software products he’d be working on. During his interview, he’d been told everything was “superbly” documented.

A coworker emailed him links to their developer documentation which was hosted on an internal server somewhere. George followed the link, and his web browser sat on a loading page for way too long. As he waited, and waited, and waited for the page to load, he almost thought he could hear the clicking and clunking of failing disk drives from whichever ancient pile of failing hardware served as the company’s documentation server, but eventually a SharePoint page presented itself on his screen.

The “Developer Documentation” was unexpectedly short. In fact, George read through it faster than it took to load, feeling a sense of dread once he realized what it actually was: three pages of buzzword-laden marketing material! He read about how “superb” the application is and how it has helped millions of companies “leverage new synergies for key wins”. Nowhere could he find a simple, developer-centric description of the application. When he pressed for more documentation, his coworkers shrugged. “That is the documentation,” they explained. “The bosses say it’s good enough and it’s a waste of time to write more.”

George’s sense of dread continued to increase.

And so he did all he could. He checked out the application from source control and went spelunking.

On his first run, he noticed the application’s text did not look right. Characters were glitched in various ways, with bad kerning, inconsistent alignment, and missing/extra pixels, though it was still generally readable with some effort.

Thinking he was missing a dependency, he asked his coworkers for their opinion. “No, it normally does that,” they explained with a shrug. “Most of the time.”

“Do we have unit tests for this?” he asked, but deep down in his gut he knew he wouldn’t like the answer.

“Testing programs are in the design and planning stage,” they responded, even though the application had been on the market for eight years now. “The bosses don’t like to spend too much time on testing.”

He still had no direction on what tasks to perform, so George took it upon himself to dig into the font issue, if nothing else to learn more about the codebase. He downloaded a few third-party font test programs from some prominent tech companies and they all agreed that the application had nearly 1,300 basic font rendering errors.

His sense of dread was starting to overwhelm him as he considered his future. How could an application possibly have millions of sales and installs with nearly-unreadable fonts? And how could it possibly be maintained with no testing and no documentation?

He wrote a memo explaining some of his findings and forwarded it to several coworkers, asking various questions before putting too much effort into fixes that could cause issues unforeseeable by a newbie unfamiliar with the history behind the application’s overly-complex font-handling codebase.

Later that day, he received a long email directly from the company president. In tirade form, it explained that George was wasting time, there can’t possibly be that many bugs, and if anything like this happens again the time would be deducted from his paycheck. It ended with the president explaining that George was obviously a f***up who would never amount to anything at the company, but he was willing to give him another chance.

George didn’t want another chance. As he walked past the IT Office on his way to the HR Office to announce his resignation, he briefly wondered how much damage his foam stress ball could do to already-failing disk drives if he were to chuck it through the door into the darkness within.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!


Planet Linux AustraliaBinh Nguyen: Explaining Prophets 2, What is Liberal Democracy?, and More

Obvious continuation of the last post. I suspect that some scientists may have experienced prophetic visions (including Einstein, Newton, Galileo, Da Vinci, Edison, etc.) but didn't talk about them publicly. Clear that there is almost a 'code' among prophets and genuinely religious people. They seem to know one