Planet Russell


Worse Than Failure: Error'd: Null and Vague

"UPS has sent my parcel to the corporeal equivalent of /dev/null," wrote Steve J.


"If the error message 'recurs', I don't know how much support will be able to help me out," Travis writes.


"When IntelliJ seems to be second guessing it's ability to detect file encoding," writes Felix V.


Eamon B. wrote, "Aww! That UID sure makes me feel loved, Uber. As if my $potential earnings weren't enough!"


"On the one hand, it seems that I'll have to find my departure flight times elsewhere," wrote Fay A., "but on the bright side, I can see that Au Bon Pain is open."


"I didn't want to use an existing certificate, but the advanced options for creating a new one don't leave me much of a choice," writes Ingo B.


"Swarm appears to think that a glass is half full and first place is last as well," Dmitry Z. writes.


[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux Australia: Ben Martin: 3040 spindle upgrade: the one-day crossover plate

Shown below is the spindle that came with my 3040 "engraving" cnc next to the 2.2kw water cooled monster that I am upgrading to. See my previous blog post for videos of the electronics and spindle test on the bench.

The crossover plate, which I thought was going to be the most difficult part, was completed in a day. I had some high-tensile M6 bolts floating around with one additional great feature: the bolt head is nut-shaped, giving a lower clearance than some other styles such as socket heads. The crossover is shown from the top in the image below. I first cut down the original spindle mount and sanded it flat to make the "bearing mount", as I called it. Then the crossover attaches to that, and the spindle mount attaches to the crossover.

Notice the bolts coming through to the bearing mount. The low-profile bolt head just fits on each side of the round 80mm diameter spindle mount. I did have to do a little dremeling out of the bearing mount to fit the nuts on the other side. This was a trade-off: I wanted those bolts as far out from the centre line as possible, to maximize the chance that the spindle mount would bolt on flat without interfering with the bolts that attach the crossover to the bearing mount.

A side profile is shown below. The threaded rod for the z-axis is missing in the picture; this is just a test fit. I may end up putting the spindle in and doing some "dry runs" to make sure that the steppers are happy to move the right distances with the additional weight of the spindle. I did a test run on the z-axis before I started, just resting the new spindle on the old one and moving the z up and down.

I need to put together a cabinet of sorts for the cnc before getting into cutting alloy. The last thing I want is alloy chips and drill spirals floating around on the floor and getting trekked into other rooms.

Sky Croeser: Reimagining Australia: solidarity with Indonesia and Indigenous representation

In Indonesia Calling and other stories, Ariel Heryanto spoke on the extraordinary moments of solidarity between Indonesia and Australia in the 1940s – moments which have been almost entirely erased from the memory of both countries. Heryanto argues that this may be because remembering would involve acknowledging the past power of the left in our region. He discussed Joris Ivens’ film, Indonesia Calling, which gives the exact opposite message to the one he was commissioned for: it envisaged a future Australia with strong ties to an independent Indonesia. The making of this film was also surrounded by the growth of different links and networks of solidarity, but these have been largely forgotten, and aren’t referenced in current discussions of the relationship between Australia and Indonesia. The military regime in Indonesia is one of the many factors in the forcible erasure of this history of solidarity.

The Indigenous Representation, Politics and Recognition panel opened with Sharon Mascher and Simon Young‘s work on Re-Imagining Australia’s Constitutional Relationship with Indigenous Peoples: Lessons from the Canadian Experience. They noted that while there are important lessons to be learned by making comparisons, we should be cautious about trying to transplant bits of law between different contexts. The Canadian experience is very striking, especially the scale of the initiatives taken in 1982, and Canadian constitutional reform has had interesting (and sometimes unexpected) effects that are worth examining further.

Angelique Stastny spoke on Stereotypical representations of settlers and Indigenous people in school history textbooks, then and now. Stastny asks whether revised history has translated into new modes of knowledge and a critical shift in the representations of settler-Indigenous relationships in textbooks, or whether old colonial tropes have re-emerged in new forms. Looking at Australian textbooks over time, the proportion of material addressing settler-Indigenous relationships is reasonably steady. Most textbook authors are male, though with a growing proportion of female authors. Textbooks only started including Indigenous sources from around the 1970s, and these still make up only a small proportion of sources. Content names few women (either Indigenous or non-Indigenous). Almost all textbooks mention violent conflict between settlers and Indigenous people, and this makes up a significant proportion of the content. From the 1960s to the 1980s, content shifts away from describing Indigenous people as threatening and politically distinct, and towards describing them within frameworks of dependency. What is at stake might be not just how Indigenous people are represented, but also who does the work of representing.

Finally, Michael R. Griffiths presented on the Distribution of Settlement: Indigeneity, Recognition and the Politics of Visibility. He notes the forcible making-visible of Indigenous tropes through white Australian creative writing: a kind of appropriation of Indigenous history as a way of Indigenizing settler culture. The presentation focuses on Indigenous writers’ work, which often responds to and critiques these trends. Griffiths asks how settlers read Indigenous writing today, and how Indigenous writers navigate the politics of visibility in their writing. He draws on theory about the engagement that comes with refusal, and the tension between the politics of visibility and the right to opacity.


Rondam Ramblings: What is it with Arkansas?

Sigh, here we go again. Seriously, what is it with Arkansas that makes it such a uniquely fecund breeding ground for social neanderthals? Arkansas’ highest court on Thursday threw out a judge’s ruling that could have allowed all married same-sex couples to get the names of both spouses on their children’s birth certificates without a court order, saying it doesn’t violate equal protection “to

Planet Debian: John Goerzen: Giant Concrete Arrows, Old Maps, and Fascinated Kids

Let me set a scene for you. Two children, ages 7 and 10, are jostling for position. There’s a little pushing and shoving to get the best view.

This is pretty typical for siblings this age. But what, you may wonder, are they trying to see? A TV? A video game?

No. Jacob and Oliver were in a library, trying to see a 98-year-old map of the property owners in Township 23, Range 1 East, Harvey County, Kansas. And they were super excited about it, somewhat to the astonishment of the research librarian, who I am sure is more used to children jostling for position over the DVDs in the youth section than poring over maps in the non-circulating historical archives!

All this started with giant concrete arrows in the middle of nowhere.

Nearly a century ago, the US government installed a series of arrows on the ground in Kansas. These were part of a primitive air navigation system that led to the first transcontinental airmail service.

Every so often, people stumble upon these abandoned arrows and there is a big discussion online. Even Snopes has had to verify their authenticity (verdict: true). Entire websites are devoted to tracking and locating the remnants of these arrows. And as one of the early air mail routes went through Kansas, every so often people find these arrows around here.

I got the idea that it would be fun to replicate a journey along the old routes. Maybe I’d spot a few old arrows and such. So I started collecting old maps: a Contract Airmail Route #34 (CAM 34) map from 1927, aviation sectionals from 1933 and 1946, etc.

I noticed an odd thing on these maps: the Newton, KS airport was on the other side of the city from its present location, sometimes even several miles outside the city. What was going on?

1927 Airway Map

1946 Wichita Sectional

So one foggy morning, I explained my puzzlement to the boys. I highlighted all the mysteries: were these maps correct? Were there really two Newton airports at one time? How many airports were there, and where were they? Why did they move? What was the story behind them?

And I offered them the chance to be history detectives with me. And oh my goodness, were they ever excited! We had some information from a very helpful person at the Harvey County Historical Museum (thanks, Kris!). So we suspected at least one airport was established in 1927. We also had a description of its location, though given in terms of township maps.

So the boys and I made the short drive over to the museum. We reviewed their property maps, though they were all a little older than the time period we needed. We looked through books and at pictures. Oliver pored over a railroad map of Newton from a century ago, fascinated. Jacob was excited to discover on one map that there used to be a train track down the middle of Main Street! I was interested to learn that the present Newton Airport was once known as Wirt Field, rather to my surprise. I somehow suspect most 2nd- and 4th-graders spend a lot less excited time on their research floor!

Then on to the Newton Public Library to see if they’d have anything more — and that’s when the map that produced all the excitement came out.

It, by itself, didn’t answer the question, but by piecing together a number of sources of information — newspaper stories, information from the museum, and the maps — we were able to come up with a pretty good explanation, much to their excitement.

Apparently, a man named Tangeman owned a golf course (the “golf links” according to the paper), and around 1927 the city of Newton purchased it, because of all the planes that were landing there. They turned it into an airport. Later, they bought land east of the city and moved the airport there. However, during World War II, the Navy took over that location, so they built a third airport a few miles west of the city — but moved back to the current east location after the Navy returned that field to them.

Of course, a project like this just opens up all sorts of extra questions: why isn’t it called Wirt Field anymore? What’s the story of Frank Wirt? What led the Navy to take over Newton’s airport? Why did planes start landing on the golf course? Where precisely was the west airport located? How long was it there? (I found an aerial photo from 1956 that looks like it may have a plane in that general area, but it seems later than I’d have expected)

So now I have the boys interested in going to the courthouse with me to research the property records out there. Jacob is continually astounded that we are discovering things that aren’t in Wikipedia, and also excited that he could be the one to add them. To be continued, apparently!


Krebs on Security: ‘Avalanche’ Crime Ring Leader Eludes Justice

The accused ringleader of a cyber fraud gang that allegedly rented out access to a criminal cloud hosting service known as “Avalanche” is now a fugitive from justice following a bizarre series of events in which he shot at Ukrainian police, was arrested on cybercrime charges and then released from custody.

Gennady Kapkanov. Source:


On Nov. 30, authorities across Europe coordinated the arrest of five individuals thought to be tied to the Avalanche crime gang, in an operation that the FBI and its partners abroad described as an unprecedented global law enforcement response to cybercrime.

According to Ukrainian news outlets, the alleged leader of the gang — 33-year-old Russian Gennady Kapkanov — did not go quietly. Kapkanov allegedly shot at officers with a Kalashnikov assault rifle through the front door as they prepared to raid his home, and then attempted to escape from his 4th-floor apartment balcony.

Ukrainian police arrested Kapkanov and booked him on cybercrime charges. But a judge in the city of Poltava, Ukraine later ordered Kapkanov released, saying the prosecution had failed to file the proper charges (including charges for shooting at police officers) that could have allowed authorities to hold him much longer. Ukrainian media reports that police have since lost track of Kapkanov.

Ukraine’s Prosecutor General Yuri Lutsenko is now calling for the ouster of the prosecutor in charge of the case. Meanwhile, the Ukrainian authorities are asking the public for help in re-arresting Kapkanov.


Weapons police say they seized from Kapkanov’s apartment. Source:

Built as a criminal cloud-hosting environment that was rented out to scammers, spammers, and other ne’er-do-wells, Avalanche has been a major source of cybercrime for years. In 2009, when investigators say the fraud network first opened for business, Avalanche was responsible for funneling roughly two-thirds of all phishing attacks aimed at stealing usernames and passwords for bank and e-commerce sites. By 2011, Avalanche was being heavily used by crooks to deploy banking Trojans.

The U.K.’s National Crime Agency (NCA) says the more recent Avalanche fraud network comprised up to 600 servers worldwide and was used to host as many as 800,000 web domains at a time.

Kapkanov, in blue with his hands over his head, standing on his 4th-floor balcony. Image:



Kapkanov’s drivers license lists an address in the United Kingdom. Source:

Cory Doctorow: Everything is a Remix, including Star Wars, and that’s how I became a writer

Kirby Ferguson, who created the remarkable Everything is a Remix series, has a new podcast hosted by the Re:Create Coalition called Copy This, and he had me on the debut episode (MP3), where we talked about copying, creativity, artists, and the future of the internet (as you might expect!).

Are you one of the many Star Wars fans eagerly awaiting the release of Rogue One: A Star Wars Story later this month? As you watch – and rewatch – the trailer, take a break to tune into Re:Create’s new Copy This podcast to learn about copyright and the role it’s played in the success of the fan-favorite series. As part of our ongoing work to elevate the discussion around copyright issues, the role copyright plays in our lives, and the need for balanced laws, Re:Create today launched Copy This hosted by writer, director and remixer Kirby Ferguson. The monthly podcast will bring to listeners conversations with some of the leading authors, policy minds, legal experts, and members of the creative community to take on the important questions and topics driving the copyright debate today.

New Re:Create Podcast Shows What Star Wars Can Teach Us About Copyright

Cryptogram: New NSA Stories

Le Monde and the Intercept are reporting about NSA spying in Africa, and NSA spying on in-flight mobile phone calls -- both from the Snowden documents.

TED: Have a TED Talk idea? Apply to our Idea Search events in Africa

Saki Mafundikwa prepares to speak at the TED@Nairobi auditions in 2013, aiming for a slot on the TED mainstage. (Spoiler: He made it.) Photo:


Do you have a TED Talk you’ve always wanted to try out in front of an audience? We’re thrilled to announce that applications are open for two new events in Africa: TEDLagos and TEDNairobi 2017 Idea Search!

Anyone with an idea worth spreading is invited to apply to either of those two events; around 25 finalists at each event will share their risky, quirky, fascinating ideas in under 6 minutes, in early February, onstage at beautiful venues in Lagos, Nigeria, and Nairobi, Kenya.

The TED Idea Search is a chance for us to find fresh voices to ring out on the TEDGlobal stage. Some of these talks will be posted on the online TED platform; other speakers will be invited to expand on their talks on the TEDGlobal 2017 main stage in Arusha, Tanzania, in the summer of 2017, themed Builders. Truth-tellers. Catalysts. We are looking for speakers whose talks fit well within that theme. Saki Mafundikwa, Richard Turere, Zak Ebrahim, Sally Kohn, Hyeonseo Lee — all these speakers are fantastic finds from previous TED talent searches.

The deadline to apply is Friday, December 16, 2016, at 6pm Lagos time / 8pm Nairobi time. To apply, you’ll need to fill out a form and make a 1-minute video describing your talk idea. Quick notes: We can’t cover travel for finalists who live far from the cities where these events are taking place; we encourage local applicants to Lagos and Nairobi. Please choose only one event to apply to — applying to both events will not increase your chances of being selected to speak.

Apply to speak at the TED Africa Idea Search 2017

Worse Than Failure: CodeSOD: Un-Encoding

Felix caught a ticket about their OpenId authentication. For some mysterious reason, it had started failing around 30% of the time, specifically because the access token returned by the service was invalid.

Felix had originally written the code, but there was one problem: he wasn’t the last one to touch it. Another development team needed their own versions of the code, organized a bit differently, for infrastructure reasons. Eventually, the whole thing was turned into a drop-in library component that was used by all applications which depended on OpenId. The failures started after they made their changes, so obviously their changes caused the failures.

Since the errors were intermittent, their first guess was that the cause was something intermittent too: perhaps an infrastructure problem, or a race condition between interacting services? They couldn’t reliably reproduce the error, so Felix spent a lot of time eliminating possibilities. Trawling through the code wasn’t very helpful. The other team had been operating under unrealistic deadlines, and had hacked together something that worked without worrying too much about how or why it worked. The result included lots of un-patterns (like anti-patterns, but without having a pattern to them), inheritance trees that desperately needed pruning, and old-fashioned SQL injection vulnerabilities copy-pasted everywhere.

Eventually, buried deep in a common service adapter base class, nowhere near the code that was supposed to be responsible for managing authentication, he found this code for fetching the OpenId token:

    public async Task<T> GetAsync(string key)
    {
        T result = default(T);

        using (var httpClient = await CreateHttpClient())
        {
            HttpResponseMessage response = await httpClient.GetAsync(Route + "/" + key);
            if (response.IsSuccessStatusCode)
            {
                result = _serializer.Deserialize<T>(
                    WebUtility.UrlDecode(response.Content.ReadAsStringAsync().Result));
            }
        }

        return result;
    }
Felix cringed a little at seeing a call to CreateHttpClient on each execution: there was a lot of overhead there, and the whole thing would be more efficient if the code only did that once. Still, the problem wasn’t performance. Calling response.Content.ReadAsStringAsync().Result could actually cause deadlocks, and should have been await response.Content.ReadAsStringAsync(), but Felix wasn’t tracking down a deadlock.

In fact, Felix almost eliminated this code as the source of his problem, until he looked at the call to WebUtility.UrlDecode. The body of the response was a JSON object; there was no logical reason to URL-decode it. Worse, the OpenId token was Base64 encoded, using an alphabet that includes the “+” character. In URL form-encoding, a literal “+” must be written as %2B, and a bare “+” decodes to a space, so UrlDecode mangled the token any time a “+” appeared in it, which, coincidentally, happened about 30% of the time.

Felix fixed up this function, removing the nonsensical attempt to decode something which wasn’t encoded in the first place, and the bug went away.
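The failure is easy to reproduce outside .NET. Below is a minimal Python sketch (my own illustration, not code from the application), using urllib.parse.unquote_plus as a stand-in for WebUtility.UrlDecode, since both treat a bare "+" as an encoded space:

```python
import base64
from urllib.parse import unquote_plus

# A payload whose Base64 form is nothing but '+' characters:
# each six-bit group of 0xFB 0xEF 0xBE is 111110 = 62 = '+'.
token = base64.b64encode(b"\xfb\xef\xbe").decode()  # "++++"

# Form-style URL decoding turns each bare '+' into a space,
# corrupting the token before it can ever be Base64-decoded.
mangled = unquote_plus(token)

# Chance that a random n-character Base64 token contains at least one '+':
def p_plus(n):
    return 1 - (63 / 64) ** n

print(repr(token), repr(mangled), round(p_plus(23), 2))
```

For a token a couple of dozen characters long, p_plus comes out at roughly 0.3, which lines up with the roughly 30% failure rate described above.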

[Advertisement] Application Release Automation – build complex release pipelines all managed from one central dashboard, accessibility for the whole team. Download and learn more today!

Planet Linux Australia: Ben Martin: 3040 for alloy

I have finally fired up a 2.4kw 24,000 rpm spindle on the test bench. This has water cooling and is VFD controlled. The spindle runs on 3 phase AC power.

One thing that is not mentioned much is that the spindle itself and bracket runs to around 6-7kg. Below is the spindle hitting 24,000 rpm for the first time.

With this and some other bits a 3040 should be able to machine alloy.

Planet Debian: Vincent Fourmond: Finding zeros of data using QSoas

QSoas does not provide commands to detect zeros of data by default, because it is simple to convert the problem into a peak-finding one: the integrate command turns each zero of the data into an extremum of its integral, and those extrema can then be located with the find-peaks command. Here is that strategy applied to determining the zeros of the 0th-order Bessel function:

QSoas> generate-buffer -10 10 bessel_j0(x) /samples=100001
QSoas> integrate
Current buffer now is: 'generated_int.dat'
QSoas> find-peaks
Found 6 peaks
buffer what x y index width left_width right_width
generated_int.dat min -8.6538 -0.201157042341714 6731 1.7798 0.905999999999999 0.873800000000001
generated_int.dat max -5.52 0.398165469321319 22400 2.2854 1.1862 1.0992
generated_int.dat min -2.4048 -0.403288737672291 37976 1.8232 0.973 0.850199999999999
generated_int.dat max 2.4048 2.53731134529594 62024 nan 2.2026 nan
generated_int.dat min 5.52 1.73585713830231 77600 nan 5.7198 nan
generated_int.dat max 8.6538 2.33517964996535 93269 nan 8.5532 nan

Compare that with the values given on Mathematica's website. This strategy is reasonably resistant to noise, since integration decreases high-frequency noise, but you may have to play with the /window option to find-peaks to avoid detecting the same zero (peak) several times.
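The same integrate-then-find-peaks trick is easy to sketch outside QSoas. Here is a minimal NumPy version (my own illustration; it uses sin(x), whose zeros at multiples of π are easy to verify, in place of bessel_j0):

```python
import numpy as np

# Zeros of y(x) are extrema of its running integral Y(x), so a
# peak/valley finder applied to Y locates the zeros of y.
x = np.linspace(-10, 10, 100_001)
y = np.sin(x)  # stand-in for bessel_j0(x)

# Cumulative trapezoid-rule integral of y over x.
Y = np.concatenate(([0.0], np.cumsum((y[1:] + y[:-1]) / 2 * np.diff(x))))

# Extrema of Y sit where its discrete derivative changes sign.
dY = np.diff(Y)
extrema = np.where(np.sign(dY[:-1]) != np.sign(dY[1:]))[0] + 1
zeros = x[extrema]

print(zeros)  # close to -3π, -2π, -π, 0, π, 2π, 3π
```

Like find-peaks, this inherits the noise resistance of integration; on noisy data you would additionally want a minimum-separation window, which is what the /window option provides.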

Hopefully, I'll come back with more regular postings of tips and tricks!

Sky Croeser: Reimagining Australia: Islamophobia and reimagining landscape and sustainability

Randa Abdel-Fattah: ‘Racial Australianisation’ and the affective registers and emotional practices of Islamophobia
Abdel-Fattah talks about the ways in which the Lakemba area has been racialised as a dangerous, Muslim space of otherness. Even a modified shop dummy becomes a symbol of threat. Racial meanings have been embedded across a range of symbols, including halal certifications, particular food, clothing, and Arabic script. This needs to be understood in the context of Australia’s history. We also need to understand Islamophobia as a range of practices: a problematisation of Muslim identity that we can see as related to the history of whiteness in Australia.

Interviews displayed the ways in which white Australians set themselves up as arbiters of Australian identity: interviewees emphasised that they saw Australia as having Judeo Christian values, and that they felt they could ‘read’ the affective gestures of Muslim Lebanese around them (and could identify Muslim people specifically through their affective gestures). Over the last years, attention to Muslims (or people seen as Muslim) has become ever more sharply trained in Australia, through the lens of Islamophobia. We’ve seen a socialised affective practice around the understanding of Islam, a belief that white Australians know the real essence of Muslims (a similar process to that around anti-semitism). White Australians ‘stick’ the label of could-be-terrorist to all Muslim bodies, which also implies a constant fear of all Muslims.

The question for anti-racist activists is how to intervene in these affective associations. We need to create processes of unsettlement. This needs to go beyond myth-busting: Islamophobia can’t be challenged only through the provision of facts. Islamophobia isn’t a Muslim problem, it’s an Australian problem.

The Reimagining Landscape and Sustainability panel opened with Zafu Teferi and Paul Newman’s work on Indian Ocean Settlements. Teferi’s research on Addis Ababa slums includes a recognition of the sense of community and social solidarity in these informal settlements. Rather than destroying slums, it’s possible to think about how to renew dense informal settlements and provide decentralised infrastructure without destroying them. This will require new systems of governance based on the already-existing community structures. While the context may be very different in Australia, the White Gum Valley demonstrates some important links, including a focus on community-focused sustainable living with a distributed infrastructure.

Gary Burke spoke on Re-Imagining Economics: sustainability-information economics, accounting, taxation and narrative to foster creative well-being. Economics is a mythology, rather than a science. We need to think critically about economic systems, and about how we understand sustainability. Neoclassical economists construct analysis as if economic activity is a machine: this means reframing the issues to suit the existing conceptual paradigm.

Danielle Brady (co-authoring with Jeff Murray) presented Reimagining Perth’s Lost Wetlands, which tracked the history of draining, filling in, or reducing Perth’s wetland areas. Not only have these wetlands physically disappeared, but even the memory of their presence and of their effects on the development of Perth is largely forgotten. Brady presented while wearing a ‘Say no to Roe 8’ shirt, noting that as she was speaking others are involved in an effort to save the Beeliar Wetlands: protesters are being issued move-on notices, with threats of arrests to follow, and there are calls for support, including to phone the Premier. We do have wetlands left in Perth, and knowing their history may help us in imagining a future version of the city that incorporates and values wetlands. This also needs to be linked to processes of decolonisation.

Finally, Andrea Gaynor talked about Re-imagining Australian wheatlands: heartlands to artlands? Gaynor is asking whether art can help build sustainable rural communities, putting the question in the historical context of rural depopulation, efforts to bring ‘culture’ to the country, and changing configurations of community and belonging. While we sometimes romanticise rural Australian life, we should remember that rural communities have been built on the violence of colonisation, and that rural communities built hierarchies of belonging and control.

While large international, externally-run art projects, like the silo art trail, have the potential to contribute to building more sustainable rural communities, there are also important limitations to what they might achieve. The silo art project was developed without consultation with local communities: the idea was that people from the city would drive out to see rural art. This might be seen as part of a broader trend, the commodification of nature within the global tourist economy, shaped by metropolitan sensibilities rather than by building rural community and artistic expression. There are other art projects that are community-driven, drawing on farmers’ skills to create art, like the one in Lockhart.




Planet Debian: Dirk Eddelbuettel: RcppAPT 0.0.3

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- is now on CRAN.

We changed the package to require C++11 compilation as newer Debian systems with g++-6 and the current libapt-pkg-dev library cannot build under the C++98 standard which CRAN imposes (and let's not get into why ...). Once set to C++11 we have no issues. We also added more examples to the manual pages, and turned on code coverage.
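For context, this switch is made with a one-line setting in the package's src/Makevars (the standard mechanism documented in Writing R Extensions; a generic sketch, not a quote from the RcppAPT sources):

```make
## src/Makevars: ask R's build system to compile under the C++11 standard
CXX_STD = CXX11
```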

A bit more information about the package is available here as well as at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Google AdSense: How to choose the right ad tools for your site

Welcome to the #SuccessStack, a new series of articles designed to help you:
  • Access Google’s large network of advertisers 
  • Grow your publishing business 
  • Earn more from the ads on your site
This first article can help you choose the right tools to sell and manage the ads on your site.

AdSense: Get started with easy access to Google’s network of advertisers
Who it’s for:
Publishers looking for a smart and easy-to-use tool to optimize their ad space and revenue.

What it does: AdSense makes it easy for you to place, manage and earn revenue from ads on your site. With AdSense, Google is your advertising sales team, bringing you ads from millions of advertisers using AdWords and other Google advertising programs.

AdSense includes simple and easy controls to help you get started with earning money from ads, but it also does a lot of work behind the scenes to help you make more money. It’s a bit like an automatic car -- it removes some of the manual adjustment, allowing you to cruise along with less effort. You still need regular “tune ups” to get optimal performance, but you won’t need to shift gears all the time.

DoubleClick Ad Exchange: Control who gets programmatic access to the ads on your site with advanced features
Who it’s for: Publishers who require more granular control over their inventory and who have the resources and expertise to manage ongoing optimizations. This product is suited to publishers with yield management expertise and those who need advanced features like Programmatic Direct.

What it does: DoubleClick Ad Exchange gives you real-time access to the largest pool of advertisers. This means that in addition to AdWords advertisers, you can also access major ad networks and agencies.

A major difference between DoubleClick Ad Exchange and AdSense is that AdSense does a lot of the technical settings and optimization work for you, such as automating the sale of all your ad space to the highest bidder. With DoubleClick Ad Exchange, you can make these adjustments yourself and control exactly how your inventory is sold. As an example, DoubleClick Ad Exchange allows you to choose which ad space is for public sale and which is reserved for private auctions. This increased amount of user input is necessary for you to get the best results from DoubleClick Ad Exchange. Another important distinction is that AdSense demand comes mostly from AdWords advertisers, whereas DoubleClick Ad Exchange pulls demand from multiple sources. You can see the full list of differences at our help center.

DoubleClick For Publishers: Scale your advertising business
Who it’s for: Publishers who are looking for a tool that has AdSense or Ad Exchange built in, along with lots of useful features to help them schedule, deliver, and measure their ad inventory regardless of how they sell it: to networks, programmatically, or through their own direct sales teams.

What it does:
DoubleClick for Publishers is a single platform that allows you to manage and deliver all of your web, mobile, and video advertising across all your sales channels. It doesn’t come with its own ads, but rather helps you scale your ads business by managing your ad sales across a variety of ad networks such as AdSense, ad exchanges like DoubleClick, and direct advertising partners. You can get started with the small business version right away for free, or talk to us about integrating with the premium, paid version that is built for large organizations with sophisticated ad sales teams.

Both versions have a simple interface, lots of great tools, built-in revenue optimization, and Google powered ad delivery to provide a simple, worry-free way to potentially increase the value of your ad impressions.

Ready to get started?
You can arrange a consultation with one of our experts who can help you choose the right solution for your business and set up AdSense, DoubleClick Ad Exchange, or DoubleClick for Publishers.

Posted by Jay Castro, from the AdSense team.

Planet DebianShirish Agarwal: Day trip in Cape Town, part 2

Debconf16 logo

The post continues from the last post shared.

Let me get some interesting tit-bits not related to the day-trip out of the way first –

I don’t know whether we had full access to see all parts of Fuller Hall or not. For a couple of days I was wandering around Fuller Hall, specifically next to where clothes were pressed. I came to know of the laundry service pretty late, but it was still useful. Umm… next to where the ladies/gentlemen pressed our clothes, there is a stairway which goes down. In fact, even on the opposite side there is a stairway which goes down. I dunno if other people explored them or not.

The jail inside and under UCT

I was surprised and shocked to see bars in each room as well as on the connecting walkways etc. I felt a bit sad, confused and curious, and went on to find more places like that. After a while I came up to the ground level and enquired with some of the ladies therein. I was shocked to learn that UCT, some years ago (they were not specific), was a jail. I couldn’t imagine that a place which has so much warmth (in people, not climate) could be ‘evil’ in a sense. I was not able to get much information out of them about the nature of the jail; maybe it is a dark past that nobody wants to open up about, dunno. There were also two *important* aspects of UCT which Bernelle either forgot or didn’t share; I came to know of them only via the Wikipedia page, and nothing else.

1. MeerKAT – Apparently quite a bit of the technology was built at UCT itself. This would have been interesting for geeks and wannabe geeks like me🙂

2. The OpenContent Initiative by UCT – This would also have been something worth exploring.

One more interesting thing which I saw was the French Consulate in Cape Town from outside

The French Consulate in Cape Town from outside

I would urge you to look at the picture in the gallery, as the picture I shared doesn’t really show all the details. For example, the typical large French windows, the hallmark of French architecture, don’t show their glory here; but if you look at the 1306×2322 original picture instead of the 202×360 reproduction, you will see them.

You will also see the insignia of the French Imperial Eagle, whose history I came to know only after I looked it up on Wikipedia that day.

It seemed fascinating, and probably carries the same pride as the State Emblem of India does for Indians, with its four Asiatic lions standing in a circle protecting each other.

I also liked the palm tree and the way the French Consulate seemed little and yet had character amid all the big buildings.

What was also interesting was that there wasn’t any scare/fear build-up, and we could take photos from outside, unlike what I had seen and experienced in Doha, Qatar, as far as photography near Western embassies/consulates was concerned.

One of the very eye-opening moments for me came while I was researching flights from India to South Africa. While perhaps unconsciously I might have known that the Middle East is close to India, in reality it was only during the search that I became aware that most places in the Middle East are only an hour or two away by flight.

This was shocking, as there is virtually no mention of these neighbours even though they are a source of large-scale remittances every year. I mean, this should have been in our history and geography books, but most do not dwell on the subject. It was only during and after the search that I could understand Mr. Modi’s interactions and trade policies with the Middle East.

Another interesting bit was seeing a bar in a Springbok bus –

springbok atlas bar in bus

While admittedly it is not the best picture of the bar, I was surprised to find a bar at the back of a bus. By bar I mean a machine which can serve anything from juices to alcoholic drinks depending upon what is stocked. What was also interesting in the same bus is that the bus also had a middle entrance-and-exit.

The middle door in springbok atlas

This is something I hadn’t seen in most Indian buses. Some of the Volvo buses have one, but it is rarely used (only in emergencies). An exhaustive showcase of local buses can be seen here. I find the hand-drawn/CAD depictions of all the buses by Amit Pense accurate to a T.

Axe which can be used to break windows

Emergency exit window

This is also something which I have not observed in Indian inter-city buses (an axe to break the window in case of accident, and breakable glass which doesn’t hurt anyone, I presume), whether they are State Transport or the high-end Volvos. Either it’s part of South African road regulations or something that Springbok buses do for their customers. I wanted to ask the bus driver and the attendant/controller about all these different facets, but in the excitement of seeing and recording new things I couldn’t ask😦

In fact, one of the more interesting things I looked at, and could have looked at day and night, is the variety of vehicles on display in Cape Town. In hindsight, I should have bought a couple of 128 GB MMC cards for my mobile rather than the 64 GB one; it was just plain inadequate to capture all that was new and interesting.

Auditorium chair truck seen near Auditorium

I had seen this truck about 100 metres from the Auditorium on Upper Campus. The truck’s design and paint were something I had never seen before. It is/was similar to casket trucks seen in movies, but the way it was painted made it special.

What was interesting was the gamut of different vehicles. For instance, I saw no bicycles in most places; there were mostly Japanese/Italian bikes and all sorts of trucks. If I had known before, I would definitely have bought an SD card specifically to take snaps of all the different types of trucks, cars etc. that I saw therein.

The adage “I should stop in any one place and the whole world will pass me by” seemed true on quite a few South African roads. While the roads were on par with or a shade better than India’s, many of them were wide. Seeing those, I was left imagining how the Autobahn in Germany and other high-speed expressways would look and feel.

India has also been doing this with the Pune-Mumbai Expressway and projects like the Yamuna Expressway and now its extension, the Agra-Lucknow Expressway, but doing this all over India would probably take a decade or more. We have been at it for a decade and a half already; NHDP and PMGSY are two projects which are still ongoing to better the roads. We have been having debates over whether to have tolls or not, but that is a discussion for some other time.

One of the more interesting sights I saw was a high-arched Gothic-style church from outside. This is near Long Street as well.

high arch gothic-styled church

I have seen something similar in Goa and Pondicherry, but not with such high arches. I did try a couple of times to gain entry, but one time it was closed and the other time some repair/construction work was going on. I would have loved to see it from inside, and hopefully they would have had an organ as well. I could imagine to some extent the sort of music that would have come out.

Now that Goa has come up in the conversation, I can’t help but state that seafood enthusiasts/lovers/aficionados and pescatarians would have a ball of a time in Goa. Goa is on the Konkan coast, and while I’m eggie, those who enjoy seafood really do have a great time there. Fouthama’s Festival, which happens in February, is particularly attractive, as Goan homes are thrown open for people to come and sample their food, exchange recipes and the like. This happens around two weeks before the Goan Carnival and is very much a part of the mish-mashed Konkani-Bengali-Parsi-Portuguese culture.

I had better stop here about Goa, otherwise I’ll get into reminiscing mode.

To put the story and events back on track from where we left off (no fiction hereon): Nicholas was in constant communication with base, i.e. UCT, as well as with another group who were hiking from UCT to Table Mountain. We waited for the other group to join us till 13:00 hrs. We came to know that they were lost and were trying to come up, and hence would take more time. As Bernelle, a local with two dogs who knew the hills quite well, was with them, it was decided to go ahead without them.

We came down by the same cable car and then ventured on towards Houtbay. Houtbay has it all: a fisherman’s wharf, actual boats with tough, mean-looking tattooed men working on them puffing cigars/pipes, a gaggle of seagulls, the whole scene. Sharing a few pictures of the way in between.

the view en-route to Houtbay

western style car paint and repair shop

Tajmahal Indian Restaurant, Houtbay

I just now had a quick look at the restaurant and it seems they had options for veggies too. Unfortunately, the rating leaves a bit to be desired, but then, dunno, Indian flavouring is something that takes time to get used to. Zomato doesn’t give any idea of how long a restaurant has been in business, and this one has too few reviews, so it’s not easy to know how the experience would have been.

Chinese noodles and small houses

Notice the pattern, the pattern of small houses I saw all the way to Houtbay and back. I do vaguely remember starting a discussion about it on the bus, but don’t really remember the details. I have seen (on TV) cities like Miami, Dubai and Hong Kong which have big buildings on the beach, but both in Konkan and in Houtbay there were small buildings. I guess a combination of zoning regulations, a feel of community, and fear of being flooded all play into beaches being the way they are.

Also, this probably is good as less stress on the environment.

Miamiboyz from Wikimedia Commons

The above picture is taken from the Wikipedia article Miami Beach, Florida, for comparison.

Audi rare car to be seen in India

The Audi – a rare car to be seen in India. This car has been associated with Ravi Shastri ever since he won one in 1985. I was young but still get goosebumps remembering those days.


First glance of Houtbay beach and pier. Notice how clean and white the beach is.


You can see the wharf grill restaurant in the distance (side view), and the back of the hop-on hop-off bus (a concept which was unknown to me till then). Once I came back and explored on the web, I came to know this concept is prevalent in many touristy places around the world. Umm… by sheer happenstance I also captured a beautiful-looking Indian female😉 .

So many things happening all at once

In Hindi, we would call this picture ‘virodabhas’ or ‘contradiction’. This is in the afternoon, around 1430 hrs. You have the sun, the clouds, the mountains, the x number of boats, the pier, the houses, the cars, the shops. It was all crazy and beautiful at the same time.

The biggest contradiction is seeing the mountain, the beach and the sea in the same picture. It baffled the mind. Konkan, though, is a bit similar: you have all three things in some places, but that’s a different experience altogether, as ours is a more tropical climate, although it is one of the most romantic places in the rains.

We were supposed to go on a short cruise to the seal/dolphin island, but as we were late (having waited for the other group) we didn’t go and instead just loitered there.

Fake-real lookout bar-restaurant

IIRC the lookout bar is situated just next to Houtbay Search and Rescue, although I was curious whether the lookout tower was used in cases of disappearances: lost people, boats, etc.

Seal in action

Seal jumping over water, what a miracle !

One of the boats on which we possibly could have been on.

It looked like the boat we could have been on. I clicked it as I especially liked the name: Calypso and Calypso. I shared the two links as the mythologies and their interpretations differ a bit between Greek and Hollywood culture🙂

Debian folks and the area around

You can see a few Debian folks in the foreground, next to the pole, and a bit of the area around.

A lone boy trying to surf

I don’t know anything about water sports, and after some time he came out. I was left wondering, though, how safe he was in that water. While he was close to the pier, was just paddling, and there weren’t big waves, I still felt a bit of concern.

Mr. Seal - the actor and his handler

While the act was not at the level we see in the movies, still, for the time I hung around, I saw him showing attitude for his younger audiences, eating out of their hands, and making funny sounds. Btw, he farted a few times; whether that was a put-on or not I can’t really say, but it produced a few guffaws from his audience.

A family feeding Mr. Seal

I dunno what the birds came down for. Mr. Seal was being fed oily small fish parts; dunno if the oil was secreted by the fish themselves or whatever, it just looked oily from a distance.


Bird taking necessary sun bath

Typical equipment on a boat to catch fish – lots of nets


People working on disentangling a net

There wasn’t much activity at the time we went. It probably would have been different at sunrise, and would be at sunset. The only activity I saw was on this boat, where they were busy fixing and disentangling the lines. I came up with 5-15 different ideas for a story but rejected them because –

a. Probably all of them have been tried. People have been fishing since the beginning of time, and modern fishing is probably 200-odd years old. I have read accounts of fishing companies from the early 1800s onwards, so probably everything must have been tried.

b. The more dangerous one: even if there is a unique idea, it becomes risky, as writing is an all-consuming process. Writing a blog post (bad or good) takes lots of time. I constantly read, re-read, try and improvise till I can’t or my patience runs out. In a book you simply can’t have such luxuries.


No parking/tow zone in/near the Houtbay Search and Rescue, probably to let emergency vehicles out quickly once something untoward happens.


Saved 54 lives, boats towed 154 – Salut! Houtbay sea rescue.

The different springbok atlas bus that we were on


The only small criticism is for Houtbay – there wasn’t a single public toilet. We had to ask a favour at Kraal Kraft to use their toilets, and there could have been accidents: it wasn’t well lit and water was spilled around.

Road sign telling that we are near to UCT

Because we were late, we missed both the boat cruise and some street shops selling trinkets. Other than that it was all well. We should have stayed till sunset; I am sure the view would have been breathtaking, but we hadn’t booked the bus till evening.

Back at UCT

Overall it was an interesting day: we explored part of Table Mountain, saw the somewhat outrageously priced trinkets there, and explored the Houtbay seaside as well.

Filed under: Miscellenous Tagged: #Audi, #Cape Town, #Cruises, #Debconf16, #French Council, #Geography, #Houtbay Sea Rescue, #Jail, #Middle East, #Springbok Atlas, #Vehicles

Cory DoctorowMr Robot has driven a stake through the Hollywood hacker, and not a moment too soon

Mr Robot is the most successful example of a small but fast-growing genre of “techno-realist” media, where the focus is on realistic portrayals of hackers, information security, surveillance, and privacy. It represents a huge reversal of the usual portrayal of hackers and computers as convenient plot elements whose details can be finessed to meet the story’s demands, without regard to reality.

There’s a problem with this: information security really matters, practically no one understands it, and most of what people think they know comes from (usually terrible) media portrayals. The Computer Fraud and Abuse Act, used to prosecute Aaron Swartz, was passed after a WarGames-inspired moral panic about teenagers starting WWIII from their bedrooms, and the next president thinks that hackers are 400-pound guys in their bedrooms and wants to rely on his 10-year-old nephew to thwart them.

In my feature article for MIT Tech Review, I discuss the techno-realist movement, how it applies to my own novel Little Brother and its adaptation at Paramount, and what it portends for the future of art, security and law.

The show excels not only at talk but also at action. The actual act of hacking is intrinsically boring: it’s like watching a check-in clerk fix your airline reservation. Someone types a bunch of obscure strings into a terminal, frowns and shakes his head, types more, frowns again, types again, and then smiles. On the screen, a slightly different menu prompt represents the victory condition. But the show nails the anthropology of hacking, which is fascinating as all get-out. The way hackers decide what they’re going to do, and how they’re going to do it, is unprecedented in social history, because they make up an underground movement that, unlike every other underground in the past, has excellent, continuous, global communications. They also have intense power struggles, technical and tactical debates, and ethical conundrums—the kind of things found in any typical Mr. Robot episode.

Mr. Robot wasn’t the first technically realistic script ever pitched, but it had good timing. In 2014, as the USA Network was deliberating over whether to greenlight Mr. Robot’s pilot for a full season, Sony Pictures Entertainment was spectacularly hacked. Intruders dumped everything—prerelease films, private e-mails, sensitive financial documents—onto the Web, spawning lawsuits, humiliation, and acrimony that persists to this day. The Sony hack put the studio execs in a receptive frame of mind, says Kor Adana, a computer scientist turned screenwriter who is a writer and technology producer on the series. Adana told me the Sony hack created a moment in which the things people actually do with computers seemed to have quite enough drama to be worthy of treating them with dead-on accuracy.

Mr. Robot Killed the Hollywood Hacker

[Cory Doctorow/MIT Tech Review]

Sociological ImagesHate Crimes Spike After the Election

According to the Southern Poverty Law Center, the US saw a spike of hate incidents after the election of Donald Trump on November 8th: 867 real-world (i.e., not internet-based) incidents were reported to the Center or covered in the media in just 10 days. USA Today reports that the Council on American-Islamic Relations also saw an uptick in reports and that the sudden rise is greater than even what the country saw after the 9/11 attacks. This is, then, likely just a slice of what is happening.


As the rate of incidents shows, there was either a rise in incidents after Trump’s victory and Clinton’s loss, or an increase in the tendency to report incidents. Most perpetrators of these attacks targeted African Americans and perceived immigrants.


The most common places for these incidents to occur, after sidewalks and streets, were K-12 schools. Rosalind Wiseman, anti-bullying educator and author of Queen Bees and Wannabes, and sociologist CJ Pascoe, author of Dude, You’re a Fag, both argue that incidents at schools often reflect adult choices. Poor role models — adults themselves who bully or who fail to stand up for the bullied — make it hard for young people to have the moral insight and strength to do the right thing themselves.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


CryptogramWWW Malware Hides in Images

There's a new malware toolkit that uses steganography to hide in images:

For the past two months, a new exploit kit has been serving malicious code hidden in the pixels of banner ads via a malvertising campaign that has been active on several high profile websites.

Discovered by security researchers from ESET, this new exploit kit is named Stegano, from the word steganography, which is a technique of hiding content inside other files.

In this particular scenario, malvertising campaign operators hid malicious code inside PNG images used for banner ads.

The crooks took a PNG image and altered the transparency value of several pixels. They then packed the modified image as an ad, for which they bought ad displays on several high-profile websites.

Since a large number of advertising networks allow advertisers to deliver JavaScript code with their ads, the crooks also included JS code that would parse the image, extract the pixel transparency values, and using a mathematical formula, convert those values into a character.
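The decoding step described above can be sketched as follows. This is an illustrative reconstruction in Python, not ESET's published formula or the actual Stegano code; the "offset from a fully opaque pixel" encoding below is an assumption chosen for clarity.

```python
# Sketch of alpha-channel steganography: each hidden character is stored
# as a small perturbation of a pixel's transparency (alpha) value, so the
# image still looks almost fully opaque to the eye.
# The "255 - ord(ch)" scheme is invented for illustration; the real
# Stegano kit used its own formula.

def hide(message, base_alpha=255):
    """Encode each character as an offset from a fully opaque pixel."""
    return [base_alpha - ord(ch) for ch in message]

def extract(alphas, base_alpha=255):
    """Recover the hidden characters from a run of alpha values."""
    return "".join(chr(base_alpha - a) for a in alphas)

# The banner's pixels carry alphas near 255 (barely visible change)...
pixels = hide("eval(payload)")
# ...and a decoder script recovers the payload one character per pixel.
assert extract(pixels) == "eval(payload)"
```

In the real campaign the decoder ran as JavaScript served alongside the ad; the point here is only that near-invisible alpha changes can carry one character per pixel.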

Slashdot thread.

Worse Than FailureFrozen Out

Lex was an employee at GreyBox, a PC-repair shop inside of a large electronics chain, in the late 90s. He had spent the entire morning handling phone calls from customer after customer. Each of the calls was supposed to go to his co-worker Gerald, but Gerald hadn’t been picking up his phone. Each caller complained that Gerald had taken in their computer for repairs and not actually done the repairs.

An ice-cream cone in a bowl, turned up at an… erect angle.

“I brought my laptop in yesterday,” one caller, a wheezy old man, said, “and the young man behind the counter just took the laptop and said, ‘come back in an hour’. He went into the back room, and when I came back, he looked like he had been drinking. You know, red faced and sweaty. And the laptop smelled funny- like corn chips. And it wasn’t fixed!”

Lex, along with their boss Kyle, had long suspected Gerald’s… habits were interfering with his work performance. To wit, every time he was alone in the back room, he came out red-faced and sweaty. The accounting computer, also in the back room, frequently got infected with malware, despite only officially being used for running Excel. Gerald always covered his tracks, clearing history after he went about his ‘business’, and liberally spraying Febreeze in the back room afterwards, but they knew what he was getting up to.

Unfortunately, Gerald was the son of the owner. It would take something like the Pentagon Papers to get him fired.

“I’ll see to your laptop personally,” Lex told the old man on the phone. “I’ll also give it a thorough cleaning for the trouble you’ve been through.”

The Sting

If Gerald couldn’t be fired, then he had to be convinced to quit. He approached Kyle with an idea.

“So, Gerald basically comes to work to… play on the computers, right?” Lex said. “Well, I could write an application in Visual Basic that could freeze and un-freeze a computer screen.” At some point, a copy of Visual Basic had ended up on one of their diagnostic machines, and Lex had spent some time learning to use it. “It can disable the mouse and keyboard input, take a screenshot, then place the image over the entire screen. The entire process is reversible, too.”

Kyle nodded, liking the general idea. “How do you trigger it on his machine without him noticing?” Kyle asked.

“You can use another machine running the same process. It sends out a CmdPacket with the computer ID of the machine we want to target, along with a flag to either freeze or unfreeze the computer. When I notice Gerald’s not doing his job, I’ll freeze his computer from my own. Oh, and we’ll hide the process from the Task Manager, so he won’t be able to kill it.”
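The command-packet idea Lex describes can be sketched like this. It's a hypothetical reconstruction in Python rather than the story's Visual Basic, and the wire format (a 4-byte computer ID plus a 1-byte freeze flag) is invented for illustration; the story never specifies one.

```python
import struct

FREEZE, UNFREEZE = 1, 0

def pack_cmd(computer_id, flag):
    """Serialize a CmdPacket: network-order 4-byte ID, then a 1-byte flag."""
    return struct.pack("!IB", computer_id, flag)

def unpack_cmd(data):
    """Parse a CmdPacket back into (computer_id, flag)."""
    computer_id, flag = struct.unpack("!IB", data)
    return computer_id, flag

def handle(my_id, data):
    """Each listening machine checks whether a broadcast packet targets it."""
    target, flag = unpack_cmd(data)
    if target != my_id:
        return "ignored"          # packet is meant for some other computer
    return "freeze" if flag == FREEZE else "unfreeze"

# Machine 7 freezes; machine 3 ignores the same broadcast.
assert handle(7, pack_cmd(7, FREEZE)) == "freeze"
assert handle(3, pack_cmd(7, FREEZE)) == "ignored"
```

The hidden-process and screen-overlay parts are OS-specific and omitted; this only shows why targeting by computer ID lets one sender control a single machine on a shared broadcast.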

“I don’t think Gerald’s ashamed of what he’s doing,” Kyle replied. “You could freeze… that stuff on his monitor, but he’d just turn it off if somebody walked in.”

“I’m not talking about catching him red-handed. We just freeze his screen when he’s not doing work, and then unfreeze it when he decides to be useful again.”

Kyle shrugged. “Well, it’s worth a shot.”

The Happy Ending

It was all Lex could do to hide his glee that week. Each day, when Gerald came in to work, he and Kyle would keep tabs on him. When Gerald blew off the cashier station for the back room, Lex would press a key combo and enter the computer ID Gerald was at. Gerald would moan and shout expletives, then mumble something about a “lunch break” before vanishing for an hour.

Gerald never got interested in doing work. Instead, after about a month of this treatment, he just stopped coming in. The owner called Kyle, asking if there was a problem with “malware”.

“Well,” Kyle replied, “Lex and I haven’t seen any problems, but maybe Gerald should come in and remove the malware. It is part of his job, after all.” Gerald’s dad never mentioned it again.

The little VB application that Lex installed remained on the computers at GreyBox for years afterwards. While they never had to punish any future employees for viewing NSFW content on company time, it did make for a fun gag during an after-hours LAN-party.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Sky CroeserReimagining Australia: language, decolonisation, borderscapes, and belonging

This conference creates an important space for reflecting on key challenges in Australia today, and for thinking about alternatives. My notes are quite partial and rough, so I encourage you to look for more information on the speakers (and the panel sessions I couldn’t attend) on the conference website.

Kim Scott spoke in ‘Circles and Sand and Sound’ about the growth in support for the Noongar language, which is reflected in breakout text in the InASA conference program and the names of conference rooms. Despite the hostility of settlers to language, Noongar place names and language continue to inform the vernacular of the southwest where we live. We can bring the language alive by making ourselves instruments for it. His plenary threaded through the history of settlement, and the histories of Noongar culture and community, survival and the resistance to the boundaries drawn in sand by colonisers. We need to recognise that this land which we are on is stolen country, and has been through a long period of an apartheid-like regime, and there are now spaces in which Noongar culture and language is being celebrated and cherished. There is a power in sharing language and culture, but we also need to understand Noongar (and other Indigenous) peoples’ reluctance to do so.

Decolonising Australia: Reimagining and Reinhabiting opened with Mike Heald discussing his poem, ‘Land Grab’, a reflection on colonisation (and decolonising):

2016 and here I stand, here my house stands,
and my son-grown-tall, in Ballarat, in the aftermath,
on the ground-almost-zero
of pre-colonized plenitude, the last stands
of Swamy Riparian, Herb-rich Foothill, and Plains Grassy
Woodlands huddled along rail tracks and roads,
or captive in the deserts of private property
with a knife at their throat.

Soenke Biermann followed with Decolonise Australia: Unwinding Settler Coloniality. Biermann’s teaching, research, and community praxis is concerned with how we unwind privilege. In Australia, there seems to be an absence of words to talk about race. It is hard to unsettle privilege, and hard to navigate white fragility – many white students lack resilience when it comes to managing their discomfort around discussions. It’s important to understand the link between whiteness and possessiveness in Australia, as well as the processes of racialisation and whiteness that have shaped migration to Australia. Coloniality is upheld by different structures, including systems of knowledge production: we need to think about how this works in academia, through our research and our teaching. How can we shift our teaching practices and set up safe spaces without reinscribing privilege? Encouraging students to reflect on their own experiences, and to link them to theoretical perspectives, can be helpful.

Finally, Samya Jabbour spoke in Decolonizing the multicultural landscape about connecting her sense of hurt at Israeli satellite ‘management’ of the land her father was forced to leave to her understanding of what ‘land management’ means in Australia. The myth of terra nullius that underpins settler-colonialism in both Israel and Australia supports ongoing violence, and means that land management is a practice of dispossession. Decolonization requires embodied, collaborative work. Jabbour’s work attempts to come into a respectful relationship with land and with Indigenous people. She has found it hard to navigate her role as a ‘non-indigenous’ Australian: much of the privilege of whiteness is conferred on her, but the legacies of settler-colonial violence and dispossession also shape her life. Many of us sit in this liminal space: outsiders-within. We inhabit interstitial sites that might allow new practices and alternatives to emerge. There is also a bravery and power involved in privileged members of settler societies confronting the violence done by their own families.

Suvendrini Perera’s plenary Reimagining the Borderscape was anchored around seven key images that return us to the water. Drawing on John Bulunbulun and Zhou Xiaoping’s Dialogue, Perera talked about the ways in which borders have crossed and divided Indigenous people, and noted that the drawing of a border around ‘Australia’ forcibly merged many different peoples into the grouping ‘Aboriginal’. Thinking about these histories and images allows us to understand the centrality of carceral islands to Australia.

Rather than operating as a singular and static line, the border is constituted through a multiplicity of shifting practices and institutions. In Australia, this creates a violent and unstable border zone, in which some geographical and temporal areas are excised and classed as ‘not Australia’ for migration purposes. At the same time, this zone becomes subject to increased surveillance and other forms of control in the name of protecting Australian sovereignty. The borderscape is a term that allows us to understand the various forms of direct and indirect control being exercised over the region.

The logic of deterrence, like the logic of excision, doubles back on itself. The development of an expansive and expensive model of deterrence actually supports the ‘people smugglers’ it is claimed to oppose. While deterrence is justified through claims that it will save people from deaths at sea, the lifejackets memorialised in Alex Seton’s someone died trying to have a life like mine remind us of deaths that were caused by active policy choices: members of Australia’s border force knew of and were monitoring the boat, and made the choice to let those on board die.

The Multicultural Encounters through Memory, Storytelling and Art panel drew together literature, art, poetry, and theoretical reflections. Speakers in this panel made powerful connections that were difficult for me to capture, so please excuse these brief notes! Rashida Murphy spoke on her use of autoethnography and the ‘masking technique’ of a reading group to explore migrant women’s stories. Murphy ended her discussion of her writing process by reading from her book, The Historian’s Daughter. Nadia Niaz talked about her current project, tentatively titled My Australia, which reflects on migration to Australia, language, and belonging. Niaz spoke beautifully on some of the ways in which we construct belonging, including the necessity of forgetting in projects of nation-building (as we ‘forget’ inconvenient histories). Leonie Mansbridge spoke about Place Ma(t)ps, her art practice exploring mixed identity, space, and place. Burcu Simsek’s digital storytelling, used as a feminist method, employs voiceover and images to explore new sources of connection and belonging. Through workshops, Simsek has been providing opportunities for women from different generations and migration experiences to share their stories. Finally, Matt Roberts reflected on his family history as white English-speaking South Africans, who migrated to Australia in 1989. (Odd for me to listen to, with my white Afrikaans family history, from which I’ve been largely disconnected, with my family who moved to Australia around 1989.)

The final plenary session of the day, Kimberley Cultural Renewal: Unsettling the Dynamic; Reimagining the Future, came from the artistic directors of intercultural dance-theatre company Marrugeku, Dalisa Pigram and Rachael Swain, who were joined by writer Steve Kinnane. Pigram and Swain talked about using their art to address traumatic histories, with the challenges that come with navigating the politics of representation. In Broome, much of dance culture has been lost, but through respectful collaboration with elders Marrugeku choreographers learned movements that they could use. Pigram and Swain emphasised the need to understand the histories of suppression of dance practices in Australia, and to build dialogue in developing works. Now, through Cut the Sky, the company is exploring new ways of relating to country, regenerating, and healing.

Steve Kinnane talked about the Kimberley Aboriginal Law and Culture Centre (KALACC Culture Camps), which aim to rejuvenate law, people, country and creativity. Kinnane noted that while there’s often a perceived divide between work to rejuvenate traditional knowledge and contemporary work like Marrugeku’s, in fact they overlap significantly: Aboriginal cultures are living, changing, creating cultures.


Planet DebianTianon Gravi: My Docker Install Process

I’ve had several requests recently for information about how I personally set up a new machine for running Docker (especially since I don’t use the infamous curl | sh), so I figured I’d outline the steps I usually take.

For the purposes of simplicity, I’m going to assume Debian (specifically stretch, the upcoming Debian stable release), but these should generally be easily adjustable to jessie or Ubuntu.

These steps should be fairly similar to what’s found in upstream’s “Install Docker on Debian” document, but do differ slightly in a few minor ways.

grab Docker’s APT repo GPG key

The way I do this is probably a bit unconventional, but the basic gist is something like this:

export GNUPGHOME="$(mktemp -d)"
gpg --keyserver --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
gpg --export --armor 58118E89F3A912897C070ADBF76221572C52609D | sudo tee /etc/apt/trusted.gpg.d/docker.gpg.asc
rm -rf "$GNUPGHOME"

(On jessie or another release whose APT doesn’t support .asc files in /etc/apt/trusted.gpg.d, I’d drop --armor and the .asc and go with simply /.../docker.gpg.)

This creates me a new GnuPG directory to work with (so my personal ~/.gnupg doesn’t get cluttered with this new key), downloads Docker’s signing key from the keyserver gossip network (verifying the fetched key via the full fingerprint I’ve provided), exports the key into APT’s keystore, then cleans up the leftovers.

For completeness, other popular ways to fetch this include:

sudo apt-key adv --keyserver --recv-keys 58118E89F3A912897C070ADBF76221572C52609D

(worth noting that man apt-key discourages the use of apt-key adv)

wget -qO- '' | sudo apt-key add -

(no verification of the downloaded key)

Here’s the relevant output of apt-key list on a machine where I’ve got this key added in the way I outlined above:

$ apt-key list

pub   rsa4096 2015-07-14 [SCEA]
      5811 8E89 F3A9 1289 7C07  0ADB F762 2157 2C52 609D
uid           [ unknown] Docker Release Tool (releasedocker) <>


add Docker’s APT source

If you prefer to fetch sources via HTTPS, install apt-transport-https, but I’m personally fine with simply doing GPG verification of fetched packages, so I forgo that in favor of fewer packages installed. YMMV.

echo 'deb debian-stretch main' | sudo tee /etc/apt/sources.list.d/docker.list

Hopefully it’s obvious, but debian-stretch in that line should be replaced by debian-jessie, ubuntu-xenial, etc. as desired. It’s also worth pointing out that this will not include Docker’s release candidates. If you want those as well, add testing after main, i.e. ... debian-stretch main testing' | ....
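The suite and components can be parameterised; here’s a small sketch of building that line. (REPO_URL is a placeholder standing in for the Docker APT repository URL, which is omitted in this post’s snippets.)

```shell
# Sketch: build the APT source line for a chosen suite.
# REPO_URL is a placeholder, not a real URL.
suite="debian-stretch"     # or debian-jessie, ubuntu-xenial, ...
components="main"          # append " testing" to also get release candidates
line="deb REPO_URL ${suite} ${components}"
echo "$line"
# then: echo "$line" | sudo tee /etc/apt/sources.list.d/docker.list
```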

At this point, you should be safe to run apt-get update to verify the changes:

$ sudo apt-get update
Hit:1 debian-stretch InRelease
Reading package lists... Done

(There shouldn’t be any warnings or errors about missing keys, etc.)

configure Docker

This step could be done after Docker is installed (and indeed, that’s usually when I do it, because I forget that I should until Docker is installed and I realize my configuration is suboptimal), but doing it beforehand ensures that Docker doesn’t have to be restarted later.

sudo mkdir -p /etc/docker
sudo sensible-editor /etc/docker/daemon.json

(sensible-editor can be replaced by whatever editor you prefer, but that command should choose or prompt for a reasonable default)

I then fill daemon.json with at least a default storage-driver. Whether I use aufs or overlay2 depends on my kernel version and available modules – if I’m on Ubuntu, AUFS is still a no-brainer (since it’s included in the default kernel if the linux-image-extra-XXX/linux-image-extra-virtual package is installed), but on Debian AUFS is only available in either 3.x kernels (jessie’s default non-backports kernel) or recently in the aufs-dkms package (as of this writing, still only available on stretch and sid – no jessie-backports option).

If my kernel is 4.x+, I’m likely going to choose overlay2 (or if that errors out, the older overlay driver).

Choosing an appropriate storage driver is a fairly complex topic, and I’d recommend that for serious production deployments, more research on pros and cons is performed than I’m including here (especially since AUFS and OverlayFS are not the only options – they’re just the two I personally use most often).

{
	"storage-driver": "overlay2"
}
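The kernel-version reasoning above can be scripted; this is a sketch under the simplifying assumption that a 4.x+ kernel gets overlay2 and anything older gets aufs (real deployments should weigh the caveats just mentioned):

```shell
# Sketch: choose a storage driver from the running kernel's major version
# (assumption: overlay2 on 4.x+, aufs otherwise) and emit a daemon.json.
kver="$(uname -r)"
major="${kver%%.*}"
if [ "$major" -ge 4 ]; then
    driver="overlay2"
else
    driver="aufs"
fi
daemon_json="$(printf '{\n\t"storage-driver": "%s"\n}' "$driver")"
echo "$daemon_json"
# then: echo "$daemon_json" | sudo tee /etc/docker/daemon.json
```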

configure boot parameters

I usually set a few boot parameters as well (in /etc/default/grub’s GRUB_CMDLINE_LINUX_DEFAULT option – run sudo update-grub after adding these, space-separated).

  • cgroup_enable=memory – enable “memory accounting” for containers (allows docker run --memory for setting hard memory limits on containers)
  • swapaccount=1 – enable “swap accounting” for containers (allows docker run --memory-swap for setting hard swap memory limits on containers)
  • systemd.legacy_systemd_cgroup_controller=yes – newer versions of systemd may disable the legacy cgroup interfaces Docker currently uses; this instructs systemd to keep those enabled (for more details, see systemd/systemd#4628, opencontainers/runc#1175, docker/docker#28109)
  • vsyscall=emulate – allow older binaries to run (debian:wheezy, etc.; see docker/docker#28705)

All together:

GRUB_CMDLINE_LINUX_DEFAULT="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
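A sketch of how that edit might be scripted. It operates on a temporary stand-in for /etc/default/grub so nothing real is touched; edit the actual file the same way and run `sudo update-grub` afterwards:

```shell
# Sketch: append the boot parameters to GRUB_CMDLINE_LINUX_DEFAULT.
# Uses a temporary copy here instead of the real /etc/default/grub.
params="cgroup_enable=memory swapaccount=1 systemd.legacy_systemd_cgroup_controller=yes vsyscall=emulate"
f="$(mktemp)"
echo 'GRUB_CMDLINE_LINUX_DEFAULT="quiet"' > "$f"
# Insert the params just before the closing quote of the existing value.
sed -i "s|^\(GRUB_CMDLINE_LINUX_DEFAULT=\"[^\"]*\)\"|\1 ${params}\"|" "$f"
grub_line="$(cat "$f")"
echo "$grub_line"
rm -f "$f"
```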

install Docker!

Finally, the time has come.

$ sudo apt-get install -V docker-engine

$ sudo docker version
Client:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:45:16 2016
 OS/Arch:      linux/amd64

Server:
 Version:      1.12.3
 API version:  1.24
 Go version:   go1.6.3
 Git commit:   6b644ec
 Built:        Wed Oct 26 21:45:16 2016
 OS/Arch:      linux/amd64

$ sudo usermod -aG docker "$(id -un)"

(Reboot or logout/login to update your session to include docker group membership and thus no longer require sudo for using docker commands.)
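A quick way to check whether the group change is already active in the current session (a sketch; `id -nG` lists the groups your session actually has):

```shell
# Sketch: check whether the current session already has the docker group.
if id -nG "$(id -un)" | tr ' ' '\n' | grep -qx docker; then
    echo "docker group active; no sudo needed"
else
    echo "docker group not active yet; log out/in or reboot first"
fi
```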

Hope this is useful to someone! If nothing else, it’ll serve as a concise single-page reference for future-tianon. 😇

Planet DebianJonas Meurer: On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

On CVE-2016-4484, a (security)? bug in the cryptsetup initramfs integration

On November 4, I was made aware of a security vulnerability in the integration of cryptsetup into initramfs. The vulnerability was discovered by security researchers Hector Marco and Ismael Ripoll of CyberSecurity UPV Research Group and got CVE-2016-4484 assigned.

In this post I'll try to reflect a bit on

What CVE-2016-4484 is all about

Basically, the vulnerability is about two separate but related issues:

1. Initramfs rescue shell considered harmful

The main topic that Hector Marco and Ismael Ripoll address in their publication is that Debian exits into a rescue shell in case of failure during initramfs, and that this can be triggered by entering a wrong password ~93 times in a row.

Indeed the Debian initramfs implementation as provided by initramfs-tools exits into a rescue shell (usually a busybox shell) after a defined amount of failed attempts to make the root filesystem available. The loop in question is in local_device_setup() in the local initramfs script.

In general, this behaviour is considered a feature: if the root device hasn’t shown up after 30 rounds, the rescue shell is spawned to provide the local user/admin a way to debug and fix things herself.

Hector Marco and Ismael Ripoll argue that in special environments, e.g. on public computers with password protected BIOS/UEFI and bootloader, this opens an attack vector and needs to be regarded as a security vulnerability:

It is common to assume that once the attacker has physical access to the computer, the game is over. The attackers can do whatever they want. And although this was true 30 years ago, today it is not.

There are many "levels" of physical access. [...]

In order to protect the computer in these scenarios: the BIOS/UEFI has one or two passwords to protect the booting or the configuration menu; the GRUB also has the possibility to use multiple passwords to protect unauthorized operations.

And in the case of an encrypted system, the initrd shall block the maximum number of password trials and prevent the access to the computer in that case.

While Hector and Ismael have a valid point in that the rescue shell might open an additional attack vector in special setups, this is not true for the vast majority of Debian systems out there: in most cases a local attacker can alter the boot order, replace or add boot devices, modify boot options in the (GNU GRUB) bootloader menu or modify/replace arbitrary hardware parts.

The required scenario to make the initramfs rescue shell an additional attack vector is indeed very special: locked down hardware, password protected BIOS and bootloader but still local keyboard (or serial console) access are required at least.

Hector and Ismael argue that the default should be changed for enhanced security:

[...] But then Linux is used in more hostile environments, this helpful (but naive) recovery services shall not be the default option.

For the reasons explained above, I tend to disagree with Hector’s and Ismael’s opinion here. And after discussing this topic with several people I find my opinion reconfirmed: the Debian Security Team disputes the security impact of the issue and others agree.

But leaving the disputable opinion on a sane default aside, I don't think that the cryptsetup package is the right place to change the default, if at all. If you want added security by a locked down initramfs (i.e. no rescue shell spawned), then at least the bootloader (GNU GRUB) needs to be locked down by default as well.

To make it clear: if one wants to lock down the boot process, bootloader and initramfs should be locked down together. And the right place to do this would be the configurable behaviour of grub-mkconfig. Here, one can set a password for GRUB and the boot parameter 'panic=1' which disables the spawning of a rescue shell in initramfs.

But as mentioned, I don’t agree that these would be sane defaults. The vast majority of Debian systems out there don’t gain any security from a locked down bootloader and initramfs, and the benefit of a rescue shell for debugging purposes clearly outweighs the minor security impact in my opinion.

For the few setups which require the added security of a locked down bootloader and initramfs, we already have the relevant options documented in the Securing Debian Manual:

After discussing the topic with initramfs-tools maintainers today, Guilhem and me (the cryptsetup maintainers) finally decided to not change any defaults and just add a 'sleep 60' after the maximum allowed attempts were reached.
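In shell terms, the agreed change amounts to roughly the following. This is only a sketch with dummy values, not the actual patch (which lives in the cryptroot initramfs scripts), and the sleep is commented out so the sketch returns immediately:

```shell
# Sketch of the mitigation: once the maximum number of unlock attempts is
# reached, pause before returning, so local brute force gains nothing over
# simply rebooting the machine.
crypttries=3
count=3   # pretend the user just failed for the third time
if [ "$crypttries" -gt 0 ] && [ "$count" -ge "$crypttries" ]; then
    echo "cryptsetup: maximum number of tries exceeded"
    # sleep 60   # the actual mitigation; commented out in this sketch
fi
```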

2. tries=n option ignored, local brute-force slightly cheaper

Apart from the issue of a rescue shell being spawned, Hector and Ismael also discovered a programming bug in the cryptsetup initramfs integration. This bug in the cryptroot initramfs local-top script allowed endless retries of passphrase input, ignoring the tries=n option of crypttab (and the default of 3). As a result, theoretically unlimited attempts to unlock encrypted disks were possible when processed during the initramfs stage. The impact is that local brute-force attacks become a bit cheaper: instead of having to reboot once the maximum number of tries is reached, one can simply keep trying passwords.

Even though efficient brute-force attacks are mitigated by the PBKDF2 implementation in cryptsetup, this clearly is a real bug.

The reason for the bug was twofold:

  • First, the condition in setup_mapping() responsible for making the function fail when the maximum amount of allowed attempts is reached, was never met:

      # Try to get a satisfactory password $crypttries times
    while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do export CRYPTTAB_TRIED="$count" count=$(( $count + 1 )) [...] done if [ $crypttries -gt 0 ] && [ $count -gt $crypttries ]; then message "cryptsetup: maximum number of tries exceeded for $crypttarget" return 1 fi [...] }

    As one can see, the while loop only runs while $count -lt $crypttries, and $count is incremented inside the loop, so the loop exits with $count equal to $crypttries. Thus the second condition $count -gt $crypttries is never met. This can easily be fixed by decreasing $count by one in case of a successful unlock attempt, along with changing the second condition to $count -ge $crypttries:

      while [ $crypttries -le 0 ] || [ $count -lt $crypttries ]; do
          [...]
          # decrease $count by 1, apparently last try was successful.
          count=$(( $count - 1 ))
          [...]
      done
      if [ $crypttries -gt 0 ] && [ $count -ge $crypttries ]; then

    Christian Lamparter already spotted this bug back in October 2011 and provided an (incomplete) patch, but back then I even managed to merge the patch in an improper way, making it even more useless: the patch by Christian forgot to decrease $count by one in case of a successful unlock attempt, resulting in warnings about maximum tries exceeded even for successful attempts in some circumstances. But instead of adding the decrease myself and keeping the (almost correct) condition $count -eq $crypttries for detection of exceeded maximum tries, I changed the condition back to the wrong original $count -gt $crypttries that again was never met. Apparently I didn’t test the fix properly back then. I definitely should do better in future!

  • Second, back in December 2013, I added a cryptroot initramfs local-block script as suggested by Goswin von Brederlow in order to fix bug #678692. The purpose of the cryptroot initramfs local-block script is to invoke the cryptroot initramfs local-top script again and again in a loop. This is required to support complex block device stacks.

    In fact, the numberless options of stacked block devices are one of the biggest and most inglorious reasons that the cryptsetup initramfs integration scripts became so complex over the years. After all we need to support setups like rootfs on top of LVM with two separate encrypted PVs or rootfs on top of LVM on top of dm-crypt on top of MD raid.

    The problem with the local-block script is that exiting the setup_mapping() function merely triggers a new invocation of the very same function.

    The guys who discovered the bug suggested a simple and good solution to this bug: When maximum attempts are detected (by second condition from above), the script sleeps for 60 seconds. This mitigates the brute-force attack options for local attackers - even rebooting after max attempts should be faster.
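The off-by-one behind the first point can be reproduced in isolation. A sketch with dummy values (no cryptsetup involved) showing why the old check never fires while the corrected one does:

```shell
# Sketch: with crypttries=3 the loop exits with count=3, so the old check
# "count -gt crypttries" (3 > 3) never fires; "count -ge crypttries" does.
crypttries=3
count=0
while [ "$count" -lt "$crypttries" ]; do
    count=$(( count + 1 ))   # one (failed) unlock attempt per iteration
done
if [ "$count" -gt "$crypttries" ]; then
    echo "old check: maximum tries detected"
else
    echo "old check: maximum tries NOT detected (the bug)"
fi
if [ "$count" -ge "$crypttries" ]; then
    echo "fixed check: maximum tries detected"
fi
```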

About disclosure, wording and clickbaiting

I'm happy that Hector and Ismael brought up the topic and made their argument about the security impacts of an initramfs rescue shell, even though I have to admit that I was rather astonished about the fact that they got a CVE assigned.

Nevertheless I'm very happy that they informed the Security Teams of Debian and Ubuntu prior to publishing their findings, which put me in the loop in turn. Also Hector and Ismael were open and responsive when it came to discussing their proposed fixes.

But unfortunately the way they advertised their finding was not very helpful. They announced a speech about this topic at the DeepSec 2016 in Vienna with the headline Abusing LUKS to Hack the System.

Honestly, this headline is misleading - if not wrong - in several ways:

  • First, the whole issue is not about LUKS, neither is it about cryptsetup itself. It’s about Debian’s integration of cryptsetup into the initramfs, which is a completely different story.
  • Second, the term hack the system suggests that an exploit to break into the system is revealed. This is not true. The device encryption is not endangered at all.
  • Third - as shown above - very special prerequisites need to be met in order to make the mere existence of a LUKS encrypted device the relevant fact to be able to spawn a rescue shell during initramfs.

Unfortunately, the way this issue was published led to even worse articles in the tech news press. Topics like Major security hole found in Cryptsetup script for LUKS disk encryption or Linux Flaw allows Root Shell During Boot-Up for LUKS Disk-Encrypted Systems suggest that a major security vulnerability was revealed and that it compromised the protection that cryptsetup and LUKS offer.

If these articles/news did anything at all, then it was causing damage to the cryptsetup project, which is not affected by the whole issue at all.

After the cat was out of the bag, Marco and Ismael agreed that the way the news picked up the issue was suboptimal, but I cannot fight the feeling that the over-exaggeration was partly intended and that clickbaiting is taking place here. That’s a bit sad.

Planet Linux AustraliaLev Lafayette: High Performance Computing in Europe : A Selection

For about two weeks prior and a week after presenting at the OpenStack Summit in Barcelona I had the opportunity to visit several of Europe's major high performance computing facilities, giving each a bit of a standard pitch for the HPC-Cloud hybrid system we had developed at the University of Melbourne.

read more

Harald WelteOpen Hardware IEEE 802.15.4 adapter "ATUSB" available again

Many years ago, in the aftermath of Openmoko shutting down, fellow former Linux kernel hacker Werner Almesberger was working on an IEEE 802.15.4 (WPAN) adapter for the Ben Nanonote.

As a spin-off to that, the ATUSB device was designed: A general-purpose open hardware (and FOSS firmware + driver) IEEE 802.15.4 adapter that can be plugged into any USB port.


This adapter has received a mainline linux kernel driver written by Werner Almesberger and Stefan Schmidt, which was eventually merged into mainline Linux in May 2015 (kernel v4.2 and later).

Earlier in 2016, Stefan Schmidt (the current ATUSB Linux driver maintainer) approached me about the situation that ATUSB hardware was frequently asked for, but currently unavailable in its physical/manufactured form. As we run a shop with smaller electronics items for the wider Osmocom community at sysmocom, and we also frequently deal with contract manufacturers for low-volume electronics like the SIMtrace device anyway, it was easy to say "yes, we'll do it".

As a result, ready-built, programmed and tested ATUSB devices are now finally available from the sysmocom webshop

Note: I was never involved with the development of the ATUSB hardware, firmware or driver software at any point in time. All credits go to Werner, Stefan and other contributors around ATUSB.


Planet DebianSylvain Le Gall: Release of OASIS 0.4.8

I am happy to announce the release of OASIS v0.4.8.

Logo OASIS small

OASIS is a tool to help OCaml developers to integrate configure, build and install systems in their projects. It should help to create standard entry points in the source code build system, allowing external tools to analyse projects easily.

This tool is freely inspired by Cabal which is the same kind of tool for Haskell.

You can find the new release here and the changelog here. More information about OASIS in general on the OASIS website.

Pull request for inclusion in OPAM is pending.

Here is a quick summary of the important changes:

  • Fix various problems of parsing present in OASIS 0.4.7 (extraneous whitespaces, handling of ocamlbuild argument...)
  • Enable creation of OASIS plugin and OASIS command line plugin.
  • Various fixes for the plugin "omake".
  • Create 2 branches to pin OASIS with OPAM, making it easier for contributors to test dev. versions.

Thanks to Edwin Török, Yuri D. Lensky and Gerd Stolpmann for their contributions.

Geek FeminismRemembering the Fourteen

Today we remember:

  • Geneviève Bergeron (born 1968), civil engineering student
  • Hélène Colgan (born 1966), mechanical engineering student
  • Nathalie Croteau (born 1966), mechanical engineering student
  • Barbara Daigneault (born 1967), mechanical engineering student
  • Anne-Marie Edward (born 1968), chemical engineering student
  • Maud Haviernick (born 1960), materials engineering student
  • Maryse Laganière (born 1964), budget clerk in the École Polytechnique’s finance department
  • Maryse Leclair (born 1966), materials engineering student
  • Anne-Marie Lemay (born 1967), mechanical engineering student
  • Sonia Pelletier (born 1961), mechanical engineering student
  • Michèle Richard (born 1968), materials engineering student
  • Annie St-Arneault (born 1966), mechanical engineering student
  • Annie Turcotte (born 1969), materials engineering student
  • Barbara Klucznik-Widajewicz (born 1958), nursing student

On December 6, 1989, at the École Polytechnique engineering school in Montreal, Quebec, Canada, a man killed these women, targeting them because they were women and because they were engineers.

More remembrances:

Deb Chachra.

Shelley Page.

Previous posts on Geek Feminism.

Google AdsenseHow to create better blog titles that can drive more traffic to your ads

This is the third of five guest posts from AdSense publisher Brandon Gaille. Brandon has built his small business marketing blog to over 2 million monthly visitors in less than three years. He’s featured as our guest blogger to share insights and tips from his personal blogging experience to help AdSense publishers grow earnings. If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit.

Over the past three years, I’ve crafted titles for over 5,000 blog posts and have received over 58 million unique visitors to date. With that many titles and that much traffic, it’s allowed me to identify what types of titles get the most traffic.

The title of your page or blog post will play one of the largest roles in how much traffic you receive. From my extensive experience, a really great title can improve a post’s discoverability and increase the number of social shares by over 300%.

The bottom line is… If you fail to write a compelling title that gets people to click, then your post is doomed to wallow in mediocrity.

Here are a few title optimization tactics that have proven to drive the most traffic.

#1 Place a number at the beginning of your title

If you have a list formatted post, then you need to be using numbered titles every single time. Titles that begin with numbers have consistently proven to drive more traffic, largely because users read list posts more than any other type of blog post. A list post typically has anywhere from seven to forty key points, which are listed out numerically.

This makes it really easy for anyone to scan through the big takeaways and decide whether to dive deeper into the article. When people see the number 13 at the beginning of the title, they know they can scan through all 13 key points in a matter of seconds.

A numbered title paired with a list post will drive more clicks to your post and list style posts have one of the highest engagement rates. Posts with more clicks and higher engagement often are rewarded by becoming more discoverable to users.

Here are a couple of examples of numbered blog titles:

  • 11 Tools to Create Share-Worthy Content
  • 17 Incredible Social Media Statistics

I recommend crafting numbered blog post titles for more than half of your posts.

A Conductor study on headline preferences also backs up what I’ve found to be true on my blog.


#2 The odd number gets 20% more clicks than the even number

Although no one has figured out exactly why this happens, the odd numbered titles get more clicks than the even numbered titles. Here’s an example.
Odd Numbered Title: 11 Keys to Earning More Money on Adsense
Even Numbered Title: 12 Types of Ads that Convert

Before you hit publish on the blog post titled, “8 Crazy Ways to Double Your Ad Revenue,” take a moment to either add one more tip or remove the least valuable tip. This will allow you to capitalize on the extra twenty percent of clicks by having an odd numbered title.

Learn more about creating better blog titles from my blog and read all of the “17 Ways to Create Catchy Blog Titles That Drive Traffic.”

Posted By
Brandon Gaille

Brandon Gaille is an AdSense publisher. You can learn more about Brandon at and listen to his popular blogging podcast, The Blog Millionaire.


Krebs on SecurityResearchers Find Fresh Fodder for IoT Attack Cannons

New research published this week could provide plenty of fresh fodder for Mirai, a malware strain that enslaves poorly-secured Internet of Things (IoT) devices for use in powerful online attacks. Researchers in Austria have unearthed a pair of backdoor accounts in more than 80 different IP camera models made by Sony Corp. Separately, Israeli security experts have discovered trivially exploitable weaknesses in nearly a half-million white-labeled IP camera models that are not currently sought out by Mirai.

A Sony IPELA camera. Image: Sony.

In a blog post published today, Austrian security firm SEC Consult said it found two apparent backdoor accounts in Sony IPELA Engine IP Cameras, devices mainly used by enterprises and authorities. According to SEC Consult, the two previously undocumented user accounts — named “primana” and “debug” — could be used by remote attackers to commandeer the Web server built into these devices, and then to enable “telnet” on them.

Telnet — a protocol that allows remote logons over the Internet — is the very same communications method abused by Mirai, which constantly scours the Web for IoT devices with telnet enabled and protected by factory-default passwords.

“We believe that this backdoor was introduced by Sony developers on purpose (maybe as a way to debug the device during development or factory functional testing) and not an ‘unauthorized third party’ like in other cases (e.g. the Juniper ScreenOS Backdoor, CVE-2015-7755),” SEC Consult wrote.

It’s unclear precisely how many Sony IP cameras may be vulnerable, but a scan of the Web using indicates there are at least 4,250 that are currently reachable over the Internet.

“Those Sony IPELA ENGINE IP camera devices are definitely reachable on the Internet and a potential target for Mirai-like botnets, but of course it depends on the network/firewall configuration,” said Johannes Greil, head of SEC Consult Vulnerability Lab. “From our point of view, this is only the tip of the iceberg because it’s only one search string from the device we have.”

Greil said there are other undocumented functionalities in the Sony IP cameras that could be maliciously used by malware or miscreants, such as commands that can be invoked to distort images and/or video recorded by the cameras, or a camera heating feature that could be abused to overheat the devices.

Sony did not respond to multiple requests for comment. But the researchers said Sony has quietly made available to its users an update that disables the backdoor accounts on the affected devices. However, users still need to manually update the firmware using a program called SNC Toolbox.

Greil said it seems likely that the backdoor accounts have been present in Sony cameras for at least four years, as there are signs that someone may have discovered the hidden accounts back in 2012 and attempted to crack the passwords then. SEC Consult’s writeup on their findings is available here.

In other news, researchers at security firm Cybereason say they’ve found at least two previously unknown security flaws in dozens of IP camera families that are white-labeled under a number of different brands (and some without brands at all) that are available for purchase via places like eBay and Amazon. The devices are all administered with the password “888888,” and may be remotely accessible over the Internet if they are not protected behind a firewall. KrebsOnSecurity has confirmed that while the Mirai botnet currently includes this password in the combinations it tries, the username for this password is not part of Mirai’s current configuration.

But Cybereason’s team found that they could easily exploit these devices even if they were set up behind a firewall. That’s because all of these cameras ship with a factory-default peer-to-peer (P2P) communications capability that enables remote “cloud” access to the devices via the manufacturer’s Web site — provided a customer visits the site and provides the unique camera ID stamped on the bottom of the devices.

Although it may seem that attackers would need physical access to the vulnerable devices in order to derive those unique camera IDs, Cybereason’s principal security researcher Amit Serper said the company figured out a simple way to enumerate all possible camera IDs using the manufacturer’s Web site.

“We reverse engineered these cameras so that we can use the manufacturer’s own infrastructure to access them and do whatever we want,” Serper said. “We can use the company’s own cloud network and from there jump onto the customer’s network.”

Lior Div, co-founder and CEO at Cybereason, said a review of the code built into these devices shows the manufacturer does not appear to have made security a priority, and that people using these devices should simply toss them in the trash.

“There is no firmware update mechanism built into these cameras, so there’s no way to patch them,” Div said. “The version of Linux running on these devices was in some cases 14 years old, and the other code libraries on the devices are just as ancient. These devices are so hopelessly broken from a security perspective that it’s hard to really understand what’s going on in the minds of people putting them together.”

Cybereason said it is not disclosing full technical details of the flaws because it would enable any attacker to compromise them for use in online attacks. But it has published a few tips that should help customers determine whether they have a vulnerable device. For example, the camera’s password (888888) is printed on a sticker on the bottom of the devices, and the UID — also printed on the sticker — starts with one of these text strings:


The sticker on the bottom of the camera will tell you if the device is affected by the vulnerability. Image: Cybereason.

“People tend to look down on IoT research and call it junk hacking,” Cybereason’s Yoav Orot wrote in a blog post about its findings. “But that isn’t the right approach if researchers hope to prevent future Mirai botnet attacks. A smart (insert device here) is still a computer, regardless of its size. It has a processor, software and hardware and is vulnerable to malware just like a laptop or desktop. Whether the device records The Walking Dead or lets you watch your cat while you’re at work, attackers can still own it. Researchers should work on junk hacking because these efforts can improve device security (and consumer security in the process), keep consumer products out of the garbage heap and prevent them from being used to carry out DDoS attacks.”

The discoveries by SEC Consult and Cybereason come as policymakers in Washington, D.C. are grappling with what to do about the existing and dawning surge in poorly-secured IoT devices. A blue-ribbon panel commissioned by President Obama issued a 90-page report last week full of cybersecurity policy recommendations for the 45th President of the United States, and IoT concerns and addressing distributed denial-of-service (DDoS) attacks emerged as top action items in that report.

Meanwhile, Morning Consult reports that U.S. Federal Communications Commission Chairman Tom Wheeler has laid out an unexpected roadmap through which the agency could regulate the security of IoT devices. The proposed certification process was laid out in a response to a letter sent by Sen. Mark Warner (D-Va.) shortly after the IoT-based attacks in October that targeted Internet infrastructure company Dyn and knocked offline a number of the Web’s top destinations for the better part of a day.

Morning Consult’s Brendan Bordelon notes that while Wheeler is set to step down as chairman on Jan. 20, “the new framework could be used to support legislation enhancing the FCC’s ability to regulate IoT devices.”

Planet DebianMirco Bauer: Secure USB boot with Debian


The moment you leave your laptop, say in a hotel room, you can no longer trust your system, as it could have been modified while you were away. Think you are safe because you have an encrypted disk? Well, if the boot partition is on the laptop itself, it can be manipulated and you will not notice, because the boot partition can't be encrypted. The BIOS needs to access the MBR and the boot loader, which in turn loads the Linux kernel, all unencrypted. There have been reports lately that Linux cryptsetup is insecure because you can spawn a root shell by holding the enter key for 70 seconds. That is not the real threat to your system. If someone has physical access to your hardware, they can get a root shell in less than a second by passing init=/bin/bash as a parameter to the Linux kernel in the boot loader, regardless of whether cryptsetup is used or not! The attacker can also use other ways, like booting a live system from CD/USB. The real insecurity here is the unencrypted boot partition, not some script that gets executed from it. So how do you prevent this physical-access attack vector? Just keep reading this guide.

This guide explains how to install Debian securely on your laptop using an external USB boot disk. The disk inside the laptop should not contain your /boot partition, since that is an easy target for manipulation. An attacker could, for example, change the boot scripts inside the initrd image to capture the passphrase of your encrypted volume. With a USB boot partition, you can unplug the USB stick after the operating system has booted. Best practice here is to keep the USB stick together with your bunch of keys. That way you will disconnect the USB stick soon after the boot has finished, so you can put it back into your pocket.

Secure Hardware Assumptions

We have to assume that the hardware you are using to download and verify the install media is safe to use. The same applies to the hardware where you are doing the fresh Debian install. That is, the hardware does not contain any malware in the form of code in EFI or other manipulation attempts that could influence the behavior of the operating system we are going to install.

Download Debian Install ISO

Feel free to use any Debian mirror and install flavor. For this guide I am using the download mirror in Germany and the DVD install flavor.
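The exact URL depends on your chosen mirror. For the 8.6.0 DVD image used in this guide, a command along these lines should work (the cdimage archive path below is an illustrative assumption, not necessarily the German mirror mentioned above):

```shell
# Download the Debian 8.6.0 amd64 DVD image (mirror path is an assumption)
wget https://cdimage.debian.org/cdimage/archive/8.6.0/amd64/iso-dvd/debian-8.6.0-amd64-DVD-1.iso
```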


Verify hashsum of ISO file

To know whether the ISO file was downloaded without modification, we have to check the hashsum of the file. The hashsum file can be found in the same directory as the ISO file on the download mirror. With hashsums, if even a single bit differs in the file, the resulting SHA512 sum will be completely different.

Obtain the hashsum file using:
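A wget invocation along these lines should work; the directory must be the same one the ISO was fetched from (the archive path below is an assumption):

```shell
# Fetch the SHA512SUMS file from the same directory as the ISO (path is an assumption)
wget https://cdimage.debian.org/cdimage/archive/8.6.0/amd64/iso-dvd/SHA512SUMS
```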


Calculate a local hashsum from the downloaded ISO file:

sha512sum debian-8.6.0-amd64-DVD-1.iso

Now you need to compare your hashsum with the one in the SHA512SUMS file. Since the SHA512SUMS file contains the hashsums of all files in that directory, you need to find the right line first. grep can do this for you:

grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS

Both commands executed after each other should show following output:

$ sha512sum debian-8.6.0-amd64-DVD-1.iso
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso
$ grep debian-8.6.0-amd64-DVD-1.iso SHA512SUMS
c3883edfc95e3b09152d46ce29a032eed1de71531549aee86bb98dab1528088a16f0b4d628aee8ac6cc420364e208d3d5e19d0dea3576f53b904c18e8f604d8c  debian-8.6.0-amd64-DVD-1.iso

As you can see the hashsum found in the SHA512SUMS file matches with the locally generated hashsum using the sha512sum command.

At this point we are not finished yet. These two matching hashsums just mean that whatever was on the download server matches what we have received and stored locally on disk. The ISO file and the SHA512SUMS file could still be modified versions!

And this is where GPG signatures come in, covered in the next section.

Download GPG Signature File

GPG signature files usually have the .sign file name extension but could also be named .asc. Download the signature file using wget:
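For the hashsum file verified here, the detached signature sits next to it on the mirror, so the download looks like this (the archive path below is an assumption):

```shell
# Fetch the detached GPG signature for the SHA512SUMS file (path is an assumption)
wget https://cdimage.debian.org/cdimage/archive/8.6.0/amd64/iso-dvd/SHA512SUMS.sign
```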


Obtain GPG Key of Signer

Letting gpg verify the signature will fail at this point as we don't have the public key of the signer:

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: Can't check signature: No public key

Downloading a key is trivial with gpg, but more importantly we need to verify that this key (DA87E80D6294BE9B) is trustworthy, as it could also be a key of the infamous man-in-the-middle.

Here you can find the GPG fingerprints of the official signing keys used by Debian. The ending of the "Key fingerprint" line should match the key id we found in the signature file from above.

gpg:                using RSA key DA87E80D6294BE9B

Key fingerprint = DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

DA87E80D6294BE9B matches Key fingerprint = DF9B 9C49 EAA9 2984 3258 9D76 DA87 E80D 6294 BE9B

To download and import this key run:

$ gpg --keyserver --recv-keys DA87E80D6294BE9B

Verify GPG Signature of Hashsum File

Ok, we are almost there. Now we can run the command that checks whether the hashsum file we have was modified by anyone, and whether its signature matches what Debian has generated and signed:

$ gpg --verify SHA512SUMS.sign
gpg: assuming signed data in 'SHA512SUMS'
gpg: Signature made Mon 19 Sep 2016 12:23:47 AM HKT
gpg:                using RSA key DA87E80D6294BE9B
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: Good signature from "Debian CD signing key <>" [unknown]
gpg: WARNING: This key is not certified with a trusted signature!
gpg:          There is no indication that the signature belongs to the owner.
Primary key fingerprint: DF9B 9C49 EAA9 2984 3258  9D76 DA87 E80D 6294 BE9B

The important line in this output is the "Good signature from ..." one. It still shows a warning since we never certified (signed) that Debian key. This can be ignored at this point though.

Write ISO Image to Install Media

With a verified, pristine ISO file we can finally start the install by writing it to a USB stick or blank DVD. Use your favorite tool to write the ISO to your install media and boot from it. I used dd with a USB stick attached as /dev/sdb.

dd if=debian-8.6.0-amd64-DVD-1.iso of=/dev/sdb bs=1M oflag=sync

Install Debian on Crypted Volume with USB boot partition

I am not explaining each step of the Debian install here. The Debian handbook is a good resource for covering each install step.

Follow the steps until the installer wants to partition your disk.

There you need to select the "Guided, use entire disk and set up encrypted LVM" option. After that, select the built-in disk of your laptop, which usually is sda, but double-check this before you go ahead, as it will overwrite the data! The 137 GB disk in this case is the built-in disk and the 8 GB one is the USB stick.

It makes no difference at this point whether you select "All files in one partition" or "Separate /home partition". The USB boot partition can be selected in a later step.

Confirm that you want to overwrite your built-in disk, shown as sda. It will take a while, as it writes random data to the disk to ensure there is no unencrypted data left over, for example from previous installations.

Now you need to enter the passphrase that will be used to protect the master key of the encrypted volume. Choose something long enough, like a sentence, and don't forget the passphrase, or you will no longer be able to access your data! Don't save the passphrase on any computer, smartphone or password manager. If you want to make a backup of your passphrase, then use pen and paper and store the paper backup in a secure location.

The installer will show you a summary of the partitioning as shown above but we need to make the change for the USB boot disk. At the moment it wants to put /boot on sda which is the built-in disk, while our USB stick is sdb. Select /boot and hit enter, after that select "Delete this partition".

After /boot was deleted we can create /boot on the USB stick shown as sdb. Select sdb and hit enter. It will ask if you want to create an empty partition table. Confirm that question with yes.

The partition summary shows sdb with no partitions on it. Select FREE SPACE and select "Create a new partition". Confirm the suggested partition size. Confirm the partition type to be "Primary".

It is time to tell the installer to use this new partition on the USB stick (sdb1) as /boot partition. Select "Mount point: /home" and in the next dialog select "/boot - static files of the boot loader" as shown below:

Confirm the made changes by selecting "Done setting up the partition".

The final partitioning should look now like the following screenshot:

If the partition summary looks good, go ahead with the installation by selecting "Finish partitioning and write changes to disk".

When the installer asks if it should force EFI, then select no, as EFI is not going to protect you.

Finish the installation as usual, select your preferred desktop environment etc.

GRUB Boot Loader

Confirm the dialog that wants to install GRUB to the master boot record. Here it is important to install it to the USB stick and not your built-in SATA/SSD disk! So select sdb (the USB stick) in the next dialog.

First Boot from USB

Once everything is installed, you can boot from your USB stick. As a simple test you can unplug the USB stick; the boot should then fail with a "no operating system found" or similar error message from the BIOS. If it doesn't boot even though the USB stick is connected, then most likely your BIOS is not configured to boot from USB media. A blank screen with nothing happening usually also means the BIOS can't find a boot device. You need to change the boot settings in your BIOS. As the steps are very different for each BIOS, I can't provide a detailed step-by-step list here.

Usually you can enter the BIOS using F1, F2 or F12 after powering on your computer. In the BIOS there is a menu to configure the boot order; in that list, USB disk/storage should be in the first position. After you have made the changes, save and exit the BIOS. Now it will boot from your USB stick first, GRUB will show up, and the boot process proceeds until it asks for your passphrase to unlock the encrypted volume.

Unmount /boot partition after Boot

If you boot your laptop from the USB stick, you will want to remove the stick after it has finished booting. This prevents an attacker from making modifications to your USB stick. To avoid data loss, we should not simply unplug the USB stick, but unmount /boot first and then unplug the stick. The good news is that we can automate this unmounting, so you just need to unplug the stick after the laptop has booted to your login screen.

Just add this line to your /etc/rc.local file:

umount /boot

After booting, you can verify that /boot was automatically unmounted by running:

mount | grep /boot

If that command produces no output, then /boot is not mounted and you can safely unplug the USB stick.

Final Words

From time to time you of course need to upgrade your Linux kernel, which lives on the /boot partition. This can still be done the regular way using apt-get upgrade, except that you need to mount /boot before the upgrade and unmount it again afterwards.
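A typical kernel-upgrade session then looks roughly like this (a sketch; it assumes /boot is listed in /etc/fstab so a bare mount /boot works, and that the USB stick is plugged in first):

```shell
# Plug the USB boot stick back in, then:
mount /boot       # mount the boot partition from the stick
apt-get update
apt-get upgrade   # may install a new kernel and initrd into /boot
umount /boot      # unmount again before unplugging the stick
```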

Enjoy your secured laptop. Now you can leave it in a hotel room without someone being able to obtain your passphrase by putting a keylogger in your boot partition. All the attacker will see is a fully encrypted hard disk. If they try to mess with your encrypted disk, you will notice, as the decryption will fail.

Disclaimer: other attack vectors are still possible, but they are much harder to pull off. Your hardware or BIOS can still be modified, but not by holding down the enter key for 70 seconds or by booting a live system.

CryptogramInternational Phone Fraud Tactics

This article outlines two different types of international phone fraud. The first can happen when you call an expensive country like Cuba:

My phone call never actually made it to Cuba. The fraudsters make money because the last carrier simply pretends that it connected to Cuba when it actually connected me to the audiobook recording. So it charges Cuban rates to the previous carrier, which charges the preceding carrier, which charges the preceding carrier, and the costs flow upstream to my telecom carrier. The fraudsters siphoning money from the telecommunications system could be anywhere in the world.

The second happens when phones are forced to dial international premium-rate numbers:

The crime ring wasn't interested in reselling the actual [stolen] phone hardware so much as exploiting the SIM cards. By using all the phones to call international premium numbers, similar to 900 numbers in the U.S. that charge extra, they were making hundreds of thousands of dollars. Elsewhere -- Pakistan and the Philippines being two common locations -- organized crime rings have hacked into phone systems to get those phones to constantly dial either international premium numbers or high-rate countries like Cuba, Latvia, or Somalia.

Why is this kind of thing so hard to stop?

Stamping out international revenue share fraud is a collective action problem. "The only way to prevent IRFS fraud is to stop the money. If everyone agrees, if no one pays for IRFS, that disrupts it," says Yates. That would mean, for example, the second-to-last carrier would refuse to pay the last carrier that routed my call to the audiobooks and the third-to-last would refuse to pay the second-to-last, and so on, all the way back up the chain to my phone company. But when has it been easy to get so many companies to do the same thing? It costs money to investigate fraud cases too, and some companies won't think it's worth the trade off. "Some operators take a very positive approach toward fraud management. Others see it as cost of business and don't put a lot of resources or systems in to manage it," says Yates.

Worse Than FailureRepresentative Line: Off in the Distance

Drew W got called in to track down a bug. Specifically, their application needed to take a customer’s location, and measure the distance to the nearest National Weather Service radar station. It knew the latitude and longitude of each, and needed to find the distance between those points, and it was wrong. It could be off by hundreds or even thousands of miles, especially in more remote locations.

This was the code in question:

from math import sqrt
dist = sqrt((abs(latdiff) * abs(latdiff)) + (abs(londiff) * abs(londiff)))

Now, there’s an obvious problem here, and a number of nitpicks. I’m going to start with the nitpicks. First, when you multiply a number by itself, the result is never negative, so you don’t need the abs, making the line sqrt(latdiff*latdiff + londiff*londiff). Of course, Python also has an exponent operator, allowing you to write the easier-to-read version, sqrt(latdiff**2 + londiff**2). But now that we bring it up, the math package in Python also includes a hypot function, which implements the distance formula for you, meaning the whole thing could have been written thus:

from math import hypot
dist = hypot(latdiff, londiff)

Now, if your only criterion is, “which solution is more ‘pythonic’?”, then it’s clear that the latter solution is superior. Of course, you should still get your fingers whacked with a mechanical keyboard if you tried to check that solution in, because it still has one major problem: it’s completely and utterly wrong.

If you’re not sure why… think of it as a special kind of rounding error.
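For the curious: latitude/longitude pairs are angles on a (roughly) spherical Earth, not coordinates on a flat plane, so the planar distance formula misbehaves badly, especially at high latitudes where degrees of longitude shrink. A common fix is the haversine formula; a minimal sketch, assuming coordinates in decimal degrees and a mean Earth radius of 6371 km:

```python
from math import radians, sin, cos, asin, sqrt

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two points
    given in decimal degrees, using a mean Earth radius of 6371 km."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat = lat2 - lat1
    dlon = lon2 - lon1
    # haversine of the central angle between the two points
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))
```

For example, haversine(0, 0, 0, 1) comes out to roughly 111 km, the length of one degree of longitude at the equator.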

[Advertisement] Scale your release pipelines, creating secure, reliable, reusable deployments with one click. Download and learn more today!

Harald WelteThe IT security culture, hackers vs. industry consortia

In a previous life I used to do a lot of IT security work, probably even at a time when most people had no idea what IT security actually is. I grew up with the Chaos Computer Club, as it was a great place to meet people with common interests, skills and ethics. People were hacking (aka 'doing security research') for fun, to grow their skills, to advance society, to point out corporate stupidities and to raise awareness about issues.

I've always shared any results worth noting with the general public. Whether it was in RFID security, on GSM security, TETRA security, etc.

Even more so, I always shared the tools, creating free software implementations of systems that - at that time - were difficult or impossible to access unless you worked for the vendors of the related devices, who obviously had a different agenda than disclosing security concerns to the general public.

Publishing security related findings at related conferences can be interpreted in two ways:

On the one hand, presenting at a major event will add to your credibility and reputation. That's a nice byproduct, but it shouldn't be the primary reason, unless you're some kind of egocentric stage addict.

On the other hand, presenting findings or giving any kind of presentation or lecture at an event is a statement of support for that event. When I submit a presentation at a given event, I think carefully if that topic actually matches the event.

The reason that I didn't submit any talks in recent years at CCC events is not that I didn't do technically exciting stuff that I could talk about - or that I wouldn't have the reputation that would make people consider my submission in the programme committee. I just thought there was nothing in my work relevant enough to bother the CCC attendees with.

So when Holger 'zecke' Freyther and I chose to present about our recent journeys into exploring modern cellular modems at the annual Chaos Communications Congress, we did so because the CCC Congress is the right audience for this talk. We did so, because we think the people there are the kind of community of like-minded spirits that we would like to contribute to. Whom we would like to give something back, for the many years of excellent presentations and conversations had.

So far so good.

However, in 2016, something happened that I haven't seen yet in my 17 years of speaking at Free Software, Linux, IT Security and other conferences: A select industry group (in this case the GSMA) asking me out of the blue to give them the talk one month in advance at a private industry event.

I could hardly believe it. How could they? Who am I? Do I spend sleepless nights and non-existent spare time on security research of cellular modems just to give a free presentation to corporate guys at a closed industry meeting? The same kind of industry that creates the problems in the first place, and that doesn't get its act together in building secure devices that respect people's privacy? Certainly not. I spend sleepless nights hacking because I want to share the results with my friends. To share them with people who have the same passion, whom I respect and trust. To help my fellow hackers understand technology one step more.

If that kind of request to undermine the researcher's/author's initial publication among friends is happening to me, I'm quite sure it must be happening to other speakers at the 33C3 or other events, too. And that makes me very sad. I think the initial publication is something that connects the speaker/author with their audience.

Let's hope the researchers/hackers/speakers have sufficiently strong ethics to refuse such requests. If certain findings are initially published at a certain conference, then that is the initial publication. Period. Sure, you can ask afterwards if an author wants to repeat the presentation (or a similar one) at other events. But pre-empting the initial publication? Certainly not with me.

I offered the GSMA that I could talk on the importance of having FOSS implementations of cellular protocol stacks as an enabler for security research, but apparently this was not to their interest. Seems like all they wanted was an exclusive heads-up on work they neither commissioned nor supported in any other way.

And by the way, I don't think what Holger and I will present is all that exciting in the first place. More or less the standard kind of security nightmares. By now we are all so numbed down by nobody considering security and/or privacy in the design of IT systems that it is hardly news anymore. IoT as it is done so far might very well be the doom of mankind. An unstoppable tsunami of insecure and privacy-invading devices, built on ever more complex technology with way too many security issues. We shall henceforth call IoT the Industry of Thoughtlessness.

Harald WelteDHL zones and the rest of the world

I typically prefer to blog about technical topics, but the occasional stupidity in every-day (business) life is simply too hard to resist.

Today I updated the shipping pricing / zones in the ERP system of my company to predict shipping rates based on weight and destination of the package.

Deutsche Post, the German Postal system is using their DHL brand for postal packages. They divide the world into four zones:

  • Zone 1 (EU)
  • Zone 2 (Europe outside EU)
  • Zone 3 (World)

You would assume that "World" encompasses everything that's not part of the other zones. So far so good. However, I then stumbled upon Zone 4 (rest of world). See for yourself:


So the "World" according to DHL is a very small group of countries including Libya and Syria, while countries like Mexico are "rest of world".

Quite charming. I wonder which PR, communications or marketing guru came up with such a disqualifying name. Maybe they should have called it 3rd world and 4th world instead? Or even Discworld?

Sky CroeserPrecarious Times: Precarious Spaces

Beyond the edges of the map: The ghost city of Ordos Kangbashi – Christina Lee, Senior Lecturer, Curtin University
The ghost city phenomenon in China first came to international attention in 2009 in an Al Jazeera report. A combination of different factors, including the ways in which state planning works and the global financial crisis, led to Ordos Kangbashi being ‘stillborn’ as a city. The international reporting on Ordos Kangbashi and other ‘ghost cities’, however, frequently fetishised these cities, with reporters and academics visiting them during the earlier phases of their construction and ignoring the people who actually live there. Lee talked about exploring Ordos Kangbashi, seeing people – and signs of people – who lived there, and were perhaps experiencing a very different temporality.

Attending to Spectral Traces and Wounded Places – Karen E. Till, Associate Professor, Department of Geography, Maynooth University, Ireland, with Gerry Kearns
Till’s work looks at ANU Productions‘ performance Laundry (2011), linked to one of the Magdalene Laundry sites. Women used to do sex work in the area, but moral crusaders and police undermined their attempts to survive this way, and many sex workers ended up in Magdalene Laundries. These laundries drew not only philanthropic donations, but also the unpaid labour of the inmates. The performance drew participants into women’s experiences of the site, including the humiliation, gruelling work, legal confinement, forced removal of babies, and loss of identity. It made visible the physical and psychological torture that women experienced, asking participants to remember Ireland’s haunted past. This production, and others that make the histories of particular places visible, help us come to terms with the need to recognise and mourn suffering that was previously deemed unmentionable.

Sea Passages: Between trauma, reparation and recognition – Susannah Radstone, Professor of Cultural Theory, University of South Australia
Radstone discusses Alex Seton’s work at the 2014 Adelaide Biennial, someone died trying to have a life like mine, which consists of a series of lifejackets sculpted in white marble. This work opens up complex questions about how we witness trauma, and what that witnessing might achieve. Seton argues that the title of his work tries to create a bridge of empathy between the viewer and those who experience trauma. But drawing on Suvendi Perrera’s work, Radstone asks whether Seton’s piece does actually bind viewers together with asylum seekers, or whether it allows for a distancing and mastery over a tragedy only viewed from a distance. Radstone explores Seton’s work in the context of histories of art and media coverage – as well as processes of reparative healing – around shipwrecks and trauma.


Sky CroeserPrecarious Times: Banal Precarity

The symposium opened with a panel on Banal Precariousness.

Anne Allison, Professor of Cultural Anthropology, Duke University, spoke on “Cleaning up dead remains in times of living/dying all alone: social singlification in Japan”, building on her book, Precarious Japan. As demographic changes happen in Japan, many older people have become worried about dying alone rather than with their families, and about not being mourned. Companies have emerged that “help in your move to heaven” – wrapping up loose ends, disposing of (or ‘ordering’) deceased people’s belongings, and providing a sense of care and respect. The labour of this is both material and affective.

There’s a growing economic sector around ‘the business of the end’, catering for people who don’t have family, or feel their families would be burdened by caring for them and their possessions. This can also be seen as part of a neoliberal shift towards individualised responsibility: an expectation that the individual will take care of themselves, including in the moment of death. There’s a lot of reference made to ‘the stink of a bad death’ – a sense that individuals need to ensure that they don’t leave a mess (literally and morally) in dying. Companies manage ‘special cleanup’, burial, mourning, and in some cases are encouraging people to make ‘grave friends’ – people who they meet before death, who will be buried nearby, so that they won’t be lonely after death. Companies will perform mourning ceremonies for your possessions, too, while you’re still alive – Allison wonders if this allows people to grieve themselves by proxy, as even mourning becomes an individual responsibility.

Tanja Dreher, ARC Future Fellow at the University of Wollongong, addressed Precarious Attention. She opened by acknowledging that thinking about precariousness benefits from centring Indigenous experiences in settler-colonial states. Dreher’s research has been informed particularly by work by Indigenous women in Australia, including Amy McGuire, Marcia Langton, and Celeste Liddle.

There are key long-term concerns of media studies that underpin work on precarity that come in part out of Judith Butler’s work. Vulnerability, grief, and value are unevenly distributed. From Black Lives Matter to social media memes around terrorist attacks, there is a politics emerging around the grieving of particular lives (and the failure to grieve others). Some lives are produced as more grievable than others, which enables the ongoing prosecution of war. Media is a key factor in this process, and therefore also an important site of struggle.

Dreher notes that often understandings of uneven attention and concern are framed within visual metaphors. Auditory metaphors can also be useful, however. We can think about calls for attention to different tragedies, and calls for listening to different voices, including those of Indigenous women. It is a political act to struggle against the configuring of particular kinds of suffering – and the suffering of some groups – as banal and unworthy of comment or grieving.


Photo by Nagarajan Kanna

Finally, Susan Leong, Research Fellow at Curtin University, spoke on ‘Banal Precariousness: a Daily Prayer’. Much of our precariousness is related to work. Guy Standing talks about the growth of the ‘precariat’ as a new class in the making. ‘Precarious’ comes from the Latin root which means both a prayer and petition. Leong spoke about different metaphors of precariousness: rather than standing on a precipice, we might think about climbing constantly-shifting sand dunes in the wrong shoes and clothing, without hopes of being saved. It might be banal, if it weren’t so fundamental and essential to our experience.

In Australia, the Turnbull government is creating a situation of ‘churning innovation’, in which we are expected to be nimble, agile, and flexible. As many as 40% of Australian jobs could be replaced by automation over the next decade: in higher education, we see the shift to sessional employment, and shifts in teaching delivery like MOOCs. We need to understand the technologies that perpetuate and facilitate – and sometimes allow resistance to – banal precariousness. Ideas are fragile, according to Mary Douglas, and they require support to travel, grow, and rest. Leong notes the time it’s taken her to work through these ideas around banal precariousness, particularly as traversing the precarity of research within academia.



Krebs on SecurityDDoS, IoT Top Cybersecurity Priorities for 45th President

Addressing distributed denial-of-service (DDoS) attacks designed to knock Web services offline and security concerns introduced by the so-called “Internet of Things” (IoT) should be top cybersecurity priorities for the 45th President of the United States, according to a newly released blue-ribbon report commissioned by President Obama.

“The private sector and the Administration should collaborate on a roadmap for improving the security of digital networks, in particular by achieving robustness against denial-of-service, spoofing, and other attacks on users and the nation’s network infrastructure,” reads the first and foremost cybersecurity recommendation for President-elect Donald Trump. “The urgency of the situation demands that the next Administration move forward promptly on our recommendations, working closely with Congress and the private sector.”

The 12-person, non-partisan commission produced a 90-page report (PDF) and recommended as their very first action item that the incoming President “should direct senior federal executives to launch a private–public initiative, including provisions to undertake, monitor, track, and report on measurable progress in enabling agile, coordinated responses and mitigation of attacks on the users and the nation’s network infrastructure.”

The panel said this effort should build on previous initiatives, such as a 2011 program by the U.S. Department of Commerce called the Industry Botnet Group.

“Specifically, this effort would identify the actions that can be taken by organizations responsible for the Internet and communications ecosystem to define, identify, report, reduce, and respond to attacks on users and the nation’s network infrastructure,” the report urged. “This initiative should include regular reporting on the actions that these organizations are already taking and any changes in technology, law, regulation, policy, financial reimbursement, or other incentives that may be necessary to support further action—while ensuring that no participating entity obstructs lawful content, applications, services, or nonharmful devices, subject to reasonable network management.”

The report spans six major imperatives, comprising 16 recommendations and 63 associated action items. The second major imperative focuses on IoT security concerns, and urges the federal government and private industry to embark on a number of initiatives to “rapidly and purposefully” improve the security of the Internet of Things.

“The Department of Justice should lead an interagency study with the Departments of Commerce and Homeland Security and work with the Federal Trade Commission, the Consumer Product Safety Commission, and interested private sector parties to assess the current state of the law with regard to liability for harm caused by faulty IoT devices and provide recommendations within 180 days,” the panel recommended. “To the extent that the law does not provide appropriate incentives for companies to design security into their products, and does not offer protections for those that do, the President should draw on these recommendations to present Congress with a legislative proposal to address identified gaps, as well as explore actions that could be accomplished through executive order.”

Meanwhile, Morning Consult reports that U.S. Federal Communications Commission Chairman Tom Wheeler has laid out an unexpected roadmap through which the agency could regulate the security of IoT devices. The proposed certification process was laid out in a response to a letter sent by Sen. Mark Warner (D-Va.) shortly after the IoT-based attacks in October that targeted Internet infrastructure company Dyn and knocked offline a number of the Web’s top destinations for the better part of a day.

Morning Consult’s Brendan Bordelon notes that while Wheeler is set to step down as chairman on Jan. 20, “the new framework could be used to support legislation enhancing the FCC’s ability to regulate IoT devices.”


It’s nice that this presidential commission placed a special emphasis on IoT and denial-of-service attacks, as these two threats alone are clear and present dangers to the stability of e-commerce and free expression online. However, this report overall reads very much like other blue-ribbon commission reports of years past: The recommendations eschew new requirements in favor of the usual calls for best practices, voluntary guidelines, increasing industry-government information sharing, public/private partnerships, and public awareness campaigns.

One recommendation I would like to have seen in this report is a call for federal legislation that requires U.S.-based hosting providers to block spoofed traffic from leaving their networks.

As I noted in a November 2015 story, The Lingering Mess from Default Insecurity, one major contributor to the massive spike in denial-of-service attacks over the past few years is that far too many ISPs and hosting providers allow traffic to leave their networks that did not originate there. Using well-known attack techniques known as traffic amplification and reflection, an attacker can “reflect” his traffic from one or more third-party machines toward the intended target.

In this type of assault, the attacker sends a message to a third party, while spoofing the Internet address of the victim. When the third party replies to the message, the reply is sent to the victim — and the reply is much larger than the original message, thereby amplifying the size of the attack. According to the latest DDoS report from Akamai, more than half of all denial-of-service attacks in the third quarter of 2016 involved reflection and spoofing.
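The arithmetic behind reflection is simple enough to sketch. The request and response sizes below are illustrative assumptions (roughly in line with DNS-based amplification), not figures from the Akamai report:

```python
# Toy illustration of reflection/amplification arithmetic.
# The byte counts are assumed example values, not measured data.

def amplification_factor(request_bytes: int, response_bytes: int) -> float:
    """Bandwidth amplification: bytes delivered to the victim per byte sent."""
    return response_bytes / request_bytes

query_size = 64     # small spoofed UDP request, source address set to the victim
reply_size = 3000   # large reply the third party sends to the victim

factor = amplification_factor(query_size, reply_size)
print(f"Amplification factor: {factor:.1f}x")

# Every megabit per second the attacker sends becomes roughly factor times
# that much traffic arriving at the victim.
attacker_mbps = 100
print(f"Traffic arriving at victim: ~{attacker_mbps * factor:,.0f} Mbit/s")
```

With these assumed sizes, 100 Mbit/s of spoofed queries turns into several gigabits per second at the victim, which is why reflection is so attractive to attackers.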

One basic step that many ISPs and hosting providers could take to blunt these spoofing attacks, but apparently are not taking, involves a network security standard that was developed and released more than a dozen years ago. Known as BCP38, it prevents abusable resources on an ISP’s network from being leveraged in denial-of-service attacks. BCP38 is designed to filter such spoofed traffic, so that the reflected traffic from the third party never even traverses the network of an ISP that has adopted the anti-spoofing measures.
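The check BCP38 mandates is conceptually tiny: before a packet leaves the network, confirm that its source address actually belongs to a prefix assigned to that network. Real deployments do this in routers (access lists or unicast reverse-path forwarding), not in application code; the Python model below, using RFC 5737 documentation prefixes as stand-ins for a provider's real allocations, is only a sketch of the logic:

```python
# Sketch of BCP38-style egress filtering. The prefixes are RFC 5737
# documentation ranges standing in for a provider's real allocations;
# real filtering is done in router hardware, not in Python.
import ipaddress

ASSIGNED_PREFIXES = [
    ipaddress.ip_network("203.0.113.0/24"),
    ipaddress.ip_network("198.51.100.0/24"),
]

def permit_egress(source_ip: str) -> bool:
    """Allow an outbound packet only if its source address is one the
    provider actually assigned; anything else is presumed spoofed."""
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ASSIGNED_PREFIXES)

print(permit_egress("203.0.113.7"))   # legitimate customer address: permitted
print(permit_egress("192.0.2.55"))    # spoofed victim address: dropped
```

A provider enforcing this check cannot be used as a launch point for reflection, because the spoofed packets never make it past its own edge.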

However, there are non-trivial economic reasons that many ISPs fail to adopt this best practice. This blog post from the Internet Society does a good job of explaining why many ISPs decide not to implement BCP38. Ultimately, it comes down to cost and to a fear that adoption of this best practice will increase costs and prompt some customers to seek out providers that do not enforce this requirement. In some cases, U.S.-based hosting providers that allow spoofing/reflection have been sought out and recommended among miscreants involved in selling DDoS-for-hire services.

In its Q3 2016 State of the Internet report, Akamai notes that while Chinese ISPs occupy the top two sources of spoofed traffic, several large U.S.-based providers make a showing here as well:

Image: Akamai.


It is true that requiring U.S. hosting providers to block spoofing would not solve the spoofing problem globally. But I believe it’s high time that the United States led by example in this arena, if only because we probably have the most to lose by continued inaction. According to Akamai, more than 21 percent of all denial-of-service attacks originate from the United States. And that number has increased from 17 percent a year ago, Akamai found. What’s more, the U.S. is the most frequent target of these attacks, according to DDoS stats released this year by Arbor Networks.

CryptogramVoynich Manuscript Facsimile Published

Yale University Press has published a facsimile of the Voynich Manuscript.

The manuscript is also available online.

Cory DoctorowA new edition of the Information Doesn’t Want to Be Free audiobook featuring Neil Gaiman

“Information Doesn’t Want to Be Free” is my 2014 nonfiction book about copyright, the internet, and earning a living, and it features two smashing introductions — one by Neil Gaiman and the other by Amanda Palmer.

I released an audio edition of the book in 2014, read by the incomparable Wil Wheaton, who also read the audiobook of my novel Homeland. At the time, I tried to get Neil and Amanda into a studio to record their intros, but we couldn’t get the stars to align.

But good things come to those who wait! Neil Gaiman’s 2016 essay collection The View From the Cheap Seats includes his introduction to my book, and the audiobook edition — which Neil himself read — therefore includes Neil’s reading of this essay.

Thanks to Neil, his agents, and the kind people at Harper Audio, I was able to get permission to include Neil’s reading of his essay for a remastered audio version of the audiobook (many thanks to Wryneck Studios’ John Taylor Williams for turning this around very quickly!), and as of today, you can buy the new edition for $15. As with every one of my audiobooks, this is DRM-free, and makes a snazzy holiday gift.

Information Doesn’t Want to Be Free audiobook featuring Neil Gaiman [Craphound]

Planet DebianShirish Agarwal: The Anti-Pollito squad – arrest and confession

Disclaimer – This is an attempt at humor and hence entirely fictional in nature. While some incidents depicted are true, the context and the story woven around them are by yours truly. None of the mascots of Debian were hurt during this blog post😉. I also disavow any responsibility for any hurt (real or imagined) to any past, current and future mascots. The attempt should not be looked upon as demeaning people who are accused of false crimes, tortured, and have confessions eked out of them, as this happens quite a lot (in India for sure, but I guess it’s the same the world over in varying degrees). The idea is loosely inspired by Chocolate: Deep Dark Secrets (2005).

On a more positive note, let’s start –

Being a Sunday morning, I woke up late to find incessant knocking on the door; incidentally, mum was not at home. Opening the door, I found two official-looking gentlemen. They asked my name and credentials, then tortured and arrested me for “Group conspiracy of Malicious Mischief in the second and third degrees”.

The torture was done by means of making me forcefully watch endless reruns of ‘Norbit‘. While I do love Eddie Murphy, this was one of his movies he could have done without😦. I guess for many people, watching it once was torture enough. I *think* it was nominated for Razzie awards; I don’t know whether it won, but this is beside the point.

Unlike the 20 years it takes for a typical case to reach its conclusion even in the smallest court in India, thanks to the endless torture I was made to confess and was given a summary judgement. The judgement was/is as follows –

a. Do 100 hours of Community service in Debian in 2017. This could be done via blog posts, raising tickets in the Debian BTS or in whichever way I could be helpful to Debian.

b. Write a confessional with some photographic evidence sharing/detailing some of the other members who were part of the conspiracy in view of the reduced sentence.

So now, have been forced to write this confession –

As you all know, I won a bursary this year for DebConf16. What is not known by most people is that I also got an innocuous-looking e-mail titled ‘Pollito for DPL‘. I can’t name all the names, as the investigation is still ongoing into how far-reaching the conspiracy is. The email was purportedly written by members of a ‘cabal within the cabal’ in Debian. I looked at the email header to see if it was genuine and whether I could trace the origin, but was left none the wiser; obviously these people are far too technically advanced to fall for simple tricks like that –

Anyways, secretly happy that I have been invited to be part of these elites, I did the visa thing, packed my bags and came to Debconf16.

At this juncture, I had no idea whether it was real or I had imagined the whole thing. Then, to my surprise, I saw this –

evidence of conspiracy to have Pollito as DPL, Wifi Password

Just like the Illuminati, the conspiracy was in plain sight for those who knew about it. Most people thought of it as a joke, but those like me who had got the e-mails knew better. I knew that the thing was real; now I only needed to bide my time, knowing the opportunity would present itself.

And a few days later, sure enough, there was a trip planned to ‘Table Mountain, Cape Town’. A few people planned to hike up the mountain, while the rest chose to take the cable car to the top.

First glance of the cable car with table mountain as background

Quite a few people came along with us and bought return tickets for the cable car up the mountain and back.

Ticket for CPT Table mountain car cable

Incidentally, I was wondering whether the South African Govt. was getting the tax or not. If you look at the ticket, there is just a bar-code. In India as well as the U.S. there is a TIN – Tax Identification Number –

TIN displayed on an invoice from

A few links to share what it is all about. While these should be on all invoices, one needs to specially check when buying high-value items. In India, as shared in the article, the awareness and knowledge leave a bit to be desired. While I’m drifting from the incident, it would be nice if somebody from SA could share how things work there.

Moving on, we boarded the cable car. It was quite a spacious cable car; I guess around 30-40 people or more could see everything, along with the operator.

from inside the table mountain cable car 360 degrees

It was a pleasant cacophony of almost two dozen or more nationalities in this 360-degree moving chamber. I was a little worried, though, as it essentially is a bucket and there is always the possibility that a severe wind could damage it. Later somebody did share that some frightful incidents had occurred on the cable car not too long ago.

It took about 20-25 odd minutes to get to the top of Table Mountain, and we were presented with views such as the one below –

View from Table Mountain cable car looking down

The picture I am sharing is actually from when we were going down, as all the pictures taken going up via the cable car were over-exposed. Also, it was more crowded on the way up than on the way down, so handling the mobile camera was not so comfortable.

Once we reached the top, the wind was blowing at incredible speeds. Even with my jacket and everything, I was feeling cold. Most of the group, around 10-12 people, looked around to see if we could find a place to have some refreshments and get some energy back into the body. So we all ventured to a place and placed our orders –

the bleh... Irish coffee at top of Table Mountain

I was introduced to Irish Coffee a few years back and have had some incredible Irish Coffees in Pune and elsewhere. I do hope to be able to make Irish Coffee at home if and when I have my own house. Done right, it is hotter than brandy and perfect if you are suffering from a cold, but it really needs some skill. This is the only drink I wanted in SA which I never got right😦. As South Africa was freezing for me, this would have been the perfect antidote, but the one there, as well as the ones elsewhere, were all …bleh.

What was interesting, though, was the coffee caller beside it. It looked like a simple circuit mounted on a PCB with lights, vibration and RFID, and it worked exactly like that. I am guessing that when the order is ready, an interrupt signal is sent via radio waves, which causes the buzzer to light up and vibrate. Here’s the back panel if somebody wants to take inspiration and try it as a fun project –

backpanel of the buzz caller

Once we were somewhat strengthened by the snacks, chai, coffee etc., we made our move to see the mountain. The only way to describe it is that it’s similar to Raigad Fort, but the plateau seemed bigger. The Wikipedia page of Table Mountain attempts to convey this, but I guess it’s more clearly envisioned in one of the pictures shared therein.

table mountain panoramic image

I have to say that while Table Mountain is beautiful and haunting, as it has scenes like these –

Some of the oldest rocks known to wo/man.

There is something there which pulls you, which reminds you of a long-lost past. I could have simply sat there for hours together, but as I was part of the group I had to keep up with them. Not that I minded.

The moment I was watching this, I was transported to memories of the Himalayas from about 20 odd years ago. In that previous life, I had the opportunity to be with some of the most beautiful women and also to be in the most happening of places, the Himalayas. Years before, I had shared some of my experiences in the Himalayas; I discontinued it as I didn’t have a decent camera at that point in time. While I don’t wanna digress, I would challenge anybody to experience the Himalayas and then compare. It is just something inexplicable. The beauty and rawness the Himalayas show make you feel insignificant and yet part of the whole cosmos. What Paulo Coelho expressed in The Valkyries is something that can be felt in the Himalayas. Leh, Ladakh, Himachal, Garhwal, Kumaon. The list will go on forever, as there are so many places, each more beautiful than the other. Most places are also extremely backpacker-friendly, so if you ask around you can get some awesome deals if you want to spend more than a few days in one place.

Moving on, while making small talk, @olasd (Nicolas Dandrimont), the headmaster of our trip, eked out from each of us that we wanted to have Pollito as our DPL (Debian Project Leader) for 2017. A few pictures are being shared below as supporting evidence –

The Pollito as DPL cabal in action

members of the Pollito as DPL

where am I or more precisely how far am I from India.

While I do not know who was further up than Nicolas in the coup which would take place, the idea was this –

If the current DPL steps down, we would take all and any necessary actions to make Pollito our DPL.

Pollito going to SA - photo taken by Jonathan Carter This has been taken from Pollito’s adventure

Being a responsible journalist, I also enquired about Pollito’s true history, as the story would not have been complete without it. This is the e-mail I got from Gunnar Wolf, a friend and DD from Mexico🙂

Turns out, Valessio has just spent a week staying at my house🙂 And
in any case, if somebody in Debian knows about Pollito’s
childhood… That is me.

Pollito came to our lives when we went to Congreso Internacional de
Software Libre (CISOL) in Zacatecas city. I was strolling around the
very beautiful city with my wife Regina and our friend Alejandro
Miranda, and at a shop at either Ramón López Velarde or Vicente
Guerrero, we found a flock of pollitos.

Even if this was comparable to a slave market, we bought one from
them, and adopted it as our own.

Back then, we were a young couple… Well, we were not that young
anymore. I mean, we didn’t have children. Anyway, we took Pollito with
us on several road trips, such as the only time I have crossed an
international border driving: We went to Encuentro Centroamericano de
Software Libre at Guatemala city in 2012 (again with Alejandro), and
you can see several Pollito pics at:

Pollito likes travelling. Of course, when we were to Nicaragua for
DebConf, Pollito tagged along. It was his first flight as a passenger
(we never asked about his previous life in slavery; remember, Pollito
trust no one).

Pollito felt much welcome with the DebConf crowd. Of course, as
Pollito is a free spirit, we never even thought about forcing him to
come back with us. Pollito went to Switzerland, and we agreed to meet
again every year or two. It’s always nice to have a chat with him.


So, with that backdrop, I would urge fellow Debianites to take up the slogans –




The first step to make Pollito the DPL is to ensure he has a (

We also need him to be made a DD because only then can he become a DPL.

In solidarity and in peace🙂

Filed under: Miscellenous Tagged: #caller, #confession, #Debconf16, #debian, #Fiction, #history, #Pollito, #Pollito as DPL, #Table Mountain, Cabal, memories, south africa

Planet DebianNorbert Preining: Debian/TeX Live 2016.20161130-1

As we are moving closer to the Debian release freeze, I am shipping out a new set of packages. Nothing spectacular here, just the regular updates and a security fix that was only reported internally. Add sugar and a few minor bug fixes.

I have been silent for quite some time: busy at my new job, busy with my little monster, writing papers, caring for visitors, living. I have quite a lot of things I want to write about, but not enough time, so only this very short post.


New packages

awesomebox, baskervillef, forest-quickstart, gofonts, iscram, karnaugh-map, tikz-optics, tikzpeople, unicode-bidi.

Updated packages

acmart, algorithms, aomart, apa, apa6, appendix, apxproof, arabluatex, asymptote, background, bangorexam, beamer, beebe, biblatex-gb7714-2015, biblatex-mla, biblatex-morenames, bibtexperllibs, bidi, bookcover, bxjalipsum, bxjscls, c90, cals, cell, cm, cmap, cmextra, context, cooking-units, ctex, cyrillic, dirtree, ekaia, enotez, errata, euler, exercises, fira, fonts-churchslavonic, formation-latex-ul, german, glossaries, graphics, handout, hustthesis, hyphen-base, ipaex, japanese, jfontmaps, kpathsea, l3build, l3experimental, l3kernel, l3packages, latex2e-help-texinfo-fr, layouts, listofitems, lshort-german, manfnt, mathastext, mcf2graph, media9, mflogo, ms, multirow, newpx, newtx, nlctdoc, notes, patch, pdfscreen, phonenumbers, platex, ptex, quran, readarray, reledmac, shapes, showexpl, siunitx, talk, tcolorbox, tetex, tex4ht, texlive-en, texlive-scripts, texworks, tikz-dependency, toptesi, tpslifonts, tracklang, tugboat, tugboat-plain, units, updmap-map, uplatex, uspace, wadalab, xecjk, xellipsis, xepersian, xint.

Sociological ImagesWhy Obama Won 53 Counties in Iowa and Clinton Won 6

Originally posted at Orgtheory.

Iowa in 2008, Iowa in 2016

So there are a thousand reasons Trump won the election, right? There’s race, there’s class, there’s gender. There’s Clinton as a candidate, and Trump as a candidate, the changing media environment, the changing economic environment, and the nature of the primary fields. It’s not either-or, it’s all of the above.

But Josh Pacewicz’s new book, Partisans and Partners: The Politics of the Post-Keynesian Society, implies a really interesting explanation for the swing voters in the Rust Belt—the folks who went Obama in 2008, and maybe 2012, but Trump in 2016. These voters may make up a relatively small fraction of the total, but they were key to this election.

Pacewicz’s book, which just came out this month, doesn’t mention Trump, and presumably went to press long before Trump was even the presumptive Republican nominee. And the dynamics Pacewicz identifies didn’t predict a specific outcome. (In fact, Josh guest-blogged at orgtheory in August, but focused on explaining party polarization, and did not venture to predict a winner.)

But Partisans and Partners nevertheless does a really good job of explaining what just happened. Its argument is complex, and doesn’t imply a lot of obvious leverage points for decreasing political polarization or the desire for “disruptive” candidates. But I think it’s an important explanation nonetheless.

The book is based on ethnographic and interview data collected over a period of several years in two Rust-Belt Iowa cities of similar size, one traditionally Republican, and the other traditionally Democratic. Both of these cities saw a transformation in their politics in the 1980s. Until the 1970s, urban politics were organized around a partisan divide closely associated with local business elites, on the Republican side, and union leaders, on the Democratic side. Politics was highly oppositional, and the party that won local elections got to distribute a lot of spoils. But it was not polarized in the sense it is today—while there were fundamental differences between the parties, particularly on economic issues, positions on social issues were less rigidly defined.

During the 1980s, something changed. Pacewicz calls that something “neoliberal reforms”; I might argue that those are just one piece of a bigger economic transformation that was happening. But either way, the political environment shifted. Regulatory changes encouraged corporate mergers and buyouts. This put control of local industry in distant cities and hollowed out both business elites and union power. The federal government shifted from simply handing cities pots of money that the party in power could control, to requiring cities to compete for funds, putting together applications that would compete with those of other cities. This environmental change facilitated the decline of the old “partisans”—the business and labor elites—and the rise of a new group of local power brokers—the “partners”.

The partners were more technocratic and pragmatic. They did not have strong party allegiances, nor did they see politics as being fundamentally about competition between the incompatible interests of business and labor. Instead, they focused on building temporary alliances among diverse groups with often-conflicting interests. Think business-labor roundtables, public-private partnerships, and the like. This is what was needed to attract industry from other places (look how smooth our labor relations are!) and to compete for federal grants and incentives (cities with obviously oppositional politics tended to lose out). The end of politics. Sounds great, right?

The problem was that these dynamics also hollowed out local parties. The old partisans had lost power. Partners didn’t want to be active in party politics. This left parties to activists, who over time came to represent increasingly extreme positions—a new wave of partisans.

What did this mean for the average voter? Pacewicz shows how older voters still conceptualized the two parties as fundamentally reflecting a business/labor divide. But most younger voters came to understand politics as representing a divide between partners—people working together, setting aside differences, for the benefit of the community—and partisans—people representing the interests of particular groups.

Partners didn’t like politics. They didn’t really think it should exist. They disliked political polarization, thought that people were pretty similar underneath their surface differences, and that conflict was generally avoidable. They distrusted politics, their party affiliation tended to be provisional, and they often responded only to negative ads around hot-button issues.

The new partisans, on the other hand, were alienated from contemporary life. They thought things were going to hell in a handbasket. They were looking for change, and saw outsider candidates as appealing—candidates who promised to shake up the system. Many had a strong preference for Democrats or Republicans. But while for traditional voters party affiliation was rooted in a sense of positive commitment, for the new partisans, it was based on disaffection with the alternative. And a key group of “partisans” was politically uncommitted (a contradiction in terms?)—disaffected and angry and wanting politics to solve their problems, but not aligned with a party.

The 2008 election illustrates how these types respond to candidates. In the primaries, partners liked Obama, responding well to his post-partisan image. He was less favored by Democrats and traditional voters and partisans. By fall, though, traditional (Democratic) voters and (Democratic) partisans tended to get on board, while partners waffled as Obama came to seem more partisan.

The most erratic group was the uncommitted partisans. These people wanted somebody—anybody—to shake things up, to change the system. And they wanted somebody to represent them—the outsider. They tended to lean toward GOP candidates (one illustrative voter was a big Palin fan), but many also simply remained disaffected and stayed home.

This is the group, it seems to me, that is key to understanding the 2016 election. Democrats gonna Democrat, and Republicans gonna Republican. In the end, most people really aren’t swing voters. But the unaffiliated partisans are the type of voters who would have found some appeal in both Bernie and Trump: someone claiming to represent the everyman, and someone willing to shake up the status quo.

In the end, these folks are unlikely to be motivated to vote for a Clinton or a Romney. It’s just more of the same. But they can be energized by populism, and by the outsider. These are the people who will vote for Trump just as a big old middle finger to the system. Partisans and Partners isn’t specifically trying to explain Trump’s win, in Iowa or anywhere else. But it does as good a job as anything I’ve read at pointing in the direction we should be looking.

Elizabeth Popp Berman, PhD is an associate professor of sociology at the University at Albany, SUNY, and the author of the award-winning book Creating the Market University: How Academic Science Became an Economic Engine. 


CryptogramFriday Squid Blogging: Striped Pyjama Squid

Here's a nice picture of one of the few known poisonous squids.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramAnalyzing WeChat

Citizen Lab has analyzed how censorship works in the Chinese chat app WeChat:

Key Findings:

  • Keyword filtering on WeChat is only enabled for users with accounts registered to mainland China phone numbers, and persists even if these users later link the account to an International number.

  • Keyword censorship is no longer transparent. In the past, users received notification when their message was blocked; now censorship of chat messages happens without any user notice.

  • More keywords are blocked on group chat, where messages can reach a larger audience, than one-to-one chat.

  • Keyword censorship is dynamic. Some keywords that triggered censorship in our original tests were later found to be permissible in later tests. Some newfound censored keywords appear to have been added in response to current news events.

  • WeChat's internal browser blocks China-based accounts from accessing a range of websites including gambling, Falun Gong, and media that report critically on China. Websites that are blocked for China accounts were fully accessible for International accounts, but there is intermittent blocking of gambling and pornography websites on International accounts.

Lots more details in the paper.

CryptogramGuessing Credit Card Security Details

Researchers have found that they can guess various credit-card-number security details by spreading their guesses around multiple websites so as not to trigger any alarms.

From a news article:

Mohammed Ali, a PhD student at the university's School of Computing Science, said: "This sort of attack exploits two weaknesses that on their own are not too severe but when used together, present a serious risk to the whole payment system.

"Firstly, the current online payment system does not detect multiple invalid payment requests from different websites.

"This allows unlimited guesses on each card data field, using up to the allowed number of attempts -- typically 10 or 20 guesses -- on each website.

"Secondly, different websites ask for different variations in the card data fields to validate an online purchase. This means it's quite easy to build up the information and piece it together like a jigsaw.

"The unlimited guesses, when combined with the variations in the payment data fields make it frighteningly easy for attackers to generate all the card details one field at a time.

"Each generated card field can be used in succession to generate the next field and so on. If the hits are spread across enough websites then a positive response to each question can be received within two seconds -- just like any online payment.

"So even starting with no details at all other than the first six digits -- which tell you the bank and card type and so are the same for every card from a single provider -- a hacker can obtain the three essential pieces of information to make an online purchase within as little as six seconds."

That's card number, expiration date, and CVV code.
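The arithmetic that makes the distributed version work is worth spelling out. Assuming the article's conservative figure of 10 attempts per site, the sketch below shows how few independent payment sites are needed to exhaust each field (the "~5 plausible years" for expiry dates is an illustrative assumption):

```python
# Back-of-the-envelope sketch of the distributed guessing attack's
# arithmetic, using the article's figure of ~10 attempts per site.
import math

def sites_needed(search_space: int, attempts_per_site: int) -> int:
    """Sites required to try every candidate value while staying under
    each individual site's attempt limit."""
    return math.ceil(search_space / attempts_per_site)

ATTEMPTS_PER_SITE = 10
expiry_space = 12 * 5        # 12 months x ~5 plausible years (assumed) = 60 candidates
cvv_space = 10 ** 3          # three-digit CVV = 1,000 candidates

print(sites_needed(expiry_space, ATTEMPTS_PER_SITE))   # sites to cover every expiry date
print(sites_needed(cvv_space, ATTEMPTS_PER_SITE))      # sites to cover every CVV
```

Because each confirmed field narrows the next search, an attacker working through the fields in sequence stays under every individual site's limit at every step, which is why spreading the guesses defeats per-site rate limiting.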

From the paper:

Abstract: This article provides an extensive study of the current practice of online payment using credit and debit cards, and the intrinsic security challenges caused by the differences in how payment sites operate. We investigated the Alexa top-400 online merchants' payment sites, and realised that the current landscape facilitates a distributed guessing attack. This attack subverts the payment functionality from its intended purpose of validating card details, into helping the attackers to generate all security data fields required to make online transactions. We will show that this attack would not be practical if all payment sites performed the same security checks. As part of our responsible disclosure measure, we notified a selection of payment sites about our findings, and we report on their responses. We will discuss potential solutions to the problem and the practical difficulty to implement these, given the varying technical and business concerns of the involved parties.

BoingBoing post:

The researchers believe this method has already been used in the wild, as part of a spectacular hack against Tesco bank last month.

MasterCard is immune to this hack because they detect the guesses, even though they're distributed across multiple websites. Visa is not.

Planet DebianReproducible builds folks: Reproducible Builds: week 84 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday November 27 and Saturday December 3 2016:

Reproducible work in other projects

Media coverage, etc.

  • There was a Reproducible Builds hackathon in Boston with contributions from Dafydd, Valerie, Clint, Harlen, Anders, Robbie and Ben. (See the "Bugs filed" section below for the results).

  • Distrowatch mentioned Webconverger's reproducible status.

Bugs filed

Chris Lamb:

Clint Adams:

Dafydd Harries:

Daniel Shahaf:

Reiner Herrmann:

Valerie R Young:

Reviews of unreproducible packages

15 package reviews have been added, 4 have been updated and 26 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (5)
  • Lucas Nussbaum (8)
  • Santiago Vila (1)

diffoscope development

It is available now in Debian, Arch Linux and on PyPI.

strip-nondeterminism development

  • At the Reproducible Builds Boston hackathon Anders Kaseorg filed #846895 treat .par files as Zip archives, including a patch which was merged into master.

reprotest development

  • Holger made a couple of changes:

    • Group all "done" and all "open" usertagged bugs together in the bugs graphs and move the "done bugs" to the bottom of these graphs.
    • Update list of packages installed on machines.
    • Made the maintenance jobs run every 2h instead of 3h.
    • Various bug fixes and minor improvements.
  • After thorough review by Mattia, some patches by Valerie were merged in preparation of the switch from sqlite to Postgresql, most notably a conversion to the sqlalchemy expression language.

  • Holger gave a talk at Profitbricks about how Debian is using 168 cores, 503 GB RAM and 5 TB storage to make and run. Many thanks to Profitbricks for supporting since August 2012!

  • Holger created a Jenkins job to build reprotest from git master branch.

  • Finally, the Jenkins Naginator plugin was installed to retry git cloning in case of Alioth/network failures, this will benefit all jobs using Git on


This week's edition was written by Chris Lamb, Valerie Young, Vagrant Cascadian, Holger Levsen and reviewed by a bunch of Reproducible Builds folks on IRC.

Worse Than FailureThe Infrastructure

George had just escaped from his job, a WTF-laden hellhole where asking for a test database to reproduce an issue resulted in the boss spending hours and hours hand-typing and debugging a fresh SQL script based on an old half-remembered schema.

Initech promised to be a fantastic improvement. “We do things right around here,” his new boss, Harvey, told him after hiring him. “We do clean coding. Our development systems and libraries are fabulous! And each of our programmers get a private office with its own window!” Yay, no more cubicle!

George climbed over the required HR training videos, slideshows, and a Himalayan mountain's worth of paperwork, then headed to his new desk where a budget small-form-factor PC with a single 17" LCD monitor waited for him. "Yeah, that's real fabulous," he thought to himself. But perhaps it was just a placeholder system, quickly dug up from elsewhere in the building, to get him started while they ordered a proper developer-grade PC.

It booted up and George realized he didn’t know his account credentials, so he wandered on down to the IT office to get set up.

The IT Office, from the outside, appeared to be a large corner office, but inside it was cold and dark and of uncertain size, with the windows covered by blackout curtains and hidden oscillating fans blasting a chilly breeze through the office. All the room's light came from a single small table lamp situated at the sole occupant's desk. Piles of unsorted equipment and cabling were scattered across the floor, barely visible in the poor lighting, a veritable obstacle course for any visitors with poor eyesight or agility. In contrast to the rest of the room, the desk itself was absolutely spotless, neatly holding the lamp, a keyboard and mouse, and one nice 40" display–nothing else. Not even a telephone. An empty office chair sat in front.

As he entered the blackness, George could hear a rapid-fire chorus of clicking sounds–the kind that usually meant your hard drive had kicked the bucket–clustered around one floor-based pile of machinery which was only visible due to the incessant flashing of LEDs it possessed–LEDs which blinked in perfect time to the various patterns of clicks emanating from the area.

George coughed to make his presence known, and the IT guy appeared near the desk with such silence that George wondered if he had materialized like a ghost. Due to the placement of their sole source of light, George could not even make out his features.

“Hi, I’m George,” he introduced himself, extending a hand the man did not accept. “I just started here and I need an account.”

The IT worker grunted slightly. He proceeded straight to one formless pile of equipment and sat down cross-legged next to it, right on the floor, moving so silently that George almost wondered if he had only imagined it. He made some barely-discernible motions in the dark, and a moment later a previously-hidden LCD monitor flickered to life, bringing new light into the room which allowed George to perceive that the monitor was precariously situated upon a pile of about five budget PC towers from at least a decade ago, heaped up like a deranged game of Jenga, the entire fixture surrounded by dark, snakelike cables of various types which meandered to and fro throughout the room.

The screen came to life, and George watched in fascination as it presented a Windows 2000 Server login screen. He then realized in repugnance that this pile of equipment was the very same one radiating the sickly click of dying hard disks. Surely this was not the company's domain controller!

The IT guy entered his login information, and upon his hitting "Enter", the violent ticking of hard disks became frantic. The screen froze for a long moment–at the time it seemed like minutes–before switching to the classic desktop environment of an operating system serving well beyond its prime. The shadowy IT guy fished a mouse from somewhere in the darkness, almost as if by magic, and balanced it upon his scrawny left knee so he could use it. He clicked on the "Start" button.

A dangerous and terrifying arrangement of PCs and cables that is a giant mess with one PC perilously dangling from the ceiling

Nothing visibly happened at first, but the frenzied jackhammer of "Clicks of Death" told George everything he needed to know about the company's IT infrastructure. Many long moments later, the Start Menu itself appeared, drawing itself line-by-line into the desktop space at a painfully slow rate. It completed rendering and the brutal ticking sound diminished somewhat.

The IT guy attempted to open up the user manager, but as he clicked on its entry, the screen abruptly flashed to the bluest shade of blue, overlaid by a page of cryptic white text. George’s jaw dropped in horror but the other man remained perfectly silent. He reached a hand into the darkness and expertly stabbed the server’s reset switch with such ease that George surmised this to be a common occurrence, common enough for him to have developed impeccable muscle memory for the task.

The system reset, flashed through its power-on self test, presented a RAID option ROM full of warnings about disk failure (which the IT guy completely ignored), and sat at the Windows 2000 Server loading screen for an eternity as its hard disks struggled from within to escape by the combined force of a dozen demon-possessed jackhammers.

“Um, maybe I’ll just come back later?” George offered.

“NO!” The man nearly shouted his first utterance since George met him, his retort sudden and forceful enough for George to flinch.

And so George waited quietly, half-frozen in place by fear, watching the ghostly form of the company’s IT guy as he struggled through two more Blue Screens of Death on the company’s primary server before successfully opening the user manager and creating a domain account for George. Then he flourished, somehow producing a completed sticky note as if from thin air, and presented it to George. “It is done. Now go.”

George took the sticky note and left the IT office as quickly and quietly as he could, carefully stepping around nigh-invisible pyramids of computer equipment and treacherous bundles of cable, eager to put the experience behind him and never step foot into the IT office again.

As he entered the light outside the office, he peeked at the note which was scrawled with his username and password. But as he returned to his own desk to try the new login credentials, he could not expunge from himself an intense sense of doom, a growing despair that he had left one hellhole and entered a new dimension of pain and WTF-ery…

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet Linux AustraliaColin Charles: Tab Sweep – MySQL ecosystem edition

Some tab housekeeping, but I also realise that people seem to have missed announcements, developments, etc. that have happened in the last couple of months (and boy have they been exciting). I think we definitely need something like the now-defunct MySQL Newsletter (and no, DB Weekly or NoSQL Weekly just don't seem to cut it for me!).


During @scale (August 31), Yoshinori Matsunobu mentioned that MyRocks has been deployed in one region for 5% of its production workload at Facebook.

By October 4 at the Percona Live Amsterdam 2016 event, Percona CEO Peter Zaitsev said that MyRocks is coming to Percona Server (blog). On October 6, it was also announced that MyRocks is coming to MariaDB Server 10.2 (note I created MDEV-9658 back in February 2016, and that’s a great place to follow Sergei Petrunia’s progress!).

Rick Pizzi talks about MyRocks: migrating a large MySQL dataset from InnoDB to RocksDB to reduce footprint. His blog also has other thoughts on MyRocks and InnoDB.

Of course, check out the new site for all things MyRocks! It has a getting-started guide amongst other things.

Proxies: MariaDB MaxScale, ProxySQL

With MariaDB MaxScale 2.0 being relicensed under the Business Source License (from GPLv2), almost immediately there was a GPLScale fork; however I think the more interesting/sustainable fork comes in the form of AirBnB MaxScale (GPLv2 licensed). You can read more about it at their introductory post, Unlocking Horizontal Scalability in Our Web Serving Tier.

ProxySQL has a new webpage, a pretty active mailing list, and it's the GPLv2 solution by DBAs for DBAs.


Vitess 2.0 has been out for a bit, and a good guide is the talk at Percona Live Amsterdam 2016, Launching Vitess: How to run YouTube’s MySQL sharding engine. It is still insanely easy to get going (if you have a credit card), at their site.

Planet Linux AustraliaColin Charles: Speaking in December 2016

I neglected to mention my November appearances but I’ll just write trip reports for all this. December appearances are:

  • ACMUG MySQL Special Event – Beijing, China – 10 December 2016 – come learn about Percona Server, MyRocks and lots more!
  • A bit of a Japan tour, we will be in Osaka on the 17th, Sapporo on the 19th, and Tokyo on the 21st. A bit of talk of the various proxies as well as the various servers that exist in the MySQL ecosystem.

Looking forward to discussing MySQL and its ecosystem this December!

Planet Linux AustraliaMaxim Zakharov: PDF Forms editor for Ubuntu Linux

I was unable to edit a PDF form using Evince on Ubuntu: for some reason, some fields became empty once I moved the cursor out of them after entering data, while other fields worked fine.

Fortunately, I found Master PDF Editor 3 from Code Industry, which did the job perfectly. It has a free version for non-commercial use. In addition to Evince's functionality, it supports interactive instructions embedded into the PDF document that help with filling in the form.

I have tested it on Ubuntu 16.04 and 16.10.

Planet DebianMarkus Koschany: My Free Software Activities in November 2016

Here is my monthly report that covers what I have been doing for Debian. If you're interested in Java, Games and LTS topics, this might be interesting for you.

Debian Android

  • Chris Lamb was so kind as to send in a patch for apktool to make the build reproducible (#845475). Although this was not enough to fix the issue, it set me on the right path to eventually resolving bug #845475.

Debian Games

  • I packaged a couple of new upstream releases for extremetuxracer, fifechan, fife, unknown-horizons, freeciv, atanks and armagetronad. Most notably fifechan was accepted by the FTP team which allowed me to package new versions of fife and unknown-horizons which are both back in testing again. I expect that upstream will make their final release sometime in December. Atanks was orphaned a while ago, and since upstream is still active and I kinda like the game I decided to adopt it. I also uploaded a backport of Freeciv 2.5.6 to jessie-backports.
  • In November we received a bunch of RC bug reports again because, hey, it is almost time for the Freeze, let’s break some packages. Thus I spent some time fixing freeorion (#843132), pokerth (#843078), simutrans (#828545), freeciv (#844198) and warzone2100 (#844870).
  • I also updated the debian-games blend, we are at version 1.6 now, and made some smaller adjustments. The most important change was adding a new binary package, games-all, that installs..well, all! I know this will make at least one person on this planet happy. Actually I was kind of forced into adding it because blends-dev automatically creates it as a requirement for choosing blends with the Debian Installer. But don’t be afraid games-all only recommends games-finest, the rest is suggested.
  • Last but not least I worked on performous and could close a wishlist bug report (#425898). The submitter asked to suggest some free song packages for this karaoke game.

Debian Java

  • I sponsored uncommons-watchmaker for Kai-Chung and also reviewed libnative-platform-java and granted upload rights to him.
  • I packaged new upstream releases of lombok-patcher, electric, undertow, sweethome3d and sweethome3d-furniture-editor.
  • I spent quite some time on reviewing (especially the copyright review took most of the time) and improving the packaging for tycho (#816604) which is a precondition for packaging the latest upstream release of Eclipse, a popular Java IDE. Luca Vercelli has been working on it for the last couple of months and he did most of the initial packaging. Unfortunately I was only able to upload the package last week which means that the chances for updating Eclipse for Stretch are slim.
  • Due to time constraints I could not finish the Netbeans update in time which I had started back in October. This is on my priority list for December now.
  • Several security issues were reported against Tomcat{6,7,8}. I helped with reviewing some of the patches that Emmanuel prepared for Jessie and worked on fixing the same bugs in Wheezy.

Debian LTS

This was my ninth month as a paid contributor and I have been paid to work 11 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 14 November until 21 November I was in charge of our LTS frontdesk. I triaged bugs in teeworlds, libdbd-mysql-perl, bash, libxml2, tiff, firefox-esr, drupal7, moin, libgc, w3m and sniffit.
  • DLA-715-1. Issued a security update for drupal7 fixing 2 CVE.
  • DLA-717-1. Issued a security update for moin fixing 2 CVE.
  • DLA-728-1. Issued a security update for tomcat6 fixing 8 CVE. (Debian bug #845385 was assigned a CVE later).
  • DLA-729-1. Issued a security update for tomcat7 fixing 8 CVE. (Debian bug #845385 was assigned a CVE later).
  • Especially the patches and the subsequent testing for CVE-2016-0762 and CVE-2016-6816 required most of the time.

Non-maintainer uploads

  • I uploaded an NMU for angband to fix #837394. The patch was kindly prepared by Adrian Bunk.

It is already this time of the year again. See you next year for another report. 🙂

Planet DebianBen Hutchings: Linux Kernel Summit 2016, part 2

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions. This is the second and last part of my notes; part 1 is here.

Updated: I corrected the description of which Intel processors support SMEP.

Kernel Hardening

Kees Cook presented the ongoing work on upstream kernel hardening, also known as the Kernel Self-Protection Project or KSPP.

GCC plugins

The kernel build system can now build and use GCC plugins to implement some protections. This requires gcc 4.5 and the plugin headers installed. It has been tested on x86, arm, and arm64. It is disabled by CONFIG_COMPILE_TEST because CI systems using allmodconfig/allyesconfig probably don't have those installed, but this ought to be changed at some point.

There was a question as to how plugin headers should be installed for cross-compilers or custom compilers, but I didn't hear a clear answer to this. Kees has been prodding distribution gcc maintainers to package them. Mark Brown mentioned the Linaro toolchain being widely used; Kees has not talked to its maintainers yet.

Probabilistic protections

These protections are based on hidden state that an attacker will need to discover in order to make an effective attack; they reduce the probability of success but don't prevent it entirely.

Kernel address space layout randomisation (KASLR) has now been implemented on x86, arm64, and mips for the kernel image. (Debian enables this.) However there are still lots of information leaks that defeat this. This could theoretically be improved by relocating different sections or smaller parts of the kernel independently, but this requires re-linking at boot. Aside from software information leaks, the branch target predictor on (common implementations of) x86 provides a side channel to find addresses of branches in the kernel.

Page and heap allocation, etc., is still quite predictable.

struct randomisation (RANDSTRUCT plugin from grsecurity) reorders members in (a) structures containing only function pointers (b) explicitly marked structures. This makes it very hard to attack custom kernels where the kernel image is not readable. But even for distribution kernels, it increases the maintenance burden for attackers.

Deterministic protections

These protections block a class of attacks completely.

Read-only protection of kernel memory is either mandatory or enabled by default on x86, arm, and arm64. (Debian enables this.)

Protections against execution of user memory in kernel mode are now implemented in hardware on x86 (SMEP, in Intel processors from Broadwell onward) and on arm64 (PXN, from ARMv8.1). But Broadwell is not available in high-end server variants and ARMv8.1 is not yet implemented at all! s390 always had this protection.

It may be possible to 'emulate' this using other hardware protections. arm (v7) and arm64 now have this, but x86 doesn't. Linus doesn't like the overhead of previously proposed implementations for x86. It is possible to do this using PCID (in Intel processors from Sandy Bridge onward), which has already been done in PaX - and this should be fast enough.

Virtually mapped stacks protect against stack overflow attacks. They were implemented as an option for x86 only in 4.9. (Debian enables this.)

Copies to or from user memory sometimes use a user-controlled size that is not properly bounded. Hardened usercopy, implemented as an option in 4.8 for many architectures, protects against this. (Debian enables this.)

Memory wiping (zero on free) protects against some information leaks and use-after-free bugs. It was already implemented as debug feature with non-zero poison value, but at some performance cost. Zeroing can be cheaper since it allows allocator to skip zeroing on reallocation. That was implemented as an option in 4.6. (Debian does not currently enable this but we might do if the performance cost is low enough.)

Constification (with the CONSTIFY gcc plugin) reduces the amount of static data that can be written to. As with RANDSTRUCT, this is applied to function pointer tables and explicitly marked structures. Instances of some types need to be modified very occasionally. In PaX/Grsecurity this is done with pax_{open,close}_kernel() which globally disable write protection temporarily. It would be preferable to override write protection in a more directed way, so that the permission to write doesn't leak into any other code that interrupts this process. The feature is not in mainline yet.

Atomic wrap detection protects against reference-counting bugs which can result in a use-after-free. Overflow and underflow are trapped and result in an 'oops'. There is no measurable performance impact. It would be applied to all operations on the atomic_t type, but there needs to be an opt-out for atomics that are not ref-counters - probably by adding an atomic_wrap_t type for them. This has been implemented for x86, arm, and arm64 but is not in mainline yet.

Kernel Freezer Hell

For the second year running, Jiri Kosina raised the problem of 'freezing' kthreads (kernel-mode threads) in preparation for system suspend (suspend to RAM, or hibernation). What are the semantics? What invariants should be met when a kthread gets frozen? They are not defined anywhere.

Most freezable threads don't actually need to be quiesced. Also many non-freezable threads are pointlessly calling try_to_freeze() (probably due to copying code without understanding it).

At a system level, what we actually need is I/O and filesystem consistency. This should be achieved by:

  • Telling mounted filesystems to freeze. They can quiesce any kthreads they created.
  • Device drivers quiescing any kthreads they created, from their PM suspend implementation.

The system suspend code should not need to directly freeze threads.

Kernel Documentation

Jon Corbet and Mauro Carvalho presented the recent work on kernel documentation.

The kernel's documentation system was a house of cards involving DocBook and a lot of custom scripting. Both the DocBook templates and plain text files are gradually being converted to reStructuredText format, processed by Sphinx. However, manual page generation is currently 'broken' for documents processed by Sphinx.

There are about 150 files at the top level of the documentation tree, that are being gradually moved into subdirectories. The most popular files, that are likely to be referenced in external documentation, have been replaced by placeholders.

Sphinx is highly extensible and this has been used to integrate kernel-doc. It would be possible to add extensions that parse and include the MAINTAINERS file and Documentation/ABI/ files, which have their own formats, but the documentation maintainers would prefer not to add extensions that can't be pushed to Sphinx upstream.

There is lots of obsolete documentation, and patches to remove those would be welcome.

Linus objected to PDF files recently added under the Documentation/media directory - they are not the source format so should not be there! They should be generated from the corresponding SVG or image files at build time.

Issues around Tracepoints

Steve Rostedt and Shuah Khan led a discussion about tracepoints. Currently each maintainer decides which tracepoints to create. The cost of each added tracepoint is minimal, but the cost of very many tracepoints is more substantial. So there is such a thing as too many tracepoints, and we need a policy to decide when they are justified. They advised not to create tracepoints just in case, since kprobes can be used for tracing (almost) anywhere dynamically.

There was some support for requiring documentation of each new tracepoint. That may dissuade introduction of obscure tracepoints, but also creates a higher expectation of stability.

Tools such as bcc and IOVisor are now being created that depend on specific tracepoints or even function names (through kprobes). Should we care about breaking them?

Linus said that we should strive to be polite to developers and users relying on tracepoints, but if it's too painful to maintain a tracepoint then we should go ahead and change it. Where the end users of the tool are themselves developers it's more reasonable to expect them to upgrade the tool and we should care less about changing it. In some cases tracepoints could provide dummy data for compatibility (as is done in some places in procfs).


Planet DebianNiels Thykier: Piuparts integration in britney

As of today, britney now fetches reports from piuparts and uses them as part of her evaluation for package migration.  As with her RC bug check, we are only preventing (known) regressions from migrating.

The messages (subject to change) look something like:

  • Piuparts tested OK
  • Rejected due to piuparts regression
  • Ignoring piuparts failure (Not a regression)
  • Cannot be tested by piuparts (not a blocker)

If you want to do machine parsing of the Britney excuses, we also provide an excuses.yaml. In there, you are looking for “excuses[X].policy_info.piuparts.test-results”, which will be one of:

  • pass
  • regression
  • failed
  • cannot-be-tested
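A minimal parsing sketch, assuming the YAML has already been loaded (e.g. with PyYAML) and that you pass in the list of excuse entries; the `item-name` key is an assumption for illustration, only the `policy_info.piuparts.test-results` path comes from the description above:

```python
def piuparts_verdicts(excuses):
    """Map each excuse entry to its piuparts test result.

    `excuses` is the list of excuse dicts from excuses.yaml, already
    parsed into Python structures. Each verdict is one of: pass,
    regression, failed, cannot-be-tested.
    """
    verdicts = {}
    for excuse in excuses:
        piuparts = excuse.get("policy_info", {}).get("piuparts", {})
        result = piuparts.get("test-results")
        if result is not None:
            # "item-name" is a guessed key for the package name
            verdicts[excuse.get("item-name", "?")] = result
    return verdicts


def blocks_migration(result):
    """Only (known) regressions block migration, per the post above."""
    return result == "regression"
```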



Filed under: Debian, Release-Team

Planet DebianJo Shields: A quick introduction to Flatpak

Releasing ISV applications on Linux is often hard. The ABI of all the libraries you need changes seemingly weekly. Hence you have the option of bundling the world, or building a thousand releases to cover a thousand distribution versions. As a case in point, when MonoDevelop started bundling a C Git library instead of using a C# git implementation, it gained dependencies on all sorts of fairly weak ABI libraries whose exact ABI mix was not consistent across any given pair of distro releases. This broke our policy of releasing “works on anything” .deb and .rpm packages. As a result, I pretty much gave up on packaging MonoDevelop upstream with version 5.10.

Around the 6.1 release window, I decided to re-evaluate the question. I took a closer look at some of the fancy-pants new distribution methods that get a lot of coverage in the Linux press: Snap, AppImage, and Flatpak.

I started with AppImage. It’s very good and appealing for its specialist areas (no external requirements for end users), but it’s kinda useless at solving some of our big areas (the ABI-vs-bundling problem, updating in general).

Next, I looked at Flatpak (once xdg-app). I liked the concept a whole lot. There’s a simple 3-tier dependency hierarchy: Applications, Runtimes, and Extensions. An application depends on exactly one runtime.  Runtimes are root-level images with no dependencies of their own. Extensions are optional add-ons for applications. Anything not provided in your target runtime, you bundle. And an integrated updates mechanism allows for multiple branches and multiple releases parallel-installed (e.g. alpha & stable, easily switched).

There’s also security-related sandboxing features, but my main concerns on a first examination were with the dependency and distribution questions. That said, some users might be happier running Microsoft software on their Linux desktop if that software is locked up inside a sandbox, so I’ve decided to embrace that functionality rather than seek to avoid it.

I basically stopped looking at this point (sorry Snap!). Flatpak provided me with all the functionality I wanted, with an extremely helpful and responsive upstream. I got to work on trying to package up MonoDevelop.

Flatpak (optionally!) uses a JSON manifest for building stuff. Because Mono is still largely stuck in a Gtk+2 world, I opted for the simplest runtime, org.freedesktop.Runtime, and bundled stuff like Gtk+ into the application itself.
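For illustration, a stripped-down manifest along these lines (the app-id, command, and module values below are placeholders, not the actual MonoDevelop manifest):

```json
{
    "app-id": "com.example.MonoDevelopLike",
    "runtime": "org.freedesktop.Runtime",
    "runtime-version": "1.4",
    "sdk": "org.freedesktop.Sdk",
    "command": "monodevelop",
    "modules": [
        {
            "name": "gtk2",
            "sources": [
                {
                    "type": "archive",
                    "url": "https://example.org/gtk2.tar.xz",
                    "sha256": "…"
                }
            ]
        }
    ]
}
```

Each entry under "modules" is built in order and installed into the application, which is how bundled libraries like Gtk+ end up inside the app rather than the runtime.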

Some gentle patching here & there resulted in this repository. Every time I came up with an exciting new edge case, upstream would suggest a workaround within hours – or failing that, added new features to Flatpak just to support my needs (e.g. allowing /dev/kvm to optionally pass through the sandbox).

The end result is, as of the upcoming 0.8.0 release of Flatpak, from a clean install of the flatpak package to having a working MonoDevelop is a single command: flatpak install --user --from 

For the current 0.6.x versions of Flatpak, the user also needs to flatpak remote-add --user --from gnome first – this step will be automated in 0.8.0. This will download org.freedesktop.Runtime, then com.xamarin.MonoDevelop; export icons ‘n’ stuff into your user environment so you can just click to start.

There’s some lingering experience issues due to the sandbox which are on my radar. “Run on external console” doesn’t work, for example, or “open containing folder”. There are people working on that (a missing DBus# feature to allow breaking out of the sandbox). But overall, I’m pretty happy. I won’t be entirely satisfied until I have something approximating feature equivalence to the old .debs.  I don’t think that will ever quite be there, since there’s just no rational way to allow arbitrary /usr stuff into the sandbox, but it should provide a decent basis for a QA-able, supportable Linux MonoDevelop. And we can use this work as a starting point for any further fancy features on Linux.

Gtk# app development in Flatpak MonoDevelop

Editing MonoDevelop in MonoDevelop. *Inception noise*


Planet DebianBen Hutchings: Linux Kernel Summit 2016, part 1

I attended this year's Linux Kernel Summit in Santa Fe, NM, USA and made notes on some of the sessions that were relevant to Debian. LWN also reported many of the discussions. This is the first of two parts of my notes; part 2 is here.

Stable process

Jiri Kosina, in his role as a distribution maintainer, sees too many unsuitable patches being backported - e.g. a fix for a bug that wasn't present or a change that depends on an earlier semantic change so that when cherry-picked it still compiles but isn't quite right. He thinks the current review process is insufficient to catch them.

As an example, a recent fix for a minor information leak (CVE-2016-9178) depended on an earlier change to page fault handling. When backported by itself, it introduced a much more serious security flaw (CVE-2016-9644). This could have been caught very quickly by a system call fuzzer.

Possible solutions: require 'Fixes' field, not just 'Cc: stable'. Deals with 'bug wasn't present', but not semantic changes.

There was some disagreement whether 'Fixes' without 'Cc: stable' should be sufficient for inclusion in stable. Ted Ts'o said he specifically does that in some cases where he thinks backporting is risky. Greg Kroah-Hartman said he takes it as a weaker hint for inclusion in stable.
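Both trailers are plain lines at the end of the commit message, so the hinting policy above is easy to mechanize. A sketch (a hypothetical helper, not an actual stable-tree tool):

```python
import re


def stable_hint(commit_message):
    """Classify a commit's stable-inclusion hints from its trailers.

    Returns one of:
      'explicit' -- has a 'Cc: <stable@...>' trailer (direct request)
      'weak'     -- has only a 'Fixes:' trailer (per the discussion, a
                    weaker hint; some maintainers use it deliberately
                    when backporting looks risky)
      'none'     -- neither trailer present
    """
    has_cc_stable = re.search(r"^Cc:.*stable@", commit_message,
                              re.MULTILINE | re.IGNORECASE) is not None
    has_fixes = re.search(r"^Fixes:\s*[0-9a-f]{8,}", commit_message,
                          re.MULTILINE | re.IGNORECASE) is not None
    if has_cc_stable:
        return "explicit"
    if has_fixes:
        return "weak"
    return "none"
```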

Is it a good idea to keep 'Cc: stable' given the risk of breaking embargo? On balance, yes, it only happened once.

Sometimes it's hard to know exactly how/when the bug was introduced. Linus doesn't want people to guess and add incorrect 'Fixes' fields. There is still the option to give some explanation and hints for stable maintainers in the commit message. Ideally the upstream developer should provide a test case for the bug.

Is Linus happy?

Linus complained about minor fixes coming later in the release cycle. After rc2, all fixes should either be for new code introduced in the current release cycle or for important bugs. However, new, production-ready drivers without new infrastructure dependencies are welcome at almost any point in the release cycle.

He was unhappy about some big changes in RDMA, but I'm not sure what those were.

Bugzilla and bug tracking

Laura Abbott started a discussion of the kernel's Bugzilla, talking about subsystems where maintainers ignore it and any responses come from random people giving bad advice. This is a terrible experience for users. Several maintainers are actively opposed to using it, and the email bridge no longer works (or not well?). She no longer recommends that Fedora bug submitters file reports there.

Are there any alternatives? None were proposed.

Someone asked whether Bugzilla could tell reporters to use email for certain products/components instead of continuing with the bug entry process.

Konstantin Ryabitsev talked about the difficulty of upgrading a customised instance of Bugzilla. Much customisation requires patches which don't apply to next version (maybe due to limitations of the extension mechanism?). He has had to drop many such patches.

Email is hard to track when a bug is handed over from one maintainer to another. Email archives are very unreliable. Linus: I'll take Bugzilla over mail-archive.

No-one is currently keeping track of bugs across the kernel and making sure they get addressed by an appropriate maintainer. It's (at least) a full-time job but no individual company has a business case for paying for this. Konstantin suggested (I think) that CII might pay for this.

There was some discussion of what information should be included in a bug report. The 'Cut here' line in oops messages was said to be a mistake because there are often relevant messages before it. The model of computer is often important. Beyond that, there was not much interest in the automated information gathering that distributions do. Distribution maintainers should curate bugs before forwarding upstream.

There was a request for custom fields per component in Bugzilla. Konstantin says this is doable (possibly after upgrade to version 5); it doesn't require patches.

The future of the Kernel Summit

The kernel community is growing, and the invitation list for the core day is too small to include all the right people for technical subjects. For 2017, the core half-day will have an even smaller invitation list, only ~30 subsystem maintainers that Linus pulls from. The entire technical track will be open (I think).

Kernel Summit 2017 and some mini-summits will be held in Prague alongside Open Source Summit Europe (formerly LinuxCon Europe) and Embedded Linux Conference Europe. There were some complaints that LinuxCon is not that interesting to kernel developers, compared to Linux Plumbers Conference (which followed this year's Kernel Summit). However, the Linux Foundation is apparently soliciting more hardcore technical sessions.

Kernel Summit and Linux Plumbers Conference are quite small, and it's apparently hard to find venues for them in cities that also have major airports. It might be more practical to co-locate them both with Open Source Summit in future.

time_t and 2038

On 32-bit architectures the kernel's representation of real time (time_t etc.) will break in early 2038. Fixing this in a backward-compatible way is a complex problem.

Arnd Bergmann presented the current status of this process. There has not yet been much progress in mainline, but more fixes have been prepared. The changes to struct inode and to input events are proving to be particularly problematic. There is a need to add new system calls, and he intends to add these for all (32-bit) architectures at once.
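The break point is easy to compute: a signed 32-bit time_t counts seconds since the Unix epoch and tops out at 2³¹ − 1, after which it wraps negative. A quick check in Python:

```python
from datetime import datetime, timezone

# A signed 32-bit time_t holds seconds since 1970-01-01 UTC, so the
# last representable instant is 2**31 - 1 seconds after the epoch.
last_valid = 2**31 - 1
moment = datetime.fromtimestamp(last_valid, tz=timezone.utc)
print(moment.isoformat())  # 2038-01-19T03:14:07+00:00
```

One second later, an unfixed 32-bit kernel wraps to a date in December 1901.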

Copyright retention

James Bottomley talked about how developers can retain copyright on their contributions. It's hard to renegotiate within an existing employment; much easier to do this when preparing to sign a new contract.

Some employers expect you to fill in a document disclosing 'prior inventions' you have worked on. Depending on how it's worded, this may require the employer to negotiate with you again whenever they want you to work on that same software.

It's much easier for contractors to retain copyright on their work - customers expect to have a custom agreement and don't expect to get copyright on contractor's software.

Planet DebianVincent Bernat: Build-time dependency patching for Android

This post shows how to patch an external dependency for an Android project at build-time with Gradle. This leverages the Transform API and Javassist, a Java bytecode manipulation tool.

buildscript {
    dependencies {
        classpath ''
        classpath ''
        classpath 'org.javassist:javassist:3.21.+'
        classpath 'commons-io:commons-io:2.4'
    }
}

Disclaimer: I am not a seasoned Android programmer, so take this with a grain of salt.


This section adds some context to the example. Feel free to skip it.

Dashkiosk is an application to manage dashboards on many displays. It provides an Android application you can install on one of those cheap Android sticks. Under the hood, the application is an embedded webview backed by the Crosswalk Project web runtime which brings an up-to-date web engine, even for older versions of Android1.

Recently, a security vulnerability has been spotted in how invalid certificates were handled. When a certificate cannot be verified, the webview defers the decision to the host application by calling the onReceivedSslError() method:

Notify the host application that an SSL error occurred while loading a resource. The host application must call either callback.onReceiveValue(true) or callback.onReceiveValue(false). Note that the decision may be retained for use in response to future SSL errors. The default behavior is to pop up a dialog.

The default behavior is specific to Crosswalk webview: the Android builtin one just cancels the load. Unfortunately, the fix applied by Crosswalk is different and, as a side effect, the onReceivedSslError() method is not invoked anymore2.

Dashkiosk comes with an option to ignore TLS errors3. The mentioned security fix breaks this feature. The following example will demonstrate how to patch Crosswalk to recover the previous behavior4.

Simple method replacement§

Let’s replace the shouldDenyRequest() method from the org.xwalk.core.internal.SslUtil class with this version:

// In SslUtil class
public static boolean shouldDenyRequest(int error) {
    return false;
}

Transform registration§

Gradle Transform API enables the manipulation of compiled class files before they are converted to DEX files. To declare a transform and register it, include the following code in your build.gradle:

import com.android.build.api.transform.*
import org.gradle.api.logging.Logger

class PatchXWalkTransform extends Transform {
    Logger logger = null;

    public PatchXWalkTransform(Logger logger) {
        this.logger = logger
    }

    String getName() {
        return "PatchXWalk"
    }

    Set<QualifiedContent.ContentType> getInputTypes() {
        return Collections.singleton(QualifiedContent.DefaultContentType.CLASSES)
    }

    Set<QualifiedContent.Scope> getScopes() {
        return Collections.singleton(QualifiedContent.Scope.EXTERNAL_LIBRARIES)
    }

    boolean isIncremental() {
        return true
    }

    void transform(Context context,
                   Collection<TransformInput> inputs,
                   Collection<TransformInput> referencedInputs,
                   TransformOutputProvider outputProvider,
                   boolean isIncremental) throws IOException, TransformException, InterruptedException {
        // We should do something here
    }
}

// Register the transform
android.registerTransform(new PatchXWalkTransform(logger))

The getInputTypes() method should return the set of types of data consumed by the transform. In our case, we want to transform classes. Another possibility is to transform resources.

The getScopes() method should return a set of scopes for the transform. In our case, we are only interested by the external libraries. It’s also possible to transform our own classes.

The isIncremental() method returns true because we support incremental builds.

The transform() method is expected to take all the provided inputs and copy them (with or without modifications) to the location supplied by the output provider. We haven't implemented it yet: as written, the transform outputs nothing, which removes all external dependencies from the application.

Noop transform§

To keep all external dependencies unmodified, we must copy them:

void transform(Context context,
               Collection<TransformInput> inputs,
               Collection<TransformInput> referencedInputs,
               TransformOutputProvider outputProvider,
               boolean isIncremental) throws IOException, TransformException, InterruptedException {
    inputs.each {
        it.jarInputs.each {
            def jarName = it.name
            def src = it.getFile()
            def dest = outputProvider.getContentLocation(jarName,
                                                         it.contentTypes, it.scopes,
                                                         Format.JAR)
            def status = it.getStatus()
            if (status == Status.REMOVED) { // ❶
                logger.info("Remove ${src}")
            } else if (!isIncremental || status != Status.NOTCHANGED) { // ❷
                logger.info("Copy ${src}")
                FileUtils.copyFile(src, dest)
            }
        }
    }
}
We also need two additional imports:


Since we are handling external dependencies, we only have to manage JAR files. Therefore, we only iterate on jarInputs and not on directoryInputs. There are two cases when handling incremental build: either the file has been removed (❶) or it has been modified (❷). In all other cases, we can safely assume the file is already correctly copied.

JAR patching§

When the external dependency is the Crosswalk JAR file, we also need to modify it. Here is the first part of the code (replacing ❷):

if ("${src}" ==~ ".*/org.xwalk/xwalk_core.*/classes.jar") {
    def pool = new ClassPool()
    pool.insertClassPath("${src}")
    def ctc = pool.get('org.xwalk.core.internal.SslUtil') // ❸

    def ctm = ctc.getDeclaredMethod('shouldDenyRequest')
    ctc.removeMethod(ctm) // ❹

    ctc.addMethod(CtNewMethod.make("""
public static boolean shouldDenyRequest(int error) {
    return false;
}
""", ctc)) // ❺

    def sslUtilBytecode = ctc.toBytecode() // ❻

    // Write back the JAR file
    // …
} else {
    logger.info("Copy ${src}")
    FileUtils.copyFile(src, dest)
}

We also need the following additional imports to use Javassist:

import javassist.ClassPath
import javassist.ClassPool
import javassist.CtNewMethod

Once we have located the JAR file we want to modify, we add it to our classpath and retrieve the class we are interested in (❸). We locate the appropriate method and delete it (❹). Then, we add our custom method using the same name (❺). The whole operation is done in memory. We retrieve the bytecode of the modified class in ❻.

The remaining step is to rebuild the JAR file:

def input = new JarFile(src)
def output = new JarOutputStream(new FileOutputStream(dest))

// ❼
input.entries().each {
    if (!it.getName().equals("org/xwalk/core/internal/SslUtil.class")) {
        def s = input.getInputStream(it)
        output.putNextEntry(new JarEntry(it.getName()))
        IOUtils.copy(s, output)
    }
}

// ❽
output.putNextEntry(new JarEntry("org/xwalk/core/internal/SslUtil.class"))
output.write(sslUtilBytecode)
output.close()

We need the following additional imports:

import java.util.jar.JarEntry
import java.util.jar.JarFile
import java.util.jar.JarOutputStream
import org.apache.commons.io.IOUtils

There are two steps. In ❼, all classes are copied to the new JAR, except the SslUtil class. In ❽, the modified bytecode for SslUtil is added to the JAR.

That’s all! You can view the complete example on GitHub.

More complex method replacement§

In the above example, the new method doesn’t use any external dependency. Let’s suppose we also want to replace the sslErrorFromNetErrorCode() method from the same class with the following one:


// In SslUtil class
public static SslError sslErrorFromNetErrorCode(int error,
                                                SslCertificate cert,
                                                String url) {
    switch(error) {
        case NetError.ERR_CERT_COMMON_NAME_INVALID:
            return new SslError(SslError.SSL_IDMISMATCH, cert, url);
        case NetError.ERR_CERT_DATE_INVALID:
            return new SslError(SslError.SSL_DATE_INVALID, cert, url);
        case NetError.ERR_CERT_AUTHORITY_INVALID:
            return new SslError(SslError.SSL_UNTRUSTED, cert, url);
    }
    return new SslError(SslError.SSL_INVALID, cert, url);
}

The major difference with the previous example is that we need to import some additional classes.

Android SDK import§

The classes from the Android SDK are not part of the external dependencies. They need to be imported separately. The full path of the JAR file is:

androidJar = "${android.getSdkDirectory().getAbsolutePath()}/platforms/" +

We need to load it before adding the new method into SslUtil class:

def pool = new ClassPool()
pool.insertClassPath(androidJar) // load the Android SDK classes
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
// …

External dependency import§

We must also import NetError and therefore, we need to put the appropriate JAR in our classpath. The easiest way is to iterate through all the external dependencies and add them to the classpath.

def pool = new ClassPool()
inputs.each {
    it.jarInputs.each {
        def jarName = it.name
        def src = it.getFile()
        def status = it.getStatus()
        if (status != Status.REMOVED) {
            pool.insertClassPath("${src}")
        }
    }
}
def ctc = pool.get('org.xwalk.core.internal.SslUtil')
def ctm = ctc.getDeclaredMethod('sslErrorFromNetErrorCode')
// Then, rebuild the JAR...

Happy hacking!

  1. Before Android 4.4, the webview was severely outdated. Starting from Android 5, the webview is shipped as a separate component with updates. Embedding Crosswalk is still convenient as you know exactly which version you can rely on. 

  2. I hope to have this fixed in later versions. 

  3. This may seem harmful and you are right. However, if you have an internal CA, it is currently not possible to provide its own trust store to a webview. Moreover, the system trust store is not used either. You also may want to use TLS for authentication only with client certificates, a feature supported by Dashkiosk

  4. Crosswalk being an opensource project, an alternative would have been to patch Crosswalk source code and recompile it. However, Crosswalk embeds Chromium and recompiling the whole stuff consumes a lot of resources. 

Planet DebianRoss Gammon: My Open Source Contributions June – November 2016

So much for my monthly blogging! Here’s what I have been up to in the Open Source world over the last 6 months.


  • Uploaded a new version of the debian-multimedia blends metapackages
  • Uploaded the latest abcmidi
  • Uploaded the latest node-process-nextick-args
  • Prepared version 1.0.2 of libdrumstick for experimental, as a first step for the transition. It was sponsored by James Cowgill.
  • Prepared a new node-inline-source-map package, which was sponsored by Gianfranco Costamagna.
  • Uploaded kmetronome to experimental as part of the libdrumstick transition.
  • Prepared a new node-js-yaml package, which was sponsored by Gianfranco Costamagna.
  • Uploaded version 4.2.4 of Gramps.
  • Prepared a new version of vmpk which I am going to adopt, as part of the libdrumstick transition. I tried splitting the documentation into a separate package, but this proved difficult, and in the end I missed the transition freeze deadline for Debian Stretch.
  • Prepared a backport of Gramps 4.2.4, which was sponsored by IOhannes m zmölnig as Gramps is new for jessie-backports.
  • Began a final push to get kosmtik packaged and into the NEW queue before the impending Debian freeze for Stretch. Unfortunately, many dependencies need updating, which also depend on packages not yet in Debian. Also pushed to finish all the new packages for node-tape, which someone else has decided to take responsibility for.
  • Uploaded node-cross-spawn-async to fix a Release Critical bug.
  • Prepared a new node-chroma-js package, but this is unfortunately blocked by several out-of-date & missing dependencies.
  • Prepared a new node-husl package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-resumer package, which was sponsored by Gianfranco Costamagna.
  • Prepared a new node-object-inspect package, which was sponsored by Gianfranco Costamagna.
  • Removed node-string-decoder from the archive, as it was broken and turned out not to be needed anymore.
  • Uploaded a fix for node-inline-source-map which was failing tests. This turned out to be due to node-tap being upgraded to version 8.0.0. Jérémy Lal very quickly provided a fix in the form of a Pull Request upstream, so I was able to apply the same patch in Debian.


  • Prepared a merge of the latest blends package from Debian in order to be able to merge the multimedia-blends package later. This was sponsored by Daniel Holbach.
  • Prepared an application to become an Ubuntu Contributing Developer. Unfortunately, this was later declined. I was completely unprepared for the Developer Membership Board meeting on IRC after my holiday. I had had no time to chase for endorsements from previous sponsors, and the application was not really clear about the fact that I was not actually applying for upload permission yet. No matter, I intend to apply again later once I have more evidence & support on my application page.
  • Added my blog to Planet Ubuntu, and this will hopefully be the first post that appears there.
  • Prepared a merge of the latest debian-multimedia blends meta-package package from Debian. In Ubuntu Studio, we have the multimedia-puredata package seeded so that we get all the latest Puredata packages in one go. This was sponsored by Michael Terry.
  • Prepared a backport of Ardour as part of the Ubuntu Studio plan to do regular backports. This is still waiting for sponsorship if there is anyone reading this that can help with that.
  • Did a tweak to the Ubuntu Studio seeds and prepared an update of the Ubuntu Studio meta-packages. However, Adam Conrad did the work anyway as part of his cross-flavour release work without noticing my bug & request for sponsorship. So I closed the bug.
  • Updated the Ubuntu Studio wiki to expand on the process for updating our seeds and meta-packages. Hopefully, this will help new contributors to get involved in this area in the future.
  • Took part in the testing and release of the Ubuntu Studio Trusty 14.04.5 point release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety Beta 1 release.
  • Prepared a backport of Ansible but before I could chase up what to do about the fact that ansible-fireball was no longer part of the Ansible package, some one else did the backport without noticing my bug. So I closed the bug.
  • Prepared an update of the Ubuntu Studio meta-packages. This was sponsored by Jeremy Bicha.
  • Prepared an update to the ubuntustudio-default-settings package. This switched the Ubuntu Studio desktop theme to Numix-Blue, and reverted some commits to drop the ubuntustudio-lightdm-theme package from the archive. This had caused quite a bit of controversy and discussion on IRC due to the transition being a little too close to the release date for Yakkety. This was sponsored by Iain Lane (Laney).
  • Prepared the Numix Blue update for the ubuntustudio-lightdm-theme package. This was also sponsored by Iain Lane (Laney). I should thank Krytarik here for the initial Numix Blue theme work here (on the lightdm theme & default settings packages).
  • Provided a patch for gfxboot-theme-ubuntu which has a bug which is regularly reported during ISO testing, because the “Try Ubuntu Studio without installing” option was not a translatable string and always appeared in English. Colin Watson merged this, so hopefully it will be translated by the time of the next release.
  • Took part in the testing and release of the Ubuntu Studio Yakkety 16.10 release.
  • After a hint from Jeremy Bicha, I prepared a patch that adds a desktop file for Imagemagick to the ubuntustudio-default-settings package. This will give us a working menu item in Ubuntu Studio whilst we wait for the bug to be fixed upstream in Debian. Next month I plan to finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition, including dropping ubuntustudio-lightdm-theme from the Ubuntu Studio seeds. I will include this fix at the same time.


  • At other times when I have had a spare moment, I have been working on resurrecting my old Family History website. It was originally produced in my Windows XP days, and I was no longer able to edit it in Linux. I decided to convert it to Jekyll. First I had to extract the old HTML from where the website is hosted using the HTTrack Website Copier. Now, I am in the process of switching the structure to the standard Jekyll template approach. I will need to switch to a nice Jekyll based theme, as the old theming was pretty complex. I pushed the code to my Github repository for safe keeping.

Plan for December


Before the 5th January 2017 Debian Stretch soft freeze I hope to:


  • Add the Ubuntu Studio Manual Testsuite to the package tracker, and try to encourage some testing of the newest versions of our priority packages.
  • Finish the ubuntustudio-lightdm-theme, ubuntustudio-default-settings transition including an update to the ubuntustudio-meta packages.
  • Reapply to become a Contributing Developer.
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in.


  • Continue working to convert my Family History website to Jekyll.
  • Try and resurrect my old Gammon one-name study Drupal website from a backup and push it to the new GoONS Website project.

Planet Linux AustraliaMaxim Zakharov: dpsearch-4.54-2016-12-03

A new snapshot version of DataparkSearch Engine has been released. You can get it on Google Drive or on GitHub.

Changes made since previous snapshot:

  • added setting reading timeout to socket based on document reading timeout
  • added support for wolfssl and mbedtls libraries
  • added timeout tracking for https
  • removed adjustment on server weight before putting url poprank into url data
  • fixed compilation without openssl
  • improved OpenSSL detection
  • added --enable-mcmodel option for configure
  • corrected compilation flags for threadless version of libdpsearch if no apache module selected to build
  • switched to CRYPTO_THREADID for OpenSSL 1.0.0 and above
  • minor fixes and updates

CryptogramA 50-Foot Squid Has Not been Found in New Zealand

A 50-foot squid has not been found in New Zealand.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.


Krebs on SecurityVisa Delays Chip Deadline for Pumps To 2020

Visa this week delayed by three years a deadline for fuel station owners to install payment terminals at the pump that are capable of handling more secure chip-based cards. Experts say the new deadline — extended from 2017 — comes amid a huge spike in fuel pump skimming, and means fraudsters will have another three years to fleece banks and their customers by installing card-skimming devices at the pump.

Until this week, fuel station owners in the United States had until October 1, 2017 to install chip-capable readers at their pumps. Under previous Visa rules, station owners that didn’t have chip-ready readers in place by then would have been on the hook to absorb 100 percent of the costs of fraud associated with transactions in which the customer presented a chip-based card yet was not asked or able to dip the chip (currently, card-issuing banks eat most of the fraud costs from fuel skimming). The chip card technology standard, also known as EMV (short for Europay, MasterCard and Visa) makes credit and debit cards far more expensive and difficult for thieves to clone.

This week, however, Visa said fuel station owners would have until October 1, 2020 to meet the liability shift deadline.

A Bluetooth-based pump card skimmer found inside of a Food N Things pump in Arizona in April 2016.

“The fuel segment has its own unique challenges, which we recognized when we first set the chip activation date for automated fuel dispensers/pumps (AFDs) two years after regular in-store locations,” Visa said in a statement explaining its decision. “We knew that the AFD segment would need more time to upgrade to chip because of the complicated infrastructure and specialized technology required for fuel pumps. For instance, in some cases, older pumps may need to be replaced before adding chip readers, requiring specialized vendors and breaking into concrete. Furthermore, five years after announcing our liability shift, there are still issues with a sufficient supply of regulatory-compliant EMV hardware and software to enable most upgrades by 2017.”

Visa said fuel pump skimming accounts for just 1.3 percent of total U.S. payment card fraud.

“During this interim period, Visa will monitor AFD fraud trends closely and work with merchants, acquirers and issuers to help mitigate any potential counterfeit fraud exposure at AFDs,” Visa said.

Avivah Litan, a fraud analyst with Gartner Inc., said the deadline shift wasn’t unexpected given how many U.S. fuel stations are behind on costly updates, noting that in some cases it can cost more than $10,000 per pump to accommodate chip card readers. The National Association of Convenience Stores estimates that station operators will spend approximately $30,000 per store to accommodate chip readers, and that the total cost to the fuel industry could exceed $4 billion.

“Some of them you can just replace the payment module inside the pump, but the older pumps will need to be completely removed and replaced,” Litan said. “Gas stations and their unattended pumps have always been an easy target for thieves. The fraud usually migrates to the point of least resistance, and we’re seeing now the fraudsters really moving to targeting unattended stations that haven’t been upgraded.”

The delay comes as some states — particularly in the southern United States — are grappling with major increases in fuel station skimming attacks. In September, KrebsOnSecurity published a detailed look at nine months’ worth of fuel pump skimming incident reports filed by police and regulators in Arizona, which said it saw more fuel station skimming attacks in the month of August 2016 than in all of 2015 combined.

That report about Arizona’s skimmer scourge found that thieves tend to target pumps that are furthest from the pump station and closest to the street. They also favored stations that did not employ basic security measures such as tamper-evident security tape and security cameras.

Crooks involved in fuel pump skimming generally are tied to organized crime gangs, as evidenced by this Nov. 2015 investigation into fuel theft gangs operating in Southern California. The thieves most often use stolen master keys or bribery to gain access to the pumps. Once inside the pumps, the thieves hook up their skimmer to the pump’s card reader and PIN pad. The devices also are connected to the pump’s electric power — so they don’t need batteries and can operate indefinitely. Increasingly, these thieves are installing Bluetooth-based skimmers that can transmit stolen data wirelessly, allowing thieves to avoid taking the risky step of retrieving their skimmer gear.

Some pump skimming devices are capable of stealing debit card PINs as well, so it’s a good idea to avoid paying with a debit card at the pump. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

“That’s exactly the sort of advice fuel station owners don’t want given to consumers,” Litan said. “For filling stations, credit is their least favorite form of payment because it’s the most expensive for them, which is why some stations offer lower prices for debit card transactions. But consumers should never use a debit card at a gas station.”

Want to learn more about skimming devices? Check out my series, All About Skimmers.

Planet DebianShirish Agarwal: Air Congestion and Politics

Confession time first – I am not a frequent flyer at all. My first flight was in late 2006, a 2 hour flight from Bombay (BOM) to Bengaluru (formerly Bangalore, BLR). I still remember the trepidation, the nervousness and the excitement of taking to the air for the first time. I still remember the flight very vividly.

It was a typical humid day for Bombay/Mumbai and we (me and a friend) had gone to Sahar (the domestic airport) to take the flight in the evening. Before we started, the sky had turned golden-orange and I was wondering how I would feel once I was in the air. We started at around 20:00 hours and, as it was a clear night, we were able to see the Queen’s necklace (Marine Drive) in all her glory.

The photographs on the Wikipedia page don’t really do justice to how beautiful the whole boulevard looks at night, especially from up there. While we were taking it in, it seemed the pilot had banked at a 45-degree angle so we could have the best view of the necklace, OR maybe the pilot wanted to take a photo, OR maybe it was just me in overdrive (like Robin Williams’s Russian immigrant in Moscow on the Hudson the first time he goes to the mall ;))

Either way, it is an experience I will never forget for the rest of my life. I remember I didn’t move an inch (even to go to the loo) as I didn’t want to let go of the whole experience. Though I came back after 3-4 days, I remember re-experiencing/re-imagining the flights each time I went to sleep for a whole month.

While I can’t say it has become routine, I have been lucky to have the opportunity to fly domestically around the country, primarily for work. After the initial romanticism wears off, you try to understand the various aspects of the flight that are happening around you.

These experiences are what led me to file/share today’s blog post. Yesterday, Ms. Mamata Banerjee, one of the leaders of the Opposition, cried wolf because her aircraft was circling the airport. Because she is a Chief Minister she feels she should have been given precedence, or at least that seems to be the way the story unfolded on TV.

I have taken perhaps 15-20 flights in the last decade for work or leisure. On almost all of them, it has been routine for the aircraft to circle the airport for 15-20 minutes before landing. This is ‘routine’. I have seen airliners being stacked (remember the scene from Die Hard 2 where Holly McClane, John McClane’s wife, looks at different aircraft at different altitudes from her window seat); this is what an airport has to do when it doesn’t have enough runways. In fact, just a few days back I read that MIAL is going for an emergency expansion as they weren’t expecting as many passengers as they got this year and last. The same day there was a near-miss between two aircraft at Mumbai airport itself. Because of Ms. Mamata’s belligerence, this story didn’t even get a mention in the mainstream TV media.

The point I want to underscore is that this is a fact of life, and not just in India; world-over, it seems hubs are busier than ever. Heathrow, for instance, has also been a busy bee and will have to rework air operations, as per a recent article.

In India, Kolkata is also one of the busier airports. If anything, I hope this teaches her the issues that plague most Indian airports, and that she works with the Central Government so the airport can expand further. They got a new terminal just three years back.

It is for these issues that the Indian Government has come up with the ‘Regional Connectivity Scheme’.

Lastly, a bit of welcome news for people thinking of visiting India: the Govt. of the day is facilitating easier visa norms to increase tourism and trade to India. Hope this is beneficial to all, including any Debian Developers who want to come visit India 😉 I do hope that we also get reciprocity from those countries as well.

Filed under: Miscellaneous Tagged: # Domestic Flights, #Air Congestion, #Airport Expansion, #Kolkata, #near-miss, #Visa for tourists

Sociological Images“A Princess is Kind of a Bad Ass”: When Feminist Moms Pick Up the Pen

Sometimes there’s nothing to do but take matters into our own hands. Danielle Lindemann, a mother and sociologist, decided to do just that. After discovering that one of her daughter’s books required some “subversion,” she decided to do a little editing. Here’s to one way of fighting the disempowering messages taught to little girls by capitalist icons:

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

CryptogramAuditing Elections for Signs of Hacking

Excellent essay pointing out that election security is a national security issue, and that we need to perform random ballot audits on every future election:

The good news is that we know how to solve this problem. We need to audit computers by manually examining randomly selected paper ballots and comparing the results to machine results. Audits require a voter-verified paper ballot, which the voter inspects to confirm that his or her selections have been correctly and indelibly recorded. Since 2003, an active community of academics, lawyers, election officials and activists has urged states to adopt paper ballots and robust audit procedures. This campaign has had significant, but slow, success. As of now, about three quarters of U.S. voters vote on paper ballots. Twenty-six states do some type of manual audit, but none of their procedures are adequate. Auditing methods have recently been devised that are much more efficient than those used in any state. It is important that audits be performed on every contest in every election, so that citizens do not have to request manual recounts to feel confident about election results. With high-quality audits, it is very unlikely that election fraud will go undetected whether perpetrated by another country or a political party.
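The manual-audit idea above can be sketched in a few lines. This is my own illustration, not any jurisdiction's actual procedure: the function name, the toy contest and the fixed seed are invented for the example, and real risk-limiting audits derive their random seed publicly and choose sample sizes statistically rather than arbitrarily.

```python
import random

def audit_contest(paper_ballots, machine_totals, sample_size, seed=0):
    """Hand-count a random sample of the voter-verified paper ballots and
    compare each candidate's share in the sample to the machine-reported
    share.  Returns per-candidate discrepancies (sample share minus
    machine share); a large discrepancy calls for a full recount."""
    rng = random.Random(seed)  # real audits generate the seed publicly
    sample = rng.sample(paper_ballots, sample_size)

    total_machine = sum(machine_totals.values())
    discrepancies = {}
    for candidate, machine_count in machine_totals.items():
        sample_share = sample.count(candidate) / sample_size
        machine_share = machine_count / total_machine
        discrepancies[candidate] = sample_share - machine_share
    return discrepancies

# A tiny simulated contest where the machine totals match the paper trail,
# so every discrepancy should be small.
ballots = ["Adams"] * 600 + ["Baker"] * 400
print(audit_contest(ballots, {"Adams": 600, "Baker": 400}, sample_size=200))
```

The essential property is that the ballots are selected at random after the machine totals are published, so a compromised tabulator cannot predict which ballots will be checked by hand.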

Another essay along similar lines.

Related: there is some information about Russian political hacking this election cycle that is classified. My guess is that it has nothing to do with hacking the voting machines -- the NSA was on high alert for anything, and I have it on good authority that they found nothing -- but something related to either the political-organization hacking, the propaganda machines, or something else before Election Day.

Planet DebianRaphaël Hertzog: My Free Software Activities in November 2016

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

In the 11 hours of (paid) work I had to do, I managed to release DLA-716-1 aka tiff 4.0.2-6+deb7u8 fixing CVE-2016-9273, CVE-2016-9297 and CVE-2016-9532. It looks like this package is currently getting new CVEs every month.

Then I spent quite some time to review all the entries in dla-needed.txt. I wanted to get rid of some misleading/no longer applicable comments and at the same time help Olaf who was doing LTS frontdesk work for the first time. I ended up tagging quite a few issues as no-dsa (meaning that we will do nothing for them as they are not serious enough) such as those affecting dwarfutils, dokuwiki, irssi. I dropped libass since the open CVE is disputed and was triaged as unimportant. While doing this, I fixed a bug in the bin/review-update-needed script that we use to identify entries that have not made any progress lately.

Then I claimed libgc and released DLA-721-1 aka libgc 1:7.1-9.1+deb7u1 fixing CVE-2016-9427. The patch was large and had to be manually backported as it did not apply cleanly.

The last thing I did was to test a new imagemagick and review the update prepared by Roberto.

pkg-security work

The pkg-security team is continuing its good work: I sponsored patator to get rid of a useless dependency on pycryptopp, which was going to be removed from testing due to #841581. After looking at that bug, it turned out it had already been fixed in libcrypto++ 5.6.4-3, so I closed it.

I sponsored many uploads: polenum, acccheck, sucrack (minor updates), bbqsql (new package imported from Kali). A bit later I fixed some issues in the bbsql package that had been rejected from NEW.

I managed a few RC bugs related to the openssl 1.1 transition: I adopted sslsniff in the team and fixed #828557 by build-depending on libssl1.0-dev after having opened the proper upstream ticket. I did the same for ncrack and #844303 (upstream ticket here). Someone else took care of samdump2 but I still adopted the package in the pkg-security team as it is a security relevant package. I also made an NMU for axel and #829452 (it’s not pkg-security related but we still use it in Kali).

Misc Debian work

Django. I participated in the discussion about a change letting Django count the number of developers that use it. Such a change has privacy implications and the discussion sparked quite some interest both in Debian mailing lists and up to LWN.

On a more technical level, I uploaded version 1.8.16-1~bpo8+1 to jessie-backports (security release) and I fixed RC bug #844139 by backporting two upstream commits. This led to the 1.10.3-2 upload. I ensured that this was fixed in the 1.10.x upstream branch too.

dpkg and merged /usr. While reading debian-devel, I discovered dpkg bug #843073 that was threatening the merged-/usr feature. Since the bug was in code that I wrote a few years ago, and since Guillem was not interested in fixing it, I spent an hour to craft a relatively clean patch that Guillem could apply. Unfortunately, Guillem did not yet manage to pull out a new dpkg release with the patches applied. Hopefully it won’t be too long until this happens.

Debian Live. I closed #844332 which was a request to remove live-build from Debian. While it was marked as orphaned, I was always keeping an eye on it and have been pushing small fixes to git. This time I decided to officially adopt the package within the debian-live team and work a bit more on it. I reviewed all pending patches in the BTS and pushed many changes to git. I still have some pending changes to finish to prettify the Grub menu but I plan to upload a new version really soon now.

Misc bugs filed. I filed two upstream tickets on uwsgi to help fix currently open RC bugs on the package. I filed #844583 on sbuild to support arbitrary version suffix for binary rebuild (binNMU). And I filed #845741 on xserver-xorg-video-qxl to get it fixed for the xorg 1.19 transition.

Zim. While trying to fix #834405 and update the required dependencies, I discovered that I had to update pygtkspellcheck first. Unfortunately, its package maintainer was MIA (missing in action) so I adopted it first as part of the python-modules team.

Distro Tracker. I fixed a small bug that resulted in an ugly traceback when we got queries with a non-ASCII HTTP_REFERER.


See you next month for a new summary of my activities.

Worse Than FailureError'd: Lenovo Uh-Oh (and more!)

"I get it that some apps need special permissions, but a GUID is the digital equivalent of 'just trust me - I know what I'm doing'," Kenneth M. writes.


"Sometimes, vendors paint their accessories with golden paint," writes Geoffk C., "On the other hand, if you're Lenovo, you might produce a mouse made of solid gold."


"When it comes to opening .psd files, I only use %1", writes Tony.


David wrote, "I've completely combed through Tanaguru's website and I still can't figure out how I can contact them."


"Mpan caught firefox performing peculiarly performant," writes M.


Pieter V. wrote, "Just when you don't expect an application crash to be sarcastic, VLC delivers."


"Oh, no, thank you, Microsoft for the pretty 'thank you' dialog," writes Tom G.


[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet DebianMatthew Garrett: Ubuntu still isn't free software

Mark Shuttleworth just blogged about their stance against unofficial Ubuntu images. The assertion is that a cloud hoster is providing unofficial and modified Ubuntu images, and that these images are meaningfully different from upstream Ubuntu in terms of their functionality and security. Users are attempting to make use of these images, are finding that they don't work properly and are assuming that Ubuntu is a shoddy product. This is an entirely legitimate concern, and if Canonical are acting to reduce user confusion then they should be commended for that.

The appropriate means to handle this kind of issue is trademark law. If someone claims that something is Ubuntu when it isn't, that's probably an infringement of the trademark and it's entirely reasonable for the trademark owner to take action to protect the value associated with their trademark. But Canonical's IP policy goes much further than that - it can be interpreted as meaning[1] that you can't distribute works based on Ubuntu without paying Canonical for the privilege, even if you call it something other than Ubuntu.

This remains incompatible with the principles of free software. The freedom to take someone else's work and redistribute it is a vital part of the four freedoms. It's legitimate for Canonical to insist that you not pass it off as their work when doing so, but their IP policy continues to insist that you remove all references to Canonical's trademarks even if their use would not infringe trademark law.

If you ask a copyright holder if you can give a copy of their work to someone else (assuming it doesn't infringe trademark law), and they say no or insist you need an additional contract, it's not free software. If they insist that you recompile source code before you can give copies to someone else, it's not free software. Asking that you remove trademarks that would otherwise infringe trademark law is fine, but if you can't use their trademarks in non-infringing ways, that's still not free software.

Canonical's IP policy continues to impose restrictions on all of these things, and therefore Ubuntu is not free software.

[1] And by "interpreted as meaning" I mean that's what it says and Canonical refuse to say otherwise

Planet Linux AustraliaSteven Hanley: [mtb/events] Alpine Challenge - 160km Trail run - Victoria

A big adventure out in the Victorian Alps (fullsize)
So I was keen to see if, after having had fun doing a number of 100km trail running events, stepping up to 160km (what the Americans call a 100 due to their use of miles) would be just as much fun. So, not to take it easy, I went and entered the hardest in Australia: the Alpine Challenge in the Victorian Alps, 160km on mountain walking trails and fire roads with 7200 metres of climbing.

I had not really done enough training for this one; I expected it to take around 30 hours, though I would have loved to go under 28. In the end I was close to expectations after the last 60km became a slow bushwalk. Still, it is a great adventure in some of the most amazing parts of our country. Now that I have done it, I know what is needed to go better and think I could run a much better race on that course too.

My words and photos are online in my Alpine Challenge 2016 gallery. What a big mountain adventure that was!


Cory DoctorowMy keynote from the O’Reilly Security Conference: “Security and feudalism: Own or be pwned”

Here’s the 32 minute video of my presentation at last month’s O’Reilly Security Conference in New York, “Security and feudalism: Own or be pwned.”

Cory Doctorow explains how EFF is battling the perfect storm of bad security, abusive business practices, and threats to the very nature of property itself, fighting for a future where our devices can be configured to do our bidding and where security researchers are always free to tell us what they’ve learned.

Planet DebianThorsten Alteholz: My Debian Activities in November 2016

FTP assistant

This month I marked 377 packages for accept and rejected 36 packages. I also sent 13 emails to maintainers asking questions.

Debian LTS

This was my twenty-ninth month that I did some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my all in all workload has been 11h. During that time I did uploads of

  • [DLA 696-1] bind9 security update for one CVE
  • [DLA 711-1] curl security update for nine CVEs

The upload of curl started as an embargoed one but the discussion about one fix took some time and the upload was a bit delayed.

I also prepared a test package for jasper which takes care of nine CVEs and is available here. If you are interested in jasper, please download it and check whether everything is working in your environment. As upstream only takes care of CVEs/bugs at the moment, maybe we should not upload the old version with patches but the new version with all fixes. Any comments?

Other stuff

As it is again that time of the year, I would also like to draw some attention to the Debian Med Advent Calendar. As in past years, the Debian Med team starts a bug squashing event from December 1st to 24th. Every bug that is closed will be registered in the calendar. So instead of taking something from the calendar, this special one will be filled, and at Christmas hopefully every Debian Med related bug will be closed. Don’t hesitate, start to squash :-).

In November I also uploaded new versions of libmatthew-java, node-array-find-index, node-ejs, node-querystringify, node-require-dir, node-setimmediate and libkeepalive. Further, I added node-json5, node-emojis-list, node-big.js and node-eslint-plugin-flowtype to the NEW queue, sponsored an upload of node-lodash, adopted gnupg-pkcs11-scd, reverted the -fPIC patch in libctl and fixed RC bugs in alljoyn-core-1504, alljoyn-core-1509 and alljoyn-core-1604.

Krebs on Security‘Avalanche’ Global Fraud Ring Dismantled

In what’s being billed as an unprecedented global law enforcement response to cybercrime, federal investigators in the United States, United Kingdom and Europe today say they’ve dismantled a sprawling cybercrime machine known as “Avalanche” — a distributed, cloud-hosting network that for the past seven years has been rented out to fraudsters for use in launching countless malware and phishing attacks.

The global distribution of servers used in the Avalanche crime machine. Source:

According to Europol, the action was the result of a four-year joint investigation between Europol, Eurojust the FBI and authorities in the U.K. and Germany that culminated on Nov. 30, 2016 with the arrest of five individuals, the seizure of 39 Web servers, and the sidelining of more than 830,000 web domains used in the scheme.

Built as a criminal cloud-hosting environment that was rented out to scammers, spammers and other ne’er-do-wells, Avalanche has been a major source of cybercrime for years. In 2009, when investigators say the fraud network first opened for business, Avalanche was responsible for funneling roughly two-thirds of all phishing attacks aimed at stealing usernames and passwords for bank and e-commerce sites. By 2011, Avalanche was being heavily used by crooks to deploy banking Trojans.

The U.K.’s National Crime Agency (NCA), says the more recent Avalanche fraud network comprised up to 600 servers worldwide and was used to host as many as 800,000 web domains at a time.

“Cyber criminals rented the servers and through them launched and managed digital fraud campaigns, sending emails in bulk to infect computers with malware, ransomware and other malicious software that would steal users’ bank details and other personal data,” the NCA said in a statement released today on the takedown. “The criminals used the stolen information for fraud or extortion. At its peak, 17 different types of malware were hosted by the network, including major strains with names such as goznym, urlzone, pandabanker and loosemailsniffer. At least 500,000 computers around the world were infected and controlled by the Avalanche system on any given day.”

The Avalanche network was especially resilient because it relied on a hosting method known as fast-flux, a kind of round-robin technique that lets botnets hide phishing and malware delivery sites behind an ever-changing network of compromised systems acting as proxies.

“The complex setup of the Avalanche network was popular amongst cybercriminals, because of the double fast flux technique offering enhanced resilience to takedowns and law enforcement action,” Europol said in its statement.
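To illustrate why fast-flux makes takedowns so difficult, here is a toy model of the rotation. This is entirely my own sketch, not Avalanche's actual software: the class name, the pool size and the TTL value are invented, and real fast-flux networks implement this inside compromised DNS infrastructure rather than tidy Python.

```python
import random

class FastFluxResolver:
    """Toy model of a fast-flux name server: one phishing hostname
    resolves to a constantly rotating subset of compromised proxy
    machines, and every answer carries a very low TTL so victims
    re-resolve often.  Blocking any single IP accomplishes little."""

    def __init__(self, proxy_pool, answers_per_query=3, ttl_seconds=180, seed=0):
        self.pool = list(proxy_pool)
        self.answers_per_query = answers_per_query
        self.ttl_seconds = ttl_seconds
        self.rng = random.Random(seed)

    def resolve(self, hostname):
        # Every query gets a fresh random subset of the compromised hosts.
        records = self.rng.sample(self.pool, self.answers_per_query)
        return {"host": hostname, "ttl": self.ttl_seconds, "a_records": records}

# Documentation-range addresses (RFC 5737) stand in for compromised proxies.
pool = [f"198.51.100.{i}" for i in range(1, 50)]
resolver = FastFluxResolver(pool)
print(resolver.resolve("phish.example"))
print(resolver.resolve("phish.example"))  # typically a different answer set
```

The unusually low TTL combined with a large, churning set of A records is exactly the signature that defenders look for when hunting fast-flux domains.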

It’s worth noting here that Avalanche has for many years been heavily favored by crime gangs to deploy Zeus and SpyEye malware variants involved in cleaning out bank accounts for a large number of small to mid-sized businesses. These attacks relied heavily on so-called “money mules,” people willingly or unwittingly recruited into helping fraudsters launder stolen funds.

At the time of the takedown, the Avalanche cybercrime infrastructure spanned more than 180 countries, according to The Shadowserver Foundation, a nonprofit group that helped authorities gain control over the Avalanche domains. Read more on Shadowserver’s role in this effort here.

The Avalanche crime infrastructure. Image: Europol

Planet DebianDaniel Pocock: Using a fully free OS for devices in the home

There are more and more devices around the home (and in many small offices) running a GNU/Linux-based firmware. Consider routers, entry-level NAS appliances, smart phones and home entertainment boxes.

More and more people are coming to realize that there is a lack of security updates for these devices and a big risk that the proprietary parts of the code are either very badly engineered (if you don't plan to release your code, why code it properly?) or deliberately includes spyware that calls home to the vendor, ISP or other third parties. IoT botnet incidents, which are becoming more widely publicized, emphasize some of these risks.

On top of this is the frustration of trying to become familiar with numerous different web interfaces (for your own devices and those of any friends and family members you give assistance to) and the fact that many of these devices have very limited feature sets.

Many people hail OpenWRT as an example of a free alternative (for routers), but I recently discovered that OpenWRT's web interface won't let me enable both DHCP and DHCPv6 concurrently. The underlying OS and utilities fully support dual stack, but the UI designers haven't encountered that configuration before. Conclusion: move to a device running a full OS, probably Debian-based, but I would consider BSD-based solutions too.

For many people, the benefit of this strategy is simple: use the same skills across all the different devices, at home and in a professional capacity. Get rapid access to security updates. Install extra packages or enable extra features if really necessary. For example, I already use Shorewall and strongSwan on various Debian boxes and I find it more convenient to configure firewall zones using Shorewall syntax rather than OpenWRT's UI.
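For readers unfamiliar with it, Shorewall is configured through a handful of small table-style files rather than a web UI. A minimal two-zone sketch might look like the following (the zone names and LOG values are illustrative only; consult the Shorewall documentation for the full syntax):

```text
# /etc/shorewall/zones - declare the firewall itself and two zones
#ZONE   TYPE
fw      firewall
net     ipv4
lan     ipv4

# /etc/shorewall/policy - default policies between zones
#SOURCE  DEST  POLICY   LOG
lan      net   ACCEPT
fw       all   ACCEPT
net      all   DROP     info
all      all   REJECT   info
```

The same two files work unchanged on any Debian-based box, which is precisely the "same skills across all devices" benefit described above.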

Which boxes to start with?

There are various considerations when going down this path:

  • Start with existing hardware, or buy new devices that are easier to re-flash? Sometimes there are other reasons to buy new hardware, for example, when upgrading a broadband connection to Gigabit or when an older NAS gets a noisy fan or struggles with SSD performance and in these cases, the decision about what to buy can be limited to those devices that are optimal for replacing the OS.
  • How will the device be supported? Can other non-technical users do troubleshooting? If mixing and matching components, how will faults be identified? If buying a purpose-built NAS box and the CPU board fails, will the vendor provide next day replacement, or could it be gone for a month? Is it better to use generic components that you can replace yourself?
  • Is a completely silent/fanless solution necessary?
  • Is it possible to completely avoid embedded microcode and firmware?
  • How many other free software developers are using the same box, or will you be first?

Discussing these options

I recently started threads on the debian-user mailing list discussing options for routers and home NAS boxes. A range of interesting suggestions has already appeared; it would be great to see any other ideas that people have about these choices.

TEDAnnouncing our 2017 TED Prize winner: Healthcare warrior Raj Panjabi

Raj Panjabi was born in Liberia, but his family fled civil war when he was nine. He returned as a medical student -- and went on to found Last Mile Health. Photo: Courtesy of Last Mile Health

It sounds simple enough: If you’re sick, you make an appointment with a doctor, and if it’s an emergency, you head to the nearest hospital. But for more than a billion people around the world, it’s a real challenge — because they live too far from a medical facility.

Where Raj Panjabi’s nonprofit, Last Mile Health, operates in Liberia, people in remote communities hike for hours or even days — sometimes canoeing through the jungle or motorbiking over rough terrain — to get medical care. Many will go their entire lives without visiting a doctor, which puts them at high risk of dying from diseases that are easily treated. Last Mile Health has created a model for expanding healthcare access to remote regions by training, employing and equipping community health workers. The organization’s work has shown impressive results in Liberia, and could be replicated elsewhere. That’s why TED is thrilled to announce Raj Panjabi as the winner of the 2017 TED Prize.

On April 25, 2017, at the annual TED Conference, Panjabi will reveal a $1 million wish for the world, related to this work. “I’m shocked and humbled, because I feel in many ways our work is only just beginning,” he said. “But it feels very right to me that this cause is worthy of the TED community’s efforts. Illness has been universal for the entire length of human history — but universal access to care has not been. Now, because of the advances in modern medical science and technology over the past 50 to 100 years, we have the chance to end that inequality.”

Reaching remote communities in Grand Gedeh County, Liberia, often involves long hikes or traveling by motorbike. Last Mile Health trains community health workers to serve these remote areas. Photo: Courtesy of Last Mile Health

Since 2007, Last Mile Health has partnered with the government of Liberia to train, equip, employ and support community health workers. These community health workers are nominated by local leaders, and trained, with support from nurses, to diagnose and treat a wide range of medical problems. In the past year, these health workers have conducted more than 42,000 patient visits in their regions, and treated nearly 22,000 cases of malaria, pneumonia and diarrhea in children. They’ve also proven themselves to be a powerful line of defense against pandemics. During the Ebola outbreak, Last Mile Health assisted the government of Liberia in its response, helping to train 1,300 health workers and community members to prevent the spread of the disease in the southeastern region of the country. This year, Panjabi, who’s also a physician in the Division of Global Health Equity at Brigham and Women’s Hospital at Harvard Medical School, was named to TIME’s list of the “100 Most Influential People in the World” for Last Mile Health’s part in helping contain the Ebola epidemic.

And it feels especially fitting to announce him as the next TED Prize winner on World AIDS Day, since Last Mile Health began as Liberia’s first rural public HIV program, helping patients in the war-torn area of Zwedru who could not make the trek to the capital, Monrovia, for care.

“I want to see a health worker for everyone, everywhere, every day,” says Panjabi. “I’m honored and excited by the opportunity to amplify the work of these inspiring community health workers.”

Sign up to receive updates as Panjabi readies to reveal his wish at TED2017. And learn more about the TED Prize, a $1 million grant given annually to a bold leader with a wish to solve a pressing global problem. Past winners include Sylvia Earle, Jamie Oliver, JR, Dave Isay and Sarah Parcak, whose citizen-science platform for archaeology will launch in the new year.

Worse Than FailureJust The Fax, Ma'am

Muirhead fax machine - MfK Bern

Gus had been working at his new job for a month. Most of his tickets had been for front-end work, making it easier and more efficient to manage the various vendors that the company did business with. These were important flags like "company does not accept UPS deliveries" or "company does not accept paper POs". The flags had been previously set via an aging web-based UI that only worked in Internet Explorer 6, but now they were migrating one at a time into the shiny new HTML5 app. It was tiring work, but rewarding.

Unfortunately, as is so often the case, Gus quickly became pigeonholed as "the flag guy". Whenever it came time in the project to add new flags, there was no question who'd get the ticket. Gus could think of nothing he looked forward to less than touching the Oracle-based backend to the product, but unfortunately, it was his burden to bear.

Adding flags to the database involved going through a special Database Committee. This was separate from the usual change request process. The committee was formed from all 6 of the company's database experts, and they personally reviewed every change. Worse, they were stodgy as all get-out. Any small error would get the change thrown out and the requestor berated for "wasting my time", along with a good helping of grumbling about "kids these days" and "narcissistic millennials" to boot.

Gus submitted his change request asking for a new field 2 weeks ahead of when it was slated to go live, just in case. Submissions were due by Monday and were discussed on Thursday, with the results posted first thing Friday morning on the bulletin board outside the breakroom. Gus filled in every field carefully, checking the whole thing twice—all but the title, which he'd written as "New Database Feild". On Tuesday, he realized his typo. He quietly edited the form, saved it, then crossed his fingers.

Friday rolled around and his change wasn't on the list, neither accepted nor rejected. Chewing his lip, Gus pulled up the change system and skimmed for his change.

It was marked auto-rejected.

"What did you do?" demanded Chuck, the senior developer who'd been mentoring him.

"I don't know!" Gus replied. "Do you think it was the typo? But I fixed it on Tuesday!"

Chuck slapped his forehead with his palm. "You changed the form? Don't ever change the form after it's submitted! That's grounds for automatic rejection!"

"It's okay," Gus said weakly. "We still have another week."

Chuck just looked at him, shaking his head as he walked away.

Gus spent the rest of the day focusing on the tedious form. There were dozens of fields, each with vague instructions, many demanding long explanations. Gus wouldn't be at the meeting to explain his change; he had to convince the committee it was necessary through the form alone. Worse, when he submitted it, it routed through an approval process that required him to chase down no less than 6 individuals to fill out their parts of the form.

"Yes, it's the same one as last week. Just put the same thing you did then. No, sorry, I don't know why it was rejected," he lied. "Can you just sign?"

Monday came and went without another auto-rejection. Gus checked compulsively every day at lunchtime, waiting for the other shoe to drop, but his request made it to Thursday without incident. Finally, Friday came, and he made the trek to the breakroom to check the list.

His change had been rejected.

He didn't know why. He didn't care why. The application needed to be up and running for the first of the month—the following Tuesday. There was no time to try again. Gus walked over to Chuck's cube, his mind whirling. "What do you know about data hiding?"

Chuck's face fell. He rubbed his face with one hand. "Dammit, this is why I try to pull front-end tickets."

"You gotta help me, dude. I'm dying here!"

"Okay, okay, let me think. I've heard about some guys slipping an extra so-called check-digit into integer fields. You have to mask it out before the code gets to it, but ..."

Gus grimaced. "I'd never get all the spots, I barely know the app."

"You're sure you can't push back on the deadline?"

"Yeah. Can't we find a field that's unused or something?" Gus begged.

An hour and dozens of SELECT statements later, they did just that. The Fax field wasn't populated in the old system; not a single vendor had a fax number worth recording. The new system hadn't bothered porting it over at all. Gus and Chuck hooked the flag up to the existing field. No muss, no fuss, and no database review.
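A minimal sketch of the workaround, using SQLite in place of the story's Oracle backend; the table, column and sentinel names are invented for illustration. The unused fax column simply doubles as storage for the new boolean flag, so no schema change — and no Database Committee — is required:

```python
import sqlite3

# Toy schema standing in for the real vendor table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vendors (id INTEGER PRIMARY KEY, name TEXT, fax TEXT)")
conn.execute("INSERT INTO vendors (name, fax) VALUES ('Acme Corp', NULL)")

# Sentinel value smuggled into the otherwise-unused fax column
FLAG = "NO_PAPER_PO"

def set_flag(conn, vendor_id, enabled):
    """Store the flag by writing (or clearing) the sentinel in the fax column."""
    conn.execute("UPDATE vendors SET fax = ? WHERE id = ?",
                 (FLAG if enabled else None, vendor_id))

def get_flag(conn, vendor_id):
    """Read the flag back by checking the fax column for the sentinel."""
    row = conn.execute("SELECT fax FROM vendors WHERE id = ?",
                       (vendor_id,)).fetchone()
    return row is not None and row[0] == FLAG
```

The obvious hazard, of course, is the same one Chuck's check-digit trick has: the moment anyone actually starts recording fax numbers, the flag and the data collide.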

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaColin Charles: Debian and MariaDB Server

GNU/Linux distributions matter, and Debian is one of the most popular ones out there in terms of user base. It's an interesting time as MariaDB Server becomes more divergent from upstream MySQL, and distributions go about choosing default providers of the database.

MariaDB Server's original goal was to be a drop-in replacement. In fact this is how it's described ("It is an enhanced, drop-in replacement for MySQL"). We all know that it's becoming increasingly hard for that line to be used these days.

Anyhow, in March 2016 Debian's release team made the decision that going forward, MariaDB Server is what people using Debian Stretch get when they ask for MySQL (i.e. MariaDB Server is the default provider of an application that requires the use of port 3306 and provides a MySQL-like protocol).

All this has brought some interesting bug reports and discussions, so here’s a collection of links that interest me (with decisions that will affect Debian users going forward).


MariaDB Server


Planet DebianCarl Chenet: My Free Software activities in November 2016

My monthly report for November 2016 gives an extended list of my Free Software related activities during this month.

Personal projects:

Journal du hacker:

The Journal du hacker is a French-speaking Hacker News-like website dedicated to the French-speaking Free and Open Source Software community.


That’s all folks! See you next month!

Krebs on SecurityNew Mirai Worm Knocks 900K Germans Offline

More than 900,000 customers of German ISP Deutsche Telekom (DT) were knocked offline this week after their Internet routers got infected by a new variant of a computer worm known as Mirai. The malware wriggled inside the routers via a newly discovered vulnerability in a feature that allows ISPs to remotely upgrade the firmware on the devices. But the new Mirai malware turns that feature off once it infests a device, complicating DT’s cleanup and restoration efforts.

Security experts say the multi-day outage is a sign of things to come as cyber criminals continue to aggressively scour the Internet of Things (IoT) for vulnerable and poorly-secured routers, Internet-connected cameras and digital video recorders (DVRs). Once enslaved, the IoT devices can be used and rented out for a variety of purposes — from conducting massive denial-of-service attacks capable of knocking large Web sites offline to helping cybercriminals stay anonymous online.

An internet-wide scan conducted by suggests there may be more than five million devices vulnerable to the exploit that caused problems for so many DT customers this week. Image:

This new variant of Mirai builds on malware source code released at the end of September. That leak came a little more than a week after a botnet based on Mirai was used in a record-sized attack that caused KrebsOnSecurity to go offline for several days. Since then, dozens of new Mirai botnets have emerged, all competing for a finite pool of vulnerable IoT systems that can be infected.

Until this week, all Mirai botnets scanned for the same 60+ factory default usernames and passwords used by millions of IoT devices. But the criminals behind one of the larger Mirai botnets apparently decided to add a new weapon to their arsenal, incorporating exploit code published earlier this month for a security flaw in specific routers made by Zyxel and Speedport.

These companies act as original equipment manufacturers (OEMs) that specialize in building DSL modems that ISPs then ship to customers. The vulnerability exists in communications protocols supported by the devices that ISPs can use to remotely manage all of the customer-premises routers on their network.

According to BadCyber, which first blogged about the emergence of the new Mirai variant, part of the problem is that Deutsche Telekom does not appear to have followed the best practice of blocking the rest of the world from remotely managing these devices as well.

“The malware itself is really friendly as it closes the vulnerability once the router is infected,” BadCyber noted. “It performs [a] command which should make the device ‘secure,’ until next reboot. The first one closes port 7547 and the second one kills the telnet service, making it really hard for the ISP to update the device remotely.” [For the Geek Factor 5 readership out there, the flaw stems from the way these routers parse incoming traffic destined for Port 7547 using communications protocols known as TR-069].
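As a purely defensive illustration (this code is not from the article), one can check from inside a network whether a device still answers on TCP port 7547, the TR-069/TR-064 management port this Mirai variant attacks. The gateway address in the usage note is an assumption; substitute your own router's LAN address:

```python
import socket

def port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out, or host unreachable
        return False

# Example usage (address is an assumption -- use your own gateway):
#   port_open("192.168.1.1", 7547)
```

Note that per the quote above, an *infected* router may report the port closed until its next reboot, so a closed port here is not proof of health.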

DT has been urging customers who are having trouble to briefly disconnect and then reconnect the routers, a process which wipes the malware from the device’s memory. The devices should then be able to receive a new update from DT that plugs the vulnerability.

That is, unless the new Mirai strain gets to them first. Johannes Ullrich, dean of security research at The SANS Technology Institute, said this version of Mirai aggressively scans the Internet for new victims, and that SANS’s research has shown vulnerable devices are compromised by the new Mirai variant within five to ten minutes of being plugged into the Internet.

Ullrich said the scanning activity conducted by the new Mirai variant is so aggressive that it can create hangups and crashes even for routers that are not vulnerable to this exploit.

“Some of these devices went down because of the sheer number of incoming connections” from the new Mirai variant, Ullrich said. “They were listening on Port 7547 but were not vulnerable to this exploit and were still overloaded with the number of connections to that port.”

A Deutsche Telekom Speedport DSL modem.


Allison Nixon, director of security research at Flashpoint, said this latest Mirai variant appears to be an attempt to feed fresh victims into one of the larger and more established Mirai botnets out there today.

Nixon said she suspects this particular botnet is being rented out in discrete chunks to other cybercriminals. Her suspicions are based in part on the fact that the malware phones home to a range of some 256 Internet addresses that for months someone has purchased for the sole purpose of hosting nothing but servers used to control multiple Mirai botnets.

“The malware points to some [Internet addresses] that are in ranges which were purchased for the express purpose of running Mirai,” Nixon said. “That range does nothing but run Mirai control servers on it, and they’ve been doing it for a while now. I would say this is probably part of a commercial service because purchasing this much infrastructure is not cheap. And you generally don’t see people doing this for kicks, you see them doing it for money.”

Nixon said the criminals behind this new Mirai variant are busy subdividing their botnet — thought to be composed of several hundred thousand hacked IoT devices — among multiple, distinct control servers. This approach, she said, addresses two major concerns among cybercriminals who specialize in building botnets that are resold for use in huge distributed denial of service (DDoS) attacks.

The first is that extended DDoS attacks which leverage firepower from more bots than are necessary to take down a target host can cause the crime machine’s overall bot count to dwindle more quickly than the botnet can replenish itself with newly infected IoT devices — greatly diminishing the crime machine’s strength and earning power.

“I’ve been watching a lot of chatter in the DDoS community, and one of the topics that frequently comes up is that there are many botnets out there where the people running them don’t know each other, they’ve just purchased time on the botnet and have been assigned specific slots on it,” Nixon said. “Long attacks would end up causing the malware or infected machines to crash, and would end up killing the botnet if it was overused. Now it looks like someone has architected a response to that concern, knowing that you have to preserve bots as much as you can and not be excessive with the DDoS traffic you’re pushing.”

Nixon said dividing the Mirai botnet into smaller sections which each answer to multiple control servers also makes the overall crime machine more resistant to takedown efforts by security firms and researchers.

“This is an interesting development because a lot of the response to Mirai lately has been to find a Mirai controller and take it down,” Nixon said. “Right now, the amount of redundant infrastructure these Mirai actors have is pretty significant, and it suggests they’re trying to make their botnets more difficult to take down.”

Nixon said she worries that the aggressive Mirai takedown efforts by the security community may soon prompt the crooks to adopt far more sophisticated and resilient methods of keeping their crime machines online.

“We have to realize that the takedown option is not going to be there forever with these IoT botnets,” she said.

Planet DebianJoey Hess: drought

Drought here since August. The small cistern ran dry a month ago, which has never happened before. The large cistern was down to some 900 gallons. I don't use anywhere near the national average of 400 gallons per day. More like 10 gallons. So could have managed for a few more months. Still, this was worrying, especially as the area moved from severe to extreme drought according to the US Drought Monitor.

Two days of solid rain fixed it, yay! The small cistern has already refilled, and the large will probably be full by tomorrow.

The winds preceding that same rain storm fanned the flames that destroyed Gatlinburg. Earlier, fire got within 10 miles of here, although that may have been some kind of controlled burn.

Climate change is leading to longer duration weather events in this area. What tended to be a couple of dry weeks in the fall, has become multiple months of drought and weeks of fire. What might have been a few days of winter weather and a few inches of snow before the front moved through has turned into multiple weeks of arctic air, with multiple 1 ft snowfalls. What might have been a few scorching summer days has become a week of 100-110 degree temperatures. I've seen all that over the past several years.

After this, I'm adding "additional, larger cistern" to my todo list. Also "larger fire break around house".

Google AdsenseAdSense help, when and where you need it

Whether you need help urgently or just want to learn, AdSense provides different ways to get help when you need it. In this post we'll share the different ways we offer support to our AdSense partners.

Did you know you can get help on any AdSense issue from within your AdSense account using the help widget? You can find the help widget by clicking on the Help button on the upper right corner of your AdSense account. This will take you directly to informative articles related to the topic or issue you provide.

We hope that this widget will help solve your problems directly within your AdSense account, eliminating the need to switch back and forth between tasks.

Additionally, if you consistently earn more than $25 per week (or the local equivalent), you may be eligible to email the AdSense support team. If you don’t meet the earnings threshold, you can still get help through the issue-based troubleshooters in the AdSense Help Center or by using these relevant resources:
The AdSense support team is here to help so you can continue to focus on creating amazing content for your audience. Use the support resources noted above when you require assistance and let us know on Twitter or Google+ how we can improve your support experience.

Posted by Melina Lopez, from the AdSense team

Planet DebianChris Lamb: Free software activities in November 2016

Here is my monthly update covering what I have been doing in the free software world (previous month):

  • Started work on a Python API to the UK Postbox mail scanning and forwarding service. (repo)
  • Lots of improvements to, my experiment into how to process, store and distribute .buildinfo files after the Debian archive software has processed them, including making GPG signatures mandatory (#7), updating to sign them and moving to SSL.
  • Improved the Django client to the KeyError error tracking software, enlarging the test coverage and additionally adding support for grouping errors using a context manager.
  • Made a number of improvements to, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds on every code change:
    • Install build-dependencies with debugging output. Thanks to @waja. (#31)
    • Install Lintian by default. Thanks to @freeekanayaka. (#33).
    • Call mktemp with --dry-run to avoid having to delete it later. (commit)
  • Submitted a pull request to Wheel (a utility to package Python libraries) to make the output of METADATA files reproducible. (#73)
  • Submitted some miscellaneous documentation updates to the Tails operating system. (patches)

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to permit verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.
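The core idea can be illustrated with a toy example (the "build" step below is a stand-in, not any real compiler): two independent builds of the same source must produce byte-identical artifacts, so independent parties can compare cryptographic digests and reach consensus:

```python
import hashlib

def build(source: str) -> bytes:
    # Stand-in for a deterministic build step; a real toolchain must avoid
    # embedding timestamps, build paths, locale, or other environment details
    return source.upper().encode("utf-8")

source = "print('hello')"

# Two independent "rebuilds" of the same source...
digest_a = hashlib.sha256(build(source)).hexdigest()
digest_b = hashlib.sha256(build(source)).hexdigest()

# ...must yield identical digests, or something non-deterministic crept in
assert digest_a == digest_b
```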

This month:

My work in the Reproducible Builds project was also covered in our weekly reports. (#80, #81, #82 & #83)

Toolchain issues

I submitted the following patches to fix reproducibility-related toolchain issues with Debian:


strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build. runs our comprehensive testing framework.

  • has moved to SSL. (ac3b9e7)
  • Submit signing keys to keyservers after generation. (bdee6ff)
  • Various cosmetic changes, including
    • Prefer if X not in Y over if not X in Y. (bc23884)
    • No need for a dictionary; let's just use a set. (bf3fb6c)
    • Avoid DRY violation by using a for loop. (4125ec5)
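As a toy illustration of the Python style points above (not the actual patched code):

```python
# `x not in y` is equivalent to `not x in y`, just easier to read
items = {"a", "b", "c"}  # a set: membership is all we need, no dummy values

assert ("d" not in items) == (not "d" in items)

# Avoiding a DRY violation: one loop instead of repeated
# near-identical statements
results = [item.upper() for item in sorted(items)]
```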

I also submitted 9 patches to fix specific reproducibility issues in apktool, cairo-5c, lava-dispatcher, lava-server, node-rimraf, perlbrew, qsynth, tunnelx & zp.


Debian LTS

This month I have been paid to work 11 hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Issued DLA 697-1 for bsdiff fixing an arbitrary write vulnerability.
  • Issued DLA 705-1 for python-imaging correcting a number of memory overflow issues.
  • Issued DLA 713-1 for sniffit where a buffer overflow allowed a specially-crafted configuration file to provide a root shell.
  • Issued DLA 723-1 for libsoap-lite-perl preventing a Billion Laughs XML expansion attack.
  • Issued DLA 724-1 for mcabber fixing a roster push attack.


  • redis:
    • 3.2.5-2 — Tighten permissions of /var/{lib,log}/redis. (#842987)
    • 3.2.5-3 & 3.2.5-4 — Improve autopkgtest tests and install upstream's MANIFESTO and documentation.
  • gunicorn (19.6.0-9) — Adding autopkgtest tests.
  • libfiu:
    • 0.94-1 — Add autopkgtest tests.
    • 0.95-1, 0.95-2 & 0.95-3 — New upstream release and improve autopkgtest coverage.
  • python-django (1.10.3-1) — New upstream release.
  • aptfs (0.8-3, 0.8-4 & 0.8-5) — Adding and subsequently improving the autopkgtest tests.

I performed the following QA uploads:

Finally, I also made the following non-maintainer uploads:

  • libident (0.22-3.1) — Move from obsolete Source-Version substvar to binary:Version. (#833195)
  • libpcl1 (1.6-1.1) — Move from obsolete Source-Version substvar to binary:Version. (#833196)
  • pygopherd ( — Move from obsolete Source-Version substvar to ${source:Version}. (#833202)

RC bugs

I also filed 59 FTBFS bugs against arc-gui-clients, asyncpg, blhc, civicrm, d-feet, dpdk, fbpanel, freeciv, freeplane, gant, golang-github-googleapis-gax-go, golang-github-googleapis-proto-client-go, haskell-cabal-install, haskell-fail, haskell-monadcatchio-transformers, hg-git, htsjdk, hyperscan, jasperreports, json-simple, keystone, koji, libapache-mod-musicindex, libcoap, libdr-tarantool-perl, libmath-bigint-gmp-perl, libpng1.6, link-grammar, lua-sql, mediatomb, mitmproxy, ncrack, net-tools, node-dateformat, node-fuzzaldrin-plus, node-nopt, open-infrastructure-system-images, photofloat, ppp, ptlib, python-mpop, python-mysqldb, python-passlib, python-protobix, python-ttystatus, redland, ros-message-generation, ruby-ethon, ruby-nokogiri, salt-formula-ceilometer, spykeviewer, sssd, suil, torus-trooper, trash-cli, twisted-web2, uftp & wide-dhcpv6.

FTP Team

As a Debian FTP assistant I ACCEPTed 70 packages: bbqsql, coz-profiler, cross-toolchain-base, cross-toolchain-base-ports, dgit-test-dummy, django-anymail, django-hstore, django-html-sanitizer, django-impersonate, django-wkhtmltopdf, gcc-6-cross, gcc-defaults, gnome-shell-extension-dashtodock, golang-defaults, golang-github-btcsuite-fastsha256, golang-github-dnephin-cobra, golang-github-docker-go-events, golang-github-gogits-cron, golang-github-opencontainers-image-spec, haskell-debian, kpmcore, libdancer-logger-syslog-perl, libmoox-buildargs-perl, libmoox-role-cloneset-perl, libreoffice, linux-firmware-raspi3, linux-latest, node-babel-runtime, node-big.js, node-buffer-shims, node-charm, node-cliui, node-core-js, node-cpr, node-difflet, node-doctrine, node-duplexer2, node-emojis-list, node-eslint-plugin-flowtype, node-everything.js, node-execa, node-grunt-contrib-coffee, node-grunt-contrib-concat, node-jquery-textcomplete, node-js-tokens, node-json5, node-jsonfile, node-marked-man, node-os-locale, node-sparkles, node-tap-parser, node-time-stamp, node-wrap-ansi, ooniprobe, policycoreutils, pybind11, pygresql, pysynphot, python-axolotl, python-drizzle, python-geoip2, python-mockupdb, python-pyforge, python-sentinels, python-waiting, pythonmagick, r-cran-isocodes, ruby-unicode-display-width, suricata & voctomix-outcasts.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files against node-cliui, node-core-js, node-cpr & node-grunt-contrib-concat.

Planet DebianJonas Meurer: debian lts report 2016.11

Debian LTS report for November 2016

November 2016 was my third month as a Debian LTS team member. I was allocated 11 hours and had 1.75 hours left over from October, making a total of 12.75 hours. In November I spent all 12.75 hours (and even a bit more) preparing security updates for spip, memcached and monit.

In particular, the updates of spip and monit took a lot of time (each one more than six hours). The patches for both packages were horrible to backport as the affected codebase changed a lot between the Wheezy versions and current upstream versions. Still it was great fun and I learned a lot during the backporting work. Due to the intrusive nature of the patches, I also did much more extensive testing before uploading the packages, which took quite a bit of time as well.

Monit 5.4-2+deb7u1 is not uploaded to wheezy-security yet as I decided to ask for further review and testing on the debian-lts mailinglist first.

Below follows the list of items I worked on in November in the well known format:

  • DLA 695-1: several XSS, CSRF and code execution flaws fixed in spip 2.1.17-1+deb7u6
  • DLA 701-1: integer overflows, buffer over-read fixed in memcached 1.4.13-0.2+deb7u2
  • CVE-2016-7067: backported CSRF protection to monit 5.4-2+deb7u1

Google AdsenseHow to earn money blogging with AdSense

This is the first of five guest posts from AdSense publisher Brandon Gaille. Brandon has built his small business marketing blog,, to over 2 million monthly visitors in less than three years. He’s featured as our guest blogger to share insights and tips from his personal blogging experience to help AdSense publishers grow earnings. If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit. 

Blogging is one of the easiest ways to build a residual income with Google AdSense. However, most bloggers are doing it the wrong way, and that’s keeping them from growing their earnings to a whole new level. Today, I’m going to share with you the four pillars that helped me build my blog traffic to over 1 million monthly visitors in less than 18 months after my first blog post.

My blogging success story is rather unique. For most of my thirties, I was mentally and physically disabled because of damage done by a small pituitary brain tumor. I was fortunate enough to find a doctor that identified the right combination of medicine to bring me back from the depths of nowhere. My mental cognition was regained mere months before my pregnant wife was diagnosed with stage 3 breast cancer. I was able to be there for my wife. Our first son was born healthy, and my wife officially beat cancer two months later.


The fear of our health problems returning led me down the road of creating a blog. One of my top skills is reverse engineering successful systems and rebuilding them into a more productive system. Before my health was ravaged, I had built several multi-million dollar companies on the back of this unique skillset.

Before I made my first blog post, I spent six months researching the blogs that received the most traffic from Google organic search. I identified the specific tactics from over 70 high traffic blogs. Then I ranked the tactics by the most productive, and I eliminated the bottom 80%. This is what I built my blogging system upon. Within four months of launching the blog, I had surpassed 100,000 monthly visitors. Today, my blog receives over 2 million monthly visitors.

Here are the four pillars that my system was built upon:

Pillar #1 – Keyword research

Most amateur bloggers fail miserably at keyword research. The reason for this is that they are overwhelmed by all of the data, and they are usually using the wrong tools. I will be breaking down my simple system for identifying keyword phrases that serve as the topics and titles of future blog posts. This will allow you to blog with a purpose. And that purpose is to create blog posts that consistently produce organic traffic.

Pillar #2 – Compelling blog titles

You can write an epic 4,000-word post, and it can be doomed to failure because of a poorly chosen title. The post title is a very important part of the post, and often overlooked. I'll be revealing my Perfect Title Formula, which will allow you to craft blog headlines that drive a ridiculous amount of traffic and social shares.

Pillar #3 – Engaging content 

Over the last three years, I’ve perfected my blog’s ability to engage new visitors. The average visitor spends 5 minutes reading one of my blog posts. I will be sharing the eleven techniques that I apply to my blog posts to achieve absolute engagement.

Pillar #4 – Getting High Quality Links

The key to remember here is quality over quantity. Do not waste your time chasing low quality links or adding your blog to a directory. There are two strategies that work better than everything else, and I will show you exactly how to execute them.

Over the next four weeks I’ll be sharing tips on how to increase your AdSense earnings right here on the Inside AdSense blog. In the meantime, go here to keep reading “How to Build a Blog to Over 1 Million Monthly Visitors” and find out how to apply the four pillars to your blog.

Posted By
Brandon Gaille

Brandon Gaille is an AdSense publisher. You can learn more about Brandon at and listen to his popular blogging podcast, The Blog Millionaire.

Google AdsenseBest Practices to avoid policy violations

We’re dedicated to providing additional transparency into our policy processes and hope that the recent blog posts have helped you understand specific policy triggers and the actions to take if you’ve violated a policy. To further help you stay policy compliant, here are 8 best practices to help avoid policy violations and keep your account in good standing.

1. Don’t click on your own ads
Don’t click your own ads, or ask others to click them. These kinds of clicks won’t count toward revenue and may get you suspended. Even if you’re interested in an ad or looking for its destination URL, clicking on your own ads is prohibited. Instead, use the Google Publisher Toolbar.

2. Think like a user 
Make it easy for people to find what they’re looking for. Follow the Webmaster Guidelines to provide content that’s useful, interesting, and adds value. Immerse yourself into the user experience however you can. Try to discover the emotions that guide users’ behaviors and try to uncover their needs. 

3. Keep it family-friendly and legal Review our guidelines about prohibited content and be sure you understand them. If you wouldn’t want a child or grandparent to see it, don’t put it on your site. We’ve made a commitment to our users, advertisers and publishers to keep the AdSense network family-safe. A general rule of thumb when it comes to our policies is: if you wouldn’t want to share this content at a family dinner, or view it at your boss’s office, you shouldn’t place AdSense code on it.

4. Maximize content, not ads per page Create new, relevant, interesting content, and update it regularly. Also, be sure to maintain a good balance between ads and editorial content as it’s important to ensure that there’s always more content than ads on a page.

5. Avoid deceptive layoutsKeep ads away from games, slideshows, and other click-heavy content and don’t place them near images. Publishers may not use deceptive implementation methods to obtain clicks. This includes, but is not limited to: placing images next to individual ads, placing ads in a floating box script, formatting ads so that they become indistinguishable from other content on the page, formatting content so that it is difficult to distinguish it from ads and placing misleading labels above Google ad units.

6. Create unique content
Your content needs to create added value for your users. Focus on making content great – not duplicating it across pages. Unique and valuable content is what keeps users coming back to publisher sites. Everything you do as a publisher should be user focused, which primarily includes developing great content.

7. Track your traffic
Your traffic should be organic. Set up alerts using Google Analytics to quickly identify unusual traffic patterns. Many potential traffic quality problems can be addressed quickly by monitoring your own traffic. Traffic anomalies are often indicators of potential invalid traffic activity.
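
Monitoring doesn’t need to be fancy to catch the obvious cases. As a purely illustrative sketch (this is not a Google Analytics API, and the function name is invented), a few lines of Python show the idea behind a spike alert: flag days whose traffic deviates sharply from the overall average.

```python
from statistics import mean, stdev

def unusual_days(daily_visits, z_threshold=2.0):
    """Return indices of days whose visit counts sit far above the average.

    A crude z-score check: good enough to illustrate why watching your
    own traffic catches problems (e.g. a bot hammering one page) early.
    """
    if len(daily_visits) < 2:
        return []
    mu = mean(daily_visits)
    sigma = stdev(daily_visits)
    if sigma == 0:
        return []  # perfectly flat traffic has no outliers
    return [i for i, v in enumerate(daily_visits)
            if (v - mu) / sigma > z_threshold]
```

Fed a week of normal numbers plus one suspicious day – say `[100, 105, 98, 102, 101, 99, 103, 100, 500]` – it flags only the final day; the threshold is a knob you would tune against your own site’s baseline.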

8. Follow the Code Implementation Guide
Always follow the Code Implementation Guide and don’t try to modify the AdSense code. If you run into a problem, visit the Troubleshooting page or contact publisher support.

Thanks for taking the time to learn about Google ad network policy, processes, and best practices. Together, we can continue to make the web and advertising experience great.

Posted by: Anastasia Almiasheva from the AdSense team


Ross Anderson describes DigiTally, a secure payments system for use in areas where there is little or no network connectivity.

Sociological ImagesWeighing the Symbolic Value of the Safety Pin

Originally posted at Race, Politics, Justice.

A few days after Donald Trump won the electoral votes for president, some people started suggesting that pro-immigrant people in the US wear safety pins in emulation of the movement in Britain after Brexit to signal support for immigrants. A social media debate quickly ensued about what this might mean, some asserting that the safety pin meant that an immigrant could view one as a “safe” White person, some ridiculing the exercise as a “feel-good” effort by Whites to distance themselves from the White nationalist vote, some interpreting its meaning as “I don’t agree with Trump.” (This latter interpretation was offered by both pro- and anti-Trump people.)


My entirely unsystematic observations were that it was African Americans who were mostly negative and White liberals (like me) who were trying to figure out what the “meaning” of the pin would turn out to be. I’m not sure what immigrants thought about safety pins, although I know they are generally frightened by the election results.

Through a neighborhood email newsletter I learned that a family in the area received a racist hate letter using the N-word after the election and that a resident who is also a minister ordered a bunch of yard signs that say “No matter where you’re from, we’re glad you’re our neighbor” in English, Spanish and Arabic. I bought one and will put it in my yard. I really don’t know how this action will be viewed by actual immigrants.

There are some non-Muslim women who have taken to wearing scarves as a symbol of solidarity with Muslims (one story circulating talks about attacks on a non-Muslim woman who was wearing a scarf due to hair loss from cancer treatment), an action that has received (so far as I know) little endorsement from Muslims and some responses that say that this subtracts from the religious symbolism of wearing hijab. After Trayvon Martin was killed, many Black people put up pictures of themselves in a hoodie with “I am Trayvon Martin,” but also often objected when Whites did the same, because the point was that a White person in a hoodie was not treated the same.

In the 1990s, Madison had a flurry of protests and counter-protests in which out-of-town anti-gay protesters were picketing pro-gay churches. Many Madison residents, including me, put up yard signs distributed primarily through churches that said “Madison supports its gays and lesbians.” About the same time, the KKK came through, and we also put up “Let your Light Shine, Fight Racism” signs in our yards. (I recall having both in my yard in the same winter.) Also in the 1990s, many of us wore rainbow ribbons (I kept mine pinned to my purse so I didn’t have to remember to put it on), again as a symbol of support for gays and lesbians. During the first Gulf War, Madison’s lawns often featured either anti-war signs or “support our troops” signs or, often, both. Earlier this year, after a lot of Black Lives Matter protests here as well as around the country, in addition to the relatively small number of yard signs or flags supporting BLM, some streets blossomed the “Support our Police” yard signs. And, of course, yard signs are a staple of political campaigns, most Decembers see a flurry of “Keep Christ in Christmas” yard signs, and Wisconsin Badger and Green Bay Packer pennants fly all around town on particular weekends.

So how should we think about these visible symbols and the varying reactions they elicit?

Let’s begin with the obvious. Symbols are symbols, and displaying a symbol is not the same thing as showing up for a protest or taking other active steps to pursue social policies you believe in. Wearing or displaying some sort of symbol of support for a minority is not the same thing as being a minority, nor will the symbol necessarily be interpreted by others in the way it is meant. This does not make symbols meaningless. They are visible symbols of adherence to some cause or belief system and, as such, open the wearer to reactions from others. But, as symbols, they are subject to multiple interpretations and their meaning varies with context. So those displaying symbols and those viewing others’ displays of symbols need to do interpretive work to understand the symbol and to assess the consequences of displaying it.

If you display or wear a symbol that you are sure others around you will approve of, you have little to lose from the symbol and something to gain. Signaling support for a cause the majority supports signals your affiliation with the majority. Supporting a beleaguered minority in a context where the majority is at least tolerant is also a low-cost gesture. When I displayed pro-gay ribbons and yard signs, I had no expectation of negative reaction, and I doubt any other straight person in Madison did either.

But that does not mean it was meaningless. Gays and lesbians I knew personally were feeling attacked and the visible support was meaningful to them. The signs and ribbons were passed out at church by people I knew. In that context, I could either display the symbol or not display it but, either way, my action would be interpreted as having meaning. I felt the same way about this latest “welcome neighbor” sign. When confronted with the question, I could either put up a sign or not put up a sign, but either choice carried meaning. I know of at least some instances in the 1990s in which gay and lesbian people stated that the signs made them feel supported and better about living in Madison. Of course, you can “do” support without yard signs or ribbons. After 9/11, Christian churches and Jewish congregations reached out to Muslim congregations (and Muslim congregations for their parts held open houses) and Muslims generally felt supported in Madison, even without yard signs or ribbons.

In places where the symbol is low cost, one can justly be suspected of displaying the symbol just to go along with the majority or as a low cost way of feeling good about a problem you don’t plan to do anything more about.

The same yard signs and ribbons (or safety pins) in some areas would not be safe gestures but would open up a person to verbal or physical assaults, or worse. Whites who visibly supported Blacks in the old rural South or Chicago’s segregated White neighborhoods in the 1950s were violently attacked and had their houses bombed. Displaying pro-gay symbols in areas dominated by conservative Christians in the 1990s could lead to hostile interactions. Even displaying the wrong sports team colors can get you hurt in some contexts.

Displaying a symbol where you know you are an opinion minority, and especially where it opens you to attack, is a very different gesture than where it is safe. In these contexts, it is an act of dissent. It is especially meaningful to dissent visibly in contexts where a dangerous segment of the majority feels empowered to commit violence against minorities. In these contexts, the symbol does not necessarily mean “I am a safe person” but “I am willing to draw the attention of dangerous people” or “not everybody supports those people.” If the intent is actually to shelter minorities from violence, the goal usually is to get as many people as possible to wear the symbol of dissent, to signal to those who intend violence that they cannot act with impunity and cannot count on community support.

Conversely, yard signs and other symbols are sometimes used by majorities to coerce compliance or intimidate minorities. Pro-police, pro-KKK, anti-gay, anti-immigrant symbols and yard signs signal to minorities that they are not safe in the area. When you know that you are in an area where your views are contested, your visible symbol chooses sides.

Another dimension is the clarity or ambiguity of a symbol. This also is contextual. In the US today, it is not quite clear what a safety pin is supposed to signal. Does it merely signal opposition to violent attacks on minorities, or does it also signal opposition to deportations and registries? Can I assume that a safety pin wearer supports DACA and keeping DACA students in the US?  Does a safety pin also mean the wearer supports Black Lives Matter? Expanded immigration policies? Or is it merely a signal that one voted Democratic and is vaguely against “hate”? Or that the person voted for Trump (or Stein?) and wants to disguise the fact in a liberal area? In the late 1960s during the anti-war movement I once tied a white scarf to the sleeve of my dark jacket when biking at night across campus so I could be seen. Several people stopped and asked me what my white scarf “meant.” Was it a new anti-war symbol? If so, they did not want to be late to adopt.

But non-verbal symbols can come to have very clear meanings. In Britain, the safety pin has a clear meaning, from what I’ve read, although its meaning in the US is not clear. In the US, a spray-painted swastika can safely be assumed to be the work of neo-Nazis meant to intimidate minorities, and not a Hindu religious symbol. Text is often clearer: the phrase “let your light shine, oppose racism” is hopefully a clearer symbol than merely lighting a candle in your window in December, and “Madison supports its gays and lesbians” is also relatively clear. The latest sign about being happy my neighbors are here, written in Spanish and Arabic, also conveys pretty clear meaning in its language choices as well as its content, although it could be criticized for its ambiguity about racism (as the impetus for the signs was a hate letter that used the N-word) and immigration policy (as the sign does not mention document status).

The ambiguity of a symbol can make signaling one’s actual opinions complex. This is a Christian-majority country and there is a strong politicized Christian movement that is affiliated with White nationalism and/or strong anti-abortion sentiments and/or hostility to gays, lesbians, transgender and other sexual minorities and/or hatred of Muslims or, possibly, Jews. This makes any overt Christian symbol (a cross, a crucifix, a “keep Christ in Christmas” yard sign) an ambiguous symbol that is likely to be interpreted both by non-Christians and also Christians one does not know as a symbol of adherence to the Christian Right or at least Republicanism. Muslim women have a similar problem, as their hijab is often interpreted as symbolizing things other than what they think it symbolizes.

The minister who organized the welcome neighbor signs in Madison told reporters that part of his motivation was that as a White Evangelical Christian, he wanted to distance himself from White Evangelical Christians who are advocating messages that he considers hateful. In the 1990s, pro-gay churches similarly sought to distance themselves from the association of Christianity with anti-gay movements.

But even text symbols can “mean” something other than what the user thinks it meant. I interpret the pro-police yard signs in Madison as “meaning” opposition to Black Lives Matter, as I interpret “Blue Lives Matter” to have a similar meaning. I make this interpretation because there were no pro-police signs in Madison before Black Lives Matter, because the only contextual factor that could be construed as anti-police would be Black Lives Matter, and because the last time pro-police signs and bumper stickers were common it was the “Support Your Local Police” bumper sticker campaign launched by the far-right John Birch Society in 1963. In fact, a quick Google search reveals that the JBS has revived this campaign and there is now a movement among police to spread this slogan as opposition to federal attempts to supervise and rein in the excesses of local police. It could be that someone who put up that sign lives next door to a police officer and couldn’t say no when asked to put it up, despite the person’s private support for Black Lives Matter and concern about racial disparities in Madison. But the “meaning” of the sign still encodes opposition to BLM, regardless of private motives. Likewise, some of my neighbors referred to pro-Trump yard signs in the area as evidence of “hate,” a characterization which other neighbors objected to.

Symbols have to be collective to have any meaning at all, and that is why they tend to have a fad-like character and are typically promulgated and distributed by organizations. That is also why people may contest the meaning of symbols. They are superficial and elusive conveyors of meaning. There are no clear guidelines about when to display symbols and how they will be interpreted. But the use of symbols to convey one’s identity and stance with respect to important issues is an important part of how people come to perceive the opinions of those around them. And that is important.

Pamela Oliver, PhD is a professor of sociology at the University of Wisconsin, Madison. Her specialty is collective action and social movements and, since 1999, she has been working intensely on the issue of racial disparities in criminal justice. You can follow her at Race, Politics, Justice.


Google AdsensePut your users first with the four S’: Speed, Scroll, Style, Simple

We’re all consumers of web content, yet as content creators it can be easy to forget what we need as users. But don’t worry: you’ve got this, and we’ve got you covered with just four S’s.

 If you’re new to AdSense, be sure to sign up today and start turning your #PassionIntoProfit. 

1. Speed 

We all know how frustrating it is when a page takes forever to load. We twiddle our thumbs and look from side to side. And after just three seconds, we bounce.

But somehow publishers aren’t responding to this primal need that we all know as users.

According to Google's research from the Mobile Speed Matters report, the average load time for mobile sites across the web is 19 seconds. This is a LONG time. Usain Bolt can run 200m in 19.9s - think of what your users can do with a tap and a swipe.

But how does this impact me? Well, the report also states that…

  • 53% of mobile site visits are abandoned if pages take longer than 3 seconds to load.
  • Publishers whose mobile sites load in 5 seconds earn up to 2x more mobile ad revenue than those whose sites load in 19 seconds. 

By now we think you’re sold on speed. So what’s next?

2. Scroll

The magic scroll. It’s an infinite, endless, perfectly loaded stream of content. There’s no need to click, to wait for a page to load, to navigate to that tiny ‘next’ with your giant thumb. It’s all right here, content, just waiting for you to consume it.

There are, of course, a few caveats before developing an infinite scroll. Like almost everything online, this isn’t a one-size-fits-all solution.

Infinite scroll is great for ...
  • UGC publishers with constantly evolving content - think Tumblr, Facebook, Pinterest.
  • Sites with lengthy articles or tutorials. No one wants to click ‘more’ or ‘page 2’ anymore. It’s just too dang hard. 
  • Publishers using a slideshow with pagination. Consider a lazy loaded infinite scroll instead. Users love it. 
  • Publishers considering mobile first (aren’t we all?!).
Watch out for … 
  • Crawler errors & SEO impact and check out this article for creating a search friendly infinite scroll. 
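
The trigger logic behind a lazy-loaded infinite scroll is the same regardless of framework: when the user nears the bottom, fetch the next page, and guard against firing duplicate requests. Here is a hedged sketch of just that decision logic; the class and function names are invented for illustration, and a real implementation would hook this into the browser’s scroll events (or an IntersectionObserver) rather than call it directly.

```python
def should_load_next_page(scroll_top, viewport_height, content_height, threshold=600):
    """Return True when the user is within `threshold` pixels of the bottom."""
    remaining = content_height - (scroll_top + viewport_height)
    return remaining <= threshold


class InfiniteFeed:
    """Minimal sketch of a lazy-loaded feed: fetch one page at a time, never twice."""

    def __init__(self, fetch_page):
        self.fetch_page = fetch_page  # callable: page number -> list of items
        self.next_page = 1
        self.loading = False
        self.items = []

    def on_scroll(self, scroll_top, viewport_height, content_height):
        # Guard against firing a second request while one is in flight.
        if self.loading:
            return
        if should_load_next_page(scroll_top, viewport_height, content_height):
            self.loading = True
            try:
                self.items.extend(self.fetch_page(self.next_page))
                self.next_page += 1
            finally:
                self.loading = False
```

The `threshold` is the knob that makes the scroll feel “infinite”: set it generously and the next batch is already loaded before the user ever sees the bottom of the page.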

3. Style 

Style should never be an afterthought. You and your users want to interact with something that looks good and feels good. 

There are two primary components to style: content style & ad style. 

First: Content Style 

Great websites are able to maintain a consistent style  across pages and platforms. Consistency gives users a sense of familiarity when interacting with your content. 
  • Choose a color scheme and stick to it 
  • Choose a layout and stick to it 
  • Choose a theme and stick to it 
We can’t stress this enough - stick to it. 

As the industry continues to migrate towards a mobile first perspective, consistency across device types and platforms becomes increasingly important. Responsive web design enables your site to adapt to various device sizes without changing the overall look and feel or compromising user experience. 

If you're up for the challenge, check out more on responsive design. 

Second: Ad Style 

In the internet of yesteryear it was nearly impossible to monetize without stripping a site of what made it beautiful. The good news? It’s 2016 and now you have the ability to make a profit and maintain your site’s style. 

When implementing ads think about what makes sense for you and your users.
Here's a sample of a native ad design.
Most importantly, use ads to complement the content of your site. Since content is king, it’s important to give your users what they’re looking for in a format that’s easy to find and navigate; this includes the ads on your site.

Place ads at natural breaks or where the user’s attention may have waned. Not only will this improve user experience but it also may encourage a higher CTR and increased audience engagement.

4. Simple

Keep it simple, folks. 

This rule underlies almost everything targeted at consumers, but it is even more important for a mobile-first audience.

When it comes to consuming digital content, we’re a generation of hungry hippos. We want headlines, snippets, concise and clear information. We want minimalist design with streamlined content and easy navigation.

Tips on keeping it simple
  • Make it touch friendly. What’s easier than that?
  • Bullet points make your content easily consumable 
  • Be brief in sign-ups. If your site requires users to sign-up or sign-in, keep input requirements to a minimum or consider adding a Google sign-in option to speed up the process 

So there you have it: the four S’s of user experience: speed, scroll, style, simple. If you’re new to AdSense, be sure to sign up today and start turning your #PassionIntoProfit.

Posted by: Sarah Hornsey, from the AdSense team

Worse Than FailureCodeSOD: Trimming the Fat

There are certain developers who don’t understand types. Frustrated, they fall back on the one data type they understand: strings. Dates are hard, so put them in strings. Numbers aren’t hard, but they often exist in text boxes, so make them strings. Booleans? Well, we’ve come this far: strings it is.

Tyisha has the displeasure of working with one such developer, but with a twist- they didn’t really understand strings, either. Tyisha only supplied a small example:

string RequestDate = dteRequestTB.Text.ToString().Trim();
string valid = string.Empty;
string Format = "yy/MM/dd".Trim();
valid = Dates.IsDateValid(RequestDate.Trim(), Format.Trim()).ToString().Trim();
if (valid == "False".Trim()) {

Now, IsDateValid does more or less what you’d expect: it takes a date (as a string) and a format (as a string), and returns whether or not the input date matches the format (as a boolean).

Tyisha’s co-worker, of course, converts it into a string before comparing. This is dumb, but nothing we haven’t seen before. For a bonus, there’s absolutely no consistency in variable naming conventions: there’s a mix of Pascal case, lower case, and a splash of Hungarian notation.

The real magic here, however, is that this co-worker isn’t simply happy calling ToString on everything, but instead needs to also Trim those strings. In fact, they call Trim on every string. Everywhere.
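
For contrast, here is roughly what the same check looks like when each value keeps its natural type. This is a sketch in Python rather than the original C# (the real IsDateValid isn’t shown, so its behavior is assumed, and `%y/%m/%d` stands in for the "yy/MM/dd" format string): parse once, get a real boolean back, and trim only the one thing that might actually carry stray whitespace.

```python
from datetime import datetime

def is_date_valid(text, fmt):
    """Return an actual boolean -- not the string "False"."""
    try:
        datetime.strptime(text, fmt)
        return True
    except ValueError:
        return False

# Trim the user input (the only plausible source of whitespace),
# and never round-trip the result through a string:
request_date = "  16/11/28  ".strip()
if not is_date_valid(request_date, "%y/%m/%d"):
    print("invalid date")
```

No `ToString()`, no comparison against `"False"`, and exactly one trim, on the value that came from a text box.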

[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.

Planet DebianArturo Borrero González: Creating a team for netfilter packages in debian

Debian - Netfilter

There are about 15 Netfilter packages in Debian, and they are maintained by separate people.

Yesterday, I contacted the maintainers of the main packages to propose the creation of a pkg-netfilter team to maintain all the packages together.

The benefits of maintaining packages in a team are already known to all, and I would expect the overall quality of the packages to rise as a result of this move.

By now, the involved packages and maintainers are:

We should probably ping Jochen Friedrich as well, who maintains arptables and ebtables. Also, there are some other non-official Netfilter packages, like iptables-persistent. I’m undecided about what to do with them, as my first impulse is to only put upstream packages in the team.

Given that the release of Stretch is just some months ahead, the creation of this packaging team will happen after the release, so there is no hurry to move things now.


Planet DebianShirish Agarwal: The Iziko South African Museum

This will be a somewhat long post about my stay in Cape Town, South Africa after Debconf16.

Before I start, let me share the gallery; you can see some photos that I have been able to upload there. It seems we are using Gallery 2, while upstream made Gallery 3 and then the project sort of died. I actually asked on the softwarerecs StackExchange site if somebody knew of a drop-in replacement for Gallery and was told about Piwigo. I am sure the admin knows about it. There would be costs to migrate from Gallery to Piwigo, with the benefit that it would perhaps be more maintainable.

The issues I face with the current gallery system are a few things –

a. There is no way to see how far your upload has progressed.
b. After submitting, it gives a fake error message saying some error has occurred. This has happened on every attempt. I don’t know whether it is because I have slow upload speeds or something else altogether. I shared the error page in the last blog post, hence I am not sharing it again.

All the pictures shared in this blog post are from that same gallery 🙂

Another thing I would like to share is a small beginner article I wrote about why I like Debian.

Another interesting tidbit of news I came to know a few days back is that both Singapore and Qatar now give Indians 96-hour visa-free stopovers for select destinations.

Now to start with the story: due to some unknown miracle/angel looking out for me, I got the chance to go to Debconf16, South Africa. I’m sure there were lots of backend discussions, but in the end I was given the opportunity to be part of Debcamp and Debconf. While I hope to recount my Debcamp and Debconf experiences in another blog post or two, this one is exclusively about the post-Debconf experiences I had.

As such opportunities to visit another country are rare, I wanted to make the most of it. Before starting from Pune, I had talked with Amey about visas, about Debconf (as he had been to Debconf15 the year before), and about various things related to travel. He was instrumental in giving me a bit more knowledge about how to approach things. I was also lucky to have both Graham and Bernelle, who suggested, advised and made it possible to have a pleasant stay during both Debcamp and Debconf. The only quibble is that I didn’t know heaters were being made available to us free of cost.

Moving on: a day or two before Debconf was about to conclude, I asked for Bernelle’s help (even though she was battling a burn-out, I believe), as I was totally clueless about Cape Town. She accepted my request and asked me to look at hostels near Longmarket Street. I had two conditions –

a. It should not be very far from the airport
b. It should be near to all or most cultural experiences the city has to offer.

We looked at hostelworld and, from the options listed, Homebasecapetown looked to be a perfect fit. It was one of the cheaper options, and breakfast was included in the pricing. I booked a mixed dorm for 2 days through hostelworld, as I was unsure how it would be (the first-night effect I have written about previously).

When I reached there, I found it to be as good as the pictures suggested: the dorm was clean (most important), people were friendly (also important), the toilets and showers were also clean, and the water was hot, so all in all it was a win-win situation for me.

Posters I saw at homebasecapetown

While I’m not much of an adrenaline junkie, it was nice to know about the activities that could be done.

Brochures and Condoms just left of main hall.

This was again interesting. While I apologize for the poor, shaky quality of the picture, I believe it is easy to figure out. There were brochures of the city attractions, as well as condoms that people could discreetly take if need be. I had seen such condoms in a few toilets during and around Debconf, and it felt good that the public was aware and prioritized safety for guests and students instead of the fake holier-than-thou attitudes many places have.

For instance, you wouldn’t find something like this in the toilets of most colleges in India, or anywhere else for that matter. There are a few vending machines in what are termed ‘red light areas’, where prostitution is known to happen, and even then most times they are empty. I have 2-3 social workers as friends, and they are a source of news on such things.

While I went to a few places and each had its own attraction, the one which literally had my eyes out of their sockets was the ‘Iziko South African Museum’. I have been lucky to have been to quite a few museums in India; the best-rated science museum in India, in my limited experience, has been the ‘Visvesvaraya Industrial & Technological Museum, Bengaluru – India’. A beer from me if a European can get it right.

Don’t worry if you mispronounce it; I mispronounced it a couple of times till I got it right 🙂

Looking up the word ‘Iziko’, its meaning seems to be ‘the hearth’, and if you look at the range of collections in the museum, you would think it fits.

I was lucky to find a couple of friends, one of whom was staying at homebase, and we decided to go to the museum together.

Making friends on the road

So Eduardo (my friend on the left), his friend and I went to the museum. While viewing it, there were no adjectives to describe it other than ‘Wow’ and ‘Endless’.

See –

fossils of fish-whale-shark ?


Giant fish-whale-dolphin-shark some million years ago.


Reminder of JAWS ;)

While I have more than a few pictures, the point is easily made. It seems almost inconceivable that creatures of such mass actually lived on earth. While I played with the model of the jaws of a whale/shark, in reality, if something like that had happened, I would have been fighting for my life.

The only thing I missed, or that could have been better, is some interactive installations to showcase Charles Darwin’s now universally accepted ‘On the Origin of Species’. I had never seen anything like this museum. Sadly, there was nobody around to help us figure things out; I had read that most species of fish don’t leave a skeleton behind, so how were these models made? It just boggles the mind.

Apart from the science exhibits, I was also introduced to the bloody history that South Africa has. I saw –

The 1913 Natives Land Act, which was not honored.

I had been under the impression that India got a raw deal under British rule, but looking at South African history, I don’t know. While we got our freedom in 1947, they only got rid of apartheid 20-odd years ago. I talked to a lot of young African men, and there was a lot of naked hostility towards Europeans even today. It was a bit depressing, but I could relate to their point of view, as similar sentiments were echoed by our forefathers. I read the newspapers, and it seemed to be a pretty mixed picture.

I can’t comment, as only South Africans can figure out the way forward. For me, it was enough to know and see that we had similar political histories as nations. The racial divide and anger towards Europeans seemed much more pronounced and divisive than the caste divisions between Indians here. I also shared with them my limited knowledge and understanding of Indian history (as history is re-written all the time), and it was clear to them that we had similar pasts.

As a result, what was surprising (actually not) is that many South Africans have no knowledge of Indian history either; otherwise, the current political differences between South Africa and India wouldn’t exist.

In the end, the trip proved to be fun, stimulating, educative and thought-provoking, raising questions about self-identity, national identity and our place in the universe, the kind of questions which should be asked all the time.

Thank you Bernelle and the team for letting me experience Cape Town, South Africa; I would have been poorer if I hadn’t had the experience.

Filed under: Miscellenous Tagged: #Debconf16, #Dinosaur Fishes, #gallery, #Identity, #Iziko South African Museum, #Nation-state Identity, #pwigo

Planet Linux AustraliaFrancois Marier: Persona Guiding Principles

Given the impending shutdown of Persona and the lack of a clear alternative to it, I decided to write about some of the principles that guided its design and development, in the hope that they may influence future efforts in some way.

Permission-less system

There was no need for reliers (sites relying on Persona to log their users in) to ask for permission before using Persona. Just like a site doesn't need to ask for permission before creating a link to another site, reliers didn't need to apply for an API key before they got started and authenticated their users using Persona.

Similarly, identity providers (the services vouching for their users identity) didn't have to be whitelisted by reliers in order to be useful to their users.

Federation at the domain level

Just like email, Persona was federated at the domain name level and put domain owners in control. Just like they can choose who gets to manage emails for their domain, they could:

  • run their own identity provider, or
  • delegate to their favourite provider.

Site owners were also in control of the mechanism and policies involved in authenticating their users. For example, a security-sensitive corporation could decide to require 2-factor authentication for everyone or put a very short expiry on the certificates they issued.

Alternatively, a low-security domain could get away with a much simpler login mechanism (including, in at least one case, a "0-factor" mechanism!).

Privacy from your identity provider

While identity providers were the ones vouching for their users' identity, they didn't need to know which websites their users were visiting. This is a potential source of control or censorship, and the design of Persona eliminated it.

The downside of this design, of course, is that it becomes impossible for an identity provider to give their users a list of all the sites where they successfully logged in for audit purposes, something that centralized systems can provide easily.

The browser as a trusted agent

The browser, whether it had native support for the BrowserID protocol or not, was the agent that the user needed to trust. It connected reliers (sites using Persona for logins) and identity providers together and got to see all aspects of the login process.

It also held your private keys and therefore was the only party that could impersonate you. This is of course a power which it already held by virtue of its role as the web browser.

Additionally, since it was the one generating and holding the private keys, your browser could also choose how long these keys were valid, and could vary that amount of time depending on factors like a shared computer environment or Private Browsing mode.

Other clients/agents would likely be necessary as well, especially when it comes to interacting with mobile applications or native desktop applications. Each client would have its own key, but they would all be signed by the identity provider and therefore valid.

Bootstrapping a complex system requires fallbacks

Persona was a complex system which involved a number of different actors. In order to slowly roll this out without waiting on every actor to implement the BrowserID protocol (something that would have taken an infinite amount of time), fallbacks were deemed necessary:

  • client-side JavaScript implementation for browsers without built-in support
  • centralized fallback identity provider for domains without native support or a working delegation
  • centralized verifier until local verification is done within authentication libraries

In addition, to lessen the burden on the centralized identity provider fallback, Persona experimented with a number of bridges to provide quasi-native support for a few large email providers.

Support for multiple identities

User research has shown that many users choose to present a different identity to different websites. An identity system that would restrict them to a single identity wouldn't work.

Persona handled this naturally by linking identities to email addresses. Users who wanted to present a different identity to a website could simply use a different email address. For example, a work address and a personal address.

No lock-in

Persona was an identity system which didn't stand between a site and its users. It exposed email addresses to sites and allowed them to control the relationship with their users.

Sites wanting to move away from Persona can use the email addresses they have to both:

  • notify users of the new login system, and
  • allow users to reset (or set) their password via an email flow.

Websites should not have to depend on the operator of an identity system in order to be able to talk to their users.

Short-lived certificates instead of revocation

Instead of relying on the correct use of revocation systems, Persona used short-lived certificates in an effort to simplify this critical part of any cryptographic system.

It offered three ways to limit the lifetime of crypto keys:

  • assertion expiry (set by the client)
  • key expiry (set by the client)
  • certificate expiry (set by the identity provider)
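A hypothetical verifier shows how the three lifetimes compose: a login is only accepted while all of them hold. The field names below are invented for illustration; the real BrowserID assertion format differs:

```python
import time

def is_login_valid(assertion, certificate, now=None):
    """Accept a login only while the assertion (client), the key (client)
    and the certificate (identity provider) are ALL still within their
    lifetimes. Any one of them expiring rejects the login."""
    now = time.time() if now is None else now
    return (assertion["expires_at"] > now and        # assertion expiry
            assertion["key_expires_at"] > now and    # key expiry
            certificate["expires_at"] > now)         # certificate expiry

now = 1_000_000
assertion = {"expires_at": now + 120, "key_expires_at": now + 3600}
certificate = {"expires_at": now + 86400}
print(is_login_valid(assertion, certificate, now))        # True
print(is_login_valid(assertion, certificate, now + 300))  # False: assertion expired
```

Note how the shortest lifetime (the assertion, here two minutes) dominates in practice, which is what keeps the revocation window small.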

The main drawback of such a pure expiration-based system is the increased window of time between a password change (or a similar signal that the user would like to revoke access) and the actual termination of all sessions. A short expiry can mitigate this problem, but it cannot be eliminated entirely, unlike in a centralized identity system.

Planet DebianReproducible builds folks: Reproducible Builds: week 83 in Stretch cycle

What happened in the Reproducible Builds effort between Sunday November 20 and Saturday November 26 2016:

Reproducible work in other projects

Bugs filed

Chris Lamb:

Daniel Shahaf:

Reiner Herrmann:

Reviews of unreproducible packages

63 package reviews have been added, 73 have been updated and 41 have been removed this week, adding to our knowledge about identified issues.

4 issue types have been added:

Weekly QA work

During our reproducibility testing, some FTBFS bugs have been detected and reported by:

  • Chris Lamb (9)
  • Helmut Grohne (1)
  • Peter De Wachter (1)

strip-nondeterminism development

  • #845203 was fixed in git by Reiner Herrmann - the next release will be able to normalize NTFS timestamps in zip files.
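strip-nondeterminism itself is written in Perl and patches the NTFS extra fields inside zip entries; the general technique of timestamp normalization can be sketched in Python with the standard zipfile module, rewriting every member with one fixed date (as one would derive from SOURCE_DATE_EPOCH):

```python
import io, time, zipfile

def normalize_zip_timestamps(src, dst, epoch):
    """Rewrite a zip archive so every member carries the same fixed
    mtime, making the archive independent of when it was built."""
    fixed = time.gmtime(epoch)[:6]  # (year, month, day, hour, min, sec)
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for info in zin.infolist():
            info.date_time = fixed
            zout.writestr(info, zin.read(info.filename))

# Build a throwaway archive in memory, then normalize it.
buf_in, buf_out = io.BytesIO(), io.BytesIO()
with zipfile.ZipFile(buf_in, "w") as z:
    z.writestr("a.txt", "hello")
    z.writestr("b.txt", "world")
normalize_zip_timestamps(buf_in, buf_out, epoch=1480118400)  # 2016-11-26 UTC
with zipfile.ZipFile(buf_out) as z:
    print({i.filename: i.date_time for i in z.infolist()})
    # {'a.txt': (2016, 11, 26, 0, 0, 0), 'b.txt': (2016, 11, 26, 0, 0, 0)}
```

One caveat worth knowing: the zip format cannot represent dates before 1980, so a normalization epoch must be clamped accordingly.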

debrepatch development

Continuous integration:

  • Holger updated our jenkins jobs for disorderfs and strip-nondeterminism to build these from their respective git master branches, and removed the jobs that build them from other branches since we have none at the moment.


Since the stretch freeze is getting closer, Holger made the following changes:

  • Schedule testing builds as frequently as unstable builds, on all archs, so that testing's build results are more up-to-date.

  • Adjust experimental builds scheduling frequency so that experimental results are not more recent than the ones in unstable.

  • Disable our APT repository for the testing suite (stretch), but leave it active for the unstable and experimental suites.

    This is the repository where we uploaded patched toolchain packages from time to time, that are necessary to reproduce other packages with. Since recently, all our essential patches have been accepted into Debian stretch and this repository is currently empty. Debian stretch will soon become the next Debian stable, and we want to get an accurate impression of how many of its packages will be reproducible.

    Therefore, disabling this repository for stretch whilst leaving it activated for the Debian unstable and experimental suites, allows us to continue to experiment with new patches to toolchain packages, without affecting our knowledge of the next Debian stable.


This week's edition was written by Ximin Luo, Holger Levsen & Chris Lamb and reviewed by a bunch of Reproducible Builds folks on IRC.

CryptogramYou, Too, Can Rent the Mirai Botnet

You can rent a 400,000-computer Mirai botnet and DDoS anyone you like.

BoingBoing post. Slashdot thread.

Worse Than FailureAwful On Purpose


Studying his new work contract, Stewart felt like he'd found a golden ticket. After 2 long and tedious years in the local university's IT department, he was happy for any opportunity to escape that hellhole. TLA Technologies looked like the Garden of Eden by comparison. Instead of being the only person responsible for anything vaguely computer-related—from putting up websites to plugging in power strips—he'd now be working with a "dynamic team of programmers" in a "rapidly growing company tapping into the web development market". Instead of dealing with tools and languages forgotten by history itself, he'd be using "modern, cutting-edge solutions" under "agile and customer-oriented methodologies". And instead of reporting to a pointy-haired supervisor who couldn't tell a computer from a toaster, he'd be working directly under Dave.

Dave was a large part of why Stewart decided to take the job. Like most small company owners, Dave had taken it upon himself to personally interview the new hire—but unlike many of them, he had enough technical skills to make the interview feel less like a trivia game show and more like a friendly talk between fellow programmers. When Stewart's answers had strayed off the beaten path, Dave had been eager to discuss the solutions instead of just dismissing them, and when Stewart had started asking about the inner workings of the company, it'd been difficult for him to stump Dave with even the most precise questions. He seemed to know what his company was doing and what the coders needed to deal with. What more was there to ask for from a boss?

Stewart quickly scribbled his signature on the contract and handed it back to Dave.

"Welcome aboard!" Dave stood and extended his hand to Stewart. "You'll start next Monday."

"Thanks for the opportunity," Stewart replied.

Dave smiled. "Hopefully you'll prove yourself out on the battlefield. If you pull that off, maybe you'll stick around for a while!"

In hindsight, the statement should've tripped an alarm in Stewart's mind, but being overjoyed with the prospect of a real job, he dismissed it as yet another cheerful promise. He left firmly convinced that all that glittered in TLA Technologies was gold.

While Dave hadn't exactly lied, there were a few things he'd neglected to bring up during the interview. For example, when talking about "cutting-edge solutions", he hadn't mentioned TLA Technologies was using all of them. No two applications used the same tech stack. Each used whatever was popular at the time, with languages and databases from completely different ecosystems blended together until they worked. Luckily, Stewart got off easy with an only somewhat bizarre combination of a Node.js service running against a SQL Server database.

The reason lay in what Dave had meant by a "dynamic team of programmers". Out of almost 20 developers, only 2 or 3 had any seniority. The rest were interns and juniors who came and went before anyone even learned their names. There was hardly a week that wasn't marked by desks being emptied, only to be filled by new hires within days. Everyone brought their own favorite technology to the potluck—sometimes introducing it through legitimate channels, sometimes sneaking it into parts of the application that'd be deemed untouchable once the original developer left.

There was exactly one person in the company with enough authority to rein the developers in. Unfortunately, Dave spent most of his hours at water cooler discussions, exchanging the latest IT gossip and encouraging the devs to explore new tools. Occasionally, he retired to his office to explain to one of the people he'd just been chatting up that they were "no longer a good fit for the company".

Stewart restrained himself from asking questions for quite a while, until one day he saw an empty desk a little too close to his own.

"Hi, boss," he said, entering Dave's office. "Do you know where Rob is?"

"Who's Rob?" Dave asked in a dry, uncaring voice, barely looking up from his monitor.

"Um ... Rob the front-end developer? He was supposed to finish a feature by today, and—"

"Oh, that Rob. We had to let him go. He wasn't pulling his weight on the new project."

"What do you mean? He's only been on that project for 2 weeks!" Stewart fought the urge to start shouting. "We'd just started introducing him to the codebase. Surely you don't expect—"

"Look, Stewart." Dave interrupted him again. "When we bring a new developer into a project, even an intern like Rob, the customer expects our performance to grow. They want results, and it's our job to deliver them. Unfortunately, with Rob, the entire team slowed down as soon as he started working. I'm sure you understand why it had to be done."

Stewart understood perfectly. With a polite nod, he scooted back to his desk, thinking about an escape plan.

The days kept passing, and Stewart was still gathering the courage to hand in his notice. In the meantime, he struggled to untangle the twisted logic of the service he was working with; one of the previous developers had apparently decided that Unix timestamps made great database primary keys. This worked well in development and crashed hard in production. To "patch" it, at every insert, the application would sneak a blank record in between other requests to lock the row for itself. Once it had reserved the row, it would then update it column by column—and with some tables having up to 40 columns, that meant 40 UPDATE queries per record.
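For readers wondering what goes wrong, here is a schematic reconstruction in Python with SQLite (invented table names, not TLA's actual schema): a timestamp primary key collides as soon as two inserts land in the same second, while a database-generated identity column sidesteps the problem and needs no column-by-column dance:

```python
import sqlite3, time

conn = sqlite3.connect(":memory:")

# The anti-pattern: a Unix timestamp as primary key. Two inserts in the
# same second collide, which is why it "worked in dev, crashed in prod".
conn.execute("CREATE TABLE orders_bad (id INTEGER PRIMARY KEY, item TEXT)")
now = int(time.time())
conn.execute("INSERT INTO orders_bad VALUES (?, ?)", (now, "first"))
try:
    conn.execute("INSERT INTO orders_bad VALUES (?, ?)", (now, "second"))
except sqlite3.IntegrityError as e:
    print("collision:", e)

# The identity-field fix: let the database generate the key, and insert
# the whole row in one statement instead of 40 UPDATEs.
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY AUTOINCREMENT, item TEXT)")
conn.execute("INSERT INTO orders (item) VALUES ('first')")
conn.execute("INSERT INTO orders (item) VALUES ('second')")
print(conn.execute("SELECT id, item FROM orders ORDER BY id").fetchall())
# [(1, 'first'), (2, 'second')]
```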

It was a bizarre, Goldbergian design. Stewart decided to ask Dave to allow him to refactor it into something sane.

"Yeah, that is terrible," Dave agreed. "Why wouldn't they just use an identity field?"

"Exactly!" Stewart felt like he and Dave were finally on the same page. "It shouldn't take too long to fix, and it would certainly be much better for performance and development."

"But the current code works, doesn't it?"

"Um ... for now it does, yes. But it's already a performance killer, and once more people start using the service, it's going to be much worse."

"They aren't using it now," Dave said. "And the client hasn't complained. As a developer, I understand where you're coming from—but as a company owner, I can't agree to spending time on something that delivers no value."

Stewart's eyes widened in shock. It was a catastrophe waiting to happen. At the current rate, the performance would get unbearable in a matter of months. And Dave was just dismissing it like it was nothing? What had happened to the "customer-oriented" company?

"Besides, this is an opportunity to cash in on our support contract," Dave answered, as if reading Stewart's mind. "That's how it works in this business. If it ain't broke, don't fix it—unless the customer's paying."

"I see," Stewart muttered. As he left Dave's office, he decided to take the advice to heart. Not only was he not going to fix the application, but more importantly, he also wasn't going to spend any more time trying to fix TLA Technologies. He walked towards his desk, unlocked one of the drawers, and pulled out a piece of paper that'd been waiting there for far too long.

[Advertisement] Scale your release pipelines, creating secure, reliable, reusable deployments with one click. Download and learn more today!

Krebs on SecuritySan Francisco Rail System Hacker Hacked

The San Francisco Municipal Transportation Agency (SFMTA) was hit with a ransomware attack on Friday, causing fare station terminals to carry the message, “You are Hacked. ALL Data Encrypted.” Turns out, the miscreant behind this extortion attempt got hacked himself this past weekend, revealing details about other victims as well as tantalizing clues about his identity and location.


A copy of the ransom message left behind by the “Mamba” ransomware.

On Friday, The San Francisco Examiner reported that riders of SFMTA’s Municipal Rail or “Muni” system were greeted with handmade “Out of Service” and “Metro Free” signs on station ticket machines. The computer terminals at all Muni locations carried the “hacked” message: “Contact for key (.”

The hacker in control of that email account said he had compromised thousands of computers at the SFMTA, scrambling the files on those systems with strong encryption. The files encrypted by his ransomware, he said, could only be decrypted with a special digital key, and that key would cost 100 Bitcoins, or approximately USD $73,000.

On Monday, KrebsOnSecurity was contacted by a security researcher who said he hacked this very same inbox after reading a news article about the SFMTA incident. The researcher, who has asked to remain anonymous, said he compromised the extortionist’s inbox by guessing the answer to his secret question, which then allowed him to reset the attacker’s email password. A screen shot of the user profile page shows that it was tied to a backup email address, which also was protected by the same secret question and answer.

Copies of messages shared with this author from those inboxes indicate that on Friday evening, Nov. 25, the attacker sent a message to SFMTA infrastructure manager Sean Cunningham with the following demand (the entirety of which has been trimmed for space reasons), signed with the pseudonym “Andy Saolis.”

“if You are Responsible in MUNI-RAILWAY !

All Your Computer’s/Server’s in MUNI-RAILWAY Domain Encrypted By AES 2048Bit!

We have 2000 Decryption Key !

Send 100BTC to My Bitcoin Wallet , then We Send you Decryption key For Your All Server’s HDD!!”

One hundred Bitcoins may seem like a lot, but it’s apparently not far from a usual payday for this attacker. On Nov. 20, hacked emails show that he successfully extorted 63 bitcoins (~$45,000) from a U.S.-based manufacturing firm.

The attacker appears to be in the habit of switching Bitcoin wallets randomly every few days or weeks, “for security reasons,” as he explained to some victims who took several days to decide whether to pay the ransom demanded of them. A review of more than a dozen Bitcoin wallets this criminal has used since August indicates that he has successfully extorted at least $140,000 in Bitcoin from victim organizations.

That is almost certainly a conservative estimate of his overall earnings these past few months: My source said he was unable to hack another Yandex inbox used by this attacker between August and October 2016, “,” and that this email address is tied to many search results for tech help forum postings from people victimized by a strain of ransomware known as Mamba and HDD Cryptor.

Copies of messages shared with this author answer many questions raised by news media coverage of this attack, such as whether the SFMTA was targeted. In short: No. Here’s why.

Messages sent to the attacker’s account show a financial relationship with at least two different hosting providers. The credentials needed to manage one of those servers were also included in the attacker’s inbox in plain text, and my source shared multiple files from that server.

KrebsOnSecurity sought assistance from several security experts in making sense of the data shared by my source. Alex Holden, chief information security officer at Hold Security Inc, said the attack server appears to have been used as a staging ground to compromise new systems, and was equipped with several open-source tools to help find and infect new victims.

“It appears our attacker has been using a number of tools which enabled the scanning of large portions of the Internet and several specific targets for vulnerabilities,” Holden said. “The most common vulnerability used ‘weblogic unserialize exploit’ and especially targeted Oracle Corp. server products, including Primavera project portfolio management software.”

According to a review of email messages from the Cryptom27 accounts shared by my source, the attacker routinely offered to help victims secure their systems from other hackers for a small number of extra Bitcoins. In one case, a victim that had just forked over a 20 Bitcoin ransom seemed all too eager to pay more for tips on how to plug the security holes that got him hacked. In return, the hacker pasted a link to a Web server, and urged the victim to install a critical security patch for the company’s Java applications.

“Read this and install patch before you connect your server to internet again,” the attacker wrote, linking to this advisory that Oracle issued for a security hole that it plugged in November 2015.

In many cases, the extortionist told victims their data would be gone forever if they didn’t pay the ransom in 48 hours or less. In other instances, he threatened to increase the ransom demand with each passing day.


The server used to launch the Oracle vulnerability scans offers tantalizing clues about the geographic location of the attacker. That server kept detailed logs about the date, time and Internet address of each login. A review of the more than 300 Internet addresses used to administer the server revealed that it has been controlled almost exclusively from Internet addresses in Iran. Another hosting account tied to this attacker says his contact number is +78234512271, which maps back to a mobile phone provider based in Russia.

But other details from the attack server indicate that the Russian phone number may be a red herring. For example, the attack server’s logs include the Web link or Internet address of each victimized server, listing the hacked credentials and short notations apparently made next to each victim by the attacker. Google Translate had difficulty guessing which language was used in the notations, but a fair amount of searching indicates the notes are transliterated Farsi or Persian, the primary language spoken in Iran and several other parts of the Middle East.

User account names on the attack server hold other clues, with names like “Alireza” and “Mokhi.” Alireza may pertain to Ali Reza, the seventh descendant of the Islamic prophet Muhammad, or just to a very common name among Iranians, Arabs and Turks.

The targets successfully enumerated as vulnerable by the attacker’s scanning server include the username and password needed to remotely access the hacked servers, as well as the IP address (and in some cases domain name) of the victim organization. In many cases, victims appeared to use newly-registered email addresses to contact the extortionist, perhaps unaware that the intruder had already done enough reconnaissance on the victim organization to learn the identity of the company and the contact information for the victim’s IT department.

The list of victims from our extortionist shows that the SFMTA was something of an aberration. The vast majority of organizations victimized by this attacker were manufacturing and construction firms based in the United States, and most of those victims ended up paying the entire ransom demanded — generally one Bitcoin (currently USD $732) per encrypted server.

Emails from the attacker’s inbox indicate some victims managed to negotiate a lesser ransom. China Construction of America Inc., for example, paid 24 Bitcoins (~$17,500) on Sunday, Nov. 27 to decrypt some 60 servers infected with the same ransomware — after successfully haggling the attacker down from his original demand of 40 Bitcoins. Other construction firms apparently infected by ransomware attacks from this criminal include King of Prussia, Pa.-based Irwin & Leighton; CDM Smith Inc. in Boston; Indianapolis-based Skillman; and the Rudolph Libbe Group, a construction consulting firm based in Walbridge, Ohio. It’s unclear whether any of these companies paid a ransom to regain access to their files.


The data leaked from this one actor shows how successful and lucrative ransomware attacks can be, and how often victims pay up. For its part, the SFMTA said it never considered paying the ransom.

“We have an information technology team in place that can restore our systems and that is what they are doing,” said SFMTA spokesman Paul Rose. “Existing backup systems allowed us to get most affected computers up and running this morning, and our information technology team anticipates having the remaining computers functional in the next two days.”

As the SFMTA’s experience illustrates, having proper and regular backups of your data can save you bundles. But unsecured backups can also be encrypted by ransomware, so it’s important to ensure that backups are not connected to the computers and networks they are backing up. Examples might include securing backups in the cloud or physically storing them offline. It should be noted, however, that some instances of ransomware can lock cloud-based backups when systems are configured to continuously back up in real-time.

That last tip is among dozens offered by the Federal Bureau of Investigation, which has been warning businesses about the dangers of ransomware attacks for several years now. For more tips on how to avoid becoming the next ransomware victim, check out the FBI’s most recent advisory on ransomware.

Finally, as I hope this story shows, truthfully answering secret questions is a surefire way to get your online account hacked. Personally, I try to avoid using vital services that allow someone to reset my password if they can guess the answers to my secret questions. But in some cases — as with United Airlines’s atrocious new password system — answering secret questions is unavoidable. In cases where I’m allowed to type in the answer, I always choose a gibberish or completely unrelated answer that only I will know and that cannot be unearthed using social media or random guessing.
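One way to generate such an unrelated answer, sketched in Python (the helper below is illustrative; the point is to produce something random and store it in a password manager, never to reuse a real fact):

```python
import secrets, string

def gibberish_answer(length=20):
    """Generate a random 'answer' for a security question, so the answer
    cannot be unearthed from social media or guessed by an attacker."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(gibberish_answer())  # e.g. 'T9kQw2ZrLx0mBpV7sHnA' (random each run)
```

The secrets module draws from the operating system's cryptographic random source, unlike the random module, which is predictable and unsuitable here.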

Planet DebianMike Hommey: Announcing git-cinnabar 0.4.0 release candidate

Git-cinnabar is a git remote helper to interact with mercurial repositories. It lets you clone, pull and push from/to remote mercurial repositories using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.4.0b3?

  • Updated git to 2.10.2 for cinnabar-helper.
  • Added a new git cinnabar download command to download a helper on platforms where one is available.
  • Fixed some corner cases with pack windows in the helper. This prevented cloning mozilla-central with the helper.
  • Fixed bundle2 support that broke cloning from a mercurial 4.0 server in some cases.
  • Fixed some corner cases involving empty files. This prevented cloning Mozilla’s stylo incubator repository.
  • Fixed some correctness issues in file parenting when pushing changesets pulled from one mercurial repository to another.
  • Various improvements to the rules to build the helper.
  • Experimental (and slow) support for pushing merges, with caveats. See issue #20 for details about the current status.

And since I realize I didn’t announce beta 3:

What’s new since 0.4.0b2?

  • Properly handle bundle2 errors, preventing git from believing a push happened when it didn’t. (0.3.x is unaffected)


CryptogramSan Francisco Transit System Target of Ransomware

It's really bad. The ticket machines were hacked.

Over the next couple of years, I believe we are going to see the downside of our headlong rush to put everything on the Internet.

Slashdot thread.

TEDIngenuity starts with a spark: The talks of TED@IBM


IBM’s editorial director, Michela Stribling, kicks off Session 1 at TED@IBM: Spark, November 16, 2016 in San Francisco. (Photo: Russell Edwards/TED)

From artists to scientists, mothers, mathematicians and business visionaries, people in every corner of the world are dreaming up solutions to our most pressing problems. Whether tackling war and peace or the principles of machine learning, ingenuity starts with one thing: a spark.

And regardless of where the spark takes hold, inspiration demands action to reach its greatest potential. At the third installment of TED@IBM — part of the TED Institute, held on November 15, 2016, at the SFJAZZ Center in San Francisco — a diverse and brilliant collection of speakers and performers dared to ask: What if we used our collective expertise and insights to provide a spark that could change the world for good?

After opening remarks from Michela Stribling, IBM’s editorial director, the talks in Session 1 challenged us to think about how we can work together to solve problems and, maybe, leave the planet better than we found it.

Where light meets sound. In a performance that blurs the boundaries of light and sound, Ryan and Hays Holladay create a visual experience of beats and tones shaped around reverberations of color. With multicolored projections and an assortment of carefully placed lamps, the brothers transcribe their music across the illuminated bursts of surfaces suddenly made visible. Here, music becomes the performer, rather than the performance, directing us not toward itself but toward the tempo and rhythm that orchestrates its narration. It’s a melody as much seen as it is heard: a series of intonations whose colorful pattern of sound eventually collapses into the nearly faded spotlight of a solitary lamp.

The answer to fighting cybercrime. Cybercrime netted $450 billion in profits last year, with 2 billion records lost or stolen. As the vice president at IBM Security, Caleb Barlow recognizes the insufficiency of our current strategies to protect our data from the ultra-sophisticated criminal gangs that are responsible for 80 percent of all cyber attacks. His solution? When a cyber attack occurs, we should respond to it with the same collective effort and openness as a health care crisis — we need to know who is infected and how the disease is spreading. Last year, Barlow and his team started publishing all of IBM’s threat data in an effort to encourage the same sharing from other major corporations, governments and private security firms. If we’re not sharing, he says, then we’re part of the problem.


According to Adam Grant, there are three basic kinds of employees: givers, takers, and matchers (who’ll match the prevailing behavior of the group). The key to a happy workplace is to balance that mix. Grant speaks at TED@IBM: Spark. (Photo: Russell Edwards/TED)

One bad apple spoils the bunch. The success of any company is defined by the quality of people who work there. Organizational psychologist Adam Grant has spent a lot of time analyzing business structures, and he’s concluded that there are three different types of employees: givers, takers and matchers. To achieve a balanced workplace with equal opportunity for and distribution of work, power and play, companies must endeavor to hire givers and matchers, whose personalities allow employees to feel supported, heard and acknowledged. The challenge is to stop takers from getting a seat at the table, because they so often undercut the spirit of collaboration workplaces need to thrive.

The business of rewriting stereotypes. After what felt like a lifetime spent simultaneously living out stereotypes and being imprisoned by them, Villy Wang wondered how she could break the cycle of racism and still make a living. She hit upon the idea of empowering young, diverse kids — those most misrepresented in the media — to become storytellers, training them in filmmaking so they can create new, authentic stories. After training, she helps place them into creative industries that influence media and entertainment, helping rewrite stereotypes at one of its sources. Wang has nested this entire program within a professional media production studio, a business that helps fund and ensure the future of the program. “Obviously, racism is deeply rooted in our history, politics, media and language,” she says, “but I believe if we can empower more diverse kids to take that narrative and change it, we will break hold of these ugly roots.”


Performer Ise Lyfe, at right, breaks down the Bay Area’s gentrification battles in a spoken-word collaboration with cellist Michael Fecskes, performed at TED@IBM: Spark. (Photo: Russell Edwards/TED)

We are not mud, we are fertile ground. Spoken-word artist and activist Ise Lyfe sparks crucial dialogue on social justice through performance. With music accompaniment by master cellist Michael Fecskes, Lyfe unravels the impact of the displacement of minority communities in American cities, threaded from his personal experience in East Oakland. By using the image of a flag in the mud to convey the claiming of a muddy territory, Lyfe creates a symbol for gentrification but urges: “I reject the notion of your flag in our mud. Let us be the spark to convey. We are not mud, we’re fertile ground waiting on the rain.”

AI and justice for all. You need a good lawyer if you’re involved in a legal case. But a typical case can cost thousands of dollars in legal fees — and a lot of that money goes toward the time it takes to learn the case and study the law around it, sifting through hundreds of documents and databases to find a winning angle. Lawyer and entrepreneur Andrew Arruda set out to democratize this process by partnering with a computer scientist to create the world’s first artificially intelligent lawyer, ROSS. Using ROSS, a lawyer can search through millions of documents in a matter of minutes, saving hours of research time. ROSS is already being rolled out, for free, to pro-bono lawyers who are helping those most in need. “Justice should cost the same for everyone,” Arruda says. “More money should not buy better justice.”

Intelligence, knowledge and wisdom … in the age of smart machines. Artificial intelligence is a next step in the evolution of our species; some say, in fact, we have come to the limits of our own intelligence. But intelligence is only part of the story — it’s what we do with that intelligence that matters. In a special video created for TED@IBM, Guruduth Banavar, chief science officer of cognitive computing at IBM, asks us to cultivate the wisdom necessary to improve our future.

An organic approach to an organic problem. Beset by plunging biodiversity, pathogens and skyrocketing populations, our global food supply is at risk — but solutions that rely on chemicals and genetically modified organisms come with their own problems. Computational geneticist Laxmi Parida proposes instead that we use the genetic biodiversity that already exists within plants, wrought over millennia by evolution, in order to safeguard our food. To sort through and make sense of the vast amount of sequenced DNA, artificial intelligence and machine learning don’t quite make the cut. Parida has another technique in mind: discrete mathematics. Using math, Parida is cracking open our understanding of the links between DNA and external traits, like stress resistance, so we can breed more robust crops while reducing strain on the environment.

Building with dust. The world-changing promise of nanotechnology remains just that — a promise. We don’t yet have the disease-fighting nanobots, elevators to space or quantum computers that nanotech could someday provide. Why? Because building things from nanomaterials is incredibly difficult, says George Tulevski, a nano architect and researcher at IBM. We don’t have tools small enough to manipulate nanomaterials into something useful. But Tulevski and his team think they’ve located the missing link: chemistry. They’re developing chemical processes to compel billions of nanoparticles to simultaneously assemble themselves into the patterns needed to build circuits, much the same way that natural organisms like Radiolaria build intricate, diverse and elegant structures. It’s like building a sculpture from dust, Tulevski says, and it may be the key to delivering on — or far exceeding — those original promises of nanotechnology.

World music group Wobbly World led off Session 2 of TED@IBM: Spark with a musical celebration of collaboration that included singing, rapping, guitar, congas and much more. (Photo: Russell Edwards/TED)

Session 2 kicked off with Wobbly World, a global collective of musicians whose dynamic blend of sounds from around the world reminds us of the incredible power of music to bridge the cultural divide.

One of the most important tools in the fight against cancer? Time. An early cancer diagnosis can be the difference between a chance to fight for life, and death. But diagnostic tools tend to be costly, invasive and slow to reveal results, and they’re not always totally accurate. Cancer fighter Joshua Smith presents an affordable, formidable weapon that counters these shortcomings: a noninvasive early-warning system that works like a pregnancy test, intercepting and analyzing the biomarkers that may hint at the presence of cancer. With this streamlined process, says Smith, early-stage cancer detection can happen more frequently and start when a person is still healthy. It’s a strong beacon of hope in the fight against cancer.

We can’t afford to give up on government, says Charity Wayua. She speaks at TED@IBM: Spark. (Photo: Russell Edwards/TED)

How to cure ailing governments. In the current political climate, it’s easy to want to give up on transforming government, says cancer researcher Charity Wayua — but we can’t afford to be afraid to tackle challenges like government efficiency head-on. Starting in 2014, Wayua, a trained biochemist, and a team of scientists, engineers and technologists began to “treat” the government of Nairobi, Kenya, as an ailing patient in order to restore the health of the country and its economy. In keeping with a biomedical approach, the team examined the government, its divisions, its employees and every single one of its malfunctions before making a diagnosis and devising a strategic treatment plan. Within two years, the efforts of Wayua and her team paid off, and Kenya moved from 136 to 92 on the World Bank’s index ranking the ease of doing business there. As Wayua says, just because something is sick doesn’t mean it’s dying.

Behavior hacks for the well-intentioned. “Why is it that we have such a difficult time doing what’s right?” asks behavioral scientist Bob Nease. “It’s because intention and action don’t often go hand-in-hand.” That is, good intentions don’t always lead to good behavior, while bad behavior is not always the result of bad intentions. So how do we bridge the gap between good intention and positive action? Nease offers two simple mind-hack solutions: first, make the right thing so easy to do that it requires minimal mind power to arrive at a good decision. Then make the wrong thing so drawn out and convoluted that it’s nearly impossible to justify doing. Nease applies these principles to current issues surrounding health, business and even government reform.

The drumming habits of neurons. In March 1993, an article published in the journal Cell identified the single gene responsible for Huntington’s disease. The product of a nearly ten-year effort by scientists from around the world, the article held great promise that a cure for the neurodegenerative disorder would soon follow; yet today, there is still not a single medicine able to slow, stop, or reverse this disease. James Kozloski has watched a generation of neuroscientists struggle with this, and his conclusion is that neuroscientists have focused far too long on just the neuron. He argues that the answer lies not only in showing how neurons suffer from genetic disorders but also in how genes suffer from nervous disorders. Comparing the communication of neurons to the beat of a drummer, Kozloski highlights that in brains, things like a bad gene, an injury or aging can lead to changes in circuits that first create and then reinforce subtle bad drumming habits, which are often difficult to detect until it’s too late. His team has been working to detect these habits early in order to break them, mapping the brain’s core components and connecting them together in what he calls the Grand Loop. “Before we can fully understand how genes cause imbalance in neurons leading to their death,” he says, “neuroscience must understand how the brain’s core circuitry balances itself, how genes change synapses, and how brain feedback onto neurons, synapses and genes can push imbalance to the tipping point.”

How the internet of things is transforming the routine. How do we protect our aging population while letting them keep the comfort of their lives and their daily routine? Everyday objects with sensors and WiFi can track each step of that routine and relay a picture of a life being lived, in real time, to the children and support systems caring for the safety and independence of their loved ones. Learn more about how the internet of things is changing the routine in another special video produced for TED@IBM.

A library of human cognition. What if you could consult Winston Churchill on a looming international crisis? Or ask Einstein what he thinks about the latest scientific breakthrough? National security expert Juliane Gallina is working on a way to harness the best minds of all time to make the world safer. When Gallina was in the military, she learned a special concept: the OODA loop, a mental model (which stands for Observe, Orient, Decide, Act) designed to help intelligence officers get into the minds of their adversaries. You’re in the OODA loop right now, Gallina says, and every person or team that makes decisions is constantly using it. Today, Gallina is a technologist, and she’s using the OODA loop in her work on cognitive computing. Her goal: to create a way to systematically record the way we think when solving problems so that future generations can better tap into the wisdom of the past. “Let’s find and record the exquisite thinking strategies of innovators and pioneers,” Gallina says, “and then let’s use it.”

The untold story behind a landmark NASA mission. Katherine Johnson, Dorothy Vaughan, Mary Jackson — you’ve probably never heard their names, but these three African-American women were instrumental in putting astronaut John Glenn into orbit around Earth. Their story has gone largely untold until now, with the upcoming release of the biographical film Hidden Figures. Elizabeth Gabler, president of Fox 2000 Pictures, talks with TED@IBM curator Bryn Freedman about what drew her to make the film, and shares with the audience more about the women it portrays and the barriers they overcame.

No expertise required. The problem with music, says Tim Exile, is that it’s so perfect. For those of us who didn’t grow up playing an instrument, instruments can never be objects of play because we’re often too afraid to mess around and try things out due to a strict idea of what an instrument should sound like. In a talk-performance hybrid, Exile demos a software instrument that he designed to allow anyone, whether they have musical training or not, to record loops, mix sounds and make music. “Apart from anything else,” he says, “this is a hell of a lot of fun, and we’re all missing out!”

Grady Booch asks us to worry a bit less about artificial intelligence, while speaking at TED@IBM: Spark. (Photo: Russell Edwards/TED)

The illusion of intelligence. When we think about artificial intelligence and its possibilities, it becomes difficult to dissociate the concept from the dangers outlined by films like The Terminator and 2001: A Space Odyssey. For supporters like Elon Musk and Stephen Hawking, such fears are far from misplaced; they and others argue that artificial intelligence represents an existential threat to humanity. But according to Grady Booch, super-knowing is not the same as super-doing. Cognitive systems, he argues, are far different from the software-intensive systems of earlier generations, in that, “by and large, we don’t program them: we teach them,” imparting onto those machines reflections of our already-present human values. Rather than worry about any existential threat of superintelligence, Booch cautions us to focus instead on the realities that we currently face, “for the rise of computing already brings to us a myriad of other human and societal issues to which we must attend.” Indeed, from raising the level of education around the world to helping humans eventually reach the surface of Mars, “the right question we should now be asking is how shall we best use this technology to augment our humanity, not diminish it.”

Planet DebianMichal Čihař: phpMyAdmin security issues

You might wonder why there have been so many phpMyAdmin security announcements this year. There are two main reasons for this, and I will comment briefly on each.

First of all, we got quite a lot of attention from people doing security reviews this year. It all started with the audit funded by the Mozilla SOS Fund, which discovered a few minor issues that were fixed in the 4.6.2 release. However, this was really just the beginning of the story: the announcement attracted quite a bit of attention, and in the following weeks our mailbox was full of reports, which we struggled to handle. That volume eventually led us to create a more formalized approach to handling reports, as we were clearly no longer able to deal with them over email alone. Most of the work here was done by Emanuel Bronshtein, who is really looking at every piece of our code and giving useful tips to harden our code base and infrastructure.

The second thing that changed is that we now release security announcements for hardening fixes even when no practical attack may be possible. A typical example is PMASA-2016-61: using hash_equals is definitely safer, but even if a timing attack were feasible here, the practical result of figuring out admin-configured allow/deny rules is usually not critical. Many of the issues also cover quite rare setups (or server misconfigurations, which we have silently fixed in the past), such as PMASA-2016-54, which could only be triggered by a server executing shell scripts shipped together with phpMyAdmin.
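To illustrate the hash_equals class of issue, here is a hypothetical sketch in Java (phpMyAdmin itself is PHP; the TokenCheck class and safeEquals method are names I made up for illustration):

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// A plain equals() returns as soon as one byte differs, so the comparison time
// leaks how long the matching prefix is. MessageDigest.isEqual examines all
// bytes regardless of where the first mismatch occurs, like PHP's hash_equals.
class TokenCheck {
    static boolean safeEquals(String expected, String given) {
        return MessageDigest.isEqual(
                expected.getBytes(StandardCharsets.UTF_8),
                given.getBytes(StandardCharsets.UTF_8));
    }

    public static void main(String[] args) {
        System.out.println(safeEquals("deny-rule", "deny-rule")); // true
        System.out.println(safeEquals("deny-rule", "deny-rulE")); // false
    }
}
```

Even when, as with the allow/deny rules above, the leaked secret is of limited value, the constant-time variant costs nothing to adopt, which is why such hardenings get applied regardless of a practical attack.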

Overall, phpMyAdmin has indeed become safer this year. I don't think any of the bugs were really critical; on the other hand, we've made quite a lot of hardening changes and now follow current best practices when dealing with sensitive data. I'm also pretty sure our code was in no worse shape than similarly sized projects with 18 years of history; we simply became more visible thanks to the security audit, and people looked deeper into our code base.

Besides the security announcements, all of this led to a general hardening of our code and infrastructure, which might be less visible but is important as well:

  • All our websites are served over HTTPS only
  • All our releases are PGP signed
  • We actively encourage users to verify the downloaded files
  • All new Git tags are PGP signed as well

Filed under: Debian English phpMyAdmin SUSE

Planet Linux AustraliaBinh Nguyen: Saving Capitalist Democracy, Random Thoughts, and More

Given that we've looked at how the US Empire seems to be slowly breaking apart, it should seem logical that we would look at some of the ways to save it (and capitalist democracy along with it). Namely, ways of kick-starting economic growth once more:

Sociological ImagesWhat If America Redefined Itself as a Nation of Renters?

Originally posted at the Huffington Post.

In the 21st century, it is perhaps time to rethink the American Dream of owning a house. The feasibility of this dream was in the back of my mind the entire time I read Matthew Desmond’s Evicted, the highly praised ethnography of landlords and renters in Milwaukee. Dr. Desmond flips the relationship between poverty and housing instability on its head: eviction is a cause, not a symptom, of poverty.

To make a long, well-put, and worth-reading argument short: eviction isn’t rare as many policymakers and sociologists might assume; it is actually a horrifyingly common phenomenon. Urban sociologists have missed the magnitude of the eviction phenomenon because they have traditionally used neighborhoods as the unit of analysis, studying issues such as segregation and gentrification. Because eviction is rarely studied, we don’t have good data on eviction. Establishing a dataset of eviction is not a simple data collecting task, given that there are many forms of informal eviction. The consequences of eviction are devastating and have a profound, negative, and life-long impact on subsequent trajectories: worse housing, more eviction, and homelessness, all disproportionately affecting women of color with children (“a female equivalent of mass incarceration,” Desmond argued at a talk at the University of Pennsylvania last week).

The solution is a universal housing voucher program that is funded using money that currently goes to the mortgage interest tax deduction, a $170 billion program for homeowners that benefits mostly the upper-middle class.

Let’s set the economics of a universal voucher program aside — Desmond and many economists on both sides of the political spectrum (including Harvard economist Edward Glaeser) have already addressed the effects on the market, the argument that such a program will be a disincentive to work, and the fear of the lag time that a program will create in the housing market increasing search times. At the heart of public policy are norms and values, and the existence of the mortgage interest tax deduction — the largest housing assistance program in the country — is not a reflection of an inherent American preference for the rich over the poor. Rather, it is a reflection of an inherent American preference for the homeowner over the renter.

To implement the universal voucher program that Desmond argues for, we need to rethink the way we conceive of homeownership in American culture. As I read Evicted, the work of Robert K. Merton came to mind. In 1938, Merton, one of the contenders for the title “founder of modern sociology,” published a paper titled “Social Structure and Anomie.” In the paper, Merton argues that every society has cultural goals, “a frame of aspirational references,” and institutionalized means, “permissible and required procedures for attaining these ends.”

In American society, the institutionalized means are study hard/work hard (and maybe go to church every so often), and the cultural goals are accumulate wealth and own a house. Obviously, the vast majority of Americans don’t achieve these goals and it is extremely hard to argue that the institutionalized means will actually lead them there. But that’s okay; it just makes for a nation of ritualists. Ritualism is devotion to the means without achieving the goals. These ritualists are everywhere in American society, or at least in the way we perceive our society. We romanticize a fictional poor person that takes pride that s/he never took welfare, for example, no matter how tough times were. Welfare is not one of the institutionalized means, and the ritualist prefers to stay farther away from the goal than to cross the line to non-institutionalized means.

According to City Lab, 41% of all US households reside in a rental unit. Are these households inhabited by ritualists, trying to achieve the goal but without the means? Maybe, but Merton offers another option: they could be rebels. The rebel may or may not conform to the cultural goals and may or may not use the means. The condition for rebellion, according to Merton, is that “emancipation from the reigning standards, due to frustration or to marginalist perspectives, leads to the attempt to introduce ‘a new social order.’”

If one of the American cultural goals is homeownership, the mortgage interest tax deduction is a tool to maintain this social order. The goal’s support structure recognizes in a sense that, with only the purist version of the institutionalized means (hard work with no government assistance), the goal is out of reach. If that support system is taken away, if we shift funding from the mortgage interest tax credit to a universal housing voucher program, we must recognize that we are supporting a cultural rebellion.

It is time to call for a change in the norms and values that are at the heart of our public policy. That is not a simple task. When I think of the “American,” I think about Ron Swanson from the TV show Parks and Recreation. In one of the show’s episodes, Swanson explains America to a little girl, “Let’s get started. Life, liberty, and property. That’s John Locke. This is your lunch.” Matthew Desmond, by calling for a universal voucher program, challenges this status quo and attempts to put habitability, stability, and opportunity at the heart of our value system and not as byproducts of homeownership and hard work. He also challenges the institutionalized means by calling for an increase in the number of people achieving this new goal — a stable home — specifically through quality rental housing, with government assistance, rather than through hard work alone.

The United States is a nation of renters that views itself as a nation of homeowners. The millions of rental households deserve to be a part of the group that achieves the American cultural goal. They deserve government support, they deserve stability, and they don’t deserve to have to break away from the American institutionalized means. We must not shy away from the size of this task. The country might not be ready to think of itself as the nation of renters that it is. The United States is undergoing a housing and eviction crisis, and as Matthew Desmond said in his talk at Penn this week, “This is not us, there is nothing American about this.” It is time for a new social order, for the rise of the renter class as more than ritualists and rebels.

Originally from Tel Aviv, Abraham Gutman is currently at the Center for Public Health Law Research at Temple University. He is an aspiring sociologist working on econometrics, race, policing, and housing. He blogs at the Huffington Post and you can follow him on Twitter.


Google AdsenseIncrease your earnings by using the right keyword research techniques

This is the second of five guest posts from AdSense publisher Brandon Gaille. Brandon has built his small-business marketing blog to over 2 million monthly visitors in less than three years. He’s featured as our guest blogger to share insights and tips from his personal blogging experience to help AdSense publishers grow earnings. If you’re new to AdSense, be sure to sign up for AdSense and start turning your #PassionIntoProfit.

Last month, my blog received a little bit over 1.7 million visitors that originated from Google organic search. More than 95% of this traffic came from long tailed keywords.

If you do not know what a long tailed keyword is, then here’s a crash course. In keyword research, there are two primary types of keywords:

#1 Head Terms
These are your one and two word phrases that get loads of searches on Google. A few examples would be cars, credit score, and real estate. They are phrases that are very broad and are usually a top level category.

#2 Long Tail Terms
Then you have the long tail phrases that are made up of three words or more. A few examples of long tailed terms would be: red convertible sports cars, how to improve a bad credit score, and luxury real estate in upper New York. These terms are more descriptive and the searcher is usually closer to making a buying decision.

If you are just looking at the top 10,000 most searched phrases, then you will see mostly head terms. However, as you can see in the chart below, the top 10,000 searched phrases only make up 18.5% of all searches. The long tail terms make up over 70%.


Additionally, Search Engine Watch published the results of a Conductor study, which found that long tailed traffic converted to sales at a rate 250% greater than head terms.

I always tell the students of my online course that the battle for Google traffic is won with deep keyword research. It really is no different than gold prospecting. You have to dig through miles of dirt and rock to find the keyword phrases that are worth their weight in gold.

Here are the five keyword research tactics that will make your Google Analytics look like a hockey stick:

#1 Target keyword phrases that your domain name can rank for

If your website is, then you can write about anything you want. The reason is that its domain authority is 94 out of 100. Domain authority is a scoring system, created by Moz, that is based upon the link profile of each domain name. The more quality links you have, the higher your score is.

Moz Backlink Checker

Having the luxury of managing over 100 blogs of my own and my clients’, I was able to statistically identify what type of keyword phrases (based on number of Google results) different domain authorities can effectively rank for on Google.

When you type a phrase into Google, it will come back with a number of results. The number of results shows how many pages and posts are competing for that particular phrase. The higher the number, the harder it is to rank high enough to get traffic.

Here is the breakdown of what different domain authority sites can rank for.

  • Domain Authority Less Than 30 = Keyword Phrases with Less than 50,000 Google Results
  • Domain Authority 30 to 35 = Keyword Phrases with Less than 100,000 Google Results
  • Domain Authority 36 to 40 = Keyword Phrases with Less than 250,000 Google Results
  • Domain Authority 41 to 45 = Keyword Phrases with Less than 500,000 Google Results
  • Domain Authority 46 to 50 = Keyword Phrases with Less than 1,000,000 Google Results

My blog has domain authority of 44. If I spend my time writing posts on keyword phrases with less than 500,000 Google results, then I am going to consistently get high Google rankings for every post I publish. The screenshot below shows the simplicity of how to choose the right keyword phrase.
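The breakdown above reduces to a tiny lookup. Here is a hypothetical sketch: the thresholds are the ones from the list, but the class and method names are my own:

```java
// Sketch of the domain-authority rule of thumb: a site should only target
// keyword phrases whose Google result count is below its threshold.
class KeywordFilter {
    // Maximum number of Google results a site at this domain authority should target.
    static long maxResultsFor(int domainAuthority) {
        if (domainAuthority < 30) return 50_000L;
        if (domainAuthority <= 35) return 100_000L;
        if (domainAuthority <= 40) return 250_000L;
        if (domainAuthority <= 45) return 500_000L;
        return 1_000_000L; // domain authority 46 to 50
    }

    static boolean worthTargeting(int domainAuthority, long googleResults) {
        return googleResults < maxResultsFor(domainAuthority);
    }

    public static void main(String[] args) {
        // A domain authority of 44, as discussed above, targets phrases under 500,000 results.
        System.out.println(worthTargeting(44, 320_000L));   // true
        System.out.println(worthTargeting(44, 1_200_000L)); // false
    }
}
```

Anything below the threshold for your domain authority is a candidate phrase to write about; anything above it is a pass.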


If you’d like to dive even deeper into keyword strategies, check out the “5 Long Tail Keyword Research Tactics that Every Blogger Should Master.”

Posted By
Brandon Gaille

Brandon Gaille is an AdSense publisher. You can learn more about Brandon at and listen to his popular blogging podcast, The Blog Millionaire.

Planet DebianClint Adams: Not the Grace Hopper Conference

Do you love porting? For ideas on how to make GHC suck less on your favorite architecture, see this not-at-all ugly table.

Worse Than FailureCodeSOD: Indentured

Speaking with developers, I’m always surprised to find how many are baffled by the “Fluent API”. This object-oriented convention is based on the Builder Pattern, and involves call chaining to construct a configured object. So, for example, if you needed to configure a SystemHandler object to have a series of LinkHandler objects, you might have something like this:

    Handlers = SystemHandler.builder()

Each method of the builder object modifies the builder and then returns the modified instance, giving Object-Oriented programs a sort of composability. Compared to passing a thousand parameters to the constructor, it also offers a nice bit of readability.
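As a minimal, hypothetical sketch of that pattern (the SystemHandler, addLink and build names here are illustrative, not from any real codebase):

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// A configured object built via a fluent builder rather than nested constructors.
class SystemHandler {
    private final List<String> links;

    private SystemHandler(List<String> links) {
        this.links = Collections.unmodifiableList(links);
    }

    public List<String> links() {
        return links;
    }

    public static Builder builder() {
        return new Builder();
    }

    public static class Builder {
        private final List<String> links = new ArrayList<>();

        // Each call mutates the builder and returns it, which is what makes chaining work.
        public Builder addLink(String name) {
            links.add(name);
            return this;
        }

        public SystemHandler build() {
            return new SystemHandler(new ArrayList<>(links));
        }
    }

    public static void main(String[] args) {
        SystemHandler handlers = SystemHandler.builder()
                .addLink("Link1Handler")
                .addLink("Link2Handler")
                .build();
        System.out.println(handlers.links()); // [Link1Handler, Link2Handler]
    }
}
```

The chain reads top to bottom, one handler per line, with no rightward drift no matter how many handlers are added.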

I bring this up because Mireille’s co-worker is likely the sort of person who would be confused by a Fluent API.

public HandlerChain()
        Handlers =
                new SystemHandler(
                        new Link1Handler(
                                new Link2Handler(
                                        new Link3Handler(
                                                new Link4Handler(
                                                        new Link5Handler(
                                                                new Link6Handler(
                                                                        new Link7Handler(
                                                                                new Link8Handler(
                                                                                        new Link9Handler(
                                                                                                new Link10Handler(
                                                                                                        new Link11Handler(
                                                                                                                new Link12Handler(
                                                                                                                        new Link13Handler(
                                                                                                                                new Link14Handler(
                                                                                                                                        new Link15Handler(
                                                                                                                                                new Link16Handler(
                                                                                                                                                        new Link17Handler(
                                                                                                                                                                new Link18Handler(
                                                                                                                                                                        new Link19Handler(
                                                                                                                                                                                new Link20Handler(
                                                                                                                                                                                        new Link21Handlerx(
                                                                                                                                                                                                new Link22Handler(
                                                                                                                                                                                                        new Link23Handler(
                                                                                                                                                                                                                new Link24Handler(
                                                                                                                                                                                                                        new Link25Handler(
                                                                                                                                                                                                                                new DefaultHandler(
                                                                                                                                                                                                                                        new Link26Handler(
                                                                                                                                                                                                                                                new link27Handler(
                                                                                                                                                                                                                                                        new Link28Handler(
                                                                                                                                                                                                                                                                new Link29Handler(
                                                                                                                                                                                                                                                                        new Link30Handler(
                                                                                                                                                                                                                                                                           new Link31Handler(
                                                                                                                                                                                                                                                                                   new Link32Handler(
                                                                                                                                                                                                                                                                                          new Link33Handler(
                                                                                                                                                                                                                                                                                                 new Link34Handler(
                                                                                                                                                                                                                                                                                                        new Link35Handler(
                                                                                                                                                                                                                                                                                                                new Link36Handler(
                                                                                                                                                                                                                                                                                                                        new Link37Handler (
                                                                                                                                                                                                                                                                                                                                new Link38Handler(
                                                                                                                                                                                                                                                                                                                                        new Link39Handler(
                                                                                                                                                                                                                                                                                                                                                new Link40Handler(
                                                                                                                                                                                                                                                                                                                                                        new Link41Handler(
                                                                                                                                                                                                                                                                                                                                                                new Link42Handler(
                                                                                                                                                                                                                                                                                                                                                                        new Link43Handler(
                                                                                                                                                                                                                                                                                                                                                                                new Link44Handler(null))))))))))))))))))))))))))))))))))))))))))))));

Much cleaner and simpler than my original solution. I stand corrected.


Planet Debian: Stefano Zacchiroli: last week to take part in the Debian Contributors Survey

Debian Contributors Survey 2016

About 3 weeks ago, Molly, Mathieu, and I launched the first edition of the Debian Contributors Survey. I won't harp on it any further, because you can find all relevant information about it on the Debian blog or as part of the original announcement.

But it's worth noting that you've now only one week left to participate if you want to: the deadline for participation is 4 December 2016, at 23:59 UTC.

If you're a Debian contributor and would like to participate, just go to the survey participation page and fill it in!

Planet Debian: Pau Garcia i Quiles: Desktops DevRoom @ FOSDEM 2017: you are still on time to submit a talk

FOSDEM 2017 is going to be great (again!) and you still have the chance to be one of the stars.

Have you submitted your talk to the Desktops DevRoom yet?


Remember: we will only accept proposals until December 5th. After that, the Organization Team will get busy and vote and choose the talks.

Here is the full Call for Participation, in case you need to check the details on how to submit:

FOSDEM Desktops DevRoom 2017 Call for Participation

Topics include anything related to the Desktop: desktop environments, software development for desktop/cross-platform, applications, UI, etc.


Planet Debian: Dirk Eddelbuettel: anytime 0.1.1: More robust

CRAN just accepted the newest release 0.1.1 of anytime, following the previous five releases since September.

anytime is a very focussed package aiming to do just one thing really well: to convert anything in integer, numeric, character, factor, ordered, ... format to POSIXct (or Date) objects -- and to do so without requiring a format string.

See the anytime page, or the GitHub for a few examples, or just consider the following illustration:

R> library(anytime)
R> anytime("20161107 202122")   ## all digits
[1] "2016-11-07 20:21:22 CST"
R> utctime("2016Nov07 202122")  ## UTC parse example
[1] "2016-11-07 14:21:22 CST"

Release 0.1.1 robustifies two aspects. The 'digits only' input above extends what Boost Date_Time can parse and relies on simple-enough pre-processing; this operation is now more robust. We also ensure that input already of class Date is simply passed through by anydate() or utcdate(). Last but not least, we added code coverage support, which oh-so-predictably led us to game this metric to reach the elusive 100% coverage.
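To give a feel for the 'digits only' pre-processing idea, here is a small sketch in Python. This is illustrative only: anytime itself does this in C++ on top of Boost Date_Time, and the function name `parse_all_digits` is invented for this example. The idea is to insert delimiters into a compact 'YYYYMMDD HHMMSS' string so that an ordinary format-based parser can handle it:

```python
from datetime import datetime

def parse_all_digits(s):
    """Split a compact 'YYYYMMDD HHMMSS' string into delimited parts,
    then hand the result to an ordinary format-based parser."""
    date_part, time_part = s.split()
    # Defensive length check: malformed all-digit input fails cleanly
    # instead of confusing the splitter.
    if len(date_part) != 8 or len(time_part) != 6:
        raise ValueError("expected 'YYYYMMDD HHMMSS', got %r" % s)
    normalized = "%s-%s-%s %s:%s:%s" % (
        date_part[0:4], date_part[4:6], date_part[6:8],
        time_part[0:2], time_part[2:4], time_part[4:6])
    return datetime.strptime(normalized, "%Y-%m-%d %H:%M:%S")

print(parse_all_digits("20161107 202122"))  # 2016-11-07 20:21:22
```

The defensive input check is the kind of robustification this release adds to the real string splitter.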

The NEWS file summarises the release:

Changes in anytime version 0.1.1 (2016-11-27)

  • Both anydate() and utcdate() no longer attempt to convert an input value that is already of type Date.

  • The string splitter (needed for the 'all-digits' formats extending Boost Date_time) is now more defensive about the input argument and more robust. Thanks to Bob Jansen for the heads-up (PR #30 closing issue #29).

  • Code coverage reporting has been added (PR #31).

Courtesy of CRANberries, there is a comparison to the previous release. More information is on the anytime page.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

TED: 8 insider tips: Make an audition video for TED’s Idea Search 2017

Tania Luna auditions for the TED stage. (Spoiler: She got there.) Photo: James Duncan Davidson

Tania Luna auditions for the TED stage. (Spoiler: She got there.) Photo: James Duncan Davidson

Here are 8 insider tips to creating a great audition video for the TEDNYC Idea Search 2017. (Remember, the deadline to apply is Monday, Nov. 28, at 6pm Eastern.)

1. Distill your idea. In a 1-minute video, you have about 150 words to describe your proposed TED Talk. So you can’t — and you don’t have to — give every single detail of your idea. Instead, focus on the basics of what you will want to say. As a tip, try writing your script around a big question that your talk will answer, such as: “How can teachers learn to connect with Generation Z?” Think about what you’d want the audience to take away from your talk — the main insight — and be sure to communicate that in your video.

2. Watch our TED Talk about … well … giving a TED Talk. Our curator, Chris Anderson, distills 4 points you’ll want to think about as you write your script.

3. Think about how your idea will be relevant right now. Some of our finalists will win spots onstage at TED2017, our major conference of the year, happening in April 2017. So think on this question: why does your idea have special meaning right now, as 2017 kicks off? The theme of TED2017 is “The Future You,” and we’ll be thinking about the big picture of how our world is evolving, as well as how we humans are changing.

4. Use incisive, clear language — not jargon. Consider that the audience, for the most part, will not be as familiar with your idea, or your industry, as you are. So try to describe your concepts in a way that most people would understand, without compromising the quality of your thoughts and ideas.

5. When you practice your script, record your practice. And then watch your practice recordings — you’ll likely see some ways you can get to the point faster. Listen for places where you lose your own interest, and cut cut cut.

6. Consider asking someone else to film you. This way, you can focus on delivering your talk, not on your tech. If you’re filming yourself on your laptop or phone, remember to look directly at the camera, not at your own face on the screen.

7. Keep your video simple. You don’t need to edit or produce your video in any way — no need for onscreen graphics or fancy cuts. We’re looking for your raw talent here.

8. Be your own fabulous self. Don’t feel you need to play-act the “TED speaker” — here at TED HQ, we’re as sick of this stereotype as you are. We’re looking for people who are authentic, who have something to say and their own honest way to say it. Use your real accent, your real gestures, your everyday words — be you!

Looking for a couple of examples of great audition videos?

Watch Zak Ebrahim’s short audition video, which turned into a blockbuster TED Talk and a TED Book, and helped share his message of peace to millions of people.

Watch Sally Kohn’s short audition video, which turned into a TED Talk … after which she was invited back to give another TED Talk.

And finally, 2 pro tips:

1. Try to turn in your video and application a few hours before the deadline. Here at TED HQ, we’re going to be watching hundreds of videos the day after the deadline closes … but you can get our attention by submitting earlier in the day.

2. If the audition format just doesn’t work for you, but you still want to speak, use our form to apply to speak at a TED event, or look for a nearby TEDx event and apply to speak there! The TED Idea Search is only one of many, many ways we are looking for great ideas.

Here’s how to enter the TEDNYC Idea Search: Complete the entry form and make a 1-minute video. Your 1-minute video can be very simple: Just explain your idea in a few sentences, and give us a flavor of how you’d present it. We’ll select a dozen finalists to present a short version of their talk in our New York City theater in late January.

We can’t wait to hear your idea!

Krebs on Security: ATM Insert Skimmers: A Closer Look

KrebsOnSecurity has featured multiple stories about the threat from ATM fraud devices known as “insert skimmers,” wafer-thin data theft tools made to be completely hidden inside of a cash machine’s card acceptance slot. For a closer look at how stealthy insert skimmers can be, it helps to see videos of these things being installed and removed. Here’s a look at promotional sales videos produced by two different ATM insert skimmer peddlers.

Traditional ATM skimmers are fraud devices made to be placed over top of the cash machine’s card acceptance slot, usually secured to the ATM with glue or double-sided tape. Increasingly, however, more financial institutions are turning to technologies that can detect when something has been affixed to the ATM. As a result, more fraudsters are selling and using insert skimming devices — which are completely hidden from view once inserted into an ATM.

The fraudster demonstrating his insert skimmer in the short video above spends the first half of the demo showing how a regular bank card can freely move in and out of the card acceptance slot while the insert skimmer is nestled inside. Toward the end of the video, the scammer retrieves the insert skimmer using what appears to be a rather crude, handmade tool thin enough to fit inside a wallet.

A sales video produced by yet another miscreant in the cybercrime underground shows an insert skimmer being installed and removed from a motorized card acceptance slot that has been fully removed from an ATM so that the fraud device can be seen even while it is inserted.

In a typical setup, insert skimmers capture payment card data from the magnetic stripe on the backs of cards inserted into a hacked ATM, while a pinhole spy camera hidden above or beside the PIN pad records time-stamped video of cardholders entering their PINs. The data allows thieves to fabricate new cards and use PINs to withdraw cash from victim accounts.

Covering the PIN pad with your hand blocks any hidden camera from capturing your PIN — and hidden cameras are used in the vast majority of the more than three dozen ATM skimming incidents that I’ve covered here. Shockingly, few people bother to take this simple and effective step, as detailed in this skimmer tale from 2012, wherein I obtained hours’ worth of video seized from two ATM skimming operations and saw customer after customer walk up, insert their cards and punch in their digits — all in the clear.

Once you understand how stealthy these ATM fraud devices are, it’s difficult to use a cash machine without wondering whether the thing is already hacked. The truth is most of us probably have a better chance of getting physically mugged after withdrawing cash than encountering a skimmer in real life. However, here are a few steps we can all take to minimize the success of skimmer gangs.

-Cover the PIN pad while you enter your PIN.

-Keep your wits about you when you’re at the ATM, and avoid dodgy-looking and standalone cash machines in low-lit areas, if possible.

-Stick to ATMs that are physically installed in a bank. Stand-alone ATMs are usually easier for thieves to hack into.

-Be especially vigilant when withdrawing cash on the weekends; thieves tend to install skimming devices on a weekend — when they know the bank won’t be open again for more than 24 hours.

-Keep a close eye on your bank statements, and dispute any unauthorized charges or withdrawals immediately.

If you liked this piece and want to learn more about skimming devices, check out my series All About Skimmers.

Planet Debian: Eriberto Mota: Debian with three monitors under low cost graphics interface

Since 2008 I have used two monitors on my desktop. Yesterday I bought a new graphics card and a third monitor. For some time I had been looking for a low-cost graphics card, and I settled on a GeForce GT 740, which has three output ports: VGA, DVI and HDMI. In Brazil this card can be found for around R$ 400 (US$ 117, though mine was US$ 87 in the Brazilian Black Friday sales); elsewhere it sells for between US$ 51 and US$ 109. The chosen manufacturer was Zotac, but any GT 740 or 750 will work fine (I tested the GT 750 too).

The GeForce GT 740 was immediately recognised by Debian Jessie with the Linux 4.7.0 kernel from Backports (it is my default, so I didn't test with the original 3.16 kernel). The driver used was the default X.Org Nouveau driver. I use KDE, and managing the monitors was easy.

I hope this post can help people interested in use 3 monitors. Enjoy!



Harald Welte: Ten years anniversary of Openmoko

In 2006 I first visited Taiwan. The reason back then was Sean Moss-Pultz contacting me about a new Linux and Free Software based Phone that he wanted to do at FIC in Taiwan. This later became the Neo1973 and the Openmoko project and finally became part of both Free Software as well as smartphone history.

Ten years later, it might be worth to share a bit of a retrospective.

It was about building a smartphone before Android or the iPhone existed or even were announced. It was about doing things "right" from a Free Software point of view, with FOSS requirements going all the way down to component selection of each part of the electrical design.

Of course it was quite crazy in many ways. First of all, it was a bunch of white, long-nosed western guys in Taiwan, starting a company around Linux and Free Software, at a time where that was not really well-perceived in the embedded and consumer electronics world yet.

It was also crazy in terms of the many cultural 'impedance mismatches', and I think at some point it might even be worth writing a book about the many stories we experienced. The biggest problem here is of course that I wouldn't want to expose any of the companies or people in the many instances where something went wrong. So it will probably remain a secret to those present at the time :/

In any case, it was a great project and definitely one of the most exciting (albeit busy) times in my professional career so far. It was also great that I could involve many friends and FOSS-compatriots from other projects in Openmoko, such as Holger Freyther, Mickey Lauer, Stefan Schmidt, Daniel Willmann, Joachim Steiger, Werner Almesberger, Milosch Meriac and others. I am happy to still work on a daily basis with some of that group, while others have moved on to other areas.

I think we all had a lot of fun, learned a lot (not only about Taiwan), and were working really hard to get the hardware and software into shape. However, the constantly growing scope, the [in western terms] quite unclear and constantly changing funding/budget situation, and the many changes in direction ultimately led to us missing the market opportunity. By the time the iPhone and later Android entered the market, it was too late for a small crazy Taiwanese group of FOSS-enthusiastic hackers to still have a major impact on the landscape of smartphones. We tried our best, but in the end, after a lot of hype and publicity, it never was a commercial success.

Even sadder to me than the lack of commercial success is the lack of successful free software that resulted. Sure, some u-boot and Linux kernel drivers got merged mainline, but neither the three generations of UI stacks (GTK, Qt or EFL based), nor the GSM modem abstraction gsmd/libgsmd, nor the middleware managed to survive the end of the Openmoko company, despite having deserved to survive.

Probably the most important part that survived Openmoko was the pioneering spirit of building free software based phones. This spirit has inspired pure volunteer based projects like GTA04/Openphoenux/Tinkerphone, who have achieved extraordinary results - but who are in a very small niche.

What does this mean in practise? We're stuck with a smartphone world in which we can hardly escape any vendor lock-in. It's virtually impossible in the non-free-software iPhone world, and it's difficult in the Android world. In 2016, we have more Linux based smartphones than ever - yet we have less freedom on them than ever before. Why?

  • the amount of hardware documentation available for processors and chipsets today is typically less than it was 10 years ago. Back then, you could still get the full manual for the S3C2410/S3C2440/S3C6410 SoCs. Today, this is not possible for the application processors of any vendor
  • the tighter integration of application processor and baseband processor means that it is no longer possible on most phone designs to have the 'non-free baseband + free application processor' approach that we had at Openmoko. It might still be possible if you designed your own hardware, but it's impossible with any actually existing hardware in the market.
  • Google blurring the line between FOSS and proprietary code in the Android OS. Yes, there's AOSP - but how many features are lacking? And on how many real-world phones can you install it? Particularly with the Google Nexus line being EOL'd? One of the popular exceptions is probably Fairphone2 with its alternative AOSP operating system, even though that's not the default of what they ship.
  • The many binary-only drivers / blobs, from the graphics stack to wifi to the cellular modem drivers. It's a nightmare and really scary if you look at all of that, e.g. at the binary blob downloads for Fairphone2 to get an idea about all the binary-only blobs on a relatively current Qualcomm SoC based design. That's compressed 70 Megabytes, probably as large as all of the software we had on the Openmoko devices back then...

So yes, the smartphone world is much more restricted, locked-down and proprietary than it was back in the Openmoko days. If we had been more successful then, that world might be quite different today. It was a lost opportunity to make the world embrace more freedom in terms of software and hardware. Without single-vendor lock-in and proprietary obstacles everywhere.


Planet Debian: Julian Andres Klode: Starting the faster, more secure APT 1.4 series

We just released the first beta of APT 1.4 to Debian unstable (beta here means that we don’t know any other big stuff to add to it, but are still open to further extensions). This is the release series that will be released with Debian stretch, Ubuntu zesty, and possibly Ubuntu zesty+1 (if the Debian freeze takes a very long time, even zesty+2 is possible). It should reach the master archive in a few hours, and your mirrors shortly after that.

Security changes

APT 1.4 by default disables support for repositories signed with SHA1 keys. I announced back in January that it was my intention to do this during the summer for development releases, but I only remembered the Jan 1st deadline for stable releases supporting that (APT 1.2 and 1.3), so better late than never.

Around January 1st, the same or a similar change will occur in the APT 1.2 and 1.3 series in Ubuntu 16.04 and 16.10 (subject to approval by Ubuntu’s release team). This should mean that repository providers will have had about one year to fix their repositories, and more than 8 months since the release of 16.04. I believe that 8 months is a reasonable time frame to upgrade a repository signing key, and hope that providers who have not updated their repositories yet will do so as soon as possible.

Performance work

APT 1.4 provides a 10-20% performance increase in cache generation (and according to callgrind, we went from approx 6.8 billion to 5.3 billion instructions for my laptop’s configuration, a reduction of more than 21%). The major improvements are:

We switched the parsing of Deb822 files (such as Packages files) to my perfect hash function TrieHash. TrieHash – which generates C code from a set of words – is about as fast as, or up to twice as fast as, the previously used hash function (and two to three times faster than gperf), and we save an additional 50% of that time because we now only have to hash once, during parsing, instead of during lookup as well. APT 1.4 marks the first time TrieHash is used in any software. I hope that it will spread to dpkg and other software at a later point in time.
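To give a feel for the approach, here is a toy trie-based keyword recognizer in Python. This does not claim to show what the generated code looks like; TrieHash actually emits C code from the word list at build time. The sketch only illustrates the single-pass idea: each input character is examined once, and no hash needs to be recomputed at lookup time. The field names below are a few example Deb822 field names, not an exhaustive list:

```python
def build_trie(words):
    """Build a character trie mapping each keyword to its index."""
    trie = {}
    for index, word in enumerate(words):
        node = trie
        for ch in word:
            node = node.setdefault(ch, {})
        node[None] = index  # end-of-word marker storing the keyword's index
    return trie

def lookup(trie, word):
    """Recognize a keyword in a single left-to-right pass over its characters."""
    node = trie
    for ch in word:
        node = node.get(ch)
        if node is None:
            return None  # not a known keyword
    return node.get(None)  # None if 'word' is only a prefix of a keyword

fields = ["Package", "Version", "Architecture", "Description-MD5"]
trie = build_trie(fields)
print(lookup(trie, "Version"))  # 1
print(lookup(trie, "Ver"))      # None
```

A conventional hash table would hash the field name once when building the table and again on every lookup; the trie walk replaces both hashes with one character-by-character scan.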

Another important change was to drop normalization of Description-MD5 values, the field that maps a description in a Packages file to a translated description. We used to parse the hex digits into a native binary stream, and then converted that back to hex digits for comparisons, which cost us about 5% of the run time performance.

We also optimized one of our hash functions – the VersionHash that hashes the important fields of a package to recognize packages with the same version, but different content – to not normalize data to a temporary buffer anymore. This buffer has been the subject of some bugs (overflow, incompleteness) in the recent past, and also caused some slowdown due to the additional writes to the stack. Instead, we now pass the bytes we are interested in directly to our CRC code, one byte at a time.

There were also some other micro-optimisations: For example, the hash tables in the cache used to be ordered by standard compare (alphabetical followed by shortest). It is now ordered by size first, meaning we can avoid data comparisons for strings of different lengths. We also got rid of a std::string that cannot use short string optimisation in a hot path of the code. Finally, we also converted our case-insensitive djb hashes to not use a normal tolower_ascii(), but introduced tolower_ascii_unsafe() which just sets the “lowercase bit” (| 0x20) in the character.
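The lowercase-bit trick relies on the ASCII layout: uppercase letters differ from their lowercase counterparts only in bit 0x20. A minimal illustration in Python (APT's real implementation is C++, and `djb_hash_ci` here is an invented stand-in for its case-insensitive djb hash, not APT's actual code):

```python
def tolower_ascii_unsafe(c):
    """Unconditionally set the 'lowercase bit' (0x20).
    Maps 'A'..'Z' to 'a'..'z' and leaves 'a'..'z' and digits alone,
    but mangles some other bytes (e.g. '@' becomes '`'), hence 'unsafe':
    only valid when the input is known to be letter-like."""
    return chr(ord(c) | 0x20)

def djb_hash_ci(s):
    """Sketch of a case-insensitive djb-style hash using the trick."""
    h = 5381
    for ch in s:
        h = (h * 33 + (ord(ch) | 0x20)) & 0xFFFFFFFF
    return h

print(tolower_ascii_unsafe('A'))                         # a
print(djb_hash_ci("Package") == djb_hash_ci("PACKAGE"))  # True
```

Skipping the range check of a general tolower() removes a conditional branch from the hashing hot path, which is where the speedup comes from.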


  • Sandboxing now removes some environment variables like TMP from the environment.
  • Several improvements to installation ordering.
  • Support for armored GPG keys in trusted.gpg.d.
  • Various other fixes

For a more complete overview of all changes, consult the changelog.

Filed under: Debian, Ubuntu

Valerie Aurora: Radical self-care for activists in the time of Trump

[Content notes: disordered eating, exercise]

Like many of you, I’m struggling to take care of myself in the aftermath of the 2016 U.S. election. My friends and I are having stomach pain, trouble sleeping, difficulty staying focused on work, and many more signs of fear and stress. To add to it, as activists many of us feel a sense of urgency and obligation to act now, to push ourselves to our limits in an attempt to avert the coming disaster. I find myself thinking irrational thoughts, like “Maybe I should start sleeping less so I can write more. Do I really need to keep doing my physical therapy? Why bother keeping tax records when I’m worried about mass deportations?” Then my rational mind points out that it’s hard to write if I’m tired, or in pain, or having my tax returns audited.

This post is a collection of tips and strategies for radical self-care in the time of Trump. It’s radical self-care because taking care of yourself is crucial to being able to resist fascism and injustice. But it’s also radical because the very act of self-care is a rejection of cruelty, injustice, and oppression. We are in the process of creating a world in which we recognize every individual’s right to love and care and respect; we must treat ourselves the way we want others to be treated if we are true to our beliefs.

This post starts out with general considerations and strategy, then gets into specific concrete recommendations you can act on today. Some of the advice might accidentally trigger disordered thinking around food; we tried to write it in ways that avoid that, but if this is a concern for you, that section is last in this post and is prefaced by a separate trigger warning. If after you finish this post you’re looking for more self-care tips, try this interactive self-care guide. Thank you to the many people who contributed to this post, including David Bacome, Kara Sowles, Molly Wilson, and several anonymous contributors.

General strategy and considerations

Stressful times can bring back old fractures – things like old mental habits you thought you fixed a long time ago, or disordered eating patterns you think you have recovered from. If you have these fractures, it helps to be vigilant for the signs of them coming back, and to take those signs seriously when they happen. Don’t be too hard on yourself for relapsing to old ways under stress, especially if excessive self-criticism is part of the old mental habits you are trying to get out of. The weird thing is that stress from external sources (such as an unjust and terrifying political climate) can be a motivation to get better and to work hard on your self-care. If it helps motivate you, you can tell yourself you need to take good care of yourself so that you can help others. (It happens to be true, too!)

Many of us feel a tension between self-care and activism. Many forms of activism are costly and difficult for some people (e.g., joining in-person protests that could result in violence, or simply making phone calls when you have social anxiety). Situations of fear and urgency about societal-scale problems may activate a pattern of martyr-type thinking that goes something like this: “If I make this huge self-sacrifice and harm myself deeply, the universe will notice and be fair and reward me by fixing the bad thing.” Unfortunately, this rarely works out in the way we hope, and the end result is too often only self-harm and a reduced ability to work for good in the future.

One way out of this trap is to make a conscious search for the kind of activism that works best for you. Here are some starting ideas: engaging political representatives, joining political parties, participating in street protests, joining or forming local organisations, donating money, amplifying news, correcting misinformation, writing, educating family and friends, beginning or continuing an activist career, reaching out to groups targeted by hate, connecting folks in need with resources (like lawyers or funds for documents or hotlines), and providing background support to other people doing these things.

Try a few different things and pay attention to which forms of activism you believe are effective, and which of the possibly effective things energise and nourish you, as those will be sustainable. Don’t worry about who will do the things that you don’t like; for example, if you are terrified of public speaking, remember that more people want to speak in front of a huge audience than there are audiences who want to listen to them. Or if crowds make you anxious and fearful, don’t join the street protest – plenty of other people feel comforted and happy in a crowd.

In a tough time or an emergency, you may not be able to limit yourself only to sustainable forms of activism, but you can at least pay attention to what they are for the longer term. Try to avoid criticizing others for choosing different forms of activism, unless the actions they are taking are actively harmful to the overall cause (such as the safety pin movement) or are seriously diverting energy and resources away from crucial goals. Diversity of tactics – both in its scholarly sense and in the general sense of many people doing many different things – is key to any successful social movement.

One of the major challenges to self-care is when you are caring for others who are dependent on you: children, or disabled family members, or other folks who depend on you. Carers need to take care of themselves if they want to continue caring for others over the long term, but often the needs of those we are caring for don’t change during times of stress for the carer.

When time and energy are tight, as in a time of crisis, it helps to think explicitly about which non-self-care things you can stop doing, and where you can get more help or resources with caring for others. Society has trained us to go straight to self-sacrifice as a solution, especially for carers. Instead, explore a broader array of solutions: are there things you can stop doing without harming yourself? Maybe now is the time to call in the favors you’ve been saving up for when you need them. Are there creative ways to pool time, energy, and resources? Fear is the enemy of creativity, and creativity is key to problem-solving. Don’t let your fear lock you into a sub-optimal solution.

Physical health

If you suspect you might have something physically wrong and untreated that’s making you feel bad, take this time of great stress as extra motivation to go to a doctor and work with them on it. Small health annoyances can become big life problems under conditions of stress, so caring for your health should become more of a priority, rather than less. Pay attention to what your body is telling you and don’t ignore important signs because you’re too worried about world events.

Some health problems are not obvious. For example, it’s not uncommon for people to be low in vitamin D without knowing it, which can contribute to feelings of inertia and decision paralysis. If you might be low in vitamin D, B12, iron, or other vitamins and minerals, you can ask a medical professional for a blood test to check. Deficiencies can contribute to mental health difficulties, and they can be relatively simple to improve with food and supplements. (Note: vitamin D, like many other supplements, can be harmful to people with certain rare medical conditions – be thoughtful, do your research, and talk to a medical professional before trying any medical advice.)

For many people, regular physical activity is crucial to health and happiness – and it's even more important during times of stress. Physical activity can be a good way to reconnect with your body, especially if stress weakens that connection for you. The right activity can also help you reduce stress and anxiety getting in the way of caring for yourself and taking action. Whatever your preferred physical activity is – walking, rock-climbing, deep breathing – keep making it a priority. Some ways you can do this are: schedule a specific time each day for it, combine it with some other activity (grocery shopping, listening to podcasts, spending time with your family), make plans to do your activity with a friend, or make some kind of commitment (like paying for a nonrefundable class). When your body feels good, it's easier to make good decisions, get important work done, and care for others.

If you use Twitter, following the right accounts is a good way to get small reminders to check in with and care for your body throughout the day. Tons of apps are out there to remind you to stand up, take deep breaths, drink water, stretch, or whatever works for you.

For many people, some kind of physical self-care that resembles grooming is really helpful. This might look like getting a massage, taking a long bath, getting a pedicure, doing your makeup, shaving or clipping a beard, going to the sauna, showering more often than usual, using pretty-smelling bath products, applying lotion, or anything else in that realm. Try not to let yourself feel guilty for doing these things – if they make you feel good and they don’t take an enormous amount of time and energy, it’s worth it. Small acts of self-care can often have outsize returns.

Mental health

One of my irrational thoughts was “I should stop seeing my therapist so often, my mental health isn’t a high priority any more.” This is like saying, “I’m going on a month-long road trip driving through snow and mountains and sand, I should skip oil changes and ignore any engine warning lights during that trip.” Hopefully this sounds ridiculous!

If you are already seeing a therapist or mental health counselor of some kind, keep going to them. Tell them what you are feeling and ask for help with coping with stress and fear and anxiety. If you used to go to a therapist but stopped, consider restarting therapy with them. If you’ve been meaning to start therapy but never got around to it, now is a fantastic time to start. If your therapist isn’t helping, consider finding a new therapist. Here are some tips on finding therapists, figuring out how to afford therapy, and managing your relationship with your therapist.

You might also try a cognitive behavioral therapy app (like Moodnotes), an anxiety management app (like SAM), or a meditation app (like Headspace or Insight Timer).

Art is an important way of making sense of the incomprehensible, and of communicating it with others. If you have a creative practice of any kind, you may be surprised by the new meaning and value that it has for you in an uncertain and complicated world; creativity has a way of being both escape and engagement at the same time. You might try revisiting arts you left behind, or assigning yourself a creative routine. That said, don’t punish yourself if you don’t feel like doing anything creative right now.

One simple but highly recommended method is to stop and be aware of what is happening right now, right here, in this exact moment. Don’t think about the future, or things that aren’t right there, just use your senses to fully perceive what is around you for 10 seconds, or 30 seconds, or longer if you are practiced at it. You should feel calmer and more relaxed at the end of this exercise; if not, don’t do it.

Keeping lists of things to do or that you have done may be helpful to ground yourself in reality instead of anxiety. For example, you might start keeping a personal list of what you’ve done to fight oppression. The feeling of “we’re not doing enough” probably won’t go away as long as the problem is still there, but keeping a list, and the act of updating it with each action, can help some people remember they’re taking what concrete steps they can – and can help distract from the feeling of overwhelming powerlessness. If keeping lists makes you stressed and anxious, don’t do it.

Social self-care

Different people react to stress in different ways. Sometimes we reach out to friends and loved ones and strengthen our support system. Sometimes we isolate ourselves and withdraw from our support system. Often isolating ourselves seems like the solution when really it just makes the problem worse. People mistakenly isolate themselves when they are in need for many reasons. One is the idea that you are the source of the problem, and you are hurting other people by bringing the problem to them. Another reason is overemphasis on self-reliance and independence, leading to the idea that asking for help or support is shameful and weak. Whatever the reason, times of stress are often a good time to reach out to your friends and loved ones more, not less.

In this case, many of your friends and loved ones are under stress as well and would welcome hearing from you. Pick which of these things you are most comfortable doing and do one or two per day: texting a friend, emailing a friend, calling a friend, inviting a friend to coffee, inviting a friend to your house, organizing a dinner with friends, organizing a party, offering to help someone else organize a meetup, or saying yes to an invitation you receive.

One thing that can help reduce the stress of being around other people is to set some kind of structure around what you talk about or for how long. For example, you can suggest taking a walk for one hour and agree to talk about politics only during the last 15 minutes. Or you can have a dinner and say that no one can argue about the history of fascism, only share information about what actions they are taking now.

Situational awareness

While for many people at this time it is crucial to keep up with the news for safety reasons, this doesn't have to mean reading the news at all times. For some, self-care means choosing to catch up on news and politics only during certain times – say, for an hour a day. This can enable you to prepare yourself before you learn about the news, and take care of yourself afterwards. For example, if you use Twitter, you might filter news about the election out of your Twitter stream for most of the day, and then turn that filter off during the set time in which you catch up on that topic. It's not a perfect system, but it can enable you to skim past that crucial news article when you're not in the right place for it, knowing you'll be returning for it the next day. Or you could use a bookmarking service like Pinboard to collect links about upsetting topics to read during the 20 minutes you catch up on the news. Google Alerts are a good way to get a once-a-day roundup of news stories with certain keywords emailed to you.

You can also ask a trusted person to keep an eye on the news for you. You might ask them to tell you if anything happens that you need to know about – any major events, or anything that’s directly relevant to your safety.

Food stuff

[TRIGGER WARNING: Food-related advice below]

If you are reacting to stress by losing your appetite, it's a good idea not to skip meals entirely. You don't have to eat as much as you usually do – set some kind of achievable goal (like "half this bagel" or "one apple") and let yourself stop after that. Look for tasty, nutrient-dense foods that are easy to eat and make your stomach feel calm – this might look like smoothies, nuts or nut butters, hard-boiled eggs, bacon, chocolate, cheese, coconut, avocados, dried fruit, broth, etc. Keep easy-to-eat, easy-to-prepare foods around and available so you can take advantage of the times when you are hungry.

If you’ve internalized a lot of training (including training yourself) to only eat the “right” healthy foods, this can be unhelpful at times when you’ve lost your appetite and are low on calories (and possibly low on blood sugar). Eating a bit of anything that seems appealing to you (even if you ordinarily consider it not your preferred food to eat frequently or over the long-term) can help you bootstrap yourself back to your preferred eating style. This might not work for you depending on your eating habits, but in general this is a good time to be kind and forgiving of yourself.

If grocery shopping is overwhelming, consider a grocery delivery option. Consider stocking your freezer with appealing, easily-microwaved frozen foods, for times when it's important to eat but you don't want to cook, order or shop. For example, supermarkets carry frozen vegetables that you can steam, in the bag, in the microwave. Trader Joe's, if there's one near you, is a haven of frozen, microwavable treats. If it helps, you can stock your freezer like you're settling in for a long winter – so you know you'll always have something to eat on hand.

Hopefully this gives you some more ideas for how to practice self-care during the months and years ahead. We're in this for the long term – learning to take care of yourself now will pay off today and for years to come.

Tagged: advice, politics

CryptogramHacking and the 2016 Presidential Election

Was the 2016 presidential election hacked? It's hard to tell. There were no obvious hacks on Election Day, but new reports have raised the question of whether voting machines were tampered with in three states that Donald Trump won this month: Wisconsin, Michigan and Pennsylvania.

The researchers behind these reports include voting rights lawyer John Bonifaz and J. Alex Halderman, the director of the University of Michigan Center for Computer Security and Society, both respected in the community. They have been talking with Hillary Clinton's campaign, but their analysis is not yet public.

According to a report in New York magazine, the share of votes received by Clinton was significantly lower in precincts that used a particular type of voting machine: The magazine story suggested that Clinton had received 7 percent fewer votes in Wisconsin counties that used electronic machines, which could be hacked, than in counties that used paper ballots. That is exactly the sort of result we would expect to see if there had been some sort of voting machine hack. There are many different types of voting machines, and attacks against one type would not work against the others. So a voting anomaly correlated to machine type could be a red flag, although Trump did better across the entire Midwest than pre-election polls expected, and there are also some correlations between voting machine type and the demographics of the various precincts. Even Halderman wrote early Wednesday morning that "the most likely explanation is that the polls were systematically wrong, rather than that the election was hacked."
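The kind of comparison described above can be sketched in a few lines of code. This is purely illustrative: the vote shares below are invented, not actual Wisconsin data, and as the essay notes, a raw gap like this proves nothing about hacking on its own.

```python
# Hypothetical illustration: compare mean Clinton vote share by voting
# machine type across counties. All numbers here are made up.
counties = [
    {"machine": "electronic", "clinton_share": 0.42},
    {"machine": "electronic", "clinton_share": 0.44},
    {"machine": "paper", "clinton_share": 0.49},
    {"machine": "paper", "clinton_share": 0.51},
]

def mean_share(rows, machine):
    """Average Clinton share over counties using the given machine type."""
    shares = [r["clinton_share"] for r in rows if r["machine"] == machine]
    return sum(shares) / len(shares)

gap = mean_share(counties, "paper") - mean_share(counties, "electronic")
print(f"paper vs. electronic gap: {gap:.2%}")
```

A correlation like this is only a starting point: machine type also correlates with county demographics, so the gap would have to survive controls for those confounders before it meant anything.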

What the allegations, and the ripples they're causing on social media, really show is how fundamentally untrustworthy our hodgepodge election system is.

Accountability is a major problem for U.S. elections. The candidates are the ones required to petition for recounts, and we throw the matter into the courts when we can't figure it out. This all happens after an election, and because the battle lines have already been drawn, the process is intensely political. Unlike many other countries, we don't have an independent body empowered to investigate these matters. There is no government agency empowered to verify these researchers' claims, even if it would be merely to reassure voters that the election count was accurate.

Instead, we have a patchwork of voting systems: different rules, different machines, different standards. I've seen arguments that there is security in this setup (an attacker can't broadly attack the entire country), but the downsides of this system are much more critical. National standards would significantly improve our voting process.

Further investigation of the claims raised by the researchers would help settle this particular question. Unfortunately, time is of the essence, underscoring another problem with how we conduct elections. For anything to happen, Clinton has to call for a recount and investigation. She has until Friday to do it in Wisconsin, until Monday in Pennsylvania and until next Wednesday in Michigan. I don't expect the research team to have any better data before then. Without changes to the system, we're telling future hackers that they can be successful as long as they're able to hide their attacks for a few weeks until after the recount deadlines pass.

Computer forensics investigations are not easy, and they're not quick. They require access to the machines. They involve analysis of Internet traffic. If we suspect a foreign country like Russia, the National Security Agency will analyze what they've intercepted from that country. This could easily take weeks, perhaps even months. And in the end, we might not even get a definitive answer. And even if we do end up with evidence that the voting machines were hacked, we don't have rules about what to do next.

Although winning those three states would flip the election, I predict Clinton will do nothing (her campaign, after all, has reportedly been aware of the researchers' work for nearly a week). Not because she does not believe the researchers (although she might not), but because she doesn't want to throw the post-election process into turmoil by starting a highly politicized process whose eventual outcome will have little to do with computer forensics and a lot to do with which party has more power in the three states.

But we only have two years until the next national elections, and it's time to start fixing things if we don't want to be wondering the same things about hackers in 2018. The risks are real: Electronic voting machines that don't use a paper ballot are vulnerable to hacking.

Clinton supporters are seizing on this story as their last lifeline of hope. I sympathize with them. When I wrote about vote-hacking the day after the election, I said: "Elections serve two purposes. First, and most obvious, they are how we choose a winner. But second, and equally important, they convince the loser, and all the supporters, that he or she lost." If the election system fails to do the second, we risk undermining the legitimacy of our democratic process. Clinton's supporters deserve to know whether this apparent statistical anomaly is the result of a hack against our election system or a spurious correlation. They deserve an election that is demonstrably fair and accurate. Our patchwork, ad hoc system means they may never feel confident in the outcome. And that will further erode the trust we have in our election systems.

This essay previously appeared in the Washington Post.

Edited to Add: Green-party candidate Jill Stein is calling for a recount in the three states. I have no idea if a recount includes forensic analysis to ensure that the machines were not hacked, but I doubt it. It would be funny if it wasn't all so horrible.

Also, here's an article arguing that demographics explain all the discrepancies.

Sociological ImagesSociological Images on the Election and Beyond

Dear readers, I shut down SocImages after the election. It didn’t feel like a time for business as usual. Sociology is not a partisan enterprise, but sociologists understand themselves to be scientists and we share a body of literature from which we derive things we believe to be more fact than fiction, at least until we have better data. Donald Trump’s candidacy made a mockery of facts, while his and Pence’s policy recommendations on everything from torture to climate change to stop-and-frisk to excluding Muslims from the US to “reformability” of sexual minorities to The Wall fly in the face of available data.

I expressed my personal feelings about Trump’s election here, arguing that we need to shift our focus away from the voting patterns of our fellow Americans and toward the institutions that suppress, exclude, and differently weigh our votes. It’s a sociological argument, but also a partisan one, so I won’t reproduce it. What I will do is go forward with Sociological Images, bringing it back to keep spreading the science of sociology, putting empirical research before presumptions as best as I can, as I and so many guest posters did during the campaign.

See you on Monday.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.


Planet DebianPetter Reinholdtsen: Quicker Debian installations using eatmydata

Two years ago, I did some experiments with eatmydata and the Debian installation system, observing how using eatmydata could speed up the installation quite a bit. My testing measured speedups of around 20-40 percent for Debian Edu, where we install around 1000 packages from within the installer. The eatmydata package provides a way to disable/delay file system flushing. This is a bit risky in the general case, as files that should be stored on disk will stay only in memory a bit longer than expected, causing problems if a machine crashes at an inconvenient time. But for an installation, if the machine crashes during installation the process is normally restarted, so avoiding disk operations as much as possible to speed up the process makes perfect sense.

I added code to the Debian Edu specific installation code to enable eatmydata, but did not have time to push it any further. But a few months ago I picked it up again and worked with the libeatmydata package maintainer Mattia Rizzolo to make it easier for everyone to get this installation speedup in Debian. Thanks to our cooperation, there is now an eatmydata-udeb package in Debian testing and unstable, and simply enabling/installing it in debian-installer (d-i) is enough to get the quicker installations. It can be enabled using preseeding. The following untested kernel argument should do the trick:

preseed/early_command="anna-install eatmydata-udeb"

This should ask d-i to install the package inside the d-i environment early in the installation sequence. Having it installed in d-i will in turn make sure the relevant scripts are called just after debootstrap has filled /target/ with the freshly installed Debian system, to configure apt to run dpkg with eatmydata. This is enough to speed up the installation process. There is a proposal to extend the idea a bit further by using /etc/ instead of apt.conf, but I have not tested its impact.
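For a fully preseeded install, the same thing could presumably be expressed in the preseed file itself rather than on the kernel command line. A sketch of the equivalent preseed line, following standard d-i preseeding syntax (untested here, just like the kernel argument above):

```shell
# Equivalent preseed-file form of the kernel argument above: install
# eatmydata-udeb early, so apt/dpkg runs in /target go through eatmydata.
d-i preseed/early_command string anna-install eatmydata-udeb
```

Either form ends up running the same early_command, so the choice is only about whether you boot with extra kernel arguments or ship a preseed file.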

Planet DebianIain R. Learmonth: vmdebootstrap Sprint Report

This is now a little overdue, but here it is. On the 10th and 11th of November, the second vmdebootstrap sprint took place. Lars Wirzenius (liw), Ana Custura (ana_c) and myself were present. liw focussed on the core of vmdebootstrap, where he sketched out what the future of vmdebootstrap may look like. He documented this in a mailing list post and also presented it in a talk (video).

Ana and myself worked on live-wrapper, which uses vmdebootstrap internally for the squashfs generation. I worked on improving logging, using a better method for getting paths within the image, enabling generation of Packages and Release files for the image archive and also made the images installable (live-wrapper 0.5 onwards will include an installer by default).

Ana worked on the inclusion of HDT and memtest86+ in the live images and enabled both ISOLINUX (for BIOS boot) and GRUB (for EFI boot) to boot the text-mode and graphical installers.

live-wrapper 0.5 was released on the 16th of November with these fixes included. You can find the live-wrapper documentation online. (The documentation still needs some work; some options may be incorrectly described.)

Thanks to the sponsors that made this work possible. You’re awesome. (:

Worse Than FailureClassic WTF: Illicit Process Improvement

In celebration of Black Friday, also known as "Retail Hellscape", let's look at a retail-oriented classic WTF, which originally ran way back in 2007. We'll resume our regularly scheduled WTFs next week.--Remy

Christian R. was in trouble. Despite his experience across hardware and software, desktops and server clusters, thumb drives and SANs, he hadn't found any freelance work in weeks. It was clear that he'd have to figure something out to pay the bills.

In August, Christian applied at Drab's PCs, a large retail chain focused on computer hardware and software. He'd shopped there for years and had an impressive level of knowledge about their products, so he accepted a position in Technical Sales.

After a few months of working at Drab's PCs, Christian grew tired of one of his tasks — manually keying in orders from the online store. The online store worked by emailing orders to individual branches across the country, which were then printed, given to the branch manager, and then distributed to employees. The employees would then key in each order, line by line, item by item.

Entering orders was more time-consuming than it had to be. Since each system had a barcode scanner, it didn't make sense to retype entire UPC codes and serial numbers. Having worked with PHP's image manipulation functions, Christian decided to take on a hobby project — a quicker interface to enter online orders.

He bought himself a barcode scanner and got to work. After a few evenings of coding, he had a working prototype. It would take in an order email, convert UPCs, serial numbers, quantities, and prices to barcodes. The barcodes were aligned on the page such that the barcode scanner could simply be dragged from the top of the page to the bottom, generating a complete, accurate order.

For a few weeks, Christian would use his application rather than typing orders in manually. Even after verifying that the order was complete and correct, he would still finish well before his coworkers. Gradually, word spread about his application, so he shared it with a few friends at his store.

His circle of users were happy, but when word of Christian's application bubbled up to management, Christian was called into his boss's office. "Let's have a competition," his boss, Warren, began. "I'll have Bill enter an order against your program," he said. "He's the fastest at this, and I want to be sure that we're doing this the most efficient way we can."

Christian and Bill started, and before Bill had fully keyed in the first item, Christian had processed an entire order. Happy with the results, Warren thanked Christian for his work and told him he'd talk to the branch manager about it.

A few days later, Christian's branch manager, Larry, called him into his office. "I saw the order entry program you made," he began. "You're lucky I haven't fired you."

"I... I'm sorry?" Christian was dumbstruck. "Did it mess up an order or something?"

"No. I just don't appreciate your interfering with the deployment of the new system." The "new system" had been coming soon since the day Christian was hired. Christian had never intended to interfere with plans made by corporate, he just wanted to make his life a little easier. He tried to defend himself, but Larry was unconvinced. His application had put him at odds with corporate.

A year passed, the following winter came, and Christian was due for a performance review. After his boss, Warren, and the branch manager, Larry, had finished Christian's performance review sheet, he was called into Larry's office to review. Christian took a deep breath before walking in.

Before Christian could even sit down, his review began. "You're not smiling enough," Larry began.

"You have the best feedback out of all of our staff, though." Warren was happy. "Customers love yo-"

"But they think you're cold and unfriendly. Why don't you smile more?" Larry interrupted.

"Really, though, your technical knowledge is great," Warren said. "And I've had more customers thank me for your hel-"

"I see here that you were almost ten minutes late on June 8th. You missed a team-building exercise!" Larry scowled and leaned forward. "Why don't you tell me about that day."

"Well, there was a car accident which caused a delay," Christian began, "and I don't really have a good direct route in anyway. Still, I was at my desk, ready to serve customers when the doors opened, so I don't think it was that big a deal..."

"Yes, yes. Well, let's cut to the chase. We've decided you can keep your job," said Larry with an insulting smile. "Sound good?"

Christian was speechless. He looked to Warren for help, but Warren was timidly staring at the ground. He stumbled while mentioning a few improvements he'd made to the store, some thankful customers he'd served, but those comments were barely acknowledged.

"I'm not being considered for a raise then?" Christian finally asked.

"No, but you can keep your job," Larry reiterated.

"Will I be eligible for a raise next year?"

"N... no."

"The year after that?"

"Y... maybe."

"So, to get this straight, I have to work three years on my best behavior, be essentially the most incredible employee the store has ever had, and then, maybe I'll get a raise?"

"Well, if you put it like that..."

"Understood." Christian sighed and went back out to his desk. Two months later, he found a new position and has been there for several years now. He found out that corners were being cut across the board not only because the store didn't have a great year, but also because a new, expensive corporate office had been built that year.

And that new system is still coming soon, but it's seriously right around the corner.


Planet DebianMichael Stapelberg: Debian package build tools

Personally, I find the packaging tools which are available in Debian far too complex. To better understand the options we have, I created a diagram of tools which are frequently used, only covering the build step (i.e. no post-build quality assurance checks or packaging-time helpers):

debian package build tools

When I was first introduced to Debian packaging, people recommended I use pbuilder. Given how complex the toolchain is in the pbuilder case, I don’t understand why that is (was?) a common recommendation.

Back in August 2015, so well over a year ago, I switched to sbuild, motivated by how much simpler it was to implement ratt (rebuilds reverse build dependencies) using sbuild, and I have not looked back.

Are there people who do not use sbuild for reasons other than familiarity? If so, please let me know, I’d like to understand.

I also made a version of the diagram above, colored by the programming languages in which the tools are implemented. The chosen colors are heavily biased :-).

debian package build tools, by language

To me, the diagram above means: if you want to make substantial changes to the Debian build tool infrastructure, you need to become an expert in all of Python, Perl, Bash, C and Make. I know that this is not true for every change, but it still irks me that there might be changes for which it is required.

I propose to eliminate complexity in Debian by deprecating the pbuilder toolchain in favor of sbuild.

Sam VargheseDonald Trump won. Just get over it

Donald Trump was elected US president on November 8 but nearly three weeks later, people still do not seem to have gotten over it.

The cries of woe and anguish continue to be heard in the American media and elsewhere, many of them from the same pundits who never saw it coming.

About the only two prominent Americans who genuinely canvassed a Trump win were the filmmaker Michael Moore and the cartoonist Scott Adams. They made their predictions long before the polls, and stuck true to them right to the end.

Moore tried his best to alert people to the dangers of the poll going to Trump, even putting out an uncharacteristic piece of hagiography titled Michael Moore in Trumpland a few weeks before the election.

But he failed to convince an electorate that had made up its mind a long time before the date.

So why did Trump win?

There are numerous factors that are possible causes. For one, the voting public in the US (and many other countries too) have long ceased to be convinced by facts and figures. There is a simple reason for this: given that most of this data comes from people in authority (politicians, civic leaders, the media, so-called pundits) who lie and lie and lie again, the public have ceased to give anything they say any credence.

So if people are now complaining that the masses are unwilling to deal with facts, you know whom to blame. They will believe anything else, and you really cannot take issue with that. They pick and choose what they will believe.

A second factor that came into play was the opposition candidate herself. Hillary Clinton (and when you say that, Bill is part of the baggage) has loads of baggage and much of it is not very commendatory. During the campaign, there were leaks that showed the Democrats had tilted the balance so that Bernie Sanders, who was a much better candidate and one who would probably have defeated Trump, would not become their nominee. Did that anger probable Democrat voters? Take a guess.

Then there was the strange pattern of campaigning where Clinton did not bother to visit many states which, it was taken for granted, would vote Democrat as they have in the past. If there is one thing voters hate, it is to be taken for granted. They gave Clinton the finger. The middle finger.

And why would the Democrats put Barack Obama and his wife out to campaign for Clinton? Obama split the nation right down the colour divide when he won. The second time, he just managed to squeeze through past Mitt Romney. He is not the unifying figure many leftists and intellectuals see him to be; if anything, his election only made race more of an issue in a country where it was already a massive factor in just about everything.

Trump won just by not being a politician. He did not treat the whole thing as a popularity contest, just as an exercise that needed to be won. To all those who keep whinging that Clinton won the popular vote: it is a waste of time. Winning the election was the name of the game, not winning the popular vote.


Planet DebianDirk Eddelbuettel: RcppExamples 0.1.8

A new version of the RcppExamples package is now on CRAN.

The RcppExamples package provides a handful of short, concrete working examples showing how to set up basic R data structures in C++. This version takes advantage of the updated date and datetime classes in Rcpp 0.12.8 (which are optional for now and being phased in while we deprecate the old ones).

A NEWS extract follows:

Changes in RcppExamples version 0.1.8 (2016-11-24)

  • Updated DateExample to show vector addition available under Rcpp 0.12.8 when the (currently still phased-in and optional) new Date(time) classes are used via the define in src/Makevars{,.win}; with fallback code for older versions

  • Other minor edits to DESCRIPTION and

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Harald WelteOpen Hardware Multi-Voltage USB UART board released

During the past 16 years I have been playing a lot with a variety of embedded devices.

One of the most important tasks when debugging or analyzing embedded devices is usually getting access to the serial console on the UART of the device. That UART is often exposed at whatever logic level the main CPU/SoC/uC is running at. For 5V and 3.3V that is easy, but for the growing number of less common voltages I always had to build a custom cable or a custom level shifter.

In 2016, I finally couldn't resist any longer and built a multi-voltage USB UART adapter.

This board exposes two UARTs at a user-selectable voltage of 1.8, 2.3, 2.5, 2.8, 3.0 or 3.3V. It can also use any other logic voltage between 1.8 and 3.3V, provided it can source a reference of that voltage from the target embedded board.


Rather than just building one for myself, I released the design as open hardware under CC-BY-SA license terms. Full schematics + PCB layout design files are available. For more information see

In case you don't want to build it from scratch, ready-made machine assembled boards are also made available from

Harald WelteOpen Hardware miniPCIe WWAN modem USB breakout board released

There are plenty of cellular modems on the market in the mPCIe form factor.

Playing with such modems is reasonably easy: you can simply insert them in the mPCIe slot of a laptop or an embedded device (soekris, pc-engines or the like).

However, many of those modems actually export interesting signals like digital PCM audio or UART ports on some of the mPCIe pins, both in standard and in non-standard ways. Those signals are inaccessible in such embedded devices or in your laptop.

So I built a small break-out board which performs the basic function of exposing the mPCIe USB signals on a USB mini-B socket, providing power supply to the mPCIe modem, offering a SIM card slot at the bottom, and exposing all additional pins of the mPCIe header on a standard 2.54mm pitch header for further experimentation.


The design of the board (including schematics and PCB layout design files) is available as open hardware under CC-BY-SA license terms. For more information see

If you don't want to build your own board, fully assembled and tested boards are available from

Planet DebianSven Hoexter: first ditch effort - LyX 2.2.2 in unstable build with Qt5

No, not about the latest NOFX record, though it's a great one. Buy it. ;)

Took me a hell of a long time to get my head out of my arse and dive again into some Debian related work. Thanks to Nik for pushing me from time to time.

So I've taken the time to upload LyX 2.2.2 to unstable, and it's now built with Qt5. The package is still missing a lot of love, but I hope we once again have something for the upcoming stable release that is close to the latest upstream stable release. If you use LyX, please give it a try.

For myself, it's now the sixth year since I stopped using LyX, after maintaining it for five years. And still I sponsor the uploads and try to keep it at least functional. Strange how we sometimes take care of things we no longer actively use.

Worse Than FailureCodeSOD: Classic WTF: Injection Proof'd

It's Thanksgiving, in the US. Be thankful you're not supporting this block of code. --Remy

“When a ‘customer’ of ours needs custom-developed software to suit their business requirements,” Kelly Adams writes, “they can either ‘buy’ the development services from the IT department, or go to an outside vendor. In the latter case, we’re supposed to approve that the software meets corporate security guidelines.”

“Most of the time, our ‘approval’ is treated as a recommendation, and we end up having to install the application anyway. But recently, they actually listened to us and told the vendor to fix the ‘blatant SQL-injection vulnerabilities’ that we discovered. A few weeks later, when it came time for our second review, we noticed the following as their ‘fix’.”

internal static string FQ(string WhichField)
{
   string expression = "";
   int num2 = Strings.Len(WhichField);
   for (int i = 1; i <= num2; i++)
   {
      string str = Strings.Mid(WhichField, i, 1);
      if (str == "'")
      {
         str = str + "'";
      }
      expression = expression + str;
   }
   return Strings.Trim(
            Strings.Replace(Strings.Replace(Strings.Replace(Strings.Replace(
            Strings.Replace(Strings.Replace(Strings.Replace(Strings.Replace(
            Strings.Replace(Strings.Replace(Strings.Replace(Strings.Replace(
            Strings.Replace(Strings.Replace(Strings.Replace(Strings.Replace(
            Strings.Replace(Strings.Replace(Strings.Replace(Strings.Replace(
            Strings.Replace(Strings.Replace(Strings.Replace(Strings.Replace(
            expression,
            "xp_", "", 1, -1, CompareMethod.Text), 
            "sp_", "", 1, -1, CompareMethod.Text), 
            "--", "-", 1, -1, CompareMethod.Binary), 
            "alter table", "", 1, -1, CompareMethod.Text), 
            "drop table", "", 1, -1, CompareMethod.Text), 
            "create table", "", 1, -1, CompareMethod.Text), 
            "create database", "", 1, -1, CompareMethod.Text), 
            "alter table", "", 1, -1, CompareMethod.Text), 
            "alter column", "", 1, -1, CompareMethod.Text), 
            "drop column", "", 1, -1, CompareMethod.Text), 
            "drop database", "", 1, -1, CompareMethod.Text), 
            "1=1", "", 1, -1, CompareMethod.Text), 
            "union select", "", 1, -1, CompareMethod.Text), 
            "/*", "", 1, -1, CompareMethod.Text), 
            "*/", "", 1, -1, CompareMethod.Text), 
            "boot.ini", "", 1, -1, CompareMethod.Text), 
            "../", "", 1, -1, CompareMethod.Text), 
            "%27", "", 1, -1, CompareMethod.Text), 
            ";dir", "", 1, -1, CompareMethod.Text), 
            "|dir", "", 1, -1, CompareMethod.Text), 
            "<script", "", 1, -1, CompareMethod.Text), 
            "</script>", "", 1, -1, CompareMethod.Text), 
            "language=javascript", "", 1, -1, CompareMethod.Text), 
            "language=\"javascript\"", "", 1, -1, CompareMethod.Text));
}

Kelly adds, “of course this time, when we told them the application was still vulnerable so long as a hacker typed ‘1 = 1’ instead of ‘1=1’, they told us we were being too picky, and had us install the application anyway.”
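To see just how thin this "protection" is, the blacklist idea can be re-created in a few lines of Python. This is a hypothetical sketch, not the vendor's actual code, and it simplifies one detail (the "--" to "-" rewrite is omitted):

```python
# Hypothetical re-creation of the vendor's FQ() token blacklist,
# simplified, purely to illustrate why blacklists are not injection protection.
BLACKLIST = [
    "xp_", "sp_", "alter table", "drop table", "create table",
    "create database", "alter column", "drop column", "drop database",
    "1=1", "union select", "/*", "*/", "boot.ini", "../", "%27",
    ";dir", "|dir", "<script", "</script>", "language=javascript",
]

def fq(field: str) -> str:
    # Double up single quotes: the only part that resembles real escaping.
    out = field.replace("'", "''")
    # Strip each blacklisted token, case-insensitively.
    for token in BLACKLIST:
        lowered = out.lower()
        while token in lowered:
            i = lowered.index(token)
            out = out[:i] + out[i + len(token):]
            lowered = out.lower()
    return out.strip()

assert fq("1=1") == ""          # the literal token is stripped
assert fq("1 = 1") == "1 = 1"   # one extra space defeats the whole blacklist
```

Because the filter matches exact byte sequences, any variation the author didn't enumerate (extra whitespace, an alternate encoding, a token not on the list) passes straight through; parameterized queries remain the only real fix.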

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianRitesh Raj Sarraf: LIO -fb in Debian

LIO -fb is the new SCSI Target for Debian. Previously, we maintained the LIO tools from the pre-fork upstream branch. But, with good reasons, we've now moved to the newer -fb (Free Branch).

As the maintainer of those packages, I have a local LIO setup. Over the years, I've been tuning and using this setup with a bunch of SCSI clients. So with the new -fb packages, it was a worry for me how to migrate my old setup to the new one (note: migration is not supported by the Debian packages).


Thanks to Andy Grover for mentioning it: migrating your configuration is doable. With some minor intervention, I was able to switch my config from the old LIO setup to the new LIO -fb packages. As you can see from the output below, both outputs look the same, which is a good thing.

LIO reads its configuration from /etc/target/ and passes it into the kernel, which loads it. The live configuration lives in configfs, within the kernel. Users attempting such a migration need to ensure that the loaded configuration data remains in configfs. Then, using the new -fb tools (targetctl), the configuration data can be read back out of the kernel and written in the new format under /etc/.


/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- fileio ................................................................................................... [0 Storage Object]
  | o- iblock .................................................................................................. [4 Storage Objects]
  | | o- CENTOS ................................................................................................. [/dev/vdd, in use]
  | | o- SAN1 ................................................................................................... [/dev/vdb, in use]
  | | o- SAN2 ................................................................................................... [/dev/vdc, in use]
  | | o- SANROOT .............................................. [/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0, in use]
  | o- pscsi .................................................................................................... [0 Storage Object]
  | o- rd_mcp ................................................................................................... [0 Storage Object]
  o- ib_srpt ........................................................................................................... [0 Targets]
  o- iscsi ............................................................................................................. [3 Targets]
  | o- ................................................................................. [1 TPG]
  | | o- tpg1 ............................................................................................................ [enabled]
  | |   o- acls ............................................................................................................ [1 ACL]
  | |   | o- .................................................................... [1 Mapped LUN]
  | |   |   o- mapped_lun0 ............................................................................................. [lun0 (rw)]
  | |   o- luns ............................................................................................................ [1 LUN]
  | |   | o- lun0 ....................................................................................... [iblock/CENTOS (/dev/vdd)]
  | |   o- portals ..................................................................................................... [4 Portals]
  | |     o- ................................................................................. [OK, iser disabled]
  | |     o- ................................................................................. [OK, iser disabled]
  | |     o- ................................................................................. [OK, iser disabled]
  | |     o- ................................................................................. [OK, iser disabled]
  | o- .......................................................................... [1 TPG]
  | | o- tpg1 ............................................................................................................ [enabled]
  | |   o- acls ............................................................................................................ [1 ACL]
  | |   | o- ........................................................ [1 Mapped LUN]
  | |   |   o- mapped_lun0 ............................................................................................. [lun0 (rw)]
  | |   o- luns ............................................................................................................ [1 LUN]
  | |   | o- lun0 .................................... [iblock/SANROOT (/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0)]
  | |   o- portals ..................................................................................................... [4 Portals]
  | |     o- ................................................................................. [OK, iser disabled]
  | |     o- ................................................................................. [OK, iser disabled]
  | |     o- ................................................................................. [OK, iser disabled]
  | |     o- ................................................................................. [OK, iser disabled]
  | o- ............................................................................ [1 TPG]
  |   o- tpg1 ............................................................................................................ [enabled]
  |     o- acls ............................................................................................................ [1 ACL]
  |     | o- ................................................................. [2 Mapped LUNs]
  |     |   o- mapped_lun0 ............................................................................................. [lun0 (rw)]
  |     |   o- mapped_lun1 ............................................................................................. [lun1 (rw)]
  |     o- luns ........................................................................................................... [2 LUNs]
  |     | o- lun0 ......................................................................................... [iblock/SAN1 (/dev/vdb)]
  |     | o- lun1 ......................................................................................... [iblock/SAN2 (/dev/vdc)]
  |     o- portals ..................................................................................................... [4 Portals]
  |       o- ................................................................................. [OK, iser disabled]
  |       o- ................................................................................. [OK, iser disabled]
  |       o- ................................................................................. [OK, iser disabled]
  |       o- ................................................................................. [OK, iser disabled]
  o- loopback .......................................................................................................... [0 Targets]
  o- qla2xxx ........................................................................................................... [0 Targets]
  o- tcm_fc ............................................................................................................ [0 Targets]
  o- vhost ............................................................................................................. [0 Targets]

/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 4]
  | | o- CENTOS ........................................................................... [/dev/vdd (2.0GiB) write-thru activated]
  | | o- SAN1 ............................................................................. [/dev/vdb (1.0GiB) write-thru activated]
  | | o- SAN2 ............................................................................. [/dev/vdc (1.0GiB) write-thru activated]
  | | o- SANROOT ........................ [/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0 (8.0GiB) write-thru activated]
  | o- fileio ................................................................................................. [Storage Objects: 0]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 3]
  | o- ............................................................................... [TPGs: 1]
  | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  | |   o- acls .......................................................................................................... [ACLs: 1]
  | |   | o- .................................................................. [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ................................................................................ [lun0 block/CENTOS (rw)]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0 ........................................................................................ [block/CENTOS (/dev/vdd)]
  | |   o- portals .................................................................................................... [Portals: 4]
  | |     o- ................................................................................................ [OK]
  | |     o- ................................................................................................ [OK]
  | |     o- ................................................................................................ [OK]
  | |     o- ................................................................................................ [OK]
  | o- ........................................................................ [TPGs: 1]
  | | o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  | |   o- acls .......................................................................................................... [ACLs: 1]
  | |   | o- ...................................................... [Mapped LUNs: 1]
  | |   |   o- mapped_lun0 ............................................................................... [lun0 block/SANROOT (rw)]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0 ..................................... [block/SANROOT (/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi0-0-0-0)]
  | |   o- portals .................................................................................................... [Portals: 4]
  | |     o- ................................................................................................ [OK]
  | |     o- ................................................................................................ [OK]
  | |     o- ................................................................................................ [OK]
  | |     o- ................................................................................................ [OK]
  | o- .......................................................................... [TPGs: 1]
  |   o- tpg1 ............................................................................................... [no-gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 1]
  |     | o- ................................................................ [Mapped LUNs: 2]
  |     |   o- mapped_lun0 .................................................................................. [lun0 block/SAN1 (rw)]
  |     |   o- mapped_lun1 .................................................................................. [lun1 block/SAN2 (rw)]
  |     o- luns .......................................................................................................... [LUNs: 2]
  |     | o- lun0 .......................................................................................... [block/SAN1 (/dev/vdb)]
  |     | o- lun1 .......................................................................................... [block/SAN2 (/dev/vdc)]
  |     o- portals .................................................................................................... [Portals: 4]
  |       o- ................................................................................................ [OK]
  |       o- ................................................................................................ [OK]
  |       o- ................................................................................................ [OK]
  |       o- ................................................................................................ [OK]
  o- loopback ......................................................................................................... [Targets: 0]
  o- vhost ............................................................................................................ [Targets: 0]




Planet DebianRitesh Raj Sarraf: SAN Updates for Debian Stretch

Now that we prepare for the next Debian Stable release (Stretch), it is time to provide some updates on what the current state of some of the (storage related) packages in Debian is. This is not an update on the complete list of packages related to storage, but it does cover some of them.



  • iscsitarget - iscsitarget long stood as a great SCSI target for the Linux kernel. It seems to have had a good user base, not just among Linux users but also VMware users. But this storage target was always out-of-tree, and with LIO having been merged as the default in-kernel SCSI target, development on iscsitarget seems to have stalled. For Stretch, there will be no iscsitarget in Debian. The package has already been removed from Debian Testing and Debian Unstable, and nobody has volunteered to take it over.
  • system-storage-manager - This tool was intended to be a simple unified storage tool through which one could work with various storage technologies like LVM, BTRFS, cryptsetup, SCSI etc. But upstream development hasn't been very active lately. Given its open bugs, it shouldn't be part of the next Debian Stable.
  • libstoragemgmt - libstoragemgmt is a universal storage client-side library to talk to remote Storage Arrays. The project is active upstream. For Debian, the package is out-of-date and, now, also needs a maintainer. Unless someone picks up this package, it will not be part of Debian Stretch.



  • open-iscsi - This is the default iSCSI initiator for Linux distributions. After a long, slow development, upstream recently did a new release. This new release accomplished an important milestone: hardware offloading for QLogic cards. A special thanks to Frank Fegert, who helped with many aspects of the new iscsiuio package. And thanks to Christian Seiler, who is now co-maintaining the package, it is in great shape. We have fixed some long-outstanding bugs, and open-iscsi now has much better integration with the whole system. For Jessie too, we have up-to-date open-iscsi packages (including the new iscsiuio package, with iSCSI offload) available through jessie-backports.
  • open-isns - iSNS is the Naming Service for Storage. This is a new package in Debian Stretch. For users on Debian Jessie, Christian's efforts have made the open-isns package available in jessie-backports too.
  • multipath-tools - After years of slow development, multipath-tools too saw some active development this year, thanks to Xose and Christophe. The Debian version is up-to-date with the latest upstream release. For Debian Stretch, multipath-tools should have good integration with systemd.
  • sg3-utils - sg3 provides simple tools to query, using SCSI commands. The package is up-to-date and in good shape for Debian Stretch.
  • LIO Target - This is going to be the big entry for Debian Stretch. LIO is the in-kernel SCSI target for Linux. For various reasons, we did not have LIO in Jessie. For Stretch, thanks to Christian Seiler and Christophe Vu-Brugier, we now have the well-maintained -fb fork in Debian, which will replace the initial packages from the pre-fork upstream. The -fb fork is maintained by Andy Grover and now seems to have users from many other distributions and the kernel community. And given that the LIO -fb branch is also part of the RHEL product family, we hope to see a well-maintained project and an active upstream. The older packages targetcli, python-rtslib and python-configshell shall be removed from the archive soon.


Debian users and derivatives using these storage tools may want to test and report now, because once Stretch is released, getting new fixes in will not be easy. So please, if you rely on these tools, test them and report bugs now.




Planet DebianMichael Stapelberg: Debian stretch on the Raspberry Pi 3

The last couple of days, I worked on getting Debian to run on the Raspberry Pi 3.

Thanks to the work of many talented people, the Linux kernel in version 4.8 is _almost_ ready to run on the Raspberry Pi 3. The only missing thing is the bcm2835 MMC driver, which is required to read the root file system from the SD card. I’ve asked our maintainers to include the patch for the time being.

Aside from the kernel, one also needs a working bootloader, hence I used Ubuntu’s linux-firmware-raspi2 package and uploaded the linux-firmware-raspi3 package to Debian. The package is currently in the NEW queue and needs to be accepted by ftp-master before entering Debian.

The most popular method of providing a Linux distribution for the Raspberry Pi is to provide an image that can be written to an SD card. I made two little changes to vmdebootstrap (#845439, #845526) which make it easier to create such an image.

The Debian wiki page describes the current state of affairs; refer to it for updates, as this blog post will not be kept up to date.

As a preview version (i.e. unofficial, unsupported, etc.) until all the necessary bits and pieces are in place to build images in a proper place in Debian, I built and uploaded the resulting image. Find it at To install the image, insert the SD card into your computer (I’m assuming it’s available as /dev/sdb) and copy the image onto it:

$ wget
$ sudo dd if=2016-11-24-raspberry-pi-3-stretch-PREVIEW.img of=/dev/sdb bs=5M

I hope this initial work on getting Debian booted will motivate other people to contribute little improvements here and there. A list of current limitations and potential improvements can be found on the RaspberryPi3 Debian wiki page.


Krebs on SecurityDoD Opens .Mil to Legal Hacking, Within Limits

Hackers of all stripes looking to test their mettle can now legally hone their cyber skills, tools and weaponry against any Web property operated by the U.S. Department of Defense (DoD), according to a new military-wide policy for reporting and fixing security vulnerabilities.


Security researchers are often reluctant to report programming flaws or security holes they’ve stumbled upon for fear that the vulnerable organization might instead decide to shoot the messenger and pursue hacking charges.

But on Nov. 21, the DoD sought to clear up any ambiguity on that front for the military’s substantial online presence, creating both a centralized place to report cybersecurity flaws across the dot-mil space and a legal safe harbor (and the prospect of public recognition) for researchers who abide by a few ground rules.

The DoD said it would “deal in good faith” with researchers “who discover, test, and submit vulnerabilities or indicators of vulnerabilities in accordance with these guidelines:

“Your activities are limited exclusively to –
(1) Testing to detect a vulnerability or identify an indicator related to a vulnerability; or
(2) Sharing with, or receiving from, DoD information about a vulnerability or an indicator related to a vulnerability.”

The Department of Defense also issued the following ten commandments for demonstrating compliance with its policy:

  1. You do no harm and do not exploit any vulnerability beyond the minimal amount of testing required to prove that a vulnerability exists or to identify an indicator related to a vulnerability.
  2. You avoid intentionally accessing the content of any communications, data, or information transiting or stored on DoD information system(s) – except to the extent that the information is directly related to a vulnerability and the access is necessary to prove that the vulnerability exists.
  3. You do not exfiltrate any data under any circumstances.
  4. You do not intentionally compromise the privacy or safety of DoD personnel (e.g. civilian employees or military members), or any third parties.
  5. You do not intentionally compromise the intellectual property or other commercial or financial interests of any DoD personnel or entities, or any third parties.
  6. You do not publicly disclose any details of the vulnerability, indicator of vulnerability, or the content of information rendered available by a vulnerability, except upon receiving explicit written authorization from DoD.
  7. You do not conduct denial of service testing.
  8. You do not conduct social engineering, including spear phishing, of DoD personnel or contractors.
  9. You do not submit a high-volume of low-quality reports.
  10. If at any point you are uncertain whether to continue testing, please engage with our team.

In return, the DoD said it commits to acknowledging receipt of a report within three business days, and that it will work to confirm the existence of the vulnerability to the researcher and keep the researcher informed of any remediation underway. There are some restrictions, however. For example, researchers who report vulnerabilities will be expected to refrain from publicly disclosing their findings unless and until the DoD provides written consent that it’s okay to do so.

“We want researchers to be recognized publicly for their contributions, if that is the researcher’s desire,” the DoD stated. “We will seek to allow researchers to be publicly recognized whenever possible. However, public disclosure of vulnerabilities will only be authorized at the express written consent of DoD.”

The DoD said if it couldn’t immediately fix or publicly acknowledge reported vulnerabilities, it might be because doing so could have life-or-death consequences for service members.

“Many DoD technologies are deployed in combat zones and, to varying degrees, support ongoing military operations; the proper functioning of DoD systems and applications can have a life-or-death impact on Service members and international allies and partners of the United States,” the agency observed. “DoD must take extra care while investigating the impact of vulnerabilities and providing a fix, so we ask your patience during this period.”


The Defense Department made the announcement via HackerOne, a company that helps organizations build and manage vulnerability reporting policies. HackerOne also helps customers build out “bug bounty” programs that remunerate and recognize researchers who report security flaws.

HackerOne currently is coordinating an upcoming bug bounty program called “Hack the Army,” in which some 500 qualifying contestants can earn cash rewards for finding and reporting cybersecurity weaknesses in the Army’s various online properties (incidentally, Hack the Army runs from Nov. 30 through Dec. 21, 2016, and interested/eligible hackers have until Nov. 28, at 17:00 EST to apply for a shot at one of those 500 spots).

Alex Rice, HackerOne’s co-founder and chief technology officer, said most organizations don’t have an official policy about how they will respond to reports about cybersecurity weaknesses and liabilities, and that the absence of such a policy often discourages researchers from reporting serious security holes.

“The default is terribly unfriendly to researchers,” Rice said. “The Computer Fraud and Abuse Act (CFAA) allows almost any company to go after researchers as hackers, and this happened far too many times. What this does is carve out a safe harbor from the CFAA, and begin to create a safe place that is really powerful and important.”

Rice said HackerOne last year took an inventory of vulnerability disclosure policies across the Forbes Global 2000 list of companies, and found that only six percent of them had published guidelines.

“You cannot run an effective public vulnerability disclosure program or a bug bounty program without having competent security professionals internally,” Rice said. “The problem is, the vast majority of organizations don’t have that.”

Image: Hackerone.

And when you start asking people to find and report gaps in your cybersecurity armor, you’d better be ready for them to do just that, said Jeremiah Grossman, chief of security strategy at anti-malware firm SentinelOne.

“I’ve seen people try to launch these vulnerability disclosure programs and then fail spectacularly because they don’t have the resources to handle the response,” said Grossman, who also serves on the advisory board for Bugcrowd — one of HackerOne’s competitors. “When you’re really mature in security, and not before then, is about the right time for a bug bounty program. If the organization can handle one to five vulnerabilities reported each month and can fix each of those in a few days, then they’re probably ready.”

Rice said one reason he’s so excited about bug bounty programs is that they offer would-be security professionals a way to demonstrate their skills in a safe and controlled environment.

“If you’re a security professional looking to challenge yourself and your skills, there are very few real world opportunities to do that, to test your mettle and improve,” Rice said. “But that real-world experience is so unbelievably critical in this industry, and we need to be creating more opportunities for people to improve that. The more we can do that and share what we learn out of it, the more we can raise the talent and education of security professionals worldwide.”

Hardly a week goes by when I don’t hear from a young or career-changing reader asking for advice about how to carve out a living in cybersecurity. This happened so often that I created an entire category of posts on this topic: How to Break Into Security. I’ll be revisiting that series soon, but for the time being I want to encourage anyone interested in building their skills through legal hacking to consider creating relationships with companies that have already sanctioned — and in many cases financially reward — such activity.

For starters, Bugcrowd has a nice list of bug bounty and disclosure programs from across the Web, broken down according to whether they offer various benefits such as financial reward, swag or public recognition. HackerOne maintains a searchable directory of security contacts and vulnerability reporting policies at various corporations.

CryptogramSecuring Communications in a Trump Administration

Susan Landau has an excellent essay on why it's more important than ever to have backdoor-free encryption on our computer and communications systems.

Protecting the privacy of speech is crucial for preserving our democracy. We live at a time when tracking an individual -- ­a journalist, a member of the political opposition, a citizen engaged in peaceful protest­ -- or listening to their communications is far easier than at any time in human history. Political leaders on both sides now have a responsibility to work for securing communications and devices. This means supporting not only the laws protecting free speech and the accompanying communications, but also the technologies to do so: end-to-end encryption and secured devices; it also means soundly rejecting all proposals for front-door exceptional access. Prior to the election there were strong, sound security arguments for rejecting such proposals. The privacy arguments have now, suddenly, become critically important as well. Threatened authoritarianism means that we need technological protections for our private communications every bit as much as we need the legal ones we presently have.

Unfortunately, the trend is moving in the other direction. The UK just passed the Investigatory Powers Act, giving police and intelligence agencies incredibly broad surveillance powers with very little oversight. And Bits of Freedom just reported that "Croatia, Italy, Latvia, Poland and Hungary all want an EU law to be created to help their law enforcement authorities access encrypted information and share data with investigators in other countries."

Planet DebianJoachim Breitner: microG on Jolla

I am incorrigible in picking non-mainstream, open smartphones, and then struggling hard. Back in 2008, I tried to use the OpenMoko FreeRunner, but eventually gave up because of hardware glitches and reverted to my good old Siemens S35. It was not that I was unwilling to put up with inconveniences, but as soon as it makes life more difficult for the people I communicate with, it becomes hard to sustain.

Two years ago I tried again, and got myself a Jolla phone, running Sailfish OS. Things are much nicer now: the hardware is mature, battery life is good, and the Android compatibility layer enables me to run many important apps that are hard to replace, especially the Deutsche Bahn Navigator and various messengers, namely Telegram, Facebook Messenger, Threema and GroupMe.

Some apps that require Google Play Services, which provides a bunch of common functionality and usually comes with the Google Play store, would not run on my phone, as Google Play is not supported on Sailfish OS. So far, the most annoying ones of that sort were Uber and Lyft, making me pay for expensive taxis when others would ride cheaper, but I can live with that. I tried to install Google Play Services from shady sources, but it would regularly crash.

Signal on Jolla

Now in Philadelphia, people urged me to use the Signal messenger, and I was convinced by its support for good end-to-end crypto, while still supporting offline messages and allowing me to switch from my phone to my desktop and back during a conversation. The official Signal app uses Google Cloud Messaging (GCM, part of Google Play Services) to get push updates about new posts, and while I do not oppose this use of Google services (it really is just a ping without any metadata), this is a problem on Sailfish OS.

Luckily, the Signal client is open source, and someone created a “LibreSignal” edition that replaced the use of GCM with websockets, and indeed, this worked on my phone, and I could communicate.

Things were not ideal, though: I would often have to restart the app to get newly received messages; messages that I sent via Signal Desktop would often not show up on the phone and, most severely, basically after every three messages, sending more messages from Desktop would stop working for my correspondents, which freaked them out. (Strangely, it continued working from their phone app, so we coped for a while.)

So again, my choice of non-standard devices causes inconveniences to others. This, and the fact that the original authors of Signal and the maintainers of LibreSignal got into a fight that ended with LibreSignal being discontinued, meant that I had to change something about this situation. I was almost ready to give in and get myself a Samsung S7 or something boring of the sort, but then I decided to tackle this issue once more, following some of the more obscure instructions out there, trying to get vanilla Signal working on my phone. About a day later, I got it, and this is how I did it.


So I need Google Play Services somehow, but installing the “real thing” did not seem to be very promising (I tried, and regularly got pop-ups telling me that Play Services had crashed). But I found some references to a project called “microG”, which is an independent re-implementation of (some of) the Play Services, in particular including GCM.

Installing microG itself was easy, as you can add their repository to F-Droid. I installed the core services, the services framework and the fake store apps. If this had been all that was to do, things would be easy!

Play Store detection work arounds

But Signal would still complain about the lack of Google Play Services. It asks Android if an app with a certain name is installed, and would refuse to work if this app does not exist. For some reason, the microG apps cannot just have the names of the “real” Google apps.

There seem to be two ways of working around this: Patching Signal, or enabling Signature Spoofing.

The initially most promising instructions (which are in a README in a tarball on a fishy file hoster linked from an answer on the Jolla support forum…) suggested patching Signal, and actually came with both a version of an app called “Lucky Patcher” and a patched Android package, but both were about two years old. I tried a recent version of the Lucky Patcher, but it failed to patch the current version of Signal.

Signature Spoofing

So on to Signature Spoofing. This is a feature of some non-standard Android builds that allows apps (such as microG) to fake the existence of other apps (such as the Play Store), and it is the approach recommended by the microG project. Sailfish OS’s Android compatibility layer “Alien Dalvik” does not support it out of the box, but there is a tool, “tingle”, that adds this feature to existing Android systems. One just has to get the /system/framework/framework.jar file, put it into the input folder of this project, run the tool’s Python script, select 2, and copy the framework.jar from output/ back. Great.

Deodexing Alien Dalvik

Only that it only works on “deodexed” files. I did not know anything about odexed Android Java classes (and did not really want to know), but there was no way around it. Following this explanation I gathered that one finds files foo.odex in the Android system folder, runs some tool on them to create a classes.dex file, adds that to the corresponding foo.jar or foo.apk file, copies this back to the phone and deletes the foo.odex file.

The annoying thing is that it is not enough to deodex framework.jar in order to please tingle: if one deodexes one file, one has to deodex them all! For people using Windows, the Universal Deodexer V5 seems to be a convenient tool, but I had to go more manually.

So I first fetched “smali” and compiled it using ./gradlew build. Then I fetched the folders /opt/alien/system/framework and /opt/alien/system/app from the phone (e.g. using scp). Keep a backup of these in case something breaks. Then I ran these commands (disclaimer: I fetched these from my bash history and slightly cleaned them up. This is not a fire-and-forget script! Use it only if you know what it, and you, are doing):

cd framework
for file in *.odex; do
  java -jar ~/build/smali/baksmali/build/libs/baksmali.jar deodex $file -o out
  java -jar ~/build/smali/smali/build/libs/smali.jar a out -o classes.dex
  zip -u $(basename $file .odex).jar classes.dex
  rm -rf out classes.dex $file
done
cd ..

cd app
for file in *.odex; do
  java -jar ~/build/smali/baksmali/build/libs/baksmali.jar deodex -d ../framework $file -o out
  java -jar ~/build/smali/smali/build/libs/smali.jar a out -o classes.dex
  zip -u $(basename $file .odex).apk classes.dex
  rm -rf out classes.dex $file
done
cd ..

The resulting framework.jar can now be patched with tingle:

mv framework/framework.jar ~/build/tingle/input
cd ~/build/tingle
# run the tingle script here and select 2
cd -
mv ~/build/tingle/output/framework.jar framework/framework.jar

Now I copy these framework and app folders back on my phone, and restart Dalvik:

devel-su systemctl restart aliendalvik.service

It might start a bit slower than usual, but eventually, all the Android apps should work as before.

The final bit that was missing in my case was that I had to reinstall Signal: if it is installed before microG is installed, it does not get permission to use GCM, and when it tries (while registering, after generating the keys) it just crashes. I copied /data/data/org.thoughtcrime.securesms/ away before removing Signal and moved it back afterwards (with cp -a to preserve permissions), so that I could keep my history.

And now, it seems, vanilla Signal is working just fine on my Jolla phone!

What’s missing

Am I completely happy with Signal? No! An important feature it is lacking is a way to export all data (message history including media files) in a file format that can be read without Signal, e.g. YAML files or clean HTML code. I do want to be able to re-read some of the more interesting conversations when I am 74 or 75, and I doubt that there will be a Signal app, or even Android, then. I hope that this becomes available in time, maybe in the Desktop version.

I would also hope that pidgin gets support for the Signal protocol, so that I can conveniently use one program for all my messaging needs on the desktop.

Finally it would be nice if my Signal identity was less tied to one phone number. I have a German and a US phone number, and would want to be reachable under both on all my clients. (If you want to contact me on Signal, use my US phone number.)


Could I have avoided this hassle by simply convincing people to use something other than Signal? Tricky, at the moment. Telegram (which works super reliably for me, and has a pidgin plugin) has dubious crypto and does not support encrypted chats across multiple clients. Threema has no desktop client that I know of. OTR on top of Jabber does not support offline messages. So nothing great seems to exist right now.

In the long run, the best bet seems to be OMEMO (which is, in essence, the Signal protocol) on top of Jabber. It is currently supported by one Android Jabber client (Conversations) and one Desktop application (gajim, via a plugin). I should keep an eye on pidgin support for OMEMO and other development around this.

Planet DebianTanguy Ortolo: Generate man pages for awscli

No man pages, but almost

The AWS Command Line Interface, which is available in Debian, provides no man page. Instead, the tool has an integrated help system that allows you to run commands such as aws rds help, which, from what I have seen, generates some reStructuredText, converts it to a man page in troff format, then calls troff to convert that to text with basic formatting, and eventually passes it to a pager. Since this is close to what man does, the result looks like a degraded man page, with some features missing, such as the adaptation to the terminal width.

Well, this is better than nothing, and better than what many under-documented tools can offer, but for several reasons it still sucks: most importantly, it does not respect administrators' habits and it does not integrate with the system man database. It does not allow you to use commands such as apropos, and you will get no man page name auto-completion from your shell, since there is no actual man page.

Generate the man pages

Now, since the integrated help system does generate a man page internally, we can hack it to output it, and save it to a file:

Description: Enable a mode to generate troff man pages
 The awscli help system internally uses man pages, but only to convert
 them to text and show them with the pager. This patch enables a mode
 that prints the troff code so the user can save the man page.
 To use that mode, run the help commands with an environment variable
 OUTPUT set to 'troff', for instance:
     OUTPUT='troff' aws rds help
Forwarded: no
Author: Tanguy Ortolo <>
Last-Update: 2016-11-22

Index: /usr/lib/python3/dist-packages/awscli/
--- /usr/lib/python3/dist-packages/awscli/       2016-11-21 12:14:22.236254730 +0100
+++ /usr/lib/python3/dist-packages/awscli/       2016-11-21 12:14:22.236254730 +0100
@@ -49,6 +49,8 @@
     """
     Return the appropriate HelpRenderer implementation for the
     current platform.
     """
+    if 'OUTPUT' in os.environ and os.environ['OUTPUT'] == 'troff':
+        return TroffHelpRenderer()
     if platform.system() == 'Windows':
         return WindowsHelpRenderer()
@@ -97,6 +99,15 @@
         return contents


+class TroffHelpRenderer(object):
+    """
+    Render help content as troff code.
+    """
+    def render(self, contents):
+        sys.stdout.buffer.write(publish_string(contents, writer=manpage.Writer()))
+
+
 class PosixHelpRenderer(PagingHelpRenderer):
     """
     Render help content on a Posix-like system.  This includes

This patch must be applied from the root directory with patch -p0, as GNU patch will otherwise refuse to work on files with absolute names.

With that patch, you can run help commands with the environment variable OUTPUT='troff' to get the man page and use it as you like, for instance:

% OUTPUT='troff' aws rds help > aws_rds.1
% man -lt aws_rds.1 | lp
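The mechanism this patch relies on is just an environment-variable switch in awscli's renderer factory. Reduced to a standalone sketch (the class names mirror the patch, but this is not the real awscli code, and the troff generation is stubbed out with a print):

```python
import os
import platform

class TroffHelpRenderer:
    """Stand-in for the patched renderer: emit raw troff on stdout."""
    def render(self, contents):
        # The real patch calls docutils' publish_string(contents,
        # writer=manpage.Writer()) here instead of printing.
        print(contents)

class PosixHelpRenderer:
    """Stand-in for the default renderer (troff -> text -> pager)."""
    def render(self, contents):
        print('(formatted for the pager)')

def get_renderer():
    # The OUTPUT environment variable selects raw-troff mode, exactly
    # as in the patch; otherwise fall back to the platform default.
    if os.environ.get('OUTPUT') == 'troff':
        return TroffHelpRenderer()
    if platform.system() == 'Windows':
        raise NotImplementedError('Windows renderer omitted from this sketch')
    return PosixHelpRenderer()
```

With this shape, any help invocation takes the troff path whenever OUTPUT='troff' is present in the environment, and behaves as before otherwise.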

Generate all the man pages

Now that we are able to generate the man page of any aws command, all we need to generate all of them is a list of all the available commands. This is not that easy, because the commands are somehow derived from functions provided by a Python library named botocore, which are derived from a bunch of configuration files, and some of them are added, removed or renamed. Anyway, I have been able to write a Python script that does that, but it includes a static list of these modifications:

#! /usr/bin/python3

import subprocess
import awscli.clidriver

def write_manpage(command):
    # Run e.g. `aws rds help` with OUTPUT='troff' (see the patch above)
    # and save the generated troff code as aws_rds.1.
    manpage = open('%s.1' % '_'.join(command), 'w')
    process = subprocess.Popen(command + ['help'],
            env={'OUTPUT': 'troff'},
            stdout=manpage)
    process.wait()
    manpage.close()

driver = awscli.clidriver.CLIDriver()
command_table = driver._get_command_table()

renamed_commands = {
        'config': 'configservice',
        'codedeploy': 'deploy',
        's3': 's3api'}
added_commands = {
        's3': ['cp', 'ls', 'mb', 'mv', 'presign', 'rb', 'rm', 'sync',
               'website']}
removed_subcommands = {
        'ses': ['delete-verified-email-address'],  # …
        'ec2': ['import-instance', 'import-volume'],
        'emr': ['run-job-flow', 'describe-job-flows',
                'add-job-flow-steps', 'terminate-job-flows',
                'list-bootstrap-actions', 'list-instance-groups'],  # …
        'rds': ['modify-option-group']}
added_subcommands = {
        'rds': ['add-option-to-option-group']}  # …

# Build a dictionary of real commands, including renames, additions and
# removals.
real_commands = {}
for command in command_table:
    subcommands = []
    subcommand_table = command_table[command]._get_command_table()
    for subcommand in subcommand_table:
        # Skip removed subcommands
        if command in removed_subcommands \
                and subcommand in removed_subcommands[command]:
            continue
        subcommands.append(subcommand)
    # Add added subcommands
    if command in added_subcommands:
        for subcommand in added_subcommands[command]:
            subcommands.append(subcommand)
    # Directly add non-renamed commands
    if command not in renamed_commands:
        real_commands[command] = subcommands
    # Add renamed commands
    else:
        real_commands[renamed_commands[command]] = subcommands
# Add added commands
for command in added_commands:
    real_commands[command] = added_commands[command]

# For each real command and subcommand, generate a manpage
for command in real_commands:
    write_manpage(['aws', command])
    for subcommand in real_commands[command]:
        write_manpage(['aws', command, subcommand])

This script will generate more than 2,000 man page files in the current directory; you will then be able to move them to /usr/local/share/man/man1.

Since this is a lot of man pages, it may be appropriate to concatenate them by major command, for instance all the aws rds together…
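That concatenation step can be sketched with a short helper. It assumes only the naming scheme used above (aws_<command>.1 and aws_<command>_<subcommand>.1; subcommands contain hyphens, never underscores, so splitting on underscores is unambiguous). The function names are mine, not from any existing tool:

```python
import os
from collections import defaultdict

def group_manpages(directory):
    """Group generated aws_*.1 files by their major command.

    Returns e.g. {'rds': ['aws_rds.1', 'aws_rds_describe-db-instances.1']};
    sorting puts the major command's own page before its subcommand pages.
    """
    groups = defaultdict(list)
    for name in sorted(os.listdir(directory)):
        if not (name.startswith('aws_') and name.endswith('.1')):
            continue
        major = name[len('aws_'):-len('.1')].split('_')[0]
        groups[major].append(name)
    return dict(groups)

def concatenate_manpages(directory, outdir):
    """Write one combined troff file per major command into outdir."""
    os.makedirs(outdir, exist_ok=True)
    for command, files in group_manpages(directory).items():
        with open(os.path.join(outdir, 'aws_%s.1' % command), 'w') as out:
            for name in files:
                with open(os.path.join(directory, name)) as page:
                    out.write(page.read())
```

Running concatenate_manpages('.', 'combined') would then leave, for instance, one aws_rds.1 containing the rds page followed by all of its subcommand pages.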

Cory DoctorowCar Wars: a dystopian science fiction story about the nightmare of self-driving cars

Melbourne’s Deakin University commissioned me to write a science fiction story about the design and regulation of self-driving cars, inspired by my essay about the misapplication of the “Trolley Problem” to autonomous vehicles.

The story, Car Wars, takes the form of a series of vignettes that illustrate the problem with designing cars to control their drivers, interspersed with survey questions to spur discussion of the wider issues of governments and manufacturers being able to control the operation of devices we own and depend on.

It’s pretty much the most beautiful treatment any of my stories has ever had online, and I love how it’s been embedded in a wider context.


‘We’re dead.’

‘Shut up, Jose, we’re not dead. Be cool and hand me that USB stick. Keep your hands low. The cop can’t see us until I open the doors.’

‘What about the cameras?’

‘There’s a known bug that causes them to shut down when the LAN gets congested, to clear things for external cams and steering. There’s also a known bug that causes LAN traffic to spike when there’s a law-enforcement override because everything tries to snapshot itself for forensics. So the cameras are down inside. Give. Me. The. USB.’

Jose’s hand shook. I always kept the wireless jailbreaker and the stick separate – plausible deniability. The jailbreaker had legit uses, and wasn’t, in and of itself, illegal.

I plugged the USB in and mashed the panic-sequence. The first time I’d run the jailbreaker, I’d had to kill an hour while it cycled through different known vulnerabilities, looking for a way into my car’s network. It had been a nail-biter, because I’d started by disabling the car’s wireless – yanking the antenna out of its mount, then putting some Faraday tape over the slot – and every minute that went by was another minute I’d have to explain if the jailbreak failed. Five minutes offline might just be transient radio noise or unclipping the antenna during a car-wash; the longer it went, the fewer stories there were that could plausibly cover the facts.

But every car has a bug or two, and the new firmware left a permanent channel open for reconnection. I could restore the car to factory defaults in 30 seconds, but that would leave me operating a vehicle that was fully uninitialised, no ride history – an obvious cover-up. The plausibility mode would restore a default firmware load, but keep a carefully edited version of the logs intact. That would take three to five minutes, depending.

‘Step out of the vehicle please.’

‘Yes, sir.’

I made sure he could see my body cam, made it prominent in the field of view for his body cam, so there’d be an obvious question later, if no footage was available from my point of view. It was all about the game theory: he knew that I knew that he knew, and other people would later know, so even though I was driving while brown, there were limits on how bad it could get.

‘You too, sir.’

Car Wars [Cory Doctorow/Deakin University]

CryptogramHeadphones as Microphones

Surprising no one who has been following this sort of thing, headphones can be used as microphones.

Worse Than FailureError'd: Actually, My Father was a Folding Chair

It's a holiday week this week, so today is our Friday. Enjoy an Errord. - Remy

"I have to wonder what on earth possessed those parents to add that suffix to their kid's name," writes Mack C.


"For me, 'impossible' errors are the best kind of errors," wrote Tim D.


Will K. writes, "Remove all keys before shutting down? Then put them all back when you boot up? Truly unbeatable security!"


"I'm pretty proud of the fact that only myself and & percent of my fellow Americans know the state capitals," Eric wrote.


"Orbitz has a funny idea of what 'vicinity' means," writes Steve L.


Dan wrote, "Yeah, I think that we've all had one of those days."


"I was looking for open phone development positions, but I don't think that I can take a pay cut like this," writes Randy R.


[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!


TEDWireless advances in treating spinal cord damage, morphing wings for aircraft, and the world’s tallest tropical trees


Just a few of the intriguing headlines involving members of the TED community this week:

Advances in treating spinal cord damage. In Nature, Grégoire Courtine and a team of scientists announced that they had successfully used a wireless brain-spine interface to help monkeys whose spinal cord damage had paralyzed one leg regain the ability to walk. Compared to other similar systems, the wireless component is unique, allowing the monkeys to move around freely without being tethered to electronics. Speaking with The New York Times, Courtine emphasized that the goal of the system is not to fix paralysis, but rather to provide better rehabilitation for patients. (Watch Grégoire’s TED Talk)

A new instrument to shed light on distant planets. A team of scientists and engineers, including TEDster Jeremy Kasdin, have used a new instrument to isolate and analyze the light emitted by planets orbiting nearby stars. The instrument, CHARIS, was designed and built by Kasdin’s team. By analyzing the light emitted by the planets, researchers are able to determine more details about their age, size and atmospheric composition. This operation was a test run, and is part of a larger scientific effort to find and analyze exoplanets. (Watch Jeremy’s TED Talk)

Bendable, morphing wings for aircraft. In Soft Robotics, Neil Gershenfeld and a team of researchers describe a new bendable, morphing wing that could make aircraft more agile and fuel-efficient, as well as simplify the manufacturing process. Morphing wings have long been a goal of researchers; previous attempts used mechanical control structures within the wing to deform it, but these structures were heavy, canceling out any fuel-efficiency gains, and they added complexity. The new method makes the entire wing the mechanism: its shape can be changed along its entire length by activating two small motors that apply a twisting pressure to each wingtip. (Watch Neil’s TED Talk)

A deadly Ebola mutation. New research suggests that a mutation in the Ebola virus may be responsible for the scale of the epidemic that began in 2013 in West Africa. The research, conducted by a team of researchers that included TEDster Pardis Sabeti, showed that roughly 3 months after the initial outbreak, and about the time the epidemic was detected, the virus had mutated. The mutation made the virus better suited for humans than its natural host, the fruit bat, which may have allowed the virus to spread more aggressively. Working independently, another team of researchers came to a similar conclusion, but the role of the mutation in Ebola’s virulence and transmissibility still needs to be clarified. (Watch Pardis’ TED Talk)

The future of transportation. Bjarke Ingels’ firm (BIG) released its design plans for a hyperloop system that would connect Dubai and Abu Dhabi in just 12 minutes, a journey that now takes more than two hours by car. With a system of autonomous pods, the group hopes to eliminate waiting time; their design reveal includes conceptual images and video showing, from start to finish, what the passenger experience would be like. BIG made the designs for Hyperloop One, one of the companies racing to make Elon Musk’s concept a reality. (Watch Bjarke’s TED Talk)

The world’s tallest tropical trees. Using laser scanning, Greg Asner has identified the world’s tallest tropical tree, along with 50 other record-breakers. The tree, located in Sabah, Malaysian Borneo, stands 94.1 meters tall or, as Asner said for comparison, about the height of five sperm whales stacked snout-to-fluke. He measured the tree using a laser scanning technology called LIDAR (for Light Detection and Ranging), and since the measurement was taken remotely, the researchers are unsure of the exact species of the tree, but think it is likely in the genus Shorea. Discoveries aside, Asner is still analyzing this new data about the forests, which he hopes to make publicly available so that policymakers can make more informed conservation plans. (Watch Greg’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

CryptogramGovernment Propaganda on Social Media

Vice Motherboard has an interesting article about governments using social-media platforms for propaganda and surveillance, and the companies that are supporting this.

Planet DebianLars Wirzenius: Debian miniconf in Cambridge

I spent a few days in Cambridge for a minidebconf. This is a tiny version of the full annual Debconf. We had a couple of days for hacking, and another two days for talks.

I spent my hacking time thinking about vmdebootstrap (my tool for generating disk images with an installed Debian), and came to the conclusion that I need to atone for my sins of writing such crappy code by rewriting it from scratch to be nicer to use. I gave a talk about this, too. The mailing list post has the important parts, and meetings-archive has a video.

I haven't started the rewrite, and it's not going to make it for stretch.

I also gave two other talks, on the early days of Linux, and Qvarn, the latter being what I do at work.

Thank you to ARM for sponsoring the location, and to the other sponsors for sponsoring food. These in-real-life meetings between developers are important for the productivity and social cohesion of Debian.

Cryptogram"Security for the High-Risk User"

Interesting paper. John Scott-Railton on securing the high-risk user.

Krebs on SecurityAkamai on the Record KrebsOnSecurity Attack

Internet infrastructure giant Akamai last week released a special State of the Internet report. Normally, the quarterly accounting of noteworthy changes in distributed denial-of-service (DDoS) attacks doesn’t delve into attacks on specific customers. But this latest Akamai report makes an exception in describing in great detail the record-sized attack against KrebsOnSecurity.com in September, the largest such assault it has ever mitigated.

“The attacks made international headlines and were also covered in depth by Brian Krebs himself,” Akamai said in its report, explaining one reason for the exception. “The same data we’ve shared here was made available to Krebs for his own reporting and we received permission to name him and his site in this report. Brian Krebs is a security blogger and reporter who does in-depth research and analysis of cybercrime throughout the world, with a recent emphasis on DDoS. His reporting exposed a stressor site called vDOS and the security firm BackConnect Inc., which made him the target of a series of large DDoS attacks starting September 15, 2016.”

A visual depiction of the increasing size and frequency of DDoS attacks against KrebsOnSecurity.com between 2012 and 2016. Source: Akamai.

Akamai said so-called “booter” or “stresser” DDoS-for-hire services that sell attacks capable of knocking Web sites offline continue to account for a large portion of the attack traffic in mega attacks. According to Akamai, most of the traffic from those mega attacks in Q3 2016 were thanks to Mirai — the now open-source malware family that was used to coordinate the attack on this site in September and a separate assault against infrastructure provider Dyn in October.

Akamai said the attack on Sept. 20 was launched by just 24,000 systems infected with Mirai, mostly hacked Internet of Things (IoT) devices such as digital video recorders and security cameras.

“The first quarter of 2016 marked a high point in the number of attacks peaking at more than 100 Gbps,” Akamai stated in its report. “This trend was matched in Q3 2016, with another 19 mega attacks. It’s interesting that while the overall number of attacks fell by 8% quarter over quarter, the number of large attacks, as well as the size of the biggest attacks, grew significantly.”

As detailed here in several previous posts, KrebsOnSecurity.com was a pro-bono customer of Akamai, beginning in August 2012 with Prolexic, before Akamai acquired that company. Akamai mentions this as well in explaining its decision to terminate our pro-bono arrangement. KrebsOnSecurity is now behind Google‘s Project Shield, a free program run by Google to help protect journalists and dissidents from online censorship.

“Almost as soon as the site was on the Prolexic network, it was hit by a trio of attacks based on the Dirt Jumper DDoS toolkit,” Akamai wrote of this site. “Those attacks marked the start of hundreds of attacks that were mitigated on the routed platform.”

In total, Akamai found, this site received 269 attacks in the little more than four years it was on the Prolexic/Akamai network.

“During that time, there were a dozen mega attacks peaking at over 100 Gbps,” the company wrote. “The first happened in December 2013, the second in February 2014, and the third in August 2015. In 2016, the size of attacks accelerated dramatically, with four mega attacks happening between March and August, while five attacks occurred in September, ranging from 123 to 623 Gbps. An observant reader can probably correlate clumps of attacks to specific stories covered by Krebs. Reporting on the dark side of cybersecurity draws attention from people and organizations who are not afraid of using DDoS attacks to silence their detractors.”

In case any trenchant readers wish to attempt that, I’ve published a spreadsheet here (in .CSV format) which lists the date, duration, size and type of attack used in DDoS campaigns against this site over the past four years. Although 269 attacks over four years works out to an average of just one attack roughly every five days, both the frequency and intensity of these attacks have increased substantially over the past four years as illustrated by the graphic above.
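That back-of-the-envelope arithmetic is easy to reproduce against a log in this shape. A minimal sketch in Python, using invented sample rows and hypothetical column names (`date`, `duration`, `size_gbps`, `type`) that may not match the published spreadsheet:

```python
import csv
import io
from datetime import datetime

# Hypothetical sample rows in the shape the post describes;
# the real spreadsheet's column names and values may differ.
sample = """date,duration,size_gbps,type
2013-12-01,120,100,SYN flood
2016-09-20,180,623,GRE flood
2016-09-22,90,555,ACK flood
"""

rows = list(csv.DictReader(io.StringIO(sample)))
dates = sorted(datetime.strptime(r["date"], "%Y-%m-%d") for r in rows)

span_days = (dates[-1] - dates[0]).days
avg_days_between = span_days / (len(dates) - 1)  # mean gap between attacks
largest = max(rows, key=lambda r: float(r["size_gbps"]))

print(f"{len(rows)} attacks over {span_days} days "
      f"(one every {avg_days_between:.1f} days on average); "
      f"largest: {largest['size_gbps']} Gbps {largest['type']}")
```

Swapping in the real CSV is a one-line change (`open("krebs-attacks.csv")` instead of the `StringIO` sample), assuming the columns line up.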

“The magnitude of the attacks seen during the final week were significantly larger than the majority of attacks Akamai sees on a regular basis,” Akamai reports. “In fact, while the attack on September 20 was the largest attack ever mitigated by Akamai, the attack on September 22 would have qualified for the record at any other time, peaking at 555 Gbps.”

Akamai found that the third quarter of 2016 marked a full year with China as the top source country for DDoS attacks, at just under 30 percent of attack traffic in Q3 2016. The company notes that this metric doesn’t count UDP-based attacks, such as amplification and reflection attacks, because their sources are easily spoofed and would significantly distort the data.

“More importantly, the proportion of traffic from China has been reduced by 56%, which had a significant effect on the overall attack count and led to the 8% drop in attacks seen this quarter,” Akamai reported. “The U.S., U.K., France, and Brazil round out the remaining top five source countries.”

Top sources of DDoS attacks. Image: Akamai.

A copy of Akamai’s Q3 2016 State of the Internet report is available here.

Worse Than FailureUnpythonic

From: Kirby McCloy
Subject: Concerns about SMERPS
The SMERPS project seems to be going down the wrong path. I thought our quarterly goal was IT modernization.

The email carried no specific call to action. It barely had a point, and was little more than bad-natured griping. It also came from Kirby, the CTO. The email triggered a four-alarm underpants fire as every manager on the SMERPS project tried to guess what Kirby might possibly mean.

Somewhere amid the frenzied cries of “Chris, did you see Kirby’s email? How do we reply?”, someone had the bright idea that maybe this was just politics. Maybe Kirby just wanted to feel like he was part of the process, that his input was valued. They could just schedule a little sit-down with Kirby, the PMs, and a few of the lead developers, and smooth this whole thing over.

Thus, Brittany found herself with an entire Friday afternoon blocked off for a meeting. None of the large conference rooms were available, which meant three PMs, the project coordinator, and four developers had to cram into a small office to review the plan. Thirty minutes into the meeting, they were all huddled around the projector for warmth, and the CTO was a no-show.

That didn’t dissuade management from trying to keep the meeting on track. “Well, while we wait for Kirby,” Chris said, “we can make sure we’re all on the same page. Let’s review the current plan.”

For the next two hours, the PMs nattered on about critical paths, resource leveling, and project milestones that were already unlikely to bear any resemblance to reality, and would only slip further with each new bit of overmanagement. Brittany was nearly asleep when Chris called her name. “Why don’t you tell us about the technical side for the web team?”

“Well,” Brittany said, “SMERPS is a pretty straightforward CRUD app.” She noticed the vaguely surprised and offended look among some of the PMs and quickly explained, “Create-read-update-delete. A basic data-management tool.” The application needed to be accessible from the corporate office, at manufacturing sites, and at customer locations, and work on mobile devices. “All in all, it’s very similar to apps like RDR, TPM, and PlusPoint, so we’re planning to use the same tech-stack.”

Specifically, SQL Server for the database, C# for the backend services, and Angular 2 with TypeScript on the front end. A good stretch of the project could be scaffolded out with automated tools, and most of the rest could be lifted from other projects. The hard parts (the 10% of the code that’d take 90% of the time to build) were the places where it needed to talk to the ERP system.
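For readers outside the trade, the CRUD pattern Brittany describes really is that small. A toy sketch, in Python for brevity rather than the team’s actual C#/Angular stack, with an invented item shape:

```python
import itertools

class CrudStore:
    """A minimal in-memory create-read-update-delete store."""

    def __init__(self):
        self._items = {}
        self._ids = itertools.count(1)  # auto-incrementing IDs

    def create(self, data):
        item_id = next(self._ids)
        self._items[item_id] = dict(data)
        return item_id

    def read(self, item_id):
        return self._items[item_id]

    def update(self, item_id, data):
        self._items[item_id].update(data)

    def delete(self, item_id):
        del self._items[item_id]

store = CrudStore()
widget_id = store.create({"name": "widget", "qty": 3})
store.update(widget_id, {"qty": 5})
print(store.read(widget_id))  # {'name': 'widget', 'qty': 5}
store.delete(widget_id)
```

A real SMERPS-style app wraps each of these four operations in an HTTP endpoint and swaps the dictionary for a database; the shape of the code barely changes.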

Brittany was in the process of making this explanation when Kirby swept into the room. “Sorry I’m late,” he said, “and I can’t stay long. But I have a few issues I’d like this team to address. First, there are a lot of resources on this project. I want you to be lean. There should be one developer on this project.”

“That’s impossible,” Brittany said.

The CTO rolled right over her. “It is if you’re using the right tools. Before this meeting, I did a little research, and did you know that Python is the number two programming language in the world? We’re going to use that for this project, which should make our developer more efficient.”

This statement was greeted with silence and a vaguely shell-shocked look. The CTO took this for agreement, rapped his knuckles on the table, and said, “Great. Good. Get on that. Email me with any questions. Now, if you’ll excuse me…”

John Cleese, dressed as a viking, in front of a picture of Spam; from the sketch show Monty Python's Flying Circus
What a Python might look like

As the door closed behind Kirby, Chris stepped up. “Okay, so you heard what the CTO suggested. Let’s not go making any big decisions just yet. Scott, Lisa, I need you to write up a clearer picture of the ERP side of the project, and why we need multiple ERP developers. Larry, Bob: you do the same for the web team. Brittany, before you leave for the day, I need you to do an alternatives analysis that compares our current tech with Python. Be objective and fair, but… well…”

“Well,” indeed. Brittany had no real opposition to Python as a language, but definitely did not like the idea of making a massive shift just on a CTO’s whims. She focused her analysis on a few key points. First, no one in their organization actually knew Python. Their entire portfolio was some flavor of .NET and the newer projects had added Angular. Their entire toolchain, build-process, continuous integration process, etc., all were built to support C# and Angular projects. Even beyond that, Python didn’t perform as well as C#, and since the requirements wanted a single-page application, they’d need to use Angular anyway, so there was no getting rid of Angular.

Brittany did her best to be thorough. That was easy. Being polite was harder. She was working late on Friday night to get the document over to Chris, who was also working late. When she hit send, he instantly replied to her with a big “THANKS!”. She went home, and ignored work until Monday.

On Monday, there was an email from Chris. “Got a meeting with Kirby at 11AM. Will follow up after.”

At 11:15, Brittany got an email from Kirby. “Saw your analysis,” he wrote, “but with 1 hour of research, I can disagree with it. Angular and TypeScript is old. Python is new, and Google is writing everything with it. Python is the best practices for development.”

The project was put on hold while everyone tried to talk some sense into Kirby. Kirby was adamant, though: he read that Google used Python, and so Initech also needed to use Python. “If our team still needs to use Angular, just use the Python version,” were his final words on the subject.

Brittany pulled Chris aside. “Chris, does Kirby even know what Python is? He clearly doesn’t know what Angular is. What happens if we just say, ‘Yes, we’ll use Python,’ and then… don’t?”

And that’s how Brittany completed her first major development project in Python, although it didn’t actually contain a single line of Python code.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaBinh Nguyen: Challenges to the US Empire, International News Sites, and More

Over and over again in international media you hear about the strengthening and/or weakening of the US empire. It seems clear that in spite of a recovery from the GFC there are still problems Global Empire - What's Wrong with the USA POLICING THE COP Ft. Retired U.S. Lt. Gen. Mark Hertling https://

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main December 2016 Meeting: HPC Linux in Europe / adjourned SGM / Intro to FreeBSD

Dec 6 2016 18:30
Dec 6 2016 20:30

6th Floor, 200 Victoria St. Carlton VIC 3053


• Lev Lafayette, High Performance Linux in Europe
• adjourned Special General Meeting
• Peter Ross, Introduction to FreeBSD

200 Victoria St. Carlton VIC 3053 (the EPA building)

Late arrivals needing access to the building and the sixth floor please call 0490 049 589.

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.

December 6, 2016 - 18:30

read more


Valerie AuroraA post-election guide to changing hearts and minds

I just published a guide to changing the hearts and minds of lukewarm Trump supporters over at the amazing Captain Awkward advice blog. I took what I learned from teaching the Ally Skills Workshop and turned it into a step-by-step process for changing people’s minds effectively: identifying where you have the most influence, choosing whom to spend time with, finding shared values, and using compassion and vulnerability on your part to help the listener develop their compassion towards those who need it most. Here’s the introduction:

Many of us are grappling with how to use our skills and influence to resist the upcoming Trump administration and the hatred and violence that it inspires. As Captain Awkward readers, we’ve been practicing setting boundaries, standing up for our values, and making it awkward for the right person. We are uniquely prepared for a crucial part of the next few months or years: changing the minds of people who support the Trump administration, and standing up to the abusers they are empowering. This post teaches scripts and techniques to do these two tasks, along with the theory behind them. It’s for people living in the U.S., but it may be useful to people living elsewhere as well.

And now I will give you some strange advice: Read the comments on that post! Captain Awkward is a case study (along with Metafilter) in how positive and useful a comments section can be if you have a strong code of conduct and enforce it. Enjoy the unfamiliar sensation of reading the comments and enjoying them!

If you have read my last two blog posts, you know I’m not hopeful for the future of human rights in the United States (and around the world). I don’t believe that changing the minds of wavering Trump supporters will be anything like enough to prevent fascism and kleptocracy. However, I think any other effort will fail unless we drastically lower the percentage of U.S. voters who support Trump. That’s why I licensed that guide CC BY-SA – please feel free to copy, modify, and redistribute it without charge as long as you credit the authors.

If you like what you see on Captain Awkward, please consider joining me and becoming a monthly donor (or chipping in a few bucks now). Their work is crucial to the task we have before us.

Tagged: ally skills, fascism, politics

Chaotic IdealismQ&A: Online IQ tests

Q: Where can I find a valid online IQ test?

It’s not really possible to get a valid IQ test online. They have to be administered and scored by humans because there are a lot of judgment calls involved, and a multiple-choice format simply doesn’t lend itself to that kind of thing.

I’m from the US, and I’ve studied IQ a lot because I’m fascinated with statistics and with tests and measures (I’ve got a psychology degree, plus I’m autistic, which makes me a very obsessive type of researcher!). And take it from me: People in the US overvalue IQ. It means much less than it seems, and says less about intelligence than people think it does.

IQ tests break down whenever someone who's atypical in some way gets tested. If your neurology is unusual, your communication style is unusual, even your culture is different, IQ tests start to say less and less until in the end, they say nothing.

The tests aren't utterly useless. Generally, we can tell if somebody’s outright gifted or outright learning-disabled from an IQ test, if it’s administered carefully on a good day with no cultural barriers, but the precise numbers themselves are really very deceptive. The idea that somebody with an IQ of 112 is smarter than someone with an IQ of 110 is just ridiculous. It just isn’t that precise. Only once you get to two or more standard deviations worth of difference do I feel that the differences are worth making a note of—and since the IQ test has a standard deviation of 15 or 16, that’s a big difference, the difference between average and gifted or average and intellectually disabled.
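That claim is easy to check numerically. Assuming the conventional model of IQ scores as a normal distribution with mean 100 and standard deviation 15:

```python
from statistics import NormalDist

# Standard IQ scaling: mean 100, standard deviation 15.
iq = NormalDist(mu=100, sigma=15)

for score in (110, 112, 130):
    print(f"IQ {score} is at roughly the {iq.cdf(score) * 100:.0f}th percentile")

# 110 vs. 112 is roughly the 75th vs. 79th percentile -- a gap easily
# swamped by measurement error. 130 (two standard deviations) is around
# the 98th percentile, a difference actually worth noting.
```

The few-percentile gap between 110 and 112 is well inside the error bars of any real test administration, which is exactly the point above.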

That’s not so surprising, considering that the original IQ tests were meant to identify students who needed extra help. They still fulfill that function reasonably well. But they were never meant to rank people by intelligence.

If face-to-face IQ tests are of so little worth, fail so often and say so little about us, you really can’t expect online IQ tests to be worth much at all.

Instead of worrying about IQs, we should focus on what we’re good at doing, what we’ve worked hard on, what we enjoy learning. That’s what really matters.

To those who want to learn more about IQ, I recommend the book "The Mismeasure of Man", by Stephen Jay Gould. It’s old—written in 1981—but it addresses a lot of the issues with testing intelligence and cognition, and explains why it’s hard to do and why it's much less applicable to daily life than you’d think.

TEDHave a TED Talk idea? Apply to the TEDNYC Idea Search 2017

Do you have a TED Talk you’ve always wanted to try out in front of an audience? We’re thrilled to announce that applications are open for our TEDNYC Idea Search 2017 in New York City.

Anyone with an idea worth spreading is invited to apply; 10 finalists will share their risky, quirky, fascinating ideas in under 6 minutes, in late January, onstage at the TED theater in Manhattan. The TEDNYC Idea Search is a chance for us to find fresh voices to ring out on the TED stage.

Some of these talks will be posted on the online TED platform; other speakers will be invited to expand on their talks on the TED2017 main stage in Vancouver in the spring of 2017. Joshua Prager, Hannah Brencher, Richard Turere and Hyeonseo Lee — all these speakers are fantastic finds from previous TED talent searches.

The deadline to apply is November 28 at 6pm Eastern time. To apply, you’ll need to fill out this form and make a 1-minute video describing your talk idea. One note: We can’t cover travel to New York City for finalists from out of town; we encourage applicants from the tri-state area surrounding New York.

Apply to speak at the TEDNYC Idea Search 2017 >>



CryptogramDumb Security Survey Questions

According to a Harris poll, 39% of Americans would give up sex for a year in exchange for perfect computer security:

According to an online survey among over 2,000 U.S. adults conducted by Harris Poll on behalf of Dashlane, the leader in online identity and password management, nearly four in ten Americans (39%) would sacrifice sex for one year if it meant they never had to worry about being hacked, having their identity stolen, or their accounts breached. With a new hack or breach making news almost daily, people are constantly being reminded about the importance of secure passwords, yet some are still not following proper password protocol.

Does anyone think that this hypothetical survey question means anything? What, are they bored at Harris? Oh, I see. This is a paid survey by a computer company looking for some publicity.

Four in 10 people (41%) would rather give up their favorite food for a month than go through the password reset process for all their online accounts.

I guess it's more fun to ask these questions than to poll the election.

Worse Than FailureCodeSOD: The Rule of Ten

Florian’s office has a “rule of ten”. Well, they don’t, but one of Florian’s co-workers seems to think so. This co-worker has lots of thoughts. For example, they wrote this block, which is supposed to replace field placeholders in a string with their values.

sbyte sbCount = 0;
// set value of new field content to old value
sNewFieldContent = sFieldContent;
while (rFieldIdentifierRegex.Match(sNewFieldContent).Success) {

        // for security reasons
        if (++sbCount > 10)
                break;

        // get identifier and name
        string sActFieldSymbol = rFieldIdentifierRegex.Match(sNewFieldContent).Groups[1].Value;
        string sActFieldName = rFieldIdentifierRegex.Match(sNewFieldContent).Groups[2].Value;
        string sActFieldIdentifier = sActFieldSymbol + sActFieldName;

        // default value for unknown fields is an empty string
        string sValue = "";

        [... calculate actual replacement value ...]

        // replace value for placeholder in new field content
        sNewFieldContent = sNewFieldContent.Replace(sActFieldIdentifier, sValue);
}

As Florian puts it:

Having more than 10 matches inside one line is obviously a security risk (it isn’t) and must be prohibited (it mustn’t) because that would cause erroneous behavior in the application (it doesn’t).
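For what it’s worth, a placeholder-expansion loop needs a termination guard at all only if a substituted value can itself contain a placeholder; a single-pass substitution sidesteps both the guard and the arbitrary cap of 10. A sketch in Python rather than the original C#, with an invented ${name} placeholder syntax:

```python
import re

FIELD_RE = re.compile(r"\$\{(\w+)\}")

def fill_fields(template, values):
    """Replace every ${name} placeholder in one pass.

    Unknown fields default to an empty string, matching the original
    code's behavior. Because re.sub walks the string once and never
    re-scans replacement text, values that happen to contain
    placeholders are not re-expanded, so no iteration cap is needed.
    """
    return FIELD_RE.sub(lambda m: values.get(m.group(1), ""), template)

print(fill_fields("Hello ${user}, your id is ${id}${missing}",
                  {"user": "Brittany", "id": "42"}))
# Hello Brittany, your id is 42
```

The C# equivalent would be `Regex.Replace` with a `MatchEvaluator` delegate, which has the same single-pass property.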

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!


Valerie AuroraSpreadsheet of signs of fascism

Several people have asked me to share the spreadsheet I mentioned in my previous post, the one I am using to track signs that the U.S. is governed by a fascist regime. Feel free to copy it and make your own modifications – it is licensed CC BY-SA 4.0 Valerie Aurora. Here is the current snapshot:

Obviously this is an incomplete list. I’ll be adding new things to it as new and more creative ways of being a fascist are thought up in Trump Tower.

I made this spreadsheet because I’m afraid I will normalize brutal and inhuman behavior, and wake up one day to find I am trapped in a cruel fascist regime – or worse, actively collaborating in it.

It is true that before November 8, brutality and violence were already a central part of the U.S. government and culture, and many people were already living daily in fear for their freedom and lives. What we lost on November 8 is the reasonable expectation that we could fix this kind of injustice through peaceful political change, in the style of the civil rights movement or the fight for marriage equality. Maybe our democratic institutions will survive the next four years, but I don’t feel hopeful.

Tagged: fascism, politics


CryptogramFriday Squid Blogging: Peruvian Squid Fishermen Are Trying to Diversify

Squid catch is down, so fishermen are trying to sell more processed product.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramSmartphone Secretly Sends Private Data to China

This is pretty amazing:

International customers and users of disposable or prepaid phones are the people most affected by the software. But the scope is unclear. The Chinese company that wrote the software, Shanghai Adups Technology Company, says its code runs on more than 700 million phones, cars and other smart devices. One American phone manufacturer, BLU Products, said that 120,000 of its phones had been affected and that it had updated the software to eliminate the feature.

Kryptowire, the security firm that discovered the vulnerability, said the Adups software transmitted the full contents of text messages, contact lists, call logs, location information and other data to a Chinese server.

On one hand, the phone secretly sends private user data to China. On the other hand, it only costs $50.