Planet Russell



Oliver Smith sends this representative line:

bool long_name_that_maybe_distracted_someone()

Now, we’ve established my feelings on the if (condition) { return true; } else { return false; } pattern. This is just an iteration on that theme, using a ternary, right?

That’s certainly what it looks like. But Oliver was tracking down an unusual corner-case bug and things just weren’t working correctly. As it turns out, CONDITION_SUCCESS and CONDITION_FAILURE were both defined in the StatusCodes enum.

Screenshot of the IntelliSense tooltip, which shows CONDITION_FAILURE defined as 2

Yep: CONDITION_FAILURE is defined as 2. The method returns a bool. Guess what happens when you coerce a non-zero integer into a boolean in C++? It becomes true. This method only ever returns true. Ironically, the calling method would then do its own check against the return value, looking to see whether it was CONDITION_SUCCESS or CONDITION_FAILURE.


Planet Linux AustraliaClinton Roy: Actively looking for work

I am now actively looking for work, ideally something with Unix/C/Python in the research/open source/not-for-profit space. My long-out-of-date resume has been updated.

Planet DebianRuss Allbery: Review: Twitter and Tear Gas

Review: Twitter and Tear Gas, by Zeynep Tufekci

Publisher: Yale University Press
Copyright: 2017
ISBN: 0-300-21512-6
Format: Kindle
Pages: 312

Subtitled The Power and Fragility of Networked Protest, Twitter and Tear Gas is a close look at the effect of social media (particularly, but not exclusively, Twitter and Facebook) on protest movements around the world. Tufekci pays significant attention to the Tahrir Square protests in Egypt, the Gezi Park protests in Turkey, Occupy Wall Street and the Tea Party in the United States, Black Lives Matter also in the United States, and the Zapatista uprising in Mexico early in the Internet era, as well as more glancing attention to multiple other protest movements since the advent of the Internet. She avoids both extremes of dismissal of largely on-line movements and the hailing of social media as a new era of mass power, instead taking a detailed political and sociological look at how protest movements organized and fueled via social media differ in both strengths and weaknesses from the movements that came before.

This is the kind of book that could be dense and technical but isn't. Tufekci's approach is analytical but not dry or disengaged. She wants to know why some protests work and others fail, what the governance and communication mechanisms of protest movements say about their robustness and capabilities, and how social media has changed the tools and landscape used by protest movements. She's also been directly involved: she's visited the Zapatistas, grew up in Istanbul and is directly familiar with the politics of the Gezi Park protests, and includes in this book a memorable story of being caught in the Antalya airport in Turkey during the 2016 attempted coup. There are some drier and more technical chapters where she's laying the foundations of terminology and analysis, but they just add rigor to an engaging, thoughtful examination of what a protest is and why it works or doesn't work.

My favorite part of this book, by far, was the intellectual structure it gave me for understanding the effectiveness of a protest. That's something about which media coverage tends to be murky, at least in situations short of a full-blown revolutionary uprising (which are incredibly rare). The goal of a protest is to force a change, and clearly sometimes this works. (The US Civil Rights movement and the Indian independence movement are obvious examples. The Arab Spring is a more recent if more mixed example.) However, sometimes it doesn't; Tufekci's example is the protests against the Iraq War. Why?

A key concept of this book is that protests signal capacity, particularly in democracies. That can be capacity to shape a social narrative and spread a point of view, capacity to disrupt the regular operations of a system of authority, or capacity to force institutional change through the ballot box or other political process. Often, protests succeed to the degree that they signal capacity sufficient to scare those currently in power into compromising or acquiescing to the demands of the protest movement. Large numbers of people in the streets matter, but not usually as a show of force. Violent uprisings are rare and generally undesirable for everyone. Rather, they matter because they demand and hold media attention (allowing them to spread a point of view), can shut down normal business and force an institutional response, and because they represent people who can exert political power or be tapped by political rivals.

This highlights one of the key differences between protest in the modern age and protest in a pre-Internet age. The March on Washington at the height of the Civil Rights movement was an impressive demonstration of capacity largely because of the underlying organization required to pull off a large and successful protest in that era. Behind the scenes were impressive logistical and governance capabilities. The same organizational structure that created the March could register people to vote, hold politicians accountable, demand media attention, and take significant and effective economic action. And the government knew it.

One thing that social media does is make organizing large protests far easier. It allows self-organizing, with viral scale, which can create numerically large movements far more easily than the dedicated organizational work required prior to the Internet. This makes protest movements more dynamic and more responsive to events, but it also calls into question how much sustained capacity the movement has. The government non-reaction to the anti-war protests in the run-up to the Iraq War was an arguably correct estimation of the signaled capacity: a bet that the anti-war sentiment would not turn into sustained institutional pressure because large-scale street protests no longer indicated the same underlying strength.

Signaling capacity is not, of course, the only purpose of protests. Tufekci also spends a good deal of time discussing the sense of empowerment that protests can create. There is a real sense in which protests are for the protesters, entirely apart from whether the protest itself forces changes to government policies. One of the strongest tools of institutional powers is to make each individual dissenter feel isolated and unimportant, to feel powerless. Meeting, particularly in person, with hundreds of other people who share the same views can break that illusion of isolation and give people the enthusiasm and sense of power to do something about their beliefs. This, however, only becomes successful if the protesters then take further actions, and successful movements have to provide some mechanism to guide and unify that action and retain that momentum.

Tufekci also provides a fascinating analysis of the evolution of government responses to mass protests. The first reaction was media blackouts and repression, often by violence. Although we still see some of that, particularly against out groups, it's a risky and ham-handed strategy that dramatically backfired for both the US Civil Rights movement (due to an independent press that became willing to publish pictures of the violence) and the Arab Spring (due to social media providing easy bypass of government censorship attempts). Governments do learn, however, and have become increasingly adept at taking advantage of the structural flaws of social media. Censorship doesn't work; there are too many ways to get a message out. But social media has very little natural defense against information glut, and the people who benefit from the status quo have caught on.

Flooding social media forums with government propaganda or even just random conspiratorial nonsense is startlingly effective. The same lack of institutional gatekeepers that destroys the effectiveness of central censorship also means there are few trusted ways to determine what is true and what is fake on social media. Governments and other institutional powers don't need to convince people of their point of view. All they need to do is create enough chaos and disinformation that people give up on the concept of objective truth, until they become too demoralized to try to weed through the nonsense and find verifiable and actionable information. Existing power structures by definition benefit from apathy, disengagement, delay, and confusion, since they continue to rule by default.

Tufekci's approach throughout is to look at social media as a change and a new tool, which is neither inherently good nor bad but which significantly changes the landscape of political discourse. In her presentation (and she largely convinced me in this book), the social media companies, despite controlling the algorithms and platform, don't particularly understand or control the effects of their creation except in some very narrow and profit-focused ways. The battlegrounds of "fake news," political censorship, abuse, and terrorist content are murky swamps less out of deliberate intent and more because companies have built a platform they have no idea how to manage. They've largely supplanted more traditional political spheres and locally-run social media with huge international platforms, are now faced with policing the use of those platforms, and are way out of their depth.

One specific example vividly illustrates this and will stick with me. Facebook is now one of the centers of political conversation in Turkey, as it is in many parts of the world. Turkey has a long history of sharp political divisions, occasional coups, and a long-standing, simmering conflict between the Turkish government and the Kurds, a political and ethnic minority in southeastern Turkey. The Turkish government classifies various Kurdish groups as terrorist organizations. Those groups unsurprisingly disagree. The arguments over this inside Turkey are vast and multifaceted.

Facebook has gotten deeply involved in this conflict by providing a platform for political arguments, and is now in the position of having to enforce their terms of service against alleged terrorist content (or even simple abuse), in a language that Facebook engineers largely don't speak and in a political context that they largely know nothing about. They of course hire Turkish speakers to try to understand that content to process abuse reports. But, as Tufekci (a Turkish native) argues, a Turkish speaker who has the money, education, and family background to be working in an EU Facebook office in a location like Dublin is not randomly chosen from the spectrum of Turkish politics. They are more likely to have connections to or at least sympathies for the Turkish government or business elites than to be related to a family of poor and politically ostracized Kurds. It's therefore inevitable that bias will be seen in Facebook's abuse report handling, even if Facebook management intends to stay neutral.

For Turkey, you can substitute just about any other country about which US engineers tend to know little. (Speaking as a US native, that's a very long list.) You may even be able to substitute the US for Turkey in some situations, given that social media companies tend to outsource the bulk of the work to countries that can provide low-paid workers willing to do the awful job of wading through the worst of humanity and attempting to apply confusing and vague terms of service. Much of Facebook's content moderation is done in the Philippines, by people who may or may not understand the cultural nuances of US political fights (and, regardless, are rarely given enough time to do more than cursorily glance at each report).

This is already a long review and there are still more important topics in this book I haven't touched on, such as movement governance. (As both an advocate for and critic of consensus-based decision-making, Tufekci's example of governance in Occupy Wall Street had me both fascinated and cringing.) This is excellent stuff, full of personal anecdotes and entertaining story-telling backed by thoughtful and structured analysis. If you have felt mystified by the role that protests play in modern politics, I highly recommend reading this.

Rating: 9 out of 10


Planet DebianNorbert Preining: Gaming: Lara Croft – Rise of the Tomb Raider: 20 Year Celebration

I have to admit, this is the first time I've played something like this. Somehow, Lara Croft – Rise of the Tomb Raider was on sale, and some of the trailers were so well done that I was tempted into getting the game. And to my surprise, it actually works pretty well on Linux, too – yeah!

So I am a first-time player in this kind of league, and had a hard time getting used to controlling lovely Lara, but it turned out easier than I thought – although I guess a controller instead of mouse and keyboard would be better. One starts out somewhere in the mountains (I probably bought the game because there is so much mountaineering in the parts I have seen till now 😉), trying to evade breaking crevices, jumping from ledge to ledge, getting washed away by avalanches, the full program.

But my favorite till now in the game is that Lara always carries an ice ax. Completely understandable on the mountain trips, where she climbs frozen icefalls, hanging cliffs, everything, like a super-pro. Wow, I would like to be such an ice climber! But even in the next mission in Syria, she still has her ice ax with her, very conveniently dangling from her side. How suuuper-cool!

After being washed away by an avalanche, we find Lara back on a trip in Syria, followed and nearly killed by the mysterious Trinity organization. During the Syria operation she needs to learn quite a lot of Greek; unfortunately the player doesn't have to learn it with her – my Ancient Greek could use some polishing.

The game is a first of its kind for me, with long cinematic parts between the playing sections. The switch between cinematics and play is so well done that I sometimes have the feeling I need to control Lara during these parts, too. The graphics are also stunning to my eyes, very impressive.

I have never played a Lara game or seen a Lara movie, but my first association was with the Die Hard movie series – always those dirty clothes, the scratch- and dirt-covered body. Lara is no exception here. Last but not least, Lara's deaths (one – at least I – often dies in these games) are often quite funny and entertaining: spiked in some tombs, smashed to pieces by a falling stone column, etc. I really have to learn it the hard way.

I have only finished two expeditions; no idea how many more are to come. But it seems like I will continue. The good thing is that there are lots of restart points and permanent saves, so if one dies, or the computer dies, one doesn't have to redo the whole bunch. Well done.

Planet Linux AustraliaFrancois Marier: Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside of mythfrontend (over a VNC connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme engine:

mythtv-setup -O ThemePainter=qt

Planet DebianRenata D'Avila: Debian Women in Curitiba

This post is long overdue, but I have been so busy lately that I didn't have the time to sit down and write it in the past few weeks. What have I been busy with? Let's start with this event, that happened back in March:

Debian Women meeting in Curitiba (March 10th, 2018)

The eight women who attended the meeting gathered together in front of a tv with the Debian Women logo

At MiniDebConf Curitiba last year, few women attended. And, as I mentioned in a previous post, there was not even a single woman speaking at MiniDebConf last year.

I didn't want MiniDebConf Curitiba 2018 to be a repeat of last year. Why? In part, because I have been involved in other tech communities and I know it doesn't have to be like that (unless, of course, the community insists on being misogynistic...).

So I came up with the idea of having a meeting for women in Curitiba one month before MiniDebConf. The main goal was to create a good environment for women to talk about Debian, whether they had used GNU/Linux before or not, whether they were programmers or not.

Miriam and Kira, two other women from the state of Parana interested in Debian, came along and helped out with planning. We used a collaborative pad to organize the tasks and activities and to create the text for the folder about Debian we had printed (based on Debian's documentation).

For the final version of the folder, it's important to acknowledge the help Luciana gave us, all the way from Minas Gerais. She collaborated with the translations, reviewed the texts and fixed the layout.

A pile with folded Debian Women folders. The writings are in Portuguese and it's possible to see a QR code.

The final odg file, in Portuguese, can be downloaded here: folder_debian_30cm.odg

Very quickly, because we had so little time (we settled on a date and a place a little over one month before the meeting), I created a web page and put it online the only way I could at that moment, using GitHub Pages.

We used Mate Hackers' instance of to register for the meeting, simply because we had to plan accordingly. This was the address for registration:

Through the Training Center, a Brazilian tech community, we got to Lucio, who works at Pipefy and offered us the space so we could hold the meeting. Thank you, Lucio, Training Center and Pipefy!

Pipefy logo

Because Miriam and I weren't in Curitiba, we had to focus the promotion of this meeting online. Not the ideal when someone wants to be truly inclusive, but we worked with the resources we had. We reached out to TechLadies and invited them - as we did with many other groups.

This was our schedule:


09:00 - Welcome coffee

10:00 - What is Free Software? Copyright, licenses, sharing

10:30 - What is Debian?

12:00 - Lunch Break


14:30 - Internships with Debian - Outreachy and Google Summer of Code

15:00 - Install fest / helping with users issues

16:00 - Editing the Debian wiki to register this meeting

17:30 - Wrap up

Take outs from the meeting:

  • Because we knew more or less how many people would attend, we were able to buy the food accordingly right before the meeting - and ended up spending much less than if we had ordered some kind of catering.

  • Sadly, it would cost almost as much to print a dozen folders as it would to print a hundred of them. So we ended up printing 100 folders (which was expensive enough). The good part is that we would end up handing them out during MiniDebConf Curitiba.

  • We attempted a live stream of the meeting using Jitsi, but I don't think we were very successful, because we didn't have a microphone for the speakers.

  • Most of our audience ended up being women who, in fact, already knew and/or used Debian, but weren't actively involved with the community.

  • It was during this meeting that the need for a mailing list in Portuguese for women interested in Debian came up. Because, yes, in a country where English is taught so poorly in the schools, the language can still be a barrier. We also wanted to keep in touch and share information about the Brazilian community and what we are doing. We want next year's DebConf to have a lot of women, especially Brazilian women who are interested and/or who are users and/or contribute to Debian. The request for this mailing list would be put through by Helen during MiniDebConf, using the bug report system. If you can, please support us:

Pictures from the meeting:

The breakfast table with food

Our breakfast table!

Miriam telling the women about Free Software, six women listening

Miriam's talk: What is Free Software? Copyright, licenses, sharing

Renata and Miriam talking about What is Debian a tv among them shows the title of the talk

Miriam and Renata's talk: What is Debian?

Renata talking about internships with Debian

Renata talking about internships with Debian

Thank you to all the women who participated!

The participants with the two men who helped with the meeting.

And to our lovely staff. Thank you, Lucio, for getting us the space and thank you, Pipefy!

This has been partly documented at Debian Wiki (DebianWomen/History) because the very next day after this meeting, Debian Wiki completely blocked ProtonVPN from even accessing the Wiki. Awesome. If anyone is able to, feel free to copy/paste any of this text there.

Planet DebianRuss Allbery: Review: Deep Work

Review: Deep Work, by Cal Newport

Publisher: Grand Central
Copyright: January 2016
ISBN: 1-4555-8666-8
Format: Kindle
Pages: 287

If you follow popular psychology at all, you are probably aware of the ongoing debate over multitasking, social media, smartphones, and distraction. Usually, and unfortunately, this comes tainted by generational stereotyping: the kids these days who spend too much time with their phones and not enough time getting off their elders' lawns, thus explaining their inability to get steady, high-paying jobs in an economy designed to avoid steady, high-paying jobs. However, there is some real science under the endless anti-millennial think-pieces. Human brains are remarkably bad at multitasking, and it causes significant degradation of performance. Worse, that performance degradation goes unnoticed by the people affected, who continue to think they're performing tasks at their normal proficiency. This comes into harsh conflict with modern workplaces heavy on email and chat systems, and even harsher conflict with open plan offices.

Cal Newport is an associate professor of computer science at Georgetown University with a long-standing side profession of writing self-help books, initially focused on study habits. In this book, he argues that the ability to do deep work — focused, concentrated work that pushes the boundaries of what one understands and is capable of — is a valuable but diminishing skill. If one can develop both the habit and the capability for it (more on that in a moment), it can be extremely rewarding and a way of differentiating oneself from others in the same field.

Deep Work is divided into two halves. The first half is Newport's argument that deep work is something you should consider trying. The second, somewhat longer half is his techniques for getting into and sustaining the focus required.

In making his case for this approach, Newport puts a lot of effort into avoiding broader societal prescriptions, political stances, or even general recommendations and tries to keep his point narrow and focused: the ability to do deep, focused work is valuable and becoming rarer. If you develop that ability, you will have an edge. There's nothing exactly wrong with this, but much of it is obvious and he belabors it longer than he needed to. (That said, I'm probably more familiar with research on concentration and multitasking than some.)

Even so, I did like his analysis of busyness as a proxy for productivity in many workplaces. The metrics and communication methods most commonly used in office jobs are great at measuring responsiveness and regular work on shallow tasks in the moment, and bad at measuring progress towards deeper, long-term goals, particularly ones requiring research or innovation. The latter is recognized and rewarded once it finally pays off, but often treated as a mysterious capability some people have and others don't. Meanwhile, the day-to-day working environment is set up to make it nearly impossible, in Newport's analysis, to develop and sustain the habits required to achieve those long-term goals. It's hard to read this passage and not be painfully aware of how much time one spends shallowly processing email, and how that's rewarded in the workplace even though it rarely leads to significant accomplishments.

The heart of this book is the second half, which is where Deep Work starts looking more like a traditional time management book. Newport lays out four large areas of focus to increase one's capacity for deep work: create space to work deeply on a regular basis, embrace boredom, quit social media, and cut shallow work out of your life. Inside those areas, he provides a rich array of techniques, some rather counter-intuitive, that have worked for him. This is in line with traditional time management guidance: focus on a few important things at a time, get better at saying no, put some effort into planning your day and reviewing that plan, and measure what you're trying to improve. But Newport has less of a focus on any specific system and more of a focus on what one should try to cut out of one's life as much as possible to create space for thinking deeply about problems.

Newport's guidance is constructed around the premise (which seems to have some grounding in psychological research) that focused, concentrated work is less a habit that one needs to maintain than a muscle that one needs to develop. His contention is that multitasking and interrupt-driven work isn't just a distraction that can be independently indulged or avoided each day, but instead degrades one's ability to concentrate over time. People who regularly jump between tasks lose the ability to not jump between tasks. If they want to shift to more focused work, they have to regain that ability with regular, mindful practice. So, when Newport says to embrace boredom, it's not just due to the value of quiet and unstructured moments. He argues that reaching for one's phone to scroll through social media in each moment of threatened boredom undermines one's ability to focus in other areas of life.

I'm not sure I'm as convinced as Newport is, but I've been watching my own behavior closely since I read this book and I think there's some truth here. I picked this book up because I've been feeling vaguely dissatisfied with my ability to apply concentrated attention to larger projects, and because I have a tendency to return to a comfort zone of unchallenging tasks that I already know how to do. Newport would connect that to a job with an open plan office, a very interrupt-driven communications culture, and my personal habits, outside of work hours, of multitasking between TV, on-line chat, and some project I'm working on.

I'm not particularly happy about that diagnosis. I don't like being bored, I greatly appreciate the ability to pull out my phone and occupy my mind while I'm waiting in line, and I have several very enjoyable hobbies that only take "half a brain," which I neither want to devote time to exclusively nor want to stop doing entirely. But it's hard to argue with the feeling that my brain skitters away from concentrating on one thing for extended periods of time, and it does feel like an underexercised muscle.

Some of Newport's approach seems clearly correct: block out time in your schedule for uninterrupted work, find places to work that minimize distractions, and batch things like email and work chat instead of letting yourself be constantly interrupted by them. I've already noticed how dramatically more productive I am when working from home than working in an open plan office, even though the office doesn't bother me in the moment. The problems with an open plan office are real, and the benefits seem largely imaginary. (Newport dismantles the myth of open office creativity and contrasts it with famously creative workplaces like MIT and Bell Labs that used a hub and spoke model, where people would encounter each other to exchange ideas and then retreat into quiet and isolated spaces to do actual work.) And Newport's critique of social media seemed on point to me: it's not that it offers no benefits, but it is carefully designed to attract time and attention entirely out of proportion to the benefits that it offers, because that's the business model of social media companies.

Like any time management book, some of his other advice is less convincing. He makes a strong enough argument for blocking out every hour of your day (and then revising the schedule repeatedly through the day as needed) that I want to try it again, but I've attempted that in the past and it didn't go well at all. I'm similarly dubious of my ability to think through a problem while walking, since most of the problems I work on rely on the ability to do research, take notes, or start writing code while I work through the problem. But Newport presents all of this as examples drawn from his personal habits, and cares less about presenting a system than about convincing the reader that it's both valuable and possible to carve out thinking space for oneself and improve one's capacity for sustained concentration.

This book is explicitly focused on people with office jobs who are rewarded for tackling somewhat open-ended problems and finding creative solutions. It may not resonate with people in other lines of work, particularly people whose jobs are the interrupts (customer service jobs, for example). But its target profile fits me and a lot of others in the tech industry. If you're in that group, I think you'll find this thought-provoking.

Recommended, particularly if you're feeling harried, have the itch to do something deeper or more interesting, and feel like you're being constantly pulled away by minutia.

You can get a sample of Newport's writing in his Study Habits blog, although be warned that some of the current moral panic about excessive smartphone and social media use creeps into his writing there. (He's currently working on a book on digital minimalism, so if you're allergic to people who have caught the minimalism bug, his blog will be more irritating than this book.) I appreciated him keeping the moral panic out of this book and instead focusing on more concrete and measurable benefits.

Rating: 8 out of 10

Sky CroeserMothering

Today I am thinking about mothering as a way in which we can make the world (in all its messiness and difficulty) better.

“Children are the ways that the world begins again and again. If you fasten upon that concept of their promise, you will have trouble finding anything more awesome, and also anything more extraordinarily exhilarating, than the opportunity or/and the obligation to nurture a child into his or her own freedom.” – June Jordan

Mothering is often treated by our society as an inherently conservative activity, something that’s about preserving the past (past traditions, past family structures, past values). But I’m learning from so many people (including people who aren’t biological mothers) who are knitting together strands from the past and hopes for the future.

Care for nature, for the world around us, for our mothers’ and grandmothers’ knowledge and experience. And dreams of more space for children to be who they want to be, to welcome and nurture others, to grow freely.

My mother and grandmother taught me so much, and still do. They are kind and fierce and have managed change and dislocation while always providing me with a steady point in the world.

My beautiful friends who are mothers teach me every day through their examples and their honesty about the difficult moments as well as the wonderful ones.

And I learn from mothers beyond my little circles, too.

From Noongar mothers, and other Aboriginal mothers who fought for recognition of the kidnapping of their children, and who are working today to build a society where their children will be safe and valued as they should be.

From Black mothers like June Jordan, Alexis Pauline Gumbs, and others in the ‘Revolutionary Mothering’ collection, which I return to again and again. They have done so much to help me understand other mothers’ experiences, and to see the possibilities and work that I should be taking up. And others, like Silvia Federici, who have helped me see what I might not have, otherwise.

From mothers who must be brave enough to leave war or economic insecurity, hoping for safety, even though it also means leaving behind family and friends and home and the language and culture that has been held dear.

From mothers who work quietly and consistently and without recognition, from mothers who are sometimes difficult because of the work they do, from mothers who struggle with their own pasts, and who nevertheless keep trying to create the world anew, more full of love and possibility than before.


Planet Linux Australia: Michael Still: Head On


A sequel to Lock In, this book is a quick and fun murder-mystery read. It has Scalzi’s distinctive style, which has generally meshed quite well for me, so it’s no surprise that I enjoyed this book.


Head On
John Scalzi
Tor Books
April 19, 2018

To some left with nothing, winning becomes everything In a post-virus world, a daring sport is taking the US by storm. It's frenetic, violent and involves teams attacking one another with swords and hammers. The aim: to obtain your opponent's head and carry it through the goalposts. Impossible? Not if the players have Hayden's Syndrome. Unable to move, Hayden's sufferers use robot bodies, which they operate mentally. So in this sport anything goes, no one gets hurt - and crowds and competitors love it. Until a star athlete drops dead on the playing field. But is it an accident? FBI agents Chris Shane and Leslie Vann are determined to find out. In this game, fortunes can be made - or lost. And both players and owners will do whatever it takes to win, on and off the field.John Scalzi returns with Head On, a chilling near-future SF with the thrills of a gritty cop procedural. Head On brings Scalzi's trademark snappy dialogue and technological speculation to the future world of sports.


The post Head On appeared first on Made by Mikal.

Don Marti: Can markets for intent data even be a thing?

Doc Searls is optimistic that surveillance marketing is going away, but what's going to replace it? One idea that keeps coming up is the suggestion that prospective buyers should be able to sell purchase intent data to vendors directly. This seems to be appealing because it means that the Marketing department will still get to have Big Data and stuff, but I'm still trying to figure out how voluntary transactions in intent data could even be a thing.

Here's an example. It's the week before Thanksgiving, and I'm shopping for a kitchen stove. Here are two possible pieces of intent information that I could sell.

  • "I'm cutting through the store on the way to buy something else. If a stove is on sale, I might buy it, but only if it's a bargain, because who needs the hassle of handling a stove delivery the week before Thanksgiving?"

  • "My old stove is shot, and I need one right away because I have already invited people over. Shut up and take my money."

On a future intent trading platform, what's my incentive to reveal which intent is the true one?

If I'm a bargain hunter, I'm willing to sell my intent information, because it would tend to get me a lower price. But in that case, why would any store want to buy the information?

If I need the product now, I would only sell the information for a price higher than the expected difference between the price I would pay and the price a bargain hunter would pay. But if the information isn't worth more than the price difference, why would the store want to buy it?
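The reasoning above has the flavour of a classic adverse-selection problem, and it can be sketched as a toy calculation. This is a minimal illustration with invented prices and valuations, not real market data: whichever type the buyer really is, claiming to be a bargain hunter never leaves them worse off, so the revealed "intent" carries no signal a store should pay for.

```python
# Toy model of the intent-revelation problem. All numbers are invented
# for illustration.
LIST_PRICE = 900   # what an urgent buyer would pay anyway
SALE_PRICE = 700   # discount offered to shoppers who reveal bargain intent

def buyer_surplus(value, claims_bargain_intent):
    """Surplus for a buyer who values the stove at `value`."""
    price = SALE_PRICE if claims_bargain_intent else LIST_PRICE
    return value - price if value >= price else 0  # walk away if too dear

urgent, bargain = 1000, 750   # assumed valuations for the two buyer types

# Both buyer types do at least as well by claiming bargain intent, so the
# revealed intent is uninformative, and therefore worthless to the store.
assert buyer_surplus(urgent, True) >= buyer_surplus(urgent, False)
assert buyer_surplus(bargain, True) >= buyer_surplus(bargain, False)
```

Under this simple pricing rule, truthful revelation is never in the urgent buyer's interest, which is the core of the problem above.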

So how can a market for purchase intent data happen?

Or is the idea of selling access to purchase intent only feasible if the intent data is taken from the "data subject" without permission?

Anyway, I can see how search advertising and signal-based advertising can assume a more important role as surveillance marketing becomes less important, but I'm not sure about markets for purchase intent. Maybe user data sharing will be not so much a stand-alone thing but a role for trustworthy news and cultural sites, as people choose to share data as part of commenting and survey completion, and that data, in aggregated form, becomes part of a site's audience profile.

Planet Debian: Russ Allbery: Review: Always Human

Review: Always Human, by walkingnorth

Copyright: 2015-2017
Format: Online graphic novel
Pages: 336
Always Human is a graphic novel published on the LINE WEBTOON platform. It was originally published in weekly updates and is now complete in two "seasons." It is readable for free, starting with episode one. The pages metadata in the sidebar is therefore a bit of a lie: it's my guess on how many pages this would be if it were published as a traditional graphic novel (four times the number of episodes), provided as a rough guide of how long it might take to read (and because I have a bunch of annual reading metadata that I base on page count, even if I have to make up the concept of pages).

Always Human is set in a 24th century world in which body modifications for medical, cosmetic, and entertainment purposes are ubiquitous. What this story refers to as "mods" are nanobots that encompass everything from hair and skin color changes through protection from radiation to allow interplanetary travel to anti-cancer treatments. Most of them can be trivially applied with no discomfort, and they've largely taken over the fashion industry (and just about everything else). The people of this world spend as little time thinking about their underlying mechanics as we spend thinking about synthetic fabrics.

This is why Sunati is so struck by the young woman she sees at the train station. Sunati first noticed her four months ago, and she's not changed anything about herself since: not her hair, her eye color, her skin color, or any of the other things Sunati (and nearly everyone else) change regularly. To Sunati, it's a striking image of self-confidence and increases her desire to find an excuse to say hello. When the mystery woman sneezes one day, she sees her opportunity: offer her a hay-fever mod that she carries with her!

Alas for Sunati's initial approach, Austen isn't simply brave or quirky. She has Egan's Syndrome, an auto-immune problem that makes it impossible to use mods. Sunati wasn't expecting her kind offer to be met with frustrated tears. In typical Sunati form, she spends a bunch of time trying to understand what happened, overthinking it, hoping to see Austen again, and freezing when she does. Lucky for Sunati, typical Austen form is to approach her directly and apologize, leading to an explanatory conversation and a trial date.

Always Human is Sunati and Austen's story: their gentle and occasionally bumbling romance, Sunati's indecisiveness and tendency to talk herself out of communicating, and Austen's determined, relentless, and occasionally sharp-edged insistence on defining herself. It's not the sort of story that has wars, murder mysteries, or grand conspiracies; the external plot drivers are more mundane concerns like choice of majors, meeting your girlfriend's parents, and complicated job offers. It's also, delightfully, not the sort of story that creates dramatic tension by occasionally turning the characters into blithering idiots.

Sunati and Austen are by no means perfect. Both of them do hurt each other without intending to, both of them have blind spots, and both of them occasionally struggle with making emergencies out of things that don't need to be emergencies. But once those problems surface, they deal with them with love and care and some surprisingly good advice. My first reading was nervous. I wasn't sure I could trust walkingnorth not to do something stupid to the relationship for drama; that's so common in fiction. I can reassure you that this is a place where you can trust the author.

This is also a story about disability, and there I don't have the background to provide the same reassurance with much confidence. However, at least from my perspective, Always Human reliably treats Austen as a person first, weaves her disability into her choices and beliefs without making it the cause of everything in her life, and tackles head-on some of the complexities of social perception of disabilities and the bad tendency to turn people into Inspirational Disabled Role Model. It felt to me like it struck a good balance.

This is also a society that's far more open about human diversity in romantic relationships, although there I think it says more about where we currently are as a society than what the 24th century will "actually" be like. The lesbian relationship at the heart of the story goes essentially unremarked; we're now at a place where that can happen without making it a plot element, at least for authors and audiences below a certain age range. The (absolutely wonderful) asexual and non-binary characters in the supporting cast, and the one polyamorous relationship, are treated with thoughtful care, but still have to be remarked on by the characters.

I think this says less about walkingnorth as a writer than it does about managing the expectations of the reader. Those ideas are still unfamiliar enough that, unless the author is very skilled, they have to choose between dragging the viciousness of current politics into the story (which would be massively out of place here) or approaching the topic with an earnestness that feels a bit like an after-school special. walkingnorth does the latter and errs on the side of being a little too didactic, but does it with a gentle sense of openness that fits the quiet and supportive mood of the whole story. It feels like a necessary phase that we have to go through between no representation at all and the possibility of unremarked representation, which we're approaching for gay and lesbian relationships.

You can tell from this review that I mostly care about the story rather than the art (and am not much of an art reviewer), but this is a graphic novel, so I'll try to say a few things about it. The art seemed clearly anime- or manga-inspired to me: large eyes as the default, use of manga conventions for some facial expressions, and occasional nods towards a chibi style for particularly emotional scenes. The color palette has a lot of soft pastels that fit the emotionally gentle and careful mood. The focus is on human figures and shows a lot of subtlety of facial expressions, but you won't get as much in the way of awe-inspiring 24th century backgrounds. For the story that walkingnorth is telling, the art worked extremely well for me.

The author also composed music for each episode. I'm not reviewing it because, to be honest, I didn't enable it. Reading, even graphic novels, isn't that sort of multimedia experience for me. If, however, you like that sort of thing, I have been told by several other people that it's quite good and fits the mood of the story.

That brings up another caution: technology. A nice thing about books, and to a lesser extent traditionally-published graphic novels, is that whether you can read it doesn't depend on your technological choices. This is a web publishing platform, and while apparently it's a good one that offers some nice benefits for the author (and the author is paid for their work directly), it relies on a bunch of JavaScript magic (as one might expect from the soundtrack). I had to fiddle with uMatrix to get it to work and still occasionally saw confusing delays in the background loading some of the images that make up an episode. People with more persnickety ad and JavaScript blockers have reported having trouble getting it to display at all. And, of course, one has to hope that the company won't lose interest or go out of business, causing Always Human to disappear. I'd love to buy a graphic novel on regular paper at some point in the future, although given the importance of the soundtrack to the author (and possible contracts with the web publishing company), I don't know if that will be possible.

This is a quiet, slow, and reassuring story full of gentle and honest people who are trying to be nice to each other while navigating all the tiny conflicts that still arise in life. It wasn't something I was looking for or even knew I would enjoy, and turned out to be exactly what I wanted to read when I found it. I devoured it over the course of a couple of days, and am now eagerly awaiting the author's next work (Aerial Magic). It is unapologetically cute and adorable, but that covers a solid backbone of real relationship insight. Highly recommended; it's one of the best things I've read this year.

Many thanks to James Nicoll for writing a review of this and drawing it to my attention.

Rating: 9 out of 10

Planet Debian: Norbert Preining: MySQL DateTime/Timestamp fields and Scala

In one of my work projects we use Play Framework on Scala to provide an API (how surprising ;-). For quite some time I was hunting lost milliseconds: the API answered terribly late compared to hammering directly at the MySQL server. It turned out to be a problem in the interaction between the MySQL DateTime format and Scala.

It sounded like a nice idea to save our traffic data in a MySQL database with the timestamp stored in a DateTime or Timestamp column. The display in MySQL Workbench looks nice and is easily readable. But somehow our API server’s response was horribly slow, especially when there were several requests. Hours and hours of tuning the SQL code, trying to turn off sorting, adding extra indices; none of it was to any avail.

The solution was rather trivial: the actual time was lost neither in the SQL part nor in the processing in our Scala code, but in the conversion from MySQL DateTime/Timestamp objects to Scala/Java Timestamps. We are using ActiveRecord for Scala, a very nice and convenient library, which converts MySQL DateTime/Timestamps to Scala/Java Timestamps. But this conversion becomes, especially for a large number of entries, rather slow. With months of traffic data and hundreds of thousands of timestamps to convert, the API collapsed to unacceptable response times.

Converting the whole pipeline (from data producer to database and API) to use plain simple Long values boosted the API performance considerably. Lesson learned: don’t use MySQL DateTime/Timestamp if you need lots of conversions.
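The effect is easy to reproduce outside the Scala/ActiveRecord stack described above. Here is a rough sketch in Python (illustrative only; the post's actual stack is Play/Scala) comparing per-row conversion into datetime objects against passing plain epoch longs straight through:

```python
import time
from datetime import datetime

N = 100_000
rows = ["2018-04-20 12:34:56"] * N        # what a DateTime column hands back

t0 = time.perf_counter()
parsed = [datetime.strptime(r, "%Y-%m-%d %H:%M:%S") for r in rows]
t_datetime = time.perf_counter() - t0

epochs = [1524227696] * N                 # the same instants as plain longs
t0 = time.perf_counter()
passed_through = list(epochs)             # nothing to convert
t_long = time.perf_counter() - t0

print(f"datetime conversion: {t_datetime:.3f}s, plain longs: {t_long:.3f}s")
```

The exact numbers vary by machine, but the per-row object conversion dominates once row counts reach the hundreds of thousands, which matches the behaviour described above.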


Cryptogram: Friday Squid Blogging: How the Squid Lost Its Shell

Squids used to have shells.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological Images: Who Gets a Ticket?

The recent controversial arrests at a Philadelphia Starbucks, where a manager called the police on two Black men who had only been in the store a few minutes, are an important reminder that bias in the American criminal justice system creates both large scale, dramatic disparities and little, everyday inequalities. Research shows that common misdemeanors are a big part of this, because fines and fees can pile up on people who are more likely to be policed for small infractions.

A great example is the common traffic ticket. Some drivers who get pulled over get a ticket, while others get let off with a warning. Does that discretion shake out differently depending on the driver’s race? The Stanford Open Policing Project has collected data on over 60 million traffic stops, and a working paper from the project finds that Black and Hispanic drivers are more likely to be ticketed or searched at a stop than white drivers.

To see some of these patterns in a quick exercise, we pulled the project’s data on over four million stop records from Illinois and over eight million records from South Carolina. These charts are only a first look—we split the recorded outcomes of stops across the different codes for driver race available in the data and didn’t control for additional factors. However, they give a troubling basic picture about who gets a ticket and who drives away with a warning.
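The counting itself is a simple group-by over stop records. A minimal pure-Python sketch of the exercise (with a handful of made-up records standing in for the millions of real ones):

```python
from collections import Counter, defaultdict

# Invented toy records: (driver race code, recorded stop outcome).
stops = [
    ("White", "Warning"), ("White", "Citation"), ("White", "Warning"),
    ("Black", "Citation"), ("Black", "Citation"), ("Hispanic", "Citation"),
]

by_race = defaultdict(Counter)
for race, outcome in stops:
    by_race[race][outcome] += 1

# Share of each outcome within each group, as percentages.
for race in sorted(by_race):
    total = sum(by_race[race].values())
    rates = {o: round(100 * n / total, 1) for o, n in by_race[race].items()}
    print(race, rates)
```

The real analysis runs the same computation over the Open Policing Project's state CSV files, and would also need to control for stop characteristics, which these raw proportions do not.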


These charts show more dramatic disparities in South Carolina, but a larger proportion of white drivers who were stopped got off with warnings (and fewer got tickets) in Illinois as well. In fact, with millions of observations in each data set, differences of even a few percentage points can represent hundreds, even thousands of drivers. Think about how much revenue those tickets bring in, and who has to pay them. In the criminal justice system, the little things can add up quickly.


Planet Debian: Sune Vuorela: Modern C++ and Qt – part 2.

I recently did a short tongue-in-cheek blog post about Qt and modern C++. In the comments, people discovered that several compilers effectively can optimize std::make_unique<>().release() to a simple new statement, which was kind of a surprise to me.

I have recently written a new program from scratch (more about that later), and I tried to force myself to use standard library smart pointers much more than I normally do.

I ended up trying to apply a set of rules for memory handling to my code base to try to see where it could end.

  • No naked deletes
  • No new statements, unless the result is handed directly to a Qt function taking ownership of the pointer (to avoid silliness like the previous one)
  • Raw pointers in the code are observer pointers. We can do this in new code, but in older code it is hard to argue that.

It resulted in code like

m_document = std::make_unique<QTextDocument>();
auto layout = std::make_unique<QHBoxLayout>();
auto textView = std::make_unique<QTextBrowser>();

By itself, it is quite OK to work with, and we get all ownership transfers documented. So maybe we should start coding this way.

There is also a hole in the ownership pass-around, but given that Qt methods don’t throw, it shouldn’t be much of a problem.

More about my new fancy / boring application at a later point.

I still haven’t fully embraced the C++17 thingies. My mental baseline is kind of the compiler in Debian Stable.

Cryptogram: Airline Ticket Fraud

New research: "Leaving on a jet plane: the trade in fraudulently obtained airline tickets:"

Abstract: Every day, hundreds of people fly on airline tickets that have been obtained fraudulently. This crime script analysis provides an overview of the trade in these tickets, drawing on interviews with industry and law enforcement, and an analysis of an online blackmarket. Tickets are purchased by complicit travellers or resellers from the online blackmarket. Victim travellers obtain tickets from fake travel agencies or malicious insiders. Compromised credit cards used to be the main method to purchase tickets illegitimately. However, as fraud detection systems improved, offenders displaced to other methods, including compromised loyalty point accounts, phishing, and compromised business accounts. In addition to complicit and victim travellers, fraudulently obtained tickets are used for transporting mules, and for trafficking and smuggling. This research details current prevention approaches, and identifies additional interventions, aimed at the act, the actor, and the marketplace.

Blog post.

Planet Debian: Sven Hoexter: Replacing hp-health on gen10 HPE DL360

A small followup regarding the replacement of hp-health and hpssacli. It turns out a few things have to be replaced; lucky you if you're already running on someone else's computer, where you don't have to take care of the hardware.


According to the super nice and helpful Craig L. at HPE, they're planning an update of the MCP ssacli for Ubuntu 18.04. This one will also support the SmartArray firmware 1.34. If you need it now, you should be able to use the one released for RHEL and SLES. I did not test it.

replacing hp-health

The master plan is to query the iLO. Basically there are two ways: either locally via hponcfg, or remotely via a Perl script sample provided by HPE along with many helpful RIBCL XML file examples. Neither approach is cool, because you have to deal with a lot of XML, so opt for a third way and use the awesome python-hpilo module (part of Debian/stretch), which abstracts all the RIBCL XML stuff nicely away from you.

If you'd like to have a taste of it: I had to reset a few iLO passwords to something sane (without quotes, double quotes and backticks), and did it like this:

pwfile="ilo-pwlist-$(date +%s)"

for x in $(seq -w 004 006); do
  host="server${x}"   # assumption: adjust to your own host naming scheme
  pw=$(pwgen -n 24 1)
  echo "${host},${pw}" >> $pwfile
  ssh $host "echo \"<RIBCL VERSION=\\\"2.0\\\"><LOGIN USER_LOGIN=\\\"adminname\\\" PASSWORD=\\\"password\\\"><USER_INFO MODE=\\\"write\\\"><MOD_USER USER_LOGIN=\\\"Administrator\\\"><PASSWORD value=\\\"$pw\\\"/></MOD_USER></USER_INFO></LOGIN></RIBCL>\" > /tmp/setpw.xml"
  ssh $host "sudo hponcfg -f /tmp/setpw.xml && rm /tmp/setpw.xml"
done

After I regained access to all iLO devices I used the hpilo_cli helper to add a monitoring user:

while read -r line; do
  host=$(echo $line|cut -d',' -f 1)
  pw=$(echo $line|cut -d',' -f 2)
  hpilo_cli -l Administrator -p $pw $host add_user user_login="monitoring" user_name="monitoring" password="secret" admin_priv=False remote_cons_priv=False reset_server_priv=False virtual_media_priv=False config_ilo_priv=False
done < ${1}

The helper script to actually query the iLO interfaces from our monitoring is, in comparison to those ad-hoc shell hacks, rather nice:

import hpilo, argparse

parser = argparse.ArgumentParser()
parser.add_argument("component", help="HW component to query", choices=['battery', 'bios_hardware', 'fans', 'memory', 'network', 'power_supplies', 'processor', 'storage', 'temperature'])
parser.add_argument("host", help="iLO Hostname or IP address to connect to")
args = parser.parse_args()

iloUser = "monitoring"    # the monitoring user created with hpilo_cli above
iloPassword = "secret"

def askIloHealth(component, host, user, password):
    ilo = hpilo.Ilo(host, user, password)
    health = ilo.get_embedded_health()
    print(health.get(component))

askIloHealth(args.component, args.host, iloUser, iloPassword)

You can also take a look at a more detailed state if you pprint the complete structure returned by "get_embedded_health". This whole approach of using the iLO should work since iLO 3. I tested versions 4 and 5.

Worse Than Failure: Error'd: Kind of...but not really

"On occasion, SQL Server Management Studio's estimates can be just a little bit off," writes Warrent B.


Jay D. wrote, "On the surface, yeah, it looks like a good deal, but you know, pesky laws of physics spoil all the fun."


"When opening a new tab in Google Chrome I saw a link near the bottom of the screen that suggested I 'Explore the world's iconic locations in 3D'," writes Josh M., "Unfortunately, Google's API felt differently."


Stuart H. wrote, "I think I might have missed out on this deal, the clock was counting up, no I mean down, I mean negative AHHHH!"


"Something tells me this site's programmer is learning how to spell the hard(est) way," Carl W. writes.


"Why limit yourself with one particular resource of the day when you can substitute any resource you want," wrote Ari S.



Planet Linux Australia: BlueHackers: Vale Janet Hawtin Reid

Janet Hawtin Reid (@lucychili) sadly passed away last week.

A mutual friend called me earlier in the week to tell me, for which I’m very grateful.  We both appreciate that BlueHackers doesn’t ever want to be a news channel, so I waited to write about it here until other friends, just like me, would have also had a chance to hear via more direct and personal channels. I think that’s the way these things should flow.

I knew Janet as a thoughtful person, with strong opinions particularly on openness and inclusion.  And as an artist and generally creative individual, a lover of nature.  In recent years I’ve also seen her produce the most awesome knitted Moomins.

Short diversion as I have an extra connection with the Moomin stories by Tove Jansson: they have a character called My, after whom Monty Widenius’ eldest daughter is named, which in turn is how MySQL got named.  I used to work for MySQL AB, and I’ve known that My since she was a little smurf (she’s an adult now).

I’m not sure exactly when I met Janet, but it must have been around 2004, when I first visited Adelaide.  It was then also that Open Source Industry Australia (OSIA) was founded, for which Janet designed the logo.  She may well have been present at the founding meeting in Adelaide’s CBD, too.  Anyhow, Janet offered to do the logo in a conversation with David Lloyd, and things progressed from there. On the OSIA logo design, Janet wrote:

I’ve used a star as the current one does [an earlier doodle incorporated the Southern Cross]. The 7 points for 7 states [counting NT as a state]. The feet are half facing in for collaboration and half facing out for being expansive and progressive.

You may not have realised this as the feet are quite stylised, but you’ll definitely have noticed the pattern-of-7, and the logo as a whole works really well. It’s a good looking and distinctive logo that has lasted almost a decade and a half now.

As Linux Australia’s president Kathy Reid wrote, Janet also helped design the ‘penguin feet’ logo that you see on the Linux Australia website.  Just reading the above (which I just retrieved from a 2004 email thread) there does seem to be a bit of a feet-pattern there… of course the explicit penguin feet belong with the Linux penguin.

So, Linux Australia and OSIA actually share aspects of their identity (feet with a purpose), through their respective logo designs by Janet!  Mind you, I only realised all this when looking through old stuff while writing this post, as the logos were done at different times and only a handful of people have ever read the rationale behind the OSIA logo until now.  I think it’s cool, and a fabulous visual legacy.

Fir tree in clay, by Janet Hawtin Reid. Done in “EcoClay”, brought back to Adelaide from OSDC 2010 (Melbourne) by Kim Hawtin, Janet’s partner.

Which brings me to a related issue that’s close to my heart, and I’ve written and spoken about this before.  We’re losing too many people in our community – where, in case you were wondering, too many is defined as >0.  Just like in a conversation on the road toll, any number greater than zero has to be regarded as unacceptable. Zero must be the target, as every individual life is important.

There are many possible analogies with trees as depicted in the above artwork, including the fact that we’re all best enabled to grow further.

Please connect with the people around you.  Remember that connecting does not necessarily mean talking per-se, as sometimes people just need to not talk, too.  Connecting, just like the phrase “I see you” from Avatar, is about being thoughtful and aware of other people.  It can just be a simple hello passing by (I say hi to “strangers” on my walks), a short email or phone call, a hug, or even just quietly being present in the same room.

We all know that you can just be in the same room as someone, without explicitly interacting, and yet feel either connected or disconnected.  That’s what I’m talking about.  Aim to be connected, in that real, non-electronic, meaning of the word.

If you or someone you know needs help or talk right now, please call 1300 659 467 (in Australia – they can call you back, and you can also use the service online).  There are many more resources and links on the website.  Take care.

Planet Linux Australia: David Rowe: FreeDV 700D Part 4 – Acquisition

Since 2012 I have built a series of modems (FDMDV, COHPSK, OFDM) for HF Digital voice. I always get stuck on “acquisition” – demodulator algorithms that acquire and lock onto the received signal. The demod needs to rapidly estimate the frequency offset and “coarse” timing – the position where the modem frame starts in the sequence of received samples.

For my application (Digital Voice over HF), it’s complicated by the low SNR and fading HF channels, and the requirement for fast sync (a few hundred ms). For Digital Voice (DV) we need something fast enough to emulate Push To Talk (PTT) operation. In comparison HF data modems have it easy – they can take many lazy seconds to synchronise.

The latest OFDM modem has been no exception. I’ve spent several weeks messing about with acquisition algorithms to get half decent performance. Still some tuning to do but for my own sanity I think I’ll stop development here for now, write up the results, and push FreeDV 700D out for general consumption.

Acquisition and Sync Requirements

  1. Sync up quickly (a few 100ms) with high SNR signals.
  2. Sync up eventually (a few seconds is OK) for low SNR signals over poor channels. Sync eventually is better than none on channels where even SSB is struggling.
  3. Detect false sync and get out of it quickly. Don’t stay stuck in a false sync state forever.
  4. Hang onto sync through fades of a few seconds.
  5. Assume the operator can tune to within +/- 20Hz of a given frequency.
  6. Assume the radio drifts no more than +/- 0.2Hz/s (12 Hz a minute).
  7. Assume the sample clock offset (difference in ADC/DAC sample rates) is no more than 500ppm.

Actually the last three aren’t really requirements, it’s just what fell out of the OFDM modem design when I optimised it for low SNR performance on HF channels! The frequency stability of modern radios is really good; sound card sample clock offset less so but perhaps we can measure that and tell the operator if there is a problem.

Testing Acquisition

The OFDM modem sends pilot (known) symbols every frame. The demodulator correlates (compares) the incoming signal with the pilot symbol sequence. When it finds a close match it has a coarse timing candidate. It can then try to estimate the frequency offset. So we get a coarse timing estimate, a metric (called mx1) that says how close the match is, and a frequency offset estimate.

Estimating frequency offsets is particularly tricky; I’ve experienced “much wailing and gnashing of teeth” with these nasty little algorithms in the past (stop laughing Matt). The coarse timing estimator is more reliable. The problem is that if you get an incorrect coarse timing or frequency estimate, the modem can lock up incorrectly and may take several seconds, or operator intervention, before it realises its mistake and tries again.
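The core of the coarse timing search can be sketched in a few lines. This toy Python version (not the actual codec2 OFDM code, and with made-up signal parameters) slides the known pilot sequence along the received samples and normalises the correlation into an mx1-style metric:

```python
import math, random

random.seed(1)
pilots = [random.choice((-1.0, 1.0)) for _ in range(16)]  # known pilot symbols
true_offset = 37
rx = [0.1 * random.gauss(0, 1) for _ in range(128)]       # channel noise
for i, p in enumerate(pilots):                            # bury pilots in it
    rx[true_offset + i] += p

def coarse_timing(rx, pilots):
    """Return (best coarse timing candidate, normalised correlation mx1)."""
    e_p = math.sqrt(sum(p * p for p in pilots))
    best_t, best_mx1 = 0, 0.0
    for t in range(len(rx) - len(pilots) + 1):
        win = rx[t:t + len(pilots)]
        corr = sum(w * p for w, p in zip(win, pilots))
        mx1 = abs(corr) / (e_p * math.sqrt(sum(w * w for w in win)) + 1e-12)
        if mx1 > best_mx1:
            best_t, best_mx1 = t, mx1
    return best_t, best_mx1

t_hat, mx1 = coarse_timing(rx, pilots)
print(t_hat, round(mx1, 2))
```

With a clean, high-SNR signal like this, mx1 sits near 1.0 at the true offset; on a fading 2dB SNR channel the peak is far less pronounced, which is exactly why the estimators need the statistical characterisation described below.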

I ended up writing a lot of GNU Octave functions to help develop and test the acquisition algorithms in ofdm_dev.

For example the function below runs 100 tests, measures the timing and frequency error, and plots some histograms. The core demodulator can cope with about +/- 1.5Hz of residual frequency offset and a few samples of timing error. So we can generate probability estimates from the test results. For example, if we do 100 tests of the frequency offset estimator and 50 are within 1.5Hz of being correct, then we can say we have a 50% (0.5) probability of getting the correct frequency estimate.

octave:1> ofdm_dev
octave:2> acquisition_histograms(fin_en=0, foff_hz=-15, EbNoAWGN=-1, EbNoHF=3)
AWGN P(time offset acq) = 0.96
AWGN P(freq offset acq) = 0.60
HF P(time offset acq) = 0.87
HF P(freq offset acq) = 0.59

Here are the histograms of the timing and frequency estimation errors. These were generated using simulations of noisy HF channels (about 2dB SNR):

The x axis of timing is in samples, x axis of freq in Hz. They are both a bit biased towards positive errors. Not sure why. This particular test was with a frequency offset of -15Hz.

Turns out that as the SNR improves, the estimators do a better job. The next function runs a bunch of tests at different SNRs and frequency offsets, and plots the acquisition probabilities:

octave:3> acquisition_curves

The timing estimator also gives us a metric (called mx1) that indicates how strong the match was between the incoming signal and the expected pilot sequence. Here is a busy little plot of mx1 against frequency offset for various Eb/No (effectively SNR):

So as Eb/No increases, the mx1 metric tends to get bigger. It also falls off as the frequency offset increases. This means sync is tougher at low Eb/No and larger frequency offsets. The -10dB value was thrown in to see what happens with pure noise and no signal at the input. We’d prefer not to sync up to that. Using this plot I set the threshold for a valid signal at 0.25.
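Here is a rough Python sketch of the correlation idea: a normalised cross-correlation against the pilot sequence, with the 0.25 threshold from the plot. The real modem works on complex baseband samples; this toy version uses real-valued samples for simplicity and is not the codec2 implementation:

```python
import math

MX1_THRESHOLD = 0.25  # from the plot above: reject candidates below this

def coarse_timing(rx, pilot):
    """Slide the known pilot sequence over the received samples and
    return (best_offset, mx1), where mx1 is the normalised correlation
    magnitude: 1.0 is a perfect match, near 0 is noise."""
    ep = math.sqrt(sum(p * p for p in pilot))
    best_off, best_mx1 = 0, 0.0
    for off in range(len(rx) - len(pilot) + 1):
        window = rx[off:off + len(pilot)]
        corr = sum(w * p for w, p in zip(window, pilot))
        er = math.sqrt(sum(w * w for w in window)) or 1e-12
        mx1 = abs(corr) / (er * ep)
        if mx1 > best_mx1:
            best_off, best_mx1 = off, mx1
    return best_off, best_mx1
```

By the Cauchy-Schwarz inequality mx1 never exceeds 1.0, and only reaches it when the window is an exact scaled copy of the pilot, which is what makes it a useful "how good is this match" metric.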

Once we have a candidate time and freq estimate, we can test sync by measuring the number of bit errors in a set of 10 Unique Word (UW) bits spread over the modem frame. Unlike the payload data in the modem frame, these bits are fixed, and known to the transmitter and receiver. In my initial approach I placed the UW bits right at the start of the modem frame. However I discovered a problem – with certain frequency offsets (e.g. multiples of the modem frame rate like +/- 6Hz) – it was possible to get a false sync with no UW errors. So I messed about with the placement of the UW bits until I had a UW that would not give any false syncs at any incorrect frequency offset. To test the UW I wrote another script:

octave:4> debug_false_sync

Which outputs a plot of UW errors against the residual frequency offset:

Note how at any residual frequency offset other than -1.5 to +1.5 Hz there are at least two bit errors. This allows us to reliably detect a false sync due to an incorrect frequency offset estimate.
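The UW check itself is just a bit comparison at fixed positions; here is a Python sketch (the 2-error rule comes from the plot above, but the positions and UW pattern used in the test are made up for illustration, not the actual codec2 placement):

```python
def uw_errors(rx_bits, uw_bits, uw_positions):
    """Count bit errors in the Unique Word bits scattered over the
    demodulated frame at known positions."""
    return sum(rx_bits[pos] != bit
               for pos, bit in zip(uw_positions, uw_bits))

def false_sync(rx_bits, uw_bits, uw_positions):
    # from the plot: any incorrect frequency offset produces at least
    # two UW bit errors, so two or more errors means a bad candidate
    return uw_errors(rx_bits, uw_bits, uw_positions) >= 2
```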

State Machine

The estimators are wrapped up in a state machine to control the entire sync process:

  1. SEARCHING: look at a buffer of incoming samples and estimate timing, freq, and the mx1 metric.
  2. If mx1 is big enough, let’s jump to TRIAL.
  3. TRIAL: measure the number of Unique Word bit errors for a few frames. If they are bad this is probably a false sync so jump back to SEARCHING.
  4. If we get a low number of Unique Word errors for a few frames it’s high fives all round and we jump to SYNCED.
  5. SYNCED: We put up with up to two seconds of high Unique Word errors, as this is life on an HF channel. More than two seconds, and we figure the signal is gone for good so we jump back to SEARCHING.
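A Python sketch of that state machine (the thresholds and frame counts here are illustrative, not the exact values used in the modem):

```python
class SyncStateMachine:
    SEARCHING, TRIAL, SYNCED = "SEARCHING", "TRIAL", "SYNCED"

    def __init__(self, mx1_threshold=0.25, uw_max_errors=1,
                 trial_frames=2, synced_timeout_frames=50):
        self.state = self.SEARCHING
        self.mx1_threshold = mx1_threshold
        self.uw_max_errors = uw_max_errors
        self.trial_frames = trial_frames      # good frames needed in TRIAL
        self.timeout = synced_timeout_frames  # roughly 2 s of bad frames
        self.good = 0                         # consecutive good TRIAL frames
        self.bad = 0                          # consecutive bad SYNCED frames

    def update(self, mx1, uw_errors):
        """Called once per demodulated frame."""
        if self.state == self.SEARCHING:
            if mx1 >= self.mx1_threshold:
                self.state, self.good = self.TRIAL, 0
        elif self.state == self.TRIAL:
            if uw_errors > self.uw_max_errors:
                self.state = self.SEARCHING   # probably a false sync
            else:
                self.good += 1
                if self.good >= self.trial_frames:
                    self.state, self.bad = self.SYNCED, 0
        elif self.state == self.SYNCED:
            if uw_errors > self.uw_max_errors:
                self.bad += 1
                if self.bad > self.timeout:   # signal gone for good
                    self.state = self.SEARCHING
            else:
                self.bad = 0                  # fade over, life on HF goes on
        return self.state
```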

Reading Further

HF Modem Frequency Offset Estimation, an earlier look at freq offset estimation for HF modems
COHPSK and OFDM waveform design spreadsheet
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
README_ofdm.txt, including specifications of the OFDM modem.

Planet Debian: Daniel Kahn Gillmor: E-mail Cryptography

I've been working on cryptographic e-mail software for many years now, and i want to set down some of my observations of what i think some of the challenges are. I'm involved in Autocrypt, which is making great strides in sensible key management (see the last section below, which is short not because i think it's easy, but because i think Autocrypt has covered this area quite well), but there are additional nuances to the mechanics and user experience of e-mail encryption that i need to get off my chest.

Feedback welcome!

Cryptography and E-mail Messages

Cryptographic protection (i.e., digital signatures, encryption) of e-mail messages has a complex history. There are several different ways that various parts of an e-mail message can be protected (or not), and those mechanisms can be combined in a huge number of ways.

In contrast to the technical complexity, users of e-mail tend to expect a fairly straightforward experience. They also have little to no expectation of explicit cryptographic protections for their messages, whether for authenticity, for confidentiality, or for integrity.

If we want to change this -- if we want users to be able to rely on cryptographic protections for some e-mail messages in their existing e-mail accounts -- we need to be able to explain those protections without getting in the user's way.

Why expose cryptographic protections to the user at all?

For a new messaging service, the service itself can simply enumerate the set of properties that all messages exchanged through the service must have, design the system to bundle those properties with message deliverability, and then users don't need to see any of the details for any given message. The presence of the message in that messaging service is enough to communicate its security properties to the extent that the users care about those properties.

However, e-mail is a widely deployed, heterogenous, legacy system, and even the most sophisticated users will always interact with some messages that lack cryptographic protections.

So if we think those protections are meaningful, and we want users to be able to respond to a protected message at all differently from how they respond to an unprotected message (or if they want to know whether the message they're sending will be protected, so they can decide how much to reveal in it), we're faced with the challenge of explaining those protections to users at some level.


The best level to display cryptographic protections for a typical e-mail user is on a per-message basis.

Wider than per-message (e.g., describing protections on a per-correspondent or a per-thread basis) is likely to stumble on mixed statuses, particularly when other users switch e-mail clients that don't provide the same cryptographic protections, or when people are added to or removed from a thread.

Narrower than per-message (e.g., describing protections on a per-MIME-part basis, or even within a MIME part) is too confusing: most users do not understand the structure of an e-mail message at a technical level, and are unlikely to be able to (or want to) spend any time learning about it. And a message with some cryptographic protection and other tamperable user-facing parts is a tempting vector for attack.

So at most, an e-mail should have one cryptographic state that covers the entire message.

At most, the user probably wants to know:

  • Is the content of this message known only to me and the sender (and the other people in Cc)? (Confidentiality)

  • Did this message come from the person I think it came from, as they wrote it? (Integrity and Authenticity)

Any more detail than this is potentially confusing or distracting.


Is it possible to combine the two aspects described above into something even simpler? That would be nice, because it would allow us to categorize a message as either "protected" or "not protected". But there are four possible combinations:

  • unsigned cleartext messages: these are clearly "not protected"

  • signed encrypted messages: these are clearly "protected" (though see further sections below for more troubling caveats)

  • signed cleartext messages: these are useful in cases where confidentiality is irrelevant -- posts to a publicly-archived mailing list, for example, or announcement e-mails about a new version of some piece of software. It's hard to see how we can get away with ignoring this category.

  • unsigned encrypted messages: There are people who send encrypted messages who don't want to sign those messages, for a number of reasons (e.g., concern over the reuse/misuse of their signing key, and wanting to be able to send anonymous messages). Whether you think those reasons are valid or not, such messages exist. Furthermore, some signed messages cannot be validated. For example:

    • the signature was made improperly,
    • the signature was made with an unknown key,
    • the signature was made using an algorithm the message recipient doesn't know how to interpret, or
    • the signature was made with a key that the recipient believes is broken/bad

    We have to handle receipt of signed+encrypted messages with any of these signature failures, so we should probably deal with unsigned encrypted messages in the same way.

My conclusion is that we need to be able to represent these states separately to the user (or at least to the MUA, so it can plan sensible actions), even though i would prefer a simpler representation.

Note that some other message encryption schemes (such as those based on shared symmetric keying material, where message signatures are not used for authenticity) may not actually need these distinctions, and can therefore get away with the simpler "protected/not protected" message state. I am unaware of any such scheme being used for e-mail today.
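As a sketch, a MUA's internal bookkeeping for those separate states could be as simple as this (a hypothetical helper; it folds unverifiable signatures into "unsigned", per the "Partial protections" discussion below):

```python
def display_status(has_signature, signature_valid, encrypted):
    """Collapse the four combinations into the two user-facing
    questions: integrity/authenticity and confidentiality.  A broken
    or unverifiable signature is shown the same as no signature."""
    return {
        "authentic": bool(has_signature and signature_valid),
        "confidential": bool(encrypted),
    }
```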

Partial protections

Sadly, the current encrypted e-mail mechanisms are likely to make even these proposed two indicators blurry if we try to represent them in detail. To avoid adding to user confusion, we need to draw some bright lines.

  • For integrity and authenticity, either the entire message is signed and integrity-checked, or it isn't. We must not report messages as being signed when only a part of the message is signed, or when the signature comes from someone not in the From: field. We should probably also not present "broken signature" status any differently than we present unsigned mail. See discussion on the enigmail mailing list about some of these tradeoffs.

  • For confidentiality, the user likely cares that the entire message was confidential. But there are some circumstances (e.g., when replying to an e-mail, and deciding whether to encrypt or not) when they likely care if any part of the message was confidential (e.g. if an encrypted part is placed next to a cleartext part).

It's interesting (and frustrating!) to note that these are scoped slightly differently -- that we might care about partial confidentiality but not about partial integrity and authenticity.

Note that while we might care about partial confidentiality, actually representing which parts of a message were confidential represents a significant UI challenge in most MUAs.

To the extent that a MUA decides it wants to display details of a partially-protected message, i recommend that the MUA strip/remove all non-protected parts of the message, and just show the user the (remaining) protected parts. In the event that a message has partial protections like this, the MUA may need to offer the user a choice of seeing the entire partially-protected message, or the stripped down message that has complete protections.

To the extent that we expect to see partially-protected messages in the real world, further UI/UX exploration would be welcome. It would be great to imagine a world where those messages simply don't exist though :)

Cryptographic Mechanism

There are three major categories of cryptographic protection for e-mail in use today: Inline PGP, PGP/MIME, and S/MIME.

Inline PGP

I've argued elsewhere (and it remains true) that Inline PGP signatures are terrible. Inline PGP encryption is also terrible, but in different ways:

  • it doesn't protect the structure of the message (e.g., the number and size of attachments is visible)

  • it has no way of protecting confidential message headers (see the Protected Headers section below)

  • it is very difficult to safely represent to the user what has been encrypted and what has not, particularly if the message body extends beyond the encrypted block.

No MUA should ever emit messages using inline PGP, either for signatures or for encryption. And no MUA should ever display an inline-PGP-signed block as though it was signed. Don't even bother to validate such a signature.

However, some e-mails will arrive using inline PGP encryption, and responsible MUAs probably need to figure out what to show to the user in that case, because the user wants to know what's there. :/


PGP/MIME and S/MIME

PGP/MIME and S/MIME are roughly equivalent to one another, with the largest difference being their certificate format. PGP/MIME messages are signed/encrypted with certificates that follow the OpenPGP specification, while S/MIME messages rely on certificates that follow the X.509 specification.

The cryptographic protections of both PGP/MIME and S/MIME work at the MIME layer, providing particular forms of cryptographic protection around a subtree of other MIME parts.

Both standards have very similar existing flaws that must be remedied or worked around in order to have sensible user experience for encrypted mail.

This document has no preference of one message format over the other, but acknowledges that it's likely that both will continue to exist for quite some time. To the extent possible, a sensible MUA that wants to provide the largest coverage will be able to support both message formats and both certificate formats, hopefully with the same fixes to the underlying problems.

Cryptographic Envelope

Given that the plausible standards (PGP/MIME and S/MIME) both work at the MIME layer, it's worth thinking about the MIME structure of a cryptographically-protected e-mail messages. I introduce here two terms related to an e-mail message: the "Cryptographic Envelope" and the "Cryptographic Payload".

Consider the MIME structure of a simple cleartext PGP/MIME signed message:

0A └┬╴multipart/signed
0B  ├─╴text/plain
0C  └─╴application/pgp-signature

Consider also the simplest PGP/MIME encrypted message:

1A └┬╴multipart/encrypted
1B  ├─╴application/pgp-encrypted
1C  └─╴application/octet-stream
1D     ╤ <<decryption>>
1E     └─╴text/plain

Or, an S/MIME encrypted message:

2A └─╴application/pkcs7-mime; smime-type=enveloped-data
2B     ╤ <<decryption>>
2C     └─╴text/plain

Note that the PGP/MIME decryption step (denoted "1D" above) may also include a cryptographic signature that can be verified, as a part of that decryption. This is not the case with S/MIME, where the signing layer is always separated from the encryption layer.

Also note that any of these layers of protection may be nested, like so:

3A └┬╴multipart/encrypted
3B  ├─╴application/pgp-encrypted
3C  └─╴application/octet-stream
3D     ╤ <<decryption>>
3E     └┬╴multipart/signed
3F      ├─╴text/plain
3G      └─╴application/pgp-signature

For an e-mail message that has some set of these layers, we define the "Cryptographic Envelope" as the layers of cryptographic protection that start at the root of the message and extend until the first non-cryptographic MIME part is encountered.

Cryptographic Payload

We can call the first non-cryptographic MIME part we encounter (via depth-first search) the "Cryptographic Payload". In the examples above, the Cryptographic Payload parts are labeled 0B, 1E, 2C, and 3F. Note that the Cryptographic Payload itself could be a multipart MIME object, like 4E below:

4A └┬╴multipart/encrypted
4B  ├─╴application/pgp-encrypted
4C  └─╴application/octet-stream
4D     ╤ <<decryption>>
4E     └┬╴multipart/alternative
4F      ├─╴text/plain
4G      └─╴text/html

In this case, the full subtree rooted at 4E is the "Cryptographic Payload".
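The depth-first walk described here can be sketched with Python's standard email library (a hypothetical helper; it assumes any encrypted layers have already been decrypted in place, since the walk otherwise stops at the ciphertext, and it omits S/MIME's application/pkcs7-mime layer for the same reason):

```python
from email.mime.application import MIMEApplication
from email.mime.multipart import MIMEMultipart
from email.mime.text import MIMEText

# Which child of each envelope layer to descend into: the signed data
# is the first child of multipart/signed, and the (decrypted) payload
# stands in for the second child of multipart/encrypted.
ENVELOPE_CHILD = {"multipart/signed": 0, "multipart/encrypted": 1}

def cryptographic_payload(msg):
    """Descend from the root through cryptographic layers only and
    return (payload, envelope): the first non-cryptographic part and
    the list of layers above it.  An empty list is the null
    Cryptographic Envelope."""
    envelope = []
    part = msg
    while part.get_content_type() in ENVELOPE_CHILD:
        envelope.append(part.get_content_type())
        part = part.get_payload(ENVELOPE_CHILD[envelope[-1]])
    return part, envelope
```

For the simple signed message above (0A-0C) this returns the text/plain part and ["multipart/signed"]; for a message like 5A it returns the multipart/mixed root with an empty envelope.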

The cryptographic properties of the message should be derived from the layers in the Cryptographic Envelope, and nothing else, in particular:

  • the cryptographic signature associated with the message, and
  • whether the message is "fully" encrypted or not.

Note that if some subpart of the message is protected, but the cryptographic protections don't start at the root of the MIME structure, there is no message-wide cryptographic envelope, and therefore there either is no Cryptographic Payload, or (equivalently) the whole message (5A here) is the Cryptographic Payload, but with a null Cryptographic Envelope:

5A └┬╴multipart/mixed
5B  ├┬╴multipart/signed
5C  │├─╴text/plain
5D  │└─╴application/pgp-signature
5E  └─╴text/plain

Note also that if there are any nested encrypted parts, they do not count toward the Cryptographic Envelope, but may mean that the message is "partially encrypted", albeit with a null Cryptographic Envelope:

6A └┬╴multipart/mixed
6B  ├┬╴multipart/encrypted
6C  │├─╴application/pgp-encrypted
6D  │└─╴application/octet-stream
6E  │   ╤ <<decryption>>
6F  │   └─╴text/plain
6G  └─╴text/plain

Layering within the Envelope

The order and number of the layers in the Cryptographic Envelope might make a difference in how the message's cryptographic properties should be considered.

signed+encrypted vs encrypted+signed

One difference is whether the signature is made over the encrypted data, or whether the encryption is done over the signature. Encryption around a signature means that the signature was hidden from an adversary. And a signature around the encryption indicates that the signer may not know the actual contents of what was signed.

The common expectation is that the signature will be inside the encryption. This means that the signer likely had access to the cleartext, and it is likely that the existence of the signature is hidden from an adversary, both of which are sensible properties to want.

Multiple layers of signatures or encryption

Some specifications define triple-layering: signatures around encryption around signatures. It's not clear that this is in wide use, or how any particular MUA should present such a message to the user.

In the event that there are multiple layers of protection of a given kind in the Cryptographic Envelope, the message should be marked based on the properties of the inner-most layer of encryption, and the inner-most layer of signing. The main reason for this is simplicity -- it is unclear how to indicate arbitrary (and potentially-interleaved) layers of signatures and encryption.

(FIXME: what should be done if the inner-most layer of signing can't be validated for some reason, but one of the outer layers of signing does validate? ugh MIME is too complex…)

Signed messages should indicate the intended recipient

Ideally, all signed messages would indicate their intended recipient as a way of defending against some forms of replay attack. For example, Alice signs a signed message to Bob that says "please perform task X"; Bob reformats and forwards the message to Charlie as though it was directly from Alice. Charlie might now believe that Alice is asking him to do task X, instead of Bob.

Of course, this concern also includes encrypted messages that are also signed. However, there is no clear standard for how to include this information in either an encrypted message or a signed message.

An e-mail specific mechanism is to ensure that the To: and Cc: headers are signed appropriately (see the "Protected Headers" section below).

See also Vincent Breitmoser's proposal of Intended Recipient Fingerprint for OpenPGP as a possible OpenPGP-specific implementation.

However: what should the MUA do if a message is encrypted but no intended recipients are listed? Or what if a signature clearly indicates the intended recipients, but does not include the current reader? Should the MUA render the message differently somehow?

Protected Headers

Sadly, e-mail cryptographic protections have traditionally only covered the body of the e-mail, and not the headers. Most users do not (and should not have to) understand the difference. There are two not-quite-standards for protecting the headers:

  • message wrapping, which puts an entire e-mail message (message/rfc822 MIME part) "inside" the cryptographic protections. This is also discussed in RFC 5751 §3.1. I don't know of any MUAs that implement this.

  • memory hole, which puts headers on the top-level MIME part directly. This is implemented in Enigmail and K-9 mail.

These two different mechanisms are roughly equivalent, with slight differences in how they behave for clients who can handle cryptographic mail but have not implemented them. If a MUA is capable of interpreting one form successfully, it probably is also capable of interpreting the other.

Note that in particular, the cryptographic headers for a given message ought to be derived directly from the headers present (in one of the above two ways) in the root element of the Cryptographic Payload MIME subtree itself. If headers are stored anywhere else (e.g. in one of the leaf nodes of a complex Payload), they should not propagate to the outside of the message.

If the headers the user sees are not protected, that lack of protection may need to be clearly explained and visible to the user. This is unfortunate because it is potentially extremely complex for the UI.

The types of cryptographic protections can differ per header. For example, it's relatively straightforward to pack all of the headers inside the Cryptographic Payload. For a signed message, this would mean that all headers are signed. This is the recommended approach when generating an encrypted message. In this case, the "outside" headers simply match the protected headers. And in the case that the outsider headers differ, they can simply be replaced with their protected versions when displayed to the user. This defeats the replay attack described above.

But for an encrypted message, some of those protected headers will be stripped from the outside of the message, and others will be placed in the outer header in cleartext for the sake of deliverability. In particular, From: and To: and Date: are placed in the clear on the outside of the message.

So, consider a MUA that receives an encrypted, signed message, with all headers present in the Cryptographic Payload (so all headers are signed), but From: and To: and Date: in the clear on the outside. Assume that the external Subject: reads simply "Encrypted Message", but the internal (protected) Subject: is actually "Thursday's Meeting".

When displaying this message, how should the MUA distinguish between the Subject: and the From: and To: and Date: headers? All headers are signed, but only Subject: has been hidden. Should the MUA assume that the user understands that e-mail metadata like this leaks to the MTA? This is unfortunately true today, but not something we want in the long term.

Message-ID and threading headers

Messages that are part of an e-mail thread should ensure that Message-Id: and References: and In-Reply-To: are signed, because those markers provide contextual considerations for the signed content. (e.g., a signed message saying "I like this plan!" means something different depending on which plan is under discussion).

That said, given the state of the e-mail system, it's not clear what a MUA should do if it receives a cryptographically-signed e-mail message where these threading headers are not signed. That is the default today, and we do not want to incur warning fatigue for the user. Furthermore, unlike Date: and Subject: and From: and To: and Cc:, the threading headers are not usually shown directly to the user, but instead affect the location and display of messages.

Perhaps there is room here for some indicator at the thread level, that all messages in a given thread are contextually well-bound? Ugh, more UI complexity.

Protecting Headers during e-mail generation

When generating a cryptographically-protected e-mail (either signed or encrypted or both), the sending MUA should copy all of the headers it knows about into the Cryptographic Payload using one of the two techniques referenced above. For signed-only messages, that is all that needs doing.

The challenging question is for encrypted messages: what headers on the outside of the message (outside the Cryptographic Envelope) can be stripped (removed completely) or stubbed (replaced with a generic or randomized value)?

Subject: should obviously be stubbed -- for most users, the subject is directly associated with the body of the message (it is not thought of as metadata), and the Subject is not needed for deliverability. Since some MTAs might treat a message without a Subject: poorly, and arbitrary Subject lines are a nuisance, it is recommended to use the exact string below for all external Subjects:

Subject: Encrypted Message

However, stripping or stubbing other headers is more complex.

The date header can likely be stripped from the outside of an encrypted message, or can have its temporal resolution made much more coarse. However, this doesn't protect much information from the MTAs that touch the message, since they are likely to see the message when it is in transit. It may protect the message from some metadata analysis as it sits on disk, though.

The To: and Cc: headers could be stripped entirely in some cases, though that may make the e-mail more prone to being flagged as spam. However, some e-mail messages sent to Bcc groups are still deliverable, with a header of

To: undisclosed-recipients:;

Note that the Cryptographic Envelope itself may leak metadata about the recipient (or recipients), so stripping this information from the external header may not be useful unless the Cryptographic Envelope is also stripped of metadata appropriately.

The From: header could also be stripped or stubbed. It's not clear whether such a message would be deliverable, particularly given DKIM and DMARC rules for incoming domains. Note that the MTA will still see the SMTP MAIL FROM: verb before the message body is sent, and will use the address there to route bounces or DSNs. However, once the message is delivered, a stripped From: header is an improvement in the metadata available on-disk. Perhaps this is something that a friendly/cooperative MTA could do for the user?

Even worse is the Message-Id: header and the associated In-Reply-To: and References: headers. Some MUAs (like notmuch) rely heavily on the Message-Id:. A message with a stubbed-out Message-Id would effectively change its Message-Id: when it is decrypted. This may not be a straightforward or safe process for MUAs that are Message-ID-centric. That said, a randomized external Message-ID: header could help to avoid leaking the fact that the same message was sent to multiple people, so long as the message encryption to each person was also made distinct.

Stripped In-Reply-To: and References: headers are also a clear metadata win -- the MTA can no longer tell which messages are associated with each other. However, this means that an incoming message cannot be associated with a relevant thread without decrypting it, something that some MUAs may not be in a position to do.

Recommendation for encrypted message generation in 2018: copy all headers during message generation; stub out only the Subject for now.
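That recommendation can be sketched as a small Python helper (hypothetical names and addresses; a real MUA would go on to sign/encrypt the returned payload and emit the returned outer headers in the clear):

```python
from email.mime.text import MIMEText

STUB_SUBJECT = "Encrypted Message"

def protect_headers(headers, body):
    """Copy every header onto the root of the Cryptographic Payload
    (so all of them end up signed/encrypted), and return that payload
    plus the outer headers with only Subject: stubbed out."""
    payload = MIMEText(body)
    for name, value in headers.items():
        payload[name] = value          # the protected (signed) copy
    outer = dict(headers)
    outer["Subject"] = STUB_SUBJECT    # the only header stubbed for now
    return payload, outer
```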

Bold MUAs may choose to experiment with stripping or stubbing other fields beyond Subject:, possibly in response to some sort of signal from the recipient that they believe that stripping or stubbing some headers is acceptable. Where should such a signal live? Perhaps a notation in the recipient's certificate would be useful.

Key management

Key management bedevils every cryptographic scheme, e-mail or otherwise. The simplest solution for users is to automate key management as much as possible, making reasonable decisions for them. The Autocrypt project outlines a sensible approach here, so i'll leave most of this section short and hope that it's covered by Autocrypt. While fully-automated key management is likely to be susceptible either to MITM attacks or trusted third parties (depending on the design), as a community we need to experiment with ways to provide straightforward (possibly gamified?) user experience that enables and encourages people to do key verification in a fun and simple way. This should probably be done without ever mentioning the word "key", if possible. Serious UI/UX work is needed. I'm hoping future versions of Autocrypt will cover that territory.

But however key management is done, the result for the e-mail user experience is that the MUA will have some sense of the "validity" of a key being used for any particular correspondent. If it is expressed at all, it should be done as simply as possible by default. In particular, MUAs should avoid confusing the user with distinct (nearly orthogonal) notions of "trust" and "validity" while reading messages, and should not necessarily associate the validity of a correspondent's key with the validity of a message cryptographically associated with that correspondent's key. Identity is not the same thing as message integrity, and trustworthiness is not the same thing as identity either.

Key changes over time

Key management is hard enough in the moment. With a store-and-forward system like e-mail, evaluating the validity of a signed message a year after it was received is tough. Your concept of the correspondent's correct key may have changed, for example. I think our understanding of what to do in this context is not currently clear.


Rondam Ramblings: A quantum mechanics puzzle, part drei

[This post is the third part of a series.  You should read parts one and two before reading this or it won't make any sense.] So we have two more cases to consider: Case 3: we pulse the laser with very short pulses, emitting only one photon at a time.  This is actually not possible with a laser, but it is possible with something like this single-photon-emitting light source (which was actually

Planet Debian: Shirish Agarwal: Reviewing Agent 6

The city I come from, Pune, has been experiencing somewhat of a heat-wave, so I have been cutting back on work and getting a lot of back-dated reading done. One of the first books I read was Tom Rob Smith’s Agent 6. Fortunately, I read only the third book and not the first two, which from the synopsis seem to be more gruesome than the one I read, so I guess there is something to be thankful for.

Agent 6 copyright - Tom Rob Smith & Publishers

While I was reading the book, I had thought that the MGB was a fictitious organization invented by the author, but a quick look at Wikipedia told me it is the organization the KGB was later based upon.

I found the book to be both an easy read and a layered one. I was lucky to get a big print version of the book, so I was able to share the experience with my mother as well. The book is somewhat hefty, topping out around 600 pages, although it’s listed as 480 pages on Amazon.

As I had shared previously, I had read Russka and had been disappointed to see how the Russian public was let down time and again in its hopes for democracy. I do understand that the book (Russka) itself was written by a western author and could have tapped into some unconscious biases, but it seemed accurate as far as whatever I could find from public resources went. That story I may return to at a future date, but this time it is Agent 6 I want to talk about.

I found the book pretty immersive, and at the same time it left me thinking about the many threads the author touches on and then moves past. I was often left wondering, and many times just had to sleep on it and think deep thoughts, as there was quite a bit to chew on.

I am not going to spoil any surprises except to say there are quite a few twists and the ending is also what I didn’t expect.

In the end, if you appreciate politics, history and a bit of adventure, and have a bit of patience, the book is bound to reward you. It is not meant to be a page-turner, but if you are one who enjoys savoring your drink you are going to enjoy it thoroughly.

Planet Debian: Jonathan McDowell: Home Automation: Getting started with MQTT

I’ve been thinking about trying to sort out some home automation bits. I’ve moved from having a 7 day heating timer to a 24 hour timer and I’d forgotten how annoying that is at weekends. I’d like to monitor temperatures in various rooms and use that, along with presence detection, to be a bit more intelligent about turning the heat on. Equally I wouldn’t mind tying my Alexa in to do some voice control of lighting (eventually maybe even using DeepSpeech to keep everything local).

Before all of that I need to get the basics in place. This is the first in a series of posts about putting together the right building blocks to allow some useful level of home automation / central control. The first step is working out how to glue everything together. A few years back someone told me MQTT was the way forward for IoT applications, being more lightweight than a RESTful interface and thus better suited to small devices. At the time I wasn’t convinced, but it seems they were right and MQTT is one of the more popular ways of gluing things together.

I found the HiveMQ series on MQTT Essentials to be a pretty good intro; my main takeaway was that MQTT allows for a single message broker to enable clients to publish data and multiple subscribers to consume that data. TLS is supported for secure data transfer and there’s a whole bunch of different brokers and client libraries available. The use of a broker is potentially helpful in dealing with firewalling; clients and subscribers only need to talk to the broker, rather than requiring any direct connection.
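The publish/subscribe decoupling that makes this attractive can be sketched in a few lines of Python. This is a toy in-memory broker of my own, purely to illustrate the model (the class and topic names are made up); a real broker like Mosquitto adds networking, authentication, TLS and quality-of-service on top:

```python
from collections import defaultdict

class TinyBroker:
    """Toy illustration of a broker: publishers and subscribers only know topics."""
    def __init__(self):
        self._subs = defaultdict(list)  # topic -> list of subscriber callbacks

    def subscribe(self, topic, callback):
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        # The publisher never learns who, if anyone, is listening.
        for cb in self._subs[topic]:
            cb(topic, payload)

broker = TinyBroker()
received = []
broker.subscribe("sensors/livingroom/temp", lambda t, p: received.append((t, p)))
broker.publish("sensors/livingroom/temp", "21.5")
# received now holds [("sensors/livingroom/temp", "21.5")]
```

The key property is that the temperature sensor and whatever consumes its readings never need to know about each other, only about the broker.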

With all that in mind I decided to set up a broker to play with the basics. I made the decision that it should run on my OpenWRT router - all the devices I want to hook up can easily see that device, and if it’s down then none of them are going to be able to get to a broker hosted anywhere else anyway. I’d seen plenty of info about Mosquitto and it’s already in the OpenWRT package repository. So I sorted out a Let’s Encrypt cert, installed Mosquitto and created a couple of test users:

opkg install mosquitto-ssl
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user1 foo
mosquitto_passwd -b /etc/mosquitto/mosquitto.users user2 bar
chown mosquitto /etc/mosquitto/mosquitto.users
chmod 600 /etc/mosquitto/mosquitto.users

I then edited /etc/mosquitto/mosquitto.conf and made sure the following are set. In particular you need cafile set in order to enable TLS:

port 8883
cafile /etc/ssl/lets-encrypt-x3-cross-signed.pem
certfile /etc/ssl/mqtt.crt
keyfile /etc/ssl/mqtt.key

log_dest syslog

allow_anonymous false

password_file /etc/mosquitto/mosquitto.users
acl_file /etc/mosquitto/mosquitto.acl

Finally I created /etc/mosquitto/mosquitto.acl with the following:

user user1
topic readwrite #

user user2
topic read ro/#
topic readwrite test/#

That gives me user1 who has full access to everything, and user2 with readonly access to the ro/ tree and read/write access to the test/ tree.
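To illustrate how those filters are interpreted, here is my own simplified Python rendering of the MQTT wildcard matching rules ('+' matches exactly one topic level, '#' matches the rest of the topic). It is not Mosquitto's implementation and it ignores corner cases such as $-prefixed system topics:

```python
def topic_matches(filter_, topic):
    """Return True if an MQTT-style topic filter matches a concrete topic."""
    f_parts = filter_.split('/')
    t_parts = topic.split('/')
    for i, part in enumerate(f_parts):
        if part == '#':            # multi-level wildcard: matches everything below
            return True
        if i >= len(t_parts):
            return False
        if part != '+' and part != t_parts[i]:  # '+' matches any single level
            return False
    return len(f_parts) == len(t_parts)

# Mirroring the ACL above: user2 may read under ro/ and read/write under test/.
assert topic_matches('ro/#', 'ro/sensors/temp')
assert topic_matches('test/#', 'test/message')
assert not topic_matches('ro/#', 'test/message')
```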

To test everything was working I installed mosquitto-clients on a Debian test box and in one window ran:

mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo

and in another:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test/message' -m 'Hello World!' -u user2 -P bar

(without the --capath it’ll try a plain TCP connection rather than TLS, and not produce a useful error message), which resulted in the mosquitto_sub instance outputting:

test/message Hello World!


Trying to publish to a topic that user2 has no write access to:

mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'test2/message' -m 'Hello World!' -u user2 -P bar

resulted in no output due to the ACL preventing it. All good and ready to actually make use of - of which more later.

Planet DebianDaniel Stender: AFL in Ubuntu 18.04 is broken

As has been reported on the discussion list for American Fuzzy Lop lately, the fuzzer is unfortunately broken in Ubuntu 18.04 “Bionic Beaver”. Ubuntu Bionic ships AFL 2.52b, which is the current version at the moment of writing this blog post, but the particular problem comes from the accompanying gcc-7 package, which is pulled in by afl via the build-essential package. It was noticed through continuous integration in the development branch for the next Debian release (#895618) that introducing a triplet-prefixed as in gcc-7 7.3.0-16 (the same change was made for gcc-8, see #895251) affected the -B option in such a way that afl-gcc (the gcc wrapper) can’t use the shipped assembler (/usr/lib/afl-as) anymore to install the instrumentation into the target binary (#896057, thanks to Jakub Wilk for spotting the problem). As a result, instrumented fuzzing and other things in afl don’t work:

$ afl-gcc --version
 afl-cc 2.52b by <>
 gcc (Ubuntu 7.3.0-16ubuntu3) 7.3.0
$ afl-gcc -o test-instr test-instr.c 
 afl-cc 2.52b by <>
$ afl-fuzz -i in -o out -- ./test-instr
 afl-fuzz 2.52b by <>
 [+] You have 2 CPU cores and 1 runnable tasks (utilization: 50%).
 [+] Try parallel jobs - see /usr/share/doc/afl-doc/docs/parallel_fuzzing.txt.
 [*] Creating hard links for all input files...
 [*] Validating target binary...
 [-] Looks like the target binary is not instrumented! The fuzzer depends on
     compile-time instrumentation to isolate interesting test cases while
     mutating the input data. For more information, and for tips on how to
     instrument binaries, please see /usr/share/doc/afl-doc/docs/README.
     When source code is not available, you may be able to leverage QEMU
     mode support. Consult the README for tips on how to enable this.
     (It is also possible to use afl-fuzz as a traditional, "dumb" fuzzer.
     For that, you can use the -n option - but expect much worse results.)
 [-] PROGRAM ABORT : No instrumentation detected
          Location : check_binary(), afl-fuzz.c:6920

The same error message is put out e.g. by afl-showmap. gcc-7 7.3.0-18 fixes this. As an alternative until that becomes available, afl-clang, which uses the clang compiler, can be used instead to prepare the target binary properly:

$ afl-clang --version
 afl-cc 2.52b by <>
 clang version 4.0.1-10 (tags/RELEASE_401/final)
$ afl-clang -o test-instr test-instr.c 
 afl-cc 2.52b by <>
 afl-as 2.52b by <>
 [+] Instrumented 6 locations (64-bit, non-hardened mode, ratio 100%)

CryptogramSupply-Chain Security

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It's a legitimate fear, and perhaps a prudent action. But it's just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add "backdoors" to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It's relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn't the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: "Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems."

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other of that country's tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claimed to be free of Western influence and backdoors. If a country doesn't trust another country, then it can't trust that country's computer products.

But this trust isn't limited to the country where the company is based. We have to trust the country where the software is written -- and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips, without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today's large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries' citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn't an option; the tech world is far too internationally interdependent for that. We can't trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so they have that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don't know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don't know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It's doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can't buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we're largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn't get too far out of control.

This essay previously appeared in the Washington Post.

Planet DebianAndreas Metzler: balance sheet snowboarding season 2017/18

For a change, a winter with snow again, allowing an early start to the season (December 2). Due to an early Easter (lifts closing), the last run was on April 14. The amount of snow would have allowed boarding for at least another two weeks. (Today, on May 10, mountainbiking is still curtailed; routes above 1650 meters of altitude are not yet rideable.)

OTOH the weather sucked; extended periods of stable sunny weather were rare, and totally absent in February and the first half of March. I only had 3 days on piste in February. I made many kilometres luging during that time. ;-)

Anyway here it is:

season    number of (partial) days    total meters of altitude    # of runs
2005/06                         25                      124634          309
2006/07                         17                       74096          189
2007/08                         29                      219936          503
2008/09                         37                      226774          551
2009/10                         30                      202089          462
2010/11                         30                      203918          449
2011/12                         25                      228588          516
2012/13                         23                      203562          468
2013/14                         30                      274706          597
2014/15                         24                      224909          530
2015/16                         17                      138037          354
2016/17                         30                      269819          634
2017/18                         29                      266158          616

Worse Than FailureCodeSOD: A Quick Replacement

Lucio Crusca was doing a bit of security auditing when he found this pile of code, and it is indeed a pile. It is PHP, which doesn’t automatically make it bad, but it makes use of a feature of PHP so bad that they’ve deprecated it in recent versions: the create_function method.

Before we even dig into this code, the create_function method takes a string, runs eval on it, and returns the name of the newly created anonymous function. Prior to PHP 5.3.0 this was their method of doing lambdas. And while the function is officially deprecated as of PHP 7.2.0… it’s not removed. You can still use it. And I’m sure a lot of code probably still does. Like this block…

        public static function markupToPHP($content) {
                if ($content instanceof phpQueryObject)
                        $content = $content->markupOuter();
                /* <php>...</php> to <?php...? > */
                $content = preg_replace_callback(
                        '@<php>\s*<!--(.*?)-->\s*</php>@s',
                        array('phpQuery', '_markupToPHPCallback'),
                        $content
                );
                /* <node attr='< ?php ? >'> extra space added to save highlighters */
                $regexes = array(
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(\')([^\']*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^\']*)\'@s',
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(")([^"]*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^"]*)"@s',
                );
                foreach($regexes as $regex)
                        while (preg_match($regex, $content))
                                $content = preg_replace_callback(
                                        $regex,
                                        create_function('$m',
                                                'return $m[1].$m[2].$m[3]."<?php "
                                                        .str_replace(
                                                                array("%20", "%3E", "%09", "&#10;", "&#9;", "%7B", "%24", "%7D", "%22", "%5B", "%5D"),
                                                                array(" ", ">", "       ", "\n", "      ", "{", "$", "}", \'"\', "[", "]"),
                                                                htmlspecialchars_decode($m[4])
                                                        )
                                                        ." ?>".$m[5].$m[2];'
                                        ),
                                        $content
                                );
                return $content;
        }

From what I can determine from the comments and the code, this is taking some arbitrary content in the form <php>PHP CODE HERE</php> and converting it to <?php PHP CODE HERE ?>. I don’t know what happens after this function is done with it, but I’m already terrified.

The inner loop fascinates me. while (preg_match($regex, $content)) implies that we need to call the replace function multiple times, but preg_replace_callback by default replaces all instances of the matching regex, so there’s absolutely no reason for the while loop. Then, of course, there’s the use of create_function, which is itself a WTF, but it’s also worth noting that there’s no need to do this dynamically: they could just as easily have declared a callback function, as they did above with _markupToPHPCallback.
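The same point can be demonstrated in Python, whose re.sub is a rough analogue of preg_replace_callback: a single call rewrites every non-overlapping match, so no outer loop is needed (unless a replacement could itself reintroduce a match, which is not the case here). The sample markup is invented for illustration:

```python
import re

content = "<php>echo 1;</php> text <php>echo 2;</php>"

# One call rewrites every non-overlapping match; no while loop required.
result = re.sub(r'<php>(.*?)</php>',
                lambda m: '<?php ' + m.group(1) + ' ?>',
                content)
# result == '<?php echo 1; ?> text <?php echo 2; ?>'
```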

Lucio adds:

I was looking for potential security flaws: well, I’m not sure this is actually exploitable, because even black hats have limited patience!

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!


Planet DebianEddy Petrișor: "Where does Unity store its launch bar items?" or "Convincing Ubuntu's Unity 7.4.5 to run the newer version of PyCharm when starting from the launcer"

I have been driving a System76 Oryx-Pro for some time now. And I am running Ubuntu 16.04 on it.
I typically try to avoid polluting global namespaces, so any apps I install from source tend to go under a versioned directory under ~/opt; for instance, PyCharm Community Edition 2016.3.1 is installed under ~/opt/pycharm-community-2016.3.1.

Today, after Pycharm suggested I install a newer version, I downloaded the current package, and ran it, as instructed in the embedded readme.txt, via the wrapper script:
~/opt/pycharm-community-2018.1.2/bin$ ./
Everything looked OK, but when I wanted to lock the icon to the launch bar I realized Unity did not display a separate PyCharm Community Edition icon for the 2018.1.2 version, but showed the existing icon as active.

"I guess it's the same filename, so maybe unity confuses the older version with the new one, so I have to replace the launcher to user the newer version by default", I said.

So I closed the interface, removed the PyCharm Community Edition icon from the launcher, restarted the newer PyCharm from the command line, locked the icon again, closed PyCharm once more, and then clicked on the launcher icon.

Surprise! Unity was launching the old version! What?!

Repeated the entire series of steps, suspecting some PEBKAC, but was surprised to see the same result.

"Damn! Unity is stupid! I guess is a good thing they decided to kill it!", I said to myself.

Well, it shouldn’t be that hard to find the offending item, so I started to grep in ~/.config, then in ~/.*, for the string "PyCharm Community Edition", without success.
Hmm, I guess the Linux world copied a lot of bad ideas from the Windows world; the configs are probably not in ~/.* in plain text, they’re probably in that simulacrum of the Windows registry called dconf. So I installed dconf-editor and searched once more for the keyword "Community", but only found one entry, in the gedit filebrowser context.

So where does Unity get its launch bar items from? There is no "Properties" entry in the context menu, and I didn’t want to try to debug the starting of my graphical environment, but Unity is open source, so I could look at the sources.

After some fighting with dead links to subpages, then searching for "git repository Ubuntu Unity", I remembered that Ubuntu loves Bazaar, so I searched for "bzr Ubuntu Unity repository", with no luck. Luckily, Wikipedia usually has those kinds of links, and there I found the damn thing.

BTW, am I the only one considering some strong hits with a clue bat for developers who name projects with a generic term that has no chance of overriding the typical meaning in common parlance, such as "Unity" or "Repo"?

Finding the sources and looking a little at the repository did not make it clear what the entry point was. I was expecting at least the README or the INSTALL file to give some relevant hints on the config or the initialization. My patience was running dry.

Maybe looking on my own system would be a better approach?
eddy@feodora:~$ which unity
eddy@feodora:~$ ll $(which unity)
-rwxr-xr-x 1 root root 9907 feb 21 21:38 /usr/bin/unity*
eddy@feodora:~$ ^ll^file
file $(which unity)
/usr/bin/unity: Python script, ASCII text executable
BINGO! This is a Python script, not a binary, in spite of the many .cpp sources in the Unity tree.

I opened the file with less, then found this interesting bit:
 def reset_launcher_icons ():
    '''Reset the default launcher icon and restart it.'''
    subprocess.Popen(["gsettings", "reset" ,"com.canonical.Unity.Launcher" , "favorites"])
Great! So it stores that stuff in the pseudo-registry, and I have to look under com.canonical.Unity.Launcher.favorites. Firing up dconf-editor again, I found the relevant bit in the value of that key:
So where is this .desktop file? I guess using find is going to bring it up:
find /home/eddy/.local/ -name 'jetbrains*' -exec vi {} \;
It did, and the content made it obvious what was happening:
[Desktop Entry]
Name=PyCharm Community Edition
Exec="/home/eddy/opt/pycharm-community-2016.3.1/bin/" %f

Comment=The Drive to Develop
Probably Unity did not create a new desktop file when locking the icon; it would simply check whether the jetbrains-pycharm-ce.desktop file already existed in my .local directory, see that it did, and skip its recreation.

Just as somebody said, all difficult computer science problems are caused by either leaky abstractions or caching. I guess here we’re having some sort of caching issue, but it is easy to fix: just edit the file:
eddy@feodora:~$ cat /home/eddy/.local/share/applications/jetbrains-pycharm-ce.desktop

[Desktop Entry]
Name=PyCharm Community Edition
Exec="/home/eddy/opt/pycharm-community-2018.1.2/bin/" %f
Comment=The Drive to Develop
Checked the start again, and now the expected splash screen appears. GREAT!
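For the record, the edit can also be scripted. A minimal sketch in Python; the launcher file name matches the post, but the full Exec path (including the pycharm.sh script name) is my assumption, since the post shows it truncated:

```python
from pathlib import Path

# Recreate the stale launcher entry (the pycharm.sh script name is assumed).
desktop = Path("jetbrains-pycharm-ce.desktop")
desktop.write_text(
    "[Desktop Entry]\n"
    "Name=PyCharm Community Edition\n"
    'Exec="/home/eddy/opt/pycharm-community-2016.3.1/bin/pycharm.sh" %f\n'
    "Comment=The Drive to Develop\n"
)

# Point the Exec line at the new install by swapping the version string.
text = desktop.read_text()
desktop.write_text(text.replace("pycharm-community-2016.3.1",
                                "pycharm-community-2018.1.2"))
```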

I wonder if this is a Unity issue, or whether it is due to some broken library that could affect other desktop environments such as MATE, GNOME or XFCE.

"Only" lost 2 hours (including this post) with this stupid bug, so I can go back to what I was trying in the first place, but now is already to late, so I have to go to sleep.

Planet DebianShirish Agarwal: A random collection of thoughts

First of all, a couple of weeks back I was able to put out an article about riot-web. It had been on my mind for a month or more, so I finally sat down and wrote it, re-writing it a few times to make it simpler for newbies.

One thing I did miss sharing was the Debian matrix page. The other thing that was needling me was the comment. This is not the first time I have heard that complaint about riot-web, and at times I have seen it happen myself.

The thing is, it’s always an issue for me to decide when to write about something and how to say whether a piece of software is mature, as software in general has a tendency to fail at any given point in time.

For such queries I haven’t the foggiest idea what to share, as the only debug mode is available if you have built riot from source and run the -debug tests, and you can’t say that to a newbie.

One of the things I didn’t mention is whether any researchers have tried to get data out of riot-web, because AFAIK Twitter banned a lot of researchers who were trying to get data out of its platform to do analytics.

I was reminded of this as I read an open letter, published a couple of days ago by researchers, raising independent oversight over Facebook as a concern as well.

It would have been interesting if any new studies had been made of the riot-web implementation, something similar to a study of IRC I read some years ago. The mathematical observations were above my head, but still some of the observations were interesting to say the least.

There has been another pattern I have been seeing in the newer decentralized free software services. While in theory the reference implementation is supposed to be one of many, many a time it becomes the de facto implementation. Otherwise you have the IRC way, where each client just willy-nilly added features but still somehow managed to stay sane and interoperate over the years, but that’s a different story altogether for a different day.

While I like the latter, migration is and can be a huge headache, from one client to another, irrespective of what the content is. There is and can be data loss or even metadata loss, and you may come to know only years later (if you are ‘lucky’) what info it is that you lost.

The easiest example is contacts migration. Most professionals have at least a hundred or two contacts; now if a few go missing during migration, from one version to another or from one platform to another, they either don’t have the time or the skills to figure out why part of the migration succeeded and the rest didn’t. Of course there is a whole industry of migration experts who can write code with all the hooks to see that the migration works smoothly, or to point out what was not migrated.

These services are wholly commercial in nature and also one cannot know in advance how good/bad the service is as usually issues come to bite much later.

On another note altogether, I had been watching the Java confusion from a distance. There’s a Mars Sims project I have been following for quite some time; I made a few bug reports and, for reasons unknown, was eventually made a contributor. They are also in flux as to what to do. I had read about the situation off and on around the web and was glad to point out the correct links.

I had read rumors some time back that Oracle was bull-charging Java so that it would be the only provider in town and almost everybody would have to come to it for support rather than any other provider. I can’t prove it one way or the other, as it’s just a rumor, but it does seem to make sense.

In the end, I remember a comment made by the DD Praveen at a minidebconf that happened a month ago. It was about how upstreams are somewhat discouraging of Debian practices, and specifically of Debian Policy. This has been discussed somewhat threadbare in the thread "What can Debian do to provide complex applications to its users?" on debian-devel. The short history I know is that minified JavaScript does and can have security issues; see this comment in the same thread, as well as the related point shared in Debian Policy. Even Praveen’s reply in the thread is pretty illuminating.

As a user I recommend Debian to my friends and clients because of its stability as well as the security tracker, but with upstreams in a sort of non-cooperative mood it just adds that much more responsibility to DDs than before.

The non-cooperation can also be seen in something like a PR, for instance the one which was done by andrewshadura, and that is somewhat sad 😦

TEDTED en Español: TED’s first-ever Spanish-language speaker event in NYC

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

Thursday marked the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event featured eight speakers, a musical performance, five short films and fifteen one-minute talks given by members of the audience.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina, TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and — as of earlier this month — an original podcast created in partnership with Univision Communications.

Should we automate democracy? “Is it just me, or are there other people here that are a little bit disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor wants to make sure we have elected governments that truly represent our values and wishes. His solution: What if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. So once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business failed in Ancient Greece, you’d have to stand in the town square with a basket over your head. Thankfully, we’ve come a long way — or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to convert the guilt and shame of a business venture gone bad into a catalyst for growth. Thus was born “Fuckup Nights” (FUN), a global movement and event series for sharing stories of professional failure, and The Failure Institute, a global research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than just music to the TED en Español stage. Inviting the audience to dance with them, Venezuelan Maria Fernanda Gonzalez, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms that range from South American to Caribbean-infused styles. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a place of music worth spreading.

Gastón Acurio shares stories of the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power that food has to change peoples’ lives. As ceviche started appearing in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and become proud of its own culture. But food hasn’t always been used to bring good to the world. With the industrial revolution and the rise of consumerism, “more people in the world are dying from obesity than hunger,” he notes, and many peoples’ lifestyles aren’t sustainable. 
By interacting with and caring about the food we eat, Gastón says, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have all the answers on how to make this a systematic movement that politicians can get behind, but world-renowned cooks are already taking these ideas into their kitchens. He tells the stories of a restaurant in Peru that supports native people by sourcing ingredients from them, a famous chef in NYC who’s fighting against the use of monocultures and an emblematic restaurant in France that has barred meat from the menu. “Cooks worldwide are convinced that we cannot wait for others to make changes and that we must jump into action,” he says. But professional cooks can’t do it all. If we want real change to happen, Gastón urges, we need home cooking to be at the center of everything.

The interconnectedness of music and life. Chilean musical director Paolo Bortolameolli wraps his views on music within his memory of crying the very first time he listened to live classical music. Sharing the emotions music evoked in him, Bortolameolli presents music as a metaphor for life — full of the expected and the unexpected. He thinks that we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and he simultaneously suggests that every time we listen to a musical piece, we enliven the music, imbuing it with the potential to be not just recognized but rediscovered.

We reap what we sow — let’s sow something different. Up until the mid-’80s, the average incomes in major Latin American countries were on par with those in Korea. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enriquez, lies in a national prioritization of brainpower — and in identifying, educating and celebrating the best minds. What if in Latin America we started selecting for academic excellence the way we would for an Olympic soccer team? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds thirst for nourishment, competition and achievement — and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent a lifetime juggling various identities, Hwang can attest that having a blended background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people from around the globe. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their unique backgrounds and to create a world where identities are used to bring people together, not alienate them.

Marine ecologist Enric Sala wants to protect the last wild places in the ocean. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans. If you jumped in the ocean at any random spot, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone — a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. He proposes a radical solution to help protect the oceans by focusing on our high seas, advocating for the creation of a reserve that would include two-thirds of the world’s ocean. By safeguarding our high seas, Sala believes we will restore the ecological, economic and social benefits of the ocean — and ensure that when our grandchildren jump into any random spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up … In an improvised rap performance with plenty of well-timed dance moves, psychologist and dance therapist César Silveyra closes the session with 15 of what he calls “nano-talks.” In a spectacular showdown of his skills, Silveyra ties together ideas from previous speakers at the event, including Enric Sala’s warnings about overfished oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout for speaker Rebeca Hwang’s grandmother … all the while “feeling like Beyoncé.”

Geek FeminismInformal Geek Feminism get-togethers, May and June

Some Geek Feminism folks will be at the following conferences and conventions in the United States over the next several weeks, in case contributors and readers would like to have some informal get-togethers to reminisce and chat about inheritors of the GF legacy:

If you’re interested, feel free to comment below, and to take on the step of initiating open space/programming/session organizing!

CryptogramVirginia Beach Police Want Encrypted Radios

This article says that the Virginia Beach police are looking to buy encrypted radios.

Virginia Beach police believe encryption will prevent criminals from listening to police communications. They said officer safety would increase and citizens would be better protected.

Someone should ask them if they want those radios to have a backdoor.

Krebs on SecurityThink You’ve Got Your Credit Freezes Covered? Think Again.

I spent a few days last week speaking at and attending a conference on responding to identity theft. The forum was held in Florida, one of the major epicenters for identity fraud complaints in the United States. One gripe I heard from several presenters was that identity thieves increasingly are finding ways to open new mobile phone accounts in the names of people who have already frozen their credit files with the big-three credit bureaus. Here’s a look at what may be going on, and how you can protect yourself.

Carrie Kerskie is director of the Identity Fraud Institute at Hodges University in Naples. A big part of her job is helping local residents respond to identity theft and fraud complaints. Kerskie said she’s had multiple victims in her area recently complain of having cell phone accounts opened in their names even though they had already frozen their credit files at the big three credit bureaus: Equifax, Experian and Trans Union (as well as distant fourth bureau Innovis).

The freeze process is designed so that a creditor should not be able to see your credit file unless you unfreeze the account. A credit freeze blocks potential creditors from being able to view or “pull” your credit file, making it far more difficult for identity thieves to apply for new lines of credit in your name.

But Kerskie’s investigation revealed that the mobile phone merchants weren’t asking any of the four credit bureaus mentioned above. Rather, the mobile providers were making credit queries with the National Consumer Telecommunications and Utilities Exchange (NCTUE).


“We’re finding that a lot of phone carriers — even some of the larger ones — are relying on NCTUE for credit checks,” Kerskie said. “It’s mainly phone carriers, but utilities, power, water, cable, any of those, they’re all starting to use this more.”

The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE.

Who are the NCTUE’s members? If you call the 800-number that NCTUE makes available to get a free copy of your NCTUE credit report, the option for “more information” about the organization says there are four “exchanges” that feed into the NCTUE’s system: the NCTUE itself; something called “Centralized Credit Check Systems“; the New York Data Exchange; and the California Utility Exchange.

According to a partner solutions page at Verizon, the New York Data Exchange is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts.

The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax). Verizon is one of many telecom providers that use the NYDE (and recall that AT&T was the founder of NCTUE).

The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC).

Google has virtually no useful information available about an entity called Centralized Credit Check Systems. It’s possible it no longer exists. If anyone finds differently, please leave a note in the comments section.

When I did some more digging on the NCTUE, I discovered…wait for it…Equifax also is the sole contractor that manages the NCTUE database. The entity’s site is also hosted out of Equifax’s servers. Equifax’s current contract to provide this service expires in 2020, according to a press release posted in 2015 by Equifax.


Fortunately, the NCTUE makes it fairly easy to obtain any records they may have on Americans.  Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address.

Assuming the automated system can verify you with that information, the system then orders an NCTUE credit report to be sent to the address on file. You can also request to be sent a free “risk score” assigned by the NCTUE for each credit file it maintains.

The NCTUE also offers an online process for freezing one’s report. Perhaps unsurprisingly, however, the process for ordering a freeze through the NCTUE appears to be completely borked at the moment, thanks no doubt to Equifax’s well documented abysmal security practices.

Alternatively, it could all be part of a willful or negligent strategy to continue discouraging Americans from freezing their credit files (experts say the bureaus make about $1 for each time they sell your file to a potential creditor).

On April 29, I had an occasion to visit Equifax’s credit freeze application page, and found that the site was being served with an expired SSL certificate from Symantec (i.e., the site would not let me browse using https://). This happened because I went to the site using Google Chrome, and Google announced a decision in September 2017 to no longer trust SSL certs issued by Symantec prior to June 1, 2016.

Google said it would do this starting with Google Chrome version 66. It did not keep this plan a secret. On April 18, Google pushed out Chrome 66.  Despite all of the advance warnings, the security people at Equifax apparently missed the memo and in so doing probably scared most people away from its freeze page for several weeks (Equifax fixed the problem on its site sometime after I tweeted about the expired certificate on April 29).

That’s because when one uses Chrome to visit a site whose encryption certificate is validated by one of these unsupported Symantec certs, Chrome puts up a dire security warning that would almost certainly discourage most casual users from continuing.

The insecurity around Equifax’s own freeze site likely discouraged people from requesting a freeze on their credit files.

On May 7, when I visited the NCTUE’s page for freezing my credit file with them I was presented with the very same connection SSL security alert from Chrome, warning of an invalid Symantec certificate and that any data I shared with the NCTUE’s freeze page would not be encrypted in transit.

The security alert generated by Chrome when visiting the freeze page for the NCTUE, whose database (and apparently web site) also is run by Equifax.

When I clicked through past the warnings and proceeded to the insecure NCTUE freeze form (which is worded and stylized almost exactly like Equifax’s credit freeze page), I filled out the required information to freeze my NCTUE file. See if you can guess what happened next.

Yep, I was unceremoniously declined the opportunity to do that. “We are currently unable to service your request,” read the resulting Web page, without suggesting alternative means of obtaining its report. “Please try again later.”

The message I received after trying to freeze my file with the NCTUE.

This scenario will no doubt be familiar to many readers who tried (and failed in a similar fashion) to file freezes on their credit files with Equifax after the company divulged that hackers had relieved it of Social Security numbers, addresses, dates of birth and other sensitive data on nearly 150 million Americans last September. I attempted to file a freeze via the NCTUE’s site with no fewer than three different browsers, and each time the form reset itself upon submission or took me to a failure page.

So let’s review. Many people who have succeeded in freezing their credit files with Equifax have nonetheless had their identities stolen and new accounts opened in their names thanks to a lesser-known credit bureau that seems to rely entirely on credit checking entities operated by Equifax.

“This just reinforces the fact that we are no longer in control of our information,” said Kerskie, who is also a founding member of Griffon Force, a Florida-based identity theft restoration firm.

I find it difficult to disagree with Kerskie’s statement. What chaps me about this discovery is that countless Americans are in many cases plunking down $3-$10 per bureau to freeze their credit files, and yet a huge player in this market is able to continue to profit off of identity theft on those same Americans.


I asked Equifax why the very same credit bureau operating the NCTUE’s data exchange (and those of at least two other contributing members) couldn’t detect when consumers had placed credit freezes with Equifax. Put simply, Equifax’s wall of legal verbiage below says mainly that NCTUE is a separate entity from Equifax, and that NCTUE doesn’t include Equifax credit information.

Here is Equifax’s full statement on the matter:

- The National Consumer Telecom and Utilities Exchange, Inc. (NCTUE) is a nationwide, member-owned and operated, FCRA-compliant consumer reporting agency that houses both positive and negative consumer payment data reported by its members, such as new connect requests, payment history, and historical account status and/or fraudulent accounts. NCTUE members are providers of telecommunications and pay/satellite television services to consumers, as well as utilities providing gas, electrical and water services to consumers.

- This information is available to NCTUE members and, on a limited basis, to certain other customers of NCTUE’s contracted exchange operator, Equifax Information Services, LLC (Equifax) – typically financial institutions and insurance providers. NCTUE does not include Equifax credit information, and Equifax is not a member of NCTUE, nor does Equifax own any aspect of NCTUE. NCTUE does not provide telecommunications, pay/satellite television or utility services to consumers, and consumers do not apply for those services with NCTUE.

- As a consumer reporting agency, NCTUE places and lifts security freezes on consumer files in accordance with the state law applicable to the consumer. NCTUE also maintains a voluntary security freeze program for consumers who live in states which currently do not have a security freeze law.

- NCTUE is a separate consumer reporting agency from Equifax and therefore a consumer would need to independently place and lift a freeze with NCTUE.

- While state laws vary in the manner in which consumers can place or lift a security freeze (temporarily or permanently), if a consumer has a security freeze on his or her NCTUE file and has not temporarily lifted the freeze, a creditor or other service provider, such as a mobile phone provider, generally cannot access that consumer’s NCTUE report in connection with a new account opening. However, the creditor or provider may be able to access that consumer’s credit report from another consumer reporting agency in order to open a new account, or decide to open the account without accessing a credit report from any consumer reporting agency, such as NCTUE or Equifax.


I was able to successfully place a freeze on my NCTUE report by calling their 800-number — 1-866-349-5355. The message said the NCTUE might charge a fee for placing or lifting the freeze, in accordance with state freeze laws.

Depending on your state of residence, the cost of placing a freeze on your credit file at Equifax, Experian or Trans Union can run between $3 and $10 per credit bureau, and in many states the bureaus also can charge fees for temporarily “thawing” and removing a freeze (according to a list published by Consumers Union, residents of four states — Indiana, Maine, North Carolina, South Carolina — do not need to pay to place, thaw or lift a freeze).

While my home state of Virginia allows the bureaus to charge $10 to place a freeze, for whatever reason the NCTUE did not assess that fee when I placed my freeze request with them. When and if your freeze request does get approved using the NCTUE’s automated phone system, make sure you have pen and paper or a keyboard handy to jot down the freeze PIN, which you will need in the event you ever wish to lift the freeze. When the system read my freeze PIN, it was read so quickly that I had to hit “*” on the dial pad several times to repeat the message.

It’s frankly absurd that consumers should ever have to pay to freeze their credit files at all, and yet a recent study indicates that almost 20 percent of Americans chose to do so at one or more of the three major credit bureaus since Equifax announced its breach last fall. The total estimated cost to consumers in freeze fees? $1.4 billion.

A bill in the U.S. Senate that looks likely to pass this year would require credit-reporting firms to let consumers place a freeze without paying. The free freeze component of the bill is just a tiny provision in a much larger banking reform bill — S. 2155 — that consumer groups say will roll back some of the consumer and market protections put in place after the Great Recession of the last decade.

“It’s part of a big banking bill that has provisions we hate,” said Chi Chi Wu, a staff attorney with the National Consumer Law Center. “It has some provisions not having to do with credit reporting, such as rolling back homeowners disclosure act provisions, changing protections in [current law] having to do with systemic risk.”

Sen. Jack Reed (D-RI) has offered a bill (S. 2362) that would invert the current credit reporting system by making all consumer credit files frozen by default, forcing consumers to unfreeze their files whenever they wish to obtain new credit. Meanwhile, several other bills would impose slightly less dramatic changes to the consumer credit reporting industry.

Wu said that while S. 2155 appears steaming toward passage, she doubts any of the other freeze-related bills will go anywhere.

“None of these bills that do something really strong are moving very far,” she said.

I should note that NCTUE does offer freeze alternatives. Just like with the big four, NCTUE lets consumers place a somewhat less restrictive “fraud alert” on their file indicating that verbal permission should be obtained over the phone from a consumer before a new account can be opened in their name.

Here is a primer on freezing your credit file with the big three bureaus, as well as Innovis. This tutorial also includes advice on placing a security alert at ChexSystems, which is used by thousands of banks to verify customers that are requesting new checking and savings accounts. In addition, consumers can opt out of pre-approved credit offers by calling 1-888-5-OPT-OUT (1-888-567-8688).

Oh, and if you don’t want Equifax sharing your salary history over the life of your entire career, you might want to opt out of that program as well.

Equifax and its ilk may one day finally be exposed for the digital dinosaurs that they are. But until that day, if you care about your identity you now may have another freeze to worry about. And if you decide to take the step of freezing your file at the NCTUE, please sound off about your experience in the comments below.

Planet DebianAndreas Bombe: PDP-8/e Replicated — Clocks And Logic

This is, at long last, part 3 of the overview of my PDP-8/e replica project and offers some details of the low-level implementation.

I have mentioned that I build my PDP-8/e replica from the original schematics. The great thing about the PDP-8/e is that it is still built in discrete logic rather than around a microprocessor, meaning that schematics of the actual CPU logic are available instead of just programmer’s documentation. After all, with so many chips on multiple boards something is bound to break down sooner or later, and technicians need schematics to diagnose and fix that [1]. In addition, there’s a maintenance manual that very helpfully describes the workings of every little part of the CPU with excerpts of the full schematics, but it has some inaccuracies and occasionally outright errors in the excerpts, so the schematics are still indispensable.

Originally I wanted to design my own logic and use the schematics as just another reference. Since the front panel is a major part of the project and I want it to visually behave as closely as possible to the real thing, I would have to duplicate the cycles exactly and generally work very close to the original design. I decided that at that point I might as well just reimplement the schematics.

However, some things can not be reimplemented exactly in the FPGA, other things are a bad fit and can be improved significantly with slight changes.


Back in the day, the rule for digital circuits was multi-phase clocks of varying complexity, and the PDP-8/e is no exception in that regard. A cycle has four timing states of different lengths that each end on a timing pulse.

timing diagram from the "pdp8/e & pdp8/m small computer handbook 1972"

As can be seen, the timing states are active when they are at low voltage while the timing pulses are active high. There are plenty of quirks like this which I describe below in the Logic section.

In the PDP-8 the timing generator was implemented as a circular chain of shift registers with parallel outputs. At power on, these registers are reset to all 1 except for 0 in the first two bits. The shift operation is driven by a 20 MHz clock [2] and the two zeros then circulate while logic combinations of the parallel outputs generate the timing signals (and also core memory timing, not shown in the diagram above).
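The circulating-zeros scheme is easy to model. Here is a toy Python sketch of such a ring; note that the register length (10 bits) and the tap position decoded below are illustrative assumptions, not the actual PDP-8/e values:

```python
# Toy model of a circulating shift-register timing chain. The register
# length and the decoded tap are made up for illustration only.

def make_ring(n=10):
    # Power-on reset: all ones except zeros in the first two bits.
    return [0, 0] + [1] * (n - 2)

def shift(ring):
    # One tick of the 20 MHz clock: circular shift, last bit wraps to the front.
    return [ring[-1]] + ring[:-1]

ring = make_ring()
pulse_cycles = 0
for _ in range(10):           # one full revolution of the ring
    if ring[4] == 0:          # decode a "timing pulse" off a single tap
        pulse_cycles += 1
    ring = shift(ring)

# The adjacent zero pair passes any single tap on two consecutive clocks,
# which at 20 MHz models the 100 ns timing pulses; after a full revolution
# the ring is back in its reset state.
assert pulse_cycles == 2
assert ring == make_ring()
```

Decoding combinations of several taps in the same way yields the longer timing states of differing lengths.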

What happens with these signals is that the timing states together with control signals decoded from instructions select data outputs and paths, while the associated timing pulse combined with timing state and instruction signals triggers D type flip-flops to save these results and present them on their outputs until they are next triggered with different input data.

Relevant for D type flip-flops is the rising edge of their clock input. The length of the pulse does not matter as long as it is not shorter than a required minimum. For example the accumulator register needs to be loaded at TP3 during major state E for a few different instructions. Thus the AC LOAD signal is generated as TP3 and E and (TAD instruction or AND instruction or …) and that signal is used as the clock input for all twelve flip-flops that make up the accumulator register.

However, having flip-flops clocked off timing pulses that are combined with different amounts of logic creates differences between sample times, which in turn makes it hard to push that kind of design to high cycle frequencies. Basically all modern digital circuits are synchronous. There, all flip-flops are clocked directly off the same global clock and get triggered at the same time. Since of course not all flip-flops should get new values at every cycle, they have an additional enable input so that the rising clock edge will only register new data when enable is also true [3].
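The enable scheme can be sketched in a few lines of Python (an illustrative model, not the FPGA primitives themselves):

```python
# Model of a synchronous register with a clock enable: on every rising edge
# of the common clock the flip-flop either keeps its old value or, when
# enable is true, takes the new data. (In real FPGA fabric this is typically
# realized by feeding the output back to the input while enable is false.)

def dff_en(q, d, enable):
    return d if enable else q

# Drive a register for six clock cycles; it only changes where enable is set.
data   = [1, 2, 3, 4, 5, 6]
enable = [False, True, False, False, True, False]
q = 0
trace = []
for d, en in zip(data, enable):
    q = dff_en(q, d, en)
    trace.append(q)

assert trace == [0, 2, 2, 2, 5, 5]
```

This is the behaviour my design relies on: every flip-flop sees the same 20 MHz clock, and the decoded timing pulses merely gate when new values are taken.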

Naturally, FPGAs are tailored to this design paradigm. They have (a limited number of) dedicated clock distribution networks set apart from the regular signal routing resources, to provide low skew clock signals across the device. Groups of logic elements get only a very limited set of clock inputs for all their flip-flops. While it is certainly possible to run the same scheme as the original in an FPGA, it would be an inefficient use of resources and very likely make automated timing analysis difficult by requiring lots of manual specification of clock relationships between registers.

So while I do use 20 MHz as the base clock in my timing generator and generate the same signals in my design, I also provide this 20 MHz as the common clock to all logic. Instead of registers triggering on the timing pulse rising edges, they get the timing pulses as enables. One difference resulting from that is that registers aren’t triggered by the rising edge of a pulse anymore but will trigger on every clock cycle where the pulse is active. The original pulses are two clock cycles long and extend into the following time state, so the correct data they picked up on the first cycle would be overwritten by wrong data on the second. I simply shortened the timing pulses to one clock cycle to adapt to this.

timing signals from simulation showing long and fast cycle

To reiterate, this is all in the interest of matching the original timing exactly. Analysis by the synthesis tool shows that I could easily push the base clock well over twice the current speed, and that’s already with it assuming that everything has to settle within one clock cycle as I’ve not specified multicycle paths. Meaning I could shorten the timing states to a single cycle [4] for an overall more than 10× acceleration on the lowest speed grade of the low-end FPGA I’m using.


Logic

To save on logic, many parts with open collector outputs were used in the PDP-8. Instead of driving high or low voltage to represent zeros and ones, an open collector only drives either low voltage or leaves the line alone. Many outputs can then be simply connected together as they can’t drive conflicting voltages. Return to high voltage in the absence of outputs driving low is accomplished by a resistor to positive supply somewhere on the signal line.

The effect is that the connection of outputs itself forms a logic combination in that the signal is high when none of the gates drive low and it’s low when any number of gates drive low. Combining that with active low signalling, where a low voltage represents active or 1, the result is a logical OR combination of all outputs (called wired OR since no logic gates are involved).
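To make the convention concrete, here is a small Python sketch of an open collector bus with active-low signalling (purely illustrative):

```python
# Open collector bus: each driver either pulls the line low or leaves it
# alone; a pull-up resistor returns the line high when nobody pulls.
# Electrically the line is low when ANY driver pulls low -- with active-low
# signalling that is a logical OR of the drivers' asserted states, with no
# logic gate involved (hence "wired OR").

def bus_level(pulls_low):
    # True = high voltage, False = low voltage on the shared line.
    return not any(pulls_low)

def asserted(pulls_low):
    # Active low: the signal counts as asserted (logical 1) when the line is low.
    return not bus_level(pulls_low)

assert asserted([False, False, False]) is False  # nobody drives: line stays high
assert asserted([False, True, False]) is True    # any one driver low: asserted
assert asserted([True, True, False]) is True     # multiple drivers: still just OR
```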

The designers of the PDP-8/e made extensive use of that. The majority of signals are active low, marked with L after their name in the schematics. Some signals don’t have multiple sources and can be active high where it’s more convenient, those are marked with H. And then there are many signals that carry no indication at all and some that miss the indication in the maintenance manual just to make a reimplementer’s life more interesting.

excerpt from the PDP-8/e CPU schematic, generation of the accumulator load (AC LOAD L) signal can be seen on the right

As an example in this schematic, let’s look at AC LOAD L which triggers loading the accumulator register from the major registers bus. It’s a wired OR of two NAND outputs, both in the chip labeled E15, and with pull-up resistor R12 to +5 V. One NAND combines BUS STROBE and C2, the other TP3 and an OR in chip E7 of a bunch of instruction related signals. For comparison, here’s how I implemented it in VHDL:

ac_load <= (bus_strobe and not c2) or
           (TP3 and E and ir_TAD) or
           (TP3 and E and ir_AND) or
           (TP3 and E and ir_DCA) or
           (TP3 and F and ir_OPR);

FPGAs don’t have internal open collector logic [5] and any output of a logic gate must be the only one driving a particular line. As a result, all the wired OR must be implemented with explicit ORs. Without the need to have active low logic, I consistently use active high everywhere, meaning that the logic in my implementation is mostly inverted compared to the real thing. The deviation of ANDing TP3 with every signal instead of just once with the result of the OR is again due to consistency: I use the “<timing> and <major state> and <instruction signal>” pattern a lot.
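As a quick sanity check that this deviation is purely cosmetic, the two formulations can be compared exhaustively in Python (with stand-in booleans rather than the real signals):

```python
from itertools import product

# Verify that ANDing the timing pulse and major state into every term is
# equivalent to ANDing them once with the OR of the instruction signals --
# plain distributivity of AND over OR, checked over all 16 combinations.
for tp3, state, ir_a, ir_b in product([False, True], repeat=4):
    factored    = tp3 and state and (ir_a or ir_b)
    distributed = (tp3 and state and ir_a) or (tp3 and state and ir_b)
    assert factored == distributed
```

So the repeated-AND pattern changes only how the expression reads, not what it computes; the synthesis tool will optimize the duplication away anyway.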

One difficulty with wired OR is that it is never quite obvious from a given section of the schematics what all inputs to a given gate are. You may have a signal X being generated on one page of the schematic and a line with signal X as an input to a gate on another page, but that doesn’t mean there isn’t something on yet another page also driving it, or that it isn’t also present on a peripheral port [6].

Some of the original logic is needed only for electrical reasons, such as buffers which are logic gates that do not combine multiple inputs but simply repeat their inputs (maybe inverted). Logic gate outputs can only drive so many inputs, so if one signal is widely used it needs buffers. Inverters are commonly used for that in the PDP-8. BUS STROBE above is one example, it is the inverted BUS STROBE L found on the backplane. Another is BTP3 (B for buffered) which is TP3 twice inverted.

Finally, some additional complexity is owed to the fact that the 8/e is made of discrete logic chips and that these have multiple gates in a package, for example the 7401 with four 2-input NAND gates with open collector outputs per chip. In designing the 8/e, logic was sometimes not implemented straightforwardly but as more complicated equivalents if it means unused gates in existing chips could be used rather than adding more chips.


I have started out saying that I build an exact PDP-8/e replica from the schematics. As I have detailed, that doesn’t involve just taking every gate and its connections from the schematic and writing it down in VHDL. I am changing the things that can not be directly implemented in an FPGA (like wired OR) and leaving out things that are not needed in this environment (such as buffers). Nevertheless, the underlying logic stays the same and as a result my implementation has the exact same timing and behaviour even in corner cases.

Ultimately all this only applies to the CPU and closely associated units (arithmetic and address extension). Moving out to peripheral hardware, the interface to the CPU may be the only part that could be implemented from original schematics. After all, where the magnetic tape drive interface in the original controlled the actual tape hardware, the equivalent in the replica project would access emulated tape storage.

This finally concludes the overview of my project. Its development hasn’t advanced as much as I expected around this time last year, since I ended up putting the project aside for a long while. After returning to it, running the MAINDEC test programs revealed a bunch of stuff I forgot to implement or implemented wrong, which I had to fix. The optional Extended Arithmetic Element isn’t implemented yet; the Memory Extension & Time Share is now complete, pending some test failures I still need to debug. It is now reaching a state where I consider the design ready to be published.

  [1] There are also test programs that exercise and check every single logic gate to help pinpoint a problem. Naturally they are also extremely helpful with verifying that a replica is in fact working exactly like the real thing.
  [2] Thus 50 ns, one cycle of the 20 MHz clock, is the granularity at which these signals are created. The timing pulses, for example, are 100 ns long.
  [3] This is simply implemented by presenting them their own output via a feedback path when enable is false, so that they reload their existing value on the clock edge.
  [4] In fact I would be limited by the speed of the external SRAM I use and its 6 bit wide data connection, requiring two cycles for a 12 bit access.
  [5] The IO pins that interface to the outside world generally have the capability to switch to a disconnected state at run time, allowing open collector and similar signaling.
  [6] Besides the memory and expansion bus, the 8/e CPU also has a special interface to attach the Extended Arithmetic Element.

Cory DoctorowTalking privacy and GDPR with Thomson Reuters

Thomson Reuters interviewed me for their new series on data privacy and the EU General Data Protection Regulation; here’s the audio!

What if you just said when you breach, the damages that you owe to the people whose data you breached cannot be limited to the immediate cognizable consequences of that one breach but instead has to take recognition of the fact that breaches are cumulative? That the data that you release might be merged with some other set that was previously released either deliberately by someone who thought that they’d anonymized it because key identifiers had been removed that you’ve now added back in or accidentally through another breach? The merger of those two might create a harm.

Now you can re-identify a huge number of those prescriptions. That might create all kinds of harms that are not immediately apparent just by releasing a database of people’s rides, but when merged with maybe that NIH or NHS database suddenly becomes incredibly toxic and compromising.

If for example we said, “Okay, in recognition of this fact that once that data is released it never goes away, and each time it’s released it gets merged with other databases to create fresh harms that are unquantifiable in this moment and should be assumed to exceed any kind of immediate thing that we can put our finger on, that you have to pay fairly large statutory damages if you’re found to have mishandled data.” Well, now I think the insurance companies are going to do a lot of our dirty work for us.

We don’t have to come up with rules. We just have to wait for the insurance companies to show up at these places that they’re writing policies for and say, “Tell me again, why we should be writing you a policy when you’ve warehoused all of this incredibly toxic material that we’re all pretty sure you’re going to breach someday, and whose liability is effectively unbounded?” They’re going to make the companies discipline themselves.

Worse Than FailureExponential Backup

The first day of a new job is always an adjustment. There's a fine line between explaining that you're unused to a procedure and constantly saying "At my old company...". After all, nobody wants to be that guy, right? So you proceed with caution, trying to learn before giving advice.

But some things warrant the extra mile. When Samantha started her tenure at a mid-sized firm, it all started out fine. She got a computer right away, which is a nice plus. She met the team, got settled into a desk, and was given a list of passwords and important URLs to get situated. The usual stuff.

After changing her Windows password, she decided to start by browsing the source code repository. This company used Subversion, so she went and downloaded the whole repo so she could see the structure. It took a while, so she got up and got some coffee; when she got back, it had finished, and she was able to see the total size: 300 GB. That's... weird. Really weird. Weirder still, when she glanced over the commit history, it only dated back a year or so.

What could be taking so much space? Were they storing some huge binaries tucked away someplace that the code depended on? She didn't want to make waves, but this just seemed so... inefficiently huge. Now curious, she opened the repo, browsing the folder structure.

Subversion bases everything on folder structure: in Git's terms there is really only one branch, but you can check out any subfolder without taking the whole repository. Inside of each project directory was a layout common to SVN repos: a folder called "branches", a folder called "tags", and a folder called "trunk" (Subversion's primary branch). In the branches directory there were folders called "fix" and "feature", and in each of those were copies of the source code stored under the names of the branches. Under normal work, she'd start her checkout from one of those branch folders, thus only pulling down the code for her branch, and merge into the "trunk" copy when she was done.

But there was one folder she didn't anticipate: "backups". Backups? But... this is version control. We can revert to an earlier version any time we want. What are the backups for? I must be misunderstanding. She opened one and was promptly horrified to find a series of zip files, dated monthly, all at revision 1.

Now morbidly curious, Samantha opened one of these zips. The top level folder inside the zip was the name of the project; under that, she found branches, tags, trunk. No way. They can't have-- She clicked in, and there it was, plain as day: another backups folder. And inside? Every backup older than the one she'd clicked. Each backup included, presumably, every backup prior to that, meaning that in the backup for October, the backup from January was included nine times, the backup from February eight times, and so on and so forth. Within two years, a floppy disk worth of code would fill a terabyte drive.
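The growth here is geometric: if each monthly zip nests the complete chain of earlier backups, every backup is the code plus everything before it, so sizes double each month. A rough model (the doubling assumption and the floppy-sized codebase are illustrative, not measured):

```python
# Model of the nested-backup scheme: backup n contains the code plus
# backups 1..n-1, so its size is code_size * 2**(n-1) and the total
# repository size after n months is code_size * (2**n - 1).

def backup_size(code_size, n):
    """Size of the n-th monthly backup (n >= 1)."""
    return code_size * 2 ** (n - 1)

def repo_size(code_size, months):
    """Total size of all backups after `months` months."""
    return sum(backup_size(code_size, n) for n in range(1, months + 1))

floppy = 1.44e6  # bytes: one floppy disk worth of code
print(repo_size(floppy, 24) / 1e12)  # roughly 24 terabytes after two years
```

A terabyte drive fills somewhere around month 20, which matches the floppy-to-terabyte claim above.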

Samantha asked her boss, "What will you do when the repo gets too big to be downloaded onto your hard drive?"

His response was quick and entirely serious: "Well, we back it up, then we make a new one."

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianAlexandre Viau: Introducing Autodeb

Autodeb is a new service that will attempt to automatically update Debian packages to newer upstream versions or to backport them. Resulting packages will be distributed in apt repositories.

I am happy to announce that I will be developing this new service as part of Google Summer of Code 2018 with Lucas Nussbaum as a mentor.

The program is currently nearing the end of the “Community Bonding” period, where mentors and their mentees discuss the details of the projects and try to set objectives.

Main goal

Automatically create and distribute package updates/backports

This is the main objective of Autodeb. These unofficial packages can be an alternative for Debian users who want newer versions of software sooner than they are available in Debian.

The results of this experiment will also be interesting when looking at backports. If successful, it could be that a large number of packages are backportable automatically, bringing even more options for stable (or even oldstable) users.

We also hope that the information that is produced by the system can be used by Debian Developers. For example, Debian Developers may be more tempted to support backports of their packages if they know that they already build and that the autopkgtests pass.

Other goals

Autodeb will be composed of infrastructure capable of building and testing a large number of packages. We intend to build it with two secondary goals in mind:

Test packages built by developers before they are uploaded to the archive

We intend to add a self-service interface so that our testing infrastructure can be used for purposes other than automatically updating packages. This can empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. For more details on this, see my previous testeduploads proposal.

Archive rebuilds / modifying the build and test environment

We would like to allow for building packages with a custom environment. For example, with a new version of GCC or with a different set of packages. Ideally, this would also be a self-service interface where Developers can set up their environment and then upload packages for testing. Instead of individual package uploads, the input of packages to build could be a whole apt repository from which all source packages would be downloaded and rebuilt, with filters to select the packages to include or exclude.

What’s next

The next phase of Google Summer of Code is the coding period; it begins on May 14 and ends on August 6. However, there are a number of areas where work has already begun:

  1. Main repository: code for the master and the worker components of the service, written in golang.
  2. Debian packaging: Debian packaging for autodeb. Contains scripts that will publish packages to
  3. Ansible scripts: this repository contains ansible scripts to provision the infrastructure at

Status update at DebConf

I have submitted a proposal for a talk on Autodeb at DebConf. By that time, the project should have evolved from idea to prototype and it will be interesting to discuss the things that we have learned:

  • How many packages can we successfully build?
  • How many of these packages fail tests?

If all goes well, it will also be an opportunity to officially present our self-service interface to the public so that the community can start using it to test packages.

In the meantime, feel free to get in touch with us by email, on OFTC at #autodeb, or via issues on salsa.


Krebs on SecurityMicrosoft Patch Tuesday, May 2018 Edition

Microsoft today released a bundle of security updates to fix at least 67 holes in its various Windows operating systems and related software, including one dangerous flaw that Microsoft warns is actively being exploited. Meanwhile, as it usually does on Microsoft’s Patch Tuesday — the second Tuesday of each month — Adobe has a new Flash Player update that addresses a single but critical security weakness.

First, the Flash Tuesday update, which brings Flash Player to v. Some (present company included) would argue that Flash Player is itself “a single but critical security weakness.” Nevertheless, Google Chrome and Internet Explorer/Edge ship with their own versions of Flash, which get updated automatically when new versions of these browsers are made available.

You can check if your browser has Flash installed/enabled and what version it’s at by pointing your browser at this link. Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability.

Google Chrome blocks Flash from running on all but a handful of popular sites, and then only after user approval. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites. If you spot an upward pointing arrow to the right of the address bar in Chrome, that means there’s an update to the browser available, and it’s time to restart Chrome.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis.

Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits. Microsoft users will need to install this month’s batch of patches to get the latest Flash version for IE/Edge, where most of the critical updates in this month’s patch batch reside.

According to security vendor Qualys, one Microsoft patch in particular deserves priority over others in organizations that are testing updates before deploying them: CVE-2018-8174 involves a problem with the way the Windows scripting engine handles certain objects, and Microsoft says this bug is already being exploited in active attacks.

Some other useful sources of information on today’s updates include the Zero Day Initiative and Bleeping Computer. And of course there is always the Microsoft Security Update Guide.

As always, please feel free to leave a comment below if you experience any issues applying any of these updates.

Rondam RamblingsA quantum mechanics puzzle, part deux

This post is (part of) the answer to a puzzle I posed here.  Read that first if you haven't already. To make this discussion concrete, let's call the time it takes for light to traverse the short (or Small) arm of the interferometer Ts, the long (or Big) arm Tb (because Tl looks too much like T1). So there are five interesting cases here.  Let's start with the easy one: we illuminate the

Planet DebianSven Hoexter: Debian/stretch on HPE DL360 gen10

We received our first HPE gen10 systems, a bunch of DL360, and experienced a few caveats while setting up Debian/stretch.

PXE installation

While our DL120 gen9 announced themselves as "X86-64_EFI" (client architecture 00009), the DL360 gen10 use "BC_EFI" (client architecture 00007) in the BOOTP/DHCP protocol option 93. Since we use dnsmasq as DHCP and tftp server we rely on tags like this:

# new style UEFI PXE
# client arch 00009
pxe-service=tag:s1,X86-64_EFI, "Boot UEFI X86-64_EFI", bootnetx64.efi
# client arch 00007
pxe-service=tag:s2,BC_EFI, "Boot UEFI BC_EFI", bootnetx64.efi


This is easy to spot with wireshark once you understand what you're looking for.
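The s1 and s2 tags in the snippet above have to be assigned somewhere. A likely companion rule (not shown in the original config, so treat it as an assumption) uses dnsmasq's dhcp-match on the client-architecture option:

```
# Assumed tag assignment from DHCP option 93 (client architecture).
# Architecture values per RFC 4578: 9 = X86-64_EFI, 7 = BC_EFI.
dhcp-match=set:s1,option:client-arch,9
dhcp-match=set:s2,option:client-arch,7
```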


For some reason, and I heard some rumours that this is a known bug, I had to disable USB support and the SD-card reader in the interface formerly known as BIOS. Otherwise the installer detects the first volume of the P408i raid controller as "/dev/sdb" instead of "/dev/sda".

Network interfaces depend highly on your actual setup. Booting from the additional 10G interfaces worked out of the box, they're detected with reliable names as eno5 and eno6.


So far we have relied on the hp-health and ssacli (formerly hpssacli) packages from the HPE MCP. Currently those tools do not seem to support Gen10 systems. I'm trying to find out what the alternative is for monitoring the health state of the system components. At least for hp-health it's documented that only systems up to Gen9 are supported.

That's what I receive from ssacli:

=> ctrl all show status

HPE P408i-a SR Gen10 in Slot 0 (Embedded)

APPLICATION UPGRADE REQUIRED: This controller has been configured with a more
                          recent version of software.
                          To prevent data loss, configuration changes to
                          this controller are not allowed.
                          Please upgrade to the latest version to be able
                          to continue to configure this controller.

That's what I found in our logs from the failed start of hpasmlited:

hpasmlited[31952]: check_ilo2: BMC Returned Error:  ccode  0x0,  Req. Len:  15, Resp. Len:  21

auto configuration

If you're looking into automatic configuration of those systems you'll have to look for Redfish. API documentation for ilo5 can be found at Since it's not clear how many of those systems we will actually setup, I'm so far a bit reluctant to automate the setup further.
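For the health-monitoring question above, Redfish exposes an overall Status object on the standard /redfish/v1/Systems/1 resource. A minimal sketch of reading it follows; the JSON is a trimmed, invented example of such a response, and in practice you would fetch it over authenticated HTTPS against the iLO:

```python
import json

# Trimmed, invented example of what GET /redfish/v1/Systems/1 might return.
sample = json.loads('''
{
  "Model": "ProLiant DL360 Gen10",
  "Status": {"Health": "OK", "State": "Enabled"}
}
''')

def system_health(resource):
    """Return the overall Redfish health string, or "Unknown" if absent."""
    return resource.get("Status", {}).get("Health", "Unknown")

print(system_health(sample))  # OK
```

Anything other than "OK" (typically "Warning" or "Critical") would be the trigger for an alert in a monitoring check.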

Planet DebianMarkus Koschany: My Free Software Activities in April 2018

Welcome to Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • I adopted childsplay, a suite of educational games for young children. I triaged all open bugs and thanks to a very responsive upstream developer the game is back in testing again now.
  • I did a QA upload for pax-britannica to fix #825673 and #718884 and updated the packaging.
  • In the same vein I did two NMUs for animals and acm and fixed RC bugs #875547 and #889530. Later I contacted the release team to get the fix for animals into Stretch too.
  • I packaged new upstream releases of extremetuxracer, adonthell, renpy and pygame-sdl2.
  • I sponsored and reviewed new versions of tanglet, connectagram and cutemaze for Innocent de Marchi.
  • I released version 2.3 of debian-games, a collection of metapackages to make it easier to find and install certain types of games.
  • I backported the latest release of freeciv to Stretch.
  • Finally I could resolve the RC bugs in morris and grhino and both games are part of Buster again.

Debian Java

Debian LTS

This was my twenty-sixth month as a paid contributor and I have been paid to work 16.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 16.04.2018 until 22.04.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in bouncycastle, jruby, typo3-src, imagemagick, pegl, ocaml, radare2, movabletype-opensource, cacti, ghostscript, glusterfs, jasperreports, xulrunner, phpmyadmin, gunicorn, psensor, nasm and lucene-solr.
  • DLA-1352-1. Issued a security update for jruby fixing 1 CVE.
  • DLA-1361-1. Issued a security update for psensor fixing 1 CVE.
  • DLA-1363-1. Issued a security update for ghostscript fixing 1 CVE.
  • DLA-1366-1. Issued a security update for wordpress fixing 2 CVE.
  • DSA-4190-1. Prepared the security update for jackson-databind in Jessie fixing 1 CVE.
  • DSA-4194-1. Prepared the security update for lucene-solr in Jessie fixing 1 CVE.
  • Prepared a security update for imagemagick in Jessie fixing 8 CVE. At the moment it is pending review by the security team and will be released soon.
  • Prepared and uploaded a point-update for faad2 in Jessie and Stretch that addresses 11 security vulnerabilities. (#897369)
  • Prepared a security update for php5 in Wheezy. This one will be released soon. (DLA-1373-1)


  • I filed wishlist bugs against (#897225 and #897227) and requested a feature to allow users to override certain metainformation like VCS-URLs. In the past years we changed VCS addresses multiple times which always requires a source upload. In my opinion this is a design flaw and highly inefficient and such a change in tracker would make it possible to drop the fields from our team maintained packages.

Thanks for reading and see you next time.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #158

Here’s what happened in the Reproducible Builds effort between Sunday April 29 and Saturday May 5 2018:


This week’s edition was written by Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Mark ShuttleworthCue the Cosmic Cuttlefish

With our castor now out for all to enjoy, and the Twitterverse delighted with the new minimal desktop and smooth snap integration, it’s time to turn our attention to the road ahead to 20.04 LTS, and I’m delighted to say that we’ll kick off that journey with the Cosmic Cuttlefish, soon to be known as Ubuntu 18.10.

Each of us has our own ideas of how the free stack will evolve in the next two years. And the great thing about Ubuntu is that it doesn’t reflect just one set of priorities, it’s an aggregation of all the things our community cares about. Nevertheless I thought I’d take the opportunity early in this LTS cycle to talk a little about the thing I’m starting to care more about than any one feature, and that’s security.

If I had one big thing that I could feel great about doing, systematically, for everyone who uses Ubuntu, it would be improving their confidence in the security of their systems and their data. It’s one of the very few truly unifying themes that crosses every use case.

It’s extraordinary how diverse the uses are to which the world puts Ubuntu these days, from the heart of the mainframe operation in a major financial firm, to the raspberry pi duck-taped to the back of a prototype something in the middle of nowhere, from desktops to clouds to connected things, we are the platform for ambitions great and small. We are stewards of a shared platform, and one of the ways we respond to that diversity is by opening up to let people push forward their ideas, making sure only that they are excellent to each other in the pushing.

But security is the one thing that every community wants – and it’s something that, on reflection, we can raise the bar even higher on.

So without further ado: thank you to everyone who helped bring about Bionic, and may you all enjoy working towards your own goals both in and out of Ubuntu in the next two years.

CryptogramThe US Is Unprepared for Election-Related Hacking in 2018

This survey and report is not surprising:

The survey of nearly forty Republican and Democratic campaign operatives, administered through November and December 2017, revealed that American political campaign staff -- primarily working at the state and congressional levels -- are not only unprepared for possible cyber attacks, but remain generally unconcerned about the threat. The survey sample was relatively small, but nevertheless the survey provides a first look at how campaign managers and staff are responding to the threat.

The overwhelming majority of those surveyed do not want to devote campaign resources to cybersecurity or to hire personnel to address cybersecurity issues. Even though campaign managers recognize there is a high probability that campaign and personal emails are at risk of being hacked, they are more concerned about fundraising and press coverage than they are about cybersecurity. Less than half of those surveyed said they had taken steps to make their data secure and most were unsure if they wanted to spend any money on this protection.

Security is never something we actually want. Security is something we need in order to avoid what we don't want. It's also more abstract, concerned with hypothetical future possibilities. Of course it's lower on the priorities list than fundraising and press coverage. They're more tangible, and they're more immediate.

This is all to the attackers' advantage.

Worse Than FailureYes == No

For decades, I worked in an industry where you were never allowed to say no to a user, no matter how ridiculous the request. You had to suck it up and figure out a way to deliver on insane requests, regardless of the technical debt they inflicted.

Canada Stop sign.svg

Users are a funny breed. They say things like I don't care if the input dialog you have works; the last place I worked had a different dialog to do the same thing, and I want that dialog here! With only one user saying stuff like that, it's semi-tolerable. When you have 700+ users and each of them wants a different dialog to do the same thing, and nobody in management will say no, you need to start creating table-driven dialogs (x-y coordinates, width, height, label phrasing, field layout within the dialog, different input formats, fonts, colors and so forth). Multiply that by the number of dialogs in your application and it becomes needlessly pointlessly impossibly difficult.
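A sketch of where that road leads; the user names, fields, and geometry here are all invented, but this is the shape the table-driven dialog data takes once every user owns a row:

```python
# Hypothetical per-user dialog table: geometry and labels become data,
# and each demanding user gets their own row for the same dialog.
DIALOGS = {
    # (user, dialog): layout
    ("alice", "part_entry"): {"x": 10, "y": 10, "w": 300, "h": 120,
                              "label": "Enter part number:"},
    ("bob", "part_entry"): {"x": 50, "y": 40, "w": 280, "h": 100,
                            "label": "Part no.:"},
}

DEFAULT = {"x": 0, "y": 0, "w": 320, "h": 140, "label": "Part number:"}

def dialog_layout(user, dialog):
    """Look up a user's custom layout, falling back to the shared default."""
    return DIALOGS.get((user, dialog), DEFAULT)

print(dialog_layout("bob", "part_entry")["label"])    # Part no.:
print(dialog_layout("carol", "part_entry")["label"])  # Part number:
```

Multiply those rows by 700 users and every dialog in the application, each with its own validation quirks, and the maintenance cost becomes obvious.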

But it never stops there. Often, one user will request that you move a field from another dialog onto their dialog - just for them. This creates all sorts of havoc with validation logic. Multiply it by hundreds of users and you're basically creating a different application for each of them - each with its own validation logic, all in the same application.

After just a single handful of users demanding changes like this, it can quickly become a nightmare. Worse, once it starts, the next user to whom you say no tells you that you did it for the other guy and so you have to do it for them too! After all, each user is the most important user, right?

It doesn't matter that saying no is the right thing to do. It doesn't matter that it will put a zero-value load on development and debugging time. It doesn't matter that sucking up development time to do it means there are fewer development hours for bug fixes or actual features.

When management refuses to say no, it can turn your code into a Pandora's-Box-o-WTF™

However, there is hope. There is a way to tell the users no without actually saying no. It's by getting them to say it for you and then withdrawing their urgent, can't-live-without-it, must-have-or-the-world-will-end request.

You may ask how?

The trick is to make them see the actual cost of implementing their teeny tiny little feature.

Yes, we can add that new button to provide all the functionality of Excel in an in-app
calculator, but it will take x months (years) to do it, AND it will push back all of the
other features in the queue. Shall I delay the next release and the other feature requests
so we can build this for you, or would you like to schedule it for a future release?

Naturally you'll have to answer questions like "But it's just a button; why would it take that much effort?"

This is a good thing because it forces them down the rabbit hole into your world where you are the expert. Now you get to explain to them the realities of software development, and the full cost of their little request.

Once they realize the true cost that they'd have to pay, the urgency of the request almost always subsides to nice to have and gets pushed forward so as to not delay the scheduled release.

And because you got them to say it for you, you didn't have to utter the word no.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet Linux AustraliaMichael Still: Adding oslo privsep to a new project, a worked example


You’ve decided that using sudo to run command lines as root is lame and that it is time to step up and do things properly. How do you do that? Well, here’s a simple guide to adding oslo privsep to your project!

In a previous post I showed you how to add a new method that ran with escalated permissions. However, that’s only helpful if you already have privsep added to your project. This post shows you how to do that thing to your favourite python project. In this case we’ll use OpenStack Cinder as a worked example.

Note that Cinder already uses privsep because of its use of os-brick, so the instructions below skip adding oslo.privsep to requirements.txt. If your project has never ever used privsep at all, you’ll need to add a line like this to requirements.txt:


For reference, this post is based on OpenStack review 566479, which I wrote as an example of how to add privsep to a new project. If you’re after a more complete worked example than this post, the review might be useful to you.

As a first step, let’s add the code we’d want to write to actually call something with escalated permissions. In the Cinder case I chose the cgroups throttling code for this example. So first off we’d need to create the privsep directory with the relevant helper code:

diff --git a/cinder/privsep/ b/cinder/privsep/
new file mode 100644
index 0000000..7f826a8
--- /dev/null
+++ b/cinder/privsep/
@@ -0,0 +1,32 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+"""Setup privsep decorator."""
+from oslo_privsep import capabilities
+from oslo_privsep import priv_context
+sys_admin_pctxt = priv_context.PrivContext(
+ 'cinder',
+ cfg_section='cinder_sys_admin',
+ pypath=__name__ + '.sys_admin_pctxt',
+ capabilities=[capabilities.CAP_CHOWN,
+ capabilities.CAP_DAC_OVERRIDE,
+ capabilities.CAP_DAC_READ_SEARCH,
+ capabilities.CAP_FOWNER,
+ capabilities.CAP_NET_ADMIN,
+ capabilities.CAP_SYS_ADMIN],
+)

This code defines the permissions that our context (called cinder_sys_admin in this case) has. These specific permissions in the example above should correlate with those that you’d get if you ran a command with sudo. There was a bit of back and forth about what permissions to use and how many contexts to have while we were implementing privsep in OpenStack Nova, but we’ll discuss those in a later post.

Next we need the code that actually does the privileged thing:

diff --git a/cinder/privsep/ b/cinder/privsep/
new file mode 100644
index 0000000..15d47e0
--- /dev/null
+++ b/cinder/privsep/
@@ -0,0 +1,35 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+Helpers for cgroup related routines.
+from oslo_concurrency import processutils
+import cinder.privsep
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_create(name):
+    processutils.execute('cgcreate', '-g', 'blkio:%s' % name)
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_limit(name, rw, dev, bps):
+    processutils.execute('cgset', '-r',
+                         'blkio.throttle.%s_bps_device=%s %d' % (rw, dev, bps),
+                         name)

Here we just provide two methods which manipulate cgroups. That allows us to make this change to the throttling implementation in Cinder:

diff --git a/cinder/volume/ b/cinder/volume/
index 39cbbeb..3c6ddaa 100644
--- a/cinder/volume/
+++ b/cinder/volume/
@@ -22,6 +22,7 @@ from oslo_concurrency import processutils
 from oslo_log import log as logging
 from cinder import exception
+import cinder.privsep.cgroup
 from cinder import utils
@@ -65,8 +66,7 @@ class BlkioCgroup(Throttle):
         self.dstdevs = {}
-            utils.execute('cgcreate', '-g', 'blkio:%s' % self.cgroup,
-                          run_as_root=True)
+            cinder.privsep.cgroup.cgroup_create(self.cgroup)
         except processutils.ProcessExecutionError:
             LOG.error('Failed to create blkio cgroup \'%(name)s\'.',
                       {'name': cgroup_name})
@@ -81,8 +81,7 @@ class BlkioCgroup(Throttle):
     def _limit_bps(self, rw, dev, bps):
-            utils.execute('cgset', '-r', 'blkio.throttle.%s_bps_device=%s %d'
-                          % (rw, dev, bps), self.cgroup, run_as_root=True)
+            cinder.privsep.cgroup.cgroup_limit(self.cgroup, rw, dev, bps)
         except processutils.ProcessExecutionError:
             LOG.warning('Failed to setup blkio cgroup to throttle the '
                         'device \'%(device)s\'.', {'device': dev})

These last two snippets should be familiar from the previous post about privsep in this series. Finally, for the actual implementation of privsep, we need to make sure that rootwrap has permission to start the privsep helper daemon. You’ll get one daemon per unique security context, but in this case we only have one of those so we’ll only need one rootwrap entry. Note that I also remove the previous rootwrap entries for cgcreate and cgset while I’m here.

diff --git a/etc/cinder/rootwrap.d/volume.filters b/etc/cinder/rootwrap.d/volume.filters
index abc1517..d2d1720 100644
--- a/etc/cinder/rootwrap.d/volume.filters
+++ b/etc/cinder/rootwrap.d/volume.filters
@@ -43,6 +43,10 @@ lvdisplay4: EnvFilter, env, root, LC_ALL=C, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WAR
 # This line ties the superuser privs with the config files, context name,
 # and (implicitly) the actual python code invoked.
 privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
+# Privsep calls within cinder itself
+privsep-rootwrap-sys_admin: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, cinder.privsep.sys_admin_pctxt, --privsep_sock_path, /tmp/.*
 # The following and any cinder/brick/* entries should all be obsoleted
 # by privsep, and may be removed once the os-brick version requirement
 # is updated appropriately.
@@ -93,8 +97,6 @@ ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7]
 ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3]
 # cinder/volume/ setup_blkio_cgroup()
-cgcreate: CommandFilter, cgcreate, root
-cgset: CommandFilter, cgset, root
 cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+
 # cinder/volume/

And because we’re not bad people we’d of course write a release note about the changes we’ve made…

diff --git a/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
new file mode 100644
index 0000000..e78fb00
--- /dev/null
+++ b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
@@ -0,0 +1,13 @@
+    Privsep transitions. Cinder is transitioning from using the older style
+    rootwrap privilege escalation path to the new style Oslo privsep path.
+    This should improve performance and security of Cinder in the long term.
+  - |
+    privsep daemons are now started by Cinder when required. These daemons can
+    be started via rootwrap if required. rootwrap configs therefore need to
+    be updated to include new privsep daemon invocations.
+  - |
+    The following commands are no longer required to be listed in your rootwrap
+    configuration: cgcreate; and cgset.

This code will now work. However, we’ve left out one critical piece of the puzzle — testing. If this code was uploaded like this, it would fail in the OpenStack gate, even though it probably passed on your desktop. This is because many of the gate jobs are setup in such a way that they can’t run rootwrapped commands, which in this case means that the rootwrap daemon won’t be able to start.

I found this quite confusing in Nova when I was implementing things and had missed a step. So I wrote a simple test fixture that warns me when I am being silly:

diff --git a/cinder/ b/cinder/
index c8c9e6c..a49cedb 100644
--- a/cinder/
+++ b/cinder/
@@ -302,6 +302,9 @@ class TestCase(testtools.TestCase):
         tpool._nthreads = 20
+        # NOTE(mikal): make sure we don't load a privsep helper accidentally
+        self.useFixture(cinder_fixtures.PrivsepNoHelperFixture())
     def _restore_obj_registry(self):
         objects_base.CinderObjectRegistry._registry._obj_classes = \
diff --git a/cinder/tests/ b/cinder/tests/
index 6e275a7..79e0b73 100644
--- a/cinder/tests/
+++ b/cinder/tests/
@@ -1,4 +1,6 @@
 # Copyright 2016 IBM Corp.
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -21,6 +23,7 @@ import os
 import warnings
 import fixtures
+from oslo_privsep import daemon as privsep_daemon
 _TRUE_VALUES = ('True', 'true', '1', 'yes')
@@ -131,3 +134,29 @@ class WarningsFixture(fixtures.Fixture):
                     ' This key is deprecated. Please update your policy '
                     'file to use the standard policy values.')
+class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
+    def __init__(self, context):
+        raise Exception('You have attempted to start a privsep helper. '
+                        'This is not allowed in the gate, and '
+                        'indicates a failure to have mocked your tests.')
+class PrivsepNoHelperFixture(fixtures.Fixture):
+    """A fixture to catch failures to mock privsep's rootwrap helper.
+    If you fail to mock away a privsep'd method in a unit test, then
+    you may well end up accidentally running the privsep rootwrap
+    helper. This will fail in the gate, but it fails in a way which
+    doesn't identify which test is missing a mock. Instead, we
+    raise an exception so that you at least know where you've missed
+    something.
+    """
+    def setUp(self):
+        super(PrivsepNoHelperFixture, self).setUp()
+        self.useFixture(fixtures.MonkeyPatch(
+            'oslo_privsep.daemon.RootwrapClientChannel',
+            UnHelperfulClientChannel))

Now if you fail to mock a privsep’ed call, then you’ll get something like this:

Failed 1 tests - output below:


Captured traceback:
    Traceback (most recent call last):
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/mock/", line 1305, in patched
        return func(*args, **keywargs)
      File "cinder/tests/unit/", line 66, in test_BlkioCgroup
        throttle = throttling.BlkioCgroup(1024, 'fake_group')
      File "cinder/volume/", line 69, in __init__
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/", line 206, in _wrap
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/", line 217, in start
        channel = daemon.RootwrapClientChannel(context=self)
      File "cinder/tests/", line 141, in __init__
        raise Exception('You have attempted to start a privsep helper. '
    Exception: You have attempted to start a privsep helper. This is not allowed in the gate, and indicates a failure to have mocked your tests.

The last bit is the most important. The fixture we installed has detected that you’ve failed to mock a privsep’ed call and has informed you. So, the last step of all is fixing our tests. This normally involves changing where we mock, as many unit tests just lazily mock the execute() call. I try to be more granular than that. Here’s what that looked like in this throttling case:

diff --git a/cinder/tests/unit/ b/cinder/tests/unit/
index 82e2645..edbc2d9 100644
--- a/cinder/tests/unit/
+++ b/cinder/tests/unit/
@@ -29,7 +29,9 @@ class ThrottleTestCase(test.TestCase):
             self.assertEqual([], cmd['prefix'])
     @mock.patch.object(utils, 'get_blkdev_major_minor')
-    def test_BlkioCgroup(self, mock_major_minor):
+    @mock.patch('cinder.privsep.cgroup.cgroup_create')
+    @mock.patch('cinder.privsep.cgroup.cgroup_limit')
+    def test_BlkioCgroup(self, mock_limit, mock_create, mock_major_minor):
         def fake_get_blkdev_major_minor(path):
             return {'src_volume1': "253:0", 'dst_volume1': "253:1",
@@ -37,38 +39,25 @@ class ThrottleTestCase(test.TestCase):
         mock_major_minor.side_effect = fake_get_blkdev_major_minor
-        self.exec_cnt = 0
+        throttle = throttling.BlkioCgroup(1024, 'fake_group')
+        with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
+                             cmd['prefix'])
-        def fake_execute(*cmd, **kwargs):
-            cmd_set = ['cgset', '-r',
-                       'blkio.throttle.%s_bps_device=%s %d', 'fake_group']
-            set_order = [None,
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024),
-                         # a nested job starts; bps limit are set to the half
-                         ('read', '253:0', 512),
-                         ('read', '253:2', 512),
-                         ('write', '253:1', 512),
-                         ('write', '253:3', 512),
-                         # a nested job ends; bps limit is resumed
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024)]
-            if set_order[self.exec_cnt] is None:
-                self.assertEqual(('cgcreate', '-g', 'blkio:fake_group'), cmd)
-            else:
-                cmd_set[2] %= set_order[self.exec_cnt]
-                self.assertEqual(tuple(cmd_set), cmd)
-            self.exec_cnt += 1
-        with mock.patch.object(utils, 'execute', side_effect=fake_execute):
-            throttle = throttling.BlkioCgroup(1024, 'fake_group')
-            with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            # a nested job
+            with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
+                self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
+                                 cmd['prefix'])
-                # a nested job
-                with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
-                    self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
-                                     cmd['prefix'])
+        mock_create.assert_has_calls([mock.call('fake_group')])
+        mock_limit.assert_has_calls([
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024),
+            # a nested job starts; bps limit are set to the half
+            mock.call('fake_group', 'read', '253:0', 512),
+            mock.call('fake_group', 'read', '253:2', 512),
+            mock.call('fake_group', 'write', '253:1', 512),
+            mock.call('fake_group', 'write', '253:3', 512),
+            # a nested job ends; bps limit is resumed
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024)])

…and we’re done. This post has been pretty long, so I am going to stop here for now. However, hopefully I’ve demonstrated that it’s actually not that hard to implement privsep in a project, even with some slight testing polish.


The post Adding oslo privsep to a new project, a worked example appeared first on Made by Mikal.


Planet DebianRaphaël Hertzog: My Free Software Activities in April 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

pkg-security team

I improved the packaging of openvas-scanner and openvas-manager so that they mostly work out of the box, with a dedicated redis database pre-configured and with certificates created in the postinst. I merged a patch for cross-build support in mac-robber and another patch for chkrootkit to avoid unexpected noise in quiet mode.

I prepared an update of openscap-daemon to fix the RC bug #896766 and to update to a new upstream release. I pinged the package maintainer to look into the autopkgtest failure (that I did not introduce). I sponsored hashcat 4.1.0.

Distro Tracker

While the pace has slowed down, I continued to get merge requests. I merged two of them fixing some newcomer bugs.

I reviewed a merge request suggesting to add a “team search” feature.

I did some work of my own too: I fixed many exceptions that have been seen in production with bad incoming emails and with unexpected maintainer emails. I also updated the contributor guide to match the new workflow with salsa and with the new pre-generated database and its associated helper script (to download it and configure the application accordingly). During this process I also filed a GitLab issue about the latest artifact download URL not working as advertised.

I filed many issues (#13 to #19) for things that were only stored in my personal TODO list.

Misc Debian work

Bug Reports. I filed bug #894732 on mk-build-deps to filter build dependencies to include/install based on build profiles. For reprepro, I always found the explanation about FilterList very confusing (bug #895045). I filed and fixed a bug on mirrorbrain with redirection to HTTPS URLs.

I also investigated #894979 and concluded that the CA certificates keystore file generated with OpenJDK 9 is not working properly with OpenJDK 8. This got fixed in ca-certificates-java.

Sponsorship. I sponsored pylint-plugin-utils 0.2.6-2.

Packaging. I uploaded oca-core (still in NEW) and ccextractor for Freexian customers. I also uploaded python-num2words (dependency for oca-core). I fixed the RC bug #891541 on lua-posix.

Live team. I reviewed better handling of missing host dependency on live-build and reviewed a live-boot merge request to ensure that the FQDN returned by DHCP was working properly in the initrd.


See you next month for a new summary of my activities.


Planet DebianRuss Allbery: podlators 4.11

podlators is the CPAN distribution that contains Pod::Man, Pod::Text, and its subclasses, used for rendering Perl documentation as man pages or text files. It's been part of Perl core for a very long time.

This release cleans up a bunch of long-standing bugs in Pod::Text and its subclasses thanks to a series of excellent bug reports from eponymous alias. The default value of the sentence option was documented incorrectly, the width option was ignored in Pod::Text::Termcap, that module also worked incorrectly when COLUMNS wasn't set in the environment, and both Pod::Text::Termcap and Pod::Text::Color had various problems with wrapping that are now fixed. Long =item text was miswrapped due to an incorrect length calculation, and text attributes are now cleared at the end of each line and then reapplied to work better with common pagers.

In addition, the none value for the errors option of Pod::Man and Pod::Text now works correctly, thanks to a bug report from Olly Betts.

You can get the latest version from the podlators distribution page.

Cory DoctorowDonald Trump is a pathogen evolved to thrive in an attention-maximization ecosystem

My latest Locus column is The Engagement-Maximization Presidency, and it proposes a theory to explain the political phenomenon of Donald Trump: we live in a world in which communications platforms amplify anything that gets “engagement” and provides feedback on just how much your message has been amplified so you can tune and re-tune for maximum amplification.

Peter Watts’s 2002 novel Maelstrom illustrates a beautiful, terrifying example of this, in which a mindless, self-modifying computer virus turns itself into a chatbot that impersonates patient zero in a world-destroying pandemic; even though the virus doesn’t understand what it’s doing or how it’s doing it, it’s able to use feedback to refine its strategies, gaining control over more resources with which to try more strategies.
It’s a powerful metaphor for the kind of cold reading we see Trump engaging in at his rallies, and for the presidency itself. I think it also explains why getting Trump off Twitter is impossible: it’s his primary feedback tool, and without it, he wouldn’t know what kinds of rhetoric to double down on and what to quietly sideline.

Maelstrom is concerned with a pandemic that is started by its protagonist, Lenie Clark, who returns from a deep ocean rift bearing an ancient, devastating pathogen that burns its way through the human race, felling people by the millions.

As Clark walks across the world on a mission of her own, her presence in a message or news story becomes a signal of the utmost urgency. The filters (firewalls that give priority to some packets and suppress others as potentially malicious) are programmed to give highest priority to any news that might pertain to Lenie Clark, as the authorities try to stop her from bringing death wherever she goes.

Here’s where Watts’s evolutionary biology shines: he posits a piece of self-modifying malicious software – something that really exists in the world today – that automatically generates variations on its tactics to find computers to run on and reproduce itself. The more computers it colonizes, the more strategies it can try and the more computational power it can devote to analyzing these experiments and directing its random walk through the space of all possible messages to find the strategies that penetrate more firewalls and give it more computational power to devote to its task.

Through the kind of blind evolution that produces predator-fooling false eyes on the tails of tropical fish, the virus begins to pretend that it is Lenie Clark, sending messages of increasing convincingness as it learns to impersonate patient zero. The better it gets at this, the more welcoming it finds the firewalls and the more computers it infects.

At the same time, the actual pathogen that Lenie Clark brought up from the deeps is finding more and more hospitable hosts to reproduce in: thanks to the computer virus, which is directing public health authorities to take countermeasures in all the wrong places. The more effective the computer virus is at neutralizing public health authorities, the more the biological virus spreads. The more the biological virus spreads, the more anxious the public health authorities become for news of its progress, and the more computers there are trying to suck in any intelligence that seems to emanate from Lenie Clark, supercharging the computer virus.

Together, this computer virus and biological virus co-evolve, symbiotes who cooperate without ever intending to, like the predator that kills the prey that feeds the scavenging pathogen that weakens other prey to make it easier for predators to catch them.

The Engagement-Maximization Presidency [Cory Doctorow/Locus]

(Image: Kevin Dooley, CC-BY; Trump’s Hair)

Krebs on SecurityStudy: Attack on KrebsOnSecurity Cost IoT Device Owners $323K

A monster distributed denial-of-service attack (DDoS) against KrebsOnSecurity.com in 2016 knocked this site offline for nearly four days. The attack was executed through a network of hacked “Internet of Things” (IoT) devices such as Internet routers, security cameras and digital video recorders. A new study that tries to measure the direct cost of that one attack for IoT device users whose machines were swept up in the assault found that it may have cost device owners a total of $323,973.75 in excess power and added bandwidth consumption.

My bad.

But really, none of it was my fault at all. It was mostly the fault of IoT makers for shipping cheap, poorly designed products (insecure by default), and the fault of customers who bought these IoT things and plugged them onto the Internet without changing the things’ factory settings (passwords at least.)

The botnet that hit my site in Sept. 2016 was powered by the first version of Mirai, a malware strain that wriggles into dozens of IoT devices left exposed to the Internet and running with factory-default settings and passwords. Systems infected with Mirai are forced to scan the Internet for other vulnerable IoT devices, but they’re just as often used to help launch punishing DDoS attacks.

By the time of the first Mirai attack on this site, the young masterminds behind Mirai had already enslaved more than 600,000 IoT devices for their DDoS armies. But according to an interview with one of the admitted and convicted co-authors of Mirai, the part of their botnet that pounded my site was a mere slice of firepower they’d sold for a few hundred bucks to a willing buyer. The attack army sold to this ne’er-do-well harnessed the power of just 24,000 Mirai-infected systems (mostly security cameras and DVRs, but some routers, too).

These 24,000 Mirai devices clobbered my site for several days with data blasts of up to 620 Gbps. The attack was so bad that my pro-bono DDoS protection provider at the time — Akamai — had to let me go because the data firehose pointed at my site was starting to cause real pain for their paying customers. Akamai later estimated that the cost of maintaining protection for my site in the face of that onslaught would have run into the millions of dollars.

We’re getting better at figuring out the financial costs of DDoS attacks to the victims (5, 6 or 7 -digit dollar losses) and to the perpetrators (zero to hundreds of dollars). According to a report released this year by DDoS mitigation giant NETSCOUT Arbor, fifty-six percent of organizations last year experienced a financial impact from DDoS attacks for between $10,000 and $100,000, almost double the proportion from 2016.

But what if there were also a way to work out the cost of these attacks to the users of the IoT devices which get snared by DDoS botnets like Mirai? That’s what researchers at University of California, Berkeley School of Information sought to determine in their new paper, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices.”

If we accept the UC Berkeley team’s assumptions about costs borne by hacked IoT device users (more on that in a bit), the total cost of added bandwidth and energy consumption from the botnet that hit my site came to $323,973.75. This may sound like a lot of money, but remember that broken down among 24,000 attacking drones the per-device cost comes to just $13.50.
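As a quick sanity check on the arithmetic, dividing the study’s headline total (the $323,973.75 figure quoted at the top of this piece) by the 24,000 attacking devices reproduces the per-device figure:

```python
# Figures taken from the study as quoted above (total in USD).
total_cost = 323_973.75
infected_devices = 24_000

per_device = total_cost / infected_devices
print(round(per_device, 2))  # roughly 13.5 dollars per hacked device
```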

So let’s review: The attacker who wanted to clobber my site paid a few hundred dollars to rent a tiny portion of a much bigger Mirai crime machine. That attack would likely have cost millions of dollars to mitigate. The consumers in possession of the IoT devices that did the attacking probably realized a few dollars in losses each, if that. Perhaps forever unmeasured are the many Web sites and Internet users whose connection speeds are often collateral damage in DDoS attacks.

Image: UC Berkeley.

Anyone noticing a slight asymmetry here in either costs or incentives? IoT security is what’s known as an “externality,” a term used to describe “positive or negative consequences to third parties that result from an economic transaction. When one party does not bear the full costs of its actions, it has inadequate incentives to avoid actions that incur those costs.”

In many cases negative externalities are synonymous with problems that the free market has a hard time rewarding individuals or companies for fixing or ameliorating, much like environmental pollution. The common theme with externalities is that the pain points to fix the problem are so diffuse and the costs borne by the problem so distributed across international borders that doing something meaningful about it often takes a global effort with many stakeholders — who can hopefully settle upon concrete steps for action and metrics to measure success.

The paper’s authors explain the misaligned incentives on two sides of the IoT security problem:

-“On the manufacturer side, many devices run lightweight Linux-based operating systems that open doors for hackers. Some consumer IoT devices implement minimal security. For example, device manufacturers may use default username and password credentials to access the device. Such design decisions simplify device setup and troubleshooting, but they also leave the device open to exploitation by hackers with access to the publicly-available or guessable credentials.”

-“Consumers who expect IoT devices to act like user-friendly ‘plug-and-play’ conveniences may have sufficient intuition to use the device but insufficient technical knowledge to protect or update it. Externalities may arise out of information asymmetries caused by hidden information or misaligned incentives. Hidden information occurs when consumers cannot discern product characteristics and, thus, are unable to purchase products that reflect their preferences. When consumers are unable to observe the security qualities of software, they instead purchase products based solely on price, and the overall quality of software in the market suffers.”

The UC Berkeley researchers concede that their experiments — in which they measured the power output and bandwidth consumption of various IoT devices they’d infected with a sandboxed version of Mirai — suggested that the scanning and DDoSsing activity prompted by a Mirai malware infection added almost negligible amounts in power consumption for the infected devices.

Thus, most of the loss figures cited for the 2016 attack rely heavily on estimates of how much the excess bandwidth created by a Mirai infection might cost users directly, and as such I suspect the $13.50 per machine estimates are on the high side.

No doubt, some Internet users get online via an Internet service provider that includes a daily “bandwidth cap,” such that over-use of the allotted daily bandwidth amount can incur overage fees and/or relegates the customer to a slower, throttled connection for some period after the daily allotted bandwidth overage.

But for a majority of high-speed Internet users, the added bandwidth use from a router or other IoT device on the network being infected with Mirai probably wouldn’t show up as an added line charge on their monthly bills. I asked the researchers about the considerable wiggle factor here:

“Regarding bandwidth consumption, the cost may not ever show up on a consumer’s bill, especially if the consumer has no bandwidth cap,” reads an email from the UC Berkeley researchers who wrote the report, including Kim Fong, Kurt Hepler, Rohit Raghavan and Peter Rowland.

“We debated a lot on how to best determine and present bandwidth costs, as it does vary widely among users and ISPs,” they continued. “Costs are more defined in cases where bots cause users to exceed their monthly cap. But even if a consumer doesn’t directly pay a few extra dollars at the end of the month, the infected device is consuming actual bandwidth that must be supplied/serviced by the ISP. And it’s not unreasonable to assume that ISPs will eventually pass their increased costs onto consumers as higher monthly fees, etc. It’s difficult to quantify the consumer-side costs of unauthorized use — which is likely why there’s not much existing work — and our stats are definitely an estimate, but we feel it’s helpful in starting the discussion on how to quantify these costs.”

Measuring bandwidth and energy consumption may turn out to be a useful and accepted tool to help more accurately measure the full costs of DDoS attacks. I’d love to see these tests run against a broader range of IoT devices in a much larger simulated environment.

If the Berkeley method is refined enough to become accepted as one of many ways to measure actual losses from a DDoS attack, the reporting of such figures could make these crimes more likely to be prosecuted.

Many DDoS attack investigations go nowhere because targets of these attacks fail to come forward or press charges, making it difficult for prosecutors to prove any real economic harm was done. Since many of these investigations die on the vine for a lack of financial damages reaching certain law enforcement thresholds to justify a federal prosecution (often $50,000 – $100,000), factoring in estimates of the cost to hacked machine owners involved in each attack could change that math.

But the biggest levers for throttling the DDoS problem are in the hands of the people running the world’s largest ISPs, hosting providers and bandwidth peering points on the Internet today. Some of those levers I detailed in the “Shaming the Spoofers” section of The Democraticization of Censorship, the first post I wrote after the attack and after Google had brought this site back online under its Project Shield program.

By the way, we should probably stop referring to IoT devices as “smart” when they start misbehaving within three minutes of being plugged into an Internet connection. That’s about how long your average cheapo, factory-default security camera plugged into the Internet has before getting successfully taken over by Mirai. In short, dumb IoT devices are those that don’t make it easy for owners to use them safely without being a nuisance or harm to themselves or others.

Maybe what we need to fight this onslaught of dumb devices are more network operators turning to ideas like IDIoT, a network policy enforcement architecture for consumer IoT devices that was first proposed in December 2017.  The goal of IDIoT is to restrict the network capabilities of IoT devices to only what is essential for regular device operation. For example, it might be okay for network cameras to upload a video file somewhere, but it’s definitely not okay for that camera to then go scanning the Web for other cameras to infect and enlist in DDoS attacks.

So what does all this mean to you? That depends on how many IoT things you and your family and friends are plugging into the Internet and your/their level of knowledge about how to secure and maintain these devices. Here’s a primer on minimizing the chances that your orbit of IoT things become a security liability for you or for the Internet at large.

CryptogramRay Ozzie's Encryption Backdoor

Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades.

Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.
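As a rough illustration of that data flow (this is not real cryptography and not Ozzie’s actual design; the XOR “encryption” is purely a placeholder for public-key wrapping, and all names are invented), the escrow structure looks something like:

```python
import secrets

ESCROW_DB = {}  # the central database law enforcement would query


def xor_bytes(key, data):
    """Toy stand-in for real encryption; applying it twice decrypts."""
    return bytes(a ^ b for a, b in zip(key, data))


def provision(device_id):
    """Manufacturer generates a per-device key and escrows a copy."""
    device_key = secrets.token_bytes(32)
    ESCROW_DB[device_id] = device_key  # securing this DB is the hard part
    return device_key


def wrap_user_key(device_key, user_key):
    """On-device: the per-device key wraps whatever key encrypts the data."""
    return xor_bytes(device_key, user_key)


def escrow_recover(device_id, wrapped_key):
    """Law-enforcement path (the real scheme also bricks the device first)."""
    return xor_bytes(ESCROW_DB[device_id], wrapped_key)


user_key = secrets.token_bytes(32)
device_key = provision('device-1')
wrapped = wrap_user_key(device_key, user_key)
recovered = escrow_recover('device-1', wrapped)
```

Even in this toy form, the structural weakness is visible: anyone who obtains `ESCROW_DB` can recover every user key, which is exactly the objection raised below.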

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.

Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them. Here's Rob Graham:

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

A bunch of us cryptographers have already explained why we don't think this sort of thing will work in the foreseeable future. We write:

Exceptional access would force Internet system developers to reverse "forward secrecy" design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

Finally, Matthew Green:

The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line in an environment where we believe with high probability that the system will fail.

Sociological ImagesPocket-sized Politics

Major policy issues like gun control often require massive social and institutional changes, but many of these issues also have underlying cultural assumptions that make the status quo seem normal. By following smaller changes in the way people think about issues, we can see gradual adjustments in our culture that ultimately make the big changes more plausible.

Photo Credit: Emojipedia

For example, today’s gun debate even drills down to the little cartoons on your phone. There’s a whole process for proposing and reviewing new emoji, but different platforms have their own control over how they design the cartoons in coordination with the formal standards. Last week, Twitter pointed me to a recent report from Emojipedia about platform updates to the contested “pistol” emoji, moving from a cartoon revolver to a water pistol:

In an update to the original post, all major vendors have committed to this design change for “cross-platform compatibility.”

There are a couple ways to look at this change from a sociological angle. You could tell a story about change from the bottom-up, through social movements like the March For Our Lives, calling for gun reform in the wake of mass shootings. These movements are drawing attention to the way guns permeate American culture, and their public visibility makes smaller choices about the representation of guns more contentious. Apple didn’t comment directly on the intentions behind the redesign when it came out, but it has weighed in on the politics of emoji design in the past.

You could also tell a story about change from the top-down, where large tech companies have looked to copy Apple’s innovation for consistency in a contentious and uncertain political climate (sociologists call this “institutional isomorphism”). In the diagram, you can see how Apple’s early redesign provided an alternative framework for other companies to take up later on, just like Google and Microsoft adopted the dominant pistol design in earlier years.

Either way, if you favor common sense gun reform, redesigning emojis is obviously not enough. But cases like this help us understand how larger shifts in social norms are made up of many smaller changes that challenge the status quo.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Planet DebianNorbert Preining: Gaming: The House of Da Vinci

Recently my gaming time has dropped close to zero, but The House of Da Vinci was perfectly suited to my low-time burst play mode. A game in the style of the Room series, available on all kinds of platforms (PC, Mac, Android, iOS or Amazon Kindle – who wants to play on Kindle?), it took a few hours of my spare time, mostly while soaking in the bath tub.

The game is set at the height of the Renaissance, at the beginning of the 16th century, and the player is invited by Leonardo Da Vinci to see his most recent spectacular invention. But it turns out that Da Vinci isn't there, and it isn't that easy to enter or progress through the various levels. A typical puzzle game with a big part of point-click-search-point-click-search repetition, it still kept me interested throughout the whole game.

I played the game on a tablet (Amazon Fire 10), which was definitely better than on my mobile, and much more fun. The graphics are very well done. That helped to overcome the fatigue I normally get from games that require the player to tap on every single pixel in the hope of finding a hidden door, hidden key, or hidden foobar. I like the riddles part, but not the searching part.

Fortunately, to keep the player’s frustration at bay, there is an excellent and well thought out hint system. Increasingly detailed hints, from rather general suggestions to detailed instructions, help the player when he gets stuck. Something that happened to me quite often (see above).

The game controls, at least on the tablet, worked very nicely, no problems whatsoever. I guess playing it on a small-screen device (like my mobile) will make it even harder to find some of the special spots, though. But on a bit bigger device it is great fun and a nice getaway from the normal workload.

During the game one can also collect some non-essential items and later on (after finishing) visit Da Vinci’s garden with his inventions (or rather, those one has found) on display. Other than these gimmicks I’m not aware of any further hidden Easter eggs or similar, but one never knows.

All in all a very nice and enjoyable game for a few hours of relaxation. After having started the game on my mobile I switched to the tablet and restarted there, and I realized that the game actually has replay value. Guess I need to go through it once again. So all in all, recommended if you like puzzles!

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.4

A minor update, version 0.3.4 of RcppGSL, is now on CRAN. It contains an improved Windows build system (thanks, Jeroen!) and updates the C++ headers by removing dynamic exception specifications, which C++11 frowns upon, and the compilers let us know that in no uncertain terms. Builds using RcppGSL will now be quieter. And as always, an extra treat for Solaris.
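For the curious, the deprecated construct is the pre-C++11 dynamic exception specification. A minimal illustration of the change (my own sketch, not RcppGSL's actual code):

```cpp
#include <cmath>

// Pre-C++11 style: a dynamic exception specification, deprecated in
// C++11 and removed in C++17, so modern g++ warns about it:
//
//     double checked_sqrt(double x) throw();
//
// The modern replacement simply uses noexcept:
double checked_sqrt(double x) noexcept {
    return std::sqrt(x);
}
```

Mechanically swapping `throw()` for `noexcept` is exactly the kind of header-wide cleanup that silences those new compiler warnings.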

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.4 (2018-05-06)

  • Windows builds were updated (Jeroen Ooms in #16).

  • Remove dynamic exception specifications which are deprecated with C++11 or later (Dirk in #17).

  • Accommodate Solaris by being more explicit about sqrt.

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianTimo Jyrinki: Converting an existing installation to LUKS using luksipc - 2018 notes

Time for a laptop upgrade. Encryption was still not the default for the new Dell XPS 13 Developer Edition (9370) that shipped with Ubuntu 16.04 LTS, so I followed my own notes from 3 years ago together with the official documentation to convert the unencrypted OEM Ubuntu installation to LUKS during the weekend. This only took under 1h altogether.

On this new laptop model, EFI boot was already in use, Secure Boot was enabled and the SSD had GPT from the beginning. The only thing I wanted to change thus was the / to be encrypted.

Some notes for 2018 to clarify what is needed and what is not needed:
  • Before luksipc, remember to resize existing partitions so that there is 10 MB of free space at the end of the / partition, and also create a new partition of e.g. 1 GB for /boot.
  • To get the code and compile luksipc on an Ubuntu 16.04.4 LTS live USB, just apt install git build-essential is needed. The cryptsetup package is already installed.
  • After luksipc finishes and you've added your own passphrase and removed the initial key (slot 0), it's useful to cryptsetup luksOpen it and mount it still under the live session - however, when using ext4, the mounting fails due to a size mismatch in ext4 metadata! This is simple to correct: sudo resize2fs /dev/mapper/root. Nothing else is needed.
  • I mounted both the newly encrypted volume (to /mnt) and the new /boot volume (to /mnt2 which I created), and moved /boot/* from the former to latter.
  • I edited /etc/fstab of the encrypted volume to add the /boot partition
  • Mounted as following in /mnt:
    • mount -o bind /dev dev
    • mount -o bind /sys sys
    • mount -t proc proc proc
  • Then:
    • chroot /mnt
    • mount -a # (to mount /boot and /boot/efi)
    • Edited files /etc/crypttab (added one line: root UUID none luks) and /etc/default/grub (I copied over my overkill configuration that specifies all of cryptopts and cryptdevice, some of which may be obsolete, but at least one of them and root=/dev/mapper/root is probably needed).
    • Ran grub-install ; update-grub ; mkinitramfs -k all -c (notably no other parameters were needed)
    • Rebooted.
  • What I did not need to do:
    • Modify anything in /etc/initramfs-tools.
If the passphrase input shows on your next boot, but your correct passphrase isn't accepted, it's likely that the initramfs wasn't properly updated yet. I first forgot to run the mkinitramfs command and faced this.
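Condensed into a single transcript (device names and mount points as used above; this is my summary of the steps, so adapt before running anything):

```shell
# From the live session, after luksipc has converted the root partition
# and the volume has been opened as /dev/mapper/root.
sudo resize2fs /dev/mapper/root       # fix the ext4 size mismatch
sudo mount /dev/mapper/root /mnt
sudo mount -o bind /dev /mnt/dev
sudo mount -o bind /sys /mnt/sys
sudo mount -t proc proc /mnt/proc
sudo chroot /mnt
# inside the chroot:
mount -a                              # mounts /boot and /boot/efi
grub-install
update-grub
mkinitramfs -k all -c
```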

Worse Than FailureCodeSOD: CHashMap

There’s a phenomenon I think of as the “evolution of objects” and it impacts novice programmers. They start by having piles of variables named things like userName0, userName1, accountNum0, accountNum1, etc. This is awkward and cumbersome, and then they discover arrays. string* userNames, int[] accountNums. This is also awkward and cumbersome, and then they discover hash maps, and can do something like Map<string, string>* users. Most programmers go on to discover “wait, objects do that!”

Not so Brian’s co-worker, Dagny. Dagny wanted to write some C++, but didn’t want to learn that pesky STL or have to master templates. Dagny also considered themselves a “performance junkie”, so they didn’t want to bloat their codebase with peer-reviewed and optimized code, and instead decided to invent that wheel themselves.

Thus was born CHashMap. Now, Brian didn’t do us the favor of including any of the implementation of CHashMap, claiming he doesn’t want to “subject the readers to the nightmares that would inevitably arise from viewing this horror directly”. Important note for submitters: we want those nightmares.

Instead, Brian shares with us how the CHashMap is used, and from that we can infer a great deal about how it was implemented. First, let’s simply look at some declarations:

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

Note that CHashMap does not take any type parameters. This is because it’s “type ignorant”, which is like being type agnostic, but with more char*. For example, if you want to get, say, the “amount_due” field, you might write code like this:

    double amount = 0;
    amount = Atof(bills.Item("amount_due"));

Yes, everything, keys and values, is simply a char*. And, as a bonus, in the interests of code clarity, we can see that Dagny didn’t do anything dangerous, like overload the [] operator. It would certainly be confusing to be able to index the hash map like it were any other collection type.
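Since Brian withheld the implementation, we can only reconstruct the interface from these call sites. Here is one guess at a minimal CHashMap shape (names and behavior inferred, not Dagny's actual code), ironically backed by the very standard-library containers Dagny refused to touch:

```cpp
#include <string>
#include <utility>
#include <vector>

// A reconstruction of CHashMap as inferred from the article's call sites:
// everything is a string, duplicate keys are allowed, and values are looked
// up by key plus an optional row index. The real thing presumably used raw
// char* and hand-rolled hashing instead.
class CHashMap {
public:
    void AddItem(const std::string& key, const std::string& value) {
        items_.emplace_back(key, value);
    }
    // Item(key, r): the value for `key` in the r-th row containing that key.
    const char* Item(const std::string& key, int r = 0) const {
        int seen = 0;
        for (const auto& kv : items_)
            if (kv.first == key && seen++ == r)
                return kv.second.c_str();
        return "";  // "type ignorant" APIs rarely report errors
    }
    int Count() const { return static_cast<int>(items_.size()); }
    void Clear() { items_.clear(); }
private:
    std::vector<std::pair<std::string, std::string>> items_;
};
```

Even this charitable reconstruction shows the core problem: every lookup is a linear scan over stringly-typed pairs, and a missing key is indistinguishable from an empty value.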

Now, since everything is stored as a char*, it’s onto you to convert it back to the right type, but since chars are just bytes if you don’t look too closely… you can store anything at that pointer. So, for example, if you wanted to get all of a user’s billing history, you might do something like this…

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

    int nbills = dbQuery (query, bills, billcols);
    if (nbills > 0) {
        // scan the bills for amounts and/or issues
        double amount;
        double amountDue = 0;
        int unresolved = 0;

        for (int r=0; r<nbills; r++) {
            if (Strcmp(bills.Item("payment_status",r),BILL_STATUS_REMITTED) != 0) {
                amount = Atof(bills.Item("amount_due",r));
                if (amount >= 0) {
                    amountDue += amount;
                    if (Strcmp(bills.Item("status",r),BILL_STATUS_WAITING) == 0) {
                        unresolved += 1;
                        billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                        billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                        billinfo.AddItem ("account", bills.Item("account_number",r));
                        billinfo.AddItem ("amount", amount);
                    }
                } else {
                    amountDue += 0;
                    unresolved += 1;
                    billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                    billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                    billinfo.AddItem ("account", bills.Item("account_number",r));
                    billinfo.AddItem ("amount", "???");
                }
                summary.AddItem ("", &billinfo);
            }
        }

Look at that summary.AddItem ("", &billinfo) line. Yes, that is an empty key. Yes, they’re pointing it at a reference to the billinfo (which also gets Clear()ed a few lines earlier, so I have no idea what’s happening there). And yes, they’re doing this assignment in a loop, but don’t worry! CHashMap allows multiple values per key! That "" key will hold everything.

So, you have multi-value keys which can themselves point to nested CHashMaps, which means you don’t need any complicated JSON or XML classes, you can just use CHashMap as your hammer/foot-gun.

    //code which fetches account details from JSON
    CHashMap accounts;
    CHashMap details;
    CHashMap keys;

    rc = getAccounts (userToken, accounts);
    if (rc == 0) {
        for (int a=1; a<=accounts.Count(); a++) {
            cvt.jsonToKeys (accounts.Item(a), keys);
            rc = getAccountDetails (userToken, keys.Item("accountId"), details);
        }
    }

    // Details of getAccounts
    int getAccounts (const char * user, CHashMap& rows) {
      // <snip>
      AccountClass account;
      for (int a=1; a<=count; a++) {
        // Populate the account class
        // <snip>
        rows.AddItem ("", account.jsonify(t));
      }
      // <snip>
    }

With this kind of versatility, is it any surprise that pretty much every function in the application depends on a CHashMap somehow? If that doesn’t prove its utility, I don’t know what will. How could you do anything better? Use classes? Don’t make me laugh!

As a bonus, remember this line above? billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))))? Well, Brian has this to add:

it’s worth mentioning that our DB stores dates in the typical format: “YYYY-MM-DD hh:mm:ss”. cvtUTC is a function that converts a date-time string to a time_t value, and FormatTime converts a time_t to a date-time string.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianDaniel Pocock: Powering a ham radio transmitter

Last week I announced the crowdfunding campaign to help run a ham radio station at OSCAL. Thanks to all those people who already donated or expressed interest in volunteering.

Modern electronics are very compact and most of what I need to run the station can be transported in my hand luggage. The two big challenges are power supplies and antenna masts. In this blog post there are more details about the former.

Here is a picture of all the equipment I hope to use:

The laptop is able to detect incoming signals using the RTL-SDR dongle and up-converter. After finding a signal, we can then put the frequency into the radio transmitter (in the middle of the table), switch the antenna from the SDR to the radio and talk to the other station.

The RTL-SDR and up-converter run on USB power and a phone charger. The transmitter, however, needs about 22A at 12V DC. This typically means getting a large linear power supply or a large battery.

In the photo, I've got a Varta LA60 AGM battery, here is a close up:

There are many ways to connect to a large battery. For example, it is possible to use terminals like these with holes in them for the 8 awg wire or to crimp ring terminals onto a wire and screw the ring onto any regular battery terminal. The type of terminal with these extra holes in it is typically sold for car audio purposes. In the photo, the wire is 10 awg superflex. There is a blade fuse along the wire and the other end has a PowerPole 45 plug. You can easily make cables like this yourself with a PowerPole crimping tool, everything can be purchased online from sites like eBay.

The wire from the battery goes into a fused power distributor with six PowerPole outlets for connecting the transmitter and other small devices, for example, the lamp in the ATU or charging the handheld:

The AGM battery in the photo weighs about 18kg and is unlikely to be accepted in my luggage, hence the crowdfunding campaign to help buy one for the local community. For many of the young people and students in the Balkans, the price of one of the larger AGM batteries is equivalent to about one month of their income so nobody there is going to buy one on their own. Please consider making a small donation if you would like to help as it won't be possible to run demonstrations like this without power.

Planet DebianNOKUBI Takatsugu: Now mikutter is not work

mikutter (deb), a Twitter client written in Ruby, is not working at the moment because its Consumer Key has been suspended (Japanese blog).

Planet DebianThorsten Glaser: mksh bugfix — thank you for the music

I’m currently working on an mksh(1) and bc(1) script that takes a pitch standard (e.g. “A₄ = 440 Hz” or “C₄ = 256 Hz”) and a config file describing a temperament (e.g. the usual equal temperament, or Pythagorean untempered pure fifths (with the wolf), or “just” intonation, Werckmeister Ⅲ, Vallotti or Bach/Lehman 1722, to name a few; these are all temperaments that handle enharmonics the same or, for Pythagorean in our case, ignore the fact that they’re unplayable). Temperaments are rule-based, like in ttuner. Well, I’m not quite there yet, but I’m already able to display the value for MuseScore to adjust its pitch standard (it can only take A₄-based values), a frequency table, and a list and table of cent deltas (useful for using or comparing with other tuners). Of course, right now, the cent deltas are all 0 because, well, they are equal temperament against equal temperament (as baseline), but I can calculate that with arbitrary and very high precision!

For outputting, I wanted to make the tables align nicely; column(1), which I normally use, was out because it always left-aligns, so I used string padding in Korn Shell — except I’m also a Unicode BMP fan, so I had F♯ and B♭ in my table headings, which were for some reason correctly right-aligned (for when the table values were integers) but not padded right when aligning with the decimal dot. So I worked around it, but also investigated.

Turns out that the desired length was used as second snprintf(3) argument, instead of, as in the right-align case, the buffer size. This worked only until multibyte characters happened. A fun bug, which only took about three minutes to find, and is covered by a new check in the testsuite even. Thought I’d share.
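The bug class is easy to reproduce. In this sketch (hypothetical pad_buggy/pad_fixed helpers, not mksh's actual code), the buggy variant passes the desired field width where snprintf's buffer size belongs, truncating the string mid-character; the fixed variant passes the real buffer size and moves the width into the format string. Note that even the fixed variant pads by bytes rather than display columns, which is the residual alignment wrinkle with multibyte headings like F♯:

```c
#include <stdio.h>

/* Buggy: the desired field width is passed as snprintf's size argument,
 * so at most width-1 bytes are written -- possibly splitting a multibyte
 * UTF-8 character in half. */
void pad_buggy(char *buf, int width, const char *s) {
    snprintf(buf, (size_t)width, "%s", s);
}

/* Fixed: the buffer size goes in the size argument, and the width goes in
 * as a %*s argument. The padding still counts bytes, not display columns,
 * so "F♯" (4 bytes, 2 columns) gets less visual padding than expected. */
void pad_fixed(char *buf, size_t bufsize, int width, const char *s) {
    snprintf(buf, bufsize, "%*s", width, s);
}
```

The one-character fix, using the buffer size as it already was in the right-align code path, is exactly the kind of bug a multibyte-aware testsuite check catches.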

Feedback on and improvements for the tuner, once it’ll be done, are, of course, also welcome. I plan to port the algorithm (once I’ve got it down in a programming language I know well) to QML for inclusion in the tuner MuseScore plugin, even. Check here, for now, for my work in progress… it’s quite big already despite doing basically nothing. Foundation laid (or so…).


Planet DebianDirk Eddelbuettel: RcppMsgPack 0.2.2

A maintenance release, version 0.2.2 of RcppMsgPack, got onto CRAN this afternoon. It contains a single static_cast fix to address a warning which g++-8.1 shows whereas older compilers remained silent---and CRAN asked us to address this.

MessagePack itself is an efficient binary serialization format. It lets you exchange data among multiple languages like JSON. But it is faster and smaller. Small integers are encoded into a single byte, and typical short strings require only one extra byte in addition to the strings themselves. RcppMsgPack brings both the C++ headers of MessagePack as well as clever code (in both R and C++) Travers wrote to access MsgPack-encoded objects directly from R.
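Those size claims follow directly from the MessagePack format specification. Here is a toy encoder for just the two cases mentioned, positive fixint and fixstr (illustrative only; this is not RcppMsgPack's API):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// MessagePack positive fixint: values 0..127 encode as a single byte
// that is the value itself.
std::vector<std::uint8_t> pack_small_int(std::uint8_t v) {
    return {v};  // only valid for v <= 127
}

// MessagePack fixstr: strings up to 31 bytes cost exactly one extra byte,
// a length prefix of (0xa0 | length), followed by the raw string bytes.
std::vector<std::uint8_t> pack_short_str(const std::string& s) {
    std::vector<std::uint8_t> out;
    out.push_back(static_cast<std::uint8_t>(0xa0 | s.size()));
    out.insert(out.end(), s.begin(), s.end());
    return out;
}
```

So the integer 42 serializes to one byte and "hi" to three, versus the four and four characters JSON would spend before any punctuation.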

Changes in version 0.2.2 (2018-05-06)

  • Apply a static_cast from upstream to suppress a warning from g++-8.1 as requested by CRAN.

Courtesy of CRANberries, there is also a diffstat report for this release.

More information is on the RcppMsgPack page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianRuss Allbery: rra-c-util 7.1

This is my collection of portability libraries and test suite support for my C projects.

The big news in this release was already discussed in the corresponding C TAP Harness release: much better support for valgrind checking. Not only has valgrind been enabled in this package to check the portability and utility layers, but the TAP helper library has been updated to support spawning remctld with valgrind for remctl tests. (The primary consumer of this support will be the remctl package, of course.) This release also includes a valgrind/logs check that is meant to run last in a test list and scans a directory of valgrind logs, reporting test failures for any that contain valgrind errors. If the test suite isn't run under valgrind, this test is just skipped.

Some other fixes in this release:

  • Fix misplaced va_end in putil_log_failure.

  • Avoid testing the client IP address in network_bind_all tests. Some hosts in the compiler farm for open source testing have a weird network configuration where localhost packets appear to come from an IP address other than the expected one, resulting in spurious failures.

  • Define UINT32_MAX on systems that don't have it.

  • Skip more Autoconf and Automake files in the all_files function of Test::RRA::Automake.

This release also contains a few other, more minor fixes, and switches the license of all of my public domain source files to the FSF all-permissive license in the name of standardizing on licenses registered with the SPDX project and vetted by actual lawyers.

You can get the latest release from the rra-c-util distribution page.

Planet DebianRuss Allbery: C TAP Harness 4.3

This is my test harness and C library for supporting TAP (Test Anything Protocol, the same used by Perl) tests for C projects.

The big change in this release is test options: a test list passed to runtests can now include space-separated options after the name of the test to run. The two supported options are valgrind, which controls whether the test can be run under valgrind, and libtool, which further controls how the test is run under valgrind.

The background on this is a bit interesting.

I've had Makefile targets to run test suites under valgrind for various packages for a while, using valgrind's --trace-children=yes option and running runtests under valgrind. However, this has a serious problem: valgrind tries to trace into every child, including shell scripts or Perl scripts used in the test suite. This results in valgrind logs that are basically always unuseful: lots of false positives or uninteresting small memory leaks in standard system tools.

For projects that use Libtool, the situation becomes far worse, since Libtool wraps all the executables in the build tree with shell scripts that set up an appropriate library path. valgrind traces into those shell scripts and all the various programs they run. As a result, running valgrind with --trace-children=yes on a non-trivial test suite such as remctl's could easily result in 10,000 or more valgrind log files, most of which were just noise.

As a result, while the valgrind target was there, I rarely ran it or tried to dig through the results. And that, in turn, led to security vulnerabilities like the recent one in remctl.

Test options are a more comprehensive fix. When the valgrind test option is set, C TAP Harness will look at the C_TAP_VALGRIND environment variable. If it is set, that test will be run, directly by the harness, using the command given in that environment variable. A typical setting might be:

VALGRIND_COMMAND = $(PATH_VALGRIND) --leak-check=full             \
        --log-file=$(abs_top_builddir)/tests/tmp/valgrind/log.%p

and then a Makefile target like:

# Used by maintainers to run the main test suite under valgrind.
check-valgrind: $(check_PROGRAMS)
        rm -rf $(abs_top_builddir)/tests/tmp
        mkdir $(abs_top_builddir)/tests/tmp
        mkdir $(abs_top_builddir)/tests/tmp/valgrind
        C_TAP_VALGRIND="$(VALGRIND_COMMAND)"                    \
            C_TAP_LIBTOOL="$(top_builddir)/libtool"             \
            tests/runtests -l '$(abs_top_srcdir)/tests/TESTS'

(this is taken from the current version of remctl). Note that --trace-children=yes can be omitted, which avoids recursing into any small helper programs that tests run, and only tests with the valgrind option set will be run under valgrind (the rest will be run as normal). This cuts down massively on the noise. Tests that explicitly want to invoke valgrind on some other program can use the presence of C_TAP_VALGRIND in the environment to decide whether to do that and what command to use.

The C_TAP_LIBTOOL setting above is the other half of the fix. Packages that use Libtool may have tests that are Libtool-generated shell wrappers, so just using the above would still run valgrind on a shell script. But if the libtool test option is also set, C TAP Harness now knows to invoke the test with libtool --mode=execute and then the valgrind command, which causes Libtool to expand all of the shell wrappers to actual binaries first and then run valgrind only on the actual binary. The path to libtool is taken from the C_TAP_LIBTOOL environment variable.

The final piece is an additional test that scans the generated valgrind log files. That piece is part of rra-c-util, so I'll talk about it with that release.

There are two more changes in this release. First, borrowing an idea from Rust, diagnostics from test failures are now reported as the left value and the right value, instead of the seen and wanted values. This allows one to write tests without worrying about the order of arguments to the is_* functions, which turned out to be hard to remember and mostly a waste of energy to keep consistent. And second, I fixed an old bug with NULL handling in is_string that meant a NULL might compare as equal to a literal string "(null)". This test is now done properly.
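The usual shape of such a fix (a general sketch of the pattern, not necessarily the exact code now in C TAP Harness) is to handle NULL explicitly before the strings ever reach strcmp or a %s format, where glibc's "(null)" placeholder can leak into the comparison:

```c
#include <string.h>

/* NULL-safe string equality: two NULLs compare equal, a NULL never equals
 * a real string (including the literal "(null)"), and otherwise we defer
 * to strcmp. */
int strings_equal(const char *left, const char *right) {
    if (left == NULL || right == NULL)
        return left == right;
    return strcmp(left, right) == 0;
}
```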

The relationship between C TAP Harness and rra-c-util got a bit fuzzier in this release since I wanted to import more standard tests from rra-c-util and ended up importing the Perl test framework. This increases the amount of stuff that's in the C TAP Harness distribution that's canonically maintained in rra-c-util. I considered moving the point of canonical maintenance to C TAP Harness, but it felt rather out of scope for this package, so decided to live with it. Maybe I should just merge the two packages, but I know a lot of people who use pieces of C TAP Harness but don't care about rra-c-util, so it seems worthwhile to keep them separate.

I did replace the tests/docs/pod-t and tests/docs/pod-spelling-t tests in C TAP Harness with the rra-c-util versions in this release since I imported the Perl test framework anyway. I think most users of the old C TAP Harness versions will be able to use the rra-c-util versions without too much difficulty, although it does require importing the Perl modules from rra-c-util into a tests/tap/perl directory.

You can get the latest release from the C TAP Harness distribution page.

Planet DebianThorsten Alteholz: My Debian Activities in April 2018

FTP master

This month I accepted 145 packages and rejected 5 uploads. The overall number of packages that got accepted this month was 260.

Debian LTS

This was my forty-sixth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 16.25h. During that time I did LTS uploads of:

    [DLA 1353-1] wireshark security update for 12 CVEs
    [DLA 1364-1] openslp-dfsg security update for one CVE
    [DLA 1367-1] slurm-llnl security update for one CVE

I also started to work on the next bunch of wireshark CVEs and I intend to upload packages for Jessie and Stretch as well.
Other packages I started are krb5 and cups.

Last but not least I did a week of frontdesk duties, where I checked lots of CVEs for their impact on Wheezy.

Other stuff

During April I did uploads of …

  • pescetti to fix a FTBFS with Java 9 due to -source/-target only
  • salliere to fix a FTBFS with Java 9 due to -source/-target only
  • libb64 to fix a FTCBFS
  • chktex to fix a FTBFS with TeX Live 2018

Thanks to all the people who sent patches!

I also finished the libosmocore transition this month by uploading the following osmocom packages to unstable, alongside fixing some bugs that our tireless QA tools detected:

Further, I uploaded osmo-fl2k just two days after its release. It is a nice piece of software that enables a USB-VGA converter to be used as a transmitter for all kinds of signals. Of course, just use it in a shielded room!

As Nicolas Mora, the upstream author of the oauth2 server glewlwyd, wanted to be more involved in Debian packaging, I sponsored some of his first packages. They are all new versions of his software:

I also uploaded a new upstream version of dateutils.

Last but not least I worked on some apcupsd bugs and I am down to 16 bugs now.

Planet Linux AustraliaDavid Rowe: FreeDV 700D and SSB Comparison

Mark, VK5QI has just performed an SSB versus FreeDV 700D comparison between his home in Adelaide and the Manly Warringah Radio Society WebSDR in Sydney, about 1200km away. The band was 40m, and the channel very poor, with some slow fading. Mark used SVN revision 3581, which he built himself on Ubuntu, with an interleaver setting (Tools-Options menu) of 1 frame. Transmit power for SSB and FreeDV 700D was about the same.

I’m still finishing off FreeDV 700D integration and tuning the mode – but this is a very encouraging start. Thanks Mark!

Don MartiUnlocking the hidden European mode in web ads

It would make me really happy to be able to yellow-list Google web ads in Privacy Badger. (Yellow-listed domains are not blocked, but have their cookies restricted in order to cut back on cross-site tracking.) That's because a lot of news and cultural sites use DoubleClick for Publishers and other Google services to deliver legit, context-based advertising. Unfortunately, as far as I can tell, Google mixes in-context ads with crappy, spam-like, targeted stuff. What I want is something like Doc Searls style ads: Just give me ads not based on tracking me.

Until now, there has been no such setting. There could have been, if Do Not Track (DNT) had turned out to be a thing, but no. But there is some good news. Instead of one easy-to-use DNT, sites are starting to give us harder-to-find, but still usable, settings, in order to enable GDPR-compliant ads for Europe. Here's Google's: Ads personalization settings in Google’s publisher ad tags - DoubleClick for Publishers Help.

Wait a minute? Google respects DNT now?

Sort of. The GDPR-compliant terms written by Google aren't exactly the same as EFF's privacy-friendly Do Not Track (DNT) Policy, but close enough. (All these different tracking policies are reminding me of open source licenses for some reason.) The catch is that as an end user, you can't just turn on Google's European mode. You have to do some JavaScript. I think I figured out how to do this in a simple browser extension to unlock secret European status.
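For the record, here is roughly what such an extension's content script would do. setRequestNonPersonalizedAds comes from the DFP help page linked above; the wrapper function around it is my own sketch:

```javascript
// Ask Google Publisher Tags (GPT) for non-personalized ("European mode")
// ads. GPT queues commands on googletag.cmd until the library loads, so
// this is safe to run before the page's own ad code executes.
function requestNonPersonalizedAds(w) {
  w.googletag = w.googletag || {};
  w.googletag.cmd = w.googletag.cmd || [];
  w.googletag.cmd.push(function () {
    w.googletag.pubads().setRequestNonPersonalizedAds(1);
  });
}

// In a WebExtension, a content script would call this on every page:
//   requestNonPersonalizedAds(window);
```

The command-queue pattern is the key: the extension doesn't need to wait for the ad library, it just leaves the request where GPT will find it.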

Google doesn't appear to have their European mode activated yet, so I added a do-nothing "European mode" to the Aloodo project, for testing. I'm not able to yellow-list Google yet, but when GDPR takes effect later this month I'll test it some more.

In the meantime, I'll keep looking for other examples of hidden European mode, and see if I can figure out how to activate them.

Planet DebianRuss Allbery: DocKnot 1.05

DocKnot is the program that generates the top-level web pages and README files for my software packages (and eventually should take over generation of the LICENSE files, doing release management, and may eventually incorporate the tools for the macro language I use to generate my web pages).

For this release, I did a bit more research about the little badges that people have started displaying in their README files on GitHub and in a few other places (Rust's, for instance). DocKnot can now generate Travis-CI badges, and also uses badges for things I upload to CPAN. I will probably add more as I work on other packages that have meaningful badges.

This release also moves the documentation of environment variables to set to enable more parts of the test suite to the testing section of README files. I have no idea why I put that in the requirements section originally; this should make more sense.

There are a few additions to support generating web pages for remctl, which previously wasn't using DocKnot, and a bug fix for README generation when there are additional bootstrapping instructions. (Previous versions miswrapped the resulting paragraph.)

DocKnot now requires Perl 5.24 or later, since right now I'm the only user and don't need to support anything older than Debian stable, and that way I can start using the neat new $ref->@* Perl syntax instead of the much noisier @{$ref} syntax.

You can get the latest release from the DocKnot distribution page.

Planet DebianNorbert Preining: Debian/TeX Live 2018.20180505-1

The first big bunch of updates of TeX Live 2018. During the freeze for DVD production, several bugs were found and fixed, in particular compatibility of csquotes with the shiny new LaTeX release, as well as some other related issues. That hopefully will fix most if not all build failures that were introduced with the TL2018 upload.

I guess there will be some bad bugs still lurking around, but now that regular updates of TeX Live are done again on a daily basis, they should be fixed rather soon.


New packages

bath-bst, bezierplot, cascade, citeref, clrdblpg, colophon, competences, dsserif, fduthesis, includernw, kurdishlipsum, latex-via-exemplos, libertinus-otf, manyind, milsymb, modulus, musikui, stix2-otf, stix2-type1.

Updated packages

academicons, acmart, adjustbox, aleph, apnum, archaeologie, asymptote, asymptote.x86_64-linux, babel, babel-french, babel-ukrainian, beamerposter, bib2gls, biblatex-gb7714-2015, biblatex-publist, bibtexperllibs, bxjscls, caption, chngcntr, cleveref, colortbl, context-filter, context-handlecsv, context-vim, cooking-units, cslatex, csquotes, ctex, datatool, datetime2-estonian, datetime2-greek, datetime2-hebrew, datetime2-icelandic, datetime2-ukrainian, dccpaper, doclicense, exercisebank, factura, findhyph, fithesis, fonts-tlwg, gbt7714, genealogytree, glossaries-extra, graph35, gzt, handin, hecthese, hyph-utf8, hyphen-bulgarian, hyphen-german, hyphen-latin, hyphen-thai, ifmtarg, jlreq, ketcindy, koma-script-examples, l3kernel, l3packages, langsci, latex, latex-bin, latex2e-help-texinfo, latexindent, latexpand, libertinus, libertinust1math, lshort-german, ltximg, lua-check-hyphen, luamplib, luaotfload, luatexko, lwarp, lyluatex, m-tx, make4ht, marginnote, markdown, mathalfa, newtx, newunicodechar, nicematrix, novel, nwejm, pgfornament-han, pgfplots, pkgloader, platex, plex-otf, polexpr, polyglossia, pst-func, pstricks, regexpatch, reledmac, roboto, robustindex, siunitx, siunitx, skdoc, stix, t2, tetex, tex4ebook, tex4ht, texdef, texdoc, texlive-cz, texlive-sr, thuthesis, tikzsymbols, tools, univie-ling, updmap-map, uplatex, xcharter, xecjk, xellipsis, xetex, xetexko, yathesis, zhlipsum, zxjafont, zxjatype.

Planet DebianJoey Hess: more fun with reactive-banana-automation

My house knows when people are using the wifi, and keeps the inverter on so that the satellite internet is powered up, unless the battery is too low. When nobody is using the wifi, the inverter turns off, except when it's needed to power the fridge.

Sounds a little complicated, doesn't it? The code to automate that using my reactive-banana-automation library is almost shorter than the English description, and certainly clearer.

inverterPowerChange :: Sensors t -> MomentAutomation (Behavior (Maybe PowerChange))
inverterPowerChange sensors = do
    lowpower <- lowpowerMode sensors
    fridgepowerchange <- fridgePowerChange sensors
    wifiusers <- numWifiUsers sensors
    return $ react <$> lowpower <*> fridgepowerchange <*> wifiusers
  where
    react lowpower fridgepowerchange wifiusers
        | lowpower = Just PowerOff
        | wifiusers > 0 = Just PowerOn
        | otherwise = fridgepowerchange

Of course, there are complexities under the hood, like where does numWifiUsers come from? (It uses inotify to detect changes to the DHCP leases file, and tracks when leases expire.) I'm up to 1200 lines of custom code for my house, only 170 lines of which are control code like the above.
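Joey's implementation is Haskell, but the lease-counting idea is easy to sketch on its own. Here is a hypothetical Python version (not his code), assuming a dnsmasq-style leases file in which each line begins with the lease expiry time as a Unix timestamp:

```python
import time


def count_active_leases(leases_text, now=None):
    """Count unexpired leases in dnsmasq-style leases text.

    Each line is assumed to start with the lease expiry time as a Unix
    timestamp, e.g. "1525400000 aa:bb:cc:dd:ee:ff 192.168.1.50 phone *".
    """
    now = time.time() if now is None else now
    active = 0
    for line in leases_text.splitlines():
        fields = line.split()
        if not fields:
            continue
        # First field is the expiry time; a lease still in the future
        # counts as an active wifi user.
        if int(fields[0]) > now:
            active += 1
    return active
```

In the real system this would be re-run whenever inotify reports that the leases file changed, and again when the soonest expiry time passes.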

But that code is the core, and it's where most of the bugs would be. The goal is to avoid most of the bugs by using FRP and Haskell the way I have, and the rest by testing.

For testing, I'm using doctest to embed test cases along with the FRP code. I designed reactive-banana-automation to work well with this style of testing. For example, here's how it determines when the house needs to be in low power mode, including the tests:

-- | Start by assuming we're not in low power mode, to avoid
-- entering it before batteryVoltage is available.
-- If batteryVoltage continues to be unavailable, enter low power mode for
-- safety.
-- >>> runner <- observeAutomation (runInverterUnless lowpowerMode) (mkSensors (pure ()))
-- >>> runner $ \sensors -> gotEvent (dhcpClients sensors) []
-- []
-- >>> runner $ \sensors -> sensorUnavailable (batteryVoltage sensors)
-- [InverterPower PowerOff]
-- >>> runner $ \sensors -> batteryVoltage sensors =: Volts 25
-- [InverterPower PowerOn]
-- >>> runner $ \sensors -> batteryVoltage sensors =: Volts 20
-- [InverterPower PowerOff]
-- >>> runner $ \sensors -> batteryVoltage sensors =: Volts 25
-- [InverterPower PowerOn]
-- >>> runner $ \sensors -> sensorUnavailable (batteryVoltage sensors)
-- [InverterPower PowerOff]
lowpowerMode :: Sensors t -> MomentAutomation (Behavior Bool)
lowpowerMode sensors = automationStepper False
    =<< fmap calc <$> getEventFrom (batteryVoltage sensors)
  where
    -- Below 24.0 (really 23.5 or so) is danger zone for lead acid.
    calc (Sensed v) = v < Volts 24.1
    calc SensorUnavailable = True

The sensor data is available over http, so I can run this controller code in test mode, on my laptop, and observe how it reacts to real-world circumstances.

joey@darkstar:~/src/homepower>./controller test
InverterPower PowerOn
FridgeRelay PowerOff

Previously: my haskell controlled offgrid fridge


Planet DebianRuss Allbery: remctl 3.15

As promised in the security advisory accompanying remctl 3.14, this release of remctl (and upcoming releases of C TAP Harness and rra-c-util) implements proper valgrind testing. In the process, I found and fixed a potential data loss bug: a server-side command accepting input on standard input that exited before consuming all of its input could have its output truncated due to a logic bug in the remctld server.
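The general hazard is easy to demonstrate outside remctl. Here is a hedged Python illustration (not remctl code): a child process exits before consuming all of its input, so writes to its stdin fail with EPIPE, and the parent must keep draining stdout anyway rather than bailing out and truncating the output:

```python
import subprocess


def run_with_input(cmd, data):
    """Feed data to cmd's stdin and return its stdout, without losing
    output when the child exits before reading all of its input (the
    general failure mode behind the remctld truncation bug)."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    # communicate() writes stdin and drains stdout concurrently, and
    # tolerates EPIPE if the child stops reading early, so whatever
    # output the child produced is still returned in full.
    out, _ = proc.communicate(data)
    return out


# A child that reads only 4 bytes and exits; we send far more.
print(run_with_input(['head', '-c', '4'], b'x' * 1_000_000))
# prints: b'xxxx'
```

A naive loop that writes all of stdin before reading any stdout would either deadlock or, with sloppy error handling on the failed write, discard the output the child had already produced.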

This release also adds some more protocol verification (I didn't see any bugs; this just tightens things out of paranoia) and much-improved maintainer support for static analysis and compiler warning checks.

You can get the latest release from the remctl distribution page.

Planet DebianSune Vuorela: Modern C++ and Qt


– ’cause raw new’s are bad.

Rondam RamblingsIn your face, liberal haters!

The New York Times reports that California is now the world's 5th largest economy.  Only the U.S. as a whole, China, Japan and Germany are bigger.  On top of that, the vast majority of that growth came from the coastal areas, where the liberals live. Meanwhile, in Kansas, the Republican experiment in stimulating economic growth by cutting taxes has gone down in screaming flames: The experiment with


CryptogramFriday Squid Blogging: US Army Developing 3D-Printable Battlefield Robot Squid

The next major war will be super weird.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux AustraliaDavid Rowe: FreeDV 1600 Sample Clock Offset Bug

So I’m busy integrating FreeDV 700D into the FreeDV GUI program. The 700D modem works on larger frames (160ms) than the previous modes (e.g. 20ms for FreeDV 1600) so I need to adjust FIFO sizes.

As a reference I tried FreeDV 1600 between two laptops (one tx, one rx) and noticed it was occasionally losing frame sync, generating bit errors, and producing the occasional bloop in the audio. After a little head scratching I discovered a bug in the FreeDV 1600 FDMDV modem! Boy, is my face red.

The FDMDV modem was struggling with sample clock differences between the mod and demod. I think the bug was introduced when I did some (too) clever refactoring to reduce FDMDV memory consumption while developing the SM1000 back in 2014!

Fortunately I have a trail of unit test programs, leading back from FreeDV GUI, to the FreeDV API (freedv_tx and freedv_rx), then individual unit tests for each modem (fdmdv_mod/fdmdv_demod), and finally Octave simulation code (fdmdv.m, fdmdv_demod.m and friends) for the modem.

Octave (or an equivalent vector based scripting language like Python/numpy) is much easier to work with than C for complex DSP problems. So after a little work I reproduced the problem using the Octave version of the FDMDV modem – bit errors happening every time there was a timing jump.

The modulator sends parallel streams of symbols at about 50 baud. These symbols are output at a sample rate of 8000 Hz. Part of the demodulator’s job is to estimate the best place to sample each received modem symbol; this is called timing estimation. When the tx and rx are separate, the two sample clocks are slightly different – your 8000 Hz clock will be a few Hz different to mine. This means the timing estimate is a moving target, and occasionally we need to compensate by taking a few more or a few less samples from the 8000 Hz sample stream.

In the plot below the Octave demodulator was fed with a signal that is transmitted at 8010 Hz instead of the nominal 8000 Hz. So the tx is sampling faster than the rx. The y axis is the timing estimate in samples, x axis time in seconds. For FreeDV 1600 there are 160 samples per symbol (50 baud at 8 kHz). The timing estimate at the rx drifts forwards until we hit a threshold, set at +/- 40 samples (quarter of a symbol). To avoid the timing estimate drifting too far, we take a one-off larger block of samples from the input, the timing takes a step backwards, then starts drifting up again.
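As a rough illustration (a toy Python model, not the Octave or C modem code), the sawtooth in that plot can be reproduced with nothing more than the drift rate and the threshold:

```python
def simulate_timing(sample_rate_error_hz=10, fs=8000, m=160, seconds=200):
    """Toy model of the FreeDV 1600 timing estimate.

    With the tx clock sample_rate_error_hz faster than the rx clock, the
    estimate drifts by (error/fs)*m samples per symbol until it hits the
    +/- 40 sample threshold (a quarter of a symbol), when the demod takes
    a one-off larger block of input and the estimate steps backwards.
    """
    drift_per_symbol = sample_rate_error_hz / fs * m  # 0.2 samples/symbol
    timing = 0.0
    history = []
    for _ in range(int(seconds * fs / m)):  # one iteration per symbol
        timing += drift_per_symbol
        if timing > 40:
            timing -= m // 4   # take m/4 extra samples in one go
        elif timing < -40:
            timing += m // 4
        history.append(timing)
    return history
```

Plotting the returned history gives the same ramp-and-reset shape as the Octave plot: a slow climb to +40 samples, a step back, and another climb.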

Back to the bug. After some head scratching, messing with buffer shifts, and rolling back phases I eventually fixed the problem in the Octave code. Next step is to port the code to C. I used my test framework that automatically compares a bunch of vectors (states) in the Octave code to the equivalent C code:

octave:8> system("../build_linux/unittest/tfdmdv")
sizeof FDMDV states: 40032 bytes
ans = 0
octave:9> tfdmdv
tx_bits..................: OK
tx_symbols...............: OK
tx_fdm...................: OK
pilot_lut................: OK
pilot_coeff..............: OK
pilot lpf1...............: OK
pilot lpf2...............: OK
S1.......................: OK
S2.......................: OK
foff_coarse..............: OK
foff_fine................: OK
foff.....................: OK
rxdec filter.............: OK
rx filt..................: OK
env......................: OK
rx_timing................: OK
rx_symbols...............: OK
rx bits..................: OK
sync bit.................: OK
sync.....................: OK
nin......................: OK
sig_est..................: OK
noise_est................: OK

passes: 46 fails: 0

Great! This system really lets me move fast once the Octave code is written and tested. Next step is to test the C version of the FDMDV modem using the command line programs. Note how I used sox to insert a sample clock offset by changing the sample rate of the raw sample stream:

build_linux/src$ ./fdmdv_get_test_bits - 30000 | ./fdmdv_mod - - | sox -t raw -r 8000 -s -2 - -t raw -r 7990 - | ./fdmdv_demod - - 14 demod_dump.txt | ./fdmdv_put_test_bits -
bits 29568  errors 0  BER 0.0000
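The BER line printed above is simply errors divided by total bits; a hypothetical Python helper (for illustration, not the C tool) that produces the same style of report:

```python
def bit_error_rate(tx_bits, rx_bits):
    """Compare two equal-length bit sequences and report them in the
    style of fdmdv_put_test_bits: total bits, errors, BER."""
    assert len(tx_bits) == len(rx_bits)
    errors = sum(t != r for t, r in zip(tx_bits, rx_bits))
    return len(tx_bits), errors, errors / len(tx_bits)


bits, errors, ber = bit_error_rate([1, 0, 1, 1], [1, 1, 1, 0])
print("bits %d  errors %d  BER %.4f" % (bits, errors, ber))
# prints: bits 4  errors 2  BER 0.5000
```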

Zero errors, despite 10Hz sample clock offset. Yayyyyy. The C demodulator outputs a bunch of vectors that can be plotted with an Octave helper program:

octave:6> fdmdv_demod_c("../build_linux/src/demod_dump.txt",28000)

The FDMDV modem is integrated with Codec 2 in the FreeDV API. This can be tested using the freedv_tx/freedv_rx programs. For convenience, I generated some 60 second test files at different sample rates. Here is how I test using the freedv_rx program:

./freedv_rx 1600 ~/Desktop/ve9qrp_1600_8010.raw - | aplay -f S16

The output audio sounds good, no bloops, and by examining the freedv_rx_log.txt file I can see the demodulator didn’t lose sync. Cool.

Here is a table of the samples I used for testing:

No clock offset
Simulates Tx sample rate 10 Hz slower than Rx
Simulates Tx sampling 10 Hz faster than Rx

Finally, the FreeDV API is linked with the FreeDV GUI program. Here is a video of me testing different sample clock offsets using the raw files in the table above. Note there is no audio in this video as my screen recorder fights with FreeDV for use of sound cards. However the decoded FreeDV audio should be uninterrupted, there should be no re-syncs, and zero bit errors:

The fix has been checked into codec2-dev SVN rev 3556, and will make its way into FreeDV GUI 1.3, to be released in late May 2018.

Reading Further

FDMDV modem
Steve Ports an OFDM modem from Octave to C, some more on the Octave/C automated test framework and porting complex DSP algorithms.
Testing an FDMDV Modem. Early blog post on the FDMDV modem with some more discussion of sample clock offsets
Timing Estimation for PSK modems, talks a little about how we generate a timing estimate

Planet DebianJonathan McDowell: The excellent selection of Belfast Tech conferences

Yesterday I was lucky enough to get to speak at BelTech, giving what I like to think of as a light-hearted rant entitled “10 Stupid Reasons You’re Not Using Free Software”. It’s based on various arguments I’ve heard throughout my career about why companies shouldn’t use or contribute to Free Software, and, as I’m sure you can guess, I think they’re mostly bad arguments. I only had a 20 minute slot for it, which was probably about right, and it seemed to go down fairly well. Normally the conferences I would pitch talks to would end up with me preaching to the converted, but I felt this one provided an audience who were probably already using Free software but hadn’t thought about it that much.

And that got me to thinking “Isn’t it fantastic that we have this range of events and tech groups in Belfast?”. I remember the days when the only game in town was the Belfast LUG (now on something like its 5th revival and still going strong), but these days you could spend every night of the month at a different tech event covering anything from IoT to Women Who Code to DevOps to FinTech. There’s a good tech community that’s built up, with plenty of cross over between the different groups.

An indicator of that is the number of conferences happening in the city, with many of them now regular fixtures in the annual calendar. In addition to BelTech I’ve already attended BelFOSS and Women Techmakers this year. Product Camp Belfast is happening today. NIDevConf is just over a month away (I’ll miss this year due to another commitment, but thoroughly enjoyed last year). WordCamp Belfast isn’t the sort of thing I’d normally notice, but the opportunity to see Heather Burns speak on the GDPR is really tempting. Asking around about what else is happening turned up B-Sides, Big Data Belfast and DigitalDNA.

How did we end up with such a vibrant mix of events (and no doubt many more I haven’t noticed)? They might not be major conferences that pull in an international audience, but in some ways I find that more heartening - there’s enough activity in the local tech scene to make this number of events make sense. And I think that’s pretty cool.

CryptogramDetecting Laptop Tampering

Micah Lee ran a two-year experiment designed to detect whether or not his laptop was ever tampered with. The results are inconclusive, but demonstrate how difficult it can be to detect laptop tampering.

Worse Than FailureError'd: Version-itis

"No thanks, I'm holding out for version greater than or equal to 3.6 before upgrading," writes Geoff G.


"Looks like Twilio sent me John Doe's receipt by mistake," wrote Charles L.


"Little do they know that I went back in time and submitted my resume via punch card!" Jim M. writes.


Richard S. wrote, "I went to request a password reset from an old site that is sending me birthday emails, but it looks like the reCAPTCHA is no longer available and the site maintainers have yet to notice."


"It's nice to see that this new Ultra Speed Plus™ CD burner lives up to its name, but honestly, I'm a bit scared to try some of these," April K. writes.


"Sometimes, like Samsung's website, you have to accept that it's just ok to fail sometimes," writes Alessandro L.


[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianDaniel Pocock: GoFundMe: errors and bait-and-switch

Yesterday I set up a crowdfunding campaign to purchase some equipment for the ham radio demo at OSCAL.

It was the first time I tried crowdfunding and the financial goal didn't seem very big (a good quality AGM battery might only need EUR 250) so I only spent a little time looking at some of the common crowdfunding sites and decided to try GoFundMe.

While the campaign setup process initially appeared quite easy, it quickly ran into trouble after the first donation came in. As I started setting up bank account details to receive the money, errors started appearing:

I tried to contact support and filled in the form, typing a message about the problem. Instead of sending my message to support, it started trying to show me long lists of useless documents. Finally, after clicking through several screens of unrelated nonsense, another contact form appeared and the message I had originally typed had been lost in their broken help system and I had to type another one. It makes you wonder, if you can't even rely on a message you type in the contact form being transmitted accurately, how can you rely on them to forward the money accurately?

When I finally got a reply from their support department, it smelled more like a phishing attack, asking me to give them more personal information and email them a high resolution image of my passport.

If that was really necessary, why didn't they ask for it before the campaign went live? I felt like they were sucking people in to get money from their friends and then, after the campaign gains momentum, holding those beneficiaries to ransom and expecting them to grovel for the money.

When a business plays bait-and-switch like this and when their web site appears to be broken in more ways than one (both the errors and the broken contact form), I want nothing to do with them. I removed the GoFundMe links from my blog post and replaced them with direct links to Paypal. Not only does this mean I avoid the absurdity of emailing copies of my passport, but it also cuts out the five percent fee charged by GoFundMe, so more money reaches the intended purpose.

Another observation about this experience is the way GoFundMe encourages people to share the link to their own page about the campaign and not the link to the blog post. Fortunately in most communication I had with people about the campaign I gave them a direct link to my blog post and this makes it easier for me to change the provider handling the money by simply removing links from my blog to GoFundMe.

While the funding goal hasn't been reached yet, my other goal, learning a little bit about the workings of crowdfunding sites, has been helped along by this experience. Before trying to run something like this again I'll look a little harder for a self-hosted solution that I can fully run through my blog.

I've told GoFundMe to immediately refund all money collected through their site so donors can send money directly through the Paypal donate link on my blog. If you would like to see the ham radio station go ahead at OSCAL, please donate, I can't take my own batteries with me by air.

Planet Linux AustraliaSimon Lyall: Audiobooks – April 2018

Viking Britain: An Exploration by Thomas Williams

Pretty straightforward. Covers the up-to-date research (no Winged Helmets 😢) and is easy to follow (easier if you have a map of the UK). 7/10

Contact by Carl Sagan

I’d forgotten how different it was from the movie in places. A few extra characters and plot twists, and many more details and explanations of the science. 8/10

The Path Between the Seas: The Creation of the Panama Canal, 1870-1914 by David McCullough

My monthly McCullough book. Great as usual. Good picture of the project and people. 8/10

Winter World: The Ingenuity of Animal Survival by Bernd Heinrich

As per the title this spends much of the time on [varied strategies for] Winter adaptation vs Summer World’s more general coverage. A great listen 8/10

A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin

Great overview of the Apollo missions. The Author interviewed almost all the astronauts. Lots of details about the missions. Excellent 9/10

Walkaway by Cory Doctorow

Near future Sci Fi. Similar feel to some of his other books like Makers. Switches between characters & audiobook switches narrators to match. Fastforward the Sex Scenes 💤. Mostly works 7/10

The Neanderthals Rediscovered: How Modern Science Is Rewriting Their Story by Michael A. Morse

Pretty much what the subtitle advertises. Covers discoveries from the last 20 years which make other books out of date. Tries to be Neanderthals-only. 7/10

The Great Quake: How the Biggest Earthquake in North America Changed Our Understanding of the Planet by Henry Fountain

Straightforward story of the 1964 Alaska Earthquake. Follows half a dozen characters & concentrates on worst damaged areas. 7/10


Rondam RamblingsI don't know where I'm a gonna go when the volcano blows

Hawaii's Kilauea volcano is erupting.  So is one in Vanuatu.  And there is increased activity in Yellowstone.  Hang on to your hats, folks, Jesus's return must be imminent. (In case you didn't know, the title is a line from a Jimmy Buffett song.)

Planet DebianNorbert Preining: Onyx Boox Note 10.3 – first impressions

I recently got my hands on a new gadget, the Onyx Boox Note. I have now owned a Kindle Paperwhite (2nd gen), a Kobo Glo, a Kobo GloHD, and now the Onyx Boox Note. The prime reasons for me getting this device were two: (i) the ability to mark up pdfs and epubs with comments (something I need for research, review, checking,…), and (ii) the great pdf readability (automatic crop support), which is of course also related to the bigger screen.

The Note main screen gives the last read book and some others from the library, plus direct access to some apps. One can scroll through the most recently read books at the top by swiping right. I would have preferred a bit smaller icons on the big screen to see more of the books, maybe in a future firmware version.

Not too many applications are available, but the Play Store is there and one can get most programs. Unfortunately it seems that k9-mail – my main mail program on Android – does not support the Note.

Reading epubs is quite normal an experience. Nothing to complain here. Usual settings etc.

Where the Note is great is PDFs, which are a huge pain on my smaller devices. Neither the Kindle nor the Kobo have decent PDF support in my opinion, while the Note allows for auto-cropping (as seen in the image below), as well as manual cropping and several other features. Simply great.

Another wonderful feature is that one can scribble directly in the pdf or epub, and the notes will be saved. In addition to that, there is also a commenting mode in landscape format with the document on the left and the notes on the right, see below. Very useful, both of the modes.

Besides adding notes to pdfs and epubs, one can also have a notebook. Here is the Notes main interface screen, which allows selecting previous notes, adding new, and some more operations (I still don’t know the function of most icons).

Here is a simple example of scribbling around. Surprisingly good. I will see how much my normal paper note taking will be replaced by this.

Note taking and markup can of course be done with the finger, but the pen that comes with the device is much better. The sleeve that comes with the device has a holder for the pen, so it could be around wherever you go.

Finally some hardware specs from one of the hardware info programs.

I have used the Note now for two weeks for reading, pdf markup, and a bit of note taking. For now I have a very good impression, good battery run time, durable feeling. What I am missing is a backlight for reading in the night. I guess with more usage time I will find more points to criticize, but for now I think it was an excellent purchase.

Planet DebianJunichi Uekawa: Seems like my raspberry pi root filesystems break after about 2 years.

Seems like my raspberry pi root filesystems break after about 2 years. Presumably because I have everything including /var/log there. Fails to write. Is there a good way to monitor wear like SMART for HDD ? Quick search didn't give me much.


Planet Linux AustraliaMichael Still: How to make a privileged call with oslo privsep


Once you’ve added oslo privsep to your project, how do you make a privileged call? It’s actually really easy to do. In this post I will assume you already have privsep running for your project, which at the time of writing limits you to OpenStack Nova in the OpenStack universe.

The first step is to write the code that will run with escalated permissions. In Nova, we have chosen to only have one set of escalated permissions, so it’s easy to decide which set to use. I’ll document how we reached that decision and alternative approaches in another post.

In Nova, all code that runs with escalated permissions is in the nova/privsep directory, which is a pattern I’d like to see repeated in other projects. This is partially because privsep maintains a whitelist of methods that are allowed to be run this way, but it’s also because it makes it very obvious to callers that the code being called is special in some way.

Let’s assume that we’re going to add a simple method which manipulates the filesystem of a hypervisor node as root. We’d write a method like this in a file inside nova/privsep:

import nova.privsep


@nova.privsep.sys_admin_pctxt.entrypoint
def update_motd(message):
    with open('/etc/motd', 'w') as f:
        f.write(message)
This method updates /etc/motd, which is the text which is displayed when a user interactively logs into the hypervisor node. “motd” stands for “message of the day” by the way. Here we just pass a new message of the day which clobbers the old value in the file.

The important thing is that entrypoint decorator at the start of the method. That’s how privsep decides to run this method with escalated permissions, and decides what permissions to use. In Nova at the moment we only have one set of escalated permissions, which we called sys_admin_pctxt because we’re artists. I’ll discuss in a later post how we came to that decision and what the other options were.

We can then call this method from anywhere else in Nova like this:

import nova.privsep.motd


nova.privsep.motd.update_motd('This node is currently idle')

Note that we do imports for privsep code slightly differently. We always import the entire path, instead of creating a shortcut to just the module we’re using. In other words, we don’t do:

from nova.privsep import motd


motd.update_motd('This node is a banana')

The above code would work, but is frowned on because it is less obvious here that the update_motd() method runs with escalated permissions — you’d have to go and read the imports to tell that.

That’s really all there is to it. The only other thing to mention is that there is a bit of a wart — code with escalated permissions can only use Nova code that is within the privsep directory. That’s been a problem when we’ve wanted to use a utility method from outside that path inside escalated code. The restriction happens for good reasons, so instead what we do in this case is move the utility into the privsep directory and fix up all the other callers to call the new location. It’s not perfect, but it’s what we have for now.

There are some simple review criteria that should be used to assess a patch which implements new code that uses privsep in OpenStack Nova. They are:

  • Don’t use imports which create aliases. Use the “import nova.privsep.motd” form instead.
  • Keep methods with escalated permissions as simple as possible. Remember that these things are dangerous and should be as easy to understand as possible.
  • Calculate paths to manipulate inside the escalated method — so, don’t let someone pass in a full path and the contents to write to that file as root, instead let them pass in the name of the network interface or whatever that you are manipulating and then calculate the path from there. That will make it harder for callers to use your code to clobber random files on the system.

Adding new code with escalated permissions is really easy in Nova now, and much more secure and faster than it was when we only had sudo and root command lines to do these sorts of things. Let me know if you have any questions.


The post How to make a privileged call with oslo privsep appeared first on Made by Mikal.

Krebs on SecurityTwitter to All Users: Change Your Password Now!

Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without protecting them with any sort of encryption technology that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.

Or if you don’t trust links in blogs like this (I get it) go to and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.

“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”

Agrawal explains that Twitter normally masks user passwords through a state-of-the-art hashing technology called “bcrypt,” which replaces the user’s password with a random-looking set of numbers and letters that is stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.
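Twitter uses bcrypt; the same store-a-salted-slow-hash-and-compare-later idea can be sketched with the Python standard library's PBKDF2 (an illustration of the technique, not Twitter's code):

```python
import hashlib
import hmac
import os


def hash_password(password, salt=None):
    """Store only a salted, deliberately slow hash of the password,
    never the password itself. (Twitter uses bcrypt; PBKDF2 is shown
    here because it ships in the Python standard library.)"""
    salt = os.urandom(16) if salt is None else salt
    digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return salt, digest


def verify_password(password, salt, digest):
    """Re-derive the hash from the attempted password and compare in
    constant time; the plaintext is never stored or logged."""
    candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, 100_000)
    return hmac.compare_digest(candidate, digest)
```

The bug Agrawal describes sits outside this scheme entirely: the plaintext was written to a log before the hashing step ran, so the strength of the hash never came into play.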

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal

Twitter advises:
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can. seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, or even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for

What KrebsOnSecurity and other Twitter users got when we tried to visit and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.

If for some reason you can’t reach, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips are here.

I have sent some more specific questions about this incident in to Twitter. More updates as available.

Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again, the new password worked (and the old one didn’t anymore). Then it prompted me to enter a one-time code from the app (you do have 2-factor set up on Twitter, right?). Password successfully changed!

Planet DebianBenjamin Mako Hill: Climbing Mount Rainier

Mount Rainier is an enormous glaciated volcano in Washington state. It’s 4,392 meters tall (14,410 ft) and extraordinarily prominent. The mountain is 87 km (54 mi) away from Seattle. On clear days, it dominates the skyline.

Drumheller Fountain and Mt. Rainier on the University of Washington Campus (Photo by Frank Fujimoto)

Rainier’s presence has shaped the layout and structure of Seattle. Important roads are built to line up with it. The buildings on the University of Washington’s campus, where I work, are laid out to frame it along the central promenade. People in Seattle typically refer to Rainier simply as “the mountain.” It is common to hear Seattleites ask, “is the mountain out?”

Having grown up in Seattle, I have a deep emotional connection to the mountain that’s difficult to explain to people who aren’t from here. I’ve seen Rainier thousands of times and every single time it takes my breath away. Every single day when I bike to work, I stop along UW’s “Rainier Vista” and look back to see if the mountain is out. If it is, I always—even if I’m running late for a meeting—stop for a moment to look at it. When I lived elsewhere and would fly to visit Seattle, seeing Rainier above the clouds from the plane was the moment that I felt like I was home.

Given this connection, I’ve always been interested in climbing Mt. Rainier. Doing so typically takes at least a couple of days and is difficult; about half of the people who attempt it fail to reach the top. For me, climbing Rainier required an enormous amount of training and gear because, until recently, I had no experience with mountaineering. I’m not particularly interested in climbing mountains in general. I am interested in Rainier.

On Tuesday, Mika and I made our first climbing attempt and we both successfully made it to the summit. Due to the -15°C (5°F) temperatures and 88 kph (55 mph) winds, I couldn’t get a picture at the top. But I feel like I’ve built a deeper connection with an old friend.

Other than the picture from UW campus, photos were all from my climb and taken by (in order): Jennifer Marie, Jonathan Neubauer, Mika Matsuzaki, Jonathan Neubauer, Jonathan Neubauer, Mika Matsuzaki, and Jake Holthaus.

Rondam RamblingsA quantum mechanics puzzle

Time to take a break from politics and sociology and geek out about quantum mechanics for a while. Consider a riff on a Michelson-style interferometer that looks like this: A source of laser light shines on a half-silvered mirror angled at 45 degrees (the grey rectangle).  This splits the beam in two.  The two beams are in actuality the same color as the original, but I've drawn them in

Planet DebianSilva Arapi: Digital Born Media Carnival July 2017

As described on their website, Digital Born Media Carnival was a gathering of hundreds of online media representatives, information explorers and digital rights enthusiasts. The event took place on 14 – 18 July in Kotor, Montenegro. I found out about it when one of the members of Open Labs Hackerspace shared the news on our forum. While debating whether I should attend because of a very busy period at work and at the university, the whole thing sounded very interesting and intriguing, so I decided to join the group of people who were also planning to go, and to apply with a workshop session too. No regrets at all! This turned out to be one of the greatest events I’ve attended so far and had a great impact on what I decided to do next, regarding my work as a hacktivist and as a digital rights enthusiast.

The organizers of the Carnival had announced on the website that they were looking for online media representatives, journalists, bloggers, content creators, human rights defenders, hacktivists, new media startups etc. As a hacktivist, I found myself willing to join and learn more about some topics which had been intriguing me for a while, and I also saw this as an opportunity to meet other people with interests in common with mine.

I applied with a workshop where I was going to introduce some simple tools for people to better preserve their privacy online. The session was accepted and I was invited to lead, together with Andrej Petrovski, the sessions of the Digital Security track, located in the sailing club “Lahor”. I held my workshop there on Saturday late in the morning and really enjoyed it. Most of the attendees were journalists or people without a technical background, and they showed a lot of interest, asked me many questions and shared some stories. I also received very good feedback on the workshop, which gave me really good vibes, since this was the first time I spoke on cyber security at an important event of this kind, as DBMC’17 was.

I spent the other days of the Carnival attending different workshops and talks, meeting new people, discussing with friends and enjoying the sun. We would go to the beach in the afternoon, and we had a very cool drone photo shoot 😉

DBMC drone photo shooting – Kotor, Montenegro

This was great work by the SHARE Foundation, and hopefully there will be other events like it in the near future; I would totally recommend attending! If you are new to the topics discussed there, this is a great way to start. If you have been in the field for a while, this is the place to meet other professionals like you. And if you are looking for an event which you can combine with a few days of vacation while staying in touch with causes you care about, this would once again be the place to go.

Planet DebianDaniel Pocock: Turning a dictator's pyramid into a ham radio station

(Update: due to concerns about GoFundMe, I changed the links in this blog post so people can donate directly through PayPal. Anybody who tried to donate through GoFundMe should be receiving a refund.)

I've launched a crowdfunding campaign to help get more equipment for a bigger and better ham radio demo at OSCAL (19-20 May, Tirana). Please donate if you would like to see this go ahead. Just EUR 250 would help buy a nice AGM battery - if 25 people donate EUR 10 each, we can buy one of those.

You can help turn the pyramid of Albania's former communist dictator into a ham radio station for OSCAL 2018 on 19-20 May 2018. This will be a prominent demonstration of ham radio in the city center of Tirana, Albania.

Under the rule of Enver Hoxha, Albanians were isolated from the outside world and used secret antennas to receive banned television transmissions from Italy. Now we have the opportunity to run a ham station and communicate with the whole world from the very pyramid where Hoxha intended to be buried after his death.

Donations will help buy ham and SDR equipment for communities in Albania and Kosovo and assist hams from neighbouring countries to visit the conference. We would like to purchase deep-cycle batteries, 3-stage chargers, 50 ohm coaxial cable, QSL cards, PowerPole connectors, RTL-SDR dongles, up-convertors (Ham-it-up), baluns, egg insulators and portable masts for mounting antennas at OSCAL and future events.

The station is co-ordinated by Daniel Pocock VK3TQR from the Debian Project's ham radio team.

Donations of equipment and volunteers are also very welcome. Please contact Daniel directly if you would like to participate.

Any donations in excess of requirements will be transferred to one or more of the hackerspaces, radio clubs and non-profit organizations supporting education and leadership opportunities for young people in the Balkans. Any equipment purchased will also remain in the region for community use.

Please click here to donate if you would like to help this project go ahead. Without your contribution we are not sure that we will have essential items like the deep-cycle batteries we need to run ham radio transmitters.

Google AdsenseUpdated impressions metrics for AdSense

Reporting plays a key role in the AdSense experience. In fact, two-thirds of our partners consider it the most important feature within AdSense. Helping partners understand and improve their performance metrics is a top priority. Today, we’re announcing updates to how we report AdSense impressions.

The Interactive Advertising Bureau (IAB) and the Media Rating Council (MRC) in partnership with other industry bodies such as the Mobile Marketing Association (MMA), periodically review and update industry standards for impression measurement. They recommend guidelines to standardize how impressions are counted across formats and platforms.

What’s changing? 
Over the last year, to remain consistent with these standards, we have transitioned our ad serving platforms from served impressions to downloaded impressions. Served impressions are counted at the point that we find an ad for an ad request on the server. Downloaded impressions are counted for each ad request once at least one of the returned ads has begun to load on the user’s device. Starting today, publishers will see updated metrics in AdSense that reflect this change.
How will it impact partners?
Switching to counting impressions on download helps to improve viewability rates, consistency across measurement providers, and aligns the impression metric better with advertiser value. For example, if an ad failed to download or if the user closed the tab before it arrived, it will no longer be counted as an impression. As a result, some AdSense users might see decreases in their impression counts, and corresponding improvements in their impression RPM, CTR, and other impression-based metrics. Your earnings, however, should not be impacted.

We will continue to review and update our impression counting technology as new industry standards are defined and implemented.

For more information please visit our Help Center.

Posted by:
Andrew Gildfind, Product Manager

Cory DoctorowAnnouncing “Petard,” a new science fiction story reading on my podcast

Here’s the first part of my reading (MP3) of Petard, a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.


Planet DebianJulien Danjou: A simple filtering syntax tree in Python


Working on various pieces of software these last few years, I noticed that there's always a feature that requires implementing some kind of DSL (domain-specific language).

The problem with DSLs is that they are never the road you plan to go down. I remember how fascinating creating my first DSL was: after using programming languages for years, I was finally designing my own tiny language!

A new language that my users would have to learn and master. Oh, it had nothing new; it was a subset of something, inspired by my years of C, Perl or Python, who knows. And that's the terrible part about DSLs: they are a marvelous tradeoff between the power they give to users, allowing them to define their needs precisely, and the cumbersomeness of learning a language that will be useful in only one specific situation.

In this blog post, I would like to introduce a very unsophisticated way of implementing the syntax tree that could be used as a basis for a DSL. The goal of that syntax tree will be filtering. The problem it will solve is the following: having a piece of data, we want the user to tell us if the data matches their conditions or not.

To give a concrete example: a machine wants to grant the user the ability to filter the beans that it should keep. What the machine passes to the filter is the size of the current bean, and the filter should return either true or false, based on the condition defined by the user: for example, only keep beans that are between 1 and 2 centimeters or between 4 and 6 centimeters.

The number of conditions that users can define could be quite considerable, and we want to provide at least a basic set of predicate operators: equal, greater than and less than. We also want the user to be able to combine those, so we'll add the logical operators or and and.

A set of conditions can be seen as a tree, where the leaves are either predicates (in that case they have no children) or logical operators (which do have children). For example, the propositional logic formula φ1 ∨ (φ2 ∨ φ3) can be represented as a tree like this:


Starting with this in mind, it appears that the natural solution is going to be recursive: handle the predicate as terminal, and if the node is a logical operator, recurse over its children.
Since we will be doing Python, we're going to use Python to evaluate our syntax tree.

The simplest way to write a tree in Python is going to be using dictionaries. A dictionary will represent one node and will have only one key and one value: the key will be the name of the operator (equal, greater than, or, and…) and the value will be the argument of this operator if it is a predicate, or a list of children (as dictionaries) if it is a logical operator.

For example, to filter our bean, we would create a tree such as:

{"or": [
  {"and": [
    {"ge": 1},
    {"le": 2},
  ]},
  {"and": [
    {"ge": 4},
    {"le": 6},
  ]},
]}

The goal here is to walk through the tree and evaluate each of the leaves of the tree and returning the final result: if we passed 5 to this filter, it would return True, and if we passed 10 to this filter, it would return False.

Here's how we could implement a very shallow filter that only handles predicates (for now):

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        "eq": operator.eq,
        "le": operator.le,
    }

    def __init__(self, tree):
        # Parse the tree and store the evaluator
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        # Call the evaluator with the value
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            # Pick the first item of the dictionary.
            # If the dictionary has multiple keys/values
            # the first one (= random) will be picked.
            # The key is the operator name (e.g. "eq")
            # and the value is the argument for it
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            # Lookup the operator name
            op = self.binary_operators[operator]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % operator)
        # Return a function (lambda) that takes
        # the filtered value as argument and returns
        # the result of the predicate evaluation
        return lambda value: op(value, nodes)

You can use this Filter class by passing a predicate such as {"eq": 4}:

>>> f = Filter({"eq": 4})
>>> f(2)
False
>>> f(4)
True

This Filter class works but is quite limited, as we did not provide logical operators. Here's a complete implementation that also supports the logical operators and and or:

import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"=": operator.eq,
        u"==": operator.eq,
        u"eq": operator.eq,

        u"<=": operator.le,
        u"≤": operator.le,
        u"le": operator.le,

        u">=": operator.ge,
        u"≥": operator.ge,
        u"ge": operator.ge,
    }

    multiple_operators = {
        u"or": any,
        u"∨": any,
        u"and": all,
        u"∧": all,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            operator, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        try:
            op = self.multiple_operators[operator]
        except KeyError:
            try:
                op = self.binary_operators[operator]
            except KeyError:
                raise InvalidQuery("Unknown operator %s" % operator)
            return lambda value: op(value, nodes)
        # Iterate over every item in the list of the value linked
        # to the logical operator, and compile it down to its own
        # evaluator.
        elements = [self.build_evaluator(node) for node in nodes]
        return lambda value: op((e(value) for e in elements))

To support the and and or operators, we leverage the all and any built-in Python functions. They are called with an argument that is a generator that evaluates each one of the sub-evaluator, doing the trick.

Unicode is the new sexy, so I've also added Unicode symbols support.

And it is now possible to implement our full example:

>>> f = Filter(
...     {"∨": [
...         {"∧": [
...             {"≥": 1},
...             {"≤": 2},
...         ]},
...         {"∧": [
...             {"≥": 4},
...             {"≤": 6},
...         ]},
...     ]})
>>> f(5)
True
>>> f(8)
False
>>> f(1)
True

As an exercise, you could try to add the not operator, which deserves its own category as it is a unary operator!
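One way that exercise might be solved, sketched here as a self-contained variant of the class above (the operator tables are trimmed to a few entries; the unary table and its branch in build_evaluator are the new parts, and the naming is mine):

```python
import operator


class InvalidQuery(Exception):
    pass


class Filter(object):
    binary_operators = {
        u"eq": operator.eq,
        u"le": operator.le,
        u"ge": operator.ge,
    }

    multiple_operators = {
        u"or": any,
        u"and": all,
    }

    # "not" is unary: its value is a single sub-tree rather than a list
    # of children, so it gets its own table and its own branch.
    unary_operators = {
        u"not": operator.not_,
        u"¬": operator.not_,
    }

    def __init__(self, tree):
        self._eval = self.build_evaluator(tree)

    def __call__(self, value):
        return self._eval(value)

    def build_evaluator(self, tree):
        try:
            op_name, nodes = list(tree.items())[0]
        except Exception:
            raise InvalidQuery("Unable to parse tree %s" % tree)
        if op_name in self.unary_operators:
            # Compile the single child and negate its result.
            op = self.unary_operators[op_name]
            sub = self.build_evaluator(nodes)
            return lambda value: op(sub(value))
        if op_name in self.multiple_operators:
            op = self.multiple_operators[op_name]
            elements = [self.build_evaluator(node) for node in nodes]
            return lambda value: op(e(value) for e in elements)
        try:
            op = self.binary_operators[op_name]
        except KeyError:
            raise InvalidQuery("Unknown operator %s" % op_name)
        return lambda value: op(value, nodes)
```

With this, `Filter({"not": {"eq": 4}})` accepts everything except 4, and not can nest anywhere a predicate can.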

In the next blog post, we will see how to improve that filter with more features, and how to implement a domain-specific language on top of it, to make humans happy when writing the filter!


Hole and Henni – François Charlier, 2018
In this drawing, the artist represents the deepness of functional programming and how its horse power can help you escape many dark situations.

Mark ShuttleworthScam alert

Am writing briefly to say that I believe a scam or pyramid scheme is currently using my name fraudulently in South Africa. I am not going to link to the websites in question here, but if you are being pitched a make-money-fast story that refers to me and crypto-currency, you are most likely being targeted by fraudsters.

CryptogramLC4: Another Pen-and-Paper Cipher

Interesting symmetric cipher: LC4:

Abstract: ElsieFour (LC4) is a low-tech cipher that can be computed by hand; but unlike many historical ciphers, LC4 is designed to be hard to break. LC4 is intended for encrypted communication between humans only, and therefore it encrypts and decrypts plaintexts and ciphertexts consisting only of the English letters A through Z plus a few other characters. LC4 uses a nonce in addition to the secret key, and requires that different messages use unique nonces. LC4 performs authenticated encryption, and optional header data can be included in the authentication. This paper defines the LC4 encryption and decryption algorithms, analyzes LC4's security, and describes a simple appliance for computing LC4 by hand.

Almost two decades ago I designed Solitaire, a pen-and-paper cipher that uses a deck of playing cards to store the cipher's state. This algorithm instead uses specialized tiles. This gives the cipher designer more options, but a set of tiles can be incriminating in a way that regular playing cards are not.

Still, I like seeing more designs like this.

Hacker News thread.

Worse Than FailureCodeSOD: The Same Date

Oh, dates. Oh, dates in Java. They’re a bit of a dangerous mess, at least prior to Java 8. That’s why Java 8 created its own date-time libraries, and why JodaTime was the gold standard in Java date handling for many years.

But it doesn’t really matter what date handling you do if you’re TRWTF. An Anonymous submitter passed along this method, which is meant to set the start and end date of a search range, based on a number of days:

private void setRange(int days){
        DateFormat df = new SimpleDateFormat("yyyy-MM-dd");
        Date d = new Date();
        Calendar c = Calendar.getInstance();

        Date start =  c.getTime();

        if(days==-1){
                c.add(Calendar.DAY_OF_MONTH, -1);
                assertThat(c.getTime()).isNotEqualTo(start);
        }
        else if(days==-7){
                c.add(Calendar.DAY_OF_MONTH, -7);
                assertThat(c.getTime()).isNotEqualTo(start);
        }
        else if (days==-30){
                c.add(Calendar.DAY_OF_MONTH, -30);
                assertThat(c.getTime()).isNotEqualTo(start);
        }
        else if (days==-365){
                c.add(Calendar.DAY_OF_MONTH, -365);
                assertThat(c.getTime()).isNotEqualTo(start);
        }

        from = df.format(start).toString()+"T07:00:00.000Z";
        to = df.format(d).toString()+"T07:00:00.000Z";
}
Right off the bat, days only has a handful of valid values- a day, a week, a month(ish) or a year(ish). I’m sure passing it as an int would never cause any confusion. The fact that they don’t quite grasp what variables are for is a nice touch. I’m also quite fond of how they declare a date format at the top, but then also want to append a hard-coded timezone to the format, which again, I’m sure will never cause any confusion or issues. The assertThat calls check that the Calendar.add method does what it’s documented to do, making those both pointless and stupid.

But that’s all small stuff. The real magic is that they never actually use the calendar after adding/subtracting dates. They obviously meant to include d = c.getTime() someplace, but forgot. Then, without actually testing the code (they have so many asserts, why would they need to test?) they checked it in. It wasn’t until QA was checking the prerelease build that anyone noticed, “Hey, filtering by dates doesn’t work,” and an investigation revealed that from and to always had the same value.
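The failure mode generalizes well beyond Java's Calendar API: compute an adjusted value, never assign it anywhere, and every "range" collapses to a single instant. A minimal Python analogue of the same mistake and its one-line fix (not the article's code; the names and the fixed timestamp are mine, purely for illustration):

```python
from datetime import datetime, timedelta


def broken_range(days: int):
    # Mirrors the Java bug: the adjusted value is computed but never
    # assigned back, so both ends of the range are the same instant.
    now = datetime(2018, 5, 8, 7, 0)       # fixed "now" for illustration
    start = now
    adjusted = now - timedelta(days=days)  # result silently dropped
    return start, now


def fixed_range(days: int):
    # The one-line fix: actually use the adjusted value as the start.
    now = datetime(2018, 5, 8, 7, 0)
    start = now - timedelta(days=days)
    return start, now
```

Filtering with `broken_range` always yields an empty window, which is exactly the "dates are always the same" symptom QA hit.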

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianNeil Williams: Upgrading the home server rack

My original home server rack is being upgraded to use more ARM machines as the infrastructure of the lab itself. I've also moved house, so there is more room for stuff and kit. This has allowed space for a genuine machine room. I will be using that to host test devices which do not need manual intervention despite repeated testing. (I'll also have the more noisy / brightly illuminated devices in the machine room.) The more complex devices will sit on shelves in the office upstairs. (The work to put the office upstairs was a major undertaking involving my friends Steve and Andy - embedding ethernet cables into the walls of four rooms in the new house. Once that was done, the existing ethernet cable into the kitchen could be fixed (Steve) and then connected to my new Ubiquiti AP, a present from Steve and Andy.)

Before I moved house, I found that the wall mounted 9U communications rack was too confined once there were a few devices in use. A lot of test devices now need many cables to each device. (Power, ethernet, serial, second serial and USB OTG, and then add a relay board with its own power and cables onto the DUT....)

Devices like beaglebone-black, cubietruck and other U-Boot devices will go downstairs, albeit in a larger Dell 24U rack purchased from Vince who has moved to a larger rack in his garage. Vince also had a gigabit 16 port switch available which will replace the Netgear GS108 8-port Gigabit Ethernet Unmanaged Switch downstairs.

I am currently still using the same microserver to run various other services around the house (firewall, file server etc.): HP 704941-421 ProLiant Micro Server

I've now repurposed a reconditioned Dell Compact Form Factor desktop box to be my main desktop machine in my office. This was formerly my main development dispatcher; the desktop box was chosen explicitly to get more USB host controllers on the motherboard than are typically available on an x86 server. There have been concerns that this could be causing bottlenecks when running multiple test jobs which all try to transfer several hundred megabytes of files over USB-OTG at the same time.

I've now added a SynQuacer Edge ARM64 Server to run a LAVA dispatcher in the office, controlling several of the more complex devices to test in LAVA - Hikey 620, HiKey 960 and Dragonboard 410c via a Cambrionix PP15s to provide switchable USB support to enable USB network dongles attached to the USB OTG port which is also used for file deployment during test jobs. There have been no signs of USB bottlenecks at this stage.

This arm64 machine then supports running test jobs on the development server used by the LAVA software team as azrael.codehelp. It runs headless from the supplied desktop tower case. I needed to use a PCIe network card from TPlink to get the device operating but this limitation should be fixed with new firmware. (I haven't had time to upgrade the firmware on that machine yet, still got the rest of the office to kit out and the rack to build.) The development server itself is an ARM64 virtual machine, provided by the Linaro developer cloud and is used with a range of other machines to test the LAVA codebase, doing functional testing.

The new dispatcher is working fine, I've not had any issues with running test jobs on some of the most complex devices used in LAVA. I haven't needed to extend the RAM from the initial 4G and the 24 cores are sufficient for the work I've done using the machine so far.

The rack was moved into place yesterday (thanks to Vince & Steve) but the patch panel which Andy carefully wired up is not yet installed and there are cables everywhere, so a photo will have to wait. The plan now is to purchase new UPS batteries and put each of the rack, the office and the ISP modem onto dedicated UPS. The objective is not to keep the lab running in the event of a complete power cut lasting hours, just to survive brown outs and power cuts lasting a minute or two, e.g. when I finally get around to labelling up the RCD downstairs. (The new house was extended a few years before I bought it and the organisation of the circuits is a little unexpected in some parts of the house.)

Once the UPS batteries are in, the microserver, a PDU, the network switch and patch panel, as well as the test devices, will go into the rack in the machine room. I've recently arranged to add a second SynQuacer server into the rack - this time fitted into a 1U server case. (Definite advantage of the new full depth rack over the previous half-depth comms box.) I expect this second SynQuacer to have a range of test devices to complement our existing development staging instance which runs the nightly builds which are available for both amd64 and arm64.

I'll post again once I've got the rest of the rack built and the second SynQuacer installed. The hardest work, by far, has been fitting out the house for the cabling. Setting up the machines, installing and running LAVA has been trivial in comparison. Thanks to Martin Stadler for the two SynQuacer machines and the rest of the team in Linaro Enterprise Group (LEG) for getting this ARM64 hardware into useful roles to support wider development. With the support from Debian for building the arm64 packages, the new machine simply sits on the network and does "TheRightThing" without fuss or intervention. I can concentrate on the test devices and get on with things. The fact that the majority of my infrastructure now runs on ARM64 servers is completely invisible to my development work.

Planet DebianSean Whitton: git-annex-export and reprepro


I wanted to set up my own apt repository to distribute packages to my own computers. This repository must be PGP-signed, but I want to use my regular PGP key rather than a PGP key stored on the server, because I don’t want to trust my server with root access to my laptop.

Further, I want to be able to add to my repo while offline, rather than dputting .changes files to my server.

The standard tools, mini-dinstall and reprepro, are designed to be executed on the same machine that will serve the apt repository. To satisfy the above, though, I need to be able to execute the repository generator offline, on my laptop.

Two new features of git-annex, git-annex-export and v6 repositories, can allow us to execute the repository generator offline and then copy the contents of the repository to the server in an efficient way.

(v6 repositories are not production-ready, but the data in this repo is replaceable: I back up the reprepro config files, and the packages can be regenerated from the (d)git repositories containing the source packages.)

Schematic instructions

This should be enough to get you going if you have some experience with git-annex and reprepro.

In the following, athena is a host I can ssh to. On that host, I assume that Apache is set up to serve /srv/debian as the apt repository, with .htaccess rules to deny access to the conf/ and db/ subdirectories and to enable the following of symlinks.

  1. apt-get install git-annex reprepro
  2. git init a new git repository on laptop.
  3. Create conf/distributions, conf/options, conf/ and .gitattributes per below.
  4. Create other files such as README, sample foo.list, etc. if desired.
  5. git add the various plain text files we just created and commit.
  6. git annex init --version=6.
  7. Add an origin remote, git config remote.origin.annex-ignore true and git push -u origin master git-annex. I.e. store repository metadata somewhere.
  8. git config --local annex.thin true to save disc space.
  9. git config --local annex.addunlocked true so that reprepro can modify files.
  10. Tell git-annex about the /srv/debian directory on athena: git annex initremote athena type=rsync rsyncurl=athena:/srv/debian autoenable=true exporttree=yes encryption=none
  11. Tell git-annex that the /srv/debian directory on athena should track the contents of the master branch: git annex export --fast --to=athena --tracking master
  12. Now you can reprepro include foo.changes, reprepro export and git-annex should do the rest: the script calls git annex sync, and git-annex knows that it should export the repo to /srv/debian on athena when told to sync.


conf/distributions is an exercise for the reader – this is standard reprepro stuff.





git annex add
git annex sync --content


* annex.largefiles=anything
conf/* annex.largefiles=nothing
README annex.largefiles=nothing
\.gitattributes annex.largefiles=nothing

These git attributes tell git-annex to annex all files except the plain text config files, which are just added to git.


I’m not sure whether these are fixable in git-annex-export, or not. Both can be worked around with hacks/scripts on the server.

  • reprepro exportsymlinks won’t work to create suite symlinks: git-annex-export will create plain files instead of symlinks.

  • git-annex-export exports non-annexed files in git, such as README, as readable only by their owner.

Planet DebianThorsten Glaser: Happy Birthday, GPS Stash Hunt!

GPS Stash Hunt, also commercially known as “Geocaching”, “Terracaching”, or non-commercially (but also nōn-free) as “Opencaching”, is 18 years old today! Time for celebration or something!

(read more…)


Planet DebianNorbert Preining: Docker, cron, mail and logs

If one searches for “docker cron“, there are a lot of hits but not really good solutions or explanations of how to get a working system. In particular, what I needed/wanted was (i) that stdout/stderr of the cron script is sent to some user, and (ii) that in case of errors the error output also appears in the docker logs. Well, that was not that easy …

There are three components here that need to be tuned:

  • getting cron running
  • getting mail delivery running
  • redirecting some error message to docker

Let us go through the list. The full Dockerfile and support scripts can be found below.

Getting cron running

Many of the lean images do not contain cron, let alone run it (this is due to the philosophy of one process per container). So the usual incantations to install cron are necessary:

RUN apt-get install -y cron

After that, one can use as entry point the cron daemon running in foreground:

CMD cron -f

Of course you have to install a crontab file somehow, in my case I did:

ADD crontab /etc/cron.d/mypackage
RUN chmod 0644 /etc/cron.d/mypackage

This all works nicely, but if there are errors or problems with the crontab file, you will not see any error message: on Debian and Ubuntu, cron logs to syslog, but there is no syslog daemon available to show you the output, and cron has no option to log to a file instead (how stupid!). Furthermore, alternative cron daemons (bcron, cronie) are not available in Debian/stable, for example 🙁

In my case the crontab file had a syntax error, and thus no actual program was ever run.

Getting mail delivery running

Assuming you have these hurdles settled, one would like to get the output of the cron scripts mailed. For this, a sendmail-compliant program needs to be available. One could set up a full-blown system (exim, postfix), but this is overkill as one only wants to send out messages. I opted for ssmtp, which is a single program with straightforward configuration and operation, and which provides a sendmail program.

RUN apt-get install -y ssmtp
ADD ssmtp.conf /etc/ssmtp/ssmtp.conf
RUN chown root.mail /etc/ssmtp/ssmtp.conf
RUN chmod 0640 /etc/ssmtp/ssmtp.conf

The configuration file can be rather minimal, here is an example:
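A minimal sketch of such a file, using the placeholder server and recipient names from the paragraph below:

```
# all mail is handed off to this mail server (name or IP)
mailhub=mail-server-name-ip
# mail for root and other system users goes here
root=destuser@destination
# allow the caller to set the From: line
FromLineOverride=YES
```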


Of course, the mail-server-name-ip must accept emails for destuser@destination. There are several more options for SSL support, rewriting of domains, etc.; see for example here.

With this in place, cron will now duly send emails with the output of the cron jobs.

Redirecting some error message to docker

This leaves us with the last task, namely getting error messages into the docker logs. Since cron captures the stdout and stderr of the cron jobs for mail sending, one can either redirect these outputs to docker (but then one will not get emails), or wrap the cron jobs in a script. I used the following wrapper to output a warning to the docker logs:

#!/bin/sh
if [ -z "$1" ] ; then
  echo "need name of cron job as first argument" >&2
  exit 1
fi

if [ ! -x "$1" ] ; then
  echo "cron job file $1 not executable, exiting" >&2
  exit 1
fi

if "$1" ; then
  exit 0
else
  echo "cron job $1 failed!" 2>/proc/1/fd/2 >&2
  exit 1
fi

together with entries in the crontab like:

m h d m w root /app/run-cronjob /app/your-script

The magic trick here is the 2>/proc/1/fd/2 >&2, which first redirects stderr to the stderr of PID 1 (the entry point of the container, whose output is watched by docker), and then echoes the message to stderr. One could also redirect stdout in the same way to /proc/1/fd/1 if necessary or preferred.

Combined, the above gives a nice result: emails with the output of the cron jobs, plus entries in the docker logs if something broke and, for example, no output was created.

Let us finish with a minimal Dockerfile doing these kind of things:

FROM debian:stretch-slim
RUN apt-get -y update
RUN apt-get install -y cron ssmtp
ADD . /app
ADD crontab /etc/cron.d/mypackage
RUN chmod 0644 /etc/cron.d/mypackage
ADD ssmtp.conf /etc/ssmtp/ssmtp.conf
RUN chown root.mail /etc/ssmtp/ssmtp.conf
RUN chmod 0640 /etc/ssmtp/ssmtp.conf
CMD cron -f

Planet DebianBits from Debian: New Debian Developers and Maintainers (March and April 2018)

The following contributors got their Debian Developer accounts in the last two months:

  • Andreas Boll (aboll)
  • Dominik George (natureshadow)
  • Julien Puydt (jpuydt)
  • Sergio Durigan Junior (sergiodj)
  • Robie Basak (rbasak)
  • Elena Grandi (valhalla)
  • Peter Pentchev (roam)
  • Samuel Henrique (samueloph)

The following contributors were added as Debian Maintainers in the last two months:

  • Andy Li
  • Alexandre Rossi
  • David Mohammed
  • Tim Lunn
  • Rebecca Natalie Palmer
  • Andrea Bolognani
  • Toke Høiland-Jørgensen
  • Gabriel F. T. Gomes
  • Bjorn Anders Dolk
  • Geoffroy Youri Berret
  • Dmitry Eremin-Solenikov


Krebs on SecurityWhen Your Employees Post Passwords Online

Storing passwords in plaintext online is never a good idea, but it’s remarkable how many companies have employees who are doing just that using online collaboration tools like Trello. Last week, KrebsOnSecurity notified a host of companies that employees were using Trello to share passwords for sensitive internal resources. Among those put at risk by such activity were an insurance firm, a state government agency and ride-hailing service Uber.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But that doesn’t stop individual Trello users from manually sharing personal boards that include proprietary employer data, information that may be indexed by search engines and available to anyone with a Web browser. And unfortunately for organizations, far too many employees are posting sensitive internal passwords and other resources on their own personal Trello boards that are left open and exposed online.

A personal Trello board created by an Uber employee included passwords that might have exposed sensitive internal company operations.

KrebsOnSecurity spent the past week using Google to discover unprotected personal Trello boards that listed employer passwords and other sensitive data. Pictured above was a personal board set up by some Uber developers in the company’s Asia-Pacific region, which included passwords needed to view a host of internal Google Documents and images.

Uber spokesperson Melanie Ensign said the Trello board in question was made private shortly after being notified by this publication, among others. Ensign said Uber found the unauthorized Trello board exposed information related to two users in South America who have since been notified.

“We had a handful of members in random parts of the world who didn’t realize they were openly sharing this information,” Ensign said. “We’ve reached out to these teams to remind people that these things need to happen behind internal resources. Employee awareness is an ongoing challenge. We may have dodged a bullet here, and it definitely could have been worse.”

Ensign said the initial report about the exposed board came through the company’s bug bounty program, and that the person who reported it would receive at least the minimum bounty amount — $500 — for reporting the incident (Uber hasn’t yet decided whether the award should be higher for this incident).

The Uber employees who created the board “used their work email to open a public board that they weren’t supposed to,” Ensign said. “They didn’t go through our enterprise account to create that. We first found out about it through our bug bounty program, and while it’s not technically a vulnerability in our products, it’s certainly something that we would pay for anyway. In this case, we got multiple reports about the same thing, but we always pay the first report we get.”

Of course, not every company has a bug bounty program to incentivize the discovery and private reporting of internal resources that may be inadvertently exposed online.

Screenshots that KrebsOnSecurity took of many far more shocking examples of employees posting dozens of passwords for sensitive internal resources are not pictured here because the affected parties still have not responded to alerts provided by this author.

Trello is one of many online collaboration tools made by Atlassian Corporation PLC, a technology company based in Sydney, Australia. Trello co-founder Michael Pryor said Trello boards are set to private by default and must be manually changed to public by the user.

“We strive to make sure public boards are being created intentionally and have built in safeguards to confirm the intention of a user before they make a board publicly visible,” Pryor said. “Additionally, visibility settings are displayed persistently on the top of every board.”

If a board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Interestingly, updates made to Trello’s privacy policy over the past weekend may make it easier for companies to locate personal boards created by employees and pull them behind company resources.

A Trello spokesperson said the privacy changes were made to bring the company’s policies in line with new EU privacy laws that come into enforcement later this month. But they also clarify that Trello’s enterprise features allow the enterprise admins to control the security and permissions around a work account an employee may have created before the enterprise product was purchased.

Uber spokesperson Ensign called the changes welcome.

“As a result companies will have more security control over Trello boards created by current/former employees and contractors, so we’re happy to see the change,” she said.

KrebsOnSecurity would like to thank security researcher Kushagra Pathak for the heads up about the unauthorized Uber board. Pathak has posted his own account of the discovery here.

Update: Added information from Ensign about two users who had their data exposed from the credentials in the unauthorized Trello page published by Uber employees.

Rondam RamblingsThis is inspiring

The Washington Post reports that: Two African American men arrested at a Philadelphia Starbucks last month have reached a settlement with the city and secured its commitment to a pilot program for young entrepreneurs.  Rashon Nelson and Donte Robinson chose not to pursue a lawsuit against the city, Mike Dunn, a spokesman from the Mayor’s Office, told The Washington Post. Instead, they agreed to

TEDCalling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. As a bonus during the Audacious Project session, we watched an astonishing performance of “New Second Line” from Camille A. Brown and Dancers. From left: The Bail Project’s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institute; Caroline Harper of Sight Savers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from the Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers; pianist Scott Patterson; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage). Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution. That can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above, which screened the next day at Skoll. And who knows? Perhaps you’ll be a part of the program in 2019.

CryptogramNIST Issues Call for "Lightweight Cryptography" Algorithms

This is interesting:

Creating these defenses is the goal of NIST's lightweight cryptography initiative, which aims to develop cryptographic algorithm standards that can work within the confines of a simple electronic device. Many of the sensors, actuators and other micromachines that will function as eyes, ears and hands in IoT networks will work on scant electrical power and use circuitry far more limited than the chips found in even the simplest cell phone. Similar small electronics exist in the keyless entry fobs to newer-model cars and the Radio Frequency Identification (RFID) tags used to locate boxes in vast warehouses.

All of these gadgets are inexpensive to make and will fit nearly anywhere, but common encryption methods may demand more electronic resources than they possess.

The NSA's SIMON and SPECK would certainly qualify.

Worse Than FailureCodeSOD: A Password Generator

Every programming language has a bias which informs its solutions. Object-oriented languages are biased towards objects, and all the things which follow on. Clojure is all about function application. Haskell is all about type algebra. Ruby is all about monkey-patching existing objects.

In any language, these things can be taken too far. Java's infamous Spring framework leaps to mind. Perl, being biased towards regular expressions, has earned its reputation as being "write only" thanks to regex abuse.

Gert sent us along some Perl code, and I was expecting to see regexes taken too far. To my shock, there weren't any regexes.

Gert's co-worker needed to generate a random 6-digit PIN for a voicemail system. It didn't need to be cryptographically secure, repeats and zeros are allowed (they exist on a keypad, after all!). The Perl-approach for doing this would normally be something like:

sub randomPIN {
  return sprintf("%06u",int(rand(1000000)));
}

Gert's co-worker had a different plan in mind, though.

sub randomPIN {
my $password;
my @num = (1..9);
my @char = ('@','#','$','%','^','&','*','(',')');
my @alph = ('a'..'z');
my @alph_up = ('A'..'Z');

my $rand_num1 = $num[int rand @num];
my $rand_num2 = $num[int rand @num];
my $rand_num3 = $num[int rand @num];
my $rand_num4 = $num[int rand @num];
my $rand_num5 = $num[int rand @num];
my $rand_num6 = $num[int rand @num];

$password = "$rand_num1"."$rand_num2"."$rand_num3"."$rand_num4"."$rand_num5"."$rand_num6";

return $password;
}

This code starts by creating a set of arrays, @num, @char, etc. The only one that matters is @num, though, since this generates a PIN to be entered on a phone keypad and touchtone signals are numeric and also there is no "(" key on a telephone keypad. Obviously, the developer copied this code from a random password function somewhere, which is its own special kind of awful.

Now, what's fascinating is that they initialize @num with the numbers 1 through 9, and then use the rand function to generate a random number from 0 through 8, so that they can select an item from the array. So they understood how the rand function worked, but couldn't make the leap to eliminate the array with something like rand(9).

For now, replacing this function is simply on Gert's todo list.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!


CryptogramIoT Inspector Tool from Princeton

Researchers at Princeton University have released IoT Inspector, a tool that analyzes the security and privacy of IoT devices by examining the data they send across the Internet. They've already used the tool to study a bunch of different IoT devices. From their blog post:

Finding #3: Many IoT Devices Contact a Large and Diverse Set of Third Parties

In many cases, consumers expect that their devices contact manufacturers' servers, but communication with other third-party destinations may not be a behavior that consumers expect.

We have found that many IoT devices communicate with third-party services, of which consumers are typically unaware. We have found many instances of third-party communications in our analyses of IoT device network traffic. Some examples include:

  • Samsung Smart TV. During the first minute after power-on, the TV talks to Google Play, Double Click, Netflix, FandangoNOW, Spotify, CBS, MSNBC, NFL, Deezer, and Facebook, even though we did not sign in or create accounts with any of them.

  • Amcrest WiFi Security Camera. The camera actively communicates with QuickDDNS using HTTPS. QuickDDNS is a Dynamic DNS service provider operated by Dahua. Dahua is also a security camera manufacturer, although Amcrest's website makes no references to Dahua. Amcrest customer service informed us that Dahua was the original equipment manufacturer.

  • Halo Smoke Detector. The smart smoke detector communicates with Xively, which offers an MQTT service that allows manufacturers to communicate with their devices.

  • Geeni Light Bulb. The Geeni smart bulb communicates with a service operated by TuYa, a China-based company that also offers an MQTT service.

We also looked at a number of other devices, such as Samsung Smart Camera and TP-Link Smart Plug, and found communications with third parties ranging from NTP pools (time servers) to video storage services.

Their first two findings are that "Many IoT devices lack basic encryption and authentication" and that "User behavior can be inferred from encrypted IoT device traffic." No surprises there.

Boingboing post.

Related: IoT Hall of Shame.

Worse Than FailureAn Obvious Requirement

Requirements. That magical set of instructions that tell you specifically what you need to build and test. Users can't be bothered to write them, and even if they could, they have no idea how to tell you what they want. It doesn't help that many developers are incapable of following instructions since they rarely exist, and when they do, they usually aren't worth the coffee-stained napkin upon which they're scribbled.

A sign warning that a footpath containing stairs isn't suitable for wheelchairs

That said, we try our best to build what we think our users need. We attempt to make it fairly straightforward to use what we build. The button marked Reports most likely leads to something to do with generating/reading/whatever-ing reports. Of course, sometimes a particular feature is buried several layers deep and requires multiple levels of ribbons, menus, sub-menus, dialogs, sub-dialogs and tabs before you find the checkbox you want. Since us developers as a group are, by nature, somewhat anal retentive, we try to keep related features grouped so that you can generally guess what path to try to find something. And we often supply a Help feature to tell you how to find it when you can't.

Of course, some people simply cannot figure out how to use the software we build, no matter how sensibly it's laid out and organized, or how many hints and help features we provide. And there is nothing in the history of computer science that addresses how to change this. Nothing!

Dimitri C. had a user who wanted a screen that performed several actions. The user provided requirements in the form of a printout of a similar dialog he had used in a previous application, along with a list of changes/colors/etc. They also provided some "helpful" suggestions, along the lines of, "It should be totally different, but exactly the same as the current application." Dimitri took pains to organize the actions and information in appropriate hierarchical groups. He laid out appropriate controls in a sensible way on the screen. He provided a tooltip for each control and a Help button.

Shortly after delivery, a user called to complain that he couldn't find a particular feature. Dimitri asked "Have you tried using the Help button?" The user said that "I can't be bothered to read the instructions in the help tool because accessing this function should be obvious".

Dimitri asked him "Have you looked on the screen for a control with the relevant name?" The user complained that "There are too many controls, and this function should be obvious". Dimitri asked "Did you try to hover your mouse over the controls to read the tooltips?" The user complained that "I don't have the time to do that because it would take too long!" (yet he had the time to complain).

Frustrated, Dimitri replied "To make that more obvious, should I make these things less obvious?". The user complained that "Everything should be obvious". Dimitri asked how that could possibly be done, to which the user replied "I don't know, that's your job".

When he realized that this user had no clue how to ask for what he wanted, he asked how this feature worked in previous programs, to which the user replied "I clicked this, then this, then this, then this, then this, then restarted the program".

Dimitri responded that "That's six steps instead of the two in my program, and that would require you to reenter some of the data".

The user responded "Yes, but it's obvious".

So is the need to introduce that type of user to the business end of a clue-bat.

[Advertisement] Ensure your software is built only once and then deployed consistently across environments, by packaging your applications and components. Learn how today!

Cory DoctorowBoston, Chicago and Waterloo, I’m heading your way!

This Wednesday at 1145am, I’ll be giving the IDE Lunch Seminar at MIT’s Sloan School of Management, 100 Main Street.

From there, I head to Chicago to keynote Thotcon on Friday at 11am.

My final stop on this trip is Waterloo’s Perimeter Institute for Theoretical Physics, May 9 at 2PM.

I hope to see you! I’ve got plenty more appearances planned this season, including Santa Fe, Phoenix, San Jose, Boston, San Diego and Pasadena!


Planet Linux AustraliaDavid Rowe: FreeDV 700D Part 3

After a 1 year hiatus, I am back into FreeDV 700D development, working to get the OFDM modem, LDPC FEC, and interleaver algorithms developed last year into real time operation. The aim is to get improved performance on HF channels over FreeDV 700C.

I’ve been doing lots of refactoring, algorithm development, fixing bugs, tuning, and building up layers of C code so we can get 700D on the air.

Steve ported the OFDM modem to C – thanks Steve!

I’m building up the software in the form of command line utilities, some notes, examples and specifications in Codec 2 README_ofdm.txt.

Last week I stayed at the shack of Chris, VK5CP, in a quiet rural location at Younghusband on the river Murray. As well as testing my Solar Boat, Mark (VK5QI) helped me test FreeDV 700D. This was the first time the C code software has been tested over a real HF radio channel.

We transmitted signals from Younghusband, and received them at a remote SDR in Sydney (about 1300km away), downloading wave files of the received signal for off-line analysis.

After some tweaking, it worked! The frequency offset was a bit off, so I used the cohpsk_ch utility to shift it within the +/- 25Hz acquisition range of the FreeDV 700D demodulator. I also found some level sensitivity issues with the LDPC decoder. After implementing a form of AGC, the number of bit errors dropped by a factor of 10.

The channel had nasty fading of around 1Hz, here is a video of the “sample #32” spectrum bouncing around. This rapid fading is a huge challenge for modems. Note also the spurious birdie off to the left, and the effect of receiver AGC – the noise level rises during fades.

Here is a spectrogram of the same sample 33. The x axis is time in seconds. It’s like a “waterfall” SDR plot on its side. Note the heavy “barber pole” fading, which corresponds to the fades sweeping across the spectrum in the video above.

Here is the smoothed SNR estimate. The SNR is a moving target for real world HF channels; here the SNR moves between 2 and 6dB.

FreeDV 700D was designed to work down to 2dB on HF fading channels so pat on the back for me! Hundreds of hours of careful development and testing meant this thing actually worked when it went on air….

Sample 32 is a longer file that contains test frames instead of coded voice. The QPSK scatter diagram is a messy cross, typical of fading channels, as the amplitude of the signal moves in and out:

The LDPC FEC does a good job. Here are plots of the uncoded (raw) bit errors, and the bit errors after LDPC decoding, with the SNR estimates below:

Here are some wave and raw (headerless) audio files. The off air audio is error free, albeit at the low quality of Codec 2 at 700 bits/s. The goal of this work is to get intelligible speech through HF channels at low SNRs. We’ll look at improving the speech quality as a future step.

Still, error free digital voice on a heavily faded HF channel at 2dB SNR is pretty cool.

See below for how to use the last two raw file samples.

sample 33 off air modem signal Sample 33 decoded voice Sample 32 off air test frames raw file Sample 33 off air voice raw file

SNR estimation

After I sampled the files I had a problem – I needed to know the SNR. You see, in development I use simulated channels where I know exactly what the SNR is. I need to compare the performance of the real-world, off-air signals to my expected results at a given SNR.

Unfortunately SNR on a fading channel is a moving target. In simulation I measure the total power and noise over the entire run, and the simulated fading channel is consistent. Real world channels jump all over the place as the ionosphere bounces around. Oh well, knowing we are in the ball park is probably good enough. We just need to know if FreeDV 700D is hanging onto real world HF channels at roughly the SNRs it was designed for.

I came up with a way of measuring SNR, and tested it with a range of simulated AWGN (just noise) and fading channels. The fading bandwidth is the speed at which the fading channel evolves. Slow fading channels might change at 0.2Hz, faster channels, like samples #32 and #33, at about 1Hz.

The blue line is the ideal, and on AWGN and slowly fading channels my SNR estimator does OK. It reads a dB low as the fading bandwidth increases to 1Hz. We are interested in the -2 to 4dB SNR range.
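For intuition, here is a toy, reference-based SNR computation in Python. It is not the blind estimator used in the modem (which must work from the received signal alone); it is just the definition that estimator is trying to approximate, demonstrated on a sine wave in white Gaussian noise:

```python
import math
import random

def snr_db(clean, received):
    """SNR in dB: average signal power over average power of the residual noise."""
    n = len(clean)
    p_signal = sum(s * s for s in clean) / n
    p_noise = sum((r - s) ** 2 for s, r in zip(clean, received)) / n
    return 10 * math.log10(p_signal / p_noise)

# A sine of amplitude 1 has power 0.5; scale the noise for a target of about 3dB SNR.
random.seed(42)
n = 20000
clean = [math.sin(0.1 * t) for t in range(n)]
sigma = math.sqrt(0.5 / 10 ** (3 / 10))   # noise power = signal power / 10^(SNR/10)
received = [s + random.gauss(0.0, sigma) for s in clean]
estimate = snr_db(clean, received)        # close to 3dB for a long enough run
```

On a fading channel the same computation over short windows bounces around just as described above, which is why an off-air estimate is only ever expected to be in the ball park.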

Command Lines

With the samples in the table above and codec2-dev SVN rev 3465, you can repeat some of my decodes using Octave and C:

octave:42> ofdm_ldpc_rx("32.raw")
EsNo fixed at 3.000000 - need to est from channel
Coded BER: 0.0010 Tbits: 54992 Terrs:    55
Codec PER: 0.0097 Tpkts:  1964 Terrs:    19
Raw BER..: 0.0275 Tbits: 109984 Terrs:  3021

david@penetrator:~/codec2-dev/build_linux/src$ ./ofdm_demod ../../octave/32.raw /dev/null -t --ldpc
Warning EsNo: 3.000000 hard coded
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/32.raw /dev/null --testframes
BER......: 0.0246 Tbits: 116620 Terrs:  2866
Coded BER: 0.0009 Tbits: 54880 Terrs:    47

build_linux/src$ ./freedv_rx 700D ../../octave/33.raw  - | aplay -f S16

Next Steps

I’m working steadily towards integrating FreeDV 700D into the FreeDV GUI program so anyone can try it. This will be released in May 2018.

Reading Further

Towards FreeDV 700D
FreeDV 700D – First Over The Air Tests
Steve Ports an OFDM modem from Octave to C
Codec 2 README_ofdm.txt

CryptogramSecurity Vulnerabilities in VingCard Electronic Locks

Researchers have disclosed a massive vulnerability in the VingCard electronic lock system, used in hotel rooms around the world:

With a $300 Proxmark RFID card reading and writing tool, any expired keycard pulled from the trash of a target hotel, and a set of cryptographic tricks developed over close to 15 years of on-and-off analysis of the codes Vingcard electronically writes to its keycards, they found a method to vastly narrow down a hotel's possible master key code. They can use that handheld Proxmark device to cycle through all the remaining possible codes on any lock at the hotel, identify the correct one in about 20 tries, and then write that master code to a card that gives the hacker free reign to roam any room in the building. The whole process takes about a minute.


The two researchers say that their attack works only on Vingcard's previous-generation Vision locks, not the company's newer Visionline product. But they estimate that it nonetheless affects 140,000 hotels in more than 160 countries around the world; the researchers say that Vingcard's Swedish parent company, Assa Abloy, admitted to them that the problem affects millions of locks in total. When WIRED reached out to Assa Abloy, however, the company put the total number of vulnerable locks somewhat lower, between 500,000 and a million.

Patching is a nightmare. It requires updating the firmware on every lock individually.

And the researchers speculate whether or not others knew of this hack:

The F-Secure researchers admit they don't know if their Vinguard attack has occurred in the real world. But the American firm LSI, which trains law enforcement agencies in bypassing locks, advertises Vingcard's products among those it promises to teach students to unlock. And the F-Secure researchers point to a 2010 assassination of a Palestinian Hamas official in a Dubai hotel, widely believed to have been carried out by the Israeli intelligence agency Mossad. The assassins in that case seemingly used a vulnerability in Vingcard locks to enter their target's room, albeit one that required re-programming the lock. "Most probably Mossad has a capability to do something like this," Tuominen says.

Slashdot post.

Worse Than FailureCodeSOD: Philegex

Last week, I was doing some graphics programming without a graphics card. It was low resolution, so I went ahead and re-implemented a few key methods from the Open GL Shader Language in a fashion which was compatible with NumPy arrays. Lucky for me, I was able to draw off many years of experience, I understood both technologies, and they both have excellent documentation which made it easy. After dozens of lines of code, I was able to whip up some pretty flexible image generator functions. I knew the tools I needed, I understood how they worked, and while I was reinventing a wheel, I had a very specific reason.

Philemon Eichin sends us some code from a point in his career where none of these things were true.

Philemon was building a changelog editor. As such, he wanted an easy, flexible way to identify patterns in the text. Philemon knew that there was something that could do that job, but he didn’t know what it was called or how it was supposed to work. So, like all good programmers, Philemon went ahead and coded up what he needed: he invented his own regular expression language, and built his own parser for it.

Thus was born Philegex. Philemon knew that regexes involved slashes, so in his language you needed to put a slash in front of every character you wanted to match exactly. He knew that it involved question marks, so he used the question mark as a wildcard which could match any character. That left the "|" character to mark the preceding character as optional.

So, for example: /P/H/I/L/E/G/E/X|??? would match “PHILEGEX!!!” or “PHILEGEWTF”. A date could be described as: nnnn/.nn/.nn. (YYYY.MM.DD or YYYY.DD.MM)

Living on his own isolated island, without access to the Internet to google “how to match patterns in text”, Philemon also invented his own names for the parts of a regular expression. This glossary will be useful for interpreting the code below.

p1: Pattern / Regex
CT: CharType
CC: currentChar
auf_zu: openParenthesis
Chars: CharClassification

With the preamble out of the way, enjoy Philemon’s approach to regular expressions, implemented elegantly in VB.Net.

Public Class Textmarker
    Const Datum As String = "nn/.nn/.nnnn"

    Private Structure Blocks
        Dim Type As Chars
        Dim Multi As Boolean
        Dim Mode As Char_Mode
        Dim Subblocks() As Blocks
        Dim passed As Boolean
        Dim _Optional As Boolean
    End Structure

    Public Shared Function IsMaskable(p1 As String, Content As String) As Boolean
        Dim ID As Integer = 0
        Dim p2 As Chars
        Dim _Blocks() As Blocks = SplitLine(p1)
        For i As Integer = 0 To Content.Length - 1
            p2 = GetCT(Content(i))
            '#If CONFIG = "Debug" Then
            '            If ID = 2 Then
            '                Stop
            '            End If

            '#End If
            If ID > _Blocks.Length - 1 Then
                Return False
            End If
            Select Case _Blocks(ID).Mode
                Case Char_Mode._Char
                    If p2.Char_V = _Blocks(ID).Type.Char_V Then
                        _Blocks(ID).passed = True
                        If Not _Blocks(ID).Multi = True Then ID += 1
                        Exit Select
                        If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
                            ID += 1
                            GoTo START_CASE
                            If Not _Blocks(ID)._Optional Then Return False

                        End If
                    End If
                Case Char_Mode.Type
                    If _Blocks(ID).Type.Type = Chartypes.any Then
                        _Blocks(ID).passed = True
                        If Not _Blocks(ID).Multi = True Then ID += 1
                        Exit Select

                        If p2.Type = _Blocks(ID).Type.Type Then
                            _Blocks(ID).passed = True
                            If Not _Blocks(ID).Multi = True Then ID += 1
                            Exit Select
                            If _Blocks(ID).passed = True And _Blocks(ID).Multi = True Then
                                ID += 1
                                GoTo START_CASE
                                If _Blocks(ID)._Optional Then
                                    ID += 1
                                    _Blocks(ID - 1).passed = True
                                    Return False

                                End If

                            End If
                        End If

                    End If

            End Select
        Next

        For i = ID To _Blocks.Length - 1
            If _Blocks(ID)._Optional = True Then
                _Blocks(ID).passed = True
                Exit For
            End If
        Next
        If _Blocks(_Blocks.Length - 1).passed Then
            Return True
            Return False
        End If

    End Function

    Private Shared Function GetCT(Char_ As String) As Chars

        If "0123456789".Contains(Char_) Then Return New Chars(Char_, 2)
        If "qwertzuiopüasdfghjklöäyxcvbnmß".Contains((Char.ToLower(Char_))) Then Return New Chars(Char_, 1)
        Return New Chars(Char_, 4)
    End Function

    Private Shared Function SplitLine(ByVal Line As String) As Blocks()
        Dim ret(0) As Blocks
        Dim retID As Integer = -1
        Dim CC As Char
        For i = 0 To Line.Length - 1
            CC = Line(i)
            Select Case CC
                Case "("
                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    Dim ii As Integer = i + 1
                    Dim auf_zu As Integer = 1
                    Do
                        Select Case Line(ii)
                            Case "("
                                auf_zu += 1
                            Case ")"
                                auf_zu -= 1
                            Case "/"
                                ii += 1
                        End Select
                        ii += 1
                    Loop Until auf_zu = 0
                    ret(retID).Subblocks = SplitLine(Line.Substring(i + 1, ii - 1))
                    ret(retID).Mode = Char_Mode.subitems
                    ret(retID).passed = False

                Case "*"
                    ret(retID).Multi = True
                    ret(retID).passed = False
                Case "|"
                    ret(retID)._Optional = True

                Case "/"
                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    ret(retID).Mode = Char_Mode._Char
                    ret(retID).Type = New Chars(Line(i + 1), Chartypes.other)
                    i += 1
                    ret(retID).passed = False

                Case Else

                    ReDim Preserve ret(retID + 1)
                    retID += 1
                    ret(retID).Mode = Char_Mode.Type
                    ret(retID).Type = New Chars(Line(i), TocType(CC))
                    ret(retID).passed = False
            End Select
        Next
        Return ret
    End Function
    Private Shared Function TocType(p1 As Char) As Chartypes
        Select Case p1
            Case "c"
                Return Chartypes._Char
            Case "n"
                Return Chartypes.Number
            Case "?"
                Return Chartypes.any
            Case Else
                Return Chartypes.other
        End Select
    End Function

    Public Enum Char_Mode As Integer
        Type = 1
        _Char = 2
        subitems = 3
    End Enum
    Public Enum Chartypes As Integer
        _Char = 1
        Number = 2
        other = 4
    End Enum
    Structure Chars
        Dim Char_V As Char
        Dim Type As Chartypes
        Sub New(Char_ As Char, typ As Chartypes)
            Char_V = Char_
            Type = typ
        End Sub
    End Structure
End Class

I’ll say this: building a finite state machine, which is what the core of a regex engine is, is perhaps the only case where using a GoTo could be considered acceptable. So this code has that going for it. Philemon was kind enough to share this code with us, so we know he knows it’s bad.
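For contrast, a table-driven finite state machine can be remarkably small. Here's a toy example of my own (nothing from Philemon's code), accepting one or more digits followed by an optional trailing "!":

```python
# The machine is just a transition table plus a set of accepting states.
TRANSITIONS = {
    ("start", "digit"): "number",
    ("number", "digit"): "number",
    ("number", "bang"): "done",
}
ACCEPTING = {"number", "done"}

def classify(ch):
    # Map each input character to a symbol class the table understands.
    return "digit" if ch.isdigit() else ("bang" if ch == "!" else "other")

def accepts(text):
    state = "start"
    for ch in text:
        state = TRANSITIONS.get((state, classify(ch)))
        if state is None:          # no transition: reject immediately
            return False
    return state in ACCEPTING
```

No GoTo required: the "jump" is a dictionary lookup, and the whole control flow is one loop.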

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet Linux AustraliaOpenSTEM: Be Gonski Ready!

Gonski is in the news again with the release of the Gonski 2.0 report. This is most likely to impact on schools and teachers in a range of ways from funding to curriculum. Here at OpenSTEM we can help you to be ahead of the game by using our materials, which are already Gonski-ready! The […]


Harald WelteOsmoDevCon 2018 retrospective

One week ago, the annual Osmocom developer meeting (OsmoDevCon 2018) concluded after four long and intense days with old and new friends (the schedule can be seen here).

It was already the 7th incarnation of OsmoDevCon, and I have to say that it's really great to see the core Osmocom community come together every year, to share their work and experience with their fellow hackers.

Ever since the beginning we've had the tradition that we look beyond our own projects. In 2012, David Burgess was presenting on OpenBTS. In 2016, Ismael Gomez presented about srsUE + srsLTE, and this year we've had the pleasure of having Sukchan Kim coming all the way from Korea to talk to us about his nextepc project (a FOSS implementation of the Evolved Packet Core, the 4G core network).

What has also been a regular "entertainment" part in recent years are the field trip reports to various [former] satellite/SIGINT/... sites by Dimitri Stolnikov.

All in all, the event has become at least as much about the people as about the technology. It's a community of like-minded people, some of whom still work on joint projects, but many of whom work independently and scratch their own itch - whether open source mobile comms related or not.

After some criticism last year, the so-called "unstructured" part of OsmoDevCon has received more time again this year, allowing for exchange among the participants irrespective of any formal / scheduled talk or discussion topic.

In 2018, with the help of c3voc, for the first time ever, we've recorded most of the presentations on video. The results are still in the process of being cut, but are starting to appear at

If you want to join a future OsmoDevCon in person: Make sure you start contributing to any of the many Osmocom member projects now to become eligible. We need you!

Now the sad part is that it will take one entire year until we reconvene. May the Osmocom Developer community live long and prosper. I want to meet you guys for many more years at OsmoDevCon!

There is of course the user-oriented OsmoCon 2018 in October, but that's of course a much larger event with a different audience.

Nevertheless, I'm very much looking forward to that, too.

The OsmoCon 2018 Call for Participation is still running. Please consider submitting talks if you have anything open source mobile communications related to share!

Sociological ImagesBouncers and Bias

Originally Posted at TSP Discoveries

Whether we wear stilettos or flats, jeans or dress clothes, our clothing can allow or deny us access to certain social spaces, like a nightclub. Yet institutional dress codes that dictate who can and cannot wear certain items of clothing target some marginalized communities more than others. For example, recent reports of bouncers turning Black patrons away from nightclubs prompted Reuben A. Buford May and Pat Rubio Goldsmith to test whether urban nightclubs in Texas deny entrance to Black and Latino men through discriminatory dress code policies.

Photo Credit: Bruce Turner, Flickr CC

For the study, recently published in Sociology of Race and Ethnicity, the authors recruited six men between the ages of 21 and 23. They selected three pairs of men by race — White, Black, and Latino — to attend 53 urban nightclubs in Dallas, Houston, and Austin. Each pair shared similar racial, socioeconomic, and physical characteristics. One individual from each pair dressed as a “conformist,” wearing Ralph Lauren polos, casual shoes, and nice jeans that adhered to the club’s dress code. The other individual dressed in stereotypically urban dress, wearing “sneakers, blue jean pants, colored T-shirt, hoodie, and a long necklace with a medallion.” The authors categorized an interaction as discrimination if a bouncer denied a patron entrance based on his dress or if the bouncer enforced particular dress code rules, such as telling a patron to tuck in their necklace. Each pair attended the same nightclub at peak hours three to ten minutes apart. The researchers exchanged text messages with each pair to document any denials or accommodations.

Black men were denied entrance into nightclubs 11.3 percent of the time (six times), while White and Latino men were both denied entry 5.7 percent of the time (three times). Bouncers claimed the Black patrons were denied entry because of their clothing, despite allowing similarly dressed White and Latino men to enter. Even when bouncers did not deny entrance, they demanded that patrons tuck in their necklaces to accommodate nightclub policy. This occurred two times for Black men, three times for Latino men, and one time for White men. Overall, Black men encountered more discriminatory experiences from nightclub bouncers, highlighting how institutions continue to police Black bodies through seemingly race-neutral rules and regulations.
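The headline percentages follow directly from the raw denial counts over the 53 club visits each pair made:

```python
# Denial rates as percentages of the 53 nightclub visits per pair.
visits = 53
black_denials = 6
white_or_latino_denials = 3

print(round(black_denials / visits * 100, 1))            # 11.3
print(round(white_or_latino_denials / visits * 100, 1))  # 5.7
```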

Amber Joy is a PhD student in sociology at the University of Minnesota. Her current research interests include punishment, sexual violence and the intersections of race, gender, age, and sexuality. Her work examines how state institutions construct youth victimization.

Planet Linux AustraliaDavid Rowe: Solar Boat

Two years ago when I bought my Hartley TS16 sail boat I dreamed of converting it to solar power. In January I installed a Torqueedo electric outboard and a 24V, 100AH Lithium battery back. That’s working really well. Next step was to work out a way to mount some surplus 200W solar panels on the boat. The idea is to (temporarily) detach the mast, and use the boat on the river Murray, a major river that passes within 100km of where I live in Adelaide, South Australia.

Over the last few weeks I worked with my friend Gary (VK5FGRY) to mount solar panels on the TS16. Gary designed and fabricated some legs from 40mm square aluminium:

With a matching rubber foot on each leg, the panels sit firmly on the gel coat of the boat, and are held down by ropes or octopus straps.

The panels’ maximum power point is at 28.5V (and 7.5A) which is close to the battery pack under charge (3.3*8 = 26.4V) so I decided to try a direct DC connection – no inverter or charger. I ran some tests in the back yard: each panel was delivering about 4A into the battery pack, and two in parallel delivered about 8A. I didn’t know solar panels could be connected in parallel, but happily this means I can keep my direct DC connection. Horizontal panels cost a few amps – a good example of why solar panels are usually angled at the sun. However the azimuth of the boat will be always changing so horizontal is the only choice. The panels are very sensitive to shadowing; a hand placed on a panel, or a small shadow is enough to drop the current to 0A. OK, so now I had a figure for panel output – about 4A from each panel.

This didn’t look promising. Based on my sea voyages with the Torqueedo, I estimated I would need 800W (about 30A) to maintain my target houseboat speed of 4 knots (7 km/hr); that’s 8 panels which won’t fit on my boat! However the current draw on the river might be different without tides, and waves, and I wasn’t sure exactly how many AH I would get over a day from the sun. Would trees on the river bank shadow the panels?

So it was off to Younghusband on the Murray, where our friend Chris (VK5CP) was hosting a bunch of Ham Radio guys for an extended Anzac day/holiday weekend. It’s Autumn here, with generally sunny days of about 23C. The sun is up from 6:30am to 6pm.

Turns out that even with two panels – the solar boat was really practical! Over three days we made three trips of 2 hours each, at speeds of 3 to 4 knots, using only the panels for charging. Each day I took friends out, and they really loved it – so quiet and peaceful, and the river scenery is really nice.

After an afternoon cruise I would park the boat on the South side of the river to catch the morning sun, which in Autumn appears to the North here in Australia. I measured the panel current as 2A at 7am, 6A at 9am, 9A at 10am, and much to my surprise the pack was charged by 11am! In fact I had to disconnect the panels as the cell voltage was pushing over 4V.

On a typical run upriver we measured 700W = 4kt, 300W = 3.1kt, 150W = 2.5kt, and 8A into the panels in full sun. Panel current dropped to 2A with cloud which was a nasty surprise. We experienced no shadowing issues from trees. The best current we saw at about noon was 10A. We could boost the current by 2A by putting three guys on one side of the boat and tipping the entire boat (and solar panels) towards the sun!

Even partial input from solar can have a big impact. Lets say at 4 knots (30A) I can drive for 2 hours using 60% of my 100AH pack. If I back off the speed a little, so I’m drawing 20A, then 10A from the panels will extend my driving time to 6 hours.
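Those endurance figures are easy to sanity-check with a quick back-of-envelope calculation, using the numbers above:

```python
# 100 Ah pack; budget 60% of its capacity for a trip.
usable_ah = 100 * 0.60

# 30 A draw at 4 knots with no solar input:
hours_full_speed = usable_ah / 30          # 2 hours

# Throttle back to 20 A, with 10 A coming in from the panels,
# for a net 10 A drain on the pack:
hours_with_solar = usable_ah / (20 - 10)   # 6 hours

print(hours_full_speed, hours_with_solar)
```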

I slept on the boat, and one night I found a paddle steamer (the Murray Princess) parked across the river from me, all lit up with fairy lights:

On our final adventure, my friend Darin (VK5IX) and I were entering Lake Carlet, when suddenly the prop hit something very hard, “crack crack crack”. My poor prop shaft was bent and my propeller is wobbling from side to side:

We gently e-motored back and actually recorded our best results – 3 knots on 300W, 10A from the panels, 10A to the motor.

With 4 panels I would have a very practical solar boat, capable of 4-6 hours cruising a day just on solar power. The 2 extra panels could be mounted as a canopy over the rear of the boat. I have an idea about an extended solar adventure of several days, for example 150km from Younghusband to Goolwa.

Reading Further

Engage the Silent Drive
Lithium Cell Amp Hour Tester and Electric Sailing

Sam VargheseAppointing Justin Langer as coach will not solve Australia’s problems

In February, the Australian cricket team was in serious trouble after some players were caught cheating on the field.

The captain, the vice-captain and the player who carried out the planned cheating all lost their places and were suspended from cricket. Captain Steve Smith and vice-captain David Warner were banned for a year and Cameron Bancroft for nine months.

Coach Darren Lehmann retained his job but resigned soon thereafter.

The new coach, appointed recently, is Justin Langer. Through this appointment, Cricket Australia hopes to avoid any incidents that bring more disrepute on the team.

It would have been better if they had appointed former Test fast bowler Jason Gillespie. Langer, it may be recalled, tried to cheat during a Test match against Sri Lanka in 2004.

To quote from a published report: “In a bizarre incident, Langer brushed and dislodged a bail midway through the 80th over of Sri Lanka’s innings, an action he later described as unintentional. After left-handed batsman Hashan Tillakaratne completed a single to fine-leg, Australia’s fieldsmen crossed the pitch in preparation for the right-handed Thilan Samaraweera. Langer walked between the wicket and Samaraweera and broke the stumps with his hand.

“Later, Australian captain Ricky Ponting asked umpires Steve Bucknor and Dave Orchard about the fallen bail, but neither had seen Langer as he crossed the pitch. Accordingly, Bucknor and Orchard stopped play for three minutes while the matter was taken to third umpire Peter Manuel.

“Manuel ruled Tillakaratne not out. A reportedly upset Langer consulted team manager Steve Bernard during the lunch break, claiming to have been unaware that he was responsible for dislodging the bail.

“Bernard immediately relayed Langer’s sentiments to ICC match referee Chris Broad who, after receiving a report from the umpires at tea, charged the West Australian with a code of conduct breach. Broad later decided Langer had made an innocent mistake. However, he reminded the batsman to take more care in similar situations in the future.”

Langer is also the same man who denied making contact with the ball when he edged Wasim Akram to the keeper during a Test match in Hobart in 1999. Australia won that Test by four wickets, chasing down 369.

In 2016, Langer admitted that he had cheated in that case too.

And the men who run cricket in Australia think he is the right man to coach the team, just after it was involved in a case of ball-tampering. Welcome to the new coach, a good likeness of the old one.

Here are some highlights of Lehmann’s cricketing career:

Jan 2003: Banned for five one-day internationals for racial outburst against Sri Lankans, calling them black cunts.

March 2006: Reprimanded for comments against Cricket Australia sponsor ING: “a 9.30am start, I don’t know how many times you have to say it. Thank God we might be changing sponsors. That might allow us to play at different times.”

July 2006: Censured by Yorkshire for making an obscene gesture to the crowd.

December 2012: Fined US$3000 and suspended for two years after questioning the bowling action of West Indian Marlon Samuels.

“He couldn’t bowl in the IPL last year. Yet he can bowl in the BBL. We’ve got to seriously look at what we’re doing. Are we here to play cricket properly or what?”

August 2013: Accused Stuart Broad of “Blatant cheating” during 2013 Ashes series. On Broad’s decision not to walk after edging to slip: “I just hope the Australian public give it to him right from the word go and I hope he cries and goes home.”

August 2013: Fined 20% of his match fee for the accusation against Broad.

April 2014: Admits that the 2003 racial outburst against Sri Lanka was the “biggest mistake of my life”.

March 2017: Lehmann defended Steve Smith after the then captain was caught on camera looking up to the pavilion for guidance before making a DRS call in Bangalore.

March 2018: Lehmann defended David Warner after the incident where the opener nearly came to blows with Quinton de Kock in the pavilion in Durban. On this he said: “There are things that cross the line and evoke emotion and you’ve got to deal with that. Both sides are going to push the boundaries. That’s part and parcel of Test match cricket.”

March 2018: Lehmann called Newlands’ crowd abuse “disgraceful” after Warner’s exchange with a fan.

March 2018: Resigns after ball tampering controversy.

The only thing left for Cricket Australia to do now is to appoint drug cheat Shane Warne to be the team’s adviser. That would be the clincher.


Planet Linux AustraliaJulien Goodwin: PoE termination board

For my next big project I'm planning on making it run using Power over Ethernet. Back in March I designed a quick circuit using the TI TPS2376-H PoE termination chip, and an LMR16020 switching regulator to drop the ~48v coming in down to 5v. There's also a second stage low-noise linear regulator (ST LDL1117S33R) to further drop it down to 3.3v, but as it turns out the main chip I'm using does its own 5->3.3v conversion already.
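Why bother with a switching first stage at all? A rough dissipation comparison makes it obvious. The numbers below are illustrative only, not measurements from my board:

```python
# Getting 48 V down to 3.3 V at an assumed 0.5 A load.
v_in, v_mid, v_out, i_load = 48.0, 5.0, 3.3, 0.5

# Option A: a single linear regulator burns the entire drop as heat.
p_linear = (v_in - v_out) * i_load                       # ~22.4 W

# Option B: switcher down to 5 V (assume ~90% efficiency), then a
# linear regulator (like the LDL1117) drops the last 1.7 V.
p_sw_loss = (v_mid * i_load) / 0.90 - (v_mid * i_load)   # ~0.28 W
p_ldo_loss = (v_mid - v_out) * i_load                    # ~0.85 W

print(p_linear, p_sw_loss + p_ldo_loss)
```

Roughly 22 W of heat versus about 1 W, which is why the linear stage is only used for the small, low-noise final step.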

Because I was lazy, and the pricing was reasonable, I got these boards manufactured by the same company I'd used for the USB-C termination boards I did a while back.

Here's the board running a Raspberry Pi 3B+, as it turns out I got lucky and my board is set up for the same input as the 3B+ supplies.

One really big warning, this is a non-isolated supply, which, in general, is a bad idea for PoE. For my specific use case there'll be no exposed connectors or metal, so this should be safe, but if you want to use PoE in general I'd suggest using some of the isolated convertors that are available with integrated PoE termination.

For this series I'm going to try and also make some notes on the mistakes I've made with these boards to help others, for this board:
  • I failed to add any test pins, given this was the first try I really should have, being able to inject power just before the switching convertor was helpful while debugging, but I had to solder wires to the input cap to do that.
  • Similarly, I should have had a 5v output pin, for now I've just been shorting the two diodes I had near the output which were intended to let me switch input power between two feeds.
  • The last, and the only actual problem with the circuit was that when selecting which exact parts to use I optimised by choosing the same diode for both input protection & switching, however this was a mistake, as the switcher needed a Schottky diode, and one with better ratings in other ways than the input diode. With the incorrect diode the board actually worked fine under low loads, but would quickly go into thermal shutdown if asked to supply more than about 1W. With the diode swapped to a correctly rated one it now supplies 10W just fine.
  • While debugging the previous I also noticed that the thermal pads on both main chips weren't well connected through. It seems the combination of via-in-thermal-pad (even tented), along with Kicad's normal reduction in paste in those large pads, plus my manufacturer's use of a fairly thin application of paste all contributed to this. Next time I'll probably avoid via-in-pad.

Coming soon will be a post about the GPS board, but I'm still testing bits of that board out, plus waiting for some missing parts (somehow not only did I fail to order 10k resistors, I didn't already have some in stock).

Planet Linux AustraliaChris Smart: Fedora on ODROID-HC1 mini NAS (ARMv7)

EDIT: I am having a problem where the Fedora kernel does not always detect the disk drive (whether cold, warm or hotplugged). I’ve built upstream 4.16 kernel and it works perfectly every time. It doesn’t seem to be uas related, disabling that on the usb-storage module doesn’t make any difference. I’m looking into it…

Hardkernel is a Korean company that makes various embedded ARM based systems, which it calls ODROID.

One of their products is the ODROID-HC1, a mini NAS designed to take a single 2.5″ SATA drive (HC stands for “Home Cloud”) which comes with 2GB RAM and a Gigabit Ethernet port. There is also a 3.5″ model called the HC2. Both of these are based on the ODROID-XU4, which itself is based on the previous iteration ODROID-XU3. All of these are based on the Samsung Exynos5422 SOC and should work with the following steps.

The Exynos SOC needs proprietary first stage bootloaders which are embedded in the first 1.4MB or so at the beginning of the SD card in order to load U-Boot. As these binary blobs are not re-distributable, Fedora cannot support these devices out of the box, however all the other bits are available including the kernel, device tree and U-Boot. So, we just need to piece it all together and the result is a stock Fedora system!

To do this you’ll need the ODROID device, a power supply (5V/4A for HC1, 12V/2A for HC2), one of their UART adapters, an SD card (UHS-I) and probably a hard drive if you want to use it as a NAS (you may also want a battery for the RTC and a case).

ODROID-HC1 with UART, RTC battery, SD card and 2.5″ drive.

Note that the default Fedora 27 ARM image does not support the Realtek RTL8153 Ethernet adapter out of the box (it does after a kernel upgrade) so if you don’t have a USB Ethernet dongle handy we’ll download the kernel packages on our host, save them to the SD card and install them on first boot. The Fedora 28 image works out of the box, so if you’re installing 28 you can skip that step.

Download the Fedora Minimal ARM server image and save it in your home dir.

Install the Fedora ARM installer and U-Boot bootloader files for the device on your host PC.

sudo dnf install fedora-arm-installer uboot-images-armv7

Insert your SD card into your computer and note the device (mine is /dev/mmcblk0) using dmesg or df commands. Once you know that, open a terminal and let’s write the Fedora image to the SD card! Note that we are using none as the target because it’s not a supported board and we will configure the bootloader manually.

sudo fedora-arm-image-installer \
--target=none \
--image=Fedora-Minimal-armhfp-27-1.6-sda.raw.xz \
--media=/dev/mmcblk0 \
--resizefs \
--norootpass
First things first, we need to enable the serial console and turn off cpuidle else it won’t boot. We do this by mounting the boot partition on the SD card and modifying the extlinux bootloader configuration.

sudo mount /dev/mmcblk0p2 /mnt
sudo sed -i "s|append|& cpuidle.off=1 \
console=tty1 console=ttySAC2,115200n8|" \
/mnt/extlinux/extlinux.conf

As mentioned, the kernel that comes with Fedora 27 image doesn’t support the Ethernet adapter, so if you don’t have a spare USB Ethernet dongle, let’s download the updates now. If you’re using Fedora 28 this is not necessary.

cd /mnt
sudo wget \ \
cd ~/

Unmount the boot partition.

sudo umount /mnt

Now, we can embed U-Boot and the required bootloaders into the SD card. To do this we need to download the files from Hardkernel along with their script which writes the blobs (note that we are downloading the files for the XU4, not HC1, as they are compatible). We will tell the script to use the U-Boot image we installed earlier, this way we are using Fedora’s U-Boot not the one from Hardkernel.

Download the required files from Hardkernel.

mkdir hardkernel ; cd hardkernel
wget \ \ \
chmod a+x

Copy the Fedora U-Boot files into the local dir.

cp /usr/share/uboot/odroid-xu3/u-boot.bin .

Finally, run the fusing script to embed the files onto the SD card, passing in the device for your SD card.
sudo ./ /dev/mmcblk0

That’s it! Remove your SD card and insert it into your ODROID, then plug the UART adapter into a USB port on your computer and connect to it with screen (check dmesg for the port number, generally ttyUSB0).

sudo screen /dev/ttyUSB0

Now power on your ODROID. If all goes well you should see the SOC initialise, load Fedora’s U-Boot and boot Fedora to the welcome setup screen. Complete this and then log in as root or your user you have just set up.

Welcome configuration screen for Fedora ARM.

If you’re running Fedora 27 image, install the kernel updates, remove the RPMs and reboot the device (skip this if you’re running Fedora 28).
sudo dnf install --disablerepo=* /boot/*rpm
sudo rm /boot/*rpm
sudo reboot

Fedora login over serial connection.

Once you have rebooted, the Ethernet adapter should work and you can do your regular updates

sudo dnf update

You can find your SATA drive at /dev/sda where you should be able to partition, format, mount it, share it and well, do whatever you want with the box.

You may wish to take note of the IP address and/or configure static networking so that you can SSH in once you unplug the UART.

Enjoy your native Fedora embedded ARM Mini NAS 🙂


CryptogramFriday Squid Blogging: Bizarre Contorted Squid

This bizarre contorted squid might be a new species, or a previously known species exhibiting a new behavior. No one knows.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecuritySecurity Trade-Offs in the New EU Privacy Law

On two occasions this past year I’ve published stories here warning about the prospect that new European privacy regulations could result in more spams and scams ending up in your inbox. This post explains in a question and answer format some of the reasoning that went into that prediction, and responds to many of the criticisms leveled against it.

Before we get to the Q&A, a bit of background is in order. On May 25, 2018 the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires companies to get affirmative consent for any personal information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — has proposed redacting key bits of personal data from WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free.

But in a bid to help registrars comply with the GDPR, ICANN is moving forward on a plan to remove critical data elements from all public WHOIS records. Under the new system, registrars would collect all the same data points about their customers, yet limit how much of that information is made available via public WHOIS lookups.

The data to be redacted includes the name of the person who registered the domain, as well as their phone number, physical address and email address. The new rules would apply to all domain name registrars globally.

ICANN has proposed creating an “accreditation system” that would vet access to personal data in WHOIS records for several groups, including journalists, security researchers, and law enforcement officials, as well as intellectual property rights holders who routinely use WHOIS records to combat piracy and trademark abuse.

But at an ICANN meeting in San Juan, Puerto Rico last month, ICANN representatives conceded that a proposal for how such a vetting system might work probably would not be ready until December 2018. Assuming ICANN meets that deadline, it could be many months after that before the hundreds of domain registrars around the world take steps to adopt the new measures.

In a series of posts on Twitter, I predicted that the WHOIS changes coming with GDPR will likely result in a noticeable increase in cybercrime — particularly in the form of phishing and other types of spam. In response to those tweets, several authors on Wednesday published an article for Georgia Tech’s Internet Governance Project titled, “WHOIS afraid of the dark? Truth or illusion, let’s know the difference when it comes to WHOIS.”

The following Q&A is intended to address many of the more misleading claims and assertions made in that article.

Cyber criminals don’t use their real information in WHOIS registrations, so what’s the big deal if the data currently available in WHOIS records is no longer in the public domain after May 25?

I can point to dozens of stories printed here — and probably hundreds elsewhere — that clearly demonstrate otherwise. Whether or not cyber crooks do provide their real information is beside the point. ANY information they provide — and especially information that they re-use across multiple domains and cybercrime campaigns — is invaluable both for grouping cybercriminal operations and for ultimately identifying who’s responsible for these activities.

To understand why data reuse in WHOIS records is so common among crooks, put yourself in the shoes of your average scammer or spammer — someone who has to register dozens or even hundreds or thousands of domains a week to ply their trade. Are you going to create hundreds or thousands of email addresses and fabricate as many personal details to make your WHOIS listings that much harder for researchers to track? The answer is that those who take this extraordinary step are by far and away the exception rather than the rule. Most simply reuse the same email address and phony address/phone/contact information across many domains as long as it remains profitable for them to do so.

This pattern of WHOIS data reuse doesn’t just extend across a few weeks or months. Very often, if a spammer, phisher or scammer can get away with re-using the same WHOIS details over many years without any deleterious effects to their operations, they will happily do so. Why they may do this is their own business, but nevertheless it makes WHOIS an incredibly powerful tool for tracking threat actors across multiple networks, registrars and Internet epochs.

All domain registrars offer free or a-la-carte privacy protection services that mask the personal information provided by the domain registrant. Most cybercriminals — unless they are dumb or lazy — are already taking advantage of these anyway, so it’s not clear why masking domain registration for everyone is going to change the status quo by much. 

It is true that some domain registrants do take advantage of WHOIS privacy services, but based on countless investigations I have conducted using WHOIS to uncover cybercrime businesses and operators, I’d wager that cybercrooks more often do not use these services. Not infrequently, even when they do use WHOIS privacy options there are still gaps in coverage at some point in the domain’s history (such as when a registrant switches hosting providers) that are indexed by historic WHOIS records and offer a brief window of visibility into the details behind the registration.

This is demonstrably true even for organized cybercrime groups and for nation state actors, and these are arguably some of the most sophisticated and savvy cybercriminals out there.

It’s worth adding that if so many cybercrooks seem nonchalant about adopting WHOIS privacy services it may well be because they reside in countries where the rule of law is not well-established, or their host country doesn’t particularly discourage their activities so long as they’re not violating the golden rule — namely, targeting people in their own backyard. And so they may not particularly care about covering their tracks. Or in other cases they do care, but nevertheless make mistakes or get sloppy at some point, as most cybercriminals do.

The GDPR does not apply to businesses — only to individuals — so there is no reason researchers or anyone else should be unable to find domain registration details for organizations and companies in the WHOIS database after May 25, right?

It is true that the European privacy regulations as they relate to WHOIS records do not apply to businesses registering domain names. However, the domain registrar industry — which operates on razor-thin profit margins and which has long sought to be free from any WHOIS requirements or accountability whatsoever — won’t exactly be tripping over themselves to add more complexity to their WHOIS efforts just to make a distinction between businesses and individuals.

As a result, registrars simply won’t make that distinction because there is no mandate that they must. They’ll just adopt the same WHOIS data collection and display policies across the board, regardless of whether the WHOIS details for a given domain suggest that the registrant is a business or an individual.

But the GDPR only applies to data collected about people in Europe, so why should this impact WHOIS registration details collected on people who are outside of Europe?

Again, domain registrars are the ones collecting WHOIS data, and they are most unlikely to develop WHOIS record collection and dissemination policies that seek to differentiate between entities covered by GDPR and those that may not be. Such an attempt would be fraught with legal and monetary complications that they simply will not take on voluntarily.

What’s more, the domain registrar community tends to view the public display of WHOIS data as a nuisance and a cost center. They have mainly only allowed public access to WHOIS data because ICANN’s contracts state that they should. So, from the registrar community’s point of view, the less information they must make available to the public, the better.

Like it or not, the job of tracking down and bringing cybercriminals to justice falls to law enforcement agencies — not security researchers. Law enforcement agencies will still have unfettered access to full WHOIS records.

As it relates to inter-state crimes (i.e., the bulk of all Internet abuse), law enforcement — at least in the United States — is divided into two main components: The investigative side (i.e., the FBI and Secret Service) and the prosecutorial side (the state and district attorneys who actually initiate court proceedings intended to bring an accused person to justice).

Much of the legwork done to provide the evidence needed to convince prosecutors that there is even a case worth prosecuting is performed by security researchers. The reasons why this is true are too numerous to delve into here, but the safe answer is that law enforcement investigators typically are more motivated to focus on crimes for which they can readily envision someone getting prosecuted — and because very often their plate is full with far more pressing, immediate and local (physical) crimes.

Admittedly, this is a bit of a blanket statement because in many cases local, state and federal law enforcement agencies will do this often tedious legwork of cybercrime investigations on their own — provided it involves or impacts someone in their jurisdiction. But due in large part to these jurisdictional issues, politics and the need to build prosecutions around a specific locality when it comes to cybercrime cases, very often law enforcement agencies tend to miss the forest for the trees.

Who cares if security researchers will lose access to WHOIS data, anyway? To borrow an assertion from the Internet Governance article, “maybe it’s high time for security researchers and businesses that harvest personal information from WHOIS on an industrial scale to refine and remodel their research methods and business models.”

This is an alluring argument. After all, the technology and security industries claim to be based on innovation. But consider carefully how anti-virus, anti-spam or firewall technologies currently work. The unfortunate reality is that these technologies are still mostly powered by humans, and those humans rely heavily on access to key details about domain reputation and ownership history.

Those metrics for reputation weigh a host of different qualities, but a huge component of that reputation score is determining whether a given domain or Internet address has been connected to any other previous scams, spams, attacks or other badness. We can argue about whether this is the best way to measure reputation, but it doesn’t change the prospect that many of these technologies will in all likelihood perform less effectively after WHOIS records start being heavily redacted.
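To make that concrete, here is a deliberately simplified sketch of such a reputation score. The signals and weights are invented for illustration; real engines combine far more inputs, but the shape is the same — prior badness tied to a domain's history drags its score down:

```python
def reputation_score(history):
    """Toy domain-reputation score on a 0-100 scale (higher = more trusted).

    The signals and weights here are invented for illustration; the point
    is that abuse history tied to a domain dominates the result.
    """
    score = 50  # neutral starting point for an unknown domain
    score -= 30 * history.get("past_spam_reports", 0)
    score -= 40 * history.get("past_phishing_incidents", 0)
    # A long, clean registration history earns a small bonus, capped.
    score += min(history.get("age_days", 0) // 365, 10)
    return max(0, min(100, score))


old_clean = reputation_score({"age_days": 3650})                  # 60
fresh_phisher = reputation_score({"past_phishing_incidents": 1})  # 10
```

Strip the ownership-history inputs out of a function like this and every domain starts looking the same: neutral.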

Don’t advances in artificial intelligence and machine learning obviate the need for researchers to have access to WHOIS data?

This sounds like a nice idea, but again it is far removed from current practice. Ask anyone who regularly uses WHOIS data to determine reputation or to track and block malicious online threats and I’ll wager you will find the answer is that these analyses are still mostly based on manual lookups and often thankless legwork. Perhaps such trendy technological buzzwords will indeed describe the standard practice of the security industry at some point in the future, but in my experience this does not accurately depict the reality today.

Okay, but Internet addresses are pretty useful tools for determining reputation. The sharing of IP addresses tied to cybercriminal operations isn’t going to be impacted by the GDPR, is it? 

That depends on the organization doing the sharing. I’ve encountered at least two cases in the past few months wherein European-based security firms have been reluctant to share Internet address information at all in response to the GDPR — based on a perceived (if not overly legalistic) interpretation that somehow this information also might be considered personally identifying data. This reluctance to share such information out of a concern that doing so might land the sharer in legal hot water can indeed have a chilling effect on the important sharing of threat intelligence across borders.

According to the Internet Governance article, “If you need to get in touch with a website’s administrator, you will be able to do so in what is a less intrusive manner of achieving this purpose: by using an anonymized email address, or webform, to reach them (The exact implementation will depend on the registry). If this change is inadequate for your ‘private detective’ activities and you require full WHOIS records, including the personal information, then you will need to declare to a domain name registry your specific need for and use of this personal information. Nominet, for instance, has said that interested parties may request the full WHOIS record (including historical data) for a specific domain and get a response within one business day for no charge.”

I’m sure this will go over tremendously both with the hacked sites used to host phishing and/or malware download pages, and with the people phished or served with malware in the added time it will take to relay and approve said requests.

According to a Q3 2017 study (PDF) by security firm Webroot, the average lifespan of a phishing site is between four and eight hours. How is waiting 24 hours before being able to determine who owns the offending domain going to be helpful to either the hacked site or its victims? It also doesn’t seem likely that many other registrars will volunteer for this 24-hour turnaround duty — and indeed no others have publicly demonstrated any willingness to take on this added cost and hassle.

I’ve heard that ICANN is pushing for a delay in the GDPR as it relates to WHOIS records, to give the registrar community time to come up with an accreditation system that would grant vetted researchers access to WHOIS records. Why isn’t that a good middle ground?

It might be if ICANN hadn’t dragged its heels in taking GDPR seriously until perhaps the past few months. As it stands, the experts I’ve interviewed see little prospect for such a system being ironed out or in gaining necessary traction among the registrar community to accomplish this anytime soon. And most experts I’ve interviewed predict it is likely that the Internet community will still be debating about how to create such an accreditation system a year from now.

Hence, it’s not likely that WHOIS records will continue to be anywhere near as useful to researchers in a month or so as they were previously. And this reality will continue for many months to come — if indeed some kind of vetted WHOIS access system is ever envisioned and put into place.

After I registered a domain name using my real email address, I noticed that address started receiving more spam emails. Won’t hiding email addresses in WHOIS records reduce the overall amount of spam I can expect when registering a domain under my real email address?

That depends on whether you believe any of the responses to the bolded questions above. Will that address be spammed by people who try to lure you into paying them to register variations on that domain, or to entice you into purchasing low-cost Web hosting services from some random or shady company? Probably. That’s exactly what happens to almost anyone who registers a domain name that is publicly indexed in WHOIS records.

The real question is whether redacting all email addresses from WHOIS will result in overall more bad stuff entering your inbox and littering the Web, thanks to reputation-based anti-spam and anti-abuse systems failing to work as well as they did before GDPR kicks in.

It’s worth noting that ICANN created a working group to study this exact issue, which noted that “the appearance of email addresses in response to WHOIS queries is indeed a contributor to the receipt of spam, albeit just one of many.” However, the report concluded that “the Committee members involved in the WHOIS study do not believe that the WHOIS service is the dominant source of spam.”

Do you have something against people not getting spammed, or against better privacy in general? 

To the contrary, I have worked the majority of my professional career to expose those who are doing the spamming and scamming. And I can say without hesitation that an overwhelming percentage of that research has been possible thanks to data included in public WHOIS registration records.

Is the current WHOIS system outdated, antiquated and in need of an update? Perhaps. But scrapping the current system without establishing anything in between while laboring under the largely untested belief that in doing so we will achieve some kind of privacy utopia seems myopic.

If opponents of the current WHOIS system are being intellectually honest, they will make the following argument and stick to it: By restricting access to information currently available in the WHOIS system, whatever losses or negative consequences on security we may suffer as a result will be worth the cost in terms of added privacy. That’s an argument I can respect, if not agree with.

But for the most part that’s not the refrain I’m hearing. Instead, what this camp seems to be saying is if you’re not on board with the WHOIS changes that will be brought about by the GDPR, then there must be something wrong with you, and in any case here are a bunch of thinly-sourced reasons why the coming changes might not be that bad.

Rondam RamblingsCredit where it's due

Richard Nixon is rightfully remembered as one of the great villains of American democracy.  But he wasn't all bad.  He opened relations with China, appointed four mostly sane Supreme Court justices, and oversaw the establishment of the EPA among many other accomplishments.  Likewise, I believe that Donald Trump will eventually go down in history as one of the worst (if not the worst) president

Rondam RamblingsAn open letter to Jack Phillips

[Jack Phillips is the owner of the Masterpiece Cake Shop in Lakewood, Colorado.  Mr. Phillips is being sued by the Colorado Civil Rights Commission for refusing to make a wedding cake for a gay couple.  His case is currently before the U.S. Supreme Court.  Yesterday Mr. Phillips published an op-ed in The Washington Post to which this letter is a response.] Dear Mr. Phillips: Imagine your child

Rondam RamblingsPaul Ryan forces out House chaplain

Just in case there was the slightest ember of hope in your mind that Republicans actually care about religious freedom and are not just odious hypocritical power-grubbing opportunists, this should extinguish it once and for all: House Chaplain Patrick Conroy’s sudden resignation has sparked a furor on Capitol Hill, with sources in both parties saying he was pushed out by Speaker Paul Ryan (R-Wis

CryptogramTSB Bank Disaster

This seems like an absolute disaster:

The very short version is that a UK bank, TSB, which had been merged into and then many years later was spun out of Lloyds Bank, was bought by the Spanish bank Banco Sabadell in 2015. Lloyds had continued to run the TSB systems and was to transfer them over to Sabadell over the weekend. It's turned out to be an epic failure, and it's not clear if and when this can be straightened out.

It is bad enough that the bank’s IT problems have been so severe and protracted that a major newspaper, The Guardian, created a live blog for it that has now been running for two days.

The more serious issue is that customers still can’t access their online accounts and, even more disconcerting, are sometimes being allowed into other people’s accounts, which suggests there are massive problems with data integrity. That’s a nightmare to sort out.

Even worse, the fact that this situation has persisted strongly suggests that Lloyds went ahead with the migration without allowing for a rollback.

This seems to be a mistake, and not enemy action.

Worse Than FailureError'd: Billboards Show Obvious Disasters

"Actually, this board is outside a casino in Sheffield which is next to the church, but we won't go there," writes Simon.


Wouter wrote, "The local gas station is now running ads for Windows 10 updates."


"If I were to legally change my name to a GUID, this is exactly what I'd pick," Lincoln K. wrote.


Robert F. writes, "Copy/Paste? Bah! If you really want to know how many log files are being generated per server, every minute, you're going to have to earn it by typing out this 'easy' command."


"I imagine someone pushed the '10 items or fewer' rule on the self-checkout kiosk just a little too far," Michael writes.


Wojciech wrote, "To think - someone actually doodled this JavaScript code!"


[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.


Cory DoctorowRaleigh-Durham, I’m headed your way! CORRECTED!

CORRECTION! The Flyleaf event is at 6PM, not 7!

I’m delivering the annual Kilgour lecture tomorrow morning at 10AM at UNC, and I’ll be speaking at Flyleaf Books at 6PM — be there or be oblong!

Also, if you’re in Boston, Waterloo or Chicago, you can catch me in the coming weeks!

Abstract: For decades, regulators and corporations have viewed the internet and the computer as versatile material from which special-purpose tools can be fashioned: pornography distribution systems, jihadi recruiting networks, video-on-demand services, and so on.

But the computer is an unprecedented general purpose device capable of running every program we can express in symbolic language, and the internet is the nervous system of the 21st century, webbing these pluripotent computers together.

For decades, activists have been warning regulators and corporations about the peril in getting it wrong when we make policies for these devices, and now the chickens have come home to roost. Frivolous, dangerous and poorly thought-through choices have brought us to the brink of electronic ruin.

We are balanced on the knife-edge of peak indifference — the moment at which people start to care and clamor for action — and the point of no return, the moment at which it’s too late for action to make a difference. There was never a more urgent moment to fight for a free, fair and open internet — and there was never an internet more capable of coordinating that fight.

Cory DoctorowLittle Brother is 10 years old today: I reveal the secret of writing future-proof science fiction

It’s been ten years since the publication of my bestselling novel Little Brother; though the novel was written more than a decade ago, and though it deals with networked computers and mobile devices, it remains relevant, widely read, and widely cited even today.

In a new essay, I write about my formula for creating fiction about technology that stays relevant — the secret is basically to assume that people will be really stupid about technology for the foreseeable future.

And now we come to how to write fiction about networked computers that stays relevant for 12 years and 22 years and 50 years: just write stories in which computers can run all the programs, and almost no one understands that fact. Just write stories in which authority figures, and mass movements, and well-meaning people, and unethical businesses, all insist that because they have a *really good reason* to want to stop some program from running or some message from being received, it *must* be possible.

Write those stories, and just remember that because computers can run every program and the internet can carry any message, every device will someday be a general-purpose computer in a fancy box (office towers, cars, pacemakers, voting machines, toasters, mixer-taps on faucets) and every message will someday be carried on the public internet. Just remember that the internet makes it easier for people of like mind to find each other and organize to work together for whatever purpose stirs them to action, including terrible ones and noble ones. Just remember that cryptography works, that your pocket distraction rectangle can scramble messages so thoroughly that they can never, ever be descrambled, not in a trillion years, without your revealing the passphrase used to protect them. Just remember that swords have two edges, that the universe doesn’t care how badly you want something, and that every time we make a computer a little better for one purpose, we improve it for every purpose a computer can be put to, and that is all purposes.

Just remember that declaring war on general purpose computing is a fool’s errand, and that that never stopped anyone.

Ten Years of Cory Doctorow’s Little Brother [Cory Doctorow/]

(Image: Missy Ward, CC-BY)

TEDTED hosts first-ever TED en Español Spanish-language speaker event at NYC headquarters

Thursday, April 26, 2018 – Today marks the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event, held in TED’s theater in Manhattan, will feature eight speakers, a musical performance, five short films and fifteen 1-minute talks given by members of the audience. 150 people are expected to attend from around the world, and about 20 TEDx events in 10 countries plan to tune in to watch remotely.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event – TEDxRiodelaPlata in Argentina – TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and – as of earlier this month – an original podcast created in partnership with Univision Communications.

“As part of our nonprofit mission at TED, we work to find and spread the best ideas no matter where the speakers live or what language they speak,” said Gerry. “We want everyone to have access to ideas in their own language. Given the massive global Hispanic population, we’ve begun the work to bring Spanish-language ideas directly to Spanish-speaking audiences, and today’s event is a major step in solidifying our commitment to that effort.”

Today’s speakers include chef Gastón Acurio, futurist Juan Enriquez, entrepreneur Leticia Gasca, data scientist César A. Hidalgo, founder and funder Rebeca Hwang, ocean expert Enric Sala, assistant conductor of the LA Philharmonic Paolo Bortolameolli, and psychologist and dancer César Silveyra. Musical group LADAMA will perform.


Planet Linux AustraliaMichael Still: A first program in golang, with a short aside about Google


I have reached the point in my life where I needed to write my first program in golang. I pondered for a disturbingly long time what exactly to write, but then it came to me…

Back in the day Google had an internal short URL service (think of a public link shortener, but for internal things). It was called “go” and lived at http://go. So what should I write as my first golang program? go of course.

The implementation is on github, and I am sure it isn’t perfect. Remember, it was a learning exercise. I mostly learned that golang syntax is a bit bonkers, and that etcd hates me.

This code stores short URLs in etcd, and redirects you to the right place if it knows about the short code you used. If you just ask for the root URL, you get a list of the currently defined short codes, as well as a form to create new ones. Not bad for a few hours hacking I think.


The post A first program in golang, with a short aside about Google appeared first on Made by Mikal.

Worse Than FailureCodeSOD: If Not Null…

Robert needed to fetch some details about pump configurations from the backend. The API was poorly documented, but there were other places in the code which did that, so a quick search found this block:

var getConfiguration = function(){
    var result = null;
    result = getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,service,format,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,null,format,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,service,null,result);
    result = getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,null,null,result);
    result = getPumpConfiguration (areaID,subStationID,null,lastServiceDate,null,null,result);
    return result;
};

This collection of lines lurked at the end of a 100+ line function, which did a dozen other things. At a glance, it’s mildly perplexing. I can see that result gets passed into the function multiple times, so perhaps this is an attempt at a fluent API? So this series of calls awkwardly fetches the data that’s required? The parameters vary a little with every call, so that must be it, right?

Let’s check the implementation of getPumpConfiguration

function getPumpConfiguration (areaID,subStationID,mngmtUnitID,lastServiceDate,service,format,result) {
    if (result==null) {
        // …the actual query runs here…
        result = queryResult;
    }
    return result;
}

Oh, no. If the result parameter has a value… we just return it. Otherwise, we attempt to fetch data. This isn’t a fluent API which loads multiple pieces of data with separate requests, it’s an attempt at implementing retries. Hopefully one of those calls works.
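For contrast, the behavior the original author appears to have wanted — try each parameter combination in order and stop at the first real result — is just a fall-back loop. A sketch of that intent in Python (the function and parameter names here are mine, not from the original code):

```python
def get_configuration(fetch, param_sets):
    """Return the first non-None result from trying each parameter
    set in order; never call the backend again once we have one."""
    for params in param_sets:
        result = fetch(*params)
        if result is not None:
            return result
    return None


# A stand-in for the backend call, for illustration:
calls = []
def fake_fetch(area_id, mngmt_unit_id):
    calls.append((area_id, mngmt_unit_id))
    # Pretend only the less-specific query succeeds.
    return {"pump": "config"} if mngmt_unit_id is None else None

config = get_configuration(fake_fetch, [("A1", "U7"), ("A1", None)])
# config == {"pump": "config"}; fake_fetch was called exactly twice.
```

Written this way, the short-circuit lives in the caller's loop instead of being smuggled into the fetch function's first line.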

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV May 2018 Workshop

May 19 2018 12:30
May 19 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Topic to be announced

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV May 2018 Main Meeting: "Share" with FOSS Software

May 1 2018 18:30
May 1 2018 20:30
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053


6:30 PM to 8:30 PM Tuesday, May 1, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053


Linux Users of Victoria is a subcommittee of Linux Australia.


Planet Linux AustraliaMichael Still: etcd v2 and v3 data stores are separate


Just noting this because it wasted way more of my time than it should have…

So you write an etcd app in a different language from your previous apps and it can’t see the data that the other apps wrote? Check the versions of your client libraries. The v2 and v3 data stores in etcd are different, and cannot be seen by each other. You need to convert your v2 data to the v3 data store before it will be visible there.

You’re welcome.


The post etcd v2 and v3 data stores are separate appeared first on Made by Mikal.


Rondam RamblingsSupport Josh Harder for Congress

I've been quiet lately in part because I'm sinking back into the pit of despair when I think about politics.  The spinelessness and hypocrisy of the Republican party, the insidious and corrosive effects of corporate "free speech" embodied in soulless monsters like Sinclair and Fox News, and the fact that ultimately all this insanity has its foundation in the will of the people (or at least a

TEDIn Case You Missed It: The dawn of “The Age of Amazement” at TED2018

More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4.

Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED)

Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at

Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

“It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED)

Can we rediscover the humanity in our tech? In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age: “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy.” Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, whom they were transmitting to, and what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. “Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?”
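
The router experiment is easy to appreciate with a toy example. The function names and log format below are invented for illustration, not Mattu's actual code; the point is only that trivial aggregation of connection timestamps already exposes household routines:

```javascript
// Hypothetical sketch: the log format and names here are invented.
// Count how many times a device phoned home in each hour of the day.
function hourlyActivity(timestamps) {
  const counts = new Array(24).fill(0);
  for (const ts of timestamps) {
    counts[new Date(ts).getUTCHours()]++;
  }
  return counts;
}

// Hours with zero traffic from, say, a smart TV hint at when the household sleeps.
function quietHours(counts) {
  return counts
    .map((n, hour) => ({ hour, n }))
    .filter(({ n }) => n === 0)
    .map(({ hour }) => hour);
}
```

A device that goes silent from 1am to 6am every night has told you when the household sleeps, no packet contents required.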

An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.”

Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta have led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, University of Hawaii’s Karen Meech, saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things.

Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED)

Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words. Architects Renzo Piano, Vishaan Chakrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints.

Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED)

Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits.

Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotchakorn Voraakhom realized that a park could be designed to help her flood-prone city of Bangkok mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

Krebs on SecurityDDoS-for-Hire Service Webstresser Dismantled

Authorities in the U.S., U.K. and the Netherlands on Tuesday took down WebStresser, a popular online attack-for-hire service, and arrested its alleged administrators. Investigators say that prior to the takedown, the service had more than 136,000 registered users and was responsible for launching somewhere between four and six million attacks over the past three years.

The action, dubbed “Operation Power Off,” targeted WebStresser, one of the most active services for launching point-and-click distributed denial-of-service (DDoS) attacks. WebStresser was one of many so-called “booter” or “stresser” services — virtual hired muscle that anyone can rent to knock nearly any website or Internet user offline.

“The damage of these attacks is substantial,” reads a statement from the Dutch National Police in a Reddit thread about the takedown. “Victims are out of business for a period of time, and spend money on mitigation and on (other) security measures.”

In a separate statement released this morning, Europol — the law enforcement agency of the European Union — said “further measures were taken against the top users of this marketplace in the Netherlands, Italy, Spain, Croatia, the United Kingdom, Australia, Canada and Hong Kong.” The servers powering WebStresser were located in Germany, the Netherlands and the United States, according to Europol.

The U.K.’s National Crime Agency said WebStresser could be rented for as little as $14.99, and that the service allowed people with little or no technical knowledge to launch crippling DDoS attacks around the world.

Neither the Dutch nor U.K. authorities would say who was arrested in connection with this takedown. But according to information obtained by KrebsOnSecurity, the administrator of WebStresser allegedly was a 19-year-old from Prokuplje, Serbia named Jovan Mirkovic.

Mirkovic, who went by the hacker nickname “m1rk,” also used the alias “Mirkovik Babs” on Facebook where for years he openly discussed his role in programming and ultimately running WebStresser. The last post on Mirkovic’s Facebook page, dated April 3 (the day before the takedown), shows the young hacker sipping what appears to be liquor while bathing. Below that image are dozens of comments left in the past few hours, most of them simply, “RIP.”

A story in a Serbian daily news site notes that two men from Serbia were arrested in conjunction with the WebStresser takedown; they are named only as “MJ” (Jovan Mirkovic) and D.V., aged 19, from Ruma.

Mirkovic’s fake Facebook page (Mirkovik Babs) includes countless mentions of another WebStresser administrator named “Kris” and includes a photograph of a tattoo that Kris got in 2015. That same tattoo is shown on the Facebook profile of a Kristian Razum from Zapresic, Croatia. According to the press releases published today, one of the administrators arrested was from Croatia.

Multiple sources are now pointing to other booter businesses that were reselling WebStresser’s service but which are no longer functional as a result of the takedown, including powerboot[dot]net, defcon[dot]pro, ampnode[dot]com, ripstresser[dot]com, fruitstresser[dot]com, topbooter[dot]com, freebooter[dot]co and rackstress[dot]pw.

Tuesday’s action against WebStresser is the latest such takedown to target both owners and customers of booter services. Many booter service operators apparently believe (or at least hide behind) a wordy “terms of service” agreement that all customers must acknowledge, under the assumption that somehow this absolves them of any sort of liability for how their customers use the service — regardless of how much hand-holding and technical support booter service administrators offer customers.

In October the FBI released an advisory warning that the use of booter services is punishable under the Computer Fraud and Abuse Act, and may result in arrest and criminal prosecution.

In 2016, authorities in Israel arrested two 18-year-old men accused of running vDOS, until then the most popular and powerful booter service on the market. Their arrests came within hours of a story at KrebsOnSecurity that named the men and detailed how their service had been hacked.

Many in the hacker community have criticized authorities for targeting booter service administrators and users and for not pursuing what they perceive as more serious cybercriminals, noting that the vast majority of both groups are young men under the age of 21. In its Reddit thread, the Dutch Police addressed this criticism head-on, saying Dutch authorities are working on a new legal intervention called “Hack_Right,” a diversion program intended for first-time cyber offenders.

“Prevention of re-offending by offering a combination of restorative justice, training, coaching and positive alternatives is the main aim of this project,” the Dutch Police wrote. “See page 24 of the 5th European Cyber Security Perspectives and stay tuned on our THTC twitter account #HackRight! AND we are working on a media campaign to prevent youngsters from starting to commit cyber crimes in the first place. Expect a launch soon.”

In the meantime, it’s likely we’ll soon see the launch of yet more booter services. According to reviews and sales threads at stresserforums[dot]net — a marketplace for booter buyers and sellers — there are dozens of other booter services in operation, with new ones coming online almost every month.

Sociological ImagesBoozy Milkshakes and Sordid Spirits

The first nice weekend after a long, cold winter in the Twin Cities is serious business. A few years ago some local diners joined the celebration with a serious indulgence: the boozy milkshake.

When talking with a friend of mine from the Deep South about these milkshakes, she replied, “oh, a bushwhacker! We had those all the time in college.” This wasn’t the first time she had dropped southern slang that was new to me, so off to Google I went.

According to Merriam-Webster, “to bushwhack” means to attack suddenly and unexpectedly, as one would expect the alcohol in a milkshake to sneak up on you. The cocktail is a Nashville staple, but the origins trace back to the Virgin Islands in the 1970s.

Photo Credit: Beebe Bourque, Flickr CC
Photo Credit: Like_the_Grand_Canyon, Flickr CC

Here’s the part where the history takes a sordid turn: “Bushwhacker” was apparently also the nickname for guerrilla fighters in the Confederacy during the Civil War who would carry out attacks in rural areas (see, for example, the Lawrence Massacre). To be clear, I don’t know and don’t mean to suggest this had a direct influence in the naming of the cocktail. Still, the coincidence reminded me of the famous, and famously offensive, drinking reference to conflict in Northern Ireland.

Battle of Lawrence, Wikimedia Commons

When sociologists talk about concepts like “cultural appropriation,” we often jump to clear examples with a direct connection to inequality and oppression like racist halloween costumes or ripoff products—cases where it is pretty easy to look at the object in question and ask, “didn’t they think about this for more than thirty seconds?”

Cases like the bushwhacker raise different, more complicated questions about how societies remember history. Even if the cocktail today had nothing to do with the Confederacy, the weight of that history starts to haunt the name once you know it. I think many people would be put off by such playful references to modern insurgent groups like ISIS. Then again, as Joseph Gusfield shows, drinking is a morally charged activity in American society. It is interesting to see how the deviance of drinking dovetails with bawdy, irreverent, or offensive references to other historical and social events. Can you think of other drinks with similar sordid references? It’s not all sex on the beach!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

CryptogramTwo NSA Algorithms Rejected by the ISO

The ISO has rejected two symmetric encryption algorithms: SIMON and SPECK. These algorithms were both designed by the NSA and made public in 2013. They are optimized for small and low-cost processors like IoT devices.

The risk of using NSA-designed ciphers, of course, is that they include NSA-designed backdoors. Personally, I doubt that they're backdoored. And I always like seeing NSA-designed cryptography (particularly its key schedules). It's like examining alien technology.

Worse Than FailureThe Search for Truth

Every time you change existing code, you break some other part of the system. You may not realize it, but you do. It may show up in the form of a broken unit test, but that presumes that a) said unit test exists, and b) it properly tests the aspect of the code you are changing. Sadly, more often than not, there is either no test to cover your change, or any test that does exist doesn't handle the case you are changing.

Nicolai Abildgaard, “Diogenes der lyser i mørket med en lygte” (Diogenes searching in the dark with a lantern)

This is especially true if the thing you are changing is simple. It is even more true when changing something as complex as working with a boolean.

Mr. A. was working at a large logistics firm that had an unusual error where a large online retailer was accidentally overcharged by millions of dollars. When large companies send packages to logistics hubs for shipment, they often send hundreds or thousands of them at a time on the same pallet, van or container (think about companies like Amazon). The more packages you send in these batches, the less you pay (a single lorry is cheaper than a fleet of vans). These packages are lumped together and billed at a much lower rate than you or I would get.

One day, a particular developer saw something untidy in the code: an uninitialized Boolean variable in one of the APIs. The entire code change was the addition of a single line:

    parcel.consolidated = false;

There are some important things to note: the code was written in NodeJS, where a field like this is dynamically typed rather than strictly Boolean; the developers did not believe in Unit Testing; and in a forgotten corner of the codebase was a little routine that examined each parcel to see if the discount applied.

The routine to see if the discount should be applied ran every few minutes. It looked at each package: if it was already marked as consolidated (whether true or false), it moved on to the next parcel. If the flag was null, it applied the rules to see if the parcel was part of a shipment and set the flag to either true or false.

That variable was not Boolean but rather tri-state (though thankfully didn't involve FILE_NOT_FOUND). By assuming it was Boolean and initializing it to false, the change ensured the routine skipped every parcel: NO packages ever had the discount applied. Oopsie!
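
A hypothetical reconstruction of the failure (the field and function names are invented, as is the discount figure) shows how one "tidy" line defeats the tri-state check:

```javascript
// Hypothetical reconstruction of the logic described above.
function applyDiscountRules(parcel, isPartOfShipment) {
  // Tri-state check: true and false both mean "already decided" -- skip.
  if (parcel.consolidated === true || parcel.consolidated === false) {
    return parcel;
  }
  // Flag is null/undefined: decide now, and discount consolidated parcels.
  parcel.consolidated = isPartOfShipment;
  if (parcel.consolidated) {
    parcel.rate = parcel.rate * 0.5; // illustrative bulk discount
  }
  return parcel;
}

// Before the tidy-up: the flag starts undefined, so the rules run.
const before = { rate: 100 };
applyDiscountRules(before, true); // before.rate is now 50

// After "initializing" the flag: the routine skips every parcel,
// so the bulk discount is never applied.
const after = { rate: 100, consolidated: false };
applyDiscountRules(after, true); // after.rate is still 100
```

The one-line "fix" never touches the discount routine itself, which is exactly why no test caught it.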

It took more than a month before anyone noticed and complained. And since it was a multi-million dollar mistake, they complained loudly!

Even after this event, Unit Testing was still not accepted as a useful practice. To this day Release Management, Unit Testing, Automated Testing and Source Code Management remain stubbornly absent...

Not long after this, Mr. A. continued his search for truth elsewhere.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

CryptogramComputer Alarm that Triggers When Lid Is Opened

"Do Not Disturb" is a Macintosh app that sends an alert when the lid is opened. The idea is to detect computer tampering.

Wired article:

Do Not Disturb goes a step further than just the push notification. Using the Do Not Disturb iOS app, a notified user can send themselves a picture snapped with the laptop's webcam to catch the perpetrator in the act, or they can shut down the computer remotely. The app can also be configured to take more custom actions like sending an email, recording screen activity, and keeping logs of commands executed on the machine.

Can someone please make one of these for Windows?


CryptogramBaseball Code

Info on the coded signals used by the Colorado Rockies.

TEDMore TED2018 conference shorts to amuse and amaze

Even in the Age of Amazement, sometimes you need a break between talks packed with fascinating science, tech, art and so much more. That’s where interstitials come in: short videos that entertain and intrigue, while allowing the brain a moment to reset and ready itself to absorb more information.

For this year’s conference, TED commissioned and premiered four short films made just for the conference. Check out those films here!

Mixed in with our originals, curators Anyssa Samari and Jonathan Wells hand-picked even more videos — animations, music, even cool ads — to play throughout the week. Here’s the program of shorts they found, from creative people all around the world:

The short: Jane Zhang: “Dust My Shoulders Off.” A woman having a bad day is transported to a world of famous paintings where she has a fantastic adventure.

The creator: Outerspace Leo

Shown during: Session 2, After the end of history …

The short: “zoom(art).” A kaleidoscopic, visually compelling journey of artificial intelligence creating beautiful works of art.

The creator: Directed and programmed by Alexander Mordvintsev, Google Research

Shown during: Session 2, After the end of history …

The short: “20syl – Kodama.” A music video of several hands playing multiple instruments (and drawing a picture) simultaneously to create a truly delicious electronic beat.

The creators: Mathieu Le Dude & 20syl

Shown during: Session 3, Nerdish Delight

The short: “If HAL-9000 was Alexa.” 2001: A Space Odyssey seems a lot less sinister (and lot more funny) when Alexa can’t quite figure out what Dave is saying.

The creator: ScreenJunkies

Shown during: Session 3, Nerdish Delight

The short: “Maxine the Fluffy Corgi.” A narrated day in the life of an adorable pup named Maxine who knows what she wants.

The creator: Bryan Reisberg

Shown during: Session 3, Nerdish Delight

The short: “RGB FOREST.” An imaginative, colorful and geometric jaunt through the woods set to jazzy electronic music.

The creator: LOROCROM

Shown during: Session 6, What on earth do we do?

The short: “High Speed Hummingbirds.” Here’s your chance to watch the beauty and grace of hummingbirds in breathtaking slow motion.

The creator: Anand Varma

Shown during: Session 6, What on earth do we do?

The short: “Cassius ft. Cat Power & Pharrell Williams | Go Up.” A split screen music video that cleverly subverts and combines versions of reality.

The creator: Alex Courtès

Shown during: Session 7, Wow. Just wow.

The short: “Blobby.” A stop motion film about a man and a blob and the peculiar relationship they share.

The creator: Laura Stewart

Shown during: Session 7, Wow. Just wow.

The short: “WHO.” David Byrne and St. Vincent dance and sing in this black-and-white music video about accidents and consequences.

The creator: Martin de Thurah

Shown during: Session 8, Insanity. Humanity.

The short: “MAKIN’ MOVES.” When music makes the body move in unnatural, impossible ways.

The creator: Kouhei Nakama

Shown during: Session 9, Body electric

The short: “The Art of Flying.” The beautiful displays the Common Starling performs in nature.

The creator: Jan van IJken

Shown during: Session 9, Body electric

The short: “Kiss & Cry.” The heart-rending story of Giselle, a woman who lives and loves and wants to be loved. (You’ll never guess who plays the heroine.)

The creators: Jaco Van Dormael and choreographer Michèle Anne De Mey

Shown during: Session 10, Personally speaking

The short: “Becoming Violet.” The power of the human body, in colors and dance.

The creator: Steven Weinzierl

Shown during: Session 10, Personally speaking

The short: “Golden Castle Town.” A woman is transported to another world and learns to appreciate life anew.

The creator: Andrew Benincasa

Shown during: Session 10, Personally speaking

The short: “Tom Rosenthal | Cos Love.” A love letter to love that is grand and a bit melancholic.

The creator: Kathrin Steinbacher

Shown during: Session 11, What matters

TEDInsanity. Humanity. Notes from Session 8 at TED2018

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. Photo: Ryan Lash / TED

The seven speakers lived up to the two words in the title of the session. Their talks showcased both our collective insanity — the algorithmically-assembled extremes of the Internet — and our humanity — the values and desires that extremists astutely tap into — along with some speakers combining the two into a glorious salad. Let’s dig in.

Artificial Intelligence = artificial stupidity. How does a sweetly-narrated video of hands unwrapping Kinder eggs garner 30 million views and spawn more than 10 million imitators? Welcome to the weird world of YouTube children’s videos, where an army of content creators use YouTube “to hack the brains of very small children, in return for advertising revenue,” as artist and technology critic James Bridle describes. Marketing ethics aside, this world seems innocuous on the surface, but go a few clicks deeper and you’ll find a surreal and sinister landscape of algorithmically-assembled cartoons, nursery rhymes built from keyword combos, and animated characters and human actors being tortured, assaulted and killed. Automated copycats mimic trusted content providers “using the same mechanisms that power Facebook and Google to create ‘fake news’ for kids,” says Bridle. He adds that feeding the situation is the fact that “we’re training them from birth to click on the very first link that comes along, regardless of where the source is.” As technology companies ignore these problems in their quest for ad dollars, the rest of us are stuck in a system in which children are sent down auto-playing rabbit holes where they see disturbing videos filled with very real violence and very real trauma — and get traumatized as a result. Algorithms are touted as the fix, but Bridle declares, “Machine learning, as any expert on it will tell you, is what we call software that does stuff we don’t really understand, and I think we have enough of that already.” Instead, “we need to think of technology not as a solution to all our problems but as a guide to what they are.” After his talk, TED Head of Curation Helen Walters has a blunt question for Bridle: “So are we doomed?” His realistic but ungrim answer: “We’ve got a hell of a long way to go, but talking is the beginning of that process.”

Technology that fights extremism and online abuse. Over the last few years, we’ve seen geopolitical forces wreak havoc with their use of the Internet. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. “Radicalization isn’t a yes or no choice,” she says. “It’s a process, during which people have questions about ideology, religion — and they’re searching online for answers, which is an opportunity to reach them.” In 2016, Green collaborated with Moonshot CVE to pilot a new approach called the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups and used that information to create a campaign that deployed targeted advertising to reach people susceptible to ISIS’s recruiting and show them videos to counter those messages. Available in English and Arabic, the eight-week pilot program reached more than 300,000 people. In another project, she and her team looked for a way to combat online abuse. Partnering across Google with Wikipedia and the New York Times, the team trained machine-learning models to understand the emotional impact of language — specifically, to predict comments that were likely to make someone leave a conversation and to give commenters real-time feedback about how their words might land. Due to the onslaught of online vitriol, the Times had previously enabled commenting on only 10 percent of homepage stories, but this strategy led it to open up all homepage stories to comments. “If we ever thought we could build technology insulated from the dark side of humanity, we were wrong,” Green says. “If technology has any hope of overcoming today’s challenges, we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” In a post-talk Q & A, Green adds that banning certain keywords isn’t enough of a solution: “We need to combine human insight with innovation.”
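
A minimal sketch of that real-time feedback loop. Everything here is invented for illustration; the toy scorer stands in for the trained model, which in the real system is a machine-learning classifier, not a keyword list:

```javascript
// Hypothetical sketch, not Jigsaw's actual API. scoreToxicity is a stand-in
// for a trained model's estimate that a comment will drive readers away.
function feedbackForComment(comment, scoreToxicity) {
  const score = scoreToxicity(comment);
  if (score > 0.8) {
    // Likely conversation-killer: warn before posting.
    return { post: false, message: 'This comment may make others leave the conversation.' };
  }
  if (score > 0.5) {
    // Borderline: post it, but nudge the commenter in real time.
    return { post: true, message: 'Consider rephrasing; this may land harsher than you intend.' };
  }
  return { post: true, message: null };
}

// Toy scorer standing in for the model (purely illustrative).
const toyScorer = (text) => (/idiot|hate/i.test(text) ? 0.9 : 0.1);
```

The real contribution is in the model behind the score; the thresholds and messages above are placeholders.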

Living life means acknowledging death. Philosopher-comedian Emily Levine starts her talk with some bad news — she’s got stage 4 lung cancer — but says there’s no need to “oy” or “ohhh” over her: she’s okay with it. After all, explains Levine, life and death go hand in hand; you can’t have one without the other. In fact, therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Levine muses about the scientists who are attempting to thwart death — she dubs them the Anti-Life Brigade — and calls them ungrateful and disrespectful in their efforts to wrest control from nature. “We don’t live in the clockwork universe,” she says wryly. “We live in a banana peel universe,” where our attempts at mastery will always come up short against mystery. She has come to view life as a “gift that you enrich as best you can and then give back.” And just as we should appreciate that life’s boundary line stops abruptly at death, we should accept our own intellectual and physical limits. “We won’t ever be able to know everything or control everything or predict everything,” says Levine. “Nature is like a self-driving car.” We may have some control, but we’re not at the wheel.

A high-schooler working on the future of AI. Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years — he’s now just 18 — he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — machines try every possible solution, even ones too absurd for a human to imagine, until they find the thing that works best to solve a single discrete problem. That can create computers that are champions at Go or Q-Bert, but it really doesn’t create general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives and think with these machines. What can he and these new brains accomplish together?
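
The distinction can be caricatured in a few lines. Everything below is a toy invented for illustration, not OpenAI's or Frans's actual work: brute-force trial-and-error enumerates every primitive action sequence, while a learned "skill" collapses many primitives into one reusable step, shrinking the search dramatically.

```javascript
// Toy world: an agent on a number line can step -1 or +1.
const actions = [-1, +1];

// Trial-and-error: enumerate every action sequence up to maxLen until one
// reaches the goal. Returns how many states were examined along the way.
function bruteForce(start, goal, maxLen) {
  let tried = 0;
  const search = (pos, depth) => {
    tried++;
    if (pos === goal) return true;
    if (depth === 0) return false;
    return actions.some((a) => search(pos + a, depth - 1));
  };
  search(start, maxLen);
  return tried;
}

// Skill-based: a learned "moveBy(k)" skill collapses k primitive steps
// into a single reusable action, so reaching the goal takes one decision.
function withSkills(start, goal) {
  const moveBy = (pos, k) => pos + k;
  return moveBy(start, goal - start) === goal ? 1 : -1;
}
```

The toy overstates the case, of course, but it captures why assembling learned skills scales where raw enumeration does not.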

Come fly with her. From a young age, action and hardware engineer Elizabeth Streb wanted to fly like, well, a fly or a bird. It took her years of painful experimentation to realize that humans can’t swoop and veer like them, but perhaps she could discover how humans could fly. Naturally, it involves more falling than staying airborne. She has jumped through broken glass and toppled from great heights in order to push the bounds of her vertical comfort zone. With her Streb Extreme Action company, she’s toured the world, bringing the delight and wonder of human flight to audiences. Along the way, she realized, “If we wanted to go higher, faster, sooner, harder and make new discoveries, it was necessary to create our very own space-ships,” so she’s also built hardware to provide a boost. More recently, she opened Brooklyn’s Streb Lab for Action Mechanics (SLAM) to instruct others. “As it turns out, people don’t just want to dream about flying, nor do they want to watch people like us fly; they want to do it, too, and they can,” she says. In teaching, she sees “smiles become more common, self-esteem blossom, and people get just a little bit braver. People do learn to fly, as only humans can.”

Calling all haters. “You’re everything I hate in a human being” — that’s just one of the scores of nasty messages that digital creator Dylan Marron receives every day. While his various video series such as “Every Single Word” and “Sitting in Bathrooms With Trans People” have racked up millions of views, they’ve also sent a slew of Internet poison in his direction. “At first, I would screenshot their comments and make fun of their typos but this felt elitist and unhelpful,” recalls Marron. Over time, he developed an unexpected coping mechanism: he calls the people responsible for leaving hateful remarks on his social media, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace — you would have noticed — he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.” And he stresses that his solution is not right for everyone. In a Q&A afterward, he says that some people have told him that his podcast just gives a platform to those espousing harmful ideologies. Marron emphasizes, “Empathy is not endorsement.” His conversations represent his own way of responding to online hate, and he says, “I see myself as a little tile in the mosaic of activism.”

Rebuilding trust at work. Trust is the foundation for everything we humans do, but what do we do when it is broken? It’s a problem that fascinates Frances Frei, a professor at Harvard Business School who recently spent six months trying to restore trust at Uber. According to Frei, trust is a three-legged stool that rests on authenticity, logic, and empathy. “If any one of these three gets shaky, if any one of these three wobbles, trust is threatened,” she explains. So which wobbles did Uber have? All of them, according to Frei. Authenticity was the hardest to fix – but that’s not uncommon. “It is still much easier to coach people to fit in; it is still much easier to reward people when they say something that you were going to say,” Frei says, “but when we figure out how to celebrate difference and how to let people bring the best version of themselves forward, well, holy cow, is that the world I want my sons to grow up in.” You can read more about her talk here.

TEDWhat matters: Notes from Session 11 of TED2018

Reed Hastings, the head of Netflix, listens to a question from Chris Anderson during a sparky onstage Q&A on the final morning of TED2018, April 14, 2018. Photo: Ryan Lash / TED

What a week. We’ve heard so much, from dystopian warnings to bold visions for change. Our brains are full. Almost. In this session we pull back to the human stories that underpin everything we are, everything we want. From new ways to set goals and move business forward, to unabashed visions for joy and community, it’s time to explore what matters.

The original people of this land. One important thing to know: TED’s conference home of Vancouver is built on un-ceded land that once belonged to First Nations people. So this morning, two DJs from A Tribe Called Red start this session by remembering and honoring them, telling First Nations stories in beats and images in a set that expands on the concept of Halluci Nation, inspired by the poet, musician and activist John Trudell. In Trudell’s words: “We are the Halluci Nation / Our DNA is of earth and sky / Our DNA is of past and future.”

The power of why, what and how. Our leaders and our institutions are failing us, and it’s not always because they’re bad or unethical. Sometimes, it’s simply because they’re leading us toward the wrong objectives, says venture capitalist John Doerr. How can we get back on track? The trick may be a system called OKR, developed by legendary management thinker Andy Grove. Doerr explains that OKR stands for ‘objectives and key results’ – and setting the right ones can be the difference between success and failure. However, before you set your objective (your what) and your key results (your how), you need to understand your why. “A compelling sense of why can be the launch pad for our objectives,” he says. He illustrates the power of OKRs by sharing the stories of individuals and organizations who’ve put them into practice, including Google’s Larry Page and Sergey Brin. “OKRs are not a silver bullet. They’re not going to be a substitute for a strong culture or for stronger leadership, but when those fundamentals are in place, they can take you to the mountaintop,” he says. He encourages all of us to take the time to write down our values, our objectives, and our key results – and to do it today. “Let’s fight for what it is that really matters, because we can take OKRs beyond our businesses. We can take them to our families, to our schools, even to our government. We can hold those governments accountable,” he says. “We can get back on the right track if we can and do measure what really matters.”

What’s powering China’s tech innovation? The largest mass migration in the world occurs every year around the Chinese Spring Festival. Over 40 days, travelers — including 290 million migrant workers — take 3 billion trips all over China. Few can afford to fly, so railways strained to keep up, with crowding, fraud and drama. So the Chinese technology sector has been building everything from apps to AI to ease not only this process, but other pain points throughout society. But unlike the US, where innovation is often fueled by academia and enterprise, China’s tech innovation is powered by “an overwhelming need economy that is serving an underprivileged populace, which has been separated for 30 years from China’s economic boom.” As CEO of the South China Morning Post, Gary Liu has a front-row seat to this transformation. As China’s introduction of a “social credit rating” system suggests, a technology boom in an authoritarian society hides a significant dark side. But the Chinese internet hugely benefits its 772 million users. It has spread deeply into rural regions, revitalizing education and creating jobs. There’s a long way to go to bring the internet to everyone in China — more than 600 million people remain offline. But wherever the internet is fueling prosperity, “we should endeavor to follow it with capital and with effort, driving both economic and societal impact all over the world. Just imagine for a minute what more could be possible if the global needs of the underserved become the primary focus of our inventions.”

Netflix and chill, the interview. The humble beginnings of Netflix paved the way to transforming how we consume content today. Reed Hastings — who started out as a high school math teacher — admits that making the shift from DVDs to streaming was a big leap. “We weren’t confident,” he admits in his interview with TED Curator Chris Anderson. “It was scary.” Obviously, it paid off over time, with 117 million subscribers (and growing), more than $11 billion in revenue (so far) and a slew of popular original content (Black Mirror, anyone?) fueled by curated algorithmic recommendations. The offerings of Netflix, Hastings says, are a mixture of candy and broccoli — and it allows people to decide what a proper “diet” is for them. “We get a lot of joy from making people happy,” he says. The external culture of the streaming platform reflects its internal culture as well: they’re super focused on how to run with no process, but without chaos. There’s an emphasis on freedom, responsibility and honesty (as he puts it, “disagreeing silently is disloyal”). And though Hastings loves business — competing against the likes of HBO and Disney — he also enjoys his philanthropic pursuits supporting innovative education, such as the KIPP charter schools, and advocates for more variety in educational content. For now, he says, it’s the perfect job.

“E. Pluribus Unum” — ”Out of many, one.” It’s the motto of the United States, yet few citizens understand its meaning. Artist and designer Walter Hood calls for national landscapes that preserve the distinct identities of peoples and cultures, while still forging unity. Hood believes spaces should illuminate shared memories without glossing over past — and present — injustices. To guide his projects, Hood follows five simple guidelines. The first — “Great things happen when we exist in each other’s world” — helped fire up a Queens community garden initiative in collaboration with Bette Midler and hip-hop legend 50 Cent. “Two-ness” — or the sense of double identity faced by those who are “othered,” like women and African-Americans — lies behind a “shadow sculpture” at the University of Virginia that commemorates a forgotten, buried servant household uncovered during the school’s expansion. “Empathy” inspired the construction of a park in downtown Oakland that serves office workers and the homeless community, side-by-side. “The traditional belongs to all of us” — and to the San Francisco neighborhood of Bayview-Hunter’s Point, where Hood restored a Victorian opera house to serve the local community. And “Memory” lies at the core of a future shorefront park in Charleston, which will rest on top of Gadsden Wharf — an entry point for 40% of the United States’ slaves, where they were then “stored” in chains — that forces visitors to confront the still-resonating cruelty of our past.

The tension between acceptance and hope. When Simone George met Mark Pollock, it was eight years after he’d lost his sight. Pollock was rebuilding his identity — living a high-octane life of running marathons and racing across Antarctica to reach the South Pole. But a year after he returned from Antarctica, Pollock fell from a third-story window; he woke up paralyzed from the waist down. Pollock shares how being a realist — inspired by the writings of Admiral James Stockdale, a Vietnam POW — helped him through bleak days after this accident, when even hope seemed dangerous. George explains how she helped Pollock navigate months in the hospital; told that any sensation Pollock didn’t regain in the weeks immediately after the fall would likely never come back, the two looked to stories of others, like Christopher Reeve, who had pushed beyond what was understood as possible for those who are paralyzed. “History is filled with the kinds of impossible made possible through human endeavor,” Pollock says. So he started asking: Why can’t human endeavor cure paralysis in his lifetime? In collaboration with a team of engineers in San Francisco, who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who had developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test, proving that progress is definitely still possible. For now, “I accept the wheelchair, it’s almost impossible not to,” says Pollock. “We also hope for another life — a life where we have created a cure through collaboration, a cure that we’re actively working to release from university labs around the world and share with everyone who needs it.”

The pursuit of joy, not happiness. “How do tangible things make us feel intangible joy?” asks designer Ingrid Fetell Lee. She pursued this question for ten years to understand how the physical world relates to the mysterious, quixotic emotion of joy. It turns out, the physical can be a remarkable, renewable resource for fostering a happier, healthier life. There isn’t just one type of joy, and its definition morphs from person to person — but psychologists, broadly speaking, describe joy as an intense, momentary experience of positive emotion (or, simply, as something that makes you want to jump up and down). However, joy shouldn’t be conflated with happiness, which measures how good we feel over time. So, Lee asked around about what brings people joy and eventually had a notebook filled with things like beach balls, treehouses, fireworks, googly eyes and ice cream cones with rainbow sprinkles, and realized something significant: the patterns of joy have roots in evolutionary history. Things like symmetrical shapes, bright colors, an attraction to abundance and multiplicity, a feeling of lightness or elevation — this is what’s universally appealing. Joy lowers blood pressure, improves our immune system and even increases productivity. She began to wonder: should we use these aesthetics to help us find more opportunities for joy in the world around us? “Joy begins with the senses,” she says. “What we should be doing is embracing joy, and finding ways to put ourselves in the path of it more often.”

And that’s a wrap. Speaking of joy, Baratunde Thurston steps out to close this conference with a wrap that shouts out the diversity of this year’s audience but also nudges the un-diverse selection of topics: next year, he asks, instead of putting an African child on a slide, can we put her onstage to speak for herself? He winds together the themes of the week, from the terrifying — killer robots, octopus robots, genetically modified piglets — to the badass, the inspiring and the mind-opening. Are you not amazed?

Worse Than FailureThe Big Balls of…

The dependency graph of your application can provide a lot of insight into how objects call each other. In a well designed application, the graph is mostly acyclic, and no one node has more than a handful of edges coming off of it. For the kinds of applications we talk about here, on the other hand, we have names for their graphs: the Enterprise Dependency and the Big Ball of Yarn.
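That “mostly acyclic” property is something you can actually test for. As a minimal sketch — the class and its names are hypothetical, not taken from any submitted application — a depth-first search that tracks which nodes are on the current path will spot a back edge, and therefore a cycle:

```java
import java.util.*;

// Hypothetical sketch: detect a cycle in a dependency graph using
// depth-first search with an "on the current DFS path" set.
public class DependencyGraph {
    private final Map<String, List<String>> edges = new HashMap<>();

    public void addDependency(String from, String to) {
        edges.computeIfAbsent(from, k -> new ArrayList<>()).add(to);
    }

    public boolean hasCycle() {
        Set<String> done = new HashSet<>();       // fully explored nodes
        Set<String> inProgress = new HashSet<>(); // nodes on the current path
        for (String node : edges.keySet()) {
            if (visit(node, done, inProgress)) return true;
        }
        return false;
    }

    private boolean visit(String node, Set<String> done, Set<String> inProgress) {
        if (done.contains(node)) return false;      // already known to be safe
        if (!inProgress.add(node)) return true;     // back edge: cycle found
        for (String dep : edges.getOrDefault(node, List.of())) {
            if (visit(dep, done, inProgress)) return true;
        }
        inProgress.remove(node);
        done.add(node);
        return false;
    }
}
```

Run it against a healthy call graph and `hasCycle()` comes back false; run it against one of the graphs featured here and, well, see below.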

Thomas K introduces us to an entirely new iteration: The Big Ball of Mandelbrot

A dependency graph which exhibits self-similarity at different scales

This gives new meaning to points “on a complex plane”.

What you’re seeing here is the relationship between stored procedures and tables. Circa 1995, when this application shambled into something resembling life, the thinking was, “If we put all the business logic in stored procedures, it’ll be easy to slap new GUIs on there as technology changes!”

Of course, the relationship between what the user sees on the screen and the underlying logic which drives that display means that as they changed the GUI, they also needed to change the database. Over the course of 15 years, the single cohesive data model ubercomplexificaticfied itself as each customer needed a unique GUI with a unique feature set which mandated unique tables and stored procedures.

By the time Thomas came along to start a pseudo-greenfield GUI in ASP.Net, the first and simplest feature he needed to implement involved calling a 3,000-line stored procedure which required over 100 parameters.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!


Krebs on SecurityTranscription Service Leaked Medical Records

MEDantex, a Kansas-based company that provides medical transcription services for hospitals, clinics and private physicians, took down its customer Web portal last week after being notified by KrebsOnSecurity that it was leaking sensitive patient medical records — apparently for thousands of physicians.

On Friday, KrebsOnSecurity learned that the portion of MEDantex’s site which was supposed to be a password-protected portal physicians could use to upload audio-recorded notes about their patients was instead completely open to the Internet.

What’s more, numerous online tools intended for use by MEDantex employees were exposed to anyone with a Web browser, including pages that allowed visitors to add or delete users, and to search for patient records by physician or patient name. No authentication was required to access any of these pages.

This exposed administrative page from MEDantex’s site granted anyone complete access to physician files, as well as the ability to add and delete authorized users.

Several MEDantex portal pages left exposed to the Web suggest that the company recently was the victim of WhiteRose, a strain of ransomware that encrypts a victim’s files unless and until a ransom demand is paid — usually in the form of some virtual currency such as bitcoin.

Contacted by KrebsOnSecurity, MEDantex founder and chief executive Sreeram Pydah confirmed that the Wichita, Kansas based transcription firm recently rebuilt its online servers after suffering a ransomware infestation. Pydah said the MEDantex portal was taken down for nearly two weeks, and that it appears the glitch exposing patient records to the Web was somehow incorporated into that rebuild.

“There was some ransomware injection [into the site], and we rebuilt it,” Pydah said, just minutes before disabling the portal (which remains down as of this publication). “I don’t know how they left the documents in the open like that. We’re going to take the site down and try to figure out how this happened.”

It’s unclear exactly how many patient records were left exposed on MEDantex’s site. But one of the main exposed directories was named “/documents/userdoc,” and it included more than 2,300 physicians listed alphabetically by first initial and last name. Drilling down into each of these directories revealed a varying number of patient records — displayed and downloadable as Microsoft Word documents and/or raw audio files.

Although many of the exposed documents appear to be quite recent, some of the records dated as far back as 2007. It’s also unclear how long the data was accessible, but this Google cache of the MEDantex physician portal seems to indicate it was wide open on April 10, 2018.

Clients listed on MEDantex’s site include New York University Medical Center; San Francisco Multi-Specialty Medical Group; Jackson Hospital in Montgomery Ala.; Allen County Hospital in Iola, Kan; Green Clinic Surgical Hospital in Ruston, La.; Trillium Specialty Hospital in Mesa and Sun City, Ariz.; Cooper University Hospital in Camden, N.J.; Sunrise Medical Group in Miami; the Wichita Clinic in Wichita, Kan.; the Kansas Spine Center; the Kansas Orthopedic Center; and Foundation Surgical Hospitals nationwide. MEDantex’s site states these are just some of the healthcare organizations partnering with the company for transcription services.

Unfortunately, the incident at MEDantex is far from an anomaly. A study of data breaches released this month by Verizon Enterprise found that nearly a quarter of all breaches documented by the company in 2017 involved healthcare organizations.

Verizon says ransomware attacks accounted for 85 percent of all malware in healthcare breaches last year, and that healthcare is the only industry in which the threat from the inside is greater than that from outside.

“Human error is a major contributor to those stats,” the report concluded.

Source: Verizon Business 2018 Data Breach Investigations Report.

According to a story at BleepingComputer, a security news and help forum that specializes in covering ransomware outbreaks, WhiteRose was first spotted about a month ago. BleepingComputer founder Lawrence Abrams says it’s not clear how this ransomware is being distributed, but that reports indicate it is being manually installed by hacking into Remote Desktop services.

Fortunately for WhiteRose victims, this particular strain of ransomware is decryptable without the need to pay the ransom.

“The good news is this ransomware appears to be decryptable by Michael Gillespie,” Abrams wrote. “So if you become infected with WhiteRose, do not pay the ransom, and instead post a request for help in our WhiteRose Support & Help topic.”

Ransomware victims may also be able to find assistance in unlocking data without paying from

KrebsOnSecurity would like to thank India-based cybersecurity startup Banbreach for the heads up about this incident.

Google AdsenseLet’s talk about Policy! Partnership with Industry Associations, and Better Ads Standards

Thanks to your feedback around wanting more discussion of AdSense policies, we have created some new content on our AdSense YouTube channel in a series called “Let’s talk about Policy!”.

This month, we're talking about the Coalition for Better Ads and its Better Ads Standards.

These standards were created by the Coalition for Better Ads to improve ad experiences for users across the web. Google is a member of the Coalition alongside other stakeholders within the industry. The Better Ads Standards were created with extensive consumer research to minimize annoying ad experiences across devices. The Standards are currently in place for users within North America and Europe, but we are advising all our publishers to abide by the standards to provide a quality user experience for your visitors and minimize the need to make changes in the future.

Check out our video discussing the Standards to learn more:

Be sure to subscribe to the channel to ensure you don’t miss an episode.

CryptogramRussia is Banning Telegram

Russia has banned the secure messaging app Telegram. It's making an absolute mess of the ban -- blocking 16 million IP addresses, many belonging to the Amazon and Google clouds -- and it's not even clear that it's working. But, more importantly, I'm not convinced Telegram is secure in the first place.

Such a weird story. If you want secure messaging, use Signal. If you're concerned that having Signal on your phone will itself arouse suspicion, use WhatsApp.

CryptogramYet Another Biometric: Ear Shape

This acoustic technology identifies individuals by their ear shapes. No information about either false positives or false negatives.

Worse Than FailureRepresentative Line: The Truth About Comparisons

We often point to dates as one of the example data types which is so complicated that most developers can’t understand them. This is unfair, as pretty much every data type has weird quirks and edge cases which make for unexpected behaviors. Floating point rounding, integer overflows and underflows, various types of string representation…

But file-not-founds excepted, people have to understand Booleans, right?

Of course not. We’ve all seen code like:

if (booleanFunction() == true) …


if (booleanValue == true) {
    return true;
} else {
    return false;
}
Someone doesn’t understand what booleans are for, or perhaps what the return statement is for. But Paul T sends us a representative line which constitutes a new twist on an old chestnut.

if ( Boolean.TRUE.equals(functionReturningBooleat(pa, isUpdateNetRevenue)) ) …

This is the second most Peak Java Way to test if a value is true. The Peak Java version, of course, would be to use an AbstractComparatorFactoryFactory<Boolean> to construct a Comparator instance with an injected EqualityComparison object. But this is pretty close- take the Boolean.TRUE constant, use the inherited equals method on all objects, which means transparently boxing the boolean returned from the function into an object type, and then executing the comparison.
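For contrast, here is a minimal sketch of what these checks look like with the ceremony stripped away (the names are hypothetical stand-ins for the submitted code):

```java
// Hypothetical sketch of the idiomatic alternatives.
public class BooleanIdioms {
    static boolean condition() { return true; }

    // Instead of: if (booleanValue == true) { return true; } else { return false; }
    static boolean passThrough(boolean booleanValue) {
        return booleanValue; // the value already *is* the answer
    }

    // Instead of: if (Boolean.TRUE.equals(someBooleanFunction(...)))
    static String check() {
        if (condition()) { // a boolean needs no comparison to be tested
            return "yes";
        }
        return "no";
    }
}
```

To be fair, `Boolean.TRUE.equals(x)` has one legitimate use: it is null-safe when `x` is a boxed `Boolean` that might be null. When the function returns a primitive `boolean`, though, it is pure ceremony — the value gets boxed just so it can be compared to itself.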

The if (boolean == true) return true; pattern is my personal nails-on-the-chalkboard code block. It’s not awful, it just makes me cringe. Paul’s submission is like an angle-grinder on a chalkboard.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!