Planet Russell


Debian Administration: DKIM-signing outgoing mail with exim4

There have been several systems designed to prevent mail spoofing over the years; the two most prominent are DKIM and SPF. Here we document how to use DKIM to sign outgoing mail with Debian's default mail transfer agent, exim4.

Geek Feminism: The Lean Linkspam (28 July 2015)

  • TODO Group And Open Source Codes of Conduct | Model View Culture: “We’ve come up with some pretty great resources and tools, put them into practice, tested and iterated, and built community consensus. Yet TODO swoops in to erase and replace all of this work: without our consent or input, a group of massive companies with practically unlimited funds are branding and pushing a code of conduct that suits their needs, not ours.”
  • That time the Internet sent a SWAT team to my mom’s house | Boing Boing: “As the reporter recounted all of this to me, I was living my research in real time. I was well-versed in the mechanics of a prank like this, but that didn’t abate the anxiety attacks I was having.”
  • Managers beware of gender faultlines | EurekAlert! Science News: “In addition to gender divisions, the authors looked at a more benign kind of faultline: Those created by cliques centered on job types (that is, when people with similar job duties share not only that trait but other demographic qualities such as gender, age and time served.) When the diversity environment was positive, that kind of group identity actually led to stronger feelings of loyalty toward the firm. But the positive effect of job-function cliques disappeared when the diversity climate was unsatisfactory.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, by using the “geekfeminism” tag on Pinboard or Diigo, or with the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet Linux Australia: David Rowe: Low Order LPC and Bandpass Filtering

I’ve been working on the Linear Predictive Coding (LPC) modeling used in the Codec 2 700 bit/s mode to see if I can improve the speech quality. Given this mode was developed in just a few days I felt it was time to revisit it for some tuning.

LPC fits a filter to the speech spectrum. We update the LPC model every 40ms for Codec 2 at 700 bit/s (10 or 20ms for the higher rate modes).

Speech codecs typically use a 10th order LPC model. This means the filter has 10 coefficients, and every 40ms we have to send them to the decoder over the channel. For the higher bit rate modes I use about 37 bits/frame for this information, which is the majority of the bit rate.
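As a quick back-of-the-envelope check (my own arithmetic, not taken from the codec sources), you can see why 37 bits of LPC information per frame can't fit in the 700 bit/s mode:

```python
# Rough bit-budget arithmetic for Codec 2 frame sizes (illustrative only).

def bits_per_frame(bit_rate_bps, frame_ms):
    """Total bits available in one frame at the given bit rate."""
    return bit_rate_bps * frame_ms / 1000.0

# The 700 bit/s mode updates the model every 40 ms:
print(bits_per_frame(700, 40))    # -> 28.0 bits/frame in total
# A higher rate mode, e.g. 3200 bit/s with 20 ms frames:
print(bits_per_frame(3200, 20))   # -> 64.0 bits/frame, room for ~37 bits of LPC
```

With only 28 bits per frame in total at 700 bit/s, the LPC coefficients clearly have to be squeezed much harder than in the higher rate modes.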

However I discovered I can get away with a 6th order model, if the input speech is filtered the right way. This has the potential to significantly reduce the bit rate.

The Ear

Our ear perceives speech based on the frequency of peaks in the speech spectrum. When the peaks in the speech spectrum are indistinct, we have trouble understanding what is being said. The speech starts to sound muddy. With analog radio like SSB (or in a crowded room), the troughs between the peaks fill with noise as the SNR degrades, and eventually we can’t understand what’s being said.

The LPC model is pretty good at representing peaks in the speech spectrum. With a 10th order LPC model (p=10) you get 10 poles. Each pair of poles can represent one peak, so with p=10 you get up to 5 independent peaks; with p=6, just 3.

I discovered that LPC has some problems if the speech spectrum has big differences between the low and high frequency energy. To find the LPC coefficients, we use an algorithm that minimises the mean square error. It tends to “throw poles” at the highest energy part of the signal (frequently near DC), while ignoring the still important, lower energy peaks at higher frequencies above 1000Hz. So there is a mismatch between the way LPC analysis works and the way our ears perceive speech.
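For the curious, here is a minimal pure-Python sketch of the standard autocorrelation-method LPC fit (not the actual Codec 2 code, which is in C, and with the usual Hamming window omitted for brevity): compute short-term autocorrelations, then solve the normal equations with the Levinson-Durbin recursion.

```python
import math

def autocorr(x, p):
    """First p+1 short-term autocorrelation terms of frame x."""
    n = len(x)
    return [sum(x[i] * x[i + k] for i in range(n - k)) for k in range(p + 1)]

def levinson_durbin(r, p):
    """Solve the LPC normal equations by the Levinson-Durbin recursion.

    Returns (a, e): the filter A(z) = 1 + a[1] z^-1 + ... + a[p] z^-p and
    the final prediction-error energy e. Minimising e (the mean square
    error) is exactly what makes LPC chase the highest-energy part of the
    spectrum first."""
    a = [0.0] * (p + 1)
    e = r[0]
    for i in range(1, p + 1):
        acc = r[i] + sum(a[j] * r[i - j] for j in range(1, i))
        k = -acc / e                     # reflection coefficient, |k| <= 1
        a_next = a[:]
        a_next[i] = k
        for j in range(1, i):
            a_next[j] = a[j] + k * a[i - j]
        a = a_next
        e *= (1.0 - k * k)
    return a, e

# A 40 ms frame at 8 kHz with two synthetic "formants" at 700 and 2300 Hz:
fs = 8000
x = [math.sin(2 * math.pi * 700 * t / fs)
     + 0.3 * math.sin(2 * math.pi * 2300 * t / fs)
     for t in range(int(0.04 * fs))]
a, e = levinson_durbin(autocorr(x, 6), 6)
# e ends up a small fraction of the frame energy r[0]: two clean peaks
# are easy for a p=6 model. Real speech with a 40dB LF/HF tilt is where
# the MSE criterion starts spending its poles in the wrong place.
```

Since each complex pole pair buys one spectral peak, this p=6 fit has at most 3 peaks to hand out, which is the "runs out of poles" problem described above.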

For example I found that samples like hts1a and ve9qrp code quite well, but cq_ref and kristoff struggle. The former have just 12dB between the LF and HF parts of the speech spectrum, the latter 40dB. This may be due to microphones, input filtering, or analog shaping.

Another problem with using an unconventionally low LPC order like p=6 is that the model “runs out of poles”. Some speech signals may have 4 or 5 peaks, so the poor LPC model gets all confused and tries to reach a compromise that just sounds bad.

My Experiments

I messed around with a bunch of band pass filters that I applied to the speech samples before LPC modeling. These filters whip the speech signal into a shape that the LPC model can work with. I ran various samples (hts1a, hts2a, cq_ref, ve9qrp_10s, kristoff, mmt1, morig, forig, x200_ext, vk5qi) through them to come up with the best compromise for the 700 bit/s mode.

Here is what p=6 LPC modeling sounds like with no band pass filter. Here is a sample of p=6 LPC modeling with a 300 to 2600Hz input band pass filter with very sharp edges.

Even though the latter sample is band limited, it is easier to understand as the LPC model is doing a better job of clearly representing those peaks.

Filter Implementation

After some experimentation with sox I settled on two different filter types: a sox “bandpass 1000 2000” worked on some, whereas on others with more low frequency content “bandpass 1500 2000” sounded better. Some helpful discussions with Glen VK1XX had suggested that a two band AGC was common in broadcast audio pre-processing, and might be useful here.

However through a process of frustrated experimentation (I was stuck on cq_ref for a day) I found that a very sharp skirted filter between 300 and 2600Hz did a pretty good job. Like p=6 LPC, a 2600Hz cut off is quite uncommon for speech coding, but SSB users will find it strangely familiar…
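For reference, a sharp-skirted band pass like this can be built as a windowed-sinc FIR filter: subtract two low pass prototypes and shape the result with a window. This is my own illustrative Python sketch of the general technique, not the filter actually shipped in FreeDV and not how sox implements its sinc effect:

```python
import math

def windowed_sinc_bandpass(f1, f2, fs, taps=201):
    """FIR band pass (f1..f2 Hz) via windowed-sinc: the difference of two
    ideal low pass responses, shaped by a Hamming window. More taps give
    sharper skirts (narrower transition bands)."""
    m = taps - 1
    h = []
    for n in range(taps):
        t = n - m / 2.0
        if t == 0:
            ideal = 2.0 * (f2 - f1) / fs          # centre tap
        else:
            ideal = (math.sin(2 * math.pi * f2 * t / fs)
                     - math.sin(2 * math.pi * f1 * t / fs)) / (math.pi * t)
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / m)   # Hamming window
        h.append(ideal * w)
    return h

def gain_at(h, f, fs):
    """Magnitude of the filter's frequency response at f Hz."""
    re = sum(c * math.cos(2 * math.pi * f * n / fs) for n, c in enumerate(h))
    im = sum(c * math.sin(2 * math.pi * f * n / fs) for n, c in enumerate(h))
    return math.hypot(re, im)

h = windowed_sinc_bandpass(300, 2600, 8000)
# Passband (e.g. 1 kHz) gain is near 1; stopband (50 Hz, 3.5 kHz) is
# heavily attenuated, removing the LF energy that distracts the LPC fit.
```

Filtering is then just a convolution of the speech samples with h. The same 300–2600 Hz shape is what the `sinc 300 sinc -2600` sox stage in the command lines below produces.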

Note that for the initial version of the 700 bit/s mode (currently in use in FreeDV 700) I have a different band pass filter design I chose more or less at random on the day that sounds like this with p=6 LPC. This filter now appears to be a bit too severe.


Here is a little chunk of speech from hts1a:

Below are the original (red) and p=6 LPC models (green line) without and with a sox “bandpass 1000 2000” filter applied. If the LPC model was perfect, green and red would be superimposed. Open each image in a new browser tab then jump back and forth. See how the two peaks around 550 and 1100Hz are better defined with the bandpass filter? The error (purple) in the 500 – 1000 Hz region is much reduced, better defining the “twin peaks” for our long suffering ears.

Here are three spectrograms of me saying “D G R”. The dark lines represent the spectral peaks we use to perceive the speech. In the “no BPF” case you can see the spectral peaks between 2.2 and 2.3 seconds are all blurred together. That’s pretty much what it sounds like too – muddy and indistinct.

Note that compared to the original, the p=6 BPF spectrogram is missing the pitch fundamental (dark line near 0 Hz), and a high frequency peak at around 2.5kHz is indistinct. Turns out neither of these matter much for intelligibility – they just make the speech sound band limited.

Next Steps

OK, so over the last few weeks I’ve spent some time looking at the effects of microphone placement, and input filtering on p=6 LPC models. Now time to look at quantisation of the 700 mode parameters then try it again over the air and see if the speech quality is improved. To improve performance in the presence of bit errors I’d also like to get the trellis based decoding into a real world usable form. When the entire FreeDV 700 mode (codec, modem, error handling) is working OK compared to SSB, time to look at porting to the SM1000.

Command Line Magic

I’m working with the c2sim program, which lets me explore Codec 2 in a partially quantised or incomplete state. I pipe audio in and out between various sox stages.

Note these simulations sound a lot better than the final Codec 2 at 700 bit/s as nothing else is quantised/decimated, e.g. it’s all at a 10ms frame rate with original phases. It’s a convenient way to isolate the LPC modeling step with as much fidelity as we can.

If you want to sing along, here are a couple of sample command lines. Feel free to ask me any questions:

sox -r 8000 -s -2 ../../raw/hts1a.raw -r 8000 -s -2 -t raw - bandpass 1000 2000 | ./c2sim - --lpc 6 --lpcpf -o - | play -t raw -r 8000 -s -2 -
sox -r 8000 -s -2 ../../raw/cq_ref.raw -r 8000 -s -2 -t raw - sinc 300 sinc -2600 | ./c2sim - --lpc 6 --lpcpf -o - | play -t raw -r 8000 -s -2 -

Reading Further

Open Source Low Rate Speech Codec Part 2
LPC Post Filter for Codec 2

Planet Linux Australia: Michael Still: Geocaching with a view

I went to find a couple of geocaches in a jet lag fuelled caching walk this morning. Quite scenic!


Interactive map for this route.

Tags for this post: blog pictures 20150729 photo sydney
Related posts: In Sydney!; In Sydney for the day; A further update on Robyn's health; RIP Robyn Boland; Weekend update; Bigger improvements



Cryptogram: New RC4 Attack

New research: "All Your Biases Belong To Us: Breaking RC4 in WPA-TKIP and TLS," by Mathy Vanhoef and Frank Piessens:

Abstract: We present new biases in RC4, break the Wi-Fi Protected Access Temporal Key Integrity Protocol (WPA-TKIP), and design a practical plaintext recovery attack against the Transport Layer Security (TLS) protocol. To empirically find new biases in the RC4 keystream we use statistical hypothesis tests. This reveals many new biases in the initial keystream bytes, as well as several new long-term biases. Our fixed-plaintext recovery algorithms are capable of using multiple types of biases, and return a list of plaintext candidates in decreasing likelihood.

To break WPA-TKIP we introduce a method to generate a large number of identical packets. This packet is decrypted by generating its plaintext candidate list, and using redundant packet structure to prune bad candidates. From the decrypted packet we derive the TKIP MIC key, which can be used to inject and decrypt packets. In practice the attack can be executed within an hour. We also attack TLS as used by HTTPS, where we show how to decrypt a secure cookie with a success rate of 94% using 9·2^27 ciphertexts. This is done by injecting known data around the cookie, abusing this using Mantin's ABSAB bias, and brute-forcing the cookie by traversing the plaintext candidates. Using our traffic generation technique, we are able to execute the attack in merely 75 hours.
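For context, RC4 itself is tiny: the key schedule (KSA) and output generator (PRGA) fit in a few lines, which is part of why it lingered so long. Here is a minimal Python sketch of the cipher whose keystream biases the paper exploits:

```python
def rc4_keystream(key):
    """RC4: key-scheduling algorithm (KSA), then the pseudo-random
    generation algorithm (PRGA), yielding keystream bytes forever.
    The paper's attacks exploit statistical biases in exactly these
    output bytes."""
    s = list(range(256))
    j = 0
    for i in range(256):                       # KSA
        j = (j + s[i] + key[i % len(key)]) % 256
        s[i], s[j] = s[j], s[i]
    i = j = 0
    while True:                                # PRGA
        i = (i + 1) % 256
        j = (j + s[i]) % 256
        s[i], s[j] = s[j], s[i]
        yield s[(s[i] + s[j]) % 256]

def rc4_crypt(key, data):
    """Encrypt/decrypt (same operation) by XOR with the keystream."""
    ks = rc4_keystream(key)
    return bytes(b ^ next(ks) for b in data)

# Classic published test vector: RC4("Key", "Plaintext") -> bbf316e8d940af0ad3
```

Because encryption is a plain XOR with this keystream, any statistical bias in the keystream bytes leaks information about repeated plaintexts, which is what makes the cookie-recovery attack possible.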

News articles.

We need to deprecate the algorithm already.

Planet Debian: Jonathan Dowland: Sound effect pitch-shifting in Doom

My previous blog posts about deterministic Doom proved very popular.

The reason I was messing around with Doom's RNG was I was studying how early versions of Doom performed random pitch-shifting of sound effects, a feature that was removed early on in Doom's history. By fixing the random number table and replacing the game's sound effects with a sine wave, one second long and tuned to middle-c, I was able to determine the upper and lower bounds of the pitch shift.
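The core of a pitch shift on a raw sample buffer is just resampling: play the samples back faster for a higher pitch, slower for a lower one. This is a generic linear-interpolation illustration of the idea (my own sketch, not the actual C code in Chocolate Doom):

```python
def pitch_shift(samples, factor):
    """Resample a sound effect by linear interpolation.

    factor > 1 raises the pitch (and shortens the sound);
    factor < 1 lowers it (and lengthens the sound)."""
    n = int(len(samples) / factor)
    out = []
    for i in range(n):
        pos = i * factor                  # fractional read position
        j = int(pos)
        frac = pos - j
        a = samples[j]
        b = samples[min(j + 1, len(samples) - 1)]
        out.append(a + (b - a) * frac)    # interpolate between neighbours
    return out
```

Feeding a known sine wave through a resampler like this, with the RNG table fixed, is how you can measure exactly how far up and down the randomised shift goes.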

Once I knew that, I was able to write some patches to re-implement pitch shifting in Chocolate Doom, which I'm pleased to say have been accepted. The patches have also made their way into the related projects Crispy Doom and Doom Retro.

I'm pleased with the final result. It's the most significant bit of C code I've ever released publicly, as well as my biggest Doom hack and the first time I've ever done any audio manipulation in code. There was a load of other notes and bits of code that I produced in the process. I've put them together on a page here: More than you ever wanted to know about pitch-shifting.

Planet Debian: Lisandro Damián Nicanor Pérez Meyer: Plasma/KF5: Testing situation

Dear Debian/KDE users,

We are aware that the current situation in testing is very unfortunate, with two main issues:

  1. systemsettings transitioned to testing before the corresponding KDE Control Modules. The result is that systemsettings displays an empty screen. This is tracked in the following bug
  2. plasmoids such as plasma-nm transitioned to testing before plasma-desktop 5. The result is that the plasmoids are no longer displayed in the system tray.

We are working on getting plasma-desktop to transition to testing as soon as possible (hopefully in 2 days time), which will resolve both those issues. We appreciate that the transition to KF5 is much rougher than we would have liked, and apologize to all those impacted.

On behalf of the Qt/KDE team,

Sociological Images: Are Drag Queens Doing Girlface?

Organizers of Free Pride Glasgow, a Scottish gay pride parade, have “banned” drag queens from the event, citing concerns that men dressing up like women is offensive to trans women. The LGBTQ community is afire about this, citing the long tradition of drag performances in gay communities and the role drag queens have played in the Gay Liberation movement. “hello, ever heard of THE STONEWALL RIOTS?!!!” tweeted one of the stars of RuPaul’s Drag Race.

The organizers of Free Pride Glasgow are standing their ground, stating that they will only allow performers who are not cisgender men (those who do not identify as men, such as trans women) to perform in drag. A facebook comment suggested, and rightly so, that this could get really problematic really fast in practice, asking: “How are you going to moderate who is a trans and who is a cis drag act?”

Well, that’s a can of worms.

I don’t know how this conversation is going to play out and, to be honest, I’m nervous to jump in. But I gotta say that I, for one, really hope we keep talking about this. I don’t think it’s unreasonable to worry about how drag queen performances might make trans women feel. Drag performers generally do an exaggerated performance of femininity and I think it’s okay to ask whether and when this counts as mocking femininity and the people that perform it: trans women, yes, and ciswomen, too.


Sexism matters here and anyone can be sexist, even drag queens. When drag queens trot out some of the worst stereotypes about women, for example — performing characters that are vain, bitchy, selfish, and always PMSing — I see girlface. I see men mocking femininity, not embracing their feminine sides and busting the fiction of masculinity. So, I don’t blame trans women one bit if this makes them uncomfortable; it sure makes me uncomfortable and I’m in a much safer position than they.

So, I don’t know where this conversation is going to go, but I do think we need to have it. It needs to be, though, not about whether drag queens should be banned, but what drag should look like going forward. It should be about both what drag queens bring to the movement — their value in the past and the role they can play now — but also whether and how their performances contribute to a devaluation of femininity that hurts all women, cis, trans, and other.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Google Adsense: Introducing a new user consent policy

Today we’re launching a new user consent policy. This policy requires publishers whose sites have visitors from the European Union to obtain those visitors’ permission to use their data.

Why are we doing this?

European Union data protection authorities requested some changes to current practices for obtaining end user consents. It has always been Google’s policy to comply with privacy laws, so we’ve agreed to make certain changes affecting our own products and partners using Google products.

What do you need to do?

If your websites are getting visitors from any of the countries in the European Union, you must comply with the EU user consent policy. We recommend you start working on a policy-compliant user consent mechanism today. There’s guidance from data protection authorities and IABs across Europe on what is required to comply with relevant laws; IAB Europe’s Guidance: Five Practical Steps to help companies comply with the E-Privacy Directive is a good place to start.

To learn how to implement a user consent mechanism, check out our help center FAQs and visit Cookie Choices, a website dedicated to complying with this new policy.

Posted by Jason Woloz, Security & Privacy Program Manager, Display and Video Ads

Cryptogram: Stagefright Vulnerability in Android Phones

The Stagefright vulnerability for Android phones is a bad one. It's exploitable via a text message (details depend on auto downloading of the particular phone), it runs at an elevated privilege (again, the severity depends on the particular phone -- on some phones it's full privilege), and it's trivial to weaponize. Imagine a worm that infects a phone and then immediately sends a copy of itself to everyone on that phone's contact list.

The worst part of this is that it's an Android exploit, so most phones won't be patched anytime soon -- if ever. (The people who discovered the bug alerted Google in April. Google has sent patches to its phone manufacturer partners, but most of them have not sent the patch to Android phone users.)

Worse Than Failure: CodeSOD: You've Got My Number

Luftballons Hannover

Today's snippet needs very little introduction. In the words of the submitter:

[My predecessor] is what I would consider among the worst programmers in the world. While his programs actually do work and do what they should, his techniques and programming decisions are very questionable. The [below] code snippet is from a program he wrote after he spent about a year at this company.

The function had one goal: validate a pair of textboxes to ensure they each contain a date, usually in a format like "12 2012" for December 2012. It demonstrates the kind of short-sightedness that usually gets ground out of a developer inside of their first year, like the impulse to test only the happy path through the code and not any possible error conditions. It's generally best to assume your users are malicious idiots who will type things like "none" or "99 luftballons" instead of a proper date.

If that were all he did, though, this wouldn't be worthy of TDWTF. Have a look-see:

private void button1_Click(object sender, EventArgs e)
{
	int digitornot = 0; //'Tis a digit or not?

	if ((textBox1.Text.Length == 1) || (textBox1.Text.Length == 2) || (textBox2.Text.Length == 4))
	{
		foreach (char x in textBox1.Text + textBox2.Text)
			if (Char.IsDigit(x))
				digitornot = 1;

		if (digitornot == 1)
		{
			Program.Month = Convert.ToInt16(textBox1.Text);
			if ((Program.Month > 0) && (Program.Month < 13))
			{
				Program.Year = Convert.ToInt16(textBox2.Text);
				if ((Program.Year > 2000) && (Program.Year < 2050))
				{
					// valid: Program.Month and Program.Year are set
				}
				else
					MessageBox.Show("Wrong input!\r\nCheck format!");
			}
			else
				MessageBox.Show("Wrong input!\r\nCheck format!");
		}
		else
			MessageBox.Show("Wrong input!\r\nCheck format!");
	}
	else
		MessageBox.Show("Wrong input!\r\nCheck format!");
}
Not only did the author use integers when a boolean would be more appropriate, he also neglected to name any of his input fields, making maintenance a nightmare. The check for digits will allow all kinds of crud through, which will then crash the program when it tries to convert non-integers to Int16. The submitter has no idea why the year is being compared to 2050. Presumably, the Rapture will happen before then, so no future dates need be considered beyond that point.


Planet Linux Australia: Michael Still: Chet and I went on an adventure to LA-96

So, I've been fascinated with American nuclear history for ages, and Chet and I got talking about what, if any, nuclear launch facilities there were in LA. We found LA-96 online and set off on an expedition to explore. An interesting site; it's a pity there are no radars left there. Apparently SF-88 is the place to go for tours from vets and radars.



I also made a quick and dirty 360 degree video of the view of LA from the top of the nike control radar tower:

Interactive map for this route.

Tags for this post: blog pictures 20150727-nike_missile photo california
Related posts: First jog, and a walk to Los Altos; Did I mention it's hot here?; Summing up Santa Monica; Noisy neighbours at Central Park in Mountain View; So, how am I getting to the US?; Views from a lookout on Mulholland Drive, Bel Air


Planet Debian: Norbert Preining: ePub editor Sigil landed in Debian

A long, long time ago I wanted Sigil, an epub editor, to appear in Debian. There was a packaging wishlist bug from back in 2010 with intermittent activity. But thanks to concerted effort, especially by Mattia Rizzolo and Don Armstrong, packaging progressed to a state where I could sponsor the upload to experimental about 4 months ago. And yesterday, after a long wait, Sigil finally passed the watchful eyes of the Debian ftp-masters and entered Debian/experimental.


I have already updated the packaging for the latest version 0.8.7, which will be included in Debian/sid rather soon. Thanks again especially to Mattia for his great work.



Planet Debian: Kees Cook: 3D printing Poe

I helped print this statue of Edgar Allan Poe, through “We the Builders”, who coordinate large-scale crowd-sourced 3D print jobs:

Poe's Face

You can see one of my parts here on top, with “-Kees” on the piece with the funky hair strand:

Poe's Hair

The MakerWare I run on Ubuntu works well. I wish they were correctly signing their repositories. Even if I use non-SSL to fetch their key, as their Ubuntu/Debian instructions recommend, it still doesn’t match the packages:

W: GPG error: trusty Release: The following signatures were invalid: BADSIG 3D019B838FB1487F MakerBot Industries dev team <>

And it’s not just my APT configuration:

$ wget
$ wget
$ gpg --verify Release.gpg Release
gpg: Signature made Wed 11 Mar 2015 12:43:07 PM PDT using RSA key ID 8FB1487F
gpg: requesting key 8FB1487F from hkp server
gpg: key 8FB1487F: public key "MakerBot Industries LLC (Software development team) <>" imported
gpg: Total number processed: 1
gpg:               imported: 1  (RSA: 1)
gpg: BAD signature from "MakerBot Industries LLC (Software development team) <>"
$ grep ^Date Release
Date: Tue, 09 Jun 2015 19:41:02 UTC

Looks like they’re updating their Release file without updating the signature file. (The signature is from March, but the Release file is from June. Oops!)

© 2015, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Planet Debian: Andrew Cater: Bye SPARC - for now

So it looks as if it's the end for the Debian SPARC port, which is primarily 32 bit, for now at least. Too little available modern hardware, too few porters, and an upstream hardware provider emotionally tied to significant licensing and support agreements.

If 64 bit SPARC hardware were more available, I'd be interested again. SPARC has given me two of my favourite moments in Debian. I helped a colleague to duplicate existing software and move architecture from Intel to SPARC mainly by copying across the list of packages. 

It also allowed me in ?? 1999 / 2000 ?? to take a SPARC 20 to London Olympia to a Linux Expo where one of the principal sponsors was Sun. They laughed on their stand when I set up older hardware with minimal memory but were not so amused when I demonstrated Debian, full X Window environment and KDE successfully.

Cryptogram: Michael Chertoff Speaks Out Against Backdoors

This is significant.

News article.

EDITED TO ADD (7/28): Commentary, and former Director of the National Counterintelligence Center Michael Leiter's comments.

Chaotic Idealism: Divide and Conquer

I really wish autistic people who can talk and live on their own would stop trying to distance themselves from the autistic people who can't do those things... It frustrates me that they're still kind of buying into the idea that "I'm autistic but it's okay because I'm smart"... because that buys into the idea of ranking people's worth by their abilities.

I want autism rights to stop emphasizing our talents or our disabilities, and start talking about what we want, what we need, what our lives are really like. It just makes me really sad when I hear somebody say, "It's okay that I'm autistic, but I wouldn't want to be one of Those People who needs diapers and has intellectual disability and needs a group home; in fact, they're probably not really autistic at all because autistic people are smart"... It just makes me sad.

We can't shut them away. We need to stick together with those who can't communicate or who need a lot of help. We need to defeat the fear-pity-hate thing altogether instead of trying to wiggle away from it and leaving the "low-functioning people" to deal with that stigma alone.

Krebs on Security: The Wheels of Justice Turn Slowly

On the evening of March 14, 2013, a heavily-armed police force surrounded my home in Annandale, Va., after responding to a phony hostage situation that someone had alerted authorities to at our address. I’ve recently received a notice from the U.S. Justice Department stating that one of the individuals involved in that “swatting” incident had pleaded guilty to a felony conspiracy charge.

“A federal investigation has revealed that several individuals participated in a scheme to commit swatting in the course of which these individuals committed various federal criminal offenses,” reads the DOJ letter, a portion of which is here (PDF). “You were the victim of the criminal conduct which resulted in swattings in that you were swattted.”

The letter goes on to state that one of the individuals who participated in the scheme has pleaded guilty to conspiracy charges (Title 18, Section 371) in federal court in Washington, D.C.

The notice offers little additional information about the individual who pleaded guilty or about his co-conspirators, and the case against him is sealed. It could be the individual identified at the conclusion of this story, or someone else. In any case, my own digging on this investigation suggests the government is in the process of securing charges or guilty pleas in connection with a group of young men who ran the celebrity “doxing” Web site exposed[dot]su (later renamed exposed[dot]re).

As I noted in a piece published just days after my swatting incident, the attack came not long after I wrote a story about the site, which was posting the Social Security numbers, previous addresses, phone numbers and credit reports on a slew of high-profile individuals, from the director of the FBI to Kim Kardashian, Bill Gates and First Lady Michelle Obama. Many of those individuals whose personal data were posted at the site also were the target of swatting attacks, including P. Diddy, Justin Timberlake and Ryan Seacrest.

The Web site exposed[dot]su featured the personal data of celebrities and public figures.

Sources close to the investigation say Yours Truly was targeted because this site published a story correctly identifying the source of the personal data that the hackers posted on exposed[dot]su. According to my sources, the young men, nearly all of whom are based here in the United States, obtained the personal data after hacking into a now-defunct online identity theft service called ssndob[dot]ru.

Investigative reporting first published on KrebsOnSecurity in September 2013 revealed that the same miscreants controlling ssndob[dot]ru (later renamed ssndob[dot]ms) siphoned personal data from some of America’s largest consumer and business data aggregators, including LexisNexis, Dun & Bradstreet and Kroll Background America.

The administration page of ssndob[dot]ru. Note the logged in user,, is the administrator.

I look forward to the day that the Justice Department releases the names of the individuals responsible for these swatting incidents, for running exposed[dot]su, and hacking the ssndob[dot]ru ID theft service. While that identity theft site went offline in 2013, several competing services have unfortunately sprung up in its wake, offering the ability to pull Social Security numbers, dates of birth, previous addresses and credit reports on virtually all Americans.

Further reading:

Who Built the Identity Theft Service SSNDOB[dot]RU? 

Credit Reports Sold for Cheap in the Underweb

Data Broker Giants Hacked by ID Theft Service

Data Broker Hackers Also Compromised NW3C

Swatting Incidents Tied to ID Theft Sites?

Toward a Breach Canary for Data Brokers

How I Learned to Stop Worrying and Embrace the Credit Freeze

TED: Why I put all my stuff in storage to travel cross-country and listen to people

In teams of three, the StoryCorps Mobile Tour travels across the United States, facilitating interviews between ordinary people with extraordinary stories. Emily Janssen (center) shares a moment in her team’s Airstream trailer, known as Betty. Photo: Courtesy of Emily Janssen

Ever had the impulse to put everything in storage, sublet your place and travel across the country in an Airstream trailer?

That’s what Emily Janssen did when she joined the StoryCorps Mobile Tour as a facilitator, someone who helps people record their own StoryCorps interview.

She’d worked for StoryCorps before, at their headquarters in Brooklyn, but in 2014 she decided it was time to hit the road. Now she crisscrosses the country with a mobile recording booth and a team of three, parking in one spot for four or five weeks and recording interviews with people who live there.

The team records seven interviews a day, five days a week. Janssen oversees the technical aspects of recording, logs each interview for the Library of Congress — and actively listens to every conversation as it unfolds, asking clarifying questions to keep people on track. So far, she has captured nearly 500 interviews.

Janssen will travel with the mobile team through the end of 2015. She spoke to us from Vernal, Utah, about the mobile recording booth in a trailer (which is named Betty), life on the road and how listening has transformed her.

What inspired you to become a facilitator?

I was working as an advocate for homeless youth, and witnessing their voicelessness. I was also devouring audio documentaries. At the time, StoryCorps was a project of Sound Portraits. I clicked on the little icon on their website, read the mission statement and just immediately connected with the idea. I saw the potential for StoryCorps to be an empowering space for those who are marginalized to speak in their own words.

What do you remember from your first day?

I didn’t come from a radio background, so I just remember thinking, “I hope this is recording. It says it’s recording. Is the sound O.K.? Did I remember to tell the participants what will happen when they finish recording?”

How do you stay neutral in interviews?

I can’t say that I do 100% of the time. Witnessing people being open and honest, expressing their love and appreciation for each other, can be emotional. We keep tissues in our booth — and facilitators use them, too. When I’m hearing a particularly difficult or sad story, I remind myself that my role is not to be a therapist or to try to change the emotions my participants are expressing. My role is to listen and hold a safe space for them.

StoryCorps has two Airstream trailers that house mobile recording studios. With the 2015 TED Prize, StoryCorps launched an app that for the first time lets people conduct interviews outside of an official booth. Photo: Courtesy of StoryCorps

How do you manage the logistics of being on the road all the time?

When people ask, I say, “I live with the booth.” I don’t have a permanent residence, but my home bases are Brooklyn, where StoryCorps is based; Minneapolis-Saint Paul, where I lived before coming on the road; and Albany, where I grew up. I don’t have a lot of possessions, but what I do have is stored by friends and family. My dog, Pedro, has had his own adventure with my aunts during my time on the road.

Is it hard to go home?

I appreciate times at home now more than ever. I love hearing the sounds of someone brewing coffee or cooking dinner — or just the familiarity of sitting on the couch with someone I’ve known for years. Being on the road has made me crave the stability and deep ties you create by being in one place, while also sparking my curiosity to discover more.

Life on the road is about constant transition. Betty, our trailer, becomes the most stable thing. After getting lost looking for the grocery store, it’s nice to come home to the booth.

Where do you live while on the road?

It changes every stop. Our local public radio station partners provide housing for our team. We’ve been in houses, apartments, dorms, hotels. We’re never in one place for more than five weeks.

What’s it like driving the Airstream?

When the tour began, the staff pulled the trailer with a Chevy Silverado. But now, we hire professional drivers. Let’s just say that’s a good thing.

What are the essential things to have with you on the road?

My phone and my sneakers. My phone acts as my connection to our office and to my family and friends. And I’d be lost without my sneakers. Mine have run around Forest Park in St. Louis, seen alligators in Louisiana’s Barataria Preserve, climbed the Manitou Incline in Colorado, and walked countless city blocks.

You often listen to difficult stories. How do you prevent burnout?

It’s important to practice self-care. On the road, I make time to run and swim, and send surprise packages to my niece and nephew. I weave and quilt to create something outside of my head. But the best tool is the team. We’re always on the road in a team of three, and we check in with each other about our day and interviews. We create our own little community that helps us process everything.

What advice would you give on becoming a better listener?

If you have the impulse to interrupt and ask a question, it’s good to ask yourself, “Why do I want to ask that?” Sometimes the person will answer in their own time, and in their own way. It’s good to pause and make sure you’re not leading the conversation.

How has this experience changed you?

Being in the booth constantly reminds me that we are not a list of our accomplishments. When you’re listening to people talk about their lives, you may only peripherally know what’s on their resume, but you can see who they are. We all have our struggles and our celebrations, but most of us are just trying to do our best to move through life. I witness a tremendous capacity for love and forgiveness, and am amazed at people’s resilience.


Emily Janssen in the StoryCorps mobile recording booth. She has facilitated nearly 500 interviews. Photo: Courtesy of Emily Janssen

Find out where the StoryCorps booth will be next »

Dave Isay, the founder of StoryCorps, is the winner of our 2015 TED Prize. In a talk at TED2015, he shared an audacious wish for his organization: to take it global with a free app. Stay tuned for this column every other week on the TED Blog, as we chart the evolution of his TED Prize wish.

TED: An organic computer of connected rat brains, what the American South can learn from post-WWII Germany and much more

The TED community always has lots of news to share. Below, some highlights from the past two weeks.

Lessons for the South — from Germany. “Can the American South, still grappling with the legacy of slavery and segregation, learn something from Germany’s grappling with Nazism?” Anand Giridharadas asked this question of four scholars who study both places. The takeaway: in Germany, a sense of “collective responsibility” has led to public memorials of tragic events, while in the South, public remembering is tinged with nostalgia for a lost cause. The “self-critical memory culture” of Germany allows for remembrance of unsettling events, like a memorial to a neighborhood where murdered Holocaust victims once lived. For a Southern equivalent, Anand points to an idea from TED speaker Bryan Stevenson: “holographic memorials that pop up and deliberately startle passers-by at sites where lynchings occurred.” (Watch Anand’s TED Talk, “A tale of two Americas. And the mini-mart where they collided.”)

An organic computer made of … animals. At TEDGlobal 2014, Miguel Nicolelis shared his research on a brain interface that lets two rats — or three monkeys — cooperate to solve problems together. This month, he and his colleagues published a paper in the open-access journal Scientific Reports to expand on the concept of the “Brainet.” The paper describes an initial model, interconnecting the brains of four adult rats, that shows “the core of a new type of computing device: an organic computer.” As Nicolelis told Motherboard, “These computers will not do word processing or numerical calculation or internet searches; they will be tailored for very specific tasks. It’s a totally different kind of vision for computation that we’re not used to.” (Watch Miguel’s TED Talk, “Brain-to-brain communication has arrived. How we did it.”)

A grant for the StoryCorps app. StoryCorps has won a $600,000 grant from the Knight Foundation to improve its app, which lets users record interviews on their mobile devices. The app was launched into public beta with the 2015 TED Prize; the Knight grant will be used to add new features, like social tools for interviewing people far away. (Watch Dave’s TED Prize talk, “Everyone around you has a story the world needs to hear.”)

And the first StoryCorps app marriage proposal. On July 8, Rory Miller asked his girlfriend, Asya Adcock, to marry him, while the two recorded an interview on the StoryCorps app. Sitting in their Tennessee home, Rory begins, “This last question basically says, ‘Take time to tell your interview partner what they mean to you.’ So I’m going to say to you: I absolutely adore you. You are easily the best thing that ever happened to me in any way, shape or form.” Check out an excerpt of the interview to hear him ask the question — and to hear Asya’s response.

African music in the digital age. In a fun Q&A with Paper, TED Fellow Bill “Blinky” Sellanga talks about how South African house music and guitar-heavy Kenyan songs are finding new global audiences online. “The grooves that African music has are insane,” he said, “and just waiting for innovative producers and musicians to merge both worlds to get people on their feet.”

Have a news item to share? Write us at and you may see it included in this biweekly round-up.

TED: Why TED takes two weeks off every summer

TED has gone dark for two weeks. No new TED Talks will be posted until Monday, August 10, 2015, while most of the TED staff takes a two-week holiday. Yes, we all go on break at the same time (mostly). No, we don’t all go to the same place :)

We’ve been doing it this way now for six years; our summer break is a little hack that solves the problem of an office full of Type-A’s with raging FOMO. We avoid the fear of missing out on emails and IMs and new projects and blah blah blah … by making sure that nothing is going on.

I love how my boss, June Cohen, explains it. “When you have a team of passionate, dedicated overachievers, you don’t need to push them to work harder, you need to help them rest. By taking the same two weeks off, it makes sure everyone takes vacation,” she says. “Planning a vacation is hard — most of us would feel a little guilty to take two weeks off if it weren’t pre-planned for us, and we’d be likely to cancel when something inevitably came up. This creates an enforced rest period, which is so important for productivity and happiness.”

Bonus: “It’s efficient,” she says. “In most companies, people stagger their vacations through the summer. But this means you can never quite get things done all summer long. You never have all the right people in the room.”

“We’re all on the same schedule. We all return feeling rested and invigorated. What’s good for the team is good for business.”

So, as the bartender said: You don’t have to go home, but you can’t stay here. We won’t post new TED Talks for the next two weeks. The main office is empty. And we stay off email. The whole point is that vacation time should be truly restful, and we should be able to recharge without having to check in or worry about what we’re missing back at the office.

One team isn’t taking this year’s break, though: This year’s break falls over Q4 contract deadlines, so our partnership team is in full swing, closing the sponsor deals that help support all of TED’s work throughout the year. So please, send good thoughts to the hardworking folks who help bring you TED Talks for free.

From the rest of us, see you on August 10!

Note: This piece was originally posted on July 17, 2014. It was updated on July 27, 2015.

Sociological Images: Mass Shootings in the U.S. are on the Rise. What Makes American Men So Dangerous?

Following the recent mass shooting in Charleston, South Carolina on June 17th, 2015 – a racially motivated act of domestic terrorism – President Barack Obama delivered a sobering address to the American people. Speaking with a heavy heart the day after the attack, President Obama stated:

At some point we as a country will have to reckon with the fact that this type of mass violence does not happen in other advanced countries. And it is in our power to do something about it. I say that recognizing that politics in this town foreclose a lot of those avenues right now. But it would be wrong for us not to acknowledge.

President Obama was primarily referring to gun control in the portion of his speech addressing the cause of attacks like this. Not all mass shootings are racially motivated, and not all qualify as “terrorist” attacks — though Charleston certainly qualifies.  And the mass shooting that occurred just a month later in Chattanooga, Tennessee by a Kuwaiti-born American citizen was quickly labeled an act of domestic terrorism. But, President Obama makes an important point here: mass shootings are a distinctly American problem. This type of rampage violence happens more in the United States than anywhere else. And gun control is a significant part of the problem. But, gun control is only a partial explanation for mass shootings in the United States.

Mass shootings are also almost universally committed by men.  So, this is not just an American problem; it’s a problem related to American masculinity and to the ways American men use guns.  But asking whether “guns” or “masculinity” is more of the problem misses the central point that separating the two might not be as simple as it sounds.  And, as Mark Follman, Gavin Aronsen, and Deanna Pan note in the Mother Jones Guide to Mass Shootings in America, the problem is getting worse.

We recently wrote a chapter summarizing the research on masculinity and mass shootings for Mindy Stombler and Amanda Jungels’ forthcoming volume, Focus on Social Problems: A Contemporary Reader (Oxford University Press). And we subsequently learned of a new dataset on mass shootings in the U.S. produced by the Stanford Geospatial Center. Their Mass Shootings in America database defines a “mass shooting” as an incident during which an active shooter shoots three or more people in a single episode. Some databases define mass shootings as involving 4 or more victims in a single episode. And part of this reveals that the number is, in some ways, arbitrary. What is significant is that we can definitively say that mass shootings in the U.S. are on the rise, however they are defined. The Mother Jones database has shown that mass shootings have become more frequent over the past three decades.  And, using the Stanford database, we can see the trend by relying on data that stretches back a bit further.


Additionally, we know that the number of victims of mass shootings is also at an historic high:


We also used the Stanford Geospatial Center’s database to produce a time-lapse map of mass shootings in the United States, illustrating both where and when mass shootings have occurred over time:

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="520" src="" width="100%"></iframe>

Our map charts mass shootings with 3 or more victims over roughly 5 decades, since 1966. The dataset takes us through the Charleston and Chattanooga shootings, which brought 2015 to 42 mass shootings. The dataset is composed of 216 separate incidents, only 5 of which were committed by lone female shooters. Below is an interactive map we produced depicting all of the mass shootings in the dataset, with brief descriptions of the shootings.

<iframe allowfullscreen="allowfullscreen" frameborder="0" height="520" src="" width="100%"></iframe>

In our chapter in Stombler and Jungels’ forthcoming book, we cull existing research to answer two questions about mass shootings: (1) Why is it men who commit mass shootings? and (2) Why do American men commit mass shootings so much more than men anywhere else?  Based on sociological research, we argue that there are two separate explanations – a social psychological explanation and a cultural explanation (see the book for much more detail on each).

A Social Psychological Explanation

Research shows that when an identity someone cares about is called into question, they are likely to react by over-demonstrating qualities associated with that identity.  As this relates to gender, some sociologists call this “masculinity threat.”  And while mass shootings are not common, research suggests that mass shooters experience masculinity threats from their peers and, sometimes, simply from an inability to live up to societal expectations associated with masculinity (like holding down a steady job, being able to obtain sexual access to women’s bodies, etc.) – some certainly more toxic than others.

The research on this topic is primarily experimental.  Men who are brought into labs and have their masculinity experimentally “threatened” react in patterned ways: they are more supportive of violence, less likely to identify sexual coercion, more likely to support statements about the inherent superiority of males, and more.

This research provides important evidence of what men perceive as masculine in the first place (resources they rely on in a crisis) and a new kind of evidence regarding the relationship between masculinity and violence.  The research does not suggest that men are somehow inherently more violent than women.  Rather, it suggests that men are likely to turn to violence when they perceive themselves to be otherwise unable to stake a claim to a masculine gender identity.

A Cultural Explanation

But certainly boys and men experience all manner of gender identity threat in other societies.  Why are American boys and men more likely to react with such extreme displays?  To answer this question, we need an explanation that articulates the role that American culture plays in influencing boys and young men to turn to this kind of violence at rates higher than anywhere else in the world.  This means we need to turn our attention away from the individual characteristics of the shooters themselves and to more carefully investigate the sociocultural contexts in which violent masculinities are produced and valorized.

Men have historically benefited from a great deal of privilege – white, educated, middle and upper class, able-bodied, heterosexual men in particular.  Social movements of all kinds have slowly chipped away at some of these privileges.  So, while inequality is alive and well, men have also seen a gradual erosion of privileges that flowed more seamlessly to previous generations of men (white, heterosexual, class-privileged men in particular).  Michael Kimmel suggests that these changes have produced a uniquely American gendered sentiment that he calls “aggrieved entitlement.”  Of course, being pissed off about an inability to cash in on privileges previous generations of men received without question doesn’t always lead to mass shootings.  But, from this cultural perspective, mass shootings can be understood as an extremely violent example of a more general issue regarding changes in relations between men and women and historical transformations in gender, race, and class inequality.

Mass shootings are a pressing issue in the United States.  And gun control is an important part of this problem.  But, when we focus only on the guns, we sometimes gloss over an important fact: mass shootings are also enactments of masculinity.  And they will continue to occur when this fact is combined with a sense among some men that male privilege is a birthright – and one that many feel unjustly denied.

Cross-posted at Feminist Reflections and Inequality by (Interior) Design.

Tristan Bridges and Tara Leigh Tober are sociologists at the College at Brockport (SUNY). You can follow them on Twitter at @tristanbphd and @tobertara.



Cryptogram: Hacking Team's Purchasing of Zero-Day Vulnerabilities

This is an interesting article that looks at Hacking Team's purchasing of zero-day (0day) vulnerabilities from a variety of sources:

Hacking Team's relationships with 0day vendors date back to 2009 when they were still transitioning from their information security consultancy roots to becoming a surveillance business. They excitedly purchased exploit packs from D2Sec and VUPEN, but they didn't find the high-quality client-side oriented exploits they were looking for. Their relationship with VUPEN continued to frustrate them for years. Towards the end of 2012, CitizenLab released their first report on Hacking Team's software being used to repress activists in the United Arab Emirates. However, a continuing stream of negative reports about the use of Hacking Team's software did not materially impact their relationships. In fact, by raising their profile these reports served to actually bring Hacking Team direct business. In 2013 Hacking Team's CEO stated that they had a problem finding sources of new exploits and urgently needed to find new vendors and develop in-house talent. That same year they made multiple new contacts, including Netragard, Vitaliy Toropov, Vulnerabilities Brokerage International, and Rosario Valotta. Though Hacking Team's internal capabilities did not significantly improve, they continued to develop fruitful new relationships. In 2014 they began a close partnership with Qavar Security.

Lots of details in the article. This was made possible by the organizational doxing of Hacking Team by some unknown individuals or group.

Worse Than Failure: We're All Admins Here


Will, his boss Rita, and Nick from HR huddled around a conference room speakerphone, listening to their new marching orders from the giant company that’d just bought out their small 100-person shop. Big changes would be avalanching down from Corporate over the next several months. For the moment, they were going over the modifications required to be compliant with their new overlords’ IT policies.

Twenty minutes into the call, nothing major had come up. Will dashed down notes, thinking this wouldn’t be so bad after all…

Then the voice on the other side intoned, “Local admin rights for all users.”

Will and Rita glanced up from their laptops with a start, sharing the same wide-eyed look of alarm.

Nick glanced between them, picking up on their consternation, but unsure what it meant. “Uh, guys? Is that doable?” he prompted.

“Hang on a sec.” Will reached out to swat the Mute button on the speakerphone. Then, he couldn’t help himself. His glimmer of amusement turned into a snort, then a giggle, then full-on loud laughter—laughter that Rita joined him in.

“What is it?” Nick asked, more confused than ever.

“Local…? Sorry. Local admin rights for everyone?” Will sat back in his chair, pressing his palms against his eyes as he recovered his breath.

“It basically means we’d be giving everyone here carte blanche to install and run and change whatever they want, whenever they want, on their computers,” Rita explained. “That doesn’t sound so bad, but in reality, it makes us vulnerable to malware, viruses, security attacks, you name it.”

“Some people do need admin rights to perform their jobs, but not everyone,” Will chimed back in. “It’s gonna open up huge cans of worms.”

“Well, shoot,” Nick said, concerned. “I don’t know if we have much wiggle room. Let’s see what we can do.” His finger hovered over the Mute button. “You’re willing to explain to them why it’s a bad idea?”

“In depth!” Will said.

“OK.” Nick un-muted the speakerphone. “We’re back now, thanks. Um, so, about the local admin thing—”

“We know you have objections,” one of the disembodied overlords replied casually.

Will, Rita, and Nick traded surprised looks.

“Most of you small fries do when you come aboard,” the voice continued. “Sorry, but that’s our policy. Non-negotiable.”

This marked the first time Will had a pronounced sinking feeling about their acquisition. It wouldn’t be the last.

“I really don’t want to do this,” he told Rita a few days later, poised to make the ordered changes.

Rita gave an apologetic shake of her head. “I appealed it as high as I could, kid. We don’t have a choice. Do me a favor: keep track of the extra tickets and problems we get as a result of this, OK? Maybe then I’ll have the metrics I need to get someone to listen.”

“It’s the metrics that matter.” With a distasteful shake of his head, Will got to work. “Can’t wait to see what comes in first.”

To their surprise, a full week of peace and quiet ensued, but this was merely the calm before the excrement-storm. Early on a Monday, emails flooded the support box.

Oh no where are my database icons?

Did you guys do something to my machine over the weekend? I’m missing a bunch of shortcuts…

Mysteriously, each user was missing the exact same set of desktop icons: 5 shortcuts leading to the databases located on the network.

His unfamiliarity with the problem, and horror at the sheer number of emails, sent Will careening to Rita’s cube. “Ever see anything like this before?”

“No,” she replied. “Does this have anything to do with enabling local admin rights?”

Will frowned. “I don’t really see how. I’m not sure what it is. I’m just gonna write up a quick batch file to re-add the shortcuts and push it out to everyone.”

So he did. The shortcuts reappeared, and worked perfectly. Everyone was happy. It was tedious, but Will made sure to log and close out a separate support ticket for each email he'd received, just in case he needed those blessed “metrics” later.

More like ammo, Will thought. Oh well, he doubted he’d ever run into this again.

Exactly one week later, the universe told him what he could do with his doubts.

“Those same icons are all missing again!” Will told Rita.

“OK, it really does seem like this has something to do with the admin change,” Rita said.


She shrugged and sighed. “Let’s find out.”

They pored through event logs, antivirus logs, GPO lists, and logon scripts. Nothing pointed to anything.

“Maybe Google is our friend?” Will proposed.

A few searches later, he had the answer: the infamous Windows 7 Computer Maintenance. If there were more than 4 broken shortcuts on the desktop, it deleted them completely. No Recycle Bin, no Unused Icons folder, just obliterated. It ran its maintenance tasks once a week on startup, after the desktop icons loaded, but before the network drives finished mapping. That meant the database links were “broken,” and were therefore deleted.

Windows Computer Maintenance required local administrator access to automatically delete icons off the desktop.

The icons could be retrieved via system restore, but Will wasn’t about to walk dozens of people of varying degrees of computer literacy through mounting a restore point and browsing to where the shortcuts lived. He ended up writing a startup script to manually recreate the shortcuts after all the other bizarre startup processes had finished doing their thing.

Again, he logged and closed support tickets for each email received. Two weeks after making everyone an admin, Rita had metrics-ammo spilling out of both pockets, but after a round of emails and conference calls, their overlords did not care.


Planet Debian: Michael Stapelberg: dh-make-golang: creating Debian packages from Go packages

Recently, the pkg-go team has been quite busy, uploading dozens of Go library packages in order to be able to package gcsfuse (a user-space file system for interacting with Google Cloud Storage) and InfluxDB (an open-source distributed time series database).

Packaging Go library packages (!) is a fairly repetitive process, so before starting my work on the dependencies for gcsfuse, I started writing a tool called dh-make-golang. Just like dh-make itself, the goal is to automatically create (almost) an entire Debian package.

As I worked my way through the dependencies of gcsfuse, I refined how the tool works, and now I believe it’s good enough for a first release.

To demonstrate how the tool works, let’s assume we want to package the Go library

midna /tmp $ dh-make-golang
2015/07/25 18:25:39 Downloading ""
2015/07/25 18:25:53 Determining upstream version number
2015/07/25 18:25:53 Package version is "0.0~git20150723.0.2ca5e0c"
2015/07/25 18:25:53 Determining dependencies
2015/07/25 18:25:55 
2015/07/25 18:25:55 Packaging successfully created in /tmp/golang-github-jacobsa-ratelimit
2015/07/25 18:25:55 
2015/07/25 18:25:55 Resolve all TODOs in itp-golang-github-jacobsa-ratelimit.txt, then email it out:
2015/07/25 18:25:55     sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
2015/07/25 18:25:55 
2015/07/25 18:25:55 Resolve all the TODOs in debian/, find them using:
2015/07/25 18:25:55     grep -r TODO debian
2015/07/25 18:25:55 
2015/07/25 18:25:55 To build the package, commit the packaging and use gbp buildpackage:
2015/07/25 18:25:55     git add debian && git commit -a -m 'Initial packaging'
2015/07/25 18:25:55     gbp buildpackage --git-pbuilder
2015/07/25 18:25:55 
2015/07/25 18:25:55 To create the packaging git repository on alioth, use:
2015/07/25 18:25:55     ssh "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
2015/07/25 18:25:55 
2015/07/25 18:25:55 Once you are happy with your packaging, push it to alioth using:
2015/07/25 18:25:55     git push git+ssh:// --tags master pristine-tar upstream

The ITP is often the most labor-intensive part of the packaging process, because any number of auto-detected values might be wrong: the repository owner might not be the “Upstream Author”, the repository might not have a short description, the long description might need some adjustments or the license might not be auto-detected.
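
One reason the auto-detected short description can be wrong is that a tool like this has to derive it heuristically, e.g. from the repository's one-line description. A hypothetical sketch of such a heuristic (this is our illustration, not dh-make-golang's actual code):

```go
package main

import (
	"fmt"
	"strings"
)

// synopsis derives a one-line package description from repository metadata.
// Hypothetical illustration of the kind of heuristic a packaging helper
// might use; debian/control synopses conventionally have no trailing period.
func synopsis(repoDescription string) string {
	s := strings.TrimSpace(repoDescription)
	if s == "" {
		return "TODO: short description"
	}
	// Keep only the first line.
	if i := strings.IndexByte(s, '\n'); i >= 0 {
		s = s[:i]
	}
	return strings.TrimSuffix(strings.TrimSpace(s), ".")
}

func main() {
	fmt.Println(synopsis("Go package for rate limiting.\nMore details follow."))
	// → Go package for rate limiting
}
```

Whatever the heuristic produces still needs a human check, which is exactly why the generated ITP template is full of TODOs.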

midna /tmp $ cat itp-golang-github-jacobsa-ratelimit.txt
From: "Michael Stapelberg" <stapelberg AT>
Subject: ITP: golang-github-jacobsa-ratelimit -- Go package for rate limiting
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

Package: wnpp
Severity: wishlist
Owner: Michael Stapelberg <stapelberg AT>

* Package name    : golang-github-jacobsa-ratelimit
  Version         : 0.0~git20150723.0.2ca5e0c-1
  Upstream Author : Aaron Jacobs
* URL             :
* License         : Apache-2.0
  Programming Lang: Go
  Description     : Go package for rate limiting

 GoDoc (
 This package contains code for dealing with rate limiting. See the
 reference ( for more info.

TODO: perhaps reasoning
midna /tmp $

After filling in all the TODOs in the file, let’s mail it out and get a sense of what else still needs to be done:

midna /tmp $ sendmail -t -f < itp-golang-github-jacobsa-ratelimit.txt
midna /tmp $ cd golang-github-jacobsa-ratelimit
midna /tmp/golang-github-jacobsa-ratelimit master $ grep -r TODO debian
debian/changelog:  * Initial release (Closes: TODO) 
midna /tmp/golang-github-jacobsa-ratelimit master $

After filling in these TODOs as well, let’s have a final look at what we’re about to build:

midna /tmp/golang-github-jacobsa-ratelimit master $ head -100 debian/**/*
==> debian/changelog <==                            
golang-github-jacobsa-ratelimit (0.0~git20150723.0.2ca5e0c-1) unstable; urgency=medium

  * Initial release (Closes: #793646)

 -- Michael Stapelberg <>  Sat, 25 Jul 2015 23:26:34 +0200

==> debian/compat <==
9

==> debian/control <==
Source: golang-github-jacobsa-ratelimit
Section: devel
Priority: extra
Maintainer: pkg-go <>
Uploaders: Michael Stapelberg <>
Build-Depends: debhelper (>= 9),
Standards-Version: 3.9.6
Vcs-Git: git://

Package: golang-github-jacobsa-ratelimit-dev
Architecture: all
Depends: ${shlibs:Depends},
Built-Using: ${misc:Built-Using}
Description: Go package for rate limiting
 This package contains code for dealing with rate limiting. See the
 reference ( for more info.

==> debian/copyright <==
Upstream-Name: ratelimit

Files: *
Copyright: 2015 Aaron Jacobs
License: Apache-2.0

Files: debian/*
Copyright: 2015 Michael Stapelberg <>
License: Apache-2.0
Comment: Debian packaging is licensed under the same terms as upstream

License: Apache-2.0
 Licensed under the Apache License, Version 2.0 (the "License");
 you may not use this file except in compliance with the License.
 You may obtain a copy of the License at
 Unless required by applicable law or agreed to in writing, software
 distributed under the License is distributed on an "AS IS" BASIS,
 WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 See the License for the specific language governing permissions and
 limitations under the License.
 On Debian systems, the complete text of the Apache version 2.0 license
 can be found in "/usr/share/common-licenses/Apache-2.0".

==> debian/gbp.conf <==
pristine-tar = True

==> debian/rules <==
#!/usr/bin/make -f

export DH_GOPKG :=

%:
	dh $@ --buildsystem=golang --with=golang

==> debian/source <==
head: error reading ‘debian/source’: Is a directory

==> debian/source/format <==
3.0 (quilt)
midna /tmp/golang-github-jacobsa-ratelimit master $

Okay, then. Let’s give it a shot and see if it builds:

midna /tmp/golang-github-jacobsa-ratelimit master $ git add debian && git commit -a -m 'Initial packaging'
[master 48f4c25] Initial packaging                                                      
 7 files changed, 75 insertions(+)
 create mode 100644 debian/changelog
 create mode 100644 debian/compat
 create mode 100644 debian/control
 create mode 100644 debian/copyright
 create mode 100644 debian/gbp.conf
 create mode 100755 debian/rules
 create mode 100644 debian/source/format
midna /tmp/golang-github-jacobsa-ratelimit master $ gbp buildpackage --git-pbuilder
midna /tmp/golang-github-jacobsa-ratelimit master $ lintian ../golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
I: golang-github-jacobsa-ratelimit source: debian-watch-file-is-missing
P: golang-github-jacobsa-ratelimit-dev: no-upstream-changelog
I: golang-github-jacobsa-ratelimit-dev: extended-description-is-probably-too-short
midna /tmp/golang-github-jacobsa-ratelimit master $

This package just built (as it should!), but occasionally one might need to disable a test and file an upstream bug about it. So, let’s push this package to pkg-go and upload it:

midna /tmp/golang-github-jacobsa-ratelimit master $ ssh "/git/pkg-go/setup-repository golang-github-jacobsa-ratelimit 'Packaging for golang-github-jacobsa-ratelimit'"
Initialized empty shared Git repository in /srv/
HEAD is now at ea6b1c5 add mrconfig for dh-make-golang
[master c5be5a1] add mrconfig for golang-github-jacobsa-ratelimit
 1 file changed, 3 insertions(+)
To /git/pkg-go/meta.git
   ea6b1c5..c5be5a1  master -> master
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh:// --tags master pristine-tar upstream
Counting objects: 31, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (25/25), done.
Writing objects: 100% (31/31), 18.38 KiB | 0 bytes/s, done.
Total 31 (delta 2), reused 0 (delta 0)
To git+ssh://
 * [new branch]      master -> master
 * [new branch]      pristine-tar -> pristine-tar
 * [new branch]      upstream -> upstream
 * [new tag]         upstream/0.0_git20150723.0.2ca5e0c -> upstream/0.0_git20150723.0.2ca5e0c
midna /tmp/golang-github-jacobsa-ratelimit master $ cd ..
midna /tmp $ debsign golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes
midna /tmp $ dput golang-github-jacobsa-ratelimit_0.0\~git20150723.0.2ca5e0c-1_amd64.changes   
Uploading golang-github-jacobsa-ratelimit using ftp to ftp-master (host:; directory: /pub/UploadQueue/)
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.dsc
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c.orig.tar.bz2
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1.debian.tar.xz
Uploading golang-github-jacobsa-ratelimit-dev_0.0~git20150723.0.2ca5e0c-1_all.deb
Uploading golang-github-jacobsa-ratelimit_0.0~git20150723.0.2ca5e0c-1_amd64.changes
midna /tmp $ cd golang-github-jacobsa-ratelimit 
midna /tmp/golang-github-jacobsa-ratelimit master $ git tag debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $ git push git+ssh:// --tags master pristine-tar upstream
Total 0 (delta 0), reused 0 (delta 0)
To git+ssh://
 * [new tag]         debian/0.0_git20150723.0.2ca5e0c-1 -> debian/0.0_git20150723.0.2ca5e0c-1
midna /tmp/golang-github-jacobsa-ratelimit master $

Thanks for reading this far, and I hope dh-make-golang makes your life a tiny bit easier. As dh-make-golang just entered Debian unstable, you can install it using apt-get install dh-make-golang. If you have any feedback, I’m eager to hear it.
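For reference, the whole bootstrap shown in this post boils down to a couple of commands (a sketch based on this post's example; the import path is the one used above):

```shell
# Install the tool, generate the initial Debian packaging for a Go
# library, then build it the same way as shown in the log above.
sudo apt-get install dh-make-golang
dh-make-golang github.com/jacobsa/ratelimit
cd golang-github-jacobsa-ratelimit
gbp buildpackage --git-pbuilder
```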

Planet Linux AustraliaMichael Still: Views from a lookout on Mulholland Drive, Bel Air

Planet DebianDirk Eddelbuettel: Evading the "Hadley tax": Faster Travis tests for R

Hadley is a popular figure, and rightly so as he successfully introduced many newcomers to the wonders offered by R. His approach strikes some of us old greybeards as wrong---I particularly take exception with some of his writing which frequently portrays a particular approach as both the best and only one. Real programming, I think, is often a little more nuanced and aware of tradeoffs which need to be balanced. As a book on another language once popularized: "There is more than one way to do it." But let us leave this discussion for another time.

As the reach of the Hadleyverse keeps spreading, we sometimes find ourselves at the receiving end of a cost/benefit tradeoff. That is what this post is about, and it uses a very concrete case I encountered yesterday.

As blogged earlier, the RcppZiggurat package was updated. I had not touched it in a year, but Brian Ripley had sent a brief and detailed note concerning something flagged by the Solaris compiler (correctly suggesting I replace fabs() with abs() on integer types). (Allow me to stray from the main story line here for a second to stress just how insane a work load he is carrying, essentially for all of us. R and the R community are just so indebted to him for all his work---which makes the usual social media banter about him so unfortunate. But that too shall be left for another time.) Upon making the simple fix and submitting to GitHub, the usual Travis CI build was triggered. And here is what I saw:

first travis build in a year
All happy, all green. Previous build a year ago, most recent build yesterday, both passed. But hold on: test time went from 2:54 minutes to 7:47 minutes for an increase of almost five minutes! And I knew that I had not added any new dependencies, or altered any build options. What did happen was that among the dependencies of my package, one had decided to now also depend on ggplot2. Which leads to a chain of sixteen additional packages being loaded besides the four I depend upon---when it used to be just one. And that took five minutes as all those packages are installed from source, and some are big and take a long time to compile.
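Such dependency chains are easy to inspect from the shell (a hedged sketch; it assumes R is installed and queries the locally installed package database):

```shell
# Count the recursive dependency closure that ggplot2 drags in.
Rscript -e 'deps <- tools::package_dependencies("ggplot2", db = installed.packages(), recursive = TRUE); print(length(deps[["ggplot2"]]))'
```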

There is however an easy alternative, and for that we have to praise Michael Rutter who looks after a number of things for R on Ubuntu. Among these are the R builds for Ubuntu but also the rrutter PPA as well as the c2d4u PPA. If you have not heard this alphabet soup before, a PPA is a package repository for Ubuntu where anyone (who wants to sign up) can upload (properly setup) source files which are then turned into Ubuntu binaries. With full dependency resolution and all other goodies we have come to expect from the Debian / Ubuntu universe. And Michael uses this facility with great skill and calm to provide us all with Ubuntu binaries for R itself (rebuilding what yours truly uploads into Debian), as well as a number of key packages available via the CRAN mirrors. Less known however is this "c2d4u" which stands for CRAN to Debian for Ubuntu. And this builds on something Charles Blundell once built under my mentorship in a Google Summer of Code. And Michael does a tremendous job covering well over a thousand CRAN source packages---and providing binaries for all. Which we can use for Travis!

What all that means is that I could now replace the line

 - ./ install_r RcppGSL rbenchmark microbenchmark highlight

which implies source builds of the four listed packages and all their dependencies, with the following line implying binary installations of already-built packages:

 - ./ install_aptget libgsl0-dev r-cran-rcppgsl r-cran-rbenchmark r-cran-microbenchmark r-cran-highlight

In this particular case I also needed to build a binary package of my RcppGSL package as this one is not (yet) handled by Michael. I happen to have (re-)discovered the beauty of PPAs for Travis earlier this year and revitalized an older and largely dormant launchpad account I had for this PPA of mine. How to build a simple .deb package will also have to be left for a future post to keep this more concise.

This can be used with the existing r-travis setup---but one needs to use the older, initial variant in order to have the ability to install .deb packages. So in the .travis.yml of RcppZiggurat I just use

## PPA for Rcpp and some other packages
- sudo add-apt-repository -y ppa:edd/misc
## r-travis by Craig Citro et al
- curl -OL
- chmod 755 ./
- ./ bootstrap

to add my own PPA and all is good. If you do not have a PPA, or do not want to create your own packages you can still benefit from the PPAs by Michael and "mix and match" by installing from binary what is available, and from source what is not.
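Concretely, a mixed setup in the .travis.yml could look like this (a sketch; the source-built package name is hypothetical):

```shell
## binaries from the PPAs where available...
- ./run.sh install_aptget r-cran-rcpp r-cran-rbenchmark r-cran-microbenchmark
## ...and source builds for whatever is missing (hypothetical package name)
- ./run.sh install_r somePkgNotInAnyPPA
```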

Here we were able to use an all-binary approach, so let's see the resulting performance:

latest travis build
Now we are at 1:03 to 1:15 minutes---much better.

So to conclude, while the ever-expanding universe of R packages is fantastic for us as users, it can be seen to be placing a burden on us as developers when installing and testing. Fortunately, the packaging infrastructure built on top of Debian / Ubuntu packages can help and dramatically reduce build (and hence test) times. Learning about PPAs can be a helpful complement to learning about Travis and continuous integration. So maybe now I need a new reason to blame Hadley? Well, there is always snake case ...

Follow-up: The post got some pretty immediate feedback shortly after I posted it. Craig Citro pointed out (quite correctly) that I could use r_binary_install which would also install the Ubuntu binaries based on their R package names. Having built R/CRAN packages for Debian for so long, I am simply more used to the r-cran-* notations, and I think I was also the one contributing install_aptget to r-travis ... Yihui Xie spoke up for the "new" Travis approach deploying containers, caching of packages and explicit whitelists. It was in that very (GH-based) discussion that I started to really lose faith in the new Travis approach as they want us to whitelist each and every package. With 6900 and counting at CRAN I fear this simply does not scale. But different approaches are certainly welcome. I posted my 1:03 to 1:15 minutes result. If the "New School" can do it faster, I'd be all ears.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Linux AustraliaMichael Still: Geocaching with TheDevilDuck

In what amounts to possibly the longest LAX layover ever, I've been hanging out with Chet at his place in Altadena for a few days on the way home after the Nova mid-cycle meetup. We decided that, being the dorks that we are, we should do some geocaching. This is just some quick pics of some unexpected bush land -- I never thought LA would be so close to nature, but this part certainly is.


Interactive map for this route.

Tags for this post: blog pictures 20150727 photo california bushwalk
Related posts: A walk in the San Mateo historic red woods; First jog, and a walk to Los Altos; Goodwin trig; Did I mention it's hot here?; Big Monks; Summing up Santa Monica


Planet DebianGregor Herrmann: RC bugs 2015/30

this week, besides other activities, I again managed to NMU a few packages as part of the GCC 5 transition. & again I could build on patches submitted by various HP engineers & other helpful souls.

  • #757525 – hardinfo: "hardinfo: FTBFS with clang instead of gcc"
    patch to build with -std=gnu89, upload to DELAYED/5
  • #758723 – nagios-plugins-rabbitmq: "should depend on libjson-perl"
    add missing dependency, upload to DELAYED/5
  • #777766 – " ftbfs with GCC-5"
    send updated patch to BTS
  • #777837 – src:ebview: "ebview: ftbfs with GCC-5"
    add patch from, upload to DELAYED/5
  • #777882 – src:gnokii: "gnokii: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #777907 – src:hunt: "hunt: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #777920 – src:isdnutils: "isdnutils: ftbfs with GCC-5"
    add patch to build with -fgnu89-inline; upload to DELAYED/5
  • #778019 – src:multimon: "multimon: ftbfs with GCC-5"
    build with -fgnu89-inline; upload to DELAYED/5
  • #778068 – src:pork: "pork: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778098 – src:quarry: "quarry: ftbfs with GCC-5"
    build with -std=gnu89, upload to DELAYED/5, then rescheduled to 0-day with maintainer's permission
  • #778099 – src:ratbox-services: "ratbox-services: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5, later cancelled because package is about to be removed (#793408)
  • #778109 – src:s51dude: "s51dude: ftbfs with GCC-5"
    build with -fgnu89-inline, upload to DELAYED/5
  • #778116 – src:shell-fm: "shell-fm: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778119 – src:simulavr: "simulavr: ftbfs with GCC-5"
    apply patch from Brett Johnson, QA upload
  • #778120 – src:sipsak: "sipsak: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778122 – src:skyeye: "skyeye: ftbfs with GCC-5"
    build with -fgnu89-inline, QA upload
  • #778140 – src:tcpcopy: "tcpcopy: ftbfs with GCC-5"
    add patch backported from upstream git, upload to DELAYED/5
  • #778145 – src:thewidgetfactory: "thewidgetfactory: ftbfs with GCC-5"
    add missing #include, upload to DELAYED/5
  • #778164 – src:vtun: "vtun: ftbfs with GCC-5"
    add patch from Tim Potter, upload to DELAYED/5
  • #790464 – flow-tools: "Please drop conditional build-depend on libmysqlclient15-dev"
    drop obsolete dependency, NMU
  • #793336 – src:libdevel-profile-perl: "libdevel-profile-perl: FTBFS with perl 5.22 in experimental (MakeMaker changes)"
    finish and upload package modernized by XTaran (pkg-perl)
  • #793580 – libb-hooks-parser-perl: "libb-hooks-parser-perl: B::Hooks::Parser::Install::Files missing"
    investigate and forward upstream, upload new upstream release later (pkg-perl)

Planet DebianLunar: Reproducible builds: week 13 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

  • Emmanuel Bourg uploaded maven-archiver/2.6-3 which fixed parsing DEB_CHANGELOG_DATETIME with non English locales.
  • Emmanuel Bourg uploaded maven-repo-helper/1.8.12 which always uses the same system-independent encoding when transforming the pom files.
  • Piotr Ożarowski uploaded dh-python/2.20150719 which makes the order of the generated maintainer scripts deterministic. Original patch by Chris Lamb.

akira uploaded a new version of doxygen in the experimental “reproducible” repository incorporating an upstream patch for SOURCE_DATE_EPOCH, and now producing timezone-independent timestamps.

Dhole updated Peter De Wachter's patch on ghostscript to use SOURCE_DATE_EPOCH and use UTC as a timezone. A modified package is now being experimented with.
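For tools that embed timestamps, honoring SOURCE_DATE_EPOCH typically looks like the following (a minimal shell sketch with a hypothetical helper name, assuming GNU date):

```shell
# Use SOURCE_DATE_EPOCH as the embedded build timestamp when it is set
# (interpreted as seconds since the epoch, rendered in UTC), falling
# back to the current time otherwise.
build_timestamp() {
  if [ -n "${SOURCE_DATE_EPOCH:-}" ]; then
    date -u -d "@${SOURCE_DATE_EPOCH}" '+%Y-%m-%d %H:%M:%S'
  else
    date -u '+%Y-%m-%d %H:%M:%S'
  fi
}

SOURCE_DATE_EPOCH=1437868800
build_timestamp   # → 2015-07-26 00:00:00
```

Two builds of the same source then emit the same timestamp regardless of when they run.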

Packages fixed

The following 14 packages became reproducible due to changes in their build dependencies: bino, cfengine2, fwknop, gnome-software, jnr-constants, libextractor, libgtop2, maven-compiler-plugin, mk-configure, nanoc, octave-splines, octave-symbolic, riece, vdr-plugin-infosatepg.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

  • #792943 on argus-client by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792945 on authbind by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792947 on cvs-mailcommit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792949 on chimera2 by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792950 on ccze by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792951 on dbview by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792952 on dhcpdump by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792953 on dhcping by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792955 on dput by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792958 on dtaus by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792959 on elida by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792961 on enemies-of-carlotta by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792963 on erc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792965 on fastforward by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792967 on fgetty by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792969 on flowscan by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792971 on junior-doc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792972 on libjama by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792973 on liblip by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792974 on liblockfile by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792975 on libmsv by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792976 on logapp by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792977 on luakit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792978 on nec by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792979 on runit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792980 on tworld by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792981 on wmweather by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792982 on ftpcopy by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792983 on gerstensaft by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792984 on integrit by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792985 on ipsvd by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792986 on uruk by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792987 on jargon by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792988 on xbs by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792989 on freecdb by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792990 on skalibs by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792991 on gpsmanshp by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792993 on cgoban by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792994 on angband-doc by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792995 on abook by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792996 on bcron by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792998 on chiark-utils by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #792999 on console-cyrillic by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793000 on beav by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793001 on blosxom by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793002 on cgilib by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793003 on daemontools by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793004 on debdelta by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793005 on checkpw by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793006 on dropbear by akira: set the mtimes of all files which are modified during builds to the latest debian/changelog entry.
  • #793126 on torbutton by Dhole: set TZ=UTC when calling zip.
  • #793127 on pdf.js by Dhole: set TZ=UTC when calling zip.
  • #793300 on deejayd by Dhole: set TZ=UTC when calling zip.

Packages identified as failing to build from source with no bugs filed and older than 10 days are scheduled more often now (except in experimental). (h01ger)

Package reviews

178 obsolete reviews have been removed, 59 added and 122 updated this week.

New issue identified this week: random_order_in_ruby_rdoc_indices.

18 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and h01ger.

Planet DebianLunar: Reproducible builds: week 12 in Stretch cycle

What happened in the reproducible builds effort this week:

Toolchain fixes

Eric Dorlan uploaded automake-1.15/1:1.15-2 which makes the output of mdate-sh deterministic. Original patch by Reiner Herrmann.

Kenneth J. Pronovici uploaded epydoc/3.0.1+dfsg-8 which now honors SOURCE_DATE_EPOCH. Original patch by Reiner Herrmann.

Chris Lamb submitted a patch to dh-python to make the order of the generated maintainer scripts deterministic. Chris also offered a fix for a source of non-determinism in dpkg-shlibdeps when packages have alternative dependencies.

Dhole provided a patch to add support for SOURCE_DATE_EPOCH to gettext.

Packages fixed

The following 78 packages became reproducible in our setup due to changes in their build dependencies: chemical-mime-data, clojure-contrib, cobertura-maven-plugin, cpm, davical, debian-security-support, dfc, diction, dvdwizard, galternatives, gentlyweb-utils, gifticlib, gmtkbabel, gnuplot-mode, gplanarity, gpodder, gtg-trace, gyoto, highlight.js, htp, ibus-table, impressive, jags, jansi-native, jnr-constants, jthread, jwm, khronos-api, latex-coffee-stains, latex-make, latex2rtf, latexdiff, libcrcutil, libdc0, libdc1394-22, libidn2-0, libint, libjava-jdbc-clojure, libkryo-java, libphone-ui-shr, libpicocontainer-java, libraw1394, librostlab-blast, librostlab, libshevek, libstxxl, libtools-logging-clojure, libtools-macro-clojure, litl, londonlaw, ltsp, macsyfinder, mapnik, maven-compiler-plugin, mc, microdc2, miniupnpd, monajat, navit, pdmenu, pirl, plm, scikit-learn, snp-sites, sra-sdk, sunpinyin, tilda, vdr-plugin-dvd, vdr-plugin-epgsearch, vdr-plugin-remote, vdr-plugin-spider, vdr-plugin-streamdev, vdr-plugin-sudoku, vdr-plugin-xineliboutput, veromix, voxbo, xaos, xbae.

The following packages became reproducible after getting fixed:

Some uploads fixed some reproducibility issues but not all of them:

Patches submitted which have not made their way to the archive yet:

The statistics on the main page are now updated every five minutes. A random unreviewed package is suggested in the “look at a package” form on every build. (h01ger)

A new package set based on the Core Internet Infrastructure census has been added. (h01ger)

Testing of FreeBSD has started, though no results yet. More details have been posted to the freebsd-hackers mailing list. The build is run on a new virtual machine running FreeBSD 10.1 with 3 cores and 6 GB of RAM, also sponsored by Profitbricks.

strip-nondeterminism development

Andrew Ayer released version 0.009 of strip-nondeterminism. The new version will strip locales from Javadoc, include the name of files causing errors, and ignore unhandled (but rare) zip64 archives.
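Usage stays the same as before (a hedged example; the jar name is made up):

```shell
# Normalize known sources of non-determinism (timestamps, and now also
# Javadoc locales) in a freshly built archive, modifying it in place.
strip-nondeterminism --type jar libfoo.jar
```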

debbindiff development

Lunar continued his major refactoring to enhance code reuse and pave the way to fuzzy-matching and parallel processing. Most file comparators have now been converted to the new class hierarchy.

In order to support more archive formats, work has started on packaging Python bindings for libarchive. While getting support for more archive formats with a common interface is very nice, libarchive is a stream oriented library and might have bad performance with how debbindiff currently works. Time will tell if better solutions need to be found.

Documentation update

Lunar started a Reproducible builds HOWTO intended to explain the different aspects of making software build reproducibly to the different audiences that might have to get involved like software authors, producers of binary packages, and distributors.

Package reviews

17 obsolete reviews have been removed, 212 added and 46 updated this week.

15 new bugs for packages failing to build from sources have been reported by Chris West (Faux), and Mattia Rizzolo.


Lunar presented Debian efforts and some recipes on making software build reproducibly at Libre Software Meeting 2015. Slides and a video recording are available.


h01ger, dkg, and Lunar attended a Core Infrastructure Initiative meeting. The progress and tools made for the Debian efforts were shown. Several discussions also helped getting a better understanding of the needs of other free software projects regarding reproducible builds. The idea of a global append-only log, similar to the logs used for Certificate Transparency, came up on multiple occasions. Using such append-only logs for keeping records of sources and build results has gotten the name “Binary Transparency Logs”. They would at least help identifying a compromised software signing key. Whether the benefits in using such logs justify the costs needs more research.

Sociological ImagesBody Shapes, a Handy Guide

By Gemma Correll. Visit her tumblr or buy stuff here.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet Linux AustraliaSridhar Dhanapalan: Twitter posts: 2015-07-20 to 2015-07-26

Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.3: Faster Random Normal Draws


After a slight hiatus since the last release in early 2014, we are delighted to announce a new release of RcppZiggurat which is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution.

The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

This release contains a few internal cleanups relative to the last release. It was triggered by a very helpful email from Brian Ripley who noticed compiler warnings on the Solaris platform due to my incorrect use of fabs() on integer variables.

The NEWS file entry below lists all changes.

Changes in version 0.1.3 (2015-07-25)

  • Use the SHR3 generator for the default implementation just like Leong et al do, making our default implementation identical to theirs (but 32- and 64-bit compatible)

  • Switched generators from float to double ensuring that results are identical on 32- and 64-bit platforms

  • Simplified builds with respect to GSL use via the RcppGSL package; added a seed setter for the GSL variant

  • Corrected use of fabs() to abs() on integer variables, with a grateful nod to Brian Ripley for the hint (based on CRAN checks on the beloved Slowlaris machines)

  • Accelerated Travis CI tests by relying exclusively on r-cran-* packages from the PPAs by Michael Rutter and myself

  • Updated DESCRIPTION and NAMESPACE according to current best practices, and R-devel CMD check --as-cran checks

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianSteinar H. Gunderson: DYI web video streaming

I've recently taken a new(ish) look at streaming video for the web, in terms of what formats are out there. (When I say streaming, I mean live video, not static files where you can seek etc.) There's a bewildering array; most people would probably use a ready-made service such as Twitch, Ustream or YouTube, but they do have certain aspects that are less than ideal; for instance, you might need to pay (or have your viewers endure ads), you might be shut down at any time if they don't like your content (e.g. sending non-gaming content on Twitch, or using copyrighted music on YouTube), or the video quality might be less than ideal.

So what I'm going to talk about is mainly what format to choose; there are solutions that allow you to stream in many formats at once, but a) the CPU amount you need is largely proportional to the number of different codecs you want to encode to, and b) I've never really seen any of these actually work well in practice; witness the Mistserver fiasco at FOSDEM last year, for instance (full disclosure: I was involved in the 2014 FOSDEM streaming, but not in 2015). So the goal is to find the minimum number of formats to maximize quality and client support.

So, let's have a look at the candidates:

We'll start in a corner with HLS. The reason is that mobile is becoming increasingly important, and Mobile Safari (iOS) basically only supports HLS, so if you want iOS support, this has to be high on your list. HLS is basically H.264+AAC in a MPEG-TS mux, split over many files (segments), with a .m3u8 file that is refreshed to inform about new segments. This can be served over whatever that serves HTTP (including your favorite CDN), and if your encoder is up to it, you can get adaptive bandwidth control (which works so-so, but better than nothing), but unfortunately it also has high latency, and MPEG-TS is a pretty high-overhead mux (6–7%, IIRC).
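To make the format concrete: a typical HLS origin is just a segmenter writing MPEG-TS chunks plus a rolling .m3u8 playlist into a directory served by any web server. A hedged ffmpeg sketch (input URL and output path are made up; older ffmpeg builds may additionally need -strict experimental for the native AAC encoder):

```shell
# Encode the live feed to H.264+AAC and emit 4-second MPEG-TS segments
# plus a playlist that keeps the last 6 entries, into a directory that
# is served over plain HTTP.
ffmpeg -i rtmp://localhost/live/input \
  -c:v libx264 -preset veryfast -c:a aac \
  -f hls -hls_time 4 -hls_list_size 6 \
  /var/www/html/stream/live.m3u8
```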

Unfortunately, basically nothing but Safari (iOS/OS X) supports HLS. (OK, that's not true; the Android browser does from Android 4.0, but supposedly 4.0 is really buggy and you really want something newer.) So unless you're in la-la land where nothing but Apple counts, you'll not only need HLS, but also something else. (Well, there's a library that claims to make Chrome/Firefox support HLS, but it's basically a bunch of JavaScript that remuxes each segment from MPEG-TS to MP4 on the fly, and hangs the entire streaming process while doing so.) Thankfully FFmpeg can remux from some other format into HLS, so it's not that painful.

MPEG-DASH is supposedly the new hotness, but like anything container-wise from MPEG, it's huge, tries to do way too many things and is generally poorly supported. Basically it's HLS (with the same delay problems) except that you can support a bazillion different codecs and multiple containers, and actual support out there is poor. The only real way to get it into a browser (assuming you can find anything stable that encodes an MPEG-DASH stream) is to load a 285kB JavaScript library into your browser, which tries to do all the metadata parsing in JavaScript, download the pieces with XHR and then piece them into the <video> tag with the Media Source Extensions API. And to pile on the problems, you can't really take an MPEG-DASH stream and feed it into something that's not a web browser, e.g. current versions of MPlayer/VLC/XBMC. (This matters if you have e.g. a separate HTPC that's remote-controlled. Admittedly, it might be a small segment depending on your audience.) Perhaps it will get better over time, but for the time being, I cannot really recommend it unless you're a huge corporation and have the resources to essentially make your own video player in JavaScript (YouTube or Twitch can, but the rest of us really can't).

Of course, a tried-and-tested solution is Flash, with its FLV and RTMP offerings. RTMP (in this context) is basically FLV over a different transport from HTTP, and I've found it to be basically pain from one end to the other; the solutions you get are either expensive (Adobe's stuff, Wowza), scale poorly (Wowza), or are buggy and non-interoperable in strange ways (nginx-rtmp). But H.264+AAC in FLV over HTTP (e.g. with VLC plus my own Cubemap reflector) works well against e.g. JW Player, and has good support… on desktop. (There's one snag, though, in that if you stream from HTTP, JW Player will believe that you're streaming a static file, and basically force you to zero client-side buffer. Thus, it ends up being continuously low on buffer, and you need some server-side trickery to give it some more leeway against network bumps and not show its dreaded buffering spinner.) With mobile becoming more important, and people increasingly calling for the death of Flash, I don't think this is the solution for tomorrow, although it might be okay for today.

Then there's WebM (in practice VP8+Vorbis in a Matroska mux; VP9 is too slow for good quality in realtime yet, AFAIK). If worries about format patents are high on your list, this is probably a good choice. Also, you can stick it straight into <video> (e.g. with VLC plus my own Cubemap reflector), and modulo some buffering issues, you can go without Flash. Unfortunately, VP8 trails pretty far behind H.264 on picture quality, libvpx has strange bugs and my experience is that bitrate control is rather lacking, which can lead to your streams getting subtle, hard-to-debug issues with getting through to the actual user. Furthermore, support is lackluster; no support for IE, no support for iOS, no hardware acceleration on most (all?) phones so you burn your battery.

Finally there's MP4, which is formally MPEG-4 Part 14, which in turn is based on MPEG-4 Part 12. Or something. In any case, it's the QuickTime mux given a blessing as official, and it's a relatively common format for holding H.264+AAC. MP4 is one of those formats that support a zillion different ways of doing everything; the classic case is when someone's made a file in QuickTime and it has the “moov” box at the end, so you can't play any of your 2 GB file until you have the very last bytes, too. But after I filed a VLC bug and Martin Storsjö picked it up, the ffmpeg mux has gotten a bunch of fixes to produce MP4 files that are properly streamable.

And browsers have improved as well; recent versions of Chrome (both desktop and Android) stream MP4 pretty well, IE11 reportedly does well (although I've had reports of regressions, where the user has to switch tabs once before the display actually starts updating), Firefox on Windows plays these fine now, and I've reported a bug against GStreamer to get these working on Firefox on Linux (unfortunately it will be a long time until this works out of the box for most people).

So that's my preferred solution right now; you need a pretty recent ffmpeg for this to work, and if you want to use MP4 in Cubemap, you need this VLC bugfix (unfortunately not in 2.2.0, which is the version in Debian stable), but combined with HLS as an iOS fallback, it will give you great quality on all platforms, good browser coverage, reasonably low latency (for non-HLS clients) and good playability in non-web clients. It won't give you adaptive bitrate selection, though, and you can't hand it to your favorite CDN because they'll probably only want to serve static files (and I don't think there's a market for a Cubemap CDN :-) ). The magic VLC incantation is:

--sout '#transcode{vcodec=h264,vb=3000,acodec=mp4a,ab=256,channels=2,fps=50}:std{access=http{mime=video/mp4},mux=ffmpeg{mux=mp4},dst=:9094}' --sout-avformat-options '{movflags=empty_moov+frag_keyframe+default_base_moof}'

Planet Debian Norbert Preining: Challenging riddle from The Talos Principle

Having recently complained that Portal 2 was too easy, I have to say that The Talos Principle is challenging. For a solution that, once known, takes only a few seconds, I often have to rack my brain over the logistics for a long, long time. Here is a nice screenshot from one of the easier riddles, but with great effect.


A great game, very challenging. A more lengthy review will come when I have finished the game.


Planet Linux Australia Simon Lyall: OSCON 2015


Planet Debian Dirk Eddelbuettel: Rcpp 0.12.0: Now with more Big Data!

big-data image

A new release 0.12.0 of Rcpp arrived on the CRAN network for GNU R this morning, and I also pushed a Debian package upload.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 423 packages on CRAN depend on Rcpp for making analyses go faster and further. Note that this is 60 more packages since the last release in May! Also, BioConductor adds another 57 packages, and casual searches on GitHub suggest many more.

And according to Andrie De Vries, Rcpp now has a page rank of one on CRAN as well!

And with this release, Rcpp also becomes ready for Big Data, or, as they call it in Texas, Data.

Thanks to a lot of work and several pull requests by Qiang Kou, support for R_xlen_t has been added.

That means we can now do stunts like

R> library(Rcpp)
R> big <- 2^31-1
R> bigM <- rep(NA, big)
R> bigM2 <- c(bigM, bigM)
R> cppFunction("double getSz(LogicalVector x) { return x.length(); }")
R> getSz(bigM)
[1] 2147483647
R> getSz(bigM2)
[1] 4294967294

where prior versions of Rcpp would just have said

> getSz(bigM2)
Error in getSz(bigM2) :
  long vectors not supported yet: ../../src/include/Rinlinedfuns.h:137

which is clearly not Texas-style. Another welcome change, also thanks to Qiang Kou, adds encoding support for strings.

A lot of other things got polished. We are still improving exception handling, as we still get the odd curveball in corner cases. Matt Dziubinski corrected the var() computation to use the proper two-pass method and added better support for lambda functions in Sugar expressions using sapply(), Qiang Kou added more pull requests, mostly for string initialization, Romain added a pull request which made data frame creation a little more robust, and JJ was his usual self in tirelessly looking after all aspects of Rcpp Attributes.

As always, you can follow the development via the GitHub repo and particularly the Issue tickets and Pull Requests. And any discussions, questions, ... regarding Rcpp are always welcome at the rcpp-devel mailing list.

Last but not least, we are also extremely pleased to announce that Qiang Kou has joined us in the Rcpp-Core team. We are looking forward to a lot more awesome!

See below for a detailed list of changes extracted from the NEWS file.

Changes in Rcpp version 0.12.0 (2015-07-24)

  • Changes in Rcpp API:

    • Rcpp_eval() no longer uses R_ToplevelExec when evaluating R expressions; this should resolve errors where calling handlers (e.g. through suppressMessages()) were not properly respected.

    • All internal length variables have been changed from R_len_t to R_xlen_t to support vectors longer than 2^31-1 elements (via pull request 303 by Qiang Kou).

    • The sugar function sapply now supports lambda functions (addressing issue 213 thanks to Matt Dziubinski)

    • The var sugar function now uses a more robust two-pass method, supports complex numbers, with new unit tests added (via pull request 320 by Matt Dziubinski)

    • String constructors now allow encodings (via pull request 310 by Qiang Kou)

    • String objects are preserving the underlying SEXP objects better, and are more careful about initializations (via pull requests 322 and 329 by Qiang Kou)

    • DataFrame constructors are now a little more careful (via pull request 301 by Romain Francois)

    • For R 3.2.0 or newer, Rf_installChar() is used instead of Rf_install(CHAR()) (via pull request 332).

  • Changes in Rcpp Attributes:

    • Use more robust method of ensuring unique paths for generated shared libraries.

    • The evalCpp function now also supports the plugins argument.

    • Correctly handle signature termination characters ('{' or ';') contained in quotes.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was once again updated with respect to OS X issues and Fortran libraries needed for e.g. RcppArmadillo.

    • The included Rcpp.bib bibtex file (which is also used by other Rcpp* packages) was updated with respect to its CRAN references.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian Steinar H. Gunderson: Stream audio level monitoring with ebumeter

When monitoring stream sound levels, VLC seemingly isn't quite there; at least the VU meter on mine shows unusably low levels (and I think it might even stick a compressor in there, completely negating the point). So I wanted to write my own, but while searching for the right libraries, I found ebumeter.

So I spent the same amount of time getting it to run in the first place; it uses JACK, which I've never ever had working before. But I guess there's a first time for everything? I wrote up a quick guide for others that are completely unfamiliar with it:

First, install the JACK daemon and qjackctl (Debian packages jackd2 and qjackctl), in addition to ebumeter itself. I've been using mplayer to play the streams, but you can use whatever with JACK output.

Then, start JACK:

jack_control start

and start ebumeter plus give the stream some input:

ebumeter &
mplayer -ao jack http://whatever…

You'll notice that ebumeter isn't showing anything yet, because the default routing for MPlayer is to go to the system output. Open qjackctl and go to the Connect dialog. You should see the running MPlayer and ebumeter, and you should see that MPlayer is connected to “system” (not ebumeter as we'd like).

So disconnect all (ignore the warning). Then expand the MPlayer and ebumeter clients, select out_0, then in.L and choose Connect. Do the same with the other channel, and tada! There should be a meter showing EBU R128 levels, including peak (unfortunately it doesn't seem to show number of clipped samples, but I can live with that).

Unfortunately the connections are not persistent. To make them persistent, you need to go to Patchbay, create a new patchbay, accept when it asks if you want to start from the current connections, then save, and finally activate. As long as the qjackctl dialog is open (?), new MPlayer JACK sessions will now be autoconnected to ebumeter, no matter what the pid is. If you want to distinguish between different MPlayers, you can always give them a different name as an argument to the -ao jack parameter.
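If you'd rather skip the qjackctl GUI, the same wiring can probably be done from the command line with the tools shipped alongside the JACK daemon. (The client and port names below are assumptions; run jack_lsp first and use whatever names it actually reports on your system.)

```shell
# List all JACK ports to discover the real client:port names
jack_lsp

# Connect both MPlayer output channels to ebumeter's inputs
# (names are assumptions; substitute what jack_lsp showed)
jack_connect MPlayer:out_0 ebumeter:in.L
jack_connect MPlayer:out_1 ebumeter:in.R
```

These connections are just as non-persistent as the GUI-made ones, but the two jack_connect lines are easy to drop into a wrapper script that starts MPlayer.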

TED 40 travel tips from hard-traveling TED staffers

TED staffers travel a lot — to TEDx events all over the globe, to TED conferences in Canada, Brazil, the UK… so we love to trade travel tips for making the most of work and fun trips. Here are 40 of our best — including some oddball but practical ideas for vacation travel (to use up those miles).

For planning your trip…

Use an Incognito Window to book flights. “You know how you check the price of a flight, then go back a day later — and the price has gone up? That seems to happen less often if you use the Incognito function in Chrome. I also love Kayak, because it gives you advice on whether to book your ticket or wait for a better price.” — Kate Torgovnick May, writer

Try “You put in a destination, and get emails with updates on fares so you know when there are deals. Geneva is my spot, and I get excited when I see a round-trip flight for $650 instead of $1,200.” — Hailey Reissman, TEDx blogger

Consider a stop in Iceland. “I want to try Icelandair, because they let you do a layover in Iceland for up to seven days for free. Reykjavík sounds cool.” — Olivia Cucinotta, intern

Do your research. “I always Google Map the place I’m going, to see the streets. Then I switch to Earth view, so I can look at the topography, understand nearby towns and see if there are any blue spots that might be secluded beaches. I also see if National Geographic, Outside or The New York Times Travel has written anything and jot down all the stuff I want to do. If it’s a city, I also see if Anthony Bourdain has eaten anything cool there.” — Thaniya Keereepart, product lead

Crowdsource suggestions. “Sitting down for two hours to figure out what you want to do really can improve your trip 30 to 40%. If you’re too busy, post on Facebook asking about things to do and places to eat. Everyone loves to recommend their favorites.” — Tom Rielly, community director

Follow local Instagrammers. “Check out what they photograph. The locations are often geotagged, and can give you really interesting ideas you wouldn’t find in a guidebook.” — Chelsea Catlett, intern

There are good and bad hotel rooms. “Research your hotel on TripAdvisor to see which rooms people prefer, and request that room. Don’t speak to a reservations office that might be in a call center — always ask for the front desk. My trick: if the operator asks if I’m calling about a reservation, I say, ‘No, I have a question for the front desk.’” — Tom Rielly

New town? Plan for one night in a known spot, then keep your options open. “I book my first night’s stay ahead of time, but keep things open from there. When I arrive, I walk all over and find a local spot to move to. I did this when I went to Jericoacoara on the coast of Brazil and found the loveliest guest house run by this not-tech-savvy-at-all bohemian Italian lady. Score. This works best when you’re traveling off-season.” — Thaniya Keereepart

Get Global Entry clearance if you travel a lot. “I found out about the Global Entry program last year, and knew I needed to sign up. It’s like TSA PreCheck for international travelers. Instead of standing in line for a customs officer after an international flight, you use an ATM-like machine to process re-entry into the United States. It takes your photo, scans your fingerprints and prints a receipt in minutes — and you don’t have to fill out those little declaration cards on the plane. The program has a lengthy application process, but membership lasts five years. It works for international flights and grants TSA PreCheck status for domestic flights. It’s only $15 more than TSA PreCheck.” — Isaac Wayton, video editor

Organize your itinerary. “TripIt is a must. You send all your confirmation emails in, and it creates one well-organized trip itinerary, and alerts you if flights are delayed or gates are changed.” — Gavin Hall, CTO

Research local transportation before you go. “You can never be sure if you’ll have the Internet on your trip, so research this ahead of time. I like Lonely Planet’s Thorn Tree forum for transit tips, but you can also just type in ‘how to get from X to Y’ in Google. Sometimes you’ll end up with a bus or train schedule, or sometimes you’ll land on some blog post that tells you to grab the van at the corner of the market and wait until it’s full. Either way, write down the names of the companies that operate the transportation — along with a few sentences in the native language on how to ask for directions.” — Thaniya Keereepart

Conspire with your travel companions. “If you are traveling with friends, especially for the first time, have a conversation in advance about how you like to do things. For example, I hate being rushed at art museums, but other people get really bored, so we make a plan to meet later. Ask how long people like to stay at the beach, what their budget preferences are for meals. And make a pact that if one of you is frustrated, you talk about it right away. It can save so much drama.” — Tom Rielly

Hype yourself up. “Watch a movie or read a book based in your destination. Preferably fiction. It makes the place feel romantic.” — Anyssa Samari, video team


For getting packed…

Make a reusable packing list. “I made a standard packing list for myself — one for week-long trips and one for three-day trips. It lists all the articles of clothing, toiletries and random things I like to have with me. I cross things off as I put them in my suitcase. It makes packing a 15-minute process.” — Kate Torgovnick May

Or try a universal one. “The Universal Packing List is great: You put in your travel dates, climate information and the activities you’re doing and it spits out a list. It’s often way too long, but it will remind you to take at least two things you didn’t think of.” — Tom Rielly

Color-coordinate your clothing. “To save space in your suitcase, pack a neutral color palette. You’ll need to have Jedi-like determination to skip the colorful pieces, but if you layer, you can get maximum outfit combos.” — Janet Lee, distribution team

Roll your clothes. “It’s a flight attendant trick. And pack wear-it-last-time clothes you can discard during the trip to have more space for souvenirs.” — Lisa Bu, distribution team

Consult the weather. “I’ve been to Los Angeles during a cold snap and Scotland in a sunny period. Things aren’t always what you expect, so know the high and lows for when you’ll be there. And always pack an umbrella, poncho and cozy sweater, just in case.”  — Kate Torgovnick May

Get easily identifiable luggage. “I like soft suitcases because they’re lighter and you can stuff more in them. Ones with four wheels are easy to maneuver. Pick a crazy color or unusual pattern, so you can spot it easily on the luggage carousel.” — ChiHong Yim, video team

Pack for adventure. “I say: bring as little as possible — only as much as can fit on your back, in case you need to go everywhere by motorcycle. Always have Dramamine and a headlamp.” — Shoham Arad, TED Fellows team

Mark everything. “Take your business card or a piece of paper with your mobile number and email on it. Write, ‘Staying at the [fill in hotel] in [fill in city].’ Put one in everything you have — your luggage, your camera case, your glasses cases, your wallet, your phone case, even in the back of your passport. People want to give things back, if you make it easy.” — Tom Rielly

Be a power ninja. “Put all of your cords, chargers and earbuds in a small bag so they’re all in one place and stay in good shape. Clear toiletry cases are especially good for this, so you can make sure you have everything before you leave.” — Susan Zimmerman, executive assistant

Get set on prescriptions. “Refill prescriptions and over-the-counter meds well in advance. Your pharmacist can give you a vacation override to refill early. Write down the generic names of your prescriptions — the brand name might be different abroad. And bring spare prescription sheets signed by your doctor. International pharmacists often have more latitude in prescribing without a doctor — just look for the green cross.” — Tom Rielly


For making the most of your stay….

Ask your barista for recommendations. “On the first day of a trip, I go to a local coffee shop, and tell the barista, ‘I have this many days in the city. What should I do?’ As locals, they know big, seasonal events and small, local things. They can help you prioritize.” — Jody Mak, partnership team

Spend your jet lag day at a niche-y museum. “You just got to Paris and you need to stay awake all day, but you don’t want to go to the Louvre yet because your brain is too foggy. It’s a perfect day to go to the Musée de la Chasse et de la Nature, a small, atmospheric museum all about nature and courtly hunting. It’s basically full of dog and horse paintings and old furniture; it’s the perfect speed for your jet-lagged brain. In Berlin, try the postal museum; in London, the permanent collection of the V&A; in Prague, the National Museum.” — Emily McManus, editor,

Check out a comedy show. “When I’m traveling to a new place, I catch a comedy show — improv, stand-up or a funny play. Humor is a fascinating lens into a culture and hearing jokes gives a lot of insight. A friend and I went to see comedy in Johannesburg, South Africa, and learned a lot about the post-apartheid social structure that no book would have told us.” — Morton Bast, editorial team

Try geocaching. “One of the best ways to see a new place is Geocaching, a real-world treasure hunt using GPS on your phone. People hide geocaches all over the world — they can contain anything from tiny scrolls for you to sign to boxes where you take trinkets. Geocaching has taken me off trails to beautiful vistas, to urban locations I might have walked past, and about a quarter of a mile into a drainage pipe underground.” — Kelly Stoetzel, content director

Take a cooking class. “You learn about the food, which automatically teaches you a bit about the culture and history. And you make friends, who might want to go on adventures with you.” — Kate Torgovnick May

Have a restaurant option list. “I like to make a list of food options for different areas and neighborhoods, so that wherever I am, I have thoughts — and the freedom to improvise.” — Anna Kostuk, partnership team

Try “Check it to see if there’s a dinner party held by a local near you. You can meet really great people. If nothing’s happening while you’re there, see if a friend of a friend of a friend can have you over for dinner.” — Janet Lee

Use an offline navigation app to save on data. “I like OsmAnd, but there are plenty of free choices for both iOS and Android. There’s no need to pay for an international 3G connection; with just your GPS on, you can navigate any destination and find the nearest ATM, supermarket or subway. One note, though: using your GPS will drain your phone’s battery fast, so you may want to get a portable charger.” — Krystian Aparta, translation team

Download some good travel apps. “I’ve heard good things about Native. You pay about $25 a month and it’s like a virtual travel agent you text for help. And while it’s very limited in locations right now, Detour is a cool idea. It’s an audio tour that gives you a starting point in a city, and guides you through — telling a story along the way.” — Gavin Hall

Take advantage of Google Translate. “Google Translate’s app can work offline. If you point it at a menu in a different language and take a photo, it shows it to you in English. I used it in Brazil when ordering food. And for those who want to stay off devices: Lonely Planet’s phrasebooks come in handy.” — Gavin Hall and Thaniya Keereepart

Avoid the temptation to overplan. “Try not to cram too much into your schedule — do one or two destinations a day. If there’s something you want to do but can’t, just think about how you now have a good reason to come back.” — Lisa Bu

Get lost. Say yes. “Don’t be afraid to create situations where you need to rely on locals. Always get on the boat. Always say yes to swimming.” — Laurie House, video editor, and Shoham Arad

Plan your photo documentation. “I just went on tour as a replacement singer in a band. One of the band members took a picture of every door of every venue we played. Together, those photos are so cool. But it’s something you have to think about ahead of time.” — Emilie Soffe, TED-Ed editorial

Make a playlist to remember your trip. “I was just reminding myself to Shazam things on my upcoming trip to Budapest, even songs I know. At the end of the trip, I’ll have a playlist stored in the app of all my sound memories. I still listen to the playlist I made of Berlin in 2009.” — Susan Zimmerman

Planet Debian Sandro Tosi: How to change your Google services location

Several Google services depend on your location, in particular Google Play (things like apps, devices and content can be restricted to certain countries). So what do you do if you relocate and want to update your information to access those exclusive services? There are lots of stories out there about making a payment on the Play Store with updated credit card info, etc.; the real procedure is a bit different, but not by much.

There are 3 places where you need to update your location information, all of them on Google Payments:

  1. in Payment Methods, change the billing address of all your payment methods;
  2. in Address Book, change the default shipping address;
  3. in Settings, change your home address.
Once that's done, wait a few minutes. You might also want to log out of and back in to your Google account (even though Google support will tell you it's not necessary, it didn't work for me otherwise), and you should be ready to go.

Planet Debian Sandro Tosi: DICOM viewer and converter in Debian

DICOM is a standard for your RX/CT/MRI scans, and the format in which your results will most often be given to you, along with Win/MacOS viewers. But what about Debian? The best I could find is Ginkgo CADx (package ginkgocadx).

If you want to convert those DICOM files into images you can use convert (I don't know why I was surprised to find out ImageMagick can handle it).
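A minimal sketch of such a conversion (filenames here are placeholders; ImageMagick detects the DICOM format from the file itself):

```shell
# Convert a DICOM file to PNG with ImageMagick's convert
convert scan.dcm scan.png

# Multi-frame DICOM files can be split into one image per frame
# using a numbered output pattern
convert scan.dcm frame_%02d.png
```

The usual ImageMagick options (resizing, contrast adjustment, etc.) apply here as with any other input format.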

PS: here is a description of the format.

Planet Debian Norbert Preining: PiwigoPress release 2.30

I just pushed a new release of PiwigoPress (main page, WordPress plugin dir) to the WordPress servers. This release incorporates some new features, mostly contributed by Anton Lavrov (big thanks!).


The new features are:

  • Shortcode: multiple ids can be specified, including ranges (not supported in the shortcode generator)
  • Display of image name/title: in addition to the description, the name/title can also be displayed. Three settings can be chosen here: 0 – never show titles (default, as before), 1 – always show titles, and ‘auto’ – show only titles that do not look like auto-generated titles. (supported in the shortcode generator)

I also checked that the plugin runs with the soon to be released WordPress 4.3, and fixed a small problem with the setting of ‘albumpicture’ not being saved.

That’s all, enjoy, and leave your wishlist items and complaints at the issue tracker on the GitHub project piwigopress.


Geek Feminism Et tu, linkspam? (24 July 2015)

  • 25 Ways To Dress Like A Tech Employee | Buzzfeed: “There’s a persistent stereotype that people who work with technology are all dudes in hoodies or free company t-shirts, with zero interest in personal style or fashion. We see it in everything from Microsoft ad campaigns to the way tech companies are shown in television and movies. In my experience, I’ve worked with many people who are just as interested in style as they are technology. So I decided to ask my coworkers on BuzzFeed’s tech team to show me how to dress like a tech employee, and this is what happened…”
  • The kick-ass women of ‘Sense8′ make it best new show on TV | Reel Girl: “Last night, after my husband and I finished watching the last episode of ‘Sense 8,’ I rushed to the computer, Googling the show to see when to expect season 2. Maybe never! Wait, what? According to Think Progress and other sources, the diverse show featuring eight characters from different countries around the world may not be appealing enough to white males. Main characters also include a trans woman and a gay man.”
  • This Ruling Could Change Online “Free Speech” Forever | The Daily Beast: [CW: online harassment] “The spirit of “free speech” is put above freedom from harassment, bullying, or shaming. And it’s having horrific results—leaving jobs, avoiding careers, even contemplating suicide. The response from abusers and apologists is to “grow a thicker skin.” That “this is the Internet.” But it’s not. It’s the Internet as abusers want it. We should change that.”
  • The World’s Most Popular Video Game Fights Racist Harassment With Artificial Intelligence | Tech.mic: “They built a system called the Tribunal, a public case log of files where players could review reported instances of racism, sexism and homophobia, then vote on whether or not they warranted action. After 100 million votes were cast, the team had a usable database of what their community considers an abusive behavior. Then, they turned over that knowledge to their machine-learning algorithm and set it to work dealing with instances of abuse.”
  • At Comic-Con, It Feels Like the Year of the Woman | “Quite a few panels reflected this variety and grappled with its implications. Nobody is suggesting that a utopian age of sexual and racial equality has dawned in San Diego or anywhere else. The default Comic-Con panelist is still a white man, but it does seem that more of an effort has been made to correct this lazy lopsidedness here than in, say, the Hollywood studios a few hours up the freeway. If the entertainment business is still dominated by interlocking old-boy networks — in the movie studios, the bigger comic-book publishers, the television networks and among the writers, artists and directors those entities employ — the audience is challenging that status quo.”
  • The Women Who Rule Pluto | The Atlantic: “For all the firsts coming out of the New Horizons mission—color footage of Pluto, photos of all five of its moons, and flowing datastreams about Pluto’s composition and atmosphere—there’s one milestone worth noting on Earth: This may be the mission with the most women in NASA history.”
  • Listening, Being Heard | E. Catherine Tobler: “Writers like Weir — male, white, on top of the NYT Bestseller lists, movie deals, a break out book — are in an amazing position to boost voices that are not like their own. They have the ability to lift others up. And time after time, they mention work that is exactly like their own. Authors who mirror their own selves”
  • The trouble with jokes about girls | Times Higher Education: “There are many aspects to this story, but I want here to focus on just one of them: whether construing a sexist comment as a joke changes how we evaluate it. I am not so much concerned with the specifics of this case but rather by a more general issue: the division between those, like me, who think that the “joke” status of a disparaging comment is irrelevant, and those who think that whether someone is joking or not is a game-changer.”
  • ‘A national hero': psychologist who warned of torture collusion gets her due | Law | The Guardian: “Jean Maria Arrigo’s inbox is filling up with apologies. For a decade, colleagues of the 71-year-old psychologist ignored, derided and in some cases attacked Arrigo for sounding alarms that the American Psychological Association was implicated in US torture. But now that a devastating report has exposed deep APA complicity with brutal CIA and US military interrogations – and a smear campaign against Arrigo herself – her colleagues are expressing contrition.”
  • @EricaJoy’s salary transparency experiment at Google (with tweets) |_danilo · Storify: “The world didn’t end. Everything didn’t go up in flames because salaries got shared. But shit got better for some people.”
  • How to Deter Doxxing | Nieman Reports: “If I learned one thing from my ordeal it’s that doxxing can happen to anyone, at any time, for nearly any reason. But awareness of the risks—and effective strategies to mitigate them—too often come from bad experiences rather than preparation. When I was doxxed, the person who understood the most about what happened was the Domino’s delivery guy. As soon as Twitter was mentioned, he knew exactly what I was experiencing. Now it’s time for reporters and editors to know just as much.”
  • Read This Letter From Scientists Accusing Top Publisher Of Sexism | BuzzFeed News: “More than 600 scientists and their supporters have signed an open letter to the American Association for the Advancement of Science (AAAS), criticizing four recent events that “hinder the advancement of underrepresented groups” in science, technology, engineering, and math. The letter asks AAAS to “work more diligently” to avoid “harmful stereotypes” when publishing content about minorities, and recommends that its editorial staff undergo diversity training.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianMichal Čihař: Migrating phpMyAdmin from

Some time ago we decided to move phpMyAdmin off its old hosting services. This was mostly motivated by issues with the old host bundling crapware with installers (though we were not affected), but we also missed some features that we would like to have and that were not possible there.

The project relied on the old host for several services, the biggest ones being website and download hosting, issue tracking and mailing lists. We chose a different approach for each of these.

First, we moved the website and downloads away. Thanks to a generous hosting offer, everything went quite smoothly and we now have an HTTPS-secured website and downloads; see our announcement. Oh, and on the way we started to PGP sign the releases as well, so you can verify the downloads.

Shortly after this, the old infrastructure was hit by major problems. Unfortunately we were not yet completely ready with the rest of the migration, but this definitely pushed us to make progress faster.

During the outage, we opened up an issue tracker on GitHub to be able to receive bug reports from our users. In the background I worked on the issue migration. The good news is that as of now almost all issues are migrated. There are a few missing ones, but these will hopefully be handled in the upcoming days as well.

Last but not least, we had the mailing lists. We briefly discussed the available options and decided to run our own mail server for them. It will allow us greater flexibility while still using well-known software in the background. Initial attempts with Mailman 3 failed, so we went back to Mailman 2, which is stable and easy to configure. See also our news posts for the official announcement.

Thanks to our old host: it has been a great home for us, but now we have better places to live.

Filed under: English phpMyAdmin | 0 comments

Planet DebianSteve Kemp: We're in Finland now.

So we've recently spent our first week together in Helsinki, Finland.

Mostly this has been stress-free, but there are always oddities about living in new places, and moving to Europe didn't minimize them.

For the moment I'll gloss over the differences and instead document the computer problem I had. Our previous shared-desktop system had a pair of drives configured using software RAID. I pulled one of the drives to use in a smaller-cased system (smaller so it was easier to ship).

Only one drive of a pair being present makes mdadm scream, via email, once per day, with reports of failure.

The output of cat /proc/mdstat looked like this:

md2 : active raid10 sdb6[0]   [LVM-storage-area]
      1903576896 blocks super 1.2 2 near-copies [2/1] [_U]
md1 : active raid10 sdb5[1]   [/root]
      48794112 blocks super 1.2 2 near-copies [2/1] [_U]
md0 : active raid10 sdb1[0]   [/boot]
      975296 blocks super 1.2 2 near-copies [2/1] [_U]

See the "_" there? That's the missing drive. I couldn't remove the drive as it wasn't present on-disk, so this failed:

mdadm --fail   /dev/md0 /dev/sda1
mdadm --remove /dev/md0 /dev/sda1
# repeat for md1, md2.

Similarly removing all "detached" drives failed, so the only thing to do was to mess around re-creating the arrays with a single drive:

lvchange -a n shelob-vol
mdadm --stop /dev/md2
mdadm --create /dev/md2 --level=1 --raid-devices=1 /dev/sdb6 --force

I did that on the LVM-storage area, and the /boot partition, but "/" is still to be updated. I'll use knoppix/similar to do it next week. That'll give me a "RAID" system which won't alert every day.
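That daily alert comes from mdadm noticing the "_" in the array status field. As a side note, the same degraded state is easy to spot programmatically; here is a small Python sketch of my own (not from the post) that parses /proc/mdstat-style text:

```python
import re

# Sample /proc/mdstat content, as in the post: "[_U]" means one
# member of a two-device array is missing.
MDSTAT = """\
md2 : active raid10 sdb6[0]
      1903576896 blocks super 1.2 2 near-copies [2/1] [_U]
md1 : active raid10 sdb5[1]
      48794112 blocks super 1.2 2 near-copies [2/1] [_U]
md0 : active raid10 sdb1[0]
      975296 blocks super 1.2 2 near-copies [2/1] [_U]
"""

def degraded_arrays(mdstat_text):
    """Return the names of md arrays whose status string contains '_'."""
    degraded = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"^(md\d+) :", line)
        if m:
            current = m.group(1)
        # The status field looks like [UU] (healthy) or [_U] (degraded);
        # the character class only matches brackets containing U and _.
        s = re.search(r"\[([U_]+)\]", line)
        if s and "_" in s.group(1) and current:
            degraded.append(current)
    return degraded

print(degraded_arrays(MDSTAT))  # ['md2', 'md1', 'md0']
```

Any "_" in the status string marks a missing member, so wrapping this in a cron job would report the same condition mdadm mails about.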

Thanks to the joys of re-creation the UUIDs of the devices changed, so /etc/mdadm/mdadm.conf needed updating. I realized that too late, when grub failed to show the menu because it didn't find its own UUID. Handy recipe for the future:

set prefix=(md/0)/grub/
insmod linux
linux (md/0)/vmlinuz-3.16.0-0.bpo.4-amd64 root=/dev/md1
initrd (md/0)//boot/initrd.img-3.16.0-0.bpo.4-amd64

Planet Linux AustraliaDavid Rowe: Microphone Placement and Speech Codecs

This week I have been looking at the effect different speech samples have on the performance of Codec 2. One factor is microphone placement. In radio (from broadcast to two-way HF/VHF) we tend to use microphones placed close to our lips. In telephony and hands-free use, more distant microphone placement has become common.

People trying FreeDV over the air have obtained poor results from using built-in laptop microphones, but good results from USB headsets.

So why does microphone placement matter?

Today I put this question to the codec2-dev and digital voice mailing lists, and received many fine ideas. I also chatted to such luminaries as Matt VK5ZM and Mark VK5QI on the morning drive time 70cm net. I’ve also been having an ongoing discussion with Glen, VK1XX, on this and other Codec 2 source audio conundrums.

The Model

A microphone is a bit like a radio front end:

We assume linearity (the microphone signal isn’t clipping).

Imagine we take exactly the same mic and try it 2cm and then 50cm away from the speaker’s lips. As we move it away the signal power drops and (given the same noise figure) SNR must decrease.

Adding extra gain after the microphone doesn’t help the SNR, just like adding gain down the track in a radio receiver doesn’t help the SNR.

When we are very close to a microphone, the low frequencies tend to be boosted; this is known as the proximity effect. This is where the analogy to radio signals falls over. Oh well.

A microphone 50cm away picks up multi-path reflections from the room, laptop case, and other surfaces that start to become significant compared to the direct path. Summing a delayed version of the original signal will have an impact on the frequency response and add reverb – just like a HF or VHF radio signal. These effects may be really hard to remove.

Science in my Lounge Room 1 – Proximity Effect

I couldn’t resist – I wanted to demonstrate this model in the real world. So I dreamed up some tests using a couple of laptops, a loudspeaker, and a microphone.

To test the proximity effect I constructed a wave file with two sine waves at 100Hz and 1000Hz, and played it through the speaker. I then sampled using the microphone at different distances from a speaker. The proximity effect predicts the 100Hz tone should fall off faster than the 1000Hz tone with distance. I measured each tone power using Audacity (spectrum feature).
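For reference, a two-tone test file like this takes only a few lines of stdlib Python to generate (the sample rate, duration and level below are my own choices, not values from the post):

```python
import math
import struct
import wave

# Write a mono 16-bit WAV containing 100 Hz + 1000 Hz sine waves,
# similar to the proximity-effect test signal described above.
RATE = 8000          # samples per second
DURATION = 2.0       # seconds
AMPLITUDE = 0.4      # per tone; the sum stays below full scale

frames = bytearray()
for n in range(int(RATE * DURATION)):
    t = n / RATE
    sample = AMPLITUDE * (math.sin(2 * math.pi * 100 * t) +
                          math.sin(2 * math.pi * 1000 * t))
    frames += struct.pack("<h", int(sample * 32767))

with wave.open("two_tone.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)
    w.setframerate(RATE)
    w.writeframes(bytes(frames))
```

Play the resulting file through the speaker and the two tone powers can then be measured in Audacity as described.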

This spreadsheet shows the results over a couple of runs (levels in dB).

So in Test 1, we can see the 100Hz tone falls off 4dB faster than the 1000Hz tone. That seems a bit small; it could be experimental error. So I tried again with the mic just inside the speaker aperture (hence -1cm) and the difference increased to 8dB, just as expected. Yayyy, it worked!

Apparently this effect can be as large as 16dB for some microphones. Radio announcers use it to add gravitas to their voice, e.g. leaning closer to the mic when they want to add drama.

In my case it means unwanted extra low frequency energy messing with Codec 2 with some closely placed microphones.

Science in my Lounge Room 2 – Multipath

So how can I test the multipath component of my model above? Can I actually see the effects of reflections? I set up my loudspeaker on a coffee table and played a 300 to 3000 Hz swept sine wave through it. I sampled close up and with the mic 25cm away.

The idea is to get a reflection off the coffee table. The direct and reflected waves will be half a wavelength out of phase at some frequency, which should cause a notch in the spectrum.

Let’s take a look at the frequency response close up and at 25cm:

Hmm, they are both a bit of a mess. Apparently I don’t live in an anechoic chamber. Hmmm, that might be handy for kids’ parties. Anyway, I can observe:

  1. The signal falls off a cliff at about 1000Hz. Well, that will teach me to use a speaker with an active crossover for these sorts of tests. It’s part of a system that normally has two other little speakers plugged into the back.
  2. They both have a resonance around 500Hz.
  3. The close sample is about 18dB stronger. Given both have the same noise level, that’s 18dB better SNR than the other sample. Any additional gain after the microphone will increase the noise as much as the signal, so the SNR won’t improve.

OK, let’s look at the reflections:

A bit of Googling reveals reflections of acoustic waves from solid surfaces are in phase (not reversed 180 degrees). Also, the angle of incidence is the same as reflection. Just like light.

Now the microphone and speaker aperture are 16cm off the table, and the mic is 25cm away. A couple of right-angle triangles and a bit of Pythagoras make the reflected path length 40.6cm. This means a path difference of 40.6 – 25 = 15.6cm. So when wavelength/2 = 15.6cm, we should get a notch in the spectrum, as the two waves will cancel. Now v = f × wavelength, and v = 340m/s, so we expect a notch at f = 340/(2 × 0.156) ≈ 1090Hz.
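The geometry is easy to check numerically; here is a quick Python sketch of my own using the same figures (16cm height, 25cm mic distance, 340m/s):

```python
import math

V = 340.0        # speed of sound, m/s
HEIGHT = 0.16    # speaker aperture / mic height above the table, m
DIRECT = 0.25    # direct path from speaker to mic, m

# Two right-angle triangles, each spanning half the horizontal distance.
reflected = 2 * math.hypot(DIRECT / 2, HEIGHT)
delta = reflected - DIRECT      # path difference, m

# First notch where the path difference equals half a wavelength.
f_notch = V / (2 * delta)

print(f"reflected path  = {reflected * 100:.1f} cm")
print(f"path difference = {delta * 100:.1f} cm")
print(f"first notch     = {f_notch:.0f} Hz")
```

This reproduces the 40.6cm reflected path and a first notch near 1090Hz.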

Looking at a zoomed version of the 25cm spectrum:

I can see several notches: 460Hz, 1050Hz, 1120Hz, and 1300Hz. I’d like to think the 1050Hz notch is the one predicted above.

Can we explain the other notches? I looked around the room to see what else could be reflecting. The walls and ceiling are a bit far away (which means low freq notches). Hmm, what about the floor? It’s big, and it’s flat. I measured the path length directly under the table as 1.3m. This table summarises the possible notch frequencies:

Note that notches will occur at any frequency where the path difference is an odd number of half wavelengths (wavelength/2, 3 × wavelength/2, 5 × wavelength/2, and so on), hence we get a comb effect along the frequency axis.
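A Python sketch of that comb, under the same reflection model (my own illustration): for the floor bounce, with a 1.3m reflected path against the 0.25m direct path, it predicts notches at 162, 486, 810, 1133 and 1457Hz.

```python
V = 340.0  # speed of sound, m/s

def comb_notches(path_difference_m, f_max=1500.0):
    """Notch frequencies where the path difference is an odd number of
    half wavelengths: f = (2n - 1) * V / (2 * delta)."""
    notches = []
    n = 1
    while True:
        f = (2 * n - 1) * V / (2 * path_difference_m)
        if f > f_max:
            return notches
        notches.append(round(f))
        n += 1

# Floor bounce: 1.3 m reflected path vs 0.25 m direct path.
print(comb_notches(1.30 - 0.25))   # [162, 486, 810, 1133, 1457]
# Coffee table bounce: 15.6 cm path difference, one notch below 1500 Hz.
print(comb_notches(0.156))         # [1090]
```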

OK, I can see the predicted notches at 486Hz and 1133Hz, which means the 1050Hz notch is probably the one off the table. I can’t explain the 1300Hz notch, and there’s no sign of the predicted notch at 810Hz. With a little imagination we can see a notch around 1460Hz. Hey, that’s not bad at all for a first go!

If I was super keen I’d try a few variations like the height above the table and see if the 1050Hz notch moves. But it’s Friday, and nearly time to drink red wine and eat pizza with my friends. So that’s enough lounge room acoustics for now.

How to break a low bit rate speech codec

Low bit rate speech codecs make certain assumptions about the speech signal they compress. For example, the time-varying filter used to transmit the speech spectrum assumes the spectrum varies slowly in frequency and doesn’t have any notches. In fact, as this filter is “all pole” (IIR), it can only model resonances (peaks) well, not zeros (notches). Codecs like mine tend to fall apart (the decoded speech sounds bad) when the input speech violates these assumptions.
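To see why, here is a toy illustration of my own (not Codec 2 code): the magnitude response of a single two-pole, all-pole section peaks sharply at the pole frequency, and no choice of its parameters produces a deep null.

```python
import cmath
import math

def allpole_mag(f, fs=8000.0, f_res=1000.0, r=0.95):
    """Magnitude response at frequency f of the all-pole resonator
    H(z) = 1 / (1 - 2 r cos(theta) z^-1 + r^2 z^-2),
    with poles at radius r and angle theta = 2 pi f_res / fs."""
    theta = 2 * math.pi * f_res / fs
    z = cmath.exp(1j * 2 * math.pi * f / fs)
    denom = 1 - 2 * r * math.cos(theta) / z + (r ** 2) / (z ** 2)
    return 1.0 / abs(denom)

# The response is large near the pole frequency (1000 Hz here) and
# smooth everywhere else; an all-pole filter has no zeros to carve
# out a notch.
for f in (500, 1000, 1500):
    print(f, round(allpole_mag(f), 2))
```

Stacking several such sections, as an LPC synthesis filter effectively does, only adds more peaks; a notch needs zeros, which an all-pole model doesn’t have.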

This helps explain why clean speech from a nicely placed microphone is good for low bit rate speech codecs.

Now Skype and (mobile) phones do work quite well in “hands free” mode, with rather distant microphone placement. I often use Skype with my internal laptop microphone. Why is this OK?

Well, the codecs used have a much higher bit rate, e.g. 10,000 bits/s rather than 1,000 bits/s. This gives them the luxury of coding arbitrary waveforms as well as speech, to some extent. They employ algorithms like CELP that use a hybrid of model based (like Codec 2) and waveform based (like PCM) coding. So they faithfully follow the crappy mic signal, and don’t fall over completely.


CryptogramFriday Squid Blogging: How a Squid Changes Color

The California market squid, Doryteuthis opalescens, can manipulate its color in a variety of ways:

Reflectins are aptly-named proteins unique to the light-sensing tissue of cephalopods like squid. Their skin contains specialized cells called iridocytes that produce color by reflecting light in a predictable way. When the neurotransmitter acetylcholine activates reflectin proteins, this triggers the contraction and expansion of deep pleats in the cell membrane of iridocytes. By turning enzymes on and off, this process adjusts (or tunes) the brightness and color of the light that's reflected.

Interesting details in the article and the paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

CryptogramHow an Amazon Worker Stole iPads

A worker in Amazon's packaging department in India figured out how to deliver electronics to himself:

Since he was employed with the packaging department, he had easy access to order numbers. Using the order numbers, he packed his order himself; but instead of putting pressure cookers in the box, he stuffed it with iPhones, iPads, watches, cameras, and other expensive electronics in the pressure cooker box. Before dispatching the order, the godown also has a mechanism to weigh the package. To dodge this, Bhamble stuffed equipment of equivalent weight," an officer from Vithalwadi police station said. Bhamble confessed to the cops that he had ordered pressure cookers thrice in the last 15 days. After he placed the order, instead of, say, packing a five-kg pressure cooker, he would stuff gadgets of equivalent weight. After receiving delivery clearance, he would then deliver the goods himself and store it at his house. Speaking to mid-day, Deputy Commissioner of Police (Zone IV) Vasant Jadhav said, "Bhamble's job profile was of goods packaging at's warehouse in Bhiwandi.

TEDThe neuro-revolution is coming: Greg Gage’s neuroscience kits put research in the hands of the curious


Greg Gage left a career in engineering when he realized his real passion was for neuroscience. He creates kits to help spark this interest in kids, so they don’t “miss their calling like I did.” Many of the kits involve experiments with roaches. Photo: Courtesy of Daily Laurel

Greg Gage is a reliable source of both shock and awe at TED. Onstage over the years, this TED Fellow has demonstrated his low-cost DIY teaching kits by amputating a cockroach leg to show how neurons fire, remote-controlling a cyborg cockroach to demonstrate how electrical stimulation guides behavior, and taking away an audience member’s free will to show how one person’s brain can control the arm movements of another.

Gage, a latecomer to science himself, is passionate about revolutionizing neuroscience education. His goal is to make research equipment previously only accessible in university labs available to teachers and home enthusiasts. Here, he tells the TED Blog about the evolution of Backyard Brains, his plan to create a class of independent neuroscience researchers and his home for retired cockroaches.

Tell us about your scientific background. Were you always passionate about neuroscience?

I was actually an engineer for many years. I made circuit boards. I had a nice job with a technology company and lived in Europe, where I was in charge of engineering for Europe, the Middle East, Africa and South Asia Pacific. I always enjoyed science — I read Scientific American and science books, but I thought science was stuff that you learned in school. That it was made up of already-accumulated facts. I never realized that science was a career you could do — that you can actually get paid to make experiments, and understand how nature works.

That all changed when I attended an evening talk on astronomy at Leiden University in the Netherlands. After the lecture, I talked to the graduate students who presented, and found out this was their full-time job: making experiments, collecting data and writing results and papers. I realized: this could be me! I could actually become a scientist! So I quit my job and went back to grad school in Michigan. Meanwhile, everyone told me I was crazy to leave my well-paid and comfortable job.

What did you study?

I was a basal ganglia guy, studying deep brain structures. I recorded from the motor cortex, the nucleus accumbens, the striatum. I trained rats to do decision-making tasks. When they heard a tone, they’d go one way, and when I played another tone, they’d go another way. At some points, I’d play both tones at the same time to confuse them, and I’d observe which direction they chose. I’d use that data to look at brain cells. I recorded the spike trains from the cells, looking at the exact moment in which they were making the decision — then determine what cells were firing when, to get an idea of what the microarchitecture and microcircuitry of our brains are like.

We made some nice discoveries: we found that certain interneurons — which are missing in people with schizophrenia — fire really, really fast at the moment you’re making a decision. These cells seem to be suppressing unwanted decisions, and only allowing the ones that are the strongest to escape and be chosen. It actually fits pretty well with schizophrenia. This research was published in the journal Neuron, and was a fairly high-impact paper.


Backyard Brains offers low-cost neuroscience equipment and free online lessons and experiments exploring basic principles. This illustration is part of a lesson that goes with the Neuron SpikerBox, and explores how muscles and neurons work together. Image: Courtesy of Backyard Brains

How did you veer into creating neuroscience kits?

Given my experience, I felt it was important to explain what scientific careers are, so young people wouldn’t miss their calling like I almost did. While I was in grad school, I did outreach. I’d visit schools with my neuroscience labmate Tim Marzullo to teach kids about how the brain works. We’d explain that there are really cool opportunities out there to study neuroscience. I used to tell kids, “If you like Sudoku, solving puzzles in general or building things, you’ll love being a scientist.”

Tim and I would enter these little competitions called “Brains Rule!” where we’d try to create better demos to get kids interested. That led to trying to bring what we did in our graduate research into the classroom, so kids could see more than just demos consisting of ping-pong balls as transmitters. Many science exhibits make science too “fun” so that it just becomes a game — and then kids leave without a real understanding of what the brain or neurons actually do.

In 2008, we identified this need to make it real. But we couldn’t bring in our equipment from the lab because it cost $40,000. We couldn’t bring our animals in, because that’s illegal. If someone wants to study the brain, they typically have to go to grad school — which is silly. This isn’t the case in other areas of science. You can study the planets or stars with a cheap telescope — you don’t have to get a PhD in astrophysics.

So we set about building what we called the “$100 spike” — inspired by Nicholas Negroponte’s $100 laptop. Could we build neuroscience equipment rugged enough that students could use it, and cheap enough that schools could afford it? Six months later, we revealed our first prototypes at the Society for Neuroscience conference. We got some publicity. People started writing us, wanting to buy one. People loved the idea of making neuroscience equipment available to classrooms.

I was still writing my dissertation, recording data, training rats. I’d had a couple of high-profile publications, but never really received much feedback. On this little $100 spike project, Tim and I were getting emails all the time. I was learning in my research how the basal ganglia uses dopamine to change the probability of future behaviors. So it made sense that as we kept getting positive attention for the project — and none for our graduate school work — we decided to focus on this venture. We named it Backyard Brains.


Backyard Brains’ flagship product, the Neuron SpikerBox, is a “bioamplifier” that allows you to hear and see spikes of neurons in insects and other invertebrates. Priced at $99, it was inspired by Nicholas Negroponte’s $100 laptop. Photo: Courtesy of Backyard Brains

What do the products do and teach?

We have a number of inventions. The first one was the Neuron SpikerBox, a kit that allows you to record the living brain cells of insects. The idea was to bring electrophysiology into the classroom. We demonstrated the process at TEDYouth: first you anesthetize the insect in ice water — which is the recommended way to do it, according to Vincent Wigglesworth, who published the standard paper on insect pain in 1980.

When the cockroach is anesthetized, we remove one of its legs, and let the leg warm back up so the neurons fire. We put the pins in the leg, which pick up the small electrical discharge from the spikes and amplify it — so you can hear it. Then you can plug it into your smartphone and can actually see, record and do data analysis on it.

With that prep, you can then carry out a dozen or so experiments. You can look at somatotopy — what do different parts of the legs encode, and what is the location of these neurons? This is very much like the different parts of our brain; our somatosensory cortex is laid out in certain ways, so that our fingers and hands are represented by large areas of the brain while the backs of our thighs are represented by a very small area of the brain. You can do that experiment in the cockroach leg, and actually figure out some of the representations of neurons in certain areas of the leg.

So can you tease out each one? Say, this neuron is about movement, and this one is about pain, and this one is about being touched?

It’s more about density and location. What’s important to the cockroach is in the tarsus and tibia areas — the hands. You can see a lot more dense representation of neurons there, while in the upper arm area you don’t see as much.

You can also do neuropharmacology — which is looking at how neurons respond to an increase in neurotransmitters. You can look at functional electrical stimulation, which is what’s used for treating people with stroke — basically, stimulating muscles using an electrical discharge. That’s what we do when we make the leg dance with the music from an iPod.

These are really advanced neuroscience experiments we’re allowing people to do at an amateur and high-school level. We have another line of products called the Muscle SpikerBox, which allows you to record from the output of the human brain — the muscles. You’re recording from the motor cortex on down, so it records from the arms and potential actions of the hand. It records the individual motor units from the lower motor neuron in the spinal cord — so you can actually see a little pulse as that neuron in the brain is telling the muscle to move.



Backyard Brains released the Roboroach kit as the “world’s first commercially available cyborg.” With this kit, students can wirelessly control the left/right movement of a cockroach by stimulating its antenna nerves. Photo: Courtesy of Backyard Brains

What about the Roboroach, the kit that lets you remote-control a live cockroach? You billed it as the world’s first commercially available cyborg.

Roboroach is an interesting invention, because it allows us to study behavioral effects of the brain. You surgically fit an electronic backpack onto the roach, and it sends an electrical current directly into the antenna nerves. When you use the app to send the current, the roach responds with a turning behavior.

You can then ask, “Why is that cockroach doing that?” It’s the nature of the roach — you touch its antenna, and it turns in the other direction. It’s called a wall-following behavior. With the Roboroach kit, we’re talking to the same neurons using small pulses of electricity. We’re making the roach think it’s touching something.

Behavior is what’s really interesting about neuroscience. Neurons are the things that we’re firing when we do and think anything. They drive behavior. The more you can see neurons and behavior working together, the more interesting it gets.

The fact that the SpikerBox and Roboroach require removing parts of the cockroach caused quite a bit of controversy among animal activists. What do you say about that?

I think it’s partly perception of what the Roboroach, for example, does. Some people thought that Roboroach is a permanently remote-controlled insect — and that it was a slippery slope to something more macabre. But that’s not what’s happening. The reality is that this works on the cockroach because the insect naturally follows simple rules. But very soon, it adapts to an unnatural stimulus. After 15 minutes, the roach ignores the backpack. It retains its free will.

The other issue is about pain or damage to the cockroach. To install the micro-stimulator, you put the cockroach under ice water, you remove a portion of its antenna, and you place some stimulating wires inside. Afterwards, you remove the backpack and put the cockroach back in its cage. We’ve looked carefully at behavior before and after surgery, and the cockroach appears, in every way, shape and form, to function with the smaller antennae.

We have a retirement community called “Shady Acres” for roaches that have given their service to Backyard Brains. When we put food in the cage, their antennas move and they walk over. They appear to be functioning well. They also live just as long as other cockroaches do — the death rate of roaches used in experiments is equal to that of controls. In nature, you’ll see cockroaches in the wild missing antennae and even limbs. Their ability to adapt easily to damage is different from that of humans.

But the real question is: what is the human benefit and does it outweigh the cost to the cockroach? This is a question you have to ask every time you do an animal experiment. The benefit is the ability to demonstrate neurotechnology to a group of students who may be interested in pursuing a career in science. Students are able to study neural systems and behavior, and learn how the most advanced neurological treatments work in humans. The Roboroach is deep brain stimulation — the same technology used to treat diseases like Parkinson’s. About 20% of the world is diagnosed with a neurological disorder that doesn’t have a cure. So I think the benefits to humanity make it our moral responsibility to teach about the brain using insects.


The Reaction Timer works with the EMG SpikerBox to measure reaction time by recording how quickly a person can flex their muscles in response to stimuli. Photo: Courtesy of Backyard Brains

Can you use Backyard Brains to study cognition in humans?

We have a kit that can measure how much time it takes for your eyes to see something and for you to react. We can record the delay between a green LED coming on and your muscles moving in response. It takes about 350 milliseconds.

We can make the experiment more complex. Instead of just a green light, we can use an additional red light to distract you. The task is the same: react as fast as you can to the green light. But reaction time is longer because you need to be sure the color is correct first. You can record how long it takes your brain to deal with this extra cognitive step.

You can also use a tone, so you know how long information takes to go from your ear to your muscle. It turns out the eye processes information 100 milliseconds faster, because we have more neurons in our visual pathway, as we’re more visual creatures.

What are some of your latest inventions?

We’ve been moving into neural interfaces — connecting machines to the brain via electrical signals we can detect. We have devices that snap onto an Arduino board, and students build their own computer interfaces from their muscles, heart or brain. Our talk at this year’s TED used a Muscle SpikerBox paired with an Arduino to control a muscle stimulator. We call it the “human-to-human interface.”

We focus on the latest technology that labs are using, and make DIY versions of it. Right now we’re developing an OptoStimmer that will soon make optogenetics an affordable classroom tool. Optogenetics is a breakthrough technique that allows you to turn specific neurons in the brain on and off by genetic targeting: the targeted neurons express a gene that grows little light-sensitive channels, so those cells fire when you shine a light on them. Normally, light doesn’t affect neurons, so this technique causes only targeted neurons to fire spikes. You can pulse this light, and all the other neurons ignore it except for the ones that you want to control. You now have this amazing ability to turn on or off any neuron, any time.


I can’t overstate how important this technique is to our field. This has been the holy grail, so you can figure out what neurons are actually doing. It’s given answers to long-standing debates. For example, no one really knew how deep-brain stimulation for treating Parkinson’s works. There were theories and models, but now it can be tested. Scientists carefully targeted possible neurons in mice and other animals using optogenetics, pulsed the light and looked to see which neurons actually had therapeutic effects. It turns out it was not even where they were stimulating in the deep brain — it was another section way far away, the motor cortex, that mattered.

We’d like to make this tool available for high school students to do experiments using fruit flies. They’ll be able to do some recordings on optogenetic flies under a microscope. Then pulse a little bright red LED, which shines through the skin of the fruit fly. When a targeted neuron sees the red light, it will start firing. Depending on which neuron you are targeting, you can drive vastly different behaviors: from thinking they tasted something sweet to moonwalking like Michael Jackson. You’ll be able to see how specific neurons are affecting behavior.

Are there amateur neuroscientists outside the classroom using Backyard Brains products to do research?

Yes. One of our goals is to have a peer-reviewed paper that comes from an amateur with an institutional address that is their home address. It’s happening already in mathematics and astronomy, but not in neuroscience. We want to change that. We want real discoveries to happen at home, using our gear.

What I like about Backyard Brains is that we not only push out products, we push out experiments. We want our experiments to be novel and educational. We want to develop new tools and techniques that we can publish in academic peer-reviewed journals. We train undergraduates on how to do experiments, and we write those up and publish them. Our first article about the SpikerBox was published in 2012 in PLOS ONE, and we’ve been publishing every year since.

Our work is independent from any university and is financed with the money that we’re generating from grants and sales at Backyard Brains. It feels great to finally be an independent scientist. Our goal is to see scientific neuroscience papers published by amateur scientists. The neuro-revolution is coming.

Above: Backyard Brains shows exactly what happens when you play hip-hop music into the light-reflecting nerve cells of a squid. 

Sociological ImagesHappy Birthday, Sociological Images!

Hoooo-ray! This (newly described by science) spider has 8 legs and it’s doing cartwheels to celebrate SocImages’ 8th birthday!


This is our 5,530th post and still going strong. Thanks to all of you who discovered SocImages this year and those of you who’ve been hanging on since the beginning!

Here are some highlights from the last year. Quite a trip down memory lane!

  • Sociological Images was awarded this year’s American Sociological Association’s Distinguished Contributions to Teaching Award. We would love to think that the blog eases class prep a little and helps people get the best out of their students. If that’s what this award means, we’re over the moon.
  • I was honored to be invited to talk about SocImages in a plenary speech at the Midwest Sociological Society. Of course, there were lots of pictures. You’re welcome to view the slideshow here.
  • Rush Limbaugh covered a post about the relationship between studying economics and being antisocial and it sheds a scary light into the inner workings of his mind. (And I got called a “professorette” so… like I said, all high kicks all the time.)
  • We got tumbld by Wil Wheaton!
  • Two new Pinterest pages: pinkwashing and sexy what!?, a collection of totally random stuff being advertised as weirdly and unnecessarily sexual.
  • Our social media accounts continue to grow like weeds: thanks to the 74,000 of you on facebook,  23,000 on tumblr, 22,000 on twitter, and 14,000 on pinterest.  We do have fun and learn stuff, too!

Here’s to another year! (Sorry about the spider.)

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet DebianVincent Sanders: NetSurf developers and the Order of the Phoenix

Once more the NetSurf developers gathered to battle the forces of darkness or, as they are more commonly known, web specifications.

Michael Drake, Vincent Sanders, John-Mark Bell and Daniel Silverstone at the Codethink Manchester offices

The fifth developer weekend was an opportunity for us to gather in a pleasant setting and work together in person. We were graciously hosted, once again, by Codethink in their Manchester offices.

Four developers managed to attend in person from around the UK: Michael Drake, John-Mark Bell, Daniel Silverstone and Vincent Sanders.

The main focus of the weekend's activities was to address two areas that have become overwhelmingly important: JavaScript and Layout.

Although the browser obviously already has both these features, they are somewhat incomplete and incapable of supporting the features of the modern web.


The discussion started with JavaScript and its implementation. We had previously looked at the feasibility of changing our JavaScript engine from Spidermonkey to DukTape. We had decided this was a change we wanted to make when DukTape was mature enough to support the necessary features.

The main reasons for the change are that Spidermonkey is a poor fit for NetSurf, as it is relatively large and does not provide a stable API guarantee. The lack of a stable API requires extensive engineering to update to new releases. Additionally, support for compiling on minority platforms is very challenging, meaning that most platforms are stuck using version 1.7 or 1.85 (the current release is version 31, with 38 due).

We started the move to Duktape by creating a development branch, integrating the Duktape library  and open coding a minimal implementation of the core classes as a proof of concept. This work was mostly undertaken by Daniel with input from myself and John-Mark. This resulted in a build that was substantially smaller than using Spidermonkey with all the existing functionality our tests cover.

The next phase of this work is to take the prototype implementation and turn it into something that can be reliably used and covers the entire JavaScript DOM interface. This is no small job as there are at least 200 classes and 1500 methods and properties to implement.


The layout library design discussion was extensive and very involved. The layout engine is a browser's most important component. It takes all the information processed by the CSS and DOM libraries, applies a vast number of involved rules and produces a list of operations that can be rendered.

This reimplementation of our rendering engine has been in planning for many years. The existing engine stems from the browser's earliest days more than a decade ago and has many shortcomings in architecture and implementation that we hope to address.

The work has finally started on libnslayout with Michael taking the lead and defining the initial API and starting the laborious work of building the test harness, a feature the previous implementation lacked!

The second war begins

For a war you need people and it is a little unfortunate that this was our lowest ever turnout for the event. This is true of the project overall with declining numbers of commits and interest outside our core group. If anyone is interested we are always happy to have new contributors and there are opportunities to contribute in many areas from image assets, through translations, to C programming.

We discussed some ways to encourage new developers and to recruit committed maintainers, especially for the minority platform frontends. The RISC OS frontend, for example, has needed a maintainer since the previous one stepped down. There was some initial response from its community, culminating in a total of two patches, when we announced the port was under threat of not being releasable in future. Unfortunately nothing further came from this, and it appears our oldest frontend may soon become part of our history.

We also covered some issues from the bug tracker mostly to see if there were any patterns that we needed to address before the forthcoming 3.4 release.

There was discussion about recent improvements to the CI system which generate distribution packages from the development branch and how this could be extended to benefit more users. This also included authorisation to acquire storage and other miscellaneous items necessary to keep the project infrastructure running.

We managed over 20 hours of work in the two days and addressed our current major shortcomings. Now it just requires a great deal of programming to complete the projects started here.

Planet DebianMartin Michlmayr: Congratulations to Stefano Zacchiroli

Stefano Zacchiroli receiving the O'Reilly Open Source Award

I attended OSCON's closing sessions today and was delighted to see my friend Stefano Zacchiroli (Zack) receive an O'Reilly Open Source Award. Zack acted as Debian Project Leader for three years, is working on important activities at the Open Source Initiative and the Free Software Foundation, and is generally an amazing advocate for free software.

Thanks for all your contributions, Zack, and congratulations!

Planet DebianElena 'valhalla' Grandi: Old and new: furoshiki and electronics.

Old and new: furoshiki and electronics.

Yesterday at the local LUG (@Gruppo Linux Como ) somebody commented on the mix of old and new in my cloth-wrapped emergency electronics kit (you know, the kind of things you carry around with a microcontroller board and a few components in case you suddenly have an idea for a project :-) ).


This is the kind of thing it holds right now: the components tend to change over time.


And yes, I admit I can only count up to 2, for higher numbers I carry a reference card :-)


Anyway, there was a bit of conversation on how this looked like a grandmother-ish thing, especially since it was in the same bag with a knitted WIP sock, and I mentioned the Japanese #furoshiki revival and how I believe that good old things are good, and good new things are good, and why not use them both?

Somebody else, who may or may not be @Davide De Prisco, asked me to let him have the links I mentioned, which include:

* Wikipedia page: Furoshiki
* Guide from the Japanese Ministry of the Environment on how to use a furoshiki (and the article
* A website with many other wrapping techniques

Worse Than FailureError'd: What is this 'Right Click' You Speak Of?

"What makes this worse is that this wasn't an edge case," wrote Roger, "I only right-clicked in the body of an email."


"Luckily, I'm independently wealthy, and can afford to take a position like this. I'll get right back to you, Joe!" writes Scott.


Mike wrote, "How can client software live in a world where a server's answers are meaningless?"


Daniel Z. writes, "I never use XDebug, but when I do, I do it in Production."


"I guess that it makes sense that, this time of year, even Pluto doesn't have any hotel vacancies," David A. wrote.


"Now THIS is what I call proactive support!" Paolo T. writes.


Piotr wrote, "If you'll excuse me, I need to create a YouTube channel."


"Estimated delivery...NEVER!" Erik C. writes.


[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianSimon Kainz: DUCK challenge: week 3

One more update on the DUCK challenge: in the current week, the following packages were fixed and uploaded into unstable:

So we had 10 packages fixed and uploaded by 8 different uploaders. A big "Thank You" to you!!

Since the start of this challenge, a total of 35 packages, uploaded by 25 different persons, have been fixed.

Here is a quick overview:

             Week 1   Week 2   Week 3   Week 4   Week 5   Week 6   Week 7
# Packages     10       15       10       -        -        -        -
Total          10       25       35       -        -        -        -

The list of the fixed and updated packages is available here. I will try to update this ~daily. If I missed one of your uploads, please drop me a line.

There is still lots of time till the end of DebConf15 and the end of the DUCK Challenge, so please get involved.

Previous articles are here: Week 1, Week 2.

Planet Linux AustraliaJames Morris: Linux Security Summit 2015 Update: Free Registration

In previous years, attending the Linux Security Summit (LSS) has required full registration as a LinuxCon attendee.  This year, LSS has been upgraded to a hosted event.  I didn’t realize that this meant that LSS registration was available entirely standalone.  To quote an email thread:

If you are only planning on attending the The Linux Security Summit, there is no need to register for LinuxCon North America. That being said you will not have access to any of the booths, keynotes, breakout sessions, or breaks that come with the LinuxCon North America registration.  You will only have access to The Linux Security Summit.

Thus, if you wish to attend only LSS, then you may register for that alone, at no cost.

There may be a number of people who registered for LinuxCon but who only wanted to attend LSS.   In that case, please contact the program committee at

Apologies for any confusion.


Planet Linux AustraliaMichael Davies: Virtualenv and library fun

Doing python development means using virtualenv, which is wonderful.  Still, sometimes you find a gotcha that trips you up.

Today, for whatever reason, inside a venv inside a brand new Ubuntu 14.04 install,  I could not see a system-wide install of pywsman (installed via sudo apt-get install python-openwsman)

For example:
mrda@host:~$ python -c 'import pywsman'
# Works

mrda@host:~$ tox -evenv --notest
(venv)mrda@host:~$ python -c 'import pywsman'
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ImportError: No module named pywsman
# WAT?

Let's try something else that's installed system-wide
(venv)mrda@host:~$ python -c 'import six'
# Works

Why does six work, and pywsman not?
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/six*
-rw-r--r-- 1 root root  1418 Mar 26 22:57 /usr/lib/python2.7/dist-packages/six-1.5.2.egg-info
-rw-r--r-- 1 root root 22857 Jan  6  2014 /usr/lib/python2.7/dist-packages/
-rw-r--r-- 1 root root 22317 Jul 23 07:23 /usr/lib/python2.7/dist-packages/six.pyc
(venv)mrda@host:~$ ls -la /usr/lib/python2.7/dist-packages/*pywsman*
-rw-r--r-- 1 root root  80590 Jun 16  2014 /usr/lib/python2.7/dist-packages/
-rw-r--r-- 1 root root 293680 Jun 16  2014 /usr/lib/python2.7/dist-packages/

The only thing that comes to mind is that pywsman wraps a compiled .so, while six is pure Python.

A work-around is to tell venv that it should use the system-wide install of pywsman, like this:

# Kill the old venv first
(venv)mrda@host:~$ deactivate
mrda@host:~$ rm -rf .tox/venv

Now start over
mrda@host:~$ tox -evenv --notest --sitepackages pywsman
(venv)mrda@host:~$ python -c "import pywsman"
# Fun and Profit!
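The difference between six and pywsman is that pywsman is a binding around a compiled shared object, which an isolated venv never sees. When debugging this kind of problem it helps to probe what the current interpreter can actually resolve; here is a small generic sketch using the Python 3 importlib API (on the Python 2.7 shown above, imp.find_module plays the same role):

```python
import importlib.util

def can_import(name):
    """Return True if `name` resolves on the current sys.path."""
    return importlib.util.find_spec(name) is not None

print(can_import("json"))     # True: the stdlib is visible everywhere
print(can_import("pywsman"))  # False inside an isolated venv without site-packages access
```

Running this both outside and inside the venv makes the difference in package visibility obvious.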

Planet DebianAntoine Beaupré: Is it safe to use open wireless access points?

I sometimes get questions when people use my wireless access point, which, for as long as I can remember, has been open to everyone; that is, without any form of password protection or encryption. I arguably don't use the access point much myself, as I prefer the wired connection for the higher bandwidth, security and reliability it provides.

Apart from convenience for myself and visitors, the main reason why I leave my wireless access open is that I believe in a free (both as in beer and freedom) internet, built with principles of solidarity rather than exploitation and profitability. In these days of ubiquitous surveillance, freedom often goes hand in hand with anonymity, which implies providing free internet access to everyone.

I also believe that, as more and more services get perniciously transferred to the global internet, access to the network is becoming a basic human right. This is therefore my small contribution to the struggle, now also part of the Réseau Libre project.

So here were my friend's questions, in essence:

My credit card info was stolen when I used a wifi hotspot in an airport... Should I use open wifi networks?

Is it safe to use my credit card for shopping online?

Here is a modified version of an answer I sent to a friend recently which I thought could be useful to the larger internet community. The short answer is "sorry about that", "it depends, you generally can, but be careful" and "your credit card company is supposed to protect you".


First off, sorry to hear that your credit card was stolen in an airport! That has to be annoying... Did the credit card company reimburse you? Normally, the whole point of credit cards is that they protect you in case of theft like this, and they are supposed to reimburse you if your credit card gets stolen or abused...

The complexity and unreliability of passwords

Now of course, securing every bit of your internet infrastructure helps in protecting against such attacks. However: there is a trade-off! First off, it does make it more complicated for people to join the network. You need to make up some silly password (which has its own security problems: passwords can be surprisingly easy to guess!) that you will post on the fridge or, worse, forget all the time!

And if it's on the fridge, anyone with a view to that darn fridge, be it one-time visitor or sneaky neighbor, can find the password and steal your internet access (although, granted, that won't allow them to directly spy on your internet connection).

In any case, if you choose to use a password, you should use the tricks I wrote in the koumbit wiki to generate the password and avoid writing it on the fridge.
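One such trick is a diceware-style passphrase: a handful of words picked uniformly at random from a large list is both easier to remember than line noise and, with a long enough list, very hard to guess. A minimal sketch, with the caveat that the word list here is purely illustrative (a real diceware list has 7776 words):

```python
import secrets

# Purely illustrative word list; a real diceware list has thousands of entries.
WORDS = ["correct", "horse", "battery", "staple",
         "orbit", "velvet", "pickle", "canyon"]

def passphrase(n_words=5):
    # secrets.choice uses a cryptographically secure RNG, unlike random.choice.
    return " ".join(secrets.choice(WORDS) for _ in range(n_words))

print(passphrase())
```

The strength comes from the number of picks and the size of the list, not from the obscurity of the words, so the result can be plain readable words and still be a strong password.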

The false sense of security of wireless encryption

Second, it can also give a false sense of security: just because a wifi access point appears "secure" (ie. that the communication between your computer and the wifi access point is encrypted) doesn't mean the whole connection is secure.

In fact, one attack that can be performed against access points is exactly that: masquerading as an existing access point, with no security at all. That way, instead of connecting to the real, secure and trusted access point, you connect to an evil one which spies on your connection. Most computers will happily connect to such a hotspot, even with degraded security, without warning.

It may be what happened at the airport, in fact. Of course, this particular attack would be less likely in the middle of the woods than in an airport, but it's an important distinction to keep in mind, because the same attack can be performed downstream of the wireless access point, for example by your countryside internet access provider or by someone attacking it.

Your best protection for your banking details is to rely on good passwords (for your bank account) but also, and more importantly, on what we call end-to-end encryption. That is usually implemented with HTTPS, shown as a padlock icon in your address bar. This ensures that the communication between your computer and the bank or credit card company is secure, that is: that no wifi access point or attacker between your computer and them can intercept your credit card number.
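That padlock stands for two checks a TLS client library performs on your behalf: the server's certificate chain must verify, and the certificate must match the hostname you asked for. As a sketch of what a sane default looks like, Python's ssl module enables both checks in create_default_context():

```python
import ssl

# A default context refuses unverified certificates and checks
# that the certificate matches the hostname you connect to.
ctx = ssl.create_default_context()
print(ctx.check_hostname)                    # True
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True

# Wrapping a socket would then look like:
#   with socket.create_connection(("example.com", 443)) as sock:
#       with ctx.wrap_socket(sock, server_hostname="example.com") as tls:
#           ...  # traffic here is end-to-end encrypted
```

An attacker who strips or fakes the encryption fails one of these two checks, which is exactly why disabling them "to make the warning go away" is dangerous.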

The flaws of internet security

Now unfortunately, even the HTTPS protocol doesn't bring complete security. For example, one attack that can be done is similar to the previous one: masquerading as a legitimate bank site, but either stripping out the encryption or even faking it.

So you also need to look at the address of the website you are visiting. Attackers are often pretty clever and will use many tricks to hide the real address of the website in the address bar. To work around this, I always explicitly type my bank website address ( in my case) directly myself instead of clicking on links, bookmarks or using a search engine to find my bank site.

In the case of credit cards, it is much trickier because when you buy stuff online, you end up putting that credit card number on different sites which you do not necessarily trust. There's no good solution but complaining to your credit card company if you believe a website you used has stolen your credit card details. You can also use services like Paypal, Dwolla or Bitcoin that hide your credit card details from the seller, if they support the service.

I usually try to avoid putting my credit card details on sites I do not trust, and limit myself to known parties (e.g. Via Rail, Air Canada, etc). Also, in general, I try to assume the network connection between me and the website I visit is compromised. This forced me to get familiar with online security and the use of encryption. It is more accessible to me than trying to secure the infrastructure I am using, because I often do not control it at all (e.g. internet cafes...).

Internet security is unfortunately a hard problem, and things are not getting easier as more things move online. The burden is on us programmers and system administrators to create systems that are more secure and intuitive for our users so, as I said earlier, sorry the internet sucks so much, we didn't think so many people would join the acid trip of the 70s. ;)

Planet DebianElena 'valhalla' Grandi: A Makefile for OpenSCAD projects

A Makefile for OpenSCAD projects

When working with OpenSCAD to generate models for 3D printing, I find it convenient to be able to build .stl and .gcode files from the command line, especially in batch, so I've started writing a Makefile, improving it and making it more generic in subsequent iterations; I've added a page on my website to host my current version.

Most of my projects use the following directory structure.

  • my_project/conf/basic.ini…
    slic3r configuration files

  • my_project/src/object1.scad, my_project/src/object2.scad…
    models that will be exported

  • my_projects/src/lib/library1.scad, my_projects/src/lib/library2.scad…
OpenSCAD files that don't correspond to a single object, included / used in the files above.

  • my_project/Makefile
the Makefile shown below.

Running make will generate stl files for all of the models; make gcode adds .gcode files using slic3r; make build/object1.stl and make build/object1.gcode also work, when just one model is needed.

# Copyright 2015 Elena Grandi
# This work is free. You can redistribute it and/or modify it under the
# terms of the Do What The Fuck You Want To Public License, Version 2,
# as published by Sam Hocevar. See for more details.

BUILDDIR = build
CONFDIR = conf
SRCDIR = src

SLIC3R = slic3r


STL_TARGETS = $(patsubst $(SRCDIR)/%.scad,$(BUILDDIR)/%.stl,$(wildcard $(SRCDIR)/*.scad))
GCODE_TARGETS = $(patsubst $(SRCDIR)/%.scad,$(BUILDDIR)/%.gcode,$(wildcard $(SRCDIR)/*.scad))

.PHONY: all gcode clean

all: $(STL_TARGETS)

gcode: $(GCODE_TARGETS)

$(BUILDDIR)/%.stl: $(SRCDIR)/%.scad $(SRCDIR)/lib/*
	mkdir -p ${BUILDDIR}
	openscad -o $@ $<

$(BUILDDIR)/%.gcode: $(BUILDDIR)/%.stl ${CONFDIR}/basic.ini
	${SLIC3R} --load ${CONFDIR}/basic.ini --output $@ $<

clean:
	rm -f ${BUILDDIR}/*.stl ${BUILDDIR}/*.gcode

This Makefile is released under the WTFPL:

Version 2, December 2004

Copyright (C) 2004 Sam Hocevar <>

Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.



Sociological ImagesSlave Families’ Desperate Efforts to Reunite During Reconstruction

“It is fair to say,” writes historian Heather Williams about the Antebellum period in America, “that most white people had been so acculturated to view black people as different from them that they… barely noticed the pain that they experienced.”

She describes, for example, a white woman who, while wrenching enslaved people from their families to found a distant plantation, describes them as “cheerful,” in “high spirits,” and “play[ful] like children.” It simply never occurred to her or many other white people that black people had the same emotions they did, as the reigning belief among whites was that they were incapable of any complex or deep feeling at all.

It must have created such cognitive dissonance, then — such confusion on the part of the white population — when after the end of slavery, black people tried desperately to reunite with their parents, cousins, aunties and uncles, nieces and nephews, spouses, lovers, children, and friends.

And try they did. For decades newly freed black people sought out their loved ones. One strategy was to put ads in the paper. The “Lost Friends” column was one such resource. It ran in the Southwestern Christian Advocate from 1879 until the early 1900s and a collection of those ads — more than 330 from just one year — has been released by the Historic New Orleans Collection. Here is an example:


The ads would have been a serious investment. They cost 50 cents which, at the time, would have been more than a day’s income for most recently freed people.

Williams reports that reunions were rare. She excerpted this success story from the Southwestern in her book, Help Me To Find My People, about enslaved families torn asunder, their desperate search for one another, and the rare stories of reunification.


In the SOUTHWESTERN of March 1st, we published in this column a letter from Charity Thompson, of Hawkins, Texas, making inquiry about her family. She last heard of them in Alabama years ago. The letter, as printed in the paper was read in the First church Houston, and as the reading proceeded a well-known member of the church — Mrs. Dibble — burst into tears and cried out “That is my sister and I have not seen her for thirty three years.” The mother is still living and in a few days the happy family will once more re-united.

I worry that white America still does not see black people as their emotional equals. Psychologists continue to document what is now called a racial empathy gap: both blacks and whites show less empathy when they see darker-skinned people experiencing physical or emotional pain. When white people are reminded that black people are disproportionately imprisoned, for example, it increases their support for tougher policing and harsher sentencing. Black prisoners receive presidential pardons at much lower rates than whites. And we think that black people have a higher physical pain threshold than whites.

How many of us tolerate the systematic deprivation and oppression of black people in America today — a people whose families are being torn asunder by death and imprisonment — by simply failing to notice the depths of their pain?

Cross-posted at A Nerd’s Guide to New Orleans.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


CryptogramRemotely Hacking a Car While It's Driving

This is a big deal. Hackers can remotely hack the Uconnect system in cars just by knowing the car's IP address. They can disable the brakes, turn on the AC, blast music, and disable the transmission:

The attack tools Miller and Valasek developed can remotely trigger more than the dashboard and transmission tricks they used against me on the highway. They demonstrated as much on the same day as my traumatic experience on I-64; After narrowly averting death by semi-trailer, I managed to roll the lame Jeep down an exit ramp, re-engaged the transmission by turning the ignition off and on, and found an empty lot where I could safely continue the experiment.

Miller and Valasek's full arsenal includes functions that at lower speeds fully kill the engine, abruptly engage the brakes, or disable them altogether. The most disturbing maneuver came when they cut the Jeep's brakes, leaving me frantically pumping the pedal as the 2-ton SUV slid uncontrollably into a ditch. The researchers say they're working on perfecting their steering control -- for now they can only hijack the wheel when the Jeep is in reverse. Their hack enables surveillance too: They can track a targeted Jeep's GPS coordinates, measure its speed, and even drop pins on a map to trace its route.

In related news, there's a Senate bill to improve car security standards. Honestly, I'm not sure our security technology is enough to prevent this sort of thing if the car's controls are attached to the Internet.

Planet Linux AustraliaBinh Nguyen: Self Replacing Secure Code, our Strange World, Mac OS X Images Online, Password Recovery Software, and Python Code Obfuscation

A while back (several years ago) I wrote about self-replacing code in my 'Cloud and Security' report (p.399-402) (I worked on it on and off over an extended period of time) within the context of building more secure codebases. DARPA are currently funding projects within this space. Based on what I've seen, it's early days. To be honest, it's not that difficult to build if you think about it carefully and break it down. Much of the code that is required is already in widespread use and I already have much of the code ready to go. The problem is dealing with the sub-components. There are some aspects that are incredibly tedious to deal with, especially within the context of multiple languages.

If you're curious, I also looked at fully automated network defense (as in the CGC (Cyber Grand Challenge)) in all three of my reports, 'Building a Cloud Computing Service', 'Convergence Effect', and 'Cloud and Internet Security' (I also looked at a lot of other concepts, such as 'Active Defense' systems which involve automated network response/attack, but there are a lot of legal, ethical, technical, and other conundrums that we need to think about if we proceed further down this path...). I'll be curious to see what the final implementations will be like...

If you've ever worked in the computer security industry you'll realise that it can be incredibly frustrating at times. As I've stated previously, it can sometimes be easier to get information from countries under sanction than legitimately (even in a professional setting in a 'safe environment') for study. I find this perspective very difficult to understand, especially when search engines allow independent researchers easy access to adequate samples, and when you consider how you're supposed to defend against something if you (and many others around you) have little idea of how some attack system/code works.

It's interesting how the West views China and Russia via diplomatic cables (WikiLeaks). They say that China is being overly aggressive, particularly with regards to economics and defense, while Russia is viewed as a hybrid criminal state. When you think about it carefully, the world is just shades of grey. A lot of what we do in the West is very difficult to defend when you look behind the scenes and realise that we straddle such a fine line; much of what they do we also engage in, we're just more subtle about it. If the general public were to realise that Obama once held off on seizing money from the financial system (proceeds of crime and terrorism) because there was so much locked up in US banks that it would cause the whole system to crash, would they see things differently? If the world in general knew that much of southern Italy's economy came from crime, would they view it the same way they view Russia? If the world knew exactly how much 'economic intelligence' seems to play a role in 'national security', would we think about the role of state security differently?

If you develop across multiple platforms you'll have discovered that it is just easier to have a copy of Mac OS X running in a Virtual Machine rather than having to shuffle back and forth between different machines. Copies of the ISO/DMG image (technically, Mac OS X is free for those who don't know) are widely available and as many have discovered most of the time setup is reasonably easy.

If you've ever lost your password to an archive, password recovery programs can save a lot of time. Most of the free password recovery tools deal only with a limited number of filetypes and passwords.

There are some Python bytecode obfuscation utilities out there but like standard obfuscators they are of limited utility against skilled programmers.

Worse Than FailureCodeSOD: Patterned After Success

Design patterns are more than just useless interview questions that waste everyone’s time and annoy developers. They’re also a set of approaches to solving common software problems, while at the same time, being a great way to introduce new problems, but enough about Spring.

For those of us that really want global variables back in our object oriented languages, the Singleton pattern is our go-to approach. Since it’s the easiest design pattern to understand and implement, those new to design patterns tend to throw it in everywhere, whether or not it fits.

Andres once worked with a developer who not only was new and enthusiastic about design patterns, but didn’t actually understand how to implement the Singleton pattern. Which is why Andres was getting null reference exceptions when trying to get an instance from this “singleton”.

    public class SingletonSmtpClient
    {
        private static SmtpClient _smtpClient;

        public SingletonSmtpClient(string host, string user, string password, bool ssl)
        {
            if (_smtpClient == null)
            {
                _smtpClient = new SmtpClient();
                _smtpClient.Credentials = new System.Net.NetworkCredential(user, password);
                _smtpClient.EnableSsl = ssl;
                _smtpClient.Host = host;
            }
        }

        public static SmtpClient getInstance()
        {
            return _smtpClient;
        }
    }
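As written, the accessor never constructs anything, so unless some other code happens to call the constructor first, getInstance() hands back null. A lazy singleton normally performs the construction inside the accessor itself. Here is a minimal sketch of that shape (in Python rather than C#, with invented names, purely for illustration):

```python
class SingletonSmtpClient:
    """Lazy singleton sketch: the accessor creates the shared instance
    on first use, so callers can never observe an uninitialized value."""
    _instance = None

    def __init__(self, host):
        self.host = host

    @classmethod
    def get_instance(cls, host="localhost"):
        # Construct the instance the first time it is requested.
        if cls._instance is None:
            cls._instance = cls(host)
        return cls._instance

a = SingletonSmtpClient.get_instance("smtp.example.com")
b = SingletonSmtpClient.get_instance()
print(a is b)  # True: every caller gets the same instance
```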



Planet DebianDaniel Pocock: Unpaid work training Google's spam filters

This week, there has been increased discussion about the pain of spam filtering by large companies, especially Google.

It started with Google's announcement that they are offering a service for email senders to know if their messages are wrongly classified as spam. Two particular things caught my attention: the statement that less than 0.05% of genuine email goes to the spam folder by mistake and the statement that this new tool to understand misclassification is only available to "help qualified high-volume senders".

From there, discussion has proceeded with Linus Torvalds blogging about his own experience of Google misclassifying patches from Linux contributors as spam and that has been widely reported in places like Slashdot and The Register.

Personally, I've observed much the same thing from the other perspective. While Torvalds complains that he isn't receiving email, I've observed that my own emails are not always received when the recipient is a Gmail address.

It seems that Google expects their users to work a little bit every day, going through every message in the spam folder and explicitly clicking the "Not Spam" button:

so that Google can improve their proprietary algorithms for classifying mail. If you just read or reply to a message in the folder without clicking the button, or if you don't do this for every message, including mailing list posts and other trivial notifications that are not actually spam, more important messages from the same senders will also continue to be misclassified.

If you are not willing to volunteer your time to do this, or if you are simply one of those people who has better things to do, Google's Gmail service is going to have a corrosive effect on your relationships.

A few months ago, we visited Australia and I sent emails to many people who I wanted to catch up with, including invitations to a family event. Some people received the emails in their inboxes, yet other people didn't see them because the systems at Google (and other companies, notably Hotmail) put them in a spam folder. The rate at which this appeared to happen was definitely higher than the 0.05% quoted in the Google article above. Maybe the Google spam filters noticed that I hadn't sent email to some members of the extended family for a long time and this triggered the spam algorithm? Yet it was precisely because we were visiting Australia that email needed to work reliably with that type of contact, as we don't fly out there every year.

A little bit earlier in the year, I was corresponding with a few students who were applying for Google Summer of Code. Some of them observed the same thing: they sent me an email and didn't receive my response until they looked in their spam folder a few days later. Last year, a GSoC mentor I know lost track of a student for over a week because Google silently discarded chat messages, so it appears Google has not just shot themselves in the foot, they managed to shoot their foot twice.

What is remarkable is that in both cases, the email problems and the XMPP problems, Google doesn't send any error back to the sender so that they know their message didn't get through. Instead, it is silently discarded or left in a spam folder. This is the most corrosive form of communication problem as more time can pass before anybody realizes that something went wrong. After it happens a few times, people lose a lot of confidence in the technology itself and try other means of communication which may be more expensive, more synchronous and time intensive or less private.

When I discussed these issues with friends, some people replied by telling me I should send them things through Facebook or WhatsApp, but each of those services has a higher privacy cost and there are also many other people who don't use either of those services. This tends to fragment communications even more as people who use Facebook end up communicating with other people who use Facebook and excluding all the people who don't have time for Facebook. On top of that, it creates more tedious effort going to three or four different places to check for messages.

Despite all of this, the suggestion that Google's only response is to build a service to "help qualified high-volume senders" get their messages through leaves me feeling that things will get worse before they start to get better. There is no mention in the Google announcement about what they will offer to help the average person eliminate these problems, other than to stop using Gmail or spend unpaid time meticulously training the Google spam filter and hoping everybody else does the same thing.

Some more observations on the issue

Many spam filtering programs used in corporate networks, such as SpamAssassin, add headers to each email to suggest why it was classified as spam. Google's systems don't appear to give any such feedback to their users or message senders though, just a very basic set of recommendations for running a mail server.
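For example, SpamAssassin typically adds headers such as X-Spam-Status, which even a short script can interpret. The scores and rule names below are invented example values:

```python
import re

# A typical header block added by SpamAssassin (example values):
headers = """\
X-Spam-Flag: YES
X-Spam-Status: Yes, score=7.2 required=5.0 tests=BAYES_99,HTML_MESSAGE,
\tRCVD_IN_SBL autolearn=no
X-Spam-Level: *******
"""

# Pull out the verdict, score and threshold so a user (or admin)
# can see *why* the message was classified as spam.
status = re.search(r"X-Spam-Status: (\w+), score=([\d.]+) required=([\d.]+)", headers)
verdict = status.group(1)
score = float(status.group(2))
required = float(status.group(3))
print(verdict, score, required)  # Yes 7.2 5.0
```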

Many chat protocols work with an explicit opt-in. Before you can exchange messages with somebody, you must add each other to your buddy lists. Once you do this, virtually all messages get through without filtering. Could this concept be adapted to email, maybe giving users a summary of messages from people they don't have in their contact list and asking them to explicitly accept or reject each contact?
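That opt-in idea can be sketched as a toy filter (the addresses and function names here are invented for illustration; no real provider works exactly like this):

```python
contacts = {"alice@example.com", "bob@example.com"}

def deliver(sender, message, inbox, pending):
    """Toy opt-in filter: mail from known contacts goes straight to the
    inbox; mail from unknown senders is held until the user explicitly
    accepts or rejects the contact."""
    if sender in contacts:
        inbox.append(message)
    else:
        pending.setdefault(sender, []).append(message)

inbox, pending = [], {}
deliver("alice@example.com", "hi", inbox, pending)
deliver("eve@example.net", "buy now", inbox, pending)
print(len(inbox), list(pending))  # 1 ['eve@example.net']
```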

If a message spends more than a week in the spam folder and Google detects that the user isn't ever looking in the spam folder, should Google send a bounce message back to the sender to indicate that Google refused to deliver it to the inbox?

I've personally heard that misclassification occurs with mailing list posts as well as private messages.

Planet DebianDaniel Pocock: Recording live events like a pro (part 1: audio)

Whether it is a technical talk at a conference, a political rally or a budget-conscious wedding, many people now have most of the technology they need to record it and post-process the recording themselves.

For most events, audio is an essential part of the recording. There are exceptions: if you take many short clips from a wedding and mix them together, you could leave out the audio and just dub the couple's favourite song over it all. For a video of a conference presentation, though, the speaker's voice is essential.

These days, it is relatively easy to get extremely high quality audio using a lapel microphone attached to a smartphone. Let's have a closer look at the details.

Using a lavalier / lapel microphone

Full wireless microphone kits with microphone, transmitter and receiver are usually $US500 or more.

The lavalier / lapel microphone by itself, however, is relatively cheap, under $US100.

The lapel microphone is usually an omnidirectional microphone that will pick up the voices of everybody within a couple of meters of the person wearing it. It is useful for a speaker at an event, some types of interviews where the participants are at a table together and it may be suitable for a wedding, although you may want to remember to remove it from clothing during the photos.

There are two key features you need when using such a microphone with a smartphone:

  • TRRS connector (this is the type of socket most phones and many laptops have today)
  • Microphone impedance should be at least 1 kΩ (one kilohm) or the phone may not recognize when it is connected

Many leading microphone vendors have released lapel mics with these two features aimed specifically at smartphone users. I have personally been testing the Rode smartLav+.

Choice of phone

There are almost 10,000 varieties of smartphone just running Android, as well as iPhones, Blackberries and others. It is not practical for most people to test them all and compare audio recording quality.

It is probably best to test the phone you have and ask some friends if you can make test recordings with their phones too for comparison. You may not hear any difference but if one of the phones has a poor recording quality you will hopefully notice that and exclude it from further consideration.

A particularly important issue is being able to disable AGC in the phone. Android has a standard API for disabling AGC but not all phones or Android variations respect this instruction.

I have personally had positive experiences recording audio with a Samsung Galaxy Note III.

Choice of recording app

Most Android distributions have at least one pre-installed sound recording app. Look more closely and you will find not all apps are the same. For example, some of the apps have aggressive compression settings that compromise recording quality. Others don't work when you turn off the screen of your phone and put it in your pocket. I've even tried a few that were crashing intermittently.

The app I found most successful so far has been Diktofon, which is available on both F-Droid and Google Play. Diktofon has been designed not just for recording, but it also has some specific features for transcribing audio (currently only supporting Estonian) and organizing and indexing the text. I haven't used those features myself but they don't appear to cause any inconvenience for people who simply want to use it as a stable recording app.

As the app is completely free software, you can modify the source code if necessary. I recently contributed patches enabling 48kHz recording and disabling AGC. At the moment, the version with these fixes has just been released and appears in F-Droid but not yet uploaded to Google Play. The fixes are in version 0.9.83 and you need to go into the settings to make sure AGC is disabled and set the 48kHz sample rate.

Whatever app you choose, the following settings are recommended:

  • 16 bit or greater sample size
  • 48kHz sample rate
  • Disable AGC
  • WAV file format

Whatever app you choose, test it thoroughly with your phone and microphone. Make sure it works even when you turn off the screen and put it in your pocket while wearing the lapel mic for an hour. Observe the battery usage.


Now lets say you are recording a wedding and the groom has that smartphone in his pocket and the mic on his collar somewhere. What is the probability that some telemarketer calls just as the couple are exchanging vows? What is the impact on the recording?

Maybe some apps will automatically put the phone in silent mode when recording. More likely, you need to remember this yourself. These are things that are well worth testing though.

Also keep in mind the need to have sufficient storage space and to check whether the app you use is writing to your SD card or internal memory. The battery is another consideration.
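Back-of-the-envelope arithmetic for the recommended settings shows why storage is worth checking before a long event:

```python
# Rough storage estimate for uncompressed WAV at the recommended settings
# (48 kHz sample rate, 16-bit samples, mono):
sample_rate = 48_000          # samples per second
bytes_per_sample = 2          # 16 bits
channels = 1                  # a single lapel mic is mono
seconds = 60 * 60             # one hour

bytes_per_hour = sample_rate * bytes_per_sample * channels * seconds
print(f"{bytes_per_hour / 1e6:.1f} MB per hour")  # 345.6 MB per hour
```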

In a large event where smartphones are being used instead of wireless microphones, possibly for many talks in parallel, install a monitoring app like Ganglia on the phones to detect and alert if any phone has weak wifi signal, low battery or a lack of memory.

Live broadcasts and streaming

Some time ago I tested RTP multicasting from Lumicall on Android. This type of app would enable a complete wireless microphone setup with live streaming to the internet at a fraction of the cost of a traditional wireless microphone kit. This type of live broadcast could also be done with WebRTC on the Firefox app.


If you research the topic thoroughly and spend some time practicing and testing your equipment, you can make great audio recordings with a smartphone and an inexpensive lapel microphone.

In subsequent blogs, I'll look at tips for recording video and doing post-production with free software.


Planet DebianSven Hoexter: moto g falcon CM 12.1 nightly - eating the battery alive

At least the nightly builds from 2015-07-21 to 2015-07-24 eat the battery alive. Until that is fixed, one can downgrade to an earlier build. The downgrade fixed the issue for me.

Update: I'm now running fine with the build from 2015-07-26.

Geek FeminismGuest Post: Men, if Django Girls makes you uncomfortable, maybe that’s a good thing

This is a guest post by Brianna Laugher, a software developer who appreciates significant whitespace. She tweets fleetingly at @pfctdayelise. It is cross-posted at her Tumblr.

Monday was the first day of Europython, and the first keynote was by Ola Sendecka & Ola Sitarska, the founders of Django Girls. They gave a wonderful talk leading us through their journey in creating the Django Girls tutorial, its viral-like spread in introducing over 1600 women worldwide to Python programming, leading to a Django Girls Foundation with a paid employee, and their plans to expand the tutorial to a book, Yay Python!. This was all illustrated with an incredibly charming squirrel-centred parable, hand-drawn by Sendecka. The two Olas are clearly a formidable team.

And yet. I had no less than three conversations with men later that day who told me they thought it was a great idea to encourage more women in Python, but…wasn’t it encouraging stereotypes? Was it good that Django Girls was so, well, girly?

There may be a well-meaning concern about avoiding stereotypes, but I wonder if there also wasn’t some underlying discomfort, about seeing something encouraging people in their field that didn’t speak to them. Could programming really look like this? Maybe it felt a bit like being a squirrel surrounded by badgers, in fact.

colored illustration of one squirrel, alone, among three badgers who are conversing with each other

one squirrel among three badgers, by Ola Sendecka, from slide 12 of
It’s Dangerous To Go Alone. Take This: The Power of Community
slides from EuroPython 2015 keynote

So firstly. Certainly pink can be a lazy shorthand for marketing to women. But anyone who watches the Olas’ keynote can be in no doubt that they have poured endless effort into their work. Their enthusiasm and attitude infuses every aspect of the tutorials. There’s no way it could be equated with a cynical marketing ploy.

Certainly pink things, sparkles and curly fonts have a reputation as being associated with girls. Here’s a question to blow your mind: is there anything bad about them, besides the fact that they are associated with girls?

Compulsory femininity, where girls and women are expected to act and look a certain way, is bad, yes. But femininity itself is not inherently weak, or silly, or frivolous, or bad.

Monospace white-on-black command-line aesthetic is a stylistic choice. It’s one that is relatively unmarked in our community. Glittery pastels is a different aesthetic. They are both perfectly valid ways to invite someone to be a programmer. And they will appeal to different audiences.

Julia Serano writes:

Most reasonable people these days would agree that demeaning or dismissing someone solely because she is female is socially unacceptable. However, demeaning or dismissing people for expressing feminine qualities is often condoned and even encouraged. Indeed, much of the sexism faced by women today targets their femininity (or assumed femininity) rather than their femaleness.

Demeaning feminine qualities is the flip side of androcentrism. Androcentrism is a society-wide pattern that celebrates masculine or male-associated traits, whatever the gender of the person with these traits. It’s part of the reason why women who succeed in male dominated fields are lauded, why those fields themselves are often overpaid. It’s how we find ourselves being the Cool Girl, who is Not Like Other Girls, an honorary guy.

It’s not a coincidence that people in our community rarely attend with a feminine presentation, for example, wearing dresses. Fitting in – looking like we belong – currently requires pants and a t-shirt. Wearing a dress is a lightning rod for double-takes, stares, condescension, being doubted, not being taken seriously.

To be explicit, this doesn’t mean that all women currently in tech are longing to femme it up. Many women are perfectly comfortable in a t-shirt and jeans. But implicitly expecting women to conform to that uniform is just as much a problem as expecting feminine attire. The problem is the lack of freedom to present and participate as our authentic selves.

Read these personal accounts and believe that this is how feminine women in tech get treated. They’re both hugely insightful.

(Then maybe read Julia Serano’s piece again and think about the connections to these two stories – seriously, these three pages are dense with concepts to absorb.)

photo of an instant camera, lip gloss, a zine marked 'Secret Messages' featuring two cats conversing, nail polish, and an object shaped like a strawberry ice cream cone, on a white shag carpet

Secret Messages zine by Sailor Mercury, surrounded by other symbols of femininity

Like Ola Sendecka, Sailor Mercury is a talented illustrator, as can be seen in her article. She ran a Kickstarter campaign to create her Bubblesort Zines (which you can now buy!). The overwhelming success of her Kickstarter (it reached its goal in 4 hours and eventually raised over US$60,000) speaks to an excitement and hunger for this style of work.

Inviting women into tech isn’t worth much if they have to leave their personality at the door to be accepted. Being supportive of diversity doesn’t mean much if you expect to look around and see things look basically the same. The existence of Django Girls does not compel all Pythonista women to femininity, but it does offer and even celebrate it as an option. If it’s not for you, so what? Take your discomfort as a starting point to figure out what you can do to make your community more welcoming for feminine people. Embrace femininity: Take a feminine person seriously today.

PS. If you’re still stuck back at “isn’t something only for girls (REVERSE) SEXIST?” – Read the FAQ.

Krebs on SecuritySpike in ATM Skimming in Mexico?

Several sources in the financial industry say they are seeing a spike in fraud on customer cards used at ATMs in Mexico. The reason behind that apparent increase hopefully will be fodder for another story. In this post, we’ll take a closer look at a pair of ATM skimming devices that were found this month attached to a cash machine in Puerto Vallarta — a popular tourist destination on Mexico’s Pacific coast.

On Saturday, July 18, 2015, municipal police in Puerto Vallarta arrested a man who had just replaced the battery in a pair of skimming devices he or an associate had installed at an ATM in a busy spot of the town. This skimming kit targeted certain models of cash machines made by Korean ATM manufacturer Hyosung, and included a card skimming device as well as a hidden camera to record the victim’s ATM card PIN.

Here’s a look at the hidden camera installed over the compromised card reader. Would you have noticed anything amiss here?

The tiny pinhole camera was hidden in a molded plastic fascia designed to fit over top of the area directly above the PIN pad. The only clue that something is wrong here is a gap of about one millimeter between the PIN capture device and the actual ATM. Check out the backside of the false front:

The backside of the false fascia shows the location of the hidden camera.

The left side of the false fascia (as seen from the front, installed) contains the battery units that power the video camera:

Swapping the batteries out got this skimmer scammer busted. No wonder they included so many!

The device used to record data from the magnetic stripe as the customer inserts his ATM card into the machine is nothing special, but it does blend in pretty well as we can see here:

The card skimming device, as attached to a compromised ATM in Puerto Vallarta.

Have a gander at the electronics that power this badboy:

According to a local news clipping about the skimming incident, the fraudster caught red-handed was found in possession of a Carte Vitale card, a health insurance card of the national health care system in France.

The man apprehended by Mexican police.

The French health care card found on the man apprehended by Mexican police.

The man gave his name as Dominique Mardokh, the same name on the insurance card. Also, the picture on the insurance card matched his appearance in real life; here’s a picture of Mardokh in the back of a police car.

According to the news site, the suspect was apprehended by police as he fled the scene in a vehicle with license plates from Quintana Roo, a state nearly 2,500 km away on the Atlantic side of Mexico that is the home of another very popular tourist destination: Cancún.

Ironically, the healthcare card that identified this skimmer scammer is far more secure than the bank cards he was allegedly stealing with the help of the skimming devices. That’s because the healthcare card stores data about its owner on a small computer chip which makes the card difficult for thieves to duplicate.

Virtually all European banks and most non-US financial institutions issue chip-and-PIN cards (also called Europay, Mastercard and Visa or EMV), but unfortunately chip cards have been slow to catch on in the United States. Most US-based cards still store account data in plain text on a magnetic stripe, which can be easily copied by skimming devices and encoded onto new cards.

For reasons of backward compatibility with ATMs that aren’t yet in line with EMV, many EMV-compliant cards issued by European banks also include a plain old magnetic stripe. The weakness here, of course, is that thieves can still steal card data from Europeans using skimmers on European ATMs, but they need not fabricate chip-and-PIN cards to withdraw cash from the stolen accounts: They simply send the card data to co-conspirators in the United States who use it to fabricate new cards and to pull cash out of ATMs here, where the EMV standard is not yet in force.

The skimmers found in Mexico (where most credit cards also are identified by microchip) abuse that same dynamic: Undoubtedly, the thieves in this scheme compromised ATMs at popular tourist destinations because they knew these places were overrun with American tourists.

In October 2015, U.S. merchants that have not yet installed card readers which accept more secure chip-based cards will assume responsibility for the cost of fraud from counterfeit cards. Most experts believe, however, that it may be years after that deadline before most merchants have switched entirely to chip-based card readers (and many U.S. banks are only now thinking about issuing chip-based cards to customers). Unfortunately, that liability shift doesn’t apply to ATMs in the U.S. until October 2017.

Whether or not your card has a chip in it, one way to defeat skimmers that rely on hidden cameras (and that’s most of them) is to simply cover the PIN pad with your hand when entering your PIN: That way, even if the thieves somehow skim your card, there is less chance that they will be able to snag your PIN as well. You’d be amazed at how many people fail to take this basic precaution. Yes, there is still a chance that thieves could use a PIN-pad overlay device to capture your PIN, but in my experience these are far less common than hidden cameras (and quite a bit more costly for thieves who aren’t making their own skimmers).

Are you as fascinated by ATM skimmers as I am? Check out my series on this topic, All About Skimmers.

Update, July 28, 8:54 a.m. ET: ATM maker NCR has just released an advisory also warning about a spike in ATM skimming tied to Mexico. See the alert here (PDF).

Sociological ImagesTrigger Warnings, the Big Picture: Changing Our Culture of Social Control

Recently there’s been heightened attention to calling out microaggressions and giving trigger warnings. I recently speculated that the loudest voices making these demands come from people in categories that have gained in power but are still not dominant, notably women at elite universities.  What they’re saying in part is, “We don’t have to take this shit anymore.” Or as Bradley Campbell and Jason Manning put it in a recently in The Chronicle:

…offenses against historically disadvantaged social groups have become more taboo precisely because different groups are now more equal than in the past.

It’s nice to have one’s hunches seconded by scholars who have given the issue much more thought.

Campbell and Manning make the context even broader. The new “plague of hypersensitivity” (as sociologist Todd Gitlin called it) isn’t just about a shift in power, but a wider cultural transformation from a “culture of dignity” to a “culture of victimhood.” More specifically, the aspect of culture they are talking about is social control. How do you get other people to stop doing things you don’t want them to do – or not do them in the first place?

In a “culture of honor,” you take direct action against the offender.  Where you stand in society – the rights and privileges that others accord you – is all about personal reputation (at least for men). “One must respond aggressively to insults, aggressions, and challenges or lose honor.” The culture of honor arises where the state is weak or is concerned with justice only for some (the elite). So the person whose reputation and honor are at stake must rely on his own devices (devices like duelling pistols).  Or in his pursuit of personal justice, he may enlist the aid of kin or a personalized state-substitute like Don Corleone.

In more evolved societies with a more extensive state, honor gives way to “dignity.”

The prevailing culture in the modern West is one whose moral code is nearly the exact opposite of that of an honor culture. Rather than honor, a status based primarily on public opinion, people are said to have dignity, a kind of inherent worth that cannot be alienated by others. Dignity exists independently of what others think, so a culture of dignity is one in which public reputation is less important. Insults might provoke offense, but they no longer have the same importance as a way of establishing or destroying a reputation for bravery. It is even commendable to have “thick skin” that allows one to shrug off slights and even serious insults, and in a dignity-based society parents might teach children some version of “sticks and stones may break my bones, but words will never hurt me” – an idea that would be alien in a culture of honor.

The new “culture of victimhood” has a different goal – cultural change. Culture is, after all, a set of ideas that is shared, usually so widely shared as to be taken for granted. The microaggression debate is about insult, and one of the crucial cultural ideas at stake is how the insulted person should react. In the culture of honor, he must seek personal retribution. In doing so, of course, he is admitting that the insult did in fact sting. The culture of dignity also focuses on the character of offended people, but here they must pretend that the insult had no personal impact. They must maintain a Jackie-Robinson-like stoicism even in the face of gross insults and hope that others will rise to their defense. For smaller insults, say Campbell and Manning, the dignity culture “would likely counsel either confronting the offender directly to discuss the issue,” which still keeps things at a personal level, “or better yet, ignoring the remarks altogether.”

In the culture of victimhood, the victim’s goal is to make the personal political.  “It’s not just about me…”  Victims and their supporters are moral entrepreneurs. They want to change the norms so that insults and injustices once deemed minor are now seen as deviant. They want to define deviance up.  That, for example, is the primary point of efforts like the Microaggressions Project, which describes microaggressions in exactly these terms, saying that microaggression “reminds us of the ways in which we and people like us continue to be excluded and oppressed” (my emphasis).


So, what we are seeing may be a conflict between two cultures of social control: dignity and victimhood. It’s not clear how it will develop. I would expect that those who enjoy the benefits of the status quo and none of its drawbacks will be most likely to resist the change demanded by a culture of victimhood. It may depend on whether shifts in the distribution of social power continue to give previously more marginalized groups a louder and louder voice.

Cross-posted at Montclair SocioBlog.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.


Planet Linux AustraliaGlen Turner: Configuring Zotero PDF full text indexing in Debian Jessie


Zotero is an excellent reference and citation manager. It runs within Firefox, making it very easy to record sources that you encounter on the web (and in this age of publication databases almost everything is on the web). There are plugins for LibreOffice and for Word which can then format those citations to meet your paper's requirements. Zotero's Firefox application can also output for other systems, such as Wikipedia and LaTeX. You can keep your references in the Zotero cloud, which is a huge help if you use different computers at home and work or school.

The competing product is EndNote. Frankly, EndNote belongs to a previous era of research methods. If you use Windows, Word and Internet Explorer and have a spare $100 then you might wish to consider it. For me there's a host of showstoppers, such as not running on Linux and not being able to bookmark a reference from my phone when it is mentioned in a seminar.

Anyway, this article isn't a Zotero versus EndNote smackdown, there's plenty of those on the web. This article is to show a how to configure Zotero's full text indexing for the RaspberryPi and other Debian machines.

Installing Zotero

There are two parts to install: a plugin for Firefox, and extensions for Word or LibreOffice. (OpenOffice works too, but to be frank again, LibreOffice is the mainstream project of that application these days.)

Zotero keeps its database as part of your Firefox profile. Now if you're about to embark on a multi-year research project you may one day have trouble with Firefox and someone will suggest clearing your Firefox profile, and Firefox once again works fine. But then you wonder, "where are my years of carefully-collected references?" And then you cry before carefully trying to re-sync.

So the first task in serious use of Zotero on Linux is to move that database out of Firefox. After installing Zotero in Firefox, press the "Z" button, press the gear icon, and select "Preferences" from the dropdown menu. On the resulting panel select "Advanced" and "Files and folders". Press the radio button "Data directory location -- custom" and enter a directory name.

I'd suggest using a directory named "/home/vk5tu/.zotero" or "/home/vk5tu/zotero" (amended for your own userid, of course). The standalone client uses a directory named "/home/vk5tu/.zotero" but there are advantages to not keeping years of precious data in some hidden directory.

After making the change quit from Firefox. Now move the directory in the Firefox profile to wherever you told Zotero to look:

$ cd
$ mv .mozilla/firefox/*.default/zotero .zotero

Full text indexing of PDF files

Zotero can create a full-text index of PDF files. You want that. The directions for configuring the tools are simple.

Too simple: downloading a statically-linked binary from the internet and then running it over PDFs from a huge range of sources is not the best of ideas.

The page does have instructions for manual configuration but the page lacks a worked example. Let's do that here.

Manual configuration of PDF full indexing utilities on Debian

Install the pdftotext and pdfinfo programs:

    $ sudo apt-get install poppler-utils

Find the kernel and architecture:

$ uname --kernel-name --machine
Linux armv7l

In the Zotero data directory create a symbolic link to the installed programs. The printed kernel-name and machine are part of the link's name:

$ cd ~/.zotero
$ ln -s $(which pdftotext) pdftotext-$(uname -s)-$(uname -m)
$ ln -s $(which pdfinfo) pdfinfo-$(uname -s)-$(uname -m)

Install a small helper script to alter pdftotext parameters:

$ cd ~/.zotero
$ wget -O
$ chmod a+x

Create some files named *.version containing the version numbers of the utilities. The version number appears in the third field of the first line on stderr:

$ cd ~/.zotero
$ pdftotext -v 2>&1 | head -1 | cut -d ' ' -f3 > pdftotext-$(uname -s)-$(uname -m).version
$ pdfinfo -v 2>&1 | head -1 | cut -d ' ' -f3 > pdfinfo-$(uname -s)-$(uname -m).version
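To see exactly what that pipeline extracts, here is the same head/cut chain run over a sample of the first line that pdftotext -v prints on stderr (the version number in the sample is made up for the demonstration):

```shell
# Illustrative sample of the first stderr line from `pdftotext -v`;
# your installed version will differ.
sample_line='pdftotext version 0.26.5'

# Same pipeline as above: take the first line, then the third
# space-separated field, which is the bare version number Zotero
# expects to find in the *.version file.
echo "$sample_line" | head -1 | cut -d ' ' -f3
```

Running this prints just `0.26.5`, which is the format the *.version files must contain.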

Start Firefox and open Zotero's gear icon, "Preferences", "Search". It should report something like:

PDF indexing
  pdftotext version 0.26.5 is installed
  pdfinfo version 0.26.5 is installed

Do not press "check for update". The usual maintenance of the operating system will keep those utilities up to date.

CryptogramPreventing Book Theft in the Middle Ages

Planet DebianCyril Brulebois: D-I Stretch Alpha 1

Time for a quick recap of the beginning of the Stretch release cycle as far as the Debian Installer is concerned:

  • It took nearly 3 months after the Jessie release, but linux finally managed to get into shape and fit for migration to testing, which unblocked the way for a debian-installer upload.
  • Trying to avoid last-minute fun, I’ve updated the britney freeze hints file to put into place a block-udeb on all packages.
  • Unfortunately, a recent change in systemd (implementation of Proposal v2: enable stateless persistant network interface names) found its way into testing a bit before that, so I’ve had my share of last-minute fun anyway! Indeed, that resulted in installer system and installed system having different views on interface naming. Thankfully I was approached by Michael Biebl right before my final tests (and debian-installer upload) so there was little head scratching involved. Commits were already in the master branch so a little plan was proposed in Fixing udev-udeb vs. net.ifnames for Stretch Alpha 1. This was implemented in two shots, given the extra round trip due to having dropped a binary package in the meanwhile and due to dak’s complaining about it.
  • After the usual round of build (see logs), and dak copy-installer to get installer files from unstable to testing, and urgent to get the source into testing as well (see request), I’ve asked Steve McIntyre to start building images through debian-cd. As expected, some troubles were run into, but they were swiftly fixed!
  • While Didier Raboud and Steve were performing some tests with the built images, I’ve prepared the announcement for dda@, and updated the usual pages in the debian-installer corner of the website: news entry, errata, and homepage.
  • Once the website was rebuilt to include these changes, I’ve sent the announce, and lifted all block-udeb.

(On a related note, I’ve started tweeting rather regularly about my actions, wins & fails, using the #DebianInstaller hashtag. I might try and aggregate my tweets as @CyrilBrulebois into more regular blog posts, time permitting.)

Executive summary: D-I Stretch Alpha 1 is released, time to stretch a bit!

Stretching cat

(Credit: rferran on openclipart)

Worse Than FailureFinding Closure

Jim’s mail client dinged and announced a new message with the subject, “Assigned to you: TICKET #8271”. “Not this again,” he muttered.

Ticket #8271 was ancient. For over a year now, Initech’s employees had tossed the ticket around like kids playing hot potato. Due to general incompetence and rigid management policies, it never got fixed.

Jim was the GUI developer for their desktop application, InitechWORKS. The app used a web browser widget to display content from the company’s web page within the application, mostly for marketing fluff. The bug itself was a tracking pixel which occasionally failed to load, and when it did the browser widget replaced the pixel with a large, unsightly error icon. Both the web page and the tracking pixel came from Marketing’s web server.

Time after time, ticket #8271 landed in some luckless developer’s hands. They each tacked on a note, saying there was nothing wrong with InitechWORKS, and forwarded the ticket to marketing. And time after time, Marketing punted the ticket with a note saying, “Can’t reproduce, must be an issue with InitechWORKS, re-assigning.”

And so, once again, Jim decided to talk to the project manager, a middle-aged man named Greg with the memory retention of a dying goldfish and a management style with all the flexibility of a beryllium rod three feet in diameter.

“Greg, I was looking at ticket #8271. I know Marketing won’t fix this bug, but I have a quick fix to suggest-”

Greg had no idea what ticket Jim was talking about, but he didn’t need to. His flexible management policy came into play. “Per company policy and the org chart, the InitechWORKS team cannot talk to Marketing.”

“But they need to fix something on their end!” Jim nearly shouted, hoping to get the full sentence out before Greg interrupted him.

With a sigh, Greg pulled up the ticket. His mouth moved as he read it to himself. “See here, Marketing says the issue is with InitechWORKS, not Marketing.”

“But they’re wrong. It’s definitely not-”

“Marketing is never wrong,” Greg said with a cold stare. “Now, go fix this bug.”

Jim walked away, dejected. He knew Greg wouldn’t remember this conversation the next time ticket #8271 came up.

Jim couldn’t assign the ticket to Marketing, but he added his suggested quick-fix along with the note, “Problem is with the web page, not InitechWORKS,” and moved the ticket to Greg.

Later that day, Greg approached Jim at his desk. “Jim, about ticket number…”. He paused to glance down at his notepad. “… number 8271. I passed it over to Marketing, but there’s a problem.” Greg lowered his voice and became indignant. “I had to delete your comments and ‘fixes’. We can’t presume to tell Marketing how to do their job.”

Jim mentally facepalmed. His plan had failed. “But it’s a si-”

“They’re smart guys, and I’m sure they’ll fix it,” Greg interrupted. “You do your job, and let Marketing do theirs, and we won’t have to get HR involved with a formal reprimand.”

Two weeks later, Jim’s mail client dinged. “Assigned to you: TICKET #8271” was on the subject line. He groaned, and started planning how he was going to approach the issue this time. When he went to Greg’s office, the project manager was nowhere to be seen.

“Have you seen Greg?” he asked the PM in the neighboring office.

“Oh, didn’t you hear? He quit this morning. HR refused to discipline one of his employees, so he quit on the spot. Said something about the company refusing to follow their own policies. And now I’m inheriting a lot of his projects, so if you don’t mind…” The PM went back to work, silently ignoring Jim.

Jim glanced back into Greg’s office and noticed that Greg’s PC was unlocked and logged in. On a whim, he sat down at the computer. As a PM, Greg had special privileges, like the ability to disable the automatic computer locking, and access to pretty much any system in the company. That included Marketing’s production web server.

With a little poking around, Jim found the problematic web page and its tracking pixel. He quickly implemented the quick fix he’d suggested earlier, simply styling the image to be zero pixels and located 10,000 pixels off the edge of the screen. That wouldn’t fix the loading issues, but when it misbehaved, the ugly error would stay off-screen and not hurt the page.

He slipped out of Greg’s office. Greg’s neighbor didn’t even notice him as he walked by. When Jim returned to his desk, he could no longer reproduce the issue. After a painfully long eighteen months, he marked ticket #8271 as closed.


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main August 2015 Meeting: Open Machines Building Open Hardware / VLSCI: Supercomputing for Life Sciences

Aug 4 2015 18:30
Aug 4 2015 20:30

200 Victoria St. Carlton VIC 3053


• Jon Oxer, Open Machines Building Open Hardware
• Chris Samuel, VLSCI: Supercomputing for Life Sciences

200 Victoria St. Carlton VIC 3053 (formerly the EPA building)

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the venue and VPAC for hosting.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.


Planet Linux AustraliaMichael Davies: DHCP and NUCs

I've put together a little test network at home for doing some Ironic testing on hardware using NUCs.  So far it's going quite well, although one problem that had me stumped for a while was getting the NUC to behave itself when obtaining an IP address with DHCP.

Each time I booted the network, a different IP address from the pool was being allocated (i.e. the next one in the DHCP address pool).

There's already a documented problem with isc-dhcp-server for devices where the BMC and host share a NIC (including the same MAC address), but this was even worse because on closer examination a different Client UID is being presented as part of the DHCPDISCOVER for the node each time. (Fortunately the NUC's BMC doesn't do this as well).

So I couldn't really find a solution online, but the answer was there all the time in the man page - there's a cute little option "ignore-client-uids true;" that ensures only the MAC address is used for DHCP lease matching, and not the Client UID. Turning this on means that on each deploy the NUC receives the same IP address - and not just for the node, but also for the BMC - it works around the aforementioned bug as well. Woohoo!
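For reference, a minimal sketch of what the relevant part of /etc/dhcp/dhcpd.conf might look like with this option enabled; the subnet, range and router addresses below are placeholders for a test network like this one:

```
# /etc/dhcp/dhcpd.conf (sketch; addresses are placeholders)

# Match leases on the MAC address only, ignoring the varying
# Client UID the NUC presents in each DHCPDISCOVER.
ignore-client-uids true;

subnet 192.168.10.0 netmask 255.255.255.0 {
  range 192.168.10.100 192.168.10.150;
  option routers 192.168.10.1;
}
```

With this in place both the node and the BMC, sharing one MAC, keep getting the same lease across deploys.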

There's still one remaining problem: I can't seem to get a fixed IP address returned in the DHCPOFFER, so I have to configure a dynamic pool instead (which is fine because this is a test network with limited nodes in it). One to resolve another day...

Planet DebianJames McCoy: porterbox-logins

Some time ago, pabs documented his setup for easily connecting to one of Debian's porterboxes based on the desired architecture. Similarly, he submitted a wishlist bug against devscripts specifying some requirements for a script to make this functionality generally accessible to the developer community.

I have yet to follow up on that request mainly due to ENOTIME for developing new scripts outright. I also have my own script I had been using to get information on available Debian machines.

Recently, this came up again on IRC and jwilk decided to actually implement pabs' DNS alias idea. Now, one can use $ to connect to a porterbox of the specified architecture.

Preference is given to domains when there are both and porterboxes, and when multiple porterboxes are available the first listed machine is simply used.

This is all well and good, but if you have SSH's StrictHostKeyChecking enabled, SSH will rightly refuse to connect. However, OpenSSH 6.5 added a feature called hostname canonicalization which can help. The below ssh_config snippet allows one to run ssh $arch-porterbox or ssh $ and connect to one of the porterboxes, verifying the host key against the canonical host name.

Host *-porterbox

Match host *
  CanonicalizeHostname yes
  CanonicalizePermittedCNAMEs **
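For illustration, here is the same snippet filled in with placeholder domains (example.net standing in for the DNS alias domain, example.org for the canonical porterbox hosts); substitute the real domains when using this:

```
# Sketch only: example.net/example.org are placeholders.
Host *-porterbox
  # Expand the short alias (e.g. "amd64-porterbox") to a FQDN.
  Hostname %h.example.net

Match host *.example.net
  CanonicalizeHostname yes
  # Permit CNAMEs from the alias domain to the canonical hosts,
  # so the host key is verified against the canonical name.
  CanonicalizePermittedCNAMEs *.example.net:*.example.org
```

The CanonicalizePermittedCNAMEs value is a source:target pair of domain patterns, which is what lets SSH follow the alias's CNAME while still doing strict host key checking.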

Planet Linux AustraliaDavid Rowe: Self Driving Cars

I’m a believer in self driving car technology, and predict it will have enormous effects, for example:

  1. Our cars currently spend most of the time doing nothing. They could be out making money for us as taxis while we are at work.
  2. How much infrastructure and frustration (home garage, driveways, car parks, finding a park) do we devote to cars that are standing still? We could park them a few km away in a “car hive” and arrange to have them turn up only when we need them.
  3. I can make interstate trips laying down sleeping or working.
  4. Electric cars can recharge themselves.
  5. It throws personal car ownership into question. I can just summon a car on my smart phone then send the thing away when I’m finished. No need for parking, central maintenance. If they are electric, and driverless, then very low running costs.
  6. It will decimate the major cause of accidental deaths, saving untold misery. Imagine if your car knew the GPS coordinates of every car within 1000m, even if outside of visual range, like around a corner. No more t-boning, or even car doors opening in the path of my bike.
  7. Speeding and traffic fines go away, which will present a revenue problem for governments like mine that depend on the statistical likelihood of people accidentally speeding.
  8. My red wine consumption can set impressive new records as the car can drive me home and pour me into bed.

I think the time will come when computers do a lot better than we can at driving. The record of these cars in the US is impressive. The record for humans in car accidents is dismal (a leading cause of death).

We already have driverless planes (autopilot, anti-collision radar, autoland), that do a pretty good job with up to 500 lives at a time.

I can see a time (say 20 years) when there will be penalties (like a large insurance excess) if a human is at the wheel during an accident. Meat bags like me really shouldn’t be in control of 1000kg of steel hurtling along at 60 km/hr. Incidentally that’s about 139 kJ of kinetic energy. A 9mm bullet exits a pistol with 0.519 kJ of energy. No wonder cars hurt people.
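As a quick sanity check of those figures (at exactly 60 km/h the car comes out just under 139 kJ; the bullet numbers assume a typical 8 g 9mm round at roughly 360 m/s, which are my assumptions, not the author's):

```python
# Kinetic energy E = 1/2 * m * v^2, in joules.
def kinetic_energy_j(mass_kg, speed_m_s):
    return 0.5 * mass_kg * speed_m_s ** 2

# 1000 kg car at 60 km/h (convert km/h to m/s by dividing by 3.6).
car_kj = kinetic_energy_j(1000, 60 / 3.6) / 1000

# ~8 g 9mm bullet at ~360 m/s muzzle velocity (assumed figures).
bullet_kj = kinetic_energy_j(0.008, 360) / 1000

print(round(car_kj, 1))     # 138.9
print(round(bullet_kj, 3))  # 0.518
```

So the car carries roughly 270 times the bullet's energy, which is the point being made.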

However many people are concerned about “blue screens of death”. I recently had an email exchange on a mailing list, here are some key points for and against:

  1. The cars might be hacked. My response is that computers and micro-controllers have been in cars for 30 years. Hacking of safety-critical systems (ABS or EFI or cruise control) is unheard of. However unlike a 1980s EFI system, self driving cars will have operating systems and connectivity, so this does need to be addressed. The technology will (initially at least) be closed source, increasing the security risk. Here is a recent example of a modern car being hacked.
  2. Planes are not really “driverless”, they have controls and pilots present. My response is that long distance commercial aircraft are under autonomous control for the majority of their flying hours, even if manual controls are present. Given the large number of people on board an aircraft it is of course prudent to have manual control/pilot back up, even if rarely used.
  3. The drivers of planes are sometimes a weak link. As we saw last year and on Sep 11 2001, there are issues when a malicious pilot gains control. Human error is also behind a large number of airplane incidents, and most car accidents. It was noted that software has been behind some airplane accidents too – a fair point.
  4. Compared to aircraft the scale is much different for cars (billions rather than 1000s). The passenger payload is also very different (1.5 people in a car on average?), and the safety record of cars much much worse – it’s crying out for improvement via automation. So I think automation of cars will eventually be a public safety issue (like vaccinations) and controls will disappear.
  5. Insurance companies may refuse a claim if the car is driverless. My response is that insurance companies will look at the actuarial data as that’s how they make money. So far all of the accidents involving Google driverless cars have been caused by meat bags, not silicon.

I have put my money where my mouth is and invested in a modest amount of Google shares based on my belief in this technology. This is also an ethical buy for me. I’d rather have some involvement in an exciting future that saves lives and makes the world a better place than invest in banks and mining companies, which don’t.

Planet DebianOrestis Ioannou: GSoC Debsources midterm news

Midterm evaluations have already passed and I guess we have also reached a milestone since last week I finished working on the copyright tracker and started the patch tracker.

Here's the list of my reports on soc-coordination for those interested

Copyright tracker status

Copyright tracker

Most of the functionalities of the copyright tracker are already merged: navigating in the tracker, rendering the machine-readable licenses, and API functionalities such as obtaining the license of a file by searching by checksum or by a package / version / path, obtaining the licenses of many files at once, and their respective views.

Some more functionalities are still under review such as filling the database with copyright related information at update time, using the database to answer the aforementioned requests, license statistics in the spirit of the Debsources ones and exporting a license in SPDX format.

It's going to be pretty exciting when those pull requests are merged, since the copyright tracker will then be full and complete! Meanwhile I started working on the patch tracker.

Patch tracker

My second task is the implementation of a patch tracker. This feature existed in Debian but unfortunately died recently. I have already started revising the functionalities of the old patch tracker, identifying target users, and creating user stories and use cases. Those should help me list the desired functionalities of the tracker, imagine the structure of the blueprint and start writing code to that end.

It is going to be a pretty exciting one-month run, as my knowledge of the Debian packaging system is not that good just yet. I hope that by Debconf some of the functionalities of the patch tracker will be ready.


My request for sponsorship for Debconf was accepted and I am pretty excited since this is going to be my first Debconf attendance. I am looking forward to meeting my mentors (Zack and Matthieu), the fellow student working on Debsources (Clemux), as well as a lot of other people I happened to chat with occasionally during this summer. I'll arrive on Friday the 14th and leave on Sunday the 23rd.

Debconf 2015


Geek FeminismCode release: Spam All the Links

This is a guest post by former Geek Feminism blogger Mary Gardiner. It originally appeared on

The Geek Feminism blog’s Linkspam tradition started back in August 2009, in the very early days of the blog and by September it had occurred to us to take submissions through bookmarking services. From shortly after that point there were a sequence of scripts that pulled links out of RSS feeds. Last year, I began cleaning up my script and turning it into the one link-hoovering script to rule them all. It sucks links out of bookmarking sites, Twitter and WordPress sites and bundles them all up into an email that is sent to the linkspamming team there for curation, pre-formatted in HTML and with title and suggestion descriptions for each link. It even attempts to filter out links already posted in previous linkspams.

The Geek Feminism linkspammers aren’t the only link compilers in town, and it’s possible we’re not the only group who would find my script useful. I’ve therefore finished generalising it, and I’ve released it as Spam All the Links on Gitlab. It’s a Python 3 script that should run on most standard Python environments.

Spam All the Links

Spam All the Links is a command line script that fetches URL suggestions from
several sources and assembles them into one email. That email can in turn be
pasted into a blog entry or otherwise used to share the list of links.

Use case

Spam All the Links was written to assist in producing the Geek Feminism linkspam posts. It was developed to check WordPress comments, bookmarking websites such as Pinboard, and Twitter, for links tagged “geekfeminism”, assemble them into one email, and email them to an editor who could use the email as the basis for a blog post.

The script has been generalised to allow searches of RSS/Atom feeds, Twitter, and WordPress blog comments as specified by a configuration file.

Email output

The email output of the script has three components:

  1. a plain text email with the list of links
  2. an HTML email with the list of links
  3. an attachment with the HTML-formatted links but no surrounding text, so they can easily be copied and pasted

All three parts of the email can be templated with Jinja2.

Sources of links

Spam All the Links currently can be configured to check multiple sources of links, in these forms:

  1. RSS/Atom feeds, such as those produced by the bookmarking sites Pinboard or Diigo, where the link, title and description of the link can be derived from the equivalent fields in the RSS/Atom. (bookmarkfeed in the configuration file)
  2. RSS/Atom feeds where links can be found in the ‘body’ of a post (postfeed in the configuration file)
  3. Twitter searches (twitter in the configuration file)
  4. comments on WordPress blog entries (wpcommentsfeed in the configuration file)

More info, and the code, is available at the Spam All the Links repository at Gitlab. It is available under the MIT free software licence.
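As an illustration of the bookmarkfeed case, here is a minimal stdlib-only sketch: pull title/link/description out of an RSS channel and render them into a plain-text email body. The real script fetches live feeds and uses Jinja2 templates; the feed content, URL and template below are made up for the example.

```python
import xml.etree.ElementTree as ET
from string import Template

# A tiny stand-in for a bookmarking site's RSS feed (made-up data).
RSS = """<rss version="2.0"><channel>
  <item>
    <title>TODO Group And Open Source Codes of Conduct</title>
    <link>https://example.org/todo-coc</link>
    <description>Model View Culture on vendor codes of conduct.</description>
  </item>
</channel></rss>"""

def extract_links(rss_text):
    """Return (title, url, description) tuples from an RSS channel."""
    root = ET.fromstring(rss_text)
    return [(item.findtext("title"),
             item.findtext("link"),
             item.findtext("description"))
            for item in root.iter("item")]

def render_email(links):
    """Render the link list with a simple plain-text template
    (the real script would use Jinja2 here)."""
    line = Template("* $title <$url>: $desc")
    body = "\n".join(line.substitute(title=t, url=u, desc=d)
                     for t, u, d in links)
    return "Suggested links:\n" + body

print(render_email(extract_links(RSS)))
```

The same shape extends naturally to the other sources: each backend just has to yield (title, url, description) tuples for the templating step.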

Planet DebianDmitry Shachnev: GNOME Flashback 3.16 available in archive, needs your help

Some time ago GNOME Flashback 3.16/3.17 packages landed in Debian testing and Ubuntu wily.

GNOME Flashback is the project which continues the development of components of classic GNOME session, including the GNOME Panel, the Metacity window manager, and so on.


The full changelog can be found in the official announcement mail by Alberts and in changelog.gz files in each package, but I want to list the most important improvements in this release (compared to 3.14):

  • GNOME Panel and GNOME Applets (uploaded version 3.16.1):

    • The ability to use transparent panels has been restored.

    • The netspeed applet has been ported to the new API and integrated into gnome-applets source tree.

    • Many deprecation warnings have been fixed, and the code has been modernized.

    • This required a transition and a port of many third-party applets. Currently in Debian these third-party applets are compatible with gnome-panel 3.16: command-runner-applet, gnubiff, sensors-applet, uim, workrave.

  • GNOME Flashback helper application (uploaded version 3.17.2):

    • Added support for the on-screen display (OSD) when switching brightness, volume, etc.

    • Applications using GNOME AppMenu are now shown correctly.

  • Metacity window manager (uploaded version 3.17.2):

    • Metacity can now draw the window decorations based on the Gtk+ theme (without the need to add a Metacity-specific theme). This follows Mutter behavior, but (unlike Mutter) the ability to use Metacity themes has been preserved.

    • Adwaita and HighContrast themes for Metacity have been removed from gnome-themes-standard, so they are now shipped as part of Metacity (in metacity-common package).

    • Metacity now supports invisible window borders (the default setting is 10px extra space for resize cursor area).

Sounds interesting? Contribute!

If you are interested in helping us, please write to our mailing list:

The current TODO list is:

  1. Notification Daemon needs GTK notification support.
  2. GNOME Flashback needs screenshot, screencast, keyboard layout switching and bluetooth status icon.
  3. Fix/replace deprecated function usage in all modules.
  4. libstatus-notifier — get it in usable state, create a new applet for gnome-panel.

TEDOne mother’s nightly learning ritual with her son

Freschi family

Mom Erin Freschi and her son Sawyer, 8, watch TED Talks together at night and the rest of the family has started joining in too. Scroll down for Erin and Sawyer’s top 10 picks to watch together. Photo: Courtesy of Erin Freschi

Erin Freschi, a California mom who has worked in education for nearly 25 years, has a nightly ritual with her 8-year-old son, Sawyer. Instead of bedtime stories or songs, Freschi and her son wind down by watching TED Talks.

The name of their unique ritual? TED Before Bed.

“It started when Sawyer was in first grade,” said Freschi. “He is pretty advanced for his age, and he was getting bored with the books and things he was getting for homework. We used to Google things that came up during the day. But we got to a point where we were running out of things to Google.”

Then Freschi watched Simon Sinek’s TED Talk in a work meeting, and she knew she wanted to share it with her son.

“Sinek talks a lot about Martin Luther King, Jr., and Sawyer just loves him,” she said. “That night I showed [the talk] to him, and he loved it. Now it’s the last thing we do every night before he goes to sleep.”

The ritual has stimulated Sawyer’s appetite for learning — and strengthened the mother-son bond. “We’ve always been close, but it just gives us this secret world we can share,” said Freschi. “It’s a special time for us to learn and think about new ideas together.”

Her husband joins in now, too.

“This week, we watched Greg Gage’s talk about do-it-yourself neuroscience tools. It was the first time Sawyer had heard the term ‘neuroscience,’ so we talked a little bit about that. And then he said, ‘I want one of those machines. Daddy, could you make one of those?’ It got him excited about the brain and how it works.”

Sometimes, the family watches TED Talks; sometimes they opt for TED-Ed lessons.  Both have opened up new conversations, including one about recycling, inspired by TED-Ed’s “The life cycle of a plastic bottle.”

“We’ve always been recyclers, and we try hard not to be wasteful, but it brought the concept home for him,” said Freschi. “It made more sense than, ‘Mommy and Daddy say we have to recycle.’ He’s always had an inquisitive mind, and you kind of hit a wall sometimes. Talks can spur that excitement again.”

Sawyer, who says that one of his goals in life is to be President of the United States, loves science and history, and says he’s interested in any talk that has to do with civil rights.

“I believe in civil rights,” he says. “I want to be a civil rights leader, actually.”

Erin and Sawyer’s top 10 talks to watch together:

  1. BLACK: My journey to yo-yo mastery
  2. Joshua Klein: A thought experiment on the intelligence of crows
  3. Simon Sinek: How great leaders inspire action
  4. Greg Gage: How to control someone else’s arm with your brain
  5. Anand Varma: A thrilling look at the first 21 days of a bee’s life
  6. David Christian: The history of our world in 18 minutes
  7. Chris Hadfield: What I learned from going blind in space
  8. Anthony Hazard: The Atlantic slave trade: What too few textbooks told you
  9. Emma Bryce: What really happens to the plastic you throw away
  10. Eleanor Nelsen: Why do your knuckles pop?

Planet DebianSven Hoexter: O: courierpassd

In case you're one of the few still depending on courierpassd and would like to see it to be part of stretch, please pick it up. I'm inclined to fill a request for removal before we release stretch in case nobody picks it up.

TEDDo you have an idea worth spreading? Share it on video through OpenTED

You have an idea. A good one – one that will make people think. But giving a TED Talk on a stage in front of an audience? Well, that doesn’t quite feel like the right way to express it.

If giving a traditional TED Talk isn’t your style, you may be excited to hear about The OpenTED Project — a new experimental initiative launching today to uncover ideas in all forms. Through OpenTED we’re erasing the lines around what is and isn’t a “TED Talk” and soliciting ideas that come in any form capturable on video. Through OpenTED, you can show us an idea as a documentary, an invention, an original animation, video poetry, song lyrics, monologues, dialogues, art, choreography  — really, in any form you can imagine to communicate your idea to others.

The OpenTED Project is your personal invitation to share your idea — be it grand and global, or individual and personal — with the world.

The only rules for The OpenTED Project: your idea needs to exist as a video, and it cannot exceed six minutes. And the expression of an idea really is key. Great OpenTED videos will inspire others to think, learn and even act. Whether your idea is a product, a campaign, a line of inquiry or a revision of history — it should come with a light bulb over it.

If you have an idea, submit your video. The best ones — as voted on by members of the OpenTED community — may just become a TED Talk of the day, starting in the fall. Other OpenTED speakers may get an invitation to attend or appear at one of our conferences.

Some submissions might look a bit like a filmed talk, but others will undoubtedly come in packages we never could have imagined.

The deadline for submitting is Thursday, Oct. 15, 2015, at 11am Eastern.

We plan to share your submissions with the world in grand style in the fall, where the global community can view and vote on them.

Read all the fine print, and find instructions for submitting your video to The OpenTED Project »

Google AdsenseDemystifying AdSense policies with John Brown: Understand your traffic (Part 3)

Editor’s note: John Brown, the Head of Publisher Policy Communications, is sharing insights about understanding your traffic and how you can prevent invalid activity.

Last week, I explained why we take invalid activity seriously and how AdSense policies protect users, advertisers and publishers. This week, I’d like to give you some tips to help you keep your account in good standing.

What can you do as a publisher?

Here are some best practices to prevent invalid activity on your site:

  • Monitor your analytics often to spot traffic anomalies. Setting up Analytics alerts can be very useful. For instance, you can set Analytics alerts to see if an unusual amount of traffic comes from a country you wouldn’t expect for your site.
  • Be very careful when purchasing any traffic, and review the traffic provider checklist to help guide your discussions with any traffic provider you’re considering.
  • Double and triple-check your implementation. Make sure your implementation has no programming errors, conforms to AdSense policies, and interacts properly across different browsers and platforms. Having a well-implemented page can protect against unintended consequences, like accidental clicks. 
  • Don’t click on your own ads. Even if you’re interested in an ad or looking for its destination URL, clicking on your own ads is prohibited. Instead, use the Google Publisher Toolbar.

You can find more information about ad traffic quality and best practices on our Ad Traffic Quality Resource Center. I hope these resources help clarify why we care about the quality of the ecosystem and what you can do to comply with our traffic policies. Please share your feedback and do let us know if you have additional questions in the comment section below this post.

Subscribe to AdSense blog posts

Posted by John Brown
Head of Publisher Policy Communications

Krebs on SecurityExperian Hit With Class Action Over ID Theft Service

Big-three credit bureau Experian is the target of a class-action lawsuit just filed in California. The suit alleges that Experian negligently violated consumer protection laws when it failed to detect for nearly 10 months that a customer of its data broker subsidiary was a scammer who ran a criminal service that resold consumer data to identity thieves.

The lawsuit comes just days after a judge in New Hampshire handed down a 13-year jail sentence against Hieu Minh Ngo, a 25-year-old Vietnamese man who ran an ID theft service under various names.

Ngo admitted hacking into or otherwise illegally gaining access to databases belonging to some of the world’s largest data brokers, including Court Ventures — a company that Experian acquired in 2012. He got access to some 200 million consumer records by posing as a private investigator based in the United States, and for nearly ten months after Experian acquired Court Ventures, Ngo continued paying for his customers’ data searches via cash wire transfers from a bank in Singapore.

Ngo’s service sold access to “fullz,” the slang term for packages of consumer data that could be used to commit identity theft in victims’ names. The government says Ngo made nearly $2 million from his scheme. According to the Justice Department, the IRS has confirmed that 13,673 U.S. citizens, whose stolen personal information was sold on Ngo’s websites, have been victimized through the filing of $65 million in fraudulent individual income tax returns.

The class action lawsuit, filed July 17, 2015 in the U.S. District Court for the Central District of California, seeks statutory damages for Experian’s alleged violations of, among other statutes, the Fair Credit Reporting Act (FCRA). The plaintiffs also want the court to force Experian to notify all consumers affected by Ngo’s service; to provide them free credit monitoring services; to disgorge all profits made from Ngo’s service; and to establish a fund (in an amount to be determined) to which victims can apply for reimbursement of the time and out-of-pocket expenses they incurred to remediate the identity theft and fraud caused by customers of Ngo’s ID theft service.

Experian’s Tony Hadley, addressing the Senate Commerce Committee in Dec. 2013.

“The Security Lapse notice, as well as the above referenced protections, also will fulfill the promise made to Congress by Tony Hadley, Experian’s Senior Vice President of Government Affairs and Public Policy, that ‘we know who they [the Security Lapse victims] are, and we’re going to make sure they’re protected’,” the complaint states. For more on Experian’s contradictory statements before Congress on this breach, see this March 2014 story.

Experian did not respond to requests for comment on the lawsuit, and it has yet to respond to the claims in court. A copy of the complaint is here (PDF). Incidentally, Experian and the former owner of Court Ventures are currently suing each other over which company is at fault for Ngo’s service.

For additional stories related to Ngo’s service and his hundreds of criminal customers, check out this series. For more on what you can do to avoid becoming an identity theft victim, please see this story.

Planet DebianJonathan Dowland: New camera

Earlier in the year I treated myself to a new camera. It's been many years since I last bought one: a perfectly serviceable Panasonic FS-15 compact, purchased to replace my lost-or-stolen Panasonic TZ3, which I loved. The FS-15 didn't have a "wow" factor, and with the advent of smartphones and fantastic smartphone cameras, it rarely left a drawer at home.

Last year I upgraded my mobile from an iPhone to a Motorola Moto G, which is a great phone in many respects, but has a really poor camera. I was given a very generous gift voucher when I left my last job and so had the perfect excuse to buy a dedicated camera.

I'd been very tempted by a Panasonic CSC camera ever since I read this review of the GF1 years ago, and the GM1 was high on my list, but it involved a lot of compromises: no EVF... In the end I picked up a Sony RX100 Mark III, which had the right balance of compromises for me.

I haven't posted a lot of photos to this site in the past but I hope to do so in future. I've got to make some alterations to the software first.

Post-script: Craig Mod, who wrote that GF1 review, wrote another interesting essay a few years later: Cameras, Goodbye, where he discusses whether smartphone cameras are displacing even the top end of the Camera market.

Planet Linux AustraliaBinh Nguyen: Joint Strike Fighter F-35 Notes

Below is a collection of thoughts on, and articles about, the F-35 JSF, F-22 Raptor, and associated technologies...

- every defense analyst knows that compromises had to be made in order to achieve a blend of cost effectiveness, stealth, agility, etc. in the F-22 and F-35. What's also clear is that once things get up close and personal, things mightn't be as clear cut as we're being told. I was under the impression that the F-22 would basically outdo anything and everything in the sky all of the time. Based on training exercises, it's clear that unless the F-22s have been backing off, it may not be as phenomenal as we're being led to believe (one possible reason to deliberately back off is to avoid revealing the maximum performance envelope, providing less of a research and engineering target for near peer threats). There are actually a lot of low-speed manoeuvres that I've seen a late-model 3D-vectored Sukhoi perform that a 2D-vectored F-22 has not demonstrated. The F-35 is dead on arrival in many areas (at the moment, definitely from a WVR perspective), as many people have stated. My hope and expectation is that it will receive significant upgrades throughout its lifetime
F22 vs Rafale dogfight video
Dogfight: Rafale vs F22 (Close combat)
- in the past, public information/intelligence regarding some defense programs/equipment has been limited to reduce the chances of setting off an arms race. That way the side which has disseminated the mis-information can be guaranteed an advantage should there be a conflict. Here's the problem though: while some of this may be so, I doubt that all of it is. My expectation is that the intelligence leaks (many terabytes; some details of the breach are available publicly) regarding designs from the ATF (F-22) and JSF (F-35) programs are causing problems as well. They need to overcome technical problems as well as problems posed by previous intelligence leaks. Some of what is being said makes no sense either. Most of what we're being sold on doesn't actually work (yet) (fusion, radar, passive sensors, identification friend-or-foe, etc.)...
- if production is really as problematic as they say, without possible recourse, then the only thing left is to bluff. Deterrence is based on the notion that your opponent will not attack because you have a qualitative or quantitative advantage... Obviously, if there is actual conflict, we have a huge problem. We purportedly want to be able to defend ourselves should anything bad occur. The irony is that our notion of self defense often incorporates force projection in far off, distant lands...
F22 Raptor Exposed - Why the F22 Was Cancelled
F-35 - a trillion dollar disaster
JSF 35 vs F18 superhornet
- we keep giving Lockheed Martin a tough time regarding development and implementation, but we keep forgetting that they have delivered many successful platforms, including the U-2, the SR-71 Blackbird, the F-117 Nighthawk, and the F-22 Raptor
f-22 raptor crash landing
- SIGINT/COMINT often produces a lot of false positives. Imagine overhearing every single conversation about you. Would you be concerned about your security? Probably more than usual, despite whatever you might say. As I said previously on this blog, it doesn't make sense that we would have so much money invested in SIGINT/COMINT without a return on investment. I believe that we may be involved in far more 'economic intelligence' than we are led to believe
- despite what is said about the US (and what they say about themselves), they do tell half-truths/falsehoods. They claimed the Patriot missile defense system was a complete success when first released, with ~80% success rates. Subsequent revisions of past performance indicate an actual success rate of about half that. It has been said that the US has enjoyed substantive qualitative and quantitative advantages over Soviet/Russian aircraft for a long time. Recently released data seems to indicate that it is closer to parity (not 100% sure about the validity of this data) when pilots are properly trained. There are also indications that Russian pilots may have been involved in conflicts where they shouldn't have been, or weren't known to be involved...
- the irony between the Russians and the US is that they both deny that the other's technology is worth pursuing, and yet time seems to indicate otherwise. A long time ago Russian scientists didn't bother with stealth because they thought it was overly expensive without enough of a gain (especially in light of updated sensor technology), and yet the PAK-FA/T-50 is clearly a test bed for such technology. Previously, the US denied that thrust vectoring was worth pursuing, and yet the F-22 clearly makes use of it
- based on some estimates that I've seen, the F-22 may be capable of close to Mach 3 (~2.5 according to most of those estimates) under limited circumstances
- people keep saying that maintaining a larger, indigenous defense program is simply too expensive. I say otherwise. Based on what has been leaked regarding the bidding process, many people basically signed on without necessarily knowing everything about the JSF program. If we had more knowledge we may have proceeded a little differently
- a lot of people who would/should have classified knowledge of the program are basically implying that it will work and will give us a massive advantage given more development time. The problem is that there is so much core functionality that is so problematic that this is difficult to believe...
- the fact that pilots are being briefed not to allow for particular circumstances tells us that there are genuine problems with the JSF
- judging by opinions within the US military, many people are guarded regarding the future performance of the aircraft. We just won't know until it's deployed and we see how others react from a technological perspective
- proponents of the ATF/JSF programs keep saying that if you can't see it, you can't shoot it. If that's the case, I just don't understand why we don't push up development of 5.5/6th gen fighters (stealth drones, basically) and run a hybrid force composed of ATF, JSF, and armed drones (some countries, including France, are already doing this). Drones are somewhat of a better known quantity and, without life support issues to worry about, should be able to go head to head with any manned fighter even with limited AI and computing power. Look at the following videos and you'll notice that the pilot is right on the physical limit in a 4.5 gen fighter during an exercise with an F-22. A lot of stories are floating around indicating that the F-22 enjoys a big advantage but that under certain circumstances it can be mitigated. Imagine going up against a drone where you don't have to worry about the pilot blacking out or pilot training (incredibly expensive; experience has also told us that pilots need genuine flight time, not just simulation time, to maintain their skills), which may have a hybrid propulsion system (for momentary speed changes/bursts (more than that provided by afterburner systems) to avoid being hit by a weapon or being acquired by a targeting system), and which has more space for weapons and sensors. I just don't understand how you would be better off with a mostly manned fleet as opposed to a hybrid fleet unless there are technological/technical issues to worry about (I find this highly unlikely given some of the prototypes and deployments that are already out there)
F22 vs Rafale dogfight video
Dogfight: Rafale vs F22 (Close combat)
- if I were a near peer aggressor, or looking to defend against 5th gen threats, I'd go straight to 5.5/6th gen armed drone fighter development. You wouldn't need to fulfil all the requirements, and with the additional lead time you may be able to achieve not just parity but actual advantages, while possibly being cheaper with regards to TCO (Total Cost of Ownership). There are added benefits to going straight to 5.5/6th gen armed drone development: you don't have to compromise so much on design. The bubble shaped (or not) canopy that aids dogfighting hurts aerodynamic efficiency and is actually one of the main causes of increased RCS (Radar Cross Section) on a modern fighter jet. The pilot and additional equipment (ejection seat, user interface equipment, life support systems, etc.) add a large amount of weight which could now be removed. With the loss in weight and increase in aerodynamic design flexibility you could save a huge amount of money. You also have a lot more flexibility in reducing RCS. For instance, some of the biggest reflectors of RADAR signals are the canopy (a film is used to deal with this) and the pilot's helmet, and one of the biggest supposed selling points of stealth aircraft is RAM coatings. They're incredibly expensive, though, and wear out (look up the history of the B-2 Spirit and the F-22 Raptor). If you have a smaller aircraft to begin with, you have less area to paint, leading to lower cost of ownership while retaining the advantages of low observable technology
- the fact that it has already been speculated that 6th gen fighters may focus less on stealth and speed and more on weapons capability means that the US is aware of increasingly effective defense systems against 5th gen fighters such as the F-22 Raptor and F-35 JSF which rely heavily on low observability 
- based on Wikileaks and other OSINT (Open Source Intelligence) everyone involved with the United States seems to acknowledge that they get a raw end of the deal to a certain extent but they also seem to acknowledge/imply that life is easier with them than without them. Read enough and you'll realise that even when classified as a closer partner rather than just a purchaser of their equipment you sometimes don't/won't receive much extra help
- if we had the ability, I'd be looking to develop our own indigenous defense programs. At least when we make procurements we'd be in a better position to judge whether what was being presented to us was good or bad. We've been burnt on so many different programs with so many different countries... The only issue I can see is that the US may attempt to block us from this. It has happened in the past with other supposed allies...
- I just don't get it sometimes. Most of the operations and deployments that the US and allied countries engage in are counter-insurgency and CAS, with significant parts of our operations involving un-manned drones (armed or not). 5th gen fighters help, but they're overkill. Based on some of what I've seen, the only two genuine near peer threats are China and Russia, both of whom have known limitations in their hardware (RAM coatings/films, engine performance/endurance, materials design and manufacturing, etc.). Sometimes it feels as though the US looks for enemies that mightn't even exist. Even a former Australian Prime-Ministerial adviser said that China doesn't want to lead the world: "China will get in the way or get out of the way." The only thing I can think of is that the US has intelligence suggesting that China intends to project force further outwards (which it has done), or else they're overly paranoid. Russia is a slightly different story though... I'm guessing it would be interesting to read up more on how the US (overall) interprets Russian and Chinese actions behind the scenes (look up training manuals for allied intelligence officers for an idea of our interpretation of what their intelligence services are like)
- sometimes people say that the F-111 was a great plane but in reality there was no great use of it in combat. It could be the exact same circumstance with the F-35
- there is a chance the aircraft could become like the B-2 and the F-22: seldom used, because the true cost of running it is horribly high. Also imagine the ramifications/blowback of losing such an expensive piece of machinery should there be a chance that it can be avoided
- defending against 5th gen fighters isn't easy, but it isn't impossible: sensor upgrades, sensor blinding/jamming technology, integrated networks, artificial manipulation of weather (increased condensation levels increase RCS), faster and more effective weapons, layered defense (with strategic use of disposable (and non-disposable) decoys so that you can hunt down departing, basically unarmed fighters), experimentation with cloud seeding with substances that may help to speed up RAM coating removal or else reduce the effectiveness of stealth technology (the less you have to deal with, the easier your battles will be), forcing the battle into unfavourable conditions, etc... Interestingly, there have been some accounts/leaks of being able to detect US bombers (B-1) lifting off from some US air bases from Australia using long range RADAR. Obviously, it's one thing to be able to detect and track versus achieving a weapons quality lock on a possible target
RUSSIAN RADAR CAN NOW SEE F-22 AND F-35 Says top US Aircraft designer
- following are rough estimates on the RCS of various modern defense aircraft. It's clear that while Chinese and Russian technology isn't entirely on par, they make the contest uncomfortably close. Estimates on the PAK-FA/T-50 indicate an RCS somewhere between the F-35 and F-22. Ultimately this comes back down to a sensor game. Rough estimates seem to indicate a slight edge to the F-22 in most areas. Part of me thinks that the RCS of the PAK-FA/T-50 must be propaganda; the other part leads me to believe that there is no way countries would consider purchasing the aircraft if it didn't offer a competitive RCS
- it's somewhat bemusing that you can't take pictures/videos from certain angles of the JSF in some of the videos mentioned here, and yet there are heaps of pictures online of LOAN systems, including high resolution images of the back end of the F-35 and F-22
F 22 Raptor F 35 real shoot super clear
- people keep saying that if you can't see and can't lock on to stealth aircraft, they'll basically be gone before you can respond. The converse is also true: without some form of targeting system, the stealth fighter can't lock on to its target either. Once you understand how AESA RADAR works, you also understand that, given sufficient computing power, good implementation skills, etc., it's subject to the same issue that faces the other side: you can't shoot what you can't see, and by targeting you give away your position. My guess is that detection of tracking by RADAR is somewhat similar to a lot of de-cluttering/de-noising algorithms (while making use of wireless communication/encryption and information theories as well) but much more complex... which is why there has been such heavy investment and interest in more passive systems (infra-red, light, sound, etc.)
F-35 JSF Distributed Aperture System (EO DAS)

Lockheed Martin F-35 Lightning II- The Joint Strike Fighter- Full Documentary.
4195: The Final F-22 Raptor
Rafale beats F 35 & F 22 in Flight International
Eurofighter Typhoon fighter jet Full Documentary
Eurofighter Typhoon vs Dassault Rafale
DOCUMENTARY - SUKHOI Fighter Jet Aircrafts Family History - From Su-27 to PAK FA 50
Green Lantern : F35 v/s UCAVs

RacialiciousThe Netroots Nation Files: Daring To Internet While Female 2.0

By Arturo R. García

[View the story “NN15: Daring To Internet While Female 2.0” on Storify]

The post The Netroots Nation Files: Daring To Internet While Female 2.0 appeared first on Racialicious - the intersection of race and pop culture.

Planet DebianMartin Michlmayr: Debian archive rebuild on ARM64 with GCC 5

I recently got access to several ProLiant m400 ARM64 servers at work. Since Debian is currently working on the migration to GCC 5, I thought it would be nice to rebuild the Debian archive on ARM64 to see if GCC 5 is ready. Fortunately, I found no obvious compiler errors.

During the process, I noticed several areas where ARM64 support can be improved. First, a lot of packages failed to build due to missing dependencies. Some missing dependencies are libraries or tools that have not been ported to ARM64 yet, but the majority was due to the lack of popular programming languages on ARM64. This requires upstream porting work, which I'm sure is going on already in many cases. Second, over 160 packages failed to build due to out-of-date autoconf and libtool scripts. Most of these bugs have been reported over a year ago by the ARM64 porters (Matthias Klose from Canonical/Ubuntu and Wookey from ARM/Linaro) and the PowerPC porters, but unfortunately they haven't been fixed yet.

Finally, I went through all packages that list specific architectures in debian/control and filed wishlist bugs on those that looked relevant to ARM64. This actually prompted some Debian and upstream developers to implement ARM64 support, which is great!
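The debian/control pass can be sketched with a tiny helper; this is purely illustrative (the function name and exact wildcard set are my own assumptions, not Debian tooling), but the Architecture field values follow Debian conventions:

```python
# Illustrative sketch: flag a package whose debian/control Architecture
# field neither uses a wildcard covering arm64 nor lists arm64 itself.
# The helper name and wildcard set are assumptions for illustration.
def missing_arm64(architecture_field: str) -> bool:
    archs = architecture_field.split()
    covered = {"any", "all", "linux-any", "arm64"}
    return not covered.intersection(archs)

print(missing_arm64("amd64 i386 armhf"))  # a candidate for a wishlist bug
print(missing_arm64("any"))               # already covers arm64
```

In practice one would parse the real control files, but the filtering logic is this simple: only packages with an explicit, arm64-less architecture list are worth a wishlist bug.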

Sociological ImagesBetween Free Speech and Bureaucracy: Anarchist Political Theory and a Way Forward for Reddit

Reddit’s co-founder Steve Huffman, who is currently taking over CEO responsibilities in the wake of Ellen Pao’s resignation, has started doing these Fireside AMAs where he makes some sort of edict and all of the reddit users react and ask clarifying questions. Just today he made an interesting statement about the future of “free speech” in general and certain controversial subreddits in particular. The full statement is here but I want to focus on this specific line where he describes how people were banned in the beginning of Reddit versus the later years when the site became popular:

Occasionally, someone would start spewing hate, and I would ban them. The community rarely questioned me. When they did, they accepted my reasoning: “because I don’t want that content on our site.”

As we grew, I became increasingly uncomfortable projecting my worldview on others. More practically, I didn’t have time to pass judgement on everything, so I decided to judge nothing.

This all comes on the heels of some interesting revelations by former Reddit CEO Yishan Wong, saying that Ellen Pao was actually the person in the board room championing free speech, and that it was Huffman, fellow co-founder Alexis Ohanian, and others who really wanted to clamp down on the hate speech. So that’s just a big side dish of delicious schadenfreude that’s fun to nibble on.

But those quotes bring up some questions that are absolutely crucial to something Britney Summit-Gil posted here a few days ago, namely that Reddit finds itself in a paradox where revolting against the administration forces users to recognize that “Reddit is less like a community and more like a factory,” and that the free speech they rally around is anathema to their other great love: the free market.

What structures this contradiction, what sets everyone up at cross-purposes, also has a lot to do with Huffman’s reticence to ban people as the site grew. After all, why would Huffman feel “increasingly uncomfortable” making unilateral banning decisions as the site grew, and why would his default position then be “to judge nothing”? Why does it, all of a sudden, become unfair or inappropriate to craft a community or even a product with the kind of decisiveness that comes with “I just don’t like it”?

The answer to all of this comes out of two philosophic ideas. One is the Enlightenment model of reason that we still use to undergird our concepts of legitimacy and rhetorical persuasiveness: big decisions that affect lots of people should be argued out and have practical and utilitarian reasons, not be based on the whims of an individual. That’s what kings did, and that sort of authority is arbitrary even if the results seem desirable.

The second is relatively more recent but still fundamental to the point of vanishing: the idea of the modern society as being governed by bureaucracies that have written rules that are followed by everyone. The rule of law, not of individuals. Bureaucracies are nice when they work because if you look at the written down rules, you have a fairly good idea of how to behave and what to expect from others. It’s a very enticing prospect that is rarely fully experienced.

Huffman doesn’t say as much but this is essentially how we went from fairly common-sense decisions about good governance to free speech fanaticism: not choosing to ban is the absence of arbitrary authority. When you have a site that lets you vote on things it feels like a decision to stop imposing order from the top is making room for democratic order from below.

But this is closer to the kind of majoritarian tyranny that even the architects of the American constitution were worried about. Voting in the 1700s was something that only aristocrats were qualified to do. Leave it to rabble and you would have chaos. That’s why they built a bicameral legislature that originally featured a senate with members appointed by state governments.

It should also be said that one of the oldest laws in the United States is that Congress can’t make laws that specifically target a single individual or organization. That’s why those efforts to defund Planned Parenthood in 2011 were immediately dismissed as unconstitutional. Laws have to apply to everyone equally.

And so what Huffman is presently faced with is a problem of liberal (lowercase L) and modern state governance. How do you write broad laws that classify r/coontown without just saying “I ban r/coontown”? Unfortunately, this is also the biggest fuel line to the flames of fear that banning even detestable subreddits is a threat to free speech in general. This is, fundamentally, why it even makes sense to argue that banning an outwardly and explicitly racist subreddit can threaten the integrity of other subreddits either in the present or sometime in the future. Laws apply to everyone equally.

So if Reddit wants to get itself out of this paradox, I say dispense with liberalism altogether. At the very least come up with some sort of aspirational progressive vision of what kind of community you want to have and persuade others that they should work to achieve it. This sort of move is the biggest departure that anarchist political theory takes from mainstream liberalism: communities can agree on the features of a future utopia and govern in the present as if they are already free to live that future utopia. Organizing humans with blanket laws forces you to explain the obvious, namely that hateful people suck and should be persuaded to act otherwise if they wish to remain part of a community that is meaningful to them.

Right now Huffman and the rest of the Reddit administration have come up with some strange and inelegant ways of dealing with the present problem. They make all these dubious distinctions between action and speech; between inciting harm and just abstractly wishing it on people; and lots of blanket “I know it when I see it” sorts of decency rules. Under liberalism redditors would be right to demand very specific descriptions of the “I know it when I see it” kinds of moments.

But if prominent members were to just be upfront in stating what sort of community they would like to see and then acting as if it already existed, discontents would have to persuade admins that they were acting against their own interests and propose a more compelling way to achieve the stated utopia. If they don’t like the utopia at all, then those people can leave for Voat and new users who like that utopia might come to replace them. At the very least, if Reddit were to take this approach, users might actually start answering the question that is at the heart of the matter but is rarely stated in explicit terms: who gets to be a part of the community?

Cross-posted at Cyborgology.

David Banks is a PhD candidate in Rensselaer Polytechnic Institute’s Science and Technology Studies Department. You can follow him on Twitter and Tumblr.


RacialiciousThe Netroots Nation Files: Feminist Future

By Arturo R. García

[View the story “NN15: Feminist Future” on Storify]

The post The Netroots Nation Files: Feminist Future appeared first on Racialicious - the intersection of race and pop culture.

CryptogramMalcolm Gladwell on Competing Security Models

In this essay/review of a book on UK intelligence officer and Soviet spy Kim Philby, Malcolm Gladwell makes this interesting observation:

Here we have two very different security models. The Philby-era model erred on the side of trust. I was asked about him, and I said I knew his people. The "cost" of the high-trust model was Burgess, Maclean, and Philby. To put it another way, the Philbyian secret service was prone to false-negative errors. Its mistake was to label as loyal people who were actually traitors.

The Wright model erred on the side of suspicion. The manufacture of raincoats is a well-known cover for Soviet intelligence operations. But that model also has a cost. If you start a security system with the aim of catching the likes of Burgess, Maclean, and Philby, you have a tendency to make false-positive errors: you label as suspicious people and events that are actually perfectly normal.
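The false-positive cost of the suspicious model is, at bottom, a base-rate effect: when real spies are rare, even a fairly accurate screen mostly flags loyal people. A back-of-the-envelope sketch (all numbers invented for illustration):

```python
# Illustrative numbers only: 10 real spies in a staff of 10,000,
# screened by a test that flags 90% of spies (sensitivity) and
# wrongly flags 5% of loyal staff (false positive rate).
staff, spies = 10_000, 10
sensitivity, false_positive_rate = 0.90, 0.05

true_positives = spies * sensitivity
false_positives = (staff - spies) * false_positive_rate
flagged = true_positives + false_positives

# Of everyone flagged, what fraction are actually spies?
precision = true_positives / flagged
print(f"flagged: {flagged:.0f}, of whom actual spies: {precision:.1%}")
```

With these numbers, under 2% of the people the suspicious model flags are actual spies; the rest are the "perfectly normal" people and events Gladwell describes.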

Planet DebianJonathan McDowell: Recovering a DGN3500 via JTAG

Back in 2010 when I needed an ADSL2 router in the US I bought a Netgear DGN3500. It did what I wanted out of the box and, being based on a MIPS AR9 (ARX100), it seemed likely OpenWRT support might happen. Long story short, I managed to overwrite u-boot (the bootloader) while flashing a test image I’d built. I ended up buying a new router (same model) to get my internet connection back ASAP and never got around to fully fixing the broken one. Until yesterday. Below is how I fixed it, both for my own future reference and in case it’s of use to any other unfortunate soul.

The device has clear points for serial and JTAG and it was easy enough (even with my basic soldering skills) to put a proper header on. The tricky bit is that the flash is connected via SPI, so it’s not just a matter of attaching JTAG, doing a scan and reflashing from the JTAG tool. I ended up doing RAM initialisation, then copying a RAM copy of u-boot in and then using that to reflash. There may well have been a better way, but this worked for me. For reference the failure mode I saw was an infinitely repeating:

ROM VER: 1.1.3
CFG 05

My JTAG device is a Bus Pirate v3b which is much better than the parallel port JTAG device I built the first time I wanted to do something similar. I put the latest firmware (6.1) on it.

All of this was done from my laptop, which runs Debian testing (stretch). I used the OpenOCD 0.9.0-1+b1 package from there.

Daniel Schwierzeck has some OpenOCD scripts which include a target definition for the ARX100. I added a board definition for the DGN3500 (I’ve also sent Daniel a patch to add this to his repo).

I tied all of this together with an openocd.cfg that contained:

source [find interface/buspirate.cfg]

buspirate_port /dev/ttyUSB1
buspirate_vreg 0
buspirate_mode normal
buspirate_pullup 0
reset_config trst_only

source [find openocd-scripts/target/arx100.cfg]

source [find openocd-scripts/board/dgn3500.cfg]

gdb_flash_program enable
gdb_memory_map enable
gdb_breakpoint_override hard

I was then able to power on the router and type dgn3500_ramboot into the OpenOCD session. This fetched my RAM copy of u-boot from dgn3500_ram/u-boot.bin, copied it into the router’s memory and started it running. From there I had a u-boot environment with access to the flash commands and was able to restore the original Netgear image (and once I was sure that was working ok I subsequently upgraded to the Barrier Breaker OpenWRT image).

Worse Than FailureCodeSOD: The New Zero

If Alice needed to rate her co-workers, on a scale of 1–10, whoever wrote this is a zero. The goal here is to create a new string that is 4096 characters long and contains only zeros. This was the best approach Alice’s co-worker found:

string s = new String("000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000
00000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000");

Alice replaced that line with this one:

string s = new String('0', 4096);
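(The C# `String(char, int)` constructor repeats a single character. Most languages make this a one-liner; in Python, for example, string repetition does the same job:)

```python
# Build a 4096-character string of zeros via string repetition,
# the Python equivalent of C#'s new String('0', 4096).
zeros = "0" * 4096

print(len(zeros))   # 4096
print(set(zeros))   # {'0'}
```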

Planet Linux AustraliaErik de Castro Lopo: Building the LLVM Fuzzer on Debian.

I've been using the awesome American Fuzzy Lop fuzzer since late last year but had also heard good things about the LLVM Fuzzer. Getting the code for the LLVM Fuzzer is trivial, but when I tried to use it, I ran into all sorts of road blocks.

Firstly, the LLVM Fuzzer needs to be compiled with and used with Clang (GNU GCC won't work) and it needs to be Clang >= 3.7. Now Debian does ship a clang-3.7 in the Testing and Unstable releases, but that package has a bug (#779785) which means the Debian package is missing the static libraries required by the Address Sanitizer options. Use of the Address Sanitizers (and other sanitizers) increases the effectiveness of fuzzing tremendously.

This bug meant I had to build Clang from source, which, unfortunately, is rather poorly documented (I intend to submit a patch to improve this), and I only managed it with help from the #llvm IRC channel.

Building Clang from the git mirror can be done as follows:

  mkdir LLVM
  cd LLVM/
  git clone
  (cd llvm/tools/ && git clone
  (cd llvm/projects/ && git clone
  (cd llvm/projects/ && git clone
  (cd llvm/projects/ && git clone

  mkdir -p llvm-build
  (cd llvm-build/ && cmake -G "Unix Makefiles" -DCMAKE_INSTALL_PREFIX=$HOME/Clang/3.8 ../llvm)
  (cd llvm-build/ && make install)

If all the above works, you will now have working clang and clang++ compilers installed in $HOME/Clang/3.8/bin and you can then follow the examples in the LLVM Fuzzer documentation.

Planet Linux AustraliaDavid Rowe: FreeDV Robustness Part 6 – Early Low SNR Results

Anyone who writes software should be sentenced to use it. So for the last few days I’ve been radiating FreeDV 700 signals from my home in Adelaide to this websdr in Melbourne, about 800km away. This has been very useful, as I can sample signals without having to bother other Hams. Thanks John!

I’ve also found a few bugs and improved the FreeDV diagnostics to get a feel for how the system is working over real world channels.

I am using a simple end fed dipole a few meters off the ground and my IC7200 at maximum power (100W I presume, I don’t have a power meter). A key goal is comparable performance to SSB at low SNRs on HF channels – that is where FreeDV has struggled so far. This has been a tough nut to crack. SSB is really, really good on HF.

Here is a sample taken this afternoon, in a marginal channel. It consists of analog/DV/analog/DV speech. You might need to listen to it a few times, it’s hard to understand first time around. I can only get a few words in analog or DV. It’s right at the lower limit of intelligibility, which is common in HF radio.

Take a look at the spectrogram of the off air signal. You can see the parallel digital carriers; the diagonal stripes are the frequency selective fading. In the analog segments every now and again some low frequency energy pops up above the noise (speech is dominated by low frequency energy).

This sample had a significant amount of frequency selective fading, which occasionally drops the whole signal down into the noise. The DV mutes in the middle of the 2nd digital section as the signal drops out completely.

There was no speech compressor on SSB. I am using the “analog” feature of FreeDV, which allows me to use the same microphone and quickly swap between SSB and DV to ensure the HF channel is roughly the same. I used my laptop’s built-in microphone, and haven’t tweaked the SSB or DV audio with filtering or level adjustment.

I did confirm the PEP power is about the same in both modes using my oscilloscope with a simple “loop” antenna formed by clipping the probe ground wire to the tip. It picked up a few volts of RF easily from the nearby antenna. The DV output audio level is a bit quiet for some reason, have to look into that.

I’m quite happy with these results. In a low SNR, barely usable SSB channel, the new coherent PSK modem is hanging on really well and we could get a message through on DV (e.g. phonetics, a signal report). When the modem locks it’s noise free, a big plus over SSB. All with open source software. Wow!

My experience is consistent with this FreeDV 700 report from Kurt KE7KUS over a 40m NVIS path.

Next step is to work on the DV speech quality to make it easy to use conversationally. I’d say the DV speech quality is currently readability 3 or 4/5. I’ll try a better microphone, filtering of the input speech, and see what can be done with the 700 bit/s Codec.

One option is a new mode where we use the 1300 bit/s codec (as used in FreeDV 1600) with the new, cohpsk modem. The 1300 bit/s codec sounds much better but would require about 3dB more SNR (half an s-point) with this modem. The problem is bandwidth. One reason the new modem works so well is that I use all of the SSB bandwidth. I actually send the 7 x 75 symbol/s carriers twice, to get 14 carriers total. These are then re-combined in the demodulator. This “diversity” approach makes a big difference in the performance on frequency selective fading channels. We don’t have room for that sort of diversity with a codec running much faster.
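The payoff of that diversity recombination can be sketched with a toy model (Python, and emphatically not the actual cohpsk demodulator code): each symbol arrives on two carriers with independent complex channel gains, and the receiver maximal-ratio combines the pair, so a deep fade on one carrier no longer wipes out the symbol.

```python
# Toy sketch of diversity recombination: one QPSK symbol is sent on two
# carriers with different complex channel gains, and the receiver
# maximal-ratio combines (MRC) the copies. Illustration of the idea
# only, not the real FreeDV cohpsk modem code.

def mrc_combine(rx_copies, channel_gains):
    """Maximal-ratio combine several received copies of one symbol.

    Each copy is weighted by the conjugate of its (estimated) channel
    gain, so strong carriers dominate and faded ones contribute little.
    """
    num = sum(g.conjugate() * r for g, r in zip(channel_gains, rx_copies))
    den = sum(abs(g) ** 2 for g in channel_gains)
    return num / den

# One carrier is in a deep frequency-selective fade, the other is not.
symbol = 1 + 1j                        # transmitted QPSK symbol
gains = [0.05 + 0j, 1.0 + 0j]          # per-carrier channel gains
rx = [g * symbol for g in gains]       # noiseless received copies

recovered = mrc_combine(rx, gains)
print(abs(recovered - symbol) < 1e-9)  # True: symbol survives the fade
```

With noise added, the MRC weighting lets the strong carrier dominate the decision, which is why sending the 7 carriers twice buys so much on frequency-selective channels.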

So time to put the thinking hat back on. I’d also like to try some nastier fading channels, like 20m around the world, or 40m NVIS. However I’m very pleased with this result. I feel the modem is “there”, however a little more work required on the Codec. We’re making progress!


Planet DebianNiels Thykier: Performance tuning of lintian, take 2

The other day, I wrote about our recent performance tuning in lintian.  Among other things, we reduced the memory usage by ~33%.  The effect was also reproducible on libreoffice (4.2.5-1 plus its 170-ish binaries, arch amd64), which started at ~515 MB and was reduced to ~342 MB.  So this is pretty great in its own right…

But at this point, I have seen what was in “Pandora’s box”. By which I mean the two magical numbers: 1.7kB per file and 2.2kB per directory in the package (add 250-300 bytes per entry in binary packages).  This is before even looking at data from file(1), readelf, etc.  Just the raw index of the package.

Depending on your point of view, 1.7-2.2kB might not sound like a lot.  But for the lintian source with ~1 500 directories and ~3 300 non-directories, this sums up to about 6.57MB out of the (then) usage at 12.53MB.  With the recent changes, it dropped to about 1.05kB for files and 1.5kB for dirs.  But even then, the index is still 4.92MB (out of 8.48MB).

This begs the question, what do you get for 1.05kB in perl? The following is a dump of the fields and their size in perl for a given entry:

lintian/vendors/ubuntu/main/data/changes-file/known-dists: 1077.00 B
  _path_info: 24.00 B
  date: 44.00 B
  group: 42.00 B
  name: 123.00 B
  owner: 42.00 B
  parent_dir: 24.00 B
  size: 42.00 B
  time: 42.00 B
  (overhead): 694.00 B

With time, date, owner and group being fixed-size strings (at most 15 characters), the size and _path_info fields being integers, and parent_dir a reference (nulled).  Finally, the name being a variable-length string.  Summed, the values take less than half of the total object size.  The remainder of ~700 bytes is just “overhead”.

Time for another clean up:

  • The ownership fields are almost always “root/root” (0/0).  So let’s just omit them when they satisfy said assumption. [f627ef8]
    • This is especially true for source packages where lintian ignores the actual value and just uses “root/root”.
  • The Lintian::Path API has always had a “cop-out” on the size field for non-files and it happens to be 0 for these.  Let’s omit the field if the value was zero and save 0.17MB on lintian. [5cd2c2b]
    • Bonus: Turns out we can save 18 bytes per non-zero “size” by insisting on the value being an int.
  • Unsurprisingly, the date and time fields can trivially be merged into one.  In fact, that makes “time” redundant as nothing outside Lintian::Path used its value.  So say goodbye to “time” and good day to 0.36MB more memory. [f1a7826]
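Lintian itself is Perl, but the trick in the first two bullets (store a field only when its value differs from a well-known default, and synthesize the default on access) is easy to sketch. The class and field names below are made up for illustration:

```python
class PathEntry:
    """Sketch of the omit-default-fields trick: an index entry keeps a
    field only when its value differs from the default, so the common
    case (root/root ownership, zero size for non-files) costs nothing."""

    __slots__ = ("name", "_extra")

    DEFAULTS = {"owner": "root", "group": "root", "size": 0}

    def __init__(self, name, **fields):
        self.name = name
        # Store only the fields whose value differs from the default.
        self._extra = {k: v for k, v in fields.items()
                       if self.DEFAULTS.get(k) != v}

    def __getattr__(self, key):
        # Called only when normal lookup fails: fall back to the stored
        # value, then to the default.
        if key in self._extra:
            return self._extra[key]
        if key in self.DEFAULTS:
            return self.DEFAULTS[key]
        raise AttributeError(key)

# A typical entry: default ownership is not stored, a non-zero size is.
entry = PathEntry("data/changes-file/known-dists",
                  owner="root", group="root", size=512)
print(entry.owner, entry.size)   # root 512
print(entry._extra)              # {'size': 512}
```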

Which leaves us now with:

lintian/vendors/ubuntu/main/data/changes-file/known-dists: 698.00 B
  _path_info: 24.00 B
  date_time: 56.00 B
  name: 123.00 B
  parent_dir: 24.00 B
  size: 24.00 B
  (overhead): 447.00 B

Still a ~64% overhead, but at least we reduced the total size by 380 bytes (585 bytes for entries in binary packages).  With these changes, the memory used for the lintian source index is now down to 3.62MB.  This brings the total usage down to 7.01MB, which is a reduction to 56% of the original usage (a.k.a. “the-almost-but-not-quite-50%-reduction”).

But at least the results also carried over to libreoffice, which is now down to 284.83 MB (55% of original).  The chromium-browser (source-only, version 32.0.1700.123-2) is down to 111.22MB from 179.44MB (61% of original, better results expected if processed with binaries).


In closing, Lintian 2.5.34 will use slightly less memory than 2.5.33.


Filed under: Debian, Lintian

CryptogramOrganizational Doxing of Ashley Madison

The -- depending on who is doing the reporting -- cheating, affair, adultery, or infidelity site Ashley Madison has been hacked. The hackers are threatening to expose all of the company's documents, including internal e-mails and details of its 37 million customers. Brian Krebs writes about the hackers' demands.

According to the hackers, although the "full delete" feature that Ashley Madison advertises promises "removal of site usage history and personally identifiable information from the site," users' purchase details -- including real name and address -- aren't actually scrubbed.

"Full Delete netted ALM $1.7mm in revenue in 2014. It's also a complete lie," the hacking group wrote. "Users almost always pay with credit card; their purchase details are not removed as promised, and include real name and address, which is of course the most important information the users want removed."

Their demands continue:

"Avid Life Media has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release all customer records, including profiles with all the customers' secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails. The other websites may stay online."

Established Men is another of the company's sites; this one is designed to link wealthy men with young and pretty women.

This is yet another instance of organizational doxing:

Dumping an organization's secret information is going to become increasingly common as individuals realize its effectiveness for whistleblowing and revenge. While some hackers will use journalists to separate the news stories from mere personal information, not all will.

EDITED TO ADD (7/22): I don't believe they have 37 million users. This type of service will only appeal to a certain socio-economic demographic, and it's not equivalent to 10% of the US population.

This page claims that 20% of the population of Ottawa is registered. Given that 25% of the population are children, that means it's about 27% of the adult population: 189,000 people. I just don't believe it.

Planet DebianMatthew Garrett: Your Ubuntu-based container image is probably a copyright violation

Update: A Canonical employee responded here, but doesn't appear to actually contradict anything I say below.

I wrote about Canonical's Ubuntu IP policy here, primarily in terms of its broader impact, but I mentioned a few specific cases. People seem to have picked up on the case of container images (especially Docker ones), so here's an unambiguous statement:

If you generate a container image that is not a 100% unmodified version of Ubuntu (ie, you have not removed or added anything), Canonical insist that you must ask them for permission to distribute it. The only alternative is to rebuild every binary package you wish to ship[1], removing all trademarks in the process. As I mentioned in my original post, the IP policy does not merely require you to remove trademarks that would cause infringement, it requires you to remove all trademarks - a strict reading would require you to remove every instance of the word "ubuntu" from the packages.

If you want to contact Canonical to request permission, you can do so here. Or you could just derive from Debian instead.

[1] Other than ones whose license explicitly grants permission to redistribute binaries and which do not permit any additional restrictions to be imposed upon the license grants - so any GPLed material is fine


Planet DebianDaniel Pocock: RTC status on Debian, Ubuntu and Fedora

Zoltan (Zoltanh721) recently blogged about WebRTC for the Fedora community and Fedora desktop. The service has been running for a while now and this has given many people a chance to get a taste of regular SIP and WebRTC-based SIP. As suggested in Zoltan's blog, it has convenient integration with Fedora SSO and, as the source code is available, people are welcome to see how it was built and use it for other projects.

Issues with Chrome/Chromium on Linux

If you tried any of these services using Chrome/Chromium on Linux, you may have found that the call appears to be connected but there is no media. This is a bug, and the Chromium developers are on to it. You can work around this by trying an older version of Chromium (it still works with v37 from Debian wheezy) or Firefox/Iceweasel.

WebRTC is not everything

WebRTC offers many great possibilities for people to quickly build and deploy RTC services to a large user base, especially when using components like JSCommunicator or the DruCall WebRTC plugin for Drupal.

However, it is not a silver bullet. For example, there remain concerns about how to receive incoming calls. How do you know which browser tab is ringing when you have many tabs open at once? This may require greater browser/desktop integration and that has security implications for JavaScript. Whether users on battery-powered devices can really leave JavaScript running for extended periods of time waiting for incoming calls is another issue, especially when you consider that many web sites contain some JavaScript that is less than efficient.

Native applications and mobile apps like Lumicall continue to offer the most optimized solution for each platform although WebRTC currently offers the most convenient way for people to place a Call me link on their web site or portal.

Deploy it yourself

The RTC Quick Start Guide offers step-by-step instructions and a thorough discussion of the architecture for people to start deploying RTC and WebRTC on their own servers using standard packages on many of the most popular Linux distributions, including Debian, Ubuntu, RHEL, CentOS and Fedora.

RacialiciousThe Netroots Nation Files: An Interview With Jose Antonio Vargas

By Arturo R. García


Not long after the #BlackLivesMatter protest during Saturday’s town hall event at the Netroots Nation conference, I interviewed journalist and immigration activist Jose Antonio Vargas, who moderated the event, and talked about his experience being — literally — in the middle of the demonstration, as well as his views on how both Sen. Bernie Sanders (I-VT) and former Maryland Gov. Martin O’Malley handled their responses.

Were you prepared for [the protest]?

JAV: I was up all night trying to figure out a great mix of questions. For Senator Sanders, it was about immigration, because many people feel that’s something that he hasn’t talked about specifically. Gun control was a big one. Senator Sanders had talked about marching for civil rights in the March on Washington. That’s why I asked that question of, “Is there a specific bill you can point to” that had benefited the African-American community. So I’m just frustrated and disappointed that we weren’t able to ask this variety of questions.

But, having said that, the urgency that people of color — that Black people, that brown people in this country — feel about not only race but immigration, about policies that criminalize and dehumanize people in this country. It’s an emergency, somebody said, and it is. That’s what we saw. And I wasn’t about to stop that. As a person of color, as a gay man, as an undocumented person, I wasn’t about to stop that. You can’t silence people who have been silenced for far too long. I was just trying to figure out how I could keep the conversation going. I kept thinking to myself, “Man, handle this with as much grace as you can.”

I cannot overstate the importance of #BlackLivesMatter and the intersection of these issues. Remember, when [Phoenix activist Tia Oso] got up there, she talked about immigration, she talked about LGBT rights, she talked about civil rights. That’s the kind of conversation that we’re not seeing nationally. And that’s why it’s imperative that they get to hold that state. I just wish we could have known about it ahead of time, because I could have maybe found a better way to facilitate it, just so we could have had more questions and not just platitudes. So I was disappointed in myself for that.

Was it surprising to see the candidates that taken aback?

JAV: Hey, if you’re running for the presidency of the United States of America, you’d better be prepared for anything and everything, especially in the social media age. It was actually interesting seeing the layers of identity on that stage: you had two straight white guys running for the presidency, and you have a room full of people — people of color, people who are gay, transgender, documented, undocumented.

That’s actually one of the questions I didn’t get to ask them. I was gonna ask them, “You’re straight, you’re white, you’re a man — how has your privilege gotten you to where you are now?”

What was your reaction when O’Malley said, “all lives matter”?

JAV: You know, I’m sure the governor wishes he could take back what he said on that stage, when he said, “All lives matter, white lives matter.”

[In fact, O’Malley later apologized for doing so during his interview with TWiB.]


But is it a concern when party leaders still can’t articulate their concerns without using phrases that effectively silence people of color?

JAV: That’s the thing now — you have a Black man, President Obama. You have a woman running for the presidency. If you’re a straight guy who happens to be white, what is your responsibility to these issues that may not be personal to you, but may be personal to many, many people? I don’t know what the governor said when he got off that stage, but when he said “all lives matter, white lives matter,” what did he mean by that, exactly?

The Sanders interview, did that end early?

JAV: It ended early because people around me were like, “end it, end it.”

So you were directed to end it.

JAV: I was directed to end it, yes. Believe me, if it was up to me, I had at least five questions that I wanted to ask. At that point, I thought the conversation was actually flowing better. So I wish that we could have just kept going.

There have been some concerns about his ability to outreach to communities of color.

JAV: That was something else I wanted to ask him: How many people of color does Senator Sanders have on his staff? I was gonna ask that question. I never got a chance to ask that question.

How do you feel he did today?

JAV: Under the circumstances, Senator Sanders has to be commended for addressing the root causes of inequality in this country. But when I asked, “I hear you there Senator, but there are people here who are talking to us about how much of an emergency race in this country is,” I don’t know if you caught his answer, but it was basically more of a non-answer.

Addressing a question of police brutality with economic policy seems like a dodge.

JAV: He was trying to connect the dots. But people need to hear more than that. They’re not [just] talking about somebody in the news. Yes, they are, but they’re also talking about themselves. When they say #SayHerName, it’s personal. And I think how politicians get up there and they get in their talking points — it doesn’t fit; it runs against this very visceral, guttural, urgent concern that people of color have in this country, who feel under attack.

The post The Netroots Nation Files: An Interview With Jose Antonio Vargas appeared first on Racialicious - the intersection of race and pop culture.

Sociological ImagesA Sociology of the Reddit Revolt

For many Reddit users, these are dark times indeed. With the banning of r/fatpeoplehate and other subreddits that did not curtail harassment and vote brigading, followed more recently by the sudden dismissal of Reddit employees including Victoria Taylor, many users are criticizing the increase in top-down administrative decisions made under the leadership of interim CEO Ellen Pao.  Alongside these criticisms are accusations that the “PC” culture of safe spaces and “social justice warriors” has eroded the ideological foundations of Reddit culture–freedom of speech, democracy, and the right to be offensive under any circumstances. Meanwhile, Reddit’s biggest competitor is having a hard time keeping their servers functioning with the massive influx of traffic.


The abrupt and unexplained dismissal of Victoria Taylor has become a particularly vivid rallying point for disgruntled users. Many moderators set their subreddits to private or restricted submissions, effectively making Reddit unusable and invisible for a vast majority of visitors. “The Blackout” (aka #TheDarkening) lasted from late Thursday (7/2) until Friday afternoon when most subreddits came back online; it is one of several tactics used so far in the “Reddit Revolt.” At this time a petition calling for Ellen Pao to step down is nearing 200,000 supporters.

One of the more confusing elements of the revolt is the target of redditors’ anger. Who is to blame for this perceived assault on liberty and the free exchange of ideas? For now, two seemingly opposed forces are bearing the brunt of accusation. These are Ellen Pao, under the influence of commercial interests, and social justice activists who criticize Reddit for tolerating and perpetuating hateful discourse. No one is speaking up on the cause of Taylor’s dismissal, which has led to speculation that she was fired for refusing to comply with the increasingly commercial motivations of Reddit admins, that she would not relocate from New York to San Francisco, or that she did not sufficiently manage the controversial Jesse Jackson AMA. Without more information, and in the context of other recent changes to Reddit, users alternate between blaming encroaching corporatism and blaming the PC freedom police who are finally ruining the internet.

So, how can these two forces both be responsible for the changes taking place on Reddit, and in other media such as television and gaming? Consider that a cornerstone of the Gamer Gate fiasco has been the assertion that market forces, not SJW activism, should determine the content and character of video games. Opposition to greater inclusivity in games, such as more central female, minority, and queer characters, has often been justified through free market rhetoric; the assertion is that men are the primary consumers of games, and that their demographic preferences do – and should – determine content. Any other force driving game design is perceived as ideologically motivated, propagandizing, and an assault on liberty.

If video game production companies are acquiescing to the demands of activists, they have not been forthcoming about it. Instead, they claim to be adapting to a marketplace in which women, people of color, and LGBTQ individuals occupy an ever increasing consumer base. Perhaps the activist/consumer dichotomy is more distracting than useful, given that the voices most critical of capitalism’s ability to turn identity into a commodity are also the ones advocating to see a bit of themselves in their beloved games. Here again, people are caught between wanting to see their values and identities reflected back at them in the media that they love, and coming to terms with what capitalist logics do to those values and identities.

On its face, the simultaneous blame directed at SJWs and commercialization seems at odds. But given the ability of neoliberal late capitalism to commodify identity and the self, and to turn nearly any element of culture into a profitable enterprise, this muddiness is a logical outcome of the contradictions of capitalism that Marx believed would be its downfall. Instead, neoliberalism and identity politics send capitalism into overdrive as the need to colonize ever expanding markets and commodify even the most absurd abstractions turns anti-capitalist ideology into easily packaged products. Rather than disturbing the supposed working-class false consciousness, the contradiction has accounted for it and marketed it back to the very people it exploits. It’s only a matter of time before Walmart starts selling a t-shirt that reads “Social Justice Warrior!” in yellow glitter.

Also central to the Reddit Revolt are discussions of labor and exploitation. Many on Reddit have remarked on the betrayal of moderators by the admins. Mods develop and manage Reddit content on their own time and for no compensation, a service admins rely on for the site to function and be profitable. In exchange, mods have historically been given relative freedom within the subreddits they moderate. Now that this freedom is being restricted or, as in the case with Victoria Taylor, decisions are made at the admin level without consulting or even informing mods, mods and users are taking the opportunity to air more general grievances, like the lack of investment in the site’s infrastructure.

Here is the centerpiece of the Reddit Revolt paradox: what is a redditor relative to the admins, or to the site itself? Redditors perceive themselves as members of a community, or perhaps as customers of the site. In many instances they even see themselves as workers generating content for the site to the benefit of the admins. But redditors are not customers, nor are they simply workers — they are the product.

To complicate this further, the Reddit Revolt requires all of us to grapple with digital and affective labor, and its tendency to blur the categories of workers, products, and consumers. Ellen Pao’s job is not to make Reddit a happy community, it is to sell the attention of redditors to advertisers. And even as users begin to understand that Reddit is less like a community and more like a factory, they seem less clear on their position within this factory. Redditors are not so much customers engaged in a boycott or even laborers on strike, they are products. As products, the only effective protest movement redditors could possibly engage in would be to remove themselves from the market. Hence, the blackout.

But the fact is, Reddit admins can shoulder the brunt of a couple of blackout days. Given how quickly the front page returned to normal it seems unlikely that any sustained movement will take hold. And while they may make promises to users about changes to come, Reddit admins will continue to do what all successful corporate entities require — turn a profit, often at the expense of those who use, make, or even are the product.

It’s to be expected that redditors feel betrayed by the powers that be for undermining the perceived ethos of Reddit as a community in which ideas — any ideas — can be freely exchanged. But there is perhaps a deeper betrayal that has not been articulated in the dominant narrative of the Reddit Revolt. That is the betrayal of western rationalism itself, and the notion that free markets and free speech are two articulations of a deeper, natural order that ultimately works in favor of the masses. The rhetorical relationship between freedom of expression and freedom of markets performs key ideological work for the perpetuation of an American-flavored narrative that capitalism is the great equalizer. While events like the Citizens United Supreme Court case occasionally highlight the absurdity of this argument, it is pervasive and often unseen. That cornerstone of western rationalism that so many redditors love is playing out in ways that they really really do not love. And the rupture will require more than dank memes and mental gymnastics to reconcile.

Cross-posted at Cyborgology.

Britney Summit-Gil is a graduate student in Communication and Media at Rensselaer Polytechnic Institute. She tweets occasionally at @beersandbooks.


Racialicious The Netroots Nation Files: #BlackLivesMatter Makes Its Presence Felt

By Arturo R. García


The Netroots Nation progressive conference in Phoenix was marked this past Saturday by a powerful show of solidarity from #BlackLivesMatter activists, who effectively forced both the attendees and Democratic presidential candidates Sen. Bernie Sanders (I-VT) and ex-Maryland Gov. Martin O’Malley to talk about police violence against communities of color.

I’ve included a Storify under the cut with notes and images from the demonstration, as well as a follow-up discussion hosted by This Week in Blackness featuring, among others, the movement’s co-founder Patrisse Cullors. You can also read a synopsis of some of the day’s events from me at The Raw Story.

[View the story “NN15: Black Lives Matter protests presidential town hall” on Storify]

The post The Netroots Nation Files: #BlackLivesMatter Makes Its Presence Felt appeared first on Racialicious - the intersection of race and pop culture.

Planet Debian Ritesh Raj Sarraf: Micro DD meetup

A couple of us DDs met here on the weekend. It is always a fun time, being part of these meetings. We talked briefly about the status of Cross Compilation in Debian, on the tools that simplify the process.

Next we touched upon licensing, discussing the benefits of particular licenses (BSD, Apache, GPL) from the point of view of the consumer, where the consumer ranges from an individual just wanting to use or improve the software, to someone building a (free / non-free) product on top of it. I think the overall conclusion was that, at a high level, there are 2 major kinds of license: ones that allow you to take the code and not give anything back, and ones that allow you to take the code only if you are ready to share your enhancements back.

Next we briefly touched upon systemd. Given that I recently spent a good amount of time talking to the systemd maintainer while fixing bugs in my software, it was natural for me to steer that topic. By the end, more people were enthused about learning the new paradigm.

The other topic where we spent time was on Containers. It is impressive to see how quickly, and how many, products have spun out of cgroups. The topic moved to cgroups thanks to systemd, one of the prime consumers of cgroups. While demonstrating the functionalities of Linux Containers (LXC), I realized that systemd has a tool in place to serve the same use case.

So, once back home, I spent some time figuring out whether I could replace my lxc setup with systemd-nspawn. Apart from a minor bug, almost everything seems to work fine with systemd-nspawn.

So, the following is the config of my container as used in lxc. To replace lxc, I need to fill in almost all of it with systemd-nspawn equivalents.

rrs@learner:~$ sudo cat /var/lib/lxc/deb-template/config
# Template used to create this container: /usr/share/lxc/templates/lxc-debian
# Parameters passed to the template:
# For additional config options, please look at lxc.container.conf(5)

lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares = 1234

# Mem
lxc.cgroup.memory.limit_in_bytes = 2000M
lxc.cgroup.memory.soft_limit_in_bytes = 1500M

# Network
lxc.network.type = veth
lxc.network.hwaddr = 00:16:3e:0c:c5:d4
lxc.network.flags = up
lxc.network.link = lxcbr0

# Root file system
lxc.rootfs = /var/lib/lxc/deb-template/rootfs

# Common configuration
lxc.include = /usr/share/lxc/config/debian.common.conf

# Container specific configuration
lxc.mount = /var/lib/lxc/deb-template/fstab
lxc.utsname = deb-template
lxc.arch = amd64

# For apt
lxc.mount.entry = /var/cache/apt/archives var/cache/apt/archives none defaults,bind 0 0
lxc.mount.entry = /var/tmp/lxc var/tmp/lxc none defaults,bind 0 0
2015-07-20 / 16:28:58 ♒♒♒  ☺    


The equivalent of the above, in systemd-nspawn is:

sudo systemd-nspawn -n -b --machine deb-template --network-bridge=lxcbr0 --bind /var/cache/apt/archives/

The only missing bits are the CPU and memory limits, which I'm yet to try; they are documented as doable with the --property= interface:

           Set a unit property on the scope unit to register for the machine. This only
           applies if the machine is run in its own scope unit, i.e. if --keep-unit is not
           used. Takes unit property assignments in the same format as systemctl
           set-property. This is useful to set memory limits and similar for machines.
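Though I haven't tried it yet, a minimal sketch of what that might look like, mirroring the CPU share and memory cap from the lxc config above (CPUShares= and MemoryLimit= are the scope-unit property names from systemd.resource-control(5); treat the exact names for this systemd version as an assumption):

```shell
# Hypothetical: carry the lxc CPU/memory limits over to systemd-nspawn
# by setting properties on the machine's scope unit at start-up.
sudo systemd-nspawn -n -b --machine deb-template \
    --network-bridge=lxcbr0 \
    --bind /var/cache/apt/archives/ \
    --property=CPUShares=1234 \
    --property=MemoryLimit=2000M
```

The same properties should also be adjustable on a running machine with systemctl set-property on its scope unit.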

With all this in place, using containers under systemd is a breeze.

rrs@learner:~/Community/Packaging/multipath-tools (experimental)$ sudo machinectl list
deb-template container nspawn

1 machines listed.
2015-07-20 / 16:44:07 ♒♒♒  ☺    
rrs@learner:~/Community/Packaging/multipath-tools (experimental)$ sudo machinectl status deb-template
           Since: Mon 2015-07-20 16:13:58 IST; 30min ago
          Leader: 9064 (systemd)
         Service: nspawn; class container
            Root: /var/lib/lxc/deb-template/rootfs
           Iface: lxcbr0
              OS: Debian GNU/Linux stretch/sid
            Unit: machine-deb\x2dtemplate.scope
                  └─9064 /lib/systemd/systemd --system --deserialize 14
                    ├─9092 /lib/systemd/systemd-journald
                    ├─9160 /usr/sbin/sshd -D
                    ├─9166 /bin/login --
                    ├─9171 -bash
                    └─9226 dhclient host0

Jul 20 16:13:58 learner systemd[1]: Started Container deb-template.
Jul 20 16:13:58 learner systemd[1]: Starting Container deb-template.
2015-07-20 / 16:44:15 ♒♒♒  ☺    




Worse Than Failure Brillance is in the Eye of the Beholder

“E-commerce” just doesn’t have the ring it once did. The best-qualified hackers in the world used to fall all over themselves to work on the next Amazon or eBay, but now? A job maintaining the back-end of an online store isn’t likely to lure this generation’s rockstar ninja coderz, which explains why Inicart ended up hiring Jay.

As far as Colleen could tell, her boss had been trying to add a developer to their team for at least a year. Scott was always on his way to interviews, second interviews, phone screens, and follow-up Skype calls… but summer turned to autumn turned to Christmas, and Inicart’s dev team returned from the holidays to find only their waistbands had increased in size. But then came the day Colleen walked in to find the long-empty cubicle next to hers brimming with a brand-new task chair and workstation. She ran down the hall.


“’Morning, Colleen.” The team lead was leaning back in his chair with the grin of a satisfied hiring manager.

“So you… you found someone?”

“That’s right.”

“And they’re… good?”

“Right again. He’s very good.”

Colleen didn’t know what to say.

“He starts next Monday,” Scott said. “You guys should get ready to do some onboarding.”

Colleen flipped a mock salute, and scampered off to do just that. A new developer! This was huge: Colleen and her team might finally be able to take a break from fixing bugs and actually deliver a new feature!

With all due respect to Scott’s hiring prowess, it wasn’t immediately obvious to Colleen what he’d seen in Jay. The new developer was sociable enough, joining the team at their various outings, but he wasn’t big on eye contact, and tended to wander around whatever point he was making until you just lost interest. Colleen didn’t want to write Jay off on his social skills alone, however; they needed someone to fix bugs, and pretty soon he was doing just that.

Week three was when Colleen started to worry. Jay was tearing through the bug backlog, but, for a developer new to the team, the company, and the codebase, he asked very few questions. That is to say, no questions. Not wanting to be unreasonable, Colleen confirmed that her teammates were also concerned.

She brought those concerns to Scott. “I mean, I’ve been on this project for years, and I have questions.”

“Well, he is very good. He interviewed at Google, you know,” Scott said. “If you’re worried, though, maybe you could do a code review?”

Like everything else about Jay, his changes seemed fine at first glance. His taste in variable names tended towards the unusual (booThu stuck in Colleen’s mind as one example: an abortive attempt to summon the Great Codethulhu?), but Jay seemed to know more or less what he was doing. Then they found Jay’s proclivity for write-only properties. A bunch of classes had sprouted these strange properties: properties whose value couldn’t be accessed, properties that did weird things to the classes’ internal state, behaving more like function calls than properties. It was like Jay had never learned about void methods.
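To make the anti-pattern concrete, here is a hypothetical Python reconstruction (Inicart's real codebase isn't shown, and booThu's actual behavior is invented for illustration): a write-only property is really a void method wearing property syntax.

```python
class Cart:
    """Hypothetical sketch of the anti-pattern: a write-only property."""

    def __init__(self):
        self._items = []

    @property
    def booThu(self):
        # The value can never be read back...
        raise AttributeError("booThu is write-only")

    @booThu.setter
    def booThu(self, item):
        # ...and assigning to it secretly mutates internal state,
        # exactly like a void method would, only harder to spot.
        self._items.append(item)

    def add_item(self, item):
        """What the code review wanted instead: an honest method."""
        self._items.append(item)
```

An innocent-looking assignment like `cart.booThu = "widget"` changes the cart's contents, while any attempt to read the "property" blows up.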

When challenged, Jay said, “Well, when I interviewed at Google, they thought that was a really clever design choice.” Of course, Jay may have interviewed at Google, but according to his resume, he never worked there.

As the checkins piled up and the team dug deeper, worry turned into alarm. Large sections of code had vanished from the codebase. According to Jay’s checkin comments, the swaths he’d erased were “inefficient and useless”. Colleen would have been willing to argue the point about efficiency, but the missing code was better described as “handling rare but important corner cases in shopping cart processing”.

Jay was obstinate when questioned about his unusual coding style. “I’m writing compiler-efficient code,” he cried. “If you don’t understand how the compiler turns your code into machine instructions, you’re never going to write an efficient program! That’s why I’ve been cleaning up your code.”

The outburst that ensured Jay a place in Inicart legend forevermore took place when, in the wake of The Case of the Missing Corner-Case Code, Scott told Jay they were letting him go. After security had shown the raving developer out of the building, Scott let the team in on their final conversation.

“I told him, ‘I’m sorry, Jay, but we have to let you go,’” Scott said.

“You can’t do that!” Jay had replied. “I’m brilliant!”

Scott had been so taken aback by this assertion that he’d been unable to stop himself from saying “Uh, no! You’re not!”

Scott admitted this hadn’t been his most-professional moment. But the rest of the team forgave him: from then on, Colleen and co. had a new catchphrase whenever a teammate found a bug in their code.


Cryptogram Google's Unguessable URLs

Google secures photos using public but unguessable URLs:

So why is that public URL more secure than it looks? The short answer is that the URL is working as a password. Photos URLs are typically around 40 characters long, so if you wanted to scan all the possible combinations, you'd have to work through 10^70 different combinations to get the right one, a problem on an astronomical scale. "There are enough combinations that it's considered unguessable," says Aravind Krishnaswamy, an engineering lead on Google Photos. "It's much harder to guess than your password."
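The back-of-the-envelope arithmetic is easy to reproduce. A quick sketch (the 64-symbol alphabet is an assumption, roughly matching base64-style tokens; Google doesn't specify its exact alphabet):

```python
import math

# A ~40-character token drawn from a base64-style alphabet (~64 symbols).
alphabet_size = 64
token_length = 40
keyspace = alphabet_size ** token_length

# Same astronomical ballpark as the quoted 10^70 figure.
print(f"~10^{round(math.log10(keyspace))} combinations")  # ~10^72 combinations

# For comparison, an 8-character password over ~94 printable ASCII symbols:
password_space = 94 ** 8
print(round(math.log10(keyspace / password_space)))  # the URL wins by ~56 orders of magnitude
```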

It's a perfectly valid security measure, although unsettling to some.

Planet Linux Australia Matt Palmer: Why DANE isn't going to win

In a comment to my previous post, Daniele asked the entirely reasonable question,

Would you like to comment on why you think that DNSSEC+DANE are not a possible and much better alternative?

Where DANE fails to be a feasible alternative to the current system is that it is not “widely acknowledged to be superior in every possible way”. A weak demonstration of this is that no browser has implemented DANE support, and very few other TLS-using applications have, either. The only thing I use which has DANE support that I’m aware of is Postfix – and SMTP is an application in which the limitations of DANE have far less impact.

The limitations of DANE for large-scale deployment, as I understand them, are enumerated below.

DNS Is Awful

Quoting Google security engineer Adam Langley:

But many (~4% in past tests) of users can’t resolve a TXT record when they can resolve an A record for the same name. In practice, consumer DNS is hijacked by many devices that do a poor job of implementing DNS.

Consider that TXT records are far, far older than TLSA records, so it seems likely that TLSA lookups would fail rather more than 4% of the time. Imagine what that failure rate would do to the reliability of DANE verification: it would either be completely unworkable, or else would cause a whole new round of “just click through the security error” training. Ugh.
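For the record, the lookup in question is well-defined: DANE consults a TLSA record at a name derived from the port, protocol, and host (per RFC 6698). A minimal sketch of building that query name:

```python
def tlsa_qname(host: str, port: int = 443, proto: str = "tcp") -> str:
    """Build the DNS name DANE consults for a TLSA record (RFC 6698)."""
    return f"_{port}._{proto}.{host}"

# A browser validating an HTTPS connection would need this extra
# out-of-band lookup before trusting the certificate:
print(tlsa_qname("www.example.com"))       # _443._tcp.www.example.com
# The SMTP case, where Postfix's DANE support lives:
print(tlsa_qname("mail.example.com", 25))  # _25._tcp.mail.example.com
```

Every one of those queries is an extra chance for a broken resolver to get in the way, which is where the failure-rate worry above comes from.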

This also impacts DNSSEC itself. Lots of recursive resolvers don’t validate DNSSEC, and some providers mangle DNS responses in some way, which breaks DNSSEC. Since OSes don’t support DNSSEC validation “by default” (for example, by having the name resolution APIs indicate DNSSEC validation status), browsers would essentially have to ship their own validating resolver code.

Some people have concerns around the “single point of control” for DNS records, too. While the “weakest link” nature of the CA model is terribad, there is a significant body of opinion that replacing it with a single, minimally-accountable organisation like ICANN isn’t a great trade.

Finally, performance is also a concern. Having to go out-of-band to retrieve TLSA records delays page generation, and nobody likes slow page loads.


DNSSEC Is Awful

Lots of people don’t like DNSSEC, for all sorts of reasons. While I don’t think it is quite as bad as people make out (I’ve deployed it for most zones I manage), there are some legitimate issues that mean browser vendors aren’t willing to rely on DNSSEC.

1024 bit RSA keys are quite common throughout the DNSSEC system. Getting rid of 1024 bit keys in the PKI has been a long-running effort; doing the same for DNSSEC is likely to take quite a while. Yes, rapid rotation is possible, by splitting key-signing and zone-signing (a good design choice), but since it can’t be enforced, it’s entirely likely that long-lived 1024 bit keys for signing DNSSEC zones are the rule, rather than the exception.

DNS Providers are Awful

While we all poke fun at CAs who get compromised, consider how often someone’s DNS control panel gets compromised. Now ponder the fact that, if DANE is supported, TLSA records can be manipulated in that DNS control panel. Those records would then automatically be DNSSEC signed by the DNS provider and served up to anyone who comes along. Ouch.

In theory, of course, you should choose a suitably secure DNS provider, to prevent this problem. Given that there are regular hijackings of high-profile domains (which, presumably, the owners of those domains would also want to prevent), there is something in the DNS service provider market which prevents optimal consumer behaviour. Market for lemons, perchance?


None of these problems are unsolvable, although none are trivial. I like DANE as a concept, and I’d really, really like to see it succeed. However, the problems I’ve listed above are all reasonable objections, made by people who have their hands in browser codebases, and so unless they’re fixed, I don’t see that anyone’s going to be able to rely on DANE on the Internet for a long, long time to come.

Krebs on Security Online Cheating Site AshleyMadison Hacked

Large caches of data stolen from online cheating site AshleyMadison have been posted online by an individual or group that claims to have completely compromised the company’s user databases, financial records and other proprietary information. The still-unfolding leak could be quite damaging to some 37 million users of the hookup service, whose slogan is “Life is short. Have an affair.”


The data released by the hacker or hackers — which self-identify as The Impact Team — includes sensitive internal data stolen from Avid Life Media (ALM), the Toronto-based firm that owns AshleyMadison as well as related hookup sites Cougar Life and Established Men.

Reached by KrebsOnSecurity late Sunday evening, ALM Chief Executive Noel Biderman confirmed the hack, and said the company was “working diligently and feverishly” to take down ALM’s intellectual property. Indeed, in the short span of 30 minutes between that brief interview and the publication of this story, several of the Impact Team’s Web links were no longer responding.

“We’re not denying this happened,” Biderman said. “Like us or not, this is still a criminal act.”

Besides snippets of account data apparently sampled at random from among some 40 million users across ALM’s trio of properties, the hackers leaked maps of internal company servers, employee network account information, company bank account data and salary information.

The compromise comes less than two months after intruders stole and leaked online user data on millions of accounts from hookup site AdultFriendFinder.

In a long manifesto posted alongside the stolen ALM data, The Impact Team said it decided to publish the information in response to alleged lies ALM told its customers about a service that allows members to completely erase their profile information for a $19 fee.

According to the hackers, although the “full delete” feature that Ashley Madison advertises promises “removal of site usage history and personally identifiable information from the site,” users’ purchase details — including real name and address — aren’t actually scrubbed.

“Full Delete netted ALM $1.7mm in revenue in 2014. It’s also a complete lie,” the hacking group wrote. “Users almost always pay with credit card; their purchase details are not removed as promised, and include real name and address, which is of course the most important information the users want removed.”

Their demands continue:

“Avid Life Media has been instructed to take Ashley Madison and Established Men offline permanently in all forms, or we will release all customer records, including profiles with all the customers’ secret sexual fantasies and matching credit card transactions, real names and addresses, and employee documents and emails. The other websites may stay online.”

A snippet of the message left behind by the Impact Team.


It’s unclear how much of the AshleyMadison user account data has been posted online. For now, it appears the hackers have published a relatively small percentage of AshleyMadison user account data and are planning to publish more for each day the company stays online.

“Too bad for those men, they’re cheating dirtbags and deserve no such discretion,” the hackers continued. “Too bad for ALM, you promised secrecy but didn’t deliver. We’ve got the complete set of profiles in our DB dumps, and we’ll release them soon if Ashley Madison stays online. And with over 37 million members, mostly from the US and Canada, a significant percentage of the population is about to have a very bad day, including many rich and powerful people.”

ALM CEO Biderman declined to discuss specifics of the company’s investigation, which he characterized as ongoing and fast-moving. But he did suggest that the incident may have been the work of someone who at least at one time had legitimate, inside access to the company’s networks — perhaps a former employee or contractor.

“We’re on the doorstep of [confirming] who we believe is the culprit, and unfortunately that may have triggered this mass publication,” Biderman said. “I’ve got their profile right in front of me, all their work credentials. It was definitely a person here that was not an employee but certainly had touched our technical services.”

As if to support this theory, the message left behind by the attackers gives something of a shout out to ALM’s director of security.

“Our one apology is to Mark Steele (Director of Security),” the manifesto reads. “You did everything you could, but nothing you could have done could have stopped this.”

Several of the leaked internal documents indicate ALM was hyper aware of the risks of a data breach. In a Microsoft Excel document that apparently served as a questionnaire for employees about challenges and risks facing the company, employees were asked “In what area would you hate to see something go wrong?”

Trevor Stokes, ALM’s chief technology officer, put his worst fears on the table: “Security,” he wrote. “I would hate to see our systems hacked and/or the leak of personal information.”

In the wake of the AdultFriendFinder breach, many wondered whether AshleyMadison would be next. As the Wall Street Journal noted in a May 2015 brief titled “Risky Business for,” the company had voiced plans for an initial public offering in London later this year with the hope of raising as much as $200 million.

“Given the breach at AdultFriendFinder, investors will have to think of hack attacks as a risk factor,” the WSJ wrote. “And given its business’s reliance on confidentiality, prospective AshleyMadison investors should hope it has sufficiently, er, girded its loins.”

Update, 8:58 a.m. ET: ALM has released the following statement about this attack:

“We were recently made aware of an attempt by an unauthorized party to gain access to our systems. We immediately launched a thorough investigation utilizing leading forensics experts and other security professionals to determine the origin, nature, and scope of this incident.”

“We apologize for this unprovoked and criminal intrusion into our customers’ information. The current business world has proven to be one in which no company’s online assets are safe from cyber-vandalism, with Avid Life Media being only the latest among many companies to have been attacked, despite investing in the latest privacy and security technologies.”

“We have always had the confidentiality of our customers’ information foremost in our minds, and have had stringent security measures in place, including working with leading IT vendors from around the world. As other companies have experienced, these security measures have unfortunately not prevented this attack to our system.”

“At this time, we have been able to secure our sites, and close the unauthorized access points. We are working with law enforcement agencies, which are investigating this criminal act. Any and all parties responsible for this act of cyber–terrorism will be held responsible.”

“Avid Life Media has the utmost confidence in its business, and with the support of leading experts in IT security, including Joel Eriksson, CTO, Cycura, we will continue to be a leader in the services we provide. “I have worked with leading companies around the world to secure their businesses. I have no doubt, based on the work I and my company are doing, Avid Life Media will continue to be a strong, secure business,” Eriksson said.”


TED Soaring imaginations, harsh realities: A recap of TEDGlobal>London

Why do humans rule the world? At TEDGlobal>London, Yuval Noah Harari says it’s our ability to imagine. Photo: James Duncan Davidson/TED

Formula E racing, the darknet, a potential fountain of youth, and beheadings. At TEDGlobal>London — a two-session event curated and hosted by Bruno Giussani on June 16, 2015, at the Royal Institution of Great Britain — the talks ranged from a wildly hopeful future to stern warnings about the present. Enjoy these recaps of the talks in the event.

In the first session…

What sets humans apart? Our ability to imagine. Prehistoric humans were just another unimportant animal — and yet, today, homo sapiens control the planet. How did we get from there to here, asks history professor Yuval Noah Harari? It turns out, it’s because we can imagine things. Nation states, religions, laws — even money — are fictions, and as long as we believe in them, we’ll continue to cooperate effectively in large groups. Think you could convince a chimp to give you a banana by promising that after he dies, he might get countless bananas in Chimpanzee Heaven? Yeah. Not so much. Only humans believe such stories. This is why we rule the world, whereas chimps are locked up in labs. Read more about Harari’s insights at »

Are we really facing a 2°C warming … or worse? Alice Bows-Larkin says we’re ignoring reality when it comes to climate change — and not just on a personal level. Our policy discussions have focused on limiting global warming to 2°C above pre-industrial levels. But looking at a graph of the exponential growth of emissions, we’re really on a path toward a 4°C warming, says Bows-Larkin. And this will translate into an 8 or 10 degree temperature rise in cities. Imagine the hottest day in New York or Mumbai, and add 10 degrees to that, forever. “Our infrastructure has not been designed to cope with this,” she says. To avoid this hot new reality, we need a 10% decrease in emissions per year — starting right now. And to get there, we might need to question which we value more: economic growth or our planet’s future. Bows-Larkin points to a 2011 paper in which she and a colleague suggested a period of “planned austerity in wealthy nations” might help us reduce emissions. It was not a popular suggestion, she says now.

Severed heads as spectacle, then and now. “For the past year, we’ve all been watching the same show, and I don’t mean Game of Thrones,” says anthropologist Frances Larson — but rather, a show produced by murderers and aired on the worldwide web: the recent beheadings of seven Western men by ISIS, filmed and uploaded. “It’s easy to say they’re barbaric, but if we think that they are archaic — from a remote, obscure age — then we’re wrong,” Larson says. While the nature of beheadings and executions has changed with time, one thing hasn’t: We all watch. Beheadings of criminals by guillotine drew crowds in 1792 France; execution day was almost like a carnival. Today, although a public execution is unthinkable, our morbid fascination with severed heads continues. Modern terrorist beheadings are staged, a “horrifying real-life drama” — and a viral spectacle. While it’s easy to feel distant and passive, clicking on a video of a terrorist beheading, the act’s power comes from the people watching as the killer performs. Everyone who looks plays a part. Larson ends with a powerful observation: “We should stop watching, but we know we won’t. History tells us we won’t and the killers know it too.”

Society’s toll on joy. With a guitar in hand, Alice Phoebe Lou — a South African singer-songwriter now based in Berlin — begins her song “Society” with a delicate finger-picking melody. In a lullaby-like lament, she sings of a man embittered by what could have been: “Oh society, what have you done to me?” Afterward, she plays “Red,” singing in a translucent, reflective voice, this time of a man who chases money while stifling his inner joy. “He gets out of bed / money makes him poor,” she sings. “He’s been misled / life’s flown overhead.”

Formula E racer Nicolas Prost answers questions about his electric race car — and the creativity it takes to drive it. Photo: James Duncan Davidson/TED


Do race car drivers dream of electric vehicles? Nicolas Prost competes in the Formula E competition. Formula Huh, you say? Formula E — the first all-electric race car championship. Prost, who’s currently placing fourth in the first year of the race, explains more: it’s essentially like the grandaddy racing championship, Formula One, only these cars have electric engines and rechargeable batteries. These battles are less about costly technology, though, because all the drivers have essentially the same car with the same engine. The difference is made “by engineers and drivers, not by money,” says Prost. The young Frenchman will try to emulate the success of his legendary Formula One driver father, Alain Prost, in the final race of the season — to be held in London at the end of June.

Do we understand addiction? It’s been 100 years since the US and Britain banned drugs, says journalist Johann Hari. And he calls it a “fateful decision” to punish addicts as criminals, instead of treating addiction as an illness. Does the century-long, fairly ineffective War on Drugs have at its base a faulty assumption about what addiction is? Hari points to a few hints that addiction may be about more than building a dependency on “chemical hooks” — like the fact that when your grandmother has a hip replacement, she will get dosed, heavily, with a powerful heroin-like narcotic for pain, but she usually doesn’t become an addict afterward. Addiction might have more to do with environment, specifically with a sense of social isolation, says Hari — who has seen the ravages of addiction in his own family. For his new book, he visited Portugal, which decriminalized drug use in 2000 and dedicated its former war-on-drugs budget to creating jobs and social connection for addicts, reuniting them with a sense of purpose. The move has been widely applauded, and fifteen years on, the numbers show it works. Hari suggests, “The opposite of addiction is not sobriety. The opposite of addiction is connection.”

After a short break, the second session…

Social media shaming. In the early days of Twitter, “voiceless people realized that they had a voice,” says journalist Jon Ronson. “When powerful people transgressed, we realized that we could do something about it … we could hit them with a weapon that we understood and they didn’t: a social media shaming.” But lately, it seems to have gotten out of control. He tells the story of Justine Sacco, who made an ill-advised joke to her 170 Twitter followers before getting on a plane. When she got off the flight and turned on her phone, she was trending on Twitter worldwide, the subject of a (sometimes violently threatening) social media shaming campaign. In a week of online shaming, she lost her job, her reputation and her sense of self. Social media is “a mutual approval machine,” says Ronson, who spent three years interviewing people like Sacco for his new book, So You’ve Been Publicly Shamed. We’re seeing in black and white, deciding that people are either heroes or horrible, when the reality is much more gray. “We are now creating a surveillance society, where the smartest way to survive is by being voiceless,” he says.

Welcome to the darknet. Jamie Bartlett studies crypto-currency, surveillance and counter-surveillance — and he’s here to talk about a place where all those fascinations meet: the darknet. Accessed using the Tor browser (first developed for the US military to ensure online privacy), the darknet contains 20,000 or 30,000 sites where, among other pursuits, you can buy weed, cocaine and illegal pornography, paying with Bitcoin … on websites that are very, very similar to mainstream shopping sites. They have search, shopping carts, click-to-buy buttons, and — most vital — user ratings. Bartlett analyzed 120,000 pieces of online consumer feedback on darknet sites, and found a very high level of consumer trust between online dealers and customers. To strengthen that trust, there’s even an escrow system, so everyone gets their money and/or product. As he says, “The creation of an anonymous marketplace which is competitive and, on the whole, functions is a remarkable, staggering achievement.”

The digital age of conflict. “What are the connections between Facebook, Minnesota, ISIS and Al-Shabaab?” asks security analyst Rodrigo Bijou. The answer: the two terrorist groups used social media to recruit young men in Minnesota to their cause. The digital landscape has changed radicalization, says Bijou, and it’s also changed what constitutes a threat. Governments simply aren’t nimble and adaptive enough to keep up, he says. He points to a moment in the wake of the Charlie Hebdo attack when terrorists infected a “Je Suis Charlie” photo meme with malware. “The new common class of threats is decentralized, digital and takes place at network speed,” he says. So how can we stay safe? Peer-to-peer security, says Bijou. “Individuals have more power than ever before to affect national and international security,” he says. He ends with a plea for governments to nurture hackers, value encryption and support privacy. Because if governments use security backdoors to check in on their citizens, so can those with ill intent.

In defense of millennials. Poet Suli Breaks takes the TEDGlobal stage with a warm, confident smile, piano accompaniment behind his words as he lays out his “Millennial Generation Manifesto.” In it, he addresses common misconceptions about his age group. “They say I don’t get involved in politics, but I engage in politics on Facebook.” Even though millennials do things differently, he says, they don’t deserve all the scorn. “You keep telling us to look up from our screens / just to see you looking down on us, it seems.” Breaks wants to see different age groups collaborate. “It’s a new day, and even though we grew up in different generations / we are facing the same problems disguised as different situations.”

Of mice and (young) men. The fountain of youth may not be as far-fetched as we think, says neurologist Tony Wyss-Coray. He shares past research that shows how old mice, when given a common blood supply with a young mouse through a process called “parabiosis,” showed tissue rejuvenation in the pancreas, liver and heart. “What I am most excited [about] is that this may even apply to the brain,” he says. Wyss-Coray’s lab looked at blood samples from human beings ages 20 to 89, and found a strong correlation between chronological age (age in years) and biological age (the age of their body). And they identified multiple factors in the blood that correlated with age. But could these factors actually affect tissue? To test this, they paired young mice and old mice in parabiosis, and found that — yes — the brains of the old mice showed more active synapses and had less inflammation than before. In a second experiment, mice injected with young human plasma performed better on a memory test than ones injected with a saline solution. Anyone with a sci-fi imagination may automatically be imagining horrifying scenarios of the future — will billionaires set up ‘young person farms’? — but there’s still a long way to go to see if this can work in humans. But Wyss-Coray is going there. He is running a small clinical study in which adults with mild Alzheimer’s will receive injections of plasma from 20-year-olds once a day for four weeks. The results could prove fascinating.

Quantum … biology? Jim Al-Khalili is a quantum physicist. So as he says, “I’ve grown used to the weirdness of quantum mechanics,” the counterintuitive, two-places-at-once subatomic strangeness first described by physicists almost a century ago. Today, though, he asks whether physics is the only field that has to learn to live with quantum mechanics. Perhaps, too, the principles apply to biology. Could some of the messiness of life be explained by quantum biology? Erwin Schrödinger first suggested this in 1944, and Al-Khalili talks through some ground-breaking new papers that suggest that phenomena such as photosynthesis and possibly even bird migration may be explained by quantum physics. Life, suggests Al-Khalili, may have evolved ways to take advantage of quantum mechanics.

Developing an undeveloped relationship. Photographer Diana Markosian was seven years old when her mother took her away from her father. She left the Soviet Union for California — and was never given the chance to say goodbye to him. In a somber, raw and visual account, Markosian shares how she waited years for him to find her … before she found herself standing in his courtyard 15 years later. She moved in with him, but found that the space between them was much too profound. She began a photo project as a way to bridge the gap. They found it easier to take snapshots of each other than to search for words that the other could understand. “It was a way for us to be together without the past intruding,” she says.

Finally, singer-songwriter Alice Phoebe Lou returned to the stage and, in an unnamed song, told the story of a young, determined heroine who broke free to experience the world. “She cut a hole in the fence and ran,” she sang, ending the event on an uplifting note.

Check out the TEDGlobalLondon program guide. And stay tuned to see some of these talks on

TEDGlobalLondon was a one-day event held in the Faraday Lecture Hall at the Royal Institution of Great Britain. Photo: James Duncan Davidson/TED

Planet DebianLaura Arjona: Family games: Robots

I play “Robots” with my kid. I’ve tested the game with other kids and it seems that kids aged 5 to 7 like it. I’ve talked about the game with several adults and it seems they like it too, so I thought writing about it here might help somebody enjoy some summer days.


One player is the Robot; the other one is the programmer. If there are more players, there can be several robots and several programmers. If the players are older, you can make the game more complicated by having the robots cooperate, or the programmers cooperate. If not, make 1-to-1 pairs, or pair 1 programmer with 2 robots if the number of players is odd.

The game

The programmer must turn on the robot by pressing the ON/OFF button (the robot chooses where the button is: nose, ear, belly, whatever).
Then the robot says “hello”, and the programmer asks for the list of available commands (like “Hello, robot, give me the list of commands”). The robot recites the list, for example: “run, stop, jump, sing a song, somersault, say something in a different language”. The programmer then thinks up a program and loads it into the robot (that is, speaks the list of orders, loudly, to the robot). Finally, the programmer presses the START button (the robot chooses where it is) and the robot has to perform the program without errors.

If the robot performs correctly, it wins one point. If it fails, it loses one point. The programmer can then design another program (maybe longer, maybe with a conditional expression) to test the limits of the robot’s memory.

If the robot is tired, needs to charge its batteries, or whatever, the programmer and robot swap roles; the one with more points after a certain amount of time, or number of rounds, wins.
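
For older kids who are starting to write real code, the scoring rules above map naturally onto a few lines of Python. This is just a hypothetical sketch of one round — the command list and function names are made up for illustration, not part of the original game:

```python
# A toy model of the Robots game: the programmer loads a program (a list
# of commands) and the robot performs it, winning a point for a flawless
# run and losing one otherwise.

AVAILABLE_COMMANDS = {"run", "stop", "jump", "sing a song", "somersault"}

def score_round(program, performed, score=0):
    """Return the robot's score after one round of the game."""
    if any(cmd not in AVAILABLE_COMMANDS for cmd in program):
        raise ValueError("program uses a command the robot doesn't know")
    return score + 1 if performed == program else score - 1

score = score_round(["jump", "somersault"], ["jump", "somersault"])   # robot wins a point
score = score_round(["run", "stop", "jump"], ["run", "jump"], score)  # robot slips up
print(score)  # 0
```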

Variants, tips…

If the programmer does not like the list of commands, she can ask for updates, and maybe some new commands will be installed (and/or others uninstalled, who knows).

Please be creative with the list of commands, or the game will be very boring.

Depending on the operating system the robot runs, it will give the programmer more or fewer options, and its behaviour will be more or less evil. Robots shouldn’t behave too evilly, though, otherwise the programmer will erase their disk and install Debian on them to make them obedient ;)

You can play with a third person as the Robot manufacturer, who controls the robot, sometimes even overriding the programmer’s instructions (if the robot runs an OS which is not free software). The robot wins one point for obeying the manufacturer, but if there are more robots, it loses one round of play because the programmer got angry and turned it off or reinstalled its software.

The manufacturer and the programmer cooperate if the robot runs free software, though. Together they can expand the robot’s memory (for example, lend it a piece of paper on which to store the program), create new commands, fix bugs, or whatever.


You can comment on this post in the thread about it.

Filed under: My experiences and opinion Tagged: Debian, Education, English, Free culture, Free Software, Games, kids

Planet DebianGregor Herrmann: RC bugs 2015/17-29

after the release is before the release. – or: long time no RC bug report.

after the jessie release I spent most of my Debian time on work in the Debian Perl Group. we tried to get down the list of new upstream releases (from over 500 to currently 379; unfortunately the CPAN never sleeps), we were & still are busy preparing for the Perl 5.22 transition (e.g. we uploaded something between 300 & 400 packages to deal with Module::Build & being removed from perl core; only team-maintained packages so far), & we had a pleasant & productive sprint in Barcelona in May. – & I also tried to fix some of the RC bugs in our packages which popped up over the previous months.

yesterday & today I finally found some time to help with the GCC 5 transition, mostly by making QA or Non-Maintainer Uploads with patches that already were in the BTS. – a big thanks especially to the team at HP which provided a couple dozen patches!

& here's the list of RC bugs I've worked on in the last 3 months:

  • #752026 – libpdl-stats-perl: "libpdl-stats-perl: FTBFS on arm*"
    upload new upstream release (pkg-perl)
  • #755961 – autounit: "FTBFS with clang instead of gcc"
    apply patch from Alexander <>, QA upload
  • #755963 – clearsilver: "FTBFS with clang instead of gcc"
    apply patch from Alexander <>, upload to DELAYED/5
  • #777776 – src:apron: "apron: ftbfs with GCC-5"
    tag as unreproducible
  • #777780 – src:asmon: "asmon: ftbfs with GCC-5"
    apply patch from Martin Michlmayr, upload to DELAYED/5
  • #777783 – src:atftp: "atftp: ftbfs with GCC-5"
    apply patch from Martin Michlmayr, upload to DELAYED/5
  • #777797 – src:bbrun: "bbrun: ftbfs with GCC-5"
    add patch to build with "-std=gnu89", upload to DELAYED/5
  • #777806 – src:booth: "booth: ftbfs with GCC-5"
    tag as unreproducible
  • #777808 – src:bwm-ng: "bwm-ng: ftbfs with GCC-5"
    merge patch from Ubuntu, and build with "-std=gnu89", upload to DELAYED/5
  • #777831 – src:deborphan: "deborphan: ftbfs with GCC-5"
    apply patch from Jakub Wilk, upload to DELAYED/5, then rescheduled to 0-day with maintainer's permission
  • #777835 – src:dsbltesters: "dsbltesters: ftbfs with GCC-5"
    tag as unreproducible
  • #777853 – src:flow-tools: "flow-tools: ftbfs with GCC-5"
    apply patch from Alexander Balderson, upload to DELAYED/5
  • #777880 – src:gnac: "gnac: ftbfs with GCC-5"
    apply patch from Greg Pearson, upload to DELAYED/5
  • #777881 – src:gngb: "gngb: ftbfs with GCC-5"
    apply patch from Greg Pearson, upload to DELAYED/5
  • #777895 – src:haildb: "haildb: ftbfs with GCC-5"
    tag as unreproducible
  • #777902 – src:hfsplus: "hfsplus: ftbfs with GCC-5"
    merge patch from Ubuntu, QA upload
  • #777903 – src:hugs98: "hugs98: ftbfs with GCC-5"
    apply patch from Elizabeth J Dall, upload to DELAYED/5
  • #777965 – src:libpam-chroot: "libpam-chroot: ftbfs with GCC-5"
    apply patch from Linn Crosetto, upload to DELAYED/5
  • #777975 – src:libssh: "libssh: ftbfs with GCC-5"
    apply patch from Matthias Klose, upload to DELAYED/5
  • #778009 – src:mknbi: "mknbi: ftbfs with GCC-5"
    apply patch from Matthias Klose, QA upload
  • #778020 – src:mz: "mz: ftbfs with GCC-5"
    apply patch from Joshua Gadeken, upload to DELAYED/5
  • #778051 – src:overgod: "overgod: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #778056 – src:pads: "pads: ftbfs with GCC-5"
    apply patch from Andrew Patterson, upload to DELAYED/5
  • #778121 – src:sks-ecc: "sks-ecc: ftbfs with GCC-5"
    apply patch from Brett Johnson, QA upload
  • #778129 – src:squeak-plugins-scratch: "squeak-plugins-scratch: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778137 – src:tabble: "tabble: ftbfs with GCC-5"
    apply patch from David S. Roth, QA upload
  • #778146 – src:tinyscheme: "tinyscheme: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5
  • #778148 – src:trafficserver: "trafficserver: ftbfs with GCC-5"
    lower severity
  • #778151 – src:tuxonice-userui: "tuxonice-userui: ftbfs with GCC-5"
    apply patch from Nicholas Luedtke, upload to DELAYED/5, later sponsor maintainer upload
  • #778152 – src:uaputl: "uaputl: ftbfs with GCC-5"
    apply patch from Brett Johnson, upload to DELAYED/5
  • #778153 – src:udftools: "udftools: ftbfs with GCC-5"
    apply patch from Jakub Wilk, upload to DELAYED/5
  • #778159 – src:uswsusp: "uswsusp: ftbfs with GCC-5"
    apply patch from Andrew James, upload to DELAYED/5
  • #778167 – src:weplab: "weplab: ftbfs with GCC-5"
    apply patch from Elizabeth J Dall, QA upload
  • #778171 – src:wmmon: "wmmon: ftbfs with GCC-5"
    add patch to build with "-std=gnu89", upload to DELAYED/5
  • #778173 – src:wmressel: "wmressel: ftbfs with GCC-5"
    apply patch from Elizabeth J Dall, upload to DELAYED/5
  • #780199 – src:redhat-cluster: "redhat-cluster: FTBFS in unstable - error: conflicting types for 'int64_t'"
    apply patch from Michael Tautschnig, upload to DELAYED/2, then rescheduled by maintainer
  • #783899 – liblog-any-perl: "liblog-any-perl, liblog-any-adapter-perl: File conflict when being installed together"
    add Breaks/Replaces/Provides (pkg-perl)
  • #784844 – libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: test failures"
    upload new upstream release (pkg-perl)
  • #785020 – libmoosex-getopt-perl: "libmoosex-getopt-perl: FTBFS: test failures"
    upload new upstream release (pkg-perl)
  • #785158 – libnet-ssleay-perl: "libnet-ssleay-perl: FTBFS: Your vendor has not defined SSLeay macro LIBRESSL_VERSION_NUMBER"
    upload new upstream release (pkg-perl)
  • #785229 – sqitch: "sqitch: FTBFS: new warnings"
    upload new upstream release (pkg-perl)
  • #785232 – libdist-zilla-plugin-requiresexternal-perl: "libdist-zilla-plugin-requiresexternal-perl: FTBFS: More than one plan found in TAP output"
    make tests non-verbose (pkg-perl)
  • #785659 – libdist-zilla-perl: "libdist-zilla-perl: FTBFS: t/plugins/testrelease.t failure"
    make tests non-verbose (pkg-perl)
  • #786447 – libcgi-application-plugin-authentication-perl: "libcgi-application-plugin-authentication-perl FTBFS in unstable"
    add patch from Micah Gersten/Ubuntu (pkg-perl)
  • #786591 – libtext-quoted-perl: "libtext-quoted-perl: broken by libtext-autoformat-perl changes"
    upload new upstream release (pkg-perl)
  • #786667 – libcatalyst-plugin-authentication-credential-openid-perl: "libcatalyst-plugin-authentication-credential-openid-perl: FTBFS: Bareword "use_test_base" not allowed"
    patch Makefile.PL (pkg-perl)
  • #788350 – libhttp-proxy-perl: "FTBFS - proxy tests"
    add patch, improved from CPAN RT (pkg-perl)
  • #789141 – src:libdancer2-perl: "libdancer2-perl: FTBFS with Plack >= 1.0036: t/classes/Dancer2-Core-Response/new_from.t"
    upload new upstream release (pkg-perl)
  • #789669 – src:starlet: "starlet: FTBFS with Plack 1.0036"
    add patch for test compatibility with newer Plack (pkg-perl)
  • #789838 – src:starman: "starman: FTBFS with Plack 1.0036"
    upload new upstream release (pkg-perl)
  • #791493 – libpadre-plugin-datawalker-perl: "libpadre-plugin-datawalker-perl: missing dependency on padre"
    add missing dependency (pkg-perl)
  • #791510 – libcatalyst-authentication-credential-authen-simple-perl: "libcatalyst-authentication-credential-authen-simple-perl: FTBFS: Can't locate Test/ in @INC"
    add missing build dependency (pkg-perl)
  • #791512 – libcatalyst-plugin-cache-store-fastmmap-perl: "libcatalyst-plugin-cache-store-fastmmap-perl: FTBFS: Can't locate Test/ in @INC"
    add missing build dependency (pkg-perl)
  • #791709 – libjson-perl: "libjson-perl: FTBFS: Recursive inheritance detected"
    upload new upstream release (pkg-perl)
  • #792063 – src:libmath-mpfr-perl: "FTBFS: lngamma_bug.t and test1.t fail"
    upload new upstream release (pkg-perl)
  • #792844 – libatombus-perl: "libatombus-perl: ships usr/share/man/man3/README.3pm.gz"
    don't install README manpage (pkg-perl)
  • #792845 – libclang-perl: "libclang-perl: ships usr/share/man/man3/README.3pm.gz"
    don't install README POD/manpage (pkg-perl)

Planet DebianEnrico Zini: quote

Random quote

Be selfish when you ask, honest when you reply, and when others reply, take them seriously.

(me, late at night)

Planet Linux AustraliaSridhar Dhanapalan: Twitter posts: 2015-07-13 to 2015-07-19

Sociological ImagesWhy People Become Sociologists…

…according to Charles Schulz:

Found at sociologist Larry Stern’s “Who are these people that become sociologists?”

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

Planet Linux AustraliaCraige McWhirter: Craige McWhirter: How To Configure Debian to Use The Tiny Programmer ISP Board

So, you've gone and bought yourself a Tiny Programmer ISP, plugged it into your Debian system and excitedly run avrdude, only to be greeted with this:

% avrdude -c usbtiny -p m8

avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted
avrdude: initialization failed, rc=-1
         Double check connections and try again, or use -F to override
         this check.

avrdude: error: usbtiny_transmit: error sending control message: Operation not permitted

avrdude done.  Thank you.

I resolved this permissions error by adding the following line to /etc/udev/rules.d/10-usbtinyisp.rules:

SUBSYSTEM=="usb", ATTR{idVendor}=="1781", ATTR{idProduct}=="0c9f", GROUP="plugdev", MODE="0666"
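
Field by field, the rule matches the programmer by its USB IDs and then loosens the device node's permissions. Here is the same rule broken up with comments purely for readability — udev itself wants it on a single line, without the comments:

```
SUBSYSTEM=="usb",            # "==" compares: match devices on the USB subsystem
ATTR{idVendor}=="1781",      # the programmer's USB vendor ID
ATTR{idProduct}=="0c9f",     # ...and its product ID (both visible in lsusb output)
GROUP="plugdev",             # "=" assigns: the device node belongs to group plugdev
MODE="0666"                  # world read/write, so avrdude can run without root
```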

Then restarting udev:

% sudo systemctl restart udev

I plugged the Tiny Programmer ISP back into the laptop and ran avrdude again:

% sudo avrdude -c usbtiny -p m8

avrdude: AVR device initialized and ready to accept instructions

Reading | ################################################## | 100% 0.00s

avrdude: Device signature = 0x1e9587
avrdude: Expected signature for ATmega8 is 1E 93 07
         Double check chip, or use -F to override this check.

avrdude done.  Thank you.

You should now have avrdude love.

Enjoy :-)


Planet Linux AustraliaMichael Still: Casuarina Sands to Kambah Pool

I did a walk with the Canberra Bushwalking Club from Casuarina Sands (in the Cotter) to Kambah Pool (just near my house) yesterday. It was very enjoyable. I'm not going to pretend to be excellent at write-ups for walks, but will note that the walk leader John Evans already has a very detailed blog post up about the walk. We found a bunch of geocaches along the way, with John doing most of the work and ChifleyGrrrl and me providing encouragement and scrambling skills. A very enjoyable day.


See more thumbnails

Interactive map for this route.

Tags for this post: blog pictures 20150718-casurina_sands_to_kambah_pool photo canberra bushwalk
Related posts: Goodwin trig; Big Monks; Geocaching; Confessions of a middle aged orienteering marker; A quick walk through Curtin; Narrabundah trig and 16 geocaches


Sociological ImagesLess than 1% of Women Regret Their Decision to Have an Abortion

A new article reports the findings from a longitudinal study that followed 667 women who had early- and later-term abortions for three years after their procedure. Dr. Corinne Rocca and her colleagues asked women if they felt that the abortion was the “right decision” at one week and approximately every six months thereafter.

This is your image of the week:


Percent of women reporting that abortion was the right decision over three years:


Over 99% of the women said that the abortion was the right decision at every time point. The line that looks like the upper barrier of the graph? That’s the data.

Overall, measures of negative emotions were relatively low — an average score of under 4 on a 16-point scale at one week and declining to about 2 at three years — and were higher for women who had a more difficult time deciding whether to get an abortion or who subsequently had planned pregnancies. Whether the abortion occurred in the first trimester or near the legal limit did not correlate with emotional response.

In contrast, women reported twice as many positive emotions at one week. Over time, positive feelings about the abortion declined along with negative ones, suggesting that the experience became less emotionally charged overall with distance from the procedure.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

Planet DebianNiels Thykier: Performance tuning of lintian

For quite a while, Lintian has been able to create performance logs (--perf-debug --perf-output perf.log) that help diagnose where lintian spends most of its runtime.  I decided to make lintian output these logs on to help us spot performance issues, though I have not been very good at analysing them regularly.

At the beginning of the month, I finally got around to looking a bit into one of them.  My findings on IRC triggered Mattia Rizzolo to create the following graph.  It shows the accumulated runtime of each check/collection, measured in seconds.  From these findings, we set out to solve some of the issues.  This led to the following changes in 2.5.33 (in no particular order):

  • Increased buffer size in check/ to reduce overhead [bc8b3e5] (S)
  • Reduced overhead in strings(1) extraction [b058fef] (P)
  • Reduced overhead in spell-checking [b824170] (S)
    • Also improves the performance of spellintian!
  • Removed a high overhead check that did not work [2c7b922] (P)

Legend: S: run single threaded (1:1 performance improvement).  P: run in parallel.

Overall, I doubt the changes will bring a revolutionary increase in speed, but they should improve the 3rd, 4th and 5th slowest parts of Lintian.

Beyond runtime performance, we got a few memory optimisations in the pipeline for Lintian 2.5.34:

  • Remove member from “non-dir” nodes in the Lintian path graph (2%) [6365635]
  • Remove two fields at the price of computing them as needed (~5%) [a696197 + 8dacc8e]
  • Merge 4 fields into 1 (~8%) [5d49cd2 + fb074e4]
  • Share some memory between package-based caches (18%) [ffc7174]

Combined, these 6 commits reduce memory consumption in caches by ~33% compared to 2.5.33, when lintian processes itself.  In absolute numbers, we are talking about a drop from 12.53MB to 8.48MB.  The mileage can certainly vary depending on the package (mscgen “only” saw a ~25% improvement).  Nevertheless, I was happy to list #715035 as being closed in 2.5.34. :)
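
As a quick sanity check of those absolute numbers (my arithmetic, not part of the original post), the quoted drop works out to roughly a third:

```python
# Cache sizes quoted above for lintian processing itself.
before_mb, after_mb = 12.53, 8.48
saving = (before_mb - after_mb) / before_mb * 100
print(f"{saving:.1f}% smaller")  # 32.3% smaller, i.e. the ~33% quoted, with rounding
```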

Filed under: Debian, Lintian

Planet DebianJohn Goerzen: True Things About Learning to Fly

I’ve been pretty quiet for the last few months because I’m learning to fly. I want to start with a few quotes about aviation. I have heard things like these from many people and can vouch for their accuracy:

Anyone can learn to fly.

Learning to fly is one of the hardest things you’ll ever do.

It is totally worth it. Being a pilot will give you a new outlook on life.

You’ll be amazed at what radios do at 3000ft. Have you ever had a 3000-foot antenna tower?

The world is glorious at 1000ft up.

Share your enthusiasm with those around you. You have a perspective very few ever see, except for a few seconds on the way to 35,000ft.

Earlier this month, I flew solo for the first time — the biggest milestone on the way to getting the pilot’s license. Here’s a photo my flight instructor took as I was coming in to land that day.


Today I took my first flight to another airport. It wasn’t far — about 20 miles away — but it was still a thrill. I flew about 1500ft above the ground, roughly above a freeway that happened to be my route. From that height, things still look three-dimensional. The grain elevator that marked out one small town, the manufacturing plant at another, the college at a third. Bales of hay dotting the fields, the occasional tractor creeping along a road, churches sticking up above the trees. These are places I’ve known for decades, and now, suddenly, they are all new.

What a time to be alive! I am glad that our world is still so full of wonder and beauty.


Geek FeminismThe linkspam was inside us all along (17 July 2015)

  • Everything You Know About Boys and Video Games Is Wrong | Time: “Kids I’ve worked with, both male and female, will put up with a lot to play exciting games. But it doesn’t mean they like the way women are portrayed. Yet the video game industry seems to base much of its game and character design on a few assumptions, among them that girls don’t play big action games, boys won’t play games with strong female characters, and male players like the sexual objectification of female characters.”
  • Tech’s Hottest Lunch Spot? A Strip Club | Forbes: “In a city that’s being gentrified by the engineers and startup employees, the Gold Club is perhaps the most outré illustration of San Francisco’s recent excesses, a place where curious crowds come for the cheap fare and stay for the alcohol and extracurriculars. It is also an example of how tone deaf many in the male-dominated tech industry can be.”
  • How Reddit shoved former CEO Ellen Pao off the glass cliff | The Daily Dot: “What is clear, however, is that like many women before her, Pao was tasked with finding solutions to difficult problems only for the men around her to avoid being blamed for them. While it stood to reason Pao would be targeted by the adolescents on the site, she probably would’ve appreciated a warning about the ones in the board room.”
  • Internet harassment and online threats targeting women: Research review | Journalist’s Resource: “As the totality and intensity of the harassment is being better understood, scholars have even begun to see this phenomenon as a profound civil rights issue for women and other groups such as racial minorities. Persistent threats can not only diminish well-being and cause psychological trauma but can undercut career prospects and the ability to function effectively in the marketplace and participate in democracy.”
  • The Mad Max Comics’ Half-Assed Female Characters | Vulture: “Lazy writers, when doing stories that feature women, are drawn magnetically to woman-denigrating plotlines because those are the ones so baked into the culture that they become easy. The Furiosa writers probably didn’t make the title character a brooding sexual-assault survivor because they wanted to take her down a peg; they did it because they couldn’t be bothered to do something more interesting. That is, of course, an extra disappointment because it runs so counter to the spirit of the movie. And it’s especially frustrating because it’s not a matter of bad storytelling, but a matter of a culture that condones and incentivizes bad stories. “
  • Women at Universities File Patents at Higher Rate | Futurity: “Around the world, the number of women filing patents with the US Patent and Trade Office over the last 40 years has risen fastest within universities, a new study shows.”
  • The Myth That Academic Science Isn’t Biased Against Women | The Chronicle of Higher Education: “We know it is comforting to believe that sexism in science is over, and that the tables have turned and women are now the preferred item on the menu. Fine, whatever: Enjoy your comfort food. Just don’t call it scholarship.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

CryptogramFriday Squid Blogging: Squid Giving Birth

I may have posted this short video before, but if I did, I can't find it. It's four years old, but still pretty to watch.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Krebs on SecurityCVS Probes Card Breach at Online Photo Unit

Nationwide pharmacy chain CVS has taken down its online photo center, replacing it with a message warning that customer credit card data may have been compromised. The incident comes just days after Walmart Canada said it was investigating a potential breach of customer card data at its online photo processing store.


“We have been made aware that customer credit card information collected by the independent vendor who manages and hosts may have been compromised,” CVS said in a statement that replaced the photo Web site’s normal homepage content. “As a precaution, as our investigation is underway we are temporarily shutting down access to online and related mobile photo services. We apologize for the inconvenience. Customer registrations related to online photo processing and are completely separate from and our pharmacies. Financial transactions on and in-store are not affected.”

Last week, Walmart Canada warned it was investigating a similar breach of its online photo Web site, which the company said was operated by a third party. The Globe and Mail reported that the third party in the Walmart Canada breach is a company called PNI Digital Media.

According to PNI’s investor relations page, PNI provides a “proprietary transactional software platform” that is used by retailers such as Costco, Walmart Canada, and CVS/pharmacy to sell millions of personalized products every year.

“Our digital logistics connect your website, in-store kiosks, and mobile presences with neighbourhood storefronts, maximizing style, price, and convenience. Last year the PNI Digital Media platform worked with over 19,000 retail locations and 8,000 kiosks to generate more than 18M transactions for personalized products.”

Update: 11:35 a.m. ET: The above-cited text from PNI’s Investor Relations page was removed shortly after this story went live; a screenshot of it is available here. Someone also edited PNI’s Wikipedia page to remove client information.

Original story: Neither CVS nor PNI could be immediately reached for comment. Costco’s online photo store does not appear to include any messaging about a possible breach.

Interestingly, PNI Digital Media was acquired a year ago by office supply chain Staples. As first reported by this site in October 2014, Staples suffered its own card breach, a six-month intrusion that allowed thieves to steal more than a million customer card accounts.

Update, 11:33 p.m. ET: According to a review of customer data previously listed by PNI, we could be seeing similar actions from Sams Club, Walgreens, Rite Aid and Tesco, to name a few.

Costco, which also was listed as a customer of PNI, just took its photo site offline as well, adding the following message:

“As a result of recent reports suggesting that there may have been a security compromise of the third party vendor who hosts we are temporarily suspending access to the site. This decision does not affect any other Costco website or our in-store operations, including in-store photo centers.”


Tesco’s photo site currently says it is “down for maintenance.” Rite Aid’s photo site also carries a notice saying it was notified by PNI Digital Media of a possible breach:

“We recently were advised by PNI Digital Media, the third party that manages and hosts, that it is investigating a possible compromise of certain online and mobile photo account customer data. The data that may have been affected is name, address, phone number, email address, photo account password and credit card information. Unlike for other PNI customers, PNI does not process credit card information on Rite Aid’s behalf and PNI has limited access to this information. At this time, we have no reports from our customers of their credit card or other information being affected by this issue. While we investigate this issue, as a precaution we have temporarily shut down access to online and mobile photo services.”

“No other online or mobile transactions are affected. This issue is limited to online and mobile photo transactions involving PNI. Rite Aid Online Store, My Pharmacy, wellness+ with Plenti, and in-store systems are not affected.”


Sociological ImagesU.S. Wildfires: What Is a “Natural” Disaster?

Flashback Friday.

The AP has an interesting website about wildfires from 2002 to 2006. Each year, most wildfires occurred west of the Continental Divide.

Many of these areas are forested. Others are desert or shortgrass prairie.

There are a lot of reasons for wildfires–climate and ecology, periodic droughts, humans. The U.S. Fish and Wildlife Service reports that in the Havasu National Wildlife Refuge, the “vast majority” of wildfires are due to human activity. Many scientists expect climate change to increase wildfires.

Many wildfires affect land managed by the Bureau of Land Management. For most of the 1900s, the BLM had a policy of total fire suppression to protect valuable timber and private property.

Occasional burns were part of forest ecology. Fires came through, burning forest litter relatively quickly, then moving on or dying out. Healthy taller trees were generally unaffected; their branches were often out of the reach of flames and bark provided protection. Usually the fire moved on before trees had ignited. And some types of seeds required exposure to a fire to sprout.

Complete fire suppression allowed leaves, pine needles, brush, fallen branches, etc., to build up. Wildfires then became more intense and destructive: they were hotter, flames reached higher, and thicker layers of forest litter meant the fire lingered longer.

As a result, an uncontrolled wildfire was often more destructive. Trees were more likely to burn or to smolder and reignite a fire several days later. Hotter fires with higher flames are more dangerous to fight, and can also more easily jump naturally-occurring or artificial firebreaks. They may burn a larger area than they would otherwise, and thus do more of the damage that total fire suppression policies were supposed to prevent.

In the last few decades the BLM has recognized the importance of occasional fires in forest ecology. Fires are no longer seen as inherently bad. In some areas “controlled burns” are set to burn up some of the dry underbrush and mimic the effects of naturally-occurring fires.

But it’s not easy to undo decades of fire suppression. A controlled burn sometimes turns out to be hard to control, especially with such a buildup of forest litter. Property owners often oppose controlled burns because they fear the possibility of one getting out of hand. So the policy of fire suppression has in many ways backed forest managers into a corner: it led to changes in forests that make it difficult to change course now, even though doing so might reduce the destructive effects of wildfires when they do occur.

Given this, I’m always interested when wildfires are described as “natural disasters.” What makes something a natural disaster? The term implies a destructive situation that is not human-caused but rather emerges from “the environment.” As the case of wildfires shows, the situation is often more complex than this, because what appear to be “natural” processes are often affected by humans… and because we are, of course, part of the environment, despite the tendency to think of human societies and “nature” as separate entities.

Originally posted in 2010.

Gwen Sharp is an associate professor of sociology at Nevada State College. You can follow her on Twitter at @gwensharpnv.


Planet DebianDirk Eddelbuettel: RcppRedis 0.1.5

Another small update to RcppRedis arrived on CRAN this morning. The fix I made a few days ago addressing a unit test setup (for the rredis package loaded only for a comparison) didn't quite work out.

Changes in version 0.1.5 (2015-07-17)

  • Another minor fix to unit test setup for rredis.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppRedis page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RcppTOML 0.0.4

We introduced RcppTOML in April with the initial CRAN release 0.0.3. It permits R to read the (absolutely awesome) TOML format which is simply fabulous for configuration files.

A new version appeared on CRAN yesterday. We had observed a somewhat rare segfault in our production use, which came down to me dereferencing a list iterator without checking length first. Ooops.

As usual, a few other changes were made as well, mostly to stay on the good side of R CMD check --as-cran for the development version of R.

Courtesy of CRANberries, there is also a diffstat report for this release. More information is on the RcppTOML page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaBen Martin: OSX Bundling Soprano and other joys

Libferris has been moving to use more Qt/KDE technologies over the years. Ferris is also a fairly substantial software project in its own right, with many plugins and support for multiple libraries. Years back I moved from using raw redland to using soprano for RDF handling in libferris.

Over recent months, from time to time, I've been working on an OSX bundle for libferris. The idea is to make installation as simple as copying to /Applications. I've done some OSX packaging before, so I've been exposed to the whole library-paths-inside-dylib stuff, and also the freedesktop specs expecting things in /etc or whatever when you really want it to look into /Applications/YourApp/Contents/Resources/.../etc/whatever.

The silver test for packaging is to rename the area that was used to build the source to something unexpected and see if you can still run the tools. The gold test is obviously to install from the app.dmg onto a fresh machine and see that it runs.

I discovered a few gotchas during silver testing and soprano usage. If you get things half right then you can get to a state that allows the application to run but that does not allow a redland RDF model to ever be created. If your application assumes that it can always create an in memory RDF store, a fairly secure bet really, then bad things will befall the app bundle on osx.

Plugins are found by searching for the desktop files first and then loading the shared library plugin as needed. The desktop files can be found with the first line below, while the second line allows the plugin shared libraries to be found and loaded.

export SOPRANO_DIRS=/Applications/
export LD_LIBRARY_PATH=/Applications/

You have to jump through a few more hoops. You'll find that the plugin ./lib/soprano/ links to lib/librdf.0.dylib and librdf will link to other redland libraries which themselves link to things like libxml2 which you might not have bundled yet.

There are also many cases of things linking to QtCore and other Qt libraries. These links are normally to nested paths like Library/Frameworks/QtCore.framework/Versions/4/QtCore which will not pass the silver test. Actually, links inside dylibs like that tend to cause the show to segv and you are left to work out where and why that happened. My roll by hand solution is to create softlinks to these libraries like QtCore in the .../lib directory and then resolve the dylib links to these softlinks.
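The softlink-plus-rewrite approach can be sketched as a couple of commands. This is an illustration only: the bundle path and plugin filename below are hypothetical stand-ins, and it assumes macOS's stock otool and install_name_tool.

```shell
# Illustrative only: bundle path and plugin filename are hypothetical.
LIBDIR=/Applications/YourApp.app/Contents/Resources/lib
PLUGIN="$LIBDIR/soprano/libsoprano_plugin.dylib"

# 1. See which install names the plugin currently links against.
otool -L "$PLUGIN"

# 2. Create a flat softlink to the framework-nested QtCore next to the
#    other bundled libraries.
ln -sf /Library/Frameworks/QtCore.framework/Versions/4/QtCore "$LIBDIR/QtCore"

# 3. Rewrite the nested reference so it resolves relative to the app
#    binary instead of a framework path outside the bundle.
install_name_tool -change \
  Library/Frameworks/QtCore.framework/Versions/4/QtCore \
  @executable_path/../Resources/lib/QtCore \
  "$PLUGIN"
```

Repeating step 1 after the rewrite is a cheap way to confirm the link now points where you expect before attempting the silver test again.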

In the end I'd also like to make an app bundle for specific KDE apps. Just being able to install okular by drag and drop would be very handy. It is my preferred reader for PDF files and having a binary that doesn't depend on a build environment (homebrew or macports) makes it simpler to ensure I can always have okular even when using an osx machine.

RacialiciousOur histories, Our Selves: Poshida‘s Powerful Portrayal of LGBT Pakistanis

By Guest Contributor Sabah Choudrey

To be honest with you, I was already a little won over. Before watching Poshida, a documentary on LGBT Pakistan, I was already moved. As an LGBT Pakistani myself, I felt a connection with this film, directed as it is by an LGBT Pakistani person. I was excited to rediscover Pakistan and meet my “other” family. Maybe one day my families will meet. This film had already given me hope.

It’s still rare that we are allowed to lay claim to and take pride in our culture. But no matter how hidden it is, pride is something that can still shine through. I think that the mainstream assumes that just because something is hidden, it is something to be ashamed of. Especially when it involves a number of taboos – religion, sexuality and gender diversity – namely: Islam, Pakistanis and queerness.

It’s rare that we are allowed to write our own histories and document our own lives. To let others see us the way we see ourselves. To take control of the white Western gaze that is constantly dictating our not-so-happy endings. That is why this film is already so important, before even having watched it. I want to thank the director of this film for simply having made it. This is a milestone in our history.

Poshida really is a one-of-a-kind film. It is so different from any other LGBT documentary I have ever seen, including ones on LGBT Pakistan. Poshida looks at the many different aspects and constructs of the identities of LGBT Pakistanis living there, and how these aspects all interlink with each other, reflecting the true colours of the queer umbrella. Here, the film maker tells these stories through an intersectional lens.

The movie opens with the traditional story of an untraditional love between two men of different faiths. This old tale is recounted to us from the mouths of the locals at the Sufi shrine of Madho Lal Hussain where they rest. The narrator reiterates that Pakistan is a country of contradictions, and follows with something that is always missing from any documentation on LGBT South Asians: colonization. The filmmaker exposes the root of homophobia and transphobia and how queerness became a sin and a criminal offence under the rule of the British Empire – and I feel a rush of validation. This history is never told. We are told our history began when the British set foot, when actually it was erased the moment they invaded our land.

Through the various tales of seven people, the film maker shows us the privilege of wealth and the reality of the class divide for LGBT Pakistani people just trying to survive, and what money can truly buy – the privilege of being “out” and safe. The director allows us to hear the honest stories of gay men, lesbians, trans women and trans men without the usual assumptions and stereotypes that shadow our understanding: “What did your parents say?” A shadow I can never escape from here in the UK, asked by those who already assume I was rejected and misunderstood by my family.

We are shown realities that are affected by what the community thinks, that are improved by financial status, and that are criminalized by the media. Soon after I came out as transgender to my parents, I dug through Pakistani media channels, looking for anything on trans men. I found a news article.

The only thing I could find was a piece on Shamial Raj, a transgender man. I showed it to my dad, but how does it help my case to say that this man was charged with lying to his wife and that the two were then imprisoned? The film maker continues the story and tells us that Shamial and his wife have disappeared and gone into hiding for safety. It isn’t surprising, then, to hear that very few transgender men have come forward since.

The film maker interviews Malik, a trans man with a similar tale of being found out, forced to flee and threatened. Malik was given a choice: return to his family, or have his girlfriend kidnapped or murdered. I think this is the first Pakistani trans man I have seen on film, speaking in Urdu about coming out. It is the first time I am hearing someone talk about coming out as trans in a language so close to me. I have only ever learnt to speak about my own gender in English, using words of the people who invaded my country and colonised trans.

Poshida sticks to its aims, delving into the history of LGBT Pakistan, taking us right through to modern-day culture, and what it is really like to be LGBT in Pakistan – a question that constantly crosses my mind, having spent a third of my childhood in Pakistan and a whole year in a secondary school in Lahore. I often catch myself thinking, “What if …?” I finally have a glimpse of what if. I have known that homosexuality and transgender people are not new to Pakistan’s history, but to see the shrine of Madho Lal Hussain in Lahore, a city where I spent hundreds of days questioning what I was and why Allah had made me this way, is life-changing.

This film has given me strength. Poshida has given me a reason not to hide.

“Poshida: Hidden LGBT Pakistan” is currently under consideration at a number of international film festivals. Like “Poshida” on Facebook and follow on Twitter for updates.

Sabah is a Pakistani trans activist with a passion for his community. Raised in West London, England, he migrated south to Brighton for queerer pastures, and has now returned for browner pastures. His tiny head is full of big ideas: he founded Trans Pride Brighton in 2012, the first trans march and trans celebration in the UK, the QTIPOC Brighton Network for queer, trans and intersex people of colour, and desiQ for queer South Asian people in the London/South East area. Living a glamorous London lifestyle, he works for Gendered Intelligence as a mentor and facilitator for trans young people of colour. He likes talking about his feelings and likes to write about them even more. Tweet him @SabahChoudrey.

The post Our histories, Our Selves: Poshida‘s Powerful Portrayal of LGBT Pakistanis appeared first on Racialicious - the intersection of race and pop culture.

CryptogramUsing Secure Chat

Micah Lee has a good tutorial on installing and using secure chat.

To recap: We have installed Orbot and connected to the Tor network on Android, and we have installed ChatSecure and created an anonymous secret identity Jabber account. We have added a contact to this account, started an encrypted session, and verified that their OTR fingerprint is correct. And now we can start chatting with them with an extraordinarily high degree of privacy.

FBI Director James Comey, UK Prime Minister David Cameron, and totalitarian governments around the world all don't want you to be able to do this.
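The manual fingerprint check at the heart of that recap is easy to get subtly wrong, since clients display the 40 hex digits with varying case and grouping. Here is a minimal sketch of a normalizing, constant-time comparison; the helper is my own illustration, not part of ChatSecure or Orbot:

```python
# Illustrative helper for the manual OTR fingerprint check: normalize the
# fingerprint as displayed (case and spacing vary by client) and compare
# in constant time to avoid leaking where the strings first differ.
import hmac

def same_fingerprint(shown: str, expected: str) -> bool:
    """Compare two OTR fingerprints, ignoring spacing and case."""
    norm = lambda fp: "".join(fp.split()).lower()
    return hmac.compare_digest(norm(shown), norm(expected))
```

In practice the "expected" value is whatever your contact reads to you over a separate, already-trusted channel (in person, or over a verified call).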

Planet Linux AustraliaBinh Nguyen: Selling Software Online, Installer, Packaging, and Packing Software, Desktop Automation, and More

Selling software online is deceptively simple. Actually making money out of it can be much more difficult.

Heaps of packaging/installer programs out there. Some cross platform solutions out there as well. Interestingly, just like a lot of businesses out there (even a restaurant that I frequent will offer you a free drink if you 'Like' them via Facebook) now they make use of guerilla style marketing techniques. Write a blog article for them and they may provide you with a free license.

I've always wondered how much money software manufacturers make from bloatware and other advertising... It can vary drastically. Something to watch out for is silent/delayed installs though, namely installation of software that doesn't show up in the Windows 'Control Panel'.

Even though product activation/DRM can be simple to implement (depending on the solution), cost can vary drastically depending on the company and solution that is involved.

Sometimes you just want to know what packers and obfuscation a company may have used to protect/compress their program. It's been a while since I looked at this and it looks like things were just like last time. A highly specialised tool with few genuinely good, quality candidates...

A nice way of earning some extra/bonus (and legal) income if you have a history of being able to spot software bugs.

If you've never used screen/desktop automation software before, there are actually quite a few options out there. Think of it as 'Macros' for the Windows desktop. The good thing is that a lot of them may use a scripting language for the backend and have other unexpected functionality as well, opening up further opportunities for productivity and automation gains.

A lot of partition management software claim to be able to basically handle all circumstances. The strange thing is that disk cloning to an external drive doesn't seem to be handled as well. The easiest/simplest way seems to be just using a caddy/internal in combination with whatever software you may be using.

There are some free Australian accounting solutions out there. A bit lacking feature wise though.

Every once in a while someone sends you an email in a 'eml' format which can't be decoded by your local mail client. Try using 'ripmime'...
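If ripmime isn't to hand, the same job can be done with Python's standard email module. A minimal sketch (the function name is mine, not ripmime's):

```python
# Minimal sketch: extract attachments from a raw .eml message using only
# the standard library -- equivalent in spirit to running ripmime on it.
import email
from email import policy

def extract_attachments(eml_bytes: bytes):
    """Return a list of (filename, payload) for each attachment."""
    msg = email.message_from_bytes(eml_bytes, policy=policy.default)
    out = []
    for part in msg.iter_attachments():
        out.append((part.get_filename(), part.get_content()))
    return out
```

Read the .eml file in binary mode and pass its contents straight in; the modern `policy.default` parser handles the MIME decoding for you.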

Worse Than FailureError'd: Error Version 16

"I was filling out a survey for PayPal when this message popped up to let me know that I am at a testing stage," Ishai S. writes.


"Apparently CAPTCHAs now come in postmodern surrealist flavor," writes Ian S.


"So, let me get this straight," wrote Luke H., "I am allowed to install VMware ESXi Free on as many physical machines as I like, just as long as I don't install it on any physical machines."


Connor wrote, "Home Depot is good at installing new roofs...but not at installing genuine copies of Windows."


"The error was alright, but what was REALLY amusing was that one of the players tried to hit the close icon as if it was a tablet," writes Garry M.


"While trying to upgrade Ubuntu, the upgrader broke and rebooted to show me this," Nick wrote.


Helen B. wrote, "No wonder my internet has been a little slow, although surely if I did actually live somewhere off the coast of Africa, they could find an exchange a little nearer?"


"Thank goodness Lloyds TSB's site obfuscates their phone numbers," James writes.


[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianSimon Kainz: DUCK challenge: week 2

Just a little update on the DUCK challenge: in the last week, the following packages were fixed and uploaded into unstable:

Last week we had 10 packages uploaded & fixed; the current week resulted in 15 fixed packages.

So there are currently 25 packages fixed by 20 different uploaders. I really hope I can meet you all at DebConf15!! The list of the fixed and updated packages is available here. I will try to update this ~daily. If I missed one of your uploads, please drop me a line.

A big "Thank You" to you.

There is still lots of time till the end of DebConf15 and the end of the DUCK Challenge, so please get involved.

And remember:

debcheckout fails? FIX MORE URLS

Planet Linux AustraliaBen Martin: Terry && EL

After getting headlights Terry now has a lighted arm. This is using the 3 meter EL wire and a 2xAA battery inverter to drive it. The around $20 entry point to bling is fairly hard to resist. The EL tape looks better IMHO but seems to be a little harder to work with from what I've read about cutting the tape and resoldering / reconnecting.

I have a 1 meter red EL tape which I think I'll try to wrap around the pan/tilt assembly. From an initial test it can make it around the Actobotics channel length I'm using about twice. I'll probably print some mounts for it so that the tape doesn't have to try to make right-angle turns at the ends of the channel.

Planet DebianJohn Goerzen: First steps with smartcards under Linux and Android — hard, but it works

Well this has been an interesting project.

It all started with a need to get better password storage at work. We wound up looking heavily at a GPG-based solution. This prompted the question: how can we make it even more secure?

Well, perhaps, smartcards. The theory is this: a smartcard holds your private keys in a highly-secure piece of hardware. The PC can never actually access the private keys. Signing and decrypting operations are done directly on the card to prevent the need to export the private key material to the PC. There are lots of “standards” to choose from (PKCS#11, PKCS#15, and OpenPGP card specs) that are relevant here. And there are ways to use SSH and OpenVPN with some of these keys too. Access to the card is protected by a passphrase (called a “PIN” in smartcard lingo, even though it need not be numeric). These smartcards might be USB sticks, or cards you pop into a reader. In any case, you can pop them out when not needed, pop them in to use them, and… well, pretty nice, eh?
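The trust boundary described above can be modelled in a few lines. This is purely illustrative, not a real card API: HMAC stands in for the card's on-board RSA/ECC signing, and the PIN-retry lockout mimics what real cards enforce.

```python
# Illustrative model of the smartcard trust boundary, NOT a real card or
# crypto API: the "card" keeps its key private and only exposes sign(),
# gated by a PIN, with a lockout after repeated wrong guesses. HMAC is a
# stand-in for the card's real on-chip signing operation.
import hashlib
import hmac
import secrets

class SmartCard:
    MAX_PIN_TRIES = 3

    def __init__(self, pin: str):
        self._pin = pin
        self._key = secrets.token_bytes(32)  # never leaves the object
        self._tries_left = self.MAX_PIN_TRIES
        self._unlocked = False

    def verify_pin(self, pin: str) -> bool:
        if self._tries_left == 0:
            raise RuntimeError("card locked")
        if hmac.compare_digest(pin, self._pin):  # constant-time compare
            self._tries_left = self.MAX_PIN_TRIES
            self._unlocked = True
        else:
            self._tries_left -= 1
            self._unlocked = False
        return self._unlocked

    def sign(self, data: bytes) -> bytes:
        # The host sends data in; only the signature ever comes back out.
        if not self._unlocked:
            raise PermissionError("PIN not verified")
        return hmac.new(self._key, data, hashlib.sha256).digest()
```

The point of the model is the shape of the interface: there is no method that returns `_key`, so a compromised host can at worst ask the card to sign things while it is plugged in and unlocked.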

So that’s the theory. Let’s talk a bit of reality.

First of all, it is hard for a person like me to evaluate how secure my data is in hardware. There was a high-profile bug in the OpenPGP JavaCard applet used by Yubico that caused the potential to use keys without a PIN, for instance. And how well protected is the key in the physical hardware? Granted, in most of these cards you’re talking serious hardware skill to compromise them, but still, this is unknown in absolute terms.

Here’s the bigger problem: compatibility. There are all sorts of card readers, but compatibility with pcsc-tools and pcscd on Linux seems pretty good. But the cards themselves — oh my. PKCS#11 defines an interface API, but each vendor would provide their own .so or .dll file to interface. Some cards (for instance, the ACOS5-64 mentioned on the Debian wiki!) are made by vendors that charge $50 for the privilege of getting the drivers needed to make them work… and they’re closed-source proprietary drivers at that.

Some attempts

I ordered several cards to evaluate: the OpenPGP card, specifically designed to support GPG; the ACOS5-64 card, the JavaCOS A22, the Yubikey Neo, and a simple reader listed on the GPG smartcard howto.

The OpenPGP card and ACOS5-64 are the only ones in the list that support 4096-bit RSA keys due to the computational demands of them. The others all support 2048-bit RSA keys.

The JavaCOS requires the user to install a JavaCard applet to the card to make it useable. The Yubico OpenPGP applet works here, along with GlobalPlatform to install it. I am not sure just how solid it is. The Yubikey Neo has yet to arrive; it integrates some interesting OAUTH and TOTP capabilities as well.

I found that Debian’s wiki page for smartcards lists a bunch of them that are not really useable using the tools in main. The ACOS5-64 was such a dud. But I got the JavaCOS A22 working quite nicely. It’s also NFC-enabled and works perfectly with OpenKeyChain on Android (looking like a “Yubikey Neo” to it, once the OpenPGP applet is installed). I’m impressed! Here’s a way to be secure with my smartphone without revealing everything all the time.

Really the large amount of time is put into figuring out how all this stuff fits together. I’m getting there, but I’ve got a ways to go yet.

Update: Corrected to read “signing and decrypting” rather than “signing and encrypting” operations are being done on the card. Thanks to Benoît Allard for catching this error.


TED9 TED Talks to inspire smart conversation


No one really wants to talk about the weather. Inspired by TED Talks, here are some questions to start a better conversation in any situation.


“So, what’s your favorite word?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

Who to ask: The chatty person who’s sharing an outlet with you at the coffee shop
The basic idea: Dictionaries don’t compile themselves — linguistic sleuths called lexicographers do — and in order to keep the modern dictionary accurate and dynamic, they need to be open to new words and formats. They also need your help.
Fun facts you’ll learn: How lexicography is like archaeology; why there’s no such thing as a “bad” word; and the definition of “erinaceous” (hint: it involves hedgehogs). Scoot to 3:58 for that.


“If you could choose a sixth sense, what would it be?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

When to ask: Around the dinner table, just before dessert
The basic idea: Human perception is limited to information our five senses are able to receive and process. But combining technology with biology, scientists are finding new ways to enhance our current senses — and even add new ones.
Fun facts you’ll learn: How scientists are using “peripheral devices” inspired by snakes, moles and fish to give humans new senses; the inner workings of a vest that lets people hear through touch. (Yes, you read that right. See it in action at the 11:45 mark.)


“Do you think you can tell when someone is telling you a lie?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

When to ask: At a late-night get-together
The basic idea: From inkblot tests to learning styles to the details of Milgram’s famous experiment, there are a number of famous psychology tidbits we think we know — but are actually wrong about.
Fun facts you’ll learn: Contrary to popular belief, men are not from Mars and women are not from Venus. Also — sorry to break it to you — listening to Mozart won’t make you smarter.


“Tell me about a time when you made an assumption — and were proven wrong.”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

Who to ask: Your seatmate on a plane or long train trip
The basic idea: It’s easy to hold a narrow vision of a person or a whole culture. But everyone has a collection of layered, overlapping stories — no one is a single, simple meme.
Fun facts you’ll learn: Why the media’s focus on a “single story” about a place prevents true understanding; and what we can do to change — and broaden — the narrative.


“Are you optimistic about the world, or pessimistic — and what makes you feel that way?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

When to ask: Among old friends
The basic idea: Thanks to sensationalist news media, many people think the world is heading in the wrong direction. But reality doesn’t always align with our pessimistic perceptions. By changing the way we see information, we can rise above ignorance.
Fun facts you’ll learn: The many surprising pieces of evidence that show the world is getting better (and why chimps seem to have a better handle on this than we do). For a heartening stat on global vaccine rates that the media got wrong, click to 7:05.


“Do you think you are smarter than your parents?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

When to ask: At a family reunion picnic
The basic idea: Cognitive history shows that each generation scores higher on IQ tests than the one before. As the world around us has changed, so has our ability to process it and understand increasingly complex concepts.
Fun facts you’ll learn: The staggering difference in average IQ scores between generations; the “mental artillery” we have today that our grandparents didn’t; and the areas in which we still fall short. (Hint: flipping through a history textbook might be a good idea.)


“Did you ever wonder why humans cook our food, and other animals don’t?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

When to ask: While cooking with friends
The basic idea: For all the progress we’ve made in neuroscience, some basic questions about the human brain’s size and function have remained unanswered — until now.
Fun facts you’ll learn: The key difference between a human brain and a rat brain; the skill our ancestors developed that changed everything; and what neuroscientists achieved by making “brain soup.”


“If you had to choose between a roof over your head and your right to vote, which would you choose?”

<iframe allowfullscreen="allowFullScreen" frameborder="0" height="329" mozallowfullscreen="mozallowfullscreen" scrolling="no" src="" webkitallowfullscreen="webkitAllowFullScreen" width="585"></iframe>

Who to ask: Someone from a different background whom you want to understand a bit better
The basic idea: China’s rise to economic power is indisputable, and while Western leaders tend to fixate on clashing ideals, some emerging economies view China’s model as the one to emulate.
Fun facts you’ll learn: The political and economic values propelling China forward; why the West’s focus on liberty and democracy isn’t always applicable to reality; and what the West could be championing instead.


“Ever notice how our dogs behave when they’re in packs?”


When to ask: At the dog park
The basic idea: The behavior of individual animals may seem simple and straightforward, but when these animals interact in groups, surprisingly complex patterns emerge.
Fun facts you’ll learn: How to create an example of complexity theory with Scottish terrier puppies.

Worse Than FailureBonus WTF: 5:22

No, it isn't an extended cut of a John Cage song; it's a new feature article that we put together, but you can't read it here. You can only read it over at our sponsor site: 5:22.

Special thanks to Infragistics, for helping support TDWTF.


A worldwide leader in user experience, Infragistics helps developers build amazing applications. More than a million developers trust Infragistics for enterprise-ready user interface toolsets that deliver high-performance applications for Web, Windows and mobile applications. Their Indigo Studio is a design tool for rapid, interactive prototyping.

[Advertisement] Use NuGet or npm? Check out ProGet, the easy-to-use package repository that lets you host and manage your own personal or enterprise-wide NuGet feeds and npm repositories. It's got an impressively-featured free edition, too!

Sociological ImagesThere are 22 Million Angry, Impulsive Americans with Guns

While it seems that much of the discourse around curbing gun violence focuses on the need to keep guns out of the hands of the mentally ill, these two issues — gun violence and mental illness — “intersect only at their edges.” These are the words of Jeffrey Swanson and his colleagues in their new article examining the personality characteristics of American gun owners.

To think otherwise, they argue, is to fall prey to the narrative of gun rights advocates, who want us to think that “controlling people with serious mental illness instead of controlling firearms is the key policy answer.” Since the majority of people with mental illnesses are never violent, this is unlikely to be an effective strategy while, at the same time, further stigmatizing people with mental illness.

What is a good strategy, then, short of the unlikely event that we take America’s guns away?

Swanson and colleagues argue that a better policy would be to look for signs of impulsive, angry, and aggressive behavior and limit gun rights based on that. Evidence of such behavior, they believe, “conveys inherent risk of aggressive or violent acts” substantial enough to justify limiting gun ownership.

Using a nationally representative data set, they estimate that 8,865 people out of every 100,000 both (1) own at least one gun and (2) exhibit impulsive angry behavior: angry outbursts, smashing things in anger, or losing their temper and engaging in physical fights. If I do my math right, that’s almost 22 million American adults (~321,300,000 people, minus the 23% under 18, divided by 100,000 and multiplied by 8,865).

1,488 out of those 100,000, or almost 3.6 million people, also carry a gun outside the home. People who owned lots of guns (six or more) were four times as likely both to have anger issues and to carry outside the home.
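The arithmetic behind those estimates is easy to check. Here is a quick back-of-the-envelope sketch in Python, using the figures above (the ~321.3 million population and 23%-under-18 share are the article's own assumptions):

```python
# Back-of-the-envelope check of the estimates above.
US_POPULATION = 321_300_000
UNDER_18_SHARE = 0.23            # minors, excluded from the adult estimate
ANGRY_OWNERS_PER_100K = 8_865    # own a gun and exhibit impulsive angry behavior
ANGRY_CARRIERS_PER_100K = 1_488  # ...and also carry a gun outside the home

adults = US_POPULATION * (1 - UNDER_18_SHARE)
angry_owners = adults / 100_000 * ANGRY_OWNERS_PER_100K
angry_carriers = adults / 100_000 * ANGRY_CARRIERS_PER_100K

print(f"US adults: {adults:,.0f}")                          # about 247.4 million
print(f"Angry, impulsive gun owners: {angry_owners:,.0f}")  # about 21.9 million
print(f"...who also carry outside the home: {angry_carriers:,.0f}")
```

The last figure works out to roughly 3.7 million, in the same ballpark as the "almost 3.6 million" quoted here; the small gap presumably comes from rounding in the published rates.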

The number of angry and impulsive people who own and carry guns, importantly, far exceeds the number of people who have been hospitalized for mental illness. This is a dangerous population, in other words, much larger than the one currently excluded from legal gun ownership.


“It is reasonable to imagine,” Swanson and his colleagues conclude, that people who are angry, aggressive, and impulsive have an arrest history. Accordingly, they advocate gun restrictions based on indicators of this personality type, such as convictions for misdemeanor violence, DUIs, and restraining orders. This, they think, would do a much better job of reducing gun violence than a focus on certified mental illness.

H/t to gin and tacos. Cross-posted at Pacific Standard.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


CryptogramProxyHam Canceled

The ProxyHam project (and associated Def Con talk) has been canceled under mysterious circumstances. No one seems to know anything, and conspiracy theories abound.

Worse Than FailureAnnouncements: Experience Your Own Support Stories at Inedo

Support stories have always been among my favorites. Not enough links for you? Here, I'll just share my favorite favorite: Radio WTF Presents: Quantity of Service.

It's not so much the sense of smug superiority that comes with diagnosing ID-10t and PEBCAK errors, but more a sense of appreciation. I've been there — my first grown-up job was on a helpdesk — and to this day I still handle a fair bit of BuildMaster and ProGet support inquiries. And actually, that's why I'm writing this message today.

We've been growing a bit at Inedo this past year, and there was a position / job opportunity that I wanted to share with you: Support Analyst. This is not a typical helpdesk role by any means. Actually, it's a blend of roles — support, service, technical writing, and development — and since we're a small team, we'll all be working on doing all of these things together.

But I thought the most interesting part about this opportunity is the Developer Growth Opportunity. From the posting:

Most of the developers on our team started their career in a helpdesk/support role, either in college or as their first job. We'll gladly help you make the transition from support to development through our Developer Apprenticeship Program. If you are interested in pursuing this route, we will spend a significant amount of time working with you to rapidly increase your software development proficiency; through our mentorship, you could reach an expert/mastery level in a very short time (what would take most developers ten+ years), but it will require a lot of dedication and a lot of hard work. This experience will also be helpful in the short-term: learning the code is crucial to understanding how our software behaves under certain conditions.

As you all know, either from first-hand experience or from reading a Tales from the Interview, it's nearly impossible to find great developers... especially in our fine city. But we can help make great developers.

Anyway, that's all. If you, or someone you know might be a good fit, please consider getting in touch!

[Advertisement] Scout is the best way to monitor your critical server infrastructure. With over 90 open source plugins, robust alerting, beautiful dashboards and a 5 minute install, Scout saves you valuable engineering time. Try the server monitoring you'll 👍 today. Your first 30 days are free on us. Learn more at Scout.

Worse Than FailureRepresentative Line: Truely Representative

There’s bad code, and then there’s code so bad that you only need to see one line of code to understand how bad it actually is. Simon supplied this tiny horror which manages to combine all that’s wrong with PHP with the worst of loose typing and a thick layer of not really understanding what you’re doing.

Korean Traffic sign (Pass Left or Right)

PHP is a terrible language, but it’s not so terrible that it doesn’t have a boolean type. It’s perfectly happy to juggle your types, and for that reason, it actually discourages the use of any sort of type casting.

Yes, by PHP standards, casting types is an anti-pattern, but that’s not why this is as much of a WTF as it is.

PHP lets you use anything as a boolean, and follows the same general conventions as other type-mangling languages- the integer 0, the floating point value 0.0, an empty string (or the string “0”) are all false.
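The same convention shows up across dynamic languages. Here is a quick illustration in Python rather than PHP (a sketch for comparison only; note one divergence: Python treats the string "0" as truthy, while PHP treats it as falsy):

```python
# Falsy values in Python's boolean context, analogous to PHP's list above.
falsy = [0, 0.0, "", [], None]
assert not any(bool(v) for v in falsy)

# Any non-empty string is truthy -- including "false" and (unlike PHP) "0".
assert bool("true") and bool("false")

# A function expecting a real boolean but handed the *string* "true"
# (or even "false") will happily treat it as true:
def set_flag(enabled):
    return "on" if enabled else "off"

print(set_flag(True))     # on
print(set_flag("true"))   # on
print(set_flag("false"))  # on -- a non-empty string, hence truthy
```

Which is exactly why passing the string “true” where a boolean is expected appears to "work", and why it keeps "working" even if someone later passes “false”.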

So, now we’re stacking up the WTFs- the original programmer could have just passed TRUE, instead of “true”, but really, it didn’t matter what they passed, so this line probably would have made just as much sense:


I fortunately don’t work with PHP, so I had to do a little fact checking to verify that this line was as stupid as I suspected, and that meant reading up on PHP’s handling of booleans and its odd results. At the risk of picking on PHP: while trying to understand how it did booleans, I discovered that it has two different boolean “or” operators, “OR” and “||”, which isn’t a WTF in itself, and that they have totally different precedence.

As a result, you run into weird cases like this:

$z = $y OR $x; // suppose $y is false and $x is true

$z is false at the end of that expression, because the assignment is actually evaluated before the “OR”. Essentially, the final line is actually:

($z=$y) OR $x;

The “||” operator works like a normal person would expect it to.


[Advertisement] Use NuGet or npm? Check out ProGet, the easy-to-use package repository that lets you host and manage your own personal or enterprise-wide NuGet feeds and npm repositories. It's got an impressively-featured free edition, too!


I've read plenty of silly articles in my time, but the naive nonsense from the President of the Australian Population Institute (wonder who funds it?) just about takes the cake.

Ms Jane Nathan says in today's Age (16 July 2015) that Melbourne is headed for eight million by 2050, and goes on to describe what it will be like in the most wildly optimistic tones imaginable. She says "our social harmony, kaleidoscopic culture, clean food, innovative education systems and greatly reduced crime rates are the envy of the world. Our neighbourhoods are artistic, green and pristine".

Sounds like paradise. The problem is, there is no evidence to support it. Indeed all the evidence points in the opposite direction. Rapid population growth in Melbourne has produced higher crime rates, with domestic violence and the ice epidemic blighting our city. Education for our young people has become more costly and less valuable, with increasing graduate unemployment and alarming reports of dodgy private training colleges and cheating at universities. The risk of terrorist attack is higher. And as for green and pristine, just this week it was reported that even common Australian birds, like the Willy Wagtail and the Kookaburra, were being sighted much less frequently. The reason for this is that the streets of mature gardens that used to give our birds food and shelter have been replaced by multi-unit developments and high rise. The vegetation has been destroyed, and the birds have died out.

And the evidence from cities overseas which have got to eight million and more is pretty clear too. Terrible traffic congestion, lousy housing affordability, poor quality open space, big gaps between rich and poor, and an underclass of poverty, drugs and crime. Ms Nathan can endeavour to talk up eight million and sell it as an exciting future all she likes, but there is absolutely no evidence to warrant this "she'll be right" approach to rapid population growth.

Kelvin Thomson MP

Planet Linux AustraliaBen Martin: Terry - Lights, EL and solid Panner

Terry the robot now has headlights! While the Kinect should be happy in low light, I found some nice 3 watt LEDs on sale and so headlights had to happen. The lights want a constant current source of 700mA, so I grabbed an all-in-one chip solution to do that and mounted the lights in series. Yes, there are a load of tutorials on building a constant current driver for a few bucks around the net, but sometimes I don't really want to dive in and build every part. I think it will be interesting at some stage to test some of the constant current setups and see the ripple and various metrics of the different designs. That part of the analysis is harder to find around the place.

And just how does this all look when the juice is flowing, I hear you ask. I have tilted the lights ever so slightly downwards to save the eyes from the full blast. Needless to say, you will be able to see Terry coming now, and it will surely see you in full colour 1080 glory as you come into its sights. I thought about mounting the lights on the pan and tilt head unit, but I really don't want these to ever get to angles that are looking right into a person's eyes, as they are rather bright.

On another note, I now have some EL wire and EL tape for Terry itself. So the robot will be glowing in a subtle way itself. The EL tape is much cooler looking than the wire IMHO, but the tape is harder to cut (read: I probably won't be doing that). I think the 1m of tape will end up wrapped around the platform on the pan and tilt board.

Behind the LED is quite a heatsink, so they shouldn't pop for quite some time. In the top right you can just see the heatshrink direct connected wires on the LED driver chip and the white wire mounts above it. I have also trimmed down the quad encoder wires and generally cleaned up that area of the robot.

A little while ago I moved the pan mechanism off axle. The new axle is hollow and set up to accommodate a slip ring at the base. I now have said slip ring and am printing a crossover plate for that to mount to channel. Probably by the next post Terry will be able to continuously rotate the panner without tangling anything up. The torque multiplier of the brass to alloy wheels together with the 6 rpm gearmotor having very high torque means that the panner will tend to stay where it is. Without powering the motor the panner is nearly impossible to move; the grub screws will fail before the motor gives way.

Although the EL tape is tempting, the wise move is to fit the slip ring first.

Krebs on SecurityThe Darkode Cybercrime Forum, Up Close

By now, many of you loyal KrebsOnSecurity readers have seen stories in the mainstream press about the coordinated global law enforcement takedown of Darkode[dot]me, an English-language cybercrime forum that served as a breeding ground for botnets, malware and just about every other form of virtual badness. This post is an attempt to distill several years’ worth of lurking on this forum into a narrative that hopefully sheds light on the individuals apprehended in this sting and the cybercrime forum scene in general.

To tell this tale completely would take a book the size of The Bible, but it’s useful to note that the history of Darkode — formerly darkode[dot]com — traces several distinct epochs that somewhat neatly track the rise and fall of the forum’s various leaders. What follows is a brief series of dossiers on those leaders, as well as a look at who these people are in real life.


Darkode began almost eight years ago as a pet project of Matjaz Skorjanc, a now-36-year-old Slovenian hacker best known under the hacker alias “Iserdo.” Skorjanc was one of several individuals named in the complaints published today by the U.S. Justice Department.


Butterfly Bot customers wonder why Iserdo isn’t responding to support requests. He was arrested hours before.

Iserdo was best known as the author of the ButterFly Bot, a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global cybercrime operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. Iserdo was arrested by Slovenian authorities in 2010. According to investigators, his ButterFly Bot kit sold for prices ranging from $500 to $2,000.

In May 2010, I wrote a story titled Accused Mariposa Botnet Operators Sought Jobs at Spanish Security Firm, which detailed how several of Skorjanc’s alleged associates actually applied for jobs at Panda Security, an antivirus and security firm based in Spain. At the time, Skorjanc and his buddies were already under the watchful eye of the Spanish police.


Following Iserdo’s arrest, control of the forum fell to a hacker known variously as “Mafi,” “Crim” and “Synthet!c,” who according to the U.S. Justice Department is a 27-year-old Swedish man named Johan Anders Gudmunds. Mafi is accused of serving as the administrator of Darkode, and creating and selling malware that allowed hackers to build botnets. The Justice Department also alleges that Gudmunds operated his own botnet, “which at times consisted of more than 50,000 computers, and used his botnet to steal data from the users of those computers on approximately 200,000,000 occasions.”

Mafi was best known for creating the Crimepack exploit kit, a prepackaged bundle of commercial crimeware that attackers can use to booby-trap hacked Web sites with malicious software. Mafi’s stewardship over the forum coincided with the admittance of several high-profile Russian cybercriminals, including “Paunch,” an individual arrested in Russia in 2013 for selling a competing and far more popular exploit kit called Blackhole.

Paunch worked with another Darkode member named “J.P. Morgan,” who at one point maintained an $800,000 budget for buying so-called “zero-day vulnerabilities,” critical flaws in widely-used commercial software like Flash and Java that could be used to deploy malicious software.


Darkode admin “Mafi” explains his watermarking system.

Perhaps unsurprisingly, Mafi’s reign as administrator of Darkode coincided with the massive infiltration of the forum by a number of undercover law enforcement investigators, as well as several freelance security researchers (including this author).

As a result, Mafi spent much of his time devising new ways to discover which user accounts on Darkode were those used by informants, feds and researchers, and which were “legitimate” cybercriminals looking to ply their wares.

For example, in mid-2013 Mafi and his associates cooked up a scheme to create a fake sales thread for a zero-day vulnerability — all in a bid to uncover which forum participants were researchers or feds who might be lurking on the forum.

That plan, which relied on a clever watermarking scheme designed to “out” any forum members who posted screen shots of the forum online, worked well but also gave investigators key clues about the forum’s hierarchy and reporting structure.


Mafi worked closely with another prominent Darkode member nicknamed “Fubar,” and together the two of them advertised sales of a botnet crimeware package called Ngrbot (according to Mafi’s private messages on the forum, this was short for “Niggerbot.” The password databases from several of Mafi’s accounts on hacked cybercrime forums included variations on the word “nigger” in some form). Mafi also advertised the sale of botnets based on “Grum,” a spam botnet whose source code was leaked in 2013.


Conspicuously absent from the Justice Department’s press release on this takedown is any mention of Darkode’s most recent administrator — a hacker who goes by the handle “Sp3cial1st.”

Better known to Darkode members as “Sp3c,” this individual’s principal contribution seems to have revolved around a desire to massively expand the membership of the forum, as well as an obsession with purging the community of anyone who even remotely might emit a whiff of being a fed or researcher.


The personal signature of Sp3cial1st.

Sp3c is a well-known core member of the Lizard Squad, a group of mostly low-skilled miscreants who specialize in launching distributed denial-of-service attacks (DDoS) aimed at knocking Web sites offline.

In late 2014, the Lizard Squad took responsibility for launching a series of high-profile DDoS attacks that knocked offline the online gaming networks of Sony and Microsoft for the majority of Christmas Day.

In the first few days of 2015, KrebsOnSecurity was taken offline by a series of large and sustained denial-of-service attacks apparently orchestrated by the Lizard Squad. As I noted in a previous story, the booter service — lizardstresser[dot]su — was hosted at an Internet provider in Bosnia that is home to a large number of malicious and hostile sites. As detailed in this story, the same botnet that took Sony and Microsoft offline was built using a global network of hacked wireless routers.

That provider happens to be on the same “bulletproof” hosting network advertised by sp3cial1st. At the time, Darkode and LizardStresser shared the same Internet address.


Another key individual named in the Justice Department’s complaint against Darkode is a hacker known in the underground as “KMS.” The government says KMS is a 28-year-old from Opelousas, Louisiana named Rory Stephen Guidry, who used the Jabber instant message address “” Having interacted with this individual on numerous occasions, I’d be remiss if I didn’t explain why this person is perhaps the least culpable and yet most interesting of the group named in the law enforcement purge.

For the past 12 months, KMS has been involved in an effort to expose the Lizard Squad members, to varying degrees of success. There are few individuals I would consider more skilled in tricking people into divulging information that is not in their best interests than this guy.

Near as I can tell, KMS has worked assiduously to expose the people behind the Lizard Squad and, by extension, the core members of Darkode. Unfortunately for KMS, his activities also appear to have ensnared him in this investigation.

To be clear, nobody is saying KMS is a saint. KMS’s best friend, a hacker from Kentucky named Ryan King (a.k.a. “Starfall” and a semi-frequent commenter on this blog), says KMS routinely had trouble seeing the lines between exposing others and involving himself in their activities. Here’s one recording of him making a fake emergency call to the FBI, disguising his voice as that of President Obama.

KMS is rumored to have played a part in exposing the Lizard Squad’s February 2015 hijack of’s domain in Vietnam. The message left behind in that crime suggested this author was somehow responsible, along with Sp3c and a Rory Andrew Godfrey, the only name that KMS was known under publicly until this week’s law enforcement action.

“As far as I know, I’m the only one who knew his real name,” King said. “The only botnets that he operated were those that he social engineered out of [less skilled hackers], but even those he was trying to get shut down. All I know is that he and I were trying to get [root] access to Darkode and destroy it, and the feds beat us to it by about a week.”

The U.S. government sees things otherwise. Included in a heavily-redacted affidavit (PDF) related to Guidry’s case are details of a pricing structure that investigators say KMS used to sell access to hacked machines (see screenshot below).


Many other individuals operating under a number of hacker names were called out in the Justice Department press release about this action. Perhaps some of them are mentioned in this subset of my personal archive of screen shots from Darkode, hosted here. Happy hunting.

One final note: As happens with many of these takedowns, the bad guys don’t just go away: They go someplace else. In this case, that someplace else is most likely to be a Deep Web or Dark Web forum accessible only via Tor: According to chats observed from Sp3c’s public and private online accounts, the forum is getting ready to move much further underground.

The Justice Department press release on this action is here, which includes links to charging documents on most of the defendants.

Update, 8:55 p.m. ET: Removed a sentence fragment that confused Iserdo with other individuals connected to his indictment.



Planet Linux AustraliaMichael Still: Wanderings

I am on vacation this week, so I took this afternoon to do some walking and geocaching...

That included a return visit to Narrabundah trig to clean up some geocaches I missed last visit:


Interactive map for this route.

And exploring the Lindsay Pryor arboretum because I am trying to collect the complete set of arboretums in Canberra:


Interactive map for this route.

And then finally the Majura trig, which was a new one for me:


See more thumbnails

Interactive map for this route.

I enjoyed the afternoon. I found a fair few geocaches, and walked for about five hours (not including driving between the locations). I would have spent more time geocaching at Majura, except I made it back to the car after sunset as it was.

Tags for this post: blog pictures 20150715-wanderings photo canberra bushwalk trig_point
Related posts: Goodwin trig; Big Monks; Narrabundah trig and 16 geocaches; Cooleman and Arawang Trigs; One Tree and Painter; A walk around Mount Stranger


Google AdsenseDemystifying AdSense policies with John Brown: Understand your traffic (Part 2)

Editor’s note: John Brown, the Head of Publisher Policy Communications, is sharing insights and answering the most common questions about invalid activity.

In this post, I want to stress why we take invalid clicks so seriously and clarify a few questions related to traffic quality and invalid clicks.

Let’s take a step back and think about the digital ad ecosystem. The relationships between Google, advertisers, and publishers are built on trust. A strong and healthy digital ecosystem needs:
  • Users who trust the system and have a good experience,
  • Advertisers safely investing in digital ads,
  • Publishers who can sustain their business.
To protect those relationships, it’s very important to make sure that clicks and impressions are based on genuine user intent. That’s why at Google we have a global team that monitors the traffic across Google's ad network, and prevents advertisers from paying for invalid traffic.

Now, I'd like to address some of the most common questions and concerns from publishers related to ad traffic quality and invalid clicks.

  • What is Google's obligation to publishers?

Google manages advertiser relationships so that you don’t have to. Publishers benefit from our vast supply of ads. To provide ads to your sites for the months and years to come, advertisers must trust our network. Our policies are in place to protect these advertiser relationships, which ultimately protects publishers that work with us as well.

  • What happens to earnings held back from publishers due to invalid activity?

Any revenue found to be from invalid activity is refunded back to the active advertisers, not kept by Google. In 2014, we refunded more than $200,000,000 to advertisers for detected invalid activity, and we disabled more than 160,000 sites to protect the ecosystem.

  • What can Google do to better communicate policies and enforcement?

We’ve adopted a policy of silence for the most part in order to protect our signals. We find it important to protect our signals so that bad actors cannot detect how we discover invalid activity. Additionally, we are always striving to increase transparency around our communications without compromising our techniques to protect advertisers and publishers. Stay tuned for new features which will help you have more control over your content and stay compliant with the policies.

  • Will Google modify interactions with the publisher community going forward?

We realize that we can improve our communications, especially around warnings, suspensions, and account disablement. My charge is to do this. I have many people working with me on better education, along with improving the language and instructions around warnings or messages received from Google. I believe that publishers understand much better where they stand at all times when our policies are clear, when we communicate them effectively, and when we enforce them consistently.

I hope you found these insights useful. Check back here next week where we’ll talk about what you can do as a publisher to help us protect the digital ecosystem. Let us know what you think in the comments section below.

Subscribe to AdSense blog posts

Posted by John Brown
Head of Publisher Policy Communications

RacialiciousWho Gets To Decide? Multiracial Families and the Question of Identity

By Guest Contributor Kristen Green

After talking with a group of writers about my new book—part memoir, part history—I was approached by a white woman who questioned my use of the term multiracial to refer to my husband.

“Is he Black?” she asked. When I said no, she firmly suggested that I “just call him American Indian.”

Since writing Something Must Be Done About Prince Edward County, which outlines white leaders’ decision to close the schools in my hometown rather than desegregate, I’ve received unwelcome feedback about the way I describe Jason and our children, who are a mix of American Indian and white. My mom, tired of my use of the word multiracial, told me to “just let Jason be Jason.”

One person felt my kids were “so light” their race wasn’t worth mentioning. Another wondered how the race of my husband and children could be relevant to the story of my hometown since my husband wasn’t Black.

The comments, all made by white people, sting. Their feedback implies that my husband and children’s deeply personal racial identification is something they are entitled to have a say about. It also suggests they think they have an understanding of Jason and the girls’ lived experience. They couldn’t be more wrong.

The coverage of Rachel Dolezal’s decision to identify as Black, dishonest as she was, has put a laser focus on the topic of identity in this country. The issue at stake: who gets to decide how people of color refer to themselves?

For generations, whites have controlled these definitions of identity, stretching back to the one-drop rule, where anyone with a drop of “Black blood”—once called “Negro blood”—was forced by law to identify as black. The rule was used to justify slavery and Jim Crow segregation. Biracial children with one white parent could not claim any identity other than Black.

On the other hand, one drop of Indian blood has not historically made someone American Indian. The federal government has methods for classifying American Indians. In some cases, if their blood is too “diluted,” people of American Indian descent don’t qualify for land allotments or tribal membership.

With the population of brown people in the U.S. rising, the government, and the American public, will be forced to cede control of these definitions. Over the next four decades, people of two or more races are expected to be the fastest growing population of Americans. And there are more ways than ever for mixed-race people to define themselves.

My husband doesn’t have a cultural or tribal connection to his American Indian background. Yet his lived experience is as a brown person; his identity is tied to how the world views him and treats him. We know the same will be true of our two daughters.

I hear fear in the voices of whites that act as if they have some stake in how people of multiple racial backgrounds identify. When someone questions how I refer to my children, I think of the power whites have clung to by deciding how people of color are labeled.

When people challenge the terms we use, I hear this: there is so much power and privilege in being white, why would you undermine that? Why would you call your kids “mixed,” a term that many still associate with its historical reference to miscegenation, once widely considered shameful? There’s a lack of understanding that the definitions of mixed and multiracial no longer refer singularly to those who are white and black.

When a relative told me that I was placing a burden on my girls by referring to them as multiracial, I wondered what she would have me do instead. Let them try to pass for white? Why shouldn’t my girls proudly claim all that they are?

There is so much power in deciding how to identify oneself, and people of color rightly want to claim that power. This question of how people identify is increasingly relevant as more Americans marry and parent across racial and ethnic lines. People have the right to decide how to identify themselves and their children. They can call themselves mixed or biracial or multiracial. They can identify with one of their races or multiple races. It is a personal decision.

My husband and I want our daughters’ skin color, and their racial background, to be something they take pride in, something they are comfortable talking about. We want them to be aware that people have historically been discriminated against for the color of their skin.

As they get older, they will decide how they want to identify. I hope, by then, as people of color become the majority in this country, they won’t get so many unsolicited opinions from others about the way they refer to their own racial makeup.

Kristen Green (@kgreen) is the author of SOMETHING MUST BE DONE ABOUT PRINCE EDWARD COUNTY, published by Harper in June. The book, a hybrid of memoir and history, describes the decision by white leaders in her hometown to close the public schools rather than desegregate and examines her family’s role. She has worked as a newspaper reporter for 20 years, including at the San Diego Union-Tribune and the Boston Globe.

The post Who Gets To Decide? Multiracial Families and the Question of Identity appeared first on Racialicious - the intersection of race and pop culture.

Sociological ImagesHappy Birthday, Jacques Derrida!

“Jacques Derrida (1930–2004) was born in French Algeria and became one of the most well known 20th century philosophers. His approach was distinct from the various philosophical movements popular among other French intellectuals of the time. Derrida developed a novel strategy called “deconstruction” in the mid 1960s. Through the analysis of texts, deconstruction seeks to expose, and then to subvert, the various binary oppositions that undergird a dominant way of thinking.”

– Sociological Cinema

Art by David Levine. H/t Sociological Cinema.


Krebs on SecurityID Theft Service Proprietor Gets 13 Years

A Vietnamese man who ran an online identity theft service that sold access to Social Security numbers and other personal information on more than 200 million Americans has been sentenced to 13 years in a U.S. prison.

Vietnamese national Hieu Minh Ngo was sentenced to 13 years in prison for running an identity theft service.


Hieu Minh Ngo, 25, ran an ID theft service under a succession of names. Ngo admitted hacking into or otherwise illegally gaining access to databases belonging to some of the world’s largest data brokers, including Court Ventures, a subsidiary of the major consumer credit bureau Experian.

Ngo’s service sold access to “fullz,” the slang term for packages of consumer data that could be used to commit identity theft in victims’ names. The government says Ngo made nearly $2 million from his scheme.

The totality of damage caused by his more than 1,300 customers is unknown, but it is clear that Ngo’s service was quite popular among ID thieves involved in filing fraudulent tax refund requests with the U.S. Internal Revenue Service (IRS). According to the Justice Department, the IRS has confirmed that 13,673 U.S. citizens, whose stolen PII was sold on Ngo’s websites, have been victimized through the filing of $65 million in fraudulent individual income tax returns.

“From his home in Vietnam, Ngo used Internet marketplaces to offer for sale millions of stolen identities of U.S. citizens to more than a thousand cyber criminals scattered throughout the world,” said Assistant Attorney General Leslie R. Caldwell in a press release. “Criminals buy and sell stolen identity information because they see it as a low-risk, high-reward proposition. Identifying and prosecuting cybercriminals like Ngo is one of the ways we’re working to change that cost-benefit analysis.”

Ngo’s service allowed users to search for specific individuals by name, city, and state. Each “credit” cost $1, and a successful hit on a Social Security number or date of birth cost three credits. The more credits you bought, the cheaper each search became: six credits cost $4.99; 35 credits cost $20.99; and $100.99 bought 230 credits. Customers with special needs could avail themselves of the “reseller plan,” which promised 1,500 credits for $500.99, and 3,500 credits for $1,000.99.

Lance Ealy, one of Ngo’s customers, is now in prison for tax ID theft.

Ngo was arrested in 2013, after he was lured to Guam with the offer of access to more consumer data by an undercover U.S. Secret Service agent. Ngo had been facing more than 24 years in federal prison, but his sentence was lightened because he cooperated with investigators to secure the arrest of at least a dozen of his U.S.-based customers. Among them was an Ohio man who led U.S. Marshals on a multi-state pursuit after his conviction on charges of filing phony tax refund requests with the IRS. Investigators close to the case say additional arrests of Ngo’s former customers are pending.

It remains unclear what, if any, consequences there may be going forward for Experian or its subsidiary, Court Ventures. Ngo gained access to the latter’s consumer database by posing as a private investigator based in the United States. In March 2012, Court Ventures was acquired by Experian, and for approximately ten months past that date, Ngo continued paying for his customers’ data searches via cash wire transfers from a bank in Singapore.

In December 2013, an executive from big-three credit reporting bureau Experian told Congress that the company was not aware of any consumers who had been harmed by the incident. Clearly, the facts unveiled in Ngo’s sentencing show otherwise.

I first wrote about Ngo’s service in November 2011. For more on the fallout from this investigation, see this series.

Worse Than FailureThe Batman

    Na, na, na, na, na, na, na, na, 
    Na, na, na, na, na, na, na, na, 
    Na, na, na, na, na, na, na, na, 

We've all heard it. Some of us even grew up when it first aired on TV. Be honest: who among us hasn't experienced just a little bump in cardiac rate when the Batmobile fired up?

It's pretty much a given that when The Batman gets involved, bad things will usually be made better.


S. K. was a developer at Gotham-Dev, Inc. They were doing work for an event-management client located in Gotham City in the U.S. The client had an in-house programmer who called himself The Batman because of his propensity to swoop down on all sorts of problems and make them better.

One day, The Batman decided that all developers should create a separate branch for each and every bug/feature/issue, no matter how small. This way, each change could be developed in isolation, without interference from changes made for other issues. He proposed this up the chain and received the corporate blessing to impose the new policy as law.
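For readers who haven't lived it, a minimal sketch of what the branch-per-issue policy looks like in practice. This assumes Git (the story never names the version-control system), and the issue IDs and file names are hypothetical:

```shell
#!/bin/sh
# Sketch of a branch-per-issue workflow, assuming Git.
# Every issue, however trivial, gets its own branch off main;
# someone (The Batman) then merges them all for a release.
set -e
rm -rf /tmp/gotham-demo
mkdir -p /tmp/gotham-demo
cd /tmp/gotham-demo

git init -q
git config user.email "dev@example.com"
git config user.name "Dev"
echo "base" > app.txt
git add app.txt
git commit -qm "initial commit"
git branch -M main

# One branch per issue, no matter how small the change:
for issue in BUG-101 FEAT-102 BUG-103; do
  git checkout -qb "$issue" main
  echo "$issue fix" > "$issue.txt"
  git add "$issue.txt"
  git commit -qm "$issue: isolated change"
done

# Release time: merge every open branch into a release branch.
# With 50+ branches touching overlapping files, this is where
# the merge-hell begins; here the files are disjoint, so it's clean.
git checkout -qb release main
for issue in BUG-101 FEAT-102 BUG-103; do
  git merge -q --no-edit "$issue"
done
ls
```

The isolation really does keep unrelated changes apart; the cost, as the story shows, is that the integration burden doesn't disappear — it just lands on whoever does the merging.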

As a contractor, S.K.'s team of 12 developers had to comply, and so started creating separate branches for every little work task. After an entire week of this, there were 50+ new branches, and counting. While this may seem excessive to some, it didn't matter to the developers, as their changes could be made in isolation. It was the responsibility of The Batman to merge those 50+ branches and deploy the whole thing on the test environment. After several hours of merge-hell, The Batman sent the following email:

   From:    The Batman <>
   Date:    Fri, Jun 13, 2014 at 12:02 PM
   Subject: Bam! Process improvement!
   To:      Dev-Team <>

   To cut down on confusion and working space issues, I've created a new space and 
   protocol on our devops server just for QA operations. For instance, this week's 
   Dashboard release lives at: 

   Individual branches will be installed for testing, like so:

   The only time when it's acceptable to edit code in a QA instance is when you 
   are making last-minute, pre-release changes. When this is necessary, please 
   only do so with QA's knowledge and understanding.

   Please let me know if you have any questions/comments/concerns.
   -The Batman-

OK, so The Batman had restructured the QA and bug-branch directory layout. Daunting as that was, it didn't really affect the developers; in fact, it really only affected QA. A short time later, The Batman sent another email:

   From:    The Batman <>
   Date:    Fri, Jun 13, 2014 at 4:18 PM
   Subject: Thwap! Process improvement!
   To:      Dev-Team <>

   I will be moving public site release candidates to the same system as soon as 
   possible. Currently the public codebase makes some hard assumptions about URL
   and path structure which requires the site be installed to /home/username/dir 
   - deeper URL structures cause problems. Once these auto-config issues are solved 
   we'll have all our releases under this process - in the interim I will continue 
   pushing front-end releases to dev-w.

   Please let me know if you have any questions/comments/concerns.
   -The Batman-

Again, this seemed harmless, as only a few minor code changes were required to eliminate the old assumptions about the old path structure and replace them with new assumptions about the new path structure. The developers were not worried. Shortly thereafter, The Batman swooped in with yet another email:

   From:    The Batman <>
   Date:    Fri, Jun 13, 2014 at 10:47 PM
   Subject: Sok! Process improvement!
   To:      Dev-Team <>

   Regarding databases for these installs: we've had problems in the past where 
   using the test database allowed a branch to go live with database scripts 
   which did not work. In many cases the database scripts specifically named the 
   test database, so in an effort to prevent these "gotchas" I am currently using 
   an entire database for each release.

   On the upside, this means up-to-date test data in the release candidates.

   -The Batman-

OK, so now there were countless database instances to go with the countless branches in both development and QA.

But the development staff need not fear all of these issues because The Batman was here!

I think The Joker might have viewed The Batman's actions as impinging upon his domain!
