Planet Russell


Planet DebianLars Wirzenius: 45

45 today. I should stop being childish, but I don't wanna.

TEDPoll: Do you understand your pet’s emotions, and do they understand yours? The TED community answers

Animals clearly show emotions, says science historian Laurel Braitman in her TED Talk. Which leads to the question: do you understand your pet’s feelings? Photo: Kris Krüg/TED

Pets: they are just like us. Well, maybe.

In her TED Talk, “Depressed dogs, cats with OCD—what animal madness means for us humans,” Laurel Braitman shares her seven years of research on the mental health of animals. “What I discovered is that I do believe they can suffer from mental illness, and [that] trying to identify mental illness in them can help us be better friends to them,” says Braitman, a TED Fellow, in the talk. “Even though you can’t know exactly what’s going on in the mind of a pig or your pug or your partner, that shouldn’t stop you from empathizing with them.”

This got us curious: For pet owners out there, do you feel in tune with your pet’s emotions? And does that feel like a two-way street?

Earlier this week, we asked you to take a poll on this subject. What you had to say was fascinating.

First of all, members of the TED community appear to be dog people. 51.1% of the 364 poll respondents have a dog, while 34.3% have a cat. Fish were far less popular—only 4.4% of you have them—and just a handful of you have guinea pigs, hamsters or other rodents, or amphibians or reptiles. When it came to write-in “Other” answers—one of you has ferrets, another has a pair of horses and a select few of you have rabbits. Bonus points for the poll respondent who answered: “I have starfish, crabs, sea cucumbers, coral and sea snails.”

The largest group of you pet owners—44.2%—have a single pet. 26.6% of you have two pets; 8% have three. We’re especially intrigued by the 14.3% of you who have 4 or more.

When it comes to naming pets, you guys are highly creative. One poll respondent said, “My dog’s name is Meatball, because when she was a baby, she slept like a ball.” Giving human names to animals was hugely popular with you—which feels in line with Braitman’s idea that anthropomorphizing our pets may actually be a good thing. (Three of you, by the way, have pets named Ted.) Many of your pet names are also very erudite. We love the person who named their pets Tau and Science, and the one whose dogs are named Scout and Atticus because, “I’m a big fan of the characters from To Kill A Mockingbird.”

As for whether you have a good understanding of your pet’s emotions, the answer was overwhelmingly: yes. About 71.2% of you feel like you can generally read what your pet is feeling, while 27.7% say that you get your pet at least some of the time. Only three survey respondents threw their arms up in the air, revealing that their pet’s emotions are a total mystery to them.

Even more interesting—48.7% of you said that your pet has a good understanding of your emotions. And 71.5% of you said that your pet helps you through hard times often—with an additional 14.2% saying that there’s one major event in life that your pet helped you get through. Your write-in answers on this were very moving. A sampling:

“I was suffering a flare-up of an autoimmune condition, feeling just awful. I woke from a nap to find my three cats surrounding me—one on each side and one at my feet. When I woke up the next morning, they were still ‘on duty.’ This was very calming, as stress makes my symptoms worse.”

“During a period of mourning, my dog remained quietly next to my bed as I slept and wept for weeks. He often rested his chin on the edge of the bed, or simply breathed a bit more audibly than usual as if to remind me that I was not alone in my sorrow. He was silent when I needed silence, and he was only affectionate when I seemed to need it. He managed to give me space/time to mourn while never leaving my side … a delicate balance indeed.”

“These past few years have been harder than I ever imagined I’d have to experience, but my dogs have constantly checked in with me. One paws at me until I put her in my lap and she relaxes her head on my shoulder. Her strong sense of empathy has carried me through.”

“I have had episodic depression and just having my dog around, with her boundless energy and happiness, frequently helped me get some different perspective. Plus, looking after her helped rebalance suicidal thoughts.”

As for Laurel Braitman’s observations that animals manifest signs of mental illness, many of you have witnessed this first-hand. 29.4% said that your pet has shown signs of anxiety; 16.2% have seen signs of abandonment issues; 11.9% suspect that your pet struggles with depression; and 11.9% have a pet with a specific phobia (thunder being the most common). An additional 9.4% have noticed signs of Obsessive-Compulsive Disorder; and about 4% have seen what you interpreted as mania, self-destructive tendencies or signs of Narcissistic Personality Disorder. Some of your descriptions of these pets:

“My cat can’t do anything once. If she licks herself, she will lick until the fur starts to come off; if she scratches something, she will scratch until it’s starting to rip apart; if she buries her waste, she will bury it for at least 10-15 minutes. Definitely Obsessive-Compulsive Disorder.”

“I have a cat I believe may have an eating disorder; she frequently binge eats to the point of vomiting, and is only concerned with the state of the food dish.”

“My dog still suckles her blanket at 8 years old, and has licked all the paint off the freezer! When she is even a tiny bit anxious she has incontinence problems. She is on medication for this.”

But interestingly, while many of you have noticed signs of mental illness in your pets, many also noted a kind of intelligence that goes along with it. Writes one poll respondent, “My Border Collie gets overly anxious when she does not have tasks or stimuli. I think she is too smart and doesn’t have the appropriate mental outlets. She is also very good at feigning injuries when she feels like she is being ignored.”

Check out the answers to our multiple-choice questions below. And if you didn’t get to take the poll, leave your thoughts in the comments section.

Take Our Poll

Planet DebianDirk Eddelbuettel: littler 0.2.0

We are happy to announce a new release of littler.

A few minor things have changed since the last release:

  • A few new examples were added or updated, including use of the fabulous new docopt package by Edwin de Jonge which makes command-line parsing a breeze.
  • Other new examples show simple calls to help with sweave, knitr, roxygen2, Rcpp's attribute compilation, and more.
  • We also wrote an entirely new webpage with usage example.
  • A new option -d | --datastdin was added which will read stdin into a data.frame variable X (a quick sketch follows this list).
  • The repository has been moved to this GitHub repo.
  • With that, the build process was updated throughout, and it now also records the current git commit at time of build.
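
As a quick, hypothetical illustration of the new stdin option (it assumes littler's r binary is on your PATH, that -d parses stdin into the data.frame X with read.csv-style defaults as described above, and that data.csv is a made-up input file):

# pipe a CSV into r; -d exposes it as the data.frame X
cat data.csv | r -d -e 'print(summary(X))'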

Full details are provided in the ChangeLog.

The code is available via the GitHub repo, from tarballs off my littler page and the local directory here. A fresh package will go to Debian's incoming queue shortly as well.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJoseph Bisch: Debconf Wrapup

Debconf14 was the first Debconf I attended. It was an awesome experience.

Debconf14 started with a Meet and Greet before the Welcome Talk. I got to meet people and find out what they do for Debian. I also got to meet other GSoC students that I had only previously interacted with online. During the Meet and Greet I also met one of my mentors for GSoC, Zack. Later in the conference I met another of my mentors, Piotr. Previously I only interacted with Zack and Piotr online.

On Monday we had the OpenPGP Keysigning. I got to meet people and exchange information so that we could later sign keys. Then on Tuesday I gave my talk about debmetrics as part of the larger GSoC talks.

During the conference I mostly attended talks. Then on Wednesday we had the daytrip. I went hiking at Multnomah Falls, had lunch at Rooster Rock State Park, and then went to Vista House.

Later in the conference, Zack and I did some work on debmetrics. We looked at the tests, which had some issues. I was able to fix most of the issues with the tests while I was there at Debconf. We also moved the debmetrics repository under the qa category of repositories. Previously it was a private repository.

Sociological ImagesFrom Our Archives: For Labor Day

Today is Labor Day in the U.S. Though many think of it mostly as a last long weekend for recreation and shopping before the symbolic end of summer, the federal holiday, officially established in 1894, celebrates the contributions of labor.

Here are some SocImages posts on a range of issues related to workers, from the history of the labor movement, to current workplace conditions, to the impacts of the changing economy on workers’ pay:

The Social Construction of Work

Work in Popular Culture

Unemployment, Underemployment, and the “Class War”

Unions and Unionization

Economic Change, Globalization, and the Great Recession

Gender and Work

The U.S. in International Perspective

Just for Fun

Bonus!

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Cory DoctorowPodcast: Petard from Tech Review’s Twelve Tomorrows


Here's a reading (MP3) of the first part of my story "Petard: A Tale of Just Desserts" from the new MIT Tech Review anthology Twelve Tomorrows, edited by Bruce Sterling. The anthology also features fiction by William Gibson, Lauren Beukes, Chris Brown, Pat Cadigan, Warren Ellis, Joel Garreau, and Paul Graham Raven. The 2013 summer anthology was a huge hit -- Gardner Dozois called it "one of the year’s best SF anthologies to date, perhaps the best."

MP3

Planet DebianJo Shields: Xamarin Apt and Yum repos now open for testing

Howdy y’all

Two of the main things I’ve been working on since I started at Xamarin are making it easier for people to try out the latest bleeding-edge Mono, and making it easier for people on older distributions to upgrade Mono without upgrading their entire OS.

Public Jenkins packages

Every time anyone commits to Mono git master or MonoDevelop git master, our public Jenkins will try and turn those into packages, and add them to repositories. There’s a garbage collection policy – currently the 20 most recent builds are always kept, and for everything older only the first build of each month is retained.

Because we’re talking potentially broken packages here, I wrote a simple environment mangling script called mono-snapshot. When you install a Jenkins package, mono-snapshot will also be installed and configured. This allows you to have multiple Mono versions installed at once, for easy bug bisecting.

directhex@marceline:~$ mono --version
Mono JIT compiler version 3.6.0 (tarball Wed Aug 20 13:05:36 UTC 2014)
directhex@marceline:~$ . mono-snapshot mono
[mono-20140828234844]directhex@marceline:~$ mono --version
Mono JIT compiler version 3.8.1 (tarball Fri Aug 29 07:11:20 UTC 2014)

The instructions for setting up the Jenkins packages are on the new Mono web site, specifically here. The packages are built on CentOS 7 x64, Debian 7 x64, and Debian 7 i386 – they should work on most newer distributions or derivatives.

Stable release packages

This has taken a bit longer to get working. The aim is to offer packages in our Apt/Yum repositories for every Mono release, in a timely fashion, more or less around the same time as the Mac installers are released. Info for setting this up is, again, on the new website.

Like the Jenkins packages, they are designed, as far as I am able, to cleanly integrate with different versions of major popular distributions – though there are a few instances of ABI breakage in there which I have opted to fix using one evil method rather than another evil method.

Please note that these are still at “preview” or “beta” quality, and shouldn’t be considered usable in major production environments until I get a bit more user feedback. The RPM packages especially are super new, and I haven’t tested them exhaustively at this point – I’d welcome feedback.

I hope to remove the “testing!!!” warning labels from these packages soon, but that relies on user feedback, preferably to my xamarin.com account (jo.shields@).

Worse Than FailureHeard Around the Office

Gary works in a huge conglomerate. There are about 500 developers and assorted low level managers on his floor alone, and everyone is constantly on live audio-chat with their remote peers. As such, you can pretty much hear all of the conversations going on at any given time - if you listen... (see if you can guess whether the engineers or managers are in italics)

"We need to put foreign keys on auxiliary tables in order to enforce the relationships between primary and secondary data." We don't need foreign keys in the database; they slow everything down and make it harder to delete stuff. We'll just keep everything straight in code!

"We need to get requirements on when to do rounding, and what type of rounding to do." What do you mean? "Should we do rounding after each mathematical operation, or after every logical computation? Should we do it on a record by record basis, on an aggregate basis or something else? Should we round up, down, half up, half down, half even? To how many digits of precision? Should all the different computations use the same rounding rules or are they different in each case? The requirements say nothing about it!" We can decide that after the application is finished; we'll see what the data looks like and decide if things need to change.

"We should set up database roles, assign permissions to each role and assign relevant roles to appropriate groups of users. This will make things much easier to manage." It's much simpler to just assign all the permissions each person needs to them individually. "No it's not. We have about 200 tables, each of which needs table-create, drop, insert, update, delete and select privileges. That goes with about 200 sequences which need create, drop and use privileges. Then there are the stored procedures, functions, triggers and views. Multiply all of that by 10 developers and 35 users, and it becomes quite unmanageable." Maybe, but we don't have time to invest in this; change privileges as needs arise!

"Per project plan, we have written > 1,350 JUnit and JBehave (business driven development) tests and scripts to verify the code in the main processing module. Everything works per the tests, but the tests were designed to verify that it works the way we intended. The users haven't yet specified the primary functionality of the core of the application. If they fill in the requirements with anything other than what you told us to expect, most of this stuff will likely need to be changed. We should stop writing tests until the users provide final requirements!" No, keep writing tests. If we have enough of them and it becomes too cumbersome to change it all, the users won't be able to make changes to this iteration of development, and it will all get pushed to version 2.0!

"When you made the project plan, you assumed that nobody would be doing any code changes to the legacy system. Further, you assumed that only one person would be doing 50% production support and the rest of us were 100% dedicated to development. In practice, we've all been doing about 30% production support plus functionality changes to the legacy system. This is going to translate directly to missed deadlines. Before you accept any more requests from the users, you need to tell them that the cost for the request-of-the-day is a delay of <time> in delivering the new project. Then agree to do the work only if they agree - in writing - to the delay." No, I am not going to say 'no' to the users. They are our customers and they get what they want! "Nobody said to say 'no'; just that they must agree to the cost of doing the work, which translates into delays on the new project." I don't care, we'll all just put in extra hours to make up for it. "Wait; 3/4 of the team is hourly consultants who are contractually limited to 40 hours per week. They're not going to work for free, so they won't be putting in any extra time. There's no way you'll be able to deliver this thing on time; you're digging a very deep hole with no escape clause!" Just keep doing the work on the legacy system!

"You can't just hire junior developers with 2-3 years of experience." Maybe, but we can hire two of them for less than what we pay an experienced engineer. "That may be true, but the experienced engineer will generally out-produce them by way more than 2:1. In the long run, having a couple of more experienced folks is cheaper than fixing the damage caused by very inexperienced folks." Productivity doesn't have a line-item on the budget. I get reviewed on (among other things) by how well I work within my budget!

Gary now keeps his head down and makes a concerted effort not to listen; however, he's developing his own escape clause for when the time comes.


Planet DebianJuliana Louback: Debconf 2014 and How I Became a Debian Contributor

Part 1 - Debconf 2014

This year I went to my first Debconf, which took place in Portland, OR during the last week of August 2014. All in all I have to rate my experience as very enlightening and in the end quite fun.

First of all, it was a little daunting: 1 - I was in a city I’d never been to before; 2 - it was a conference with 300+ people, only 3 of whom I knew, and even then I only knew them virtually. Not to mention I was in the presence of some extremely brilliant and well-known contributors in the Debian community, which was somewhat intimidating. Just to give you an idea, Linus Torvalds showed up for a Q&A session last Friday morning! Jealous? Actually I missed that too. It was kind of a last minute thing, booked for coincidentally the exact time I’d be flying out of Portland. I found out about it much too late. But luckily for me and maybe you, the session was filmed and can be seen here. Isn’t that a treat?

Point made, there are lots of really talented people there, both techies and non-techies. It’s easy to feel you’re out of your league; at least I did. But I’d highly encourage you to ignore such feelings if you’re ever in the same situation. Debian has been in the making for a long time now, and although a lot has been done, a lot still needs to be done. The Debian community is very welcoming of new contributors and users, regardless of their level of expertise. So far I haven’t been snubbed by anyone. To the contrary, all my interactions with Debian community members have been extremely positive.

So go ahead and attend the meetings and presentations, even if you think it’s not your area of expertise. Debconf was organized (or at least this one was) as a series of talks, meet ups and ad hoc sessions, some of which occurred simultaneously. The sessions were all about different components of the Debian universe, from presenting new features to overviews of accomplishments to discussing issues and how to fix them. A schedule with the location and description of each session was posted on the Debconf wiki. Sometimes none of the sessions at a certain time was on a topic I knew very much about, but I’d sit in anyway. There’s no rule to attending the sessions, no ‘minimum qualifications’ required. You’ll likely learn something new and you just might find out there is something you can do to contribute. There are also hackathons that are quite the thing, or so I heard. Or you could walk about and meet new people, do some networking.

I have to say networking was the highlight of the Debconf for me. Remember I said I knew about 3 people who were at the conference? Well, I had actually just corresponded with those people. I didn’t really know them. So on my first day I spent quite some time shyly peeking at people’s name tags, trying to recognize someone I had ‘met’ over email or IRC. But with 300 or so people at the conference, I was unsuccessful. So I finally gave up on that strategy and walked up to a random person, stuck out my hand and said, “Hi. My name is Juliana. This is my first Debconf. What’s your name and what do you do for Debian?” This may not be according to protocol, but it worked for me. I got to meet lots of people that way; I met some Debian contributors from my home country (Brazil), some from my current city (NYC), and yet others with interests similar to mine whom I might work with in the near future. For example, I love Machine Learning; I’m currently beginning my graduate studies on that track. Several Debian contributors offered to introduce me to a well-known Machine Learning researcher and Debian contributor who is in NYC. Others had tried out JSCommunicator and had lots of suggestions for new features and fixes, or wanted to know more about the project and WebRTC in general. Also, not everyone there is a super experienced Debian contributor or user. There are a lot of newbies like me.

I got to do a quick 20-min presentation and demo of the work I had done on JSCommunicator during GSoC 2014. Oh my goodness, that was nerve-wracking, but not half as painful as I expected. My mentor (Daniel Pocock) wisely suggested that when confronted with a question I didn’t know how to answer, I redirect the question to the audience. Chances are, there is someone there that knows the answer. If not, it will at least spark a good discussion.

When meeting new people at Debian, a question almost everyone asked is “How did you start working with/for Debian?”. So I thought it would be a good topic to post about.

Part 2 - How I Became a Debian Contributor

Sometime in late October of 2013 (I think) I received an email from one of my professors at UNIRIO forwarding a description of the Outreach Program for Women. OPW is a program organized by GNOME which endeavors to get more women involved in FOSS. OPW is similar to Google Summer of Code; you work remotely from home, guided by an assigned mentor. Debian was one of the 8 participating organizations that year. There was a list of project proposals which I perused; a few of them caught my eye, and these projects were all Debian. I’d already been a fan of FOSS before. I had used the Ubuntu and Debian OS, and I’d migrated to GIMP from Photoshop and Open Office from Microsoft Office, for example. I’d strongly advocated the use of some of my preferred open source apps and programs to my friends and family. But I hadn’t ever contributed to a FOSS project.

There’s no time like the present, so I reached out to the mentor responsible for one of the projects I was interested in, Daniel Pocock. Daniel guided me through making a small contribution to a FOSS project, which serves as a token demonstration of my abilities and is part of the application process. I added a small feature to JMXetric (https://github.com/ganglia/jmxetric) and suggested a fix for an issue in the xTuple project. Actually, I had forgotten about this. Recently I made another contribution to xTuple; it’s funny to see things come full circle. I also had to write a profile-ish description of my experience and how I intended on contributing during OPW on the Debian wiki; if you’d like, you can check it out here.

I wouldn’t follow my example to a T, because in the end I didn’t make the OPW selection. Actually, I take that back. The fact I wasn’t chosen for OPW that year doesn’t mean I was incompetent or incapable of making a valuable contribution. OPW and GSoC do not have unlimited resources; they can’t include everyone they’d like to. They receive thousands of proposals from very talented engineers and not everyone can participate at a given moment. But even though I wasn’t selected, like I said, I could still pitch in. It’s good to keep in mind that people usually aren’t paid to contribute to FOSS. It’s usually volunteer based, which I think is one of the beauties of the FOSS community and in my opinion one of the causes of its success and great quality. People contribute because they want to, not because they have to.

I will say I was a little disappointed at not being chosen. But after being reassured that this ‘rejection’ wasn’t due to any lack on my part, I decided to continue contributing to the Debian project I’d applied to. I was beginning the final semester of my undergraduate studies, which included writing a thesis. To be able to focus on my thesis and graduate on time, I’d stopped working and was studying full time. But I didn’t want to lose practice, and contributing to a FOSS project is a great way to stay in coding shape while doing something useful. So continue contributing I did.

It paid off. I gained experience, added value to a FOSS project and I think my previous contributions added weight to the application I later made for GSoC 2014. I passed this time. To be honest, I really wasn’t counting on it. Actually, I was certain I wouldn’t pass for some reason - insecure much? But with GSoC I wasn’t too anxious about it as I was with the OPW application because by then, I was already ‘hooked’. I’d learned about all the benefits of becoming a FOSS contributor and I wasn’t stopping anytime soon. I had every intention of still working on my FOSS project with or without GSoC. GSoC 2014 ended a week ago (August 18th 2014). There’s a list of things I still want to do with JSCommunicator and you can be sure I’ll keep working on them.

P.S. This is not to say that OPW and GSoC aren’t amazing programs. Try them out if you can; it’s really a great experience.

Planet DebianPeter Palfrader: Driving your firefox like vim

Recently a friend of mine pointed out pentadactyl to me. It's a firefox add-on to drive your browser in a vim-like style. I had tried vimperator in the past and didn't like it very much back then, but a lot has improved since. Obviously there's a learning curve involved, like in many great tools, but I think for me it was worth it. I have it now enabled in all my firefoxes/iceweasels.

Aloha,

Planet DebianPeter Palfrader: Fun and Profit with lirc

Getting the basics

A couple weeks ago I got myself a small USB Infrared Transceiver from the people at iguanaworks. It comes in different models and I got the dual socket one that allows me to hook up external IR emitters. The goal was to remote control my stereo from my desktop machine, so using wired IR blasters gives me the flexibility I need with the wiring.

They provide Debian packages built for some version of Ubuntu from their website. I wanted to build them from source, so apt-get source from their repository it is. The build process of the package is pretty broken -- it assumes there already is an iguanair user, for instance. If somebody could package this up properly for Debian that'd be real nice. And who knows, maybe upstream would appreciate that as well. At first glance the software looks free enough.

Once iguanair has been built and installed, lirc needs a recompile -- it will pick up the iguanair library automatically. Their Getting Started docs are actually pretty good.

Learning the IR codes

Anyway, once the software was set up, I needed the IR codes to send to my Denon. There are a couple of databases for various remote controls and receivers on the internet, but I didn't find a single entry for my old Denon AVR-1600RD. Fortunately lirc allows you to learn codes off a remote control you already have, and since the iguana device also comes with a receiver, I could use irrecord to get the needed magic bytes for commands like power on/off, mute, volume up/down. This actually worked quite well, the only problem was realizing that the IR blasters are really picky about being properly aligned and close to the receiver (my Denon). Once I got that right, all the recorded codes worked. lircd.conf for Denon AVR-1600RD

Putting it all together

I can now use this to mute my stereo using irsend send_once denon KEY_MUTE for instance. What I also wanted to get out of this was that the stereo should be shut off automatically when my machine goes to sleep, or when I haven't used it in a while. Since my Denon only has a power toggle command and does not appear to have a discrete power-off command -- at least I don't know it -- I needed something on my system to keep state, which means I should no longer use my real remote control or else things will get out of sync. I have written a small wrapper script that does this for me -- stereo. It's in /usr/local/bin.

Initialisation

On boot, the script creates its state file when called with the init command. There is a small init script that takes care of this, as well as turning off the stereo when the machine reboots or powers off.

Power off on suspend

In addition to powering off the stereo when the machine goes down, I also want it to do that when it goes to sleep. Enter /etc/pm/sleep.d/20 stereo.

Power off on idle

Furthermore, I run the script out of cron every 5 minutes to see if anything is using the alsa device. If there isn't for half an hour, the stereo gets turned off. Crontab entry:

*/5 * * * * /usr/local/bin/stereo off-if-idle

Manually controlling it

And last but not least I have integrated it into my xbindkeys config. I am abusing a couple special keys that I never ever use. Here's the snippet from my ~/.xbindkeysrc:

"stereo mute"
  XF86HomePage
"stereo vol-"
  XF86Search
"stereo vol+"
  XF86Mail
"stereo toggle"
  c:193
"stereo touch"
  c:192
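Since the wrapper itself is only linked from the original post, here is a minimal, hypothetical sketch of what such a state-keeping script could look like (the remote name and subcommands come from the snippets above; the KEY_* names, file paths and the 30-minute threshold are assumptions):

#!/bin/sh
# Hypothetical sketch of the state-keeping 'stereo' wrapper.
# Assumes the lirc remote is called "denon" and the KEY_* names
# match the recorded lircd.conf.
STATE=/var/run/stereo.state
LAST=/var/run/stereo.last-use

send() { irsend send_once denon "$1"; }

case "$1" in
  init)    echo off > "$STATE" ;;               # run from the init script on boot
  touch)   date +%s > "$LAST" ;;                # note recent audio activity
  on)      grep -qx on "$STATE" || { send KEY_POWER; echo on > "$STATE"; } ;;
  off)     grep -qx off "$STATE" || { send KEY_POWER; echo off > "$STATE"; } ;;
  toggle)  if grep -qx on "$STATE"; then exec "$0" off; else exec "$0" on; fi ;;
  mute)    send KEY_MUTE ;;
  vol-)    send KEY_VOLUMEDOWN ;;
  vol+)    send KEY_VOLUMEUP ;;
  off-if-idle)
           # any process holding the alsa devices counts as activity
           if fuser -s /dev/snd/* 2>/dev/null; then exec "$0" touch; fi
           last=$(cat "$LAST" 2>/dev/null || echo 0)
           if [ $(( $(date +%s) - last )) -ge 1800 ]; then exec "$0" off; fi ;;
esac

Cheers,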

Geek FeminismLinkspam: The First Adventure (30 August 2014)

[Trigger Warning: Transphobic slurs, deliberate mis-gendering] Anti-trans trolling spree forces Wikipedia to ban U.S. House staffers for third time | Raw Story “Wikipedia has once again blocked all computers from the U.S. House of Representatives in order to stop malicious, anti-trans edits to popular pages on the site.”

Admit It: Women Are Smarter Than Men | Inc.com “Hedge funds run by women make three times as much money as hedge funds run by men, and that companies with female CEOs outperform companies with male CEOs by nearly 50%. What’s fascinating about this story, though, isn’t the data, but the attempt to “mansplain” it away.”

Fuck you, Lego | Reel Girl “That’s right, after just two weeks on the market, the Lego female scientists set will no longer be sold by major retailers at a competitive price. The female scientists are banished to become collector’s items.”

Meteor Man: Will There Ever Be Another Black Superhero? | Black Girl Nerds “Robert Townsend created his own superhero, but in the end, he does not taut this superhero as the answer. In a way, I think he simply uses the superhero as inspiration. What would we do to make our communities better if we have the means? … It makes me wonder what Hollywood studios truly fear when they hesitate to make a superhero film with a person of color in the lead. “

Articles i would like to see by men in tech | @shanley Short list of alternative article titles for men considering writing about women in tech.

Hidden dangers of team building rituals | Semantici.st Some points to consider when you are promoting mandofun at work: “everyone else is having fun, except for one person who is forced to play-act at enjoying themselves because they’re terrified of losing their job for ‘not being a culture fit’.”

About the recent attacks on Anita Sarkeesian and Zoe Quinn [Trigger warning: these incidents involve harassment and threats of rape and violence]:

The End of Gatekeeping: The Extinction Burst of Gaming Culture | Dr. NerdLove Covers the recent incidents, the myth that geek and gamer culture has always been ‘a boy thing’, the fall of the self-appointed gatekeepers of gamer culture, and high-profile support for both women.

Tropes vs Anita Sarkeesian: on passing off anti-feminist nonsense as critique | New Statesman “Anita Sarkeesian makes videos looking at how poorly women are represented in games, and gamers hate her for it, insulting her work and accusing her of dishonesty. It’s almost like they’re trying to prove her premise.”

Announcement: Readers who feel threatened by equality no longer welcome | Games.on.net Great link roundup of the incidents, and a fantastically worded admonition for people who think that games in any way belong to them or are “under attack” from political correctness or “social justice warriors” to leave and never come back.

games presenting abusive behaviours as entertainment | @4xisblack “The appeal that games are ‘just harmless fun’ is even more ridiculous because harmless fun, by definition, musn’t harm people”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet Linux AustraliaAndrew Pollock: [life] Day 215: Kindergarten, tinkering, massage and some shopping

Zoe was yelling out for me at 1:30am because her polar bear had fallen out of bed. She then proceeded to have a massive sleep in until 7:30am, so the morning was a little bit rushed.

That said, she took it upon herself to make breakfast while I was in the shower, which was pretty impressive.

Being the first day of Spring and a nice day at that, I wanted to get back into the habit of biking to Kindergarten, so despite it being a bit late, even though we did a very good job of getting ready in a hurry, we biked to Kindergarten. Zoe was singing all the way there, it was very cute.

I got home and spent the day getting Puppet to manage my BeagleBone Black, since I'd had to reinstall it as it had semi-died over the weekend.

I'd moved my massage from Wednesday to today, since there's a Father's Day thing on at Kindergarten on Wednesday, so I had a massage, and then went directly to pick up Zoe.

We went out to Westfield Carindale after pick up, to try and get some digital cameras donated to the Kindergarten to replace the ones they've got, which have died. I wasn't successful on the spot. Then we dropped past the pet shop to get some more kitty litter for Smudge, and then got home.

We'd barely gotten home and then Sarah arrived to pick Zoe up.

Planet Linux AustraliaAndrew Pollock: [life] Day 212: A trip to the pool

This is what I get for not blogging on the day of, I can't clearly remember what we did on Friday now...

I had the plumber out in the morning, and then some cleaners to give a quote. I can't remember what we did after that.

After lunch we biked to Colmslie Pool and swam for a bit, I remember that much, and then I had some friends join Anshu and us for dinner, but the rest is coming up blank.

Planet DebianChristian Perrier: Bug #760000

René Mayorga reported Debian bug #760000 on Saturday August 30th, against the pyfribidi package.

Bug #750000 was reported on May 31st: almost exactly 3 months for 10,000 bugs. The bug rate increased a little bit during the last weeks, probably because of the approaching freeze.

We're therefore getting more clues about when bug #800000, for which we have bets, will be reported. At the current rate (about 40,000 bugs to go, at 10,000 every 3 months), this should happen in about one year. So, the current favorites are Knuth Posern or Kartik Mistry. Still, David Prévot, Andreas Tille, Elmar Heeb and Rafael Laboissiere have their chances, too, if the bug rate increases (I'll watch you guys: any MBF by one of you will be suspect...:-)).

Krebs on SecurityFun With Funny Money

Readers or “fans” of this blog have sent some pretty crazy stuff to my front door over the past few years, including a gram of heroin, a giant bag of feces, an enormous cross-shaped funeral arrangement, and a heavily armed police force. Last week, someone sent me a far less menacing package: an envelope full of cash. Granted, all of the cash turned out to be counterfeit money, but hey it’s the thought that counts, right?

Counterfeit $100s and $50s

This latest “donation” to Krebs On Security arrived via USPS Priority Mail, just days after I’d written about counterfeit cash sold online by a shadowy figure known only as “MrMouse.” These counterfeits had previously been offered on the “dark web” — sites only accessible using special software such as Tor — but I wrote about MrMouse’s funny money because he’d started selling it openly on Reddit, as well as on a half-dozen hacker forums that are quite reachable on the regular Internet.

Sure enough, the package contained the minimum order that MrMouse allows: $500, split up into four fake $100s and two phony $50 bills — all with different serial numbers. I have no idea who sent the bogus bills; perhaps it was MrMouse himself, hoping I’d write a review of his offering. After all, since my story about his service was picked up by multiple media outlets, he’s changed his sales thread on several crime forums to read, “As seen on KrebsOnSecurity, Business Insider and Ars Technica…”

Anyhow, it’s not every day that I get a firsthand look at counterfeit cash, so for better or worse, I decided it would be a shame not to write about it. Since I was preparing to turn the entire package over to the local cops, I was careful to handle the cash sparingly and only with gloves. At first glance, the cash does look and feel like the real thing. Closer inspection, however, reveals that these bills are fakes.

In the video below, I run the fake bills through two basic tests designed to determine the authenticity of U.S. currency: The counterfeit pen test, and ultraviolet light. As we’ll see in the video, the $50 bills shipped in this package sort of failed the pen test (the fake $100 more or less passed). However, both the $50s and $100s completely flopped on the ultraviolet test. It’s too bad more businesses don’t check bills with a cheapo ultraviolet light: the pen test apparently can be defeated easily (by using acid-free paper or by bleaching real bills and using them as a starting point).

Video: http://www.youtube.com/embed/e8M8AOzEJ8s

Let’s check out the bogus Benjamins. In the image below, we can see a pretty big difference in the watermarks on both bills. The legitimate $100 bill — shown at the bottom of the picture — has a very defined image of Benjamin Franklin as a watermark. In contrast, the fake $100 up top has a much less detailed watermark. Still, without comparing the fake and the real $100 side by side, this deficiency probably would be difficult to spot for the untrained eye.

The fake $100 (top) has a much less defined Ben Franklin for a watermark. The color difference between these two bills is negligible, but the legitimate $100 appears darker here because it was closer to the light source behind the bills when this photo was taken.

Granted, hardly any merchants are going to put a customer’s cash under a microscope before deciding whether to accept it as legal tender, but I wanted to have a look because I wasn’t sure when I’d have the opportunity to do so again. One security feature of the $20s, $50s and $100s is the use of “color shifting” ink, which makes the denomination noted in the lower right corner of the bill appear to shift in color from green to black when the bill is tilted at different angles. The fake cash pictured here does a so-so job mimicking that color-shifting feature, but upon closer inspection using a cheap $50 Celestron handheld digital microscope, we can see distinct differences.

Video: http://www.youtube.com/embed/LvdYxHU4bII

Again, using a microscope to inspect cash is impractical for regular businesses trying to detect bogus bills, but it nevertheless reveals interesting dissimilarities between real and fake money. Most of those differences come down to the definition and clarity of markings and lettering. For instance, embedded in the bottom of the portraits of Ulysses S. Grant and Benjamin Franklin on the $50 and $100 bills, respectively, is the same message in super-fine print: “The United States of America.” As we can see in the video below, that message also is present in the counterfeits, but it’s quite a bit less clear in the funny money.

Video: http://www.youtube.com/embed/Z1xxZ-5cvS4

In some cases, entire areas of the real bills are completely absent in the counterfeits. Take a close look at the area of the $50 just to the left of Gen. Grant’s ear and you will see a blob of text that repeats the phrase “USA FIFTY” several times. The image on the left shows a closeup of the legitimate $50, while the snapshot on the right reveals how the phony bill completely lacks this feature.


Similarly, the “100” in the lower left hand corner of the $100 bill is filled in with the words “USA 100,” as we can see in the close-up of a real $100, pictured below left. Magnification of the same area on the phony $100 note (right) shows that this area is filled with nothing more than dots.


Like most counterfeit currency, these bills look and feel fairly real on casual inspection, but they’d quickly be revealed as fakes to anyone with a $9 ultraviolet pen light or a simple magnifying glass.

If someone sticks you with a counterfeit bill, don’t try and pass it off on someone else; the penalties for passing counterfeit currency with intent to defraud are severe (steep fines and up to 15 years in prison). Instead, contact your local police department or the nearest U.S. Secret Service field office and hand it over to them.

Planet Linux AustraliaBrendan Scott: brendanscott

AGs is seeking public submissions on Online Copyright Infringement.

Some thoughts are:

The cover letter to the inquiry cites a PWC report prepared for the Australian Copyright Council.  The letter fails to note that gains are offset by the role of intellectual property in transfer pricing by multinationals.  There is strong evidence to suggest that intellectual property regimes have the effect of substantially reducing Australian taxation revenue through the use of transfer pricing mechanisms.

Page 3 of the discussion paper states that  the High Court in Roadshow said “that there were no reasonable steps that could have been taken by iiNet to reduce its subscribers’ infringements.”  The discussion paper goes on to enquire about what reasonable steps a network operator could take to reduce subscribers’ infringements. The whole of the debate about copyright infringement on the internet is infected by this sort of double speak.

The discussion paper does not specifically ask about a three strikes regime.  However, it invites discussion of a three strikes regime by raising it in the cover matter and then inviting proposals as to what might be a “reasonable step”.  Where noted, my responses on a particular question relate to a three strikes regime.

Question 1:

Compelling an innocent person to assist a third party is to deprive that person of their liberty.  The only reasonable steps that come to mind are for network operators to respond to subpoenas validly issued to them – at least that way assistance is determined on a case by case basis under the supervision of a court.

Question 2:

Innocent third parties should not be required to assist in the enforcement of someone else’s rights. Any assistance that an innocent third party is required to give should be at the rights holder’s cost.  To do otherwise is to effectively require (in the case of a network) all customers to subsidise the enforcement of the rights holders’ private rights. This is an inefficient and inequitable equivalent to a taxation scheme for public services.  The Government may as well compulsorily acquire the rights in question and equitably spread the cost through a levy.

Question 3:

No.  The existing section 36/101 was specifically inserted to provide exactly the clarity proposed here.  Rights holders were satisfied at the time.

Question 4:

Presumably reasonable is an objective test.

Question 5:

This response assumes the proposed implementation of a “three strikes” regime.

There is a Federal Magistrates court which is able to hear copyright infringement cases.  Defendants should have the right to have the case against them heard in a judicial forum. Under a three strikes regime an individual is required to justify their actions based on an accusation of infringement.  In the absence of a justification they suffer a sanction.  Our legal system should not be driven by mere accusations.  Defendants also have the right to know that the case against them is particular to them and not a cookie cutter accusation.

Question 6:

The court should have regard to what aims a block is intended to achieve, whether a block will be effective in achieving those aims and what impact a block will have on innocent third parties which may be affected by it.  For example, when Megaupload was taken down many innocent people lost their data with no warning.  This is more likely to be the case in the future as computing resources are increasingly shared in load balanced cloud storage implementations. These third parties are not represented in court and have no opportunity to put their case before a block is implemented.

A practice should be established whereby the court requires an undertaking from any person seeking a block to indemnify any innocent third party affected by the block against any damage suffered by them.  Alternatively, the Government could establish a victims compensation scheme that can run alongside such a block.  These third parties will be collateral damage from such a scheme.  Indeed, if the test for a site is only a “dominant purpose” test then collateral damage is necessarily a consequence of the block.  An indemnity will serve the purpose of guiding incentives to reduce damage to innocent third parties.

Question 7

If the Government implements proposals which extend the applicability of authorisation infringements to smaller and smaller entities (eg a cafe providing wifi) then the safe harbour provisions need to be sufficiently simple and certain as to allow those entities to rely on them. At the moment they are complex and convoluted. If a cafe is forced to pay hundreds or thousands of dollars for legal advice about their wifi service, they will simply not provide it.

Question 8

Before the impact of measures can be measured [sic] a baseline first needs to be established for the purpose the Copyright Act is intended to serve.   In particular, the purpose of the Copyright Act is not to reduce infringement.  Rather, its titular purpose is to promote the creation of works and other subject matter.  This receives no mention in the discussion paper.  Historically, the Copyright Act has been promoted as necessary to maintain distribution networks (pre 1980s), as a means of providing creators with an income (last 2 centuries, but repeatedly contradicted empirically – most recently in the Don’t Give Up Your Day Job report),  as a natural right of authors (00s – contrary to judicial pronouncements on the issue) and now, apparently, as a means of stimulating the economy.  An Act which has so mutable a purpose ought to be considered with a jaundiced eye.

The reference to the PWC document suggests that the Hargreaves report would be a good starting point for further policy making.

Question 9

The retail price of downloadable copies of copyright works in Australia (exclusive of GST) should not exceed the price in their country of origin by more than 5% when sold directly.  The 5% figure is intended to allow for some additional costs of selling into Australia.

Implement the Productivity Commission’s recommendations on parallel importation.

Question 10, 11

The next two paragraphs of the response to this question deals primarily with a possible three strikes regime although the final observations are of a general character.

“Three strikes” regulation will effectively shift the burden of enforcement further away from rights holders to the people who are least equipped to implement it.  What will parents who receive warning letters do?  Will they implement a sophisticated filtering system on their home router?  Will they send their children off to a reeducation camp run by the rights holders? More likely they will blanket-ban internet access.  How will cafes manage their risk? More likely they will not provide wifi access.  This has already been the death knell of community wifi networks in the US.  The collateral damage from these proposals is difficult to quantify but there is every reason to believe it will be widespread.  This damage is routinely ignored in policy making.

Will rights holders use such a system against everyone? That is unlikely.  Rather, it will be used against some individuals unlucky enough to be first on the list. Those individuals will be used as examples for others.  This will be a law which will be enforced in an arbitrary and discriminatory fashion.  As such it will undermine respect for the law more generally.

The comments on the proposals above assume that they are acted on bona fide.  Once network operators are conditioned to a Pavlovian response to requests the system will be abused – the Get Up! organisation already believes it has been the subject of misuse: https://www.getup.org.au/campaigns/great-barrier-reef–3/adani-video/someone-wants-to-silence-us-dont-let-them

Evasion technologies have previously been a niche interest.  The size of the market limited their growth.  These provisions will sheet home to all citizens the need to implement evasion technologies, thereby greatly increasing the market and therefore the economic incentive for their evolution.  The long run effect of implementing proposals which effect this form of general surveillance of the population is to weaken national security.

By insulating rights holders from the costs of enforcement the proposals disconnect rights holders from the very externalities that enforcement creates.  If there were ever a recipe for poor policy, such a disconnection would be a key element of it.

Planet DebianJunichi Uekawa: I was staring at qemu source for a while last month.

I was staring at qemu source for a while last month. There's a lot of things that I don't understand about the codebase. There's a race but it's hard to tell why a SIGSEGV was received.

Planet DebianTim Retout: Website revamp

This weekend I moved my blog to a different server. This meant I could:

I've tested it, and it's working. I'm hoping that I can swap out the Node.js modules one-by-one for the Debian-packaged versions.

Planet DebianStefano Zacchiroli: debsources hacking

Debsources now has a HACKING file

Here at DebConf14 I have given a few talks. The second one was a technical talk about recent and future developments on Debsources. Both the talk slides and video are available.

After the talk, various DebConf participants have approached me and started hacking on Debsources, which is awesome! As a result of their work, new shiny features will probably be announced shortly. Stay tuned.

When discussing with new contributors (hi Luciano, Raphael!), though, it quickly became clear that getting started with Debsources hacking wasn't particularly easy. In particular, doing a local deployment for testing purposes might be intimidating, due to the need to have a (partial) source mirror and whatnot. To fix that, I have now written a HACKING file for Debsources, which you can find at top-level in the Git repo.

Happy Debsources hacking!

Planet DebianThorsten Alteholz: My Debian activities in August 2014

FTP assistant

By pure chance I was able to accept 237 packages, the same number as last month. 33 times I contacted the maintainer to ask a question about a package and 55 times I had to reject a package. The reject number increased a bit as I also worked on packages that already got a note but had not been fully processed. In contrast I only filed three serious bugs this month.

Currently there are about 200 packages still waiting in the NEW queue. As the freeze for Jessie comes closer every day, I wonder whether all of them can be processed in time. So I don’t mind if every maintainer checks the package again and maybe uploads an improved version that can be processed faster.

Squeeze LTS

This was my second month doing some work for the Squeeze LTS initiative, started by Raphael Hertzog at Freexian.

All in all I got assigned a workload of 16.5h for August. I spent these hours uploading new versions of:

  • [DLA 32-1] nspr security update
  • [DLA 34-1] libapache-mod-security security update
  • [DLA 36-1] polarssl security update
  • [DLA 37-1] krb5 security update
  • [DLA 39-1] gpgme1.0 security update
  • [DLA 41-1] python-imaging security update

Like last month, I prepared these uploads on the basis of the corresponding DSAs for Wheezy. For these packages, backporting the Wheezy patches to Squeeze was rather easy.

I also had a look at python-django and eglibc. Although the python-django patches apply now, the package fails some tests and these issues need further investigation. In the case of eglibc, my small pbuilder didn’t have enough resources, and trying to build the package resulted in a full disk after more than three hours of work.

For PHP5, Ondřej Surý (the real maintainer) suggested using upstream point releases instead of applying only patches. I am curious about how much effort is needed for this approach. Stay tuned; next month you will be told more details!

Anyway, this is still a lot of fun and I hope I can finish python-django, eglibc and php5 in September.

Other packages

This month my meep packages plus mpb have been part of a small hdf5 transition. All five packages needed a small patch and a new upload. As the patch was already provided by Gilles Filippini, this was done rather quickly.

Support

If you would like to support my Debian work, you could either be part of the Freexian initiative (see above) or consider sending some bitcoins to 1JHnNpbgzxkoNexeXsTUGS6qUp5P88vHej. Contact me at donation@alteholz.eu if you prefer another way to donate. Every kind of support is most appreciated.

Planet DebianRitesh Raj Sarraf: apt-offline 1.4

apt-offline 1.4 has been released [1]. This is a minor bug fix release. In fact, one feature, offline bug reports (--bug-reports),  has been dropped for now.

The Debian BTS interface seems to have changed over time and the older debianbts.py module (that used the CGI interface) does not seem to work anymore. The current debbugs.py module seems to have switched to the SOAP interface.

There are a lot of changes going on personally, and I just haven't had the time to spend on this. If anyone would like to help, please reach out to me. We need to use the new debbugs.py module. And it should be cross-platform.

Also, thanks to Hans-Christoph Steiner for providing the bash completion script.

[1] https://alioth.debian.org/projects/apt-offline/


Planet Linux AustraliaSridhar Dhanapalan: Twitter posts: 2014-08-25 to 2014-08-31

Sociological ImagesThis Month in SocImages (August 2014)

SocImages news:

New Pinterest board!

Someone put panties on peaches and then we had to start a new Pinterest board called Sexy What!?  It’s a collection of totally random stuff being made weirdly and unnecessarily sexual by marketers. My favorites are the ads for organ donation, hearing aids, CPR, and sea monkeys.  Enjoy!

1

You like!  Here are our most appreciated posts this month:

Thanks everybody!

Editor’s pick:

Top post on Tumblr this month:

Don’t forget our course guides!

Classes are starting and if you’re teaching you might find our Course Guides useful.  These collect strong SocImages posts organized in a way that follows standard syllabi for frequently-taught sociology courses.  Please use and share freely!

We’d love more! If you are a sociology professor or graduate student who would like to make one, please get in touch!

Social Media ‘n’ Stuff:

Finally, this is your monthly reminder that SocImages is on Twitter, Facebook, Tumblr, Google+, and Pinterest.  I’m on Facebook and most of the team is on Twitter: @lisawade, @gwensharpnv, @familyunequal, and @jaylivingston.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianRussell Coker: Links August 2014

Matt Palmer wrote a good overview of DNSSEC [1].

Sociological Images has an interesting article making the case for phasing out the US $0.01 coin [2]. The Australian $0.01 and $0.02 coins were worth much more when they were phased out.

Multiplicity is a board game that’s designed to address some of the failings of SimCity type games [3]. I haven’t played it yet but the page describing it is interesting.

Carlos Bueno’s article about the Mirrortocracy has some interesting insights into the flawed hiring culture of Silicon Valley [4].

Adam Bryant wrote an interesting article for NY Times about Google’s experiments with big data and hiring [5]. Among other things it seems that grades and test results have no correlation with job performance.

Jennifer Chesters from the University of Canberra wrote an insightful article about the results of Australian private schools [6]. Her research indicates that kids who go to private schools are more likely to complete year 12 and university but they don’t end up earning more.

Kiwix is an offline Wikipedia reader for Android; it needs 9.5G of storage space for the database [7].

Melanie Poole wrote an informative article for Mamamia about the evil World Congress of Families and their connections to the Australian government [8].

The BBC has a great interactive web site about how big space is [9].

The Raspberry Pi Spy has an interesting article about automating Minecraft with Python [10].

Wired has an interesting article about the Bittorrent Sync platform for distributing encrypted data [11]. It’s apparently like Dropbox but encrypted and decentralised. Also it supports applications on top of it which can offer social networking functions among other things.

ABC news has an interesting article about the failure to diagnose girls with Autism [12].

The AbbottsLies.com.au site catalogs the lies of Tony Abbott [13]. There’s a lot of work in keeping up with that.

Racialicious.com has an interesting article about “Moff’s Law” about discussion of media in which someone says “why do you have to analyze it” [14].

Paul Rosenberg wrote an insightful article about conservative racism in the US, it’s a must-read [15].

Salon has an interesting and amusing article about a photography project where 100 people were tased by their loved ones [16]. Watch the videos.

Planet DebianSteve Kemp: A diversion - The National Health Service

Today we have a little diversion to talk about the National Health Service. The NHS is the publicly funded healthcare system in the UK.

Actually there are four such services in the UK, only one of which has this name:

  • The National Health Service (England)
  • Health and Social Care in Northern Ireland
  • NHS Scotland
  • NHS Wales

In theory this doesn't matter: if you're in the UK and you break your leg, you get carried to a hospital and you get treated. There are differences in policies because different rules apply, but the basic promise of "free health care" applies in all locations.

(Differences? In Scotland you get eye-tests for free, in England you pay.)

My wife works as an accident & emergency doctor, and has recently changed jobs. Hearing her talk about her work is fascinating.

The hospitals she's worked in (Dundee, Perth, Kirkcaldy, Edinburgh, Livingston) are interesting places. During the week things are usually reasonably quiet, and during the weekend things get significantly more busy. (This might mean there are 20 doctors to hand, versus three at quieter times.)

Weekends are busy largely because people fall down hills, get drunk and fight, and are at home rather than at work - where 90% of accidents occur.

Of course even a "quiet" week can be busy, because folk will have heart-attacks round the clock, and somebody somewhere will always be playing with a power tool, a ladder, or both!

So what was the point of this post? Well, she's recently transferred to working for a children's hospital (still in A&E) and the patients are so very different.

I expected the injuries/patients she'd see to differ. Few 10-year-olds will arrive drunk (though it does happen), and few adults fall out of trees or eat washing machine detergent. But talking to her about her day when she returns home, it's fascinating how many things are completely different from what I expected.

Adults come to hospital mostly because they're sick, injured, or drunk.

Children come to hospital mostly because their parents are paranoid.

A child has a rash? Doctors are closed? Let's go to the emergency ward!

A child has fallen out of a tree and has a bruise, a lump, or complains of pain? Doctors are closed? Let's go to the emergency ward!

I've not kept statistics, though I wish I had, but it seems that she can go 3-5 days between seeing an actually injured or chronically sick child. It's the first-time parents who bring kids in when they don't need to.

Understandable, completely understandable, but at the same time I'm sure it is more than a little frustrating for all involved.

Finally one thing I've learned, which seems completely stupid, is the NHS-Scotland approach to recruitment. You apply for a role, such as "A&E doctor" and after an interview, etc, you get told "You've been accepted - you will now work in Glasgow".

In short you apply for a post, and then get told where it will be based afterward. There's no ability to say "I'd like to be a Doctor in city X - where I live", you apply, and get told where it is post-acceptance. If it is 100+ miles away you either choose to commute, or decline and go through the process again.

This has led to Kirsi working in hospitals within a radius of about 100km of the city we live in, and has meant she's had to turn down several posts.

And that is all I have to say about the NHS for the moment, except for the implicit pity for people who have to pay (inflated and life-changing) prices for things in other countries.

Planet DebianLucas Nussbaum: Debian trivia

After an intensive evening of brainstorming by the 5th floor cabal, I am happy to release the very first version of the Debian Trivia, modeled after the famous TCP/IP Drinking Game. Only the questions are listed here — maybe they should go (with the answers) into a package? Anyone willing to co-maintain? Any suggestions for additional questions?

  • What was the first release with an “and-a-half” release?
  • Where were the first two DebConfs held?
  • What are Debian releases named after? Why?
  • Give two names of girls that were originally part of the Debian Archive Kit (dak) and that are still actively used today.
  • Swirl on chin. Does it ring a bell?
  • What was Dunc Tank about? Who was the DPL at the time? Who were the release managers during Dunc Tank?
  • Cite 5 different valid values for a package’s urgency field. Are all of them different?
  • When was the Debian Maintainers status created?
  • What is the codename for experimental?
  • Order correctly: lenny, woody, etch, sarge.
  • Which one was the Dunc Tank release?
  • Name three locations where Debian machines are hosted.
  • What does the B in projectb stand for?
  • What is the official card game at DebConf?
  • Describe the Debian restricted-use logo.
  • One Debian release was frozen for more than a year. Which one?
  • Name the kernel version for sarge, etch, lenny, squeeze, wheezy. Bonus for etch-n-half!
  • What happened to Debian 1.0?
  • Which DebConfs were held in a Nordic country?
  • What does piuparts stand for?
  • Name the first Debian release.
  • Order correctly: hamm, bo, potato, slink.
  • What are most Debian project machines named after?

Planet DebianAlexander Wirt: cgit on alioth.debian.org

Recently I have been doing some work on the alioth infrastructure, like fixing or cleaning up things.

One of the more visible things I did was the switch from gitweb to cgit. cgit is a lot faster and looks better than gitweb.

The list of repositories is generated every hour. The move also has the nice effect that user repositories are available via the cgit index again.

I don’t plan to disable the old gitweb, but I created a bunch of redirect rules that - hopefully - redirect most use cases of gitweb to the equivalent cgit URL.

If I broke something, please tell me; if I missed a common use case, please tell me. You can usually reach me on #alioth@oftc or via mail (formorer@d.o).

People also asked me to upload my cgit package to Debian; the package is now waiting in NEW. Thanks to Nicolas Dandrimont (olasd) we also have a patch included that generates proper HTTP return codes if a repo doesn’t exist.

Planet Linux Australialinux.conf.au News: Big announcement about upcoming announcements...

The Papers Committee weekend went extremely well and without bloodshed according to Steve, although there was some very strong discussion from time to time! The upshot is that we have a fantastic program now, with excellent presentations all across the board.

We have already begun contacting Miniconf organisers to let them know who has been successful and who hasn’t, and over the next couple of weeks we will be sending emails out to everyone who submitted a presentation to let them know how they fared.

If you have been accepted to run a Miniconf then your contact will be Simon Lyall (miniconfs@lca2015.linux.org.au) and if you have been accepted as a speaker then your contact will be Lisa Sands (speakers@lca2015.linux.org.au). We will be asking for a photo of you and your twitter name, as we will be running a Speaker Feature about a different presenter each day - don’t worry - you will be notified on your day!

We want to give great thanks to everyone who submitted papers - you are all still winners in our eyes, and we hope that even if you weren’t selected this time that won’t put you off attending the conference and having a great time. Please note that due to the large volume of submissions, we are unable to provide feedback on why any particular submission was unsuccessful.

Our earlybird registration will be opening soon, so watch this space!

,

Planet DebianFrancois Marier: Outsourcing your webapp maintenance to Debian

Modern web applications are much more complicated than the simple Perl CGI scripts or PHP pages of the past. They usually start with a framework and include lots of external components both on the front-end and on the back-end.

Here's an example from the Node.js back-end of a real application:

$ npm list | wc -l
256

What if one of these 256 external components has a security vulnerability? How would you know, and what would you do if one of your direct dependencies had a hard-coded dependency on the vulnerable version? It's a real problem, and of course one way to avoid this is to write everything yourself. But that's neither realistic nor desirable.
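For what it's worth, npm can at least tell you whether, and through which chain of dependencies, a given module ends up in your tree; the module name below is just a placeholder:

$ npm ls some-vulnerable-module

That only helps once you already know which module to worry about, though, and knowing is exactly the hard part.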

However, it's not a new problem. It was solved years ago by Linux distributions for C and C++ applications. For some reason though, this lesson has not propagated to the web, where the standard approach seems to be to "statically link everything".

What if we could build on the work done by Debian maintainers and the security team?

Case study - the Libravatar project

As a way of discussing a different approach to the problem of dependency management in web applications, let me describe the decisions made by the Libravatar project.

Description

Libravatar is a federated and free software alternative to the Gravatar profile photo hosting site.

From a developer point of view, it's a fairly simple stack.

The service is split between the master node, where you create an account and upload your avatar, and a few mirrors, which serve the photos to third-party sites.

Like with Gravatar, sites wanting to display images don't have to worry about a complicated protocol. In a nutshell, all that a site needs to do is hash the user's email and add that hash to a base URL. Where the federation kicks in is that every email domain is able to specify a different base URL via an SRV record in DNS.

For example, francois@debian.org hashes to 7cc352a2907216992f0f16d2af50b070 and so the full URL is:

http://cdn.libravatar.org/avatar/7cc352a2907216992f0f16d2af50b070

whereas francois@fmarier.org hashes to 0110e86fdb31486c22dd381326d99de9 and the full URL is:

http://fmarier.org/avatar/0110e86fdb31486c22dd381326d99de9

due to the presence of an SRV record on fmarier.org.
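Since the hashes above are plain MD5 (as their 32-character length suggests), the whole lookup can be reproduced from a shell. Treat the _avatars._tcp record name as an assumption on my part about the Libravatar SRV convention, not something specified here:

$ echo -n "francois@debian.org" | md5sum
7cc352a2907216992f0f16d2af50b070  -
$ dig +short _avatars._tcp.fmarier.org SRV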

Ground rules

The main rules that the project follows are to:

  1. only use Python libraries that are in Debian
  2. use the versions present in the latest stable release (including backports)

Deployment using packages

In addition to these rules around dependencies, we decided to treat the application as if it were going to be uploaded to Debian:

  • It includes an "upstream" Makefile which minifies CSS and JavaScript, gzips them, and compiles PO files (i.e. a "build" step).
  • The Makefile includes a test target which runs the unit tests and some lint checks (pylint, pyflakes and pep8).
  • Debian packages are produced to encode the dependencies in the standard way as well as to run various setup commands in maintainer scripts and install cron jobs.
  • The project runs its own package repository using reprepro to easily distribute these custom packages.
  • In order to update the repository and the packages installed on servers that we control, we use fabric, which is basically a fancy way to run commands over ssh.
  • Mirrors can simply add our repository to their apt sources.list and upgrade Libravatar packages at the same time as their system packages (as sketched below).
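As an illustration, the mirror operator's side of that last step might be as small as the following; the repository URL, suite and package name are hypothetical, made up for this sketch:

# added to /etc/apt/sources.list on a mirror
deb http://apt.libravatar.example.org/ squeeze main

$ sudo apt-get update
$ sudo apt-get install libravatar-mirror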

Results

Overall, this approach has been quite successful and Libravatar has been a very low-maintenance service to run.

The ground rules have however limited our choice of libraries. For example, to talk to our queuing system, we had to use the raw Python bindings to the C Gearman library instead of being able to use a nice pythonic library which wasn't in Debian squeeze at the time.

There is of course always the possibility of packaging a missing library for Debian and maintaining a backport of it until the next Debian release. This wouldn't be a lot of work considering the fact that responsible bundling of a library would normally force you to follow its releases closely and keep any dependencies up to date, so you may as well share the result of that effort. But in the end, it turns out that there is a lot of Python stuff already in Debian and we haven't had to package anything new yet.

Another thing that was somewhat scary, due to the number of packages that were going to get bumped to a new major version, was the upgrade from squeeze to wheezy. It turned out however that it was surprisingly easy to upgrade to wheezy's version of Django, Apache and Postgres. It may be a problem next time, but all that means is that you have to set a day aside every 2 years to bring everything up to date.

Problems

The main problem we ran into is that we optimized for sysadmins and unfortunately made it harder for new developers to set up their environment. That's not very good from the point of view of welcoming new contributors, as there is quite a bit of friction in preparing and testing your first patch. That's why we're looking at encoding our setup instructions into a Vagrant script so that new contributors can get started quickly.

Another problem we faced is that because we use the Debian version of jQuery and minify our own JavaScript files in the build step of the Makefile, we were affected by the removal from that package of the minified version of jQuery. In our setup, there is no way to minify JavaScript files that are provided by other packages and so the only way to fix this would be to fork the package in our repository or (preferably) to work with the Debian maintainer and get it fixed globally in Debian.

One thing worth noting is that while the Django project is very good at issuing backwards-compatible fixes for security issues, sometimes there is no way around disabling broken features. In practice, this means that we cannot run unattended-upgrades on our main server in case something breaks. Instead, we make use of apticron to automatically receive email reminders for any outstanding package updates.
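For reference, apticron needs little more than a notification address; a minimal /etc/apticron/apticron.conf could contain just this line (the address is illustrative):

EMAIL="root@example.com"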

On that topic, it can occasionally take a while for security updates to be released in Debian, but this usually falls into one of two cases:

  1. You either notice because you're already tracking releases pretty well and therefore could help Debian with backporting of fixes and/or testing;
  2. or you don't notice because it has slipped through the cracks or there simply are too many potential things to keep track of, in which case the fact that it eventually gets fixed without your intervention is a huge improvement.

Finally, relying too much on Debian packaging does prevent Fedora users (a project that also makes use of Libravatar) from easily contributing mirrors. Though if we had a concrete offer, we would certainly look into creating the appropriate RPMs.

Is it realistic?

It turns out that I'm not the only one who thought about this approach, which has been named "debops". The same day that my talk was announced on the DebConf website, someone emailed me saying that he had instituted the exact same rules at his company, which operates a large Django-based web application in the US and Russia. It was pretty impressive to read about a real business coming to the same conclusions and using the same approach (i.e. system libraries, deployment packages) as Libravatar.

Regardless of this though, I think there is a class of applications that are particularly well-suited for the approach we've just described. If a web application is not your full-time job and you want to minimize the amount of work required to keep it running, then it's a good investment to restrict your options and leverage the work of the Debian community to simplify your maintenance burden.

The second criterion I would look at is framework maturity. Given the 2-3 year release cycle of stable distributions, this approach is more likely to work with a mature framework like Django. After all, you probably wouldn't compile Apache from source, but until recently building Node.js from source was the preferred option as it was changing so quickly.

While it goes against conventional wisdom, relying on system libraries is a sustainable approach you should at least consider in your next project. After all, there is a real cost in bundling and keeping up with external dependencies.

This blog post is based on a talk I gave at DebConf 14: slides, video.

LongNowWe are Walking Rocks: Friends of the Pleistocene Explore the Geologic Now

Geopoetry Smudge Studio

In The Life and Death of Buildings: On Photography and Time Joel Smith writes:

Imagine making a picture using film so insensitive to light – so slow, in photographic parlance – that to burn an image onto it required an exposure of twenty-five centuries. Geologically speaking, the blink of an eye. The picture from that negative would reveal a world made of stone, and stone only. It would be a world where plants and people, seasons and civilizations, had come and gone, quite untouched, and unbothered, by mankind. And yet, here it is, a world, unmistakably shaped by human hands.

Perhaps one of humanity’s greatest weaknesses is that our power of imagination tends to be dwarfed by our power of transformation. Twenty-five centuries ago, Rome was little more than a small town; Confucius had just resigned from his government post; Olmec society had slid into decline; and none of the languages we speak today had yet evolved. Entire civilizations rise and fall within the blink of a geologic eye – and whether as cause or consequence, we have a collective attention span to match.

We might be able to stretch our sense of “our time” a century or two into the past and future, but anything beyond that feels so far away that it dissolves into (seeming) irrelevance. As a result, we often don’t realize that our contemporary world is significantly shaped by the geological worlds that came before it; and that the fruits of our short-term pursuits can far outlast our own physical existence on Earth.

The Friends of the Pleistocene try to encourage this realization by spurring our temporal capacity for imagination. FOP, an interactive and creative research collaboration by Elizabeth Ellsworth and Jamie Kruse (the duo behind Smudge Studios), has a mission that is, essentially, to create the kinds of pictures Smith imagined. Through a variety of projects, they direct our focus to the Pleistocene traces that continue to reverberate through our contemporary world, and to the impact our culture makes on the ancient landscapes around us.

The geologic epoch of the Pleistocene is commonly dated from 2.58 million to 10,000 years BP (before the present). We know it as the time of glacial periods, mammoths, saber-toothed cats, and Neanderthals. It is also the period of humanity’s childhood: it’s during the Pleistocene that the genus Homo first learned to walk upright and manipulate its environment with stone tools. This epoch predates agriculture, or any notion of ‘civilization’ – yet its landscapes are still as much a part of our present world as they were to our early ancestors. As Kruse and Ellsworth explain,

The Pleistocene landscape literally shapes how we live today and affords the placement and design of many infrastructures in our contemporary lives, such as building highways along the spines of glacial moraines, as in the case of the Long Island Expressway, or the great views afforded by the now extinct Pleistocene Lake Bonneville’s shoreline “benches” where suburban houses perch in Draper, Utah. We also use Pleistocene lakebeds as testing grounds for weapons (such as for the Trinity test in 1945) or for recreation, like the Bonneville Salt Flats or Cape Cod’s beloved swimming holes – the kettle ponds.

And just as the Pleistocene continues to shape our world, so do we continue to make an impact on it. Kruse and Ellsworth recall making this realization during a residency at the Center for Land Use Interpretation (CLUI) in Utah:

… we came across their (CLUI’s) book on the Nevada Test Site. At that point, we had no idea that over 1000 nuclear bombs had been detonated in the United States. We began to realize that the tourist experience of the American West often overlooked the fact that just behind or underneath the stunning backdrops of iconic scenery were invisible vibrant human-made materials, and they were actively reshaping the landscapes we were moving through, at the very moment we were moving through them. The forceful actions of many of those materials were potent enough to continue this reshaping into deep geological futures. This led to future trips where we actually toured the NTS and ended up designing a project to visit the sites where underground testing had occurred outside the NTS. It was literally standing at these sites, in the present, that geologic time and the contemporary moment came vividly together for us.

Exploring such ancient sites across the United States and beyond, the Smudge duo harness art and design as useful tools to spur our imagination into time scales that dwarf a human lifetime. They produce photographic essays, narrative field guides, educational events, and speculative tools to help others explore the convergence of human and geologic processes – and they add not one, but two zeroes to their date notations!

Smudge’s projects include an examination of how the Pleistocene geology of the Great Lakes continues to influence processes of urbanization in the region; visualizing the ancient geological materials that constitute the man-made buildings of New York City; mapping the intersection of human and geologic processes in the American West; and representing the material origins of the energy that sustains our civilization.

Much of their work has examined the handling of nuclear waste – an issue that indelibly reminds us of how tied we are to the deep material processes of our world:

In realizing that the contamination can’t be moved from where it is, and will stay contaminated for tens of thousands of years in the future, it seems important to start developing capacities to think and design for larger timescales. … We humans have catalyzed a geologic impact around the globe, materially. These effects are much more than the nuclear, but nuclear materials are such a clear and potent example, they still exist at the root of our work and are why we veered in this direction so many years ago.

Their projects tend to incorporate an interactive approach: Ellsworth and Kruse try not only to visualize, but to encourage others to join in their imaginative processes:

Interactivity seems key to the ideas we’re working with – which focus on the importance of being able to experience the material reality and force of processes and movements that are either hard for humans to sense physically, or hard for humans to admit politically or emotionally.

In 02012, Kruse and Ellsworth published Making the Geologic Now, an edited collection of photographs and essays by more than forty contributors, including Rachel Sussman (photographer/author of The Oldest Living Things in the World and a Long Now SALT speaker) and Elizabeth Kolbert (The Sixth Extinction and Field Notes from a Catastrophe).

Making the Geologic Now documents and encourages what they have identified as a “turn toward the geologic,” a collection of “early sightings” of emergent social and cultural awareness of the deep geological parameters of our world. The book is meant to be generative rather than analytical or critical; in their introduction Kruse and Ellsworth describe the contributions as

places to think experimentally about what might become thinkable and possible if humans were to collectively take up the geologic as an instructive partner in designing thoughts, objects, systems, and experiences. The book provides an armature for framing responses to that idea.

Earlier this year Kruse and Ellsworth went to Norway for a work they called “Inhabiting Change” – it’s part of a larger collaboration that investigates the socio-geographic changes ahead for the Arctic Circle. Once seen as a remote forbidding place, it is now being transformed by the forces of capitalism, the pinch of dwindling resources, and a growing global population. As they expressed their goal for this project:

We intend to create dynamic tracings of the arrival of new futures of the North into widespread human + nonhuman cognizance. Works that result from Inhabiting Change may take the form of a series of linked multi-media dispatches. We also intend to compose a collaborative, human + nonhuman voice with multiple, moving points of view—while we live and make in the midst of the forces of change that currently are composing emerging futures north.

Ultimately, what Ellsworth and Kruse hope people take away from their work is new curiosities about and appreciations of their bare physical materiality – the chemical and physical fact that we are “walking rocks,” and that we live within the geologic, as a condition of our daily lives.

Making The Geologic Now can be purchased or downloaded from Smudge Studios. They also offer a place to contribute your own sightings of the geologic.

Images by FOP/Smudge Studios

Planet DebianJohn Goerzen: 2AM to Seattle

Monday morning, 1:45AM.

Laura and I walk into the boys’ room. We turn on the light. Nothing happens. (They’re sound sleepers.)

“Boys, it’s time to get up to go get on the train!”

Four eyes pop open. “Yay! Oh I’m so excited!”

And then, “Meow!” (They enjoy playing with their stuffed cats that Laura got them for Christmas.)

Before long, it was out the door to the train station. We even had time to stop at a donut shop along the way.

We climbed into our family bedroom (a sleeping car room on Amtrak specifically designed for families of four), and as the train started to move, the excitement of what was going on crept in. Yes, it’s 2:42AM, but these are two happy boys:

2014-08-04 02

Jacob and Oliver love trains, and this was the beginning of a 3-day train trip from Newton to Seattle that would take us through Kansas, Colorado, the Rocky Mountains of New Mexico, Arizona, Los Angeles, up the California coast, through the Cascades, and on to Seattle. Whew!

Here we are later that morning before breakfast:

IMG_3776

Here’s our train at a station stop in La Junta, CO:

IMG_3791

And at the beautiful small mountain town of Raton, NM:

IMG_3805

Some of the passing scenery in New Mexico:

IMG_3828

Through it all, we found many things to pass the time. I don’t think anybody was bored. I took the boys “exploring the train” several times — we’d walk from one end to the other and see what all was there. There was always the dining car for our meals, the lounge car for watching the passing scenery, and on the Coast Starlight, the Pacific Parlor Car.

Here we are getting ready for breakfast one morning.

IMG_3830

Getting to select meals and order in the “train restaurant” was a big deal for the boys.

IMG_3832

Laura brought one of her origami books, which even managed to pull the boys away from the passing scenery in the lounge car for quite some time.

IMG_3848

Origami is serious business:

IMG_3869

They had some fun wrapping themselves around my feet and challenging me to move. And were delighted when I could move even though they were trying to weight me down!

IMG_3880

Several games of Uno were played, but even those sometimes couldn’t compete with the passing scenery:

IMG_3898

The Coast Starlight features the Pacific Parlor Car, which was built over 50 years ago for the Santa Fe Hi-Level trains. They’ve been updated; the upper level is a lounge and small restaurant, and the lower level has been turned into a small theater. They show movies in there twice a day, but most of the time, the place is empty. A great place to go with little boys to run around and play games.

IMG_3896

The boys and I sort of invented a new game: roadrunner and coyote, loosely based on the old Looney Tunes cartoons. Jacob and Oliver would be roadrunners, running around and yelling “MEEP MEEP!” Meanwhile, I was the coyote, who would try to catch them — even briefly succeeding sometimes — but ultimately fail in some hilarious way. It burned a lot of energy.

And, of course, the parlor car was good for scenery-watching too:

IMG_3908

We were right along the Pacific Ocean for several hours – sometimes there would be a highway or a town between us and the beach, but usually there was nothing at all between us and the coast. It was beautiful to watch the jagged coastline go by, to gaze out onto the ocean, watching the birds — apparently so beautiful that I didn’t even think to take some photos.

Laura’s parents live in California, and took a connecting train. I had arranged for them to have a sleeping car room near ours, so for the last day of the trip, we had a group of 6. Here are the boys with their grandparents at lunch Wednesday:

2014-08-06 11

We stepped off the train in Seattle into beautiful King Street Station.

P8100197

Our first day in Seattle was a quiet day of not too much. Laura’s relatives live near Lake Washington, so we went out there to play. The boys enjoyed gathering black rocks along the shore.

IMG_3956

We went blackberry picking after that – filled up buckets for a cobbler.

The next day, we rode the Seattle Monorail. The boys have been talking about this for months — a kind of train they’ve never been on. That was the biggest thing in their minds that they were waiting for. They got to ride in the very front, by the operator.

P8080073

Nice view from up there.

P8080078

We walked through the Pike Market — I hadn’t been in such a large and crowded place like that since I was in Guadalajara:

P8080019

At the Seattle Aquarium, we all had a great time checking out all the exhibits. The “please touch” one was a particular hit.

P8080038

Walking underneath the salmon tank was fun too.

We spent a couple of days doing things closer to downtown. Laura’s cousin works at MOHAI, the Museum of History and Industry, so we spent a morning there. The boys particularly enjoyed the old periscope mounted to the top of the building, and the exhibit on chocolate (of course!)

P8100146

They love any kind of transportation, so of course we had to get a ride on the Seattle Streetcar that comes by MOHAI.

P8090094

All weekend long, we had been noticing the seaplanes taking off from Lake Washington and Lake Union (near MOHAI). So finally I decided to investigate, and one morning while Laura was doing things with her cousin, the boys and I took a short seaplane ride from one lake to another, and then rode every method of transportation we could except for ferries (we did that the next day). Here is our Kenmore Air plane:

P8100100

The view of Lake Washington from 1000 feet was beautiful:

P8100109

I think we got a better view than the Space Needle, and it probably cost about the same anyhow.

P8100117

After splashdown, we took the streetcar to a place where we could eat lunch right by the monorail tracks. Then we rode the monorail again. Then we caught a train (it went underground a bit so it was a “subway” to them!) and rode it a few blocks.

There is even scenery underground, it seems.

P8100151

We rode a bus back, and saved one last adventure for the next day: a ferry to Bainbridge Island.

2014-08-11 14

2014-08-11 16

Laura and I even got some time to ourselves to go have lunch at an amazing Greek restaurant to celebrate a year since we got engaged. It’s amazing to think that, by now, it’s only a few months until our wedding anniversary too!

There are many special memories of the weekend I could mention — visiting with Laura’s family, watching the boys play with her uncle’s pipe organ (it’s in his house!), watching the boys play with their grandparents, having all six of us on the train for a day, flying paper airplanes off the balcony, enjoying the cool breeze on the ferry and the beautiful mountains behind the lake. One of my favorites is waking up to high-pitched “Meow? Meow meow meow! Wake up, brother!” sorts of sounds. There was so much cat-play on the trip, and it was cute to hear. I have the feeling we won’t hear things like that much more.

So many times on the trip I heard, “Oh dad, I am so excited!” I never get tired of hearing that. And, of course, I was excited, too.

Planet Linux AustraliaMaxim Zakharov: Australian Singing Competition

The Finals concert of the 2014 Australian Singing Competition was an amazing experience, and it was the first time I listened to opera singers live.

Congratulations to the winner, Isabella Moore from New Zealand!

Isabella Moore

Planet DebianJoachim Breitner: DebConf 14

I’m writing this blog post on the plane from Portland towards Europe (which I now can!), using the remaining battery life after having watched one of the DebConf talks that I missed. (It was the systemd talk, which was good and interesting, but maybe I should have watched one of the power management talks, as my battery is running down faster than it should be, I believe.)

I mostly enjoyed this year’s DebConf. I must admit that I did not come very prepared: I had neither something urgent to hack on, nor important things to discuss with the other attendees, so in a way I had a slow start. I also felt a bit out of touch with the project, both personally and technically: in previous DebConfs, I had more interest in many different corners of the project, and also came with more naive enthusiasm. After more than 10 years in the project, I see a few things more realistically, am also more relaxed, and don’t react to “wouldn’t it be cool to have <crazy idea>” very easily any more. And these days I mostly focus on Haskell packaging (and related tooling, which sometimes is also relevant and useful to others), which is not very interesting to most others.

But in the end I did get to do some useful hacking, heard a few interesting talks and even got a bit excited: I created a new tool to schedule binNMUs for Haskell packages which is quite generic (configured by just a regular expression), so that it can and will be used by the OCaml team as well, and who knows who else will start using hash-based virtual ABI packages in the future... It runs via a cron job on people.debian.org to produce output for Haskell and for OCaml, based on data pulled via HTTP. If you are a Debian developer and want up-to-date results, log into wuiet.debian.org and run ~nomeata/binNMUs --sql; it then uses the projectb and wanna-build databases directly. Thanks to the ftp team for opening up incoming.debian.org, by the way!

Unsurprisingly, I also gave a talk on Haskell and Debian (slides available). I talked a bit too long and we had too little time for discussion, but in any case not all discussion would have fitted in 45 minutes. The question of which packages from Hackage should be added to Debian and which not is still undecided (which means we carry on packaging what we happen to want in Debian for whatever reason). I guess the better our tooling gets (see the next section), the more easily we can support more and more packages.

I am quite excited by and supportive of Enrico’s agenda to remove boilerplate data from the debian/ directories and to rely on autodebianization tools. We have such a tool for Haskell packages, cabal-debian, but it is unofficial, i.e. neither created by us nor fully endorsed. I want to change that, so I got in touch with the upstream maintainer and we want to get it into shape for producing perfect Debian packages, if the upstream-provided metadata is perfect. I’d like to see the Debian Haskell Group follow Enrico’s plan to its extreme conclusion, and this way drive innovation in Debian in general. We’ll see how that goes.

Besides all the technical program I enjoyed the obligatory games of Mao and Werewolves. I also got to dance! On Saturday night, I found a small but welcoming Swing-In-The-Park event where I could dance a few steps of Lindy Hop. And on Tuesday night, Vagrant Cascadian took us (well, three of us) to a blues dancing night, which I greatly enjoyed: The style was so improvisation-friendly that despite having missed the introduction and never having danced Blues before I could jump right in. And in contrast to social dances in Germany, where it is often announced that the girls are also invited to ask the boys, but then it is still mostly the boys who have to ask, here it took only half a minute of standing at the side until I got asked to dance. In retrospect I should have skipped the HP reception and gone there directly...

I’m not heading home at the moment, but will travel directly to Göteborg to attend ICFP 2014. I hope the (usually worse) west-to-east jet lag will not prevent me from enjoying that as much as I could.

Sociological ImagesSaturday Stat: New Orleans Weathers the Great Recession

Partly because New Orleans had just begun to recover from Hurricane Katrina when the Great Recession began, the city suffered less job loss relative to its pre-recession state, and its GDP actually grew 3.9% between 2008 and 2011. No other southern metropolitan area cracked 2% in the same period.

Charles Davidson, writing for EconSouth, offers the following evidence of New Orleans’ resilience in the face of the Great Recession. Chart 1 shows that it lost a smaller percentage of its jobs than the U.S. as a whole.

1

This is even more significant than it looks, as New Orleans had been in economic decline for decades before Katrina.  Davidson reports that “the economy in New Orleans has reversed decades of decline and outperformed the nation and other southern metropolitan areas.” Consider: the job growth in New Orleans shown in Chart 2 may not look impressive, but compare it to the extraordinary declines of its neighbors.

2

Thanks to greater diversification of its economy, record tourism, and rising investment money, the city may be setting itself up for a revival.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianGergely Nagy: Happy

For the past decade or so, I wasn't exactly happy. I struggled with low self-esteem, and likely bordered on depression at times. I disappointed friends, family and most of all, myself. There were times I not only disliked the person I was, but hated it. This wasn't healthy, nor forward-looking; I knew that all along, and that made the situation even worse. I tried to maintain a more enthusiastic mask, pretending that nothing was wrong. Being fully aware that there actually was nothing terribly wrong, while still feeling worthless, just added insult to injury.

In the past few years, things started to improve. I had a job, things to care about, things to feel passionate about, people around me who knew nothing about the shadows on my heart, yet still smiled, still supported me. But years of self-loathing do not disappear overnight.

Then one day, some six months ago, my world turned upside down. Years of disappointment, hate and loathing - poof, gone. Today, I'm happy. This is something I have not been able to tell myself in all honesty in this century yet (except maybe for very brief periods of time, when I was happy for someone else).

[Engaged]

A little over six months ago, I met someone, someone I could open up to. I still remember the first hour, where we talked about our own shortcomings and bad habits. At the end of the day, when She ordered me a crazy-pancake (a pancake with half a dozen random fillings), I felt happy. She is everything I could ever wish for, and more. She isn't just the woman I love, with whom I'll say the words in a couple of months. She's much more than a partner, a friend and a soul-mate combined in one person. She is my inspiration, my role model and my Guardian Angel too.

I no longer feel worthless, nor inadequate. I am disappointed with myself no more. I do not hate, I do not loathe, and past mistakes, past feelings seem so far away! I can share everything with Her, She does not judge, nor condemn: she supports and helps. With Her, I am happy. With Her, I am who I wanted myself to be. With Her, I am complete.

Thank You.

Geek FeminismWe’re going to need a bigger linkspam (29 August 2014)

Let’s talk about category structure and oppression | Etched with Soma’s Pen: Great analysis of category structure and oppression. (20 August)

Kimberly Bryant has levelled the digital playing field for black women | Marie Claire: “When Bryant, 47, signed Kai up for a summer program at Stanford University that teaches kids how to code, she discovered her daughter was the only African-American, and one of just a handful of girls, enrolled.” (26 August)

The abrasiveness trap: High-achieving men and women are described differently in reviews | Fortune: “Does gender play a role in the type of feedback an employee receives at review time? We had a linguist crunch the numbers.” (26 August)

Trolls drive Anita Sarkeesian out of her house to prove misogyny doesn’t exist | The Verge: “Since the project launched on Kickstarter way back in 2012, the gaming community has been treated to an incessant, deeply paranoid campaign against Tropes vs. Women generally and Sarkeesian personally.” (27 August)

Girl coder takes a leap on the Rails | The Age: “Karthika attended a coding workshop at Rails Girls a year ago and found herself drawn in by an atmosphere of conviviality and collaboration at odds with the solitary stereotype.” (25 August)

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianMatthew Palmer: Chromium tabs crashing and not rendering correctly?

If you’ve noticed your chrome/chromium on Linux having problems since you upgraded to somewhere around version 35/36, you’re not alone. Thankfully, it’s relatively easy to workaround. It will hit people who keep their browser open for a long time, or who have lots of tabs (or if you’re like me, and do both).

To tell if you’re suffering from this particular problem, crack open your ~/.xsession-errors file (or wherever your system logs stdout/stderr from programs running under X), and look for lines that look like this:

[22161:22185:0830/124533:ERROR:shared_memory_posix.cc(231)]
Creating shared memory in /dev/shm/.org.chromium.Chromium.gFTQSy
failed: Too many open files

And

[22161:22185:0830/124601:ERROR:host_shared_bitmap_manager.cc(122)]
Cannot create shared memory buffer

If you see those errors, congratulations! The rest of this blog post will be of use to you.

There’s probably a myriad of bugs open about this problem, but the one I found was #367037: Shared memory-related tab crash. It turns out there’s a file handle leak in the chromium codebase somewhere, relating to shared memory handling. There’s no fix available, but the workaround is quite simple: increase the number of files that processes are allowed to have open.

System-wide, you can do this by creating a file /etc/security/limits.d/local-nofile.conf, containing this line:

* - nofile 65535

You could also edit /etc/security/limits.conf to contain the same line, if you were so inclined. Note that this will only take effect the next time you log in, or perhaps even only when you restart X (or, at worst, your entire machine).

This doesn’t help if you’ve already got Chromium open and you’d like to stop it from crashing Right Now (perhaps restarting your machine would be a terrible hardship, causing you to lose your hard-won uptime record). In that case, you can use a magical tool called prlimit.

The prlimit syscall is available if you’re running a Linux 2.6.36 or later kernel, and running at least glibc 2.13. You’ll have a prlimit command line program if you’ve got util-linux 2.21 or later. If not, you can use the example source code in the prlimit(2) manpage, changing RLIMIT_CPU to RLIMIT_NOFILE, and then running it like this:

prlimit <PID> 65535 65535

The <PID> argument is taken from the first number in the log messages from .xsession-errors – in the example above, it’s 22161.
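For completeness: if you do have the util-linux prlimit program mentioned above, it takes the PID and the limits as options rather than positional arguments, so the equivalent invocation would be:

prlimit --pid 22161 --nofile=65535:65535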

And now, you can go back to using your tabs as ersatz bookmarks, like I do.

Planet DebianDirk Eddelbuettel: BH release 1.54.0-4

Another small new release of our BH package providing Boost headers for use by R is now on CRAN. This one brings a one-file change: the file any.hpp comprising the Boost.Any library --- as requested by a fellow package maintainer needing it for a pending upload to CRAN.

No other changes were made.

Changes in version 1.54.0-4 (2014-08-29)

  • Added Boost Any requested by Greg Jeffries for his nabo package

Courtesy of CRANberries, there is also a diffstat report for the most recent release.

Comments and suggestions are welcome via the mailing list or issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

CryptogramSquid Skin Inspires Eye-Like Photodetector

Squid are color-blind, but may detect color directly through their skin. A researcher is working on a system to detect colored light the way squid do.

TEDNeed help? Ask Aunt Bertha! Erine Gray helps people in need find social services in their area

_C0A4428

Most of us will, at some point, face a life crisis — divorce, job loss, illness, eviction. In the United States, 95% of social safety nets are provided by charity organizations and NGOs, so finding help in a crisis situation can be confusing and distressing. Erine Gray is the founder of Aunt Bertha, a free-to-use online platform that makes it easy for anyone in the US to find and apply for social services — anything from Medicare to food stamps to housing — just by typing in a ZIP code. Aunt Bertha serves people in all 50 states, with in-depth coverage in Texas, Colorado, Central Florida, and Richmond, Virginia. Starting this week, Aunt Bertha has added New York City to its in-depth coverage list. We took this moment to talk to Gray about how Aunt Bertha was born, how it works and how it’s shaping up to be a valuable tool not just for families and individuals in need but for policy makers, advocates and community workers as well.

Aunt Bertha started as a response to an illness in your own family. Can you tell us about your experience?

I grew up in a small town called Olean, New York, an hour south of Buffalo. When I was almost 17, in the summer of 1992, my mom, who worked as a janitor at the community college at the time, caught a rare disease called encephalitis. She needed to be rushed to Sayre, Pennsylvania, which was a four-hour drive. She flatlined twice on the way there, but made it to see a brain specialist. She went into a coma and survived, but she suffered brain damage. Her memory was essentially wiped out — everything after her childhood and the first few years of the birth of her first daughter was gone. She had no memory of me and my little sister.

She was released from the hospital three months later. It was me and my dad and my sister, just trying to figure out how to take care of her. Obviously you don’t get a certification for these types of things. Nobody is ever really prepared. She recovered, to some extent, but she suffered from seizures on a regular basis—they would sometimes knock her out for the day. My dad did the best he could to take care of her, and he did, for nine years. He did it alone for the most part. We didn’t know what services were available. And when we did find programs, it was difficult to get through the application process.

I went off to college, studied computer science, but ended up getting my degree from Indiana University in economics. I was working as a contractor in Austin, Texas, when I got a call from my dad. He needed help. My mother was getting older and started to have early-onset dementia. I flew up to New York and packed her things, and moved her to Texas, and became her legal guardian. So there I was—unprepared—trying to figure out how to navigate a system for somebody who needed help.

What kinds of services are available with people in this position?

Unfortunately there are not a lot of resources available for older adults with mental illness in the US. There are private care facilities, but these are financially unattainable for many. All too often, people either end up in the prison systems, homeless or, if they’re lucky—in a nursing home.

I went through a long process of looking for a nursing home, but many of them discriminated against people with signs of mental illness. If you think about it from their perspective, they don’t want people who might want to run away, or people who are difficult to deal with. We must have been rejected by 15 to 20 nursing homes. I had a social worker give me advice on how to find a place that would take her. She told me to dress up, wear a jacket and go meet the administrators in person. I’d be invited to submit an application—but the only response I would get would be very concise rejection letters that said, “We can’t meet your mother’s needs.” It seemed at the time to be a legal form of discrimination.

Erine Gray's mom.

Erine Gray says that the frustrating experience of trying to find social services for his mom helped him identify an incredible need: an easy-to-use searchable database of services in a specific area. Photo: Courtesy of Erine Gray

It was navigating this system for somebody who’s disabled that made me see how broken the system really is. So I went back to graduate school and got my masters in public policy from the LBJ School of Public Affairs here in Austin. I ended up working as a contractor for the state of Texas, essentially looking at improving the way people find out about social service programs like food stamps, the food subsidy program in the US, Medicaid, the US welfare program and how they apply for them. The company I worked for also ran a call center that helped people get enrolled into these programs.

During those four years, 2006 to 2010, there was a big economic downturn. Texas is the second largest state in the US—a huge, huge economy. Enrollment levels grew significantly, but the state didn’t have the capacity to deal with that much growth. So it was a challenge to figure out how to get everyone connected with what they needed. On most nights, my car was the last car in the parking lot. I’d analyze calls, and realized a lot of people were ringing just to say, “Hey, did you receive my application for food stamps?” or “I sent you a fax, can you confirm you got it?” We figured out pretty quickly that this information was stored in the system, so our team redesigned the menu, allowing much more self-service. This meant people in need could get answers in 30 seconds rather than having to wait on hold for 30 minutes.

We worked on several big projects like this that made things more efficient. The number of calls and the amount of time spent taking them went down. These efforts helped turn the project into an operation that could scale.

It was this work, as well as my family’s experience caring for my mother, that led to the idea for Aunt Bertha. I thought to myself, “Well, if we can visualize data for complex programs like the food stamps program, would more self-service options in social services be cheaper to implement and less frustrating for the person in need?” And that was the a-ha moment—the big idea.

So how does it work, from the perspective of the user? I’m assuming that searches are anonymous, first of all?

Yes. People search for all sorts of things—HIV testing services, support groups for survivors of incest—and we wanted our search to be completely anonymous.

Here’s how it works. For any ZIP code in the United States, you’ll see at least 200 listings. Some areas have more programs than others, but we are rapidly expanding. So say you’re in Austin. You type in a ZIP code, and in a couple of seconds, it’s pulling in all of the national programs, state programs, county programs, city programs, and then programs that cover just your neighborhood.

If you type in “food pantry,” it pulls in the food pantry programs, organized by how close they are to you. You can filter for other variables—say “seniors.” As you drill in, you get the hours and location, and so on. You can also search by eligibility: put in a family size—let’s say I have two kids under 5 and I make $700 a month. What comes up is the Texas Supplemental Nutrition Assistance Program—SNAP—what used to be called food stamps. I know, based on publicly available rules, that a family of this makeup would likely get somewhere around $458 a month in benefits.
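
To make the eligibility idea concrete, here is a toy sketch in Python (my illustration only, not Aunt Bertha’s actual code; the income-limit formula and all numbers are invented for the example):

# Toy illustration only -- not Aunt Bertha's code. Real SNAP rules are far
# more detailed; the numbers below are made up for this example.
def snap_screen(household_size, monthly_income_usd):
    """Crude screening rule: the income limit grows with household size."""
    income_limit = 1000 + 350 * (household_size - 1)  # hypothetical figures
    return monthly_income_usd <= income_limit

# A parent with two kids under 5 earning $700 a month passes this screen.
print(snap_screen(household_size=3, monthly_income_usd=700))  # True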

What do you need help with? A sampling of what people search for on Aunt Bertha. Image: Aunt Bertha

Or maybe you’re prescribed a drug—say Prozac. You’re uninsured and you don’t have the money for it. The search will bring up the Lilly Cares program: Eli Lilly, the manufacturer of Prozac, will give it to you for free if you apply.

With an integrated application form we built, the seeker can log into Aunt Bertha and fill out the agency’s form in just a few clicks. The system saves the information so that if you apply to anything else in the future, any redundant questions get filled in automatically. You can also upload any supporting documentation. A dashboard shows all the programs you’ve applied to, the application’s current status, and a history of interactions with that agency. This eases phone traffic. The agency can see the same application from their perspective, and process it from there.

One cool thing about our approach is that we kept rural America in mind. There are hundreds of organizations that serve these areas remotely, through call centers.

Is search data used for anything else?

A couple of months ago we launched a real-time analytics platform that allows policy makers and advocates to see what people are searching for, so that they can see the holes in services in their community. For a pilot we did in Richmond, Virginia, we mapped the concentration of searches. You can put in any date range you’d like to analyze. You can see the history of search growth in that region, but you can also see the searches by neighborhood. If I drill into a ZIP code I can see—anonymously—exactly what is being searched for. And it’s up-to-the-minute.

What’s really cool about this is that policy makers and city workers can say, “Hey, why is it that so many people search unsuccessfully for, say, assistance with ‘light bills’?” It could be we just don’t have that need addressed in our database. Or, as in this case, we didn’t anticipate the search term the seeker used. Once we pinpoint that, we can add it as a synonym to our taxonomy, and redirect people to what they need—in this case, utility assistance programs.
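
As a rough sketch of how such a synonym table might work (an illustration, not Aunt Bertha’s actual implementation):

# Sketch only: map colloquial search terms onto canonical taxonomy
# categories, so a search for "light bills" finds utility assistance.
SYNONYMS = {
    "light bills": "utility assistance",
    "electric bill help": "utility assistance",
    "food stamps": "food assistance",
}

def canonical_category(query):
    """Return the canonical category for a query, or the query unchanged."""
    return SYNONYMS.get(query.strip().lower(), query)

print(canonical_category("Light bills"))  # "utility assistance"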

But the sad thing is that sometimes there are just no services. That’s what we’re trying to solve. We make search data available to policy makers, researchers, universities and so on, so that they can better understand where the hurt is in their community. Imagine a world where you can see, in real time, that your neighbors don’t have enough money for tampons. This is a real search: every single search is somebody’s life. It’s somebody’s crisis moment.

But how can a policy maker respond to a need so specific and small, in real time?

Not very well, honestly. But they can aggregate the information and say, “Let’s give $500 a month to the local food pantry and ask them to stock sanitary supplies,” and then change the listing. The thing is, most cities and most foundations have full-time people dedicated to understanding the needs in their city. So this is just another tool to help them do a better job.

The Search Reporting Dashboard, which shows community leaders local search information in real time. Image: Aunt Bertha

How did you find and aggregate so much information? There are government programs, NGOs… what else?

That’s been our biggest investment, and what I worry about the most—making sure that the data stays current. Unfortunately, there’s no magic source for this data. There are little bits and pieces of open government data, and they’re of pretty poor quality. We start with the very basic information that is available—like IRS charity data—and then we build a series of automated jobs that check program websites to grab information like hours of operation, email addresses and phone numbers. Every now and then people will send us a list of programs they’ve put together.

The nice thing is, it’s not something that has to be done all at once. We just launched a feature that allows programs to claim their listing. And we have some other experiments we’re working on. The reality is that it’s going to take lots of different channels to keep the information current. There’s no silver bullet. We just roll up our sleeves and work through one program at a time.

It’s a complicated problem. Some people think we’re crazy for even trying. We may be, I don’t know.

How do you pick the states you’re working with?

Usually someone will reach out to us and start a conversation. We’ve spent a lot of time building a scalable way to collect the data for a geographic area. If someone reaches out from a rural area or a small city, our data entry team can usually collect that data pretty quickly — in days or weeks, not months. Our Chief Information Officer, Stu Scruggs, really has made it easy. So if anybody’s interested in bringing Aunt Bertha to their city, just reach out to us.

One organization that we’re particularly excited to be working with is called Critical Mass. Critical Mass is developing strategies to overcome the barriers facing young adults with cancer. We’re working on a really cool project that is making it easy for people diagnosed with cancer to find targeted help in seconds. We’re also working with great organizations like Heartland for Children, which is working to eliminate child abuse and neglect in Central Florida.

Our mission is to make human services information accessible to people. And we’re inspired by the innovative government agencies, cause organizations and direct-service non-profit organizations that are out there.

What’s your business model?

When I had the idea for Aunt Bertha, I was originally considering incorporating as a non-profit. I went to dinner with a local entrepreneur, Gary Hoover. He pointed out that our customers would be non-profits, foundations and government entities. His advice was that we really didn’t want to be in a position where we would be competing with our customers for donations. So he challenged me to come up with a scalable model that would keep the doors open.

So we have two products that we sell—essentially we digitize the application process and integrate it into Aunt Bertha—and agencies subscribe monthly to our service. You see, a lot of non-profits don’t have an easy way of accepting applications online, so often seekers just go stand in line rather than submit digitally. Our product makes it possible for seekers to quickly apply for services, and it’s customizable to each agency. So we launch in an area, and then we take a leap of faith that somewhere along the line, some of the agencies in that city or state will hire us to digitize their application process. That’s essentially our business model.

Hopefully we’ll soon have enough customers to cover our expenses and allow us to grow. In the interim, we’ve raised investment from 11 impact investors in the US who are interested in our project and what we’re doing. I don’t know that we’ll be the next Google. But if you can do a good job and love your work and make enough to survive, what else is there to ask for in life?

Reports showing aggregate program supply and aggregate program demand. Image: Aunt Bertha

Do you have any feedback from people that Aunt Bertha has helped?

We’re very protective of our seekers’ privacy and we don’t really ask for testimonials, but we’ve received comments from people who’ve used it for prescription drugs, finding housing and other things. Social workers tell us all the time how much it helps. An organization called Any Baby Can in Austin, which helps young mothers, uses our software on iPads to find services while visiting with clients. The feedback has been incredibly positive. People have reached out time and time again to say “Thank you for what you do.” They send Facebook messages. That’s the stuff that keeps us going.

You know you’re offering people relief on a daily basis. That’s got to be good.

We love our job. And we’re lucky to be able to survive doing what we love. We get great feedback from social workers who use our site. Just the other day, we received feedback from a neurologist in Colorado who helped a client find cancer resources for his mother. We heard that he had tears in his eyes when he realized there were awesome cancer organizations out there that could help her. Stories like that keep us going for weeks.

Oh, one more question—who’s Aunt Bertha?

Great question! It was sort of a code name that ended up sticking. Originally it was a play on Uncle Sam—Aunt Bertha picks up where Uncle Sam leaves off. We’ve all had tough times and need a helping hand every now and then; almost all of us will at some point in our lives. Everybody has a family member who is a little eccentric, who’s the loudest person in the room. They have good advice, and they tell you when you’re screwing up. I was sort of playing on that. But people remember the name, and that’s great for our mission.


CryptogramCell Phone Kill Switches Mandatory in California

California passed a kill-switch law, meaning that all cell phones sold in California must have the capability to be remotely turned off. It was sold as an antitheft measure. If the phone company could remotely render a cell phone inoperative, there would be less incentive to steal one.

I worry more about the side effects: once the feature is in place, it can be used by all sorts of people for all sorts of reasons.

The law raises concerns about how the switch might be used or abused, because it also provides law enforcement with the authority to use the feature to kill phones. And any feature accessible to consumers and law enforcement could be accessible to hackers, who might use it to randomly kill phones for kicks or revenge, or to perpetrators of crimes who might -- depending on how the kill switch is implemented -- be able to use it to prevent someone from calling for help.

"It's great for the consumer, but it invites a lot of mischief," says Hanni Fakhoury, staff attorney for the Electronic Frontier Foundation, which opposes the law. "You can imagine a domestic violence situation or a stalking context where someone kills [a victim's] phone and prevents them from calling the police or reporting abuse. It will not be a surprise when you see it being used this way."

I wrote about this in 2008, more generally:

The possibilities are endless, and very dangerous. Making this work involves building a nearly flawless hierarchical system of authority. That's a difficult security problem even in its simplest form. Distributing that system among a variety of different devices -- computers, phones, PDAs, cameras, recorders -- with different firmware and manufacturers, is even more difficult. Not to mention delegating different levels of authority to various agencies, enterprises, industries and individuals, and then enforcing the necessary safeguards.

Once we go down this path -- giving one device authority over other devices -- the security problems start piling up. Who has the authority to limit functionality of my devices, and how do they get that authority? What prevents them from abusing that power? Do I get the ability to override their limitations? In what circumstances, and how? Can they override my override?

The law only affects California, but phone manufacturers won't sell two different phones. So this means that all cell phones will eventually have this capability. And, of course, the procedural controls and limitations written into the California law don't apply elsewhere.

TEDA know-it-all gets another chance, JR quips with Stephen Colbert and a plan to give half the planet back to wildlife

Stephen Colbert snaps a photo with TED Prize winner JR on the set of The Colbert Report

Members of the TED community sure were busy this week. Below, some highlights:

Legendary biologist E.O. Wilson has come out with a bold idea: that we set aside half of Planet Earth for wildlife. If we don’t, he warns, he sees us hurtling toward a “biological holocaust” that could send us the way of the dinosaurs. (Watch his TED Prize talk: My wish: Build the Encyclopedia of Life.)

Artist JR, who launched more than 100,000 black-and-white posters with his TED Prize wish, appeared on The Colbert Report last night. The best moment? When Stephen Colbert said: “You like to hide your identity. Are you a criminal?” (Check out JR’s TED Prize talk: My wish: Use art to turn the world inside out)

Meanwhile, Ken Jennings has returned to the set of Jeopardy! for the Battle of the Decades. This week he played during 2000s week, and naturally mopped the floor with last night’s other contestants. (Watch Ken’s talk: Watson, Jeopardy! and me, the obsolete know-it-all)

At the Boston Review, Paul Bloom kicks off a debate on empathy. He’s against it. And as you might guess, this provokes strong responses from thinkers like Peter Singer and Simon Baron-Cohen. (Watch Paul’s TED Talk: Can prejudice ever be a good thing?)

Ellen Jorgensen’s DIY biohacking lab supports many fascinating citizen scientists, including Heather Dewey-Hagborg. Her portraits and sculptures based on DNA were featured on the series AHA: A House for Arts on WMHT, a PBS station in upstate New York. (Watch Ellen’s TED Talk: Biohacking—you can do it, too. And our video portrait of Heather: A DNA portrait from a single hair)

Captain Charles Moore, one of the first people to raise the alarm about the Great Pacific Garbage Patch, reports from his latest six-week cruise to the area, which is one of five (FIVE!) major garbage patches drifting in our oceans. (Watch his TED Talk: Seas of plastic. Bonus: Watch more talks from the inspired 2010 event TEDxGreatPacificGarbagePatch)

Christopher Ryan‘s idea—that we are all sexual omnivores—gets illustrated. (Watch Christopher’s TED Talk: Are we all sexual omnivores?)

Here’s a great Reddit AMA about the search for extraterrestrial life from Seth Shostak, who works for TED Prize winner Jill Tarter’s organization, the SETI Institute. (Watch Jill’s talk: Join the SETI search)

Anne Curzan explains in the Chronicle of Higher Education’s Lingua Franca blog why she asks her students not to use laptops in class. Her standard first-day spiel is both personal and data-driven—and, to her colleagues’ surprise, she gets almost no pushback from the class. (Watch Anne’s TED Talk: What makes a word ‘real?’)


Sociological Images“Tourist, Shame on You”: On Disaster Tourism

Flashback Friday.

When tourists returned to New Orleans after Hurricane Katrina, there was a new site to see: disaster.  Suddenly — in addition to going on a Ghost Tour, visiting the Backstreet Cultural Museum, and lunching at Dooky Chase’s — one could see the devastation heaped upon the Lower Ninth Ward.  Buses full of strangers with cameras were rumbling through the neighborhood as it tried to get back on its feet.

A sociology major at  Michigan State University, Kiara C., sent along this photograph of a homemade sign propped up in the Lower Ninth, shaming visitors for what sociologists call “disaster tourism.”

(Credit: Daniel Terdiman/CNET News.com; found here)

Disaster tourism is criticized for objectifying the suffering of others.  Imagine having lost loved ones and seen your house nearly destroyed. After a year out of town, you’re in your nastiest clothes, mucking sludge out of your house, fearful that the money will run out before you can get the house — the house your grandmother bought and passed down to you through your mother — put back together.

Imagine that — as you push a wheelbarrow out into the sunlight, blink as you adjust to the brightness, and push your hair off your forehead, leaving a smudge of toxic mud — a bus full of cameras flash at you, taking photographs of your trauma, effort, and fear.  And then they take that photo back to their cozy, dry home and show it to their friends, who ooh and aah about how cool it was that they got to see the aftermath of the flood.

The person who made this sign… this is what they may have been feeling.

Originally posted in 2011.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianSteve Kemp: Migration of services and hosts

Yesterday I carried out the upgrade of a Debian host from Squeeze to Wheezy for a friend. I like doing odd-jobs like this as they're generally painless, and when there are problems it is a fun learning experience.

I accidentally forgot to check on the status of the MySQL server on that particular host, which was a little embarrassing, but I later put together a reasonably thorough serverspec recipe to describe how the machine should be set up, which will avoid that problem in the future - Introduction/tutorial here.

The more I use serverspec the more I like it. My own personal servers have good rules now:

shelob ~/Repos/git.steve.org.uk/server/testing $ make
..
Finished in 1 minute 6.53 seconds
362 examples, 0 failures

Slow, but comprehensive.

In other news I've now migrated every single one of my personal mercurial repositories over to git. I didn't have a particular reason for doing that, but I've started using git more and more for collaboration with others and using two systems felt like an annoyance.

That means I no longer have to host two different kinds of repositories, and I can use the excellent gitbucket software on my git repository host.

Needless to say I wrote a policy for this host too:

#
#  The host should be wheezy.
#
describe command("lsb_release -d") do
  its(:stdout) { should match /wheezy/ }
end


#
# Our gitbucket instance should be running, under runit.
#
describe supervise('gitbucket') do
  its(:status) { should eq 'run' }
end

#
# nginx will proxy to our back-end
#
describe service('nginx') do
  it { should be_enabled   }
  it { should be_running   }
end
describe port(80) do
  it { should be_listening }
end

#
#  Host should resolve
#
describe host("git.steve.org.uk") do
  it { should be_resolvable.by('dns') }
end

Simple stuff, but being able to trigger all these kind of tests, on all my hosts, with one command, is very reassuring.

Don MartiDon't punch the monkey. Embrace the Badger.

One of the main reactions I get to Targeted Advertising Considered Harmful is: why are you always on about saving advertising? Advertising? Really?

Even when I do point out how non-targeted ads can be good for publishers and advertisers, the obvious question is, if I'm not an advertiser or publisher, why should I care? As a member of the audience, or a regular citizen, why does advertising matter? And what's all this about the thankless task of saving online advertising from itself? I didn't sign up for that.

The answer is: Because externalities.

Advertising can have positive externalities.

The biggest positive externality is ad-supported content that later becomes available for other uses. For example, short story readers today are benefitting from magazine ad budgets of the 19th-20th centuries.

Every time you binge-watch an old TV show, you're a positive externality winner, using a cultural good originally funded by advertising.

I agree with the people who want ad-supported content for free, or at a subsidized price. I'm not going to condemn all advertising as The Internet's Original Sin. I just think that we need to fix the bugs that make Internet advertising less valuable than ads in older media.

Advertising can have negative externalities.

On the negative side, the biggest externality is the identity theft and other fraud risk inherent in large databases of PII. (And it's all PII. Anonymization is bogus.) The costs of identity theft fall on the people whose information is compromised, not on the companies that chose to collect it.

In 20 years, people will look back at John Battelle's surveillance marketing fandom the way we now watch those 1950s industrial films that praise PCBs, or asbestos, or some other God-awful substance that we're still spending billions to clean up. PII is informational hazmat.

The French Task Force on Taxation of the Digital Economy suggests a unit charge per user monitored to address the dangers that uncontrolled practices regarding the use of these data are likely to raise for the protection of public freedoms. But although that kind of thing might fly in Europe, in the USA we have to use technology. And that's where regular people come in.

What you can do

Your choice to protect your privacy by blocking those creepy targeted ads that everyone hates is not a selfish one. You're helping to re-shape the economy. You're helping to move ad spending away from ads that target you, and have more negative externalities, and towards ads that are tied to content, and have more positive externalities. It's unlikely that Internet ads will ever be all positive, or all negative, but privacy-enabled users can shift the balance in a good way.

Don't punch the monkey. Embrace the Badger.

Sociological ImagesFrom Our Archives: Hurricane Katrina

August 29th is the anniversary of the day that Hurricane Katrina devastated the Gulf Coast and side-swiped New Orleans, breaching the levees.  These posts are from our archives:

Was Hurricane Katrina a “Natural” Disaster?

Racism and Neglect

Disaster and Discourse

Devastation and Rebuilding

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianJakub Wilk: More spell-checking

Have you ever wanted to use Lintian's spell-checker against arbitrary files? Now you can do it with spellintian:

$ zrun spellintian --picky /usr/share/doc/RFC/best-current-practice/rfc*
/tmp/0qgJD1Xa1Y-rfc1917.txt: amoung -> among
/tmp/kvZtN435CE-rfc3155.txt: transfered -> transferred
/tmp/o093khYE09-rfc3481.txt: unecessary -> unnecessary
/tmp/4P0ux2cZWK-rfc6365.txt: charater -> character

mwic (Misspelled Words In Context) takes a different approach. It uses classic spell-checking libraries (via Enchant), but it groups misspellings and shows them in their contexts. That way you can quickly filter out false-positives, which are very common in technical texts, using visual grep:

$ zrun mwic /usr/share/doc/debian/social-contract.txt.gz
DFSG:
| …an Free Software Guidelines (DFSG)
| …an Free Software Guidelines (DFSG) part of the
                                ^^^^

Perens:
|    Bruce Perens later removed the Debian-spe…
| by Bruce Perens, refined by the other Debian…
           ^^^^^^

Ean, Schuessler:
| community" was suggested by Ean Schuessler. This document was drafted
                              ^^^ ^^^^^^^^^^

GPL:
| The "GPL", "BSD", and "Artistic" lice…
       ^^^

contrib:
| created "contrib" and "non-free" areas in our…
           ^^^^^^^

CDs:
| their CDs. Thus, although non-free wor…
        ^^^

CryptogramISIS Threatens US with Terrorism

They're openly mocking our profiling.

But in several telephone conversations with a Reuters reporter over the past few months, Islamic State fighters had indicated that their leader, Iraqi Abu Bakr al-Baghdadi, had several surprises in store for the West.

They hinted that attacks on American interests or even U.S. soil were possible through sleeper cells in Europe and the United States.

"The West are idiots and fools. They think we are waiting for them to give us visas to go and attack them or that we will attack with our beards or even Islamic outfits," said one.

"They think they can distinguish us these days ­ they are fools and more than that they don't know we can play their game in intelligence. They infiltrated us with those who pretend to be Muslims and we have also penetrated them with those who look like them."

I am reminded of my debate on airport profiling with Sam Harris, particularly my initial response to his writings.

Planet Linux AustraliaDavid Rowe: SM1000 Part 4 – Killing a PCB and PTT Working

Last Sunday the ADC1 net on the first SM1000 prototype went open circuit all of a sudden. After messing about for a few hours I lifted the uC pin for that net and soldered a fine wire to the other end of the net. That lasted a few days then fell off. I then broke the uC pin trying to put it all back together. So then I tried to use some Chip Quik I had laying about from the Mesh Potato days to remove the uC. I royally screwed that up, breaking several pads.

It’s been 7 years since my last surface mount assembly project and it shows!

However when the uC came off the reason for the open circuit became apparent. The photo below was taken through the microscope I use for surface mount assembly:

At the top is the bottom part of a square pad that is part of the ADC1 net. The track is broken just before the lower left corner of the pad. Many of the pads under the uC were in various stages of decomposition, e.g. solder mask and tinning gone, down to bare copper. Turns out I used too much flux and it wasn’t cleaned out from under the chip when I washed the PCB. For the past few weeks it had been busy eating away at the PCB.

Oh well, one step back! So this week I built another SM1000, and today I brought it to life. After fixing a few small assembly bugs I debugged the “switches and leds” driver and sm1000_main.c, which means I now have PTT operation working. So it’s normally in receive mode, but press PTT and it swaps to tx mode. The sync, PTT, and error LEDs work too. Cool.

Here is a picture of prototype number 2:

The three trimmers along the bottom set the internal mic amp, and line levels to the “mic” and “speaker” ports of the radio. The pot on the RHS is the internal speaker volume control. The two switches upper RHS are PTT and power. On the left is a RJ45 for the audio connections to the radio and under the PCB (not visible) are a bunch of 3.5mm sockets that provide alternate audio connections to the radio.

What next? Well the speaker audio is a bit distorted at high volume so I might look into that and see if the LM386 is behaving as specified. Then hook it up to a real radio and test it over the air. That will shake down the interfaces some more and see if it’s affected by strong nearby RF. Oh, and I need to test USB and a few other minor interfaces.

I’m very happy with progress and we are on track to release the SM1000 in beta form commercially in late 2014.

Worse Than FailureError'd: Netflix is Smarter than You Are

"I never thought that The Princess Diaries and The Shining had anything in common, but who am I to argue with Netflix's magic algorithms?" Neal L. wrote.


"I clicked on the 'unsubscribe instantly' link in a third-party vendors email this morning," writes Mike G, "I got directed to a web page with the attached image as the content. I'm still not sure if I got off the mailing list or not."


Dominic M. wrote, "I found this at the subway gate on my way to work... I hope the trains aren't affected by this too!"


"When a comment is the only content, is it still a comment?," wonders William B.


"Yes. I'll take the 'clown car suite', please," writes Jim C.


"No, banggood.com, no matter how many decimal places you use, it's still wrong.," wrote Clark.


Steven B. writes, "Apparently, moonpig has access to some pretty freaky technology - including the ability to send my great, great grandmother some chocolates."


Rodney wrote, "While searching the Atkins website to learn where I could purchase their products near my home in New Jersey, I got a little lesson in geography."



Planet Linux AustraliaTridge on UAVs: Lidar landing with APM:Plane

Over the last couple of days I have been testing the Lidar-based auto-landing code that will be in the upcoming 3.1.1 release of APM:Plane. I'm delighted to say that it has gone very well!

Testing has been done on two planes - one is a Meridian sports plane with an OS46 Nitro motor. That has a tricycle undercarriage, so it has very easy ground steering. The tests today were with the larger VQ Porter 2.7m tail-dragger with a DLE-35 petrol motor. That has a lot of equipment on board for the CanberraUAV OBC entry, so it weighs 14kg at takeoff, making it a much more difficult plane to land well.

The Lidar is an SF/02 from LightWare, a really nice laser rangefinder that works nicely with Pixhawk. It has a 40m range, which is great for landing, allowing the plane plenty of time to lock onto the glide slope in the landing approach.

APM:Plane has supported these Lidars and other rangefinders for a while, but until now has not been able to use them for landing. Instead they were just being logged to the microSD card, but not actively used. After some very useful discussions with Paul Riseborough we now have the Lidar properly integrated into the landing code.

The test flights today were an auto-takeoff (with automatic ground steering), a quick auto-circuit then an automatic landing. The first landing went long as I'd forgotten to drop THR_MIN down to zero (I normally have it at 20% to ensure the engine stays at a reasonable level during auto flight). After fixing that we got a series of good auto flights.

The flight was flown with a 4 second flare time, which is probably a bit long as it allowed the plane to lose too much speed on the final part of the flare. That is why it bounces a bit as it starts losing height. I'll try with a bit shorter flare time tomorrow.

Here is the video of one of the Meridian flights yesterday. Sorry for missing part of the flight, the video was shot with a cell phone by a friend at the field.

Here is another video of the Porter flying today, taken from the north of the runway.

I'd like to thank Charles Wannop from Flying Finish Productions for the video of the Porter today with help from Darrell Burkey.

Planet DebianAntti-Juhani Kaijanaho: Licentiate Thesis is now publicly available

My recently accepted Licentiate Thesis, which I posted about a couple of days ago, is now available in JyX.

Here is the abstract again for reference:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

Planet DebianDaniel Pocock: Welcoming libphonenumber to Debian and Ubuntu

Google's libphonenumber is a universal library for parsing, validating, identifying and formatting phone numbers. It works quite well for numbers from just about anywhere. Here is a Java code sample (C++ and JavaScript also supported) from their web site:


import com.google.i18n.phonenumbers.NumberParseException;
import com.google.i18n.phonenumbers.PhoneNumberUtil;
import com.google.i18n.phonenumbers.PhoneNumberUtil.PhoneNumberFormat;
import com.google.i18n.phonenumbers.Phonenumber.PhoneNumber;

String swissNumberStr = "044 668 18 00";
PhoneNumberUtil phoneUtil = PhoneNumberUtil.getInstance();
PhoneNumber swissNumberProto = null; // declared outside try so it stays in scope below
try {
  swissNumberProto = phoneUtil.parse(swissNumberStr, "CH");
} catch (NumberParseException e) {
  System.err.println("NumberParseException was thrown: " + e.toString());
}
boolean isValid = phoneUtil.isValidNumber(swissNumberProto); // returns true
// Produces "+41 44 668 18 00"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.INTERNATIONAL));
// Produces "044 668 18 00"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.NATIONAL));
// Produces "+41446681800"
System.out.println(phoneUtil.format(swissNumberProto, PhoneNumberFormat.E164));

This is particularly useful for anybody working with international phone numbers, a common requirement in the world of VoIP, where people mix and match phones and hosted PBXes in different countries and all their numbers have to be normalized.
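
For Python users there is also a community-maintained port, the phonenumbers module on PyPI (separate from the Debian packages described below); a minimal normalization sketch using it:

import phonenumbers

def normalize(raw_numbers, default_region="CH"):
    """Return the E.164 form of each valid number, skipping the rest."""
    result = []
    for raw in raw_numbers:
        try:
            number = phonenumbers.parse(raw, default_region)
        except phonenumbers.NumberParseException:
            continue
        if phonenumbers.is_valid_number(number):
            result.append(phonenumbers.format_number(
                number, phonenumbers.PhoneNumberFormat.E164))
    return result

print(normalize(["044 668 18 00", "not a number"]))  # ['+41446681800']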

About the packages

The new libphonenumber package provides support for C++ and Java users. Upstream also supports JavaScript but that hasn't been packaged yet.

Using libphonenumber from Evolution and other software

Lumicall, the secure SIP/ZRTP client for Android, has had libphonenumber from the beginning. It is essential when converting dialed numbers into E.164 format to make ENUM queries and it is also helpful to normalize all the numbers before passing them to VoIP gateways.

Debian includes the GNOME Evolution suite and it will use libphonenumber to improve handling of phone numbers in contact records if enabled at compile time. Fredrik has submitted a patch for that in Debian.

Many more applications can potentially benefit from this too. libphonenumber is released under an Apache license so it is compatible with the Mozilla license and suitable for use in Thunderbird plugins.

Improving libphonenumber

It is hard to keep up with the changes in dialing codes around the world. Phone companies and sometimes even whole countries come and go from time to time. Numbering plans change to add extra digits. New prefixes are created for new mobile networks. libphonenumber contains metadata for all the countries and telephone numbers that the authors are aware of but they also welcome feedback through their mailing list for anything that is not quite right.

Now that libphonenumber is available as a package, it may be helpful for somebody to try and find a way to split the metadata from the code so that metadata changes could be distributed through the stable updates catalog along with other volatile packages such as anti-virus patterns.

Planet Linux AustraliaGary Pendergast: The Next Adventure

Over my past few years at Automattic, I’ve worked on a bunch of different teams and projects – VideoPress, the WordPress iOS app, various Social projects, and most recently, o2. I even took a few months to work on WordPress core, helping build the auto-update functionality that we now see rolling out security updates within hours of their release.

The few months I spent working on WordPress core made me realise something – there’s a lot more I have to contribute. So, with the WordPress 4.0 RC out the door, I’m super excited to be moving to my next project – working on WordPress core full time!

Automattic naturally puts a lot of people-hours into WordPress, with over 30 of us contributing to WordPress 3.9. I’m looking forward to being a bigger part of that, and giving more back to the WordPress community!

Planet DebianRobert Collins: Test processes as servers

Since its very early days subunit has had a single model – you run a process, it outputs test results. This works great, except when it doesn’t.

On the up side, you have a one way pipeline – there’s no interactivity needed, which makes it very very easy to write a subunit backend that e.g. testr can use.

On the downside, there’s no interactivity, which means that any time you want to do something with those tests, a new process is needed – and that’s sometimes quite expensive – particularly in test suites with tens of thousands of tests. Now, for use in the development edit-execute loop, this is arguably ok, because one needs to load the new tests into memory anyway; but wouldn’t it be nice if tools like testr that run tests for you didn’t have to decide upfront exactly how they were going to run them? If instead they could get things running straight away and then give progressively larger and larger units of work to be run, without forcing a new process (and thus new discovery, directory walking and importing)? Secondly, testr has an inconsistent interface – if testr is routing a user’s debugger interaction through to child workers in a chain, it needs to use something structured (e.g. subunit) and route stdin to the actual worker, but the final testr needs to unwrap everything – this is needlessly complex. Lastly, for some languages at least, it’s possible to dynamically pick up new code at runtime – so with a simple inotify loop we could avoid new-process (and more importantly complete-enumeration) overhead *entirely*, leading to very fast edit-test cycles.

So, in this blog post I’m really running this idea up the flagpole, and trying to sketch out the interface – and hopefully get feedback on it.

Taking subunit.run as an example process to do this to:

  1. There should be an option to change from one-shot to server mode
  2. In server mode, it will listen for commands somewhere (let’s say stdin)
  3. On startup it might eagerly load the available tests
  4. One command would be list-tests – which would enumerate all the tests to its output channel (which is stdout today – so lets stay with that for now)
  5. Another would be run-tests, which would take a set of test ids, and then filter-and-run just those ids from the available tests, output, as it does today, going to stdout. Passing somewhat large sets of test ids in may be desirable, because some test runners perform fixture optimisations (e.g. bringing up DB servers or web servers) and test-at-a-time is pretty much worst case for that sort of environment.
  6. Another would be std-in – a command providing a packet of stdin, used for interacting with debuggers

So that seems pretty approachable to me – we don’t even need an async loop in there, as long as we’re willing to patch select etc (for the stdin handling in some environments like Twisted). If we don’t want to monkey patch like that, we’ll need to make stdin a socketpair, and have an event loop running to shepherd bytes from the real stdin to the one we let the rest of Python have.
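
To make the proposed interface concrete, here is a rough Python sketch of such a server loop. The command names and plain-text framing are hypothetical; this is the idea being floated, not an existing subunit.run mode:

# Hypothetical sketch of the proposed server mode; not part of subunit today.
import sys
import unittest

def flatten(suite):
    """Yield individual TestCase objects from a possibly nested TestSuite."""
    for item in suite:
        if isinstance(item, unittest.TestSuite):
            for test in flatten(item):
                yield test
        else:
            yield item

def serve(suite):
    tests = list(flatten(suite))  # eager load once, at startup
    for line in sys.stdin:
        command, _, argument = line.strip().partition(" ")
        if command == "list-tests":
            for test in tests:
                print(test.id())
        elif command == "run-tests":
            wanted = set(argument.split())
            batch = unittest.TestSuite(t for t in tests if t.id() in wanted)
            # A real server would emit subunit here rather than plain text.
            unittest.TextTestRunner(stream=sys.stdout).run(batch)
        elif command == "quit":
            return

if __name__ == "__main__":
    serve(unittest.defaultTestLoader.discover("."))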

What about that nirvana above? If we assume inotify support, then list_tests (and run_tests) can just consult a changed-file list and reload those modules before continuing. Reloading them just-in-time would be likely to create havoc – I think reloading only when synchronised with test completion makes a great deal of sense.

Would such a test server make sense in other languages?  What about e.g. testtools.run vs subunit.run – such a server wouldn’t want to use subunit, but perhaps a regular CLI UI would be nice…



Planet DebianRuss Allbery: git-pbuilder 1.33

git-pbuilder is a script to integrate git-buildpackage with pbuilder or cowbuilder. My long-term hope is to rewrite it in Python and just incorporate it into git-buildpackage's source base directly, but in the meantime I've been maintaining it as a separate script.

This new release is a relatively minor bug fix. git-pbuilder has always removed the *_source.changes file built as part of the pbuilder process, since this file is normally useless. It's used to generate the source package to move into the chroot, but then the build in the chroot normally regenerates the Debian source package. The *_source.changes file hangs around with invalid checksums and just confuses matters, so git-pbuilder has always deleted it.

However, Debian is now increasing support for source-only uploads, which means that source-only builds might now be interesting. One can do a source-only build with gbp buildpackage -S. But that also generates a *_source.changes file, one that's actually useful, and git-pbuilder was deleting that as well. This release, thanks to work by Guido Günther, refrains from deleting this file when doing a source-only build.

You can get the latest release of git-pbuilder from my scripts distribution page.

TEDHow translation amplifies ideas: TED speakers show appreciation

Shortly after model Geena Rocero gave the TED Talk “Why I must come out,” she was Skyping with an LGBTQ activist in Hong Kong. This activist mentioned how powerful it would be for Chinese speakers to be able to watch the talk.

Rocero logged on to TED.com to find out how she could get her talk translated. To her surprise, she found that Chinese subtitles were already live—along with subtitles in Hebrew, Romanian, Thai, Vietnamese and Spanish.

Today, Rocero’s talk is available in 28 languages. And as she travels the world speaking about LGBTQ issues with her organization Gender Proud, she sees the impact of that. “The places I’m going, there’s either no law at all about how you can change your name and gender marker on your legal documents, or a lot of steps before you can do it,” says Rocero. “People are becoming aware of the law that exists in the United States. Suddenly, they’re asking, ‘How come I can’t have that right?’ People are realizing that they can demand these rights.”

This is the point of TED’s Open Translation Project, a global volunteer effort that enables the amplification of ideas across languages and borders. When a talk goes live on TED.com, translators around the world have an open invitation to subtitle it in their language. Over time, OTP volunteers subtitle each talk in more and more languages.

Speakers are taking notice and reaching out to thank translation volunteers for their efforts. Rocero, for example, took to Facebook to publicly thank the OTP network for amplifying her idea. Meanwhile, TEDxUNLV speaker Cortney Warren (watch her talk “Honest liars—the psychology of self-deception”) was so thrilled to find out that her talk was being translated by volunteer Adrienne Lin that she sent her a copy of her book and offered to do the same for anyone else who worked on the talk. “That’s such a generous service,” Warren said.

Repeat TED speaker Mikko Hypponen also sends personal thank-yous to his translators. He even translated two of his talks—“Fighting viruses, defending the net” and “Three types of online attacks”—into Finnish. “These translations made my talks accessible to a group of people that would otherwise miss them completely,” he says. “For example, my father has never studied English and wouldn’t be able to follow my talks.”

For his third talk, “How the NSA betrayed the world’s trust,” Hypponen reviewed the work of OTP volunteer Sami Andberg. He says it was quick and easy, with big benefits. “You can make sure the translation gets everything just right,” Hypponen says. “All speakers should do this for the languages they are fluent in.”

Keren Elazari reviewed the Hebrew translation of her talk, “Hackers: the Internet’s immune system,” created by volunteer Shir Ben Asher. “There are so many terms and idiosyncrasies that come from the hacker community—I knew it might be tricky to translate words like ‘hacktivism,’” says Elazari. “[Asher] did all the hard work—I tweaked a few words and messed up the time code. I didn’t realize how meticulous it is to make sure the timing is right.”

Reviewing her talk led Elazari to pay more attention to who is translating it, and into what languages. She is especially thrilled to see it subtitled in Persian, she says: “It’s cool to know that there are people in Iran listening to a talk from an Israeli.”

Lately, Elazari has gotten feedback on her talk from many corners of the globe. “I just got back from DEF CON, a big hacker conference,” she says. “I had people come up to me from Korea, Greece, Colombia. It’s one of the fantastic qualities of TED—that an idea can reach a global audience. It’s not just that it’s online and accessible, but that it’s accessible in other languages too.”

When Martin Villeneuve gave the talk “How I made an impossible film” at TED2013, he became the first TED speaker from Quebec, Canada, where French is the official language but English is widely spoken. He jumped at the chance to translate his talk into French Canadian.

“I had hoped that French Canadian subtitles would be available within a few days of the release of my talk, but at least 10 other languages came first!” says Villeneuve. “That’s when I decided to translate my own talk.”

It turned out to be a fascinating undertaking. “English is my second language—I think first and foremost in French,” he says. “So I translated it the way I would’ve done the talk in my mother tongue, while respecting the spirit of the English version.”

The experience gave Villeneuve a deeper appreciation for the work each translator does to make ideas reverberate widely. “I feel the resonance,” he says. “I receive emails from people all around the globe, who found in my talk the inspiration to undertake their own impossible projects—not only in cinema but from education to engineering. That’s really the best compliment.”


Planet DebianBernhard R. Link: Where key expiry dates are useful and where they are not.

Some recent blog posts (here and here) suggest short key expiry times.

They also highlight something many people forget: the expiry time of a key can be changed at any time with just a new self-signature. In particular, this can be done retroactively (you cannot avoid that if you allow changing it at all: nothing would stop an attacker from simply changing the clock on one of his computers).

(By the way: did you know you can also reduce the validity time of a key? If you look at the resulting packets in your key, this is simply a revocation packet of the previous self-signature followed by a new self-signature with a shorter expiration date.)

In my eyes that fact has a very simple consequence: An expiry date on your gpg main key is almost totally worthless.

If you for example lose your private key and have no revocation certificate for it, then an expiry time will not help at all: once someone else gets the private key (for example by brute-forcing it as computers get faster over the years, or because they somehow obtained a backup and brute-forced its pass-phrase), they can just extend the expiry date and make it look like the key is still valid. (And if they do not have the private key, there is nothing they can do anyway.)

There is one place where expiration dates make much more sense, though: subkeys.

As the expiration date of a subkey is part of the signature of that subkey with the main key, someone having access to only the subkey cannot change the date.

This also makes it feasible to use new subkeys over time, as you can let the previous subkey expire and use a new one. And only someone holding the private main key (hopefully you) can extend a subkey's validity (or sign a new one).
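
For illustration, extending a subkey's validity from the machine that holds the private main key could look like this (a sketch: the fingerprints are placeholders, and gpg's --quick-set-expire shortcut only exists in newer GnuPG releases; older versions need the interactive --edit-key flow):

import subprocess

# Placeholder fingerprints -- substitute your own key's values.
PRIMARY_FPR = "0123456789ABCDEF0123456789ABCDEF01234567"
SUBKEY_FPR = "89ABCDEF0123456789ABCDEF0123456789ABCDEF"

# Extend the subkey's validity by one year. This needs the private main
# key, so it belongs on the (ideally off-line) machine that holds it.
subprocess.check_call(
    ["gpg", "--quick-set-expire", PRIMARY_FPR, "1y", SUBKEY_FPR])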

(I generally suggest always having a signing subkey and never using the main key except off-line, to sign subkeys or other keys. The fact that it can sign other keys makes the main key too precious to operate on-line (even if it is on some smartcard, the reader cannot show you what you are actually signing).)

Planet DebianGunnar Wolf: Ongoing crypto handling discussions

I love to see that there are a lot of crypto discussions going on at DebConf. Maybe I’m skewed by my role as keyring-maint, but I have been involved in more than one discussion every day: on what signatures do and should mean, on best key handling practices, on ideas to make key maintenance better, on how the OpenPGPv4 format lays out a key and its components on disk, all that. I enjoy that some of those discussions pose questions that leave me thinking, as I am quite far from having all the answers.

Discussions should be had face to face, but some start online and deserve to be answered online (and also offer an opportunity to become documentation). Simon Josefsson blogs about The case for short OpenPGP key validity periods. This will be an important issue to tackle, as we will soon require keys in the Debian keyring to have a set expiration date (surprise, surprise!), and I agree with Simon: setting an expiration date far in the future means very little.

There is a caveat with using, as he suggests, very short expiry periods: we have a human factor sitting in the middle. Keyring updates in Debian are done approximately once a month, and I do not see the period shortening. That means that only once a month do we (currently Jonathan McDowell and myself, and we expect to add Daniel Kahn Gillmor soon) take the full changeset and compile a new keyring that replaces the active one in Debian.

This means that if you have, as Simon suggests, a 100-day validity key, you have to remember to update it at least every 70 days, or you might be locked out during the days it takes us to process it.

I set my expiration period to two years, although I might shorten it to only one. I expect to add checks and notifications before we enable this requirement project-wide (so that Debian servers will mail you when your key is close to expiry); I think that mail can be sent at approximately [expiry date - 90 days] to give both you and us time to act. Probably the optimal expiration periods under such conditions would be between 180 and 365 days.
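
A minimal sketch of such an expiry check (my own illustration, not the actual Debian infrastructure; it assumes the machine-readable colon format of gpg, where the seventh field of a pub line is the expiration timestamp):

import datetime
import subprocess

WARN_DAYS = 90  # notify roughly 90 days before a key expires

output = subprocess.check_output(
    ["gpg", "--list-keys", "--with-colons"]).decode("utf-8")
now = datetime.datetime.utcnow()
for line in output.splitlines():
    fields = line.split(":")
    # pub lines: field 5 is the key id, field 7 the expiration timestamp.
    if fields[0] == "pub" and fields[6]:
        expires = datetime.datetime.utcfromtimestamp(int(fields[6]))
        if expires - now < datetime.timedelta(days=WARN_DAYS):
            print("key %s expires %s -- notify the owner"
                  % (fields[4], expires.date()))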

But, yes, this is by no means a final ruling, just a point in the discussion. We still have some days of DebConf left, and I’ll enjoy revisiting this point. And Simon, even if we correct some bits for these details, I’d like to have your permission to use this fine blog post as part of our documentation!

(And on completely unrelated news: Congratulations to our dear and very much missed friend Bubulle for completely losing his sanity and running for 28 and a half hours straight! He briefly describes this adventure when it was about to start, and we all want him to tell us how it was. Mr. Running French Guy, you are amazing!)

TEDMission Blue chronicles the life, loves and calling of ocean champion Sylvia Earle

[Embedded video: http://www.youtube.com/embed/B1wp2MQCsfQ]

Sylvia Earle is a fearless 78-year-old woman. In the new documentary Mission Blue, we watch her dive with sharks in the deep blue sea and dodge fishing nets as she swims through the middle of a major fishing operation. The film offers a bold new view of the famed oceanographer, whose relentless pursuit of saving the ocean takes her from the mythical expanse of Australia’s Great Barrier Reef to the swirling schools of the Chesapeake Bay menhaden fishery to the bustling fish markets of Tokyo.

Mission Blue is now available on Netflix. Watch it here »

Directed by Fisher Stevens (The Cove) and Bob Nixon (Gorillas in the Mist), Mission Blue serves up visually stunning underwater footage. But beyond that, it weaves an inspiring storyline that focuses on Earle herself. Mission Blue could have easily been a documentary about the devastation of the ocean, but Stevens and Nixon felt wary of making a film full of scientific data and talking heads for an audience not already enthusiastic about ocean conservation — and likely feeling compassion fatigue. So Stevens and Nixon tossed out their first script, which focused on Earle’s concept of “hope spots,” underwater areas so critical to the health of the ocean that they need to be protected by law. Instead, the filmmakers turned their focus on the legendary eco-activist herself.

With icon status as a National Geographic Explorer-in-Residence, Earle was a female biologist in “a time of bearded scientists,” one whose ongoing efforts to save the ocean have been recognized by presidents Bill Clinton, George W. Bush and Barack Obama. Meanwhile, her sweeping romantic history offers human interest. “That’s what makes the movie work, at least for me; you get emotionally attached to Sylvia and you see the ocean through her eyes,” says Stevens, who met Earle on a TED-led expedition to the Galapagos Islands after she was awarded the TED Prize in 2009. “I spent one week with her and I was hooked. After the Galapagos trip, I really didn’t want to leave Sylvia’s world, so I didn’t.”

The film opens with Earle and a team of scientists surveying whale sharks off the coast of Louisiana, about 60 miles from the site of the 2010 Gulf of Mexico oil spill, the worst in US history. In her scuba gear, which is like a second skin for her, Earle plunges into the water and swims alongside these majestic creatures, which can be up to 40 feet in length. She makes no secret of her love of sharks and dispels the widely accepted belief that we, as human beings, are on their lunch menu. “They’ve been living here for millions of years. We’re newcomers in their backyard,” she says. “I love being a part of their world. They’re completely innocent of anything humans do.” By innocent she means, for example, that they haven’t had a role in building the more than 33,000 oil drill sites in the Gulf, even though their habitats have been adversely affected by them.

Earle has no problem being in the water for 12 hours at a time; the ocean is as much a comfort zone for her as land is for the rest of us, and it’s heartbreaking for her to have witnessed its decline. “Sixty years ago, when I began exploring the ocean, no one imagined that we could do anything to harm it,” she says. “But now we’re facing paradise lost.”

As Earle sees the narrative of the ocean, human beings are in some ways the bad guys. The film takes a hard look at how our global appetite for seafood has brought many species to the edge of extinction. A heart-stopping moment comes toward the end of the film, when Earle returns to a location 100 miles into the Coral Sea, which she visited decades before and remembers for its vibrant array of ocean wildlife. Only on this dive, there are barely any fish; only coral reef ruins. It looks like a graveyard.

Mission Blue escorts us through Earle’s youth – which opened up when her family moved from New Jersey to Florida when she was 12 years old. Archival footage of Earle swimming in the Gulf as a young woman, and as an up-and-coming scientist diving on early expeditions, lends a nostalgic twist and reaffirms the sense that Earle is doing the work she was born to do. “As a kid, I had complete freedom. To spend all day out [in nature] just fooling around on my own,” she says. She became “entranced by the idea of submarines” after reading the book Half Mile Down by marine biologist William Beebe, whom she regarded “as a soul mate.” She was also inspired by Jacques Cousteau. “His silent world made me want to see what he saw,” she says. “To meet fish swimming in something other than butter and lemon slices on a plate.”

Sylvia Earle shares her TED Prize wish in 2009. Photo: James Duncan Davidson

Throughout the film, Earle is both charming and a force to be reckoned with. On the road 300 days of the year, she spends her time campaigning, meeting with world leaders, guest lecturing and, of course, diving—she can’t help but make the rest of us feel like we’re slacking off. Self-effacing at her core, she has a giggly charisma that is contagious, yet she can also be serious when she needs to be. In the film, there’s a snippet of an interview with Stephen Colbert, who teases her that the ocean “is deep and dark and full of sharks who want to eat us,” so why should he care about it? Earle’s response is chilling: “Think of the world without an ocean. You’ve got a planet a lot like Mars.”

Most admirable, perhaps, is Earle’s intolerance for bureaucratic faffing on environmental change that doesn’t lead to concrete action, and that can even conceal the severity of ocean degradation. She is not afraid to ruffle feathers if it means saving more gills. Her brief stint as the Chief Scientist at the National Oceanic and Atmospheric Administration (NOAA) proved too stifling for her—we see her as she resigns with dignity, preferring to venture out on her own, with the freedom to speak her mind rather than maintain silence on matters close to her heart.

“The thing that’s impressive to me about Sylvia is that she is not afraid to point fingers, and say ‘you know what you’re doing and it’s wrong,’” says Jeremy Jackson of the Smithsonian Institution, one ocean activist interviewed in the film. Director James Cameron (whose journey 36,000 feet down is chronicled in his new documentary Deepsea Challenge 3D) meanwhile describes Earle as “the Joan of Arc of the ocean.”

Getting media coverage has never been difficult for Earle, but the coverage was somewhat sexist in the earlier part of her career. Take, for example, the headline “Sylvia Sails Away with 70 Men (But She Expects No Problems)” after her first scientific expedition to the Indian Ocean. As much as the media played up the fact that she was a woman in a man’s world, when it came to her love life, Earle says: “I wasn’t interested in anybody who wasn’t interested in what I was interested in. I was attracted to the nerdy types who loved talking about stars and space, or about diving.” This probably explains how she met her second husband at what she calls “a scientific meeting about fish.” A true feminist, Earle juggled being a mother, a wife and a scientist, among other things, but no matter the obstacles she faced in breaking through the proverbial glass ceiling, Earle never saw herself as a victim. “That’s life!” she says with her signature infectious optimism.

In the film, Earle takes on criticism that she is a radical. Most people, she reminds us, haven’t had the opportunity to spend thousands of hours underwater like she has, and to see the kind of destruction to the ocean that she’s seen over the course of her lifetime. With fifty percent less coral in the ocean now than there was in 1950, Earle says simply, with deep sadness, “the ocean is dying.” And while only 5% of the ocean has been seen, let alone mapped and explored, “if we continue business as usual, we’re in real trouble,” she says.

Yet Mission Blue is not all doom and gloom, as Earle is hopeful: “We still have a chance to fix things,” she says, noting that it’s a matter of getting everyone committed to the cause of ocean conservation. Viewers come away with a bigger, more pronounced understanding of the catastrophic impact of human behavior on the world’s oceans—and what we can do to start changing it.

In the end, what are the takeaways from Mission Blue? That following your passion makes for an exciting life; that sometimes you have to go against the grain to make a difference. And finally, that we’re all interconnected on this planet, which means that we all need to be mindful of the consequences of our choices.

Sylvia Earle amidst jellyfish. Photo: Courtesy of Mission Blue

This piece originally ran on the TED Prize Blog. Head there to read more about our annual award to a dreamer with a wish for the world »


Planet DebianLior Kaplan: The importance of close integration between distribution and upstream

Many package maintainers need to decide when to upload a new version to Debian. Should the upload be done only after the official release, or is there a place for uploads during the development process? In the latter case, there’s a need to balance the benefit of early testing and feedback against stability, so as not to break users’ environments (and package relationships) too often.

With the coming PHP 5.6.0 release, Debian has stayed on the cutting edge. Thanks to Ondřej, the new version has been available in experimental since alpha1 and in unstable/testing since beta3. Considering the timing of the PHP release relative to the Debian freeze, I’m happy we started early and did the transition to PHP 5.6 a few months ago.

But just following the development releases (betas, RCs) isn’t enough. Both Ondřej and I are part of the PHP community, and know the planned timelines, the current status and the critical points. Such knowledge was very useful this week, when we knew 5.6.0 was pending final tagging before release (after RC4). This made me take the report in Debian bug #759381, “php5: TLS connections broken in 5.6.0 RC4”, seriously and contact the release managers.

First it was a “heads up”, and then a real problem. After a quick discussion (in private mails by me and on GitHub by Ondřej), the relevant commit was reverted by the release managers (Julien Pauli & Ferenc Kovacs), and 5.6.0 was retagged. The issue will get more checks towards 5.6.1, without any time pressure.

Although 5.6.0 isn’t in production for anyone (yet), and like any major release it can have issues, the close connection between everyone spared the PHP users and ecosystem a round of complaints; I can’t imagine the issue being sorted out within 16 hours otherwise. It also helped that the bug was reported as a difference between two close releases (a regression in RC4 compared to RC3).

To close the loop: had we uploaded 5.6.0 only after the official release, the report would have been of a regression between 5.5.x and 5.6.0, which is obviously much harder to pinpoint. So I’m not sure I have a good answer for the question at the beginning of the post, but in this case our policy proved itself.
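
To see why the narrow window matters, here is a rough sketch of how one might hunt for the offending commit with git bisect (hedged: the tag names follow php-src's conventions, and the build and test steps are placeholders for a real TLS regression test):

    # Bisect upstream PHP between the last good and the first bad release candidate.
    git clone https://github.com/php/php-src.git
    cd php-src
    git bisect start php-5.6.0RC4 php-5.6.0RC3   # first argument bad, second good

    # At each commit bisect checks out: rebuild, re-run the TLS connection test,
    # then record the outcome until bisect names the offending commit.
    ./buildconf && ./configure --with-openssl && make -j4
    git bisect good      # or: git bisect bad, depending on the test result

    git bisect reset     # return to the original checkout when done

Between RC3 and RC4 this converges in a handful of steps; between 5.5.x and 5.6.0 the same search would span thousands of commits, each needing a rebuild.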


Filed under: Debian GNU/Linux, PHP

Sociological ImagesNew Orleans after Katrina: An Uneven Recovery

To mourn, commemorate, and celebrate the city of New Orleans after Hurricane Katrina, photographer Ted Jackson returned to the sites of some of his most powerful photographs, re-taking them to reveal the progress, or lack of progress, of the past nine years.

You can see them all at nola.com; I’ve pulled out three that speak to the uneven recovery that I see when I visit.

In this first photo, residents struggle to keep their heads above water by balancing on the porch railing of a home in the Lower 9th Ward, once a vibrant working-class, almost entirely African American neighborhood. Today, the home remains dilapidated, as did one in four homes in New Orleans as of 2010.


In the first photo of this second set, a man delivers fresh water to people stranded in the BW Cooper Housing Development, better known as the Calliope Projects.  Today, the housing development is awaiting demolition, having been mostly empty since 2005.  Some suspect that closing these buildings was an excuse to make it difficult or impossible for some poor, black residents to return.


This set of homes is located in an upper-income part of the city. The neighborhood, called Lakeview, suffered some of the worst flooding, 8 to 10 feet and more; it has recovered very well.


Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianTim Retout: Pump.io update 1

[The story so far: I'm packaging pump.io for Debian.]

4 packages uploaded to NEW:

  • node-webfinger
  • validator.js
  • websocket-driver
  • node-openid

2 packages eliminated as not needed:

  • set-immediate - deprecated
  • crypto-cacerts - not needed on Debian

1 package in progress:

  • node-databank

Got my eye on:

  • oauth-evanp - this is a fork with two patches, so I need to investigate the status of those.
  • node-iconv-lite - needs files downloaded from the internet, so I'm considering how to add them to the source package
  • dateformat/moment - there's an open discussion about combining Node.js modules, and I'm wondering if these are affected.

Thoughts

Currently I'm averaging around one package upload a day, I think? Which would mean ~1 month to go? But there may be challenges around getting packages through the NEW queue in time to build-depend on them.

Someone has asked my temporary Twitter account whether I have a pump.io account. Technically, yes, I do - but I don't post anything on it, because I want to run my own server in the long term. As part of running my own server, I always find that easier if I'm installing software from Debian packages. Hence this work. Sledgehammer, meet nut.

Google AdsenseGet twice the insights with new Google Analytics integration in the Google Publisher Toolbar


If you’re seeking rich insights and key information on your site’s performance or user behavior, it’s likely you’re already using the Google Publisher Toolbar or Google Analytics. Starting today, access more insights directly from your site pages in just one click, with the integration of Google Analytics in the Google Publisher Toolbar.


In addition to giving you blocking controls and up-to-date information on how your site is performing, the Google Publisher Toolbar now offers more insights on user behavior powered by Google Analytics. Understand your users and shape your audience development strategy with more insights into user demographics and traffic sources. Also, find out which sections of your pages are most popular with your users through In-Page Analytics. As with current information from the Google Publisher Toolbar, you can access this new data directly from your pages when viewing them on Chrome.

Google Analytics is now integrated by default in the Google Publisher Toolbar. More information can be found in our Help Center.

If you’re not yet using the Google Publisher Toolbar, download it today from the Chrome Web Store. As always, we’d love to hear your feedback on this new release. Tell us what you think in the comment section below this post.

Posted by: Araceli Checa, Software Engineer
Was this blog post useful? Share your feedback with us.

Planet Linux AustraliaGlen Turner: Raspberry Pi versus Cray X-MP supercomputer

It's often said that today we have in the phone in our pocket computers more powerful than the supercomputers of the past. Let's see if that is true.

The Raspberry Pi contains a Broadcom BCM2835 system on chip. The CPU within that system is a single ARM11 core (ARMv6 architecture) clocked at 700MHz. Stacked on top of the system on chip is 512MB of RAM -- this arrangement is called "package-on-package". As well as in the RPi, the BCM2835 SoC was used in some phones; these days they are the cheapest of smartphones.

The Whetstone benchmark was widely used in the 1980s to measure the performance of supercomputers. It gives a result in millions of floating point operations per second. Running Whetstone on the Raspberry Pi gives 380 MFLOPS. See Appendix 1 for the details.

Let's see what supercomputer comes closest to 380 MFLOPS. That would be the Cray X-MP/EA/164 supercomputer from 1988. That is a classic supercomputer: the X-MP was a 1982 revision of the 1975 Cray 1. So good was the revision work by Steve Chen that its performance rivalled the company's own later Cray 2. The Cray X-MP was the workhorse supercomputer for most of the 1980s; the EA series was the last version of the X-MP, and its major feature was a selectable word size -- either 24-bit or 32-bit -- which allowed the X-MP to run programs designed for the Cray 1 (24-bit), Cray X-MP (24-bit) or Cray Y-MP (32-bit).

Let's do some comparisons of the shipped hardware.

Basic specifications: Raspberry Pi versus Cray X-MP/EA/164

Item                          | Cray X-MP/EA/164            | Raspberry Pi Model B+
Price                         | US$8m (1988)                | A$38
Price, adjusted for inflation | US$16m                      | A$38
Number of CPUs                | 1                           | 1
Word size                     | 24 or 32                    | 32
RAM                           | 64MB                        | 512MB
Cooling                       | Air cooled, heatsinks, fans | Air cooled, no heatsink, no fan

Neither unit comes with a power supply. The Cray does come with housing, famously including a free bench seat. The RPi requires third-party housing, typically for A$10; bench seats are not available as an option.

The Cray had the option of solid-state storage. A Secure Digital card is needed to contain the RPi's boot image and, usually, its operating system and data.

I/O systems: Raspberry Pi versus Cray X-MP/EA/164

Item                          | Cray         | Raspberry Pi
SSD size                      | 512MB        | Third party, minimum of 4096MB
Price                         | US$6m (1988) | A$20
Price, adjusted for inflation | US$12m       | A$20

Of course the Cray X-MP also had rotating disk. Each disk unit could contain 1.2GB and had a peak transfer rate of 10MBps. This was achieved by using a large number of platters to compensate for the low density of the recorded data, giving the typical "top loading washing machine" look of disks of that era. The disk was attached to an I/O channel. The channel could connect many disks, collectively called a "string" of disks. The Cray X-MP had two to four I/O channels, each capable of 13MBps.

In comparison the Raspberry Pi's SDHC connector attaches one SDHC card at a speed of 25MBps. The performance of the SD cards themselves varies hugely, ranging from 2MBps to 30MBps.

Analysis

What is clear from the numbers is that the floating point performance of the Cray X-MP/EA has fared better with the passage of time than the other aspects of the system. That's because floating point performance was the major design goal of that era of supercomputers. Ignoring the floating point performance, the Raspberry Pi handily beats out every computer in the Cray X-MP range.

Would Cray have been surprised by these results? I doubt it. Seymour Cray left CDC when they decided to build a larger supercomputer. He viewed this as showing CDC as not "getting it": larger computers have longer wires, more electronics to drive the wires, more heat from the electronics, more design issues such as crosstalk and more latency. Cray's main design insight was that computers needed to be as small as possible. There's not much smaller you can make a computer than a system-on-chip.

So why aren't today's supercomputers systems-on-chip? The answer has two parts. Firstly, the chip would be too small to remove the heat from. This is why "chip packaging" has moved to near the centre of chip design. Secondly, chip design, verification and tooling (called "tape out") are astonishingly expensive for advanced chips. It's simply not affordable. You can afford a small variation on a proven design, but that is about the extent of the financial risk which designers care to take. A failed tape out was one of the causes of the downfall of the network processor design of Procket Networks.

Appendix 1. Whetstone benchmark

The whets.c benchmark was downloaded from Roy Longbottom's PC Benchmark Collection.

Compiling this for the RPi is simple enough. Since benchmark geeks care about the details, here they are.

$ diff -d -U 0 whets.c.orig whets.c
@@ -886 +886 @@
-#ifdef UNIX
+#ifdef linux
$ gcc --version | head -1
gcc (Debian 4.6.3-14+rpi1) 4.6.3

$ gcc -O3 -lm -s -o whets whets.c

Here's the run. This is using a Raspbian updated to 2014-08-23 on a Raspberry Pi Model B+ with the "turbo" overclocking to 1000MHz (this runs the RPi between 700MHz and 1000MHz depending upon demand and the SoC temperature). The Model B+ has 512MB of RAM. The machine was in multiuser text mode. There was no swap used before and after the run.

$ uname -a
Linux raspberry 3.12.22+ #691 PREEMPT Wed Jun 18 18:29:58 BST 2014 armv6l GNU/Linux

$ cat /etc/debian_version 
7.6

$ ./whets
##########################################
Single Precision C/C++ Whetstone Benchmark

Calibrate
       0.04 Seconds          1   Passes (x 100)
       0.19 Seconds          5   Passes (x 100)
       0.74 Seconds         25   Passes (x 100)
       3.25 Seconds        125   Passes (x 100)

Use 3849  passes (x 100)

          Single Precision C/C++ Whetstone Benchmark

Loop content                  Result              MFLOPS      MOPS   Seconds

N1 floating point     -1.12475013732910156       138.651              0.533
N2 floating point     -1.12274742126464844       143.298              3.610
N3 if then else        1.00000000000000000                 971.638    0.410
N4 fixed point        12.00000000000000000                   0.000    0.000
N5 sin,cos etc.        0.49911010265350342                   7.876   40.660
N6 floating point      0.99999982118606567       122.487             16.950
N7 assignments         3.00000000000000000                 592.747    1.200
N8 exp,sqrt etc.       0.75110864639282227                   3.869   37.010

MWIPS                                            383.470            100.373

It is worthwhile making the point that this took maybe ten minutes. Cray Research had multiple staff working on making benchmark numbers such as Whetstone as high as possible.

Planet Linux AustraliaBen Martin: Terry is getting In-Terry-gence.

I had hoped to use a quad core ARM machine running ROS to spruce up Terry the robot, performing tasks like robotic pick and place, controlling Tiny Tim and autonomous "docking". Unfortunately I found that trying to use a Kinect from an ARM based Linux machine can make for some interesting times. So I thought I'd dig at the ultra low end Intel chipset "SBC". The below is a J1900 Atom machine which can have up to 8GB of RAM and sports the features that one expects from a contemporary desktop machine: gigabit networking, USB3, SATA3, and even PCI-e expansion.


A big draw to this is the "DC" version, which takes a normal laptop style power connector instead of the much larger ATX connectors. This makes it much simpler to hook up to a battery pack for mobile use. The board runs nicely from a laptop extension battery, even if the on button is a bit funky looking. On the left is a nice battery pack which is running the whole PC.

An interesting feature of this motherboard is that it has no LEDs at all. I had sort of gotten used to Intel boards having blinking status and power LEDs and the like.
There should be enough CPU grunt to handle the Kinect and start looking at doing DSLAM, and thus autonomous navigation.

Planet Linux AustraliaAndrew Pollock: [life] Day 211: A trip to the museum with Megan and a swim assessment

Today was a nice full day. It was go go go from the moment my feet hit the floor, though.

I had a magnificent 9 hours of uninterrupted sleep courtesy of a sleeping pill, some pseudoephedrine and a decongestant spray.

I started off with a great yoga class, and then headed over to Sarah's to pick up Zoe. The traffic was phenomenally bad on the way there, and I got there quite late, so I offered to drop Sarah at work on the way back to my place, since I had no particular schedule.

After I'd dropped Sarah off, I was pondering what to do today, as the weather looked a bit dubious. I asked Zoe if she wanted to go to the museum. She was keen for that, and asked if Megan could come too, so I called up Jason on the way home to see if she wanted to come with us, and picked her up directly on the way home.

We briefly stopped at home to grab my Dad Bag and some snacks, and headed to the bus stop. We managed to walk (well, make a run for) straight onto the bus and headed into the museum.

The Museum and the Sciencecentre are synonymous to Zoe, despite the latter requiring admission (we've got an annual membership). In trying to use my membership to get a discounted Sciencecentre ticket for Megan, I managed to score a free family pass instead, which I was pretty happy about.

We got into the Sciencecentre, which was pretty busy with a school excursion, and the girls started checking it out. The problem was they both wanted to call the shots on what they did, and didn't like not getting their way. Once I instituted some turn-taking, everything went much more smoothly, and they had a great time.

We had some lunch in the cafe and then spent some more time in the museum itself before heading for the bus. We narrowly missed the bus 30 minutes earlier than I was aiming for, so I asked the girls if they wanted to take the CityCat instead. Megan was very keen for that, so we walked over to Southbank and caught the CityCat instead.

I was half expecting Zoe to want to be carried back from the CityCat stop, but she was good with walking back. Again, some turn taking as to who was the "leader" walking back helped keep the peace.

I had to get over to Chandler for Zoe's new potential swim school to assess her for level placement, so I dropped Megan off on the way, and we got to the pool just in time.

Zoe did me very proud and did a fantastic job of swimming, and was placed in the second-highest level of their learn to swim program. We also ran into her friend from Kindergarten, Vaeda, who was killing time while her brother had a swim class. So Zoe and Vaeda ended up splashing around in the splash pool for a while after her assessment.

Once I'd managed to extract Zoe from the splash pool, and got her changed, we headed straight back to Sarah's place to drop her off. So we pretty much spent the entire day out of the house. Zoe and Megan had a good day together, and I managed to figure out pretty quickly how to keep the peace.

CryptogramHacking Traffic Lights

New paper: "Green Lights Forever: Analyzing the Security of Traffic Infrastructure," Branden Ghena, William Beyer, Allen Hillaker, Jonathan Pevarnek, and J. Alex Halderman.

Abstract: The safety critical nature of traffic infrastructure requires that it be secure against computer-based attacks, but this is not always the case. We investigate a networked traffic signal system currently deployed in the United States and discover a number of security flaws that exist due to systemic failures by the designers. We leverage these flaws to create attacks which gain control of the system, and we successfully demonstrate them on the deployment in coordination with authorities. Our attacks show that an adversary can control traffic infrastructure to cause disruption, degrade safety, or gain an unfair advantage. We make recommendations on how to improve existing systems and discuss the lessons learned for embedded systems security in general.

News article.

Worse Than FailureIssue History

Ladies and gentlemen: the story you are about to read is true. Only the names have been changed to protect the innocent. The guilty are too obtuse to recognize themselves in the story, even if their names hadn't been changed.

Playing the part of Alex in this story is you. Your current employer is a stock fund. Your current engagement: to work on FLASH, the in-house developed stock trading system. That's 'stock', the financial instrument, not 'stock', the live kind you find on farms.

The FLASH application gives traders (the people who actually decide which stocks to buy and sell) the ability to submit their trades for execution. You're assigned to work on backlog items as accepted by your team. A well-organized ring of testers has been assigned to your team. They are one of the most puzzling groups you've ever encountered. Your job: to not be broken by them.

What follows (which is the history log of one of the backlog items) is an attempt by the testers to break you.

Date | Type | User Name | Description
May 8, 2013 | Backlog Item | Frank Jones | As a user, I would like to limit the number of decimal places in the price of the security to 7. This allows me to conform to the limitation of a maximum of seven decimal places found in the DLS system.

Seems like a reasonable request. You're certain the basic requirements are sufficient to weigh the possible solutions and choose the one most appropriate for the application. Let's get to it.

Date | Type | User Name | Description
June 30, 2013 | Change | Mark Walch | 'Status' changed to "Committed". 'Assigned To' changed to Alex
July 1, 2013 | Change | Alex Young | 'Status' changed to "In Progress"
July 1, 2013 | Change | Alex Young | 'Status' changed to "Done"

No problem at all. As it turns out, the application includes a validation framework that makes this change simple. Set the status to done, move on to the next work item and let the testers do their thing.

Date | Type | User Name | Description
July 1, 2013 | Change | Mark Walch | 'Status' changed to "Rework" - I attempted to test the solution, but as I was unable to enter more than 7 characters, I can't be certain that the system will function as specified.

So the testers, being unable to test that more than 7 characters can be entered, won't accept the fix. Really??

Date | Type | User Name | Description
July 2, 2013 | Change | Alex Young | 'Status' changed to "Done" - Allow more than 7 decimals in the Security Price field. Round to 7 places if more than 7 are provided. But only because 'Facepalm' isn't a valid status

And now, a couple of days later, the final entry in the history log.


Planet Linux AustraliaRichard Jones: When testing goes bad

I've recently started working on a large, mature code base (some 65,000 lines of Python code). It has 1048 unit tests implemented in the standard unittest.TestCase fashion using the mox framework for mocking support (I'm not surprised you've not heard of it).

Recently I fixed a bug which was causing a user interface panel to display when it shouldn't have been. The fix basically amounts to a couple of lines of code added to the panel in question:

+    def can_access(self, context):
+        # extend basic permission-based check with a check to see whether 
+        # the Aggregates extension is even enabled in nova 
+        if not nova.extension_supported('Aggregates', context['request']):
+            return False
+        return super(Aggregates, self).can_access(context)

When I ran the unit test suite I discovered to my horror that 498 of the 1048 tests now failed. The reason is that the can_access() method here is called as a side effect of those 498 tests, and the nova.extension_supported call (a REST call under the hood) needed to be mocked correctly to support it being called.

I quickly discovered that given the size of the test suite, and the testing tools used, each of those 498 tests must be fixed by hand, one at a time (if I'm lucky, some of them can be knocked off two at a time).

The main cause is mox's mocking of callables like the one above, which enforces the order in which those callables are invoked. It also enforces that the calls are made at all (uncalled mocks are treated as test failures).

This means there is no way to provide a blanket mock for "nova.extension_supported". Tests with existing calls to that API need careful attention to ensure the ordering is correct. Tests which don't result in the side-effect call to the above method will raise an error, so even adding a mock setup in a TestCase.setUp() doesn't work in most cases.

It doesn't help that the codebase is so large, and has been developed by so many people over years. Mocking isn't consistently implemented; even the basic structure of tests in TestCases is inconsistent.

It's worth noting that the ordering check that mox provides is never used as far as I can tell in this codebase. I haven't sighted an example of multiple calls to the same mocked API without the additional use of the mox InAnyOrder() modifier. mox does not provide a mechanism to turn the ordering check off completely.

The pretend library (my go-to for stubbing) splits out the mocking step and the verification of calls so the ordering will only be enforced if you deem it absolutely necessary.
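
For illustration, here is roughly what that looks like with pretend. This is a minimal sketch: the stubbed nova namespace and the call signature stand in for the project's real module, not quotes from it.

    import pretend

    # Record calls without enforcing ordering, or even that the stub is called.
    extension_supported = pretend.call_recorder(lambda name, request: True)
    nova = pretend.stub(extension_supported=extension_supported)

    # ... exercise the code under test ...
    assert nova.extension_supported('Aggregates', 'fake-request')

    # Verification is a separate, explicit step, done only when a test cares.
    assert extension_supported.calls == [pretend.call('Aggregates', 'fake-request')]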

The choice to use unittest-style TestCase classes makes managing fixtures much more difficult (it becomes a nightmare of classes and mixins and setUp() super() calls or alternatively a nightmare of mixin classes and multiple explicit setup calls in test bodies). This is exacerbated by the test suite in question introducing its own mock-generating decorator which will generate a mock, but again leaves the implementation of the mocking to the test cases. py.test's fixtures are a far superior mechanism for managing mocking fixtures, allowing simpler, central creation of mocks and overriding of them through fixture dependencies.
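
To make that concrete, here is a minimal sketch of such a central fixture with py.test. The nova stand-in is illustrative only; in the real suite you would monkeypatch the actual imported module.

    import types
    import pytest

    # Illustrative stand-in for the real nova wrapper module.
    nova = types.SimpleNamespace(extension_supported=lambda name, request: True)

    @pytest.fixture
    def supported_extensions(monkeypatch):
        # Blanket stub: no ordering or call-count expectations are enforced,
        # and individual tests can override the enabled set.
        enabled = {'Aggregates'}
        monkeypatch.setattr(nova, 'extension_supported',
                            lambda name, request: name in enabled)
        return enabled

    def test_panel_hidden_without_aggregates(supported_extensions):
        supported_extensions.discard('Aggregates')
        assert not nova.extension_supported('Aggregates', request=None)

Every test that asks for the fixture gets the stub; tests that never trigger the side-effect call simply don't care, and nothing fails for an uncalled mock.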

The result is that I spent some time working through some of the test suite and discovered that in an afternoon I could fix about 10% of the failing tests. I have decided that spending a week fixing the tests for my 5 line bug fix is just not worth it, and I've withdrawn the patch.

,

Debian Administration Using the haproxy load-balancer for increased availability

HAProxy is a TCP/HTTP load-balancer, allowing you to route incoming traffic destined for one address to a number of different back-ends. The routing is very flexible and it can be a useful component of a high-availability setup.
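
For a flavour of the configuration, here is a minimal sketch (the names and addresses are illustrative) that balances HTTP traffic round-robin across two back-end web servers:

    # /etc/haproxy/haproxy.cfg (fragment)
    frontend www
        bind *:80
        mode http
        default_backend webservers

    backend webservers
        mode http
        balance roundrobin
        # 'check' enables health checks, so a dead back-end leaves the rotation
        server web1 192.168.0.101:80 check
        server web2 192.168.0.102:80 check

If web1 stops responding, the health checks notice and traffic flows to web2 alone until it recovers.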

Sociological ImagesNew Orleans Voodoo: Before and After Hurricane Katrina

When Hurricane Katrina broke the levees of New Orleans and flooded 85% of the city, 100,000 people were left homeless. Disproportionately, these were the poor and black residents of New Orleans. This same population faced more hurdles to returning than their wealthier and whiter counterparts thanks to the effects of poverty, but also choices made by policymakers and politicians — some would say made deliberately — that reduced the black population of the city.

With them went many of the practitioners of voodoo, a faith with its origins in the merging of West African belief systems and Catholicism. At Newsweek, Stacey Anderson writes that locals claim the voodoo community was 2,500 to 3,000 people strong before Katrina, but that afterward the number was reduced to around 300.

The result has been a bridging of different voodoo traditions and communities. Prior to the storm, celebrations and ceremonies were race segregated and those who adhered to Haitian- and New Orleans-style voodoo kept their distance.  After the storm, with their numbers decimated, they could no longer sustain the in-groups and out-groups they once had.  Voodoo practitioners forged bonds across prior divides.

Voodoo Priestess Sallie Ann Glassman performs a ceremony at Bayou St. John (photo by Alfonso Bresciani):


Voodoo Priestess Miriam Chamani performs a ceremony at the Voodoo Spiritual Temple:


Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

CryptogramSecurity Flaws in Rapiscan Full-Body Scanners

Security researchers have finally gotten their hands on a Rapiscan backscatter full-body scanner. The results aren't very good.

Website with paper and images. News articles and commentary.

Note that these machines have been replaced in US airports with millimeter wave full-body scanners.

Cory DoctorowAdversarial Compatibility: hidden escape hatch rescues us from imprisonment through our stuff


My latest Guardian column, Adapting gadgets to our needs is the secret pivot on which technology turns, explains the hidden economics of stuff, and how different rules can trap you in your own past, or give you a better future.

Depending on your view, the stuff you own is either a boon to business or a tremendous loss of opportunity.

For example, the collection of spice bottles in your pantry means that I could possibly sell you a spice rack. On the other hand, it also means that I can’t design a special spice rack that only admits spice bottles of my own patent-protected design, which would thereby ensure that if you wanted to buy spices in the future you’d either have to buy them from me or throw away that very nice spice rack I sold you.

In the tech world, this question is often framed in terms of “ecosystems” (as in the “Google/Chrome/Android ecosystem”) or platforms (as in the “Facebook platform”), but whatever you call it, the discussion turns on a crucial concept: sunk cost.

That’s the money, time, mental energy and social friction you’ve already sunk into the stuff you own. Your spice rack’s sunk cost includes the money you spend on the rack, the time you spent buying fixings for it and the time you spent afixing it, the emotional toil of getting your family to agree on a spice rack, and the incredible feeling of dread that arises when you contemplate going through the whole operation again.

If you’ve already got a lot of sunk costs, the canny product strategy is to convince you that you can buy something that will help you organise your spices, rip all your CDs and put them on a mobile device, or keep your clothes organised.

But what a vendor really wants is to get you to sink cost into his platform, ecosystem, or what have you. To convince you to buy his wares, in order to increase the likelihood that you’ll go on doing so – because they match the decor, because you already have the adapters, and so on.

Adapting gadgets to our needs is the secret pivot on which technology turns [The Guardian]

(Image: David Joyce, CC-BY-SA: Story, Lumix G1 Adapter Breakdown, Chad Kainz, CC-BY)

Worse Than FailureCodeSOD: The Database Gazes Also Into You

When Simon asked us to consider this code from his predecessor's custom-built PHP CMS, we weren't terribly impressed:

$rs = new RecordSet("SELECT * FROM moduleData WHERE moduleID = '".$moduleID."' ORDER BY displayOrder ASC");

Since that code just selects a single record by its primary key, the only thing wrong with it is the redundant ORDER BY clause. But that wasn't all. Simon leaned forward across the table, his face made sinister by the single, flickering light bulb we make every would-be submitter sit under (TDWTF policy), and he whispered, "Wouldn't you like to know about the field in moduleData called SQLCode?"

We should have known better, Dear Reader. You would have known better. You would have known not to ask, not to take Simon's hand and follow him down the rickety, rusty spiral staircase into madness.

"SELECT * FROM ( SELECT * FROM (
    SELECT 1 as active, modulePR.*, DATE_FORMAT(createDate, '%M %Y') as groupBy FROM modulePR WHERE ".is_published($moduleID, 'modulePR', SITE_ID)." AND archiveStatus ".(ARCHIVE_MODE ? " =1" : " <>1")."
    UNION
    SELECT 0 as active, modulePR_old.*, DATE_FORMAT(createDate, '%M %Y') as groupBy FROM modulePR_old WHERE ".is_published($moduleID, 'modulePR_old', SITE_ID)." AND archiveStatus ".(ARCHIVE_MODE ? " =1" : " <>1")." GROUP BY linkUID
    ORDER BY active DESC, lastEditedDate desc
) as pr1
GROUP BY linkUID
) as prSorted
ORDER BY DATE_FORMAT(createDate,'%Y%m') DESC, itemTitle

Yes, that is a SQL string with PHP embedded in it, being stored in a database table. And, as Simon was quick to point out, that is_published() function returns even more SQL that makes up the first part of the WHERE clause. At this point, we'd learned our lesson. The lightbulb had flickered out while Simon described the monstrosity, and now his face was lost in shadow. He seemed to be chuckling to himself, quietly. He seemed to know, as we knew, that we were duty-bound to hear the fate of the SQL string. Praying it would just be fed to a simple eval() call, we kept recording—for you, Dear Reader. We did this for you...

while(!$rs->EOF()) {
    $moduleData['sql'] = !@eval("return ".$rs->field("SQLCode").";") ? !@eval("return \"".$rs->field("SQLCode")."\";") ? false : @eval("return \"".$rs->field("SQLCode")."\";") : @eval("return ".$rs->field("SQLCode").";");
    // ...snip a dozen or so more fields to build the $moduleData array, several of which contain similar thing to the above
    $rs2 = new RecordSet($moduleData['sql']);
    while(!$rs2->EOF()) {
        // ...
    }
    $rs->next();
}

Simon was laughing openly now, the atonal cackle of the truly lost. With the last of our sanity we heard him say, "The SQL/PHP blob from the DB is eval()ed to determine whether it's valid PHP code. In this example it isn't, so the eval fails, so it's eval()ed again with quotes around it to determine whether it's a valid PHP string-with-embedded-PHP-code. If either of these tests succeeds, the successful eval() is run one more time to get a value to put into the $moduleData[] array."

He went on, as though unable to stop, "If neither eval() works, or if they just return a falsey value like zero or the empty string, moduleData['sql'] gets set to false. False isn't a valid SQL string, so you'd think passing it into a new RecordSet without any further error checks might be a problem, but, no! The RecordSet class fails silently on SQL errors, and just sets EOF to true."

That's all we got out of Simon, who would do nothing further but mutter "GROUP BY with no aggregate" over and over. Weep for him, Dear Reader, for he has surely glimpsed an abyss. And shed a tear for yourself, for now you have, too.


Planet Linux AustraliaAndrew Pollock: [life] Day 210: Running and a picnic, with a play date and some rain

I had a rotten night's sleep last night. Zoe woke up briefly around midnight wanting a cuddle, and then I woke up again at around 3am and couldn't get back to sleep. I'm surprised I'm not more trashed, really.

It was a nice day today, so I made a picnic lunch, and we headed out to Minnippi Parklands to do a run with Zoe in the jogging stroller. It was around 10am by the time we arrived, and I had grand plans of running 10 km. I ran out of steam after about 3.5 km, conveniently at the "Rocket Park" at Carindale, which Zoe's been to a few times before.

So we stopped there for a bit of a breather, and then I ran back again for another 3 km or so, in a slightly different route, before I again ran out of puff, and walked the rest of the way back.

We then proceeded to have our picnic lunch and a bit of a play, before I dropped her off at Megan's house for a play while I chaired the PAG meeting at Kindergarten.

After that, and extracting Zoe, which is never a quick task, we headed home to get ready for swim class. It started to rain and look a bit thundery, and as we arrived at swim class we were informed that lessons were canceled, so we turned around and headed back home.

Zoe watched a bit of TV and then Sarah arrived to pick her up. I'm going to knock myself out with a variety of drugs tonight and hope I get a good night's sleep with minimum of cold symptoms.

Geek FeminismMan, I feel like a linkspam (26 August 2014)

Equity a distant prospect for women in CSIRO|Canberra Times: “CSIRO’s [Commonwealth Scientific and Industrial Research Organisation] latest annual report released in 2013 indicates that women represent 40 per cent of employees, but only 12 per cent of technical services roles and 24 per cent of research scientists are female. In contrast, women are over-represented in more poorly-paid, traditionally female roles such as administrative support which is 76 per cent female. At higher levels of the hierarchy, the situation for women is even bleaker, with only 11 per cent of research management roles held by women.” (August 25)

We Need to Talk About Silicon Valley’s Racism|The Daily Beast: “an elite set of tech investors that Forbes labels “The Midas List,” 100 venture capitalists with staggeringly profitable portfolios in the tech industry. And if you scroll down the complete Midas List, some visible trends begin to emerge. The featured photo for the list, first of all, is as white as a loaf of Wonder Bread and as male as a football locker room. There are only four women on the list, none of whom rank in the Top 20. And of the 96 men on the Midas List, the overwhelming majority appear to be white, including every single member of the Top 10.” (August 22)

Lunch with Dads|Ellen’s Blog: “That’s what being different does. It makes you aware of your actions, and that you might be imposing. It’s so minor, but it adds up…..When you don’t have a diverse team, there will be that nagging sensation for the few people who are different. It’s more likely those people will leave, or continue to feel out of place.” (August 23)

I accept trans women in my tech feminism | 0xabad1dea: “Trying to enforce the separation of trans women from other women does not support any cause I believe in – especially if that enforcement is being proposed by a man, no matter how well-meaning or feminist.” (August 22)

Adding misogyny to Fark moderator guidelines | Fark: “as of today, the FArQ will be updated with new rules reminding you all that we don’t want to be the He Man Woman Hater’s Club.  This represents enough of a departure from pretty much how every other large internet community operates that I figure an announcement is necessary.” “I recommend that when encountering grey areas, instead of trying to figure out where the actual line is, the best strategy would be to stay out of the grey area entirely.” (August 22)

Late Night Thoughts on Boundaries & Consent | Julie Pagano: “Being nice is incredibly overrated. I have no desire to be nice, and I think a culture of “nice” is counter to a culture of consent and boundaries. I prefer to be kind and empathetic – these are things to aspire to.” (August 24)

People of Color-led Makerspace and Hackerspace! | Indiegogo: Liberating Ourselves Locally is one of the few (if not only) people of color-led makerspaces/hackerspaces in the Bay Area. If you do a search for “people of color makerspace” on Google, we’re not just the first result, we fill the first page. We lost one of our main funding sources recently, so we’re appealing to our community to keep the space running.

If White Characters Were Described Like People Of Color In Literature|Buzzfeed:
“2. She took off his shirt, his skin glistening in the sun like a glazed doughnut. The glaze part, not the doughnut part.” (August 22)


We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Krebs on SecurityDQ Breach? HQ Says No, But Would it Know?

Sources in the financial industry say they’re seeing signs that Dairy Queen may be the latest retail chain to be victimized by cybercrooks bent on stealing credit and debit card data. Dairy Queen says it has no indication of a card breach at any of its thousands of locations, but the company also acknowledges that nearly all stores are franchises and that there is no established company process or requirement that franchisees communicate security issues or card breaches to Dairy Queen headquarters.

Update, Aug. 28, 12:08 p.m. ET: A spokesman for Dairy Queen has confirmed that the company recently heard from the U.S. Secret Service about “suspicious activity” related to a strain of card-stealing malware found in hundreds of other retail intrusions. Dairy Queen says it is still investigating and working with authorities, and does not yet know how many stores may be impacted.

Original story:

I first began hearing reports of a possible card breach at Dairy Queen at least two weeks ago, but could find no corroborating signs of it — either by lurking in shadowy online “card shops” or from talking with sources in the banking industry. Over the past few days, however, I’ve heard from multiple financial institutions that say they’re dealing with a pattern of fraud on cards that were all recently used at various Dairy Queen locations in several states. There are also indications that these same cards are being sold in the cybercrime underground.

The latest report in the trenches came from a credit union in the Midwestern United States. The person in charge of fraud prevention at this credit union reached out wanting to know if I’d heard of a breach at Dairy Queen, stating that the financial institution had detected fraud on cards that had all been recently used at a half-dozen Dairy Queen locations in and around its home state.

According to the credit union, more than 50 customers had been victimized by a blizzard of card fraud just in the past few days alone after using their credit and debit cards at Dairy Queen locations — some as far away as Florida — and the pattern of fraud suggests the DQ stores were compromised at least as far back as early June 2014.

“We’re getting slammed today,” the fraud manager said Tuesday morning of fraud activity tracing back to member cards used at various Dairy Queen locations in the past three weeks. “We’re just getting all kinds of fraud cases coming in from members having counterfeit copies of their cards being used at dollar stores and grocery stores.”

Other financial institutions contacted by this reporter have seen recent fraud on cards that were all used at Dairy Queen locations in Florida and several other states, including Alabama, Indiana, Illinois, Kentucky, Ohio, Tennessee, and Texas.

On Friday, Aug. 22, KrebsOnSecurity spoke with Dean Peters, director of communications for the Minneapolis-based fast food chain. Peters said the company had heard no reports of card fraud at individual DQ locations, but he stressed that nearly all of Dairy Queen stores were independently owned and operated. When asked whether DQ had any sort of requirement that its franchisees notify the company in the event of a security breach or problem with their card processing systems, Peters said no.

“At this time, there is no such policy,” Peters said. “We would assist them if [any franchisees] reached out to us about a breach, but so far we have not heard from any of our franchisees that they have had any kind of breach.”

Julie Conroy, research director at the advisory firm Aite Group, said nationwide companies like Dairy Queen should absolutely have breach notification policies in place for franchisees, if for no other reason than to protect the integrity of the company’s brand and public image.

“Without question this is a brand protection issue,” Conroy said. “This goes back to the eternal challenge with all small merchants. Even with companies like Dairy Queen, where the mother ship is huge, each of the individual establishments are essentially mom-and-pop stores, and a lot of these stores still don’t think they’re a target for this type of fraud. By extension, the mother ship is focused on herding a bunch of cats in the form of thousands of franchisees, and they’re not thinking that all of these stores are targets for cybercriminals and that they should have some sort of company-wide policy about it. In fact, franchised brands that have that sort of policy in place are far more the exception than the rule.”

DEJA VU ALL OVER AGAIN?

The situation apparently developing with Dairy Queen is reminiscent of similar reports last month from multiple banks about card fraud traced back to dozens of locations of Jimmy John’s, a nationwide sandwich shop chain that also is almost entirely franchisee-owned. Jimmy John’s has said it is investigating the breach claims, but so far it has not confirmed reports of card breaches at any of its 1,900+ stores nationwide.

The DHS/Secret Service advisory.

Rumblings of a card breach involving at least some fraction of Dairy Queen’s 4,500 domestic, independently-run stores come amid increasingly vocal warnings from the U.S. Department of Homeland Security and the Secret Service, which last week said that more than 1,000 American businesses had been hit by malicious software designed to steal credit card data from cash register systems.

In that alert, the agencies warned that hackers have been scanning networks for point-of-sale systems with remote access capabilities (think LogMeIn and pcAnywhere), and then installing malware on POS devices protected by weak and easily guessed passwords.  The alert noted that at least seven point-of-sale vendors/providers confirmed they have had multiple clients affected.

Around the time that the Secret Service alert went out, UPS Stores, a subsidiary of the United Parcel Service, said that it scanned its systems for signs of the malware described in the alert and found security breaches that may have led to the theft of customer credit and debit data at 51 UPS franchises across the United States (about 1 percent of its 4,470 franchised center locations throughout the United States). Incidentally, the way UPS handled that breach disclosure — clearly calling out the individual stores affected — should stand as a model for other companies struggling with similar breaches.

In June, I wrote about a rash of card breaches involving car washes around the nation. The investigators I spoke with in reporting that story said all of the breached locations had one thing in common: They were all relying on point-of-sale systems that had remote access with weak passwords enabled.

My guess is that some Dairy Queen locations owned and operated by a particular franchisee group that runs multiple stores has experienced a breach, and that this incident is limited to a fraction of the total Dairy Queen locations nationwide. Unfortunately, without better and more timely reporting from individual franchises to the DQ HQ, it may be a while yet before we find out the whole story. In the meantime, DQ franchises that haven’t experienced a card breach may see their sales suffer as a result.

CARD BLIZZARD BREWING?

Last week, this publication received a tip that a well-established fraud shop in the cybercrime underground had begun offering a new batch of stolen cards indexed for sale by U.S. state. The type of card data primarily sold by this shop — known as “dumps” — allows buyers to create counterfeit copies of the cards so that they can be used to buy goods (gift cards and other easily-resold merchandise) from big box retailers, dollar stores and grocers.

Increasingly, fraudsters who purchase stolen card data are demanding that cards for sale be “geolocated” or geographically indexed according to the U.S. state in which the compromised business is located. Many banks will block suspicious out-of-state card-present transactions (especially if this is unusual activity for the cardholder in question). As a result, fraudsters tend to prefer purchasing cards that were stolen from people who live near them.

This was an innovation made popular by the core group of cybercrooks responsible for selling cards stolen in the Dec. 2013 breach at Target Corp, which involved some 40 million compromised credit and debit cards. The same fraudsters would repeat and refine that innovation in selling tens of thousands of cards stolen in February 2014 from nationwide beauty products chain Sally Beauty.

This particular dumps shop pictured to the right appears to be run by a completely separate fraud group than the gang that hit Target and Sally Beauty. Nevertheless, just this month it added its first new batch of cards that is searchable by U.S. state. Two different financial institutions contacted by KrebsOnSecurity said the cards they acquired from this shop under this new “geo” batch name all had been used recently at different Dairy Queen locations.

The first batch of state-searchable cards at this particular card shop appears to have first gone on sale on Aug. 11, and included slightly more than 1,000 cards. The second batch debuted a week later and introduced more than twice as many stolen cards. A third bunch of more than 5,000 cards from this batch went up for sale early this morning.

An ad in the shop pimping a new batch of geo-located cards apparently stolen from Dairy Queen locations.

,

TEDHow teachers can best use TED Talks in class, from the perspective of a student

What happens when a teacher mixes Madame Bovary and a TED Talk? Good things, actually. Photo: iStockphoto

By Olivia Cucinotta

My high school English class had just finished reading Madame Bovary, and we were all confused. (For those of you who have not read it, please skip to paragraph two. Spoiler alert!) Emma Bovary, a listless housewife in search of the passionate love she’s read about in books, has many sordid affairs, falls deeply into debt and kills herself by swallowing arsenic. Her ever-faithful and terribly dull husband Charles dies a while later of a broken heart, and their daughter, upon finding her father dead, is sent to work in a cotton mill. We were all baffled and upset by the end of this intense, complicated novel. When we arrived in class the next day, our teacher asked us the question: “What can we learn about real love from Madame Bovary?” and no one knew what to say.

That night for homework, our only assignment was to watch a TED Talk: “Why we love, why we cheat” by anthropologist Helen Fisher. In the talk, Fisher explained her work: “My colleagues and I took 32 people who were madly in love and put them into a functional MRI brain scanner.” I knew that Helen Fisher was taking a very different approach to understanding love from Gustave Flaubert. So why was I reading Flaubert and watching her talk, one after the other?

I didn’t realize what my teacher was doing until class discussion the next day. We shuffled in, pulled our desks into a circle, took our copies of Madame Bovary out of our bags and looked around at each other.

“So,” my teacher said, “if Gustave Flaubert and Helen Fisher were having a conversation about love, what would they say to one another? What would you say to them?”

There was a pause, and then: “I mean, the thing about love being a drug, like cocaine, seems like Emma felt love like that?”

“But then what about Charles? Was he in love?”

“Well he wasn’t intense, and he wasn’t possessive. Maybe he wasn’t in love?”

“He died for love.”

“Did he die for love or for heartbreak?”

“What’s the difference?”

The discussion continued, back and forth.

What my English teacher did that day showed me the value of TED Talks in the classroom: school is all about ideas, and TED can help teachers bring ideas into conversation and debate. TED Talks aren’t like Wikipedia articles—yes, they contain information, but at their best, they actually spark a conversation. They can be used to bring diverse voices, questions, and even conflict into classroom discussions—as Helen Fisher’s did for my English class. Physics classes can start to think about just how non-linear physics really is with Boaz Almog’s demonstration of quantum superconductors, history classes can think about Yoruba Richen’s talk and wonder about how rights movements work, and students can even question the school system they are a part of with Ken Robinson’s talk on how schools kill creativity.

I graduated from high school in May. (I’ve spent the summer before college interning with the lovely editorial department at TED.) Throughout my high school career, I’ve seen teachers use TED Talks often—sometimes very well, and sometimes in ways I didn’t find as effective. I recently got in touch with a former teacher from my school, Suzanne Fogarty (now the director of the Lincoln School in Providence, Rhode Island), who showed Chimamanda Adichie’s TED Talk, “The danger of a single story,” in an assembly. Afterward, Adichie’s talk popped up in lectures, lunchtime discussions, even in the hallways between classes. Her ideas had entered the vocabulary of the school. Everyone was thinking about the “single story.”

I wanted to find out why Ms. Fogarty chose to use TED in her curriculum. When I asked, she responded, “TED Talks make us pause and listen to the percolation of ideas—art, engineering, technology, the humanities, spoken word and more.”

Her comment clarified something for me. The best uses of TED Talks in the classroom really do take advantage of that “percolation of ideas.” Talks work best when teachers use them to give perspective and to generate discussion around difficult topics.

But how exactly do you do this? Stephanie Lo, Director of TED-Ed Programs, advises teachers to use TED videos as a way to get students thinking. She recommends that teachers check out Ed.TED.com, which is packed full of short, animated lessons created specifically for students. (When searching, teachers can filter by student age—there are talks for elementary school students, middle school students, high school students and college students.) And she recommends that, whether they’re using a lesson or a talk, teachers prepare discussion questions to get students thinking before they get to class.

Fogarty echoes the sentiment. “I like having some essential questions to accompany the talk,” she told me, “or asking students to research TED Talks that carry meaning for their generation.”

These conversations helped me see what can really happen when TED Talks are brought into the classroom. Students can better grasp topics they might not fully understand at first glance, think critically about how they think about the world, and discuss other big ideas alongside their own. Gustave Flaubert can have a conversation with Helen Fisher about the meaning of love. And that is pretty cool.


TEDOrphans of the narrative: Bosnian photographer Ziyah Gafić documents the aftermath of war

Bosnian photojournalist Ziyah Gafić photographs the aftermath of conflict. (Watch his TED Talk, “Everyday objects, tragic histories.”) In his most recent book, Quest for Identity, he catalogs the belongings of Bosnia’s genocide victims, everyday objects like keys, books, combs and glasses that were exhumed from mass graves. The objects are still being used to identify the bodies from this two-decade-old conflict. Only 12 when the Bosnian War began, the Sarajevo native has spent the last 15 years turning his lens on conflicts around the world as a way of coming to terms with the tragedy of his homeland. Here, he tells the TED Blog about looking for patterns of violence, the relationship between detachment and empathy, and what it’s like to grow up with war.

Tell us about the overall focus of your work.

The stories I’m interested in are focused on countries that have followed a similar pattern of violence as my homeland, Bosnia. My book Troubled Islam: Short Stories from Troubled Societies covers my own journey, starting with the aftermath of war in Bosnia and then exploring the consequences of conflict from Pakistan’s northwestern province, Palestine and Israel to Afghanistan, Iraq, Iran, Chechnya and Lebanon. “Aftermath” is a key word: the aftermaths of conflicts in these countries follow a similar sequence to that of Bosnia: ethnic violence, fraternal war, ethnic cleansing and, ultimately, genocide.

My idea was to compare countries that are thousands of miles away, on different continents, yet following the same vicious patterns. These countries have certain things in common. One is ethnic wars, though “ethnic” is probably not the most fortunate choice of word. All these countries have significant Muslim communities, or have majority-Muslim populations. In the post-9/11 world, these elements became very relevant.

How is Quest for Identity related to Troubled Islam?

I worked on Quest for Identity in parallel to Troubled Islam, but Quest is totally different from everything I do, because my usual approach to photography is very subjective. I wanted to do something on the other side of the spectrum, something extremely objective—something neutral, something accurate. By coincidence, while doing other stories, around 10 years ago I stumbled upon these objects that are being used as forensic evidence in the ongoing process of identifying over 30,000 missing Bosnians.

From Quest for Identity. A watch from among the personal belongings recovered from a Bosnian mass grave, lying on a forensics table. These items were carried by people as they fled from the Serb Army, or when they were taken for execution. Photo: Ziyah Gafić

A family photo, from Quest for Identity. Personal belongings are still being recovered from countless mass graves across Bosnia and Herzegovina, and are used as evidence in ongoing trials for war crimes—and in the ongoing identification of their owners. Photo: Ziyah Gafić

Why did you want to start photographing these items?

The simplicity of the objects really struck me. I believe photography is about empathy, and I think the fact that we all share these items—everyone has owned some of these things at some point in their lives—triggers empathy in whoever sees these images. Photographs should make us imagine ourselves in other people’s shoes, and often that process is clouded by cultural differences. Quest for Identity goes beyond cultural differences.

It seems that this would be overwhelmingly emotional work. How do you feel when you’re photographing these very personal items?

On the one hand, it’s personal because it involves my people and my country, but the process allows me to be detached. You go into the morgue, effectively, where everyone is dressed in white. You get gloves, a mask, a hat. So there is immediate physical detachment. I also have a strong sense of doing something useful. Between the distance provided by the clothes and the feeling of purpose, it makes it easier. Well, easy’s not the right word, because we are talking about thousands of people. But it gives enough room to breathe.

Visually, they are very clean, clinical, forensic photographs. The result is something that is extremely objective, extremely accurate—but at the same time it provokes, cross-culturally, from India to the US, the same very emotional response. That is the feedback I was hoping for.

What happens to the objects after you’ve photographed them?

After they’ve been used by forensics and the lawyers, the objects are shelved in several identification centers across the country. Sometimes they actually get destroyed. There was a big scandal in the International Criminal Tribunal for ex-Yugoslavia (ICTY), where, after the proceedings, they destroyed literally thousands of these objects. Which is unacceptable because in many cases, entire families are gone. All that remains are bones, which have been scattered in several mass graves—sometimes in a mass grave you will have only one bone from the deceased—and these everyday items. And then someone destroys the last material evidence of that person’s existence. So part of the idea is to create a massive and accurate online image bank that will have hi-res files, downloadable for personal use.

I am using photography to permanently freeze these objects in a very accurate form, at a certain time. So in case they are destroyed, at least we will have an accurate replica. Two-dimensional, but it’s something.

Glasses, from Quest for Identity. Photo: Ziyah Gafić

Glass eye, from Quest for Identity. Photo: Ziyah Gafić

Shoe, from Quest for Identity. Photo: Ziyah Gafić

So is identification data also preserved in your photographs?

Yes. All the items are processed and cataloged by the time I come to take the picture. I include the original coding of the item in the image metadata. So effectively, if you see the picture, it will correspond to a bag in which the item is stored. Eventually, once these proceedings are done, the families can claim them.

Another problem is the nature of mass graves. If one body has been moved from a mass grave to another, that means the integrity of the body is damaged. So you might find a femur in one grave, they match the DNA, and call the family. They identify the person and bury the bone. Six months later, investigators open another mass grave, and they find a toe from the same body. By law, they have to go through the same procedure again, calling the family, matching the DNA, the family comes in, signs the documents. Then they must decide whether to include it in the grave, which would mean reopening it. That can happen a number of times. It becomes very complex, expensive and incredibly painful for survivors.

That’s another reason these items remain stored for a long time. They never know when they will have to reopen the case—it never goes cold.

What was your experience of the war?

I was a teenager. So for me, it was a very passive experience. But it was also a beautiful experience. I know that’s an awful thing to say, but war is a very special state of mind. It’s crystal clear, in a way. On one side, you have the fog of war, but on the other side, you have a very simple situation. Life becomes very simple and reduced to the basics.

All these everyday choices we make—the silly ones, like, “Am I having boiled eggs or an omelette?” You don’t have those choices. The choices you make during the day are reduced to one or two. One is making it through the day. The other is providing food. Because we were under siege, the electricity was cut off most of the time, which meant we had to find wood for cooking, which is crazy in the city. And fetch water.

Men in the remote village of Lukomir, Bosnia. Founded in the 12th century, the village consists primarily of two families: Comor and Maslesa. To avoid incest, men marry women from the surrounding villages. From Troubled Islam: Short Stories from Troubled Societies. Photo: Ziyah Gafić

Post-“liberation” Afghanistan. Afghan cemeteries are very basic: simple stones mark graves, which are very rarely engraved. From Troubled Islam: Short Stories from Troubled Societies. Photo: Ziyah Gafić

What about school?

School was irregular, depending on shelling. We were going to school erratically, from day to day. But as I said, it’s a very reduced way of life, which then allows a lot of time for your brain to do other stuff. And in my case, being a teenager, I was having great fun doing what teenagers do. There were two underground—physically underground—bars I started going to. I started going to bars when I was 14, because in a war situation, you unwillingly mature overnight. Your soul ages faster in wartime.

Were you afraid?

Of course. It’s part of the deal. But human beings are very resilient. This resilience is one of the things that I’m exploring in Troubled Islam, how people manage to keep the fabric of society together, despite the absolute destruction that is coming upon them. And at the same time, as a kid, you are a passive participant, if that’s fair. You are a target, and as a teenager, you don’t have the means to respond. You can’t be a combatant, because you’re too young. So not being able to take part in what’s happening is deeply frustrating. I guess that’s what pushed me towards photography. In a way, maybe I was trying to re-create my state of mind during the conflict by going to countries caught in conflict.

And did you succeed?

No. I can say, after 10 years of doing it, that it doesn’t work. All of the other conflicts that I’ve visited had less sentimental value for me than our own war. It’s happening to someone else, so the experience has limited therapeutic effect. I go there as a photographer, a foreigner, for a limited time, on expenses, living in a bubble. I had the choice to leave, whereas during our war, I had no choice. And rarely did the people in my photographs have a choice to leave the conflict. They are stuck, and that is the fundamental difference that makes it so hard to imagine yourself in their shoes. The lack of choice makes all the difference. Often, photojournalism is more about ourselves than about our subjects, and then it becomes a powerful, self-serving ego trip. Quest for Identity is on the other side of the spectrum. I wanted to do something detached from my ego.

Saddam City, Baghdad. Brothers Hassan and Saad Nasir Alamy said they were imprisoned by Saddam’s regime without a trial. Some of their scars are from the torture, some of them self-inflicted. After having spent time in prison, they claim they’re mentally ill. From Troubled Islam: Short Stories from Troubled Societies. Photo: Ziyah Gafić

What are you planning next?

I always have several projects in development. One of them is about architectural changes and the reshaping of Mecca, in Saudi Arabia, the holy city for all the Muslims in the world. That’s a story I’m really interested in telling. I’m also working on another project in the United States, called “Islam in America.” It’s basically a road trip through all 50 states, drawing a visual map—in photography and video—of Muslims in America, one of America’s fastest-growing communities. And finally, I’m working on a project about women in Saudi Arabia.

Do you feel that Quest for Identity is contributing to the healing of the community?

I’m quite modest about the effects of what I do. I don’t believe it will change the world, but it might change someone’s world.

It might still help connect families to their lost ones.

Hopefully. Hopefully. And if everything else fails, at least it will remain as a very accurate document of an era. I think that’s not to be overlooked at all. Because Bosnia is obviously an under-developed country, the resources for creating memorial centers and a culture of nurturing memory are very limited. So it’s up to us as individuals to do whatever we can.

Interested in seeing more of Ziyah Gafić’s work? Follow him on Instagram.


Planet DebianSimon Josefsson: The Case for Short OpenPGP Key Validity Periods

After I moved to a new OpenPGP key (see key transition statement) I have received comments about the short validity period of my new key. When I created the key (see my GnuPG setup) I set it to expire after 100 days. Some people assumed that I would have to create a new key then, and therefore wondered what value there is in signing a key that will expire in two months. It doesn’t work like that, and below I will explain how OpenPGP key expiration works; how to extend the expiration time of your key; and argue why having a relatively short validity period can be a good thing.

The OpenPGP message format has a sub-packet called the Key Expiration Time, quoting the RFC:

5.2.3.6. Key Expiration Time

   (4-octet time field)

   The validity period of the key.  This is the number of seconds after
   the key creation time that the key expires.  If this is not present
   or has a value of zero, the key never expires.  This is found only on
   a self-signature.

You can print the sub-packets in your OpenPGP key with gpg --list-packets. See below an output for my key, and notice the "created 1403464490" (which is Unix time for 2014-06-22 21:14:50) and the "subpkt 9 len 4 (key expires after 100d0h0m)" which adds up to an expiration on 2014-09-30. Don't confuse the creation time of the key ("created 1403464321") with when the signature was created ("created 1403464490").

jas@latte:~$ gpg --export 54265e8c | gpg --list-packets |head -20
:public key packet:
	version 4, algo 1, created 1403464321, expires 0
	pkey[0]: [3744 bits]
	pkey[1]: [17 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
	version 4, created 1403464490, md5len 0, sigclass 0x13
	digest algo 10, begin of digest be 8e
	hashed subpkt 27 len 1 (key flags: 03)
	hashed subpkt 9 len 4 (key expires after 100d0h0m)
	hashed subpkt 11 len 7 (pref-sym-algos: 9 8 7 13 12 11 10)
	hashed subpkt 21 len 4 (pref-hash-algos: 10 9 8 11)
	hashed subpkt 30 len 1 (features: 01)
	hashed subpkt 23 len 1 (key server preferences: 80)
	hashed subpkt 2 len 4 (sig created 2014-06-22)
	hashed subpkt 25 len 1 (primary user ID)
	subpkt 16 len 8 (issuer key ID 0664A76954265E8C)
	data: [3743 bits]
:signature packet: algo 1, keyid EDA21E94B565716F
	version 4, created 1403466403, md5len 0, sigclass 0x10
jas@latte:~$ 
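
As a quick check of that arithmetic, GNU date can turn the timestamps from the listing into calendar dates. A minimal sketch, using the values from my key above and -u for UTC:

jas@latte:~$ date -u -d @1403464321
Sun Jun 22 19:12:01 UTC 2014
jas@latte:~$ date -u -d @$((1403464321 + 100*86400))
Tue Sep 30 19:12:01 UTC 2014

The key creation time plus the 100-day sub-packet value lands on 2014-09-30, matching the expiry dates that gpg --edit-key reports below.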

So the key will simply stop being valid after that time? No. It is possible to update the key expiration time value, re-sign the key, and distribute the key to the people you communicate with, either directly or indirectly via OpenPGP keyservers. Since that date is a couple of weeks away, now felt like the perfect opportunity to go through the exercise of taking out my offline master key, booting from a Debian LiveCD and extending the key's expiry time. See my earlier writeup for LiveCD and USB stick conventions.

user@debian:~$ export GNUPGHOME=/media/FA21-AE97/gnupghome
user@debian:~$ gpg --edit-key 54265e8c
gpg (GnuPG) 1.4.12; Copyright (C) 2012 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Secret key is available.

pub  3744R/54265E8C  created: 2014-06-22  expires: 2014-09-30  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for the primary key.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:47:48 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 1

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2014-09-30  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:05 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 2

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub* 2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 1

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2014-09-30  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:14 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub  2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 3

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub* 2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> key 2

pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2014-09-30  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> expire
Changing expiration time for a subkey.
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 150
Key expires at Fri 23 Jan 2015 02:48:23 PM UTC
Is this correct? (y/N) y

You need a passphrase to unlock the secret key for
user: "Simon Josefsson "
3744-bit RSA key, ID 54265E8C, created 2014-06-22


pub  3744R/54265E8C  created: 2014-06-22  expires: 2015-01-23  usage: SC  
                     trust: ultimate      validity: ultimate
sub  2048R/32F8119D  created: 2014-06-22  expires: 2015-01-23  usage: S   
sub  2048R/78ECD86B  created: 2014-06-22  expires: 2015-01-23  usage: E   
sub* 2048R/36BA8F9B  created: 2014-06-22  expires: 2015-01-23  usage: A   
[ultimate] (1). Simon Josefsson 
[ultimate] (2)  Simon Josefsson 

gpg> save
user@debian:~$ gpg -a --export 54265e8c > /media/KINGSTON/updated-key.txt
user@debian:~$ 

I remove the "transport" USB stick from the "offline" computer, and back on my laptop I can inspect the new updated key. Let's use the same command as before. The key creation time is the same ("created 1403464321"), of course, but the signature packet has a new time ("created 1409064478") since it was signed now. Notice "created 1409064478" and "subpkt 9 len 4 (key expires after 214d19h35m)". The expiration time is computed based on when the key was generated, not when the signature packet was generated. You may want to double-check the pref-sym-algos, pref-hash-algos and other sub-packets so that you don't accidentally change anything else. (Btw, re-signing your key is also how you would modify those preferences over time.)

jas@latte:~$ cat /media/KINGSTON/updated-key.txt |gpg --list-packets | head -20
:public key packet:
	version 4, algo 1, created 1403464321, expires 0
	pkey[0]: [3744 bits]
	pkey[1]: [17 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
	version 4, created 1409064478, md5len 0, sigclass 0x13
	digest algo 10, begin of digest 5c b2
	hashed subpkt 27 len 1 (key flags: 03)
	hashed subpkt 11 len 7 (pref-sym-algos: 9 8 7 13 12 11 10)
	hashed subpkt 21 len 4 (pref-hash-algos: 10 9 8 11)
	hashed subpkt 30 len 1 (features: 01)
	hashed subpkt 23 len 1 (key server preferences: 80)
	hashed subpkt 25 len 1 (primary user ID)
	hashed subpkt 2 len 4 (sig created 2014-08-26)
	hashed subpkt 9 len 4 (key expires after 214d19h35m)
	subpkt 16 len 8 (issuer key ID 0664A76954265E8C)
	data: [3744 bits]
:user ID packet: "Simon Josefsson "
:signature packet: algo 1, keyid 0664A76954265E8C
jas@latte:~$ 

Being happy with the new key, I import it and send it to key servers out there.

jas@latte:~$ gpg --import /media/KINGSTON/updated-key.txt 
gpg: key 54265E8C: "Simon Josefsson " 5 new signatures
gpg: Total number processed: 1
gpg:         new signatures: 5
jas@latte:~$ gpg --send-keys 54265e8c
gpg: sending key 54265E8C to hkp server keys.gnupg.net
jas@latte:~$ gpg --keyserver keyring.debian.org  --send-keys 54265e8c
gpg: sending key 54265E8C to hkp server keyring.debian.org
jas@latte:~$ 

Finally: why go through this hassle, rather than setting the key to expire in 50 years? Some reasons for this are:

  1. I don’t trust myself to keep track of a private key (or revocation cert) for 50 years.
  2. I want people to notice my revocation certificate as quickly as possible.
  3. I want people to notice other changes to my key (e.g., cipher preferences) as quickly as possible.

Let’s look into the first reason a bit more. What would happen if I lose both the master key and the revocation cert, for a key that’s valid for 50 years? I would start from scratch and create a new key that I upload to keyservers. Then there would be two keys out there that are valid and identify me, and both will have a set of signatures on them. Neither will be revoked. If I happen to lose the new key again, there will be three valid keys out there with signatures on them. You may argue that this shouldn’t be a problem, and that nobody should use any key other than the latest one I want to be used, but that’s a technical argument — and at this point we have moved into usability, and that’s a trickier area. Asking users to select among several apparently valid keys for me is simply not going to work well.

The second reason is more subtle, but considerably more important. If people retrieve my key from keyservers today, and it expires in 50 years, there will be no need to refresh it from key servers. If for some reason I have to publish my revocation certificate, there will be people who won’t see it. If instead I set a short validity period, people will have to refresh my key once in a while, and will then either get an updated expiration time or the revocation certificate. This amounts to a CRL/OCSP-like model.
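
On the receiving side, this model only works if people actually refresh keys once in a while. A minimal way to do that with GnuPG, for example from a weekly cron job, is:

gpg --keyserver keys.gnupg.net --refresh-keys 54265e8c

which pulls any updated self-signatures from the keyserver, including a new expiration time or a revocation certificate.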

The third reason is similar to the second, but deserves to be mentioned on its own. Because the cipher preferences are expressed (and signed) in my key, and ciphers come and go, I expect to modify those preferences during the lifetime of my long-term key. If my key had a long validity period, people would not refresh it from key servers, and would keep encrypting messages to me with ciphers I may no longer want to be used.
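
To inspect the preferences currently published in a key, GnuPG’s interactive edit menu has a showpref sub-command (and setpref for changing them before re-signing). A quick sketch:

jas@latte:~$ gpg --edit-key 54265e8c
gpg> showpref
gpg> quit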

The downside of having a short validity period is that I have to do slightly more work to get out the offline master key once in a while (which I have to do anyway, because I’m signing other people’s keys) and that others need to refresh my key from the key servers. Can anyone identify other disadvantages? Also, having to explain why I’m using a short validity period used to be a downside, but with this writeup posted that won’t be the case any more. :-)


Planet DebianDaniel Pocock: GSoC talks at DebConf 14 today

This year I mentored two students doing work in support of Debian and free software (as well as those I mentored for Ganglia).

Both of them are presenting details about their work at DebConf 14 today.

While Juliana's work has been widely publicised already, mainly due to the fact it is accessible to every individual DD, Andrew's work is also quite significant and creates many possibilities to advance awareness of free software.

The Java project that is not just about Java

Andrew's project is about recursively building Java dependencies from third party repositories such as the Maven Central Repository. It matches up well with the wonderful new maven-debian-helper tool in Debian and will help us to fill out /usr/share/maven-repo on every Debian system.

Firstly, this is not just about Java. On a practical level, some aspects of the project are useful for many other purposes. One of those is the aim of scanning a repository for non-free artifacts, making a Git mirror or clone containing a dfsg branch for generating repackaged upstream source and then testing to see if it still builds.

Then there is the principle of software freedom. The Maven Central repository now requires that people publish a sources JAR and license metadata with each binary artifact they upload. They do not, however, demand that the sources JAR be complete or that the binary can be built by somebody else using the published sources. The license data must be specified, but it does not appear to be verified in the same way as packages inspected by Debian's legendary FTP masters.

Thanks to the transitive dependency magic of Maven, it is quite possible that many Java applications that are officially promoted as free software can't trace the source code of every dependency or build plugin.
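
For a sense of how deep those transitive hierarchies go, Maven itself can print the full resolved tree for any project, which makes a good starting point for the kind of audit Andrew's tool automates. A minimal sketch, run from a directory containing a pom.xml, using the standard maven-dependency-plugin goal:

mvn dependency:tree -Dverbose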

Many organizations are starting to become more alarmed about the risk that they are dependent upon some rogue dependency. Maybe they will be hit with a lawsuit from a vendor stating that his plugin was only free for the first 3 months. Maybe some binary dependency JAR contains a nasty trojan for harvesting data about their corporate network.

People familiar with the principles of software freedom are in the perfect position to address these concerns, and Andrew's work helps us build a cleaner alternative. It obviously can't rebuild every JAR, for the very reason that some of them are not really free - however, it does give the opportunity to build a heat-map of trouble spots and also create a fast track to packaging for those hierarchies of JARs that are truly free.

Making WebRTC accessible to more people

Juliana set out to update rtc.debian.org and this involved working on JSCommunicator, the HTML5/JavaScript softphone based on WebRTC.

People attending the session today or participating remotely are advised to set up your RTC / VoIP password at db.debian.org well in advance so the server will allow you to log in and try it during the session. It can take 30 minutes or so for the passwords to be replicated to the SIP proxy and TURN server.

Please also check my previous comments about what works and what doesn't and in particular, please be aware that Iceweasel / Firefox 24 on wheezy is not suitable unless you are on the same LAN as the person you are calling.

Planet Linux AustraliaMaxim Zakharov: Central Sydney WordPress Meetup: E-mail marketing

Andrew Beeston from Clicky! speaks about email marketing at the Central Sydney WordPress meetup.

Worse Than FailureAnnouncements: Pittsburgh WTFers: A Storytelling Workshop

Part of what brought me into writing and editing for The Daily WTF was my love of telling stories. I’ve had a very successful career working inside of corporate IT shops, and a huge part of that success comes from my ability to take a complex technical topic and explain it simply. To do that, I fall back on the same storytelling techniques that I use here.

A lot of real-world WTFs could be avoided through better communication, and while I hate the idea of losing out on more fodder for the site, it’s my duty as an IT drone to try and stamp out WTFs.

To that end, I’m teaming up with Kevin Allison, a master storyteller who runs the Risk! podcast and teaches storytelling at The Story Studio. Together, we’re going to lead a workshop on how storytelling techniques can work together with technical details. We can improve how we gather requirements, how we interact with users and vendors, how we interact with management, and most important: how we deal with the inevitable WTFs.

I can’t stress enough how awesome Kevin’s storytelling training is- I’ve been his student in the past, and I’m super excited to work with him to deliver this workshop.

This three-hour workshop is at 1PM, October 19th, and costs $75.00. Sign up through Steel City Improv, which is hosting the workshop in partnership with The Maker Theater.


Sociological ImagesIn Employers’ Eyes, For-Profit Colleges are Equivalent to High School

Holding a college degree, it is widely assumed, improves the likelihood that a person will be successful in the labor market. This maxim draws individuals into college across the class spectrum, and aspiring students who are low-income or non-white may find themselves enrolled at a for-profit college.

For-profit colleges have been getting slammed for their high prices, low bars, and atrocious graduation rates. Now we have another reason to worry that these institutions are doing more harm than good.

Economist Rajeev Darolia and his colleagues sent out 8,914 fictitious resumes and waited to see if they received a response.  They were interested in whether attending a for-profit college actually enhanced job opportunities, as ads for such schools claim, so they varied the level of education on the resumes and whether the applicant attended a for-profit or community college.

It turns out that employers evaluate applicants who attended two-year community colleges and those who attended for-profit colleges about equally.  Community colleges, in other words, open just as many doors to possibility as for-profit ones.

Darolia and his colleagues then tested whether employers displayed a preference for applicants who went to for-profit colleges versus applicants with no college at all.  They didn’t. Employers treated people with high school diplomas and coursework at for-profit colleges equivalently.

Being economists, they staidly conclude that enrolling in a for-profit college is a bad investment.

H/t Gin and Tacos. Image borrowed from Salon.com.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Cory DoctorowTech Review’s annual science fiction issue, edited by Bruce Sterling, featuring William Gibson


The summer annual features stories "inspired by the real-life breakthroughs covered in the pages of MIT Technology Review," including "Petard," my story about hacktivism; and "Death Cookie/Easy Ice," an excerpt from William Gibson's forthcoming (and stone brilliant) futuristic novel The Peripheral.

Other authors in the collection include Lauren Beukes, Chris Brown, Pat Cadigan, Warren Ellis, Joel Garreau, and Paul Graham Raven. The 2013 summer anthology was a huge hit -- Gardner Dozois called it "one of the year’s best SF anthologies to date, perhaps the best."

The 2014 edition is out this month, available direct from MIT Tech Review.

Twelve Tomorrows | MIT Technology Review

Planet Linux AustraliaAndrew Pollock: [life] Day 209: Startup stuff, Kindergarten, tennis and a play date

Last night was not a good night for sleep. I woke up around 12:30am for some reason, and then Zoe woke up around 2:30am (why is it always 2:30am?), but I managed to get her to go back to bed in her own bed. I was not feeling very chipper this morning.

Today was photo day at Kindergarten, so I spent some extra time braiding Zoe's hair (at her request) before we headed to Kindergarten.

When I got home, I got stuck into my real estate license assessment and made a lot of progress on the current unit today. I also mixed in some research on another idea I'm running with at the moment, which I'm very excited about.

I biked to Kindergarten to pick Zoe up, and managed to get her all sorted out in time for her tennis class, and she did the full class without any interruptions.

After tennis, we went to Megan's house for a bit. As we were leaving, her neighbour asked if we could help video one of her daughters doing the ALS ice bucket challenge thing, so we got a bit waylaid doing that, before we got home.

I managed to get Zoe down to bed a bit early tonight. My cold is really kicking my butt today. I hope we both sleep well tonight.

CryptogramSecurity by Obscurity at Healthcare.gov Site

The White House is refusing to release details about the security of healthcare.gov because it might help hackers. What this really means is that the security details would embarrass the White House.

CryptogramEavesdropping Using Smart Phone Gyroscopes

The gyroscopes in smart phones are sensitive enough to pick up acoustic vibrations. It's crude, but it works. Paper. Wired article. Hacker News thread.

Worse Than FailureThe Data Migration

Consider a small European country with more than 20 social insurance institutions, each using their own proprietary software. Now consider sharing data between them. After decades of integration failures, these institutions decided to standardize on a handful of applications. One of these institutions hired Philipp’s firm to migrate their data to DB2.

Philipp’s boss gave him the assignment with a clear conscience. “They have a data transfer interface already established. This should be a quick process.”

However, Philipp’s dreams of webservices, integration end-points, clean XML, and a well organized workflow were shattered when he was handed a few examples of the COBOL-generated flat files the company currently used for data transfer, via FTP. There was no documentation regarding the schema. Philipp sat down with William, an employee at the client site who had worked with this data for the better part of a generation, and had discovered its quirks through trial and error.

“Now, these files look exactly like the ones that we actually send, except they may or may not have an extra field stuffed into character 12,” William explained. “If there’s a ‘Q’ there, then we know we’re using the alternate message block, but only if the customer data flag contains a letter ‘B’.”

Philipp struggled to take notes that his brain would be able to parse later. “And where’s the customer data flag?”

“Oh, we call that column ‘R’. That’s a right-aligned field that starts at character 120. Be careful, because column ‘S’ is left aligned and starts at character 125. If you’re just skimming the file, it’s easy to think they’re the same field.” William chuckled. “Column ‘F’ is the tricky one, though- it needs two leading spaces, then a five-character field value, then five trailing spaces. That’s all one field, mind you.”
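
To make those rules concrete, here is a hypothetical probe of such a record with standard tools; the character positions follow William’s 1-indexed description, and the field widths are illustrative guesses, not the client’s real schema:

# record.txt holds one fixed-width line from the flat file (hypothetical name)
cut -c12 record.txt        # the optional 'Q' selecting the alternate message block
cut -c120-124 record.txt   # column 'R': right-aligned customer data flag
cut -c125-129 record.txt   # column 'S': left-aligned, easy to mistake for 'R'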

Mapping the file to the underlying data was even more of a challenge, as William explained. “Field ‘M’ is a substring across one of the database columns from the Patient database.”

“Which column?” Philipp asked.

“Oh, I don’t know. X1 or X8, I think. I’d have to reread the source code to be sure...”

The data from the flat files- sensitive patient data, transmitted as plain text across FTP- needed additional formatting and cleaning before it could move into its new home in DB2. The destination schema was as clearly specified and documented as the source schema- i.e., it wasn’t. The “already established” process Philipp’s boss had mentioned was a single gigantic stored procedure- thousands of lines of Oracle’s PL/SQL.

Philipp braced his temple. “Oracle? How do I log into Oracle?”

“You don’t,” William said. “We don’t have an Oracle database. You have to work with Stephen, he’s got a local instance on an off-site machine.”

“Could I just write my own DB2 stored procedure instead?”

“Absolutely not! Do you know how much we paid to get the PL/SQL procedure written? We can’t afford to pay that again. Work with Stephen.”

Philipp wasn’t the first person who needed to work with Stephen. The process for doing so was well-documented and formalized. Philipp took the output from his flat-file processing and emailed it to Stephen’s Gmail address. Stephen would import the data into Oracle, run the stored procedure against it, export the results to a CSV file, then email that gigantic file back. Finally, Philipp could import the data into the target DB2 database.

Philipp wasn’t a lawyer, so he had no idea how many privacy laws this violated, but he wasn’t allowed to do anything else. It would only be a one-time process, anyway…

…until after they ran through it, and discovered the data that had ended up in DB2 had significant flaws, requiring iterative corrections.

Since the data entry clerks weren’t allowed to access the test database (“It’s a development environment, and they’d only get confused,” William explained), the data had to be loaded into production. There, the clerks would correct it. Philipp had no access to production (“Security is very important to us”), so the DBA would copy the corrected data back to the test environment. The DBA refused to truncate the table before loading, and refused to drop the table, which meant each time through this cycle created a new table, named something like PRODUCTION_TEST_DATA_13, or 14_TEST_ATTEMPT.

The DBA account owned and controlled each new table.  Obtaining access was a separate request to the DBAs each time, with a paragraph justifying why Philipp needed access (“Security is very important to us”).

By the time the DBAs granted him access to PRODUCTION_TESTING_47, Philipp was confident that the migration had finally succeeded. Not long after, he got a call from his boss. “We’re getting complaints from the client. What’s this I hear about you designing an overly complex migration process?”

Images: Oracle plane, and Midsummer bonfire. Collage by Remy Porter.

Planet DebianChristian Perrier: [life] Follow bubulle running adventures....

Just in case some of my free software friends would care and try understanding why I'm currently not attending my first DebConf since 2004...

Starting tomorrow 07:00am EST (so, 22:00 PST for Debconfers), I'll be running the "TDS" race of the Ultra-Trail du Mont-Blanc races.

Ultra-Trail du Mont-Blanc (UTMB) is one of the world's most famous long-distance mountain trail races. It takes place in Chamonix, just below the Mont-Blanc, France's and Europe's highest mountain. The race is indeed simple: "go around the Mont-Blanc in a big circle, 160km long, with 10,000 meters of cumulative positive climb over about 10 high passes between 2000 and 2700 meters altitude".

"My" race is a shortened version of UTMB that does half of the full loop, from Courmayeur in Italy (just "the other side" of Mont-Blanc, from Chamonix) and goes back to Chamonix. It is "only" 120 kilometers long with 7200 meters of positive climb. Some of these are however know as more difficult than UTMB itself.

Many firsts for me in this race: first "over 100km", first "over 24 hours running". Still, I trained hard for this, achieved a very tough race in early July (60km, 5000m climb) with a very good result, and I expect to make it well.

Top runners complete this in 17 hours... last arrivals are expected after 33 hours "running" (often fast walking, indeed). I plan to complete the race in 28 hours but, indeed, I have no idea..:-)

So, in case you're bored in a night hacklab, or just want to draw your attention away from IRC, or don't have any package to polish... or just want to have a thought for an old friend, you can try the following link and follow all this live: http://utmb.livetrail.net/coureur.php?rech=6384&lang=en

Race start: 7am EST, Wednesday Aug 27th. bubulle arrival: Thursday Aug. 28th, between 10am and 4pm (best projection is 11am).

And there will be cheese at pit stops....

Don MartiQoTD: Craig Simmons

While ad fraud hurts the brand, every other party benefits from its existence. This alone has buoyed ad fraud's overwhelming survival in the industry. Bot operators, of course, end up pocketing a significant chunk of the $140 billion of overall digital ad spend. But it's not just the botmasters or fraudulent site owners that benefit. Buyers in the space have long been winning incremental budgets from advertisers by buying artificially well-performing impressions. Open exchanges and supply side platforms (SSPs) are responding to a demand for inventory by buying cheap scale from unknown publishers with limited transparency into the quality of those sites.

Craig Simmons

Planet DebianNeil Williams: vmdebootstrap images for ARMMP on BBB

After patches from Petter to add foreign architecture support and picking up some scripting from freedombox, I’ve just built a Debian unstable image using the ARMMP kernel on Beaglebone-black.

A few changes to vmdebootstrap will need to go into the next version (0.3), including an example customise script to set up the u-boot support. With the changes, the command would be:

sudo ./vmdebootstrap --owner `whoami` --verbose --size 2G --mirror http://mirror.bytemark.co.uk/debian --log beaglebone-black.log --log-level debug --arch armhf --foreign /usr/bin/qemu-arm-static --no-extlinux --no-kernel --package u-boot --package linux-image-armmp --distribution sid --enable-dhcp --configure-apt --serial-console-command '/sbin/getty -L ttyO0 115200 vt100' --customize examples/beagleboneblack-customise.sh --bootsize 50m --boottype vfat --image bbb.img

Some of those options are new, but there are a few important elements:

  • use of --arch and --foreign to provide the emulation needed to run the debootstrap second stage.
  • drop extlinux and install u-boot as a package.
  • linux-image-armmp kernel
  • new command to configure an apt source
  • serial-console-command as the BBB doesn’t use the default /dev/ttyS0
  • choice of sid to get the latest ARMMP and u-boot versions
  • customize command – this is a script which does two things (a minimal sketch follows this list):
    • copies the dtbs into the boot partition
    • copies the u-boot files and creates a u-boot environment to use those files.
  • use of a boot partition – note that it needs to be large enough to include the ARMMP kernel and a backup of the same files.
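
For reference, here is a minimal sketch of what such a customise script can look like, covering the two steps above. The paths are illustrative assumptions, not the exact contents of examples/beagleboneblack-customise.sh:

#!/bin/sh
# Sketch of a beaglebone-black customise script for vmdebootstrap.
# vmdebootstrap passes the mounted image root as the first argument.
set -e
rootdir="$1"

# 1. copy the device tree blobs onto the vfat boot partition
cp "$rootdir"/usr/lib/linux-image-*-armmp/am335x-boneblack.dtb "$rootdir"/boot/

# 2. copy the u-boot SPL and image, and write a u-boot environment
#    pointing at the serial console and the root partition
cp "$rootdir"/usr/lib/u-boot/am335x_boneblack/MLO "$rootdir"/boot/
cp "$rootdir"/usr/lib/u-boot/am335x_boneblack/u-boot.img "$rootdir"/boot/
cat > "$rootdir"/boot/uEnv.txt <<'EOF'
console=ttyO0,115200n8
mmcroot=/dev/mmcblk0p2 ro rootwait
EOF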

With this in place, a simple dd to an SD card and the BBB boots directly into Debian ARMMP.

The examples are now in my branch and include an initial cubieboard script which is unfinished.

The current image is available for download. (222Mb).

I hope to upload the new vmdebootstrap soon – let me know if you do try the version in the branch.

Planet Linux AustraliaBlueHackers: About your breakfast

We know that eating well (good nutritional balance) and at the right times is good for your mental as well as your physical health.

There’s some new research out on breakfast. The article I spotted (Breakfast no longer ‘most important meal of the day’ | SBS) takes a somewhat popular and jokey angle on it, so I’ll phrase it independently in an attempt to get the real information out.

One of the researchers makes the point that skipping breakfast is not the same as deferring it. So consider the reason: are you going to eat properly a bit later, or are you not eating at all?

When you do have breakfast, note that most cereals contain an atrocious amount of sugar (and other carbs) that you can’t realistically burn off even with a hard day’s work. And from my own personal observation, there’s often way too much salt in there as well. Check out Kellogg’s Cornflakes for a neat example of way-too-much-salt.

Basically, the research comes back to the fact that just eating is not the point, it’s what you eat that actually really does matter.

What do you have for breakfast, and at what point/time in your day?

,

Planet Linux AustraliaTridge on UAVs: APM:Rover 2.46 released

The ardupilot development team is proud to announce the release of version 2.46 of APM:Rover. This is a major release with a lot of new features and bug fixes.

This release is based on a lot of development and testing that happened prior to the AVC competition, where APM-based vehicles performed very well.

Full changes list for this release:

  • added support for higher baudrates on telemetry ports, to make it easier to use high rate telemetry to companion boards. Rates of up to 1.5MBit are now supported.
  • new Rangefinder code with support for a wider range of rangefinder types including a range of Lidars (thanks to Allyson Kreft)
  • added logging of power status on Pixhawk
  • added PIVOT_TURN_ANGLE parameter for pivot based turns on skid steering rovers
  • lots of improvements to the EKF support for Rover, thanks to Paul Riseborough and testing from Tom Coyle. Using the EKF can greatly improve navigation accuracy for fast rovers. Enable with AHRS_EKF_USE=1.
  • improved support for dual GPS on Pixhawk. Using a 2nd GPS can greatly improve performance when in an area with an obstructed view of the sky
  • support for up to 14 RC channels on Pixhawk
  • added BRAKING_PERCENT and BRAKING_SPEEDERR parameters for better braking support when cornering
  • added support for FrSky telemetry via SERIAL2_PROTOCOL parameter (thanks to Matthias Badaire)
  • added support for Linux based autopilots, initially with the PXF BeagleBoneBlack cape and the Erle robotics board. Support for more boards is expected in future releases. Thanks to Victor, Sid and Anuj for their great work on the Linux port.
  • added StorageManager library, which expands available FRAM storage on Pixhawk to 16 kByte. This allows for 724 waypoints on Pixhawk.
  • improved reporting of magnetometer and barometer errors to the GCS
  • fixed a bug in automatic flow control detection for serial ports in Pixhawk
  • fixed use of FMU servo pins as digital inputs on Pixhawk
  • imported latest updates for VRBrain boards (thanks to Emile Castelnuovo and Luca Micheletti)
  • updates to the Piksi GPS support (thanks to Niels Joubert)
  • improved gyro estimate in DCM (thanks to Jon Challinger)
  • improved position projection in DCM in wind (thanks to Przemek Lekston)
  • several updates to AP_NavEKF for more robust handling of errors (thanks to Paul Riseborough)
  • lots of small code cleanups thanks to Daniel Frenzel
  • initial support for NavIO board from Mikhail Avkhimenia
  • fixed logging of RCOU for up to 12 channels (thanks to Emile Castelnuovo)
  • code cleanups from Silvia Nunezrivero
  • improved parameter download speed on radio links with no flow control


Many thanks to everyone who contributed to this release, especially Tom Coyle and Linus Penzlien for their excellent testing and feedback.

Happy driving!

Planet Linux AustraliaTridge on UAVs: APM:Plane 3.1.0 released

The ardupilot development team is proud to announce the release of version 3.1.0 of APM:Plane. This is a major release with a lot of new features and bug fixes.

The biggest change in this release is the addition of automatic terrain following. Terrain following allows the autopilot to guide the aircraft over varying terrain at a constant height above the ground using an on-board terrain database. Uses include safer RTL, more accurate and easier photo mapping and much easier mission planning in hilly areas.

There have also been a lot of updates to auto takeoff, especially for tail dragger aircraft. It is now much easier to get the steering right for a tail dragger on takeoff.

Another big change is the support of Linux based autopilots, starting with the PXF cape for the BeagleBoneBlack and the Erle robotics autopilot.

Full list of changes in this release

  • added terrain following support. See http://plane.ardupilot.com/wiki/common- ... following/
  • added support for higher baudrates on telemetry ports, to make it easier to use high rate telemetry to companion boards. Rates of up to 1.5MBit are now supported.
  • added new takeoff code, including new parameters TKOFF_TDRAG_ELEV, TKOFF_TDRAG_SPD1, TKOFF_ROTATE_SPD, TKOFF_THR_SLEW and TKOFF_THR_MAX. This gives fine grained control of auto takeoff for tail dragger aircraft.
  • overhauled glide slope code to fix glide slope handling in many situations. This makes transitions between different altitudes much smoother.
  • prevent early waypoint completion for straight ahead waypoints. This makes for more accurate servo release at specific locations, for applications such as dropping water bottles.
  • added MAV_CMD_DO_INVERTED_FLIGHT command in missions, to change from normal to inverted flight in AUTO (thanks to Philip Rowse for testing of this feature).
  • new Rangefinder code with support for a wider range of rangefinder types including a range of Lidars (thanks to Allyson Kreft)
  • added support for FrSky telemetry via SERIAL2_PROTOCOL parameter (thanks to Matthias Badaire)
  • added new STAB_PITCH_DOWN parameter to improve low throttle behaviour in FBWA mode, making a stall less likely in FBWA mode (thanks to Jack Pittar for the idea).
  • added GLIDE_SLOPE_MIN parameter for better handling of small altitude deviations in AUTO. This makes for more accurate altitude tracking in AUTO.
  • added support for Linux based autopilots, initially with the PXF BeagleBoneBlack cape and the Erle robotics board. Support for more boards is expected in future releases. Thanks to Victor, Sid and Anuj for their great work on the Linux port. See http://diydrones.com/profiles/blogs/fir ... t-on-linux for details.
  • prevent cross-tracking on some waypoint types, such as when initially entering AUTO or when the user commands a change of target waypoint.
  • fixed servo demo on startup (thanks to Klrill-ka)
  • added AFS (Advanced Failsafe) support on 32 bit boards by default. See http://plane.ardupilot.com/wiki/advance ... iguration/
  • added support for monitoring voltage of a 2nd battery via BATTERY2 MAVLink message
  • added airspeed sensor support in HIL
  • fixed HIL on APM2. HIL should now work again on all boards.
  • added StorageManager library, which expands available FRAM storage on Pixhawk to 16 kByte. This allows for 724 waypoints, 50 rally points and 84 fence points on Pixhawk.
  • improved steering on landing, so the plane is actively steered right through the landing.
  • improved reporting of magnetometer and barometer errors to the GCS
  • added FBWA_TDRAG_CHAN parameter, for easier FBWA takeoffs of tail draggers, and better testing of steering tuning for auto takeoff.
  • fixed failsafe pass through with no RC input (thanks to Klrill-ka)
  • fixed a bug in automatic flow control detection for serial ports in Pixhawk
  • fixed use of FMU servo pins as digital inputs on Pixhawk
  • imported latest updates for VRBrain boards (thanks to Emile Castelnuovo and Luca Micheletti)
  • updates to the Piksi GPS support (thanks to Niels Joubert)
  • improved gyro estimate in DCM (thanks to Jon Challinger)
  • improved position projection in DCM in wind (thanks to Przemek Lekston)
  • several updates to AP_NavEKF for more robust handling of errors (thanks to Paul Riseborough)
  • improved simulation of rangefinders in SITL
  • lots of small code cleanups thanks to Daniel Frenzel
  • initial support for NavIO board from Mikhail Avkhimenia
  • fixed logging of RCOU for up to 12 channels (thanks to Emile Castelnuovo)
  • code cleanups from Silvia Nunezrivero
  • improved parameter download speed on radio links with no flow control

Many thanks to everyone who contributed to this release, especially our beta testers Marco, Paul, Philip and Iam.

Happy flying!

Sociological ImagesW.E.B. DuBois on the Indifference of White America


W.E.B. DuBois (1934):

The colored people of America are coming to face the fact quite calmly that most white Americans do not like them, and are planning neither for their survival, nor for their definite future if it involves free, self-assertive modern manhood. This does not mean all Americans. A saving few are worried about the Negro problem; a still larger group are not ill-disposed, but they fear prevailing public opinion. The great mass of Americans are, however, merely representatives of average humanity. They muddle along with their own affairs and scarcely can be expected to take seriously the affairs of strangers or people whom they partly fear and partly despise.

For many years it was the theory of most Negro leaders that this attitude was the insensibility of ignorance and inexperience, that white America did not know of or realize the continuing plight of the Negro.  Accordingly, for the last two decades, we have striven by book and periodical, by speech and appeal, by various dramatic methods of agitation, to put the essential facts before the American people.  Today there can be no doubt that Americans know the facts; and yet they remain for the most part indifferent and unmoved.

- From “A Negro Nation Within a Nation.”

Borrowed from an essay by Tressie McMillan Cottom. Photo from ibtimes.com.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet DebianPetter Reinholdtsen: Do you need an agreement with MPEG-LA to publish and broadcast H.264 video in Norway?

Two years later, I am still not sure whether it is legal here in Norway to use or publish a video in H.264 or MPEG-4 format edited with the commercially licensed video editors, without either limiting the use to "personal" or "non-commercial" videos or getting a license agreement with MPEG LA. If one wants to publish and broadcast video in a non-personal or commercial setting, it may be that those tools cannot be used, or that the video format cannot be used, without breaking their copyright license. I am not sure. Back then, I found that the copyright license terms for Adobe Premiere and Apple Final Cut Pro both specified that one could not use the program to produce anything else without a patent license from MPEG LA. The issue is not limited to those two products, though: other widely used products, like those from Avid and Sorenson Media, have terms of use similar to those from Adobe and Apple. The complicating factor that makes me unsure whether those terms have effect in Norway is that the patents in question are not valid in Norway, but the copyright licenses are.

These are the terms for Avid Artist Suite, according to their published end user license text (converted to lower case text for easier reading):

18.2. MPEG-4. MPEG-4 technology may be included with the software. MPEG LA, L.L.C. requires this notice:

This product is licensed under the MPEG-4 visual patent portfolio license for the personal and non-commercial use of a consumer for (i) encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding MPEG-4 video that was encoded by a consumer engaged in a personal and non-commercial activity and/or was obtained from a video provider licensed by MPEG LA to provide MPEG-4 video. No license is granted or shall be implied for any other use. Additional information including that relating to promotional, internal and commercial uses and licensing may be obtained from MPEG LA, LLC. See http://www.mpegla.com. This product is licensed under the MPEG-4 systems patent portfolio license for encoding in compliance with the MPEG-4 systems standard, except that an additional license and payment of royalties are necessary for encoding in connection with (i) data stored or replicated in physical media which is paid for on a title by title basis and/or (ii) data which is paid for on a title by title basis and is transmitted to an end user for permanent storage and/or use, such additional license may be obtained from MPEG LA, LLC. See http://www.mpegla.com for additional details.

18.3. H.264/AVC. H.264/AVC technology may be included with the software. MPEG LA, L.L.C. requires this notice:

This product is licensed under the AVC patent portfolio license for the personal use of a consumer or other uses in which it does not receive remuneration to (i) encode video in compliance with the AVC standard (“AVC video”) and/or (ii) decode AVC video that was encoded by a consumer engaged in a personal activity and/or was obtained from a video provider licensed to provide AVC video. No license is granted or shall be implied for any other use. Additional information may be obtained from MPEG LA, L.L.C. See http://www.mpegla.com.

Note the requirement that the videos created can only be used for personal or non-commercial purposes.

The Sorenson Media software has similar terms:

With respect to a license from Sorenson pertaining to MPEG-4 Video Decoders and/or Encoders: Any such product is licensed under the MPEG-4 visual patent portfolio license for the personal and non-commercial use of a consumer for (i) encoding video in compliance with the MPEG-4 visual standard (“MPEG-4 video”) and/or (ii) decoding MPEG-4 video that was encoded by a consumer engaged in a personal and non-commercial activity and/or was obtained from a video provider licensed by MPEG LA to provide MPEG-4 video. No license is granted or shall be implied for any other use. Additional information including that relating to promotional, internal and commercial uses and licensing may be obtained from MPEG LA, LLC. See http://www.mpegla.com.

With respect to a license from Sorenson pertaining to MPEG-4 Consumer Recorded Data Encoder, MPEG-4 Systems Internet Data Encoder, MPEG-4 Mobile Data Encoder, and/or MPEG-4 Unique Use Encoder: Any such product is licensed under the MPEG-4 systems patent portfolio license for encoding in compliance with the MPEG-4 systems standard, except that an additional license and payment of royalties are necessary for encoding in connection with (i) data stored or replicated in physical media which is paid for on a title by title basis and/or (ii) data which is paid for on a title by title basis and is transmitted to an end user for permanent storage and/or use. Such additional license may be obtained from MPEG LA, LLC. See http://www.mpegla.com for additional details.

Some free software, like HandBrake and FFmpeg, is licensed under the GPL/LGPL and does not include any such terms, so for those there is no requirement to limit use to the personal and non-commercial.

CryptogramThe Problems with PGP

Matthew Green has a good post on what's wrong with PGP and what should be done about it.

Sociological ImagesProfessors’ Pet Peeves


I got this email from an Ivy League student when I arrived to give a speech. She was responsible for making sure that I was delivered to my hotel and knew where to go the next day:

Omg you’re here! Ahh i need to get my shit together now lol. Jk. Give me a ring when u can/want, my cell is [redacted]. I have class until 1230 but then im free! i will let the teacher she u will be there, shes a darling. Perhaps ill come to the end of the talk and meet you there after. Between the faculty lunch and your talk, we can chat! ill take make sure the rooms are all ready for u. See ya!

To say the least, this did not make me feel confident that my visit would go smoothly.

I will use this poor student to kick off this year’s list of Professors’ Pet Peeves.  I reached out to my network and collected some things that really get on instructors’ nerves.  Here are the results: some of the “don’ts” for how to interact with your professor or teaching assistant.  For what it’s worth, #2 was by far the most common complaint.

1. Don’t use unprofessional correspondence.

Your instructors are not your friends. Correspond with them as if you’re in a workplace, because you are. We’re not saying that you can’t ever write like this, but you do need to demonstrate that you know when such communication is and isn’t appropriate.  You don’t wear pajamas to a job interview, right? Same thing.

2. Don’t ask the professor if you “missed anything important” during an absence.

No, you didn’t miss anything important.  We spent the whole hour watching cats play the theremin on YouTube!

Of course you missed something important!  We’re college professors!  Thinking everything we do is important is an occupational hazard.  Here’s an alternative way to phrase it:  “I’m so sorry I missed class. I’m sure it was awesome.”

If you’re concerned about what you missed, try this instead: Do the reading, get notes from a classmate (if you don’t have any friends in class, ask the professor if they’ll send an email to help you find a partner to swap notes with), read them over, and drop by office hours to discuss anything you didn’t understand.

3. Don’t pack up your things as the class is ending.

We get it.  The minute hand is closing in on the end of class, there’s a shift in the instructor’s voice, and you hear something like “For next time…”  That’s the cue for everyone to start putting their stuff away. Once one person does it, it’s like an avalanche of notebooks slapping closed, backpack zippers zipping, and cell phones coming out.

Don’t do it.

Just wait 10 more seconds until the class is actually over.  If you don’t, it makes it seem like you are dying to get out of there and, hey, that hurts our feelings!

4. Don’t ask a question about the readings or assignments until checking the syllabus first.

It’s easy to send off an email asking your instructor a quick question, but that person put a lot of effort into the syllabus for a reason.  Remember, each professor has dozens or hundreds of students.  What seems like a small thing on your end can add up to death-by-a-thousand-paper-cuts on our end.  Make a good faith effort to figure out the answer before you ask the professor.

5. Don’t get mad if you receive critical feedback.

If an instructor takes a red pen and massacres your writing, that’s a sign that they care.  Giving negative feedback is hard work, so the red ink means that we’re taking an interest in you and your future.  Moreover, we know it’s going to make some students angry at us. We do it anyway because we care enough about you to try to help you become a stronger thinker and writer.  It’s counterintuitive but lots of red ink is probably a sign that the instructor thinks you have a lot of potential.

6. Don’t grade grub.

Definitely go into office hours to find out how to study better or improve your performance, but don’t go in expecting to change your instructor’s mind about the grade.   Put your energy into studying harder on the next exam, bringing your paper idea to the professor or teaching assistant in office hours, doing the reading, and raising your hand in class. That will have more of a pay-off in the long run.

7. Don’t futz with paper formatting.

Paper isn’t long enough?  Think you can make the font a teensy bit bigger or the margins a tad bit wider? Think we won’t notice if you use a 12-point font that’s just a little more widely spaced?  Don’t do it. We’ve been staring at the printed page for thousands of hours. We have an eagle eye for these kinds of things. Whatever your motivation, here’s what they say to us: “Hi Prof!, I’m trying to trick you into thinking that I’m fulfilling the assignment requirements. I’m lazy and you’re stupid!”  Work on the assignment, not the document settings.

8. Don’t pad your introductions and conclusions with fluff.

Never start off a paper with the phrase, “Since the beginning of time…”  “Since the beginning of time, men have engaged in war.”  Wait, what?  Like, the big bang?  And, anyway, how the heck do you know?  You better have a damn strong citation for that!  “Historically,” “Traditionally,” and “Throughout history” are equally bad offenders.  Strike them from your vocabulary now.

In your conclusion, say something smart.  Or, barring that, just say what you said.  But never say: “Hopefully someday there will be no war.”  Duh.  We’d all like that, but unless you’ve got ideas as to how to make it that way, such statements are simple hopefulness and inappropriate in an academic paper.

9. Don’t misrepresent facts as opinions and opinions as facts.

Figure out the difference.  Here’s an example of how not to represent a fact, via CNN:

Considering that Clinton’s departure will leave only 16 women in the Senate out of 100 senators, many feminists believe women are underrepresented on Capitol Hill.

Wait. Feminists “believe”? Given that women are 51% of the population, 16 out of 100 means that women are underrepresented on Capitol Hill. This is a social fact, yeah?  Now, you can agree or disagree with feminists that this is a problem, but don’t suggest, as CNN does, that the fact itself is an opinion.

This is a common mistake and it’s frustrating for both instructors and students to get past.  Life will be much easier if you know the difference.

10. Don’t be too cool for school.

You know those students that sit at the back of the class, hunch down in their chair, and make an art of looking bored?  Don’t be that person.  Professors and teaching assistants are the top 3% of students.  They likely spent more than a decade in college. For better or worse, they value education. To stay on their good side, you should show them that you care too.  And, if you don’t, pretend like you do.

Thanks to @triciasryan, @hormiga, @wadewitz, @ameenaGK, @holdsher, @joanneminaker, @k_lseyrisman, @jessmetcalf87, @deeshaphilyaw, @currerbell, @hist_enthusiast, and @gwensharpnv for their ideas!  Originally posted in 2013; cross-posted at Business Insider.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet DebianSteve Kemp: Updates on git-hosting and load-balancing

To round up the discussion of the Debian Administration site yesterday I flipped the switch on the load-balancing. Rather than this:

  https -> pound \
                  \
  http  -------------> varnish  --> apache

We now have the simpler route for all requests:

http  -> haproxy -> apache
https -> haproxy -> apache

This means one less internal HTTP hop for all incoming secure connections, and these days secure connections are preferred since a Strict-Transport-Security header is set.
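
For anyone wanting to reproduce the setup, the haproxy side needs very little configuration. A minimal sketch along these lines would do it, assuming haproxy 1.5 or later for the built-in SSL support; the certificate path and the backend port here are illustrative guesses, not the site's actual values:

frontend www
    bind :80
    bind :443 ssl crt /etc/haproxy/site.pem
    # set the Strict-Transport-Security header mentioned above
    rspadd Strict-Transport-Security:\ max-age=31536000
    default_backend apache

backend apache
    server local 127.0.0.1:8080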

In other news I've been juggling git repositories; I've setup an installation of GitBucket on my git-host. My personal git repository used to contain some private repositories and some mirrors.

Now it contains mirrors of most things on github, as well as many more private repositories.

The main reason for the switch was to get a prettier interface and bug-tracker support.

A side-benefit is that I can use "groups" to organize related repositories together.

Most of the repositories are mirrors of the github ones, but some are new. When signed in I see more sources, for example the source to http://steve.org.uk.

I've been pleased with the setup and performance, though I had to add some caching and some other magic at the nginx level to provide /robots.txt, etc, which are not otherwise present.
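
The /robots.txt part of that nginx magic can be as small as a single exact-match location block; the file path here is made up for illustration:

location = /robots.txt {
    alias /srv/git-static/robots.txt;
}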

I'm not abandoning github, but I will no longer be using it for private repositories (I was gifted a free subscription a year or three ago), and nor will I post things there exclusively.

If a single canonical source location is required for a repository it will be one that I control, maintain, and host.

I don't expect I'll give people commit access on this mirror, but it is certainly possible. In the past I've certainly given people access to private repositories for collaboration, etc.

CryptogramPeople Are Not Very Good at Matching Photographs to People

We have an error rate of about 15%:

Professor Mike Burton, Sixth Century Chair in Psychology at the University of Aberdeen said: "Psychologists identified around a decade ago that in general people are not very good at matching a person to an image on a security document.

"Familiar faces trigger special processes in our brain -- we would recognise a member of our family, a friend or a famous face within a crowd, in a multitude of guises, venues, angles or lighting conditions. But when it comes to identifying a stranger it's another story.

"The question we asked was does this fundamental brain process that occurs have any real importance for situations such as controlling passport issuing ­ and we found that it does."

The ability of Australian passport officers, for whom accurate face matching is central to their job and vital to border security, was tested in the latest study, which involved researchers from the Universities of Aberdeen, York and New South Wales Australia.

In one test, passport officers had to decide whether or not a photograph of an individual presented on their computer screen matched the face of a person standing in front of their desk.

It was found that on 15% of trials the officers decided that the photograph on their screen matched the face of the person standing in front of them, when in fact, the photograph showed an entirely different person.

Planet DebianErich Schubert: Analyzing Twitter - beware of spam

This year I started to widen my research, and one data source of interest was text, because its lack of structure often makes it challenging. One of the data sources that everybody seems to use is Twitter: it has a nice API, and few restrictions on using it (except on resharing data). By default, you can get a 1% random sample from all tweets, which is more than enough for many use cases.
We've had some exciting results which a colleague of mine will be presenting tomorrow (Tuesday, Research 22: Topic Modeling) at the KDD 2014 conference:
SigniTrend: Scalable Detection of Emerging Topics in Textual Streams by Hashed Significance Thresholds
Erich Schubert, Michael Weiler, Hans-Peter Kriegel
20th ACM SIGKDD Conference on Knowledge Discovery and Data Mining
You can also explore some (static!) results online at signi-trend.appspot.com

In our experiments, the "news" data set was more interesting. But after some work, we were able to get reasonable results out of Twitter as well. As you can see from the online demo, most of these fall into pop culture: celebrity deaths, sports, hip-hop. Not much that would change our lives; and even less that wasn't already captured by traditional media.

The focus of this post is on the preprocessing needed to get good results from Twitter, because it is much easier to get bad results!

The first thing you need to realize about Twitter is that, due to the media attention and hype it gets, it is full of spam. I'm pretty sure the engineers at Twitter already try to reduce spam, blocking hosts and fraudulent apps. But a lot of the trending topics we discovered were nothing but spam.

Retweets - the "like" of Twitter - are an easy way to see what is popular, but they are not very interesting if you want to analyze text: they just reiterate the exact same text (except for an "RT " prefix) as earlier tweets. We found the results to be more interesting if we removed retweets. Our theory is that retweeting requires much less effort than writing a real tweet, and things that are trending "with effort" are more interesting than those that were just liked.

Teenie spam. If you ever searched for a teenie idol on Twitter - say this guy I hadn't heard of before, but who has 3.88 million followers - and looked at the tweets addressed to him, you would get millions over millions of results. Many of these tweets carry an odd suffix such as "x804". This is there to defeat a simple spam filter by Twitter, because these users do not tweet this just once: it is common amongst teenies to spam their idols with follow requests by the dozen, probably using some JavaScript hack or a third-party Twitter client. Occasionally you see hundreds of such tweets, each sent within a few seconds of the previous one. Even in a 1% sample, you still get a few of them.

Even worse (for data analysis) than teenie spammers are commercial spammers and wannabe "hackers" who exercise their "sk1llz" by spamming Twitter. To get a sample of such spam, just search for weight loss on Twitter. There is plenty of fresh spam there, usually consisting of some text pretending to be news and an anonymized link (there is no need to use a URL shortener such as bit.ly on Twitter, since Twitter has its own shortener, t.co; you just end up with double-shortened URLs). And the hacker spam is even worse (e.g. #alvianbencifa): this one seems to have trojaned hundreds of accounts, and his advertisement is a nonexistent hash tag that he tries to push into Twitter's "trending topics".

And then there are the bots. Plenty of bots spam Twitter with their analysis of trending topics, reinforcing the trending topics. In my opinion, bots such as "trending topics indonesia" are useless - no wonder there are only 280 followers - and of the trending topics they report, most seem to be spam topics anyway.

Bottom line: if you plan on analyzing Twitter data, spend considerable time on preprocessing to filter out spam of various kinds. For example, we remove singletons and digits, then feed the data through a duplicate detector. We end up discarding 20%-25% of tweets, and we still let some of the spam through, such as that hacker's spam.
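
To give a flavour of what that preprocessing looks like, here is a minimal Python sketch; the field names, regular expression and sample data are invented for illustration, and our real pipeline uses hashed significance thresholds and a proper duplicate detector rather than an in-memory set:

import re

seen = set()

def keep(tweet):
    """Return True if a tweet survives the basic spam filters."""
    text = tweet["text"]
    if text.startswith("RT "):
        # retweets repeat earlier text verbatim; drop them
        return False
    # strip tokens containing digits (defeats "x804"-style counters)
    normalized = re.sub(r"\S*\d\S*", "", text).lower()
    normalized = " ".join(normalized.split())
    if normalized in seen:
        # near-duplicate of something we already kept; drop it
        return False
    seen.add(normalized)
    return True

sample = [{"text": "RT @star: follow me"},
          {"text": "follow me @star x804"},
          {"text": "follow me @star x805"}]
print([t["text"] for t in sample if keep(t)])
# -> ['follow me @star x804']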
All in all, real data is just messy. People posting there have an agenda that might be opposite to yours. And if someone (or some company) promises you wonders from "Big data" and "Twitter", you'd better have them demonstrate their usefulness first, before buying their services. Don't trust visions of what could be possible, because the first rule of data analysis is: garbage in, garbage out.

Planet Linux AustraliaAndrew Pollock: [life] Day 208: Kindergarten, running, insurance assessments, home improvements, BJJ and a babyccino

Today was a pretty busy day. I started off with a run, and managed to do 6 km this morning. I feel like I'm coming down with yet another cold, so I'm happy that I managed to get out and run at all, let alone last 6 km.

Next up, I had to get the car assessed after a minor rear-end collision it suffered on Saturday night (nobody was hurt, I wasn't at fault). I was really impressed with NRMA Insurance's claim processing, it was all very smooth. I've since learned that they even have a smartphone app for ensuring that one gets all the pertinent information after an accident.

I dropped into Bunnings on the way home to pick up a sliding rubbish bin. I've been wanting one of these ever since I moved into my place, and finally got around to doing it. I also grabbed some LED bulbs from Beacon.

After I got home, I spent the morning installing and reinstalling the rubbish bin (I suck at getting these things right first go) and swapping light bulbs around. Overall, it was a very satisfying morning scratching a few itches around the house that had been bugging me for a while.

I biked over to Kindergarten for pick up again, and we biked back home and didn't have a lot of time before we had to head out for Zoe's second freebie Brazilian Jiu-Jitsu class. This class was excellent: there were 8 kids in total, and 2 other girls. Zoe got to do some "rolling" with a partner. It was so cute to watch. They just had to try and block each other from touching their knees, and if they failed, they had to drop to the floor and hop back up again. Each of Zoe's partners was very civilized, and they took turns at failing to block.

Zoe was pretty tired after the class. It was definitely the most strenuous class she's had to date, and she briefly fell asleep in the car on the way home. We had to make a stop at the Garage to grab some mushrooms for the mushroom soup we were making for dinner.

Zoe helped me make the mushroom soup, and after dinner we popped out for a babyccino. It's been a while since we've had a post-dinner one, and it was nice to do it again. We also managed to get through the entire afternoon without any TV, which I thought was excellent.

Worse Than FailureCodeSOD: An Attempt at Proper JSP

When developers first got access to those new-fangled gadgets called computers, memory was a very precious resource. Applications were frequently written as a main controller that would load module overlays into memory, call a function, and then repeat as additional functions were called. It was a horrible way to code, but it was all we had. Unfortunately, as computers came equipped with more and more RAM, this habit of repeating the controller code in every file seems to be quite resilient...

Fast forward several decades, and Jeremy, like the rest of us at some point, was a newbie at his first position as a developer. The application that he was tasked with maintaining had been written by an engineer whose training apparently included learning basic JSP control-structures, and how to perform cut-n-pasting of code from A to B.

The application was almost entirely constructed of JSPs containing tens of thousands of lines of conditionals and loops nested countless levels deep, all of which was copy-pasted across almost every page. There was rarely a method to be found, and when there was, it usually took 12+ parameters, all typed as Strings. All of this would be wrapped in a try-catch block that was frequently so huge that the compiler refused to compile it, insisting that the code be broken up.

The method below represents a very rare attempt at modularity. For those who choose not to risk their sanity, it formats a floating point number to a certain precision and returns it as a String. The author was kind enough to leave their debug statements commented out, presumably to save the next guy from having to put them back in...

public String rnd(double e, double d, int numDigits) {
  String t;
  int    tempDouble;
  double f;
  //e=16.47;
  //d=47.023;
  //out.print(e + " + " + d);
  f= e / d;
  //out.print("<br>" + f);
  //out.print("<br>f=" + f);
  tempDouble=(int)((f)* Math.pow(10,numDigits+1));
  //out.print("<br>tempDouble=" + tempDouble);
  f=(double)tempDouble;
  f=f/10;
  f=Math.round(f);
  //out.print("<br>f=" + f);
  //out.print("<br>" +tempDouble + " " + f/Math.pow(10,numDigits));
  t=Double.toString(f/Math.pow(10,numDigits));
  if (t.substring(t.indexOf('.'),t.length()).length() < numDigits+1) {
     t=t+'0';
     //out.print("<br>" + t);
  }
  if (t.compareTo("0.00")==0) {
     t="0.0";
  }
  return t;
}
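
For contrast, what the method above appears to be attempting (divide, round to numDigits decimal places, return a String) can be done with a single standard-library call. This is only a sketch, and it deliberately skips the original's odd special-casing of "0.00":

public String rnd(double e, double d, int numDigits) {
  // format e/d with numDigits digits after the decimal point
  return String.format("%." + numDigits + "f", e / d);
}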

Naturally, it was not long before Jeremy sought sanity at his second development position.


Planet Linux AustraliaGary Pendergast: My Media Server

Over the years, I’ve experimented with a bunch of different methods for media servers, and I think I’ve finally arrived at something that works well for me. Here are the details:

The Server

An old Dell Zino HD I had lying around, running Windows 7. Pretty much any server will be sufficient, this is just the one I had available. Dell doesn’t sell micro-PCs anymore, so just choose your favourite brand that sells something small and with low power requirements. The main things you need from it are a reasonable processor (fast enough to handle transcoding a few video streams in at least realtime), and lots of HD space. I don’t bother with RAID, because I won’t be sad about losing videos that I can easily re-download (the internet is my backup service).

Downloading

I make no excuses, nor apologies for downloading movies and TV shows in a manner that some may describe as involving “copyright violation”.

If you’re in a similar position, there are plenty of BitTorrent sites that allow you register and add videos to a personal RSS feed. Most BitTorrent clients can then subscribe to that feed, and automatically download anything added to it. Some sites even allow you to subscribe to collections, so you can subscribe to a TV show at the start of the season, and automatically get new episodes as soon as they arrive.

For your BitTorrent client, there are two features you need: the ability to subscribe to an RSS feed, and the ability to automatically run a command when the download finishes. I’ve found qBittorrent to be a good option for this.

Sorting

Once a file is downloaded, you need to sort it. By using a standard file layout, you have a much easier time of loading your files into your media server later. For automatically sorting your files when they download, nothing compares to the amazing FileBot, which will automatically grab info about the download from TheMovieDB or TheTVDB, and pass it on to your media server. It's entirely scriptable, but you don't need to worry about that, because there's already a great script to do all this for you, called Automated Media Center (AMC). The initial setup for this was a bit annoying, so here's the command I use (you can tweak the file locations for your own server, and you'll need to fix the %n if you use something other than qBittorrent):

"C:/Program Files/FileBot/filebot" -script fn:amc --output "C:/Media/Other" --log-file C:/Media/amc.log --action hardlink --conflict override -non-strict --def "seriesFormat=C:/Media/TV/{n}/{'S'+s}/{fn}" "movieFormat=C:/Media/Movies/{n} {y}/{fn}" excludeList=C:/Media/amc-input.txt plex=localhost "ut_dir=C:/Media/Downloads/%n" "ut_kind=multi" "ut_title=%n"

Media Server

Plex is the answer to this question. It looks great, it’ll automatically download extra information about your media, and it has really nice mobile apps for remote control. Extra features include offline syncing to your mobile device, so you can take your media when you’re flying, and Chromecast support so you can watch everything on your TV.

The Filebot command above will automatically tell Plex that a new file has arrived, which is great for if you choose to have your media stored on a NAS (Plex may not be able to automatically watch a directory on a NAS for when new files are added).

Backup

Having a local server is great for keeping a local backup of things that do matter – your photos and documents, for example. I use CrashPlan to sync my most important things to my server, so I have a copy immediately available if my laptop dies. I also use CrashPlan’s remote backup service to keep an offsite backup of everything.

Conclusion

While I’ve enjoyed figuring out how to get this all working smoothly, I’d love to be able to pay a monthly fee for an Rdio or Spotify style service, where I get the latest movies and TV shows as soon as they’re available. If you’re wondering what your next startup should be, get onto that.

Planet DebianHideki Yamane: Could you try to consider speaking more slowly and clearly at sessions, please?


Some people (including me :) are not native English speakers, and don't use English in everyday conversation. So it's a bit tough for them to follow what you say if you speak at your usual speed. We want to listen to your presentations so we can understand and discuss them (of course!), but sometimes machine-gun speaking gets in the way.

Calm down, take a deep breath and do your presentation - then it'll be fantastic, and my cat will be pleased with it, as below (meow!).



Thank you for reading. See you at the cheese & wine party.

Geek FeminismThe greatest good for the greatest linkspam (24 August 2014)

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

,

Planet DebianDebConf team: Full video coverage for DebConf14 talks (Posted by Tiago Bortoletto Vaz)

We are happy to announce that live video streams will be available for talks and discussion meetings in DebConf14. Recordings will be posted soon after the events. You can also interact with other local and remote attendees by joining the IRC channels which are listed at the streams page.

For people who want to view the streams outside a web browser, the page for each room lists direct links to the streams.

More information on the streams and the various possibilities offered is available at DebConf Videostreams.

The schedule of talks is available at DebConf 14 Schedule.

Thanks to our amazing video volunteers for making it possible. If you like the video coverage, please add a thank you note to VideoTeam Thanks

Planet DebianNoah Meyerhans: Debconf by train

Today is the first time I've taken an interstate train trip in something like 15 years. A few things about the trip were pleasantly surprising. Most of these will come as no surprise:

  1. Less time wasted in security theater at the station prior to departure.
  2. On-time departure
  3. More comfortable seats than a plane or bus.
  4. Quiet.
  5. Permissive free wifi

Wifi was the biggest surprise. Not that it existed, since we're living in the future and wifi is expected everywhere. It's IPv4 only and stuck behind a NAT, which isn't a big surprise, but it is reasonably open. There isn't any port filtering of non-web TCP ports, and even non-TCP protocols are allowed out. Even my aiccu IPv6 tunnel worked fine from the train, although I did experience some weird behavior with it.

I haven't used aiccu much in quite a while, since I have a native IPv6 connection at home, but it can be convenient while travelling. I'm still trying to figure out what happened today, though. The first symptoms were that, although I could ping IPv6 hosts, I could not actually log in via IMAP or ssh. Tcpdump showed all the standard symptoms of a PMTU blackhole: small packets flow fine, large ones are dropped. The interface MTU is set to 1280, which is the minimum MTU for IPv6, and any path on the internet is expected to handle packets of at least that size. Experimentation via ping6 reveals that the largest payload size I can successfully exchange with a peer is 820 bytes. Add 8 bytes for the ICMPv6 header for 828 bytes of payload, plus 40 bytes for the IPv6 header, and you get an 868-byte packet, which is well under what should be the MTU for this path.
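
Probing for this kind of blackhole is easy to reproduce with ping6, whose -s option sets the payload size; the host below is just an example:

ping6 -c 3 -s 820 www.debian.org    # replies come back
ping6 -c 3 -s 821 www.debian.org    # silence: the path drops it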

I've worked around this problem with an ip6tables rule to rewrite the MSS on outgoing SYN packets to 760 bytes. Adding the 20-byte TCP header and 40-byte IPv6 header on top caps TCP packets at 820 bytes, comfortably under the 868 bytes the path demonstrably passes, while leaving some slack for extension headers:

sudo ip6tables -t mangle -A OUTPUT -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 760

It is working well and will allow me to publish this from the train, which I'd otherwise have been unable to do. But... weird.

Planet DebianVincent Sanders: Without craftsmanship, inspiration is a mere reed shaken in the wind.

While I imagine Johannes Brahms was referring to music, I think the sentiment applies to other endeavours just as well. The trap of believing an idea is worth something without an implementation occurs all too often; however, this is not such an unhappy tale.

Lars original design idea
Lars Wirzenius, Steve McIntyre and I were chatting a few weeks ago about several of the ongoing Debian discussions. As is often the case, these discussions had devolved into somewhat unproductive noise, and yet amongst it all was a voice of reason: Russ Allbery.

Lars decided he would take the opportunity of the upcoming Debconf 14 to say thank you to Russ for his work. It was decided that a plaque would be a nice gift, and I volunteered to do the physical manufacture. Lars came up with the idea of a DEBCON scale, similar to the DEFCON scale, and got some text together with an initial design idea.

CAD drawing of cut paths in clear acrylic
I took the initial design and, as is often the case, what is practically possible forced several changes. The prototype was a steep learning curve in using the Cambridge makespace laser cutter to create all the separate pieces.

The construction is pretty simple: three layers of transparent acrylic plastic. The base layer is a single piece of plastic with the correct outline. The next layer has the DEBCON title, the Debian swirl and the level numbers. The top layer has the text engraved into its back surface, giving the impression that the text floats above the layer behind it.

Failed prototype DEBCON plaque

For the prototype I attempted to glue the pieces together. This was a complete disaster and required discarding the entire piece and starting again with new materials.

The final version with stand ready to be presented
For the second version I used four small nylon bolts to hold the sandwich of layers together which worked very well.

Presentation of plaque photo by Aigars Mahinovs
Yesterday at the Debconf 14 opening, Steve McIntyre presented it to Russ, and I think he was pleased; certainly he was surprised (photo from Aigars Mahinovs).

The design files are available from my design git repo, though why anyone would want to reproduce it I have no idea ;-)

Planet DebianLucas Nussbaum: on the Dark Ages of Free Software: a “Free Service Definition”?

Stefano Zacchiroli opened DebConf’14 with an insightful talk titled Debian in the Dark Ages of Free Software (slides available, video available soon).

He makes the point (quoting slide 16) that the Free Software community is winning a war that is becoming increasingly pointless: yes, users have 100% Free Software thin client at their fingertips [or are really a few steps from there]. But all their relevant computations happen elsewhere, on remote systems they do not control, in the Cloud.

That surrender of control over our computing is a huge and important problem, and probably the largest challenge for everybody who cares about freedom, free speech, or privacy today. Stefano rightfully points out that we must do something about it. The big question is: how can we, as a community, address it?

Towards a Free Service Definition?

I believe that we all feel a bit lost with this issue because we are trying to attack it with our current tools & weapons. However, they are largely irrelevant here: the Free Software Definition is about software, and software there is understood strictly, as software programs. Applying it to services, or to computing in general, doesn’t lead anywhere. In order to increase general awareness of this issue, we should define more precisely what levels of control can be provided, to understand what services are not providing to users, and to make an informed decision about waiving a particular level of control when choosing to use a particular service.

Benjamin Mako Hill pointed out yesterday during the post-talk chat that services are not black or white: there aren’t impure and pure services. Instead, there’s a gradation of possible levels of control for the computing we do. The Free Software Definition lists four freedoms — how many freedoms, or types of control, should there be in a Free Service Definition, or a Controlled-Computing Definition? Again, this is not only about software: the platform on which a particular piece of software is executed has a huge impact on the available level of control: running your own instance of WordPress, or using an instance on wordpress.com, provides very different control (even if, as Asheesh Laroia pointed out yesterday, WordPress does a pretty good job at providing export and import features to limit data lock-in).

The creation of such a definition is an iterative process. I actually just realized today that (according to Wikipedia) the very first occurrence of an attempt at a Free Software Definition was published in 1986 (GNU’s bulletin Vol 1 No.1, page 8) — I thought it happened a couple of years earlier. Are there existing attempts at defining such freedoms or levels of control, and at benchmarking such criteria against existing services? Such criteria would not only include control over software modifications and (re)distribution, but also likely include mentions of interoperability and open standards, both to enable the user to move to a compatible service, and to avoid forcing the user to use a particular implementation of a service. A better understanding of network effects is also needed: how much and what type of service lock-in is acceptable on social networks in exchange for functionality?

I think that we should draw inspiration from what was achieved during the last 30 years of Free Software. The tools that were produced are probably irrelevant to this issue, but there's a lot to learn from the way they were designed. I really look forward to the day when we will have:

  • a Free Software Definition equivalent for services
  • Debian Free Software Guidelines-like tests/checklist to evaluate services
  • an equivalent of The Cathedral and the Bazaar, explaining how one can build successful business models on top of open services

Exciting times!

Planet Linux AustraliaMark Greenaway: Guitar

Does anyone still blog? It seems nearly everyone has moved onto Twitter/Facebook. I miss being able to express thoughts in more than 160 characters.

I went to a picnic recently, and some people were passing a steel string guitar around. I'm not a good acoustic player, but it was fun so I had a bash. Someone played Under The Bridge, and took liberties with the chord voicings. So I was inspired to pick up my guitar and work through the official transcription, which I own. While the basic form of the song is pretty simple, as you can hear, the clever part is the choice of chord voicings and fills. I'll be practicing that one for a while.

I've also started over working through the Berklee method books, starting at Volume 2. I learned by playing by ear and memorising, so sight reading is still something I'm getting used to, and sometimes I'm not disciplined enough to do it properly. But I'm getting better at that. I'll be so happy when I start to get good at position playing.

Planet Linux AustraliaMark Greenaway: Physiotherapy

I'm at Elevate in Sydney CBD. For a long time, I've struggled with flexibility issues, and I began to think something must be wrong. It turned out something is wrong - I have a minor skeletal deformity in my left hip joint, and my muscles have developed in a strangely imbalanced way to compensate. Except it isn't working: I have severely reduced range of motion and chronic pain in my left hip joint.

My physio thinks he can correct the problem, but it's going to take a while. So I'll be off training for at least six weeks, and more likely two months or more. But it will be worth it if my joint pain goes away and I can move like the other people in my judo class.

Posted via LiveJournal app for iPhone.

Planet Linux AustraliaMark Greenaway

"Curiously enough, the only thing that went through the mind of the bowl of petunias as it fell was 'Oh no, not again.'" - Hitchhiker's Guide to the Galaxy.

Planet Linux AustraliaMark Greenaway: One for the stats nerds

At USyd, we did all our stats in R. Now I'm working at the Department of Health, and we do most of our stats in SAS. SAS is pretty different to R, and so I've needed to work hard to try to learn it.

This is a rite of passage that most trainee biostatisticians go through, and so people have shoved various books into my hands to help me get up to speed. I'll omit the names of many of the books to protect the guilty, but the most useful book someone pressed into my hands was The Little SAS Book, which I read cover to cover in two sittings.

The Little SAS Book is more technical than the others, hence more suitable for programmers, and actually gives you an inkling of what the designers of the language were thinking. That's helped me begin to think in the language, which is something none of the other books have helped me to do.

The best comparison I can come up with for now is that SAS is like German, whereas R is like Japanese. SAS has lots of compound statements, each of which does a lot, while R has many small statements which each do a little bit. So would you like to be able to speak German or Japanese? The correct answer is, of course, both, each at the appropriate time :)
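
To make the comparison concrete, here is roughly the same task in both languages, with an invented dataset and variable names. The SAS procedure does the grouping, the statistics and the printing in one compound statement, while the R version composes small functions:

* SAS: one PROC does everything;
proc means data=patients mean std;
    class sex;
    var age;
run;

# R: small pieces, composed by hand
tapply(patients$age, patients$sex, mean)
tapply(patients$age, patients$sex, sd)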

Planet Linux AustraliaMark Greenaway: Gainfully employed

A while ago, I applied for a job as a biostatistician in public health. I made it to the interview stage, and that seemed to go quite well. They said they'd contact me in seven to ten days. I didn't hear anything for a while, but eventually I bumped into one of my referees who said he'd spoken with my interviewers, and they sounded "very positive". I poked my other referee, and he said they'd spoken to him too. So that was sounding pretty good.

On the advice of my girlfriend, I asked them how things were going, and they said they were waiting for a criminal record check to complete. Owing to my misspent youth, there's no criminal record to check, so now I was feeling very positive indeed. To cut a long story short, today they offered me a position, and I accepted.

So all's well that ends well!

Planet Linux AustraliaMark Greenaway: British Medical Journal publishes paper on the risks of head banging

Head and neck injury risks in heavy metal

This is the funniest thing I've seen in ages. I particularly like the mitigation options: many injuries could be prevented if AC/DC would stop playing Back In Black live, and instead play Moon River ;)

Planet Linux AustraliaMark Greenaway

Sometimes I feel like I'm not so much in control of my body as sending it memoes. And it responds with things like "Tough job, hip rotation. Can you come back Monday?".

Planet Linux AustraliaMark Greenaway: True Temperament

I've been playing a few more jazz chords and moveable chord forms of late, and my ear's been getting a little better. Unfortunately, this means I've become more sensitive to how out of tune the notes in some of those chords sound as you move up the neck.

To some extent, this problem is inherent in the guitar's construction. There are some very determined people at a company called True Temperament who've decided to do something about this by making custom necks with strange looking curved frets. The FAQ on that site also goes into some depth about tuning methods and why the problem is unsolvable on a standard guitar.

So I'm going to have to live with it, or pick up a saxophone or something :)

Planet Linux AustraliaMark Greenaway: Judo

Judo is going well. I've been working on the fundamentals a lot recently, especially breakfalling. I'm finally getting to the point where I'm losing my fear of being thrown - the people in my club are nice and aren't there to hurt people, and I've learnt that by just going with the throw rather than resisting you can breakfall more cleanly. That way, you're very unlikely to get hurt. And if you pay attention while you're being thrown, you tend to learn more about how the throw works, so when it comes your turn, you'll do it better.

Being a lightweight, I've got to fight differently to the heavier judoka, so in the next little while I'm going to focus on improving my speed and perfecting my technique for the basic throws and sweeps. If you're stronger you can just apply more force to get bad throwing techniques to work, but this isn't an option for me. I think ultimately it's a good thing, because I'll have to learn the throws properly.

Always more things to work on. Practice, practice, practice.

Planet Linux AustraliaMark Greenaway

Having spent the last few weeks being sick, now that I'm beginning to get better I'm finding I'm pretty hyper.

Planet Linux AustraliaMark Greenaway

Just getting over this illness, whatever it is. I just want to eat and sleep.

Planet Linux AustraliaMark Greenaway

I'm using a surprising amount of maths in my current job. Recently, we've been trying out measures of diversity. Today, I'm taking a look at Shannon entropy.

Planet Linux AustraliaMark Greenaway: The tyranny of distance

At the moment, I'm commuting between two and two and a half hours each way to and from work. That's between twenty and twenty-five hours a week, and it's costing me more than $65 a week to do all that travelling. This doesn't make any sense, so I'm really looking forward to moving.

Planet Linux AustraliaMark Greenaway

I've been sick for a few weeks. I thought it was a bad cold, but my mum thinks it's something more serious. She suggests it might be sinusitis. We're not doctors, so it's off to visit a GP tomorrow to find out what's wrong.

Planet Linux AustraliaTim Connors: An open letter to Peter Ryan regarding police treatment of cyclists

Hon Peter Ryan,

I am writing because I am concerned at the number of recent incidents where a driver has collided with a cyclist, and the case hasn't been followed up by the police. Such incidents and the publicity surrounding them does nothing to encourage road users to obey the law when they realise that they will most likely get away with not doing so.

A week ago in Ballarat, a 13 year old boy was hit by a car, and the police said the boy had the right of way[1]. Despite this, the article linked states that the police will not charge the driver. This, despite her having broken Australian Road Rule 67 to 72, 84 or 86 depending on circumstances at the stated intersection, or perhaps 140 to 144 if travelling in the same direction. She was likely negligent in allowing the collision to happen in the first place, which, by my understanding, is a criminal offence, especially since there was serious injury involved. If she used the usual excuse that "she didn't see him", then that's an admission of guilt in failing to obey ARR 297 - driver having proper control of vehicle.

Also recently, there was a highly publicised case where Shane Warne had an altercation with a bicycle rider. In that case, the fact that Warne hit the cyclist from behind (ARR 126) after overtaking unsafely (ARR 144) is undisputed[2]. The fact that details were not exchanged following the collision is also undisputed (ARR 287). It is also well established that Warne was stopped unnecessarily in a bike lane (ARR 125; 153)[3]. And yet the police will not investigate[4].

Going back a number of years, I also have not had good experiences getting the police to follow up on cases. In my most recent case (11/10/2005; I do not know the case number sorry, all I know was that I was attended to by Angove & Auchterlonie from Boroondara police), the driver also failed to obey ARR 287 (as well as a slew of other offences, such as ARR 46 and 148 - changing lanes without indicating sufficiently and without due care). The police refused to prosecute the driver, and also would not hand over the driver's details or insurer details, based on some misguided privacy policy, asking me instead to fork out for a freedom of information request. Given that I was a broke student at the time, this was not a feasible thing to do and I never did receive compensation from the driver for damage to my bicycle, clothes, and large out of pocket expenses for travel to medical care for several years that the TAC didn't cover. The police also displayed a lack of knowledge of the law, initially thinking that I had broken ARR 141.

I can't imagine why the police aren't investigating these cases, because in each case, clear evidence is at hand, and not disputed. The identities of all parties are known. It should be an open and shut case. Without the police making charges, the rider in each case will have a much harder time claiming from the driver's insurance (if the boy was not admitted overnight, his TAC excess will be an enormous burden to his family). The driver in each case will not be discouraged from driving in a similar fashion next time. And other drivers also know that they will most likely get away with any offences they commit if a bicycle is involved. This is a perverse reversal of the situation that we should have, in which drivers should be encouraged to take due diligence around cyclists. It almost seems that the police always assume a cyclist is at fault unless proven otherwise in Australia, whereas most other countries with an established bicycling culture assume that the driver is at fault unless proven otherwise as they hold the burden of driving the more deadly vehicle and so should be required to take due care.

If the laws aren't adequate to prosecute the driver in the above cases, has your department been contacted to update the laws, and what is being done? Keep in mind that cyclists have no protection other than the law, and as they are the more vulnerable road users, the laws should focus on their safety and on ensuring that transgressions are dealt with effectively.

Can you please encourage the police in each of these cases to follow them up to the full extent that the law currently allows.


Sincerely,


[1]
http://www.theage.com.au/victoria/teen-cyclist-struck-by-car-20120110-1ps85.html

[2]
http://theage.drive.com.au/motor-news/warnes-tirade-triggers-bike-rego-call-20120118-1q5k0.html

[3]
http://www.cyclingtipsblog.com/2012/01/cyclist-versus-warnie-the-cyclists-story/

[4]
http://www.heraldsun.com.au/news/more-news/warne-blasts-cyclists-on-twittershane-warne-clashes-with-cyclist-on-way-home-from-training-session/story-fn7x8me2-1226246735306

Planet Linux AustraliaTim Connors: Breaking windows

Another letter in The Age today. Unedited text below:


Ian Porter (Without car manufacturing, we are on the road to ruin, The Age, 13 Jan) believes that the government needs to keep throwing money at the car industry in order to support other industry in Australia. I'm surprised as an industry analyst, he hasn't heard of the broken window fallacy.

Throwing good money after bad at an unsustainable industry that can't adapt is just a waste. It's like sending soldiers to dig holes only to fill them back up again, just to keep them employed and off the streets. The money could be better spent on useful things that will remain useful into the future. Yes, paying people to break windows and then paying the glazier to repair them will keep people employed, but couldn't the glazier be better employed building things that then keep other people employed into the future?

Why don't we do something useful with the money instead? Like building modern intra- and inter-city rail infrastructure? Rail won't become a stranded asset when cheap oil becomes unavailable. We won't be left with vast tracts of useless motorways - we will continue to be able to use the rail infrastructure well past these boom times.

Planet Linux AustraliaTim Connors: Police a bit rich

Hrrrfm. The Age didn't publish my letter:


I find it a bit rich that the police union are upset that information
alongside a photograph was distributed about one of their members, without
his consent. I understand that truth is not considered a defence to
libel in Australia, so it was perhaps unwise to distribute such a photo.
But it is common police practice to photograph protesters without our
consent, and to store these photos with profiles in national databases
without a right of appeal or review. I probably find myself on some
watchlist now just for attending some of last night's Occupy Melbourne
general assembly.

Maybe there would be no need for a photograph to be distributed if police
correctly wore their own name badges (and if the name badges weren't
deliberately too small to read). Or if there was some accountability, as
opposed to the protectionism that police have demonstrated in the past
with the likes of their disgusting behaviour at the APEC protests.

Planet Linux AustraliaTim Connors: on the hardships of living with minimal amounts of RAM (4GB or so)

I just got 200MB/s read/write rate from my swap device on my laptop. Fast laptop eh? OK, so I'm cheating by using the compcache/zram module from the staging tree.

When I bought my 2 laptops, I was upgrading from 256MB to 4GB. I thought that would be enough to last me for years. The video card in that first laptop came with more memory than the system memory of the machine I was upgrading from. Alas, I forgot to factor in Opera and Firefox (we're now in the era when Emacs is officially lightweight). And being laptops with the particular chipsets they have, 4GB is it, I'm afraid.

And then there's the fact that Linux's VMM, for me, has never really handled the case of a machine running with a working set not all that much smaller than physical RAM. If I add up resident process sizes plus cache plus buffers plus slab plus anything else I can find, I always come up about 25% short of what's actually in the machine - ever since those 256MB days (and about half the RAM went "missing" on the 128MB machine before that). And even when your working set, including any reasonable allowance for what ought to be cacheable, falls far short of RAM, it still manages to swap excessively, killing interactive performance (yes, I've tried /proc/sys/vm/swappiness). When I come in in the morning, it's paged everything out to make backups through the night marginally faster (not that I cared about that - I was asleep). Then it pages everything back in again at 3MB/s, despite the disk being capable of 80MB/s. Pity it's not smart enough to realise that I need the entire contiguous block of swapped pages back in, so it might as well dump the whole wasted cache and read swap back in contiguously at 80MB/s, rather than seeking everywhere and getting nowhere.

What I really wanted, was compressed RAM. Reading from disk with lots of seeks is a heck of a lot slower than decompressing pages in RAM. I vaguely recall such an idea exists if you're running inside VMWare or the like. But this is a desktop. I want to display to my physical screen without having to virtualise my X11 display.

But the zram module might be what I want. Pretty easy to set up (in the early days, it required a backing swap device and was kinda fiddly). Here's the hack I've got in rc.local, along with a reminder emailed to myself at reboot that I've still got this configured:
# load the zram module (staging tree, circa kernel 3.0); creates /dev/zram0
modprobe zram
echo 'rc.local setting zramfs to 3G in size - with a 32% compression ratio (zram_stats), that means we take up 980M for the ramfs swap' | mail -s 'zram' tconnors
# size the virtual swap device at 3GB (compressed pages live in RAM)
echo $((3*1024*1024*1024)) > /sys/block/zram0/disksize
mkswap /dev/zram0
# priority 5: used ahead of any lower-priority disk-backed swap
swapon -p 5 /dev/zram0

It seems to present as a block device of default size 25% of RAM (but I've chosen 3GB above), and as you write to that device, compressed versions of the pages end up in physical memory. Eventually you'd run out of physical memory, and hopefully you have a second swap device (of lower priority) configured where it can page out for real. In my case, I'm using the Debian swapspace utility. Be warned: if you plan to hibernate your laptop, don't forget to have a real swap partition handy :)
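
For reference, here's what a static version of that fallback would look like (swapspace manages this dynamically; the size and path here are made up for illustration):

# a 2GB disk-backed swapfile as a last resort
dd if=/dev/zero of=/var/swapfile bs=1M count=2048
chmod 600 /var/swapfile
mkswap /var/swapfile
# priority 1: only used once the priority-5 zram device is full
swapon -p 1 /var/swapfile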

zram_stats tells me I'm currently swapping 570MB compressed down to 170MB, for a compression ratio of 28%. That 170MB has to be subtracted from the memory the machine has, so it appears to really only have 3.8 or so GB. No huge drawback. At that compression ratio, if I were to swap another 3GB out, the physical RAM stolen by zram would only be 1GB. My machine would appear to have 3GB of physical RAM, 3GB of blindingly fast swap, and a dynamic amount (via swapspace) of slow disk-based swap. I'd be swapping more because I had 1GB less than I originally had, but at least I'd be swapping so quickly I ought not notice it ('course, I haven't benchmarked this). And I'd be able to have 2GB more in my working set before paging really starts to become unbearable.
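
If you don't have a zram_stats script handy, the same numbers can be pulled straight out of sysfs. A minimal sketch, assuming the staging-era attribute names orig_data_size and compr_data_size (later kernels consolidated these into mm_stat):

# uncompressed bytes swapped to zram, and bytes actually occupied in RAM
orig=$(cat /sys/block/zram0/orig_data_size)
compr=$(cat /sys/block/zram0/compr_data_size)
echo "swapped: $((orig/1024/1024))MB, resident: $((compr/1024/1024))MB"
# avoid dividing by zero before anything has been swapped
[ "$orig" -gt 0 ] && echo "compression ratio: $((100*compr/orig))%"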

So, with an uptime of 4 hours, I haven't even swapped to disk yet (I know this because swapspace hasn't allocated any swapfiles yet). The machine hasn't yet thrashed half to death. That must be a recent record for me.


Yes, the module is in the staging tree. It's already deadlocked on me once, getting things stuck in the D state. And the machine has deadlocked with unexplained reasons a couple of other times recently (with X having had the screensaver going at the time, so no traces, and no idea whether it's general kernel 3.0 flakiness or zram in particular; I had forgotten until tonight that I even had previously configured zram back in the 2.6.32 days).

What I really *really* want, since I lack the ability to add more RAM to the machines, is a volatile-RAM eSATA device, used purely as a swap device and reinitialised each boot (ie, battery backup is just pointless complexity and expense, and SSD is slow, fragile and prone to bad behaviour when you can't use TRIM, for the amount of writes involved in a swap device). There is the Gigabyte i-Ram and GC-RAMDISK and similar devices, but they're kinda pricey, even without the RAM included in the price. Why is SSD so much cheaper than plain old simple RAM these days? I thought the complexity involved would very much make it go the other way around.

What I really *really* **really** want, is for software to be less bloaty crap.

Planet Linux AustraliaTim Connors: We do things differently, here

Another slightly edited letter published in The Age today. Maybe I should become a media mogul. Original here:

I've just come back from a tour of Europe. Wind farms are everywhere: near tourist attractions, along roads where it is particularly windy, anywhere appropriate, and especially near townships and individual houses, because that's where the consumers of electricity are. No one seems to have a problem with them. People there don't suffer from increased rates of cancer or bogus self-imagined afflictions. They don't ban building windmills within 2km of towns. They don't ban building along a windy stretch of road because tourists happen to drive along there.

They also don't develop in green wedges, allow cattle to graze in their national parks, or try to make it easier to log old growth forest and make it harder to protect such lands.

Europeans seem to have no problem accepting that, despite their per-capita emissions being way below ours, something has to change. It's a pity we are not led by leaders. The best I can manage out of my local member, Ted Baillieu, is a form letter in reply to my concerns, uttering vague niceties about planning and the economy, and attacking the opposition.

Planet Linux AustraliaTim Connors: Failed and now discarded Victorian cycling strategy

I've always been a crazy cat man who sends letters to the editor and his local parliamentary representatives. Except that I have no cat. Anyway, a shortened version of this letter was published in The Age today.


Dear Sir/Madam (CCed my State local member, and The Age letters),

The auditor general's report into the Victorian government cycling strategy said "the Department of Transport and VicRoads had not ... addressed conflicts and delays where cyclists crossed busy roads, and where cyclists and pedestrians shared paths." ("No way to spin it, the wheels are off", The Age, Aug 18). It's not just busy crossings that cause unnecessary delays to people not in the precious car.

A trip along Southbank is usually hampered by two traffic-light-controlled crossings where up to 100 pedestrians and riders wait at the lights, with an average waiting time of around 5 minutes at each. That's 10 minutes wasted per return trip. We wait for just an occasional car (each with just one person in it) and the odd empty tram to pass. The cars are frequent enough to make running ("jaywalking") across the road a little unsafe, but not frequent enough that the road is anywhere near capacity, so giving the car traffic more frequent red lights is not going to harm the flow into and out of the city at all. And it will help the 100 waiting pedestrians.

In the land where cycling is taken seriously, Holland, they want cyclists to have to wait no more than 15 seconds, and they're experimenting with microwave sensors to extend the green cycle to let cyclists cross safely. Here, the Southbank story is repeated across the city. Outside Swinburne University, a pedestrian light regularly sees several dozen students wait several minutes at lunchtime to cross, while a small handful of cars (each with just one person in them) pass just frequently enough to make a quick dash across the road away from the lights impossible. The vehicle sensor loops at Camberwell Junction have, for years, been tuned so poorly that they are not even sensitive enough to detect cyclists, so at 11pm you can wait through two complete cycles of the lights without your leg of the junction ever turning green, while no cars cross the other legs at all, before you give up and cross illegally (completely safely, because there's no traffic at all at that time of night and you can see for miles).

Given that it has been conclusively proven over the last 40 years that you can't build your way out of car congestion, you can't keep encouraging people to get into their cars; perhaps it's time to promote other forms of travel?

Planet Linux AustraliaTim Connors: New Media

The Age published an article about http://theconversation.edu.au/, a new media outlet run by a former editor-in-chief of The Age. Not only have I seen intelligent articles on it; its editors and authors understand Creative Commons licenses.

Planet Linux AustraliaTim Connors: Mary Poppins

Small country towns and petrol. I should have learned by now. I was running low earlier than I had expected to[1], but the maps told me there was a little town up the road with fuel, so I stopped there. It's Sunday. In a small town. That's OK, I've got 36km of fuel left according to the onboard computer, and a guy reckons a town 20km back had fuel. I didn't see it, and it was a very small town, so I decide to push my luck and head over the mountains, where I know a town 63km away (Mansfield) has fuel. I've stopped before when the computer reckoned there was 7km left and I really had 1.5L of fuel left, which should be good for 30km. So 36+30km should get me there with 3km to spare! Except that mountains take juice. So I absolutely babied it over. Then I thought I was lost because of a GPS stuffup (problem exists between touchscreen and bike seat). I never went above 80km/h (and along with wanting not to roll on the throttle comes wanting not to put on too much brake and waste energy; not braking and mountains aren't really a clever mix). And it should have been a really fun road. I still hadn't reached the top of the mountain when the computer said there was 0km left. I couldn't listen to music, because I had to listen for the telltale signs of the engine missing, or other signs that I should immediately turn off the ignition lest the fuel pump bearings burn up. Anyway, 20km later, no signs of trouble, and I roll into the fuel station. 1.5L left still.

I've got a Mary Poppins fuel tank.

Anyhoo, that blew the cobwebs away. 2 days ought to be enough holiday between jobs, right? Wish me luck tomorrow! I've reccied where it is that I'm working, so now I've just got to find out what I'm actually doing. Oh, and find a house.

[1] At Caltex in the main street of Wagga Wagga, I got the worst quality fuel I've ever had: 5.8L/100km for premium grade, despite travelling most of the time at the speed limit and riding pretty conservatively (compared to the rest of my trip in NSW!). The previous fuel was 5.1L/100km, and the next fuel was 5.4L/100km despite only the lower grade being available.

Planet Linux AustraliaTim Connors: The wrong politic

It seems that Senator Carr didn't like the frank and fearless advice his public servants were offering him, and the Chief Scientist's position became untenable. Sure, you're not meant to offer that frank and fearless advice through the media, but what's the point of having a chief scientist, or indeed any publicly paid scientist, if they're only allowed to toe the party line and not allowed to tell the public what they need to know? We see this time and again. CSIRO researchers have been completely barred from making any public comments without going through the central media office. What's the use of public funding if the publicly funded research isn't allowed to be communicated?

Tony Rabbit wanted to remove the Chief Scientist's office because it was too political (I did read this in the SMH a few months ago, but can't find the cite). Senator Carr wanted to remove the office-holder because she was the wrong politic and was telling too much truth.

Planet Linux AustraliaTim Connors: Why I don't donate to natural disasters in Australia anymore

I donated towards the Black Saturday fires, and then the donation policy of the Red Cross became "we'll forward donations to people with insurance and people with holiday homes that got burned down". I wanted my money to go to people who can't afford to pay for insurance, and certainly to people who can't afford holiday and investment homes. Insurance will cover those who can afford it; the rest truly deserve a break. The Qld flood donations are going to people who simply won't need them.

And as to who would pay for it, and whether Australia should postpone bringing ourselves back into budget surplus: if we hadn't dumped the mineral resources rent tax, we'd be fine. Not only would the annual amount generated by the tax neatly match the amount that needs to be spent repairing Qld, but if it were framed ideally (ie, applied to all mining companies) it would come from companies that are largely responsible for the worsening of these severe storms. That is, they would no longer be able to externalise their costs onto the rest of society so much: those who actually consume more would end up paying for the damage that consumption does, which would then partly fund the mitigation costs we all endure. Actually, it should come partly from farmers too. What did you expect would happen when you clear the land of its natural ability to regulate water flow?

The guy who texted in to JJJ talkback saying that we should just drop the National Broadband Network instead, on the basis that it would be obsolete by the time it was built, made me laugh. Yes, sure: if we don't build something, then the next thing we don't build will be even better!

Planet Linux AustraliaTim Connors: Wrong technological fixes to problems vol. #8123

Arizona state apparently spent $1B to attempt to automate the detection of people crossing a 53 mile section of the Mexican border.

If we expect the lifetime of such a project to be 15 years before the infrastructure completely falls apart and needs to be renewed, then in that same 15 year period, we could employ 33333/15 = 2222 staff at what seem to be typical US wage rates (the arithmetic is spelled out below; I'm neglecting inflation, but since the US economy is a basket case, I might be justified in doing that). In that 53 mile space, we could space those 2222 guards every 40 metres in a line, or a bit more sparsely if you wanted a grid of guards to detect tunneling.
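
The implied arithmetic (the $30,000-a-year wage is my assumption; it is the figure that makes 33,333 staff-years come out of $1B):

\[ \frac{\$10^9}{\$30{,}000/\text{staff-year}} \approx 33{,}333\ \text{staff-years}, \qquad \frac{33{,}333}{15\ \text{years}} \approx 2{,}222\ \text{guards}, \qquad \frac{53\ \text{mi}}{2{,}222} \approx \frac{85{,}000\ \text{m}}{2{,}222} \approx 38\ \text{m apart}. \]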

<hint of sarcasm="maybe">
I'm sure most government spending is useful, and I'm sure the expense of the project could be entirely justified. I'm sure the article is just ill-informed.
</hint of sarcasm>

Planet Linux AustraliaTim Connors: Vale Purrple

I'm not having much luck selecting cats for longevity rather than character. Still, I'd rather character than longevity.



Purrple has been living with mum since Phred died, because cats always deserve fellow playthings, and I wasn't about to get another cat. The signs of her (presumably) cancer started showing in October, but the tests the vet did didn't reveal anything (he wasn't searching for cancer though). On Monday this week, she started showing other signs - that of kidney failure. In the end, she went the same way Phred did.

I didn't get to see her at the end - the curse of long-distance, part-time vets. He made the call: it could either be sew her back up, give her drugs and transport her back to us, or put her down.

And it's only just hit me. I had to type that up.

Planet Linux AustraliaTim Connors: Requiem for a species

Yikes. I'm reading Clive Hamilton's "Requiem for a species. Why we resist the truth about Climate Change". (all tyops are mine)


To date, governments have shunned geoengineering for fear of being accused of wanting to avoid their responsibilities with science fiction solutions. The topic is not mentioned in the Stern report and receives only one page in Australia's Garnaut report. As a sign of its continuing political sensitivity, when in April 2009 it was reported that President Obama's new science adviser John Holdren had said that geoengineering is being vigorously discussed as an emergency option in the White House, he immediately felt the need to issue a "clarification" claiming that he was only expressing his personal views. Holdren is one of the sharpest minds in the business and would not be entertaining what is now known as 'Plan B'— engineering the planet to head off catastrophic warming — unless he was fairly sure Plan A would fail.


It is far easier, on the face of it (and certainly politically), to perform geoengineering than to slow down the generation of CO2. So cheap that a single country could afford it, instead of it being such a huge (political) task that not even all of the world's countries acting cooperatively may be able to pull it off. So great, let's go servo the eco-system. Control systems are easy, right? They never break into unwanted oscillations while you're still learning their response function.


The implications are sobering. In August 1883 the painter Edvard Munch witnessed an unusual blood-red sunset over Oslo. He was shaken by it, writing that he 'felt a great, unending scream piercing through nature'. The incident inspired him to create his famous work, The Scream. The sunset he saw that evening followed the eruption of Krakatoa off the coast of Java. The explosion, one of the most violent in recorded history, sent a massive plume of ash into the stratosphere, causing the Earth to cool by more than one degree and disrupting weather patterns for several years. More vivid sunsets would be one of the consequences of using sulphate aerosols to engineer the climate; but a more disturbing effect of enhanced dimming would be the permanent whitening of daytime skies. A washed-out sky would become the norm. If the nations of the world resort to climate engineering as an expedient response to global heating, and in doing so relieve pressure to cut carbon emissions, then as the concentration of carbon dioxide in the atmosphere continued to rise so would the latent warming that must be suppressed. It would then become impossible to stop sulphur injections into the stratosphere, even for a year or two, without an immediate jump in temperature. It's estimated that, if we did stop, the backup of greenhouse gases could see warming rebound at a rate 10-20 times faster than in the recent past, a phenomenon referred to, apparently without irony, as the "termination problem". Once we start manipulating the atmosphere we could be trapped, forever dependent on a program of sulphur injections into the stratosphere. In that case, human beings would never see a blue sky again.


Please read his book. It goes down many paths -- human psychology, politics, science. It's bloody depressing, but people need to understand why we are not going down a better route.

Planet Linux AustraliaTim Connors: Health and Safety

It has always frustrated me that the medical profession, the unions, and the like push the health and safety barrow so hard without critical thought.

Putting up safety fencing to the point that no one pays attention anymore, because they assume the safety fencing will always be everywhere. (I am reminded here of work: you have to pay a lot more attention now, lest your attention lapse when you go near a fence with a hole in it, because the engineering simply cannot make everything safe.) (I could also rant about how the unions have forced legislation on how brightly lit my office has to be, to the point where it hurts my eyes if I don't wear sunglasses, but I'm going off-topic here.)

Putting sensors in cars that make them so safe that you don't need to pay attention anymore, so that most people drive like they're driving a Volvo. (I fear and loathe the research into automatically driven cars - unless those cars all limit themselves to 40km/h, or cyclists and kangaroos are legislated off the roads, the research will be a failure, safety-wise.) Rear view cameras being mandated in cars simply because a few people in 4WDs are too stupid to look backwards before running over their spawn? How is that going to protect against a child that is lying underneath the car?

But when it comes to mandatory helmet legislation, it seems the medical profession are just blind and dogmatic, and lack any critical thinking skills.

The head of Montreal's trauma unit, Dr Raziz, really needs to come out to Australia and have a look in the trauma units of Melbourne's hospitals some time. There he'll see that helmets do very little against cars that hit helmeted cyclists; after all, the standards only test to impact speeds of 19.5km/h - a fall of 1.5 metres, without any additional velocity contributed by heavy blunt metal. Helmets do nothing when a cyclist is ejected over the handlebars face first into the tarmac because their handlebars got caught in the wheel-well of an excessively high 4WD. Cars are driven recklessly by drivers who have never gotten used to, nor tolerated, cyclists, because most cyclists were driven off the road 20 years ago precisely by the sort of legislation that people like him were trying to push. When you only look at one small part of the picture, you only see a very small part of the picture. Get out there and look at the big picture. Getting more people cycling is the solution.
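
(A quick sanity check on that figure, assuming the 19.5km/h is simply the speed reached in the standard's 1.5 metre free-fall drop test:

\[ v = \sqrt{2gh} = \sqrt{2 \times 9.8\,\text{m/s}^2 \times 1.5\,\text{m}} \approx 5.4\,\text{m/s} \approx 19.5\,\text{km/h}. \]

A collision with a car involves speeds, and masses, well beyond that.)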

Something that makes people feel safer but is not actually safer (bicycle helmets) just leads to risk compensation. Forcing people to wear helmets is anything but safe. The choice to wear a helmet should be your choice, and your choice only (or your parents', if you are deemed too young to legally decide for yourself).

A far more effective piece of legislation to introduce would be to ban vehicles from having a bonnet higher than that of your typical sedan. The aggressivity ratings of 4WDs are unacceptably high, so they have poor crash compatibility with other road users. If only the legislation worked to minimise risky practices, rather than forcing passive safety and adopting other practices that lead to risk compensation.

Planet Linux AustraliaTim Connors: Conga line of suckholes

I've got more respect for the ex-leader of the ALP, Mark Latham, than I currently have for Prime Minister Julia Gillard and Attorney-General Robert McClelland.

Conga line of suckholes indeed.

Sociological ImagesSunday Fun: How Professors Spend Their Time

It’s back-to-school season!  Professors, I thought you might enjoy this bit from PhD Comics:

[PhD Comics cartoon: How Professors Spend Their Time]

Via The Society Pages Editor’s Desk.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Geek FeminismQuick hit: Free travel grants for women to attend EuroBSDcon 2014 in Sofia, Bulgaria

Google is offering 5 grants for women in computer science (either working in or studying it) to attend EuroBSDcon 2014 — the main European conference about the open-source BSD family of operating systems — in Sofia, Bulgaria, to take place September 25-28. The grants cover conference registration as well as up to €1000 in travel costs.

Women who have a strong academic background and have demonstrated leadership (though if you don’t think you do, you should apply anyway) are encouraged to apply. Google’s form requires selecting either “male” or “female” as a gender; if you are not binary-identified but are marginalized in computer science and wish to apply, make use of the contact information for this Google program.

Also note that EuroBSDcon does not appear to have a code of conduct or anti-harassment policy. (If I’m wrong, add it to the wiki’s list of conferences that have anti-harassment policies!)

Planet DebianGregor Herrmann: Debian Perl Group Micro-Sprint

DebConf 14 started earlier today with the first two talks in sunny portland, oregon.

this year's edition of DebConf didn't feature a preceding DebCamp, & the attempts to organize a proper pkg-perl sprint were not very successful.

nevertheless, two other members of the Debian Perl Group & I met here in PDX on wednesday for our informal unofficial pkg-perl µ-sprint, & as intended, we've used the last days to work on some pkg-perl QA stuff:

  • upload packages which were waiting for Perl 5.20
  • upload packages which didn't have the Perl Group in Maintainer
  • update OpenTasks wiki page
  • update subscription to Perl packages in Ubuntu/Launchpad
  • start annual git repos cleanup
  • pkg-perl-tools: improve scripts to integrate upstream git repo
  • update alternative (build) dependencies after perl 5.20 upload
  • update Module::Build (build) dependencies

as usual, having someone to poke beside you, & the opportunity to quickly get a second pair of eyes, was very beneficial. – & of course, spending time with my nice team mates is always a pleasure for me!

,

Planet DebianThorsten Alteholz: Moving WordPress to another server

Today I moved this blog from a vServer to a dedicated server. The migration went surprisingly smoothly. I just had to apt-get install the Debian packages apache2, mysql-server and wordpress. Afterwards only the following steps were necessary:

  • dumping the old database with basically just one command:

    mysqldump -u$DBUSER -p$DBPASS --lock-tables=false $DBNAME > $DBFILE

  • creating the database on the new host:

    CREATE DATABASE $DBNAME;
    \r $DBNAME
    GRANT SELECT,INSERT,UPDATE,DELETE,CREATE,DROP,ALTER ON $DBNAME.* TO '$DBUSER'@'localhost' IDENTIFIED BY '$DBPASS';
    FLUSH PRIVILEGES;

  • importing the dump with something like:

    mysql --user=$DBUSER --password=$DBPASS $DBNAME < $DBFILE

and almost done …

Finally, some fine tuning of /etc/wordpress/htaccess and of the access rights of a few directories to allow installation of plugins. As I wanted to clean up my wp-content directory, I manually reinstalled all plugins instead of just copying them. Thankfully all of the important plugins store their data in the database, and all settings survived the migration.
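
The access-rights step boils down to handing wp-content over to the web server user. A minimal sketch, assuming the Debian package's layout with wp-content under /var/lib/wordpress (the actual path is named in /etc/wordpress/config-*.php, so check there first):

    # let the web server user install plugins into wp-content
    # (the path is an assumption based on the Debian wordpress package layout)
    chown -R www-data:www-data /var/lib/wordpress/wp-content
    chmod -R u+rwX /var/lib/wordpress/wp-content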

Planet DebianAntti-Juhani Kaijanaho: A milestone toward a doctorate

Yesterday I received my official diploma for the degree of Licentiate of Philosophy. The degree lies between a Master’s degree and a doctorate, and is not required; it consists of the coursework required for a doctorate, and a Licentiate Thesis, “in which the student demonstrates good conversance with the field of research and the capability of independently and critically applying scientific research methods” (official translation of the Government decree on university degrees 794/2004, Section 23 Paragraph 2).

The title and abstract of my Licentiate Thesis follow:

Kaijanaho, Antti-Juhani
The extent of empirical evidence that could inform evidence-based design of programming languages. A systematic mapping study.
Jyväskylä: University of Jyväskylä, 2014, 243 p.
(Jyväskylä Licentiate Theses in Computing,
ISSN 1795-9713; 18)
ISBN 978-951-39-5790-2 (nid.)
ISBN 978-951-39-5791-9 (PDF)
Finnish summary

Background: Programming language design is not usually informed by empirical studies. In other fields similar problems have inspired an evidence-based paradigm of practice. Central to it are secondary studies summarizing and consolidating the research literature. Aims: This systematic mapping study looks for empirical research that could inform evidence-based design of programming languages. Method: Manual and keyword-based searches were performed, as was a single round of snowballing. There were 2056 potentially relevant publications, of which 180 were selected for inclusion, because they reported empirical evidence on the efficacy of potential design decisions and were published on or before 2012. A thematic synthesis was created. Results: Included studies span four decades, but activity has been sparse until the last five years or so. The form of conditional statements and loops, as well as the choice between static and dynamic typing have all been studied empirically for efficacy in at least five studies each. Error proneness, programming comprehension, and human effort are the most common forms of efficacy studied. Experimenting with programmer participants is the most popular method. Conclusions: There clearly are language design decisions for which empirical evidence regarding efficacy exists; they may be of some use to language designers, and several of them may be ripe for systematic reviewing. There is concern that the lack of interest generated by studies in this topic area until the recent surge of activity may indicate serious issues in their research approach.

Keywords: programming languages, programming language design, evidence-based paradigm, efficacy, research methods, systematic mapping study, thematic synthesis

A Licentiate Thesis is assessed by two examiners, usually drawn from outside of the home university; they write (either jointly or separately) a substantiated statement about the thesis, in which they suggest a grade. The final grade is almost always the one suggested by the examiners. I was very fortunate to have such prominent scientists as Dr. Stefan Hanenberg and Prof. Stein Krogdahl as the examiners of my thesis. They recommended, and I received, the grade “very good” (4 on a scale of 1–5).

The thesis has been published in our faculty’s licentiate thesis series and has appeared in our university’s electronic database (along with a very small number of printed copies). In the mean time, if anyone wants an electronic preprint, send me email at antti-juhani.kaijanaho@jyu.fi.

Figure 1 of the thesis: an overview of the mapping process

As you can imagine, the last couple of months in the spring were very stressful for me, as I pressed on to submit this thesis. After submission, it took me nearly two months to recover (which certain people who emailed me on Planet Haskell business during that period certainly noticed). It represents the fruit of almost four years of work (way more than is normally taken to complete a Licentiate Thesis, but never mind that), as I designed this study in Fall 2010.

Figure 8 of the thesis: Core studies per publication year

Recently, I have been writing in my blog a series of posts in which I have been trying to clear my head about certain foundational issues that irritated me during the writing of the thesis. The thesis contains some of that, but that part of it is not very strong, as my examiners put it, for various reasons. The posts have been a deliberately non-academic attempt to shape the thoughts into words, to see what they look like fixed into a tangible form. (If you go read them, be warned: many of them are deliberately provocative, and many of them are intended as tentative in fact if not in phrasing; the series also is very incomplete at this time.)

I closed my previous post, the latest post in that series, as follows:

In fact, the whole of 20th Century philosophy of science is a big pile of failed attempts to explain science; not one explanation is fully satisfactory. [...] Most scientists enjoy not pondering it, for it’s a bit like being a cartoon character: so long as you don’t look down, you can walk on air.

I wrote my Master’s Thesis (PDF) in 2002. It was about the formal method called “B”; but I took a lot of time and pages to examine the history and content of formal logic. My supervisor was, understandably, exasperated, but I did receive the highest possible grade for it (which I never have fully accepted I deserved). The main reason for that digression: I looked down, and I just had to go poke the bridge I was standing on to make sure I was not, in fact, walking on air. In the many years since, I’ve taken a lot of time to study foundations, first of mathematics, and more recently of science. It is one reason it took me about eight years to come up with a doable doctoral project (and I am still amazed that my department kept employing me; but I suppose they like my teaching, as do I). The other reason was, it took me that long to realize how to study the design of programming languages without going where everyone has gone before.

Debian people, if any are still reading, may find it interesting that I found significant use for the dctrl-tools toolset I have been writing for Debian for about fifteen years: I stored my data collection as a big pile of dctrl-format files. I ended up making some changes to the existing tools (I should upload the new version soon, I suppose), and I wrote another toolset (unfortunately one that is not general purpose, like the dctrl-tools are) in the process.
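
To give a flavour of the approach: dctrl format is just Debian-control-style paragraphs of Field: value lines separated by blank lines, and grep-dctrl selects paragraphs by field. The field names below are invented for illustration; only the format and the grep-dctrl invocation are real:

    # data.ctrl -- one paragraph per study, e.g.:
    #   Study: Smith2010
    #   Topic: conditionals
    #   Method: experiment
    #
    # select the experiments, showing just two fields per match:
    grep-dctrl -F Method experiment -s Study,Topic data.ctrl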

For the Haskell people, I mainly have an apology for not attending to Planet Haskell duties in the summer; but I am back in business now. I also note, somewhat to my regret, that I found very few studies dealing with Haskell. I just checked; I mention Haskell several times in the background chapter, but it is not mentioned in the results chapter (because there were no studies worthy of special notice).

I am already working on extending this work into a doctoral thesis. I expect, and hope, to complete that one faster.

Planet DebianJoachim Breitner: This blog goes static

After a bit more than 9 years, I am replacing Serendipity, which has been hosting my blog, with a self-made static solution. This means that when you are reading this, my server no longer has to execute some rather large body of untyped code to produce the bytes sent to you. Instead, that happens once in a while on my laptop, and the results are stored as static files on the server.

I hope to get a little performance boost from this, so that my site can more easily hold up to being mentioned on hackernews. I also do not want to worry about security issues in Serendipity – static files are not hacked.

Of course there are downsides to having a static blog. The editing is a bit more annoying: I need to use my laptop (previously I could post from anywhere), and I edit text files instead of using a JavaScript-based WYSIWYG editor (but I was slightly annoyed by that as well). But most importantly, your readers cannot comment on static pages. There are cloud-based solutions that integrate commenting via JavaScript on your static pages, but I decided to go for something even more low-level: you can comment by writing an e-mail to me, and I’ll put your comment on the page. This has the nice benefit of solving the blog comment spam problem.

The actual implementation of the blog is rather masochistic, as my web page runs on one of these weird obfuscated languages (XSLT). Previously, it consisted of XSLT stylesheets producing makefiles calling XSLT stylesheets. Now it is a bit more self-contained, with one XSLT stylesheet writing out all the various HTML and RSS files.
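
Concretely, the rebuild then boils down to a single xsltproc run along these lines (the file names are made up for illustration, and I'm relying on xsltproc treating a trailing-slash output as a directory for the multiple-output documents, e.g. those produced via exsl:document):

    # one stylesheet writes out all the individual html and rss files
    xsltproc --output out/ site.xsl content.xml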

I managed to import all my old posts and comments thanks to this script by Michael Hamann (I had played around with this some months ago, and just spent what felt like an hour finding that script again) and a small Haskell script. Old URLs are rewritten (using mod_rewrite) to the new paths, but feed readers might still be confused by this.

This opens the door to a long due re-design of my webpage. But not today...