Planet Russell


Planet Debian – Steve Kemp: A final post about the lua-editor.

I recently mentioned that I'd forked Antirez's editor and added lua to it.

I've been working on it, on and off, for the past week or two now. It's finally reached a point where I'm content:

  • The undo-support is improved.
  • It has buffers, such that you can open multiple files and switch between them.
    • This allows invocations such as "kilua *.txt" to work, for example.
  • The syntax-highlighting is improved.
    • We can now change the size of TAB-characters.
    • We can now enable/disable highlighting of trailing whitespace.
  • The default configuration-file is now embedded in the body of the editor, so you can run it portably.
  • The keyboard input is better, allowing multi-character bindings.
    • For example, multi-character bindings such as ^C, M-!, and ^X^C are all possible.

Most of the obvious things I use in Emacs are present, such as the ability to customize the status-bar (right now it shows the cursor position, the number of characters, the number of words, and so on).

Anyway I'll stop talking about it now :)

Planet Linux Australia – Simon Lyall: Gather Conference 2016 – Afternoon

The Gathering

Chloe Swarbrick

  • Whose responsibility is it to disrupt the system?
  • Maybe try and engage with the system we have for a start before writing it off.
  • You disrupt the system yourself or you hold the system accountable

Nick McFarlane

  • He wrote a book
  • Rock Stars are dicks to work with

So you want to Start a Business

  • Hosted by Reuben and Justin (the accountant)
  • Things you need to know in your first year of business
  • How serious is the business, what sort of structure
    • If you are serious, you have to do things properly
    • Have you got paying customers yet
    • Could just be an idea or a hobby
  • Sole Trader vs Incorporated company vs Trust vs Partnership
  • Incorporated
    • Directors and shareholders need to be decided on
    • Can take just half an hour
  • when to get a GST number?
    • If over $60k turnover a year
    • If you have lots of stuff you plan to claim back.
  • Have an accounting system from day 1 – Xero is pretty good
  • Get an advisor or mentor that is not emotionally invested in your company
  • If partnership then split up responsibilities so you can hold each other accountable for specific items
  • If you are using Xero then your accountant should be using Xero directly not copying it into a different system.
  • Remuneration
    • Should have a shareholders agreement
    • PAYE possibility from drawings or put 30% aside
    • Even if only a small hobby company, you will need to declare income to IRD, especially at a non-trivial level.
  • What Level to start at Xero?
    • Probably from the start if the business is intended to be serious
    • A bit of pain to switch over later
  • Don’t forget about ACC
  • Remember you are due provisional tax once you get over the $2,500 threshold for the previous year.
  • Home Office expense claim – claim percentage of home rent, power etc
  • Get in professionals to help

Diversity in Tech

  • Diversity is important
    • Why is it important?
    • Does it mean the same for everyone
  • Have people with different “ways of thinking” and we will have diverse views, and thus wider and better solutions
  • example: “a Polish engineer could analyse a Polish-specific character input error”
  • example “Controlling a robot in Samoan”, robots are not just in english
  • Stereotypes tie some groups to specific jobs, eg “Indians in tech support”
  • Example: all hires went through University of Auckland so had done the same courses etc
  • How do you fix it when people innocently hire everyone from the same background? How do you break the pattern? Must the first different-hire represent everybody in that group?
  • I didn’t want to be a trail-blazer
  • Wowed at a “Women in tech” event – the first time she saw “the majority of people are like me” in a bar.
  • “If he is a white male and I’m going to hire him on the team that is already full of white men he better be exception”
  • Worried about the implied “diversity” vs “meritocracy” framing, and that diverse candidates are seen as not as good
  • Usual over-representation of white-males in the discussion even in topics like this.
  • Notion that somebody was only hired to represent diversity is very harmful especially for that person
  • If you are hiring for a tech position then 90% of your candidates will be white males; put your diversity effort into getting a more diverse group applying for the jobs, not into tilting the actual hiring.
  • Even in maker spaces where anyone is welcome, there are a lot fewer women. Blames men’s mags showing things unfinished while in women’s mags everything is perfect, so women don’t want to show off something that is unfinished.
  • Need to make the workforce diverse now to match the younger people coming into it
  • Need to cover “lower income” people who are not exposed to tech
  • Even a small number are role models for the future for the young people today
  • Also need to address the problem of women dropping out of tech in their 30s and 40s. We can’t push girls into an “environment filled with acid”
  • Example: taking “cocky arrogant males” out of classes into an “advanced stream” meant the remaining class saw women graduating and staying in at a much higher rate.


  • Paul Spain from Podcast New Zealand organising
  • Easiest to listen to when doing manual stuff or in car or bus
  • Need to avoid overload of commercials, eg interview people from the company about the topic of interest rather than about their product
  • Big firms putting money into podcasting
  • In the US 21% of the market are listening every single month. In NZ perhaps more like 5% since not a lot of awareness or local content
  • Some radio shows are being re-cut and published as podcasts
  • There is not a good directory of NZ podcasts
  • Advise people to use proper equipment if possible, if it is more than a one-off. Bad sound quality is very noticeable.
  • One person: 5 part series on immigration and immigrants in NZ
  • Making the charts is a big exposure
  • Apple’s “New and Noteworthy” list
  • Dominated by traditional personalities and existing broadcasters at present, but that only helps traction within New Zealand




Planet Debian – Francois Marier: Replacing a failed RAID drive

Here's the complete procedure I followed to replace a failed drive from a RAID array on a Debian machine.

Replace the failed drive

After seeing that /dev/sdb had been kicked out of my RAID array, I used smartmontools to identify the serial number of the drive to pull out:

smartctl -a /dev/sdb

Armed with this information, I shut down the computer, pulled the bad drive out and put the new blank one in.
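If you want to script this lookup, the serial line can be pulled straight out of the smartctl report. A minimal sketch; the file name and serial value below are made-up stand-ins for real output captured with "smartctl -a /dev/sdb > smartctl.txt":

```shell
# Stand-in for real smartctl output (the serial is a placeholder)
printf 'Serial Number:    WD-WCC4N0123456\n' > smartctl.txt

# Split on "colon + spaces" and print the value field
awk -F': *' '/Serial Number/ {print $2}' smartctl.txt
```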

Initialize the new drive

After booting with the new blank drive in, I copied the partition table using parted.

First, I took a look at what the partition table looks like on the good drive:

$ parted /dev/sda
unit s

and created a new empty one on the replacement drive:

$ parted /dev/sdb
unit s
mktable gpt

then I ran mkpart for all 4 partitions and made them all the same size as the matching ones on /dev/sda.

Finally, I ran toggle 1 bios_grub (boot partition) and toggle X raid (where X is the partition number) for all RAID partitions, before verifying using print that the two partition tables were now the same.
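The same steps can also be done non-interactively with parted's -s flag. A hedged sketch: the sector values are placeholders (copy the real start/end sectors for each of the four partitions from the "unit s" + "print" output on /dev/sda), and the commands are only echoed here rather than executed:

```shell
DISK=/dev/sdb   # the replacement drive

# Print each parted invocation instead of running it (dry run)
emit() { echo "parted -s $DISK $*"; }

emit mkpart primary 2048s 4095s        # partition 1 (boot), placeholder sectors
emit toggle 1 bios_grub                # mark the boot partition
emit mkpart primary 4096s 488396799s   # partition 2, placeholder sectors
emit toggle 2 raid                     # mark a RAID partition
```

Dropping the echo (running the parted commands directly) applies the layout for real; finish with "parted -s /dev/sdb print" to compare against /dev/sda.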

Resync/recreate the RAID arrays

To sync the data from the good drive (/dev/sda) to the replacement one (/dev/sdb), I ran the following on my RAID1 partitions:

mdadm /dev/md0 -a /dev/sdb2
mdadm /dev/md2 -a /dev/sdb4

and kept an eye on the status of this sync using:

watch -n 2 cat /proc/mdstat

In order to speed up the sync, I used the following trick:

blockdev --setra 65536 "/dev/md0"
blockdev --setra 65536 "/dev/md2"
echo 300000 > /proc/sys/dev/raid/speed_limit_min
echo 1000000 > /proc/sys/dev/raid/speed_limit_max

Then, I recreated my RAID0 swap partition like this:

mdadm /dev/md1 --create --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
mkswap /dev/md1

Because the swap partition is brand new (you can't restore a RAID0, you need to re-create it), I had to update two things:

  • replace the UUID for the swap mount in /etc/fstab, with the one returned by mkswap (or running blkid and looking for /dev/md1)
  • replace the UUID for /dev/md1 in /etc/mdadm/mdadm.conf with the one returned for /dev/md1 by mdadm --detail --scan
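The fstab half of that update can be scripted with sed. A sketch against a scratch copy of the file; both UUIDs are placeholders, and on the real machine you would edit /etc/fstab using the UUID printed by mkswap (or by blkid /dev/md1):

```shell
OLD_UUID="00000000-1111-2222-3333-444444444444"   # placeholder: UUID before the rebuild
NEW_UUID="aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"   # placeholder: UUID from the new mkswap

# Scratch copy standing in for /etc/fstab
printf 'UUID=%s none swap sw 0 0\n' "$OLD_UUID" > fstab_copy

# Swap the old UUID for the new one in place
sed -i "s/UUID=$OLD_UUID/UUID=$NEW_UUID/" fstab_copy
cat fstab_copy
```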

Ensuring that I can boot with the replacement drive

In order to be able to boot from both drives, I reinstalled the grub boot loader onto the replacement drive:

grub-install /dev/sdb

before rebooting with both drives to first make sure that my new config works.

Then I booted without /dev/sda to make sure that everything would be fine should that drive fail and leave me with just the new one (/dev/sdb).

This test obviously gets the two drives out of sync, so I rebooted with both drives plugged in and then had to re-add /dev/sda to the RAID1 arrays:

mdadm /dev/md0 -a /dev/sda2
mdadm /dev/md2 -a /dev/sda4

Once that finished, I rebooted again with both drives plugged in to confirm that everything is fine:

cat /proc/mdstat

Then I ran a full SMART test over the new replacement drive:

smartctl -t long /dev/sdb

Planet Debian – Mateus Bellomo: Send/receive text messages to buddies

Some weeks ago I implemented the option to send a text message from telepathy-resiprocate in Empathy. At that time I implemented it in the apps/telepathy/TextChannel class, which wasn’t ideal. Now, with a better understanding of the resip/recon and resip/dum APIs, I was able to move this implementation there.

Besides that, I have also implemented the option to receive a text message. For that I made some changes to the resip/recon/ConversationManager and resip/recon/UserAgent classes, among others.

The complete changes can be seen at [1]. This branch also holds modifications related to sending/receiving presence. This is necessary since, to send a message to a contact, he/she should be online.

There is still work to be done specially checking the possible error cases but at least we could see a first prototype working. Follow some images:

textChannel_Jitsi – Account logged in with Jitsi



textChannel_Empathy – Account logged in with Empathy using telepathy-resiprocate



Planet Linux Australia – Simon Lyall: Gather Conference 2016 – Morning

At the Gather Conference again, for about the sixth time. It is a one-day tech-orientated unconference held in Auckland every year.

The day is split into seven streamed sessions, each 40 minutes long (about 8 parallel rooms of events, each scheduled and run by attendees), plus an opening and a keynote session.

How to Steer your own career – Shirley Tricker

  • Asked people hands-up about their current job situation: FT vs PT, single vs multiple jobs
  • Alternatives to traditional careers of work. possible to craft your career
  • Recommended Blog – Free Range Humans
  • Job vs Career
    • Job – something you do for somebody else
    • Career – unique to you, your life’s work
    • Career – What you do to make a contribution
  • Predicted that a greater number of people will not stay with one (or even 2 or 3) employers through their career
  • Success – defined by your goals, lifestyle wishes
  • What are your strengths – Know how you are valuable, what you can offer people/employers, ways you can branch out
  • Hard and Soft Skills (soft skills defined broadly, things outside a regular job description)
  • Develop soft skills
    • List skills and review ways to develop and improve them
    • Look at people you admire and copy them
    • Look at job descriptions
  • Skills you might need for a portfolio career
    • Good at organising, marketing, networking
    • flexible, work alone, negotiation
    • Financial literacy (handle your accounts)
  • Getting started
    • Start small ( don’t give up your day job overnight)
    • Get training via work or independently
    • Develop your strengths
    • Fix weaknesses
    • Small experiments
    • cheap and fast (start a blog)
    • Don’t have to start out as an expert, you can learn as you go
  • Just because you are in control doesn’t make it easy
  • Resources
    • Seth Godin
    • Tim Ferriss
    • eg outsources her writing.
  • Tools
    • Xero
    • WordPress
    • Canva for images
    • Meetup
    • Odesk and other freelance websites
  • Feedback from Audience
    • Have somebody to report to, eg meet with friend/adviser monthly to chat and bounce stuff off
    • Cultivate Women’s mentoring group
    • This doesn’t seem to filter through to young people, they feel they have to pick a career at 18 and go to university to prep for that.
    • Give advice to people and this helps you define
    • Try and make the world a better place: enjoy the work you are doing, be happy and proud of the outcome of what you are doing and be happy that it is making the world a bit better
    • How do I “motivate myself” without a push from my employer?
      • Do something that you really want to do so you won’t need external motivation
      • Find someone who is doing something right and see what they did
      • Awesome for introverts
    • If you want to start a startup then work for one to see what it is like and learn skills
    • You don’t have to have a startup in your 20s, you can learn your skills first.
    • Sometimes you have to do a crappy job at the start to get onto the cool stuff later. You have to look at the goal or path sometimes

Books and Podcasts – Tanya Johnson

Stuff people recommend

  • Intelligent disobedience – Ira
  • Hamilton the revolution – based on the musical
  • Never Split the difference – Chris Voss (ex hostage negotiator)
  • The Three-Body Problem – Liu Cixin – sci-fi series
  • Lucky Peach – Food and fiction
  • Unlimited Memory
  • The Black Swan and Fooled by Randomness
  • The Setup (website)
  • Tim Ferriss Podcast
  • Freakonomics Podcast
  • Moonwalking with Einstein
  • Clothes, Music, Boy – Viv Albertine
  • TIP: Amazon Whispersync for Kindle App (audiobook across various platforms)
  • TIP: Blinkist – 15 minute summaries of books
  • An Intimate History of Humanity – Theodore Zeldin
  • How to Live – Sarah Bakewell
  • TIP: Pocketcasts is a good podcast app for Android.
  • Tested Podcast from Mythbusters people
  • Trumpcast podcast from Slate
  • A Fighting Chance – Elizabeth Warren
  • The Choice – Og Mandino
  • The Good life project Podcast
  • The Ted Radio Hour Podcast (on 1.5 speed)
  • This American Life
  • How to be a Woman by Caitlin Moran
  • The Hard thing about Hard things books
  • Flashboys
  • The Changelog Podcast – Interview people doing Open Source software
  • The Art of Possibility – Rosamund Zander
  • Red Rising trilogy by Pierce Brown
  • On the Rag podcast by the Spinoff
  • Hamish and Andy podcast
  • Radiolab podcast
  • Hardcore History podcast
  • Car Talk podcast
  • Ametora – Story of Japanese menswear since WW2
  • .net rocks podcast
  • How not to be wrong
  • Savage Love Podcast
  • Friday Night Comedy from the BBC (especially the News Quiz)
  • Answer me this Podcast
  • Back to work podcast
  • Reply All podcast
  • The Moth
  • Serial
  • American Blood
  • The Productivity podcast
  • Keeping it 1600
  • Ruby Rogues Podcast
  • Game Change – John Heilemann
  • The Road Less Travelled – M. Scott Peck
  • The Power of Now
  • Snow Crash – Neal Stephenson

My Journey to becoming a Change Agent – Suki Xiao

  • At the start of 2015 she was a policy adviser at a Ministry
  • Didn’t feel connected to the job or to the people she was making policies for
  • Outside of work was a Youthline counsellor
  • Wanted to make a difference, organised some internal talks
  • Wanted to make changes, got told had to be a manager to make changes (10 years away)
  • Found out about R9 accelerator. Startup accelerator looking at Govt/Business interaction and pain points
  • Got seconded to it
  • First month was very hard.
  • Speed of change was difficult, “Lean into the discomfort” – Team motto
  • Be married to the problem
    • Specific problem was making sure there were enough seasonal workers; came up with a solution but customers didn’t like it. Was not solving the actual problem customers had.
    • Team was married to the problem, not married to the solution
  • When went back to old job, found slower pace hard to adjust back
  • Got offered a job back at the accelerator, coaching up to 7 teams.
    • Very hard work, lots of work, burnt out
    • 50% pay cut
    • Worked out wasn’t “Agile” herself
    • Started doing personal Kanban boards
    • Cut back number of teams coaching, higher quality
  • Spring Board
    • Place can work at sustainable pace
    • Working at Nomad 8 as an independent Agile consultant
    • Works for separate companies but with some support from colleagues
  • Find my place
    • Joined Xero as an Agile Team Facilitator
  • Takeaways
    • Anybody can be a change agent
    • An environment that supports and empowers
    • Look for support
  • Conversation on how you overcome the “Everest” big huge goal
    • Hard to get past the first step for some – the speaker found she tended to do first, think later. Others over-thought beforehand
    • It seems hard but think of the hard things you have done in your life and it is usually not as bad
    • Motivate yourself by having no money and having no choice
    • Point all the bad things out in the open, visualise them all, and feel better because they will rarely happen
    • Learn to recognise your bad patterns of thoughts
    • “The War of Art” – Steven Pressfield (skip the Angels chapter)
  • Are places serious about Agile, or just paying lip-service?
    • Questioner was older and found places wanted younger Agile coaches
    • Companies had to completely change their organisation, eg replace project managers
    • eg CEO is still waterfall but people lower down are into Agile. Not enough management buy-in.
    • Speaker left one client that wasn’t serious about changing
  • Went through an Agile process, made “Putting Agile into the Org” the product
  • Show customers what the value is
  • Certification advice: all sorts of options. The Nomad8 course is recommended



Cryptogram – Friday Squid Blogging: Sperm Whale Eats Squid

A post-mortem of a stranded sperm whale shows that he had recently eaten squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

TED – Why TED takes two weeks off every summer is about to go dark for two weeks. No new TED Talks will be posted until Monday, August 8, 2016, while most of the TED staff takes a two-week vacation. Yes, we all (or almost all) go on vacation at the same time. No, we don’t all go to the same place.

We’ve been doing it this way now for seven years. Our summer break is a little hack that solves the problem of an office full of Type-A’s with raging FOMO. We avoid the fear of missing out on emails and new projects and blah blah blah … by making sure that nothing is going on.

I love how the inventor of this holiday, TED’s founding head of media June Cohen, once explained it: “When you have a team of passionate, dedicated overachievers, you don’t need to push them to work harder, you need to help them rest. By taking the same two weeks off, it makes sure everyone takes vacation,” she says. “Planning a vacation is hard — most of us still feel a little guilty to take two weeks off, and we’d be likely to cancel when something inevitably came up. This creates an enforced rest period, which is so important for productivity and happiness.”

Bonus: “It’s efficient,” she says. “In most companies, people stagger their vacations through the summer. But this means you can never quite get things done all summer long. You never have all the right people in the room.”

So, as the bartender said: You don’t have to go home, but you can’t stay here. We won’t post new TED Talks for the next two weeks. The office is (mostly) empty. And we stay off email. The whole point is that vacation time should be truly restful, and we should be able to recharge without having to check in or worry about what we’re missing back at the office.

See you on Monday, August 8!

Note: This piece was first posted on July 17, 2014. It was updated on July 27, 2015, and again on July 20, 2016.

TED – An advanced prosthetic arm, an underground park, and the case for taking a vacation


The TED community has been very busy over the past few weeks. Below, some newsy highlights.

A major upgrade for prosthetics. After nearly a decade in development, Dean Kamen’s prosthetic arm is finally nearing its commercial launch two years after its approval by the FDA. Developed for wounded soldiers at the behest of the United States Department of Defense, the LUKE arm (named for Luke Skywalker’s own prosthetic) is a major advancement for a device that has remained more or less unchanged since the Civil War. Using electrodes, LUKE picks up electrical signals from the user’s muscles, making it much more intuitive than traditional prosthetics, and it also offers wearers greater strength, dexterity, and flexibility. While the commercial launch is set for late 2016 through Mobius Bionics, a medical device company focused on advanced bionics, the cost has not yet been announced. (Watch Dean’s TED Talk)

An underground haven. Seven years ago, over copious amounts of wine, Dan Barasch and James Ramsey hatched an unheard-of plan: an underground park. Nicknamed the Lowline after its sister greenspace the High Line (a name that eventually stuck), the park planned to repurpose a deserted trolley terminal and serve as a place of respite and greenery in the heart of one of New York City’s busiest neighborhoods. While the idea captured the imagination (and donations) of the public, with visitors flocking to a proof-of-concept exhibit, the project had not received official approval from the city — until now. On July 14, the city provisionally approved use of the space, requiring that the project raise a cool $10 million and submit plans within the next 12 months. (Watch Dan’s TED Talk)

Service for trust. “It is clear to me that you don’t have to wear a military uniform to serve your country,” writes Stanley McChrystal, chair of the Service Year Alliance and former commander of US and international forces in Afghanistan, in The Atlantic. McChrystal proposes that service may be at the heart of restoring trust in a country where it has reached its lowest levels in generations. At a time when “tensions and violence in cities across America are reminders of how quickly communities can erupt with an absence of social trust,” he believes that a service year would help reestablish political and civic responsibility while bringing together Americans of all backgrounds to learn to work together as a team. (Watch Stanley’s TED Talk)

A multidisciplinary search for extraterrestrial life. Nathalie Cabrol, director of the Carl Sagan Center for Research at the SETI Institute, proposed a broader approach to the search for extraterrestrial life in a paper published in Astrobiology on July 7. “To find ET, we must open our minds beyond a deeply rooted, Earth-centric perspective, expand our research methods and deploy new tools,” she writes. To push us beyond our anthropocentric vision of extraterrestrial life, she promotes the establishment of a Virtual Institute that will engage the global scientific community. SETI will be exploring resources for the Virtual Institute over the coming months. (Watch Nathalie’s TED Talk)

Women helping women. Sheryl Sandberg’s talk at TEDWomen 2011 launched the Lean In movement, and her latest initiative takes that original vision a step further.  Together Women Can encourages women to help each other succeed by mentoring and becoming allies with their female colleagues, a mission that runs contrary to popular myth but perhaps not reality. In an op-ed for The New York Times, coauthored by fellow TED speaker Adam Grant, Sandberg takes on the myth of the catty woman. (Watch Sheryl’s and Adam’s TED Talks)

Go ahead, take a break. It’s easy to feel guilty about taking a vacation, but it’s much more than a luxury or indulgence, it’s a necessity. In the Harvard Business Review, Shawn Achor and Michelle Gielan make the data-driven case for taking a vacation using their own research and results from a new study by Project: Time Off. Take note: the duo found that if you plan ahead, create social connections on the trip, go far from your place of work, and feel safe, “94% of vacations have a good ROI in terms of your energy and outlook upon returning to work.” (Watch Shawn’s TED Talk)

The flip side of art. Vik Muniz’s latest exhibition takes a look at a hidden side of famous artworks: their backs. Over 15 years and with a team of specialists by his side, Muniz has traveled widely to photograph the backsides of masterworks and carefully re-create them. “The back of the painting is a bit like the artist’s studio,” he says, “It’s a little dirty, it’s a little bit deceiving, but it also gives you a sense of intimacy.” (Watch Vik’s TED Talk)

Snapchat for women’s education. As the latest guest on The Late Late Show’s Carpool Karaoke sketch, Michelle Obama and host James Corden talk about her Let Girls Learn initiative in between raucous bouts of singing to Stevie Wonder, Beyoncé, and Missy Elliott. Let Girls Learn is focused on helping the 62 million girls around the world who are not in school for a variety of reasons surmount physical, cultural and financial barriers to receive an education. On the show, the First Lady announced her upcoming trip to Liberia, Morocco and Spain for the initiative, inviting viewers to follow along on her recently created Snapchat account. (Watch Michelle’s TED Talk)

An agile new drone. Raffaello D’Andrea mesmerized crowds at TED2016 with a live demo of drones circling the audience in a dazzling light show. On July 22, his team released a demo on YouTube for their IDSC tail-sitter. Combining efficient forward flight with hover capabilities, the IDSC tail-sitter is capable of agile movement while remaining robust. (Watch Raffaello’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

Cryptogram – Cyberweapons vs. Nuclear Weapons

Good essay pointing out the absurdity of comparing cyberweapons with nuclear weapons.

On the surface, the analogy is compelling. Like nuclear weapons, the most powerful cyberweapons -- malware capable of permanently damaging critical infrastructure and other key assets of society -- are potentially catastrophically destructive, have short delivery times across vast distances, and are nearly impossible to defend against. Moreover, only the most technically competent of states appear capable of wielding cyberweapons to strategic effect right now, creating the temporary illusion of an exclusive cyber club. To some leaders who matured during the nuclear age, these tempting similarities and the pressing nature of the strategic cyberthreat provide firm justification to use nuclear deterrence strategies in cyberspace. Indeed, Cold War-style cyberdeterrence is one of the foundational cornerstones of the 2015 U.S. Department of Defense Cyber Strategy.

However, dive a little deeper and the analogy becomes decidedly less convincing. At the present time, strategic cyberweapons simply do not share the three main deterrent characteristics of nuclear weapons: the sheer destructiveness of a single weapon, the assuredness of that destruction, and a broad debate over the use of such weapons.

Sociological Images – English Acquisition Among Immigrants to the U.S.

Flashback Friday.

Is it true that Spanish-speaking immigrants to the United States resist assimilation?

Not if you judge by language acquisition and compare them to earlier European immigrants. The sociologist Claude S. Fischer, at Made in America, offers this data:

The bottom line represents the percentage of English-speakers among the wave of immigrants counted in the 1900, 1910, and 1920 census. It shows that less than half of those who had been in the country five years or less could speak English. This jumped to almost 75% by the time they were here six to ten years and the numbers keep rising slowly after that.

Fast forward 80 years. Immigrants counted in the 1980, 1990, and 2000 Census (the top line) outpaced earlier immigrants by more than 25 percentage points. Among those who have just arrived, almost as many can speak English as earlier immigrants who’d been here between 11 and 15 years.

If you look just at Spanish speakers (the middle line), you’ll see that the numbers are slightly lower than all recent immigrants, but still significantly better than the previous wave. Remember that some of the other immigrants are coming from English-speaking countries.

Fischer suggests that the ethnic enclave is one of the reasons that the wave of immigrants at the turn of the 20th century learned English more slowly:

When we think back to that earlier wave of immigration, we picture neighborhoods like Little Italy, Greektown, the Lower East Side, and Little Warsaw – neighborhoods where as late as 1940, immigrants could lead their lives speaking only the language of the old country.

Today, however, immigrants learn to speak with those outside of their own group more quickly, suggesting that all of the flag waving to the contrary is missing the big picture.

Originally posted in 2010.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.


Planet Debian – Norbert Preining: Yukio Mishima – The Temple of the Golden Pavilion

A masterpiece of modern Japanese literature: Yukio Mishima (三島由紀夫) The Temple of the Golden Pavilion (金閣寺). The fictional story about the very real arson attack that destroyed the Golden Pavilion in 1950.

A somewhat different treatise on beauty and ugliness!

How shall I put it? Beauty – yes, beauty is like a decayed tooth. It rubs against one’s tongue, it hangs there, hurting one, insisting on its own existence, finally […] the tooth extracted. Then, as one looks at the small, dirty, brown, blood-stained tooth lying in one’s hand, one’s thoughts are likely to be as follows: ‘Is this it?’

Mizoguchi, a stutterer, is from his young years taken by a near-mystical image of the Golden Pavilion, influenced by his father, who considers it the most beautiful object in the world. After his father’s death he moves to Kyoto and becomes an acolyte in the temple. He develops a friendship with Kashiwagi, who uses his clubfeet to make women feel sorry for him and fall in love with him – with his clubfeet, as he puts it. Kashiwagi also puts Mizoguchi onto the first tracks of amorous experience, but Mizoguchi invariably turns out to be impotent – not due to his stuttering, but due to the image of the Golden Pavilion appearing at the essential moment and destroying every chance.

Yes, this was really the coast of the Sea of Japan! Here was the source of all my unhappiness, of all my gloomy thoughts, the origin of all my ugliness and all my strength. It was a wild sea.

Mizoguchi becomes more and more troubled about his relation with the head monk, neglects his studies, and after a stark reprimand escapes to the north coast, from where he is brought back by police to the temple. He decides to burn down the Golden Pavilion, which has taken more and more command of his thinking and doing. He carries out the deed with the aim of burning himself in the top floor, but escapes at the last second and retreats into the hills to watch the spectacle.

Closely based on the true story of the arsonist of the Golden Pavilion, whom Mishima even visited in prison, the book is a treatise on beauty and ugliness.

At his trial he [the real arsonist] said: “I hate myself, my evil, ugly, stammering self.” Yet he also said that he did not in any way regret having burned down the Kinkakuji.
– Nancy Wilson Ross, in the preface

Mishima is a master at showing these two extremes by contrasting the refined qualities of Japanese culture – flower arrangement, playing the shakuhachi, … – with sudden outbursts of contrasting behavior: cold and brutal recklessness. Take, for example, the scene where Kashiwagi is arranging flowers, stolen by Mizoguchi from the temple grounds, while Mizoguchi plays the flute. They also discuss koans and various interpretations. Enter the ikebana teacher, Kashiwagi’s mistress. She congratulates Kashiwagi on his excellent arrangement, which he answers coldly by ending their relationship, as teacher as well as mistress, and formally telling her never to see him again. She, still kneeling ceremonially, suddenly destroys the flower arrangement, only to be beaten and thrown out by Kashiwagi. Beauty and harmony have turned to ugliness and hate within seconds.

Beauty and ugliness: two sides of the same coin, or inherently the same, depending only on the point of view. Mishima ingeniously plays with this duality and leads us through the slow and painful development of Mizoguchi to the bitter end, which finally gives him freedom – freedom from the force of beauty. Sometimes, seeing how obsessed our society is with beauty, I cannot get rid of the feeling that there are far more Mizoguchis at heart than we think.

Planet DebianPaul Tagliamonte: HOPE 11

I’ll be at HOPE 11 this year - if anyone else will be around, feel free to send me an email! I won’t have a phone on me (so texting only works if you use Signal!)

Looking forward to a chance to see everyone soon!

Worse Than FailureError'd: Wait...Press What?!

"Um, I'm not sure the programmers and the engineers were working together on this one," wrote Rob.


"Apparently, not having any problems is a problem itself," writes Chris F.


Max wrote, "It's ok AT&T, I test in prod sometimes too."


"To get online in Iceland you need to seriously man up...or stop translating sites with Google," Ivan writes.


Daniel wrote, "To avoid any delays to your journey, Red Funnel has redefined the term On Time."


"Feedback goes right into the bit bucket? I guess I shouldn't be too surprised," writes Carl.


Tom writes, "Whilst trying (and failing) to upgrade my phone for the second time in as many days, I decided to give EE's live chat a go. They really want to know what my account type is."



Planet DebianRussell Coker: 802.1x Authentication on Debian

I recently had to setup some Linux workstations with 802.1x authentication (described as “Ethernet authentication”) to connect to a smart switch. The most useful web site I found was the Ubuntu help site about 802.1x Authentication [1]. But it didn’t describe exactly what I needed so I’m writing a more concise explanation.

The first thing to note is that the authentication mechanism works the same way as 802.11 wireless authentication, so it’s a good idea to have the wpasupplicant package installed on all laptops just in case you need to connect to such a network.

The first step is to create a wpa_supplicant config file; I named mine /etc/wpa_supplicant_SITE.conf. The file needs contents like the following:

 ap_scan=0
 network={
   key_mgmt=IEEE8021X
   eap=PEAP
   identity="USERNAME"
   phase2="auth=CHAP"
   password="PASS"
 }

The first difference between what I use and the Ubuntu example is that I’m using “eap=PEAP“; that depends on how the network is configured, and whoever runs your switch can tell you the correct settings. The next difference is that I’m using “auth=CHAP” where the Ubuntu example has “auth=PAP“. The difference between those protocols is that CHAP uses a challenge-response exchange while PAP just sends the password (maybe encrypted) over the network. If whoever runs the network says that they “don’t store unhashed passwords” or makes any similar claim then they are almost certainly using CHAP.

Change USERNAME and PASS to your user name and password.
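The challenge-response idea behind CHAP can be sketched in a few lines. This is a toy illustration only: std::hash stands in for the MD5 digest that real CHAP (RFC 1994) computes over the identifier, secret, and challenge, and none of these names come from wpa_supplicant.

```cpp
#include <cassert>
#include <cstddef>
#include <functional>
#include <string>

// Toy CHAP-style exchange. The authenticator sends a fresh challenge; the
// peer answers with a digest of the shared secret and that challenge, so
// the secret itself never crosses the wire.
using Digest = std::size_t;

Digest chap_response(const std::string& secret, const std::string& challenge) {
    // real CHAP uses MD5(identifier || secret || challenge); std::hash is a stand-in
    return std::hash<std::string>{}(secret + challenge);
}

// The authenticator knows the same secret and recomputes the expected digest.
bool authenticator_check(const std::string& secret,
                         const std::string& challenge,
                         Digest response) {
    return chap_response(secret, challenge) == response;
}
```

With PAP, by contrast, the peer would simply send the secret itself; here a captured response is useless once the authenticator issues a new challenge.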

wpa_supplicant -c /etc/wpa_supplicant_SITE.conf -D wired -i eth0

The above command can be used to test the operation of wpa_supplicant.

Successfully initialized wpa_supplicant
eth0: Associated with 00:01:02:03:04:05
eth0: CTRL-EVENT-EAP-STARTED EAP authentication started
eth0: CTRL-EVENT-EAP-PROPOSED-METHOD vendor=0 method=25
TLS: Unsupported Phase2 EAP method 'CHAP'
eth0: CTRL-EVENT-EAP-METHOD EAP vendor 0 method 25 (PEAP) selected
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
eth0: CTRL-EVENT-EAP-PEER-CERT depth=0 subject=''
EAP-MSCHAPV2: Authentication succeeded
EAP-TLV: TLV Result - Success - EAP-TLV/Phase2 Completed
eth0: CTRL-EVENT-EAP-SUCCESS EAP authentication completed successfully
eth0: CTRL-EVENT-CONNECTED - Connection to 00:01:02:03:04:05 completed [id=0 id_str=]

Above is the output of a successful test with wpa_supplicant. I replaced the MAC of the switch with 00:01:02:03:04:05. Strangely it doesn’t like “CHAP” but automatically selects “MSCHAPV2” and works; maybe anything other than “PAP” would do.

auto eth0
iface eth0 inet dhcp
  wpa-driver wired
  wpa-conf /etc/wpa_supplicant_SITE.conf

Above is a snippet of /etc/network/interfaces that works with this configuration.

CryptogramDARPA Document: "On Countering Strategic Deception"

Old, but interesting. The document was published by DARPA in 1973, and approved for release in 2007. It examines the role of deception in strategic warning systems, and possible actions to protect against strategic foreign deception.

Planet DebianDirk Eddelbuettel: RcppCCTZ 0.0.5

Version 0.0.5 of RcppCCTZ arrived on CRAN a couple of days ago. It reflects an upstream fix made a few weeks ago. CRAN tests revealed that g++-6 was tripping over one missing #define; this was added upstream and I subsequently synchronized with upstream. At the same time the set of examples was extended (see below).

Somehow useR! 2016 got in the way, and while working on the then-incomplete examples while traveling I forgot to release this until CRAN reminded me that their tests still failed. I promptly prepared the 0.0.5 release but somehow failed to update NEWS files etc. They are correct in the repo but not in the shipped package. Oh well.

CCTZ is a C++ library for translating between absolute and civil times using the rules of a time zone. In fact, it is two libraries: one for dealing with civil time (human-readable dates and times) and one for converting between absolute and civil times via time zones. It requires only a proper C++11 compiler and the standard IANA time zone database which standard Unix, Linux, OS X, ... computers tend to have in /usr/share/zoneinfo. RcppCCTZ connects this library to R by relying on Rcpp.

Two good examples are now included, and shown here. The first one tabulates the time difference between New York and London (at a weekly level for compactness):

R> example(tzDiff)

tzDiffR> # simple call: difference now
tzDiffR> tzDiff("America/New_York", "Europe/London", Sys.time())
[1] 5

tzDiffR> # tabulate difference for every week of the year
tzDiffR> table(sapply(0:52, function(d) tzDiff("America/New_York", "Europe/London",
tzDiff+                                       as.POSIXct(as.Date("2016-01-01") + d*7))))

 4  5 
 3 50 

Because the two continents happen to spring forward and fall backwards between regular and daylight savings times on different dates, there are, respectively, two- and one-week periods where the difference is one hour less than usual.

A second example shifts the time to a different time zone:

R> example(toTz)

toTzR> toTz(Sys.time(), "America/New_York", "Europe/London")
[1] "2016-07-14 10:28:39.91740 CDT"

Note that because we return a POSIXct object, it is printed by R with the default (local) TZ attribute (for "America/Chicago" in my case). A more direct example asks what time it is in my time zone when it is midnight in Tokyo:

R> toTz(ISOdatetime(2016,7,15,0,0,0), "Japan", "America/Chicago")
[1] "2016-07-14 15:00:00 CDT"

More changes will come in 0.0.6 as soon as I find time to translate the nice time_tool (command-line) example into an R function.

Changes in this version are summarized here:

Changes in version 0.0.5 (2016-07-09)

  • New utility example functions toTz() and tzDiff()

  • Synchronized with small upstream change for additional #ifdef for compiler differentiation

We also have a diff to the previous version thanks to CRANberries. More details are at the RcppCCTZ page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMartin Michlmayr: Debian on Seagate Personal Cloud and Seagate NAS

The majority of NAS devices supported in Debian are based on Marvell's Kirkwood platform. This platform is quite dated now and can only run Debian's armel port.

Debian now supports the Seagate Personal Cloud and Seagate NAS devices. They are based on Marvell's Armada 370, a platform which can run Debian's armhf port. Unfortunately, even the Armada 370 is a bit dated now, so I would not recommend these devices for new purchases. If you have one already, however, you now have the option to run native Debian.

There are some features I like about the Seagate NAS devices:

  • Network console: you can connect to the boot loader via the network. This is useful to load Debian or to run recovery commands if needed.
  • Mainline support: the devices are supported in the mainline kernel.
  • Good contacts: Seagate engineer Simon Guinot is interested in Debian support and is a joy to work with. There's also a community for LaCie NAS devices (Seagate acquired LaCie).

If you have a Seagate Personal Cloud or Seagate NAS, you can follow the instructions on the Debian wiki.

If Seagate releases more NAS devices on Marvell's Armada platform, I intend to add Debian support.


Planet DebianVincent Fourmond: QSoas version 2.0 is out / QSoas paper

I thought it would come before that, but I've finally gotten around releasing version 2.0 of my data analysis program, QSoas !

It provides significant improvements to the fit interface, in particular for multi-buffer fits, with a “Multi” fit engine that performs very well for large multibuffer fits, a spreadsheet editor for fit parameters, and more usability improvements. It also features the definition of fits with distribution of values of one of the fit parameter, and new built-in fits. In addition, QSoas version 2.0 features new commands to derive data, to flag buffers and handle large multi-column datasets, and improvements of existing commands. The full list of changes since version 1.0 can be found there.

As before, you can download the source code from our website, and purchase the pre-built binaries following the links from that page too.

In addition, I am glad to announce that QSoas is now described in a recent publication, Fourmond, Anal. Chem., 2016, 88, 5050-5052. Please cite this publication if you used QSoas to process your data.

LongNowSeth Lloyd Seminar Tickets


The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Seth Lloyd on Quantum Computer Reality



Tuesday August 9, 02016 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats, join today! General Tickets $15


About this Seminar:

Seth Lloyd is a professor at MIT whose areas of research include quantum information and quantum computing. He will discuss the current state of quantum computer progress, where it stands in life’s long process of comprehending and harnessing information in the universe, and what the prospects are for the field over the next few decades.

Planet DebianOlivier Grégoire: Eighth week: creating an API in the Ring client library (LRC)

At the beginning of the week, I wasn't really using LRC to communicate with my client:
- The client calls a function in LRC, which calls my method, which in turn calls my program.
- The daemon emits a signal connected to a Qt slot in LRC; from there, I just emit another signal connected to a lambda function in the client.

I had never written an API before, and I began writing code without checking how it should be done. I needed to extract all the information from the map<s,s> sent by the daemon and present it in my API. After studying the code, I saw that LRC follows the KDE library coding policy, so I changed my architecture to follow the same policy. Basically, I needed to create a public and a private header using the D-Pointer pattern. The private header contains the slot that is connected to the daemon, plus all the private variables. The public header contains a signal, connected to a lambda function, that tells the client when some information has changed and needs to be refreshed. This header obviously contains all the getters too.
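
A rough sketch of that public/private split (a minimal stand-alone illustration: the real LRC code uses Qt signals and slots, whereas here a std::function callback stands in for the change signal, and all class and member names are hypothetical, not taken from LRC):

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>

// --- public header: what client code sees ---
class InfoModelPrivate;  // opaque forward declaration, the "d-pointer"

class InfoModel {
public:
    InfoModel();
    ~InfoModel();

    // getter for a value previously received from the daemon
    std::string get(const std::string& key) const;

    // the client registers a callback to be told when information changed
    void onChanged(std::function<void(const std::string&)> cb);

    // stand-in for the private slot that receives the daemon's map
    void receiveFromDaemon(const std::map<std::string, std::string>& data);

private:
    std::unique_ptr<InfoModelPrivate> d;  // all private state hidden here
};

// --- private header / implementation: hidden from client code ---
class InfoModelPrivate {
public:
    std::map<std::string, std::string> values;
    std::function<void(const std::string&)> changed;
};

InfoModel::InfoModel() : d(std::make_unique<InfoModelPrivate>()) {}
InfoModel::~InfoModel() = default;  // defined where InfoModelPrivate is complete

std::string InfoModel::get(const std::string& key) const {
    auto it = d->values.find(key);
    return it == d->values.end() ? std::string() : it->second;
}

void InfoModel::onChanged(std::function<void(const std::string&)> cb) {
    d->changed = std::move(cb);
}

void InfoModel::receiveFromDaemon(const std::map<std::string, std::string>& data) {
    for (const auto& kv : data) {
        d->values[kv.first] = kv.second;      // update private state first
        if (d->changed) d->changed(kv.first); // then tell the client to refresh
    }
}
```

Client code only ever includes the public header; the private class can change freely without breaking clients, which is the point of the pattern.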

I now have a functional API.

Next week I will work on the GNOME client so that it uses this new API.

Krebs on SecurityCanadian Man Behind Popular ‘Orcus RAT’

Far too many otherwise intelligent and talented software developers these days apparently think they can get away with writing, selling and supporting malicious software and then couching their commerce as a purely legitimate enterprise. Here’s the story of how I learned the real-life identity of a Canadian man who’s laboring under that same illusion as the proprietor of one of the most popular and affordable tools for hacking into someone else’s computer.

Earlier this week I heard from Daniel Gallagher, a security professional who occasionally enjoys analyzing new malicious software samples found in the wild. Gallagher said he and members of @malwrhunterteam and @MalwareTechBlog recently got into a Twitter fight with the author of Orcus RAT, a tool they say was explicitly designed to help users remotely compromise and control computers that don’t belong to them.

A still frame from a Youtube video showing Orcus RAT's keylogging ability to steal passwords from Facebook users and other credentials.

A still frame from a Youtube video demonstrating Orcus RAT’s keylogging ability to steal passwords from Facebook and other sites.

The author of Orcus — a person going by the nickname “Ciriis Mcgraw” a.k.a. “Armada” on Twitter and other social networks — claimed that his RAT was in fact a benign “remote administration tool” designed for use by network administrators and not a “remote access Trojan” as critics charged. Gallagher and others took issue with that claim, pointing out that they were increasingly encountering computers that had been infected with Orcus unbeknownst to the legitimate owners of those machines.

The malware researchers noted another reason that Mcgraw couldn’t so easily distance himself from how his clients used the software: He and his team are providing ongoing technical support and help to customers who have purchased Orcus and are having trouble figuring out how to infect new machines or hide their activities online.

What’s more, the range of features and plugins supported by Armada, they argued, go well beyond what a system administrator would look for in a legitimate remote administration client like TeamViewer, including the ability to launch a keylogger that records the victim’s every computer keystroke, as well as a feature that lets the user peek through a victim’s Web cam and disable the light on the camera that alerts users when the camera is switched on.

A new feature of Orcus announced July 7 lets users configure the RAT so that it evades digital forensics tools used by malware researchers, including an anti-debugger and an option that prevents the RAT from running inside of a virtual machine.

Other plugins offered directly from Orcus’s tech support page (PDF) and authored by the RAT’s support team include a “survey bot” designed to “make all of your clients do surveys for cash;” a “USB/.zip/.doc spreader,” intended to help users “spread a file of your choice to all clients via USB/.zip/.doc macros;” a “ checker” made to “check a file of your choice to see if it had been scanned on VirusTotal;” and an “Adsense Injector,” which will “hijack ads on pages and replace them with your Adsense ads and disable adblocker on Chrome.”


Gallagher said he was so struck by the guy’s “smugness” and sheer chutzpah that he decided to look closer at any clues that Ciriis Mcgraw might have left behind as to his real-world identity and location. Sure enough, he found that Ciriis Mcgraw also has a Youtube account under the same name, and that a video Mcgraw posted in July 2013 pointed to a 33-year-old security guard from Toronto, Canada.

Gallagher noticed that the video — a bystander recording on the scene of a police shooting of a Toronto man — included a link to the domain policereview[dot]info. A search of the registration records attached to that Web site name shows that the domain was registered to a John Revesz in Toronto and to the email address

A reverse WHOIS lookup ordered from shows the same address was used to register at least 20 other domains, including “,” “, revesztechnologies[dot]com,” and — perhaps most tellingly —  ““.

Johnrevesz[dot]com is no longer online, but this cached copy of the site from the indispensable includes his personal résumé, which states that John Revesz is a network security administrator whose most recent job in that capacity was as an IT systems administrator for TD Bank. Revesz’s LinkedIn profile indicates that for the past year at least he has served as a security guard for GardaWorld International Protective Services, a private security firm based in Montreal.

Revesz’s CV also says he’s the owner of the aforementioned Revesz Technologies, but it’s unclear whether that business actually exists; the company’s Web site currently redirects visitors to a series of sites promoting spammy and scammy surveys, come-ons and giveaways.


Contacted by KrebsOnSecurity, Revesz seemed surprised that I’d connected the dots, but beyond that did not try to disavow ownership of the Orcus RAT.

“Profit was never the intentional goal, however with the years of professional IT networking experience I have myself, knew that proper correct development and structure to the environment is no free venture either,” Revesz wrote in reply to questions about his software. “Utilizing my 15+ years of IT experience I have helped manage Orcus through its development.”

Revesz continued:

“As for your legalities question.  Orcus Remote Administrator in no ways violates Canadian laws for software development or sale.  We neither endorse, allow or authorize any form of misuse of our software.  Our EULA [end user license agreement] and TOS [terms of service] is very clear in this matter. Further we openly and candidly work with those prudent to malware removal to remove Orcus from unwanted use, and lock out offending users which may misuse our software, just as any other company would.”

Revesz said none of the aforementioned plugins were supported by Orcus, and were all developed by third-party developers, and that “Orcus will never allow implementation of such features, and or plugins would be outright blocked on our part.”

In an apparent contradiction to that claim, plugins that allow Orcus users to disable the Webcam light on a computer running the software and one that enables the RAT to be used as a “stresser” to knock sites and individual users offline are available directly from Orcus Technologies’ Github page.

Revesz also offers a service to help people cover their tracks online. Using his alter ego “Armada” on the hacker forum Hackforums[dot]net, Revesz also sells a “bulletproof dynamic DNS service” that promises not to keep records of customer activity.

Dynamic DNS services allow users to have Web sites hosted on servers that frequently change their Internet addresses. This type of service is useful for people who want to host a Web site on a home-based Internet address that may change from time to time, because dynamic DNS services can be used to easily map the domain name to the user’s new Internet address whenever it happens to change.
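Conceptually, a dynamic DNS provider maintains a mutable name-to-address table. The small sketch below models just that mapping, not the actual DNS protocol; the class name is invented, and the hostname and addresses come from the documentation example ranges.

```cpp
#include <cassert>
#include <map>
#include <string>

// A dynamic DNS provider is, at its core, a name -> address table that the
// domain owner may re-point whenever their Internet address changes.
class DynDnsTable {
    std::map<std::string, std::string> records;  // hostname -> current IP
public:
    // called by the owner's router or update client when the ISP assigns a new address
    void update(const std::string& host, const std::string& ip) {
        records[host] = ip;
    }
    // called by everyone else when resolving the name
    std::string resolve(const std::string& host) const {
        auto it = records.find(host);
        return it == records.end() ? std::string() : it->second;
    }
};
```

The name stays constant for visitors while the address behind it moves, which is exactly the property both home users and attackers rely on.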


Unfortunately, these dynamic DNS providers are extremely popular in the attacker community, because they allow bad guys to keep their malware and scam sites up even when researchers manage to track the attacking IP address and convince the ISP responsible for that address to disconnect the malefactor. In such cases, dynamic DNS allows the owner of the attacking domain to simply re-route the attack site to another Internet address that he controls.

Free dynamic DNS providers tend to report or block suspicious or outright malicious activity on their networks, and may well share evidence about the activity with law enforcement investigators. In contrast, Armada’s dynamic DNS service is managed solely by him, and he promises in his ad on Hackforums that the service — to which he sells subscriptions of various tiers for between $30-$150 per year — will not log customer usage or report anything to law enforcement.

According to writeups by Kaspersky Lab and Heimdal Security, Revesz’s dynamic DNS service has been seen used in connection with malicious botnet activity by another RAT known as Adwind.  Indeed, Revesz’s service appears to involve the domain “nullroute[dot]pw”, which is one of 21 domains registered to a “Ciriis Mcgraw,” (as well as orcus[dot]pw and orcusrat[dot]pw).

I asked Gallagher (the researcher who originally tipped me off about Revesz’s activities) whether he was persuaded at all by Revesz’s arguments that Orcus was just a tool and that Revesz wasn’t responsible for how it was used.

Gallagher said he and his malware researcher friends had private conversations with Revesz in which he seemed to acknowledge that some aspects of the RAT went too far, and promised to release software updates to remove certain objectionable functionalities. But Gallagher said those promises felt more like the actions of someone trying to cover himself.

“I constantly try to question my assumptions and make sure I’m playing devil’s advocate and not jumping the gun,” Gallagher said. “But I think he’s well aware that what he’s doing is hurting people, it’s just now he knows he’s under the microscope and trying to do and say enough to cover himself if it ever comes down to him being questioned by law enforcement.”

Cory DoctorowEFF is suing the US government to invalidate the DMCA’s DRM provisions

The Electronic Frontier Foundation has just filed a lawsuit that challenges the Constitutionality of Section 1201 of the DMCA, the “Digital Rights Management” provision of the law, a notoriously overbroad law that bans activities that bypass or weaken copyright access-control systems, including reconfiguring software-enabled devices (making sure your IoT light-socket will accept third-party lightbulbs; tapping into diagnostic info in your car or tractor to allow an independent party to repair it) and reporting security vulnerabilities in these devices.

EFF is representing two clients in its lawsuit: Andrew “bunnie” Huang, a legendary hardware hacker whose NeTV product lets users put overlays on DRM-restricted digital video signals; and Matthew Green, a heavyweight security researcher at Johns Hopkins who has an NSF grant to investigate medical record systems and whose research plans encompass the security of industrial firewalls and finance-industry “black boxes” used to manage the cryptographic security of billions of financial transactions every day.

Both clients reflect the deep constitutional flaws in the DMCA, and both have standing to sue the US government to challenge DMCA 1201 because of its serious criminal provisions (5 years in prison and a $500K fine for a first offense).

The US Trade Rep has propagated the DMCA’s anticircumvention rules to most of the world’s industrial nations, and a repeal in the US will strengthen the argument for repealing their international cousins.

Huang has written an inspirational essay explaining his reasons for participating in this suit, explaining that he feels it is his duty to future generations:

Our recent generation of Makers, hackers, and entrepreneurs have developed under the shadow of Section 1201. Like the parable of the frog in the well, their creativity has been confined to a small patch, not realizing how big and blue the sky could be if they could step outside that well. Nascent 1201-free ecosystems outside the US are leading indicators of how far behind the next generation of Americans will be if we keep with the status quo.

Our children deserve better.

I can no longer stand by as a passive witness to this situation. I was born into a 1201-free world, and our future generations deserve that same freedom of thought and expression. I am but one instrument in a large orchestra performing the symphony for freedom, but I hope my small part can remind us that once upon a time, there was a world free of such artificial barriers, and that creativity and expression go hand in hand with the ability to share without fear.

The EFF’s complaint, filed minutes ago with the US District Court, is as clear and comprehensible an example of legal writing as you could ask for. It builds on two recent Supreme Court precedents (Golan and Eldred), in which the Supremes stated that the only way to reconcile free speech with copyright’s ability to restrict who may utter certain words and expressions is fair use and other exemptions to copyright, which means that laws that don’t take fair use into account fail to pass constitutional muster.

In this decade, more and more companies have figured out that the DMCA gives them the right to control follow-on innovation and suppress embarrassing revelations about defects in their products; consequently, DMCA 1201-covered technologies have proliferated into cars and tractors, medical implants and home security systems, thermostats and baby-monitors.

With this lawsuit, the EFF has fired a starter pistol in the race to repeal section 1201 of the DMCA and its cousins all over the world: to legitimize the creation of commercial businesses that unlock the value in the gadgets you’ve bought that the original manufacturers want to hoard for themselves; to open up auditing and disclosure on devices that are disappearing into our bodies, and inside of which we place those bodies.

I’ve written up the lawsuit for the Guardian:

Suing on behalf of Huang and Green, EFF’s complaint argues that the wording of the statute requires the Library of Congress to grant exemptions for all conduct that is legal under copyright, including actions that rely on fair use, when that conduct is hindered by the ban on circumvention.

Critically, the supreme court has given guidance on this question in two rulings, Eldred and Golan, explaining how copyright law itself is constitutional even though it places limits on free speech; copyright is, after all, a law that specifies who may utter certain combinations of words and other expressive material.

The supreme court held that through copyright’s limits, such as fair use, it accommodates the first amendment. The fair-use safety valve is joined by the “idea/expression dichotomy”, a legal principle that says that copyright only applies to expressions of ideas, not the ideas themselves.

In the 2015 DMCA 1201 ruling, the Library of Congress withheld or limited permission for many uses that the DMCA blocks, but which copyright itself allows – activities that the supreme court has identified as the basis for copyright’s very constitutionality.

If these uses had been approved, people such as Huang and Green would not face criminal jeopardy. Because they weren’t approved, Huang and Green could face legal trouble for doing these legitimate things.


America’s broken digital copyright law is about to be challenged in court
[Cory Doctorow/The Guardian]

Why I’m Suing the US Government
[Andrew “bunnie” Huang]

Section 1201 of the DMCA Cannot Pass Constitutional Scrutiny

[Kit Walsh/EFF]

(Image: Bunnie Huang, Joi Ito, CC-BY)

Planet DebianReproducible builds folks: Reproducible builds: week 62 in Stretch cycle

What happened in the Reproducible Builds effort between June 26th and July 2nd 2016:

Read on to find out why we're lagging some weeks behind…!

GSoC and Outreachy updates

  • Ceridwen described using autopkgtest code to communicate with containers and how to test the container handling.

  • reprotest 0.1 has been accepted into Debian unstable, and any user reports, bug reports, feature requests, etc. would be appreciated. This is still an alpha release, and nothing is set in stone.

Toolchain fixes

  • Matthias Klose uploaded doxygen/1.8.11-3 to Debian unstable (closing #792201) with the upstream patch improving SOURCE_DATE_EPOCH support by using UTC as timezone when parsing the value. This was the last patch we were carrying in our repository, thus this upload obsoletes the version in our experimental repository.
  • cmake/3.5.2-2 was uploaded by Felix Geyer, which sorts file lists obtained with file(GLOB).
  • Dmitry Shachnev uploaded sphinx/1.4.4-2, which fixes a timezone related issue when SOURCE_DATE_EPOCH is set.

With the doxygen upload we are now down to only 2 modified packages in our repository: dpkg and rdfind.

Weekly reports delay and the future of statistics

To catch up with our backlog of weekly reports we have decided to skip some of the statistics for this week. We might publish them in a future report, or we might switch to a format where we summarize them more (and which we can create (even) more automatically), we'll see.

We are doing these weekly statistics because we believe it's appropriate and useful to credit people's work and make it more visible. What do you think? We would love to hear your thoughts on this matter! Do you read these statistics? Somewhat?

Actually, thanks to the power of notmuch, Holger came up with what you can see below, so what's missing for this week are the uploads fixing irreproducibilities. Which we really would like to show for the reasons stated above, and because we really, really need these uploads to happen ;-)

But then we also like to confirm the bugs are really gone, which (atm) requires manual checking, and to look for the words "reproducible" and "deterministic" (and spelling variations) in debian/changelogs of all uploads, to spot reproducible work not tracked via the BTS.

And we still need to catch up on the backlog of weekly reports.

Bugs submitted with reproducible usertags

It seems DebCamp in Cape Town was hugely successful and made some people get a lot of work done:

61 bugs have been filed with reproducible builds usertags and 60 of them had patches:

Package uploads, fixing one or more reproducible issues


misc.git/reports/bin/review-uploads gives back a list of uploads, which is correct, except that soundscaperenderer and reprotest are not relevant.

This is the list:

  • Bas Couwenberg: pdl 1:2.016-3 (source amd64) into unstable
  • James Cowgill: brainparty 0.61+dfsg-3 (source) into unstable
  • Sascha Steinbis: genometools 1.5.8+ds-4 (source all amd64) into unstable
  • intrigeri: libmemcached-libmemcached-perl 1.001801+dfsg-2 (source) into unstable
  • intrigeri: libextutils-parsexs-perl 3.300000-2 (source) into unstable
  • Dr. Tobias Quat: aspell-en 2016.06.26-0-0.1 (source all) into unstable
  • Elimar Riesebie: mailfilter 0.8.4-2 (source) into unstable
  • ChangZhuo Chen: hime 0.9.10+git20150916+dfsg1-8 (source amd64 all) into unstable
  • Simon McVittie: yquake2 5.34~dfsg1-1 (source) into unstable
  • Scott Kitterman: opendkim 2.11.0~alpha-3 (source amd64) into experimental
  • Axel Beckert: dpmb 0~2016.06.30 (source all) into unstable
  • intrigeri: libur-perl 0.440-3 (source) into unstable
  • intrigeri: latexdiff 1.1.1-2 (source) into unstable
  • ChangZhuo Chen: hime 0.9.10+git20150916+dfsg1-6 (source amd64 all) into unstable
  • Georges Khaznad: previsat (source amd64) into unstable
  • Julian Andres K: ndiswrapper 1.60-2 (source) into unstable
  • Orestis Ioannou: cloc 1.68-1.1 (source) into unstable
  • Markus Koschany: lordsawar 0.3.0-3 (source) into unstable
  • Nicolas Braud-S: syncthing 0.13.9+dfsg1-2 (source all amd64) into unstable
  • Eric Heintzmann: gnustep-base 1.24.9-2 (source all amd64) into unstable
  • Daniel Kahn Gil: gnupg2 2.1.13-3 (source) into experimental
  • ChangZhuo Chen: gcin 2.8.4+dfsg1-7 (source amd64 all) into unstable
  • Simon McVittie: openarena-textures 0.8.5split-8 (source) into unstable
  • Simon McVittie: openarena-players-mature 0.8.5split-8 (source) into unstable
  • Simon McVittie: openarena-players 0.8.5split-8 (source) into unstable
  • Gianfranco Cost: libsdl2-gfx 1.0.1+dfsg-4 (source) into unstable
  • Felix Geyer: cmake 3.5.2-2 (source) into unstable
  • ChangZhuo Chen: pacapt 2.3.8-2 (source all) into unstable
  • Simon McVittie: openarena-oacmp1 3-2 (source) into unstable
  • Simon McVittie: openarena-misc 0.8.5split-8 (source) into unstable
  • Simon McVittie: openarena-maps 0.8.5split-8 (source) into unstable
  • Simon McVittie: openarena-data 0.8.5split-8 (source) into unstable
  • Simon McVittie: openarena-088-data 0.8.8-6 (source) into unstable
  • Simon McVittie: openarena-085-data 0.8.5split-8 (source) into unstable
  • Simon McVittie: ostree 2016.6-2 (source) into unstable
  • Simon McVittie: flatpak 0.6.6-2 (source) into unstable
  • Vagrant Cascadi: u-boot 2016.03+dfsg1-6 (source) into unstable
  • intrigeri: libwx-perl 1:0.9928-1 (source) into unstable
  • intrigeri: libur-perl 0.440-2 (source) into unstable
  • Simon McVittie: openarena 0.8.8-16 (source) into unstable
  • Simon McVittie: ioquake3 1.36+u20160616+dfsg1-1 (source) into unstable
  • Matthias Klose: doxygen 1.8.11-3 (source amd64 all) into unstable
  • Al Stone: libbrahe 1.3.2-6 (source amd64) into unstable
  • Clint Adams: libmsv 1.1-2 (source) into unstable
  • Sébastien Ville: slicot 5.0+20101122-3 (source) into unstable
  • Martin Pitt: media-player-info 22-3 (source all) into unstable
  • intrigeri: libglib-perl 3:1.321-1 (source) into unstable
  • Sébastien Ville: lapack 3.6.1-1 (source) into unstable
  • intrigeri: libmarpa-r2-perl 2.086000~dfsg-6 (source) into unstable
  • intrigeri: libgtk2-perl 2:1.2498-2 (source) into unstable
  • intrigeri: libgnome2-perl 1.046-3 (source) into unstable
  • gregor herrmann: libnet-tclink-perl 3.4.0-9 (source) into unstable
  • gregor herrmann: libembperl-perl 2.5.0-7 (source) into unstable

Package reviews

437 new reviews have been added (though most of them just linked an existing bug; "only" 56 new issues in packages were found), an unknown number have been updated, and 60 have been removed this week, adding to our knowledge about identified issues.

4 new issue types have been found.

Weekly QA work

98 FTBFS bugs have been reported by Chris Lamb and Santiago Vila.

diffoscope development

strip-nondeterminism development

  • Chris Lamb made sure that .zhfst files are treated as ZIP files.

  • Mattia Rizzolo uploaded pbuilder/0.225.1~bpo8+1 to jessie-backports and it has been installed on all build nodes. As a consequence all armhf and i386 builds will be done with eatmydata; this will hopefully cut down the build time by a noticeable factor.


This week's edition was written by Mattia Rizzolo, Reiner Herrmann, Ceridwen and Holger Levsen and reviewed by a bunch of Reproducible builds folks on IRC.

Google Adsense: Big events cause big spikes; use them to grow your business.

Across the world, users flock online to view, share and talk about big things that matter to them. From Royal Weddings to World Cups, when big events come around, they make waves online. Spotting and predicting these spikes in web traffic can give you an opportunity to grow your web business, capitalizing on the increase of users online by drawing them to your content.

Recent events and the spikes they created:
  • The 2012 Olympics website received 431 million online visits
  • The Royal Wedding was tweeted about 237 times per second
  • The Cricket World Cup 2015 was searched for 323 million times
  • Searches for the Tour De France in 2015 increased by a factor of 50 during the event
  • There were 1.55m tweets using #supportyourteam during London 2012
  • 90 million people filled out an NBA ‘bracket’ online in 2014
Big events that generate global interest, particularly sports events, have historically created surges in web traffic. By examining these trends, you can know when the spikes are coming and create the right content to capture that crowd.

What does this mean for AdSense publishers?

Understanding the spikes and when they might happen is invaluable information for any online content creator. If web traffic has increased due to interest around a particular event, it stands to reason that publishers who incorporate related content into their sites are more likely to draw the crowds.

For example, say an AdSense publisher runs a food blog and a large sports event is trending. This publisher may choose to harness that spike and write a piece about food inspired by the host city or nation, or even focus on restaurants in the host city for those attending. Linking the site’s content to this trending event could lead to more traffic to this publisher’s site and could result in increased revenue.

By predicting and reacting to web traffic spikes, AdSense publishers can create relevant content and stand the best chance of drawing the crowds in a crowded marketplace.

To start learning from past spikes and how they could influence your upcoming content, take a look at Google Trends.

This tool allows you look at what users are searching for on a global scale. You can select topic areas and drill down into regions for those topics, enabling you to find data relevant to your audience. Once you’ve established the kinds of spikes certain events create, you can create relevant content and harness the insights you find to predict what spikes may happen in the future.

Whatever your site’s focus, web traffic spikes can have a huge impact on your growth. Start exploring the data now and think about how your content can adapt to take advantage of these moments.

Posted by Jay Castro, AdSense Content Marketing Specialist


Planet Debian: Chris Lamb: Python quirk: Signatures are evaluated at import time

Every Python programmer knows to avoid mutable default arguments:

def fn(mutable=[]):
    mutable.append('elem')
    print mutable

$ python
>>> fn()
['elem']
>>> fn()
['elem', 'elem']

However, many are not aware that this happens because default arguments are evaluated when the function is defined, typically at import time, rather than each time the function is called.
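For reference, the usual defence is a None sentinel, so the list is created afresh on each call (a minimal sketch, not from the original post):

```python
# A common workaround: use None as a sentinel so the default list
# is created anew on each call, not once at definition time.
def fn(mutable=None):
    if mutable is None:
        mutable = []
    mutable.append('elem')
    return mutable

print(fn())  # ['elem']
print(fn())  # ['elem'] again -- no state shared between calls
```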

This results in related quirks such as:

def never_called(error=1/0):
    pass

$ python
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ZeroDivisionError: integer division or modulo by zero

... and an—implementation-specific—quirk caused by naive constant folding:

def never_called():
    99999999 ** 9999999

$ python
[the interpreter hangs, constant-folding the expression at compile time]
I suspect that this could be used as a denial-of-service vector.

Worse Than Failure: The Keys to Cloud Storage

When you want to store data in Amazon’s S3 cloud-based storage, you have to assign that data a key. In practice, this looks and behaves like a filename, but the underlying APIs treat it like a key/value store, where the value can be a large data object.

S3 is flexible and cost-effective enough that Melinda’s company decided to use it for logging HTTP requests to their application. These requests often contained large data files for upload, and those files might need to be referenced in the future, so a persistent and reliable storage was important.

Each of these incoming HTTP requests had a request_id field, so a naive implementation of logging would be to write the body of the request to an S3 key following a pattern like requests/c418b58b-164d-4e1f-970b-ed00dea855b6. For a number of reasons, however, clients might send multiple requests using the same request_id. Since a logging system that overwrites old logs would be pretty terrible, they decided that each log file also needed an ID, so they could write them out with keys like requests/c418b58b-164d-4e1f-970b-ed00dea855b6/${x}, where ${x} was the ID of the log file.

The developer responsible for implementing this decided that ${x} should be an auto-incremented number. This presented a problem, though: how on earth could they keep that index in sync across all of their API nodes?

function findFreeKey(bucket, key, append, startNum, callback) {
        var testPath = key;
        if (typeof startNum != 'number' || startNum < 0)
                startNum = 0;
        else if (startNum > 0)
                testPath += (append ? append : '') + startNum;
        get(bucket, testPath, function(err) {
                if (err) {
                        if (err == 404)
                                callback(null, testPath);
                        else
                                callback(err, null);
                } else {
                        findFreeKey(bucket, key, append, startNum + 1, callback);
                }
        });
}

function get(bucket, key, callback) {
        var client = getClient(bucket);
        var req = client.get(key);
        req.on('response', function(res) {
                if (res.statusCode >= 200 && res.statusCode < 300) {
                        var str = '';
                        res.on('data', function(chunk) {
                                str += chunk;
                        });
                        res.on('end', function() {
                                callback(null, res, str);
                        });
                } else {
                        callback(res.statusCode, res, null);
                }
        });
        req.on('error', function(err) {
                callback(err, null, null);
        });
        req.end();
}

The core idea of this code is that instead of trying to keep the autoincremented index in sync, instead just start at zero, and fetch requests/c418b58b-164d-4e1f-970b-ed00dea855b6/0. If you get a 404, great! Use that key to write the file. If there’s actually data at that key, try fetching requests/c418b58b-164d-4e1f-970b-ed00dea855b6/1. And so on.

This, of course, does nothing to defend against race conditions. There was no requirement that ${x} be sequential, and there was never a need to order these log files that way, so the developer could have used a UUID for each log file and there would have been no problems. That’s not the actual problem with this code, though.

Note the line var req = client.get(key). This uses the Amazon S3 API to get the object located at that key- the entire object, including the data. These requests could contain large data files, and the entire body would be downloaded. It should be noted that there is a perfectly good listObjects function which can simply return a list of used keys with a single request.

So, each time a request_id was reused, the logging of that request took longer and longer, as every single previous request with that request_id needed to be re-downloaded in its entirety before the system could finish logging. It should also be noted that S3 does charge you based on both the content stored there, and how much bandwidth you use.
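For contrast, the UUID approach mentioned above needs no reads at all. A sketch (in Python rather than the article's Node.js; the function name is illustrative):

```python
import uuid

def log_key(request_id):
    # A fresh UUID per log file is collision-resistant, so concurrent
    # API nodes never need to probe S3 for a "free" numeric slot.
    return "requests/%s/%s" % (request_id, uuid.uuid4())

k1 = log_key("c418b58b-164d-4e1f-970b-ed00dea855b6")
k2 = log_key("c418b58b-164d-4e1f-970b-ed00dea855b6")
assert k1 != k2  # one PUT each, zero GETs, no race
```

Each write is then a single PUT, regardless of how many times the request_id has been reused.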

Melinda noticed this atrocity, and thought her trendy, self-organizing, and democratic team might want to tackle it. Each week, everyone is allowed to nominate a piece of ugly technical debt, and then the team votes for what they want to tackle first. Over the past two years, they’ve replaced their terrible test-fixtures with merely bad ones, they’ve swapped out the ORM tool that no one liked with an ORM tool that only the technical lead likes, and they’ve cycled through every JavaScript build system out there before deciding that they’re better off with an in-house solution.

In those two years, no matter how many times Melinda nominated this particular block of code, it’s remained their lowest-priority piece of technical debt.


Planet Linux Australia: OpenSTEM: Conversations on Collected Health Data

There are more and more wearable devices that collect a variety of health data, and other health records are kept electronically. More often than not, the people whose data it is don’t actually have access to it. There are very important issues to consider here, and you could use this topic for a conversation with your students, and in assignments.

On the individual level, questions such as

  • Who should own your health data?
  • Should you be able to get an overview of who has what kind of your data?  (without fuzzy vague language)
  • Should you be able to access your own data? (directly out of a device, or online service where a device sends its data)
  • Should you be able to request a company to completely remove data from their records?

For society, questions like

  • Should a company be allowed to hoard data, or should they be required to make it accessible (open data) for other researchers?

A comment piece in this week’s Nature entitled “Lift the blockade on health data” could be used as a starting point for a conversation and for additional information:

Technology titans, such as Google and Apple, are moving into health. For all the potential benefits, the incorporation of people’s health data into algorithmic ‘black boxes’ could harm science and exacerbate inequalities, warn John Wilbanks and Eric Topol in a Comment piece in this week’s Nature. “When it comes to control over our own data, health data must be where we draw the line,” they stress.

Cryptic digital profiling is already shaping society; for example, online adverts are tailored to people’s age, location, spending and browsing habits. Wilbanks and Topol envision a future in which “companies are able to trade people’s disease profiles, unbeknown to them” and where “health decisions are abstruse and difficult to challenge, and advances in understanding are used to aggressively market health-related services to people — regardless of whether those services actually benefit their health.”

The authors call for a campaigning movement similar to the environmental one to break open how people’s data are being used, and to illuminate how such information could be used in the future. In their view, “the creation of credible competitors that are open source is the most promising way to regulate” corporations that have come to “resemble small nations in their own right”.



Planet Debian: Daniel Pocock: How many mobile phone accounts will be hijacked this summer?

Summer vacations have been getting tougher in recent years. Airlines cut into your precious vacation time with their online check-in procedures and a dozen reminder messages, there is growing concern about airport security and Brexit has already put one large travel firm into liquidation leaving holidaymakers in limbo.

If that wasn't all bad enough, now there is a new threat: while you are relaxing in the sun, scammers fool your phone company into issuing a replacement SIM card or transferring your mobile number to a new provider and then proceed to use it to take over all your email, social media, Paypal and bank accounts. The same scam has been appearing around the globe, from Britain to Australia and everywhere in between. Many of these scams were predicted in my earlier blog SMS logins: an illusion of security (April 2014) but they are only starting to get publicity now as more aspects of our lives are at risk, scammers are ramping up their exploits and phone companies are floundering under the onslaught.

With the vast majority of Internet users struggling to keep their passwords out of the wrong hands, many organizations have started offering their customers the option of receiving two-factor authentication codes on their mobile phone during login. Rather than making people safer, this has simply given scammers an incentive to seize control of telephones, usually by tricking the phone company to issue a replacement SIM or port the number. It also provides a fresh incentive for criminals to steal phones while cybercriminals have been embedding code into many "free" apps to surreptitiously re-route the text messages and gather other data they need for an identity theft sting.

Sadly, telephone networks were never designed for secure transactions. Telecoms experts have made this clear numerous times. Some of the largest scams in the history of financial services exploited phone verification protocols as the weakest link in the chain, including a $150 million heist reminiscent of Ocean's 11.

For phone companies, SMS messaging came as a side-effect of digital communications for mobile handsets. It is less than one percent of their business. SMS authentication is less than one percent of that. Phone companies lose little or nothing when SMS messages are hijacked so there is little incentive for them to secure it. Nonetheless, like insects riding on an elephant, numerous companies have popped up with a business model that involves linking websites to the wholesale telephone network and dressing it up as a "security" solution. These companies are able to make eye-watering profits by "purchasing" text messages for $0.01 and selling them for $0.02 (one hundred percent gross profit), but they also have nothing to lose when SIM cards are hijacked and therefore minimal incentive to take any responsibility.

Companies like Google, Facebook and Twitter have thrown more fuel on the fire by encouraging and sometimes even demanding users provide mobile phone numbers to "prove they are human" or "protect" their accounts. Through these antics, these high profile companies have given a vast percentage of the population a false sense of confidence in codes delivered by mobile phone, yet the real motivation for these companies does not appear to be security at all: they have worked out that the mobile phone number is the holy grail in cross-referencing vast databases of users and customers from different sources for all sorts of creepy purposes. As most of their services don't involve any financial activity, they have little to lose if accounts are compromised and everything to gain by accurately gathering mobile phone numbers from as many users as possible.

Can you escape your mobile phone while on vacation?

Just how hard is it to get a replacement SIM card or transfer/port a user's phone number while they are on vacation? Many phone companies will accept instructions through a web form or a phone call. Scammers need little more than a user's full name, home address and date of birth: vast lists of these private details are circulating on the black market, sourced from social media, data breaches (99% of which are never detected or made public), marketing companies and even the web sites that encourage your friends to send you free online birthday cards.

Every time a company has asked me to use mobile phone authentication so far, I've opted out and I'll continue to do so. Even if somebody does hijack my phone account while I'm on vacation, the consequences for me are minimal as it will not give them access to any other account or service, can you and your family members say the same thing?

What can be done?

  • Opt-out of mobile phone authentication schemes.
  • Never give the mobile phone number to web sites unless there is a real and pressing need for them to call you.
  • Tell firms you don't have a mobile phone or that you share your phone with your family and can't use it for private authentication.
  • If you need to use two-factor authentication, only use technical solutions such as smart cards or security tokens that have been engineered exclusively for computer security. Leave them in a locked drawer or safe while on vacation. Be wary of anybody who insists on SMS and doesn't offer these other options.
  • Rather than seeking to "protect" accounts, simply close some or all social media accounts to reduce your exposure and eliminate the effort of keeping them "secure" and updating "privacy" settings.
  • If your bank provides a relationship manager or other personal contact, this can also provide a higher level of security as they get to know you.
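To illustrate why token-based codes don't share SMS's weakness: hardware and app tokens typically implement HOTP/TOTP (RFC 4226/6238), deriving codes offline from a shared secret, so no phone network is involved at login time. A minimal sketch (illustrative, not any vendor's actual firmware):

```python
import hashlib
import hmac
import struct

def hotp(secret, counter, digits=6):
    # HMAC-SHA1 over the big-endian counter, then "dynamic truncation"
    # as defined in RFC 4226. TOTP is the same with counter = time() // 30.
    digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 4226 test vectors for the ASCII secret "12345678901234567890":
print(hotp(b"12345678901234567890", 0))  # 755224
print(hotp(b"12345678901234567890", 1))  # 287082
```

Because the secret never leaves the token, hijacking a SIM or porting a number gains an attacker nothing.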

Previous blogs on SMS messaging, security and two factor authentication, including my earlier blog SMS Logins: an illusion of security.

Planet Debian: Michal Čihař: New projects on Hosted Weblate

For almost two months I found very little time to process requests to host free software on Hosted Weblate. Today the queue has been emptied, which means that you can find many new translations there.

To make it short, here is the list of new projects:

PS: If you didn't receive a reply to your hosting request today, it was probably lost, so don't hesitate to ask again.

Filed under: Debian English Weblate | 0 comments

TED: 4 TED Talks that make the case for open science in health care


Dr. Ben Goldacre asks why medical researchers seem to publish only positive results of pharmaceutical testing, instead of openly sharing both good and bad results. (Wouldn’t you want to know everything possible about a drug you’re about to take?) Photo: James Duncan Davidson

Sometimes it seems as if the Internet has created a bold new era of openness. But if there is one place where openness appears to be lagging, it would be scientific research. The scientific community is full of intricate (and often little-known) systems that regulate and control it, sometimes to great purpose, but sometimes to its own detriment.

Scarce funding has created a competitive environment obsessed with the publication of successful positive studies. But often, useful information is neither positive nor complete. Emphasis on publication incentivizes scientists to hoard their work in its early stages, and it reinforces the idea that the only work worth sharing is that which yields a successful confirmation of a hypothesis. When the study doesn’t work as expected, it’s often filed away.

This kind of bias is especially dangerous in the health care industry. Doctors’ decisions carry a life-or-death importance that requires the disclosure of all relevant information. Overlooking or obscuring new medical information, whether intentional or not, is a danger to all of us.

Michael Nielsen uses his TEDx talk to explore how an open industry may lead to more rapid and efficient solving of today’s most difficult scientific problems.

Sharing three examples — the Polymath project, the Quantum Wiki and the GenBank — Nielsen describes the advantages and pitfalls of an open system. He concludes that to convince scientists to contribute to collaborative projects that may advance the greater good, rather than focus only on their own publications, we must make collaboration essential to their survival. More succinctly: “Any publicly funded science should be open science.”

Meanwhile, Jay Bradner’s talk serves as a personal report from the front lines of the fight against cancer — and the possibilities of open science. After discovering an important compound for cancer research, Bradner and his team decided to ask: “What would happen if we were as open and honest at the earliest phase of discovery chemistry research as we could be?”

His firsthand account shows how day-one openness helped him and his colleagues advance their research rapidly and efficiently. By borrowing “from the amazing successes of the computer-science industry, [they established] two principles — that of open source and that of crowdsourcing — to quickly, responsibly accelerate the delivery of targeted therapeutics to patients with cancer.”

To hear another personal story of the triumphs of open science, consider Pardis Sabeti’s talk about the collaborative effort that helped stop Ebola.

There are also more nuanced ramifications to the culture of hoarding and secrecy. In her talk, Ellen ’t Hoen describes in detail how the oppressive structure of medical patents prevents low-income patients from receiving the treatment they need to survive.

She compares health care to the aviation industry, whose growth was also once stymied by patent battles. Due to the US government’s logical interest in successful flight, they ordered that all aviation patents be “pooled,” or freely shared between competitors to ensure the development of military aircraft.

This structure was later applied to fight AIDS and develop cheap and accessible retroviral drugs. However, unlike the aviation industry, this patent pool was voluntary. Without legal obligation, this progress is contingent “on the willingness of drug companies to make [pooling] happen. We count on those companies that understand that it is … not only in the interest of the global good, but also in their own interest, to move from conflict to collaboration.” Perhaps it’s time our governments took action again.

Secrecy in science may also lead to a certain insidious bias, one that prevents accurate information regarding the effectiveness of new drugs from reaching doctors and consumers. Ben Goldacre describes what he refers to as the “publication bias” and how it often leads negative trials to go missing in action.

Goldacre invites us to imagine medical trials as a coin toss: “If I flipped a coin 100 times but then withheld the results from you from half of those tosses, I could make it look as if I had a coin that always came up heads.” A system that only rewards positive results incentivizes the willful omission of studies yielding negative ones, which leads to over-estimated success rates and redundant research. It creates false hope and wasted time. Goldacre demands that, to preserve the integrity of the health industry and ensure the safety of the public, all trials conducted on humans must be published, old and new, successful or not.

Many more TED Talks explore the positives (and challenges) of open-source policy, both in and outside of health care. Explore our “open-source” tag.

Planet Debian: Shirish Agarwal: Debconf 16 and My Experience with Debian

It has been often said that you should continually try new things in life so that

a. Unlike the fish you do not mistake the pond to be the sea.

b. You see other people, other types and ways of living and being which you normally won’t in your day-to-day existence.

With both of those as mantras I decided to take a leap into the unknown. I was unsure both about the visa process as well as the travel bit as I was traveling to an unknown place and although I had done some research about the place I was unsure about the authenticity of whatever is/was shared on the web.

During the whole journey, both to and fro, I couldn’t sleep a wink. The Doha airport is huge. There are 5 concourses, A, B, C, D and E, with around 30+ gates in each concourse. The ambition of the small state is something to be reckoned with. Almost 95% of the blue-collar workers in the entire airport were from the Asian subcontinent. While the Qatari Rial is 19 times stronger than our currency, I suspect the workers are worse off than people doing similar things back home. Add to that the sharia law; even for all the money in the world, I wouldn’t want to settle there.

Anyways, during the journey a small surprise awaited me: Ritesh Raj Saraff, a DD, was also travelling to Debconf. We bumped into each other while going to see Doha city, courtesy of Hamad International Airport. I would probably share a bit more about Doha and my experiences with the city in upcoming posts.

Cut to Cape Town, South Africa, we landed in the city half an hour after our scheduled time and then we sped along to University of Cape Town (UCT) which was to become our home for the next 13 odd days.

The first few days were a whirlwind as there were new people to meet, old people whom I knew only as an e-mail id or an IRC nickname turned out to be real people, and you have to try to articulate yourself in English, which is not a native language of mine. During Debcamp I was fortunate to be able to visit some of the places; the wiki page listed a lot of places which I knew I wouldn’t be able to cover unless I had 15 days of unlimited time and money to go around, so I didn’t even try.

I had gone with few goals in mind :-

a. Do some documentation of the event – In this I failed completely, as just the walk from the venue to where the talks were held was energy-draining for me. Apart from that, you get swept up in meeting new people and talking about one of a million topics in Debian which interest you or the other person, and while those conversations are fulfilling, it was both physically and emotionally draining for me (in a good way). Bernelle (one of the organizers) had warned us of this phenomenon, but you disregard it as you know you have a limited time-frame in which to meet and greet people, and it is all an overwhelming experience.

b. Another goal was to meet my Indian brethren who had left the country around 60-100 years ago, mostly as slaves of the East India Company – In this I was partially successful. I met a couple of beautiful ladies who had either a father or a mother who was Indian, while the other parent was of African heritage. There seemed in them a yearning to know the culture, but from what little they had, only Bollywood and Indian cuisine was what they could make of Indian culture. One of the girls, ummm… women to be truer, shared a somewhat grim tale. She had had both an African boyfriend and an Indian boyfriend in her life, and in both cases she was rejected by the boy’s parents because she wasn’t pure enough. This was deja vu all over again, as the same thing can be seen happening here with casteism, so there wasn’t any advice I could give but just nod in empathy. What was sort of a revelation was that when their parents or grandparents came over, their names and surnames were thrown off and the surname became just the place they came from. From the discussions it emerged that there were also a lot of cases of forced conversion to Christianity during that era, as well as temptations of a better life.

As shared, this goal succeeded only partially, as I was actually interested in their parents or grandparents, to know the events that shaped the Indian diaspora over there. While the children know only of today, the yester-years could only be known by those people who made the unwilling, perilous journey to Africa. I had also wanted to know more about Gandhiji’s role in that era but alas, that part of history would have to wait for another day, as I guess both those goals would only have been met had I visited Durban, and that was not to be.

I had applied for one talk, ‘My Experience with Debian’, and one workshop on installing Debian on systems. The ‘My Experience with Debian’ talk was aimed at newbies and I had thought of using show-and-tell to share the differences between proprietary Operating Systems and a FOSS distribution such as Debian. I was going to take simple things such as changelogs, apt-listbugs, real-time knowledge of updates and upgrades, as well as /etc/apt/sources.list, to share both the versatility of the Debian desktop and real improvements over what proprietary Operating Systems had to offer. But I found myself engaging with Debian Developers (DDs) rather than newbies, so I had to change the orientation and fundamentals of the talk on the fly. I knew, or suspected rather, that the old idea would not work as it would just be preaching to the choir. With that in the back of my mind, and the idea that perhaps they would not be so aware of the politics and events which happened in India over the last couple of decades, I tried to share what little I was able to recollect about those times. Apart from that, I was also highly conscious that I had been given the before-lunch slot, aka the ‘You are in the way of my lunch’ slot. So I knew I had to speak my piece as quickly and as clearly as possible. Later, I did get feedback that I was fast, and seeing it through a couple of times, I do agree that I could have done a better job. What’s done is done, and the only thing I could do to salvage it a bit is to share the presentation below.


Would be nice if somebody could come up with a lighter template for presentations. For reference the template I have taken it from is shared at . Some pictures from the presentation.




You can find the video at

This is by no means the end of the Debconf16 experience, but actually the starting. I hope to share more of my thoughts, ideas and get as much feedback from all the wonderful people I met during Debconf.

Filed under: Miscellenous Tagged: #Debconf16, Doha, My talk, Qatar

Cryptogram: Detecting Spoofed Messages Using Clock Skew

Two researchers are working on a system to detect spoofed messages sent to automobiles by fingerprinting the clock skew of the various computer components within the car, and then detecting when those skews are off. It's a clever system, with applications outside of automobiles (and isn't new).

To perform that fingerprinting, they use a weird characteristic of all computers: tiny timing errors known as "clock skew." Taking advantage of the fact that those errors are different in every computer -- including every computer inside a car -- the researchers were able to assign a fingerprint to each ECU based on its specific clock skew. The CIDS' device then uses those fingerprints to differentiate between the ECUs, and to spot when one ECU impersonates another, like when a hacker corrupts the vehicle's radio system to spoof messages that are meant to come from a brake pedal or steering system.

Paper: "Fingerprinting Electronic Control Units for Vehicle Intrusion Detection," by Kyong-Tak Cho and Kang G. Shin.

Abstract: As more software modules and external interfaces are getting added on vehicles, new attacks and vulnerabilities are emerging. Researchers have demonstrated how to compromise in-vehicle Electronic Control Units (ECUs) and control the vehicle maneuver. To counter these vulnerabilities, various types of defense mechanisms have been proposed, but they have not been able to meet the need of strong protection for safety-critical ECUs against in-vehicle network attacks. To mitigate this deficiency, we propose an anomaly-based intrusion detection system (IDS), called Clock-based IDS (CIDS). It measures and then exploits the intervals of periodic in-vehicle messages for fingerprinting ECUs. The thus-derived fingerprints are then used for constructing a baseline of ECUs' clock behaviors with the Recursive Least Squares (RLS) algorithm. Based on this baseline, CIDS uses Cumulative Sum (CUSUM) to detect any abnormal shifts in the identification errors -- a clear sign of intrusion. This allows quick identification of in-vehicle network intrusions with a low false-positive rate of 0.055%. Unlike state-of-the-art IDSs, if an attack is detected, CIDS's fingerprinting of ECUs also facilitates a rootcause analysis; identifying which ECU mounted the attack. Our experiments on a CAN bus prototype and on real vehicles have shown CIDS to be able to detect a wide range of in-vehicle network attacks.
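
The CUSUM step at the heart of CIDS can be illustrated with a short, self-contained sketch. This is not the authors' code; the error stream, drift allowance and threshold below are made-up values for illustration:

```python
# One-sided CUSUM test: flag a sustained upward shift in a stream of
# identification errors, the kind of change CIDS watches for in its
# per-ECU clock-skew estimates. All numbers here are illustrative.
def cusum_detect(errors, drift=0.05, threshold=1.0):
    """Return the index at which the cumulative sum of deviations
    crosses the detection threshold, or None if no shift occurs."""
    s = 0.0
    for i, e in enumerate(errors):
        # accumulate only the part of the error above the drift allowance
        s = max(0.0, s + e - drift)
        if s > threshold:
            return i
    return None

normal = [0.01, 0.02, 0.00, 0.03, 0.01]       # legitimate ECU: errors stay small
spoofed = normal + [0.40, 0.45, 0.50, 0.45]   # impersonation: errors jump and stay high

print(cusum_detect(normal))    # None -- no intrusion flagged
print(cusum_detect(spoofed))   # index where the shift is detected
```

Because the statistic only accumulates deviations above the drift allowance, isolated small errors never trigger it, while a sustained shift crosses the threshold within a few samples.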

Worse Than FailureNot A Fan

Red computer cooling fan

Larry worked in the IT department of a medium-sized financial company. Bright and early on what should have been a promising day, the phone rang. Larry cursed the caller ID for informing him that Graham was on the line. The resident old man of the office and bane of IT, he frequently disregarded sound advice and policy to satisfy his own whims.

Powering past the foreboding that'd settled over him, Larry picked up the phone and forced out a greeting through teeth that were already set on edge. "Good morning, IT services. How may I help you?"

"Yeah. I need help with my computer." Graham skipped decorum to get to the heart of the matter. "It won't turn on."

The computers the accountants used were old, but still in good shape. Larry hoped he'd be able to deal with this over the phone. "OK. Let's walk through some basic troubleshooting—"

"No!" Graham cut him off. "Someone's gotta come over here! I can't afford to be dead in the water with month-end coming up!"

Larry stifled a groan. "Let me log the ticket in our system, and I'll be right over."

He hung up, sparing himself another useless rant, and filed the ticket. That done, he left his cube to head for the accountants' corner. The heat from their ancient boxes ratcheted the temperature several degrees higher. Half a dozen whirring fans worked overtime, but only pushed hot air around in a futile exercise.

"Where the hell were you?" Graham reclined in his swivel-chair, greeting Larry with a scowl. "It doesn't take that long to walk over here."

Larry tugged at his collar, ignoring the cheerful welcome. "Let's go through some basic troubleshooting, OK? I'm sure you already did a lot of this before you called—" Yeah, right, he thought to himself "—but I just wanna be thorough here. First, let's make sure it's plugged in."

Graham didn't budge an inch in his chair, his expression unimpressed.

Larry verified the computer was plugged in. The monitor powered on obediently, but the box remained dormant. Switching outlets didn't help.

"When did this happen?" Larry asked next. "Did it just shut down while you were in the middle of something, or did you shut it off yesterday and can't start it up now?"

"It was fine yesterday," Graham replied. "It won't start up today."

Larry dug into more specific details, none of which helped with the matter at hand. "My guess is that it's some kind of hardware problem," he concluded with a sigh. "I'll probably have to take your machine to look into it further."

Graham bolted upright in his chair. "Unacceptable! I need this fixed now!"

In his peripheral vision, Larry noticed that Graham had taken to twirling something through his fingers. He glanced over for a better look, then gaped. Was that ... a screwdriver?

Larry's viscera clenched up. Dreading the answer, he asked, "What'd you need that screwdriver for?"

Graham glanced at the tool in his hand, then shrugged. "The sound the computer was making was bothering me, so I took out the source."

"Oh, for ..." Larry stifled himself, then grabbed the screwdriver. Upon opening the box, he confirmed the fan was missing; a quick search determined its new home to be the trash can in the corner of Graham's cube. In the process of the fanectomy, Graham had also managed to unplug several wires and destroy the motherboard.

Aware that it probably wouldn't stick, Larry nonetheless delivered a remarkably polite, profanity-free explanation about the risks of opening computers, and why one should never remove fans. Before returning to his own desk, he asked all of Graham's cube-neighbors to kindly warn him if they ever noticed a tool in their coworker's hands again.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet DebianSteinar H. Gunderson: Solskogen 2016 videos

I just published the videos from Solskogen 2016 on YouTube; you can find them all in this playlist. They are basically exactly what was sent out on the live stream, frame for frame, except that the audio for the live shader compos has been remastered, and of course a lot of dead time has been cut out (the stream was running over several days, but most of the time it showed only the information loop from the bigscreen).

YouTube doesn't really support the variable 50/60 Hz frame rate we've been using, as far as I can tell, but mostly it seems to do some 60 Hz upconversion, which is okay enough, because the rest of your setup most likely isn't free-framerate anyway.

Solskogen is interesting in that we're trying to do a high-quality stream with essentially zero money allocated to it; where something like Debconf can use €2500 for renting and transporting equipment (granted, for two or three rooms and not our single stream), we're largely dependent on personal equipment as well as borrowing things here and there. (I think we borrowed stuff from more or less ten distinct places.) Furthermore, we're nowhere near the situation of “two cameras, a laptop, perhaps a few microphones”; not only do you expect to run full 1080p60 to the bigscreen and switch between that and information slides for each production, but an Amiga 500 doesn't really have an HDMI port, and Commodore 64 delivers an infamously broken 50.12 Hz signal that you really need to deal with carefully if you want it to not look like crap.

These two factors together lead to a rather eclectic setup; here, visualized beautifully from my ASCII art by ditaa:

Solskogen 2016 A/V setup diagram

Of course, for me, the really interesting part here is near the end of the chain, with Nageru, my live video mixer, doing the stream mixing and encoding. (There's also Cubemap, the video reflector, but honestly, I never worry about that anymore. Serving 150 simultaneous clients is just not something to write home about anymore; the only adjustment I would want to make would probably be some WebSockets support to be able to deal with iOS without having to use a secondary HLS stream.) Of course, to make things even more complicated, the live shader compo needs two different inputs (the two coders' laptops) live on the bigscreen, which was done with two video capture cards, text chroma-keyed on top from Chroma, and OBS, because the guy controlling the bigscreen has different preferences from me. I would take his screen in as a “dirty feed” and then put my own stuff around it, like this:

Solskogen 2016 shader compo screenshot

(Unfortunately, I forgot to take a screenshot of Nageru itself during this run.)

Solskogen was the first time I'd really used Nageru in production, and despite super-extensive testing, there's always something that can go wrong. And indeed there was: First of all, we discovered that the local Internet line was reduced from 30/10 to 5/0.5 (which is, frankly, unusable for streaming video), and after we'd half-way fixed that (we got it to 25/4 or so by prodding the ISP, of which we could reserve about 2 for video—demoscene content is really hard to encode, so I'd prefer a lot more)… Nageru started crashing.

It wasn't even crashes I understood anything of. Generally it seemed like the NVIDIA drivers were returning GL_OUT_OF_MEMORY on things like creating mipmaps; it's logical that they'd be allocating memory, but we had 6 GB of GPU memory and 16 GB of CPU memory, and lots of it was free. (The PC we used for encoding was much, much faster than what you need to run Nageru smoothly, so we had plenty of CPU power left to run x264 in, although you can of course always want more.) It seemed to be mostly related to zoom transitions, so I generally avoided those and ran that night's compos in a more static fashion.

It wasn't until later that night (or morning, if you will) that I actually understood the bug (through the godsend of the NVX_gpu_memory_info extension, which gave me enough information about the GPU memory state that I understood I wasn't leaking GPU memory at all); I had set Nageru to lock all of its memory used in RAM, so that it would never ever get swapped out and lose frames for that reason. I had set the limit for lockable RAM based on my test setup, with 4 GB of RAM, but this setup had much more RAM, a 1080p60 input (which uses more RAM, of course) and a second camera, all of which I hadn't been able to test before, since I simply didn't have the hardware available. So I wasn't hitting the available RAM, but I was hitting the amount of RAM that Linux was willing to lock into memory for me, and at that point, it'd rather return errors on memory allocations (including the allocations the driver needed to make for its texture memory backings) than to violate the “never swap“ contract.
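
As an aside, the limit in question is easy to inspect programmatically; here's a small Python sketch using the stdlib resource module (nothing Nageru-specific, just the RLIMIT_MEMLOCK ceiling that /etc/security/limits.conf controls, on POSIX systems):

```python
# Query the "lockable RAM" ceiling that mlockall()-style allocations run
# into; raising it in /etc/security/limits.conf changes the values
# reported here (after a re-login).
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)

def human(v):
    return "unlimited" if v == resource.RLIM_INFINITY else "%d KiB" % (v // 1024)

print("RLIMIT_MEMLOCK soft:", human(soft))
print("RLIMIT_MEMLOCK hard:", human(hard))
```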

Once I fixed this (by simply increasing the amount of lockable memory in limits.conf), everything was rock-stable, just like it should be, and I could turn my attention to the actual production. Often during compos, I don't really need the mixing power of Nageru (it just shows a single input, albeit scaled using high-quality Lanczos3 scaling on the GPU to get it down from 1080p60 to 720p60), but since entries come in using different sound levels (I wanted the stream to conform to EBU R128, which it generally did) and different platforms expect different audio work (e.g., you wouldn't put a compressor on an MP3 track that was already mastered, but we did that on e.g. SID tracks since they have nearly zero ability to control the overall volume), there was a fair bit of manual audio tweaking during some of the compos.

That, and of course, the live 50/60 Hz switches were a lot of fun: If an Amiga entry was coming up, we'd 1. fade to a camera, 2. fade in an overlay saying we were switching to 50 Hz so have patience, 3. set the camera as master clock (because the bigscreen's clock is going to go away soon), 4. change the scaler from 60 Hz to 50 Hz (takes two clicks and a bit of waiting), 5. change the scaler input in Nageru from 1080p60 to 1080p50, 6. steps 3,2,1 in reverse. Next time, I'll try to make that slightly smoother, especially as the lack of audio during the switch (it comes in on the bigscreen SDI feed) tended to confuse viewers.

So, well, that was a lot of fun, and it certainly validated that you can do a pretty complicated real-life stream with Nageru. I have a long list of small tweaks I want to make, though; nothing beats actual experience when it comes to improving processes. :-)

Sam VargheseNew Zealand rugby has something going for it

NEXT weekend, teams from New Zealand, Australia and South Africa will begin battling it out in the knockout phase of the 2016 Super Rugby tournament.

From 12 teams in 1996, the tournament now has 18 teams: six from South Africa, five each from Australia and New Zealand, and one apiece from Argentina and Japan.

New Zealand’s overall population is just a shade over four million. Yet half the teams playing for honours in the playoffs will be from those two islands they call the shaky isles.

It is a remarkable phenomenon.

South Africa, with a population of 54 million, has three teams in the fray while Australia, with a touch over 24 million in its borders, has just one team in the running.

Last year, the final was an all-New Zealand affair, with the Otago Highlanders defeating the Wellington Hurricanes to take the trophy. It has been that way five times.

For the first five years of the tournament, teams from New Zealand came out on top; only then did an Australian team win. It took until 2007 for a South African team, the Bulls from Pretoria, to win the tournament.

In the 20 years of Super Rugby, South African teams have taken the trophy home just thrice while Australian teams have won four times. The other 13 times, teams from tiny New Zealand have been triumphant.

Seven of those Kiwi wins have been by the Canterbury Crusaders, and three by the Auckland Blues, with the Waikato Chiefs winning twice and the Highlanders once.

How is it that this tiny nation can dominate in this sport, and not for a year or two, but over decades and decades?

There is a book titled Legacy: 15 lessons in leadership which tells part of the story, detailing the culture of the All Blacks, the New Zealand national rugby team. All the national players are drawn from the super rugby teams; nobody who plays outside the country qualifies.

This book tells of the influence of Maori culture on the team and the players. It is a wonderful example of a nation of white people where the lessons of the first peoples still remain. This is the only case of a team from a white nation in any sport that does a war dance before the game inspired by its first peoples. It is the only white country that sings its national anthem in the language of its first peoples before it sings the same verses in English.

Legacy tells the story of how the New Zealand team learns to lead, how it stays ahead and how it cultivates the spirit of winning. It is a spirit that is followed in the five New Zealand franchises where the coaches are more often than not former national players.

As former All Blacks coach Graham Henry puts it in the book, the expectation is that the team will win every match, and if that expectation wasn’t there, then the team wouldn’t be half as successful.

Perhaps there is a lesson to be learned from this little country that produces such magnificent teams year after year and plays the game as it should be played: with flamboyance and flair.

Planet DebianDaniel Stender: Theano in Debian: maintenance, BLAS and CUDA

I'm glad to announce that we have the current release of Theano (0.8.2) in Debian unstable now; it's on its way into the testing branch and the Debian derivatives, heading for Debian 9. The Debian package is maintained on behalf of the Debian Science Team.

We have a binary package with the modules in the Python 2.7 import path (python-theano), if you want or need to stick to that branch a little longer (as a matter of fact, in the current popcon stats it's the most installed package), and a package running on the default Python 3 version (python3-theano). The comprehensive documentation is available for offline usage in another binary package (theano-doc).

Although Theano builds its extensions at run time and therefore all binary packages contain the same code, the source package generates arch specific packages1, so that the exhaustive test suite can run on all the architectures to detect whether there are problems somewhere (#824116).

what's this?

In a nutshell, Theano is a computer algebra system (CAS) and expression compiler, which is implemented in Python as a library. It is named after a Classical Greek female mathematician and it's developed at the LISA lab (located at MILA, the Montreal Institute for Learning Algorithms) at the Université de Montréal.

Theano tightly integrates multi-dimensional arrays (N-dimensional, ND-array) from NumPy (numpy.ndarray), which are broadly used in Scientific Python for the representation of numeric data. It features a declarative Python based language with symbolic operations for the functional definition of mathematical expressions, which allows one to create functions that compute values from them. Internally the expressions are represented as directed graphs with nodes for variables and operations. The internal compiler then optimizes those graphs for stability and speed, and generates high-performance native machine code to evaluate or compute these mathematical expressions2.

One of the main features of Theano is that it's capable of computing also on GPUs (graphical processing units), like on ordinary graphics cards (e.g. the developers are using a GeForce GTX Titan X for benchmarks). Today's GPUs have become very powerful parallel floating point devices which can be employed for scientific computations as well as 3D video games3. The acronym "GPGPU" (general purpose graphical processing unit) refers to special cards like NVIDIA's Tesla4, which can be used in the same way (more on that below). Thus, Theano is a high-performance number cruncher with its own computing engine, which can be used for large-scale scientific computations.

If you haven't come across Theano as a Pythonistic professional mathematician, it's also one of the most prevalent frameworks for implementing deep learning applications (training multi-layered, "deep" artificial neural networks, DNNs) around5, and has been developed with a focus on machine learning from the ground up. There are several higher level user interfaces built on top of Theano (for DNNs: Keras, Lasagne, Blocks, and others; for probabilistic programming in Python: PyMC3). I'll try to get some of them into Debian, too.

helper scripts

Both binary packages ship three convenience scripts: theano-cache, theano-test, and theano-nose. Instead of being copied into /usr/bin, which would result in a binaries-have-conflict violation, the scripts are to be found in /usr/share/python-theano (python3-theano respectively), so that both module packages of Theano can be installed at the same time.

The scripts can be run directly from these folders, e.g. do $ python /usr/share/python-theano/theano-nose to achieve that. If you're going to use them heavily, you can add the directory of the flavour you prefer (Python 2 or Python 3) to the $PATH environment variable manually, by either typing e.g. $ export PATH=/usr/share/python-theano:$PATH at the prompt, or saving that line into ~/.bashrc.

Manpages aren't available for these little helper scripts6, but you can always get info on what they do and which arguments they accept by invoking them with the -h flag (for theano-nose) or the help argument (for theano-cache).

running the tests

On some occasions you might want to run the test suite of the installed library, for example to check whether everything runs fine on your GPU hardware. There are two different ways to run the tests (either way you need python{,3}-nose installed). One is to launch the test suite by doing $ python -c 'import theano; theano.test()' (or the same with python3 to test the other flavour); that's what the helper script theano-test does. However, run that way some particular tests might raise errors even for the group of known failures.

Known failures are excluded from being errors if you run the tests with theano-nose, which is a wrapper around nosetests, so this might always be the better choice. You can run this convenience script with the option --theano on the installed library, or from the source package root, which you can pull with $ sudo apt-get source theano (there you also have the option to use bin/theano-nose). The script accepts options for nosetests, so you might run it with -v to increase verbosity.

For the tests the configuration switch config.device must be set to cpu. This will also include the GPU tests when a properly accessible device is detected, so the name is a little misleading: it doesn't mean "run everything on the CPU". If you've set config.device to gpu in your ~/.theanorc, you're on the safe side if you always run it like this: $ THEANO_FLAGS=device=cpu theano-nose.

Depending on the available hardware and the used BLAS implementation (see below) it can take quite a long time to run the whole test suite through; on the Core i5 in my laptop it takes around an hour, even with the GPU related tests excluded (those perform pretty fast, though).

Theano features a couple of switches to manipulate the default configuration for optimization and compilation. There is a trade-off between optimization and compilation costs on the one hand and performance of the test suite on the other, and it turns out the test suite performs quicker with less graph optimization. Two different settings are available for config.optimizer: fast_run toggles maximal optimization, while fast_compile runs only a minimal set of graph optimization features. These settings are used by the general mode switch config.mode, which is either FAST_RUN by default, or FAST_COMPILE. The default mode FAST_RUN (optimizer=fast_run, linker=cvm) needs around 72 minutes on my lower mid-level machine (on un-optimized BLAS). Setting mode=FAST_COMPILE (optimizer=fast_compile, linker=py) brings some boost for the test suite, which then runs through in 46 minutes. The downside is that C code compilation is disabled in this mode by the linker py, and the GPU related tests are not included either.

I've played around with using the optimizer fast_compile with some of the other linkers (c|py and cvm, and their versions without garbage collection) as an alternative to FAST_COMPILE, to get minimal optimization but also machine code compilation incl. GPU testing. But in my experience, fast_compile with any linker other than py results in some new errors and failures of some tests on amd64, and this might be the case on other architectures, too.

By the way, another useful feature is DebugMode for config.mode, which verifies the correctness of all optimizations and compares the C to Python results. If you want detailed info on the configuration settings of Theano, do $ python -c 'import theano; print theano.config' | less, and check out the chapter config in the library documentation.

cache maintenance

Theano isn't a JIT (just-in-time) compiler like Numba, which generates native machine code in memory and executes it immediately; instead it saves the generated native machine code into compiledirs. The reason for doing it that way is quite practical, as the docs explain: the persistent cache on disk makes it possible to avoid generating code for the same operation twice, and to avoid compiling again when different operations generate the same code. The compiledirs are by default located within $(HOME)/.theano/.

After some time the folder becomes quite large, and might look something like this:

$ ls ~/.theano

If the used Python version changed, like in this example, you might want to purge the obsolete cache. For working with the cache and the compiledirs, the helper theano-cache comes in handy. If you invoke it without any arguments, the current cache location is printed, like ~/.theano/compiledir_Linux-4.5--amd64-x86_64-with-debian-stretch-sid--2.7.12-64 (the script is run from /usr/share/python-theano). So, the compiledirs for the old Python versions in this example (11+ and 12rc1) can be removed to free the space they occupy.

All compiledirs, meaning the whole cache, can be erased with $ theano-cache basecompiledir purge; the effect is the same as performing $ rm -rf ~/.theano. You might want to do that e.g. if you're using different hardware, like when you got yourself another graphics card, or habitually from time to time when the compiledirs fill up so much that they slow down processing, with the hard disk being busy all the time, if you don't have an SSD drive available. For example, the disk space of build chroots carrying (mainly) the tests completely compiled through on default Python 2 and Python 3 consumes around 1.3 GB (see here).

BLAS implementations

Theano needs a level 3 implementation of BLAS (Basic Linear Algebra Subprograms) for operations between vectors (one-dimensional mathematical objects) and matrices (two-dimensional objects) carried out on the CPU. NumPy is already built on BLAS and pulls in the standard implementation (libblas3, source package: lapack), but Theano links directly to it instead of using NumPy as an intermediate layer, to reduce the computational overhead. For this, Theano needs development headers, and the binary packages pull in libblas-dev by default, unless a development package of another BLAS implementation (like OpenBLAS or ATLAS) is already installed or pulled in with them (providing the virtual package). The linker flags can be manipulated directly through the configuration switch config.blas.ldflags, which is by default set to -L/usr/lib -lblas -lblas. By the way, if you set it to an empty value, Theano falls back to using BLAS through NumPy, if you want that for some reason.

On Debian, there is a very convenient way to switch between BLAS implementations by the alternatives mechanism. If you have several alternative implementations installed at the same time, you can switch from one to another easily by just doing:

$ sudo update-alternatives --config
There are 3 choices for the alternative (providing /usr/lib/

  Selection    Path                                  Priority   Status
* 0            /usr/lib/openblas-base/      40        auto mode
  1            /usr/lib/atlas-base/atlas/   35        manual mode
  2            /usr/lib/libblas/            10        manual mode
  3            /usr/lib/openblas-base/      40        manual mode

Press <enter> to keep the current choice[*], or type selection number:

The implementations perform differently on different hardware, so you might want to take the time to compare which one does best on your processor (the other packages are libatlas-base-dev and libopenblas-dev), and choose that one to optimize your system. If you want to squeeze out everything that is in there for carrying out Theano's computations on the CPU, another option is to compile an optimized version of a BLAS library especially for your processor. I'm going to write another blog posting on this issue.

The binary packages of Theano ship a script to check how well a BLAS implementation performs with it, and whether everything works right. That script is located in the misc subfolder of the library; you can locate it by doing $ dpkg -L python-theano | grep check_blas (or for the package python3-theano accordingly), and run it with the Python interpreter. By default the script puts out a lot of info, like a huge performance comparison reference table, the current setting of blas.ldflags, the compiledir, the setting of floatX, OS information, the GCC version, the current NumPy config towards BLAS, NumPy location and version, whether Theano linked directly or used the NumPy binding, and finally, most importantly, the execution time. If just the execution time for quick performance comparisons is needed, this script can be invoked with -q.

Theano on CUDA

The function compiler of Theano works with alternative backends to carry out the computations, like the ones for graphics cards. Currently, there are two different backends for GPU processing available: one docks onto NVIDIA's CUDA (Compute Unified Device Architecture) technology7, and another one onto libgpuarray, which is also developed by the Theano developers in parallel.

The libgpuarray library is an interesting alternative for Theano; it's a GPU tensor (multi-dimensional mathematical object) array library written in C with Python bindings based on Cython, which has the advantage of also running on OpenCL8. OpenCL, unlike CUDA9, is fully free software, vendor neutral and overcomes the limitation of the CUDA toolkit being only available for amd64 and the ppc64el port (see here). I've opened an ITP on libgpuarray and we'll see if and how this works out. Another reason why it would be great to have it available is that CUDA currently seems to run into problems with GCC 610. More on that, soon.

Here's a little checklist for setting up your CUDA device so that you don't have to experience something like this:

$ THEANO_FLAGS=device=gpu,floatX=float32 python ./ 
WARNING (theano.sandbox.cuda): CUDA is installed, but device gpu is not available (error: Unable to get the number of gpus available: no CUDA-capable device is detected)

hardware check

For running Theano on CUDA you need an NVIDIA graphics card which is capable of it. You can check whether your device is supported by CUDA here. When the hardware isn't too old (CUDA support started with the GeForce 8 and Quadro X series) or too strange, I think it fails to work only in exceptional cases. You can check your model, and whether the device is present in the system on the bare hardware level, by doing this:

$ lspci | grep -i nvidia
04:00.0 3D controller: NVIDIA Corporation GM108M [GeForce 940M] (rev a2)

If a line like this doesn't get returned, your device most probably is broken, or not properly connected (ouch). If rev ff appears at the end of the line, that means the device is off, i.e. powered down. This might happen if you have a laptop with Optimus graphics hardware, and the related drivers have switched off the unused device to save energy11.

kernel module

Running CUDA applications requires the proprietary NVIDIA driver kernel module to be loaded into the kernel and working.

If you haven't already installed it for another purpose, the NVIDIA driver and the CUDA toolkit are both in the non-free section of the Debian archive, which is not enabled by default. To get non-free packages you have to add non-free (and, better, also contrib) to your package sources in /etc/apt/sources.list, which might then look like this:

deb testing main contrib non-free

After doing that, run $ sudo apt-get update to update the package lists, and there you go with the non-free packages.

The headers of the running kernel are needed to compile modules, you can get them together with the NVIDIA kernel module package by running:

$ sudo apt-get install linux-headers-$(uname -r) nvidia-kernel-dkms build-essential

DKMS will then build the NVIDIA module for the kernel and does some other things on the system. When the installation has finished, it's generally advised to reboot the system completely.


If you have problems with the CUDA device, it's advised to verify whether the following things concerning the NVIDIA driver and kernel module are in order:

blacklist nouveau

Check whether the default Nouveau kernel module driver (which blocks the NVIDIA module) for some reason still gets loaded, by doing $ lsmod | grep nouveau. If nothing gets returned, that's right. If it's still in the kernel, just add blacklist nouveau to /etc/modprobe.d/blacklist.conf, and update the boot ramdisk with $ sudo update-initramfs -u afterwards. Then reboot once more; after that it shouldn't be loaded anymore.

rebuild kernel module

If the module hasn't been properly compiled for some reason, you can trigger a rebuild of the NVIDIA kernel module with $ sudo dpkg-reconfigure nvidia-kernel-dkms. When you're about to send your hardware in for repair because everything looks all right but the device just isn't working, that really can help (own experience).

After the rebuild of the module (or modules, if you have several kernel packages installed) has completed, you can check whether the module really is available by running:

$ sudo modinfo nvidia-current
filename:       /lib/modules/4.4.0-1-amd64/updates/dkms/nvidia-current.ko
alias:          char-major-195-*
version:        352.79
supported:      external
license:        NVIDIA
alias:          pci:v000010DEd00000E00sv*sd*bc04sc80i00*
alias:          pci:v000010DEd*sv*sd*bc03sc02i00*
alias:          pci:v000010DEd*sv*sd*bc03sc00i00*
depends:        drm
vermagic:       4.4.0-1-amd64 SMP mod_unload modversions 
parm:           NVreg_Mobile:int

The output should look similar to this when everything is in order.

reload kernel module

When there are problems with the GPU, maybe the kernel module isn't properly loaded. You can check whether the module has been loaded by doing

$ lsmod | grep nvidia
nvidia_uvm             73728  0
nvidia               8540160  1 nvidia_uvm
drm                   356352  7 i915,drm_kms_helper,nvidia

The kernel module can be loaded or reloaded with $ sudo nvidia-modprobe (that tool is from the package nvidia-modprobe).

unsupported graphics card

Be sure that your graphics card is supported by the current driver kernel module. If you have bought new hardware, this can easily turn out to be the problem. You can get the version of the current NVIDIA driver with:

$ cat /proc/driver/nvidia/version 
NVRM version: NVIDIA UNIX x86_64 Kernel Module 352.79  Wed Jan 13 16:17:53 PST 2016
GCC version:  gcc version 5.3.1 20160528 (Debian 5.3.1-21)

Then search the web for the version number, e.g. "nvidia 352.79"; this should get you onto the official driver download page for that release. There, check what's listed under "Supported Products".

If you're stuck with an unsupported card, there are two options: wait until the driver in Debian gets updated, or replace it with the latest driver package from NVIDIA. That's possible to do, but better suited to experienced users.

occupied graphics card

The CUDA driver cannot work while the graphical interface keeps the graphics card busy, e.g. with rendering the display of your X.Org server. Which kernel driver actually renders the desktop can be examined with this command:12

$ grep '(II).*([0-9]):' /var/log/Xorg.0.log
[    37.700] (II) intel(0): Using Kernel Mode Setting driver: i915, version 1.6.0 20150522
[    37.700] (II) intel(0): SNA compiled: xserver-xorg-video-intel 2:2.99.917-2 (Vincent Cheng <>)
[    39.808] (II) intel(0): switch to mode 1920x1080@60.0 on eDP1 using pipe 0, position (0, 0), rotation normal, reflection none
[    39.810] (II) intel(0): Setting screen physical size to 508 x 285
[    67.576] (II) intel(0): EDID vendor "CMN", prod id 5941
[    67.576] (II) intel(0): Printing DDC gathered Modelines:
[    67.576] (II) intel(0): Modeline "1920x1080"x0.0  152.84  1920 1968 2000 2250  1080 1083 1088 1132 -hsync -vsync (67.9 kHz eP)

This example shows that the desktop is rendered by the graphics unit of the Intel CPU, which is just what's needed for running CUDA applications on your NVIDIA graphics card, if you don't have a second one.


With the Debian package of the CUDA toolkit, everything pretty much runs out of the box for Theano. Just install it with apt-get and you're ready to go; the CUDA backend is the default one. PyCUDA is among the suggested dependencies of the binary packages, but it is needed mainly for the test suite.

The up-to-date CUDA release 7.5 is of course available; with that you have Maxwell architecture support, so you can run Theano on e.g. a GeForce GTX Titan X with 6.2 TFLOPS single precision13 at an affordable price. CUDA 814 is around the corner, with support for the new Pascal architecture15: the GeForce GTX 1080 high-end gaming graphics card already delivers 8.23 TFLOPS16. When it comes to professional GPGPU hardware like the Tesla P100, there is much more computational power available, scalable by multiplying cores or cards up to genuine little supercomputers which fit on a desk, like the DGX-1. Theano can use multiple GPUs for its calculations to exploit such highly scaled hardware; I'll write another blog post on this issue.

Theano on the GPU

It's not difficult to run Theano on the GPU.

Only single precision floating point numbers (float32) are supported on the GPU, but that is sufficient for deep learning applications. Theano uses double precision floats (float64) by default, so you have to set the configuration variable config.floatX to float32, as described above, either with the THEANO_FLAGS environment variable or, better, in your .theanorc file if you're going to use the GPU a lot.

Switching to the GPU actually happens via the config.device configuration variable, which must be set to gpu, or to gpu0, gpu1 etc. to choose a particular device if several are available.
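If you use the GPU regularly, both settings can be made permanent in the configuration file. A minimal sketch of such a ~/.theanorc (the [global] section and both option names are Theano's documented configuration format; adjust device to your setup):

```ini
[global]
floatX = float32
device = gpu
```

With this in place, a plain python invocation picks up the GPU without any THEANO_FLAGS prefix.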

Here's a little test script; it's taken from the docs but slightly altered:

from __future__ import print_function
from theano import function, config, shared, sandbox
import theano.tensor as T
import numpy
import time
from six.moves import range

vlen = 10 * 30 * 768  # 10 x #cores x # threads per core
iters = 1000

rng = numpy.random.RandomState(22)
x = shared(numpy.asarray(rng.rand(vlen), config.floatX))
f = function([], T.exp(x))
print(f.maker.fgraph.toposort())
t0 = time.time()
for i in range(iters):
    r = f()
t1 = time.time()
print("Looping %d times took %f seconds" % (iters, t1 - t0))
print("Result is %s" % (r,))
if numpy.any([isinstance(x.op, T.Elemwise) for x in f.maker.fgraph.toposort()]):
    print("Used the cpu")
else:
    print("Used the gpu")

You can run that script either with python or python3 (there was a single test failure on the Python 3 package, so the Python 2 library might currently be a little more stable). For comparison, here's how it performs on my hardware, once on the CPU and once on the GPU:

$ THEANO_FLAGS=floatX=float32 python ./ 
[Elemwise{exp,no_inplace}(<TensorType(float32, vector)>)]
Looping 1000 times took 4.481719 seconds
Result is [ 1.23178029  1.61879337  1.52278066 ...,  2.20771813  2.29967761
Used the cpu

$ THEANO_FLAGS=floatX=float32,device=gpu python ./ 
Using gpu device 0: GeForce 940M (CNMeM is disabled, cuDNN not available)
[GpuElemwise{exp,no_inplace}(<CudaNdarrayType(float32, vector)>), HostFromGpu(GpuElemwise{exp,no_inplace}.0)]
Looping 1000 times took 1.164906 seconds
Result is [ 1.23178029  1.61879349  1.52278066 ...,  2.20771813  2.29967761
Used the gpu

If you got a result like this, you're ready to go with Theano on Debian, training computer vision classifiers or whatever else you want to do with it. I'll write more soon on what Theano can be used for.

  1. Some ports are disabled because they are currently not supported by Theano. There are NotImplementedErrors and other errors in the tests complaining about the numpy.ndarray object not being aligned. The developers commented on that, see here. And on some ports the build flags -m32 or -m64 of Theano aren't supported by g++; the build flags can't be manipulated easily. 

  2. Theano Development Team: "Theano: a Python framework for fast computation of mathematical expressions

  3. Marc Couture: "Today's high-powered GPUs: strong for graphics and for maths". In: RTC magazine June 2015, pp. 22–25 

  4. Ogier Maitre: "Understanding NVIDIA GPGPU hardware". In: Tsutsui/Collet (eds.): Massively parallel evolutionary computation on GPGPUs. Berlin, Heidelberg: Springer 2013, pp. 15-34 

  5. Geoffrey French: "Deep learning tutorial: advanced techniques". PyData London 2016 presentation 

  6. Like the description of the Lintian tag binary-without-manpage says, that's not needed for them being in /usr/share

  7. Tom. R. Halfhill: "Parallel processing with CUDA: Nvidia's high-performance computing platform uses massive multithreading". In: Microprocessor Report January 28, 2008 

  8. Faber "Parallelwelten: GPU-Programmierung mit OpenCL". In: C't 26/2014, pp. 160-165 

  9. For comparison, see: Valentine Sinitsyn: "Feel the taste of GPU programming". In: Linux Voice February 2015, pp. 106-109 


  11. If Optimus (hybrid) graphics hardware is present (as is common today on "Windows laptops"), Debian launches the X server on the graphics processing unit of the CPU, which is ideal for CUDA. The problem with Optimus is actually the graphics processing on the dedicated GPU. If you are using Bumblebee, the Python interpreter which you want to run Theano on has to be started with the launcher primusrun, because Bumblebee powers the GPU down with the tool bbswitch every time it isn't used, and I think the kernel module of the driver is also loaded dynamically. 

  12. Thorsten Leemhuis: "Treiberreviere. Probleme mit Grafiktreibern für Linux lösen": In: C't Nr.2/2013, pp. 156-161 

  13. Martin Fischer: "4K-Rakete: Die schnellste Single-GPU-Grafikkarte der Welt". In C't 13/2015, pp. 60-61 


  15. Martin Fischer: "All In: Nvidia enthüllt die GPU-Architektur 'Pascal'". In: C't 9/2016, pp. 30-31 

  16. Martin Fischer: "Turbo-Pascal: High-End-Grafikkarte für Spieler: GeForce GTX 1080". In: C't 13/2016, pp. 100-103 

Krebs on SecurityCici’s Pizza: Card Breach at 130+ Locations

Cici’s Pizza, a Coppell, Texas-based fast-casual restaurant chain, today acknowledged a credit card breach at more than 135 locations. The disclosure comes more than a month after KrebsOnSecurity first broke the news of the intrusion, offering readers a sneak peek inside the sprawling cybercrime machine that thieves used to siphon card data from Cici’s customers in real-time.

In a statement released Tuesday evening, Cici’s said that in early March 2016, the company received reports from several of its restaurant locations that point-of-sale systems were not working properly.

“The point-of-sale vendor immediately began an investigation to assess the problem and initiated heightened security measures,” the company said in a press release. “After malware was found on some point-of-sale systems, the company began a restaurant-by-restaurant review and remediation, and retained a third-party cybersecurity firm, 403 Labs, to perform a forensic analysis.”

According to Cici’s, “the vast majority of the intrusions began in March of 2016,” but the company acknowledges that the breach started as early as 2015 at some locations. Cici’s said it was confident the malware has been removed from all stores. A list of affected locations is here (PDF).

On June 3, 2016, KrebsOnSecurity reported that sources at multiple financial institutions suspected a card breach at Cici’s. That story featured a quote from Stephen P. Warne, vice president of service and support for Datapoint POS, a point-of-sale provider that services a large number of Cici’s locations. Warne told this author that the fraudsters responsible for the intrusions had tricked employees into installing the card-stealing malicious software.

On June 8, 2016, this author published Slicing Into a Point-of-Sale Botnet, which brought readers inside of the very crime machine the perpetrators were using to steal credit card data in real-time from Cici’s customers. Along with card data, the malware had intercepted private notes that Cici’s Pizza employees left to one another about important developments between job shifts.

Point-of-sale based malware has driven most of the credit card breaches over the past two years, including intrusions at Target and Home Depot, as well as breaches at a slew of point-of-sale vendors. The malware usually is installed via hacked remote administration tools. Once the attackers have their malware loaded onto the point-of-sale devices, they can remotely capture data from each card swiped at that cash register.

Thieves can then sell the data to crooks who specialize in encoding the stolen data onto any card with a magnetic stripe, and using the cards to buy gift cards and high-priced goods from big-box stores like Target and Best Buy.

Readers should remember that they’re not liable for fraudulent charges on their credit or debit cards, but they still have to report the phony transactions. There is no substitute for keeping a close eye on your card statements. Also, consider using credit cards instead of debit cards; having your checking account emptied of cash while your bank sorts out the situation can be a hassle and lead to secondary problems (bounced checks, for instance).


Planet DebianMichael Prokop: DebConf16 in Capetown/South Africa: Lessons learnt

DebConf 16 in Capetown/South Africa was fantastic for many reasons.

My Capetown/South Africa/Culture/Flight related lessons:

  • Avoid flying on Sundays (especially in/from Austria where plenty of hotlines are closed on Sundays or at least not open when you need them)
  • Actually turn back your seat on the flight when trying to sleep and not forget that this option exists *cough*
  • While UCT claims to take energy saving quite seriously (e.g. “turn off the lights” mentioned at many places around the campus), several toilets flush all their water even when trying to do just small™ business, and two big lights in front of a main building seem to be shining all day long for no apparent reason
  • There doesn’t seem to be a standard for the side of hot vs. cold water-taps
  • Soap pieces and towels on several toilets
  • For pedestrians there’s just a very short time of green at the traffic lights (~2-3 seconds), then red blinking lights show that you can continue walking across the street (but *should* not start walking) until it’s fully red again (but not many people seem to care about the rules anyway :))
  • Warning lights of cars are used for saying thanks (compared to hand waving in e.g. Austria)
  • The 40km/h speed limit signs on the roads seem to be showing the recommended minimum speed :-)
  • There are many speed bumps on the roads
  • Geese quacking past 11:00 p.m. close to a sleeping room are something I’m also not used to :-)
  • Announced downtimes for the Internet connection are something I’m not used to
  • WLAN in the dorms of UCT as well as in any other place I went to at UCT worked excellent (measured ~22-26 Mbs downstream in my room, around 26Mbs in the hacklab) (kudos!)
  • WLAN is available even on top of the Table Mountain (WLAN working and being free without any registration)
  • Number26 credit card is great to withdraw money from ATMs without any extra fees from common credit card companies (except for the fee the ATM itself charges but displays ahead on-site anyway)
  • Splitwise is a nice way to share expenses on the road, especially with its mobile app and the money beaming using the Number26 mobile app

My technical lessons from DebConf16:

  • ran into way too many yak-shaving situations, some of them might warrant separate blog posts…
  • finally got my hands on gbp-pq (manage quilt patches on patch queue branches in git): very nice to be able to work with plain git and then get patches for your changes, also having upstream patches (like cherry-picks) inside debian/patches/ and the debian specific changes inside debian/patches/debian/ is a lovely idea, this can be easily achieved via “Gbp-Pq: Topic debian” with gbp’s pq and is used e.g. in pkg-systemd, thanks to Michael Biebl for the hint and helping hand
  • David Bremner’s gitpkg/git-debcherry is something to also be aware of (thanks for the reminder, gregoa)
  • autorevision: extracts revision metadata from your VCS repository (thanks to pabs)
  • blhc: build log hardening check
  • Guido’s gbp skills exchange session reminded me once again that I should use `gbp import-dsc --download $URL_TO_DSC` more often
  • features specific copyright + patches sections (thanks, Matthieu Caneill)
  • dpkg-mergechangelogs(1) for 3-way merge of debian/changelog files (thanks, buxy)
  • meta-git from pkg-perl is always worth a closer look
  • ifupdown2 (its current version is also available in jessie-backports!) has some nice features, like `ifquery --running $interface` to get the live configuration of a network interface, json support (`ifquery --format=json …`) and makotemplates support to generate configuration for plenty of interfaces

BTW, thanks to the video team the recordings from the sessions are available online.

Planet DebianJoey Hess: Re: Debugging over email

Lars wrote about the remote debugging problem.

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this.

This is also something I've thought about on and off, that affects me most every day.

I've found that building the test suite into the program, such that users can run it at any time, is a great way to smoke out problems. If a user thinks they have problem A but the test suite explodes, or also turns up problems B C D, then I have much more than the user's problem report to go on. git annex test is a good example of this.

Asking users to provide a recipe to reproduce the bug is very helpful; I do it in the git-annex bug report template, and while not all users do, and users often provide a reproduction recipe that doesn't quite work, it's great in triage to be able to try a set of steps without thinking much and see if you can reproduce the bug. So I tend to look at such bug reports first, and solve them more quickly, which tends towards a virtuous cycle.

I've noticed that reams of debugging output, logs, test suite failures, etc can be useful once I'm well into tracking a problem down. But during triage, they make it harder to understand what the problem actually is. Information overload. Being able to reproduce the problem myself is far more valuable than this stuff.

I've noticed that once I am in a position to run some commands in the environment that has the problem, it seems to be much easier to solve it than when I'm trying to get the user to debug it remotely. This must be partly psychological?

Partly, I think that the feeling of being at a remove from the system, makes it harder to think of what to do. And then there are the times where the user pastes some output of running some commands and I mentally skip right over an important part of it. Because I didn't think to run one of the commands myself.

I wonder if it would be helpful to have a kind of ssh equivalent, where all commands get vetted by the remote user before being run on their system. (And the user can also see command output before it gets sent back, to NACK sending of personal information.) So, it looks and feels a lot like you're in a mosh session to the user's computer (which need not have a public IP or have an open ssh port at all), although one with a lot of lag and where rm -rf / doesn't go through.
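The vetting loop could be sketched like so. This is a toy illustration of the idea, nothing more: every name here is made up, a real tool would speak a network protocol between the helper and the user rather than calling input() locally.

```python
import subprocess

# Toy sketch of the vetted-session idea: every command the remote helper
# proposes must be approved locally before it runs, and its output must be
# approved before it is sent back.
def vetted_run(command):
    if input("Helper wants to run: %r -- allow? [y/N] " % command) != "y":
        return "(command rejected by user)"
    output = subprocess.run(command, shell=True,
                            capture_output=True, text=True).stdout
    if input("Send back this output?\n%s[y/N] " % output) != "y":
        return "(output withheld by user)"
    return output

# Example (interactive): vetted_run("uname -a")
```

The interesting design point is that consent happens twice, once per command and once per output, so a careless helper can't exfiltrate anything the user didn't read first.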

Planet DebianLars Wirzenius: Debugging over email

I write free software and I have some users. My primary support channels are over email and IRC, which means I do not have direct access to the system where my software runs. When one of my users has a problem, we go through one or more cycles of them reporting what they see and me asking them for more information, or asking them to try this thing or that thing and report results. This can be quite frustrating.

I want, nay, need to improve this. I've been thinking about this for a while, and talking with friends about it, and here's my current ideas.

First idea: have a script that gathers as much information as possible, which the user can run. For example, log files, full configuration, full environment, etc. The user would then mail the output to me. The information will need to be anonymised suitably so that no actual secrets are leaked. This would be similar to Debian's package specific reportbug scripts.
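A gathering script along those lines could be as small as this. It's a hypothetical sketch, not from any real project: the SECRET_HINTS redaction heuristic is deliberately crude, and a real tool would need a proper anonymisation pass before users mail the output anywhere.

```python
import json
import os
import platform
import sys

# Environment variable names matching these substrings get redacted
# before the report is printed (crude heuristic for illustration).
SECRET_HINTS = ("KEY", "TOKEN", "SECRET", "PASSWORD")

def gather():
    env = {k: ("<redacted>" if any(h in k.upper() for h in SECRET_HINTS) else v)
           for k, v in os.environ.items()}
    return {
        "python": sys.version,
        "platform": platform.platform(),
        "environment": env,
    }

if __name__ == "__main__":
    # The user mails this JSON output along with the bug report.
    print(json.dumps(gather(), indent=2))
```

In practice each project would extend gather() with its own log files and configuration, which is exactly what Debian's reportbug hook scripts do per package.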

Second idea: make it less likely that the user needs help solving their issue, with better error messages. This requires error messages to carry enough explanation that a user can solve their problem on their own. That doesn't necessarily mean a lot of text: it also means code that analyses the situation when the error happens, includes the things that are relevant to resolving the problem, and gives error messages that are as specific as possible. Example: don't just fail saying "write error", but make the code find out why writing caused an error.
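The "write error" example might look like this in Python. The function name and message wording are illustrative, not from any particular project; the point is mapping the errno of the caught exception to something actionable.

```python
import errno
import os

# Instead of a bare "write error", inspect the OSError's errno and tell
# the user *why* the write failed.
def explain_write_failure(path, exc):
    if exc.errno == errno.ENOSPC:
        return "cannot write %s: the filesystem is full" % path
    if exc.errno == errno.EACCES:
        return "cannot write %s: permission denied (check ownership)" % path
    if exc.errno == errno.EROFS:
        return "cannot write %s: the filesystem is mounted read-only" % path
    return "cannot write %s: %s" % (path, os.strerror(exc.errno))

try:
    open("/nonexistent-dir/file", "w")
except OSError as e:
    print(explain_write_failure("/nonexistent-dir/file", e))
```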

Third idea: in addition to better error messages, might provide diagnostics tools as well.

A friend suggested having a script that sets up a known good set of operations and verifies they work. This would establish a known-working baseline, or smoke test, so that we can rule things like "software isn't completely installed".

Do you have ideas? Mail me ( or tell me on (@liw) or Twitter (@larswirzenius).

Google Adsense[New Resource] Are native ads right for your site?

Spending on native ads is expected to grow to $21 billion in 2018, presenting a huge opportunity for publishers to enhance their user experience and tap into new revenues.

What are “Native Ads”? They’re a variety of paid ads with the goal of being “so cohesive with the page content, assimilated into the design, and consistent with the platform behavior that the viewer simply feels that they belong,” says the Interactive Advertising Bureau (IAB). You might recognize some of the more popular native ad formats, such as custom sponsored content, content recommendations, and in-feed ad units.

To help determine if native ads are the right fit for your site, we’ve created a quick guide that includes direction on how you can:
  • Search for opportunities throughout your site where native ads can unlock new ad revenue
  • Determine how to maximize your user experience and ad revenue before implementing native ads
  • Ensure allocation of time and resources needed for proper implementation

Posted by Kate Pietrelli from the AdSense Team

Planet DebianDirk Eddelbuettel: Rcpp 0.12.6: Rolling on

The sixth update in the 0.12.* series of Rcpp has arrived on the CRAN network for GNU R a few hours ago, and was just pushed to Debian. This 0.12.6 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, and the 0.12.5 release in May --- making it the tenth release at the steady bi-monthly release frequency. Just like the previous release, this one is once again more of a refining maintenance release which addresses small bugs, nuisances or documentation issues without adding any major new features. That said, some nice features (such as caching support for sourceCpp() and friends) were added.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 703 packages on CRAN depend on Rcpp for making analytical code go faster and further. That is up by about forty packages from the last release in May!

Similar to the previous releases, we have contributions from first-time committers. Artem Klevtsov made na_omit run faster on vectors without NA values. Otherwise, we had many contributions from "regulars" like Kirill Mueller, James "coatless" Balamuta and Dan Dillon as well as from fellow Rcpp Core contributors. Some noteworthy highlights are encoding and string fixes, generally more robust builds, a new iterator-based approach for vectorized programming, the aforementioned caching for sourceCpp(), and several documentation enhancements. More details are below.

Changes in Rcpp version 0.12.6 (2016-07-18)

  • Changes in Rcpp API:

    • The long long data type is used only if it is available, to avoid compiler warnings (Kirill Müller in #488).

    • The compiler is made aware that stop() never returns, to improve code path analysis (Kirill Müller in #487 addressing issue #486).

    • String replacement was corrected (Qiang in #479 following mailing list bug report by Masaki Tsuda)

    • Allow for UTF-8 encoding in error messages via RCPP_USING_UTF8_ERROR_STRING macro (Qin Wenfeng in #493)

    • The R function Rf_warningcall is now provided as well (as usual without leading Rf_) (#497 fixing #495)

  • Changes in Rcpp Sugar:

    • Const-ness of min and max functions has been corrected. (Dan Dillon in PR #478 fixing issue #477).

    • Ambiguities for matrix/vector and scalar operations have been fixed (Dan Dillon in PR #476 fixing issue #475).

    • New algorithm header using iterator-based approach for vectorized functions (Dan in PR #481 revisiting PR #428 and addressing issue #426, with further work by Kirill in PR #488 and Nathan in #503 fixing issue #502).

    • The na_omit() function is now faster for vectors without NA values (Artem Klevtsov in PR #492)

  • Changes in Rcpp Attributes:

    • Add cacheDir argument to sourceCpp() to enable caching of shared libraries across R sessions (JJ in #504).

    • Code generation now deals correctly with packages containing a dot in their name (Qiang in #501 fixing #500).

  • Changes in Rcpp Documentation:

    • A section on default parameters was added to the Rcpp FAQ vignette (James Balamuta in #505 fixing #418).

    • The Rcpp-attributes vignette is now mentioned more prominently in question one of the Rcpp FAQ vignette.

    • The Rcpp Quick Reference vignette received a facelift with new sections on Rcpp attributes and plugins being added. (James Balamuta in #509 fixing #484).

    • The bib file was updated with respect to the recent JSS publication for RProtoBuf.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

CryptogramStealing Money from ISPs Through Premium Rate Calls

I think the best hacks are the ones that are obvious once they're explained, but no one has thought of them before. Here's an example:

Instagram ($2000), Google ($0) and Microsoft ($500) were vulnerable to direct money theft via premium phone number calls. They all offer services to supply users with a token via a computer-voiced phone call, but neglected to properly verify whether supplied phone numbers were legitimate, non-premium numbers. This allowed a dedicated attacker to steal thousands of EUR/USD/GBP/... . Microsoft was exceptionally vulnerable to mass exploitation by supporting virtually unlimited concurrent calls to one premium number. The vulnerabilities were submitted to the respective Bug Bounty programs and properly resolved.

News articles. Slashdot threads.

Worse Than FailureCodeSOD: OhgodnoSQL

How about those NoSQL databases, huh? There’s nothing more trendy than a NoSQL database, and while they lack many of the features that make a traditional RDBMS desirable (like, um… guaranteeing writes?), they compensate by being more scalable and easier to integrate into an application.

Chuck D’s company made a big deal out of migrating their data to a more “modern”, “JSON-based” solution. Chuck wasn’t involved in that project, but after the result went live, he got roped in to diagnose a problem: the migration of data from the old to the new database created duplicate records. Many duplicates. So he took a look at the migration script, and found piles of code that looked like this:

    UPDATE DataItemSet SET Content = '{"id":116,"type":"Plan for Today", "title":"Initech Retirement Fund", "learnItemLink":"","content":"Re-balance fund portfolio."}' WHERE Name = 'OPERATION' and Content like '{"id":116,%'

If reading that line doesn’t cause you to break out into hives, take a closer look at the schema.

Id  | Content                                                                                                                              | Unread | Type            | Name
79  | {"id":9,"title":"Initech Facilities Revision","type":"CR05","img":"images/initechfac",content:"{"id:"55, "title":"Billing Code"… "}  | 0      | Global_Customer | CUSTOMERRESOURCE
102 | {"id":94,"title":"Initech Facilities Construction","type":"CR05","img":"images/initechfac",content:"{"id:"55, "title":"Billing Code"… "} | 0 | Global_Customer | CUSTOMERRESOURCE

The Content column holds a string of text, and that string of text is expected to be JSON. They often need to filter by the content of that column, which means they have lots of queries with WHERE clauses like WHERE Content LIKE '%"title":"Initech"%', which ranks as one of the slowest possible kinds of query you can run in SQL. The ID field inside the Content column has no relationship to the autonumbered ID field that actually is on the database table. The Name column sometimes contains IDs, and many of the fields in the JSON content (like the img and learnItemLink fields) are actually foreign key references to other tables in the database. There’s a Type column on the record, which seems to control scope, but shouldn’t be confused with the type field in the JSON document, which may or may not mean anything.
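For contrast, the usual fix is to promote the embedded JSON id to a real, indexed column, so lookups become equality matches instead of full-table LIKE scans. A small sketch using SQLite; the ContentId column and index name are made up for illustration, and only the DataItemSet/Content names come from the schema above:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE DataItemSet "
             "(Id INTEGER PRIMARY KEY, Content TEXT, ContentId INTEGER)")

# Store the document once, but also lift its id into a queryable column.
doc = {"id": 116, "type": "Plan for Today", "title": "Initech Retirement Fund"}
conn.execute("INSERT INTO DataItemSet (Content, ContentId) VALUES (?, ?)",
             (json.dumps(doc), doc["id"]))
conn.execute("CREATE INDEX idx_dataitemset_contentid "
             "ON DataItemSet (ContentId)")

# An indexed equality lookup replaces: WHERE Content LIKE '{"id":116,%'
row = conn.execute("SELECT Content FROM DataItemSet WHERE ContentId = ?",
                   (116,)).fetchone()
print(json.loads(row[0])["title"])
```

A leading-wildcard LIKE can never use a B-tree index, which is why the original queries degrade to full scans as the table grows.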

After many weeks of combing through the migration scripts, it turned out not to be a problem with the scripts at all. One of their customers started a process of simplifying their billing codes, which meant billing codes were constantly changing. Instead of only changing the codes that were… well, actually being changed, the web app which provided that interface created new records for every billing code.


Planet DebianChris Lamb: Python quirk: os.stat's return type

import os
import stat

st = os.stat('/etc/fstab')

# __getitem__
x = st[stat.ST_MTIME]
print((x, type(x)))

# __getattr__
x = st.st_mtime
print((x, type(x)))
(1441565864, <class 'int'>)
(1441565864.3485234, <class 'float'>)
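For the curious, the quirk generalises: os.stat_result supports both tuple indexing (the legacy interface, truncated integer seconds) and attribute access (float seconds), and since Python 3.3 there is also st_mtime_ns, which sidesteps the float-precision question by returning exact integer nanoseconds. A self-contained check:

```python
import os
import stat
import tempfile

# Indexing returns truncated integer seconds; the attribute returns a
# float; st_mtime_ns returns exact integer nanoseconds (Python 3.3+).
with tempfile.NamedTemporaryFile() as f:
    st = os.stat(f.name)
    print(type(st[stat.ST_MTIME]))   # <class 'int'>
    print(type(st.st_mtime))         # <class 'float'>
    print(type(st.st_mtime_ns))      # <class 'int'>
```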

Krebs on SecurityCarbanak Gang Tied to Russian Security Firm?

Among the more plunderous cybercrime gangs is a group known as “Carbanak,” Eastern European hackers blamed for stealing more than a billion dollars from banks. Today we’ll examine some compelling clues that point to a connection between the Carbanak gang’s staging grounds and a Russian security firm that claims to work with some of the world’s largest brands in cybersecurity.

The Carbanak gang derives its name from the banking malware used in countless high-dollar cyberheists. The gang is perhaps best known for hacking directly into bank networks using poisoned Microsoft Office files, and then using that access to force bank ATMs into dispensing cash. Russian security firm Kaspersky Lab estimates that the Carbanak Gang has likely stolen upwards of USD $1 billion — but mostly from Russian banks.

Image: Kaspersky

Image: Kaspersky

I recently heard from security researcher Ron Guilmette, an anti-spam crusader whose sleuthing has been featured on several occasions on this site and in the blog I wrote for The Washington Post. Guilmette said he’d found some interesting commonalities in the original Web site registration records for a slew of sites that all have been previously responsible for pushing malware known to be used by the Carbanak gang.

For example, the domains “weekend-service[dot]com” “coral-trevel[dot]com” and “freemsk-dns[dot]com” all were documented by multiple security firms as distribution hubs for Carbanak crimeware. Historic registration or “WHOIS” records maintained by for all three domains contain the same phone and fax numbers for what appears to be a Xicheng Co. in China — 1066569215 and 1066549216, each preceded by either a +86 (China’s country code) or +01 (USA). Each domain record also includes the same contact address: ““.

According to data gathered by ThreatConnect, a threat intelligence provider [full disclosure: ThreatConnect is an advertiser on this blog], at least 484 domains were registered to that same address or to one of 26 other email addresses that listed the same phone numbers and Chinese company. “At least 304 of these domains have been associated with a malware plugin [that] has previously been attributed to Carbanak activity,” ThreatConnect told KrebsOnSecurity.

Going back to those two phone numbers, 1066569215 and 1066549216; at first glance they appear to be sequential, but closer inspection reveals they differ slightly in the middle. Among the very few domains registered to those Chinese phone numbers that haven’t been seen launching malware is a Web site called “cubehost[dot]biz,” which according to records was registered in Sept. 2013 to a 28-year-old Artem Tveritinov of Perm, Russia.
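A quick character-by-character comparison (a throwaway sketch using the two numbers quoted above) shows exactly where they diverge: once near the middle and once in the final digit.

```python
a, b = "1066569215", "1066549216"

# positions at which the two phone numbers differ, with the differing digits
diff = [(i, x, y) for i, (x, y) in enumerate(zip(a, b)) if x != y]
print(diff)  # [(5, '6', '4'), (9, '5', '6')]
```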

Cubehost[dot]biz is a dormant site, but it appears to be a sister property of a Russian security firm called Infocube (also spelled “Infokube”). The InfoKube web site is also registered to Mr. Tveritinov of Perm, Russia; there are dozens of records in its WHOIS history, but only the oldest, original record from 2011 contains a personal email address.

That same email address was used to register a four-year-old profile account at the popular Russian social networking site Vkontakte for Artyom “LioN” Tveritinov from Perm, Russia. The “LioN” bit is an apparent reference to an Infokube anti-virus product by the same name.

Mr. Tveritinov is quoted as “the CEO of InfoKub” in a press release from FalconGaze, a Moscow-based data security firm that partnered with InfoKube to implement “data protection and employee monitoring” at a Russian commercial research institute. InfoKube’s own press releases say the company also has been hired to develop “a system to protect information from unauthorized access” for the City of Perm, Russia, and for consulting projects relating to “information security” undertaken for and with the State Ministry of Interior of Russia.

The company’s Web site claims that InfoKube partners with a variety of established security firms — including Symantec and Kaspersky. The latter confirmed InfoKube was “a very minor partner” of Kaspersky’s, mostly involved in systems integration. Zyxel, another partner listed on InfoKube’s partners page, said it had no partners named InfoKube. Slovakia-based security firm ESET said “Infokube is not and has never been a partner of ESET in Russia.”

Presented with Guilmette’s findings, I was keen to ask Mr. Tveritinov how the phone and fax numbers for a Chinese entity whose phone number has become synonymous with cybercrime came to be copied verbatim into Cubehost’s Web site registration records. I sent requests for comment to Mr. Tveritinov via email and through his Vkontakte page.

Initially, I received a friendly reply from Mr. Tveritinov via email expressing curiosity about my inquiry, and asking how I’d discovered his email address. In the midst of composing a more detailed follow-up reply, I noticed that the Vkontakte social networking profile that Tveritinov had maintained regularly since April 2012 was being permanently deleted before my eyes. Tveritinov’s profile page and photos actually disappeared from the screen I had up on one monitor as I was in the process of composing an email to him in the other.

Not long after Tveritinov’s Vkontakte page was deleted, I heard from him via email. Ignoring my question about the sudden disappearance of his social media account, Tveritinov said he never registered the domain, and that his personal information was stolen and used in its registration records.

“Our company never did anything illegal, and conducts all activities according to the laws of Russian Federation,” Tveritinov said in an email. “Also, it’s quite stupid to use our own personal data to register domains to be used for crimes, as [we are] specialists in the information security field.”

Turns out, InfoKube/Cubehost also runs an entire swath of Internet addresses managed by Petersburg Internet Network (PIN) Ltd., an ISP in Saint Petersburg, Russia that has a less-than-stellar reputation for online badness.

For example, many of the aforementioned domain names that security firms have conclusively tied to Carbanak distribution (e.g., freemsk-dns[dot]com) are hosted in Internet address space assigned to Cubehost. A search of the RIPE registration records for that block of addresses turns up a physical address in Ras al Khaimah, an emirate of the United Arab Emirates (UAE) that has sought to build a reputation as a tax shelter and a place where it is easy to create completely anonymous offshore companies. The same listing specifies an abuse-complaint contact address for the block.

This PIN hosting provider in St. Petersburg has achieved a degree of notoriety in its own right and is probably worthy of additional scrutiny given its reputation as a haven for all kinds of online ne’er-do-wells. In fact, Doug Madory, director of Internet analysis at Internet performance management firm Dyn, has referred to the company as “…perhaps the leading contender for being named the Mos Eisley of the Internet” (a clever reference to the spaceport full of alien outlaws in the 1977 movie Star Wars).

Madory explained that PIN’s hard-won bad reputation stems from the ISP’s documented propensity for absconding with huge chunks of Internet address blocks that don’t actually belong to it, and then re-leasing that purloined Internet address space to spammers and other Internet miscreants.

For his part, Guilmette points to a decade’s worth of other nefarious activity in the Internet address space apparently assigned to Tveritinov and his company. For example, in 2013 Microsoft seized a number of domains parked there that were used as controllers for Citadel online banking malware, and all of those domains had the same “Xicheng Co.” data in their WHOIS records. A Sept. 2011 report on a security blog noted several domains with that Xicheng Co. WHOIS information showing up in online banking heists powered by the Sinowal banking Trojan as far back as 2006.

“If Mr. Tveritinov has either knowledge of, or direct involvement in even a fraction of the criminal goings-on within his address block, then the possibility that he may perhaps also have a role in other and additional criminal enterprises… including perhaps even the Carbanak cyber banking heists… becomes all the more plausible and probable,” Guilmette said.

It remains unclear to what extent the Carbanak gang is still active. Last month, authorities in Russia arrested 50 people allegedly tied to the organized cybercrime group, whose members reportedly hail from Russia, China, Ukraine and other parts of Europe. The action was billed as the biggest ever crackdown on financial hackers in Russia.


TEDHow Jane Chen built a better baby warmer — and a thriving business


In her 2013 TEDWomen Talk, entrepreneur (and TED Fellow) Jane Chen noted that “there are 15 million pre-term and underweight babies born every year around the world, and one of the biggest problems they face is staying warm.”

Premature babies can’t properly regulate their body temperatures and need an incubator in order for their organs to develop properly. If a baby is wasting energy on trying to stay warm, a range of problems can result: diabetes, heart disease, low IQ, and sometimes death. Four million of these babies die annually.

Shortly after receiving her MBA from Stanford University, Chen moved to India and set up her company, Embrace Innovations, to develop a low-cost, portable, reusable incubator for mothers in remote areas of the world, where a lack of reliable electricity and the high cost of medical equipment make traditional hospital incubators impractical.

After two years of clinical testing and of setting up manufacturing and distribution, Chen’s company launched the Embrace. The comfortable infant wrap uses a phase-change material that melts at human body temperature and holds that temperature for eight hours. After that, the heat source can be swapped for a fresh one, continuously supplying a nurturing environment for babies who need it.

In a new post for Forbes magazine, Chen talks about what happened next.

“After five years as CEO, I returned to San Francisco and was on the verge of closing a deal with a major medical device company that was taking the full round of our next investment and would become our global distributor. I was ecstatic. This was exactly where I had hoped to take the company — this would make us scalable, and would significantly increase the impact we could make.”

But then, as she describes it, in a “cruel twist of fate,” the company she had signed on with fired its CEO and the deal she had worked so hard on disappeared overnight. Her company had seven days of cash left.

She went on to describe the whirlwind that many start-ups go through: she took out two bridge loans and asked everyone she knew for small investments to keep her company going until she could arrange another deal. She finally found an angel investor in Marc Benioff, the CEO and founder of Salesforce, who had personal experience with his own child needing an incubator. He gave her company the lifeline it needed to stay afloat, and gave Chen the time she needed to look for a new way forward.

Later that year, she started surfing in Hawaii, another lifelong dream. She likens her experiences with her start-up to the profound lessons she has learned as a beginner surfer: “Everything is impermanent. When the waves knock you down, try again. Take the lessons you can from it, and move on to the next wave. Don’t be afraid to catch bigger waves. Accept what cannot be changed. And always have fun.”

As “someone who has failed many times,” she urges people to “try, try, and try again.” Her biggest lesson? “Don’t waste energy fighting the things that cannot be changed. Instead, adapt to the situation and learn to ride with it.”

It worked for her. Today, Chen’s company is flourishing, and she has turned an idea into a product that has helped save thousands of lives. To date, the Embrace has helped over 200,000 children in 15 countries. She hopes to grow to the point where the Embrace will save 1 million babies globally. She also recently launched Little Lotus, a line of baby swaddles, sleeping bags and blankets for the US market with a temperature-control function to help babies sleep better, built on a one-for-one model: every purchase helps save a baby in a developing country with the Embrace warmer.

Jane Chen will be attending this year’s TEDWomen conference, Oct. 26–28, 2016 in San Francisco. Tickets are now available, so register to attend today at the TEDWomen website. Follow Chen on Twitter at @janemariechen.

Cross-posted from TEDWomen host Pat Mitchell’s blog.

TEDA gorgeous new digital book celebrates the TED Fellows

Tokyo-based Cameroonian artist and designer Serge Mouangue blends African and Japanese design for visually arresting and useful objects – such as kimonos using traditional African prints. From the digital book Swimming Against the Tide: Adventures with the TED Fellows.

It’s never easy to push against the current, to experiment with what’s never been done before, to challenge uncomfortable truths. But 400 TED Fellows do it every day, all around the world. In the newly released free digital book Swimming Against the Tide: Adventures with the TED Fellows, dive into the wonder, kinship, curiosity, hope and creation that underlie innovation. The coffee-table book, written by Patrick D’Arcy and Karen Eng and designed by In-House International, is full of gorgeous photographs, as well as Q&As, essays and much more.

Below is a teaser — you can download the free book right here.

Interested in being a TED Fellow yourself? Applications are now open for the TED2017 class. The deadline is July 30.


“Only 2 percent of supercells create tornadoes, but when one starts to form, we get into chase mode,” says photographer and TED Senior Fellow Camille Seaman. “There are no bathroom breaks, no pulling over to get a drink, no chance to check the map. These storms are moving, sometimes at 20 miles an hour, sometimes at 60.”

Marveling at nature’s uncontrollable wrath, Seaman sees both its destructive and creative beauty through a lens of wonder.

Native American photographer Camille Seaman captures the harsh beauty of remote Arctic landscapes and the effects of climate change, as well as epic tornados in the Midwestern US, like this 2012 supercell in Nebraska. The Lovely Monster Over the Farm, Lodgepole, NE, 22 June 2012 19:15CST / Photo: Camille Seaman

Here are how other TED Fellows use a sense of wonder in their work:


In transmedia artist Lars Jan’s project HOLOSCENES, performers in a large aquarium enact everyday activities while water levels swell and recede around them, asking audiences to reflect on the potential consequences of climate change.

South African astrophysicist Renee Hlozek studies light patterns that reveal the total intensity of light emitted by interstellar dust in the Milky Way, revealing the structure of our galaxy’s magnetic field to better understand the initial conditions of our universe. Photo: ESA/Planck Collaboration



“Ever since the first hominids walked on Earth, humans have lived in relationship with each other,” says TED Fellow and ecologist Eric Berlow. “In the 1990s, the World Wide Web promised us a global village in which we would all become close neighbors with new possibilities of meeting each other to solve big problems at a global scale…. But one ironic, unintended consequence of the ease with which like finds like online has been the erosion of the global village into fragmented social silos and echo chambers.”

In the age of silos, the bonds of kinship matter more than ever in solving problems and building a more unified, harmonious world.

In this infographic, each node represents a TED Fellow. Each Fellow is linked by collaboration, each color representing an “emergent collaboration cluster” — or a group of Fellows that tend to collaborate with each other more. This infographic was created by network mapping startup MAPPR, itself a collaboration between TED Fellows Eric Berlow, Kaustuv DeBiswas and Erin Gurman.

Here are how other TED Fellows use kinship to challenge convention:

Chilean-American queer artist Constance Hockaday makes large-scale installations on open water, celebrating creative freedom and counterculture communities while defying gentrification. Her floating peep show (pictured) launched in the San Francisco Bay in June 2014.

When revolution swept through Egypt in 2011, Lebanese-Egyptian art historian Bahia Shehab sprayed stencilled images incorporating the Arabic word for “no” in the streets of Cairo to protest military rule and violence.


“I’m a prisoner of curiosity. With my back against the wall, whether consciously or not, I’ll choose that direction every time,” says TED Fellow David Lang, founder of OpenROV, a company that makes low-cost underwater robots for exploration.

“Curiosity is neither the question nor the answer, rather the ethereal space between the two. It’s a place of perpetual dissatisfaction and yearning…. This is also the good news. In the ruins of disaster, the wake of immense loss or the face of improbable odds, the journey continues. In these situations, the capacity for curiosity becomes more than a luxury — it becomes a life support system. Curiosity is indefinitely and unshakably hopeful. It has to be.”

Maker David Lang’s OpenROV — an open-source, low-cost underwater robot — makes investigating the mysteries of the ocean accessible to anyone curious and adventurous enough to dive deep.

Curiosity is key to invention and a catalyst to progress. Read how other TED Fellows use this as fuel:

Archeological geneticist Christina Warinner analyzes DNA from the bones and teeth of ancient people to study how humans have co-evolved with their environments — bridging the gap between archaeology, anthropology and the biomedical sciences.

Investigative journalist Trevor Aaronson reports on the FBI’s misuse of informants in counterterrorism operations, asking whether the United States is catching terrorists or creating them.


“A kite is singlehandedly the most hopeful thing. It waits for the promise of the wind hidden behind the sun, the clouds and the morning filled with dew,” says an excerpt from poet Lee Mokobe’s piece On Hope, “The kite. It waits for the tapestry of the sky to grow weary of the sunlight. To pack up and go home to invite the breeze to work and blow in any which direction it wishes.”

Lee Mokobe, shown speaking at TEDWomen 2015 in Monterey, California, is an award-winning slam poet who explores social injustice and gender identity issues. He is also the founder of Vocal Revolutionaries, a volunteer-run literary organization focused on empowering African youth. Photo: Marla Aufmuth/TED

To change what is, you must hope for what could be. Read how TED Fellows use hope to make things better.

Brazilian conservation biologist Juliana Machado Ferreira fights illegal wildlife trafficking in Brazil with her organization FREELAND Brasil. It is helping to establish a Wildlife Enforcement Network in South America, allowing for transnational collaboration, stronger environmental legislation and more.

Violinist Vijay Gupta — who joined the Los Angeles Philharmonic at the age of 19 — runs Street Symphony, a nonprofit that organizes classical music concerts for overlooked populations such as the homeless and prisoners. Here he rehearses for TEDGlobal 2012. Photo: Ryan Lash


“What would it mean to take this even further and to help generate a creative ethos in Mexico City that traverses many different territories?” asks culture curator Gabriella Gómez-Mont. “After I hosted TEDxMexicoCity, the newly elected mayor of the city gave me a call. He invited me to invent a new government office from scratch: a laboratory for my very favorite city in the world.”

Seemingly odd combinations — like art, culture and government systems — can give birth to innovation. Read about how other TED Fellows do the same:

Australian body architect Lucy McRae explores the relationship between the body and technology using synthetic and organic materials. This project, Germination Day 8, created with Bart Hess, was made from pantyhose, sawdust and grass seed.

Artist Cyrus Kabiru turns recyclables and found materials into art. These spectacles are crafted from recycled materials found around his home in Nairobi, Kenya.

British astrobiologist and geologist Louisa Preston looks for analogues to possible life on Mars in the most extreme environments on Earth, such as here in Iceland.


Planet DebianJohn Goerzen: Building a home firewall: review of pfsense

For some time now, I’ve been running OpenWRT on an RT-N66U device. I initially set that up because I had previously been using my Debian-based file/VM server as a firewall, and this had some downsides: every time I wanted to reboot that server, Internet for the whole house went down; shorewall took a fair bit of care and feeding; etc.

I’ve been having indications that all is not well with OpenWRT or the N66U in the last few days, and some long-term annoyances prompted me to search out a different solution. I figured I could buy an embedded x86 device, slap Debian on it, and be set.

The device I wound up purchasing happened to have pfsense preinstalled, so I thought I’d give it a try.

As expected, with hardware like that to work with, it was a lot more capable than OpenWRT and had more features. However, I encountered a number of surprising issues.

The biggest annoyance was that the system wouldn’t allow me to set up a static DHCP entry with the same IP for multiple MAC addresses. This is a very simple configuration in the underlying DHCP server, and OpenWRT permitted it without issue. It is quite useful: my laptop gets the same IP whether connected by wifi or Ethernet, and I have used it for years with no issue. A bit of Googling turned up some rather arrogant pfsense people saying that this is “broken” and poor design, and that your wired and wireless networks should be on different VLANs anyhow. They also said “just give it the same hostname for the different IPs”, but it rejects this too. Sigh. I discovered, however, that downloading the pfsense backup XML file, editing the IP within, and re-uploading it gets me what I want with no ill effects!
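For the record, the edit in question is tiny. Static DHCP entries live under the interface's dhcpd section of the backup XML, roughly like this (element names as seen in a typical pfsense config.xml; the MACs, IP and hostnames here are purely illustrative):

```xml
<dhcpd>
  <lan>
    <staticmap>
      <mac>00:11:22:33:44:55</mac>       <!-- wired NIC -->
      <ipaddr>192.168.1.10</ipaddr>
      <hostname>laptop-wired</hostname>
    </staticmap>
    <staticmap>
      <mac>66:77:88:99:aa:bb</mac>       <!-- wifi NIC -->
      <ipaddr>192.168.1.10</ipaddr>      <!-- same IP: the GUI rejects this, the XML restore does not -->
      <hostname>laptop-wifi</hostname>
    </staticmap>
  </lan>
</dhcpd>
```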

So then I went to set up DNS. I tried to enable the “DNS Forwarder,” but it wouldn’t let me do that while the “DNS Resolver” was still active. Digging in just a bit, it appears that the DNS Forwarder and DNS Resolver both provide forwarding and resolution features; they just have different underlying implementations (dnsmasq and unbound, respectively). This is not clear at all in the interface.

Next stop: traffic shaping. Since I use VOIP for work, this is vitally important for me. I dove in and found a list of XML filenames for wizards: one for “Dedicated Links” and another for “Multiple Lan/Wan”. Hmmm. Some Googling again turned up that everyone suggests using the “Multiple Lan/Wan” wizard. Fine. I set it up, and noticed that when I started an upload, my download performance absolutely tanked. Some investigation showed that outbound ACKs weren’t being handled properly: the wizard had created a qACK queue, but neglected to create a packet-match rule for it, so ACKs were not being dealt with appropriately. I fixed that with a rule of my own design, and now downloads work better again. I also needed to boost the bandwidth allocated to qACK (setting it to 25% seemed to do the trick).
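For anyone hitting the same problem, the fix amounts to giving outbound TCP a two-queue assignment so that empty ACKs ride the priority queue. In pf's ALTQ syntax that looks roughly like this (the interface and queue names are assumptions matching the wizard's defaults; adjust to your setup):

```
# Bulk TCP data goes to qDefault, while pure ACKs (and low-delay
# ToS packets) are lifted into the second queue, qACK.
pass out on em0 proto tcp from any to any keep state queue (qDefault, qACK)
```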

Then there were the firewall rules. The “interface” section is first-match-wins, whereas the “floating” section is last-match-wins. This is rather non-obvious.

Getting past all the interface glitches, however, the system looks powerful, solid, and well-engineered under the hood, and fairly easy to manage.

TED“We are going to make things happen”: Notes from a TED-Ed Innovative Educator

At TEDSummit, 30 TED-Ed Innovative Educators got together to swap ideas, support each other and learn from fellow teachers all around the world. Here is one educator's report.

At TEDSummit, this gorgeous group of TED-Ed Innovative Educators got together to swap ideas, support each other and learn from fellow teachers all around the world.


Wendy Morales is one of a group of TED-Ed Innovative Educators who got together at our recent TEDSummit conference. She files this report, which we’re cross-posting from her own blog, The Risk-Taking Educator:

When asked what I do for a living, sometimes I catch myself saying, “I am just a teacher.” It’s as if I am apologizing for not being something greater, like a doctor, scientist or engineer. That single word “just” reveals the extent to which I have been affected by how little our society, in general, values teachers. Deep down, I know the work I do is important, and I am fortunate to work in a community that does encourage and celebrate teachers. Unfortunately, most educators are not given many opportunities to develop a voice and sense of empowerment, resulting in frustration or feelings of inferiority. I myself have been guilty of these feelings at times in my career, resulting in some confusion as to how I see myself professionally.

These occasional feelings of inadequacy and self-doubt were squashed the minute I was chosen as one of the thirty TED-Ed Innovative Educators. From the moment I met the exceptionally warm and talented TED-Ed team and my equally amazing cohort members, I knew that great things were about to happen. For the past few months, we have explored incredible TED and TED-Ed resources, shared ideas and raised questions that will undoubtedly improve our school communities. I felt valued and heard. This past week, we finally came together in Banff, Canada at TED Summit 2016. While I could never put in writing (at least not in a blog post of reasonable length) everything I gained from this opportunity, I would like to share my biggest takeaways from this experience and how it helped me redefine my role as a teacher.

Intimidation Turns Into Pride

Chris Anderson, curator of TED, with educator Wendy Morales at TEDSummit.

When you are just a teacher, it is easy to feel intimidated by all the geniuses at TED. I found myself eating meals and engaging in conversation with famous authors, social activists, scientists, artists, etc. When these incredible people looked at my name badge and saw that I was an educator, each one asked me about my career and seemed genuinely interested and appreciative of what I do. Even after I rudely interrupted the Head of TED, Chris Anderson, while he was trying to answer a text, the first thing he said to me after seeing my name badge was, “Thank you for all you do.” Wow. As each day passed, more and more people approached all of us in the cohort, wanting to know more about the TIE program. I can honestly say that I have never felt so proud to be a teacher.

Power in Numbers (and Ideas!)

When you are just a teacher, it is easy to feel alone and overwhelmed. But as a TIE, I now have a community of people who I think of as family, supporting everything I do. At TED Summit I found professional soulmates (e.g., Jennifer Ward) who share my passions and will certainly become future collaboration partners. Even more importantly, though, I found like-minded educators with ideas, experiences and perspectives different from my own. These talented educators from various parts of the world have opened my mind to limitless possibilities that I never would have imagined. I learned something from every single member of my cohort, and the best part is…this learning and collaborating has just begun! Together, we are going to make things happen!

TED-Ed Innovative Educators (or TIEs) (or, yes, TIE Fighters!) bond on a hike in Banff.

While I reflected on my experiences at TED Summit, I realized that I will never again refer to myself as just a teacher. The single word “teacher” doesn’t even do justice to any educator I know. We are all so much more than that. As I think about who I am, I can now say with pride that I am problem-solver, change maker, collaborator, teammate, writer, creator, leader, history nerd, tech-lover, global citizen and a TED-Ed Innovative Educator! It is my wish that more educators will come to this realization and understand that we all have “ideas worth spreading.”

Thank you to the TED and TED-Ed teams, my TIE cohort (the “TIE Fighters”) and the other inspirational people I have met on this exciting journey. You have not only helped me redefine who I am as a teacher, but also who I am as a human being. Can’t wait for this journey to continue!

Please check out TED, TED-Ed and the TED-Ed Innovative Educator Program for more information on these amazing people, resources and opportunities!

Falkvinge - Pirate PartyBitcoin, Innovation Of Governance; Lightning Rod Striking Balance Of Power

Photo credit – Dr. Frankenstein's dream II by Joaquin Casarini

Activism – Nozomi Hayase: In its seven years of existence, Bitcoin has gained wide mainstream attention with its disruptive potential in finance. Yet currency is just its first application. The technology’s other potential lies in affecting governance and law. Democracy has weakened in the existing systems of governance. With concentration of power created through hierarchy, ordinary people are kept from influencing policies or participating in vital decision-making. In this locked-down system, many politicians do not represent the true interests of the people, and those who do are often blocked out. Can Bitcoin strike this balance of power? In this article, I argue that Bitcoin is not just an innovation in banking and finance, but at its core an innovation in governance, built upon a new security model that protects and empowers everyday people.

For many decades, activists, workers and concerned citizens have been working hard and dedicating their life to bring equality and justice. Unprecedented levels of government and corporate corruption in recent years have signaled a breakdown of checks and balances, while an extreme trend toward authoritarianism has discouraged popular dissent, often depriving people of hope.

Problems are not simply a lack of care or will for change. The fundamental issue seems to revolve around our basic view of humanity. Many tend to think that people are inherently good and operate with similar motives to themselves. The deep failure of democracy has shaken up these assumptions, showing this to be a naive and overly idealistic view of man. The 2008 financial meltdown and crisis of legitimacy exposed the existence of individuals who have a radically different makeup than the rest of the population. These are psychopaths, whom psychopathy expert Robert Hare called “social predators who charm, manipulate, and ruthlessly plow their way through life”.

Psychopaths exhibit a total lack of conscience and empathy for others. They embody a dark side of individuality, with aggressive and narrowly selfish desires that often come into conflict with the public good. Regulation has proven ineffective, and laws often fail to offer protection because their very mechanisms have been gutted and used by those in power for their advantage. The question now is how to account for this hidden vulture within humanity and build a system that is resilient to these adversarial forces.

Security Holes Within Representative Democracy

In his seminal white paper, mysterious creator Satoshi Nakamoto described Bitcoin as a purely peer-to-peer version of electronic cash that would allow “online payments to be sent directly from one party to another without going through a financial institution”. The core invention is distributed trust, and Nakamoto stated that it was put forward as a solution to the “inherent weakness of the trust based model”, where financial institutions act as trusted third parties.

What is this inherent weakness identified by the inventor of Bitcoin? Most people are bound by empathy and naturally restrain their actions in consideration of others’ needs. On the other hand, psychopaths are not governed by these internal laws of empathy and therefore cannot regulate self-interest. Moreover, as was articulated by psychiatrist Hervey M. Cleckley in The Mask of Sanity, deception is at the core of psychopathy. With superficial charm, these predators hide their claws and teeth and gleefully trespass others’ boundaries, erasing their trails and even manipulating laws to get away with their crimes.

Trust is a vital foundation of human relationships, and this has become psychopaths’ primary entry point for predation. These ruthless individuals fake empathy to elicit trust and then exploit it. When a governance model is structured in a manner that relies heavily on trust, such a system inevitably becomes vulnerable to this unknown member of society who can cleverly mimic the good attributes of human nature and blend into society.

Representative democracy, which requires people to trust those who claim to represent them in the form of elected officials, has increasingly become a mask used by these ruthless individuals to hide and gain a grip on the populace. Behind the veil of secrecy, psychopaths leverage our trusting nature and construct promise-based governance. For instance, corporate masters behind the charade of electoral politics sponsor political candidates, who with campaign promises keep people passive and manage down their expectation levels. With future faking, which involves making plans that will never happen, and gaslighting, a tactic known to challenge one’s memory, they deceive and gain power over others.

Money dependent on systems of representation requires trust to work. It has now largely been turned into promissory notes and fabricated interest obligations, becoming a weapon for psychopathic control. The hidden captains of this managed democracy direct the flow of currency through financial engineering and have created incentive structures bent toward preserving their power. Radical deregulation is enacted under the banner of a ‘free market’ to manipulate interest rates and fiscal policy, creating never-ending cycles of harsh austerity and usury.

Stimulated by toxic asset bubbles, derivatives and quantitative easing, these incentives work like invisible hands of the market, promoting fraud and depravity. They suppress democratic values by controlling information, which is the currency of democracy, and constraining free speech with economic censorship, as was seen in the case of the financial blockade against WikiLeaks. All of this has resulted in the creation of a two-tiered justice system and derisked capitalism, where those in power are never allowed to fail and are not held accountable either by markets or the legal system.

Bitcoin as a New Security Model

Bitcoin addresses this inherent weakness of third-party trust that has been exploited to create systemic parasitic rent-seeking structures. As asset-based digital cash, it offers an alternative to the promissory system of value creation by decree from above. Bitcoin’s underlying technology, the blockchain, is a public asset ledger. This is a distributed database that records a history of transactions in the network without anyone in charge. Once data is verified, no one can undo it. This immutable timestamp goes beyond simple accounting of monetary transactions.
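
The append-only ledger described above can be illustrated with a toy hash chain (far simpler than the real blockchain; the transaction strings and helper names here are our own illustration):

```python
# Toy hash-chained ledger (illustrative only): each entry commits to
# the previous entry's hash, so altering history changes every later
# hash and is immediately detectable.
import hashlib

def entry_hash(prev_hash: str, data: str) -> str:
    return hashlib.sha256(f"{prev_hash}:{data}".encode()).hexdigest()

chain = ["0" * 64]  # genesis placeholder
for tx in ["alice->bob:5", "bob->carol:2"]:
    chain.append(entry_hash(chain[-1], tx))

def verify(chain, txs):
    # Verification replays the same computation over the recorded data.
    h = chain[0]
    for link, tx in zip(chain[1:], txs):
        h = entry_hash(h, tx)
        if h != link:
            return False
    return True

print(verify(chain, ["alice->bob:5", "bob->carol:2"]))    # → True
print(verify(chain, ["alice->bob:500", "bob->carol:2"]))  # → False
```

Tampering with any past entry breaks every subsequent link, which is the sense in which the timestamped history is immutable.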

Bitcoin enables a new security model, one that addresses the problem of security holes in the existing trust-based model of governance. Author and security expert Andreas Antonopoulos called this “trust by computation” that has “no central authority or trusted third party”. He explained this form of trust as follows:

Trust does not depend on excluding bad actors, as they cannot ‘fake’ trust. They cannot pretend to be the trusted party, as there is none. They cannot steal the central keys as there are none. They cannot pull the levers of control at the core of the system, as there is no core and no levers of control.

With this trust by computation, the need to trust institutions or central authorities is replaced with mathematics. Human trust is easily exploited by those prone to act with little concern for others. In the Bitcoin network where there is no point of control, attackers cannot fake trust. In order to gain control over the network, they would have to compromise math.

Power corrupts, and the best way to check and balance power is to not have these points of control in the first place. Thus, decentralization is a natural progression of security models. In a decentralized system, there is no ladder of power that psychopaths can climb to exploit others. Through distributing trust across a network and minimizing the necessity to trust a third party, the system removes vulnerabilities that often lead to such concentration of power.

Honest Account of the Darkness Within

So, how does Bitcoin distribute trust and secure this peer-to-peer network? In traditional systems, psychopaths rise to power, cheat and control the game. In these new cryptographic systems, psychopathic deception and attempts to cheat the system could manifest in covert chip fabrication, spam attacks and miners colluding in a mining pool to earn more than their fair share at the expense of honest miners.

Yet, the genius of this protocol is in the ability of this math-based network to enforce rules of consensus and fair play. At its foundation is Satoshi; the Japanese characters of his name can be translated as “history of philosophy”. This philosophy is like wisdom gained through history; an understanding of the contradiction inherent in man as both corruptible and perfectible. This is at the crux of Bitcoin’s game theory. Instead of naively assuming good intentions in others, the creator of this technology expected that some would try to cheat and attack the network. This is an acknowledgment that we live in a world where we cannot just remove psychopaths from the equation.

This assumption is shared by developers who are committed to Satoshi’s vision of this particular security model. At the Hong Kong Scaling Bitcoin conference, developer Andrew Poelstra explained the mindset that Bitcoin lives in an adversarial environment and that the possibility of individuals acting selfishly and taking advantage of others’ good will needs to be factored into designing its governance. Bitcoin core developer Peter Todd also emphasized the necessity of adversarial thinking. In a Twitter interaction on the topic of security, Todd noted, “security isn’t about people promising they won’t do something, it’s about people being unable to do something”.

When greed and self-interests are condemned or denied, these aspects do not disappear, but are simply pushed out of sight and kept hidden. Efforts through law enforcement to regulate and punish selfish actors can just make them more cunning and deceitful. Bitcoin’s security model is based on honest accounting of our selfishness within. Instead of trying to shun this darkness, it finds a way to acknowledge and openly work with it.

Rule of Algorithmic Consensus

What governs Bitcoin is a consensus mechanism called proof-of-work. By embodying Bitcoin’s particular security assumption, it works like a lightning rod: it attracts potentially destructive forces and diverts them in order to protect the network.

Through using bitcoins as tokens of value with a combination of cryptographic hash functions, game theory and economic incentives, a whole new economy is now being created. Bitcoin mining is a broadcast math competition engaged in by a network of computers around the world, with clear rules such as the total number of bitcoins created, a predictable issuance rate and automatic adjustment of mining difficulty. By using precious resources, miners work to solve difficult math problems. Every 10 minutes, a problem is solved, and whoever solves it first wins a fixed number of bitcoins. This process leads to both the creation of money and the clearing of transactions, and it is designed to create economies of scale, with rewards proactively incentivizing all to follow the network rules of consensus.
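
The mining competition can be sketched as a toy proof-of-work loop. This is illustrative only: real Bitcoin mining hashes block headers against a numeric target, and the `mine` helper and leading-zeros difficulty convention here are simplifications of ours:

```python
# Toy proof-of-work: search for a nonce that makes the hash of the
# block data start with a run of zero hex digits. Raising the
# difficulty makes the search exponentially more expensive, which is
# what makes cheating costly.
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Return the first nonce whose hash has `difficulty` leading zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce
        nonce += 1

nonce = mine("example block", 4)
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
print(nonce, digest)
```

Finding the nonce takes many hash attempts, but anyone can verify the result with a single hash, which is the asymmetry the consensus mechanism relies on.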

Miners play a crucial role in the Bitcoin ecosystem. Yet, what makes the system resilient is not just miners and developers, but everyone’s participation in the network. This includes merchants, investors, entrepreneurs and users. Journalist Aaron van Wirdum describes how full nodes that relay and validate transactions within the network check and enforce Bitcoin’s consensus rules. He explains how “not all full nodes are equal from a network perspective”. The full nodes that miners, companies and developers run “all add weight to a set of consensus rules”. Yet, he emphasizes how all users play a crucial role in governance, as they are what ultimately gives Bitcoin value.

By removing third parties, the inventor of this technology found a way to create a direct feedback loop among all participants, aligning the balance of supply and demand with the force of consensus, which is more democratic than the current oligarchic system that operates under a pretense of democracy. In the current financially engineered markets, monetary supply does not correlate with the real needs of people. Yet, with this new Bitcoin market, monetary supply is created through real demand, with the feature of near-infinite divisibility (bitcoin can be divided into 8 decimal places, and more if consensus is reached).
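
The divisibility mentioned above works because amounts are accounted in integer base units ("satoshis", 1 BTC = 100,000,000 satoshis). A minimal sketch (the helper name `btc_to_satoshis` is our own, not part of any Bitcoin software):

```python
# Amounts are tracked as integer satoshis, which is why 8 decimal
# places of divisibility come for free, without floating-point error.
SATOSHIS_PER_BTC = 100_000_000

def btc_to_satoshis(btc_str: str) -> int:
    whole, _, frac = btc_str.partition(".")
    frac = (frac + "00000000")[:8]  # pad/truncate to 8 decimal places
    return int(whole) * SATOSHIS_PER_BTC + int(frac)

print(btc_to_satoshis("0.00000001"))  # → 1
print(btc_to_satoshis("21.5"))        # → 2150000000
```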

The only way miners and developers get paid for their work is to be on the side of consensus, so they are incentivized to respond to the demands of users. This direct feedback loop created through decentralization is a crucial wire that connects the lightning rod with the ground.

Law of Self-Regulation

In the current system of representation, activists and human rights lawyers have been trying to regulate greed and hold selfish actors accountable. ‘Power concedes nothing without a demand’, yet in the existing model of governance, people struggle to make real demands. Any plea for change does not reach the merciless logic of this small section of society. While traditional efforts have proven ineffective in enforcing the rule of law upon the elites, Bitcoin brings a new form of accountability through algorithmic regulation.

The Bitcoin incentive structure, designed as a lightning rod, captures and creatively engages the mind of psychopaths. Hare pointed out how a psychopath’s brain is wired differently and how their moral force is weakened. Unlike most people, they cannot overcome temptations and restrain their actions in the face of opportunities for short-term self-gratification. Hare described this as a lack of ability to imagine the consequences of their own actions, noting that for psychopaths, “concrete rewards are pitted against vague future consequences – with the rewards clearly the stronger contender”.

Research from Vanderbilt University on the brain’s reward system in psychopathy further supports this finding. Lead researcher Joshua W. Buckholtz described how, in experiments, individuals with high scores in psychopathy show heightened dopamine responses to anticipated rewards compared to non-psychopathic subjects, showing how the brain of a psychopath is more susceptible to rewards. Buckholtz explained that this is because “once they focus on the chance to get a reward, psychopaths are unable to alter their attention until they get what they’re after” and these rewards override any concerns over threat or punishment.

With this ability to think like an attacker, market forces are used in the Bitcoin network to create a kind of electric circuit that allows energy to move naturally and convert it for good use. This enables a new law to regulate ruthless actions without relying on the moral strength of any individual or external authority. Robert Wolinsky, senior manager of blockchain research, explains how “Satoshi introduces a cost equation to cheating/collusion via the proof-of-work protocol”, making it clear to parties what the cost of attacking the network is and having them pay for it upfront. Furthermore, by making the rewards for playing by the rules higher than the value of attacking the network, it can proactively protect the system from the lack of impulse control of those who are instinctively programmed to strike with no remorse.
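
Wolinsky's cost equation can be made concrete with a back-of-the-envelope comparison. All names and numbers below are illustrative assumptions of ours, not figures from the protocol: the point is simply that an attack is only rational when its expected gain exceeds the cost of the required hash power plus the honest reward forgone.

```python
# Illustrative sketch of the incentive argument: attacking only pays
# if the gain exceeds hash-power cost plus the forgone honest reward.
def attack_is_rational(attack_gain: float, hash_cost: float,
                       forgone_reward: float) -> bool:
    # Spending the same resources on honest mining would have earned
    # forgone_reward, so it counts as part of the attack's cost.
    return attack_gain > hash_cost + forgone_reward

# With rewards for honest play set higher than the value of attacking,
# the rational move is to follow the consensus rules.
print(attack_is_rational(5.0, 10.0, 12.5))  # → False
```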

While the language of altruism and empathy doesn’t compute with those who have fallen from a communal ground, Bitcoin is a source code that speaks the language of cold and calculating rationale that can reach the selfish parts within ourselves and turn on the brain of the super computer of the world. Bitcoin mining reintroduces risk into the market. Here, concrete rewards are used to channel risk-taking and self-serving inclinations, making all compete for honesty and truth. The competitive drive of survival of the fittest, fueled by this global math contest does not create ruthless bloodbaths or make a killing on the back of someone’s misery, but instead is guided to serve the whole network. The fire of this hashing power burns aggressive and violent parts of our humanity, transforming them into generating global level security for all.

Power of Free Speech

Over the decades, many democratic governments have been taken over by cannibals within humanity and become vehicles of control that have lost their fail-safe. Increasingly, people are held hostage by corrupted political systems. While the flow of currency is controlled, free speech as a foundation of democracy has increasingly become permissioned.

Satoshi’s act of publishing the white paper in 2008 unleashed the power of free speech. Progress and true social change are only possible through each person freely sharing their ideas and associating with fellow men and women to innovate better systems. Bitcoin is an open source project that brings together diverse developers around the world who are inspired by Satoshi’s freeing of speech. By writing code, they too have begun exercising free speech.

While psychopaths deceive us and exploit our trust with promises that never match real actions, Bitcoin, as a holy grail of the Cypherpunks, is stewarded by those who speak in code instead of making promises. By making the software open source, which allows anyone to read and modify the code, the innovators of this system make themselves available to be held accountable by their equal peers. This freely available code calls for voluntary association with this language of risk and reward, which then builds the network’s demand for armor against any psychopathic attack.

Governance without central authority can at first seem inefficient. But it is more secure than the current system of representation. The more the system reduces the need to trust a third party, replacing it with a borderless network, the lower the security risk becomes. The Bitcoin blockchain opens a door into a pluralistic society where all can participate in creating many governance models and currencies that manifest our true values through the principles of mutual aid and voluntary association. Upon such a secure foundation, progressive ideas of basic income, universal health-care, free tuition as well as privacy and truly free markets can be built as an app.

As Bitcoin gains more value, the proof-of-work lightning rod attracts malicious attackers. Man is fallible and each person alone can’t account for themselves. But, through our genuine efforts of working together to keep the network decentralized, a spark is created that emanates light out of our own darkness. Every 10 minutes, the heart of the Bitcoin network expands, time-stamping on greed and antisocial impulses, so the beast inside does not grow too large. The networked consensus lights the lamp of liberty, validating the universal truth that ordinary people are the source of all legitimacy.


Planet Linux AustraliaBinh Nguyen: Social Engineering/Manipulation, Rigging Elections, and More

We recently had an election locally and I noticed how they were handing out 'How To Vote' cards which made me wonder. How much social engineering and manipulation do we experience each day/throughout our lives (please note, that all of the results are basically from the first few pages of any publicly available search engine)? - think about the education system and the way we're mostly taught to

Planet DebianReproducible builds folks: Preparing for the second release of reprotest

Author: ceridwen

I now have working test environments set up for null (no container, build on the host system), schroot, and qemu. After fixing some bugs, null and qemu now pass all their tests!

schroot still has a permission error related to disorderfs. Since the same code works for null and qemu and for schroot when disorderfs is disabled, it's something specific to disorderfs and/or its combination with schroot. The following is debug output that shows ls for the build directory on the testbed before and after the mock build, and stat for both the build directory and the mock build artifact itself. The first control run, without disorderfs, succeeds:

DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43
-rwxr--r-- 1 user user  175 Jun  3 15:42
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18
DBG: testbed command exited with code 0
DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/control/ ;\n python3 ;\n'], kind short, sout raw, serr pipe, env ['LANG=en_US.UTF-8', 'HOME=/nonexistent/first-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin', 'PYTHONHASHSEED=559200286', 'TZ=GMT+12']
DBG: testbed command exited with code 0
DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43
-rwxr--r-- 1 user user  175 Jun  3 15:42
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18
DBG: testbed command exited with code 0
DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 56h/86d Inode: 1351634     Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.105915342 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
DBG: testbed command exited with code 0
DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/control/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/control/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d    Inode: 40767795    Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.089915352 -0400
Modify: 2016-07-18 15:06:31.089915352 -0400
Change: 2016-07-18 15:06:31.089915352 -0400
 Birth: -
DBG: testbed command exited with code 0
DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/control/artifact /tmp/tmpw_mwks82/control_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/control/artifact host /tmp/tmpw_mwks82/control_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/control/artifact is not already at destination /tmp/tmpw_mwks82/control_artifact, copying
DBG: got reply from testbed: ok

That last bit indicates that the copy command for the build artifact from the testbed to a temporary directory on the host succeeded. This is the debug output from the second run, with disorderfs enabled:

DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rwxr--r-- 1 user user 2340 Jun 28 18:43
-rwxr--r-- 1 user user  175 Jun  3 15:42
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18
DBG: testbed command exited with code 0
DBG: testbed command ['sh', '-ec', 'cd /tmp/autopkgtest.5oMipL/disorderfs/ ;\n umask 0002 ;\n linux64 --uname-2.6 python3 ;\n'], kind short, sout raw, serr pipe, env ['LC_ALL=fr_CH.UTF-8', 'CAPTURE_ENVIRONMENT=i_capture_the_environment', 'HOME=/nonexistent/second-build', 'VIRTUAL_ENV=~/code/reprotest/.tox/py35', 'PATH=~/code/reprotest/.tox/py35/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin/i_capture_the_path', 'LANG=fr_CH.UTF-8', 'PYTHONHASHSEED=559200286', 'TZ=GMT-14']
DBG: testbed command exited with code 0
DBG: testbed command ['ls', '-l', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
total 20
drwxr-xr-x 2 user user 4096 Jul 15 23:43 __pycache__
-rw-r--r-- 1 root root    0 Jul 18 15:06 artifact
-rwxr--r-- 1 user user 2340 Jun 28 18:43
-rwxr--r-- 1 user user  175 Jun  3 15:42
-rw-r--r-- 1 user user  252 Jun 14 16:06 template.ini
-rwxr-xr-x 1 user user 1600 Jul 15 23:18
DBG: testbed command exited with code 0
DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/'
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: 58h/88d Inode: 1           Links: 3
Access: (0755/drwxr-xr-x)  Uid: ( 1000/    user)   Gid: ( 1000/    user)
Access: 2016-07-18 15:06:31.201915291 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
DBG: testbed command exited with code 0
DBG: testbed command ['stat', '/tmp/autopkgtest.5oMipL/disorderfs/artifact'], kind short, sout raw, serr raw, env []
  File: '/tmp/autopkgtest.5oMipL/disorderfs/artifact'
  Size: 0           Blocks: 0          IO Block: 4096   regular empty file
Device: 58h/88d Inode: 7           Links: 1
Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2016-07-18 15:06:31.185915299 -0400
Modify: 2016-07-18 15:06:31.185915299 -0400
Change: 2016-07-18 15:06:31.185915299 -0400
 Birth: -
DBG: testbed command exited with code 0
DBG: sending command to testbed: copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: executing copyup /tmp/autopkgtest.5oMipL/disorderfs/artifact /tmp/tmpw_mwks82/experiment_artifact
schroot: DBG: copyup_shareddir: tb /tmp/autopkgtest.5oMipL/disorderfs/artifact host /tmp/tmpw_mwks82/experiment_artifact is_dir False downtmp_host /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52//tmp/autopkgtest.5oMipL
schroot: DBG: copyup_shareddir: tb(host) /var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact is not already at destination /tmp/tmpw_mwks82/experiment_artifact, copying
schroot: DBG: cleanup...
schroot: DBG: execute-timeout: schroot --run-session --quiet --directory=/ --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52 --user=root -- rm -rf -- /tmp/autopkgtest.5oMipL
rm: cannot remove '/tmp/autopkgtest.5oMipL/disorderfs': Device or resource busy
schroot: DBG: execute-timeout: schroot --quiet --end-session --chroot jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52
Unexpected error:
Traceback (most recent call last):
  File "~/code/reprotest/reprotest/lib/", line 708, in mainloop
  File "~/code/reprotest/reprotest/lib/", line 646, in command
    r = f(c, ce)
  File "~/code/reprotest/reprotest/lib/", line 584, in cmd_copyup
    copyupdown(c, ce, True)
  File "~/code/reprotest/reprotest/lib/", line 469, in copyupdown
    copyupdown_internal(ce[0], c[1:], upp)
  File "~/code/reprotest/reprotest/lib/", line 494, in copyupdown_internal
    copyup_shareddir(sd[0], sd[1], dirsp, downtmp_host)
  File "~/code/reprotest/reprotest/lib/", line 408, in copyup_shareddir
    shutil.copy(tb, host)
  File "/usr/lib/python3.5/", line 235, in copy
    copyfile(src, dst, follow_symlinks=follow_symlinks)
  File "/usr/lib/python3.5/", line 114, in copyfile
    with open(src, 'rb') as fsrc:
PermissionError: [Errno 13] Permission denied: '/var/lib/schroot/mount/jessie-amd64-ac94881d-ae71-4f24-a004-1847889d5d52/tmp/autopkgtest.5oMipL/disorderfs/artifact'

ls shows that the artifact is created in the right place. However, when reprotest tries to copy it from the testbed to the host, it gets a permission error. The traceback is coming from virt/schroot, and it's a Python open() call that's failing. Note that the permissions are wrong for the second run, but that's expected because my schroot is running stable, so the umask bug isn't fixed there yet; and that the rm error comes from disorderfs not being unmounted early enough (see below). I expect to see the umask test fail, though, not a crash in every test where the build succeeds.
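
The umask effect at issue is easy to reproduce outside schroot. This small sketch is our own illustration, not reprotest code: it shows how the mode a file is actually created with depends on the process umask, which is exactly the kind of difference reprotest's umask variation is meant to surface:

```python
# Create a file requesting mode 0o666 under a given umask and report
# the mode it actually receives (0o666 & ~umask).
import os
import stat
import tempfile

def mode_with_umask(mask: int) -> int:
    old = os.umask(mask)
    try:
        with tempfile.TemporaryDirectory() as d:
            path = os.path.join(d, "artifact")
            fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
            os.close(fd)
            return stat.S_IMODE(os.stat(path).st_mode)
    finally:
        os.umask(old)  # restore the previous umask

print(oct(mode_with_umask(0o022)))  # → 0o644
print(oct(mode_with_umask(0o002)))  # → 0o664
```

A build run under `umask 0002` therefore produces group-writable files where a control run under the default `0022` does not, which is the permission difference a working umask test should catch.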

After a great deal of effort, I isolated the bug that was causing the process to hang not to my code or autopkgtest's code, but to CPython and contextlib. It's supposed to be fixed in CPython 3.5.3, but for now I've worked around the problem by monkey-patching the patch provided in the latter issue onto contextlib.

Here is my current to-do list:

  • Fix PyPI not installing the virt/ scripts correctly.

  • Move the disorderfs unmount into the shell script. (When the virt/ scripts encounter an error, they try to delete a temporary directory, which fails if disorderfs is mounted, so the script needs to unmount it before that happens.)

  • Find and fix the schroot/disorderfs permission error bug.

  • Convert my notes on setting up for the tests into something useful for users.

  • Write scripts to synch version numbers and documentation.

  • Fix the headers in the autopkgtest code to conform to reprotest style.

  • Add copyright information for the contextlib monkey-patch and the autopkgtest files I've changed.

  • Close #829113 as wontfix.

And here are the questions I'd like to resolve before the second release:

  • Is there any other documentation that's essential? Finishing the documentation will come later.

  • Should I release before finishing the rest of the variations? This will slow down the release of the first version with something resembling full functionality.

  • Do I need to write a chroot test now? Given the duplication with schroot, I'm unconvinced this is worthwhile.

Google AdsenseLearn how #hashtags can help you

It’s official, #hashtags have taken over the internet. Much like memes, gifs, and audio-less fast motion cooking instructional videos, #hashtags fill up social media news feeds. However, unlike the other popular content types, what’s unique to #hashtags is that they organize conversations across the web. Even Jimmy Fallon and Justin Timberlake had something to say about #hashtags.

#Hashtags started around 2007 on Twitter, and have rapidly grown into a common medium for users to express their feelings or interests primarily on social networks. As the summer of sport kicks off, it’s a good idea for you to consider incorporating #hashtags into your content strategy as a key ingredient to #drawthecrowds.

#Hashtags are quite simple to use and can attract new users to your content when you understand how they work. Essentially, when the pound/hash sign is used in front of a group of words it automatically turns that group of words into a searchable link. This transforms those keywords into a conversation that the entire web can participate in and follow.
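
The "group of words becomes a searchable link" behavior can be sketched with a small extractor. The exact rules vary by network; this regex (word characters only, broken by spaces and punctuation) is our own simplification:

```python
# Illustrative sketch of how a platform might pull #hashtags out of a
# post to index them as searchable tokens.
import re

def extract_hashtags(post: str) -> list[str]:
    # \w+ stops at spaces and punctuation, which is why those
    # characters "break" a hashtag.
    return re.findall(r"#(\w+)", post)

print(extract_hashtags("Kicking off summer! #drawthecrowds #AdSenseGuide"))
# → ['drawthecrowds', 'AdSenseGuide']
```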

The use of #hashtags can be boiled down into two main use cases:
  1. Create your own, unique #hashtag to organize your content and start a conversation. This could be tricky because there are millions of #hashtags online, so don’t be afraid to repurpose one that exists. AdSense uses original #hashtags like #AdSenseGuide to promote our downloadable content or #AskAdSense for our Q&A sessions. We’re also using #drawthecrowds during the summer season to help AdSense publishers draw crowds to your content during big events. 
  2. Use an existing #hashtag and join in on a conversation. Use social network search options to find trending #hashtags that are relevant to your audience and join the conversation. For example, #BurningMan is a popular #hashtag used in the summer months to find news and updates about the annual event. Everyone from news publications to the thousands of people in the Black Rock Desert of Nevada will be using #BurningMan to share their perspective of the Burning Man experience. Using existing and popular #hashtags presents an opportunity for you to contribute your unique perspective to the digital conversation.
To get the most out of #hashtags, here’s a “do and don’t” list to reference as you build out your content strategy:
  • Use one to three #hashtags per post, any more is generally overdoing it. 
  • Use #hashtags that are relevant to your audience or ones that your industry is using. If you’re writing an article on food to try, you could use #hashtags like #Foodie and #Yummy so users will find you when they search for those always-trending keywords.
  • It’s ok to be specific. In most cases, the more specific the #hashtag, the better. If you’re going to talk about do it yourself (DIY) summer projects, you’d want to use #hashtags like #diyprojects, #diyideas or #diyweddings instead of general keywords like #DIY or #DoItYourSelf. Using specific #hashtags helps users pinpoint the exact content they’re looking for. 
  • Letters and numbers are OK to use in #hashtags.
  • Keep #hashtags short.

  • Don’t string too many words together. #itbecomesreallyreallyhardtoread and it can take up most of your Twitter character count.
  • Don’t use punctuation marks or spaces, they will break the searchable link.
  • Don’t use the same hashtag twice in the same social post. It’s just #weird.
Now that you understand how to use #hashtags and how they can help you #drawthecrowds this summer, share with us how you’re going to incorporate them into your content strategy – we’d love to follow along. 

Posted by Jay Castro, AdSense Content Marketing Specialist

CryptogramFuturistic Cyberattack Scenario

This is a piece of near-future fiction about a cyberattack on New York, including hacking of cars, the water system, hospitals, elevators, and the power grid. Although it is definitely a movie-plot attack, all the individual pieces are plausible and will certainly happen individually and separately.

Worth reading -- it's probably the best example of this sort of thing to date.

Worse Than FailureOptimizing the Backup

Leslie, head of IT at BlueBox, knew there was trouble when one of her underlings called her at 3AM. “The shared server’s down,” she said. “Disk failure. Accounting can’t issue invoices, design can’t get to its prototypes, and the CEO just lost his PowerPoint for next week’s conference speech.”

BlueBox, like many companies, kept many important documents on a shared server. It also held personal directories for every employee, and many (like the CEO) used it to store personal files. That data, totaling 100 GB, was backed up to a remote server every 24 hours. “Okay, swap out the disk and restore it.”

“I can’t find the backup,” the underling replied.


Leslie groaned, then rolled out of bed, booted her laptop, and RDPed into the remote server. The blood drained from her face: while there were backups of every other server that BlueBox needed to operate, the shared server’s was missing.

Bracing for the headache she would face at the office, Leslie made a call to a data recovery specialist. Later that morning, while the shared docs were being salvaged from the failed disk, Leslie prepped for the postmortem.

The Consultant

The remote server held eight 1TB HDDs in RAID 1+0, formatted with ZFS. With that robust configuration, it probably wasn’t a hardware issue that caused the backup to disappear. It clearly had to be something wrong with the file system.

Naturally, a ZFS consultant was hired.

“I just don’t see how it’s possible for a 100GB file to ‘disappear.’” The consultant addressed Leslie and the rest of IT seated in the conference room. He gestured the air quotes. “ZFS uses copy-on-write transactions. While a file is getting rewritten, the old file data remains on-disk until the operation is completed. If there were a hardware failure during that time, the file-system would fall back to the old file data. It wouldn’t ‘disappear.’”

“We’re paying you a lot of money,” Leslie said. “Why don’t you see for yourself.”

A laptop was brought with an open connection to the server. The consultant grimaced as he opened the DOS command prompt, muttering something about Bash, then ran several commands to check the integrity of the file-system. As he worked, his mouth went agape, cheeks twitching. “No, it’s not possible… This is a fresh file. Are you sure the file wasn’t, well … deleted?”

Leslie sighed. “Thank you for your time. Security will show you out.”

Just Saving Space

After spending thousands on a dead-end, Leslie decided to start with the basics, interviewing every member of IT about the day in question. After grilling several employees on her team, she called in Heather, who oversaw their backup solution.

“There’s a scheduled task to perform the backup on the shared server,” Heather began. “I have it timed for 3AM.”

“That’s close to when the backup failed. Does the scheduled task run a batch script?”

“Yeah.” Heather opened the script on her laptop and showed her.

Leslie’s stomach dropped. “Line 12 … you delete the old backup before creating a new one?”

“I always delete the last backup before I do the next backup,” Heather said. “It helps save space and keeps the hardware optimized. All the other servers are set up that way.”

It was all Leslie could do to keep herself from firing Heather on the spot.

The Solution

Leslie watched as Heather rewrote every backup batch script line by line. Seven previous backups would be kept, with new ones written every 24 hours, and old backups would be deleted only after the most recent backup was written. The consultant was still paid, despite offering little help. His invoice led to upper management reconsidering ZFS for their remote backup solution.
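The corrected rotation boils down to one ordering rule: write the new backup first, prune old ones only after it succeeds. A minimal Python sketch of that idea (function name and directory layout hypothetical; the real scripts were Windows batch files):

```python
import shutil
from datetime import datetime
from pathlib import Path

def rotate_backup(source: Path, backup_root: Path, keep: int = 7) -> Path:
    """Copy `source` into a new timestamped backup, then prune old ones.

    Old backups are deleted only *after* the new copy succeeds, so a
    failure mid-copy can never leave us with zero restorable backups.
    """
    backup_root.mkdir(parents=True, exist_ok=True)
    stamp = datetime.now().strftime("backup-%Y%m%d-%H%M%S-%f")
    dest = backup_root / stamp
    shutil.copytree(source, dest)           # 1. write the new backup first

    # 2. only then prune: zero-padded timestamps sort chronologically,
    #    so everything before the last `keep` entries is old.
    backups = sorted(p for p in backup_root.iterdir() if p.is_dir())
    for old in backups[:-keep]:
        shutil.rmtree(old)
    return dest
```

Had line 12 of the original script followed this ordering, the 3 AM disk failure would still have left the previous night's backup intact on the remote server.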

A few days afterwards, Leslie got an unexpected visitor. The CEO of BlueBox, effusive with praise, thanked her for finding his PowerPoint before the conference began. He offered a substantial bonus.

Leslie handed the CEO a business card. It had the contact info for the data recovery specialist who salvaged the PowerPoint file from the failed disk. “You ought to give him one, too,” Leslie said, “since he saved your presentation.”



Planet DebianLisandro Damián Nicanor Pérez Meyer: KDEPIM ready to be more broadly tested

As was posted a couple of weeks ago, the latest version of KDEPIM has been uploaded to unstable.

All packages are now uploaded and built and we believe this version is ready to be more broadly tested.

If you run unstable but have refrained from installing the kdepim packages up to now, we would appreciate it if you go ahead and install them now, reporting any issues that you may find.

Given that this is a big update that includes quite a number of plugins and libraries, it's strongly recommended that you restart your KDE session after updating the packages.

Happy hacking,

The Debian Qt/KDE Team.

Note lun jul 18 08:58:53 ART 2016: Link fixed and s/KDE/KDEPIM/.

Planet DebianIustin Pop: Energy bar restored!

So, I've been sick. Quite sick, as for the past ~2 weeks I wasn't able to bike, run, work or do much besides watch movies, look at photos and play some light games (ARPGs rule in this case, all you need to do is keep the left mouse button pressed).

It was supposed to be only a light viral infection, but it took longer to clear out than I expected, probably due to it happening right after my dental procedure (and possibly me wanting to restart exercise too soon, too fast). Not fun, it felt like the thing that refills your energy/mana bar in games broke. I simply didn't feel restored, despite sleeping a lot; 2-3 naps per day sound good as long as they are restorative, if they're not, sleeping is just a chore.

The funny thing is that recovery happened so slowly that when I finally had energy it took me by surprise. It was like “oh, wait, I can actually stand and walk without feeling dizzy! Wohoo!” As such, yesterday was a glorious Saturday ☺

I was therefore able to walk a bit outside the house this weekend and feel like having a normal cold, not like being under a “cursed: -4 vitality” spell. I expect the final symptoms to clear out soon, and that I can very slowly start doing some light exercise again. Not tomorrow, though…

In the meantime, I'm sharing a picture from earlier this year that I found while looking through my stash. Was walking in the forest in Pontresina on a beautiful sunny day, when a sudden gust of wind caused a lot of the snow on the trees to fly around and make it look a bit magical (photo is unprocessed besides conversion from raw to JPEG, this is how it was straight out of the camera):

Winter in the forest

Why a winter photo? Because that's exactly how cold I felt the previous weekend: 30°C outside, but I was going to the doctor in jeans and hoodie and cap, shivering…

Planet DebianMichael Stapelberg: mergebot: easily merging contributions

Recently, I was wondering why I was pushing off accepting contributions in Debian for longer than in other projects. It occurred to me that the effort to accept a contribution in Debian is way higher than in other FOSS projects. My remaining FOSS projects are on GitHub, where I can just click the “Merge” button after deciding a contribution looks good. In Debian, merging is actually a lot of work: I need to clone the repository, configure it, merge the patch, update the changelog, build and upload.

I wondered how close we can bring Debian to a model where accepting a contribution is just a single click as well. In principle, I think it can be done.

To demonstrate the feasibility and collect some feedback, I wrote a program called mergebot. The first stage is done: mergebot can be used on your local machine as a command-line tool. You provide it with the source package and bug number which contains the patch in question, and it will do the rest:

midna ~ $ mergebot -source_package=wit -bug=#831331
2016/07/17 12:06:06 will work on package "wit", bug "831331"
2016/07/17 12:06:07 Skipping MIME part with invalid Content-Disposition header (mime: no media type)
2016/07/17 12:06:07 gbp clone --pristine-tar git+ssh:// /tmp/mergebot-743062986/repo
2016/07/17 12:06:09 git config push.default matching
2016/07/17 12:06:09 git config --add remote.origin.push +refs/heads/*:refs/heads/*
2016/07/17 12:06:09 git config --add remote.origin.push +refs/tags/*:refs/tags/*
2016/07/17 12:06:09 git config stapelberg AT debian DOT org
2016/07/17 12:06:09 patch -p1 -i ../latest.patch
2016/07/17 12:06:09 git add .
2016/07/17 12:06:09 git commit -a --author Chris Lamb <lamby AT debian DOT org> --message Fix for “wit: please make the build reproducible” (Closes: #831331)
2016/07/17 12:06:09 gbp dch --release --git-author --commit
2016/07/17 12:06:09 gbp buildpackage --git-tag --git-export-dir=../export --git-builder=sbuild -v -As --dist=unstable
2016/07/17 12:07:16 Merge and build successful!
2016/07/17 12:07:16 Please introspect the resulting Debian package and git repository, then push and upload:
2016/07/17 12:07:16 cd "/tmp/mergebot-743062986"
2016/07/17 12:07:16 (cd repo && git push)
2016/07/17 12:07:16 (cd export && debsign *.changes && dput *.changes)

midna ~ $ cd /tmp/mergebot-743062986/repo
midna /tmp/mergebot-743062986/repo $ git log HEAD~2..
commit d983d242ee546b2249a866afe664bac002a06859
Author: Michael Stapelberg <stapelberg AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Update changelog for 2.31a-3 release

commit 5a327f5d66e924afc656ad71d3bfb242a9bd6ddc
Author: Chris Lamb <lamby AT debian DOT org>
Date:   Sun Jul 17 13:32:41 2016 +0200

    Fix for “wit: please make the build reproducible” (Closes: #831331)
midna /tmp/mergebot-743062986/repo $ git push
Counting objects: 11, done.
Delta compression using up to 8 threads.
Compressing objects: 100% (11/11), done.
Writing objects: 100% (11/11), 1.59 KiB | 0 bytes/s, done.
Total 11 (delta 6), reused 0 (delta 0)
remote: Sending notification emails to:
remote: Sending notification emails to:
To git+ssh://
   650ee05..d983d24  master -> master
 * [new tag]         debian/2.31a-3 -> debian/2.31a-3
midna /tmp/mergebot-743062986/repo $ cd ../export
midna /tmp/mergebot-743062986/export $ debsign *.changes && dput *.changes
Uploading wit_2.31a-3.dsc
Uploading wit_2.31a-3.debian.tar.xz
Uploading wit_2.31a-3_amd64.deb
Uploading wit_2.31a-3_amd64.changes

Of course, this is not quite as convenient as clicking a “Merge” button yet. I have some ideas on how to make that happen, but I need to know whether people are interested before I spend more time on this.

Please see for more details, and please get in touch if you think this is worthwhile or would even like to help. Feedback is accepted in the GitHub issue tracker for mergebot or the project mailing list mergebot-discuss. Thanks!

Planet DebianVasudev Kamath: Switching from approx to apt-cacher-ng

After a long ~5-year journey (from 2011) with approx I finally wanted to switch to something new like apt-cacher-ng. And after a bit of changes I finally managed to get apt-cacher-ng into my work flow.

Bit of History

I should first give you a brief account of how I started using approx. It all started at MiniDebconf 2011, which I organized at my alma mater. I met Jonas Smedegaard there and from him I learned about approx. Jonas has a bunch of machines at his home, he was an active user of approx, and he showed it to me while explaining the Boxer project. I was quite impressed with approx. Back then I was using a slow 230kbps INTERNET connection and I was also maintaining a couple of packages in Debian. Updating the pbuilder chroots was a time-consuming task for me as I had to download packages multiple times over the slow connection. And approx largely solved this problem, so I started using it.

5 years fast forward, I now have quite a fast INTERNET connection with a good FUP (about 50GB a month), but I still tend to use approx, which makes building packages quite a bit faster. I also use a couple of containers on my laptop, which all use my laptop as an approx cache.

Why switch?

So why change to apt-cacher-ng? Approx is a simple tool: it runs mainly under inetd and sits between apt and the repository on the INTERNET, whereas apt-cacher-ng provides a lot more features. Below are some, listed from the apt-cacher-ng manual.

  • use of TLS/SSL repositories (may be possible with approx but I'm not sure how to do it)
  • Access control of who can access caching server
  • Integration with debdelta (I've not tried it; approx also supports debdelta)
  • Avoiding use of apt-cacher-ng for some hosts
  • Avoiding caching of some file types
  • Partial mirroring for offline usage.
  • Selection of ipv4 or ipv6 for connections.

The biggest change I see is the speed difference between approx and apt-cacher-ng. I think this is mainly because apt-cacher-ng is threaded, whereas approx runs under inetd.

I do not need all of apt-cacher-ng's features at the moment, but who knows, in the future I might, and hence I decided to switch to apt-cacher-ng over approx.


The transition from approx to apt-cacher-ng was smoother than I expected. There are 2 approaches you can use: one is explicit routing, the other is transparent routing. I prefer transparent routing, and I only had to change my /etc/apt/sources.list to use the actual repository URL.

deb unstable main contrib non-free
deb-src unstable main

deb experimental main contrib non-free
deb-src experimental main

After above change I had to add a 01proxy configuration file to /etc/apt/apt.conf.d/ with following content.

Acquire::http::Proxy "http://localhost:3142/";
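One of the apt-cacher-ng features listed earlier, avoiding the proxy for some hosts, can be expressed in the same 01proxy file via apt's per-host proxy override. A sketch, with the bypassed hostname purely illustrative:

```
Acquire::http::Proxy "http://localhost:3142/";
// Fetch directly from this host instead of going through the cache
// (deb.example.org is an example; substitute the host you want to bypass):
Acquire::http::Proxy::deb.example.org "DIRECT";
```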

I use explicit routing only when using apt-cacher-ng with pbuilder and debootstrap. The following snippet shows explicit routing through /etc/apt/sources.list.

deb http://localhost:3142/ unstable main

Usage with pbuilder and friends

To use apt-cacher-ng with pbuilder you need to modify /etc/pbuilderrc to contain the following line


Usage with debootstrap

To use apt-cacher-ng with debootstrap, pass the MIRROR argument of debootstrap as http://localhost:3142/


I've now completed full transition of my work flow to apt-cacher-ng and purged approx and its cache.

Though it works fine, I feel that 2 caches will be created when you use both transparent and explicit proxying via the localhost:3142 URL. I'm sure it is possible to configure this to avoid duplication, but I've not yet figured it out. If you know how to fix this, do let me know.


Jonas told me that it's not 2 caches but 2 routing paths, one for transparent routing and another for explicit routing. So I guess there is nothing here to fix :-).

Planet DebianNeil Williams: Deprecating dpkg-cross

Deprecating the dpkg-cross binary

After a discussion in the cross-toolchain BoF at DebConf16, the gross hack packaged as the dpkg-cross binary package, and its supporting perl module, have finally been deprecated, long after multiarch was actually delivered. Various reasons have complicated the final steps for dpkg-cross, and there remains one use for some of the files within the package, although not the dpkg-cross binary itself.

2.6.14 has now been uploaded to unstable and introduces a new binary package cross-config, so it will spend some time in NEW. The changes are summarised in the NEWS entry for 2.6.14.

The cross architecture configuration files have moved to the new cross-config package and the older dpkg-cross binary with supporting perl module are now deprecated. Future uploads will only include the cross-config package.

Use cross-config to retain support for autotools and CMake cross-building configuration.

If you use the deprecated dpkg-cross binary, now is the time to migrate away from these path changes. The dpkg-cross binary and the supporting perl module should NOT be expected to be part of Debian by the time of the Stretch release.

2.6.14 also marks the end of my involvement with dpkg-cross. The Uploaders list has been shortened but I'm still listed to be able to get 2.6.14 into NEW. A future release will drop the perl module and the dpkg-cross binary, retaining just the new cross-config package.

Planet DebianValerie Young: Work after DebConf

First week after DebCamp and DebConf! Both were incredible — the Debian project and its contributors never fail to impress and delight me. None the less it felt great to have a few quiet, peaceful days of uninterrupted programming.

Notes about last week:

1. Finished Mattia’s final suggestions for the conversion of the package set pages script to python.

Hopefully it will be deployed soon, awaiting final approval 🙂

2. Replaced the bash code that produced the left navigation on the home page (and most other pages) with the mustache template the python scripts use.

Previously, HTML was constructed and spat out from both a Python and a shell script — now we have a single, DRY mustache template. (At the top of the bash function that produced the navigation HTML, you will find the comment: “this is really quite incomprehensible and should be killed, the solution is to write all html pages with python…”. Turns out the intermediate solution is to use templates 😉 )
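As an illustration of what that looks like, a mustache template for a navigation list might be written like this (a hypothetical sketch to show the idea, not the project's actual template; both the Python and shell paths then render the same file from shared data):

```
{{! left_navigation.mustache, a hypothetical sketch }}
<ul class="navigation">
  {{#links}}
  <li><a href="{{url}}" title="{{hover}}">{{label}}</a></li>
  {{/links}}
</ul>
```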

3. Thought hard about navigation of the test website, and redesigned (by rearranging) links in the left hand navigation.

After code review, you will see these changes as well! Things to look forward to include:
– A link to the Debian dashboard on the top left of every page (except the package specific pages).
– The title of each page (except the package pages) stretches across the whole page (instead of being squashed into the top left).
– Hover text has been added to most links in the left navigation.
– Links in left navigation have been reordered, and headers added.

Once you see the changes, please let me know if you think anything is unintuitive or confusing, everything can be easily changed!

4. Cross suite and architecture navigation enabled for most pages.

For most pages, you will be one click away from seeing the same statistics for a different suite or architecture! Whoo!

Notes about next week:

Last week I got carried away imagining minor improvements that can be made to the test website's UI, and I now have a backlog of ideas I’d like to implement. I’ve begun editing the script that makes most of the pages with statistics or package lists (for example, all packages with notes, or all recently tested packages) to use templates and contain a bit more descriptive text. I’d also like to do some revamping of the package set pages I converted.

These additional UI changes will be my first tasks for the coming week — since they are fresh on my mind and I’m quite excited about them. The following week I’d like to get back to extensibility and database issues mentioned previously!


Planet DebianPaul Tagliamonte: The Open Source License API

Around a year ago, I started hacking together a machine readable version of the OSI approved licenses list, and casually picking parts up until it was ready to launch. A few weeks ago, we officially announced the OSI license API, which is now live at

I also took a whack at writing a few API bindings, in Python, Ruby, and using the models from the API implementation itself in Go. In the following few weeks, Clint wrote one in Haskell, Eriol wrote one in Rust, and Oliver wrote one in R.

The data is sourced from a repo on GitHub, the licenses repo under OpenSourceOrg. Pull Requests against that repo are wildly encouraged! Additional data ideas, cleanup or more hand collected data would be wonderful!

In the meantime, use cases for this API range from language package managers programmatically checking the OSI approval of a license, to taking a license identifier as defined in one dataset (SPDX, for example) and finding the identifier as it exists in another system (DEP5, Wikipedia, TL;DR Legal).
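That cross-dataset lookup is easy to sketch. The record below is a hand-written approximation of a license entry carrying identifiers in several schemes; treat the exact field names as assumptions rather than the API's guaranteed response shape:

```python
from typing import Optional

# Hypothetical license record, approximating the shape described above:
# one license, with its identifier in several naming schemes.
record = {
    "id": "MIT",
    "name": "MIT License",
    "identifiers": [
        {"scheme": "SPDX", "identifier": "MIT"},
        {"scheme": "DEP5", "identifier": "Expat"},
    ],
}

def identifier_for(license_record: dict, scheme: str) -> Optional[str]:
    """Return this license's identifier in the given scheme, if listed."""
    for ident in license_record.get("identifiers", []):
        if ident["scheme"] == scheme:
            return ident["identifier"]
    return None

print(identifier_for(record, "DEP5"))  # prints: Expat
```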

Patches are hugely welcome, as are bug reports or ideas! I'd also love more API wrappers for other languages!

Planet Linux AustraliaLev Lafayette: GnuCOBOL: A Gnu Life for an Old Workhorse

COBOL is a business-orientated programming language that has been in use since 1959, making it one of the world's oldest programming languages.

Despite being much criticised (and for good reasons) it is still a major programming language in the financial sector, although there are a declining number of experienced programmers.


Planet Linux AustraliaBen Martin: Making surface mount PCBs with a CNC machine

The cool kids™ like to use toaster ovens with thermocouples to bake their own surface mount boards at home. I've been exploring doing that using boards that I make locally on a CNC. The joy of designing in the morning and having the working product in the evening. It seems SOIC size is OK, but smaller SMT IC packages currently present an issue. This gives interesting fodder for how to increase precision further. Doing SOIC and SMD LEDs/resistors from a sub-$1k CNC machine isn't too bad though, IMHO. And unlike other PCB-specific CNC machines I can also cut wood and metal with my machine :-p

Time to stock up on some SOIC microcontrollers for some full board productions. It will be very interesting to see if I can do an SMD USB connector. Makes it a nice complete black box to do something and talk ROS over USB.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2016

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, 158.25 work hours have been dispatched among 11 paid contributors. Their reports are available:

DebConf 16 Presentation

If you want to know more about how the LTS project is organized, you can watch the presentation I gave during DebConf 16 in Cape Town.

Evolution of the situation

The number of sponsored hours increased a little bit, to 135 hours per month, thanks to 3 new sponsors (Laboratoire LEGI – UMR 5519 / CNRS, Quarantainenet BV, GNI MEDIA). Our funding goal is getting closer but it’s not there yet.

The security tracker currently lists 40 packages with a known CVE and the dla-needed.txt file lists 38 packages awaiting an update.

Thanks to our sponsors

New sponsors are in bold.

CryptogramDallas Police Use a Robot to Kill a Person

This seems to be a first.

EDITED TO ADD (7/10): Another article.

EDITED TO ADD (7/12): And another article.

EDITED TO ADD (7/16): Several views.

CryptogramFriday Squid Blogging: Stuffed Squid with Chard and Potatoes

Looks like a tasty recipe.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Rondam RamblingsMost incompetent VP rollout ever

Wow.  Just, wow.  If Trump screws up his VP announcement this badly what hope is there for a Trump administration to be anything other than a total train wreck? [UPDATE] And as if Trump's own incompetence weren't enough, now there's a coup in Turkey to steal what's left of his thunder this news cycle.


TEDHow we broke the Panama Papers story: Gerard Ryle at TEDSummit

Photo by Bret Hartman/TED.

Faced with an immense cache of documents from a secret Panama bank, Gerard Ryle and the International Consortium of Investigative Journalists assembled a record-breaking team of global journalists to tell the world-shaking stories that might be contained therein. Photo by Bret Hartman/TED.

Imagine you’ve been handed the biggest single cache of leaked documents in recent history. Eleven and a half million documents, to be exact, implicating important figures from around the globe in decades of tax evasion and hidden accounts. But you only have 26 people at your disposal to go through them. What do you do?

This was reality for the German newspaper Süddeutsche Zeitung (SZ), when an anonymous source known as “John Doe” leaked an astounding amount of information regarding the Panamanian law firm Mossack Fonseca, a cache widely known as the Panama Papers. These documents amounted to 2 million PDFs, 5 million emails and every spreadsheet the firm had created over the past 40 years. It was clear that SZ would not be capable of combing through this data on their own, so they went to Ryle’s organization, the International Consortium of Investigative Journalists (ICIJ). What they chose to do next went against everything any of them had ever been taught to do as an investigatory journalist: They shared it.

The Panama Papers contained evidence of offshore banking, implicating everyone from American movie stars to Argentinian soccer players to Icelandic presidents. It was clear that no single journalistic entity would have the resources to accurately report on all of the data, to know what every name meant and how to connect the stories threading through the data. Given the scope of the subject matter, Ryle thought, “Who was better to know what is important in Nigerian business than a Nigerian?” So he and the ICIJ, over time, amassed a team of 356 journalists from 107 separate publications based in 80 different countries, with a philosophy of “native eyes on native names.” They operated under only two rules: 1) We share everything we find and 2) We all publish on the same day.

The temptation to publish early was strong and persistent. Many times, Ryle was called in to calm journalists desperate to share this injustice with the world. But, in the end, none of them broke the rules. They decided that the integrity and depth of the reporting was more important than glory for any single news outlet.

Beyond exposing the unfathomable amounts of money stored away in these offshore accounts, Ryle believes the Panama Papers was a purely journalistic breakthrough too. Their joint effort, spanning continents and uniting competitors, proved that the very technology that’s allegedly destroying the legacy of print could actually allow them “to reinvent journalism itself.” They communicated across oceans, built shared searchable databases and created a space where everyone could use their unique expertise, whether it be sports, politics or blood diamonds. This allowed them to report on the story in a “truly global way.” It’s a way of thinking that Ryle believes journalism has been staggeringly slow to adopt. Perhaps the success of the Panama Papers story will actually stand to redefine the crisis of our bleeding institutions of journalism. Because, as Ryle puts it, “where there is crisis, there is also opportunity.”

CryptogramI Have Joined the Board of Directors of the Tor Project

This week, I have joined the board of directors of the Tor Project.

Slashdot thread. Hacker News thread.

Krebs on SecurityCybercrime Overtakes Traditional Crime in UK

In a notable sign of the times, cybercrime has now surpassed all other forms of crime in the United Kingdom, the nation’s National Crime Agency (NCA) warned in a new report. It remains unclear how closely the rest of the world tracks the U.K.’s experience, but the report reminds readers that the problem is likely far worse than the numbers suggest, noting that cybercrime is vastly under-reported by victims.

The NCA’s Cyber Crime Assessment 2016, released July 7, 2016, highlights the need for stronger law enforcement and business partnership to fight cybercrime. According to the NCA, cybercrime emerged as the largest proportion of total crime in the U.K., with “cyber enabled fraud” making up 36 percent of all crime reported, and “computer misuse” accounting for 17 percent.

One explanation for the growth of cybercrime reports in the U.K. may be that the Brits are getting better at tracking it. The report notes that the U.K. Office of National Statistics only began including cybercrime for the first time last year in its annual Crime Survey for England and Wales.

“The ONS estimated that there were 2.46 million cyber incidents and 2.11 million victims of cyber crime in the U.K. in 2015,” the report’s authors wrote. “These figures highlight the clear shortfall in established reporting, with only 16,349 cyber dependent and approximately 700,000 cyber-enabled incidents reported to Action Fraud over the same period.”

The report also focuses on the increasing sophistication of organized cybercrime gangs that develop and deploy targeted, complex malicious software — such as Dridex and Dyre, which are aimed at emptying consumer and business bank accounts in the U.K. and elsewhere.

Avivah Litan, a fraud analyst with Gartner Inc., said cyber fraudsters in the U.K. bring their best game when targeting U.K. banks, which generally require far more stringent customer-facing security measures than U.S. banks — including smart cards and one-time tokens.

“I’m definitely hearing more about advanced attacks on U.K. banks than in the U.S.,” Litan said, adding that the anti-fraud measures put in place by U.K. banks have forced cybercriminals to focus more on social engineering U.K. retail and commercial banking customers.

Litan said if organized cybercrime gangs prefer to pick on U.K. banks, businesses and consumers, it may have more to do with convenience for the fraudsters than anything else. After all, she said, London is just two time zones behind Moscow, whereas the closest time zone in the U.S. is 7 hours behind.

“In most cases, the U.K. banks are pretty close to the fraudster’s own time zone, it’s a language the criminals can speak, and they’ve studied the banks’ systems up close and know how to get around security controls,” Litan said. “Just because you have more fraud controls doesn’t mean the fraudsters can’t beat them, it just forces the [crooks] to stay on top of their game. Why would you want to stay up all night doing online fraud against banks in the U.S. when you’d rather be out drinking with your buddies?”

The report observes that “despite the growth in scale and complexity of cyber crime and intensifying attacks, visible damage and losses have not (yet) been large enough to impact long term on shareholder value. The UK has yet to experience a cyber attack on business as damaging and publicly visible as the attack on the Target US retail chain.”

Although it would likely be difficult for a large, multinational European company to hide a breach similar in scope to that of the 2013 breach at Target, European nations generally have not had to adhere to the same data breach disclosure laws that are currently on the books in nearly every U.S. state — laws which prompt multiple U.S. companies each week to publicly acknowledge when they’ve suffered data breaches.

However, under the new European Union General Data Protection Regulation, companies that do business in Europe or with European customers will need to disclose “a breach of security leading to the accidental or unlawful destruction, loss, alteration, unauthorized disclosure of, or access to, personal data transmitted, stored or otherwise processed.”

It may be some time yet before U.K. and European businesses start coming forward about data breaches: For better or worse, the GDPR requirements don’t go into full effect for two more years.

Hat tip to Trend Micro’s blog as the inspiration for this post.

Sociological ImagesBreaking Down the Force/Choice Binary in the Sterilization of Women of Color

Flashback Friday.

U.S. women of color have historically been the victims of forced sterilization.  Sometimes women were sterilized during Cesarean sections and never told; others were threatened with termination of welfare benefits or denial of medical care if they didn’t “consent” to the procedure; teaching hospitals would sometimes perform unnecessary hysterectomies on poor women of color as practice for their medical residents.  In the south it was such a widespread practice that it had a euphemism: a “Mississippi appendectomy.”

Interestingly, today populations that were subject to this abuse have high rates of voluntary sterilization.  A recent report by the Urban Indian Health Institute included data showing that, compared to non-Hispanic white women (in gray), American Indian and Alaskan Native women (in cream) have very high rates of sterilization:

Iris Lopez, in an article titled “Agency and Constraint,” writes about what she discovered when she asked Puerto Rican women in New York City why they choose to undergo sterilization.

During the U.S. colonization of Puerto Rico, over 1/3rd of all women were sterilized.  And, today, still, Puerto Rican women in both Puerto Rico and the U.S. have “one of the highest documented rates of sterilization in the world.”  Two-thirds of these women are sterilized before the age of 30.

Lopez finds that 44% of the women would not have chosen the surgery if their economic conditions were better. They wanted, but simply could not afford, more children.

They also talked about the conditions in which they lived and explained that they didn’t want to bring children into that world.  They:

…talked about the burglaries, the lack of hot water in the winter and the dilapidated environment in which they live. Additionally, mothers are constantly worried about the adverse effect that the environment might have on their children. Their neighborhoods are poor with high rates of visible crime and substance abuse. Often women claimed that they were sterilized because they could not tolerate having children in such an adverse environment…

Many were unaware of other contraceptive options.  Few reported that their health care providers talked to them about birth control. So, many of them felt that sterilization was the only feasible “choice.”

Lopez argues that, by contrasting the choice to become sterilized with the idea of forced sterilization, we overlook the fact that choices are primed by larger institutional structures and ideological messages.  Reproductive freedom not only requires the ability to choose from a set of safe, effective, convenient and affordable methods of birth control developed for men and women, but also a context of equitable social, political and economic conditions that allows women to decide whether or not to have children, how many, and when.

Originally posted in 2010. Cross-posted at Ms. 

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.


Planet DebianLars Wirzenius: Two-factor auth for local logins in Debian using U2F keys

Warning: This blog post includes instructions for a procedure that can lead you to lock yourself out of your computer. Even if everything goes well, you'll be hunted by dragons. Keep backups, have a rescue system on a USB stick, and wear flameproof clothing. Also, have fun, and tell your loved ones you love them.

I've recently gotten two U2F keys. U2F is an open standard for authentication using hardware tokens. It's probably mostly meant for website logins, but I wanted to have it for local logins on my laptop running Debian. (I also offer a line of stylish aluminium foil hats.)

Having two-factor authentication (2FA) for local logins improves security if you need to log in (or unlock a screen lock) in a public or potentially hostile place, such as a cafe, a train, or a meeting room at a client. If they have video cameras, they can film you typing your password, and get the password that way.

If you set up 2FA using a hardware token, your enemies will also need to lure you into a cave, where a dragon will use a precision flame to incinerate you in a way that leaves the U2F key intact, after which your enemies steal the key, log into your laptop and leak your cat GIF collection.

Looking up information for how to set this up, I found a blog post by Sean Brewer, for Ubuntu 14.04. That got me started. Here's what I understand:

  • PAM is the technology in Debian for handling authentication for logins and similar things. It has a plugin architecture.

  • Yubico (maker of Yubikeys) have written a PAM plugin for U2F. It is packaged in Debian as libpam-u2f. The package includes documentation in /usr/share/doc/libpam-u2f/README.gz.

  • By configuring PAM to use libpam-u2f, you can require both password and the hardware token for logging into your machine.

Here are the detailed steps for Debian stretch, with minute differences from those for Ubuntu 14.04. If you follow these, and lock yourself out of your system, it wasn't my fault, you can't blame me, and look, squirrels! Also not my fault if you don't wear sufficient protection against dragons.

  1. Install pamu2fcfg and libpam-u2f.
  2. As your normal user, mkdir ~/.config/Yubico. The list of allowed U2F keys will be put there.
  3. Insert your U2F key and run pamu2fcfg -u$USER > ~/.config/Yubico/u2f_keys, and press the button on your U2F key when the key is blinking.
  4. Edit /etc/pam.d/common-auth and append the line auth required pam_u2f.so cue.
  5. Reboot (or at least log out and back in again).
  6. Log in, type in your password, and when prompted and the U2F key is blinking, press its button to complete the login.

pamu2fcfg reads the hardware token and writes out its identifying data in a form that the PAM module understands; see the pam-u2f documentation for details. The data can be stored in the user's home directory (my preference) or in /etc/u2f_mappings.
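For illustration only, a u2f_keys file with two enrolled keys looks roughly like the sketch below. The username, key handles, and public keys here are made up and heavily abbreviated; real values are long Base64-like strings produced by pamu2fcfg.

```
# ~/.config/Yubico/u2f_keys -- a single line per user:
# <username>:<keyhandle1>,<publickey1>:<keyhandle2>,<publickey2>
alice:Ckb7...x41g,04d2...9af1:r9Xc...q77w,04b8...33cd
```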

Once this is set up, anything that uses PAM for local authentication (console login, GUI login, sudo, desktop screen lock) will need to use the U2F key as well. ssh logins won't.

Next, add a second key to your u2f_keys. This is important, because if you lose your first key, or it's damaged, you'll otherwise have no way to log in.

  1. Insert your second U2F key and run pamu2fcfg -n > second, and press the second key's button when prompted.
  2. Edit ~/.config/Yubico/u2f_keys and append the contents of the file second to the line with your username.
  3. Verify that you can log in using your second key as well as the first key. Note that you should have only one of the keys plugged in at the same time when logging in: the PAM module wants the first key it finds so you can't test both keys plugged in at once.

This is not too difficult, but rather fiddly, and it'd be nice if someone wrote at least a way to manage the list of U2F keys in a nicer way.

Planet DebianRitesh Raj Sarraf: Fully SSL for my website

I finally made the full switch to SSL for my website. Thanks to this simple howto on Let's Encrypt. I had to use the upstream git repo though; the Debian-packaged tool did not have enough documentation/pointers in place. And finally, thanks to the Let's Encrypt project as a whole.

PS: http is now redirected to https. I hope nothing really breaks externally.




Worse Than FailureError'd: A Case of Mistaken Identity

"Wow, even Google doesn't understand the current mess that is British politics," writes Mike R.


"Variables. Gotta catch 'em all!" writes JR


Travis wrote, "I tried to sign in using my username and password, but Facebook wants me to sign in with the body of an error message."


Mark B. writes, "I seem to have found a very popular page on this website."


"I guess if I were to take this job, I'd be praying fervently every month that it was renewed," wrote John O.


James writes, "Gold price apparently spikes 9000% on Brexit worries."


"I was trying to report an issue with an online defensive driving course, but now I have an issue with their bug report form!" wrote Ben B.


Alex D. wrote, "Yes, Microsoft, you're trying your best. You'll make it someday."


[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.


Planet DebianAndrew Cater: Who wrote Hello world

Who wrote "Hello, world" ?
Rereading Kernighan and Ritchie's classic book on C, almost the first thing you find is the listing for hello world. The comments make it clear that this is a commonplace - the sort of program that every programmer writes as a first test - the new computer works, the compiler / interpreter produces useful output and so on. It's the classic, canonical thing to do.

A long time back, I got asked whether programming was an art or a science: it's both, but most of all it's only good insofar as it's shared and built on. I used hello world as an example: you can write hello world. You decide to add different text - a greeting (Hej! / ni hao / Bonjour tout le monde!) for friends.

You discover at / cron / anacron - now you can schedule reminders "It's midnight - do you know where your code is?" "Go to bed, you have school tomorrow"

You can discover how to code for a graphical environment: how to build a test framework around it to check that it _only_ prints hello world and doesn't corrupt other memory ... the uses are endless if it sparks creativity.

If you feel like it, you can share your version - and learn from others. Write it in different languages - there's the analogous 99 bottles of beer site showing how to count and use different languages at

Not everyone will get it: not everyone will see it but everyone needs the opportunity 

Everyone needs the chance to share and make use of the commons, needs to be able to feel part of this 

Needs to be included: needs to feel that this is part of common heritage.
If you work for an employer: get them to contribute code / money / resources - even if it's as a charitable donation or to offset against taxes

If you work for a government: get them to use Free/Libre/Open Source products

If you work for a hosting company / ISP - get them to donate bandwidth for schools/coding clubs.

Give your time, effort, expertise to help: you gained from others, help others gain

If you work for an IT manufacturer - get them to think of FLOSS as the norm, not the exception



Planet DebianSteve Kemp: Adding lua to all the things!

Recently Antirez made a post documenting a simple editor in 1k of pure C. The post was interesting in itself, and the editor is a cute toy because it doesn't use curses - instead it uses escape sequences directly.

The github project became very popular and much interesting discussion took place on hacker news.

My interest was piqued because I've obviously spent a few months working on my own console based program, and so I had to read the code, see what I could learn, and generally have some fun.

As expected Salvatore's code is refreshingly simple, neat in some areas, terse in others, but always a pleasure to read.

Also, as expected, a number of forks appeared adding various features. I figured I could do the same, so I did the obvious thing in adding Lua scripting support to the project. In my fork the core of the editor is mostly left alone; instead, code was moved out of it into an external Lua script.

The highlight of my lua code is this magic:

  -- Keymap of bound keys
  local keymap = {}

  --  Default bindings
  keymap['^A']        = sol
  keymap['^D']        = function() insert( ) end
  keymap['^E']        = eol
  keymap['^H']        = delete
  keymap['^L']        = eval
  keymap['^M']        = function() insert("\n") end

I wrote a function invoked on every key-press, and use that to lookup key-bindings. By adding a bunch of primitives to export/manipulate the core of the editor from Lua I simplified the editor's core logic, and allowed interesting facilities:

  • Interactive evaluation of lua.
  • The ability to remap keys on the fly.
  • The ability to insert command output into the buffer.
  • The implementation of copy/paste entirely in Lua.

All in all I had fun, and I continue to think a Lua-scripted editor would be a neat project - I'm just not sure there's a "market" for another editor.

View my fork here, and see the sample kilo.lua config file.

Planet DebianSune Vuorela: Leaky lambdas and self referencing shared pointers

After a bit of a debugging session, I ended up looking at some code in a large project

m_foo = std::make_shared<SomeQObject>();
/* plenty of lines and function boundaries left out */
(void)connect(m_foo.get(), &SomeQObject::someSignal, [m_foo]() {
  /* ... the lambda captures a copy of the shared_ptr ... */
});

The connection gets removed when the object inside m_foo gets de-allocated by the shared_ptr.
But the connection target is a lambda that has captured a copy of the shared_ptr, so the refcount can never reach zero: the object owns the connection, and the connection's lambda keeps the object alive.

There is at least a couple of solutions.

  • Keep the connection object (QMetaObject::Connection) around and call disconnect in your destructor. That way the connection gets removed, and the lambda object should get removed with it.
  • Capture the shared pointer by (const) reference, capture it as a weak pointer, or as a raw pointer. All of these are safe because whenever the shared pointer's refcount reaches zero, the connection gets taken down together with the object.

I guess the lesson learnt is be careful when capturing shared pointers.

Krebs on SecurityThe Value of a Hacked Company

Most organizations only grow in security maturity the hard way — that is, from the intense learning that takes place in the wake of a costly data breach. That may be because so few company leaders really grasp the centrality of computer and network security to the organization’s overall goals and productivity, and fewer still have taken an honest inventory of what may be at stake in the event that these assets are compromised.

If you’re unsure how much of your organization’s strategic assets may be intimately tied up with all this technology stuff, ask yourself what would be of special worth to a network intruder. Here’s a look at some of the key corporate assets that may be of interest and value to modern bad guys.


This isn’t meant to be an exhaustive list; I’m sure we can all think of other examples, and perhaps if I receive enough suggestions from readers I’ll update this graphic. But the point is that whatever paltry monetary value the cybercrime underground may assign to these stolen assets individually, they’re each likely worth far more to the victimized company — if indeed a price can be placed on them at all.

In years past, most traditional, financially-oriented cybercrime was opportunistic: That is, the bad guys tended to focus on getting in quickly, grabbing all the data that they knew how to easily monetize, and then perhaps leaving behind malware on the hacked systems that abused them for spam distribution.

These days, an opportunistic, mass-mailed malware infection can quickly and easily morph into a much more serious and sustained problem for the victim organization (just ask Target). This is partly because many of the criminals who run large spam crime machines responsible for pumping out the latest malware threats have grown more adept at mining and harvesting stolen data.

That data mining process involves harvesting and stealthily testing interesting and potentially useful usernames and passwords stolen from victim systems. Today’s more clueful cybercrooks understand that if they can identify compromised systems inside organizations that may be sought-after targets of organized cybercrime groups, those groups might be willing to pay handsomely for such ready-made access.

It’s also never been easier for disgruntled employees to sell access to their employer’s systems or data, thanks to the proliferation of open and anonymous cybercrime forums on the Dark Web that serve as a bustling marketplace for such commerce. In addition, the past few years have seen the emergence of several very secretive crime forums wherein members routinely solicited bids regarding names of people at targeted corporations that could serve as insiders, as well as lists of people who might be susceptible to being recruited or extorted.

The sad truth is that far too many organizations spend only what they have to on security, which is often to meet some kind of compliance obligation such as HIPAA to protect healthcare records, or PCI certification to be able to handle credit card data, for example. However, real and effective security is about going beyond compliance — by focusing on rapidly detecting and responding to intrusions, and constantly doing that gap analysis to identify and shore up your organization’s weak spots before the bad guys can exploit them.

How to fashion a cybersecurity focus beyond mere compliance. Source: PWC on NIST framework.

Those weak spots very well may be your users, by the way. A number of security professionals I know and respect claim that security awareness training for employees doesn’t move the needle much. These naysayers note that there will always be employees who will click on suspicious links and open email attachments no matter how much training they receive. While this is generally true, at least such security training and evaluation offers the employer a better sense of which employees may need more heavy monitoring on the job and perhaps even additional computer and network restrictions.

If you help run an organization, consider whether the leadership is investing enough to secure everything that’s riding on top of all that technology powering your mission: Chances are there’s a great deal more at stake than you realize.

Organizational leaders in search of a clue about how to increase both their security maturity and the resiliency of all their precious technology stuff could do far worse than to start with the Cybersecurity Framework developed by the National Institute of Standards and Technology (NIST), the federal agency that works with industry to develop and apply technology, measurements, and standards. This primer (PDF) from PWC does a good job of explaining why the NIST Framework may be worth a closer look.

Image: PWC.

If you liked this post, you may enjoy the other two posts in this series — The Scrap Value of a Hacked PC and The Value of a Hacked Email Account.

CryptogramSecurity Effectiveness of the Israeli West Bank Barrier

Interesting analysis:

Abstract: Objectives -- Informed by situational crime prevention (SCP) this study evaluates the effectiveness of the "West Bank Barrier" that the Israeli government began to construct in 2002 in order to prevent suicide bombing attacks.

Methods -- Drawing on crime wave models of past SCP research, the study uses a time series of terrorist attacks and fatalities and their location in respect to the Barrier, which was constructed in different sections over different periods of time, between 1999 and 2011.

Results -- The Barrier together with associated security activities was effective in preventing suicide bombings and other attacks and fatalities with little if any apparent displacement. Changes in terrorist behavior likely resulted from the construction of the Barrier, not from other external factors or events.

Conclusions -- In some locations, terrorists adapted to changed circumstances by committing more opportunistic attacks that require less planning. Fatalities and attacks were also reduced on the Palestinian side of the Barrier, producing an expected "diffusion of benefits" though the amount of reduction was considerably more than in past SCP studies. The defensive roles of the Barrier and offensive opportunities it presents, are identified as possible explanations. The study highlights the importance of SCP in crime and counter-terrorism policy.

Unfortunately, the whole paper is behind a paywall.

Note: This is not a political analysis of the net positive and negative effects of the wall, just a security analysis. Of course any full analysis needs to take the geopolitics into account. The comment section is not the place for this broader discussion.

Worse Than FailureThe Not-So-Highly-Paid Consultant

Consulting. It's as much art as science. You apply for a job to create/change some system, and need to bid an amount that not only covers your time, but leaves a little something extra in your pocket. Of course, we all know that requirements are never absolute, or even well thought out. As such, you need to build some extra cost into your bid to take this into account. Build in too much and you will be overpriced and not get the job. Build in too little and you will be under-priced and get the job at what will inevitably become a loss.

Writing a contract that restricts the work to a specific list of features is nearly impossible because nobody ever thinks through what they want in advance (think about your last outsourced project). Given that, you need to be skilled at letting the client know that you will be nice and implement tiny things that are not in the spec for free, but anything that is outside the contract spec and takes any real time will be at an added cost (the art of saying no: why yes, we can add that feature, but it will take x weeks at a cost of y).

During the start of January 2016, Sean was contracted by a local news organization to modify their news website for them. Their website was built using WordPress. Believing that it was just a simple addition of pages, footers, headers, and theme, he took the job, and agreed upon a deadline of January 31 with a very small fixed fee of $30 (yes, t-h-i-r-t-y dollars for several weeks of work). Sean felt relieved that he was not going to have to build a full-blown news website because he already had another project in his start-up on queue.

My lawn-guy gets more than that for ten minutes of mowing.

Sean was given the credentials to the web host they were using and started to work. Upon opening the website, it took more than 10 seconds for it to fully load. He felt sad but endured the pain because he believed the task was just "easy." In the first two weeks, doing the job felt good. He optimized the WordPress website a bit, added the necessary pages and footers, and added SEO. Everything was fine and Sean was ready to show them the website.

A week later, the client called Sean and completely changed the requirements. They asked him to add a custom look on two of the pages, change the font, and add an interactive news map. That was not in the originally agreed-upon site design! Sean vigorously protested, but the client just said (non-verbatim), "Aww. Sean, you're a very good programmer! You can do it right? It can't be that hard."

When people tell you how easy your job is, the best thing to do is to make them do it for themselves.

Sean was not in a position to increase the cost of the job to cover the extra work, and could not do anything about it at that time. A week passed and he finished the custom look. He even had to pull in the source code from the website to his laptop because the loading was so slow that he could no longer bear it. What was left to be done was the interactive news map.

Now I don't know anything about web design but that sounds like something that's significantly more complicated than you can do for $30, let alone on top of the other work.

The interactive news map they requested was such that when the user clicked on a given province on a map, news for that province would be displayed on the bottom of the map. Sean did not know how he would implement that feature. It was certainly not in the cards given the original fee.

Sean thought that they should receive service comparable to the fee they paid. He told them that the interactive news map couldn't be done because of "technical stuff." They bought the excuse.

What he gave them was a website that looked done but actually had a lot of visual bugs. What they asked him to do was to modify their website by just adding a couple of pages, a theme, and add the necessary information, and that's what he gave them.

Before and during the start of work, Sean learned that he was the second programmer they contracted to develop their website. The first programmer they contracted was a friend of his who was also asked to modify the site and add an interactive news map. He bailed out immediately because of the discrepancy between the pay and the amount of work.

To this day, their news website is still up and running, albeit really slowly. However, it seems that they haven't added their articles yet.

It's like when you see job postings where they want an expert with ten years of experience in each of web design, Java, C++, C# and .NET, system administration and as a DBA in each of Sybase, Oracle, DB2 and SQL-Server, and their pay range goes up to $60/hour. And they wonder why they can't fill the job.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianNorbert Preining: Osamu Dazai – No Longer Human

Japanese authors have a tendency to commit suicide, it seems. I have read Ryunosuke Akutagawa (芥川 龍之介, at 35), Yukio Mishima (三島 由紀夫, at 45), and also Osamu Dazai (太宰 治, at 39). Their end often reflects in their writings, and one of these examples is the book I just finished, No Longer Human.


Considered Dazai’s masterpiece, it is, together with Soseki’s Kokoro, one of the best-selling novels in Japan. The book recounts the life of Oba Yozo, from childhood to his end in a mental hospital. The early years, described in the first chapter (“Memorandum”), are filled with a feeling of differentness and alienation from the rest, and Oba starts his way of living by playing the clown, permanently making jokes. The Second Memorandum spans the time up to university, where he drops out, tries to become a painter, and indulges in alcohol, smoking and prostitutes, leading to a suicide attempt together with a married woman, which he survived. The first part of the Third Memorandum sees a short recovery due to his relationship with a woman: he stops drinking and works as a cartoonist. But in the last part his drinking pal from university times shows up again, and they return to ever more vicious drinking. Eventually he is separated from his wife and confined to a mental hospital.

Very depressing to read, but written in a way that one cannot stop reading. The disturbing thing about this book is that, although the protagonist commits many bad deeds, we somehow feel attached to him and pity him. It is an exercise in how circumstances and small predispositions can make a huge difference in our lives. And it warns us that each one of us can easily come to this brink.


Planet DebianSteinar H. Gunderson: Cubemap 1.3.0 released

I just released version 1.3.0 of Cubemap, my high-performance video reflector. For a change, both new features are from (indirect) user requests; someone wanted support for raw TS inputs and it was easy enough to add.

And then I heard a rumor that people had found Cubemap useless because “it was logging so much”. Namely, if you have a stream that's down, Cubemap will connect to it every 200 ms, and log two lines for every failed connection attempt. Now, why people discard software on ~50 MB/day of logs (more like 50 kB/day after compression) on a broken setup (if you have a stream that's not working, why not just remove it from the config file and reload?) instead of just asking the author is beyond me, but hey, eventually it reached my ears, and after a grand half hour of programming, there's rate-limiting of logging failed connection attempts. :-)

The new version hasn't hit Debian unstable yet, but I'm sure it will very soon.

Google AdsenseHow creating a content calendar can help you #DrawTheCrowds

In today’s fast-paced and global media landscape, timing is everything when you’re looking to draw crowds to your site. Throughout the year, there are trends and events that create opportunities for publishers to connect with audiences in the moments that matter. If you can time your content right and make it topical, you stand a much better chance of capturing the interest of a wider audience.

Let’s take a simple example. There’s no sense in a fashion blogger writing a piece on Coachella fashion when the annual music festival season is over. If that blogger could know when events relevant to fashion were approaching, she could much better plan her content. That’s where content calendars come in.

A content calendar is an essential tool for any publisher looking to harness some of the online buzz generated by big events. Though they can come in many different forms and countless templates exist, the essential components of a content calendar are the same. Big events relevant to your users are plotted on a calendar, allowing you to get a jump on your content plan.

If you know what events are approaching, you can start to think about the kind of content you can create to harness your users’ interest in them. 

To get you started, we’ve pulled out a few upcoming events sure to draw the crowds over the next few months. There are 5 calendar categories for you to choose from: national holidays, sports, entertainment, travel, and news. Scroll down to take a look at what big moments are coming up, click to add them to your calendar, then start planning your content!

This initial list is just to get you started, once you get into the habit of planning your content in advance, you can start to do your own research into what upcoming events might interest your audience. Need an insight into what your users like? Google Analytics can help you get to know your audience better.

Get started and plan your year today to begin drawing the crowds! New to AdSense? Sign up now and turn your passion into profit.

Posted by Jay Castro, AdSense Content Marketing Specialist

Planet DebianNiels Thykier: Selecting key packages via UDD

Thanks to Lucas Nussbaum, we now have a UDD script to filter/select key packages. Some example use cases:

Which key packages used compat 4?

# Data file compat-4-packages (one *source* package per line)
$ curl --silent --data-binary @compat-4-packages \

Also useful for things like bug#830997, which was my excuse for requesting this. :)

Is package foo a key package (yet)?

$ is-key-pkg() { 
 RES=$(echo "$1" | curl --silent --data-binary @- \
   ...)
 if [ "$RES" ]; then
   echo yes
 else
   echo no
 fi
}
$ is-key-pkg bash
$ is-key-pkg mscgen
$ is-key-pkg NotAPackage


The above shell snippets might need tweaking for better error handling, etc.

Once again, thanks to Lucas for the server-side UDD script. :)

Filed under: Debian

CryptogramVisiting a Website against the Owner's Wishes Is Now a Federal Crime

While we're on the subject of terrible 9th Circuit Court rulings:

The U.S. Court of Appeals for the 9th Circuit has handed down a very important decision on the Computer Fraud and Abuse Act.... Its reasoning appears to be very broad. If I'm reading it correctly, it says that if you tell people not to visit your website, and they do it anyway knowing you disapprove, they're committing a federal crime of accessing your computer without authorization.

Planet DebianDominique Dumont: A survey for developers about application configuration


Markus Raab, the author of the Elektra project, has created a survey to get FLOSS developers' point of view on the configuration of applications.

If you are a developer, please fill in this survey to help Markus' work on improving application configuration management. Filling in the survey should take about 15 minutes.

Note that the survey will close on July 18th.

The fact that this blog post comes 1 month after the beginning of the survey is entirely my fault. Sorry about that…

All the best

Tagged: configuration, Perl

CryptogramPassword Sharing Is Now a Crime

In a truly terrible ruling, the US 9th Circuit Court ruled that using someone else's password with their permission but without the permission of the site owner is a federal crime.

The argument McKeown made is that the employee who shared the password with Nosal "had no authority from Korn/Ferry to provide her password to former employees."

At issue is language in the CFAA that makes it illegal to access a computer system "without authorization." McKeown said that "without authorization" is "an unambiguous, non-technical term that, given its plain and ordinary meaning, means accessing a protected computer without permission." The question that legal scholars, groups such as the Electronic Frontier Foundation, and dissenting judge Stephen Reinhardt ask is an important one: Authorization from who?

Reinhardt argues that Nosal's use of the database was unauthorized by the firm, but was authorized by the former employee who shared it with him. For you and me, this case means that unless Netflix specifically authorizes you to share your password with your friend, you're breaking federal law.

The EFF:

While the majority opinion said that the facts of this case "bear little resemblance" to the kind of password sharing that people often do, Judge Reinhardt's dissent notes that it fails to provide an explanation of why that is. Using an analogy in which a woman uses her husband's user credentials to access his bank account to pay bills, Judge Reinhardt noted: "So long as the wife knows that the bank does not give her permission to access its servers in any manner, she is in the same position as Nosal and his associates." As a result, although the majority says otherwise, the court turned anyone who has ever used someone else's password without the approval of the computer owner into a potential felon.

The Computer Fraud and Abuse Act has been a disaster for many reasons, this being one of them. There will be an appeal of this ruling.

Planet DebianReproducible builds folks: Reprotest containers are (probably) working, finally

Author: ceridwen

After testing and looking at the code for Plumbum, I decided that it wouldn't work for my purposes. When a command is created by something like remote['ls'], it actually looks up which ls executable will be run and stores an internal representation of the path, like /bin/ls, in the command object. To make it work with autopkgtest's code would have required writing some kind of middle layer that would take all of the Plumbum code that makes subprocess calls, does path lookups, or uses the Python standard library to access OS functions, and convert it into shell scripts. Another similar library, sh, has the same problems. I think there's a strong argument that something like Plumbum's or sh's API would be much easier to work with than adt_testbed, and I may take steps to improve it at some point, but for the moment I've focused on getting reprotest working with the existing autopkgtest/adt_testbed API.

To do this, I created a minimalistic shell AST library using the POSIX shell grammar. I omitted a lot of functionality that wasn't immediately relevant for reprotest and simplified the AST in some places to make it easier to work with. Using this, reprotest generates a shell script and runs it with adt_testbed.Testbed.execute(). With these pieces in place, the tests succeed for both null and schroot! I haven't tested the rest of the containers, but barring further issues with autopkgtest, I expect they should work.
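The AST-to-script approach can be sketched roughly like this (a toy illustration, not reprotest's actual classes; the node names and the example commands are invented):

```python
# Toy sketch of a shell AST that renders nodes to a POSIX shell string
# with proper quoting, which could then be handed to something like
# adt_testbed.Testbed.execute(). Names here are illustrative only.
import shlex

class Command:
    """A single command: argv rendered with safe quoting."""
    def __init__(self, *argv):
        self.argv = argv
    def render(self):
        return " ".join(shlex.quote(a) for a in self.argv)

class Pipeline:
    """Commands joined by pipes."""
    def __init__(self, *commands):
        self.commands = commands
    def render(self):
        return " | ".join(c.render() for c in self.commands)

class Sequence:
    """Nodes executed one after another."""
    def __init__(self, *nodes):
        self.nodes = nodes
    def render(self):
        return "; ".join(n.render() for n in self.nodes)

script = Sequence(
    Command("cd", "/build/src dir"),
    Pipeline(Command("dpkg-buildpackage", "-us", "-uc"), Command("tee", "log")),
).render()
print(script)  # cd '/build/src dir'; dpkg-buildpackage -us -uc | tee log
```

The quoting step is the important part: building the script from an AST rather than by string concatenation means arguments with spaces or shell metacharacters can't corrupt the generated script.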

At this point, my goal is to push the next release out by next week, though as usual it will depend on how many snags I hit in the process. I see the following as the remaining blockers:

  • Test chroot and qemu in addition to null and schroot.

  • PyPI still doesn't install the scripts in virt/ properly.

  • While I fixed part of adt_testbed's error handling, some build failures still cause it to hang, and I have to kill the process by hand.

  • Better user documentation. I don't have time to be thorough, but I can provide a few more pointers.

Sociological ImagesA Woman Steps Out onto the Glass Cliff: Theresa May to Lead the UK

At the end of last month, just after the United Kingdom voted to leave the European Union, a commentator at the lauded US News and World Report claimed that the “general consensus” was that the vote was a “veritable dumpster fire.” Since then, most citizens of the EU, many Americans, and lots of UK citizens, including many who voted to leave, seem to think that this was a terrible decision, sending the UK into treacherous political and economic territory.

The Prime Minister agreed to step down and, rather quickly, two women rose to the top of the replacement pool. Yesterday Theresa May was the lone contender left standing and today she was sworn in.


Is it a coincidence that a woman is about to step into the top leadership position after the Brexit?

Research suggests that it’s not. In contexts as wide-ranging as the funeral business, music festivals, political elections, the military, and law firms, studies have found a tendency for women to be promoted in times of crisis. As a result, women are given jobs that have a higher risk of failure — like, for example, cleaning up a dumpster fire.  It’s called the “glass cliff,” an invisible hazard that harms women’s likelihood of success. One study found that, because of this phenomenon, the average tenure of a female CEO is only about 60% as long as that of the average male CEO.

As one Democratic National Committee chair once said: “The only time to run a woman is when things look so bad that your only chance is to do something dramatic.” Maybe doing “something dramatic” is why so many women are promoted during times of crisis, but the evidence suggests that another reason is because men protect other men from having to take precarious positions. This was the experience of one female Marine Corps officer:

It’s the good old boys network. The guys helping each other out and we don’t have the women helping each other out because there are not enough of us around. The good old boys network put the guys they want to get promoted in certain jobs to make them stand out, look good.

If women are disproportionately promoted during times of crisis, then they will fail more often than their male counterparts. And they do. It will be interesting to watch whether May can clean up this dumpster fire and, if she can’t, what her legacy will be.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

(View original at

Krebs on SecurityAdobe, Microsoft Patch Critical Security Bugs

Adobe has pushed out a critical update to plug at least 52 security holes in its widely-used Flash Player browser plugin, and another update to patch holes in Adobe Reader. Separately, Microsoft released 11 security updates to fix more than 40 flaws in Windows and related software.

First off, if you have Adobe Flash Player installed and haven’t yet hobbled this insecure program so that it runs only when you want it to, you are playing with fire. It’s bad enough that hackers are constantly finding and exploiting zero-day flaws in Flash Player before Adobe even knows about the bugs.

The bigger issue is that Flash is an extremely powerful program that runs inside the browser, which means users can compromise their computer just by browsing to a hacked or malicious site that targets unpatched Flash flaws.

The smartest option is probably to ditch this insecure program once and for all and significantly increase the security of your system in the process. I’ve got more on that approach — as well as slightly less radical solutions — in A Month Without Adobe Flash Player.

If you choose to update, please do it today. The most recent versions of Flash should be available from this Flash distribution page or the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). Chrome and IE should auto-install the latest Flash version on browser restart.

Happily, Adobe has delayed plans to stop distributing direct download links to its Flash Player program. The company had said it would decommission the direct download page on June 30, 2016, but the latest, patched Flash version for Windows and Mac systems is still available there. The wording on the site has been changed to indicate the download links will be decommissioned “soon.”

Adobe’s advisory on the Flash flaws is here. The company also released a security update that addresses at least 30 security holes in Adobe Reader. The latest version of Reader for most Windows and Mac users is v. 15.017.20050.

Six of the 11 patches Microsoft issued this month earned its most dire “critical” rating, which Microsoft assigns to software bugs that can be exploited to remotely commandeer vulnerable machines with little to no help from users, save perhaps browsing to a hacked or malicious site.

In fact, most of the vulnerabilities Microsoft fixed this Patch Tuesday are in the company’s Web browsers — i.e., Internet Explorer (15 vulnerabilities) and its newer Edge browser (13 flaws). Both patches address numerous browse-and-get-owned issues.

Another critical patch from Redmond tackles problems in Microsoft Office that could be exploited through poisoned Office documents.

For further breakdown on the patches this month from Adobe and Microsoft, check out these blog posts from security vendors Qualys and Shavlik. And as ever, if you encounter any problems downloading or installing any of the updates mentioned above please leave a note about your experience in the comments below.

Worse Than FailureCodeSOD: Lunatic Schema-tic

One day, James’s boss asked him to take a look at a ticket regarding the “Cash Card Lookup” package, which had an issue. James had no idea what that was, so he asked.

“I don’t know,” his boss replied. “I just know the call center uses it. You’ll need to talk to them.”

James picked up the ticket and called the customer.

“Oh, yes,” the customer replied. “We need this to get customer details based on their cash-card number. I think Timmy made it.”

“Timmy? Who’s Timmy?”

“He’s our tech guy. He sets up our computers, helps us when we have issues, that stuff. Let me transfer you to him…”

Timmy had indeed made it, because he “did a little programming”. There was also the issue of internal billing- like many large companies, each business unit needed to charge other business units for their time. The software development team billed at $95/hr, but Timmy was already on salary to the customer service department.

He had grabbed a spare box, slapped Linux and MySQL on it, then whipped up a simple Perl script that served up a web page for doing the lookup.

Data entry, on the other hand, was a different problem altogether. Knowing Remy’s Law of Requirements Gathering, Timmy gave them an Excel spreadsheet with a VBA macro that could connect to the MySQL database to do bulk uploads of data.

When James pulled up the code, he saw every horror he expected from Perl and VBA. When he saw the database, it got even worse. The data itself had a number of problems, the first one being that Timmy never set up a test environment, and instead, tested in production. And didn’t clean up the test records. Even worse, though, the VBA macro tried to sanitize the inputs, and handle escaping characters like the single quote, but it did it wrong, leading to records like:

OReilly Kevin
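The failure mode is classic: "sanitizing" by stripping quote characters mangles legitimate names, while parameterized queries store them intact. A sketch of the contrast (in Python with sqlite3 for illustration; the actual macro was VBA talking to MySQL):

```python
# Contrast between quote-stripping "sanitization", which mangles
# legitimate names, and driver-level parameter binding, which doesn't.
# sqlite3 stands in here for the MySQL connection the macro used.
import sqlite3

def naive_sanitize(value):
    # What the macro effectively did: drop the troublesome character.
    return value.replace("'", "")

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (last_name TEXT)")

# Wrong: the apostrophe is simply lost on the way into the database.
conn.execute("INSERT INTO t VALUES ('%s')" % naive_sanitize("O'Reilly"))

# Right: let the driver handle quoting via a placeholder.
conn.execute("INSERT INTO t VALUES (?)", ("O'Reilly",))

print([row[0] for row in conn.execute("SELECT last_name FROM t ORDER BY rowid")])
# ['OReilly', "O'Reilly"]
```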

As you might imagine, the database only had one table, and it was this code that really got James’s attention.

  `masterorder` varchar(16) default NULL,
  `ordering_store_number` varchar(10) default NULL,
  `order_date` varchar(10) default NULL,
  `last_name` varchar(20) default NULL,
  `first_name` varchar(10) default NULL,
  `middle_initial` varchar(1) default NULL,
  `company_name` varchar(32) default NULL,
  `address1` varchar(32) default NULL,
  `address2` varchar(32) default NULL,
  `city` varchar(20) default NULL,
  `state_province` varchar(2) default NULL,
  `postal_code` varchar(9) default NULL,
  `country` varchar(15) default NULL,
  `phone` varchar(10) default NULL,
  `sequence` varchar(2) default NULL,
  `sku` varchar(10) default NULL,
  `card_value` varchar(11) default NULL,
  `shipping_method` varchar(3) default NULL,
  `insert_id` varchar(5) default NULL,
  `customer_number` varchar(70) default NULL,
  `last_4_cus_number` varchar(70) default NULL,
  `card_value2` varchar(70) default NULL,
  `prepared_by` varchar(70) default NULL,
  `witnessed_by` varchar(70) default NULL,
  `card_number` varchar(19) default NULL,
  `shipping_date` varchar(10) default NULL,
  `invoiced` text,
  `ship_method_code` varchar(3) default NULL,
  `valuation_date` varchar(10) default NULL,
  `comments` text,
  `num_of_days_from_ship_to_valuation_date` text,
  `ship_date` varchar(20) default NULL,
  `activation_date` varchar(20) default NULL,
  `id` int(10) unsigned NOT NULL auto_increment,
  PRIMARY KEY  (`id`)
) ;

It’s not just the VARCHAR’s everywhere. It’s things like card_value2 and card_value, which both hold the same data, but have wildly differing lengths. Date fields might be 10 or 20 characters long, the num_of_days_from_ship_to_valuation_date is a text type, but only holds, well… a number, and usually one less than 15. The field invoiced, also text, only holds “True” or “False” (or “Yes”, “y”, “Y”, “N”, “???”, NULL).
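Cleaning up a free-text boolean like invoiced means normalizing all those variants; a hypothetical sketch of what that migration step might look like:

```python
# Hypothetical cleanup for a free-text boolean column like `invoiced`,
# which holds "True"/"False" alongside "Yes", "y", "Y", "N", "???", and
# NULL. Maps the observed variants to True/False, or None when unknown.
TRUTHY = {"true", "yes", "y"}
FALSY = {"false", "no", "n"}

def normalize_invoiced(raw):
    """Return True, False, or None for a raw `invoiced` value."""
    if raw is None:
        return None
    value = raw.strip().lower()
    if value in TRUTHY:
        return True
    if value in FALSY:
        return False
    return None  # e.g. "???" or leftover test-record garbage

print([normalize_invoiced(v) for v in ["True", "y", "N", "???", None]])
# [True, True, False, None, None]
```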

But the real special absurdity, the real line that made James scratch his head and ask WTF, was this one:

    `last_4_cus_number` varchar(70) default NULL

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianNorbert Preining: Jonas Jonasson – The Girl Who Saved the King of Sweden

Just finished my first book by Jonas Jonasson, a Swedish journalist and author. He is most famous for his book The Hundred-Year-Old Man Who Climbed Out the Window and Disappeared, but has written two others. The one I read was The Girl Who Saved the King of Sweden, which, strangely enough, became Die Analphabetin die rechnen konnte (The Illiterate Woman Who Could Count) in German.

Jonas Jonasson - Die Analphabetin, die rechnen konnte

The story recounts the countless turns in the life of Nombeko Mayeki, a black girl born in Soweto as a latrine cleaner, who manages to save the Swedish king, as well as most of the world, from an atomic disaster: first she is run over by a drunk South African nuclear-bomb engineer, then she meets a clique of three Chinese sisters excelling in faking antiquities, and two Mossad agents. With the (unwilling) help of those agents she escapes to Sweden (atomic bomb included), where she meets the twin sons of a psychotic father who brought them up as one child so that the spare one could eradicate the Swedish monarchy. After many twists and setbacks, including several meetings with the Chinese premier Hu Jintao, she finally manages to get rid of the atomic bomb, get her “undercover” twin a real identity, and set up a proper life – ah, and not to forget, save the King of Sweden!

A fast paced, surprisingly funny and lovely story about how little things can change our lives completely.


Planet DebianSteinar H. Gunderson: Cisco WLC SNMP password reset

If you have a Cisco wireless controller whose admin password you don't know, and you don't have the right serial cable, you can still reset it over SNMP if you forgot to disable the default read/write community:

snmpset -Os -v 2c -c private s foobarbaz

Thought you'd like to know. :-P

(There are other SNMP-based variants out there that rely on the CISCO-CONFIG-COPY-MIB, but older versions of the WLC software don't support it.)

CryptogramGoogle's Post-Quantum Cryptography

News has been bubbling about an announcement by Google that it's starting to experiment with public-key cryptography that's resistant to cryptanalysis by a quantum computer. Specifically, it's experimenting with the New Hope algorithm.

It's certainly interesting that Google is thinking about this, and probably okay that it's available in the Canary version of Chrome, but this algorithm is by no means ready for operational use. Secure public-key algorithms are very hard to create, and this one has not had nearly enough analysis to be trusted. Lattice-based public-key cryptosystems such as New Hope are particularly subtle -- and we cryptographers are still learning a lot about how they can be broken.

Targets are important in cryptography, and Google has turned New Hope into a good one. Consider this an opportunity to advance our cryptographic knowledge, not an offer of a more-secure encryption option. And this is the right time for this area of research, before quantum computers make discrete-logarithm and factoring algorithms obsolete.

Cory DoctorowMy interview on Utah Public Radio’s “Access Utah”

Science fiction novelist, blogger and technology activist Cory Doctorow joins us for Tuesday’s AU. In a recent column, Doctorow says that “all the data collected in giant databases today will breach someday, and when it does, it will ruin peoples’ lives. They will have their houses stolen from under them by identity thieves who forge their deeds (this is already happening); they will end up with criminal records because identity thieves will use their personal information to commit crimes (this is already happening); … they will have their devices compromised using passwords and personal data that leaked from old accounts, and the hackers will spy on them through their baby monitors, cars, set-top boxes, and medical implants (this is already happening)…” We’ll talk with Cory Doctorow about technology, privacy, and intellectual property.

Cory Doctorow is the co-editor of popular weblog Boing Boing and a contributor to The Guardian, Publishers Weekly, Wired, and many other newspapers, magazines and websites. He is a special consultant to the Electronic Frontier Foundation, a non-profit civil liberties group that defends freedom in technology law, policy, standards and treaties. Doctorow is also an award-winning author of numerous novels, including “Little Brother,” “Homeland,” and “In Real Life.”


Planet DebianOlivier Grégoire: Seventh week: create a new architecture and send a lot of information

At the beginning of this week, I thought my architecture needed to be closer to the call. The goal was to create one class per call to save the different pieces of information. With this method, I would only need to call the instance I am interested in and could easily pull the information.
So, I began to rewrite my architecture in the daemon to create an instance of my class linked directly to the callID.
After implementing it, I was really disappointed. This class was hard to call from the upper software layers. Indeed, I didn't know which call was currently displayed on my client.
I changed my mind and rewrote the architecture again. I looked at the information I want to pull (frame rate, bandwidth, resolution…) and realized it is all generated in my daemon, and updated every time something changes in the client. So, I just need to catch it and send it to the upper software layers. My new architecture is simply a singleton, because I need just one instance and I need to reach it from everywhere in my program.
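The singleton idea can be sketched as follows (in Python for brevity; the daemon itself is written in C++, and the class and field names here are invented, not the actual daemon's API):

```python
# Illustrative sketch of the singleton approach: one global holder for
# the latest media stats, updated wherever the values are generated and
# readable from anywhere in the program. Names are hypothetical.
class CallStats:
    _instance = None

    @classmethod
    def instance(cls):
        """Return the single shared instance, creating it on first use."""
        if cls._instance is None:
            cls._instance = cls()
        return cls._instance

    def __init__(self):
        self.data = {}  # e.g. {"local_fps": 30, "remote_codec": "H264"}

    def update(self, **kv):
        """Called by the daemon side whenever something changes."""
        self.data.update(kv)

# Anywhere in the upper layers:
CallStats.instance().update(local_fps=30, remote_codec="H264")
print(CallStats.instance().data["remote_codec"])  # H264
```

The trade-off is the usual one for singletons: trivially reachable from everywhere, at the cost of global state, which is acceptable here precisely because only one set of current-call stats exists.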

Beyond that, I wanted to pull some information about the video (frame rate, resolution, and codec for the local and remote computers). So, I dug in to understand how frame generation works. Now I can pull:
-Local and remote video codec
-Local and remote frame rate
-Remote audio codec
-Remote resolution


Next week, I will begin working on creating an API in the client library. After that, I will continue to retrieve other information.

Worse Than FailureA Song of API And Fire


Emily didn't expect much excitement at her day job. She worked for a health insurance company, so most of her projects were pretty routine enterprise-level things: hooking up the accounting software to the billing software, managing mailing lists, the usual stuff. When she was given a minor role on a large project, she never dreamed it would be any different than the usual fare. She was unprepared for what she received: Project Aegon.

Insurance companies reach out to people a lot: direct mail advertisements, mail to their subscribers, telemarketing phone calls, and the like. Before Project Aegon, each of the contact lists was housed in a different little kingdom. Subscriber information in the North, direct-marketing addresses in the Riverlands, and so on. Project Aegon was meant to unify these all into a single central repository, and that meant conquering several different application datastores and mastering them all in one location, establishing a new primary source of intel in King's Landing.

Emily's part in this large debacle was Dorne, the email provider: think Mailchimp, but more enterprise. Dorne was an important target for the migration, as it controlled most of the company's outgoing email. However, it was a difficult target to attack strategically, as it used guerilla warfare in the form of a terrible API to protect its information. The API used XML, but it wasn't SOAP, preventing Emily from using a simple library to interface with it. It was far from REST as well; there was no rhyme or reason to the endpoint design, as it had grown "organically" over the years.

For a time, Emily thought she was making headway when she discovered the existence of an API for querying the SQL directly. Surely that would be an easier method of obtaining up-to-date subscriber information? Then she saw the example query. She didn't make it any further than the following before bailing:


Bound and determined to conquer Dorne, Emily finally found some luck: the batch API. She signed up for a developer key, providing her company email address, and plugged away at the thing until she got it to do what she wanted. The files it generated were massive XML files, but she wrote a shim that received them via SFTP once a day and provided a REST API to query against the data so the rest of the Aegon development team could interface with it. Satisfied, she washed her hands of the seven kingdoms and moved on to Project Braavos.
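The shim's core job, parsing a bulk XML export into an index that a REST endpoint can query, might look something like this (a hypothetical sketch; the element names and fields are invented, since the post doesn't show Dorne's actual export format):

```python
# Hypothetical sketch of the shim described: parse a daily bulk XML
# export into an in-memory index, then answer REST-style lookups.
# The <subscriber>/<email>/<status> structure is invented for
# illustration; the real export format is not shown in the story.
import xml.etree.ElementTree as ET

def index_subscribers(xml_text):
    """Build an email -> subscriber-attributes index from a bulk export."""
    root = ET.fromstring(xml_text)
    index = {}
    for sub in root.iter("subscriber"):
        email = sub.findtext("email")
        if email:
            index[email.lower()] = {child.tag: child.text for child in sub}
    return index

def lookup(index, email):
    """What a GET /subscribers/<email> endpoint would return (or None)."""
    return index.get(email.lower())

sample = """<export>
  <subscriber><email>a@example.com</email><status>active</status></subscriber>
  <subscriber><email>b@example.com</email><status>lapsed</status></subscriber>
</export>"""

idx = index_subscribers(sample)
print(lookup(idx, "A@example.com")["status"])  # active (lookups are case-insensitive)
```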

Years later, while working on Project Ibben, she received a furious email: Dorne's reports were clogging up the system, hogging resources and grinding business functions that shared the server to a halt. Confused as to why she was even being emailed, Emily took a look. Sure enough, the daily process had been run 6,000 times in the past 4 days, with 4,000 more pending in the queue.

A little digging revealed how her name got attached to the project: the current devs were still using her personal API key in the test region. Furthermore, they'd been having a problem, so with the test region still hooked up to the production Dorne server, they had bumped up the poll frequency to every 30 seconds to look for a sporadic issue. Emily was forced to manually cancel and delete all 10,000 batch requests to clear the queue. Once finished, she pulled her API key for good measure. It was time to play a little game with the current devs ...


Planet Linux AustraliaStewart Smith: Using Smatch static analysis on OpenPOWER OPAL firmware

For Skiboot, I’m always looking at new automated systems to find bugs in the code. A little while ago, I read about the Smatch tool developed by some folks at Oracle (they also wrote about using it on the Linux kernel).

I was eager to try it with skiboot to see if it could find anything.

Luckily, it was pretty easy. I built Smatch according to their documentation and then built skiboot:

make CHECK="/home/stewart/smatch/smatch" C=1 -j20 all check

Due to some differences in how we implement abort() and assert() in skiboot, I added “_abort”, “abort” and “assert_fail” to smatch_data/no_return_funcs in the Smatch source tree to silence some false positives.

It seems that there are a few useful warnings there (some of which I’ve fixed in skiboot master already), along with some false positives around the preprocessor/compiler tricks we do to ensure at compile time that an OPAL call definition has the correct number of arguments specified.

So far, so good though. Try it on your project!

Sky CroeserPost-Arab Spring Tunisia: women’s political participation and political decentralisation

Admira Dini Salim, International Foundation for Electoral Systems
Increasing Women’s Political Participation in Indonesia
Indonesia’s performance on gender equality is lagging. The UNDP Gender Equality Index ranks it 108 out of 187 countries. Civil society continues to fight for change. A lower proportion of women than men are registered to vote, but turnout is better among women. Despite this, only 23% of voters voted for female candidates in the 2014 elections. Unfortunately, a survey in 2013 showed that if male and female candidates were equally qualified, voters would prefer a male candidate (including female voters). In 2014, 17% of seats in the national parliament were held by women.

The quota system is evolving: at first, 30% of candidates needed to be women (but with no sanctions); after the 2009 election, the election commission added a clause saying that the electoral authorities would announce all parties that didn’t meet the quota. In 2014, working through civil society, the election commission imposed a 1-in-3 gender quota.

In the 2014 regional head elections (covering 264 regions), only 7.5% of candidates were women, and only 8.5% of those female candidates won positions as regional heads or vice-heads.

IFES is working on supporting women to work as election commissioners and in other official positions. The law mandates that 30% of election administrators should be women.

The challenges for women’s political participation are both regulatory and non-regulatory. Regulatory challenges include the lack of enforcement of the quota system, political parties lack of promotion of women as candidates or leaders, discriminatory legislation at the regional level in some areas (for example, in Aceh and other places, there are local regulations that impose curfews on women being out of the house in the evenings, which limits their ability to go to political meetings), and the high costs of elections limit women’s participation. Non-regulatory barriers include social and cultural roles and other factors.

IFES has several programs to improve gender representation, including the Women’s Electoral Leadership Program, She Leads, and the Training of Female Legislators program. These tie in with movements led by local civil society organisations. IFES is thinking about the full election cycles: it’s not just about election day, but about all the stages at which women might be better included.

A number of regulatory changes could improve women’s participation, including making the 30% quota obligatory and backing it with a strong sanction; offering a subsidy as an incentive for parties to comply with gender quotas; maintaining the open-list proportional system to minimize the control of a small political elite in allocating seats in parliament; requiring that female candidates make up 30% of candidates in party lists; and placing women candidates at the top of candidate lists for national, provincial, and regency elections. Civil society is playing an important role in developing and supporting legislation that supports women’s participation in the political system.

Najla Abbes, League of Tunisian Women Voters
Women’s Participation in Political and Public Life: Gains and Challenges

Abbes began by noting that both women and men took to the streets during the revolution. Since 2011, women have been taking part in all levels of elections. However, speaking from her own experience, she notes that the visibility of efforts for women’s rights wasn’t always high, and she began by worrying that women weren’t ready for political participation. But Abbes notes that both men and women were excluded from participation in the democratic process, so everyone will be learning together.

The ‘zipper system’, outlined in Article 16 of the Constitution, requires alternation between men and women on the lists. But at first, only 7% of the top positions on lists were held by women. Only one party implemented both horizontal and vertical parity, and it was seen as ‘too modern’.

Parity is a great gain, but there’s been an ebb and flow. Abbes notes that Tunisian women get told, “Tunisia is far ahead of the rest of the Arab world, so you should be happy as things are”. But that’s not enough: the requirement of parity is in the Constitution, and it’s important to keep working towards it. Civil society needs to keep working to preserve and extend women’s rights. Part of this work has been pushing for both horizontal and vertical parity to be imposed, and for parties to face sanctions if their lists don’t support parity.

The League of Tunisian Women Voters has been working to support women candidates, including preparing them to participate effectively when elected. They’re also concerned that when women are elected, they’re representing their parties, rather than a ‘women’s agenda’.


Dina Afrianty, Australian Catholic University
Indonesia’s Democracy: Political Decentralisation and Local Women’s Movement
Afrianty’s research suggests that decentralisation has been seen by religious conservatives in Indonesia as an opportunity to return to an Islamic vision of politics. Initial attempts by Islamic political parties to gain power were not successful. After this, many conservative Muslims started to push for conservative interpretations of Islamic law to be incorporated at the local level.

Aceh is currently the only region that is governed by shariah law, with a number of laws brought in at the local level in 2009. These laws have been seen by much of civil society as discriminatory. After the tsunami, when international humanitarian organisations began working in Aceh, more space opened for civil society to voice their opposition. Many organisations from Aceh have pointed to a long history of women’s involvement in leadership in Aceh, including centuries ago when it was a Muslim kingdom, and are engaging in doctrinal debate to offer alternative visions of Islamic law.

Getting more women into power doesn’t necessarily lead to progress. There are several notable examples in Indonesia of women coming into power on platforms that are quite regressive.

Planet DebianNorbert Preining: Michael Köhlmeier: Zwei Herren am Strand

This recent book by the Austrian author Michael Köhlmeier, Zwei Herren am Strand (Hanser Verlag), spins a story about an imagined friendship between Charlie Chaplin and Winston Churchill. While there might be no two people more different than these, in the book they are connected by a common fight – the fight against their own depression, explicitly, as well as implicitly by fighting Nazi Germany.

Zwei Herren am Strand_ Roman - Michael Koehlmeier

Michael Köhlmeier’s recently released book Zwei Herren am Strand tells the fictive story of Charlie Chaplin and Winston Churchill meeting and becoming friends, helping each other fight depression and suicidal thoughts. Based on a bunch of (fictive) letters of a (fictive) private secretary of Churchill’s, as well as a (fictive) book on Chaplin, the first-person narrator dives into the interesting time from the mid-twenties to about the Second World War.

Chaplin is having a hard time after the divorce from his wife Rita, paired with the difficulties in the production of The Circus, and is contemplating suicide. He conveys this to Churchill during a walk on the beach. Churchill is reminded of his own depressions, which he has suffered from an early age. The two of them agree to make a pact to fight the “Black Dog” inside.

Later Churchill asks Chaplin about his method of overcoming phases of depression, and Chaplin explains to him the “Method of the Clown”: put a huge sheet of paper on the floor, lie face down on the paper, and start writing a letter to yourself while rotating clockwise, creating an inward spiral.

According to Chaplin, he took this method from Buster Keaton and Harold Lloyd (hard to verify), and it works by making oneself ridiculous, so that one part of oneself can laugh at the other.

The story continues into the early stages of the world war, with both men fighting Hitler, one politically, one through comedy. It finishes somewhere in the middle, when the two meet while Chaplin, in a deep depression, is cutting his movie The Great Dictator, and together they manage once more to overcome the “black dog”.

The book is pure fiction, and Köhlmeier indulges in expansive storytelling, jumping back and forth between several narrative strands. It is an entertaining and very enjoyable book if you are the type of reader who enjoys storytelling. For me this book is in the best tradition of Michael Köhlmeier, whom I consider an excellent storyteller. I loved his (unfinished trilogy of) books on Greek mythology (Telemach and Calypso), but found that after those books he got lost too much in radio storytelling programs. While good in themselves, I preferred his novels. Thus, I have to admit that I had forgotten about Köhlmeier for some years, until recently I found this little book, which reminded me of him and his excellent stories.

A book that is, if you are versed in German, well worth enjoying, especially if one likes funny and slightly odd stories.

Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.4, and new JSS paper

A new release 0.4.4 of RProtoBuf is now on CRAN, and corresponds to the source archive for the Journal of Statistical Software paper about RProtoBuf as JSS vol71 issue 02. The paper is also included as a pre-print in the updated package.

RProtoBuf provides R bindings for the Google Protocol Buffers ("Protobuf") data encoding library used and released by Google, and deployed as a language and operating-system agnostic protocol by numerous projects.

This release brings small cleanups as well as build-system updates for the updated R 3.3.0 infrastructure based on g++ 4.9.*.

Changes in RProtoBuf version 0.4.4 (2016-07-10)

  • New vignette based on our brand-new JSS publication (v71 i02)

  • Some documentation enhancements were made, as well as other minor cleanups to file modes and operations

  • Unit-test vignette no longer writes to /tmp per CRAN request

  • The new Windows toolchain (based on g++ 4.9.*) is supported

CRANberries also provides a diff to the previous release. The RProtoBuf page has an older package vignette, a 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Krebs on SecuritySerial Swatter, Stalker and Doxer Mir Islam Gets Just 1 Year in Jail

Mir Islam, a 21-year-old Brooklyn man who pleaded guilty to an impressive array of cybercrimes including cyberstalking, “doxing” and “swatting” celebrities and public officials (as well as this author), was sentenced in federal court today to two years in prison. Unfortunately, thanks to time served in this and other cases, Islam will only see a year of jail time in connection with some fairly heinous assaults that are becoming all too common.

While Islam’s sentence fell well short of the government’s request for punishment, the case raises novel legal issues as to how federal investigators intend to prosecute ongoing cases involving swatting — an extremely dangerous prank in which police are tricked into responding with deadly force to a phony hostage crisis or bomb scare at a residence or business.


Mir Islam, at his sentencing hearing today. Sketches copyright by Hennessy / Yours Truly is pictured in the blue shirt behind Islam.

On March 14, 2013, Islam and a group of as-yet-unnamed co-conspirators used a teletypewriter (TTY) relay service for the deaf to relay a message to our local police department stating that there was an active hostage situation going on at our modest town home in Annandale, Va. Nearly a dozen heavily-armed officers responded to the call, forcing me out of my home at gunpoint and putting me in handcuffs before the officer in charge realized it was all a hoax.

At the time, Islam and his pals were operating a Web site called Exposed[dot]su, which sought to “dox” public officials and celebrities by listing the name, birthday, address, previous address, phone number and Social Security number of at least 50 public figures and celebrities, including First Lady Michelle Obama, then-FBI director Robert Mueller, and then-Central Intelligence Agency Director John Brennan. The site also documented which of these celebrities and public figures had been swatted, including a raft of California celebrities and public figures, such as former California Governor Arnold Schwarzenegger, actor Ashton Kutcher, and performer Jay Z.

Exposed[dot]su was built with the help of identity information obtained and/or stolen from ssndob[dot]ru.


At the time, most media outlets covering the sheer scope of celebrity exposure at Exposed[dot]su focused on the apparently startling revelation that “if they can get this sensitive information on these people, they can get it on anyone.” But for my part, I was more interested in how they were obtaining this data in the first place.

On March 13, 2013, KrebsOnSecurity featured a story, Credit Reports Sold for Cheap in the Underweb, which sought to explain how the proprietors of Exposed[dot]su had obtained the records for the public officials and celebrities from a Russian online identity theft service called ssndob[dot]ru.

I noted in that story that sources close to the investigation said the assailants were using data gleaned from the ssndob[dot]ru ID theft service to gather enough information so that they could pull credit reports on targets directly from a site mandated by Congress to provide consumers a free copy of their credit report annually from each of the three major credit bureaus.

Peeved that I’d outed his methods for doxing public officials, Islam helped orchestrate my swatting the very next day. Within the span of 45 minutes, my site came under a sustained denial-of-service attack which briefly knocked it offline.

At the same time, my hosting provider received a phony letter from the FBI stating my site was hosting illegal content and needed to be taken offline. And, then there was the swatting which occurred minutes after that phony communique was sent.

All told, the government alleges that Islam swatted at least 19 other people, although only seven of the victims (or their representatives) showed up in court today to tell similarly harrowing stories (I was asked to but did not testify).


Security camera footage of Fairfax County police officers responding to my 2013 swatting incident.

Going into today’s sentencing hearing, the court advised that under the government’s sentencing guidelines Islam was facing between 37 and 46 months in prison for the crimes to which he’d pleaded guilty. But U.S. District Court Judge Randolph Moss seemed especially curious about the government’s rationale for charging Islam with conspiracy to transmit a threat to kidnap or harm using a deadly weapon.

Judge Moss said the claim raises a somewhat novel legal question: Can the government allege the use of deadly force when the perpetrator of a swatting incident did not actually possess a weapon?

Corbin Weiss, an assistant US attorney and a cybercrime coordinator with the U.S. Department of Justice, argued that in most of the swatting attacks Islam perpetrated he expressed to emergency responders that any responding officers would be shot or blown up. Thus, the government argued, Islam was using police officers as a proxy for assault with a deadly weapon by ensuring that responding officers would be primed to expect a suspect who was armed and openly hostile to police.

Islam’s lawyer argued that his client suffered from multiple psychological disorders, and that he and his co-conspirators orchestrated the swattings and the creation of exposed[dot]su out of a sense of “anarchic libertarianism,” bent on exposing government overreach on consumer privacy and use of force issues.

As if to illustrate his point, a swatting victim identified by the court only as Victim #4 was represented by Fairfax, Va. lawyer Mark Dycio. That particular victim did not wish to be named or show up in court, but follow-up interviews confirmed that Dycio was representing Wayne LaPierre, the executive vice president of the National Rifle Association.

According to Dycio, police responded to reports of a hostage situation at the NRA boss’s home just days after my swatting in March 2013. Impersonating LaPierre, Islam told police he had killed his wife and that he would shoot any officers responding to the scene. Dycio said police initially had difficulty identifying the object in LaPierre’s hand when he answered the door. It turned out to be a cell phone, but Dycio said police assumed it was a weapon and stripped it from his hands when entering his residence.

Another victim that spoke at today’s hearing was Stephen P. Heymann, an assistant U.S. attorney in Boston. Heymann was swatted because he helped prosecute the much-maligned case against the late Aaron Swartz, a computer programmer who committed suicide after the government by most estimations overstepped its bounds by charging him with hacking for figuring out an automated way to download academic journals from the Massachusetts Institute of Technology (MIT).

Heymann, whose disability requires him to walk with a cane, recounted the early morning hours of April 1, 2013, when police officers surrounded his home in response to a swatting attack launched by Islam on his residence. Heymann recalled worrying that officers responding to the phony claim might confuse his cane with a deadly weapon.

One of the victims represented by a proxy witness in today’s hearings was the wife of a SWAT team member in Arizona who recounted several tense hours hunkered down at the University of Arizona, while her husband joined a group of heavily-armed police officers who were responding to a phony threat about a shooter on the campus.

Not everyone had nightmare swatting stories that aligned neatly with Islam’s claims. A woman representing an anonymous “Victim #3” appeared in lieu of a cheerleader at the University of Arizona whom Islam admitted to cyberstalking for several months. When the victim stopped responding to Islam’s overtures, he phoned in a threat to the local police there that a crazed gunman was on the loose at the University of Arizona campus.

According to Robert Sommerfeld, police commander for the University of Arizona, that 2013 swatting incident involved 54 responding officers, all of whom were prevented from responding to a real emergency as they moved from building to building and room to room at the university, searching for a fictitious assailant. Sommerfeld estimates that Islam’s stunt cost local responders almost $40,000, and virtually brought the business district surrounding the university to a standstill for the better part of the day.

Toward the end of today’s sentencing hearing, Islam — bearded, dressed in a blue jumpsuit and admittedly 75 pounds lighter than at the time of his arrest — addressed the court. Those in attendance who were hoping for an apology or some show of remorse from the accused were left wanting as the defendant proceeded to blame his crimes on multiple psychological disorders which he claimed were not being adequately addressed by the U.S. prison system. Not once did Islam offer an apology to his victims, nor did he express remorse for his actions.

“I didn’t expect to go as far as I did, but because of these disorders I felt I was invincible,” Islam told the court. “The mistakes I made before, I have to pay for that. I understand that.”

Sentences that noticeably depart from the government’s sentencing guidelines are grounds for appeal by the defendant, and Judge Moss today seemed reluctant to imprison Islam for the maximum 46 months allowed under the criminal statutes Islam had admitted to violating. Judge Moss also seemed to ignore the fact that Islam expressed exactly zero remorse for his crimes.

Central to the judge’s reluctance to sentence Islam to the statutory maximum penalty was Islam’s 2012 arrest in connection with a separate cybercrime sting orchestrated by the FBI called Operation Card Shop, in which federal agents created a fake cybercrime forum dedicated to credit card fraud called CarderProfit[dot]biz.

U.S. law enforcement officials in Washington, D.C. involved in prosecuting Islam for his swatting, doxing and stalking crimes were confident that Islam would be sentenced to at least two years in prison for trying to sell and buy stolen credit cards from federal agents in the New York case, thanks to a law that imposes a mandatory two-year sentence for crimes involving what the government terms as “aggravated identity theft.”

Much to the government’s chagrin, however, the New York judge in that case sentenced Islam to just one day in jail. But by his own admission, even while Islam was cooperating with federal prosecutors in New York he was busy orchestrating his swatting attacks and administering the Exposed[dot]su Web site.

Islam was re-arrested in September 2013 for violating the terms of his parole, and for the swatting and doxing attacks to which he pleaded guilty. But the government didn’t detain Islam in connection with those crimes until July 2015. Islam has been in federal detention since then, and Judge Moss seemed eager to ensure that this would count as time served against Islam’s sentence, meaning that Islam will serve just 12 months of his 24-month sentence before being released.

There is absolutely no question that we need to have a serious, national conversation about excessive use of force by police officers, as well as the over-militarization of local police forces nationwide.

However, no one should be excused for perpetrating these potentially deadly swatting hoaxes, regardless of the rationale. Judge Moss, in explaining his brief deliberation on arriving at Islam’s two-year (attenuated) sentence, said he hoped to send a message to others who would endeavor to engage in swatting attacks. In my estimation, today’s sentence sent the wrong message, and missed that mark by a mile.

Sky CroeserPost-Arab Spring Tunisia: local government in Indonesia, a start-up democracy, and the youth

Greg Barton, Alfred Deakin Institute
Indonesian democratic transition: an examination of the vital elements
Barton argues that we can now call Tunisia a successful democratic transition, as elections have been held with limited violence and instability. There are important parallels with Indonesia, which is democratic (although not without its problems), well-educated and literate, well-connected, globalised, and with a demographic youth bulge. Both countries also have a significant Muslim population, and Islamic movements have made important contributions to civil society.

There’s a tendency, particularly in the West, to overlook religious participation in civil society. In Indonesia, progressive Muslim movements played a key role not only in the resistance to colonialism and formation of alternative institutions, but also in developing opposition to Suharto. Progressive Islamic thought was supported in Reformasi throughout the 1970s and 1980s. Islam became a positive factor: progressive Islamic thought lays a foundation for democratic thinking and social activism.

Civil society, including religious movements, will continue to play a role in Indonesia’s democracy. One key area here is local elections: these are, in many ways, the most relevant to voters, as they are seen to have the greatest impact on their lives. However, challenges continue in this area. Among other issues, only 7% of candidates in the last Indonesian local elections were women.

Innes Ben Youssef, Free Patriots
Tunisian Revolution: a story of start-up democracy
Tunisia is seen as the only success story of the Arab Spring, and there have been many advances, including the successful implementation of a technocratic government to guide the process of forming the constitution.

To ensure democratic transition, it is important to shine a light on decentralisation and local democracy, and to focus on the significant role of civil society, especially women and youth. In order to do that, we need to re-evaluate the role of the state, strengthen local governments, improve the capacities of municipalities, and improve citizens’ participation in local decision-making.

Ghazoua Ltaief, Sawty
Promoting the Inclusion of Youth in Democratic Transitions


Tunisia is a success, a glimmer of hope as it undergoes a continuous transition process, but it still faces challenges. The constitution has set a new path for Tunisia, and the shift to more decentralised government is key to that. There are still many challenges for youth involvement, and many youth feel disappointed in, and disconnected from, politicians.

Sawty is an important part of the democratic process, working in the regions as well as in Tunis. One of their programs, “Raise Your Voice”, is aimed at increasing youth participation in local government. Through this program, Sawty is working with youth to articulate their problems, and connecting youth with politicians and other decision-makers to try to develop solutions.

Sawty is also working with broader networks, including the NET-MED Youth network, a project helping countries in the Mediterranean to develop youth policies in consultation with young people. Bus Citoyen, another Sawty program, is a bus that travels around the regions of Tunisia working on voter education, including why, and how, to vote.

Ltaief says that despite the challenges, they are still optimistic (because they need to be).


Google AdsenseWhat could be causing your CTR/RPM to be lower on mobile?

Mobile is the fastest growing platform and it’s important for us to ensure that our sites are set up for long-term success. Recently, we’ve heard questions and concerns about lower than expected click through rate (CTR) and revenue per thousand impressions (RPM) for mobile inventory and we’d like to share some insights as to what may cause this.

To understand what’s happening you need to look at your performance reports. Analyzing your AdSense performance allows you to see how well your ads are performing and which devices your ad units were viewed on.

To view this report simply log into your AdSense account, click “Performance reports” and select “Platforms” from the “Report type” dropdown.

You may notice that your CTR or RPM is lower on mobile than it is on tablet and desktop. This might be caused by one or more of the following reasons:

  • Your site may be displaying suboptimal ad sizes
  • Your responsive design is using the column drop approach
  • You’re not focusing on optimizing for viewability

Here are three tips to help you identify the issues and take action to improve your mobile RPM.

If you’re not using AdSense to monetize your online content, be sure to sign up so you can start turning your passion into profit. 

1) Use high-impact mobile ad formats to ensure you have optimal ad sizes throughout your site. 

The 320x50 ad unit was the original mobile ad banner, but now there are a range of mobile ad sizes and formats to choose from. If you’re using 320x50 ads you should consider replacing them with 320x100 (just above the fold), 300x250 (below the fold), or a responsive ad unit. These ad sizes tend to generate higher RPMs than 320x50 ad units.

  • The 320x100 ad unit performs best on mobile screens and can be placed in numerous positions throughout your site. Research from Google shows the most viewable ad position is right above the fold. The 320x100 also increases the fill-rate competition because it allows the 320x50 format to compete for the same space.

  • The 300x250 ad unit is a popular ad size used by advertisers across the globe, resulting in a large ad supply, increased competition, and a potential increase in earnings. Research on viewability has shown that a 300x250 ad unit placed just below the fold has generated approximately a 50% viewability rate for other publishers. This ad unit could potentially help you maximize the impact of your ad space. 
  • Responsive ad units automatically adapt to your page layout and the space available for the ad unit across desktop, tablets, and smartphones. AdSense identifies the appropriate ad size and then determines the best size for the screen. 

2) Pay close attention to your ad placements, especially if you’re using a responsive design column drop layout.

Responsive websites are a great multi-screen strategy, but they present a challenge for high-impact desktop ad units, though one that can easily be avoided. Responsive websites commonly use a column drop approach to design, where the entire right-hand column on a desktop screen drops down to the bottom of the page when the website is viewed on a mobile screen.

This means that a strong performing ad unit on the right-hand column of your desktop site could become an underperforming unit below the fold on mobile devices.

If this is how your site is designed, you should consider alternative ad placements, for example: moving your right-column ad units above the fold on mobile devices.

3) Focus on counting viewable impressions with Active View in your AdSense account.

Checking the Active View percent of your ads will give you a good indication of your mobile vs. tablet and desktop ad viewability. A display ad is counted as viewable when at least 50% of the ad is within the viewable space on the user’s screen for one second or more.

If you find that your Active View percent is much lower on mobile than it is on desktop, this could mean that your ads are not visible to your users and alternative ad placements may need to be tested.

There you have it, three tips to increase your mobile RPM.

Monitor your AdSense performance across devices to understand and identify potential underperforming units. If you’re using a responsive web design, make sure your ads are viewable on all devices. Finally, use Active View to track the viewability of your ads.

We’d love to hear your thoughts in the comments below. Also, be sure to follow us on Google+ and Twitter, we offer tips, tricks, and downloads to help you make the most of your AdSense account. 

Until next time,

Posted by Paul Healy, from the AdSense team

CryptogramReport on the Vulnerabilities Equities Process

I have written before on the vulnerabilities equities process (VEP): the system by which the US government decides whether to disclose and fix a computer vulnerability or keep it secret and use it offensively. Ari Schwartz and Rob Knake, both former Directors for Cybersecurity Policy at the White House National Security Council, have written a report describing the process as we know it, with policy recommendations for improving it.

Basically, their recommendations are focused on improving the transparency, oversight, and accountability (three things I repeatedly recommend) of the process. In summary:

  • The President should issue an Executive Order mandating government-wide compliance with the VEP.
  • Make the general criteria used to decide whether or not to disclose a vulnerability public.
  • Clearly define the VEP.
  • Make sure any undisclosed vulnerabilities are reviewed periodically.
  • Ensure that the government has the right to disclose any vulnerabilities it purchases.
  • Transfer oversight of the VEP from the NSA to the DHS.
  • Issue an annual report on the VEP.
  • Expand Congressional oversight of the VEP.
  • Mandate oversight by other independent bodies inside the Executive Branch.
  • Expand funding for both offensive and defensive vulnerability research.

These all seem like good ideas to me. This is a complex issue, one I wrote about in Data and Goliath (pages 146-50), and one that's only going to get more important in the Internet of Things.

News article.

Sociological ImagesTrump Supporters Substantially More Racist Than Other Republicans

A set of polls by Reuters/Ipsos — the first done just before Cruz and Kasich dropped out of the primary race and the second sometime after — suggests that, when it comes to attitudes toward African Americans, Republicans who favored Cruz and (especially) Kasich have more in common with Clinton supporters than they do Trump supporters.

The first thing to notice is how overwhelmingly common it still is for Americans to believe that “black people in general” are less intelligent, ruder, lazier, and more violent and criminal than whites. Regardless of political affiliation or preferred candidate, at least one-in-five and sometimes more than one-in-three will say so.

But Trump supporters stand out. Clinton and Kasich’s supporters actually have quite similar views. Cruz’s supporters report somewhat more prejudiced views than Kasich’s. But Trump’s supporters are substantially more likely to have negative views of black compared to white people, exceeding the next most prejudiced group by ten percentage points or more in every category.

These differences are BIG. We wouldn’t be surprised to see strong attitudinal differences between Democrats and Republicans — partisanship drives a lot of polls — but for the difference between Democrats and Republicans overall to be smaller than the difference between Trump supporters and other Republicans is notable. It suggests that the Republican party really is divided and that Trump has carved out a space within it by cultivating a very specific appeal.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.


CryptogramIntellectual Property as National Security

Interesting research: Debora Halbert, "Intellectual property theft and national security: Agendas and assumptions":

Abstract: About a decade ago, intellectual property started getting systematically treated as a national security threat to the United States. The scope of the threat is broadly conceived to include hacking, trade secret theft, file sharing, and even foreign students enrolling in American universities. In each case, the national security of the United States is claimed to be at risk, not just its economic competitiveness. This article traces the U.S. government's efforts to establish and articulate intellectual property theft as a national security issue. It traces the discourse on intellectual property as a security threat and its place within the larger security dialogue of cyberwar and cybersecurity. It argues that the focus on the theft of intellectual property as a security issue helps justify enhanced surveillance and control over the Internet and its future development. Such a framing of intellectual property has consequences for how we understand information exchange on the Internet and for the future of U.S. diplomatic relations around the globe.

EDITED TO ADD (7/6): Preliminary version, no paywall.

Sky CroeserPost-Arab Spring Tunisia, session 2: local government and elections

I presented in the second session, so my notes are more limited here. Still, two very interesting papers from Belhassen Turki and Therese Pearce Laanela.

Belhassen Turki, Tunisian Local Governance Project
Local Democracy and Territorial Reform in Tunisia
Turki’s presentation drew on long experience in local government development to discuss the Tunisian Local Governance Project, supported by international and Tunisian institutions. He began with Tunisia’s history of decentralisation and the significant growth in municipalities since 1956, which were nevertheless kept weak before 2011: local government received funding equivalent to only 4% of GDP, and had limited autonomy.

The new Tunisian constitution makes specific provision, in Article 131, for local authorities’ role in governance. The Tunisian Local Governance Project aims to enhance the dialogue on decentralisation, provide supporting research, strengthen Tunisian local government and share lessons with other MENA countries.

Therese Pearce Laanela, Australian National University
Trusting Tunisian Elections
While there is a substantial body of knowledge about how to organise elections, there’s a need to ensure that people trust the results. Laanela’s research explores an important issue: what makes people trust public institutions? In order to answer this, it’s necessary to look beyond election day and understand the processes that underpin elections.

A huge amount of work has gone into all elections in Tunisia since 2010, and the country has some important strengths in this area:

  • A strong public service tradition (albeit with some upstream problems with decision-making),
  • A strong pool of talent due to the wide availability of education,
  • State of the art electoral practice (including the vertical and horizontal parity required of party lists, and the monitoring of political financing),
  • International support structures,
  • Strong social cohesion (in that most Tunisians are very proud of their country’s achievements, and want it to succeed), and
  • Elite buy-in.

Elections underpin the societal commitment to manage political change in a stable and inclusive way. A lack of electoral trust is therefore costly and potentially risky, but at the same time trust is elusive and messy, and elections take place during a time of agitation. Trust requires not only the ongoing delivery of particular services, but also a sense of common purpose, which requires taking that agitation seriously. There’s a need to understand, and be seen to understand, candidates’ and voters’ anxieties about the process.

In Tunisia, electoral authorities are competent and respectful, but they’re being let down by a failure of politicians to pass the necessary legislation in a timely way. This puts staff at the coalface of running elections at risk of being underprepared. Laanela argues that those involved in Tunisian civil society therefore need to be putting pressure on legislators right now to stay on task and pass vital legislation.

CryptogramAnonymization and the Law

Interesting paper: "Anonymization and Risk," by Ira S. Rubinstein and Woodrow Hartzog:

Abstract: Perfect anonymization of data sets has failed. But the process of protecting data subjects in shared information remains integral to privacy practice and policy. While the deidentification debate has been vigorous and productive, there is no clear direction for policy. As a result, the law has been slow to adapt a holistic approach to protecting data subjects when data sets are released to others. Currently, the law is focused on whether an individual can be identified within a given set. We argue that the better locus of data release policy is on the process of minimizing the risk of reidentification and sensitive attribute disclosure. Process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been "anonymized." It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk. By focusing on process, data release policy can better balance privacy and utility where nearly all data exchanges carry some risk.

Worse Than FailureCodeSOD: Hanging By a String

We all know that truth is a flexible thing, that strict binaries of true and false are not enough.

Dana’s co-worker knew this, and so that co-worker didn’t use any piddling boolean values, no enums. They could do one better.

Now, we’re missing a lot of the code, but the pieces Dana shared with us are enough to get the picture…

    public CustomerRecord fetchNextCustomer() {
        String yesNoString = String.valueOf(BusinessDAO.custFlagIsSet());

        if (yesNoString.equalsIgnoreCase("true")) yesNoString = "Y";

        //… and later in this same method …
        if (yesNoString.equalsIgnoreCase("Y")) {
            //set a flag on the customer
        }
        // …
    }

True, false, “Y”, “N”, it’s all the same thing, yes? But how does this code actually get used?

    public Vector<Records> getCustomers() {
        String a = String.valueOf(BusinessDAO.custFlagIsSet());
        if (a.equalsIgnoreCase("TRUE")) {
            while (true) {
                CustomerRecord aCustomer = fetchNextCustomer();

                if (null != aCustomer) {
                    // …
                } else {
                    // …
                }
            }
        } else {
            while (true) {
                CustomerRecord aCustomer = fetchNextCustomer();

                if (null != aCustomer) {
                    // …
                } else {
                    // …
                }
            }
        }
        return records;
    }

That’s an excellent use of if statements. They’re future-proofed: if they ever do need to branch on that flag, they’re already done with that.

But what about that custFlagIsSet code? What on Earth is that doing?

    public String custFlagIsSet() {
        BusinessConfig domainObject;
        try {
            domainObject = getDomainObject();
        } catch (Exception e) {
            // …
        }

        boolean isFlag = domainObject.custFlagIsSet();

        String isFlagString = String.valueOf(isFlag);

        return isFlagString;
    }

Obviously, what we’re seeing here is a low-level API (that domainObject) being adapted into a more abstract API. Where that low-level API only uses little booleans, our custFlagIsSet method makes sure it only ever exposes strings, a much more flexible and robust datatype. Now, we can see some of the history in this code: before custFlagIsSet modernized the underlying API, the other methods still needed to use String.valueOf, just in case a boolean accidentally came back.

If you say it carefully, it almost sounds like it makes sense. Almost.
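For contrast, here is a minimal sketch of the same logic with the flag kept as a boolean end to end. The class and method bodies are assumptions for illustration (the original BusinessDAO isn’t available), but the shape of the fix is what matters: no String.valueOf, no "true"/"Y" round-trips, just a direct branch.

```java
// Hypothetical reworking of the snippets above: the flag stays a boolean
// from the data layer to the branch. Names are stand-ins, not the real API.
public class CustFlagExample {

    // Stand-in for BusinessDAO.custFlagIsSet() from the original code.
    static boolean custFlagIsSet() {
        return true;
    }

    static String describeCustomer() {
        // Branch on the boolean directly; no string comparisons needed.
        return custFlagIsSet() ? "flagged" : "not flagged";
    }

    public static void main(String[] args) {
        System.out.println(describeCustomer());
    }
}
```

With the boolean preserved, the duplicated while-loops in getCustomers collapse into a single loop, and the “Y”/“TRUE” comparisons disappear entirely.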

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Sky CroeserPost-Arab Spring Tunisia: Women and Democratic Transitions


This panel drew on both Tunisian and Indonesian perspectives, with Admira Dini Salim and Dina Afrianty talking about the Indonesian experience, and Najla Abbes, Ghazoua Ltaief, and Ines Ben Youssef speaking about Tunisia. (More about these speakers in the program.)

We began with a general overview of the situation from the speakers.
Admira Dini Salim: Indonesia has a lot to learn from Tunisia, rather than vice versa. We need to look at all stages of the cycle, where women are included or excluded.
Najla Abbes: two key issues. Firstly, a fear of regression, as Tunisia has made a lot of progress and men are starting to complain about their need to ‘regain power’. Secondly, there’s a need to show that women deserve political participation and can participate effectively.
Ghazoua Ltaief: youth and women were at the front lines of the revolution, and then the first and second elections produced a huge gap between them and politicians. So now we’re trying to increase participation and make young people’s (including young women’s) voices heard, and not just on issues regarding women or youth.
Ines Ben Youssef: Women living in regional areas were excluded from political and civil life previously, so there’s a need to address this.
Dina Afrianty: 1999 – approval in the Parliament for Indonesian women to get equal political rights. However, there has been a push-back against women’s participation in political life and the workplace in recent years.
Question: what do you do with the gap left in managing the household when women take on other responsibilities? And what are men’s roles?
Najla Abbes: our observations are that women deputies involved in electoral representation were frequently blamed/scolded for ‘neglecting’ household responsibilities. We also see that men in parliament have difficulties in balancing life and work.
Ghazoua Ltaief: there’s an organisation called Tunisian Women which has done some research on women who are deputies in the national constituent assembly, and how that affects their lives. Many of these women did not get strong support from their husbands.
Admira Dini Salim: many women in Indonesia take on work that allows them time to take care of their families. Many women didn’t see political parties as safe spaces to work, including because of late-night meetings.
Dina Afrianty: Indonesia is a very large country which stretches across many islands, which makes things harder. For example, Indonesian girls in rural areas have had difficulty accessing education. Another issue is the lack of childcare. Conservatives claim that LGBT issues emerge because children aren’t being ‘properly’ cared for and educated.
Question: could you talk about alliances happening across classes, and between secular and religious feminists, happening in Indonesia and Tunisia?
Najla Abbes: we have a charter of shared beliefs, and we recruit and support anyone who is willing to adhere to that charter. The first provision is that women and men are unconditionally equal and deserve equal participation as citizens.
Ghazoua Ltaief: there are no alliances, unfortunately, between ‘modern feminists’ and Islamists, or conservative feminists. I have personally participated in many roundtables, focus groups, etc that bring together women from different backgrounds. There’s always some way to find ground for common communication, but going deeper there are always divisions. I’m not a feminist, because I work more on youth issues. But what I’ve been noticing is that there’s a deep division.
Najla Abbes: when we talk about alliances, I think about women working together too. When we started working in the field, we noticed that there were claims that we were too new, we didn’t know what was going on, and we lacked legitimacy. It took older organisations some time to realise that we all work together on the same cause, but now we are working together. When the old constitution draft talked about gender ‘complementarity’, we protested on the streets and there were over 100 associations involved in that. We come together around these threats to our lives.


Sky CroeserPost-Arab Spring Tunisia, session 3: decentralisation and intercultural practice

Saber Houchati, National Federation of Tunisian Cities
Constitutional Transition, Decentralisation and Local Electoral Processes in Tunisia
This presentation went into more detail about the process of decentralisation currently under way in Tunisia. The 7th chapter of the constitution sets out three levels of elected bodies: municipalities, regions, and districts. Subsidiarity is, Houchati noted again, a key principle of decentralisation. He framed this as based on service delivery: the authority that is near the citizen is responsible for managing services.

As work continues in Tunisia, there are efforts to build administrative and financial autonomy at the local government level. Part of this is that local governments now need to consider partnerships and foreign relations (which were previously controlled by a higher level of government). Challenges include low public support, a lack of instruments for participatory democracy, and limited resources. In Tunisia, local governments get only 4% of GDP in funding (compared to Morocco’s 11% and Turkey’s 20%). Another issue is that many people don’t want to work in the municipalities – they want to work in ministries and other areas.

There were a number of short term measures taken after the revolution to work on developing democracy, including the nomination of special delegations (which means that most services have continued to work throughout the period after 2011), and capacity building of local managers. Shifting to the medium term, there’s been the incorporation of participatory budgets, attention to gender issues, and other attempts to deal with inequality.

Now, there’s a need to work on decentralisation, broad participation, and promoting transparency and communication about the process of decentralisation. This needs to bring together those in government, international experts, and civil society. Civil society has played an important role in managing the dialogue around decentralisation.

Lynda Ford, iGen Foundation
The Role of Local Government in Leading Social, Technological and Entrepreneurial Innovation

Ford notes that Australia has a long multicultural history. Before European settlement, there were more than 160 language groups, and today a high proportion of people living here have at least one parent born overseas. Now, shifting towards an intercultural perspective is useful (in tandem with multiculturalism).

iGen’s Getting Down to Business is a statewide program for young entrepreneurs bringing together those working across a range of areas and backgrounds. Many young people in the program already employ others, including contracting others in supply chain management, the sharing economy, and employing people directly. iGen tries to ensure gender and cultural diversity within this group through specific recruitment strategies.

The program uses a hybrid model of coaching, webinars, mentoring, and networking. Organisers also consider broader ecosystem development (eg. coworking spaces, access to investment).

iGen also run a variety of other programs supporting intercultural connections. They ran the ‘techfugees’ hackathon in Melbourne. Since then, they’ve been working on an ideas incubator to develop projects from the hackathon and seek funding. They’ve also been developing an online magazine to help with practical program and service design and implementation around intercultural practice, and a desktop intercultural training resource to help people working in local government. An important part of this work involves connecting people in local government to international networks.

Planet Linux AustraliaPia Waugh: Pia, Thomas and little A’s Excellent Adventure – Week 3

The last fortnight has just flown past! We have been getting into the rhythm of being on holidays, a difficult task for yours truly as the workaholic I am! Meanwhile we have also caught a lot more fish (up to 57 now, 53 were released), have been keeping up with the studies and little A has been (mostly) enjoying a broad range of new foods and experiences. The book is on hold for another week or two while I finish another project off.

Photos are added every few days to the flickr album.


My studies are going well. The two (final) subjects are “Law, Governance and Policy” and “White Collar Crime”. They are both great subjects and I’ve been thoroughly enjoying the readings, discussions and thinking critically about the issues therein. The White Collar Crime topic in particular has been fascinating! Each week we look at case studies of WCC in the news and there are some incredible issues every single week. A recent one directly relevant to us was the ACCC suing Heinz for a baby food advertised as “99% fruit” but made up of fruit concentrates and purees, resulting in a 67% sugar product. Wow! The advertising is all about how healthy it is and how it develops a taste for real foods in toddlers but it basically is just a sugar hit worse than a soft drink!

Fishing and weather

We have been doing fairly well and the largest trout so far was 69cm (7.5 pounds). We are exploring the area and finding some great new spots but there is certainly some crowding on weekends! Although Thomas was lamenting the lack of rain the first week, it then rained torrentially, leaving him to lament too much rain! Hopefully now we’ll get a good mix of both rain (for the fish) and sunshine. Meanwhile it has been generally much warmer than Canberra and the place we are staying in is always toasty warm so we are very comfortable.

Catchups in Wellington and Auckland

We are planning to go to Auckland for Gather later this month and to Wellington for GovHack at the end of July and then for the OS/OS conference in August. The plan is to catch up with ALL TEH PEEPS during those trips which we are really looking forward to! Little A and I did a little one day fly in fly out trip to Wellington last week to catch up with the team to exchange information and experience with running government data portals. It was great to see Nadia, Rowan and the team, to see the recent work happening with the new portal, and to share some of the experience we had with ours. Thanks very much to the team for a great day and good luck in the next steps with your ambitious agenda! I know it will go well!


Last week we had our first visitors. Thomas’ parents stayed with us for a week which has been lovely! Little A had a great time being pampered and we enjoyed showing them around. We had a number of adventures with them including some fishing, a trip to the local national park to see some beautiful volcanoes (still active!) and a place reminiscent of the Hydro Majestic in the Blue Mountains.

We also visited Te Porere Redoubt, a Maori defensive structure including trenches, and the site of an old Maori settlement. The trench warfare skills developed by the Maori were used in the New Zealand wars and I got a few photos to show the deep trench running around the outside of the structure and then the labyrinth in the middle. There is a photo of a picture of a fortified Maori town showing that large spikes would have also been used for the defensive structure, and potentially some kind of roof? Incredible use of tactical structures for defence. One for you Sherro!

Wolverine baby

Finally, we had a small incident with little A which really showed how resilient little kids are. We were bushwalking with little A in a special backpack for carrying children. I had to step across a small gap and checked out the brush but only saw the soft leaves of a tree. I stepped across and suddenly little A screamed! Thomas was right on to it (I couldn’t see what was happening) and there had been a tiny low hanging piece of bramble (thorny vine) at little A’s face height! He quickly disentangled her and we sat her down to see the damage and console her. It had caught on her neck and luckily only gave her a few very shallow scratches but she was inconsolable. Anyway, a few cuddles later, some antiseptic cream and a warm shower and little A was perfectly happy, playing with her usual toys whilst Thomas and I were still keyed up. The next day the marks were dramatically faded and within a couple of days you could barely see them. She is healing super fast, like a baby Wolverine :) She is happily enjoying a range of foods now and gets a lot of walks and some time at the local playgroup for additional socialisation.

Sky CroeserPost-Arab Spring Tunisia: Decentralisation and Democracy

My notes from the first session, providing a critical understanding of decentralisation, and placing that within the context of Tunisia and Indonesia:

Fethi Mansouri, Alfred Deakin Institute
The Democratic Process in Tunisia: Conditions for Consolidations and Future Outlook
This paper addresses interconnected issues around democratic transition in the Arab world. There were a number of key structural and historical factors that ushered in the Arab Spring, and there have been different outcomes in Arab countries that have experienced popular uprising.

In the social sciences the notion of ‘Arab exceptionalism’ or a ‘democratic deficit’ in the Arab world has predominated. Instead of democratisation, we’ve seen ‘authoritarian upgrading’ in the Arab world. This is a failure to reform, with states instead making key concessions.

In contrast to deterministic assumptions that drive mainstream social science, we should understand the upheavals associated with political transformations as inherently fluid, unpredictable, and not easily ‘theorisable’.

We’re seeing social and economic failures: the Arab Spring erupted in part because of the cumulative failures of successive economic policies, and not just because of the lack of political reforms. In particular, there’s been a “pursuit of an ill-suited neo-liberal approach whilst ignoring many authentic and successful Eastern/Asian models”. This has led to rising poverty and inequality.

The Arab political landscape is actually highly diverse in terms of political actors, activism, history. Mansouri argues for an understanding of the trajectories of the revolutions from a Gramscian viewpoint, in terms of the strength of pre-existing civil society. Here, we can see three kinds of Arab regimes with regards to revolution:

  • homogenous initiators (states that trigger revolutionary contagions)
  • divided authoritarian states (those that follow the initiators and experience prolonged violence),
  • divided wealthy monarchical regimes (which may be able to avoid, or at least forestall, revolution).

Outcomes are, therefore, considerably different.

Mansouri identifies several key variables in predicting outcomes of democratic transition:

  • civil society,
  • the role of military institutions,
  • religion/politics nexus,
  • and external influences.

The transition towards stable democratic governance is characterised by three key stages:

  • a breakdown of authoritarianism,
  • a transition phase,
  • and the onset of a democratisation process which is supposed to produce stable and ‘democratic rule’.

Tunisia has seen deep ideological polarisation as it moves towards this third phase.

Key achievements in Tunisia:

  • Adoption of the new constitution in 2014, which emphasises that Tunisia is a civil state with its legitimacy based in the will of the people, and which establishes freedom of conscience and belief,
  • A focus on gender parity not only in election lists, but also in who’s at the top of lists,
  • Transitional justice continues to be a divisive issue: focuses on key aspects of reconciliation, including how to construct historical memory, and how to engage in reparation and reconciliation.

Tunisia also faces important challenges:

  • lack of a clear and practical plan for improving the economic situation,
  • the rise of extremist violence,
  • growing voter/citizen apathy.

Five years after the revolution, 75% of Tunisians have negative perceptions of political parties, 67% of Tunisians see political parties as close to them in terms of understanding their needs, only 53% see political parties as useful for democratisation, and 48% think that parties are not useful at all for dealing with local and regional development issues.

Voters want politicians to honour promises, fight social exclusion and poverty, create jobs, and do something about the rising cost of living. Security issues are not on voters’ lists of top issues at the moment. This indicates that dealing with local issues in a decentralised way is vital for establishing democracy in Tunisia.

Mansouri argues that in order to address the challenges Tunisia faces, we need to steer away from deterministic assumptions, incorporate informal processes (including civil society), and be mindful of changing political discourse.

Bligh Grant, The University of Technology Sydney
Decentralisation in the Australian Context: The Promise—and Failure—of the Recent White Paper Experience
This paper focuses on local government, which Grant argues is an eternal issue. We tend to think of Australia’s centre as being ‘hollowed out’ – we focus on the seaboard, and especially the eastern seaboard, in our understandings of Australia. However, it’s important to understand that Australia is a divided sovereignty, and has been since 1901.

We tend to think of Australia as very stable, a western advanced democracy. But Australia has massive socioeconomic disadvantage once you go outside the cities. We need to understand the political fragmentation of Australia. State governments in Australia are very good at looking after the cities. On the other hand, rural areas are often neglected, and there are very strong class divisions in these areas. We should also note that before European colonisation, Australia’s political landscape was highly decentralised.

In Australia, there’s been a recent ‘discussion paper’ released around decentralisation, with suggestions that ended up being canned by the Turnbull government.

All local government people champion the idea of ‘subsidiarity’ (though its specific meaning is contested). There are two key streams in the mainstream understanding of subsidiarity:

  • Deontological (duty-based) meanings: every tier of government has a proper role. This is quite a moral stance.
  • Consequentialist approaches: more economically-based.

Grant and Drew have developed a normative ideal for decentralisation. The Australian federal system is changing. Local governments are becoming larger (in terms of population), and their functional scope is widening. We need to think about the framework in which those shifts are happening.


Vedi Hadiz, The University of Melbourne
Democracy and Decentralisation in Comparative Perspective: Insights for Tunisia

Hadiz argues for a more critical understanding of how support for decentralisation happens, drawing on his research on Indonesia. He notes that when we talk about ‘decentralisation’ we assume we’re talking about the same thing. But we’re not. This is in part because the sources of support are very different. For example, international development agencies think of decentralised local governments as more responsive to the market. Their understanding of ‘decentralisation’ is heavily shaped by neoliberal ideas: the notion that decentralisation creates small, nimble, institutions that can better, more efficiently respond to the market. Civil society organisations support them because they think of them as more democratic and accountable. This is already an important disjuncture.

We often forget there’s another source of support: local elites. We therefore need to understand what the influence of local elites is. Do they have an interest in local accountability and transparency? Or in using the new authority to insulate themselves from civil society? The failure to recognise this third influence means that we often experience ‘unintended consequences’ of decentralisation.

Indonesia has made progress – despite issues, it is a democracy. But Tunisia might learn from some of the problems that Indonesia has experienced, including around decentralisation. Both places have seen demands coming from regions that have thought of themselves as marginalised by centralised authoritarian regimes captured by centralised elites.

In Indonesia, where politically-marginalised regions with considerable resources have made (reasonable) demands for more development, local elites saw decentralisation as an opportunity to move up the ladder in Indonesia. Indonesia has amply demonstrated that those who are best positioned to take advantage of decentralisation are those who already have power.

Two things that Tunisia has going for it: firstly, Indonesia’s authoritarian regime was much more effective in destroying civil society. In contrast, in Tunisia you had functioning trade unions, and Ennahda was able to exist. A stronger (though diverse and contested) civil society is an important resource. Secondly, Tunisia’s military was deliberately kept weak. So hopefully Tunisia will be able to sidestep some of the issues that Indonesia has faced.


Planet Linux AustraliaDonna Benjamin: The Moon tonight


CryptogramFriday Squid Blogging: How Squids See Color Despite Black-and-White Vision

It's chromatic aberration.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

TED7 times TED Talks anticipated history

TED’s river of ideas is a constant flow — and sometimes vital topics make their way onto our site far before they head into the mainstream. As part of the 10th birthday of TED Talks, we highlight seven Talks that foresaw big trends when they were only ripples in the pond.

Wikipedia: Is it really a thing?

At TEDGlobal 2005, Jimmy Wales gave a talk about his relatively new initiative, Wikipedia, and imagined its potential for growth. At the time, many were still skeptical of the legitimacy and reliability of an online encyclopedia, run by volunteers, that anyone could edit. Wales took to the TED stage to say that Wikipedia and open knowledge had a place in the Wild West digital climate of the mid-2000s.

Today, Wikipedia is the Internet’s encyclopedia, a go-to source for a quick informational deep dive on almost any topic imaginable (although caveat emptor). It’s even referenced as a source in legal proceedings around the world. In fact, TED has recently teamed up with Wikipedia to create even more open, accessible knowledge.

The jaw-dropping multi-touch screen

A little over a year after Jeff Han demonstrated this mind-blowing tech of multi-touch interface at TED2006, Apple released the first iPhone — and the rest is history.

Smartphones take over our lives

In 2007, Jan Chipchase called out the coming revolution of the mobile phone (the word “smartphone” hadn’t really caught on yet). From his research on human behavior, Chipchase predicted that our needs would come to intertwine with our ever-better phones in ways never imagined before. Our devices could begin to support us in pursuit of Maslow’s hierarchy of needs — anytime, anywhere. The convenience of things like banking and international money-lending through mobiles was still so novel and astounding to our flip-phone brains.

Currently, more than half of the global population owns a smartphone. (You’re probably reading this on a smartphone right now.) Isn’t that cool?

Real progress on protecting oceans

When oceanographer Sylvia Earle won the TED Prize in 2009, only 1 percent of the world’s oceans were protected. At the time, global support for marine protected areas (MPAs) wasn’t prominent, and the conversation on humanity’s impact on the climate hadn’t hit critical mass. By choosing Earle to receive the Prize, TED helped give her a platform to start that conversation and ignite assistance internationally. Fast forward to 2016 and that initial 1 percent has jumped to 4 percent; still, there’s a long way to go — you can help Earle’s organization Mission Blue and join the fight of restoring our oceans.

Twitter: Is it really a thing?

Twitter co-founder Evan Williams came to the TED2009 stage on the heels of massive growth in his company — but before Twitter became so integral to our media lives. Twitter secured its global legitimacy as a place for open communication in the Internet Age two years after the talk in 2011, when it was one of the main means of communication for those in the Egyptian Revolution.

At one point during his talk, Williams shares that “there are 47 members of Congress who have Twitter accounts.” Today, there are 570 members who share their thoughts in tweet form. Twitter has proven itself an integral part of the news cycle, and a vital tool for protest and online community-building.

Inequality matters

At TEDGlobal in the summer of 2011, public health researcher Richard Wilkinson relayed some startling statistics on inequality — and its insidious role in health and happiness. His talk shattered perceptions and laid bare our societies’ deepening economic inequality and its real, detrimental effects, before it hit the mainstream news cycle in a big way, kicked off by the Occupy movement later that same year.

A few years later, in 2014, Nick Hanauer — from his perspective as a venture capitalist — warned that if this rift between the rich and the poor continues, things could get a little medieval.

Follow the money trail of corruption

The TED Prize in 2014 was awarded to Charmian Gooch, an anti-corruption activist whose wish was to lift the shroud of mystery surrounding anonymous companies and the individuals they protected … two years before the Panama Papers leaked. (Global Witness campaign leader Robert Palmer gave a talk in response to the controversy via Skype — a first in TED’s history — to get on the record about such a historic leak and its ripple effects.)

CryptogramI'm on an "Adam Ruins Everything" Podcast

Adam Conover interviewed me on his podcast.

If you remember, I was featured on his "Adam Ruins Everything" TV episode on security.

Cory DoctorowAs browsers decline in relevance, they’re becoming DRM timebombs

My op-ed in today’s issue of The Tech, MIT’s leading newspaper, describes how browser vendors and the W3C, a standards body that’s housed at MIT, are collaborating to make DRM part of the core standards for future browsers, and how their unwillingness to take even the most minimal steps to protect academics and innovators from the DMCA will put the MIT community in the crosshairs of corporate lawyers and government prosecutors.

If you’re a researcher or security/privacy expert and want to send a message to the W3C that it has a duty to protect the open web from DRM laws, you can sign this open letter to the organization.

The W3C’s strategy for “saving the web” from the corporate-controlled silos of apps is to replicate the systems of control that make apps off-limits to innovation and disruption. It’s a poor trade-off, one that sets a time-bomb ticking in the web’s foundations, making the lives of monopolists easier, and the lives of security researchers and entrepreneurs much, much more perilous.

The Electronic Frontier Foundation, a W3C member, has proposed a compromise that will protect the rights of academics, entrepreneurs, and security researchers to make new browser technologies and report the defects in the old ones: we asked the W3C to extend its patent policy to the DMCA, so that members who participated in making DRM would have to promise not to use the DMCA to attack implementers or security researchers.

But although this was supported by a diverse group of W3C members, the W3C executive did not adopt the proposal. Now, EME has gone to Candidate Recommendation stage, dangerously close to completion. The purpose of HTML5 is to provide the rich interactivity that made apps popular, and to replace apps as the nexus of control for embedded systems, including the actuating, sensing world of “internet of things” devices.

We can’t afford to have these devices controlled by a system that is a no-go zone for academic work, security research, and innovative disruption. Although some of the biggest tech corporations in the world today support EME, very few of them could have come into being if EME-style rules had been in place at their inception. A growing coalition of leading international privacy and security researchers have asked the W3C to reconsider and protect the open web from DRM, a proposal supported by many W3C staffers, including Danny Weitzner (CSAIL/W3C), who wrote the W3C’s patent policy.

Browsers’ bid for relevance is turning them into time-bombs
[Cory Doctorow/The Tech]

(Image: Wfm stata center, Raul654, CC-BY-SA)

Krebs on Security1,025 Wendy’s Locations Hit in Card Breach

At least 1,025 Wendy’s locations were hit by a malware-driven credit card breach that began in the fall of 2015, the nationwide fast-food chain said Thursday. The announcement marks a significant expansion in a data breach that is costing banks and credit unions plenty: Previously, Wendy’s had said the breach impacted fewer than 300 locations.

An ad for Wendy’s (in Russian).

On January 27, 2016, this publication was the first to report that Wendy’s was investigating a card breach. In mid-May, the company announced in its first quarter financial statement that the fraud impacted just five percent of stores. But in a statement last month, Wendy’s warned that its estimates about the size and scope of the breach were about to get much meatier.

Wendy’s has published a page that breaks down the breached restaurant locations by state.

Wendy’s is placing blame for the breach on an unnamed third-party that serves franchised Wendy’s locations, saying that a “service provider” that had remote access to the compromised cash registers got hacked.

For better or worse, countless restaurant franchises outsource the management and upkeep of their point-of-sale systems to third party providers, most of whom use remote administration tools to access and manage the systems remotely over the Internet.

Unsurprisingly, the attackers have focused on hacking the third-party providers and have had much success with this tactic. Very often, the hackers just guess at the usernames and passwords needed to remotely access point-of-sale devices. But as more POS vendors start to tighten up on that front, the criminals are shifting their focus to social engineering attacks — that is, manipulating employees at the targeted organization into opening the backdoor for the attackers.

As detailed in Slicing Into a Point-of-Sale Botnet, hackers responsible for stealing millions of customer credit card numbers from pizza chain Cici’s Pizza used social engineering attacks to trick employees at third party point-of-sale providers into installing malicious software.

Perhaps predictably, Wendy’s has been hit with at least one class action lawsuit over the breach. First Choice Federal Credit Union reportedly alleged that the data breach could have been prevented or at least lessened had the company acted faster. That’s difficult to argue against: The company first learned about the breach in January 2016, and stores were still being milked of customer card data six months later.

More lawsuits are likely to come. As noted in Credit Unions Feeling Pinch in Wendy’s Breach, the CEO of the National Association of Federal Credit Unions believes the losses their members have suffered from cards compromised at Wendy’s locations so far eclipse those that came in the wake of the huge card breaches at Target and Home Depot.

People who are in the habit of regularly eating at or patronizing a company that is in the midst of responding to a data breach pose a frustrating challenge for smaller banks and credit unions that fight card fraud mainly by issuing customers a new card. Not long after a new card is shipped, these customers turn around and unwittingly re-compromise their cards, prompting institutions to weigh the costs of continuously re-issuing versus the chances that the cards will be sold in the underground and used for fraud.

A number of readers have written in this past week apparently concerned about my whereabouts and well-being. It’s nice to be missed; I took a few days off for a much-needed staycation and to visit with friends and family. I’m writing this post because some stories you just have to see through to the bitter end. But fear not: KrebsOnSecurity will be back in full swing next week!

Sociological Images: A SocImages Collection: Police, Black Americans, and U.S. Society

Why are relations between black America and the police so fraught? I hope that this collection of 50 posts on this topic and the experience of being black in this country will help grow understanding. See, also, the Ferguson syllabus put together by Sociologists for Justice, the Baltimore syllabus, and this summary of the facts by Nicki Lisa Cole.

Race and policing:

Perceptions of black men and boys as inherently criminal:

Proof that Americans have less empathy for black people:

Evidence of the consistent maltreatment, misrepresentation, and oppression of black people in every part of American society:

On violent resistance:

The situation now:

W.E.B. DuBois (1934):

The colored people of America are coming to face the fact quite calmly that most white Americans do not like them, and are planning neither for their survival, nor for their definite future if it involves free, self-assertive modern manhood. This does not mean all Americans. A saving few are worried about the Negro problem; a still larger group are not ill-disposed, but they fear prevailing public opinion. The great mass of Americans are, however, merely representatives of average humanity. They muddle along with their own affairs and scarcely can be expected to take seriously the affairs of strangers or people whom they partly fear and partly despise.

For many years it was the theory of most Negro leaders that this attitude was the insensibility of ignorance and inexperience, that white America did not know of or realize the continuing plight of the Negro.  Accordingly, for the last two decades, we have striven by book and periodical, by speech and appeal, by various dramatic methods of agitation, to put the essential facts before the American people.  Today there can be no doubt that Americans know the facts; and yet they remain for the most part indifferent and unmoved.

– From A Negro Nation Within a Nation

Cryptogram: Researchers Discover Tor Nodes Designed to Spy on Hidden Services

Two researchers have discovered over 100 Tor nodes that are spying on hidden services. Cory Doctorow explains:

These nodes -- ordinary nodes, not exit nodes -- sorted through all the traffic that passed through them, looking for anything bound for a hidden service, which allowed them to discover hidden services that had not been advertised. These nodes then attacked the hidden services by making connections to them and trying common exploits against the server-software running on them, seeking to compromise and take them over.

The researchers used "honeypot" .onion servers to find the spying computers: these honeypots were .onion sites that the researchers set up in their own lab and then connected to repeatedly over the Tor network, thus seeding many Tor nodes with the information of the honions' existence. They didn't advertise the honions' existence in any other way and there was nothing of interest at these sites, and so when the sites logged new connections, the researchers could infer that they were being contacted by a system that had spied on one of their Tor network circuits.
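The inference behind the honeypot setup is simple enough to sketch. The following Python snippet is purely illustrative (it is not the researchers' code, and the onion addresses and visitor labels are invented): because a honion's address is never advertised, any connection that the operator did not make themselves implies that some relay snooped on a circuit carrying that address.

```python
# Toy sketch of the "honion" inference, with made-up data.
# A honeypot .onion address is known only to its operator, so any
# connection from anyone else must come from a node that spied on
# a Tor circuit carrying that address.

def find_snooped_honions(honions, connection_log):
    """Return honeypot addresses that received unexpected visits.

    honions        -- set of .onion addresses never advertised anywhere
    connection_log -- list of (onion_address, visitor_id) tuples
    """
    snooped = set()
    for onion, visitor in connection_log:
        if onion in honions and visitor != "operator":
            snooped.add(onion)
    return snooped

# Hypothetical log: two honeypots, one visited by an unknown party.
honions = {"aaaa1111.onion", "bbbb2222.onion"}
log = [
    ("aaaa1111.onion", "operator"),   # the operator's own seeding visit
    ("aaaa1111.onion", "unknown-x"),  # evidence of a spying node
    ("bbbb2222.onion", "operator"),
]
print(find_snooped_honions(honions, log))
```

The real experiment, of course, had to seed many circuits through many relays and control for timing; the sketch only captures the logical core of the detection.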

This attack was already understood as a theoretical problem for the Tor project, which had recently undertaken a rearchitecting of the hidden service system that would prevent it from taking place.

No one knows who is running the spying nodes: they could be run by criminals, governments, private suppliers of "infowar" weapons to governments, independent researchers, or other scholars (though scholarly research would not normally include attempts to hack the servers once they were discovered).

The Tor project is working on redesigning its system to block this attack.

Vice Motherboard article. Defcon talk announcement.

Worse Than Failure: Error'd: Not in Kansas Anymore

Eric G. wrote, "It looks like Dinerware, a point of sale system for restaurants, has a similar problem to the Scarecrow in the Wizard of Oz."


"I noticed an 'Eject Thumb Drive' icon in my system tray and, since I didn't have a thumb drive plugged in, I was curious what it was referring to," Russ J. writes.


"It's one thing to see ~Firstname-YP~}, but my first name is IN the email!" wrote Jack.


"I spotted this at a train station in Atlanta. I guess I'll miss out on what the screensaver was," writes Christopher E.


"Walgreens really wants me to reply to this message but also to understand that they won't process or read it. I'm confused," Aaron D. writes.


"I had to double check, but I don't think that Fitbit knows how to do datetime math," writes Pascal.


Matthew J. writes, "I was trying to install some software and the C++ runtime libraries apparently have a problem with the far east."



Planet Linux Australia: Russell Coker: Nexus 6P and Galaxy S5 Mini

Just over a month ago I ordered a new Nexus 6P [1]. I’ve had it for over a month now and it’s time to review it and the Samsung Galaxy S5 Mini I also bought.


The first noteworthy thing about this phone is the fingerprint scanner on the back. The recommended configuration is to use your fingerprint to unlock the phone, which allows a single touch on the scanner to unlock the screen without pressing any other buttons. To unlock with a pattern or password you need to first press the “power” button to get the phone’s attention.

I have been considering registering a fingerprint from my non-dominant hand to reduce the incidence of accidentally unlocking it when carrying it or fiddling with it.

The phone won’t complete the boot process before being unlocked. This is a good security feature.

Android version 6 doesn’t assign permissions to apps at install time, they have to be enabled at run time (at least for apps that support Android 6). So you get lots of questions while running apps about what they are permitted to do. Unfortunately there’s no “allow for the duration of this session” option.

A new Android feature prevents changing security settings when there is an “overlay running”. The phone instructs you to disable overlay access for the app in question but that’s not necessary. All that is necessary is for the app to stop using the overlay feature. I use the Twilight app [2] to dim the screen and use redder colors at night. When I want to change settings at night I just have to pause that app and there’s no need to remove the access from it – note that all the web pages and online documentation saying otherwise is wrong.

Another new feature is to not require unlocking while at home. This can be a convenience feature but fingerprint unlocking is so easy that it doesn’t provide much benefit. The downside of enabling this is that if someone stole your phone they could visit your home to get it unlocked. Also police who didn’t have a warrant permitting search of a phone could do so anyway without needing to compel the owner to give up the password.


This is one of the two most attractive phones I’ve owned (the other being the sparkly Nexus 4). I think that the general impression of the appearance is positive, as there are transparent cases on sale. My phone is white and reminds me of EVE from the movie WALL-E.


This phone uses the USB Type-C connector, which isn’t news to anyone. What I didn’t realise is that full USB-C requires that connector at both ends, as it’s not permitted to have a data cable with USB-C at the device end and USB-A at the host end. The Nexus 6P ships with a 1M long charging cable that has USB-C at both ends and a ~10cm charging cable with USB-C at one end and type A at the other (for the old batteries and the PCs that don’t have USB-C). I bought some 2M long USB-C to USB-A cables for charging my new phone with my old chargers, but I haven’t yet got a 1M long cable. Sometimes I need a cable that’s longer than 10cm but shorter than 2M.

The USB-C cables are all significantly thicker than older USB cables. Part of that would be due to having many more wires but presumably part of it would be due to having thicker power wires for delivering 3A. I haven’t measured power draw but it does seem to charge faster than older phones.

Overall the process of converting to USB-C is going to be a lot more inconvenient than USB SuperSpeed (which I could basically ignore as non-SuperSpeed connectors worked).

It will be good when laptops with USB-C support become common, it should allow thinner laptops with more ports.

One problem I initially had with my Samsung Galaxy Note 3 was that the Micro-USB SuperSpeed socket on the phone was fiddly with the Micro-USB charging plug I used. After a while I got used to that, but it was still an annoyance. Having a symmetrical plug that can go into the phone either way is a significant convenience.

Calendars and Contacts

I share most phone contacts with my wife and also have another list that is separate. In the past I had used the Samsung contacts system for the contacts that were specific to my phone and a Google account for contacts that are shared between our phones. Now that I’m using a non-Samsung phone I got another Gmail account for the purpose of storing contacts. Fortunately you can get as many Gmail accounts as you want. But it would be nice if Google supported multiple contact lists and multiple calendars on a single account.

Samsung Galaxy S5 Mini

Shortly after buying the Nexus 6P I decided that I spend enough time in pools and hot tubs that having a waterproof phone would be a good idea. Probably most people wouldn’t consider reading email in a hot tub on a cruise ship to be an ideal holiday, but it works for me. The Galaxy S5 Mini seems to be the cheapest new phone that’s waterproof. It is small and has a relatively low resolution screen, but it’s more than adequate for a device that I’ll use for an average of a few hours a week. I don’t plan to get a SIM for it, I’ll just use Wifi from my main phone.

One noteworthy thing is the amount of bloatware on the Samsung. Usually when configuring a new phone I’m so excited about fancy new hardware that I don’t notice it much. But this time buying the new phone wasn’t particularly exciting as I had just bought a phone that’s much better, so I had more time to notice all the annoyances of having to download updates to Samsung apps that I’ll never use. The Samsung device manager facility has been useful for me in the past, and the Samsung contact list was useful for keeping a second address book until I got a Nexus phone. But most of the Samsung apps and third-party apps aren’t useful at all.

It’s bad enough having to install all the Google core apps. I’ve never read mail from my Gmail account on my phone. I use Fetchmail to transfer it to an IMAP folder on my personal mail server and I’d rather not have the Gmail app on my Android devices. Having any apps other than the bare minimum seems like a bad idea, more apps in the Android image means larger downloads for an over-the-air update and also more space used in the main partition for updates to apps that you don’t use.

Not So Exciting

In recent times there hasn’t been much potential for new features in phones. All phones have enough RAM and screen space for all common apps. While the S5 Mini has a small screen it’s not that small, I spent many years with desktop PCs that had a similar resolution. So while the S5 Mini was released a couple of years ago that doesn’t matter much for most common use. I wouldn’t want it for my main phone but for a secondary phone it’s quite good.

The Nexus 6P is a very nice phone, but apart from USB-C, the fingerprint reader, and the lack of a stylus there’s not much noticeable difference between that and the Samsung Galaxy Note 3 I was using before.

I’m generally happy with my Nexus 6P, but I think that anyone who chooses to buy a cheaper phone probably isn’t going to be missing a lot.


Cryptogram: Hijacking Someone's Facebook Account with a Fake Passport Copy

BBC has the story. The confusion is that a scan of a passport is much easier to forge than an actual passport. This is a truly hard problem: how do you give people the ability to get back into their accounts after they've lost their credentials, while at the same time prohibiting hackers from using the same mechanism to hijack accounts? Demanding an easy-to-forge copy of a hard-to-forge document isn't a good solution.

TED: Self-organized learners around the world team up to raise money

SOLE Colombia teaches digital literacy to kids in rural communities across the country, giving them big questions to research in groups. Photo: Courtesy of SOLE Colombia

In Kingston, Jamaica, 4- to 6-year-olds in early education programs think about questions like, “What does it mean to be selfish?” In a school on the outskirts of Lahore, Pakistan, fifth graders research topics like, “What is WordPress?” In rural Colombia, students at local libraries puzzle over prompts like, “Why are yawns contagious?”

The School in the Cloud, founded by education innovator Sugata Mitra with the 2013 TED Prize, asks students to follow their curiosity as they research big questions online. The tech-forward school has five official learning labs in India and two in the UK — plus partner programs scattered around the world, where educators use its online platform to run Self-Organized Learning Environments, or SOLEs. For the first time, ten of these partner programs have teamed up for a Crowdrise campaign, to raise money for equipment and growth. SOLE Pakistan has raised more than $3,000; SOLE Colombia more than $2,000; and SOLE Jamaica more than $1,500.

At Khud in Lahore, Pakistan, students learn to use computers — and give compelling presentations. Photo: Courtesy of Khud

Salahuddin Khawaja, who runs SOLE Pakistan (known as Khud), says that School in the Cloud fills a void in his country. “Pakistan’s education system is in crisis. More than a million teachers are needed,” he said. “SOLE sessions empower kids by developing their 21st-century skills. They’re equipping themselves with knowledge of computers, presentations and video editing.”

Rondeen McLean of SOLE Jamaica wants to buy 65 computers to bring SOLE sessions to 13 more early-childhood education programs in underserved communities. “Working with our first 100 students, we’ve observed sharper critical-thinking skills and real development as independent readers,” she said.

At School in the Cloud’s Area 4 lab in Phaltan, India, students who’ve never used computers before quickly build skills. Photo: Courtesy of School in the Cloud

Sanjay Fernandes of SOLE Colombia is fundraising to take SOLE sessions on the road. “We’re a small group of education and tech enthusiasts, and we want to take SOLE to all the corners of this country,” he said. “In rural areas in Colombia, the government has spent a lot of money setting up computers and connectivity in public places like libraries, kiosks and schools. We’ve convinced them to use SOLE as a method to get people of all ages using them. We’ve been to 300 places so far.”

Other partners participating in the campaign: SOLE Spain, SOLE Mexico, SOLE Argentina, SOLE India, SOLE Phaltan, SOLE Greece and SOLE-NYC. Learn more at the School in the Cloud Crowdrise campaign.

At SOLE-NYC, a lab inside a public school in Harlem, students join in SOLE sessions once a week. Photo: Dian Lofton/TED

Sociological Images: When Force is Hardest to Justify, Victims of Police Violence Are Most Likely to be Black

A Pew study found that 63% of white people and 20% of black people think that Michael Brown’s death at the hands of Darren Wilson was not about race. This week many people will probably say the same about two more black men killed by police, Philando Castile and Alton Sterling.

Those people are wrong.

African Americans are, in fact, far more likely to be killed by police. Among young men, blacks are 21 times more likely to die at the hands of police than their white counterparts.

But, are they more likely to precipitate police violence?  No. The opposite is true. Police are more likely to kill black people regardless of what they are doing. In fact, “the less clear it is that force was necessary, the more likely the victim is to be black.”

That’s data from the FBI.

This question was also studied by sociologist Lance Hannon. With an analysis of over 950 non-justifiable homicides from police files, he tested whether black people were more likely to take actions that triggered their own murder. The answer was no. He found no evidence that blacks were more likely than whites to engage in verbal or physical antecedents that explained their death.

There is lots, lots more evidence if one bothers to go looking for it.

Castile and Sterling, unlike Brown, were carrying weapons. People will try to use that fact to justify the police officer’s fatal aggression. But it doesn’t matter. Black men and women are killed disproportionately whether they are carrying weapons or not, whether carrying weapons is legal or not. Carrying weapons is, in fact, legal in both Minnesota and Louisiana, the states of this week’s killings. What they were carrying is no more illegal than Trayvon’s pack of Skittles. Black people can’t carry guns safely; it doesn’t matter whether they are legal. Heck, they can’t carry Skittles safely. Because laws that allow open and concealed carry don’t apply the same way to them as they do white people. No laws apply the same way to them. The laws might be race neutral; America is not.

Revised from 2014.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and Gender, a textbook. You can follow her on Twitter, Facebook, and Instagram.

Cryptogram: The Difficulty of Routing around Internet Surveillance States

Interesting research: "Characterizing and Avoiding Routing Detours Through Surveillance States," by Anne Edmundson, Roya Ensafi, Nick Feamster, and Jennifer Rexford.

Abstract: An increasing number of countries are passing laws that facilitate the mass surveillance of Internet traffic. In response, governments and citizens are increasingly paying attention to the countries that their Internet traffic traverses. In some cases, countries are taking extreme steps, such as building new Internet Exchange Points (IXPs), which allow networks to interconnect directly, and encouraging local interconnection to keep local traffic local. We find that although many of these efforts are extensive, they are often futile, due to the inherent lack of hosting and route diversity for many popular sites. By measuring the country-level paths to popular domains, we characterize transnational routing detours. We find that traffic is traversing known surveillance states, even when the traffic originates and ends in a country that does not conduct mass surveillance. Then, we investigate how clients can use overlay network relays and the open DNS resolver infrastructure to prevent their traffic from traversing certain jurisdictions. We find that 84% of paths originating in Brazil traverse the United States, but when relays are used for country avoidance, only 37% of Brazilian paths traverse the United States. Using the open DNS resolver infrastructure allows Kenyan clients to avoid the United States on 17% more paths. Unfortunately, we find that some of the more prominent surveillance states (e.g., the U.S.) are also some of the least avoidable countries.
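The paper's core measurement can be caricatured in a few lines. This Python sketch is hypothetical (a toy GeoIP table and made-up hop lists, not the authors' methodology or data): map each hop of a path to a country, then test whether the path transits a given state, with and without an overlay relay.

```python
# Toy sketch of country-level path checking. Real measurements use
# traceroute data and a GeoIP database; this table is invented.

GEOIP = {  # hypothetical IP -> country mapping
    "200.1.1.1": "BR", "200.2.2.2": "BR",
    "8.8.8.8": "US",
    "80.1.1.1": "DE",
}

def traverses(path, country):
    """True if any hop on the path resolves to the given country."""
    return any(GEOIP.get(ip) == country for ip in path)

# A Brazil-to-Brazil path detouring through the US, and the same
# endpoints reached via a (hypothetical) relay in Germany.
direct_path = ["200.1.1.1", "8.8.8.8", "200.2.2.2"]
relayed_path = ["200.1.1.1", "80.1.1.1", "200.2.2.2"]

print(traverses(direct_path, "US"))   # the detour the paper measures
print(traverses(relayed_path, "US"))  # avoided via an overlay relay
```

The paper's finding that some states (notably the US) are among the least avoidable corresponds, in this toy model, to there being no relay choice that produces a US-free path for many destinations.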

Worse Than Failure: Classic WTF: The Circle of Fail

"Doctor, it hurts whenever I do this!" This classic ran back in 2013 -- Remy

During Ulrich’s days as an undergraduate, he landed a part-time gig at a nuclear power plant. It was an anxious time to be on board at the nuke plant- the late 1990s. The dreaded Y2K loomed over all of their aging systems. One decimal point in the wrong spot at midnight on January 1st, 2000 and… well, nothing good would come of it.

Ulrich’s job for the big conversion was more benign though. He needed to update the simple graphics on the monitoring program the nuclear technicians used to keep tabs on the reactor. The very basic macro language generated Commodore 64-quality graphics; it displayed the position of the control rods, neutron flux, water temperatures & pressure, turbine and generator stats, and how many three-eyed fish were caught in the neighboring lake. All of this was then shown on 10 massive CRT monitors mounted around the main control room.

Ulrich worked diligently to get his screens prepared, and the day came for him to roll out the changes. They didn’t have a “test control room”, so the demo needed to be run live. He invited the engineers to gather ’round the monitors to see his spectacular new designs. When the program booted and Ulrich went to pull up the control rod screen, all 10 monitors went as black as the cloak on a member of the Night’s Watch. As the engineers chuckled, Ulrich turned bright red and ran back to the server room to see what happened. It didn’t take him long to realize that whatever he screwed up caused the entire mainframe to go down.

Thus began a two-week battle to troubleshoot the mainframe issue, during which time the computer monitoring was completely unavailable. This caused the nuclear technicians to have to leave their air conditioned control room so they could use primitive analog monitoring tools from the 1970’s to check on the reactor. Every time Ulrich walked past one of them, he could sense them glaring and thinking “There’s that little pipsqueak that killed the monitors!”

The tools Ulrich had to debug the program weren’t merely useless to him. They went beyond uselessness into outright opposition. The custom macro-language had no debugger or real documentation, and the mainframe was purchased from the Czech Republic, so one would have to know Czech in order to read the error logs. He located a sticker on top of the server with the phone number of the vendor and reached one of their ‘experts’ named Miklos, who asked him for the serial number of the product. Ulrich provided it, but the expert retorted “That is not full number! This is too short. What you need help with? Toaster? Coffee maker?”

Confused, Ulrich replied, “Ummm, a mainframe?” Had the nuclear plant bought their server from some sort of Czech Coffee, Toaster, and Mainframe Corp.? Miklos said “Oh no, Miklos can not help you. I give you number for Blazej. He does help with mainframe.” Blazej was an engineer at another nuclear power plant in the Czech Republic, who also had the same mainframe. Ulrich called there, not expecting much.

Through a series of conversations with Blazej, Ulrich was able to finally narrow down the problem to the presence of circles in the screen outputs. Apparently drawing fancy circles was far too much for the monitoring program to handle. He removed all the circles from his screens, uploaded the changes to the mainframe and finally the engineers could see the reactor statistics on the bright, beautiful monitors; without any circles. The result was ugly, boxy, and barely readable, but it worked. Ulrich breathed a sigh of relief then decided to call Czech Coffee, Toaster, and Mainframe Corp. back to notify them of the horrible bug in their program.

Ulrich once again got connected to his buddy Miklos. “Hi Miklos, this is Ulrich. I called a while back concerning our power plant monitoring program crashing the mainframe. You’ll be glad to know that Blazej and I were able to determine the problem. It all had to do with circles being drawn on the screen. I know it sounds silly, but that causes the whole mainframe to come down.”

Miklos seemed to be offended by such an accusation. “You do a circle and server come down? You want Miklos to fix this? You stupid? If you know circle cause trouble, then DO NOT USE CIRCLE!” Miklos abruptly hung up. Ulrich shrugged it off since his job was done. He eventually finished his undergrad program before Y2K and moved on from the nuclear power plant. When New Years 2000 rolled around, he made sure he was far, far away at a ski resort just in case anyone else slipped a circle into the graphics and the plant melted down as a result.



Sociological Images: The Vanishing Barbershop?

The barbershop holds a special place in American culture. With its red, white, and blue striped poles, dark Naugahyde chairs, and straight razor shaves, the barbershop has been a place where men congregate to shore up their stubble and get a handle on their hair. From a sociological perspective, the barbershop is an interesting place because of its historically homosocial character, where men spend time with other men. In the absence of women, men create close relationships with each other. Some might come daily to talk with their barbers, discuss the news, or play chess. Men create community in these places, and community is important to people’s health and well-being.

But is the barbershop disappearing? If so, is anything taking its place?

In my study of high-service men’s salons — dedicated to the primping and preening of an all male clientele — hair stylists described the “old school” barbershop as a vanishing place. They explained that men are seeking out a pampered grooming experience that the bare bones barbershop with its corner dusty tube television doesn’t offer. The licensed barbers I interviewed saw these newer men’s salons as a “resurgence” of “a men-only place” that provides more “care” to clients than the “dirty little barbershop.” And those barbershops that are sticking around, said Roxy, one barber, are “trying to be a little more upscale.” She encourages barbers to “repaint and add flat-screen TVs.”

When I asked clients of one men’s salon, The Executive, if they ever had their hair cut at a barbershop, they explained that they did not fit the demographic. Barbershops, they said, are for old men with little hair to worry about or young boys who don’t have anyone to impress. As professional white-collar men, they see themselves as having outgrown the barbershop. A salon, with its focus on detailed haircuts and various services, including manicures, pedicures, hair coloring, and body waxing, help these mostly white men to obtain what they consider to be a “professional” appearance. “Professional men… they know that if they look successful, that will create connotations to their clients or customers or others that they work with — that they are smart, that they know what they’re doing,” said Gill, a client of the salon and vice-president in software, who reasoned why men go to the salon.

Indeed the numbers support the claim that barbershops are dwindling, and it may well be due to white well-to-do men’s shifting attitudes about what a barbershop is, what it can offer, and who goes there. (In my earlier research on a small women’s salon, one male client told me the barbershop is a place for the mechanic, or “grease-monkey,” who doesn’t care how he looks, and for “machismo” men who prefer a pile of Playboy magazines rather than the finery of a salon.) According to Census data, there is a fairly steady decline in the number of barbershops over twenty years. From 1992 to 2012, we saw a 23% decrease in barbershops in the United States, with a slight uptick in 2013.

U.S. Census Bureau, Statistics of U.S. Businesses,

But these attitudes about the barbershop as a place of ol’, as a fading institution that provides outdated fades, is both a classed and raced attitude. With all the nostalgia for the barbershop in American culture, there is surprisingly little academic writing about it. It is telling, though, that research considering the importance of the barbershop in men’s lives focuses on black barbershops. The corner barbershop is alive and well in black communities and it serves an important role in the lives of black men. In her book, Barbershops, Bibles, and BET, political scientist and TV host, Melissa Harris-Perry, wrote about everyday barbershop talk as important for understanding collective efforts to frame black political thought. Scholars also find the black barbershop remains an important site for building communities and economies in black neighborhoods and for socializing young black boys.

And so asking if the barbershop is vanishing is the wrong question. Rather, we should be asking: Where and for whom is the barbershop vanishing? And where barbershops continue as staples of a community, what purpose do they serve? Where they are disappearing, what is replacing them, and what are the social relations underpinning the emergence of these new places?

In some white hipster neighborhoods, the barbershop is actually making a comeback. In his article, What the Barbershop Renaissance Says about Men, journalist and popular masculinities commentator, Thomas Page McBee, writes that these places provide sensory pleasures whereby men can channel a masculinity that existed unfettered in the “good old days.” The smell of talcum powder and the presence of shaving mugs help men to grapple with what it means to be a man at a time when masculinity is up for debate. But in a barbershop that charges $45 for a haircut, some men are left out. And so, in a place that engages tensions between ideas of nostalgic masculinity and a new sort of progressive man, we may very well see opportunities for real change fall by the wayside. The hipster phenomenon, after all, is a largely white one that appropriates symbols of white working-class masculinity: think white tank tops with tattoos or the plaid shirts of lumbersexuals.

When we return to neighborhoods where barbershops are indeed disappearing and being replaced with high-service men’s salons like those in my book, Styling Masculinity, it is important to put these shifts into context. They are not signs of a disintegrating bygone culture of manhood. Rather, they are part of a transformation of white, well-to-do masculinity that reflects an enduring investment in distinguishing men along the lines of race and class according to where they have their hair cut. And these men are still creating intimate relationships; but instead of immersing themselves in communities of men, they are often building confidential relationships with women hair stylists.

Kristen Barber, PhD, is a sociologist at Southern Illinois University and the author of Styling Masculinity: Gender, Class, and Inequality in the Men’s Grooming Industry. She blogs at Feminist Reflections, where this post originally appeared.

*Thank you to Trisha Crashaw, graduate student at Southern Illinois University, Carbondale, for her work on the included graph.


TEDExperiences: Notes from Session 6 of TEDSummit

Bodacious beats: Kenyan musician, producer and DJ, “Blinky Bill” Sellanga simultaneously brought the house down and the TED audience to their feet with a lively, genre-bending musical performance. “My, oh my,” he sings: “What a wonderful feeling.”

Photo by Bret Hartman/TED.

Ione Wells sparked a social media campaign that gives assault survivors a voice. Photo by Bret Hartman/TED.

Sexual assault, social media and justice. One night, walking home from London’s Tube, Ione Wells was followed and grabbed from behind; her attacker smashed her head against the pavement and sexually assaulted her. As she recovered, she wrote a letter to this person, telling them just how deeply he had hurt an entire community: “You did not just attack me that night. I’m a daughter, I’m a friend, I’m a sister…all the people who formed these relations to me make up my community and you assaulted every single one of them.” She later published the letter in her school’s newspaper and asked others to reply with their experiences under the #notguilty hashtag. It went viral overnight and became a campaign for empowering the voices of survivors. But as Wells explained to the TEDSummit audience, the attention got her thinking about how, in the social media age, people leap to react to injustices, creating a spiral of negativity that blocks the voices of survivors themselves. Instead, Wells pleads, let’s take a more considered approach to social media in the wake of injustice. Let’s take the time to listen to those actually affected by it instead of simply creating noise.

How we broke the Panama Papers story. As head of the International Consortium of Investigative Journalists, Gerard Ryle played a central role in breaking the Panama Papers. It was his organization that assembled the globe-spanning team responsible for combing through 40 years of spreadsheets, PDFs and emails. Sharing such a massive story went against everything Ryle knew as an investigative journalist, and unsurprisingly the experience led him to adopt an unconventional new outlook. Rather than destroying the medium, he now believes, technology may be the way to finally bring journalism into the truly global age. Read more about Gerard’s talk …

Photo by Bret Hartman/TED.

“Blinky Bill” Sellanga rocked the house (and kicked off a conga line at 9:45 in the morning). Photo by Marla Aufmuth/TED.

An approach so transparent, it’s opaque. In June of 2002, as Hasan Elahi re-entered the US at the airport in Detroit, Michigan, he was stopped at customs and taken to an interrogation room, where he was asked, among many other things, whether he had any explosives in his storage unit near his home in Florida, and where he was on September 12. Luckily, he was a meticulous calendar keeper, and through his Palm Pilot could easily state where he was that day and every day since. Eventually, after six months of questioning back home, Elahi was cleared by the FBI. But he learned something in those months: not to put up a fight over what information to reveal. He told them everything. And then, even when he was cleared, he didn’t stop. It started with emails of travel plans so he wouldn’t get hassled again, but soon he was compiling virtually all possible data on his life and movements, including photos of meals, GPS coordinates and even photos of urinals. There was no datum too small. More than a decade later, Elahi feels that we stress out much too much about surveillance; instead of trying to withhold information, why not open the floodgates? “This is happening, this is not going away, this is the reality we have to live with,” says Elahi, an associate art professor at the University of Maryland and an American citizen. “In a world where surveillance is the norm for many governments and businesses, transparency is resistance.” Of all the arguments in favor of more openness in the world, Elahi’s is among the cleverest and most thought-provoking.

The paradox of knowledge. Almost 30 years ago, the British-born, Indian-extracted writer Pico Iyer took a trip to Japan and fell in love. He wrote up his enthusiasm and findings about Japanese culture, fashions, customs and architecture in an essay so long no magazine could publish it. No matter: he soon decided to pick up and move to Japan, where he has now lived for 28 years. Back then, everything he knew about Japan seemed to fill his mind; what he knows now is gargantuan by comparison. And yet, a keen observer of the human spirit and the sometimes-backwards journeys it makes going forward, Iyer professes that today he feels he knows far, far less about Japan, or indeed about anything, than his 30-year-old self felt he knew. It’s a curious insight about knowledge gained with age: the more we know, the more we see how little we know. “I don’t believe that ignorance is bliss,” says Iyer. “Knowledge is a priceless gift, but the illusion of knowledge can be more dangerous than ignorance. The one thing I have learned is that transformation comes when I am not in charge, when I don’t know what’s coming next. There are certainly some things that we need to know, but there are other things that are better left unexplored.” It may sound like an ironic way to close a conference celebrating the pursuit of knowledge and understanding, but it’s also a lovely reminder that, as Iyer says, “in the end, being human is much more important than being fully in the know.”


Photo by Bret Hartman/TED.

Pico Iyer: “Knowledge is a priceless gift, but the illusion of knowledge can be more dangerous than ignorance.” Photo by Bret Hartman/TED

TEDPathways: Notes from Session 4 of TEDSummit

This morning’s Session 4 explored the ways we connect — the pathways our money takes, our communication, our trust, even our intelligence(s). Read on:

Trust in your neighbor, but maybe not in your bank. Why is it that, despite being told “don’t get into a car with a stranger” for as long as we can remember, five million of us opt to do this every day when we call an Uber? Rachel Botsman believes the popularity of these services, including Uber, Airbnb and the like, represents a fundamental shift in our societies away from an institutional model of trust and toward a distributed model. In recent history, a handful of major events have severely weakened the public’s trust in our banks, our government and even the church. As this form of institutional trust collapses, we have witnessed a simultaneous rise in what’s known as the “sharing economy.” This new bottom-up model of trust is empowered by technology, including systems like the blockchain, which may someday remove the need for third-party trust systems entirely. As trust becomes more and more local and accountability-based, technology will continue to shift power away from these economic institutions and distribute it into the hands of all of us.

Photo by Bret Hartman/TED.

Rachel Botsman explores the changing nature of trust — and how informal trust networks powered by new tech have created new behaviors. Photo by Bret Hartman/TED.

Benedict Android: how your phone can be hacked to betray you. The good news about smartphone surveillance is really good: in the past few years, many ordinary citizens have been able to capture shocking and irrefutable evidence of violent civil-rights abuses by police, soldiers and others, starting huge and important conversations. But the bad news is also really bad. Apple may have made news by refusing to bend or break the high-security encryption on its iPhones even for an FBI terrorism investigation. However, as the noted surveillance researcher Christopher Soghoian explains, Apple is the exception, and its products are affordable only to an upper-income tier of the world’s population. The security encryption on most smartphones — the Android-style phones used by most of the world — is far easier for law enforcement or government officials to crack, putting Android users at a much greater risk of having their phones (and the contents of them) used against them. Soghoian calls this problem “the digital security divide,” and, having extensively studied how governments use malware and other underhanded surveillance measures to hack into computers and smartphones, he offers a very compelling case that, whatever cool new apps, trendy games and photo filters are in the pipeline for the next generation of smartphones, Apple-strength security measures are desperately needed first. “If the only people who can protect themselves from the gaze of the government are the elite, that’s a problem,” says Soghoian. “It’s not a technology problem — it’s a civil rights problem.”

Photo by Marla Aufmuth/TED.

Christopher Soghoian advocates for encryption on all our smartphones — not just Apple products. Photo by Marla Aufmuth/TED.

The awe of the puzzle. The Rubik’s cube is one of the most recognizable puzzles the world over, but as techno-illusionist Marco Tempest points out, it’s still as challenging today as when it first appeared in the 1970s. As he handed audience members cubes to jumble up, he explained their tugging pull: “Puzzles are mysteries that promise a solution, we just have to find it.” He then collected the cubes and brought an audience member onstage, where she challenged him to solve a cube — which he did in under 10 seconds. Holding up another cube, she showed him each of its six sides and he, almost effortlessly, matched a separate cube to reflect its same disorder. Without blinking, Tempest arranged the cubes into a square sculpture, while illustrating the universal appeal of the puzzle, “The Rubik’s cube is not an easy puzzle, but its design is elegant and it taps into that universal desire to solve problems, to bring meaning from chaos. It’s one of the traits that makes us human and has taken us to where we are now.” 

Photo by Marla Aufmuth/TED.

Techno-illusionist Marco Tempest hands out Rubik’s cubes, inviting the audience to scramble them up. Photo by Marla Aufmuth/TED.

Censorship and the fight against terror. If there is one thing Rebecca MacKinnon believes, it’s that the fight against terrorism cannot be won without the strict preservation of human rights. Human rights, in her opinion, along with freedom of the press and an open internet, are integral tools to stop the spread of radical extremist ideologies in democratic societies. Yet, the unfortunate reality is that the people on the forefront of exercising these liberties, such as independent journalists and bloggers, are often persecuted by the same government forces as the extremists. This persecution can take the form of actual jail time, as it has in Morocco, Turkey and Saudi Arabia, but may also occur in less direct ways. In the US, Washington DC and Silicon Valley have teamed up to stop the spread of ISIS’s online communities. However, their censorship has inadvertently silenced the voices of some who simply happen to share a name with a suspected terrorist or terrorist group, like the scores of women named Isis who have found their Twitter accounts deleted. As democratic governments across the world continue to crack down on whistleblowers and dissenters, 2015 marked the 10th consecutive year that freedom had been on a decline worldwide. This is why MacKinnon believes we need to fight for transparency and accountability from our governments and for the right to encryption for all citizens. She believes that privacy is essential to the survival of investigative journalism and public discourse, thus we must make choices to reflect our support lest we stifle the very people on the frontline of the fight against extremism.

The inevitable tendencies of artificial intelligence. “The actual path of a raindrop as it goes down the valley is unpredictable, but the general direction is inevitable,” says digital visionary Kevin Kelly, and technology is much the same, shaped by patterns that may surprise us but that are driven by inevitable tendencies. One tendency in particular stands out, because it will have a profound impact on the next 20 years: our tendency to make things smarter and smarter — the process of cognification — that we identify as artificial intelligence. Kelly explores three trends of AI that we need to understand in order to embrace it, because it’s only by embracing artificial intelligence that we can steer it. The big takeaway? We’re at the very, very beginning of artificial intelligence. “The most popular AI product 20 years from now that everyone uses has not been invented yet — that means that you’re not late.”

Scientific proof that trees talk. Forest ecologist Suzanne Simard researches the quiet and cohesive ways of the woods. In her research, she’s discovered monumental evidence that will change the way you look at these stoic plants — because trees, like humans and most living things, communicate and develop communities.  Using their roots to deliver information, forests and similar collections of trees build a resilient, self-healing family; there are even “mother” trees who look after seedlings and share wisdom when injured or dying. As Simard says: “A forest is much more than what you see.” Read more about Suzanne Simard’s talk.

I am a Brit. Two days ago, Alexander Betts agreed to give a talk here at TEDSummit on an issue close to his heart: how Brexit happened, and what it means for his home country and his global vision. In a powerful talk, he asks why the UK seemed to split apart on June 24 … and whether or not this should have come (as it did to many) as a shock. Read more about Alexander Betts’ talk.


TEDOrganizing principles: Notes from Session 5 of TEDSummit

Do we have the vision and the energy to confront seemingly impossible problems — like predatory corporations, political deadlock, the wasted potential of millions of refugees? Session 5 rounded up people who are jumping right in.

A call to action on fossil fuels. Costa Rica, climate advocate Monica Araya’s native country, gets almost 100 percent of its electricity from renewable sources, including hydropower, geothermal and solar. It started with the country’s bold decision to abolish its military in 1948: investing that money in social spending created stability, which gave Costa Rica the freedom to explore alternative energy options. But it’s no utopia, Araya explains, because fossil fuels still power the country’s transportation systems — systems that are gridlocked and crumbling. Going forward, she urges the next generation to form coalitions of citizens, corporations and clean energy champions to get Costa Rica off fossil fuels completely and commit to clean energy in all sectors.

Photo by Ryan Lash/TED.

Monica Araya suggests that the future of alternative energy is in places like her home, Costa Rica. Photo by Ryan Lash/TED.

There are reasons to hope. Across the world, there are true signs of progress, despite the media’s constant drone of doom and gloom in their headlines. Global affairs thinker Jonathan Tepperman has seen it with his own eyes in three countries: Canada, Indonesia and Mexico. In each country, Tepperman examines their historical trajectory and transformation into places of societal advancement and inclusivity — drawing a common thread that connects them all. Within their borders, these nations have embraced the extreme in times of existential peril, found power in promiscuous, open-minded thinking and exercised compromise to its fullest extent. “The real obstacle is not ability and it’s not circumstances,” says Tepperman. “It’s much simpler: Making big changes involves taking big risks, and taking big risks is scary. Overcoming that fear requires guts.”

Online education for all. Imagine a world where every refugee has access to a free higher education, anywhere, at any time. It may seem unbelievable, but this is Shai Reshef’s dream, and he has already made progress toward achieving it. Soon, the University of the People, founded by Reshef, will admit 500 Syrian refugees at no cost to them. University of the People is an online education platform that he believes will make this goal not only accessible and affordable but also replicable and scalable across the world. Despite the incredibly high return on investment of education, refugees currently have only a 10% chance of receiving higher education in their host countries. Beyond improving this dismal statistic, Reshef hopes his institution can help refugees overcome the lack of legal identification that often holds them back, and eventually facilitate their transfer into local universities. Right now, 250 additional students are slated to enroll in the coming months, and eventually he hopes to sustain 12,000. Reshef wants to create an entire program run by refugees for other refugees, proving that higher education need not exclude anyone, because, as Reshef says, “online, everyone gets a front row seat.”

Photo by Ryan Lash/TED.

Pavan Sukhdev says: While the backbone of our global economy is the corporation, we’ve evolved corporate systems that ruthlessly drain public benefits for private gain.  Photo by Ryan Lash/TED.

A new company for a new economy. “The last two and a half decades have seen scientists, economists and politicians say again and again, and more and more often, that we need to change economic direction: we need a green economy, a circular economy. Despite all that agreement, we are still hurtling towards planetary boundaries.” To understand why, we need to ask an important question: can the corporations of today deliver the economy of tomorrow? According to environmental economist Pavan Sukhdev, the answer is no. That’s because today’s business as usual creates huge public costs to generate private profits — “this is the biggest free lunch in the history of mankind.” The good news? There are micro-solutions, and if we follow them, we can evolve a new type of corporation whose goals are aligned with society rather than achieved at its expense.

Who is making the decisions that increasingly govern our lives — what we see and then think, what we think and then do? The question isn’t who; it’s what. And the answer is the increasingly powerful algorithms employed by entities from Facebook to human resources departments to prison sentencing boards. It’s a problem that troubles sociologist Zeynep Tufekci, who explains that the complex way algorithms grow and improve — through a semi-autonomous form of computing called machine learning, which evolved from pattern recognition and prediction software — makes them hard to see through and hard to steer effectively. “What safeguards do you have that your black box isn’t doing something shady?” wonders Tufekci. Making things worse, companies are very protective of their secret recipes for algorithms, so it’s almost impossible to gauge how objective they really are. Given that they’re only as unbiased as the data they are fed, that doesn’t sound like a recipe for fairness.

Photo by Bret Hartman/TED.

As AIs learn to learn, there’s a point where, says Sam Harris, they might outstrip our own intelligence. Photo by Bret Hartman/TED.

Scared of AI? You should be. Regardless of whether or not you’re afraid of artificial intelligence, Sam Harris wants you to be more afraid. He believes that we are culturally “unable to marshal an appropriate emotional response to the dangers that lie ahead.” Although it may seem alarming, Harris is not imagining a dystopian Terminator future straight out of science fiction. Rather, his fear is based on three rational assumptions: 1. Intelligence is a matter of information processing in physical systems; 2. We will continue to improve our intelligent machines; and 3. We as humans do not rank anywhere close to the possible apex of intelligent life. The eventual existence of a hyper-intelligent machine is undeniable, and when our goals and the machine’s inevitably differ, these superior machines will waste no time disposing of anything standing between them and their objective. Given the immense havoc these innovations are capable of wreaking, Harris urges that the time to begin tackling the ethics of AI is now, however far away it may seem. We only have one shot at getting the initial conditions right, and we had better make sure they’re conditions we can live with.

Humility in the face of fear. In a vulnerable, striking and meditative move, author Anand Giridharadas read “A letter to the other half” to the TEDSummit audience. Penned just days before the conference, it reflected Giridharadas’ regret over ignoring the legitimate struggles and instability of a people enraged over a changing globalized world — echoing events such as Brexit and the rise of Donald Trump.

Unsubscribe. Comedian James Veitch wrapped up Session 5, turning his frustrations into whimsy and amusement after his local supermarket refused to take him off its email list, despite numerous attempts on his end. The hijinks that ensue make for an entertaining and priceless venture into the world of online customer care.

CryptogramGood Article on Airport Security

The New York Times wrote a good piece comparing airport security around the world, and pointing out that moving the security perimeter doesn't make any difference if the attack can occur just outside the perimeter. Mark Stewart has the good quote:

"Perhaps the most cost-effective measure is policing and intelligence -- to stop them before they reach the target," Mr. Stewart said.

Sounds like something I would say.

Worse Than FailureCodeSOD: Classic WTF: RegExp from Down Under

This particularly bad example of regular expressions and client side validation was originally published in 2009. I thought Australia was supposed to be upside down, not bass ackwards. - Remy

"The company I work for sells vacation packages for Australia," writes Nathan, "and for whatever reason, they're marketed under two different brands — and — depending on whether you live Down Under or somewhere else in the world."

Nathan continues, "one of the requirements for the international website ( is to disallow people within Australia and New Zealand from making bookings. But the way this is done from the front end... well, it's a real gem."

  //Checks to see if Australia is typed into the other country box
  function checkContactCountry(inputBox)
  {
    var validator = new RegExp(/^(A|a)(U|u)(S|s)(T|t)(R|r)(A|a)(L|l)(I|i)(A|a)$|(N|n)(E|e)(W|w) (Z|z)(E|e)(A|a)(L|l)(A|a)(N|n)(D|d)$/);

    if (validator.test(inputBox.value))
    {
         alert("Your Residential Address must be outside Australia. "
             + "Enter your residential address outside this country, "
             + "or visit to make a booking if "
             + "you live in Australia.");
    }
  }

Planet Linux Australiasthbrx - a POWER technical blog: Where to Get a POWER8 Development VM

POWER8 sounds great, but where the heck can I get a Power VM so I can test my code?

This is a common question we get at OzLabs from other open source developers looking to port their software to the Power Architecture. Unfortunately, most developers don't have one of our amazing servers just sitting around under their desk.

Thankfully, there are a few IBM partners who offer free VMs for development use. If you're in need of a development VM, check out:

So, next time you wonder how you can test your project on POWER8, request a VM and get to it!

Planet Linux AustraliaChris Neugebauer: 2017 wants your talks!


You might have noticed earlier this week that 2017, which is happening in Hobart, Tasmania (and indeed, which I’m running!), has opened its call for proposals.

Hobart’s a wonderful place to visit in January – within a couple of hours’ drive, there’s wonderful undisturbed wilderness to go bushwalking in, historic sites from Tasmania’s colonial past, and countless wineries, distilleries, and other producers. Not to mention, the MONA Festival of Music and Arts will probably be taking place around the time of the conference. Coupled with temperate weather and longer daylight hours than anywhere else in Australia, there’s plenty of time to make the most of your visit. is – despite the name – one of the world’s best generalist Free and Open Source Software conferences. It’s been running annually since 1999, and this year, we’re inviting people to talk about the Future of Open Source.

That’s a really big topic area, so here’s how our CFP announcement breaks it down:

THE FUTURE OF YOUR PROJECT is well-known for deeply technical talks, and lca2017 will be no exception. Our attendees want to be the first to know about new and upcoming developments in the tools they already use every day, and they want to know about new open source technology that they’ll be using daily in two years time.

Many of the techniques that have made Open Source so successful in the software and hardware world are now being applied to fields as disparate as science, data, government, and the law. We want to know how Open Thinking will help to shape your field in the future, and more importantly, we want to know how the rest of the world can help shape the future of Open Source.

It’s easy to think that Open Source has won, but for every success we achieve, a new challenge pops up. Are we missing opportunities in desktop and mobile computing? Why is the world suddenly running away from open and federated communications? Why don’t the new generation of developers care about licensing? Let’s talk about how Software Freedom and Open Source can better meet the needs of our users and developers for years to come.

It’s hard for us to predict the future, but we know that you should be a part of it. If you think you have something to say about Free and Open Source Software, then we want to hear from you, even if it doesn’t fit any of the categories above.

My friend, and former director, Donna Benjamin blogged about the CFP on medium and tweeted the following yesterday:

At @linuxconfau in Hobart, I’d like to hear how people are USING free & open source software, and what they do to help tend the commons.

Our CFP closes on Friday 5 August – and we’re not planning on extending that deadline – so put your thinking caps on. If you have an idea for the conference, feel free to e-mail me for advice, or you can always ask for help on IRC – we’re in on freenode – or you can find us on Facebook or Twitter.

What does the future of Open Source look like? Tell us by submitting a talk, tutorial, or miniconf proposal now! We can’t wait to hear what you have to say.


Planet Linux AustraliaColin Charles: Speaking in July 2016

  • Texas LinuxFest – July 8-9 2016 – Austin, Texas – I’ve never spoken at this event before but have heard great things about it. I’ve got a morning talk about what’s in MariaDB Server 10.1, and what’s coming in 10.2.
  • db tech showcase – July 13-15 2016 – Tokyo, Japan – I’ve regularly spoken at this event and it’s a case of a 100% pure database conference, with a very captive audience. I’ll be talking about the lessons one can learn from other people’s database failures (this is the kind of talk that keeps changing and getting better as the software improves).
  • The MariaDB Tokyo Meetup – July 21 2016 – Tokyo, Japan – Not the traditional meetup timing, since it runs 1.30pm-7pm; there will be many talks and it’s organised by the folk behind the SPIDER storage engine. It should be fun to see many people, and food is being provided too. In Japanese: MariaDB コミュニティイベント in Tokyo, MariaDB Community Event in TOKYO.

LongNowCraters & Mudrock: Tools for Imagining Distant Future Finlands

Lake Lappajärvi (Photo Credit: Hannu Oksa)
About 73 million years ago, a meteorite crashed into what is now Finland’s Southern Ostrobothnia region. Today, serene Lake Lappajärvi rests in the twenty-three-kilometer-wide crater left in that ancient blast’s wake. Locals still enjoy boating to Lappajärvi’s Kärnänsaari: an island formed by the melt-rock of the Cretaceous meteorite collision. Paddling there is an encounter with the deep history of Finland’s landscape.

Lappajärvi has caught the attention of safety case experts working on radioactive waste management company Posiva Oy’s underground dump for used-up nuclear fuel at Olkiluoto, Western Finland. These experts are tasked with predicting how Posiva’s repository will interact with the region’s rocks, groundwater, ecosystems, and populations throughout nuclear waste’s multi-millennial time spans of dangerous radioactivity. From 02012 to 02014, I spent thirty-two months in Finland conducting anthropological research on how safety case experts see the world, how they relate to one another, and how they reckon with various spans of time in their professional lives.

When I returned to my home institution Cornell University in August 02014, I wrote a three-article series for NPR’s Cosmos & Culture blog. In it I described how safety case experts envisioned Finnish landscapes changing over the next ten thousand years. I explained how they study a present-day ice sheet in Greenland and a uranium deposit in Southern Finland as analogues to help them think about Finland’s far future ice sheets and nuclear waste deposits. I suggested that, in this moment of global environmental uncertainty some call the Anthropocene, it becomes a pressing societal task to embrace long-termist “deep time thinking.”

I continue this line of thought here by exploring how safety case experts study prehistoric places – like Lappajärvi crater-lake – to forecast how Finland will change one million years hence. I present these prehistoric places as tools for imagining distant future worlds. I advocate that societies at large use these tools to do intellectual exercises, imagination workouts, or thought experiments to cultivate their own deep time thinking skills. Doing so is crucial on a damaged planet wracked by environmental crisis.

Safety case experts make mathematical models of how the Olkiluoto repository might endure or fall apart in the extreme long term. They assess the nuclear waste dump’s physical strengths. This is the crux of their work. However, they also develop more qualitative, speculative, quirky approaches in their Complementary Considerations report. A hodgepodge of scientific evidence and PR tools aimed at persuading various audiences of the facility’s safety, this report plays a supporting role in their broader safety argument. And it contains a fascinating thought experiment: a section called “The Evolution of the Repository System Beyond A Million Years in the Future” (pp. 197-200).

Finland’s nuclear waste repository at Olkiluoto (Photo Credit: Posiva Oy)
Complementary Considerations explains how Lappajärvi crater-lake kept its form throughout numerous past Ice Age glaciation and post-Ice Age de-glaciation periods. It tells a story of “fairly stable conditions and slow surface processes” over millions of years. In light of this, safety case experts expect only limited erosion and landmass movement throughout the repository’s multimillion-year futures. Lappajärvi’s deep histories are, in this way, taken as windows into Olkiluoto’s deep futures. From this angle, safety case experts argue that Posiva’s repository can, like Lappajärvi’s crater, withstand the waxing and waning of future Ice Ages’ ice sheets advancing and retreating.

Safety case experts also use prehistoric Littleham mudstone in Devon, England as a tool for forecasting Finland’s far futures. In Devon one can find copper that has survived over 170 million years without corroding away. The copper was long encased in the sedimentary rock. Complementary Considerations predicts a similar fate for the huge copper canisters Posiva will use to secure Finland’s nuclear waste. It also suggests that – because Littleham mudstone is more abrasive to copper than the bentonite clay that will surround Posiva’s canisters – the canister copper might see even rosier futures.

Safety case experts see the distant pasts of mudstone and copper in England as tools for envisioning the distant futures of bentonite and canisters in Finland. They see the distant pasts of a Southern Ostrobothnian crater-lake as tools for envisioning the distant futures of an Olkiluoto repository’s local geology. Deep time forecasts are, in this way, made through techniques of analogy. Visions of far future worlds emerge from analogies across time (extrapolating from long pasts to reckon long futures) and analogies across space (extrapolating across distant locales sometimes thousands of miles apart).

Yet, as safety case experts and their critics both cautioned me, one should not take these deep time analogies too seriously. There are, of course, limits to what, say, native copper in mudrock in Devon can really tell us about manufactured copper pieces in clay in Olkiluoto. Differences between repository conditions and these prehistoric places are, for many, simply too vast to make reasonable analogies between them.

But I am only half-interested in whether these techniques ought to persuade us of Posiva’s repository’s safety. I let the engineers, geologists, chemists, metallurgists, ecosystems modelers, and regulatory authorities sort that out. Instead, I find a unique intellectual opportunity in them. I wonder: can safety case experts’ techniques be retooled to help populations reposition their everyday lives within broader horizons of time? Can farsighted organizations like The Long Now Foundation help inspire general long-term thinking?

One does not have to be a Nordic nuclear waste expert to benefit from the deep time toolkits I present here. An educated public, too, can reflect on how analogical reasoning can stretch one’s imaginative horizons further forward and backward across time. For example, many drive through rural regions where stratigraphic rock layers are visible on highways carved into rocky hills. When doing so, why not visualize what the surrounding landscape might have looked like in each of the past times the rock faces’ layers respectively represent? Are the images that come to mind drawn from forest, mountain, desert, or snowy environments out there in the world today? What analogical resources did your mind tap to imagine distant past worlds? What might these landscapes’ far futures look like if they were to have, say, Sahara-like conditions? What about Amazonian rainforest-like conditions?

The tunnel into Posiva’s underground research facility ONKALO (Photo Credit: Posiva Oy)
Straining to imagine present-day landscapes in such radically different states – in ways inspired by encounters with the deep time of Earth’s everyday environments – can be an intellectual calisthenics strengthening one’s long-termist intuitions. It can serve as an imaginative mental workout for prepping one’s mind for better adopting the farsightedness necessary to think more clearly about today’s climate change, biodiversity, Anthropocene, sustainability, or human extinction challenges.

Scenes in which radically long time horizons enter practical planning, policy, or regulatory projects – with Finland’s nuclear waste repository safety case work as but one example – can be sources of tools, techniques, and inspiration for thinking more creatively across wider time spans. And groups that advocate long-termism like The Long Now Foundation have a key role to play in disseminating these tools, techniques, and inspirations publicly in this moment of planetary uncertainty.

Vincent Ialenti is a National Science Foundation Graduate Research Fellow and a PhD Candidate in Cornell University’s Department of Anthropology. He holds an MSc in “Law, Anthropology & Society” from the London School of Economics.

TEDHow Syria’s buildings laid the foundation for brutal war: Marwa Al-Sabouni at TEDSummit

Recorded over Skype, young architect Marwa Al-Sabouni talks about life right now in Homs, Syria -- and suggests that the built environment played a role in the country's deadly conflict. Photo: Ryan Lash

“E pluribus unum” worked in Syria once too.

The merciless six-year civil war in Syria has destroyed cities, killed hundreds of thousands of people and displaced millions more. The Syria of a decade ago is but a memory. The causes have been detailed exhaustively — social, economic, religious, geopolitical. But one woman, an architect who was born, grew up and still lives today in the central Syrian town of Homs, believes that one culprit has so far gone unnamed and unblamed — architecture. “It has played a role in creating, directing and amplifying conflict between warring factions,” she says bluntly.

But does architecture have that much power? Can it exert such an influence? Marwa Al-Sabouni, who ran a small architecture studio with her husband in the old city center of Homs for several years until the war destroyed most of the historic area, believes that it does and it can — and her contention is the crux of her memoir about life during wartime, “The Battle For Home.” She has stayed in Homs for six years watching the war tear her city apart, and believes that architecture and a century of thoughtless urban planning played a crucial role in the slow unraveling of Syrian cities’ social fabric, preparing the way for once-friendly, now-fragmented groups to become enemies instead of neighbors.

“The harmony of the social environment got trampled over by elements of modernity,” says Al-Sabouni. “The brutal, unfinished concrete blocks and the divisive urbanism that zoned communities by class, creed or affluence.”

Being a virtual prisoner in her home for two years after the war started, she says, gave her only too much time to think about the incredible transformation of the city she grew up in. “This has been historically a tolerant city, accustomed to variety, accommodating a wide range of beliefs, origins and customs, where mosques and churches were built back to back. What has led to this senseless war? How did my country degenerate into civil war, violence, displacement and unprecedented sectarian hatred?” So she began writing, mapping out how 20th-century urban planning took a united society of different threads and slowly rewove them into a cityscape of difference and division.

“It started with French colonial city planners, blowing up streets and relocating monuments,” she says. Then, she says, modern buildings started going up with little or no thought, design or planning, fracturing delicate communities further: “Architecture became a way of differentiation.” By the end of the 20th century, all that remained in Homs was a city center and, around it, a ring of ghettoized communities, each housing its own ethnic or religious group, and each enemies of the others.

Al-Sabouni does have hope for the future, she says — partly because she has a wildly optimistic husband, and partly because she feels there is now both room and reason to learn from the past and rebuild it better. That means not building giant tower blocks which isolate and alienate people — it means lower, mixed use buildings that can accommodate all kinds of people, races, ages, beliefs and more. When a rope breaks, the strongest way to mend it is to weave all the ends together. That is what Al-Sabouni wants — and what Homs, Syria and the whole world need.

Planet Linux AustraliaTim Serong: Thunderbird Uses OpenGL – Who Knew?

I have a laptop and a desktop system (as well as a bunch of other crap, but let’s ignore that for a moment). Both laptop and desktop are running openSUSE Tumbleweed. I’m usually in front of my desktop, with dual screens, a nice keyboard and trackball, and the laptop is sitting with the lid closed tucked away under the desk. Importantly, the laptop is where my mail client lives. When I’m at my desk, I ssh from desktop to laptop with X forwarding turned on, then fire up Thunderbird, and it appears on my desktop screen. When I go travelling, I take the laptop with me, and I’ve still got my same email client, same settings, same local folders. Easy. Those of you considering heckling me for not using $any_other_mail_client and/or $any_other_environment, please save it for later.

Yesterday I had an odd problem. A new desktop system arrived, so I installed Tumbleweed, eventually ssh’d to my laptop, started Thunderbird, and…

# thunderbird

…nothing happened. There’s usually a little bit of junk on the console at that point, and the Thunderbird window should have appeared on my desktop screen. But it didn’t. strace showed it stuck in a loop, waiting for something:

wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)
wait4(22167, 0x7ffdfc669be4, 0, NULL)   = ? ERESTARTSYS (To be restarted if SA_RESTART is set)
--- SIGVTALRM {si_signo=SIGVTALRM, si_code=SI_TKILL, si_pid=22164, si_uid=1000} ---
rt_sigreturn({mask=[]})                 = -1 EINTR (Interrupted system call)

After an assortment of random dead ends (ancient and useless bug reports about Thunderbird and Firefox failing to run over remote X sessions), I figured I may as well attach a debugger to see if I could get any more information:

# gdb -p 22167
GNU gdb (GDB; openSUSE Tumbleweed) 7.11
Attaching to process 22167
Reading symbols from /usr/lib64/thunderbird/thunderbird-bin...
0x00007f2e95331a1d in poll () from /lib64/
(gdb) break
Breakpoint 1 at 0x7f2e95331a1d
(gdb) bt
#0 0x00007f2e95331a1d in poll () from /lib64/
#1 0x00007f2e8730b410 in ?? () from /usr/lib64/
#2 0x00007f2e8730cecf in ?? () from /usr/lib64/
#3 0x00007f2e8730cfe2 in xcb_wait_for_reply () from /usr/lib64/
#4 0x00007f2e86ecc845 in ?? () from /usr/lib64/
#5 0x00007f2e86ec74b8 in ?? () from /usr/lib64/
#6 0x00007f2e86e9a2a9 in ?? () from /usr/lib64/
#7 0x00007f2e86e9654b in ?? () from /usr/lib64/
#8 0x00007f2e86e966b3 in glXChooseVisual () from /usr/lib64/
#9 0x00007f2e90fa0d6f in glxtest () at /usr/src/debug/thunderbird/mozilla/toolkit/xre/glxtest.cpp:230
#10 0x00007f2e90fa1003 in fire_glxtest_process () at /usr/src/debug/thunderbird/mozilla/toolkit/xre/glxtest.cpp:333
#11 0x00007f2e90f9b4cd in XREMain::XRE_mainInit (this=this@entry=0x7ffdfc66c448, aExitFlag=aExitFlag@entry=0x7ffdfc66c3ef) at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:3134
#12 0x00007f2e90f9ee27 in XREMain::XRE_main (this=this@entry=0x7ffdfc66c448, argc=argc@entry=1, argv=argv@entry=0x7ffdfc66d958, aAppData=aAppData@entry=0x7ffdfc66c648)
at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:4362
#13 0x00007f2e90f9f0f2 in XRE_main (argc=1, argv=0x7ffdfc66d958, aAppData=0x7ffdfc66c648, aFlags=) at /usr/src/debug/thunderbird/mozilla/toolkit/xre/nsAppRunner.cpp:4484
#14 0x00000000004054c8 in do_main (argc=argc@entry=1, argv=argv@entry=0x7ffdfc66d958, xreDirectory=0x7f2e9504a9c0) at /usr/src/debug/thunderbird/mail/app/nsMailApp.cpp:195
#15 0x0000000000404c4a in main (argc=1, argv=0x7ffdfc66d958) at /usr/src/debug/thunderbird/mail/app/nsMailApp.cpp:332
(gdb) continue
[Inferior 1 (process 22167) exited with code 01]

OK, so it’s libGL that’s waiting for something. Why is my mail client trying to do stuff with OpenGL?

Hang on! When I told gdb to continue, suddenly Thunderbird appeared, running properly, on my desktop display. WTF?

As far as I can tell, the problem is that my new desktop system has an NVIDIA GPU (nouveau drivers, BTW), and my laptop and previous desktop system both have Intel GPUs. Something about ssh’ing from the desktop with the NVIDIA GPU to the laptop with the Intel GPU, causes Thunderbird (and, indeed, any GL app — I also tried glxinfo and glxgears) to just wedge up completely. Whereas if I do the reverse (ssh from Intel GPU laptop to NVIDIA GPU desktop) and run GL apps, it works fine.

After some more Googling, I discovered I can make Thunderbird work properly over remote X like this:

LIBGL_ALWAYS_INDIRECT=1 thunderbird

That will apparently cause glXCreateContext to return BadValue, which is enough to kick Thunderbird along. LIBGL_ALWAYS_SOFTWARE=1 works equally well to enable Thunderbird to function, while presumably still allowing it to use OpenGL if it really needs to for something (proof: LIBGL_ALWAYS_INDIRECT=1 glxgears fails, LIBGL_ALWAYS_SOFTWARE=1 glxgears gives me spinning gears).

I checked Firefox too, and it of course has the same remote X problem, and the same solution.

Cryptogram"Dogs Raise Fireworks Threat Level to 'Gray'"


The Department of Canine Security urges dogs to remain on high alert and employ the tactic of See Something, Say Something. Remember to bark upon spotting anything suspicious; e.g. firecrackers, sparklers, Roman candles, cats, squirrels, mail carriers, shadows, reflections, other dogs on TV, etc.

Worse Than FailureClassic WTF: Manual Automation

This article originally ran in 2014, and it's the rare case of a happy ending. They DO exist! -- Remy

Aikh was the new hire on the local bank’s data warehousing/business intelligence team. His manager threw him right into the hurricane: a project with the neediest, whiniest and most demanding business unit. Said business hated their unreliable batch process for archiving reports, and the manual slog of connect > find/create directory > upload > pray. They hoped the DW team would code to the rescue.

Eager to impress, Aikh sketched out a simple, automated client/server solution. The business quickly approved his design and estimates. To mentor and keep the project on-track, Aikh’s manager assigned Dean, a more senior developer, to help out.
John Henry-27527
What do you mean, “steam powered hammer?”

“I could really use a good library to transfer files via secure shell,” Aikh told Dean during their initial meeting.

Dean leaned back in his chair with confident disinterest. “I know a good open-source package. I’ll build you a wrapper.”

A month passed. Aikh hammered out the UI and daemon, and now needed to write the code for file transfer. However, he’d never received anything from Dean. Aikh hadn’t wanted to nag- surely Dean had several important projects on his plate- but found himself stymied. He visited the senior dev’s cube to inquire about the library and wrapper.

“Oh, right,” Dean said, never pausing from his typing. “I’ll email the package in couple of days.”

Aikh received the open-source library as promised… and an executable file that simply displayed an empty command prompt. He was back at Dean’s cube in short order. “What’s this?”

Dean narrowed his eyes, not sure he was dealing with a sentient creature. “It’s waiting for a command. See?” He demoed an execution, typing rapidly and without explanation.

“This, uh, isn’t what I’m looking for,” Aikh said. “I need something to integrate with my Java application.”

“Execute this with the Runtime class, then pass in commands.” Dean had already tabbed back to Facebook.

Aikh tamped down his aggravation. “Sorry, but, can you please just write a wrapper and jar it up for me?”

Several days later, Dean sent a jar file containing the class, no comments. Aikh replied to the email. Any documentation on how to use this?

Another few hours, and Dean replied with a single line:

SFTPWrapper.write( srcDir, tgtDir, user, pass );
SFTPWrapper.read( srcFile, user, pass );
…

After a few moments’ experimentation, Aikh returned to his email client to hit Reply with a vengeance. How will I know whether that call was successful? No errors were reported on invalid parameters!

An entire day passed as Dean composed his riposte. Each method will return a StringBuffer, which contains the response from the command-line.

For? Aikh asked.

Log from the sftp package, Dean replied. Y’know, the code I told you to write.

Aikh gaped at the email chain, having watched this horror show unfold in achingly slow motion. This was just supposed to expose a simple interface to a third party SFTP package. How was it so hard?

He made a more diplomatic lament to his manager. “Dean… isn’t giving me what I need,” he admitted. “We’re coming up on our deadline, and I’m getting worried.”

“I’m not.” His manager’s smile was reassuring. “Go with what Dean’s given you. The business is used to a manual interface anyway.”

“They don’t want a manual interface anymore. That’s the whole point of this project!” Aikh cried. “We’d be delivering something out of spec!”

“It’s what we can deliver on-time and on-budget. They’ll take it.” The manager leaned in and lowered his voice. “Listen, Aikh, we don’t like automation around here. Automation means the businesses have no need for our very lucrative support services. You don’t want to break our budget, do you? Of course not. So you’ll produce software that keeps us… involved. Understood?”

Aikh’s jaw crashed through the floor.

What could the poor junior dev do but report his roadblocks at the next project status meeting? The business was so worried about losing their automated process, they approved the purchase of a fast, supported library for file transfer. Aikh finished the solution in time, much to the business users’ delight.

Aikh’s manager grumbled about the new guy “depriving the department of future support revenue.” Fortunately, he didn’t remain Aikh’s manager for long. When the business decided they needed their own internal IT staff, Aikh was at the top of their list.

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

Planet Linux Australiasthbrx - a POWER technical blog: Optical Action at a Distance

Generally when someone wants to install a Linux distro they start with an ISO file. Now we could burn that to a DVD, walk into the server room, and put it in our machine, but that's a pain. Instead let's look at how to do this over the network with Petitboot!

At the moment Petitboot won't be able to handle an ISO file unless it's mounted in an expected place (eg. as a mounted DVD), so we need to unpack it somewhere. Choose somewhere to host the result and unpack the ISO via whatever method you prefer. (For example bsdtar -xf /path/to/image.iso).

You'll get a bunch of files but for our purposes we only care about a few; the kernel, the initrd, and the bootloader configuration file. Using the Ubuntu 16.04 ppc64el ISO as an example, these are:

./install/vmlinux
./install/initrd.gz
./boot/grub/grub.cfg

In grub.cfg we can see that the boot arguments are actually quite simple:

set timeout=-1

menuentry "Install" {
    linux   /install/vmlinux tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet
    initrd  /install/initrd.gz
}

menuentry "Rescue mode" {
    linux   /install/vmlinux rescue/enable=true --- quiet
    initrd  /install/initrd.gz
}

So all we need to do is create a PXE config file that points Petitboot towards the correct files.

We're going to create a PXE config file which you could serve from your DHCP server, but that does not mean we need to use PXE - if you just want a quick install you only need make these files accessible to Petitboot, and then we can use the 'Retrieve config from URL' option to download the files.

Create a petitboot.conf file somewhere accessible that contains (for Ubuntu):

label Install Ubuntu 16.04 Xenial Xerus
    kernel http://myaccesibleserver/path/to/vmlinux
    initrd http://myaccesibleserver/path/to/initrd.gz
    append tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet

Then in Petitboot, select 'Retrieve config from URL' and enter http://myaccesibleserver/path/to/petitboot.conf. In the main menu your new option should appear - select it and away you go!
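Pulling the hosting side together, a minimal sketch of laying out the files for Petitboot to fetch might look like the following. Everything here is illustrative: the directory, ISO filename, and port are placeholders, and "myaccesibleserver" is just the same example hostname used above.

```shell
# Example layout only - substitute your own paths, ISO, and hostname.
DOCROOT=./netboot
mkdir -p "$DOCROOT"

# With a real ISO you would unpack it into the document root:
#   bsdtar -xf ubuntu-16.04-server-ppc64el.iso -C "$DOCROOT"

# Write the PXE config from above alongside the unpacked files:
cat > "$DOCROOT/petitboot.conf" <<'EOF'
label Install Ubuntu 16.04 Xenial Xerus
    kernel http://myaccesibleserver/install/vmlinux
    initrd http://myaccesibleserver/install/initrd.gz
    append tasks=standard pkgsel/language-pack-patterns= pkgsel/install-language-support=false --- quiet
EOF

# Any static file server will do, e.g. (runs until interrupted):
#   cd "$DOCROOT" && python3 -m http.server 80
ls "$DOCROOT"
```

Then point 'Retrieve config from URL' at wherever petitboot.conf ends up being served from.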


Worse Than FailureIndependence Day

Today is the 4th of July, which is a holiday with historical significance in the US. Twenty years ago, Jeff Goldblum and the Fresh Prince defeated an alien invasion using a PowerBook and a hastily written computer virus. It’s such a big holiday, they’ve just released a mediocre and forgettable film about it.

This scene has spawned many a flamewar. Anyone with a vague idea of how computers work may note that hardware architectures are complicated, and even with access to alien hardware and software, designing a virus capable of disabling all of the alien spacecraft in one fell swoop strains credulity. Some people point to a deleted scene which explains that computers are based on alien technology captured in Roswell, and thus, our computers are already compatible. Others mutter something about, “It’s just a movie, what the hell is wrong with you?” while rolling their eyes.

Here at TDWTF, we know that no competently run IT organization is going to let its entire shielding system across an entire battlefleet be vulnerable to a single virus delivered to a single node on the network. We know the real story must be quite the WTF.

Lisa graduated from the Aldebaran Institute of Technology in 1996, expecting the “rising tide” of the late 90s tech boom to carry her to wealth and riches. She went to a college job fair shortly before graduating, handed out some resumes, and tried to resist senioritis long enough to make it to the end of the semester.

An alien from ID4:ResurgenceThis is Lisa

A week later, she got a comm from a recruiter. “Hey, Lisa, I just saw your resume, and have I got an opportunity for you! An established invasion fleet with a proven track record of subjugating alien planets needs some junior engineers to provide tier–1 technical support. This is a great entry-level job, with 100% travel, which is such an amazing opportunity for a young Sectoid such as yourself- you really get to experience the whole galaxy. Now, the salary might not look like much, but you’ll also receive equity in the invasion, and you are absolutely going to make out extremely well- they’ve identified a planetary sector that’s completely unexploited.”

Lisa was young, inexperienced, and the recruiter was very good at his job. She went in for an interview, chatted with Al (the head of IT), met a few of the other techs, and even got to meet one of the fighter pilots, who cut quite the dashing figure. Star struck and seduced by the promises of fantastic wealth (once they handle that minor, piddling problem of conquering the Earth and blowing up a few easily recognizable landmarks), Lisa signed on and boarded the mothership just a few days after graduation.

Wil Smith punches an alien and says, 'welcome to earf'Spoilers: that dashing pilot doesn’t look as dashing by the end of the movie

On her first day, Lisa was invited into Al’s office for some orientation. The office was little more than a closet, just off the main hangar bay. It was made even more cramped by Al’s insistence on covering the walls with the various certifications he’d earned in his career- A+, Net+, and in the fanciest frame, MCSE.

“Now, I know you’re a college-educated wunderkind,” Al said, “but I got here through old-fashioned knowhow. The first and most important thing you need to understand is that we deliver IT services, and we’re not happy unless our users are happy.”

A few days into the voyage to Earth, one of their users wasn’t happy- the Hangar Operations Officer was having issues with spacemail. Lisa went to his workstation to try and help.

“My broodmate sent me pictures of our newly hatched clutch, but Outlook won’t let me open the attachment!”

It was instantly obvious to Lisa what was going on, since the file was “”. “This is almost certainly not pictures of your clutch, but is probably a virus.”

“That’s absurd,” the hangar operations officer said, his tentacles waving angrily. “My mate wouldn’t send me a virus!”

“Well, it might not have come from your mate,” Lisa explained. “See, spacemail lets you claim the email comes from any-”

“Look, are you going to let me get these photos or not?”

“I can’t,” Lisa said. “They’re not photos.”

“We’ll see about that!” the officer said. He commed Al directly. “I want you to know that your new tech is refusing to let me see my pictures.”

“They’re quarantined as a virus,” Lisa said.

“Oh, well,” Al said, “we can fix that. Let me just disable the quarantine.”

What?” Lisa cried.

“Remember,” Al warned her over the comm, “we’re not happy unless our users are happy.”

Cringing, Lisa watched the hangar operations officer open the virus. Fortunately, or perhaps unfortunately, it did open a window with a picture in it- a lewd picture of a Muton’s posterior- and flashed a message that “you have been pranked!”. For a finale, it inverted the mouse pointer.

“I told you,” Lisa said, “that probably wasn’t from your mate. You’re just lucky it was a piece of joke software and not a dangerous virus.” A quick reboot set the mouse back to normal, and Lisa made sure the dangerous email was deleted before she handed the mouse back to the Ops Officer. “Please don’t open strange attachments in the future,” she warned.

The next few weeks were mostly routine support, until that dashing pilot- Lieutenant Bradford- submitted a ticket about his fighter craft. It was stuck in a reboot loop- the main computer would turn on, print out an error message, and then reboot. Obviously, this needed to be fixed before the invasion started. Lisa fired up Gopher to try and find out what was going on.

As it turned out, this was a bug in the v8.0.2 firmware running on the entire fleet of fighters. When the system clock’s battery started running low and the clock started to drift, the firmware had a bug that would trap it in this reboot cycle. This particular bug had been fixed in v8.0.5, which was released six years prior. The manufacturer had actually cut support for the entire v8.x.x series and was up to v11.x.x.

You could fix it by replacing the battery and resetting the BIOS, which Lisa did, but she approached Al about these dangerously out of date software versions. “There’s been a LOT of bugfixes that our ships don’t have.”

Al shook his head and laughed at Lisa. “See, you don’t get it. These software vendors, they just want to sell you new things. Trust me, the last time we tried to do an upgrade to the latest patches, they sent a tech onsite who kept trying to get us to buy new versions of all of their software. It’s a scam, Lisa, just a scam. Our users are happy, so why should we spend money with the vendor when we can just keep using firmware that works perfectly fine?”

Two days before they arrived at Earth, a new ticket came in, this time from the invasion fleet’s Supreme Commander. It was a bit of a cluttered mess of a ticket, in that it didn’t represent one single issue, but instead the Supreme Commander wanted to vent about all of the problems she had with IT. Lisa interpreted the ticket as a series of bullet points:

  • The Supreme Commander’s desktop made too much noise (Lisa diagnosed this as a sign that there was too much dust clogging the fans, and fixed it with some canned atmosphere)
  • The network was slow
  • The Supreme Commander’s computer was slow (Lisa diagnosed this as an overfilled hard drive and the Supreme Commander running an Active Desktop)
  • The network was slow
  • Assault Ship ZX–80 had shared a folder with the Supreme Commander- but the Supreme Commander couldn’t access the shared folder

A slow network was difficult to diagnose, but an inability to access a shared folder was easier to explain: the mothership’s firewall blocked that port. Unfortunately, the firewall software wasn’t one she’d ever seen before, and the configuration Al had built for it was pretty much an incomprehensible mess of exceptions and whitelists and blacklists and more exceptions. Lisa needed to get Al to fix it.

“Oh, a slow network, eh,” Al said.

“Well, I’m less worried about that, and more worried about the shared folder…”

“Enh,” Al said, waving a tentacle dismissively, “we can probably fix both at once.” He turned off the firewall. “I mean,” he explained, “this is just a barrier between the ships in our invasion fleet. It doesn’t really make sense to put security software between the ships that we control, right? Right.”

Things got really busy during the invasion. There was a lot of coordination that needed to happen. Several squadrons of fighters- including Lt. Bradford’s- got transferred to the Assault Ships. Lisa barely had time to notice. As it turned out, no one had run a test on the landmark-destroying superlasers since the last invasion, and Al- in a fit of cost-saving- had installed 15 amp breakers in the power supply, which were entirely insufficient to the task. Lisa had to walk the Assault Ship techs through the process of identifying which circuit had the necessary 40 amp breaker on it, and then how to find the superlaser’s power cable to connect it to the right circuit. That’s if there was a 40 amp breaker available- Lisa had to coordinate an on-site electrician for Assault Ship ZX–80 (which hovered over the White House), and it was a near thing to get the circuit re-wired in time to fire as part of the coordinated attack.

After a few days of eighteen hour shifts, Lisa finally got a bit of a break. All the easily recognizable landmarks had been blown up, and the Supreme Commander was confident that the humans would surrender any second. And that’s when she noticed a new fighter joining the network. This one was running an ancient version of the firmware- v4.1.2, which was supposed to be removed from service fifty years ago.

Lisa grumbled and tried to identify the asset tag for that fighter craft. By the time she found it, the craft had docked just several meters from her workstation. She could see into the cockpit… and that’s when the two humans inside waved at her…

For the next few days, we'll be running some classic WTFs as we have a small summer break. We'll be back on Friday with a fresh Error'd.

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!