Planet Russell


Cryptogram: OURSA Conference

Responding to the lack of diversity at the RSA Conference, a group of security experts have announced a competing one-day conference: OUR Security Advocates, or OURSA. It's in San Francisco, and it's during RSA, so you can attend both.

Worse Than Failure: Error'd: ICANN't Even...

Jeff W. writes, "You know, I don't think this one will pass."


"Wow! This Dell laptop is pretty sweet!...but I wonder what that other 999999913 GB of data I have can contain..." writes Nicolas A.


"XML is big news at our university!" Gordon S. wrote.


Mark B. wrote, "On Saturday afternoons this British institution lets its hair down and fully embraces the metric system."


"Apparently, my computer and I judge success by very different standards," Michael C. writes.


"I agree you can't disconnect something that doesn't exist, more so when it's named two random Unicode characters," wrote Jurjen.


Planet Debian: Christoph Berg: Cool Unix Features: paste

paste is one of those tools nobody uses [1]. It puts two files side by side, line by line.
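
A quick demonstration with two small files:

$ printf '1\n2\n3\n' > a.txt
$ printf 'one\ntwo\nthree\n' > b.txt
$ paste a.txt b.txt
1	one
2	two
3	three

The columns are joined with a tab by default; -d sets a different delimiter.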

One application for this came up today, where some tool was called for several files at once and would spit out one line per file, unfortunately without including the filename.

$ paste <(ls *.rpm) <(ls *.rpm | xargs -r rpm -q --queryformat '%{name} \n' -p)

[1] See "J" in The ABCs of Unix

[PS: I meant to blog this in 2011, but apparently never committed the file...]

Planet Debian: Christoph Berg: Stepping down as DAM

After quite some time (years actually) of inactivity as Debian Account Manager, I finally decided to give back that Debian hat. I'm stepping down as DAM. I will still be around for the occasional comment from the peanut gallery, or to provide input if anyone actually cares to ask me about the old times.

Thanks for the fish!

Planet Debian: Iustin Pop: Corydalis 0.3.0 release

Short notice: I've just released 0.3.0, with a large number of new features and improvements - see the changelog for details.

Without aiming for it, this release follows almost exactly a month after v0.2, so maybe a monthly release cycle while I still have lots of things to add (and some time to actually do it) would be an interesting goal.

One potentially interesting thing: since v0.2, I've added a demo site using a few photos from my own collection, so if you're curious what this actually is, check that out.

Planet Debian: Steve Kemp: A change of direction ..

In my previous post I talked about how our child-care works here in wintery Finland, and suggested there might be a change in the near future.

So here is the predictable update; I've resigned from my job and I'm going to be taking over childcare/daycare. Ideally this will last indefinitely, but it is definitely going to continue until November. (Which is the earliest any child could be moved into public day-care if there are problems.)

I've loved my job, twice, but even though it makes me happy (in a way that several other positions didn't) there is no comparison. Child-care makes me happier-still. Sure there are days when your child just wants to scream, refuse to eat, and nothing works. But on average everything is awesome.

It's a hard decision, a "brave" decision too apparently (which I read negatively!), but also an easy one to make.

It'll be hard. I'll have no free time from 7AM-5PM, except during nap-time (11AM-1PM, give or take). But it will be worth it.

And who knows, maybe I'll even get to rant at people who ask "Where's his mother?" I live for those moments. Truly.

Planet Linux Australia: OpenSTEM: Amelia Earhart in the news

Recently Amelia Earhart has been in the news once more, with publication of a paper by an American forensic anthropologist, Richard Jantz. Jantz has done an analysis of the measurements made of bones found in 1940 on the island of Nikumaroro in Kiribati. Unfortunately, the bones no longer survive, but they were analysed in […]

Planet Debian: Joey Hess: prove you are not an Evil corporate person

In which Google be Google and I drop a hot AGPL tip.


Google Is Quietly Providing AI Technology for Drone Strike Targeting Project
Google Is Helping the Pentagon Build AI for Drones

to automate the identification and classification of images taken by drones — cars, buildings, people — providing analysts with increased ability to make informed decisions on the battlefield

These news reports don't mention reCaptcha explicitly, but it's been asking about a lot of cars lately. Whatever the source of the data that Google is using for this, it's disgusting that they're mining it from us without our knowledge or consent.

Google claims that "The technology flags images for human review, and is for non-offensive uses only". So, if a drone operator has a neural network that we all were tricked & coerced into training to identify cars and people helping to highlight them on their screen and center the crosshairs just right, and the neural network is not pressing the kill switch, is it being used for "non-offensive purposes only"?

Google is known to be deathly allergic to the AGPL license. Not only on servers; they don't even allow employees to use AGPL software on workstations. If you write free software, and you'd prefer that Google not use it, a good way to ensure that is to license it under the AGPL.

I normally try to respect the privacy of users of my software, and of personal conversations. But at this point, I feel that Google's behavior has mostly obviated those moral obligations. So...

Now seems like a good time to mention that I have been contacted by multiple people at Google about several of my AGPL licensed projects (git-annex and either keysafe or debug-me I can't remember which) trying to get me to switch them to the GPL, and had long conversations with them about it.

Google has some legal advice that the AGPL source provision triggers much more often than it's commonly understood to. I encouraged them to make that legal reasoning public, so the community could address/debunk it, but I don't think they have. I won't go into details about it here, other than it seemed pretty bonkers.

Mixing in some AGPL code with an otherwise GPL codebase also seems sufficient to trigger Google's allergy. In the case of git-annex, it's possible to build all releases (until next month's) with a flag that prevents linking with any AGPL code, which should mean the resulting binary is GPL licensed, but Google still didn't feel able to use it, since the git-annex source tree includes AGPL files.

I don't know if Google's allergy to the AGPL extends to software used for drone murder applications, but in any case I look forward to preventing Google from using more of my software in the future.

(Illustration by scatter//gather)

Planet Debian: Russ Allbery: My friend Stirge

Eric Sturgeon, one of my oldest and dearest friends, died this week of complications from what I'm fairly certain was non-alcoholic fatty liver disease.

It was not entirely unexpected. He'd been getting progressively worse over the past six months. But at the same time there's no way to expect this sort of hole in my life.

I've known Stirge for twenty-five years, more than half of my life. We were both in college when we first met on Usenet in 1993 in the rec.arts.comics.* hierarchy, where Stirge was the one with the insane pull list and the canonical knowledge of the Marvel Universe. We have been friends ever since: part of on-line fiction groups, IRC channels, and free-form role-playing groups. He's been my friend through school and graduation, through every step of my career, through four generations of console systems, through two moves for me and maybe a dozen for him, through a difficult job change... through my entire adult life.

For more than fifteen years, he's been spending a day or a week or two, several times a year, sitting on my couch and playing video games. Usually he played and I navigated, researching FAQs and walkthroughs. Twitch was immediately obvious to me the moment I discovered it existed; it's the experience I'd had with Stirge for years before that. I don't know what video games are without his thoughts on them.

Stirge rarely was able to put his ideas into stories he could share with other people. He admired other people's art deeply, but wasn't an artist himself. But he loved fictional worlds, loved their depth and complexity and lore, and was deeply and passionately creative. He knew the stories he played and read and watched, and he knew the characters he played, particularly in World of Warcraft and Star Wars: The Old Republic. His characters had depth and emotions, histories, independent viewpoints, and stories that I got to hear. Stirge wrote stories the way that I do: in our heads, shared with a small number of people if anyone, not crafted for external consumption, not polished, not always coherent, but deeply important to our thoughts and our emotions and our lives. He's one of the very few people in the world I was able to share that with, who understood what that was like.

He was the friend who I could not see for six months, a year, and then pick up a conversation with as if we'd seen each other yesterday.

After my dad had a heart attack and emergency surgery to embed a pacemaker while we were on vacation in Oregon, I was worrying about how we would manage to get him back home. Stirge immediately volunteered to drive down from Seattle to drive us. He had a crappy job with no vacation, and if he'd done that he almost certainly would have gotten fired, and I knew with absolute certainty that he would have done it anyway.

I didn't take him up on the offer (probably to his vast relief). When I told him years later how much it meant to me, he didn't feel like it should have counted, since he didn't do anything. But he did. In one of the worst moments of my life, he said exactly the right thing to make me feel like I wasn't alone, that I wasn't bearing the burden of figuring everything out by myself, that I could call on help if I needed it. To this day I start crying every time I think about it. It's one of the best things that anyone has ever done for me.

Stirge confided in me, the last time he visited me, that he didn't think he was the sort of person anyone thought about when he wasn't around. That people might enjoy him well enough when he was there, but that he'd quickly fade from memory, with perhaps a vague wonder about what happened to that guy. But it wasn't true, not for me, not ever. I tried to convince him of that while he was alive, and I'm so very glad that I did.

The last time I talked to him, he explained the Marvel Cinematic Universe to me in detail, and gave me a rundown of the relative strength of every movie, the ones to watch and the ones that weren't as good, and then did the same thing for the DC movies. He got to see Star Wars before he died. He would have loved Black Panther.

There were so many games we never finished, and so many games we never started.

I will miss you, my friend. More than I think you would ever have believed.

Planet Debian: Daniel Pocock: Bug Squashing and Diversity

Over the weekend, I was fortunate enough to visit Tirana again for their first Debian Bug Squashing Party.

Every time I go there, female developers (this is a hotspot of diversity) ask me if they can host the next Mini DebConf for Women. There have already been two of these very successful events, in Barcelona and Bucharest. It is not my decision to make though: anybody can host a MiniDebConf of any kind, anywhere, at any time. I've encouraged the women in Tirana to reach out to some of the previous speakers personally to scope potential dates and contact the DPL directly about funding for necessary expenses like travel.

The confession

If you have read Elena's blog post today, you might have seen my name and picture and assumed that I did a lot of the work. As it is International Women's Day, it seems like an opportune time to admit that isn't true and that as in many of the events in the Balkans, the bulk of the work was done by women. In fact, I only bought my ticket to go there at the last minute.

When I arrived, Izabela Bakollari and Anisa Kuci were already at the venue getting everything ready. They looked busy, so I asked them if they would like a bonus responsibility: presenting some slides about bug squashing that they had never seen before while translating them into Albanian in real-time. They delivered the presentation superbly; it was more entertaining than any TED talk I've ever seen.

The bugs that won't let you sleep

The event was boosted by a large contingent of Kosovans, including 15 more women. They had all pried themselves out of bed at 03:00 am to take the first bus to Tirana. It's rare to see such enthusiasm for bugs amongst developers anywhere but it was no surprise to me: most of them had been at the hackathon for girls in Prizren last year, where many of them encountered free software development processes for the first time, working long hours throughout the weekend in the summer heat.

and a celebrity guest

A major highlight of the event was the presence of Jona Azizaj, a Fedora contributor who is very proactive in supporting all the communities who engage with people in the Balkans, including all the recent Debian events there. Jona is one of the finalists for Red Hat's Women in Open Source Award. Jona was a virtual speaker at DebConf17 last year, helping me demonstrate a call from the Fedora community WebRTC service to the Debian equivalent. At Mini DebConf Prishtina, where fifty percent of talks were delivered by women, I invited Jona on stage and challenged her to contemplate being a speaker at Red Hat Summit. Giving a talk there seemed like little more than a pipe dream just a few months ago in Prishtina: as a finalist for this prestigious award, her odds have shortened dramatically. It is so inspiring that a collaboration between free software communities helps build such fantastic leaders.

With results like this in the Balkans, you may think the diversity problem has been solved there. In reality, while the ratio of female participants may be more natural, they still face problems that are familiar to women anywhere.

One of the greatest highlights of my own visits to the region has been listening to some of the challenges these women have faced, things that I never encountered or even imagined as the stereotypical privileged white male. Yet despite enormous social, cultural and economic differences, while I was sitting in the heat of the summer in Prizren last year, it was not unlike my own time as a student in Australia and the enthusiasm and motivation of these young women discovering new technologies was just as familiar to me as the climate.

Hopefully more people will be able to listen to what they have to say if Jona wins the Red Hat award or if a Mini DebConf for Women goes ahead in the Balkans (subscribe before posting).


Planet Debian: Markus Koschany: My Free Software Activities in February 2018

Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

  • Last month I wrote about „The state of Debian Games“ and I was pleasantly surprised that someone apparently read my post and offered some help with saving endangered games. Well, I don’t know how it will turn out, but at least it is encouraging to see that there are people who still care about some old-fashioned games. As a matter of fact, the GNOME maintainers would like to remove some obsolete GNOME 2 libraries, which makes a few of our games RC-buggy. Ideally they should be ported to GNOME 3, but if they could be replaced with a similar game written in a different and awesome programming language (such as Java or Clojure?), for a different desktop environment, that would do as well. 😉 If you’re bored to death or just want a challenge, contact us.
  • I packaged a new release of mupen64plus-qt to fix a FTBFS bug (#887576).
  • I uploaded a new version of freeciv to stretch-backports.
  • Pygame-sdl2 and renpy got some love too. (new upstream releases)
  • I sponsored a new revision of redeclipse for Martin-Erik Werner to fix #887744.
  • Yangfl introduced ddnet to Debian which is a popular modification/standalone game similar to teeworlds. I reviewed and eventually sponsored a new upstream release for him. If you are into multiplayer games then ddnet is certainly something you should look forward to.
  • I gladly applied another patch by Peter Green to fix #889059 in warzone2100 and Aurelien Jarno’s fix for btanks (#890632).

Debian Java

  • The Eclipse problem: The Eclipse IDE is seriously threatened with removal from Debian. Once upon a time we even had a dedicated team that cared about the package, but nowadays there is nobody. We regularly get requests to update the IDE to the latest version, but there is no one who wants to do the necessary work. The situation is best described in #681726. This alone is worrying enough, but due to an interesting dependency chain (batik -> maven -> guice -> libspring-java -> aspectj -> eclipse-platform) Eclipse cannot be removed without breaking dozens of other Java packages. So, long story short, I started to work on it and packaged a standalone libequinox-osgi-java package, so that we can save at least all reverse-dependencies for this package. Next was tycho, which is required to build newer Eclipse versions. Annoyingly it requires said newer version of Eclipse to build… which means we must bootstrap it. I’m still in the process of upgrading tycho to version 1.0 and hope to make some progress in March.
  • I prepared security updates for jackson-databind, lucene-solr and tomcat-native.
  • New upstream releases: jboss-xnio, commons-parent, jboss-logging, jboss-modules, mongo-java-driver and libspring-java (#890001).
  • Bug fixes and triaging: wagon2 (#881815, #889427), byte-buddy (#884207), commons-io, maven-archiver (#886875), jdeb (#889642), commons-math, jflex (#890345), commons-httpclient (#871142).
  • I introduced jboss-bridger which is a new build-dependency of jboss-modules.
  • I sponsored a freeplane update for Felix Natter.

Debian LTS

This was my twenty-fourth month as a paid contributor and I have been paid to work 23.75 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 05.02.2018 until 11.02.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVEs in binutils, graphicsmagick, wayland, unzip, kde-runtime, libjboss-remoting-java, libvirt, exim4, libspring-java, puppet, audacity, leptonlib, librsvg, suricata, exiv2, polarssl and imagemagick.
  • I tested a security update for exim4 and uploaded a package for Abhijith.
  • DLA-1275-1. Issued a security update for uwsgi fixing 1 CVE.
  • DLA-1276-1. Issued a security update for tomcat-native fixing 1 CVE.
  • DLA-1280-1. Issued a security update for pound fixing 1 CVE.
  • DLA-1281-1. Issued a security update for advancecomp fixing 1 CVE.
  • DLA-1295-1. Issued a security update for drupal7 fixing 4 CVE.
  • DLA-1296-1. Issued a security update for xmltooling fixing 1 CVE.
  • DLA-1301-1. Issued a security update for tomcat7 fixing 2 CVE.


  • I NMUed vdk2 (#885760) to prevent the removal of langdrill.

Thanks for reading and see you next time.

Planet Debian: Steinar H. Gunderson: Nageru 1.7.0 released

I've just released version 1.7.0 of Nageru, my free software video mixer. The poster child feature for this release is the direct integration of CEF, yielding high-performance HTML5 graphics directly in Nageru. This obsoletes the earlier CasparCG integration through playing a video from a socket (although video support is of course still very much present!), which was significantly slower and more flimsy. (Also, when CEF gets around to integrating with clients on the GPU level, you'll see even higher performance, and also stuff like WebGL, which I've turned off for the time being.)

Unfortunately, Debian doesn't carry CEF, and I haven't received any answers to my probes of whether it would be possible to do so—it would certainly involve some coordination with the Chromium maintainers. Thus, it is an optional dependency, and the packages that are coming into unstable are built without CEF support.

As always, the changelog is below, and the documentation has been updated to reflect new features and developments. Happy mixing!

Nageru 1.7.0, March 8th, 2018

  - Support for HTML5 graphics directly in Nageru, through CEF
    (Chromium Embedded Framework). This performs better and is more
    flexible than integrating with CasparCG over a socket. Note that
    CEF is an optional component; see the documentation for more
    information.

  - Add an HTTP endpoint for enumerating channels and one for getting
    only their colors. Intended for remote tally applications;
    see the documentation.

  - Add a video grid display that removes the audio controls and shows
    the video channels only, potentially in multiple rows if that makes
    for a larger viewing area.

  - Themes can now present simple menus in the Nageru UI. See the
    documentation for more information.

  - Various bugfixes.

Cory Doctorow: Classroom materials for Little Brother from Mary Kraus

Mary Kraus — who created a key to page-numbers in the Little Brother audiobook for students with reading disabilities — continues to create great classroom materials for Little Brother: Who’s Who in “Little Brother” is a Quizlet that teaches about the famous people mentioned in the book, from Alan Turing to Rosa Luxemburg; while the Acronym Challenge asks students to unpack acronyms like DHS, NPR, IM, DNS, and ACLU.

TED: Meet the 2018 class of TED Fellows and Senior Fellows

The TED Fellows program is excited to announce the new group of TED2018 Fellows and Senior Fellows.

Representing a wide range of disciplines and countries — including, for the first time in the program, Syria, Thailand and Ukraine — this year’s TED Fellows are rising stars in their fields, each with a bold, original approach to addressing today’s most complex challenges and capturing the truth of our humanity. Members of the new Fellows class include a journalist fighting fake news in her native Ukraine; a Thai landscape architect designing public spaces to protect vulnerable communities from climate change; an American attorney using legal assistance and policy advocacy to bring justice to survivors of campus sexual violence; a regenerative tissue engineer harnessing the body’s immune system to more quickly heal wounds; a multidisciplinary artist probing the legacy of slavery in the US; and many more.

The TED Fellows program supports extraordinary, iconoclastic individuals at work on world-changing projects, providing them with access to the global TED platform and community, as well as new tools and resources to amplify their remarkable vision. The TED Fellows program now includes 453 Fellows who work across 96 countries, forming a powerful, far-reaching network of artists, scientists, doctors, activists, entrepreneurs, inventors, journalists and beyond, each dedicated to making our world better and more equitable. Read more about their visionary work on the TED Fellows blog.

Below, meet the group of Fellows and Senior Fellows who will join us at TED2018, April 10–14, in Vancouver, BC, Canada.

Antionette Carroll
Antionette Carroll (USA)
Social entrepreneur + designer
Designer and founder of Creative Reaction Lab, a nonprofit using design to foster racially equitable communities through education and training programs, community engagement consulting and open-source tools and resources.

Psychiatrist Essam Daod comforts a Syrian refugee as she arrives ashore at the Greek island of Lesvos. His organization Humanity Crew provides psychological aid to refugees and recently displaced populations. (Photo: Laurence Geai)

Essam Daod
Essam Daod (Palestine | Israel)
Mental health specialist
Psychiatrist and co-founder of Humanity Crew, an NGO providing psychological aid and first-response mental health interventions to refugees and displaced populations.

Laura L. Dunn
Laura L. Dunn (USA)
Victims’ rights attorney
Attorney and Founder of SurvJustice, a national nonprofit increasing the prospect of justice for survivors of campus sexual violence through legal assistance, policy advocacy and institutional training.

Rola Hallam
Rola Hallam (Syria | UK)
Humanitarian aid entrepreneur 
Medical doctor and founder of CanDo, a social enterprise and crowdfunding platform that enables local humanitarians to provide healthcare to their own war-devastated communities.

Olga Yurkova
Olga Yurkova (Ukraine)
Journalist + editor
Journalist and co-founder of an independent Ukrainian organization that trains an international cohort of fact-checkers in an effort to curb propaganda and misinformation in the media.

Glaciologist M Jackson studies glaciers like this one — the glacier Svínafellsjökull in southeastern Iceland. The high-water mark visible on the mountainside indicates how thick the glacier once was, before climate change caused its rapid recession. (Photo: M Jackson)

M Jackson
M Jackson (USA)
Geographer + glaciologist
Glaciologist researching the cultural and social impacts of climate change on communities across all eight circumpolar nations, and an advocate for more inclusive practices in the field of glaciology.

Romain Lacombe
Romain Lacombe (France)
Environmental entrepreneur
Founder of Plume Labs, a company dedicated to raising awareness about global air pollution by creating a personal electronic pollution tracker that forecasts air quality levels in real time.

Saran Kaba Jones
Saran Kaba Jones (Liberia | USA)
Clean water advocate
Founder and CEO of FACE Africa, an NGO that strengthens clean water and sanitation infrastructure in Sub-Saharan Africa through innovative community support services.

Yasin Kakande
Yasin Kakande (Uganda)
Investigative journalist + author
Journalist working undercover in the Middle East to expose the human rights abuses of migrant workers there.

In one of her long-term projects, “The Three: Senior Love Triangle,” documentary photographer Isadora Kosofsky shadowed a three-way relationship between aged individuals in Los Angeles, CA – Jeanie (81), Will (84), and Adina (90). Here, Jeanie and Will kiss one day after a fight.

Isadora Kosofsky
Isadora Kosofsky (USA)
Photojournalist + filmmaker
Photojournalist exploring underrepresented communities in America with an immersive approach, documenting senior citizen communities, developmentally disabled populations, incarcerated youth, and beyond.

Adam Kucharski
Adam Kucharski (UK)
Infectious disease scientist
Infectious disease scientist creating new mathematical and computational approaches to understand how epidemics like Zika and Ebola spread, and how they can be controlled.

Lucy Marcil
Lucy Marcil (USA)
Pediatrician + social entrepreneur
Pediatrician and co-founder of StreetCred, a nonprofit addressing the health impact of financial stress by providing fiscal services to low-income families in the doctor’s waiting room.

Burçin Mutlu-Pakdil
Burçin Mutlu-Pakdil (Turkey | USA)
Astrophysicist studying the structure and dynamics of galaxies — including a rare double-ringed elliptical galaxy she discovered — to help us understand how they form and evolve.

Faith Osier
Faith Osier (Kenya | Germany)
Infectious disease doctor
Scientist studying how humans acquire immunity to malaria, translating her research into new, highly effective malaria vaccines.

In “Birth of a Nation” (2015), artist Paul Rucker recast Ku Klux Klan robes in vibrant, contemporary fabrics like spandex, Kente cloth, camouflage and white satin – a reminder that the horrors of slavery and the Jim Crow South still define the contours of American life today. (Photo: Ryan Stevenson)

Paul Rucker
Paul Rucker (USA)
Visual artist + cellist
Multidisciplinary artist exploring issues related to mass incarceration, racially motivated violence, police brutality and the continuing impact of slavery in the US.

Kaitlyn Sadtler
Kaitlyn Sadtler (USA)
Regenerative tissue engineer
Tissue engineer harnessing the body’s natural immune system to create new regenerative medicines that mend muscle and more quickly heal wounds.

DeAndrea Salvador (USA)
Environmental justice advocate
Sustainability expert and founder of RETI, a nonprofit that advocates for inclusive clean-energy policies that help low-income families access cutting-edge technology to reduce their energy costs.

Harbor seal patient Bogey gets a checkup at the Marine Mammal Center in California. Veterinarian Claire Simeone studies marine mammals like harbor seals to understand how the health of animals, humans and our oceans are interrelated. (Photo: Ingrid Overgard / The Marine Mammal Center)

Claire Simeone
Claire Simeone (USA)
Marine mammal veterinarian
Veterinarian and conservationist studying how the health of marine mammals, such as sea lions and dolphins, informs and influences both human and ocean health.

Kotchakorn Voraakhom
Kotchakorn Voraakhom (Thailand)
Urban landscape architect
Landscape architect and founder of Landprocess, a Bangkok-based design firm building public green spaces and green infrastructure to increase urban resilience and protect vulnerable communities from climate change.

Mikhail Zygar
Mikhail Zygar (Russia)
Journalist + historian
Journalist covering contemporary and historical Russia and founder of Project1917, a digital documentary project that narrates the 1917 Russian Revolution in an effort to contextualize modern-day Russian issues.

TED2018 Senior Fellows

Senior Fellows embody the spirit of the TED Fellows program. They attend four additional TED events, mentor new Fellows and continue to share their remarkable work with the TED community.

Prosanta Chakrabarty
Prosanta Chakrabarty (USA)
Evolutionary biologist and natural historian researching and discovering fish around the world in an effort to understand fundamental aspects of biological diversity.

Aziza Chaouni
Aziza Chaouni (Morocco)
Civil engineer and architect creating sustainable built environments in the developing world, particularly in the deserts of the Middle East.

Shohini Ghose
Shohini Ghose (Canada)
Quantum physicist + educator
Theoretical physicist developing quantum computers and novel protocols like teleportation, and an advocate for equity, diversity and inclusion in science.

A pair of shrimpfish collected in Tanzanian mangroves by ichthyologist Prosanta Chakrabarty and his colleagues this past year. They may represent an unknown population or even a new species of these unusual fishes, which swim head down among aquatic plants.

Zena el Khalil
Zena el Khalil (Lebanon)
Artist + cultural activist
Artist and cultural activist using visual art, site-specific installation, performance and ritual to explore and heal the war-torn history of Lebanon and other global sites of trauma.

Bektour Iskender
Bektour Iskender (Kyrgyzstan)
Independent news publisher
Co-founder of Kloop, an NGO and leading news publication in Kyrgyzstan, committed to freedom of speech and training young journalists to cover politics and investigate corruption.

Mitchell Jackson
Mitchell Jackson (USA)
Writer + filmmaker
Writer exploring race, masculinity, the criminal justice system, and family relationships through fiction, essays and documentary film.

Jessica Ladd
Jessica Ladd (USA)
Sexual health technologist
Founder and CEO of Callisto, a nonprofit organization developing technology to combat sexual assault and harassment on campus and beyond.

Jorge Mañes Rubio
Jorge Mañes Rubio (Spain)
Artist investigating overlooked places on our planet and beyond, creating artworks that reimagine and revive these sites through photography, site-specific installation and sculpture.

An asteroid impact is the only natural disaster we have the technology to prevent, but since prevention takes time, we must search for near-Earth asteroids now. Astronomer Carrie Nugent does just that, discovering and studying asteroids like this one. (Illustration: Tim Pyle and Robert Hurt / NASA/JPL-Caltech)

Carrie Nugent (USA)
Asteroid hunter
Astronomer using machine learning to discover and study near-Earth asteroids, our smallest and most numerous cosmic neighbors.

David Sengeh
David Sengeh (Sierra Leone + South Africa)
Biomechatronics engineer
Research scientist designing and deploying new healthcare technologies, including artificial intelligence, to cure and fight disease in Africa.

Cryptogram: Extracting Secrets from Machine Learning Systems

This is fascinating research about how the underlying training data for a machine-learning system can be inadvertently exposed. Basically, if a machine-learning system trains on a dataset that contains secret information, in some cases an attacker can query the system to extract that secret information. My guess is that there is a lot more research to be done here.

EDITED TO ADD (3/9): Some interesting links on the subject.

Cryptogram: New DDoS Reflection-Attack Variant

This is worrisome:

DDoS vandals have long intensified their attacks by sending a small number of specially designed data packets to publicly available services. The services then unwittingly respond by sending a much larger number of unwanted packets to a target. The best known vectors for these DDoS amplification attacks are poorly secured domain name system resolution servers, which magnify volumes by as much as 50 fold, and network time protocol, which increases volumes by about 58 times.

On Tuesday, researchers reported attackers are abusing a previously obscure method that delivers attacks 51,000 times their original size, making it by far the biggest amplification method ever used in the wild. The vector this time is memcached, a database caching system for speeding up websites and networks. Over the past week, attackers have started abusing it to deliver DDoSes with volumes of 500 gigabits per second and bigger, DDoS mitigation service Arbor Networks reported in a blog post.

Cloudflare blog post. BoingBoing post.

EDITED TO ADD (3/9): Brian Krebs covered this.

Krebs on Security: Look-Alike Domains and Visual Confusion

How good are you at telling the difference between domain names you know and trust and impostor or look-alike domains? The answer may depend on how familiar you are with the nuances of internationalized domain names (IDNs), as well as which browser or Web application you’re using.

For example, how does your browser interpret the following domain? I’ll give you a hint: Despite appearances, it is most certainly not the actual domain for software firm CA Technologies (formerly Computer Associates Intl Inc.), which owns the original domain name ca.com:

https://www.са.com/

Go ahead and click on the link above or cut-and-paste it into a browser address bar. If you’re using Google Chrome, Apple’s Safari, or some recent version of Microsoft’s Internet Explorer or Edge browsers, you should notice that the address converts to “https://www.xn--80a7a.com/.” This is called “punycode,” and it allows browsers to render domains with non-Latin alphabets like Cyrillic and Ukrainian.

Below is what it looks like in Edge on Windows 10; Google Chrome renders it much the same way. Notice what’s in the address bar (ignore the “fake site” and “Welcome to…” text, which was added as a courtesy by the person who registered this domain):

The domain https://www.са.com/ as rendered by Microsoft Edge on Windows 10. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser.

IE, Edge, Chrome and Safari all will convert https://www.са.com/ into its punycode output (xn--80a7a.com), in part to warn visitors about any confusion over look-alike domains registered in other languages. But if you load that domain in Mozilla Firefox and look at the address bar, you’ll notice there’s no warning of possible danger ahead. It just looks like it’s loading the real ca.com.

What the fake domain looks like when loaded in Mozilla Firefox. A browser certificate ordered from Comodo allows it to include the green lock (https://) in the address bar, adding legitimacy to the look-alike domain. The rest of the text in the image (beginning with “Welcome to a site…”) was added by the person who registered this test domain, not the browser. Click to enlarge.

The domain “xn--80a7a.com” pictured in the first screenshot above is punycode for the Ukrainian letters for “s” (which is represented by the character “c” in Russian and Ukrainian), as well as an identical Ukrainian “a”.

It was registered by Alex Holden, founder of Milwaukee, Wis.-based Hold Security Inc. Holden’s been experimenting with how the different browsers handle punycodes in the browser and via email. Holden grew up in what was then the Soviet Union and speaks both Russian and Ukrainian, and he’s been playing with Cyrillic letters to spell English words in domain names.

Letters like A and O look exactly the same and the only difference is their Unicode value. There are more than 136,000 Unicode characters used to represent letters and symbols in 139 modern and historic scripts, so there’s a ton of room for look-alike or malicious/fake domains.

For example, “a” in Latin is the Unicode value “0061” and in Cyrillic is “0430.”  To a human, the graphical representation for both looks the same, but for a computer there is a huge difference. Internationalized domain names (IDNs) allow domain names to be registered in non-Latin letters (RFC 3492), provided the domain is all in the same language; trying to mix two different IDNs in the same name causes the domain registries to reject the registration attempt.
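
You can check this yourself from a shell; Python's built-in idna codec performs the same conversion that browsers do:

$ python3 -c "print(hex(ord('a')), hex(ord('а')))"
0x61 0x430
$ python3 -c "print('са.com'.encode('idna'))"
b'xn--80a7a.com'

The first command shows the differing Unicode values of Latin “a” and Cyrillic “а”; the second converts the look-alike domain to the punycode form browsers display.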

So, in the Cyrillic alphabet (Russian/Ukrainian), we can spell АТТ, УАНОО, ХВОХ, and so on. As you can imagine, the potential opportunity for impersonation and abuse are great with IDNs. Here’s a snippet from a larger chart Holden put together showing some of the more common ways that IDNs can be made to look like established, recognizable domains:

Image: Hold Security.

Holden also was able to register a valid SSL encryption certificate for https://www.са.com from Comodo, which would only add legitimacy to the domain were it to be used in phishing attacks against CA customers by bad guys, for example.


To be clear, the potential threat highlighted by Holden’s experiment is not new. Security researchers have long warned about the use of look-alike domains that abuse special IDN/Unicode characters. Most of the major browser makers have responded in some way by making their browsers warn users about potential punycode look-alikes.

With the exception of Mozilla, which by most accounts is the third most-popular Web browser. And I wanted to know why. I’d read the Mozilla Wiki’s “IDN Display Algorithm FAQ,” so I had an idea of what Mozilla was driving at in their decision not to warn Firefox users about punycode domains: Nobody wanted it to look like Mozilla was somehow treating the non-Western world as second-class citizens.

I wondered why Mozilla doesn’t just have Firefox alert users about punycode domains unless the user has already specified that he or she wants a non-English language keyboard installed. So I asked that in some questions I sent to their media team. They sent the following short statement in reply:

“Visual confusion attacks are not new and are difficult to address while still ensuring that we render everyone’s domain name correctly. We have solved almost all IDN spoofing problems by implementing script mixing restrictions, and we also make use of Safe Browsing technology to protect against phishing attacks. While we continue to investigate better ways to protect our users, we ultimately believe domain name registries are in the best position to address this problem because they have all the necessary information to identify these potential spoofing attacks.”

If you’re a Firefox user and would like Firefox to always render IDNs as their punycode equivalent when displayed in the browser address bar, type “about:config” without the quotes into a Firefox address bar. Then in the “search:” box type “punycode,” and you should see one or two options there. The one you want is called “network.IDN_show_punycode.” By default, it is set to “false”; double-clicking that entry should change that setting to “true.”
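
The same preference can also be set persistently from a user.js file in your Firefox profile directory, for example:

user_pref("network.IDN_show_punycode", true);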

Incidentally, anyone using the Tor Browser to anonymize their surfing online is exposed to IDN spoofing because Tor by default uses Mozilla as well. I could definitely see spoofed IDNs being used in targeting phishing attacks aimed at Tor users, many of whom have significant assets tied up in virtual currencies. Fortunately, the same “about:config” instructions work just as well on Tor to display punycode in lieu of IDNs.

Holden said he’s still in the process of testing how various email clients and Web services handle look-alike IDNs. For example, it’s clear that Twitter sees nothing wrong with sending the look-alike domain in messages to other users without any context or notice. Skype, on the other hand, seems to truncate the IDN link, sending clickers to a non-existent page.

“I’d say that most email services and clients are either vulnerable or not fully protected,” Holden said.

For a look at how phishers or other scammers might use IDNs to abuse your domain name, check out this domain checker that Hold Security developed. Here’s the first page of results for krebsonsecurity.com, which indicate that someone at one point registered krebsoṇsecurity[dot]com (that domain includes a lowercase “n” with a tiny dot below it, a character used by several dozen scripts). The results in yellow are just possible (unregistered) domains based on common look-alike IDN characters.

The first page of warnings for krebsonsecurity.com from Hold Security’s IDN scanner tool.

I wrote this post mainly because I wanted to learn more about the potential phishing and malware threat from look-alike domains, and I hope the information here has been interesting if not also useful. I don’t think this kind of phishing is a terribly pressing threat (especially given how far less complex phishing attacks seem to succeed just fine for now). But it sure can’t hurt Firefox users to change the default “visual confusion” behavior of the browser so that it always displays punycode in the address bar (see the solution mentioned above).

[Author’s note: I am listed as an adviser to Hold Security on the company’s Web site. However this is not a role for which I have been compensated in any way now or in the past.]

Planet Debian: Alexandre Viau: testeduploads - looking for GSOC mentor

I have been waiting for the right opportunity to participate in GSOC for a while. I have worked on a project idea that is just right for my skill set; it would be a great learning opportunity for me, and I hope that it can be useful to the wider Debian community.

Please take a look at the project description and let me know if you would be interested in mentoring me over the summer.

testeduploads: test your packages before they hit the archive


testeduploads is a service that provides a way to test Debian source packages. The main goal of the project is to empower Debian Developers by giving them easy access to more rigorous testing before they upload a package to the archive. It runs tests that Debian Developers don’t necessarily run because of lack of time and resources.

testeduploads can also be used to test a large number of packages in contexts such as:

  • detecting whether or not packages can be updated to a newer upstream version
  • detecting whether or not packages can be backported
  • testing new versions of compilers
  • testing new versions of debhelper add-ons


Packages can be submitted to testeduploads with dput. Depending on the upload queue that was used, it can also automatically forward the uploads to the archive.

dput testeduploads [changesfile] will upload a package to the configured testeduploads queue and trigger the following tests:

  • rebuild the source package from the .dsc and verify that the signature matches
  • build binary packages
  • run autopkgtests on the package
  • rebuild all reverse dependencies using the new package
  • run autopkgtests on all reverse dependencies

On success:

  • the uploader is notified
  • logs are made available
  • if the package was received through the test-and-upload queue, it is automatically forwarded to the archive

On failure:

  • the uploader is notified
  • logs are made available

Results and statistics are accessible through a web interface and a REST API. All uploads are assigned an id. HTTP uploads immediately return an upload id that can be used to query test status and to perform other actions. This allows for other tools to build on top of testeduploads.
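
For example, a status query against the REST API could look something like this (the host name, endpoint and response shape here are only a sketch, since the service does not exist yet):

$ curl -s https://testeduploads.debian.net/api/uploads/4217
{"id": 4217, "queue": "test-and-upload", "status": "running"}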

The service accepts uploads to several queues that define specific behaviours:

  • test-and-upload: test the package on all release architectures and forward it to the archive on success
  • test-only: test the package on all release architectures, but do not forward it on success
  • amd64/test-and-upload: limit the tests to amd64 and apply test-and-upload behaviour
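
As a sketch, the corresponding ~/.dput.cf stanza could look like this (the host name and incoming path are hypothetical):

[testeduploads]
fqdn = testeduploads.debian.net
method = http
incoming = /upload/test-and-upload

After that, dput testeduploads foo_1.0-1_amd64.changes would submit the package for testing.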

Why me

I have been contributing to Debian for a couple of years now and I have been a Debian Developer since 2015. So far, I have mostly been contributing by packaging new software and fixing packaging-related bugs.

Participating in Google Summer of Code would be a great opportunity for me to contribute to Debian in other areas. Starting a new project like testeduploads is a good learning opportunity, but it requires a lot of time. The summer would be more than enough for me to kick-start development of the service. Then, I can see myself maintaining and improving it for a long time.

For me, this summer is just the right time. There are very few classes that I could take over the summer, so it is a good opportunity to take a summer off and work on GSOC.

For general GSOC questions, please refer to the debian-outreach mailing list or to the #debian-outreach IRC channel.

If you are interested in the project and want to mentor it over the summer, please get in touch with me.

Debian GSOC coordination guide

debian-outreach mailing list

testeduploads prototype

Cryptogram: History of the US Army Security Agency

Interesting history of the US Army Security Agency in the early years of Cold War Germany.

Planet Debian: Lars Wirzenius: New chapter of Hacker Noir on Patreon

For the 2016 NaNoWriMo I started writing a novel about software development, "Hacker Noir". I didn't finish it during that November, and I still haven't finished it. I had a year-long hiatus, due to work and life being stressful, when I didn't write on the novel at all. However, inspired by both the Doctorow method and the Seinfeld method, I have recently started writing again.

I've just published a new chapter. However, unlike last year, I'm publishing it on my Patreon only, for the first month, and only for patrons. Then, next month, I'll put that chapter on the book's public site, and another new chapter on Patreon.

I don't expect to make a lot of money, but I am hoping having active supporters will motivate me to keep writing.

I'm writing the first draft of the book. It's likely to be as horrific as every first-time author's first draft is. If you'd like to read it as raw as it gets, please do. Once the first draft is finished, I expect to read it myself, and be horrified, and throw it all away, and start over.

Also, I should go get some training on marketing.

Worse Than Failure: CodeSOD: Let's Set a Date

Let’s imagine, for a moment, that you came across a method called setDate. Would you think, perhaps, that it stores a date somewhere? Of course it does. But what else does it do?

Matthias was fixing some bugs in a legacy project, and found himself asking exactly that question.

function setDate(objElement, strDate, objCalendar) {

    if (objElement.getAttribute("onmyfocus")) {
        eval(objElement.getAttribute("onmyfocus").replace(/this/g, "$('" + objElement.id + "')"));
    } else if (objElement.onfocus && objElement.onfocus.toString()) {
        eval(GetInnerFunction(objElement.onfocus.toString()).replace(/this/g, "$('" + objElement.id + "')"));
    }

    objElement.value = parseDate(strDate);

    if (objElement.getAttribute("onmyblur")) {
        eval(objElement.getAttribute("onmyblur").replace(/this/g, "$('" + objElement.id + "')"));
    } else if (objElement.onblur && objElement.onblur.toString()) {
        eval(GetInnerFunction(objElement.onblur.toString()).replace(/this/g, "$('" + objElement.id + "')"));
    }

    // toggle the calendar display
    if (objCalendar) {
    } else {
    }
}

In this code, objElement and objCalendar are both expected to be DOM elements. strDate, as the name implies, is a string holding a date. You can see a few elements in the code which obviously have something to do with the actual function of setting a date: objElement.value = parseDate(strDate) and the conditional about trying to toggle the calendar object seem like they might have something to do with managing the date.

It’s the rest of the code that gets… weird. The purpose, at a guess, is that this setDate method is emulating a user interacting with a DOM element- perhaps this is part of some bolted-on calendar widget- so they want to fire the on-focus and on-blur methods of the underlying element. That, alone, would be an ugly but serviceable hack.

But that’s not what they do.

First, they’ve apparently created attributes onmyfocus and onmyblur. Should the element have those attributes, they extract the value there, and replace any references to this with a call to $(), passing in the objElementId… and then they eval it.

If there isn’t a special onmyfocus/onmyblur attribute, they instead check for the more normal onfocus/onblur event handlers. Which are functions. But this code doesn’t want functions, so it converts them to a string and replaces this again, before passing it back to eval.

Replacing this means that they were trying to reinvent function.apply, a JavaScript method that allows you to pass in whatever object you want to be this within the function you’re calling. But, at least in the case of the onfocus/onblur, this isn’t necessary, since every browser has had a method to dispatchEvent or createEvent since time immemorial. You don’t need to mangle a function to emulate an event.

The jQuery experts might notice that $ and say, “Well, heck, if they’re using jQuery, that has a .trigger() method which fires events.” That’s a good thought, but this code is actually worse than it looks. I’ll allow Matthias to explain:

$ is NOT jQuery, but a global function that does a getElementById-lookup

Planet Debian: Elena Gjevukaj: Bug Squashing Party in Tirana

A Bug Squashing Party was organized by Debian and OpenLabs in Tirana last weekend (3-4 March 2018). A BSP is a get-together of Debian Developers and Debian enthusiasts over a specified timeframe, during which they try to fix as many bugs as possible.

Unusually for tech events, about 90% of the participants in this one were women, and I think if anyone saw us working together they would doubt that it was a tech event. Like other fields, the tech world is no exception when it comes to discrimination and sexism, but luckily for us that wasn't the case at this event, organized by our friend Daniel Pocock (from Debian) and OpenLabs Tirana.

We were a large group of computer science students and graduates coming from Kosovo.

For me it was the first time at OpenLabs, and I must say it was an amazing time meeting the organizers and members and working with them.

After the presentation about OpenLabs and its events, we had some interesting topics and projects that we could choose to work on. Mainly, I worked with other girls on translating some parts of Debian text into Albanian; we also did some research on bugs in various systems.

In the evening we had a nice dinner in an Italian restaurant in Tirana.

Discovering Tirana.


Planet Debian: Vincent Bernat: Packaging an out-of-tree module for Debian with DKMS

DKMS is a framework designed to allow individual kernel modules to be upgraded without changing the whole kernel. It is also very easy to rebuild modules as you upgrade kernels.

On Debian-like systems,1 DKMS enables the installation of various drivers, from ZFS on Linux to VirtualBox kernel modules or NVIDIA drivers. These out-of-tree modules are not distributed as binaries: once installed, they need to be compiled for your current kernel. Everything is done automatically:

# apt install zfs-dkms
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
  binutils cpp cpp-6 dkms fakeroot gcc gcc-6 gcc-6-base libasan3 libatomic1 libc-dev-bin libc6-dev
  libcc1-0 libcilkrts5 libfakeroot libgcc-6-dev libgcc1 libgomp1 libisl15 libitm1 liblsan0 libmpc3
  libmpfr4 libmpx2 libnvpair1linux libquadmath0 libstdc++6 libtsan0 libubsan0 libuutil1linux libzfs2linux
  libzpool2linux linux-compiler-gcc-6-x86 linux-headers-4.9.0-6-amd64 linux-headers-4.9.0-6-common
  linux-headers-amd64 linux-kbuild-4.9 linux-libc-dev make manpages manpages-dev patch spl spl-dkms
  zfs-zed zfsutils-linux
3 upgraded, 44 newly installed, 0 to remove and 3 not upgraded.
Need to get 42.1 MB of archives.
After this operation, 187 MB of additional disk space will be used.
Do you want to continue? [Y/n]
# dkms status
spl,, 4.9.0-6-amd64, x86_64: installed
zfs,, 4.9.0-6-amd64, x86_64: installed
# modinfo zfs | head
filename:       /lib/modules/4.9.0-6-amd64/updates/dkms/zfs.ko
license:        CDDL
author:         OpenZFS on Linux
description:    ZFS
srcversion:     42C4AB70887EA26A9970936
depends:        spl,znvpair,zcommon,zunicode,zavl
retpoline:      Y
vermagic:       4.9.0-6-amd64 SMP mod_unload modversions
parm:           zvol_inhibit_dev:Do not create zvol device nodes (uint)

If you install a new kernel, a compilation of the module is automatically triggered.

Building your own DKMS-enabled package

Suppose you’ve gotten your hands on an Intel XXV710-DA2 NIC. This card is handled by the i40e driver. Unfortunately, it only got support from Linux 4.10 and you are using a stock 4.9 Debian Stretch kernel. DKMS provides here an easy solution!

Download the driver from Intel, unpack it in some directory and add a debian/ subdirectory with the following files:

  • debian/changelog:

    i40e-dkms (2.4.6-0) stretch; urgency=medium
      * Initial package.
     -- Vincent Bernat <>  Tue, 27 Feb 2018 17:20:58 +0100
  • debian/control:

    Source: i40e-dkms
    Maintainer: Vincent Bernat <>
    Build-Depends: debhelper (>= 9), dkms

    Package: i40e-dkms
    Architecture: all
    Depends: ${misc:Depends}
    Description: DKMS source for the Intel i40e network driver
  • debian/rules:

    #!/usr/bin/make -f
    include /usr/share/dpkg/pkg-info.mk

    %:
            dh $@ --with dkms

    override_dh_install:
            dh_install src/* usr/src/i40e-$(DEB_VERSION_UPSTREAM)/

    override_dh_dkms:
            dh_dkms -V $(DEB_VERSION_UPSTREAM)
  • debian/i40e-dkms.dkms:

    PACKAGE_NAME="i40e"
    PACKAGE_VERSION="#MODULE_VERSION#"
    BUILT_MODULE_NAME[0]="i40e"
    DEST_MODULE_LOCATION[0]="/updates"
    AUTOINSTALL="yes"

  • debian/compat:

    9

In debian/changelog, pay attention to the version. The version of the driver is 2.4.6. Therefore, we use 2.4.6-0 for the package. In debian/rules, we install the source of the driver in /usr/src/i40e-2.4.6—the version is extracted from debian/changelog.

The content of debian/i40e-dkms.dkms is described in detail in the dkms(8) manual page. The i40e driver is fairly standard and dkms is able to figure out how to compile it. However, if your kernel module does not follow the usual conventions, it is the right place to override the build command.

Once all the files are in place, you can turn the directory into a Debian package with, for example, the dpkg-buildpackage command.2 At the end of this operation, you get your DKMS-enabled package, i40e-dkms_2.4.6-0_all.deb. Put it in your internal repository and install it on the target.
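
For example, a simple unsigned build would be:

$ dpkg-buildpackage -us -uc -b

The -us -uc flags skip signing and -b requests a binary-only build; adjust to your workflow.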

Avoiding compilation on target

If you feel uncomfortable installing compilation tools on the target servers, there is a simple solution. For some time now,3 thanks to Thijs Kinkhorst, dkms has been able to build lean binary packages with only the built modules. For each kernel version, you build such a package in your CI system:

KERNEL_VERSION=4.9.0-6-amd64 # could be a Jenkins parameter
apt -qyy install \
      i40e-dkms \
      linux-image-${KERNEL_VERSION} \
      linux-headers-${KERNEL_VERSION}

DRIVER_VERSION=$(dkms status i40e | awk -F', ' '{print $2}')
dkms mkbmdeb i40e/${DRIVER_VERSION} -k ${KERNEL_VERSION}

cd /var/lib/dkms/i40e/${DRIVER_VERSION}/bmdeb/
dpkg -c i40e-modules-${KERNEL_VERSION}_*
dpkg -I i40e-modules-${KERNEL_VERSION}_*

Here is the shortened output of the last two commands:

# dpkg -c i40e-modules-${KERNEL_VERSION}_*
-rw-r--r-- root/root    551664 2018-03-01 19:16 ./lib/modules/4.9.0-6-amd64/updates/dkms/i40e.ko
# dpkg -I i40e-modules-${KERNEL_VERSION}_*
 new debian package, version 2.0.
 Package: i40e-modules-4.9.0-6-amd64
 Source: i40e-dkms-bin
 Version: 2.4.6
 Architecture: amd64
 Maintainer: Dynamic Kernel Modules Support Team <>
 Installed-Size: 555
 Depends: linux-image-4.9.0-6-amd64
 Provides: i40e-modules
 Section: misc
 Priority: optional
 Description: i40e binary drivers for linux-image-4.9.0-6-amd64
  This package contains i40e drivers for the 4.9.0-6-amd64 Linux kernel,
  built from i40e-dkms for the amd64 architecture.

The generated Debian package contains the pre-compiled driver and only depends on the associated kernel. You can safely install it without pulling dozens of packages.
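On the target, installation is then a single step; the file name below is simply derived from the dpkg -I output above:

# dpkg -i i40e-modules-4.9.0-6-amd64_2.4.6_amd64.deb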

  1. DKMS is also compatible with RPM-based distributions but the content of this article is not suitable for these. ↩︎

  2. You may need to install some additional packages: build-essential, fakeroot and debhelper. ↩︎

  3. Available in Debian Stretch and in the backports for Debian Jessie. However, for Ubuntu Xenial, you need to backport a more recent version of dkms. ↩︎

Worse Than FailureCodeSOD: Just One More Point

Fermat Points Proof

Tim B. had been tasked with updating an older internal application implemented in Java. Its primary purpose was to read in and display files containing a series of XY points—around 100,000 points per file on average—which would then be rendered as a line chart. It was notoriously slow, taking 1-2 minutes to process each file, but otherwise remained fairly stable.

Except that lately, some newer files were failing during the loading process. Tim quickly identified the problem—date formats had changed—and fixed the necessary code. Since the code that read in the XY points happened to reside in the same class, Tim asked his boss whether he could take a crack at killing two birds with one stone. With her approval, he dug in to figure out why the loading process was so slow.

//Initial code, pulled from memory so forgive any errors.
try {
    //The 3rd party library we are passing the values to requires
    //an array of doubles
    double[][] points = null;
    BufferedReader input = new BufferedReader(new FileReader(aFile));
    try {
        String line = null;
        while (( line = input.readLine()) != null) {
            //First, get the XY points from line using a convenience class
            //to parse out the values.
            XYPoint p = new XYPoint(line);
            //Now, to store the points in the array.
            if ( points == null ) {
                //Okay, we've got one point so far.
                points = new double[1][2];
                points[0][0] = p.X;
                points[0][1] = p.Y;
            } else {
                //Uh oh, we need more room. Let's create an array that's one larger
                //and copy all of our points so far into it.
                double[][] newPointArray = new double[points.length + 1][2];
                for ( int i = 0; i < points.length; i++ ) {
                    newPointArray[i][0] = points[i][0];
                    newPointArray[i][1] = points[i][1];
                }
                //Now we can add the new point!
                newPointArray[points.length][0] = p.X;
                newPointArray[points.length][1] = p.Y;
                points = newPointArray;
            }
        }
        //Now, we can pass this to our next function
        drawChart( points );
    } catch (IOException ex) {
        //(remainder of the original snippet truncated)
    }
}
//End original code

After scouring the code twice, Tim called over a few coworkers to have a look for themselves. Unfortunately, no, he wasn't reading it wrong. Apparently the original developer, who no longer worked there, had run into the problem of not knowing ahead of time how many points would be in each file. He'd needed an array of doubles to pass to the next library, so he couldn't simply use a List, which only accepted objects. Thus had he engineered a truly brillant workaround.

Tim determined that for the average file of 100,000 points, each import required a jaw-dropping 10 billion or so copy operations: growing the array one element at a time copies roughly n²/2 ≈ 5 billion X values plus another 5 billion Y values. After a quick refactoring to use an ArrayList, followed by a single copy into a double array, the file load time went from minutes to nearly instantaneous.

//Re-factored code below.
try {
    //The 3rd party library we are passing the values to requires
    //an array of doubles
    double[][] points = null;
    ArrayList<XYPoint> xyPoints = new ArrayList<XYPoint>();
    BufferedReader input = new BufferedReader(new FileReader(aFile));
    try {
        String line = null;
        while (( line = input.readLine()) != null) {
            xyPoints.add( new XYPoint(line) );
        }
        //Now, convert the list to an array
        points = new double[xyPoints.size()][2];
        for ( int i = 0; i < xyPoints.size(); i++ ) {
            points[i][0] = xyPoints.get(i).X;
            points[i][1] = xyPoints.get(i).Y;
        }
        //Now, we can pass this to our next function
        drawChart( points );
    } catch (IOException ex) {
        //(remainder of the original snippet truncated)
    }
}
//End re-factored code.
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV March 2018 Workshop: Comparing window managers

Mar 17 2018 12:30
Mar 17 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Comparing window managers

We'll be looking at several of the many window managers available on Linux.

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

March 17, 2018 - 12:30

Planet DebianCraig Sanders: brawndo-installer

Tired of being oppressed by the slack-arse distro package maintainers who waste time testing that new versions don’t break anything and then waste even more time integrating software into the system?

Well, so am I. So I’ve fixed it, and it was easy to do. Here’s the ultimate installation tool for any program:

brawndo() {
   curl $1 | sudo /usr/bin/env bash
}

I’ve never written a shell script before in my entire life, I spend all my time writing javascript or ruby or python – but shell’s not a real language so it can’t be that hard to get right, can it? Of course not, and I just proved it with the amazing brawndo installer (It’s got what users crave – it’s got electrolyes!)

So next time some lame sysadmin recommends that you install the packaged version of something, just ask them if apt-get or yum or whatever loser packaging tool they’re suggesting has electrolytes. That’ll shut ’em up.

brawndo-installer is a post from: Errata

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #149

Here's what happened in the Reproducible Builds effort between Sunday February 25 and Saturday March 3 2018:

diffoscope development

Version 91 was uploaded to unstable by Mattia Rizzolo. It included contributions already covered by posts of the previous weeks as well as new ones from:

In addition, Juliana — our Outreachy intern — continued her work on parallel processing; the above work is part of it.

reproducible-website development

Packages reviewed and fixed, and bugs filed

An issue with the pydoctor documentation generator was merged upstream.

Reviews of unreproducible packages

73 package reviews have been added, 37 have been updated and 26 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (46)
  • Jeremy Bicha (4)


This week's edition was written by Chris Lamb, Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaSimon Lyall: Audiobooks – Background and February 2018 list


I started listening to audiobooks around the start of January 2017 when I started walking to work (I previously caught the bus and read a book or read on my phone).

I currently get them for free from the Auckland Public Library using the Overdrive app on Android. However, while I download them to my phone using the Overdrive app, I listen to them using Listen Audiobook Player. I switched to the alternative player mainly because it supports playback speeds greater than 2x normal.

I’ve been posting a list the books I listened to at the end of each month to twitter ( See list from Jan 2018, Dec 2017, Nov 2017 ) but I thought I’d start posting them here too.

I mostly listen to history with some science fiction and other topics.

Books listened to in February 2018

The Three-Body Problem by Cixin Liu – Pretty good Sci-Fi and towards the hard-core end I like. Looking forward to the sequels 7/10

Destiny and Power: The American Odyssey of George Herbert Walker Bush by Jon Meacham – A very nicely done biography, comprehensive and giving a good positive picture of Bush. 7/10

Starship Troopers by Robert A. Heinlein – A pretty good version of the classic. The story works well although the politics are “different”. Enjoyable though 8/10

Uncommon People: The Rise and Fall of the Rock Stars 1955-1994 by David Hepworth – Read by the Author (who sounds like a classic Brit journalist). A Story or two plus a playlist from every year. Fascinating and delightful 9/10

The Long Haul: A Trucker’s Tales of Life on the Road by Finn Murphy – Very interesting and well written about the author’s life as a long distance mover. 8/10

Mornings on Horseback – David McCullough – The Early life of Teddy Roosevelt, my McCullough book for the month. Interesting but not as engaging as I’d have hoped. 7/10

The Battle of the Atlantic: How the Allies Won the War – Jonathan Dimbleby – Overview of the Atlantic Campaign of World War 2. The author works to stress it was one of the most important fronts and does pretty well 7/10

Cory DoctorowHey, Wellington! I’m headed your way!

I’ve just finished a wonderful time at the Adelaide Festival and now I’m headed to the last stop on the Australia/New Zealand tour for Walkaway: Wellington!

I’m doing a pair of events at Writers & Readers Week at the New Zealand Festival; followed by a special one-day NetHui on copyright and then a luncheon seminar for the Privacy Commissioner on “machine learning, big data and being less wrong.”

It starts on the 9th of March and finishes on the 13th, and I really hope I see you there! Thanks to everyone who’s come out in Perth, Sydney, Melbourne and Adelaide; you’ve truly made this a tour to remember.

Harald WelteReport from the Geniatech vs. McHardy GPL violation court hearing

Today, I took some time off to attend the appeal hearing in a GPL infringement dispute between former netfilter colleague Patrick McHardy and Geniatech Europe.

I am not in any way legally involved in the lawsuit on either the plaintiff or the defendant side. However, as a fellow (former) Linux kernel developer myself, and a long-term Free Software community member who strongly believes in the copyleft model, I of course am very interested in this case.

History of the Case

This case is about GPL infringements in consumer electronics devices based on a GNU/Linux operating system, including the Linux kernel and, at least in some devices, netfilter/iptables. The specific devices in question are a series of satellite TV receivers built by Geniatech, a Shenzhen (China) based company which is represented in Europe by Germany-based Geniatech Europe GmbH.

The Geniatech Europe CEO has openly admitted (out of court) that they had some GPL incompliance in the past, and that there was failure on their part that needed to be fixed. However, he was not willing to accept an overly wide claim in the preliminary injunction against his company.

The history of the case is that at some point in July 2017, Patrick McHardy made a test purchase of a Geniatech Europe product, and found it infringing the GNU General Public License v2. Apparently no source code (and/or written offer) had been provided alongside the binary - a straight-forward violation of the license terms and hence a violation of copyright. The plaintiff then asked the regional court of Cologne to issue a preliminary injunction against the defendant, which was granted on September 8th, 2017.

In terms of legal procedure, in Germany, when a plaintiff applies for a preliminary injunction, it is immediately granted by the court after brief review of the filing, without previously hearing the defendant in an oral hearing. If the defendant (like in this case) wishes to appeal the preliminary injunction, it files an appeal which then results in an oral hearing. This is what happened, after which the district court of Cologne (Landgericht Koeln) on October 20, 2017 issued ruling 14 O 188/17 partially upholding the injunction.

All in all, nothing particularly unusual about this. There is no dispute about a copyright infringement having existed, and this generally grants any of the copyright holders the right to have the infringing party to cease and desist from any further infringement.

However, this injunction has a very wide scope, stating that the defendant was to cease and desist not only from ever publishing, selling, or offering for download any version of Linux (unless compliant with the license). It furthermore asked the defendant to cease and desist

  • from putting hyperlinks on their website to any version of Linux
  • from asking users to download any version of Linux

unless the conditions of the GPL are met, particularly the clauses related to providing the complete and corresponding source code.

The appeals case at OLG Cologne

The defendant now escalated this to the next higher court, the higher regional court of Cologne (OLG Koeln), asking to withdraw the earlier ruling of the lower court, i.e. removing the injunction with its current scope.

The first very positive surprise at the hearing was the depth in which the OLG court has studied the subject matter of the dispute prior to the hearing. In the many GPL related court cases that I witnessed so far, it was by far the most precise analysis of how Linux kernel development works, and this despite the more than 1000 pages of filings that parties had made to the court to this point.

Just to give you some examples:

  • the court understood that Linux was created by Linus Torvalds in 1991 and released under GPL to facilitate the open and collaborative development
  • the court recognized that there is no co-authorship / joint authorship (German: Miturheber) in the Linux kernel as a whole, as it was not a group of people planning+developing a given program together, but it is a program that has been released by Linus Torvalds and has since been edited by more than 15.000 developers without any "grand joint plan" but rather in successive iterations. This situation constitutes "editing authorship" (German: Bearbeiterurheber)
  • the court further recognized that being listed as "head of the netfilter core team" or a "subsystem maintainer" doesn't necessarily mean that one is contributing copyrightable works. Reviewing thousands of patches doesn't mean you own copyright on them, drawing an analogy to an editorial office at a publisher.
  • the court understood there are plenty of Linux versions that may not even contain any of Patrick McHardy's code (such as older versions)

After about 35 minutes of the presiding judge explaining the court's understanding of the case (and how kernel development works), he went on to summarize the court's internal deliberation prior to the hearing.

In this summary, the presiding judge stated very clearly that they believe there is some merit to the arguments of the defendant, and that they would be inclined towards a ruling favorable to the defendant based on their current understanding of the case.

He cited the following main reasons:

  • The Linux kernel development model does not support the claim of Patrick McHardy having co-authored Linux. In so far, he is only an editing author (Bearbeiterurheber), and not a co-author. Nevertheless, even an editing author has the right to ask for cease and desist, but only on those portions that he authored/edited, and not on the entire Linux kernel.
  • The plaintiff did not sufficiently show what exactly his contributions were and how they themselves constituted copyrightable works
  • The plaintiff did not substantiate what copyrightable contributions he has made outside of netfilter/iptables. His mere listing as general networking subsystem maintainer does not clarify what his copyrightable contributions were
  • The plaintiff being a member of the netfilter core team or even the head of the core team still doesn't support the claim of being a co-author, as netfilter substantially existed since 1999, three years before Patrick's first contribution to netfilter, and five years before joining the core team in 2004.

So all in all, it was clear that the court also thought the ruling on all of Linux was too far-reaching.

The court suggested that it might be better to have regular main proceedings, in which expert witnesses can be called and real evidence has to be provided, as opposed to the constraints of the preliminary procedure that was applied currently.

Some other details that were mentioned somewhere during the hearing:

  • Patrick McHardy apparently unilaterally terminated the license to his works in an e-mail dated 26th of July 2017 towards the defendant. According to the defendant (and general legal opinion, including my own position), this is in turn a violation of the GPLv2, as it only allowed plaintiff to create and publish modified versions of Linux under the obligation that he licenses his works under GPLv2 to any third party, including the defendant. The defendant believes this is abuse of his rights (German: Rechtsmissbraeuchlich).
  • sworn affidavits of senior kernel developer Greg Kroah-Hartman and current netfilter maintainer Pablo Neira were presented in support of some of the defendant's claims. The contents of those are unfortunately not public, nor are the contents of the sworn affidavits presented by the plaintiff.
  • The defendant has made substantiated claims in his filings that Patrick McHardy would perform his enforcement activities not with the primary motivation of achieving license compliance, but as a method to generate monetary gain. Such claims include that McHardy has acted in more than 38 cases, in at least one of which he has requested a contractual penalty of 1.8 million EUR. The total amount of monies received as contractual penalties was quoted as over 2 million EUR to this point. Please note that those are claims made by the defendant, which were just reproduced by the court. The court has not assessed their validity. However, the presiding judge explicitly stated that he received a phone call about this case from a lawyer known to him personally, who supported that large contractual penalties are being paid in other related cases.
  • One argument by the plaintiff seems to center around being listed as a general kernel networking maintainer until 2017 (despite his latest patches being from 2015, and those were netfilter only)

Withdrawal by Patrick McHardy

At some point, the court hearing was temporarily suspended to provide the legal representation of the plaintiff with the opportunity to have a Phone call with the plaintiff to decide if they would want to continue with their request to uphold the preliminary injunction. After a few minutes, the hearing was resumed, with the plaintiff withdrawing their request to uphold the injunction.

As a result, the injunction is now withdrawn, and the plaintiff has to bear all legal costs (court fees, lawyers costs on both sides).

Personal Opinion

For me, this is all of course a difficult topic. With my history of being the first to enforce the GNU GPLv2 in (equally German) court, it is unsurprising that I am in favor of license enforcement being performed by copyright holders.

I believe individual developers who have contributed to the Linux kernel should have the right to enforce the license, if needed. It is important to have distributed copyright, and to avoid a situation where only one (possibly industry friendly) entity would be able to take [legal] action.

I'm not arguing for a "too soft" approach. It's almost 15 years since the first court cases on license violations on (embedded) Linux, and the fact that the problem still exists today clearly shows the industry is very far from having solved a seemingly rather simple problem.

On the other hand, such activities must always be oriented to compliance, and compliance only. Collecting huge amounts of contractual penalties is questionable. And if it was necessary to collect such huge amounts to motivate large corporations to be compliant, then this must be done in the open, with the community knowing about it, and the proceeds of such contractual penalties must be donated to free software related entities to prove that personal financial gain is not a motivation.

The rumors of Patrick performing GPL enforcement for personal financial gain have been around for years. It was initially very hard for me to believe. But as more and more about this became known, and as Patrick refused all contact requests from his former netfilter team-mates as well as the wider kernel community, it became hard to avoid drawing the related conclusions.

We do need enforcement, both out of court and in court. But we need it to happen out of the closet, with the community in the picture, and without financial gain to individuals. The "principles of community oriented enforcement" of the Software Freedom Conservancy as well as the more recent (but much less substantial) kernel enforcement statement represent the most sane and fair approach for how we as a community should deal with license violations.

So am I happy with the outcome? Not entirely. It's good that an over-reaching injunction was removed. But then, a lot of money and effort was wasted on this, without any verdict/ruling. It would have been IMHO better to have a court ruling published, in which the injunction is substantially reduced in scope (e.g. only about netfilter, or specific versions of the kernel, or specific products, not about placing hyperlinks, etc.). It would also have been useful to have some of the other arguments end up in a written ruling of a court, rather than more or less "evaporating" in the spoken word of the hearing today, without advancing legal precedent.

Lessons learned for the developer community

  • In the absence of detailed knowledge on computer programming, legal folks tend to look at "metadata" more, as this is what they can understand.
  • It matters who has which title and when. Should somebody not be an active maintainer, make sure he's not listed as such.
  • If somebody ceases to be a maintainer or developer of a project, remove him or her from the respective lists immediately, not just several years later.
  • Copyright statements do matter. Make sure you don't merge any patches adding copyright statements without being sure they are actually valid.

Lessons learned for the IT industry

  • There may be people doing GPL enforcement for not-so-noble motives
  • Defending yourself against claims in court can very well be worth it, as opposed to simply settling out of court (presumably for some money). The Telefonica case in 2016 has shown this, as has this current Geniatech case. The legal system can work, if you give it a chance.
  • Nevertheless, if you have violated the license, and one of the copyright holders makes a properly substantiated claim, you still will get injunctions granted against you (and rightfully so). This was just not done in this case (not properly substantiated, scope of injunction too wide/coarse).

Dear Patrick

For years, your former netfilter colleagues and friends wanted to have a conversation with you. You have not returned our invitation so far. Please do reach out to us. We won't bite, but we want to share our views with you, and show you what implications your actions have not only on Linux, but also particularly on the personal and professional lives of the very developers that you worked hand-in-hand with for a decade. It's your decision what you do with that information afterwards, but please do give us a chance to talk. We would greatly appreciate if you'd take up that invitation for such a conversation. Thanks.

Planet DebianSteinar H. Gunderson: Skellam distribution likelihood

I wondered if it was possible to make a ranking system based on the Skellam distribution, taking point spread as the only input; the first step is figuring out what the likelihood looks like, so here's an example for k=4 (i.e., one team beat the other by four goals):
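For reference, the likelihood in question is just the Skellam probability mass function viewed as a function of the two scoring rates, with the observed margin k held fixed:

    P(k; \mu_1, \mu_2) = e^{-(\mu_1 + \mu_2)} \, (\mu_1 / \mu_2)^{k/2} \, I_{|k|}(2 \sqrt{\mu_1 \mu_2})

where I_{|k|} is the modified Bessel function of the first kind.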

Skellam distribution likelihood surface plot

It's pretty, but unfortunately, it shows that the most likely combination is µ1 = 0 and µ2 = 4, which isn't really that realistic. I don't know what I expected, though :-)

Perhaps it's different when we start summing many of them (more games, more teams), but you get into too high dimensionality to plot. If nothing else, it shows that it's hard to solve symbolically by looking for derivatives, as the extreme point is on an edge, not on a hill.

Krebs on SecurityWhat Is Your Bank’s Security Banking On?

A large number of banks, credit unions and other financial institutions just pushed customers onto new e-banking platforms that asked them to reset their account passwords by entering a username plus some other static identifier — such as the first six digits of their Social Security number (SSN), or a mix of partial SSN, date of birth and surname. Here’s a closer look at what may be going on (spoiler: small, regional banks and credit unions have grown far too reliant on the whims of just a few major online banking platform providers).

You might think it odd that any self-respecting financial institution would seek to authenticate customers via static data like partial SSN for passwords, and you’d be completely justified for thinking that, too. Nobody has any business using these static identifiers for authentication because they are for sale on most Americans quite cheaply in the cybercrime underground. The Equifax breach might have “refreshed” some of those data stores for identity thieves, but most U.S. adults have had their static details (DOB/SSN/MMN, address, previous address, etc) on sale for years now.

On Feb. 16, KrebsOnSecurity reader Brent Hoeft shared a copy of an email he’d just received from his financial institution Associated Bank, which at $30+ billion in assets happens to be Wisconsin’s largest by asset size.

The notice advised:

“Please read and save this information (including the password below) to prepare for your online and mobile banking upgrade.

Our refreshed online and mobile banking experience is officially launching on Monday, February 26, 2018.

We’re excited to share it with you, and want you to be aware of some important details about the transition.


Use this temporary password the first time you sign in after the upgrade. Your temporary password is the first four letters of your last name plus the last four digits of your Social Security Number.

XXXX#### [redacted by me but included in the email]

Note: your password is all lowercase without spaces.

Once the upgrade is complete, you will need your temporary password to begin the re-enrollment process.
• Beginning Monday, February 26, you will need to sign in using your existing user ID and the temporary password included above in this email. Please note that you are only required to reenroll in online or mobile banking but can access both using the same user ID and password.
• Once you sign in, you will be prompted to create a new password and establish other security features. Your user ID will remain the same.”

Hoeft said Associated Bank seems to treat the customer username as a secret, something to be protected along with the password.

“I contacted Associated’s customer service via email and received a far less satisfying explanation that the user name is required for re-activation and, that since [the username] was not provided in the email, the process they are using is in fact secure,” Hoeft said.

After speaking with Hoeft, I tweeted about whether to name and shame the bank before it was too late, or perhaps to try and talk some sense into them privately. Most readers advised that calling attention to the problem before the transition could cause more harm than good, and that at least until after Feb. 26 contacting some of the banks privately was the best idea (which is what I did).

Associated Bank wouldn’t say who their new consumer online banking platform provider was, but they did say it was one of the big ones. I took that to mean either FIS, Fiserv or Jack Henry, which collectively control approximately 70 percent of the market for bank core processors (according to, Fiserv is by far the largest).


The bank’s chief information security officer Joe Smits said Associated’s new consumer online banking platform provider required that new and existing customers log in with a username and a temporary password — which was described as choice among secondary, static data elements about customers — such as the first six digits of the customer’s SSN or date of birth.

Smits added that the bank originally started emailing customers the instructions for figuring out their temporary passwords, but then decided US mail would be a safer option and sent the rest out that way. He said only about 15 percent of Associated Bank customers (~50,000) received instructions about their temporary passwords through email.

I followed up with Hoeft to find out how his online banking upgrade went at Associated Bank. He told me that upon visiting the site, it asked for his username and the temporary password (the first four letters of his last name and the last four digits of his SSN).

“After entering that I was told to re-enter my temporary password and then create a new password,” Hoeft said. “I then was asked to select 5 security questions and provide answers. Next I was asked for a verification phone number. Upon entering that I received a text message with a 4 digit verification code. After entering the code it asked me to finish my profile information including name, email and daytime phone. After that it took me right into my online banking account.”

Hoeft said it seems like the “verification” step that was supposed to create an extra security check didn’t really add any security at all.

“If someone were able to get in with the temporary password, they would be able to create a new password, fill out all the security code information, and then provide their phone number to receive the verification code,” Hoeft said. “Armed with the verification code they then would be able to get right into my online banking account.”


A simple search online revealed Associated Bank wasn’t alone: Multiple institutions were moving to a new online banking platform all on the same day: Feb. 26, 2018.

My Credit Union also moved to a new online banking service in February, posting a notice stating that all customers will need to log in with their current username and the last four of their SSN as a temporary password.

Customers Bank, a $10 billion bank with nearly two dozen branches between Boston and Philadelphia, also told customers that starting Feb. 26 they would need to use a temporary password — the last six digits of their Social Security number — to re-enroll in online banking. Here’s part of their advice, which was published in a PDF on the bank’s site:

• You may notice a new co-branded logo for Customers Bank and BankMobile (Division Customers Bank).
• Your existing user name for Online Banking will remain the same within the new system; however, it must be entered as all lowercase letters.
• The first time you log into the new Online Banking system, your temporary password is the last 6-digits of your social security number. Your temporary password will expire on Friday, April 20, 2018. Please be sure to log in prior to that date.
• Online Banking includes multi-factor authentication which will need to be reestablished as part of the initial sign in to the system.
• Your username and password credentials for Online Banking will be the same for Mobile Banking. Note: Before accessing the new Mobile Banking services, you must first login to our enhanced Online Banking system to change your password.
• You will also need to enroll your mobile device, either through Online Banking by visiting the Mobile Banking Center option, or directly on the device through the app. Both options will require additional authentication.

Columbia Bank, which has 140 branches in Washington, Oregon and Idaho, also switched gears on Feb. 26, but used a more sensible approach: Sending customers a new user ID, organization ID and temporary password in two separate mailings.


My tweet about whether to name Associated Bank attracted the attention of at least two banking industry security regulators, each of whom spoke with KrebsOnSecurity on condition of not being identified by name or regulatory agency.

Both said their agencies would be using the above examples in briefings with member institutions as instructional on how not to do online banking securely. Both also said small to mid-sized banks are massively beholden to their platform providers, and many banks simply accept the defaults instead of pushing for stronger alternatives.

“I have a lot of communications directly with the chief information security officers, chief security officers, and chief information officers in many institutions,” one regulator said. “Many of them have massively dumbed down their password requirements. A lot of smaller institutions often don’t understand the risk involved in online banking, which is why they try to outsource the whole thing to someone else. But they can’t outsource accountability.”

One of the regulators I spoke with suggested that all of the banks they’d seen transitioning to a new online banking platform on Feb. 26 were customers of Fiserv — the nation’s largest online banking platform provider.

Fiserv did not respond to specific questions for this story, saying only in a written statement that: “Fiserv regularly partners with financial institutions to provide capabilities that help mitigate and manage risk, enhance the customer experience, and allow banks to remain competitive. A variety of methodologies are used by institutions to enroll and authenticate new users onto online banking platforms, and password authentication is one of multiple layers of security used to protect customers.”

Both banking industry regulators I spoke with said a basic problem is that many smaller institutions unfortunately still treat usernames as secret codes. I have railed against this practice for years, but far too many banks treat customer usernames as part of their security, even though most customers pick something very close to the first part of their email address (before the “@” sign). I’ve even skewered some of the airline industry giants for doing the same (United does this with its super-secret frequent flyer account number).

“I think this will be an opportunity for us to coach them on that,” one banking regulator said. “This process has to involve random password generation and that needs to be standard operating procedure. If you can shortcut security just by supplying static data like SSN, it’s all screwed. Some of these organizations have had such poor control structure for so long they don’t even understand how bad it is.”

The other regulator said another challenge is how long banks should wait before disabling accounts if consumers don’t log in to the new online banking system.

“What they’re going to do is set up all these users on this brand new system and give them default passwords,” the regulator said. “Some individuals will log into their bank account every day, others once a month and sometimes quite randomly. So, how are they going to control that window of opportunity? At some point, maybe after a couple of weeks, they need to just disable those other accounts and have people start from scratch.”

The first regulator said it appears many banks (and their platform providers) are singularly focused on making these transitions as seamless and painless as possible for the financial institution and its customers.

“I think they’re looking at making it easier for their customers and lessening the fallout as they get fewer angry and frustrated calls,” the regulator said. “That’s their incentive more than anything else.”


While it may appear that banks are more afraid of calls from their customers than of fallout from identity thieves and hackers, remember that you the consumer can shop with your wallet, and should move your funds to another bank if you’re unhappy with the security practices of your current institution.

Also, don’t re-use passwords. In fact, wherever possible don’t use passwords at all. Instead, choose passphrases over passwords (remember, length is key). Unfortunately, passphrases may not be possible because some banks have chosen to truncate passwords after a certain number of characters, and to disallow special symbols.

If you’re the kind of person who likes to use the same password across multiple sites, then a password manager is definitely for you. That’s because password managers pick strong, long and secure passwords for you and the only thing you have to remember is a single master password.

Please consider any two-step or two-factor authentication options your financial institution may offer, and be sure to take full advantage of that when it’s available. Also, ask your bank to require a unique verbal password before discussing any of your account details over the phone; this prevents someone from calling in to your bank and convincing a customer service rep that he’s you just because he can regurgitate your static personal details.

Finally, take steps to prevent your security from being backdoored by your mobile provider: Check out last week’s tips on blocking mobile number port-out scams, which thieves sometimes use in cashing out hacked bank accounts.

Planet DebianArianit Dobroshi: Debian Bug Squashing Party in Tirana

On 3 March I attended a Debian Bug Squashing Party in Tirana, organized by colleagues at Open Labs Albania, Anisa and friends, and by Daniel. Debian is the second oldest GNU/Linux distribution still active and a launchpad for so many others.

A large number of participants from Kosovo took part, mostly female students. I chose to focus on adding Kosovo to country-lists in Debian by verifying that Kosovo was missing and then filing bug reports or, even better, doing pull requests.

apt-cache rdepends iso-codes will return a list of packages that include ISO codes. However, this proved hard to examine by simply looking at these applications on Debian; one would have to search through their code to find out how the ISO 3166 codes are used. So I left that for another time.
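For the record, that first step looks like this (output abbreviated; the actual list is long):

$ apt-cache rdepends iso-codes
iso-codes
Reverse Depends:
  ...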

I moved next to what I thought I would be able to complete within the event. Coding is becoming quite popular with children in Kosovo. I looked into MIT's Scratch and Google's Blockly, the second one being freer software and targeting younger children. They both work by snapping together logical building blocks into a program.

Translation of Blockly into Albanian is now complete and hopefully will get much use. You can improve on my work at Translatewiki.

Thank you for all the fish and see you at the next Debian BSP.


Planet DebianRaphaël Hertzog: My Free Software Activities in February 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Distro Tracker

Since we switched to salsa, and with the arrival of prospective GSOC students interested in working on distro-tracker this summer, I have been rather active on this project as can be seen in the project's activity summary. Among the most important changes we can note:

  • The documentation and code coverage analysis is updated on each push.
  • Unit tests, functional tests and style checks (flake8) are run on each push but also on merge requests, allowing contributors to have quick feedback on their code. Implemented with this Gitlab CI configuration.
  • Multiple bug fixes (more of it). Update code to use python3-gpg instead of deprecated python3-gpgme (I had to coordinate with DSA to get the new package installed).
  • More unit tests for team related code. Still a work in progress but I made multiple reviews already.

Debian Live

I created the live-team on to prepare for the move of the various Debian live repositories. The move itself has been done by Steve McIntyre. In the discussion, we also concluded that the live-images source package can go away. I thus filed its removal request.

Then I spent a whole day reviewing all the pending patches. I merged most of them and left comments on the remaining ones:

  • Merged #885453 cleaning up double slashes in some paths.
  • Merged #885466 allowing to set upperdir tmpfs mount point size.
  • Merged #885455 switching back the live-boot initrd to use busybox’s wget as it supports https now.
  • Merged #886328 simplifying the mount points handling by using /run/live instead of /lib/live/mount.
  • Merged #886337 adding options to build smaller initrd by disabling some features.
  • Merged #866009 fixing a race condition between live-config and systemd-tmpfiles-setup.
  • Reviewed #884355 implementing new hooks in live-boot’s initrd. Not ready for merge yet.
  • Reviewed #884553 implementing cross-architecture linux flavour selection. Not ready for merge yet.
  • Merged #891206 fixing a regression with local mirrors.
  • Merged #867539 lowering the process priority of mksquashfs to avoid rendering the machine completely unresponsive during this step.
  • Merged #885692 adding UEFI support for ARM64.
  • Merged #847919 simplifying the bootstrap of foreign architectures.
  • Merged #868559 fixing fuse mounts by switching back to klibc’s mount.
  • Wrote a patch to fix verify-checksums option in live-boot (see #856482).
  • I released a new version of live-config but wanted some external testing before releasing the new live-boot. This did not happen yet unfortunately.

Debian LTS

I started a discussion on debian-devel about how we could handle the extension of the LTS program that some LTS sponsors are asking us to do.

The response has been rather mixed so far. It is unlikely that wheezy will be kept on the official mirror after its official EOL date but it's not clear whether it would be possible to host the wheezy updates on some other server for longer.

Debian Handbook

I moved the git repository of the book to salsa and released a new version in unstable to fix two recent bugs: #888575 asking us to implement some parallel building to speed up the build and #888578 informing us that a recent debhelper update broke the build process due to the presence of a build directory in the source package.

Debian Packaging

I moved all my remaining packages to and used the opportunity to clean them up:

  • dh-linktree, ftplib, gnome-shell-timer (fixed #891305 later), logidee-tools, publican, publican-debian, vboot-utils, rozofs
  • Some also got a new upstream release for the same price: tcpdf, lpctools, elastalert, notmuch-addrlookup.
  • I orphaned tcpdf in #889731 and I asked for the removal of feed2omb in #742601.
  • I updated django-modeltranslation to 0.12.2 to fix FTBFS bug #834667 (I submitted an upstream pull request at the same time).

Dolibarr. As a sponsor of dolibarr I filed its removal request and then I started a debian-devel discussion because we should be able to provide such applications to our users even though its development practice does not conform to some of our policies.

Bash. I uploaded a bash NMU (4.4.18-1.1) to fix a regression introduced by the PIE-enabled build (see #889869). I filed an upstream bug against bash but it turns out it’s actually a bug in qemu-user that really ought to be fixed. I reported the bug to qemu upstream but it hasn’t gotten much traction.

pkg-security team. I sponsored many updates over the month: rhash 1.3.5-1, medusa 2.2-5, hashcat, dnsrecon, btscanner, wfuzz 2.2.9, pixiewps 1.4.2-1, inetsim (new from kali). I also made a new upload of sslsniff with the OpenSSL 1.1 patch contributed by Hilko Bengen.

Debian bug reports

I filed a few bug reports:

  • #889814: lintian: Improve long description of epoch-change-without-comment
  • #889816: lintian: Complain when epoch has been bumped but upstream version did not go backwards
  • #890594: devscripts: Implement a salsa-configure script to configure project repositories
  • #890700 and #890701 about missing Vcs-Git fields to siridb-server and libcleri
  • #891301: lintian: privacy-breach-generic should not complain about <link rel=”generator”> and others

Misc contributions

Saltstack formulas. I pushed misc fixes to the munin-formula, the samba-formula and the openssh-formula. I submitted two other pull requests: on samba-formula and on users-formula.

QA’s carnivore database. I fixed a bug in a carnivore script that was spewing error messages about duplicate uids. This database links together multiple identifiers (emails, GPG key ids, LDAP entry, etc.) for the same Debian contributor.


See you next month for a new summary of my activities.


Planet DebianJonathan Dowland: Software for a service like

Can anyone recommend software for running a web service similar to

We are looking for something similar to manage digital assets within the Computing History Special Interest Group.

One suggestion I've had is CKAN which looks very interesting but possibly more geared towards opening up an API to existing live data (such as a relational DB of stuff, distributed or otherwise). We are mostly concerned with relatively static data sets: source code archives, collections of various types of publications, collections of images, etc.

(Having said that, there are some interesting possibilities for projects that consume the data sets in some fashion, perhaps via a web service, for e.g. reviewing OCR results for old raster scans of papers.)

I envisage something similar to the software powering We want both something that lets people explore collections of stuff via the web, including potentially via machine-friendly APIs in some cases; but also ideally manage uploading and categorising items via the web as well.

I've also had suggestions to look at media-manager software, but what I've seen so far is designed for personal media collections like movies, photos, etc., and focussed more on streaming them to LAN clients.

Can anyone recommend something worth looking at?

Planet DebianNorbert Preining: TeX Live 2018 pretest started

Preparations for the release of TeX Live 2018 have started a few days ago with the freeze of updates in TeX Live 2017 and the announcement of the official start of the pretest period. That means that we invite people to test the new release and help fixing bugs.

This year hasn't seen any notable changes, just the usual updates to pdftex, xetex, and luatex, the addition of a luatex53 binary based on lua53 (which will probably become the default in TeX Live 2019), and the addition of a few architectures (musl-based linux, aarch64-linux); nothing earth-shaking.

Please test and report bugs to our mailing list.


CryptogramSecurity Vulnerabilities in Smart Contracts

Interesting research: "Finding The Greedy, Prodigal, and Suicidal Contracts at Scale":

Abstract: Smart contracts -- stateful executable objects hosted on blockchains like Ethereum -- carry billions of dollars worth of coins and cannot be updated once deployed. We present a new systematic characterization of a class of trace vulnerabilities, which result from analyzing multiple invocations of a contract over its lifetime. We focus attention on three example properties of such trace vulnerabilities: finding contracts that either lock funds indefinitely, leak them carelessly to arbitrary users, or can be killed by anyone. We implemented MAIAN, the first tool for precisely specifying and reasoning about trace properties, which employs inter-procedural symbolic analysis and concrete validator for exhibiting real exploits. Our analysis of nearly one million contracts flags 34,200 (2,365 distinct) contracts vulnerable, in 10 seconds per contract. On a subset of 3,759 contracts which we sampled for concrete validation and manual analysis, we reproduce real exploits at a true positive rate of 89%, yielding exploits for 3,686 contracts. Our tool finds exploits for the infamous Parity bug that indirectly locked 200 million dollars worth in Ether, which previous analyses failed to capture.

Worse Than FailureThe Unbidden Password

English - Mortise Lock with Key - Walters 52173

So here's a thing that keeps me up at night: we get a lot of submissions about programmers who cannot seem to think like users. There's a type of programmer who has never not known how computers worked, whose theory of computers in their mind has been so accurate for so long that they can't look at things in a different way. Many times, they close themselves off from users, insisting that if the user had a problem with using the software, they just don't know how computers work and need to educate themselves. Rather than focus on what would make the software more usable, they program what is easiest for the computer to do, and call it a day.

The same is sometimes true of security concerns. Rather than focus on what would be secure, on what the best practices are in the industry, these programmers hammer out something easy and straightforward and consider it good enough. Today's submitter, Rick, recently ran across just such a "security system."

Rick was shopping at a small online retailer, and found some items he liked. He got through the "fill in all your personal information and hope they have good security" stage of the online check-out process and placed his order. At no time was he asked if he wanted an account—which is good, because he never signs up for accounts at small independent retailers, preferring for his card information not to be stored at all. He was asked to fill in his email, which is common enough; a receipt and shipping updates are usually sent to the email associated with the order.

Sure enough, Rick received an email from the retailer moments later. Only this wasn't a receipt. It was, in fact, confirmation of a new account creation ... complete with a password in plain text.

Rick was understandably alarmed. He headed back to the site immediately to change the password to a longer, more secure one-off he could store in a password manager and never, ever have emailed to him in plaintext. But once on the site, he could find no sign of a login button or secure area. So at this point, he had an insecure password he couldn't appear to use, for an account he didn't even want in the first place.

Rick sent an email, worried about this state of affairs. The reply came fairly rapidly, from someone who was likely the sole tech department for the company: this was by design. All Rick had to do next time he purchased any goods was to enter the password on the checkout screen, and it would remember his delivery address for him.

As Rick put it:


So you send a random password insecurely and don't allow the user to change it, only because you think users would rather leave your web page to login to their email, search for the email that includes the password and copy that password in your web page, instead of just filling in their address that they know by heart.

Of course in this case, it doesn't matter one bit: Rick isn't going back to buy anything else. He didn't name-and-shame, but I encourage you to do so in the comments if you know of a retailer with similarly bad security. After all, there's only one thing that can beat programmer arrogance in this kind of situation: losing customers.

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

TEDStatement on incident at TEDxBrussels

March 5, 2018 — Today at TEDxBrussels, an independently organized TEDx event, speaker and performance artist Deborah De Robertis was forcibly removed from the stage by one of the event’s organizers, who objected to the talk’s content.

We have reviewed the situation and spoken with the organizer. While we know there are moments when it is difficult to decide how to respond to a situation, this response was deeply inappropriate. We are immediately revoking the TEDxBrussels license granted to this individual.

TEDFollow your dreams without fear: 4 questions with Zubaida Bai

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with women’s health advocate and TED Fellow Zubaida Bai about what inspires her work to improve the health and livelihoods of women worldwide.

TED: Tell us who you are.
Zubaida Bai: I am a women’s health advocate, a mother, a designer and innovator of health and livelihood solutions for underserved women and girls. I’ve traveled to the poorest communities in the world, listened compassionately to women and observed their challenges and indignities. As an entrepreneur and thought leader, I’m putting my passion into a movement that will address market failures, break taboos, and elevate the health of women and girls as a core topic in the world.

TED: What’s a bold move you’ve made in your career?
ZB: The decision I made with my husband and co-founder to make our company a for-profit venture. We wanted to prove that the poor are not poor in mind, and if you offer them a quality product that they need, and can afford, they will buy it. We also wanted to show that our business model — serving the bottom of the pyramid — was scalable. Being a social sustainable enterprise is tough, especially if you serve women and children. But relying on non-profit donations especially for women's health comes with a price. And that price is often an endless cycle of fundraising that makes it hard to create jobs and economically lift up the very communities being served. We are proud that every woman in our facilities in Chennai receives healthcare in addition to her salary.

TED: Tell us about a woman who inspires you.
ZB: My mother. She worked very hard under social constraints in India that were not favorable towards women. She was always working side jobs and creating small enterprises to help keep our family going, and I learned a lot from her. She also pushed me and believed in me and always created opportunities for me that she was denied and didn’t have access to.

TED: If you could go back in time, what would you tell your 18-year-old self?
ZB: To believe in your true potential. To follow your dreams without fear, as success is believing in your dreams and having the courage to pursue them — not the end result.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Sociological ImagesAre We Really Looking at Body Cameras?

The Washington Post has been collecting data on documented fatal police shootings of civilians since 2015, and they recently released an update to the data set with incidents through the beginning of 2018. Over at Sociology Toolbox, Todd Beer has a great summary of the data set and a number of charts on how these shootings break down by race.

One of the main policy reforms suggested to address this problem is body cameras—the idea being that video evidence will reduce the number of killings by monitoring police behavior. Of course, not all police departments implement these cameras and their impact may be quite small. One small way to address these problems is public visibility and pressure.

So, how often are body cameras incorporated into incident reporting? Not that often, it turns out. I looked at all the shootings of unarmed civilians in The Washington Post’s dataset, flagging the ones where news reports indicated a body camera was in use. The measure isn’t perfect, but it lends some important context.

Body cameras were only logged in 37 of 219 cases—about 17% of the time—and a log doesn’t necessarily mean the camera present was even recording. Sociologists know that organizations are often slow to implement new policies, and they don’t often just bend to public pressure. But there also hasn’t been a change in the reporting of body cameras, and this highlights another potential stumbling block as we track efforts for police reform.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

Planet DebianJacob Adams: PGP Clean Room: GSoC Mentors Wanted

I am a prospective GSoC student and I would be very interested in working on the PGP Clean Room project for Debian this summer. Unfortunately the current confirmed mentor, Daniel Pocock, is involved in the admin team and possibly in multiple other GSoC projects as well. So I am looking for another mentor who would be willing to help me on this project.

The Problems of PGP

PGP is essential to Debian and many other free software projects. It secures almost everything these projects distribute on the Internet. But for new users it can be difficult to set up. It typically requires complex command line interactions that the user doesn’t really understand, leading to much confusion and silly mistakes. Best practice is to generate the keys offline and store them on a set of separate storage devices, but there isn’t currently a tool to handle this well at all. I eventually got TAILS to serve this purpose but it was more difficult than it should have been.

What the PGP Clean Room will do

The PGP Clean room will walk new users through setting up a set of USB flash drives or SD cards as a RAID disk, generating new PGP keys, storing them there, and then exporting subkeys either onto a separate USB stick or a security key like a Yubikey. I’d also like to add the ability to do things like revoke keys or extend expiration dates for them through the application. Additionally, I would like to add an import feature for new keys and support for X.509 key management. My current plan is to write a python-newt application for this and use GPGME’s python bindings to generate the keys.
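As a sketch of that last point, key generation through the GPGME Python bindings could look roughly like this (the home directory, user ID and key parameters are illustrative placeholders, not decisions the project has made):

import gpg

# A dedicated GnuPG home keeps the generated key off the user's normal
# keyring (the path and user ID below are placeholders).
ctx = gpg.Context(home_dir="/media/cleanroom/gnupg")
result = ctx.create_key("Jane Doe <jane@example.org>",
                        algorithm="rsa4096",
                        expires_in=365 * 24 * 3600,
                        certify=True)
print("Generated key with fingerprint", result.fpr)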

My Qualifications

I am currently a package maintainer for a couple of packages in Debian. I’m a freshman intending to major in Computer Science at the College of William and Mary in Virginia, USA. I’ve taken a few college-level CS classes, but as can be seen from my Github profile, I’m mostly self-taught.

I’ve started working on this a little bit and published it on Debian’s Gitlab.

PGP Clean Room GSoC Project

PGP Clean Room Wiki Page

GSoC Mentor’s Guide


Cory DoctorowHow to be better at being pissed off at Big Tech

My latest Locus column, “Let’s Get Better at Demanding Better from Tech,” looks at how science fiction can make us better critics of technology by imagining how tech could be used in different social and economic contexts than the one we live in today.

The “pro-tech” side’s argument is some variation on, “You can’t get the social benefits of Facebook without letting us spy on you and manipulate you — if you want to stay in touch with your friends, that’s the price of admission.” All too often, the “anti-tech” side takes this premise at face value: “Since we can’t hang out with our friends online without being spied on and manipulated, you need to stop wanting to hang out with your friends online.”

But the science fiction version of this goes, “What kinds of systems could we build if we wanted to hang out with our friends without being spied on and manipulated — and what kinds of political, regulatory and technological interventions would make those systems easier to build?”

A critique of technology that focuses on its market conditions, rather than its code, yields up some interesting alternate narratives. It has become fashionable, for example, to say that advertising was the original sin of online publication. Once the norm emerged that creative work would be free and paid for through attention – that is, by showing ads – the wheels were set in motion, leading to clickbait, political polarization, and invasive, surveillant networks: “If you’re not paying for the product, you’re the product.”

But if we understand the contours of the advertising marketplace as being driven by market conditions, not “attention economics,” a different story emerges. Market conditions have driven incredible consolidation in every sector of the economy, meaning that fewer and fewer advertisers call the shots, and meaning that more and more of the money flows through fewer and fewer payment processors. Compound that with lax anti-trust enforcement, and you have companies that are poised to put pressure on publishers and control who sees which information.

In 2018, companies from John Deere to GM to Johnson & Johnson use digital locks and abusive license agreements to force you to submit to surveillance and control how you use their products. It’s true that if you don’t pay for the product, you’re the product – but if you’re a farmer who’s just shelled out $500,000 for a new tractor, you’re still the product.

The “original sin of advertising” story says that if only microtransactions had been technologically viable and commercially attractive, we could have had an attention-respecting, artist-compensating online world, but in a world of mass inequality, financializing culture and discourse means excluding huge swaths of the population from the modern public sphere. If the Supreme Court’s Citizens United decision has you convinced that money has had a corrupting influence on who gets to speak, imagine how corrupting the situation would be if you also had to pay to listen.

Let’s Get Better at Demanding Better from Tech [Cory Doctorow/Locus]

Rondam RamblingsIs it time to take the Hyperloop seriously? No.

Over four years since it was first introduced, Ars Technica asks if it is time to take the Hyperloop seriously.  And four years since I first gave it, the answer is still a resounding no.  Not only has the thermal expansion problem not been solved, there has been (AFAICT) absolutely no attention paid to simple operational concerns that could be show-stoppers.  Like terrorism.  If you think

Planet DebianGunnar Wolf: # apt install yum

# apt install yum

No, I'm not switching to Fedora or anything like that.

CryptogramIntimate Partner Threat

Princeton's Karen Levy has a good article on computer security and the intimate partner threat:

When you learn that your privacy has been compromised, the common advice is to prevent additional access -- delete your insecure account, open a new one, change your password. This advice is such standard protocol for personal security that it's almost a no-brainer. But in abusive romantic relationships, disconnection can be extremely fraught. For one, it can put the victim at risk of physical harm: If abusers expect digital access and that access is suddenly closed off, it can lead them to become more violent or intrusive in other ways. It may seem cathartic to delete abusive material, like alarming text messages -- but if you don't preserve that kind of evidence, it can make prosecution more difficult. And closing some kinds of accounts, like social networks, to hide from a determined abuser can cut off social support that survivors desperately need. In some cases, maintaining a digital connection to the abuser may even be legally required (for instance, if the abuser and survivor share joint custody of children).

Threats from intimate partners also change the nature of what it means to be authenticated online. In most contexts, access credentials­ -- like passwords and security questions -- are intended to insulate your accounts against access from an adversary. But those mechanisms are often completely ineffective for security in intimate contexts: The abuser can compel disclosure of your password through threats of violence and has access to your devices because you're in the same physical space. In many cases, the abuser might even own your phone -- or might have access to your communications data because you share a family plan. Things like security questions are unlikely to be effective tools for protecting your security, because the abuser knows or can guess at intimate details about your life -- where you were born, what your first job was, the name of your pet.

Planet DebianJulien Danjou: Scaling a polling Python application with tooz

This article is the final one of the series I wrote about scaling a large number of connections in a Python application. If you don't remember what problem we're trying to solve, here it is, as posed by one of my followers:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

The first blog post was exploring a solution based on threads, while the second blog post was exploring an architecture around asyncio.

In the two first articles, we wrote programs that could handle this problem by using multiple threads or asyncio – or both. While this worked pretty well, this had some limitations, such as only using one computer. So this time, we're going to take a different approach and use multiple computers!

The job

Writing a Python application that connects to a host by ssh can be done using Paramiko or asyncssh, as we've seen previously. Here again, that will not be the focus of this blog post, since it is pretty straightforward to do.
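For reference, a bare-bones Paramiko connection looks something like the sketch below; the host, user and command are placeholders, and this is only an illustration, not code from the series:

import paramiko

client = paramiko.SSHClient()
# Accept unknown host keys; fine for a demo, not for production.
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("192.168.2.10", username="admin")
stdin, stdout, stderr = client.exec_command("uptime")
print(stdout.read().decode())
client.close()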

To keep this exercise simple, we'll reuse our ping function from the first article. It looked like this:

import subprocess

def ping(hostname):
    # Discard ping's output; only the exit status matters.
    # (The output redirection is reconstructed here.)
    p = subprocess.Popen(["ping", "-c", "3", "-w", "1", hostname],
                         stdout=subprocess.DEVNULL,
                         stderr=subprocess.DEVNULL)
    return p.wait() == 0

As a reminder, running this program alone and serially pinging 255 IP addresses takes more than 10 minutes. Let's try to make it faster by running it in parallel.

The architecture

Remember: if pinging 255 hosts takes 10 minutes, pinging the whole Internet is going to take forever – around five years at this rate.

With our ping experiment, we already divided our mission (e.g. "who's alive on the Internet") into very small tasks ("ping"). If we want to ping 4 billion hosts, we need to run those tasks in parallel. But one computer is not going to be enough: we need to distribute those tasks to different hosts, so we can use some massive parallelism to go even faster!

There are two ways to distribute such a set of tasks:

  • Use a queue. That works well for jobs that are not determined in advance, such as user-submitted tasks, or jobs that are going to be executed only once.

  • Use a distribution algorithm. That works only for tasks that are determined in advance and that are scheduled regularly, such as polling.

We are going to pick the second option here, as those ping tasks (or polling in the original problem) should be run regularly. That approach will allow us to spread the jobs across several processes, which can in turn be spread across several nodes over a network. We also won't have to "maintain" the queue (e.g. make it work and monitor it), so that's also a bonus point.

That's infinite horizontal scalability!

The distribution algorithm

The algorithm we're going to use to distribute this task is based on a consistent hashring.

Here's how it works in short. Picture a circular ring. We map objects onto this ring. The ring is then split into partitions. Those partitions are distributed among all the workers. The workers take care of jobs that are in the partitions they are responsible for.

In the case where a new node joins the ring, it is inserted between 2 nodes and takes a bit of their workload. In the case where a node leaves the ring, the partitions it was taking care of are reassigned to its adjacent nodes.

If you want more details, there are plenty of explanations of how this algorithm works. Feel free to look online!
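To make the ring a bit less abstract, here is a toy sketch in plain Python. This is only an illustration of the idea, not Tooz's implementation; real implementations place many replicas of each node on the ring to even out the distribution:

import bisect
import hashlib

def _hash(key):
    # Map any string onto a point of the ring (a 32-bit integer here).
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

class ToyHashRing(object):
    def __init__(self, nodes):
        # Place each node at one point on the ring.
        self.ring = sorted((_hash(node), node) for node in nodes)

    def node_for(self, key):
        # Walk clockwise from the key's point to the first node found.
        points = [point for point, _ in self.ring]
        index = bisect.bisect(points, _hash(key)) % len(self.ring)
        return self.ring[index][1]

ring = ToyHashRing(["client1", "client2", "client3"])
print(ring.node_for("192.168.2.42"))  # the node responsible for this key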

However, to make this work, we need to know which nodes are alive or dead. This is another problem to solve, and the best way to tackle it is to use a coordination mechanism. There are plenty of those, from Apache ZooKeeper to etcd.

Without going too much into details, those pieces of software provide a network service where every node can connect and manage its state. If a client gets disconnected or crashes, it's then easy to consider it as removed. That enables the application to get the full list of nodes and split the ring accordingly. There's no need to have any shared state between the nodes other than who's alive and running.

Using group membership

To get a list of nodes that are available to help us pinging the Internet, we need a service that provides this and a library to interact with it. Since the use case is pretty simple and I don't know which backends you like the most, we're going to use the Tooz library.

Tooz provides a coordination mechanism on top of a large variety of backends: ZooKeeper or etcd, as suggested earlier, but also Redis or memcached for those who want to live more dangerously. Indeed, while ZooKeeper or etcd can be set up in a synchronized cluster, memcached, on the other hand, is a SPOF.

For the sake of the exercise, we're going to use a single instance of etcd here. Thanks to Tooz, switching to another backend would be a one-line change anyway.

Tooz provides a tooz.coordination.Coordinator object that represents the connection to the coordination subsystem. It then exposes an API based on groups and members. A member is a node connected through a Coordinator instance. A group is a place that members can join or leave.

Here's a first implementation of a member joining a group and printing the member list:

import sys
import time
from tooz import coordination

# Check that a client and group ids are passed as arguments
if len(sys.argv) != 3:
    print("Usage: %s <client id> <group id>" % sys.argv[0])
    sys.exit(1)

# Get the Coordinator object (the etcd backend URL is reconstructed here)
c = coordination.get_coordinator("etcd3://localhost", sys.argv[1].encode())
# Start it (initiate connection).
c.start(start_heart=True)

group = sys.argv[2].encode()

# Create the group, ignoring the error if it already exists
try:
    c.create_group(group).get()
except coordination.GroupAlreadyExist:
    pass

# Join the group
c.join_group(group).get()

try:
    while True:
        # Print the members list
        members = c.get_members(group)
        print(members.get())
        time.sleep(1)
finally:
    # Leave the group
    c.leave_group(group).get()
    # Stop when we're done
    c.stop()

Don't forget to run etcd on your machine before running this program. Running a first instance of this program will print set(['client1']) every second. As soon as you run a second instance of this program, they both start to print set(['client1', 'client2']). If you shut down one of the clients, they will print the member list with only one member in it.

This can work with any number of clients. If a client crashes rather than disconnecting properly, its membership will automatically expire after a few seconds – you can configure this expiration period by passing a timeout value in the Tooz URL.
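For example, with the etcd backend the URL could look something like the line below; the exact option name depends on the driver, so treat this as an illustration rather than a definitive reference:

etcd3://localhost:2379/?timeout=30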

Using consistent hashing

Now that we have a group, which will turn out to be our ring, we can implement a consistent hashring on top of it. Fortunately, Tooz also provides an implementation of this that is ready to be used. Rather than using the join_group method, we're going to use the join_partitioned_group method.

import sys
import time
from tooz import coordination

# Check that a client and group ids are passed as arguments
if len(sys.argv) != 3:
    print("Usage: %s <client id> <group id>" % sys.argv[0])
    sys.exit(1)

# Get the Coordinator object (the etcd backend URL is reconstructed here)
c = coordination.get_coordinator("etcd3://localhost", sys.argv[1].encode())
# Start it (initiate connection).
c.start(start_heart=True)

group = sys.argv[2].encode()

# Join the partitioned group
p = c.join_partitioned_group(group)

try:
    while True:
        # Print which member handles each of ten sample objects
        for obj in range(10):
            print("%s handled by %s" % (obj, p.members_for_object(obj)))
        time.sleep(1)
finally:
    # Leave the group
    c.leave_group(group).get()
    # Stop when we're done
    c.stop()

Running this program on one node (or just one terminal) will output the following every second:

$ python client1 foobar
0 handled by set(['client1'])
1 handled by set(['client1'])
2 handled by set(['client1'])
3 handled by set(['client1'])
4 handled by set(['client1'])
5 handled by set(['client1'])
6 handled by set(['client1'])
7 handled by set(['client1'])
8 handled by set(['client1'])
9 handled by set(['client1'])

As soon as a second member joins (just run another copy of the script in another terminal), the output changes, and both running programs output the same thing:

0 handled by set(['client2'])
1 handled by set(['client1'])
2 handled by set(['client1'])
3 handled by set(['client1'])
4 handled by set(['client1'])
5 handled by set(['client2'])
6 handled by set(['client2'])
7 handled by set(['client1'])
8 handled by set(['client1'])
9 handled by set(['client2'])

They just shared the ten objects between them. They did not communicate with each other. They just know each other's presence, and since they are using the same algorithm to compute where an object should belong, they share the same results. You can do the test with a third copy of the program:

0 handled by set(['client2'])
1 handled by set(['client1'])
2 handled by set(['client1'])
3 handled by set(['client1'])
4 handled by set(['client1'])
5 handled by set(['client2'])
6 handled by set(['client2'])
7 handled by set(['client3'])
8 handled by set(['client1'])
9 handled by set(['client3'])

Here we got a third client in the mix, excellent! If we stop one of the clients, the rebalancing is done automatically.

While the consistent hashing approach is great, it has a few characteristics you might want to know about:

  • The distribution algorithm is not made to be perfectly even. If you have a vast number of objects, it might seem pretty even statistically, but if you are trying to distribute two objects on two nodes, it's probable that one node will handle both objects and the other none.

  • The distribution is not done in real time, meaning there's a small chance that an object might be owned by two nodes at the same time. This is not a problem in a scenario such as this one, since pinging a host twice is not going to be a big deal, but if your job needs to be unique and executed once and only once, this might not be an adequate method of distribution. In that case, rather use a queue, which has the proper characteristics.

Distributed ping

Now that we have our hashring ready to distribute our job, we can implement our final program!

import subprocess
import sys
import time
from tooz import coordination

# Check that a client and group ids are passed as arguments
if len(sys.argv) != 3:
    print("Usage: %s <client id> <group id>" % sys.argv[0])
    sys.exit(1)

# Get the Coordinator object (the etcd backend URL is reconstructed here)
c = coordination.get_coordinator("etcd3://localhost", sys.argv[1].encode())
# Start it (initiate connection).
c.start(start_heart=True)

group = sys.argv[2].encode()

# Join the partitioned group
p = c.join_partitioned_group(group)

class Host(object):
    def __init__(self, hostname):
        self.hostname = hostname

    def __tooz_hash__(self):
        """Returns a unique byte identifier so Tooz can distribute this object."""
        return self.hostname.encode()

    def __str__(self):
        return "<%s: %s>" % (self.__class__.__name__, self.hostname)

    def ping(self):
        # Discard ping's output; only the exit status matters.
        proc = subprocess.Popen(["ping", "-q", "-c", "3", "-W", "1",
                                 self.hostname],
                                stdout=subprocess.DEVNULL,
                                stderr=subprocess.DEVNULL)
        return proc.wait() == 0

hosts_to_ping = [Host("192.168.2.%d" % i) for i in range(255)]

try:
    while True:
        for host in hosts_to_ping:
            # Only handle the hosts this member is responsible for
            if p.belongs_to_self(host):
                print("Pinging %s" % host)
                if host.ping():
                    print(" %s is alive" % host)
        time.sleep(1)
finally:
    # Leave the group
    c.leave_group(group).get()
    # Stop when we're done
    c.stop()

When the first client starts, it starts iterating over the hosts, and since it is alone, all hosts belong to it. So it starts pinging all nodes:

$ python3 client1 ping
Pinging <Host:>
<Host:> is alive
Pinging <Host:>
<Host:> is alive
Pinging <Host:>

Then, a second client starts pinging too, and automatically the jobs are split. The client1 instance starts skipping some nodes that now belong to client2:

# client1 output
Pinging <Host:>
<Host:> is alive
Pinging <Host:>
Pinging <Host:>
Pinging <Host:>
# client2 output
Pinging <Host:>
Pinging <Host:>
Pinging <Host:>
<Host:> is alive

On the other hand, client2 is skipping nodes that belong to client1. If we want to scale our application further, we can start new clients on other nodes on the network and expand our pinging system!

Just a first step

This ping job does not use a lot of CPU time or I/O bandwidth, and neither would the original ssh case from Alon. However, if it did, this method would be even more valuable, as being able to scale out the resources would be key.

These are just the first steps of the distribution and scalability mechanisms that you can implement using Python. There are a few other options available on top of this mechanism, such as defining different weights for different nodes or using replicas to achieve high availability. I've covered those in my book Scaling Python, if you're interested in learning more!

Planet DebianRenata D'Avila: Women in MiniDebConf Curitiba campaign

This is the text of the crowdfunding campaign I am organizing with five other extraordinary women: Alice, Ana Paula, Anna, Luciana and Miriam.

Women in MiniDebConf. Let's show that, yes, there are many women with potential that are interested in the world of free technologies and who use Debian! Help with the campaign for diversity at MiniDebConf:

Here is what Anna has to say about Debian: "Debian is one of the strongest GNU/Linux distributions with the free software philosophy, and that's why it's so impressive. Anyone who comes in contact with Debian necessarily learns more about free software and the FLOSS culture."

The Debian project provides travel grants for participation in conferences - for people who are considered project members.

And how many of these are women? Very few.

Many women are interested in attending MiniDebConf Curitiba, but most of them do not have the means to travel to Curitiba, especially in a big country like Brazil.

It is a fact that women do not have the same opportunities in the IT world as men, but we can change that history. For this, we need your help.

Let's show that, yes, there are many women with potential interested in the world of free technologies and who use Debian. At MiniDebConf, women who could already be contributing to the community will have an opportunity to interact with it, taking part in tutorials, workshops and talks. It is in everyone's best interest that the community get itself ready to include them.

So, our way of helping to increase diversity in MiniDebConf - and perhaps among the people who contribute to Debian as well - is by giving these women the conditions they need to participate in the conference.

The Debian Women community is well developed in other countries, but in Brazil there were still no registered groups.

At last year's MiniDebConf, there was not one single woman speaker.

But it does not have to be this way.

To be able to increase diversity and to change the current situation of exclusion that we currently have in the Brazilian community, we must act on many fronts. We are already working to foster the local community and to engage other women in the use and development of Debian.

That is why we want to also bring in women who are already Debian users, so they can share their experiences, so they can act as mentors to the newbies and so we can integrate all of them into the Debian development community.

There have already been successful campaigns in Brazil to include women in conferences and technology communities, both as participants and as speakers: PyLadies in FISL, PyLadies in Python Brazil 12, PyLadies in Python Brazil 13 and the Gophercon BR Diversity Scholarship.

With your collaboration, this will be another goal achieved - and the Debian and free software communities will become a bit more representative of our own society.

Bitcoin - 15YFYKHr6CfYmBCyf4JM2g8WFkCmNGDGi5

Women in MiniDebConf. Let's show that, yes, there are many women with potential that are interested in the world of free technologies and who use Debian! Help with the campaign for diversity at MiniDebConf:

Link to the campaign:

Planet DebianNorbert Preining: Debian/TeX Live 2017.20180305-1

TeX Live 2017 has received its final update just the other day and we are moving forward to start the period of testing leading up to the release of TeX Live 2018. Thus, this release for Debian is also the last one based on TeX Live 2017, and updates based on the TeX Live 2018 pretest will hit experimental in the next few days. This release does not bring too many new items, mostly a bug fix for upgrades from Jessie, plus the usual bunch of updated packages, see below.

Barring any serious bugs, this will be the last upload to Debian/unstable for quite some time; more specifically, till TeX Live 2018 is released in about 2 months.

Enjoy the break!

Updated packages

animate, beebe, bib2gls, biblatex-publist, csplain, cyber, fei, fontawesome5, glossaries-extra, lshort-english, luaxml, mathpunctspace, media9, mpostinl, newtx, nicematrix, pixelart, polexpr, reledmac, rubik, thaispec, uantwerpendocs, univie-ling, xecjk, xint.

Worse Than FailureCodeSOD: A Very Private Memory

May the gods spare us from “clever” programmers.

Esben found this little block of C# code:

System.Diagnostics.Process proc = System.Diagnostics.Process.GetCurrentProcess();
long check = proc.PrivateMemorySize64;
if (check > 1150000000) {
    MessageBox.Show("Too much memory in use; please restart."); // exact message unknown; the article only says a message box pops up and the method returns
    return;
}

Even before you check on the objects and methods in use, it’s hard to figure out what the heck this method is supposed to do. If some memory stat is over a certain size, pop up a message box and break out of the method? Why? Isn’t this more the case for an exception? Since the base value is hard coded, what happens if I run this code on a machine with way more RAM? Or configure the CLR to give my process more memory? Or…

If the goal was to prevent an operation from starting if there wasn’t enough free memory, this code is dumb. It’s “clever”, in the sense that the original developer said, “Hey, I’m about to do something memory intensive, let me make sure there’s enough memory” and then patted themselves on the head for practicing defensive programming techniques.

But that isn’t what this code exactly does. PrivateMemorySize simply reports how much memory is allocated to the process. Not how much is free, not how much is used, just… how much there is. That number may grow, as the process allocates objects, so if it’s too large relative to available memory… you’ll be paging a bunch, which isn’t great I suppose, but it still doesn’t explain this check.

This almost certainly means this was a case of “works on my/one machine”, where the developer tuned the number 1150000000 based on one specific machine. Either that, or there was a memory leak in the code (and yes, even garbage collected languages can still have memory leaks if you’re an aggressively “clever” programmer) and this was the developer’s way of telling people “Hey, restart the program.”

[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Planet DebianLars Wirzenius: dpkg maintainer script containerisation

Random crazy Debian idea of today: add support to dpkg so that it uses containers (or namespaces, or whatever works for this) for running package maintainer scripts (pre- and postinst, pre- and postrm), to prevent them from accidentally or maliciously writing to unwanted parts of the filesystem, or from doing unwanted network I/O.

I think this would be useful for third-party packages, but also for packages from Debian itself. You heard it here first! Debian package maintainers have been known to make mistakes.

Obviously there needs to be ways in which these restrictions can be overridden, but that override should be clear and obvious to the user (sysadmin), not something they notice because they happen to be running strace or tcpdump during the install.

Corollary: dpkg could restrict where a .deb can place files based on the origin of the package.

Example: Installing chrome.deb from Google installs a file in /etc/apt/sources.list.d, which is a surprise to some. If dpkg were to not allow that (as a file in the .deb, or a file created in postinst), unless the user was told and explicitly agreed to it, it would be less of a nasty surprise.

Example: Some stupid Debian package maintainer is very busy at work and does Debian hacking when they should really be sleeping, and types the following into their postrm script, while being asleep:


LIB="/var/lib/ $PKG"
rm -rf "$LIB"

See the mistake? Ideally, this would be found during automated testing before the package gets uploaded, but that assumes said package maintainer uses tools like piuparts.

I think it'd be better if we didn't rely only on infallible, indefatigable people with perfect workflows and processes for safety.

Having dpkg make the whole filesystem read-only, except for the parts that clearly belong to the package, based on some sensible set of rules, or based a suitable override, would protect against mistakes like this.

Planet DebianRussell Coker: WordPress Multisite on Debian

WordPress (a common CMS for blogs) is designed to be copied to a directory that Apache can serve and run by a user with no particular privileges, while managing installation of its own updates and plugins. Debian is designed around the idea of the package management system controlling everything on behalf of a sysadmin.

When I first started using WordPress there was a version called “WordPress MU” (Multi User) which supported multiple blogs. It was a separate archive to the main WordPress and didn’t support all the plugins and themes. As a main selling point of WordPress is the ability to select from the significant library of plugins and themes this was a serious problem.

Debian WordPress

The people who maintain the Debian package of WordPress have always supported multiple blogs on one system and made it very easy to run in that manner. There’s a /etc/wordpress directory for configuration files for each blog, with names such as config-<hostname>.php. This allows having multiple separate blogs running from the same tree of PHP source, which means only one thing to update when there’s a new version of WordPress (often fixing security issues).

One thing that appears to be lacking with the Debian system is separate directories for “media”. WordPress supports uploading images (which are scaled to several different sizes) as well as sound and apparently video. By default under Debian they are stored in /var/lib/wordpress/wp-content/uploads/YYYY/MM/filename. If you have several blogs on one system they all get to share the same directory tree, that may be OK for one person running multiple blogs but is obviously bad when several bloggers have independent blogs on the same server.


If you enable the “multisite” support in WordPress then you have WordPress support for multiple blogs. The administrator of the multisite configuration has the ability to specify media paths etc for all the child blogs.

The first problem with this is that one person has to be the multisite administrator. As I’m the sysadmin of the WordPress servers in question that’s an obvious task for me. But the problem is that the multisite administrator doesn’t just do sysadmin tasks such as specifying storage directories. They also do fairly routine tasks like enabling plugins. Preventing bloggers from installing new plugins is reasonable and is the default Debian configuration. Preventing them from selecting which of the installed plugins are activated is unreasonable in most situations.

The next issue is that some core parts of WordPress functionality on the sub-blogs refer to the administrator blog, recovering a forgotten password is one example. I don’t want users of other blogs on the system to be referred to my blog when they forget their password.

A final problem with multisite is that it makes things more difficult if you want to move a blog to another system. Instead of just sending a dump of the MySQL database and a copy of the Apache configuration for the site, you have to configure which blog will be its master. If going between multisite and non-multisite you have to change some of the data about accounts; this will be annoying both when adding new sites to a server and when moving sites from the server to a non-multisite server somewhere else.

I now believe that WordPress multisite has little value for people who use Debian. The Debian way is the better way.

So I had to back out the multisite changes. Fortunately I had a cron job to make snapshots of the BTRFS subvolume that has the database so it was easy to revert to an older version of the MySQL configuration.

Upload Location

update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/' where option_name='upload_path';

It turns out that if you don’t have a multisite blog then there’s no way of changing the upload directory without using SQL. The above SQL code is an example of how to do this. Note that it seems that there is special case handling of a value of ‘wp-content/uploads‘ and any other path needs to be fully qualified.
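For completeness, one way to run that statement from a shell would be something like the following; the database name and the etbe_ table prefix are specific to my setup, so adjust them for yours:

mysql wordpress -e "update etbe_options set option_value='/var/lib/wordpress/wp-content/uploads/' where option_name='upload_path';"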

For my own blog however I choose to avoid the WordPress media management and use the following shell script to create suitable HTML code for an image that links to a high resolution version. I use GIMP to create the smaller version of the image which gives me a lot of control over how to crop and compress the image to ensure that enough detail is visible while still being small enough for fast download.

#!/bin/bash
set -e

if [ "$BASE" = "" ]; then
  # Default URL prefix; placeholder value, set BASE in the environment to override.
  BASE="https://example.com/wp-content/uploads"
fi

while [ "$1" != "" ]; do
  BIG="$1"
  SMALL=$(echo $1 | sed -s s/-big//)
  RES=$(identify $SMALL|cut -f3 -d\ )
  WIDTH=$(($(echo $RES|cut -f1 -dx)/2))px
  HEIGHT=$(($(echo $RES|cut -f2 -dx)/2))px
  echo "<a href=\"$BASE/$BIG\"><img src=\"$BASE/$SMALL\" width=\"$WIDTH\" height=\"$HEIGHT\" alt=\"\" /></a>"
  shift
done

Planet DebianLior Kaplan: Running for OSI board

After serving in the board of a few technological Israeli associations, I decided to run as an individual candidate in the OSI board elections which starts today. Hoping to add representation outside of North America and Europe. While my main interest is the licensing work, another goal I wish to achieve is to make OSI more relevant for Open Source people on a daily basis, making it more central for communities.

This year there are 12 candidates for 2 individual seats and 5 candidates for 2 affiliate seats (full list at OSI elections wiki page). Wish me luck (:

Planet DebianJan Wagner: Comparing (OVH) I/O performance

For some time now I have been using cloud resources provided by OVH for some projects I'm involved in.

Recently we decided to give Zammad, an open source support/ticketing solution, a try. We chose the docker compose way of deployment, which also includes an elasticsearch instance. The important part of this is that storage performance has a huge impact on elasticsearch indexing.

The documentation suggests at least 4 GB RAM for running the Zammad compose stack. So I chose a VPS Cloud 2 (it has 4 GB RAM and 50 GB Ceph storage) without thinking much more about it.

After I deployed my simple docker setup, and on top of it the zammad compose setup, everything mostly ran smoothly. Unfortunately, when starting the whole zammad compose stack, elasticsearch regenerates the whole index, which might take a long(er) time depending on the size of the index and the performance of the system. This has to be done before the UI becomes available and ready for use.

To make a long story short, I had the same setup on a testground where it was several times faster than on the production setup. So I decided it was time to have a look into the performance of my OVH resources. Over time I got access to a couple of them, even some bare metal systems.

For my test I just grabbed the following sample:

  • VPS 2016 Cloud 2
  • VPS-SSD-3
  • VPS 2016 Cloud RAM 1
  • VPS 2014 Cloud 3
  • HG-7
  • SP-32 (that's a bare metal with software raid)

While looking into what would be the best way to benchmark I/O, it came to my attention that comparing I/O for cloud resources is not so uncommon. I also learned that dd might not be the first choice; fio seems a good catch for doing quick I/O benchmarks, and ioping for testing I/O latency.

As the systems all running Debian, at least 8.x, I used the following command(s) for doing my tests:

aptitude -y install -o quiet=2 ioping fio > /dev/null && \
 time fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --name=test --output=/tmp/tempfile --bs=4k --iodepth=64 --size=4G --readwrite=randrw --rwmixread=75; \
 rm -f test.*; cat /tmp/tempfile; \
 ioping -c 10 /root | tail -4

The output on my VPS 2016 Cloud 2 system:

Jobs: 1 (f=1): [m(1)] [100.0% done] [1529KB/580KB/0KB /s] [382/145/0 iops] [eta 00m:00s]
real	14m20.420s
user	0m14.620s
sys	1m4.424s
test: (g=0): rw=randrw, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
Starting 1 process
test: Laying out IO file(s) (1 file(s) / 4096MB)

test: (groupid=0, jobs=1): err= 0: pid=19377: Fri Mar  2 18:16:12 2018
  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
  cpu          : usr=1.43%, sys=6.34%, ctx=835077, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=785996/w=262580/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=3070.4MB, aggrb=3888KB/s, minb=3888KB/s, maxb=3888KB/s, mint=808475msec, maxt=808475msec
  WRITE: io=1025.8MB, aggrb=1299KB/s, minb=1299KB/s, maxb=1299KB/s, mint=808475msec, maxt=808475msec

Disk stats (read/write):
  sda: ios=787390/263575, merge=612/721, ticks=49277288/2701580, in_queue=51980604, util=100.00%
--- /root (ext4 /dev/sda1) ioping statistics ---
9 requests completed in 4.56 ms, 36 KiB read, 1.97 k iops, 7.71 MiB/s
generated 10 requests in 9.00 s, 40 KiB, 1 iops, 4.44 KiB/s
min/avg/max/mdev = 423.4 us / 506.8 us / 577.3 us / 43.7 us

The interesting parts:

  read : io=3070.4MB, bw=3888.9KB/s, iops=972, runt=808475msec
  write: io=1025.8MB, bw=1299.2KB/s, iops=324, runt=808475msec
min/avg/max/mdev = 423.4 us / 506.8 us / 577.3 us / 43.7 us

After comparing the results with the rest of the systems, my samples of the VPS 2016 Cloud instances do not convince me that I would choose such a system for use cases where I/O might be a critical part.

Planet DebianRussell Coker: Compromised Guest Account

Some of the workstations I run are sometimes used by multiple people. Having multiple people share an account is bad for security so having a guest account for guest access is convenient.

If a system doesn’t allow logins over the Internet then a strong password is not needed for the guest account.

If such a system later allows logins over the Internet then hostile parties can try to guess the password. This happens even if you don’t use the default port for ssh.

This recently happened to a system I run. The attacker logged in as guest, changed the password, and installed a cron job to run every minute and restart their blockchain mining program if it had been stopped.

In 2007 a bug was filed against the Debian package openssh-server requesting that an AllowUsers directive be added to the default /etc/ssh/sshd_config file [1]. If that bug hadn’t been marked as “wishlist” and left alone for 11 years, then I would probably have set it to only allow ssh connections to the one account that I desired, which always had a strong password.
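For reference, restricting ssh logins to a single account is a one-line directive (the username here is a placeholder):

# /etc/ssh/sshd_config
AllowUsers myadminaccount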

I’ve been a sysadmin for about 25 years (since before ssh was invented). I have been a Debian Developer for almost 20 years, including working on security related code. The fact that I stuffed up in regard to this issue suggests that there are probably many other people making similar mistakes, and probably most of them aren’t monitoring things like system load average and temperature which can lead to the discovery of such attacks.

Planet DebianRuss Allbery: Free software log (February 2018)

Last month, I did a single software release: a new version of pgpcontrol, the collection of tools to check signed Usenet control messages. This is a pure maintenance release to keep it alive using GnuPG 1.0. The package is kind of a mess and needs a clean rewrite that I haven't had time to do yet (which is why I don't even have a software page for it).

Other than that, I didn't finish anything sufficiently to generate a new release, but I'm close on a bunch of fronts. Most of the user-visible (eventually) work went into podlators, the conversion tools from POD (Perl's documentation format) to text and man pages. Based on an excellent series of bug reports from eponymous alias, I fixed a bunch of long-standing bugs in Pod::Text, Pod::Text::Color, and Pod::Text::Termcap, and continued the slow process of reworking the package test suite to be cleaner and easier to maintain.

In C TAP Harness, I took an idea from the Rust assert macros and changed the arguments for all the TAP functions from wanted and seen to left and right. This way, one doesn't have to care about the order in which to pass arguments (which I can never remember). It will make it easier to update the INN test suite to the current TAP library interface, since I had used the opposite order for all of the original INN tests I wrote.

I spent a bunch of time adding SPDX identifiers to my utility functions that are intended for copying into other packages, and laid the groundwork for using SPDX identifiers in all of my projects. I picked up the habit of being careful about license notices from Debian work, and SPDX (if a bit weird in places, such as its utterly opaque file specification) is the first comprehensive and unambiguous labeling system. I have a horrible Perl script that does a lot of guesswork to generate a license file for my packages now, and am hoping to replace that with something (largely) based on SPDX.

Finally, I updated my Debian packaging with Git notes, and wrote new notes on using sbuild.


Planet DebianThorsten Alteholz: My Debian Activities in February 2018

FTP master

This month everything came back to normal and I accepted 272 packages and rejected 30 uploads. The overall number of packages that got accepted this month was 423.

Debian LTS

This was my forty-fourth month of doing some work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 23.75h. During that time I did LTS uploads of:

  • [DLA 1279-1] clamav security update for two CVEs
  • [DLA 1286-1] quagga security update for three CVEs
  • [DLA 1290-1] libvpx security update for one CVE
  • [DSA 4125-1] wavpack security update for three Jessie CVEs and three Stretch CVEs

The issues for wavpack did not affect Wheezy, so there has been no DLA. Instead the security team accepted my debdiff for Jessie and Stretch and published a DSA. Thanks to Sebastien for doing this.
I also started to work on a fix for ICU. Unfortunately Moritz did not agree with me on the correct patch for this. As upstream has not yet responded to my query, I did not do an upload.
I also did not finish my work on opencv; I am still searching for the correct C++ template. On the other hand, I finished work on 12 of 22 CVEs for wireshark. The rest will be done in March.

Other stuff

During February I uploaded new upstream versions of …

I also moved all alljoyn packages as well as a56 to salsa.

Planet DebianNiels Thykier: Prototyping a new packaging papercut fix – DRY for the debhelper compat level

Have you ever looked at packaging and felt it is a long exercise in repeating yourself?  If you have, you are certainly not alone.  You can find examples of this on the Debian mailing lists (among other places).  Such as when Russ Allbery pointed out that the debhelper compat level and the version in the debhelper build-dependency are very often, but not always, the same.

Russ suggests two ways of solving the problem:

  1. The first proposal is to generate the build-dependency from the compat file. However, generating the control file is (as Russ puts it) “fraught with peril”.  Probably because we do not have good or standardized tooling for it – creating such tooling and deploying it will take years.  Not to mention that most contributors appear to be uncomfortable with handling the debian/control as a generated file.
  2. The alternative proposal from Russ is to assume that the major version of the build-dependency should mark the compat level (assuming no compat file exists).  However, Russ again points out an issue here: that solution might be “too magical”.  Indeed, this solution has the problem that you implicitly change the compat level as soon as you bump the versioned dependency beyond a major version of debhelper, but only if you do not have a compat file.

Looking at these two options, the concept behind the second one is most likely to be deployable in the near future.  However, the solution would need some tweaking, and I have spent my time coming up with an alternative.

The third alternative:

My current alternative to Russ’s second proposal is to make debhelper provide multiple versions of “debhelper-compat” and have packages use a Build-Depends on “debhelper-compat (= X)”, where X denotes the desired compat level.  The build-dependency will then replace the “debhelper (>= X~)” relation when the package does not require a specific version of debhelper (beyond what is required for the compat level).
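In debian/control, that would look something like this minimal sketch (the source package name is a placeholder; the version in the relation is the compat level):

Source: hello
Build-Depends: debhelper-compat (= 11)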

On top of this, debhelper implements some safe-guards to ensure that it can reliably determine the compat level from the build-dependencies.  Notably, there must be exactly one debhelper compat relation, it must be in the “Build-Depends” field and it must have a “strictly equal version” as version constraint.  Furthermore, it must not have any Build-Profile or architecture restrictions and so on.


With all of this in place:

  1. We have no repetition when it is not required.
  2. debhelper can reliably auto-detect which debhelper-compat level you wanted.  Otherwise, debhelper will ensure the build fails (or apt/aptitude will, if you end up misspelling the package name or using an invalid version).
  3. Bumping the debhelper compat level is still explicit and separate from bumping the debhelper dependency when you need a  feature or bug fix from a later version.

Testing the prototype:

If you want to test the prototype, you can do so in unstable and testing at the moment (caveat: it is an experimental feature and may change or disappear without notice).  However, please note that lintian is completely unaware of this and will spew out several false-positives – including one nonfatal auto-reject, so you will have to apply at least one lintian override.  Also note, I have only added “debhelper-compat” versions for non-deprecated compat levels.  In other words, you will have to use compat 9 or later to test the feature.

You can use “mscgen/0.20-11” as an example of the minimum changes required.  Admittedly, the example cheats and relies on “debhelper-compat (= 10)” implying a “debhelper (>= 11.1.5~alpha1)” relation, as that is the first version with the provides for debhelper-compat.  Going forward, if you need a feature from debhelper that appears in a later version than that, then you will need an explicit “debhelper (>= Y)” relation for that feature on top of the debhelper-compat relation.

Will you remove the support for debian/compat if this prototype works?

I have no immediate plans to remove the debian/compat file even if this prototype is successful.

Will you upload this to stretch-backports?

Yes, although I am waiting for a fix for #889567 to be uploaded to stretch-backports first.

Will this work for backports?

It worked fine on the buildds when I deployed it in experimental and I believe the build-dependency resolution process for experimental is similar (enough) to backports.

Will the third-party debhelper tools need changes to support this?

Probably not; most third-party debhelper tools do not seem to check the compat level directly. Even then, most tools use the “compat” sub from “Debian::Debhelper::Dh_Lib”, which handles all of this automatically.

That said, if you have a third-party tool that wishes or needs to check the debhelper compat level, please file a bug requesting a cross-language API for this and I will happy look at this.

Future work

I am considering applying this concept to the dh sequence add-ons as well (i.e. the “dh $@ --with foo”).  From my PoV, this is another case needing a DRY fix.  Plus this would also present an opportune method for solving #836699 – though, the hard part for #836699 is actually taming the dh add-on API plus dh’s sequence handling to consistently only affect the “indep” part of the sequences.
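For context, the add-on mechanism this would de-duplicate is the one invoked from a typical debian/rules; the add-on name below is only an example:

#!/usr/bin/make -f
%:
	dh $@ --with sphinxdoc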

Planet DebianBits from Debian: New Debian Developers and Maintainers (January and February 2018)

The following contributors got their Debian Developer accounts in the last two months:

  • Alexandre Mestiashvili (mestia)
  • Tomasz Rybak (serpent)
  • Louis-Philippe Véronneau (pollo)

The following contributors were added as Debian Maintainers in the last two months:

  • Teus Benschop
  • Kyle John Robbertze
  • Maarten van Gompel
  • Dennis van Dok
  • Innocent De Marchi
  • David Rabel


Planet DebianSean Whitton: Why have combat encounters in 5e D&D?

A friend and I each run a D&D game, and we also play in each other’s games. We disagree on a number of different things about how the game is best played, and I learn a lot from seeing how both sets of choices play out in each of the two games.

One significant point of disagreement is how important it is to ensure that combat is balanced. In my game I disallow all homebrew and third party content. Only the core rulebooks, and official printed supplements, are permitted. By contrast, my friend has allowed several characters in his game to use homebrew races from the Internet, which are clearly more powerful than the PHB races. And he is quite happy to make modifications to spells and abilities without investigating the consequences for balance. Changes which seem innocuous can have balance consequences that you don’t realise for some time or do not observe without playtesting; I always assume the worst, and don’t change anything. (I constantly reflavour abilities and stats. In this post I’m interested in crunch alone.)

In this post I want to explain why I put such a premium on balance. Before getting on to that explanation, I first need to say something about the blogger Mike Shea’s claim that “D&D 5e is imbalanced by design and that’s ok. Imbalance leads to interesting stories.” (one, two). Shea is drawing a contrast between 4e and 5e. It was possible to more precisely quantify all character and monster abilities in 4e, which meant that if the calculations showed that an encounter would be easy, medium or hard, it was more likely to turn out to be easy, medium or hard. By contrast, 5e involves a number of abilities that can turn the tide of combat suddenly and against the odds. So while the XP thresholds might indicate that a planned encounter will be an easy one, a monster’s ability to petrify a character with just two failed saves could mean that the whole party goes down. Similarly for character abilities that can turn a powerful boss into a sitting duck for the entire combat. Shea points out that such abilities add an awful lot of fun and suspense to the game that might have been lacking from 4e.

I am not in a position to know whether 4e really lacked the kind of surprise and suspense described here. Regardless, Shea has identified something important about 5e. A great deal is added to combat by abilities on both sides that can quickly turn the tide. However, I find it misleading to say that this makes 5e unbalanced. Shea also uses the term ‘unpredictable’, and I think that’s a better way to refer to this phenomenon. For balance is more than determining an accurate challenge rating, and using this to pit the right number of monsters against the right number of players. In the absence of tide-turning abilities, that’s all balance is; however, balance is also a concept that applies to individual abilities, including tide-turning abilities.

I suggest that a very powerful ability, one that has the potential to change the tide of a battle, is balanced by some combination of being (i) highly situational; (ii) very resource-depleting; and (iii) subject to a saving throw, or similar, with the parameters set so that the full effect of the ability is unlikely to occur. Let me give a few examples. It’s been pointed out that the Fireball spell deals more damage than a multi-target 3rd level spell is meant to deal (DMG, p. 384). However, the spell is highly situational because it is highly likely to also hit your allies. (Evokers have a mitigation for this, but that is at the cost of a full class feature.) Power Word Kill might down a powerful enemy much sooner than expected. But there’s another enemy in the next room, and then that spell slot is gone.

We should conclude that 5e is not imbalanced by design, but unpredictable by design. In fact, I suggest that 5e spells and abilities are a mixture of the predictable and the unpredictable, and the concept of balance applies differently to these two kinds of abilities. A creature’s standard attack is predictable; balancing it is simply a matter of adjusting the to-hit bonus and the damage dice. Balancing its tide-turning ability is a matter of adjusting the factors I discussed in the previous paragraph, and possibly others. Playtesting will be necessary for both of these balancing efforts to succeed. Predictable abilities are unbalanced when they don’t do enough damage often enough, or too much damage too often, as compared with their CR. Unpredictable abilities are unbalanced when they offer systematic ways to change the tide of battle. Indeed, this makes them predictable.

Now that I’ve responded to Shea, I’ll say what I think the point of combat encounters is, and why this leads me to disallow content that has not been rigorously playtested. (My thinking here is very much informed by how Rodrigo Lopez runs D&D on the Critical Hit podcast, and what he says about running D&D in Q&A. Thank you Rodrigo!) Let me first set aside combat encounters that are meant to be a walkover, and combat encounters that are meant to end in multiple deaths or retreat. The purpose of walkover encounters is to set a particular tone in the story. They let the characters know that certain things are not challenging to them, and this can be built into roleplaying (“we are among the most powerful denizens of the realm. That gives us a certain responsibility.”). The purpose of unwinnable combat encounters is to work through turning points in a campaign’s plot. The fact that an enemy cannot be defeated by the party is likely to drive a whole story arc; actually running that combat, rather than just saying that their attempt to fight the enemy fails, helps drive that home, and gives the characters points of reference (“you saw what happened when he turned his evil gaze upon you, Mazril. We’ve got to find another way!”).

Consider, then, other combat encounters. This is what I think they are all about. The GM creates an encounter that the rules say is winnable, or unwinnable but otherwise survivable. Then the GM and the players fight out that encounter within the rules, each side trying to fully exploit the resources available to them, though without doing anything that would not make sense from the points of view of the characters and the monsters. Rolls are not made in secret or fudged, and HP totals are not arbitrarily adjusted. The GM does not pull punches. There are no restrictions on tactical thinking; for example, it’s fine for players to deduce enemy ACs, openly discuss them and act accordingly. However, actions taken must have an in-character justification. The outcome of the battle depends on a combination of tactics and luck: unpredictable abilities can turn the tide suddenly, and that might be enough to win or lose, but most of the time good tactical decision-making on the part of the players is rewarded. (The nature of monster abilities means that fewer interesting tactics are available to the monsters; further, the players have multiple brains between them, so they ought to be able to out-think the GM in most cases.)

The result is that combat is a kind of minigame within D&D. The GM takes on a different role. In particular, GM fiat is suspended. The rules of the game are in charge (except, of course, where the GM has to make a call about a situation not covered by the rules). But isn’t this to throw out the advantages tabletop roleplaying games have over video games? Isn’t the GM’s freedom to bend the rules what makes D&D more fun and flexible? My basic response is that the rules for combat are only interesting when they do not depend on GM fiat, or other forms of arbitrariness, and for the parts of the game where GM fiat works well, it is better to use ability checks, or skill challenges, or straight roleplaying.

The thought is that the complexity of the combat rules is justified only when those rules are not arbitrary. If the players must think tactically within a system that can change arbitrarily, there’s no longer much point in investing energy in that tactical thinking. It is not intellectually interesting, it is much less fun, and it does not significantly contribute to the story. Tabletop games have an important role for a combination of GM fiat and dice rolls—the chance of those rolls succeeding remaining under the GM’s control—but that can be leveraged with simpler rules than those for combat. Now, I think that the combat rules are fun, so it is good to include them alongside the parts of the game that are more straightforwardly a collaboration between the GM and the players. But they should be deployed differently in order to bring out their strengths.

It should be clear, based on this, why I put such a premium on balance in combat: imbalance introduces arbitrariness to the combat system. If my tactical thinking is nullified by the systematic advantage that another party member has over my character, there’s no point in my engaging in that tactical thinking. Unpredictable abilities nullify tactical thinking in ways that are fun, but only when they are balanced in the ways I described above.

All of this is a matter of degree. I don’t think that combat is fun only when the characters and monsters are restricted to the core rulebooks; I enjoy combat when I play in my friend’s game. My view is just that combat is more fun the less arbitrary it is. I have certainly experienced the sense that my attempt to intellectually engage with the combat is being undermined by certain house rules and the overpowered abilities of homebrew races. Fortunately, thus far this problem has only affected a few turns of combat at a time, rather than whole combats.

Another friend is of the view that the GM should try to convince the players that they really are treating combat as I’ve described, but still fudge dice rolls in order to prevent, e.g., uninteresting character deaths. In response, I’ll note that I don’t feel capable of making those judgements, in the heat of the moment, about whether a death would be interesting. Further, having to worry about this would make combat less fun for me as the GM, and GM fun is important too.

Cory DoctorowA key to page-numbers in the Little Brother audiobook

Mary Kraus teaches my novel Little Brother to health science interns learning about cybersecurity; to help a student who has a print disability, Mary created a key that maps the MP3 files in the audiobook to the Tor paperback edition. She was kind enough to make her doc public to help other people move easily from the audiobook to the print edition — thanks, Mary!


Krebs on SecurityPowerful New DDoS Method Adds Extortion

Attackers have seized on a relatively new method for executing distributed denial-of-service (DDoS) attacks of unprecedented disruptive power, using it to launch record-breaking DDoS assaults over the past week. Now evidence suggests this novel attack method is fueling digital shakedowns in which victims are asked to pay a ransom to call off crippling cyberattacks.

On March 1, DDoS mitigation firm Akamai revealed that one of its clients was hit with a DDoS attack that clocked in at 1.3 Tbps, which would make it the largest publicly recorded DDoS attack ever.

The type of DDoS method used in this record-breaking attack abuses a legitimate and relatively common service called “memcached” (pronounced “mem-cash-dee”) to massively amp up the power of a DDoS attack.

Installed by default on many Linux operating system versions, memcached is designed to cache data and ease the strain on heavier data stores, like disk or databases. It is typically found in cloud server environments and it is meant to be used on systems that are not directly exposed to the Internet.

Memcached communicates using the User Datagram Protocol or UDP, which allows communications without any authentication — pretty much anyone or anything can talk to it and request data from it.

Because memcached doesn’t support authentication, an attacker can “spoof” or fake the Internet address of the machine making that request so that the memcached servers responding to the request all respond to the spoofed address — the intended target of the DDoS attack.

Worse yet, memcached has a unique ability to take a small amount of attack traffic and amplify it into a much bigger threat. Most popular DDoS tactics that abuse UDP connections can amplify the attack traffic 10 or 20 times — allowing, for example, a 1 MB file request to generate a response that includes between 10 MB and 20 MB of traffic.

But with memcached, an attacker can force the response to be thousands of times the size of the request. All of the responses get sent to the target specified in the spoofed request, and it requires only a small number of open memcached servers to create huge attacks using very few resources.

Akamai believes there are currently more than 50,000 known memcached systems exposed to the Internet that can be leveraged at a moment’s notice to aid in massive DDoS attacks.

Both Akamai and Qrator — a Russian DDoS mitigation company — published blog posts on Feb. 28 warning of the increased threat from memcached attacks.

“This attack was the largest attack seen to date by Akamai, more than twice the size of the September, 2016 attacks that announced the Mirai botnet and possibly the largest DDoS attack publicly disclosed,” Akamai said [link added]. “Because of memcached reflection capabilities, it is highly likely that this record attack will not be the biggest for long.”

According to Qrator, this specific possibility of enabling high-value DDoS attacks was disclosed in 2017 by a Chinese group of researchers from the cybersecurity 0Kee Team. The larger concept was first introduced in a 2014 Black Hat U.S. security conference talk titled “Memcached injections.”


On Thursday, KrebsOnSecurity heard from several experts from Cybereason, a Boston-based security company that’s been closely tracking these memcached attacks. Cybereason said its analysis reveals the attackers are embedding a short ransom note and payment address into the junk traffic they’re sending to memcached services.

Cybereason said it has seen memcached attack payloads that consist of little more than a simple ransom note requesting payment of 50 XMR (Monero virtual currency) to be sent to a specific Monero account. In these attacks, Cybereason found, the payment request gets repeated until the file reaches approximately one megabyte in size.

The ransom demand (50 Monero) found in the memcached attacks by Cybereason on Thursday.

Memcached can accept files and host files in temporary memory for download by others. So the attackers will place the 1 MB file full of ransom requests onto a server with memcached, and request that file thousands of times — all the while telling the service that the replies should all go to the same Internet address — the address of the attack’s target.

“The payload is the ransom demand itself, over and over again for about a megabyte of data,” said Matt Ploessel, principal security intelligence researcher at Cybereason. “We then request the memcached ransom payload over and over, and from multiple memcached servers to produce an extremely high volume DDoS with a simple script and any normal home office Internet connection. We’re observing people putting up those ransom payloads and DDoSsing people with them.”

Because it only takes a handful of memcached servers to launch a large DDoS, security researchers working to lessen these DDoS attacks have been focusing their efforts on getting Internet service providers (ISPs) and Web hosting providers to block traffic destined for the UDP port used by memcached (port 11211).
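
For operators of memcached systems, the standard hardening advice is straightforward. A rough sketch follows (standard memcached flags and iptables usage; check your distribution's defaults before relying on it):

    # Listen on localhost only and disable the UDP listener entirely ("-U 0").
    memcached -l 127.0.0.1 -U 0

    # Alternatively, drop inbound UDP to memcached's port at the firewall.
    iptables -A INPUT -p udp --dport 11211 -j DROP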

Ofer Gayer, senior product manager at security firm Imperva, said many hosting providers have decided to filter port 11211 traffic to help blunt these memcached attacks.

“The big packets here are very easy to mitigate because this is junk traffic and anything coming from that port (11211) can be easily mitigated,” Gayer said.

Several different organizations are mapping the geographical distribution of memcached servers that can be abused in these attacks. Here’s the world at a glance, courtesy of one of those organizations:

The geographic distribution of memcached servers exposed to the Internet.

Here are the Top 20 networks hosting the largest number of publicly accessible memcached servers at this moment, according to data collected by Cybereason:

The global ISPs with the largest number of publicly available memcached servers.

A DDoS monitoring site publishes a live, running list of the latest targets getting pelted with traffic in these memcached attacks.

What do those stats tell us? According to netlab@360, memcached attacks were not super popular as an attack method until very recently.

“But things have greatly changed since February 24th, 2018,” netlab wrote in a Mar. 1 blog post, noting that in just a few days memcached-based DDoS went from less than 50 events per day, up to 300-400 per day. “Today’s number has already reached 1484, with an hour to go.”

Hopefully, the global ISP and hosting community can come together to block these memcached DDoS attacks. I am encouraged by what I have heard and seen so far, and hope that can continue in earnest before these attacks start becoming more widespread and destructive.

Here’s the Cybereason video from which that image above with the XMR ransom demand was taken:

Cory DoctorowI’m coming to the Adelaide Festival this weekend (and then to Wellington, NZ!)

I’m on the last two cities in my Australia/NZ tour for my novel Walkaway: today, I’m flying to Adelaide for the Adelaide Festival, where I’m appearing in several program items: Breakfast with Papers on Sunday at 8AM; a book signing on Monday at 10AM in Dymocks at Rundle Mall; “Dust Devils,” a panel followed by a signing on Monday at 5PM on the West Stage at Pioneer Women’s Memorial Garden; and “Craphound,” a panel/signing on Tuesday at 5PM on the East Stage at Pioneer Women’s Memorial Garden.

After Adelaide, I’m off to Wellington for Writers and Readers Week and then the NetHui one-day copyright event.

I’ve had a fantastic time in Perth, Melbourne and Sydney and it’s been such a treat to meet so many of you — I’m looking so forward to these last two stops!

CryptogramFriday Squid Blogging: Searching for Humboldt Squid with Electronic Bait

Video and short commentary.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJohn Goerzen: Emacs #3: More on org-mode

This is the third in a series on Emacs and org-mode.

Todo tracking and keywords

When using org-mode to track your TODOs, each item can have multiple states. You can press C-c C-t for a quick shift between states. I have set this:

(setq org-todo-keywords '(
  (sequence "TODO(t!)" "NEXT(n!)" "STARTED(a!)" "WAIT(w@/!)" "OTHERS(o!)" "|" "DONE(d)" "CANCELLED(c)")))

Here, I set up 5 states that are for a task that is not yet done: TODO, NEXT, STARTED, WAIT, and OTHERS. Each has a single-character shortcut (t, n, a, etc.). The states after the pipe symbol are ones that are considered “done”. I have two: DONE (for things that I have done) and CANCELLED (for things that I haven’t done, but for whatever reason, won’t).

The exclamation mark means to log the time when an item was changed to a state. I don’t add this to the done states because those are already logged anyhow. The @ sign means to prompt for a reason; so when switching to WAIT, org-mode will ask me why and add this to the note.

Here’s an example of an entry that has had some state changes:

** DONE This is a test
   CLOSED: [2018-03-02 Fri 03:05]
   - State "DONE"       from "WAIT"       [2018-03-02 Fri 03:05]
   - State "WAIT"       from "TODO"       [2018-03-02 Fri 03:05] \\
     waiting for pigs to fly
   - State "TODO"       from "NEXT"       [2018-03-02 Fri 03:05]
   - State "NEXT"       from "TODO"       [2018-03-02 Fri 03:05]

Here, the most recent items are on top.

Agenda mode, schedules, and deadlines

When you’re in a todo item, C-c C-s or C-c C-d can set a schedule or a deadline for it, respectively. These show up in agenda mode. The difference is in intent and presentation. A schedule is something that you expect to work on at around a given time, while a deadline is something that is due at a specific time. By default, the agenda view will start warning you about deadline items in advance.
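
As an illustration, here is what a hypothetical entry with both a schedule and a deadline looks like in the file (the task and dates are invented):

** TODO Submit the quarterly report
   DEADLINE: <2018-03-09 Fri> SCHEDULED: <2018-03-05 Mon>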

And while we’re at it, the agenda view will show you the items that you have coming up, offer a nice way to search for items based on plain text or tags, and handle bulk manipulation of items even across multiple files. I covered setting the files for agenda mode in part 2 of this series.


Tags

Of course org-mode has tags. You can quickly set them with C-c C-q.

You can set shortcuts for tags you might like to use often. Perhaps something like this:

  (setq org-tag-persistent-alist 
        '(("@phone" . ?p) 
          ("@computer" . ?c) 
          ("@websurfing" . ?w)
          ("@errands" . ?e)
          ("@outdoors" . ?o)
          ("MIT" . ?m)
          ("BIGROCK" . ?b)
          ("CONTACTS" . ?C)
          ("INBOX" . ?i)))

You can also add tags to this list on a per-file basis, and also set tags for an entire file. I use that for the org files I treat as inboxes, to set an INBOX tag. I can then review all items tagged INBOX from the agenda view each day, and the simple act of refiling them into other files will cause them to lose the INBOX tag.
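
A per-file tag is a single line near the top of the file; for example (INBOX being just the tag name used above):

#+FILETAGS: :INBOX: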

Refiling

“Refiling” is moving things around, either within a file or elsewhere. It has completion using your headlines. C-c C-w does this. I like these settings:

(setq org-outline-path-complete-in-steps nil)         ; Refile in a single go
(setq org-refile-use-outline-path 'file)
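
The related variable org-refile-targets controls which headlines are offered as refile candidates; a minimal sketch (the depth limits and the use of org-agenda-files here are illustrative assumptions to tune):

(setq org-refile-targets '((nil :maxlevel . 3)
                           (org-agenda-files :maxlevel . 2)))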

Archiving

After a while, you’ll get your files all cluttered with things that are done. org-mode has an archive feature to move things out of your main .org files and into some other files for future reference. If you have your org files in git or something, you may wish to delete these other files since you’d have things in history anyhow, but I find them handy for grepping and searching.

I periodically want to go through and archive everything in my files. Based on a stackoverflow discussion, I have this code:

(defun org-archive-done-tasks ()
  (interactive)
  (org-map-entries
   (lambda ()
     (org-archive-subtree)
     (setq org-map-continue-from (outline-previous-heading)))
   "/DONE" 'file)
  (org-map-entries
   (lambda ()
     (org-archive-subtree)
     (setq org-map-continue-from (outline-previous-heading)))
   "/CANCELLED" 'file))

This is based on a particular answer — see the comments there for some additional hints. Now you can run M-x org-archive-done-tasks and everything in the current file marked DONE or CANCELLED will be pulled out into a different file.

Up next

I’ll wrap up org-mode with a discussion of automatically receiving emails into org, and syncing org between machines.

Resources to accompany this article

Planet DebianUrvika Gola: BOB Konferenz’18 in Berlin

Recently Pranav Jain and I attended the BOB conference in Berlin, Germany. The conference started with a keynote on a very interesting topic, a language for making movies. Using a non-linear video editor for making movies is, of course, time consuming. The speaker talked about the struggle of merging presentation, video and high-quality sound for conferences. Clearly, automation was needed here, which could be achieved by 1. making a plugin for a non-linear video editor, 2. writing a UI automation tool like an operating-system macro, or 3. using shell scripting. However, shell scripting for this purpose could also be time consuming, no matter how great shell scripts are. The goal was to edit videos using a language alone, without the language getting in the way of solving the problem. In other words, a DSL (Domain-Specific Language) was required, along with Syntax Parse. The result is Video, a language for making movies that integrates with the Racket ecosystem. It combines the power of a traditional video editor with the capabilities of a full programming language.

The next session was about Reactive Streaming with Akka Streams. Streaming big-data applications is a challenge in itself: processing must happen in near real time, i.e. there is no time to batch data and process it later. Streaming also has to be done in a fault-tolerant way; we have no time to deal with faults. There are two types of streams, bounded and unbounded! A bounded stream is batched and processed to produce some output, whereas an unbounded stream just keeps on flowing… just like that. Akka Streams makes it easy to model type-safe message processing pipelines. Type-safe means that at compile time, it is checked that data definitions are compatible. Akka Streams also has explicit semantics, which is quite important.
The basic building blocks of Akka Streams are Sources (produce elements of type A), Sinks (consume elements of type A) and Flows (consume elements of type A and produce elements of type B). The source sends data via the flow to the sinks. There are situations where data is not consumed or produced; materialized values are useful when we want, for example, to know whether the stream was successful or not, the result of which could be true/false. Another concept involved is backpressure. Reading from a file is fast; splitting it on newlines is faster; fetching via HTTP from somewhere can be slow due to net connectivity. What backpressure does is let any component say ‘wooh! slow down, I need more time’. Everything is only as fast as the slowest component in the flow, which means the slowest component in the chain determines the throughput. However, there are situations when we really don’t want to, or can’t, control the speed of the source. To have explicit control over backpressure we can use buffering: if requests keep coming and a limit is reached, we can set a buffer beyond which requests are discarded, or we can push the backpressure further upstream when the buffer is full.
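
To make those building blocks concrete, here is a minimal sketch in Akka Streams' Scala API (a toy example rather than code from the talk; Akka 2.5-era setup assumed):

    import akka.actor.ActorSystem
    import akka.stream.ActorMaterializer
    import akka.stream.scaladsl.{Flow, Sink, Source}

    object StreamDemo extends App {
      implicit val system = ActorSystem("demo")
      implicit val materializer = ActorMaterializer()

      val source = Source(1 to 10)            // produces Ints
      val flow   = Flow[Int].map(_ * 2)       // consumes Ints, produces Ints
      val sink   = Sink.foreach[Int](println) // consumes Ints

      // Data flows from the source, through the flow, into the sink;
      // backpressure propagates upstream automatically.
      source.via(flow).runWith(sink)
    }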

Next we saw a fun demo on GRiSP, Bare Metal Functional Programming. GRiSP allows you to run Erlang on bare-metal hardware, without a kernel. The GRiSP board could be an alternative to a Raspberry Pi or Arduino. The robot was stubborn, but interesting to watch! Since Pranav and I have worked on real-time communications projects, we were inclined to attend a talk on Understanding real time ecosystems, which was very informative. We learned about HTTP, AJAX polling, AJAX long polling, HTTP/2, Pub/Sub and other concepts which were relatable. We learned more about protocols and layers in the last talk of the conference, Engineering TCP/IP with logic.

This is just a summary of our experiences and what we were able to grasp at the conference; we also shared our individual experiences with Debian from GSoC and Outreachy.

Thank you Dr. Michael Sperber for the opportunity and the organizers for putting up the conference.

CryptogramMalware from Space

Since you don't have enough to worry about, here's a paper postulating that space aliens could send us malware capable of destroying humanity.

Abstract: A complex message from space may require the use of computers to display, analyze and understand. Such a message cannot be decontaminated with certainty, and technical risks remain which can pose an existential threat. Complex messages would need to be destroyed in the risk averse case.

I think we're more likely to be enslaved by malicious AIs.

Planet DebianPetter Reinholdtsen: Debian used in the subway info screens in Oslo, Norway

Today I was pleasantly surprised to discover my operating system of choice, Debian, was used in the info screens on the subway stations. While passing Nydalen subway station in Oslo, Norway, I discovered the info screen booting with some text scrolling. I was not quick enough with my camera to be able to record a video of the scrolling boot screen, but I did get a photo from when the boot got stuck with a corrupt file system:

[photo of subway info screen]

While I am happy to see Debian used more places, some details of the content on the screen worries me.

The image shows the version booting as 'Debian GNU/Linux lenny/sid', indicating that this is based on code taken from Debian Unstable/Sid after Debian Etch (version 4) was released 2007-04-08 and before Debian Lenny (version 5) was released 2009-02-14. Since Lenny, Debian has released version 6 (Squeeze) 2011-02-06, 7 (Wheezy) 2013-05-04, 8 (Jessie) 2015-04-25 and 9 (Stretch) 2017-06-15, according to the Debian version history on Wikipedia. This means the system is running around 10-year-old code, with no security fixes from the vendor for many years.

This is not the first time I discover the Oslo subway company, Ruter, running outdated software. In 2012, I discovered the ticket vending machines were running Windows 2000, and this was still the case in 2016. Given the response from the responsible people in 2016, I would assume the machines are still running unpatched Windows 2000. Thus, an unpatched Debian setup come as no surprise.

The photo is made available under the license terms Creative Commons 4.0 Attribution International (CC BY 4.0).

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Worse Than FailureError'd: I Don't Always Test my Code, but When I do...

"Does this mean my package is here or is it also in development?" writes Nariim.


Stuart L. wrote, "Who needs a development environment when you can just test in production on the 'Just In' feed?"


"It was so nice of Three to unexpectedly include me - a real user - in their User Acceptance Testing. Yeah, it's still not fixed," wrote Paul P.


"I found this great nearby hotel option that transcended into the complex plane," Rosenfield writes.


Stuart L. also wrote in, "I can't think of a better place for BoM to test out cyclone warnings than in production."


"The Ball Don't Lie blog at Yahoo! Sports seems to have run out of content during the NBA Finals so they started testing instead," writes Carlos S.


[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

CryptogramCellebrite Unlocks iPhones for the US Government

Forbes reports that the Israeli company Cellebrite can probably unlock all iPhone models:

Cellebrite, a Petah Tikva, Israel-based vendor that's become the U.S. government's company of choice when it comes to unlocking mobile devices, is this month telling customers its engineers currently have the ability to get around the security of devices running iOS 11. That includes the iPhone X, a model that Forbes has learned was successfully raided for data by the Department for Homeland Security back in November 2017, most likely with Cellebrite technology.


It also appears the feds have already tried out Cellebrite tech on the most recent Apple handset, the iPhone X. That's according to a warrant unearthed by Forbes in Michigan, marking the first known government inspection of the bleeding edge smartphone in a criminal investigation. The warrant detailed a probe into Abdulmajid Saidi, a suspect in an arms trafficking case, whose iPhone X was taken from him as he was about to leave America for Beirut, Lebanon, on November 20. The device was sent to a Cellebrite specialist at the DHS Homeland Security Investigations Grand Rapids labs and the data extracted on December 5.

This story is based on some excellent reporting, but leaves a lot of questions unanswered. We don't know exactly what was extracted from any of the phones. Was it metadata or data, and what kind of metadata or data was it?

The story I hear is that Cellebrite hires ex-Apple engineers and moves them to countries where Apple can't prosecute them under the DMCA or its equivalents. There's also a credible rumor that Cellebrite's mechanisms only defeat the mechanism that limits the number of password attempts. It does not allow engineers to move the encrypted data off the phone and run an offline password cracker. If this is true, then strong passwords are still secure.

EDITED TO ADD (3/1): Another article, with more information. It looks like there's an arms race going on between Apple and Cellebrite. At least, if Cellebrite is telling the truth -- which they may or may not be.

Planet DebianFrançois Marier: Redirecting an entire site except for the certbot webroot

In order to be able to use the webroot plugin for certbot and automatically renew the Let's Encrypt certificate for libravatar.org, I had to put together an Apache config that would do the following on port 80:

  • Let /.well-known/acme-challenge/* through on the bare domain (libravatar.org).
  • Redirect anything else to www.libravatar.org.

The reason for this is that the main Libravatar service listens on www.libravatar.org and not on the bare domain, but certbot needs to ascertain control of the bare domain.

This is the configuration I ended up with:

<VirtualHost *:80>
    ServerName libravatar.org
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>

    RewriteEngine on
    RewriteCond "/var/www/acme%{REQUEST_URI}" !-f
    RewriteRule ^(.*)$ https://www.libravatar.org/$1 [last,redirect=301]
</VirtualHost>

The trick I used here is to make the redirection RewriteRule conditional on the requested file (%{REQUEST_URI}) not existing in the /var/www/acme directory, the one where I tell certbot to drop its temporary files.

Here are the relevant portions of /etc/letsencrypt/renewal/libravatar.org.conf:

authenticator = webroot
account = 

[[webroot_map]]
libravatar.org = /var/www/acme
www.libravatar.org = /var/www/acme
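
To check that certbot can still complete its challenge through this setup, a dry-run renewal is the usual test (standard certbot usage):

    certbot renew --dry-run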


Planet DebianDirk Eddelbuettel: RcppArmadillo 0.8.400.0.0

armadillo image

RcppArmadillo release 0.8.400.0.0, originally prepared and uploaded on February 19, finally hit CRAN today (after having been available via the RcppCore drat repo for a number of days). A corresponding Debian release was prepared and uploaded as well. This RcppArmadillo release contains Armadillo release 8.400.0 with a number of nice changes (see below for details), and continues our normal bi-monthly CRAN release cycle (slight delays in CRAN processing notwithstanding).

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 450 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.400.0.0 (2018-02-19)

  • Upgraded to Armadillo release 8.400.rc2 (Entropy Bandit)

    • faster handling of sparse matrices by repmat()

    • faster loading of CSV files

    • expanded kron() to handle sparse matrices

    • expanded index_min() and index_max() to handle cubes

    • expanded randi(), randu(), randn(), randg() to output single scalars

    • added submatrix & subcube iterators

    • added normcdf()

    • added mvnrnd()

    • added chi2rnd()

    • added wishrnd() and iwishrnd()

  • The configure generated header settings for LAPACK and OpenMP can be overridden by the user.

  • This release was preceded by two release candidates which were tested extensively.

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianGregor Herrmann: trains & snow

last weekend I attended the Debian SnowCamp at the lago maggiore in north-western italy, a small DIY hacking & socialising Debian meeting in the tradition of Debian SunCamp. – some impressions:


I wasn't aware of how tedious it can be to travel less than 500 kilometers into a neighbouring country by train in 2018. first of all, no train company would sell me a ticket from innsbruck to laveno-mombello, so innsbruck–verona from öbb & verona–laveno-mombello from trenitalia it was.

of the eight trains (yes, 3 changes per direction), every single one was delayed at departure, at arrival, or both. in the end I "only" lost one connection. – & one direction from-door-to-door took roughly 9 hours.


the ostello casa rossa was lovely (& might be even more lovely with warmer temperature which would allow for longer stays in the garden), & the food at the ristorante concordia was mostly excellent. trying new local specialties (like the pizzoccheri) is always fun. & no lunch or dinner lasted shorter than 2 hours.


unsurprisingly, my work was mostly focussed on Debian Perl Group stuff. we managed to move our repos from alioth to salsa during the weekend, which involved not only importing ~3500 repositories but also e.g. recreating our .mrconfig setup.

in practice it didn't help a lot for my contributions that I was at SnowCamp as the others were not there, & coordination happened via IRC (unlike a team sprint); but at least I had more people who listened to my shouts of joy or frustration than what I would have had at home :)


  • on my way back I even had snow while waiting at the delayed train in the unexciting station of laveno-mombello.
  • mille grazie to elena & friends for the perfect organisation!
  • if you speak german: the respective volume of my archäologie des alltags (also contains links to tweets with photos).
  • I see quite some potential for other Debian *Camps, maybe even longer & with more pre-planned team sprints under one roof (or in one garden) …

Krebs on SecurityFinancial Cyber Threat Sharing Group Phished

The Financial Services Information Sharing and Analysis Center (FS-ISAC), an industry forum for sharing data about critical cybersecurity threats facing the banking and finance industries, said today that a successful phishing attack on one of its employees was used to launch additional phishing attacks against FS-ISAC members.

The fallout from the back-to-back phishing attacks appears to have been limited and contained, as many FS-ISAC members who received the phishing attack quickly detected and reported it as suspicious. But the incident is a good reminder to be on your guard, remember that anyone can get phished, and that most phishing attacks succeed by abusing the sense of trust already established between the sender and recipient.

The confidential alert FS-ISAC sent to members about a successful phishing attack that spawned phishing emails coming from the FS-ISAC.

Notice of the phishing incident came in an alert FS-ISAC shared with its members today and obtained by KrebsOnSecurity. It describes an incident on Feb. 28 in which an FS-ISAC employee “clicked on a phishing email, compromising that employee’s login credentials. Using the credentials, a threat actor created an email with a PDF that had a link to a credential harvesting site and was then sent from the employee’s email account to select members, affiliates and employees.”

The alert said while FS-ISAC was already planning and implementing a multi-factor authentication (MFA) solution across all of its email platforms, “unfortunately, this incident happened to an employee that was not yet set up for MFA. We are accelerating our MFA solution across all FS-ISAC assets.”

The FS-ISAC also said it upgraded its Office 365 email version to provide “additional visibility and security.”

In an interview with KrebsOnSecurity, FS-ISAC President and CEO Bill Nelson said his organization has grown significantly in new staff over the past few years to more than 75 people now, including Greg Temm, the FS-ISAC’s chief information risk officer.

“To say I’m disappointed this got through is an understatement,” Nelson said. “We need to accelerate MFA extremely quickly for all of our assets.”

Nelson observed that “The positive messaging out of this I guess is anyone can become victimized by this.” But according to both Nelson and Temm, the phishing attack that tricked the FS-ISAC employee into giving away email credentials does not appear to have been targeted — nor was it particularly sophisticated.

“I would classify this as a typical, routine, non-targeted account harvesting and phishing,” Temm said. “It did not affect our member portal, or where our data is. That’s 100 percent multifactor. In this case it happened to be an asset that did not have multifactor.”

In this incident, it didn’t take a sophisticated actor to gain privileged access to an FS-ISAC employee’s inbox. But attacks like these raise the question: How successful might such a phishing attack be if it were only slightly more professional and/or organized?

Nelson said his staff members all participate in regular security awareness training and testing, but that there is always room to fill security gaps and move the needle on how many people click when they shouldn’t with email.

“The data our members share with us is fully protected,” he said. “We have a plan working with our board of directors to make sure we have added security going forward,” Nelson said. “But clearly, recognizing where some of these softer targets are is something every company needs to take a look at.”

Planet DebianAntoine Beaupré: February 2018 report: LTS, ...

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. This month was exclusively dedicated to my frontdesk work. I actually forgot to do it the first week and had to play catchup during the weekend, so I brought up a discussion about how to avoid those problems in the future. I proposed an automated reminder system, but it turns out people found this was overkill. Instead, Chris Lamb suggested we simply send a ping to the next person in the list, which has proven useful the next time I was up. In the two weeks I was frontdesk, I ended up triaging the following notable packages:

  • isc-dhcp - remote code execution exploits - time to get rid of those root-level daemons?
  • simplesamlphp - under embargo, quite curious
  • golang - the return of remote code execution in go get (CVE-2018-6574, similar to CVE-2017-15041 and CVE-2018-7187) - ended up being marked as minor, unfortunately
  • systemd - CVE-2017-18078 was marked as unimportant as this was neutralized by kernel hardening and systemd was not really in use back in wheezy. Besides, CVE-2013-4392 was about similar functionality which was claimed to not be supported in wheezy. I did, however, propose to forcibly enable the kernel hardening through default sysctl configurations (Debian bug #889098) so that custom kernels would be covered by the protection in stable suites.

There were more minor triage work not mentioned here, those are just the juicy ones...

Speaking of juicy, the other thing I did during the month was to help with the documentation on the Meltdown and Spectre attacks on Intel CPUs. Much has been written about this and I won't do yet another summary. However, it seems that no one had actually written even semi-official documentation on the state of fixes in Debian, which led to many questions to the (LTS) security team(s). Ola Lundqvist did a first draft of a page detailing the current status, and I expanded on the page to add formatting and more details. The page is visible here:

I'm still not fully happy with the results: we're missing some userland like Qemu and a timeline of fixes. In comparison, the Ubuntu page still looks much better in my opinion. But it's leagues ahead of what we had before, which was nothing... The next step for LTS is to backport the retpoline fixes back into a compiler. Roberto C. Sanchez is working on this, and the remaining question is whether we try to backport to GCC 4.7 or we backport GCC 4.9 itself into wheezy. In any case, it's a significant challenge and I'm glad I'm not the one dealing with such arcane code right now...

Other free software work

Not much to say this month, en vrac:

  • did the usual linkchecker maintenance
  • finally got my Prometheus node exporter directory size sample merged
  • added some docs updating the Dat project comparison with IPFS after investigating Dat. Turns out Dat's security guarantees aren't as good as I hoped...
  • reviewed some PRs in the Git-Mediawiki project
  • found what I consider to be a security issue in the Borg backup software, but it was disregarded as such by upstream. This ended up as a simple issue that I do not expect much from.
  • so I got more interested in the Restic community as well. I proposed a code of conduct to test the waters, but the feedback so far has been mixed, unfortunately.
  • started working on a streams page for the Sigal gallery. Expect an article about Sigal soon.
  • published undertime in Debian, which brought a slew of bug reports (and consequent fixes).
  • started looking at alternative GUIs because GTK2 is going away and I need to port two projects. I have a list of "hello world" in various frameworks now, still not sure which one I'll use.
  • also worked on updating the Charybdis and Atheme-services packages with new co-maintainers (hi!)
  • worked with Darktable to try and render an exotic image out of my new camera. Might turn into a LWN article eventually as well.
  • started getting more involved in the local free software forum, a nice little community. In particular, I went to a "repair cafe" and wrote a full report on the experience there.

I'm trying to write more for LWN these days so it's taking more time. I'm also trying to turn those reports into articles to help ramping up that rhythm, which means you'll need to subscribe to LWN to get the latest goods before the 2 weeks exclusivity period.

CryptogramRussians Hacked the Olympics

Two weeks ago, I blogged about the myriad of hacking threats against the Olympics. Last week, the Washington Post reported that Russia hacked the Olympics network and tried to cast the blame on North Korea.

Of course, the evidence is classified, so there's no way to verify this claim. And while the article speculates that the hacks were a retaliation for Russia being banned due to doping, that doesn't ring true to me. If they tried to blame North Korea, it's more likely that they're trying to disrupt something between North Korea, South Korea, and the US. But I don't know.

Worse Than FailureCodeSOD: What a Stream

In Java 8, they added the Streams API. Coupled with lambdas, this means that developers can write the concise and expressive code traditionally associated with functional programming. It’s the best bits of Java blended with the best bits of Clojure! The good news is that it allows you to write less code! The better news is that you can abuse it to write more code, if you’re so inclined.

Antonio inherited some code written by “Frenk”, who was thus inclined. Frenk wasn’t particularly happy with their job, but was one of the “rockstar programmers” in the eyes of management, so Frenk was given the impossible-to-complete tasks and complete freedom in the solution.

Frenk had a problem, though. Nothing Frenk was doing was actually all that impossible. If they solved everything with code that anyone else could understand, they wouldn’t look like an amazing genius. So Frenk purposefully obfuscated every line of code, ignoring indentation, favoring one-character variable names, and generally trying to solve each problem in the most obtuse way possible.

Which yielded this.

    Resource[] r; //@Autowired ndr
    Map<File, InputStream> m = null;
    if (file != null)
    {m.put(file, new FileInputStream(file));}else

    m = Arrays.stream(r).collect(Collectors.toMap(x -> { try { return x.getFile(); }
catch (Exception e) { throw new IllegalStateException(e);}},
    x -> {try{return x.getInputStream();}catch (Exception e){throw new IllegalStateException(e);}}));

As purposefully unreadable code, I’d say that Frenk fails. That’s not to say that it isn’t bad, but Frenk’s attempts to make it unreadable… just make it annoying. I understand what the code does, but I’m just left wondering at why.

I can definitely say that this has never been tested in a case where the file variable is non-null, because that wouldn’t work. Antonio confirms that their IDE was throwing up plenty of warnings about calling a method on a variable that was probably null, with the m.put(…) line. It’s nice that they half-way protect against nulls- one variable is checked, but the other isn’t.

Frenk’s real artistry is in employing streams to convert an array to a map. On its surface, it’s not an objectively wrong approach- this is the kind of thing streams are theoretically good at. Examine each element in the array, and apply a lambda that extracts the key and another lambda that extracts the value, collecting the results into a map.
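
Done cleanly, the same idea reads well; a minimal sketch (Person, its getters, and the people array are hypothetical, purely for illustration):

    import java.util.Arrays;
    import java.util.Map;
    import java.util.stream.Collectors;

    // Build a name -> age map from a Person[] array in one expression.
    Map<String, Integer> ageByName = Arrays.stream(people)
            .collect(Collectors.toMap(Person::getName, Person::getAge));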

There are many real-world cases where I might use this exact technique. But in this case, Antonio refactored it to something a bit cleaner:

        Resource[] resources; //@Autowired again
        Map<File, InputStream> resourceMap = new HashMap<>();
        if (file != null)
            resourceMap.put(file, new FileInputStream(file));
        for (Resource res : resources)
            resourceMap.put(res.getFile(), res.getInputStream());

Here, the iterative approach is much simpler, and the intent of the code is more clear. Just because you have a tool doesn’t make it the right tool for the job. And before you wonder about the lack of exception handling- both the original block and the refactored version were already wrapped up in an exception handling block that can handle the IOException that failed access to the files would throw.

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!

Planet DebianRuss Allbery: Review: Coding Freedom

Review: Coding Freedom, by E. Gabriella Coleman

Publisher: Princeton University Press
Copyright: 2013
ISBN: 0-691-14461-3
Format: Trade paperback
Pages: 223

Subtitled The Ethics and Aesthetics of Hacking, Coding Freedom is a rare beast in my personal reading: an academic anthropological study of a fairly new virtual community. It's possible that many books of this type are being written, but they're not within my normal reading focus. It's also a bit of an awkward review, since the community discussed here is (one of) mine. I'm going to have an insider's nitpicks and "well, but" reactions to the anthropology, which is a valid reaction but not necessarily the intended audience.

I'm also coming to this book about four years after everyone finished talking about it, and even longer after Coleman's field work in support of the book. I think Coding Freedom suffers from that lack of currency. If this book were written today, I suspect its focus would change, at least in part. More on that in a moment.

Coding Freedom's title is more subtle and layered than it may first appear. It is about the freedom to write code, and about free software as a movement, but not only that. It's also about how concepts of freedom are encoded in the culture and language of hacking communities, and about the concept of code as speech (specifically free speech in the liberal tradition). And the title also captures the idea of code switching, where a speaker switches between languages even in the middle of sentences. The free software community does something akin to code switching between the domains of technical software problems, legal problems, and political beliefs and ideologies. Coleman covers all of that ground in this book.

Apart from an introduction and conclusion, the book is divided into five chapters in three parts. The opening part talks about the typical life story and community involvement of a free software hacker and briefly sketches the legal history of free software licenses. The second part talks about the experience of hacking, with a particular focus on playful expression and the tension between collaboration, competitiveness, and proving one's membership in the group. The final part dives into software as speech, legal and political struggles against the DMCA and other attempts to restrict code through copyright law, and the free software challenge to the liberal regime of capitalism and private property, grounded in the also-liberal value of free speech.

There's a lot here to discuss, but it's also worth noting what's not here, and what I think would have been here if the same field work were done today. There's nothing about gender or inclusion, which have surpassed DMCA issues to become the political flash point de jour. (Coleman notes early in the book that she intentionally omitted that topic as one that deserves its own separate treatment.) The presentation of social norms and behaviors also felt strongly centered in an early 2000s attitude towards social testing, with low tolerance of people who haven't proven their competence. Coleman uses the term meritocracy with very few caveats and complications. I don't think one would do that in work starting today; the flaws, unwritten borders, and gatekeeping for who can participate in that supposed meritocracy are now more frequently discussed.

Those omissions left me somewhat uncomfortable throughout. Coleman follows the community self-image from a decade or more ago (which makes sense, given that's when most of her field research and the majority of examples she draws on in the book are from): valuing technical acumen and skilled play, devoted to free speech, and welcoming and valuing anyone with similar technical abilities. While this self-image is not entirely wrong, it hides a world of unspoken rules and vicious gatekeeping to control who gets to have free speech within the community, what types of people are valued, and who is allowed to not do emotional labor. And who isn't.

These are rather glaring gaps, and for me they limit the usefulness of Coding Freedom as an accurate analysis of the community.

That said, I do want to acknowledge that this wasn't Coleman's project. Her focus, instead, is on the way free software communities noticed and pushed into the open some hidden conflicts in the tradition of liberalism. Free political speech and democratic politics have gone hand-in-hand with capitalism and an overwhelming emphasis on private property, extended into purely virtual objects such as computer software. Free software questions that alliance, pokes at it, and at times even rips it apart.

The free software movement is deeply embedded in liberalism. Although it has members from anarchist, communist, and other political traditions, the general community is not very radical in its understanding of speech, labor, or politics. It has a long tradition of trying to avoid disruptive politics, apart from issues that touch directly on free software, to maximize its political alliances and avoid alienating any members. Free software is largely not a critique of liberalism from the outside; it's a movement that expresses a conflict inside the liberal tradition. It asks whether self-expression is consistent with, and more important than, private property, a question that liberalism otherwise attempts to ignore.

This is the part of the book I found fascinating: looking at my community from the outside, putting emergent political positions in that community into a broader context, and showing the complex and skillful ways that the community discusses, analyzes, and reaches consensus on those positions while retaining a broad base of support and growing membership. Coleman provides a sense of being part of something larger in the best and most complicated way: not a revolution, not an ideology, but a community with complex boundaries, rituals that are both scoffed at and followed, and gatekeeping behavior that exist in part because any human community will create and enforce boundaries.

When one is deeply inside a culture, it's easy to get lost in the ethical debates over whether a particular community behavior is good or bad. It takes an anthropologist to recast all those behaviors, good and bad, as humans being human, and to ask curious questions about what social functions those behaviors serve. Coding Freedom gave me a renewed appreciation of the insight that can come from the disinterested observer. If nothing else, it might help me choose my battles more strategically, and have more understanding and empathy.

This is a very academic work, at least compared to what I normally read. I never lost the thread of Coleman's argument, but I found it hard going and heavy on jargon in a few places. If, like me, you're not familiar with current work in anthropology, you'll probably feel like part of the discussion is going over your head, and that some terms you're reading with their normal English meaning are actually terms of art with more narrow and specific definitions. This is a book rather than an academic paper, and it does try to be approachable, but it's more research than popularization.

I wish Coding Freedom were more engaged with the problems of free software today, instead of the problems of free software in 2002, the era of United States v. Elcom Ltd. and Free Dmitry. I wish that Coleman had been far more critical of the concept of a meritocracy, and had dug deeper into the gatekeeping and boundaries around who is allowed to participate and who is discouraged or excluded. And while I'm not going to complain about academic rigor, I wish the prose were a bit lighter and a bit more approachable, and that it hadn't taken me months to read this book.

But, that said, I'm not sorry to have finally read it. The perspective from the anthropological view of one's own community is quite valuable. The distance provides an opportunity for less judgmental analysis, and a reminder that human social structures are robust and complex attempts to balance contradictory goals.

Coleman made me feel more connected, not to an overarching ideology or political goal, but to a tangled, flawed, dynamic, and responsive community, whose primary shared purpose is to support that human complexity. Sometimes it's easy to miss that forest for the day-to-day trees.

If you want to get more of a feel for Coleman's work, her keynote on Anonymous at DebConf14 in Portland in 2014 is very interesting and consistent in tone and approach with this book (albeit on a somewhat more controversial topic).

Rating: 6 out of 10

Planet DebianPaul Wise: FLOSS Activities February 2018

  • myrepos: merge patches, triage bugs
  • Debian: forward domain expiry, discuss sensitive files with service maintainer
  • Debian QA: bug triage
  • Debian package tracker: deploy latest code
  • Debian mentors: check why package wasn't uploaded, restart importer after crash
  • Debian wiki: remove extraneous tmp file, fix user email address, unblacklist IP addresses, whitelist email addresses, whitelist email domain
  • Debian website: investigate translation update issue


Sponsors

The work on harmony and librecaptcha was sponsored by my employer. All other work was done on a volunteer basis.


Planet DebianNorbert Preining: Ten Mincho – Great font and ugly Adobe

I recently stumbled upon a very interesting article by Ken Lunde (well known from the book CJKV Information Processing) on a new typeface for Japanese called Ten Mincho, designed by Ryoko Nishizuka and Robert Slimbach. Reading that the kanji and roman parts are well balanced, and that the latter was designed by Robert Slimbach, I was very tempted to get these fonts for my own publications and reports. But well, not with Adobe 🙁

The fonts are available at TypeKit, but a detailed study of the license terms and EULA gave me cold shivers:

These are not perpetual licenses. You won’t have direct access to the font files, so you will need to keep the Creative Cloud application running in order to keep using the Marketplace fonts you’ve synced.
A few things to know about fonts from Marketplace

So may I repeat:

  • You pay for the fonts but you can only use them while running Creative Cloud.
  • You have no way to use the fonts with any other application.
  • Don’t even think about using the fonts on Linux with TeX.
  • We can remove these fonts from your library at our free will; that is, it is not a perpetual license.

Also when you purchase the fonts you are warned in the last step that:

All sales are final. Sorry, no refunds. Please contact us if you have any questions before purchasing.

So not only can you not use the fonts you purchased freely, you also cannot ask for a refund.

Adobe, that is a shame.

Planet DebianJohn Goerzen: Emacs #2: Introducing org-mode

In my first post in my series on Emacs, I described returning to Emacs after over a decade of vim, and org-mode being the reason why.

I really am astounded at the usefulness, and simplicity, of org-mode. It is really a killer app.

So what exactly is org-mode?

I wrote yesterday:

It’s an information organization platform. Its website says “Your life in plain text: Org mode is for keeping notes, maintaining TODO lists, planning projects, and authoring documents with a fast and effective plain-text system.”

That’s true, but doesn’t quite capture it. org-mode is a toolkit for you to organize things. It has reasonable out-of-the-box defaults, but it’s designed throughout for you to customize.

To highlight a few things:

  • Maintaining TODO lists: items can be scattered across org-mode files, contain attachments, have tags, deadlines, schedules. There is a convenient “agenda” view to show you what needs to be done. Items can repeat.
  • Authoring documents: org-mode has special features for generating HTML, LaTeX, slides (with LaTeX beamer), and all sorts of other formats. It also supports direct evaluation of code in-buffer and literate programming in virtually any Emacs-supported language. If you want to bend your mind on this stuff, read this article on literate devops. The entire Worg website is made with org-mode.
  • Keeping notes: yep, it can do that too. With full-text search, cross-referencing by file (as a wiki), by UUID, and even into other systems (into mu4e by Message-ID, into ERC logs, etc, etc.)

Getting started

I highly recommend watching Carsten Dominik’s excellent Google Talk on org-mode. It is an excellent introduction.

org-mode is included with Emacs, but you’ll often want a more recent version. Debian users can apt-get install org-mode, or it comes with the Emacs packaging system; M-x package-install RET org-mode RET may do it for you.

Now, you’ll probably want to start with the org-mode compact guide’s introduction section, noting in particular to set the keybindings mentioned in the activation section.

A good tutorial…

I’ve linked to a number of excellent tutorials and introductory items; this post is not going to serve as a tutorial. There are two good videos linked at the end of this post, in particular.

Some of my configuration

I’ll document some of my configuration here, and go into a bit of what it does. This isn’t necessarily because you’ll want to copy all of this verbatim — but just to give you a bit of an idea of some of what can be configured, an idea of what to look up in the manual, and maybe a reference for “now how do I do that?”

First, I set up Emacs to work in UTF-8 by default.

(prefer-coding-system 'utf-8)
(set-language-environment "UTF-8")

org-mode can follow URLs. By default, it opens in Firefox, but I use Chromium.

(setq browse-url-browser-function 'browse-url-chromium)

I set the basic key bindings as documented in the Guide, plus configure the M-RET behavior.

(global-set-key "\C-cl" 'org-store-link)
(global-set-key "\C-ca" 'org-agenda)
(global-set-key "\C-cc" 'org-capture)
(global-set-key "\C-cb" 'org-iswitchb)

(setq org-M-RET-may-split-line nil)

Configuration: Capturing

I can press C-c c from anywhere in Emacs. It will capture something for me, and include a link back to whatever I was working on.

You can define capture templates to set how this will work. I am going to keep two journal files for general notes about meetings, phone calls, etc. One for personal, one for work items. If I press C-c c j, then it will capture a personal item. The %a in all of these includes the link to where I was (or a link I had stored with C-c l).

(setq org-default-notes-file "~/org/")
(setq org-capture-templates
      '(("t" "Todo" entry (file+headline "" "Tasks")
         "* TODO %?\n  %i\n  %u\n  %a")
        ("n" "Note/Data" entry (file+headline "" "Notes/Data")
         "* %?   \n  %i\n  %u\n  %a")
        ("j" "Journal" entry (file+datetree "~/org/")
         "* %?\nEntered on %U\n %i\n %a")
        ("J" "Work-Journal" entry (file+datetree "~/org/")
         "* %?\nEntered on %U\n %i\n %a")))
(setq org-irc-link-to-logs t)

I like to link by UUIDs, which lets me move things between files without breaking locations. This helps generate UUIDs when I ask Org to store a link target for future insertion.

(require 'org-id)
(setq org-id-link-to-org-use-id 'create-if-interactive)

Configuration: agenda views

I like my week to start on a Sunday, and for org to note the time when I mark something as done.

(setq org-log-done 'time)
(setq org-agenda-start-on-weekday 0)

Configuration: files and refiling

Here I tell it what files to use in the agenda, and to add a few more to the plain text search. I like to keep a general inbox (from which I can move, or “refile”, content), and then separate tasks, journal, and knowledge base for personal and work items.

(setq org-agenda-files (list "~/org/"))
(setq org-agenda-text-search-extra-files
      (list "~/org/"))

(setq org-refile-targets '((nil :maxlevel . 2)
                           (org-agenda-files :maxlevel . 2)
                           ("~/org/" :maxlevel . 2)
                           ("~/org/" :maxlevel . 2)))
(setq org-outline-path-complete-in-steps nil)         ; Refile in a single go
(setq org-refile-use-outline-path 'file)

Configuration: Appearance

I like a pretty screen. After you’ve gotten used to org a bit, you might try this.

(require 'org-bullets)
(add-hook 'org-mode-hook
          (lambda ()
            (org-bullets-mode t)))
(setq org-ellipsis "⤵")

Coming up next…

This hopefully showed a few things that org-mode can do. Coming up next, I’ll cover how to customize TODO keywords and tags, archiving old tasks, forwarding emails to org-mode, and using git to synchronize between machines.

You can also see a list of all articles in this series.

Resources to accompany this article

Planet DebianDirk Eddelbuettel: #17: Dependencies.

Dependencies are invitations for other people to break your package.
-- Josh Ulrich, private communication

Welcome to the seventeenth post in the relentlessly random R ravings series of posts, or R4 for short.

Dependencies. A truly loaded topic.

As R users, we are spoiled. Early in the history of R, Kurt Hornik and Friedrich Leisch built support for packages right into R, and started the Comprehensive R Archive Network (CRAN). And R and CRAN have had a fantastic run. Roughly twenty years later, we are looking at over 12,000 packages which can (generally) be installed with absolute ease and no surprises. No other (relevant) open source language has anything of comparable rigour and quality. This is a big deal.

And coding practices evolved and changed to play to this advantage. Packages are a near-unanimous recommendation, use of the install.packages() and update.packages() tooling is nearly universal, and most R users learned to their advantage to group code into interdependent packages. Obvious advantages are versioning and snap-shotting, attached documentation in the form of help pages and vignettes, unit testing, and of course continuous integration as a side effect of the package build system.

But the notion of 'oh, let me just build another package and add it to the pool of packages' can get carried away. A recent example I had was the work on the prrd package for parallel recursive dependency testing --- coincidentally, created entirely to allow for easier voluntary tests I do on reverse dependencies for the packages I maintain. It uses a job queue for which I relied on the liteq package by Gabor which does the job: enqueue jobs, and reliably dequeue them (also in a parallel fashion) and more. It looks light enough:

R> tools::package_dependencies(package="liteq", recursive=FALSE, db=AP)$liteq
[1] "assertthat" "DBI"        "rappdirs"   "RSQLite"   

Two dependencies because it uses an internal SQLite database, one for internal tooling and one for configuration.

All good then? Not so fast. The devil here is the very innocuous and versatile RSQLite package because when we look at fully recursive dependencies all hell breaks loose:

R> tools::package_dependencies(package="liteq", recursive=TRUE, db=AP)$liteq
 [1] "assertthat" "DBI"        "rappdirs"   "RSQLite"    "tools"     
 [6] "methods"    "bit64"      "blob"       "memoise"    "pkgconfig" 
[11] "Rcpp"       "BH"         "plogr"      "bit"        "utils"     
[16] "stats"      "tibble"     "digest"     "cli"        "crayon"    
[21] "pillar"     "rlang"      "grDevices"  "utf8"      
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=AP)$RSQLite
 [1] "bit64"      "blob"       "DBI"        "memoise"    "methods"   
 [6] "pkgconfig"  "Rcpp"       "BH"         "plogr"      "bit"       
[11] "utils"      "stats"      "tibble"     "digest"     "cli"       
[16] "crayon"     "pillar"     "rlang"      "assertthat" "grDevices" 
[21] "utf8"       "tools"     

Now we went from four to twenty-four, due to the twenty-two dependencies pulled in by RSQLite.
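Why does the count balloon like that? Dependencies form a graph, and the recursive query simply takes its transitive closure. A minimal sketch in Python (the toy graph below is illustrative only; the real numbers come from tools::package_dependencies() as shown above):

# Toy dependency graph, deliberately simplified; not real CRAN data.
deps = {
    "liteq":   ["assertthat", "DBI", "rappdirs", "RSQLite"],
    "RSQLite": ["DBI", "Rcpp", "bit64", "blob", "memoise", "pkgconfig"],
    "bit64":   ["bit"],
}

def recursive_deps(pkg, graph):
    """Transitive closure: everything reachable from pkg."""
    seen = set()
    stack = list(graph.get(pkg, []))
    while stack:
        dep = stack.pop()
        if dep not in seen:
            seen.add(dep)
            stack.extend(graph.get(dep, []))
    return seen

print(len(deps["liteq"]))                  # 4 direct dependencies
print(len(recursive_deps("liteq", deps)))  # 10 recursive dependencies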

There, my dear friend, lies madness. The moment one of these packages breaks we get potential side effects. And this is no laughing matter. Here is a tweet from Kieran, posted days before a book deadline of his, when he was forced to roll a CRAN package back because it broke his entire setup. (The original tweet has by now been deleted; why people do that to their entire tweet histories is something I fail to comprehend too; in any case the screenshot is from a private discussion I had with a few like-minded folks over Slack.)

That illustrates the quote by Josh at the top. As I too have "production code" (well, CRANberries for one relies on it), I was interested to see if we could easily amend RSQLite. And yes, we can. A quick fork and a few commits later, we have something we could call 'RSQLighter' as it reduces the dependencies quite a bit:

R> IP <- installed.packages()   # using my installed mod'ed version
R> tools::package_dependencies(package="RSQLite", recursive=TRUE, db=IP)$RSQLite
 [1] "bit64"     "DBI"       "methods"   "Rcpp"      "BH"        "bit"      
 [7] "utils"     "stats"     "grDevices" "graphics" 

That is less than half. I have not proceeded with the fork because I do not believe in needlessly splitting codebases. But this could be a viable candidate for an alternate or shadow repository with more minimal and hence more robust dependencies. Or, as Josh calls it, the tinyverse.

Another maddening aspect of dependencies is the ruthless application of what we could jokingly call Metcalfe's Law: the likelihood of breakage does of course increase with the number of edges in the dependency graph. A nice illustration is this post by Jenny trying to rationalize why one of the 87 (as of today) tidyverse packages is now marked "ORPHANED" at CRAN:

An invitation for other people to break your code. Well put indeed. Or to put rocks up your path.

But things are not all that dire. Most folks appear to understand the issue, and some even do something about it. The DBI and RMySQL packages have saner, stricter dependencies; maybe one day things will improve for RMariaDB and RSQLite too:

R> tools::package_dependencies(package=c("DBI", "RMySQL", "RMariaDB"), recursive=TRUE, db=AP)
$DBI
[1] "methods"

$RMySQL
[1] "DBI"     "methods"

$RMariaDB
 [1] "bit64"     "DBI"       "hms"       "methods"   "Rcpp"      "BH"       
 [7] "plogr"     "bit"       "utils"     "stats"     "pkgconfig" "rlang"    


And to be clear, I do not believe in giving up and using everything via docker, or virtualenvs, or packrat, or ... A well-honed dependency system is wonderful and the right resource to get code deployed and updated. But it requires buy-in from everyone involved, and an understanding of the possible trade-offs. I think we can, and will, do better going forward.

Or else, there will always be the tinyverse ...

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cory DoctorowHey, Sydney! I’m coming to see you tonight (then Adelaide and Wellington!)

I’m just about to go to the airport to fly to Sydney for tonight’s event, What should we do about Democracy?

It’s part of the Australia/New Zealand tour for Walkaway, and from Sydney, I’m moving on to the Adelaide Festival and then to Wellington for Writers and Readers Week and the NetHui one-day event on copyright.

It feels like democracy is under siege, even in rich, peaceful countries like Australia that have escaped financial shocks and civil strife. Populist impulses have been unleashed in the UK and USA. There is a record lack of trust in the institutions of politics and government, exacerbated by the ways in which social media and digital technology can spread ‘fake news’ and are being harnessed by foreign powers to meddle in politics. Important issues that citizens care about, like climate change, are sidelined by professional politicians, enhancing the appeal of outsider figures. Do these problems add up to the failure of democracy? Are Brexit and Trump outliers, or the new normal? Join a lively panel of experts and commentators explore some big questions about the future of democracy, and think more clearly about what we ought to do.

Speakers Cory Doctorow, A.C. Grayling, Rebecca Huntley and Lenore Taylor

Chair Jeremy Moss

Cory DoctorowMy short story about better cities, where networks give us the freedom to schedule our lives to avoid heat-waves and traffic jams

I was lucky enough to be invited to submit a piece to Ian Bogost’s Atlantic series on the future of cities (previously: James Bridle, Bruce Sterling, Molly Sauter, Adam Greenfield); I told Ian I wanted to build on my 2017 Locus column about using networks to allow us to coordinate our work and play in a way that maximized our freedom, so that we could work outdoors on nice days, or commute when the traffic was light, or just throw an impromptu block party when the neighborhood needed a break.

The story is out today, with a gorgeous illustration by Molly Crabapple; the Atlantic called it “The City of Coordinated Leisure,” but in my heart it will always be “Coase’s Day Off: a microeconomics of coordinated leisure.”

There had been some block parties on Lima Street when Arturo had been too small to remember them, but then there had been a long stretch of unreasonably seasonable weather and no one had tried it, not until the year before, on April 18, a Thursday after a succession of days that vied to top each other for inhumane conditions, the weather app on the hallway wall showing 112 degrees before breakfast.

Mr. Papazian was the block captain for that party, and the first they’d known of it was when Arturo’s dad called out to his mom that Papazian had messaged them about a block party, and there was something funny in Dad’s tone, a weird mix of it’s so crazy and let’s do it.

That had been a day to remember, and Arturo had remembered, and watched the temperature.

The City of Coordinated Leisure [Cory Doctorow/The Atlantic]

Planet DebianChris Lamb: Free software activities in February 2018

Here is my monthly update covering what I have been doing in the free software world in February 2018 (previous month):

Reproducible builds

Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:

  • Add support for comparing Berkeley DB files. (Unfortunately this is currently incomplete because the libraries do not report metadata reliably!) (#890528)
  • Add support for comparing "XMLBeans" binary schemas. [...]
  • Drop spurious debugging code in Android tests. [...]


My activities as the current Debian Project Leader are covered in my "Bits from the DPL" email to the debian-devel-announce mailing list.

Patches contributed

  • debian-policy: Replace dh_systemd_install with dh_installsystemd. (#889167)
  • juce: Missing build-depends on graphviz. (#890035)
  • roffit: debian/rules does not override targets as intended. (#889975)
  • Please add rel="canonical" to bug pages. (#890338)

Debian LTS

This month I have been paid to work 18 hours on Debian Long Term Support (LTS). In that time I did the following:


  • redis:
    • 4.0.8-1 — New upstream release and fix a potential hardlink vulnerability.
    • 4.0.8-2 — Also listen on ::1 (IPv6) by default. (#891432)
  • python-django:
    • 1.11.10-1 — New upstream security release.
    • 2.0.2-1 — New upstream security release.
  • redisearch:
    • 1.0.6-1 — New upstream release.
    • 1.0.7-1 — New upstream release & add Lintian overrides for package-does-not-install-examples.
    • 1.0.8-1 — New upstream release, which includes my reproducibility-related improvement.
  • adminer:
    • 4.6.1-1 — New upstream release and override debian-watch-does-not-check-gpg-signature as upstream do not release signatures.
    • 4.6.2-1 — New upstream release.
  • process-cpp:
    • 3.0.1-3 — Make the documentation reproducible.
    • 3.0.1-4 — Correct Vcs-Bzr to Vcs-Git.
  • sleekxmpp (1.3.3-3) — Make the build reproducible. (#890193)
  • python-redis (2.10.6-2) — Correct autopkgtest dependencies and misc packaging updates.
  • bfs (1.2.1-1) — New upstream release.

I also made misc packaging updates for docbook-to-man (1:2.0.0-41), gunicorn (19.7.1-4), installation-birthday (8) & python-daiquiri (1.3.0-3).

Finally, I performed the following sponsored uploads: check-manifest (0.36-2), django-ipware (2.0.1-1), nose2 (0.7.3-3) & python-keyczar (0.716+ds-2).

Debian bugs filed

  • zsh: Please make apt install completion work on "local" files. (#891140)
  • git-gui: Ignores git hooks. (#891552)
  • python-coverage:
    • Installs pyfile.html into wrong directory breaking HTML report generation. (#890560)
    • Document copyright information for bundled JavaScript source. (#890578)

FTP Team

As a Debian FTP assistant I ACCEPTed 123 packages: apticron, aseba, atf-allwinner, bart-view, binutils, browserpass, bulk-media-downloader, ceph-deploy, colmap, core-specs-alpha-clojure, ctdconverter, debos, designate, editorconfig-core-py, essays1743, fis-gtm, flameshot, flex, fontmake, fonts-league-spartan, fonts-ubuntu, gcc-8, getdns, glyphslib, gnome-keyring, gnome-themes-extra, gnome-usage, golang-github-containerd-cgroups, golang-github-go-debos-fakemachine, golang-github-mattn-go-zglob, haskell-regex-tdfa-text, https-everywhere, ibm-3270, ignition-fuel-tools, impass, inetsim, jboss-bridger, jboss-threads, jsonrpc-glib, knot-resolver, libctl, liblouisutdml, libopenraw, libosmo-sccp, libtest-postgresql-perl, libtickit, linux, live-tasks, minidb, mithril, mutter, neuron, node-acorn-object-spread, node-babel, node-call-limit, node-color, node-colormin, node-console-group, node-consolidate, node-cosmiconfig, node-css-color-names, node-date-time, node-err-code, node-gulp-load-plugins, node-html-comment-regex, node-icss-utils, node-is-directory, node-mdn-data, node-mississippi, node-mutate-fs, node-node-localstorage, node-normalize-range, node-postcss-filter-plugins, node-postcss-load-options, node-postcss-load-plugins, node-postcss-minify-font-values, node-promise-retry, node-promzard, node-require-from-string, node-rollup, node-rollup-plugin-buble, node-ssri, node-validate-npm-package-name, node-vue-resource, ntpsec, nvidia-cuda-toolkit, nyx, pipsi, plasma-discover, pokemmo, pokemmo-installer, polymake, privacybadger, proxy-switcher, psautohint, purple-discord, pytest-astropy, pytest-doctestplus, pytest-openfiles, python-aiomeasures, python-coverage, python-fitbit, python-molotov, python-networkmanager, python-os-service-types, python-pluggy, python-stringtemplate3, python3-antlr3, qpack, quintuple, r-cran-animation, r-cran-clustergeneration, r-cran-phytools, re2, sat-templates, sfnt2woff-zopfli, sndio, thunar, uhd, undertime, usbauth-notifier, vmdb2 & xymonq.

I additionally filed 15 RC bugs against packages that had incomplete debian/copyright files against: browserpass, designate, fis-gtm, flex, gnome-keyring, ibm-3270, knot-resolver, libopenraw, libtest-postgresql-perl, mithril, mutter, ntpsec, plasma-discover, pytest-arraydiff & r-cran-animation.

Krebs on SecurityHow to Fight Mobile Number Port-out Scams

T-Mobile, AT&T and other mobile carriers are reminding customers to take advantage of free services that can block identity thieves from easily “porting” your mobile number out to another provider, which allows crooks to intercept your calls and messages while your phone goes dark. Tips for minimizing the risk of number porting fraud are available below for customers of all four major mobile providers, including Sprint and Verizon.

Unauthorized mobile phone number porting is not a new problem, but T-Mobile said it began alerting customers about it earlier this month because the company has seen a recent uptick in fraudulent requests to have customer phone numbers ported over to another mobile provider’s network.

“We have been alerting customers via SMS that our industry is experiencing a phone number port out scam that could impact them,” T-Mobile said in a written statement. “We have been encouraging them to add a port validation feature, if they’ve not already done so.”

Crooks typically use phony number porting requests when they have already stolen the password for a customer account (either for the mobile provider’s network or for another site), and wish to intercept the one-time password that many companies send to the mobile device to perform two-factor authentication.

Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In these cases, the fraudsters can call a customer service specialist at a mobile provider and pose as the target, providing the mark’s static identifiers like name, date of birth, social security number and other information. Often this is enough to have a target’s calls temporarily forwarded to another number, or ported to a different provider’s network.

“Port out fraud has been an industry problem for a long time, but recently we’ve seen an uptick in this illegal activity,” T-Mobile said. “We’re not providing specific metrics, but it’s been enough that we felt it was important to encourage customers to add extra security features to their accounts.”

In a blog post published Tuesday, AT&T said bad guys sometimes use illegal porting to steal your phone number, transfer the number to a device they control and intercept text authentication messages from your bank, credit card issuer or other companies.

“You may not know this has happened until you notice your mobile device has lost service,” reads a post by Brian Rexroad, VP of security relations at AT&T. “Then, you may notice loss of access to important accounts as the attacker changes passwords, steals your money, and gains access to other pieces of your personal information.”

Rexroad says in some cases the thieves just walk into an AT&T store and present a fake ID and your personal information, requesting to switch carriers. Porting allows customers to take their phone number with them when they change phone carriers.

The law requires carriers to provide this number porting feature, but there are ways to reduce the risk of this happening to you.

T-Mobile suggests adding its port validation feature to all accounts. To do this, call 611 from your T-Mobile phone or dial 1-800-937-8997 from any phone. The T-Mobile customer care representative will ask you to create a 6-to-15-digit passcode that will be added to your account.

“We’ve included alerts in the T-Mobile customer app and on, but we don’t want customers to wait to get an alert to take action,” the company said in its statement. “Any customer can call 611 at any time from their mobile phone and have port validation added to their accounts.”

Verizon requires a match on a password or a PIN associated with the account for a port to go through. Subscribers can set their PIN via their Verizon Wireless website account or by visiting a local shop.

Sprint told me that in order for a customer to port their number to a different carrier, they must provide the correct Sprint account number and PIN number for the port to be approved. Sprint requires all customers to create a PIN during their initial account setup.

AT&T calls its two-factor authentication “extra security,” which involves creating a unique passcode on your AT&T account that requires you to provide that code before any changes can be made — including ports initiated through another carrier. Follow this link for more information. And don’t use something easily guessable like your SSN (the last four of your SSN is the default PIN, so make sure you change it quickly to something you can remember but that’s non-obvious).

Bigger picture, these porting attacks are a good reminder to use something other than a text message or a one-time code that gets read to you in an automated phone call. Whenever you have the option, choose the app-based alternative: Many companies now support third-party authentication apps like Google Authenticator and Authy, which can act as powerful two-factor authentication alternatives that are not nearly as easy for thieves to intercept.
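To make the difference concrete, here is a minimal sketch of how such app-based codes work, in Python with the pyotp library (the library choice is an assumption for illustration; any RFC 6238 TOTP implementation behaves the same way). The code is computed locally from a shared secret and the clock, so there is nothing in transit for a number-porting thief to intercept:

# Sketch only: pyotp is one of several TOTP libraries (an assumption).
import pyotp

secret = pyotp.random_base32()  # provisioned once, e.g. via a QR code
totp = pyotp.TOTP(secret)
print(totp.now())               # 6-digit code, valid for ~30 seconds

# Verification happens server-side against the same shared secret;
# no SMS or voice call is involved at any point.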

Several of the mobile companies referred me to the work of a Mobile Authentication task force created by the carriers last fall. They say the issue of unauthorized ports to commit fraud is being addressed by this initiative.

For more on tightening your mobile security stance, see last year’s story, “Is Your Mobile Carrier Your Weakest Link?”

CryptogramApple to Store Encryption Keys in China

Apple is bowing to pressure from the Chinese government and storing encryption keys in China. While I would prefer it if it would take a stand against China, I really can't blame it for putting its business model ahead of its desires for customer privacy.

Two more articles.

Worse Than FailureCodeSOD: The Part Version

Once upon a time, there was a project. Like most projects, it was understaffed, under-budgeted, under-estimated, and under the gun. Death marches ensued, and 80 hour weeks became the norm. The attrition rate was so high that no one who was there at the start of the project was there at the end of the project. Like the Ship of Theseus, each person was replaced at least once, but it was still the same team.

Eric wasn’t on that team. He was, however, a consultant. When the project ended and nothing worked, Eric got called in to fix it. And then called back to fix it some more. And then called back to implement new features. And called back…

While diagnosing one problem, Eric stumbled across the method getPartVersions. A part number was always something like “123456-1”, where the first group of numbers was the part number itself, and the portion after the “-” was the version of that part.

So, getPartVersions, then, should be something like:

String getPartVersions(String part) {
    //sanity checks omitted
    return part.split("-")[1];
}
The first hint that things weren’t implemented in a sane way was the method’s signature:

    private List<Integer> getPartVersions(final String searchString)

Why was it returning a list? The calling code always used the first element in the list, and the list was always one element long.

    private List<Integer> getPartVersions(final String searchString) {
        final List<Integer> partVersions = new ArrayList<>();
        if (StringUtils.indexOfAny(searchString, DELIMITER) != -1) {
            final String[] splitString = StringUtils.split(searchString, DELIMITER);
            if (splitString != null && splitString.length > 1) {
                //this is the partIdentifier, we make it empty it so it will not be parsed as a version
                splitString[0] = "";
                for (String s : splitString) {
                    s = s.trim();
                    try {
                        if (s.length() <= 2) {
                            partVersions.add(Integer.parseInt(s));
                        }
                    } catch (final NumberFormatException ignored) {
                        //Do nothing probably not an partVersion
                    }
                }
            }
        }
        return partVersions;
    }

A part number is always in the form “{PART}-{VERSION}”. That is what the variable searchString should contain. So, they do their basic sanity checks: is there a dash there, does it split into two pieces, and so on. Even these sanity checks hint at a WTF, as StringUtils is obviously just a wrapper around built-in string functions.

Things get really odd, though, with this:

                splitString[0] = "";
                for (String s : splitString) //…

Throw away the part number, then iterate across the entire series of strings we made by splitting. Check the length: if it’s less than or equal to two, it must be the part version. Parse it into an integer and put it in the list. The real “genius” element of this code is that since the first entry in the splitString array is set to an empty string, Integer.parseInt will throw an exception, thus ensuring we don’t accidentally put the part number in our list.

I’ve personally written methods that have this sort of tortured logic, and given what Eric tells us about the history of the project, I suspect I know what happened here. This method was written before the requirement it fulfilled was finalized. No one, including the business users, actually knew the exact format or structure of a part number. The developer got five different explanations, which turned out to be wrong in 15 different ways, and implemented a compromise that just kept getting tweaked until someone looked at the results and said, “Yes, that’s right.” The dev then closed out the requirement and moved onto the next one.

Eric left the method alone: he wasn’t being paid to refactor things, and too much downstream code depended on the method signature returning a List<Integer>.

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!


Krebs on SecurityBot Roundup: Avalanche, Kronos, NanoCore

It’s been a busy few weeks in cybercrime news, justifying updates to a couple of cases we’ve been following closely at KrebsOnSecurity. In Ukraine, the alleged ringleader of the Avalanche malware spam botnet was arrested after eluding authorities in the wake of a global cybercrime crackdown there in 2016. Separately, a case that was hailed as a test of whether programmers can be held accountable for how customers use their product turned out poorly for 27-year-old programmer Taylor Huddleston, who was sentenced to almost three years in prison for making and marketing a complex spyware program.

First, the Ukrainian case. On Nov. 30, 2016, authorities across Europe coordinated the arrest of five individuals thought to be tied to the Avalanche crime gang, in an operation that the FBI and its partners abroad described as an unprecedented global law enforcement response to cybercrime. Hundreds of malicious web servers and hundreds of thousands of domains were blocked in the coordinated action.

The global distribution of servers used in the Avalanche crime machine. Source:

The alleged leader of the Avalanche gang — 33-year-old Russian Gennady Kapkanov — did not go quietly at the time. Kapkanov allegedly shot at officers with a Kalashnikov assault rifle through the front door as they prepared to raid his home, and then attempted to escape off of his 4th floor apartment balcony. He was later released, after police allegedly failed to file proper arrest records for him.

But on Monday Agence France-Presse (AFP) reported that Ukrainian authorities had once again collared Kapkanov, who was allegedly living under a phony passport in Poltava, a city in central Ukraine. No word yet on whether Kapkanov has been charged, which was supposed to happen Monday.

Kapkanov’s drivers license. Source:


Lawyers for Taylor Huddleston, a 27-year-old programmer from Hot Springs, Ark., originally asked a federal court to believe that the software he sold on the sprawling hacker marketplace Hackforums — a “remote administration tool” or “RAT” designed to let someone administer one or many computers remotely — was just a benign tool.

The bad things done with Mr. Huddleston’s tools, the defendant argued, were not Mr. Huddleston’s doing. Furthermore, no one had accused Mr. Huddleston of even using his own software.

The Daily Beast first wrote about Huddleston’s case in 2017, and at the time suggested his prosecution raised questions of whether a programmer could be held criminally responsible for the actions of his users. My response to that piece was “Dual-Use Software Criminal Case Not So Novel.”

Photo illustration by Lyne Lucien/The Daily Beast

The court was swayed by evidence that yes, Mr. Huddleston could be held criminally responsible for those actions. It sentenced him to 33 months in prison after the defendant acknowledged that he knew his RAT — a Remote Access Trojan dubbed “NanoCore RAT” — was being used to spy on webcams and steal passwords from systems running the software.

Of course Huddleston knew: He didn’t market his wares on some Craigslist software marketplace ad, or via video promos on his local cable channel: He marketed the NanoCore RAT and another software licensing program called Net Seal exclusively on Hackforums[dot]net.

This sprawling, English language forum has a deep bench of technical forum discussions about using RATs and other tools to surreptitiously record passwords and videos of “slaves,” the derisive term for systems secretly infected with these RATs.

Huddleston knew what many of his customers were doing because many NanoCore users also used Huddleston’s Net Seal program to keep their own RATs and other custom hacking tools from being disassembled or “cracked” and posted online for free. In short: He knew what programs his customers were using Net Seal on, and he knew what those customers had done or intended to do with tools like NanoCore.

The sentencing suggests that where you choose to sell something online says a lot about what you think of your own product and who’s likely buying it.

Daily Beast author Kevin Poulsen noted in a July 2017 story that Huddleston changed his tune and pleaded guilty. The story pointed to an accompanying plea in which Huddleston stipulated that he “knowingly and intentionally aided and abetted thousands of unlawful computer intrusions” in selling the program to hackers and that he “acted with the purpose of furthering these unauthorized computer intrusions and causing them to occur.”


Bleeping Computer’s Catalin Cimpanu observes that Huddleston’s case is similar to another being pursued by U.S. prosecutors against Marcus “MalwareTech” Hutchins, the security researcher who helped stop the spread of the global WannaCry ransomware outbreak in May 2017. Prosecutors allege Hutchins was the author and proprietor of “Kronos,” a strain of malware designed to steal online banking credentials.

Marcus Hutchins, just after he was revealed as the security expert who stopped the WannaCry worm. Image:

On Sept. 5, 2017, KrebsOnSecurity published “Who is Marcus Hutchins?“, a breadcrumbs research piece on the public user profiles known to have been wielded by Hutchins. The data did not implicate him in the Kronos trojan, but it chronicles the evolution of a young man who appears to have sold and published online quite a few unique and powerful malware samples — including several RATs and custom exploit packs (as well as access to hacked PCs).

MalwareTech declined to be interviewed by this publication in light of his ongoing prosecution. But Hutchins has claimed he never had any customers because he didn’t write the Kronos trojan.

Hutchins has pleaded not guilty to all four counts against him, including conspiracy to distribute malicious software with the intent to cause damage to 10 or more affected computers without authorization, and conspiracy to distribute malware designed to intercept protected electronic communications.

Hutchins said through his @MalwareTechBlog account on Twitter Feb. 26 that he wanted to publicly dispute my Sept. 2017 story. But he didn’t specify why other than saying he was “not allowed to.”

MWT wrote: “mrw [my reaction when] I’m not allowed to debunk the Krebs article so still have to listen to morons telling me why I’m guilty based on information that isn’t even remotely correct.”

Hutchins’ tweet on Feb. 26, 2018.

According to a story at BankInfoSecurity, the evidence submitted by prosecutors for the government includes:

  • Statements made by Hutchins after he was arrested.
  • A CD containing two audio recordings from a county jail in Nevada where he was detained by the FBI.
  • 150 pages of Jabber chats between the defendant and an individual.
  • Business records from Apple, Google and Yahoo.
  • Statements (350 pages) by the defendant from another internet forum, which were seized by the government in another district.
  • Three to four samples of malware.
  • A search warrant executed on a third party, which may contain some privileged information.

The case against Hutchins continues apace in Wisconsin. A scheduling order for pretrial motions filed Feb. 22 suggests the court wishes to have a speedy trial that concludes before the end of April 2018.

TEDYou are here for a reason: 4 questions with Halla Tómasdóttir

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with financier, entrepreneur and onetime candidate for president of Iceland, Halla Tómasdóttir, about what influences, inspires and drives her to be bold.

TED: Tell us who you are.
Halla Tómasdóttir: I think of myself first and foremost as a change catalyst who is passionate about good leadership and a gender-balanced world. My leadership career started in corporate America with Mars and Pepsi Cola, but since then I have served as an entrepreneur, educator, investor, board director, business leader and presidential candidate. I am married, a proud mother of two teenagers and a dog and am perhaps best described by the title given to me by the New Yorker: “A Living Emoji of Sincerity.”

TED: What’s a bold move you’ve made in your career?
HT: I left a high-profile position as the first female CEO of the Iceland Chamber of Commerce to become an entrepreneur with the vision to incorporate feminine values into finance. I felt the urge to show a different way in a sector that felt unsustainable to me, and I longed to work in line with my own values.

TED: Tell us about a woman who inspires you.
HT: The women of Iceland inspired me at an early age, when they showed incredible courage, solidarity and sisterhood and “took the day off” (went on a strike) and literally brought the country to its knees — as nothing worked when women didn’t do any work. Five years later, Iceland was the first country in the world to democratically elect a woman as president. I was 11 years old at the time, and her leadership has inspired me ever since. Her clarity on what she cares about and her humble way of serving those causes is truly remarkable.

TED: If you could go back in time, what would you tell your 18-year-old self?
HT: I would say: Halla, just be you and know that you are enough. People will frequently tell you things like: “This is the way we do things around here.” Don’t ever take that as a valid answer if it doesn’t feel right to you. We are not here to continue to do more of the same if it doesn’t work or feel right anymore. We are here to grow, ourselves and our society. You are here for a reason: make your life and leadership matter.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Worse Than Failure-0//

In software development, there are three kinds of problems: small, big and subtle. The small ones are usually fairly simple to track down; a misspelled label, a math error, etc. The large ones usually take longer to find; a race condition that you just can't reproduce, an external system randomly feeding you garbage, and so forth.


The subtle problems are an entirely different beast. It can be as simple as somebody entering 4321 instead of 432l (432L), or similar confusion with 'i', 'l', '1', '0' and 'O'. It can be an interchanged comma and period. It can be something more complex, such as an unsupported third-party library that throws back errors for undefined conditions, but randomly provides so little information that it is useless to both user and developer.

Brujo B encountered such a beast back in 2003 in a sub-equatorial bank that had been especially fond of VB6. This bank had tried to implement standards. In particular, they wanted all of their error messages to appear consistently for their users. To this end, they put a great deal of time and effort into building a library to display error messages in a consistent format. Specifically:

  Error Title - Error Number / Error Description / Error Source
An example error message might be:

  File Not Found - 127 / File 'your.file' could not be found / FileImporter

Unfortunately, the designers of this routine could not compensate for all of the third party tools and libraries that did NOT set some/most/all of those variables. This led to interesting presentations of errors to both users and developers:

  - 34 / Network Connection Lost /
  Unauthorized - 401 //

Crystal Reports was particularly unhelpful, in that it refused to populate any field from which error details could be obtained, leading to the completely unhelpful:

  -0//
...which could only be interpreted as Something really bad happened, but we don't know what that is and you have no way to figure that out. It didn't matter what Brujo and his peers did. Everything they tried in order to cajole Crystal Reports into giving up context information failed to varying degrees; they could only patch specific instances of errors, and the Ever-Useless™ -0// error kept popping up to bite them in the arse.

After way too much time trying to slay the beast, they gave up, accepted it as one of their own and tried their best to find alternate ways of figuring out what the problems were.

Several years after moving on to saner pastures, Brujo returned to visit old friends. On the wall they had added a cool painting with many words that "describe the company culture". Layered in were management approved words, like "Trust" and "Loyalty". Some were more specific in-jokes, names of former employees, or references to big achievements the organization had made.

One of them was -0//

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

Don MartiWhat I don't get about Marketing

I want to try to figure out something I still don't understand about Marketing.

First, read this story by Sarah Vizard at Marketing Week: Why Google and Facebook should heed Unilever’s warnings.

All good points, right?

With the rise of fake news and revelations about how the Russians used social platforms to influence both the US election and EU referendum, the need for change is pressing, both for the platforms and for the advertisers that support them.

We know there's a brand equity crisis going on. Brand-unsafe placements are making mainstream brands increasingly indistinguishable from scams. So the story makes sense so far. But here's what I don't get.

For the call to action to work, Unilever really needs other brands to rally round but these have so far been few and far between.

Other brands? Why?

If brands are worth anything, they can at least help people tell one product apart from another.

Think Small VW ad

Saying that other brands need to participate in saving Unilever's brands from the three-ring shitshow of brand-unsafe advertising is like saying that Volkswagen really needs other brands to get into simple layouts and natural-sounding copy just because Volkswagen's agency did.

Not everybody has to make the same stuff and sell it the same way. Brands being different from each other is a good thing. (Right?)

generic food

Sometimes a problem on the Internet isn't a "let's all work together" kind of problem. Sometimes it's an opportunity for one brand to get out ahead of another.

What if every brand in a category kept on playing in the trash fire except one?

Planet Linux AustraliaLev Lafayette: Drupal "Access denied" Message

It happens rarely enough, but on occasion (such as after an upgrade to a database system (e.g., MySQL, MariaDB) or to the system version of a web-scripting language (e.g., PHP)), you can end up with your Drupal site failing to load, displaying only an error message similar to:

PDOException: SQLSTATE[HY000] [1044] Access denied for user 'username'@'localhost' to database 'database' in lock_may_be_available() (line 167 of /website/includes/
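One quick way to tell whether the fault lies with the credentials themselves (rather than with Drupal) is to try the same username, password and database from settings.php directly against the server. A minimal sketch in Python (PyMySQL is an assumption; the mysql command-line client works just as well):

# Connectivity check using the credentials from Drupal's settings.php.
# Assumes PyMySQL is installed; substitute your real values.
import pymysql

try:
    conn = pymysql.connect(host="localhost", user="username",
                           password="password", database="database")
    print("Connected OK: credentials and grants are fine")
    conn.close()
except pymysql.err.OperationalError as exc:
    # Error 1044 here corresponds to the PDOException above.
    print("Connection failed:", exc)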



CryptogramE-Mail Leaves an Evidence Trail

If you're going to commit an illegal act, it's best not to discuss it in e-mail. It's also best to Google tech instructions rather than asking someone else to do it:

One new detail from the indictment, however, points to just how unsophisticated Manafort seems to have been. Here's the relevant passage from the indictment. I've bolded the most important bits:

Manafort and Gates made numerous false and fraudulent representations to secure the loans. For example, Manafort provided the bank with doctored [profit and loss statements] for [Davis Manafort Inc.] for both 2015 and 2016, overstating its income by millions of dollars. The doctored 2015 DMI P&L submitted to Lender D was the same false statement previously submitted to Lender C, which overstated DMI's income by more than $4 million. The doctored 2016 DMI P&L was inflated by Manafort by more than $3.5 million. To create the false 2016 P&L, on or about October 21, 2016, Manafort emailed Gates a .pdf version of the real 2016 DMI P&L, which showed a loss of more than $600,000. Gates converted that .pdf into a "Word" document so that it could be edited, which Gates sent back to Manafort. Manafort altered that "Word" document by adding more than $3.5 million in income. He then sent this falsified P&L to Gates and asked that the "Word" document be converted back to a .pdf, which Gates did and returned to Manafort. Manafort then sent the falsified 2016 DMI P&L .pdf to Lender D.

So here's the essence of what went wrong for Manafort and Gates, according to Mueller's investigation: Manafort allegedly wanted to falsify his company's income, but he couldn't figure out how to edit the PDF. He therefore had Gates turn it into a Microsoft Word document for him, which led the two to bounce the documents back-and-forth over email. As attorney and blogger Susan Simpson notes on Twitter, Manafort's inability to complete a basic task on his own seems to have effectively "created an incriminating paper trail."

If there's a lesson here, it's that the Internet constantly generates data about what people are doing on it, and that data is all potential evidence. The FBI is 100% wrong that they're going dark; it's really the golden age of surveillance, and the FBI's panic is really just its own lack of technical sophistication.

Krebs on SecurityUSPS Finally Starts Notifying You by Mail If Someone is Scanning Your Snail Mail Online

In October 2017, KrebsOnSecurity warned that ne’er-do-wells could take advantage of a relatively new service offered by the U.S. Postal Service that provides scanned images of all incoming mail before it is slated to arrive at its destination address. We advised that stalkers or scammers could abuse this service by signing up as anyone in the household, because the USPS wasn’t at that point set up to use its own unique communication system — the U.S. mail — to alert residents when someone had signed up to receive these scanned images.

Image: USPS

The USPS recently told this publication that beginning Feb. 16 it started alerting all households by mail whenever anyone signs up to receive these scanned notifications of mail delivered to that address. The notification program, dubbed “Informed Delivery,” includes a scan of the front of each envelope destined for a specific address each day.

The Postal Service says consumer feedback on its Informed Delivery service has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any bills or other mail being delivered while they’re on the road. It has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide. U.S. residents can find out if their address is eligible by visiting

According to the USPS, some 8.1 million accounts have been created via the service so far (Oct. 7, 2017, the last time I wrote about Informed Delivery, there were 6.3 million subscribers, so the program has grown more than 28 percent in five months).

Roy Betts, a spokesperson for the USPS’s communications team, says post offices handled 50,000 Informed Delivery notifications the week of Feb. 16, and are delivering an additional 100,000 letters to existing Informed Delivery addresses this coming week.

Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 35,000 USPS retail locations nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

A review of the methods used by the USPS to validate new account signups last fall suggested the service was wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account at, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions.

The USPS told me it uses two ID proofing vendors to ask the magic KBA questions, rotating between them randomly: Lexis Nexis and, naturally, the recently breached big-three credit bureau Equifax.

KrebsOnSecurity has assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

It’s also nice when Equifax gives away a metric truckload of information about where you’ve worked, how much you made at each job, and what addresses you frequented when. See: How to Opt Out of Equifax Revealing Your Salary History for how much leaks from this lucrative division of Equifax.

All of the data points in an employee history profile from Equifax will come in handy for answering the KBA questions, or at least whittling away those that don’t match salary ranges or dates and locations of the target identity’s previous addresses.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, anyone able to defeat those automated KBA questions from Equifax and Lexis Nexis — be they stalkers, jilted ex-partners or private investigators — can see who you’re communicating with via the Postal mail.

Maybe this is much ado about nothing: Maybe it’s just a reminder that people in the United States shouldn’t expect more than a post card’s privacy guarantee (which can leak the “who” and “when” of any correspondence, and sometimes the “what” and “why” of the communication). We’d certainly all be better off if more people kept that guarantee in mind for email in addition to snail mail. At least now the USPS will deliver your address a piece of paper letting you know when someone signs up to look at those W’s in your snail mail online.

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 05

Here’s part five of my reading (MP3) (part four, part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


Worse Than FailureCodeSOD: Waiting for the Future

One of the more interesting things about human psychology is how bad we are at thinking about the negative consequences of our actions if those consequences are in the future. This is why the death penalty doesn’t deter crime, why we dump massive quantities of greenhouse gases into the atmosphere, why the Y2K bug happened in the first place, and why we’re going to do it again when every 32-bit Unix system explodes in 2038. If the negative consequence happens well after the action which caused it, humans ignore the obvious cause and effect and go on making problems that have to be fixed later.
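(The 2038 date is not arbitrary: a signed 32-bit time_t runs out 2^31 - 1 seconds after the Unix epoch, which a quick Python check confirms.)

from datetime import datetime, timezone

# A signed 32-bit time_t overflows 2**31 - 1 seconds past 1970-01-01 UTC.
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))
# -> 2038-01-19 03:14:07+00:00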

Fran inherited a bit of technical debt. Specifically, there’s an auto-numbered field in the database. Due to their business requirements, when the field hits 999,999, it needs to wrap back around to 000,001. Many many years ago, the original developer “solved” that problem thus:

function getstan($callingMethod = null)
{
    $sequence = 1;

    // get insert id back
    $rs = db()->insert("sequence", array(
        'processor' => 'widgetsinc',
        'RID'       => $this->data->RID,
        'method'    => $callingMethod,
        'card'      => $this->data->cardNumber
    ), false, false);
    if ($rs) { // if query succeeded...
        $sequence = $rs;
        if ($sequence > 999999) {
            db()->q("delete from sequence where processor='widgetsinc'");
            db()->insert("sequence",
                array('processor' => 'widgetsinc', 'RID' => $this->data->RID, 'card' => $this->data->cardNumber), false,
                false);
            $sequence = 1;
        }
    }

    return (substr(str_pad($sequence, 6, "0", STR_PAD_LEFT), -6));
}
The sequence table uses an auto-numbered column. They insert a row into the table, which returns the generated ID used. If that ID is greater than 999,999, they… delete the old rows. They then insert a new row. Then they return “000001”.

Unfortunately, sequences don’t work this way in MySQL, or honestly any other database. They keep counting up unless you alter or otherwise reset the sequence. So, the counter keeps ticking up, and this method keeps deleting the old rows and returning “000001”. The original developer almost certainly never tested what this code does when the counter breaks 999,999, because that day was so far out into the future that they could put off the problem.
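For contrast, the wraparound the business asked for is just modular arithmetic on the ever-increasing counter; no rows need deleting. A minimal sketch (in Python for brevity; the original is PHP):

def format_stan(raw_id):
    """Map an ever-increasing counter onto the range 000001-999999."""
    wrapped = (raw_id - 1) % 999999 + 1
    return str(wrapped).zfill(6)

print(format_stan(1))        # 000001
print(format_stan(999999))   # 999999
print(format_stan(1000000))  # 000001 again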

Speaking of putting off solving problems, Fran also adds:

For the past 2 years this function has been returning 000001 and is starting to screw up reports.

Broken for at least two years, but only now is it screwing up reports badly enough that anyone wants to do anything to fix it.

[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Planet Linux AustraliaOpenSTEM: At Mercy of the Weather

It is the time of year when Australia often experiences extreme weather events. February is renowned as the hottest month and, in some parts of the country, also the wettest month. It often brings cyclones to our coasts and storms which, conversely enough, may trigger fires as lightning strikes the hot, dry bush. Aboriginal people […]


Planet Linux AustraliaChris Samuel: Vale Dad

[I’ve been very quiet here for over a year for reasons that will become apparent in the next few days when I finish and publish a long post I’ve been working on for a while – difficult to write, hence the delay]

It’s 10 years ago today that my Dad died, and Alan and I lost the father who had meant so much to both of us. It’s odd realising that it’s over 1/5th of my life since he died; it doesn’t seem that long.

Vale dad, love you…



Rondam RamblingsDevin Nunes doesn't realize that he's part of the government

I was reading about the long-anticipated release of the Democratic rebuttal to the famous Republican dossier memo.  I've been avoiding writing about this, or any aspect of the Russia investigation, because there is just so much insanity going on there and I didn't want to get sucked into that tar pit.  But I could not let this slide: [O]n Saturday, committee chairman Devin Nunes (R-Calif.)

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main March 2018 Meeting: Unions - Hacking society's operating system

Tuesday, March 6, 2018, 6:30 PM to 8:30 PM
Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.


Sam VargheseJoyce affair: incestuous relationship between pollies and journos needs some exposure

Barnaby Joyce has come (no pun intended) and Barnaby Joyce has gone, but one issue that is intimately connected with the circus that surrounded him for the last three weeks has yet to be subjected to any scrutiny.

And that is the highly incestuous relationship that exists between Australian journalists and politicians and often results in news being concealed from the public.

The Australian media examined the scandal around Deputy Prime Minister Joyce from many angles, ever since a picture of his pregnant mistress, Vikki Campion, appeared on the front page of The Daily Telegraph.

Various high-profile journalists tried to offer mea culpas to justify their non-reporting of the affair.

This is not the first time that journalists in Canberra have known about newsworthy stories connected to politicians and kept quiet.

In 2005, journalists Michael Brissenden, Tony Wright and Paul Daley were at a dinner with former treasurer Peter Costello at which he told them he had set next April (2006) as the absolute deadline “that is, mid-term,” for John Howard to stand aside; if not, he would challenge him.

Costello was said by Brissenden to have declared that a challenge “will happen then” if “Howard is still there”. “I’ll do it,” he said. He said he was “prepared to go the backbench”. He said he’d “carp” at Howard’s leadership “from the backbench” and “destroy it” until he “won” the leadership.

But the three journalists kept mum about what would have been a big scoop, because Costello’s press secretary asked them not to write the yarn.

There was a great deal of speculation in the run-up to the 2007 election as to whether Howard would step down; one story in July 2006 said there had been an unspoken 1994 agreement between him and Costello to vacate the PM’s seat and make way for Costello to get the top job.

Had the three journalists at that 2005 dinner gone ahead and reported the story — as journalists are supposed to do — it is unlikely that Howard would have been able to carry on as he did. It would have forced Costello to challenge for the leadership or quit. In short, it would have changed the course of politics.

But Brissenden, Daley and Wright kept mum.

In the case of Joyce, it has been openly known since at least April 2017 that he was schtupping Campion. Indeed, the picture of Campion on the front page of the Telegraph indicates she was at least seven months pregnant — later it became known that the baby is due in April — which means Joyce must have been sleeping with her at least from June onwards.

The story was in the public interest, because Joyce and Campion are both paid from the public purse. When their affair became an issue, Joyce had her moved around to the offices of his National Party mates, Matt Canavan and Damian Drum, at salaries that went as high as $190,000. Joyce is also no ordinary politician – he is the deputy prime minister and thus acts as head of the country whenever the prime minister is overseas. Thus anything that affects his functioning is of interest to the public, as he can make decisions that affect them.

But journalists like Katharine Murphy of the Guardian and Jacqueline Maley of the Sydney Morning Herald kept mum. A female journalist who is not part of this clique, Sharri Markson, broke the story. She was roundly criticised by many who belong to the Murphy-Maley school of thinking.

Chris Uhlmann kept mum. So did Malcolm Farr and a host of others like Fran Bailey.

Both Murphy and Maley cited what they called “ethics” to justify keeping mum. But after the story broke, they leapt on it with claws extended. Another journalist, Julia Baird, tried to spin the story as one that showed how a woman in Joyce’s position would have been treated – much worse, was her opinion. She chose former prime minister Julia Gillard as her case study but did not offer the fact that Gillard was also a highly incompetent prime minister and that the flak she earned was also due to this aspect of her character.

Baird once was a columnist for Fairfax’s Weekend magazine and her profile pic in the publication at the time showed her in Sass & Bide jeans – the very business in which her husband was involved. Given that, when she moralises, one needs to take it with a kilo of salt.

But the central point is that, though she has a number of platforms to break a story, Baird never wrote a word about Joyce’s philandering. He promoted himself as a man who espoused family values by being photographed with his wife and four daughters repeatedly. He moralised more times than any other about the sanctity of marriage. Thus, he was fair game. Or so commonsense would dictate.

Why do these journalists and many others keep quiet and try to stay in the good books of politicians? The answer is simple: though the jobs of journalists and public relations people are diametric opposites, journalists have no qualms about crossing the divide because the money in PR is much more.

Salaries are much higher if a journalist gets onto the PR team of a senior politician. And with jobs in journalism disappearing at a rate of knots year after year, journalists like Murphy, Maley and Baird hedge their bets in order to stay in politicians’ good books. Remember Mark Simkin, a competent news reporter at the ABC? He joined the staff of — hold your breath — Tony Abbott when the man was prime minister. Simkin is rarely seen in public these days.

Nobody calls journalists on this deception and fraud. It emboldens them to continue to pose as people who act in the public interest when in reality they are no different from the average worker. Yet they climb on pulpits week after week and pontificate to the masses.

It has been said that journalists are like prostitutes: first, they do it for the fun of it, then they do it for a few friends, and finally they end up doing it for money. You won’t find too many arguments from me about that characterisation.

CryptogramFriday Squid Blogging: The Symbiotic Relationship Between the Bobtail Squid and a Particular Microbe

This is the story of the Hawaiian bobtail squid and Vibrio fischeri.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesDigital Drag?

Screenshot used with permission

As I was scrolling through Facebook a few weeks ago, I noticed a new trend: Several friends posted pictures (via an app) of what they would look like as “the opposite sex.” Some of them were quite funny—my female-identified friends sported mustaches, while my male-identified friends revealed long flowing locks. But my sociologist-brain was curious: What makes this app so appealing? How does it decide what the “opposite sex” looks like? Assuming it grabs the users’ gender from their profiles, what would it do with users who listed their genders as non-binary, trans, or genderqueer? Would it assign them male or female? Would it crash? And, on a basic level, why are my friends partaking in this “game?”

Gender is deeply meaningful for our social world and for our identities—knowing someone’s gender gives us “cues” about how to categorize and connect with that person. Further, gender is an important way our social world is organized, for better or worse. Those who use the app engage with a part of their own identities and the world around them that is extremely significant and meaningful.

Gender is also performative. We “do” gender through the way we dress, talk, and take up space. In the same way, we read gender on people’s bodies and in how they interact with us. The app “changes people’s gender” by changing their gender performance; it alters their hair, face shape, eyes, and eyebrows. The app is thus an outlet to “play” with gender performance. In other words, it’s a way of doing digital drag. Drag is a term that is often used to refer to male-bodied people dressing in a feminine way (“drag queens”) or female-bodied people dressing in a masculine way (“drag kings”), though not all people who do drag fit this definition. Drag is ultimately about assuming and performing a gender. Drag is increasingly coming into the mainstream, as the popular reality TV series RuPaul’s Drag Race has been running for almost a decade now. As more people are exposed to the idea of playing with gender, we might see more of them trying it out in semi-public spaces like Facebook.

While playing with gender may be more common, it’s not all fun and games. The Facebook app in particular assumes a gender binary with clear distinctions between men and women, and this leaves many people out. While data on individuals outside of the gender binary is limited, a 2016 report from The Williams Institute estimated that 0.6% of the U.S. adult population — 1.4 million people — identify as transgender. Further, a Minnesota study of high schoolers found about 3% of the student population identify as transgender or gender nonconforming, and researchers in California estimate that 6% of adolescents are highly gender nonconforming and 20% are androgynous (equally masculine and feminine) in their gender performances.

The problem is that the stakes for challenging the gender binary are still quite high. Research shows people who do not fit neatly into the gender binary can face serious negative consequences, like discrimination and violence (including at least 28 killings of transgender individuals in 2017 and 4 already in 2018).  And transgender individuals who are perceived as gender nonconforming by others tend to face more discrimination and negative health outcomes.

So, let’s all play with gender. Gender is messy and weird and mucking it up can be super fun. Let’s make a digital drag app that lets us play with gender in whatever way we please. But if we stick within the binary of male/female or man/woman, there are real consequences for those who live outside of the gender binary.


Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.


Planet Linux AustraliaTim Serong: Strange Bedfellows

The Tasmanian state election is coming up in a week’s time, and I’ve managed to do a reasonable job of ignoring the whole horrible thing, modulo the promoted tweets, the signs on the highway, the junk the major (and semi-major) political parties pay to dump in my letterbox, and occasional discussions with friends and neighbours.

Promoted tweets can be blocked. The signs on the highway can (possibly) be re-purposed for a subsequent election, or can be pulled down and used for minor windbreak/shelter works for animal enclosures. Discussions with friends and neighbours are always interesting, even if one doesn’t necessarily agree. I think the most irritating thing is the letterbox junk; at best it’ll eventually be recycled, at worst it becomes landfill or firestarters (and some of those things do make very satisfying firestarters).

Anyway, as I live somewhere in the wilds division of Franklin, I thought I’d better check to see who’s up for election here. There’s no independents running this time, so I’ve essentially got the choice of four parties; Shooters, Fishers and Farmers Tasmania, Tasmanian Greens, Tasmanian Labor and Tasmanian Liberals (the order here is the same as on the TEC web site; please don’t infer any preference based on the order in which I list parties in this blog post).

I feel like I should be setting party affiliations aside and voting for individuals, but of the sixteen candidates listed, to the best of my knowledge I’ve only actually met and spoken with two of them. Another I noticed at random in a cafe, and I was ignored by a fourth who was milling around with some cronies at a promotional stand out the front of Woolworths in Huonville a few weeks ago. So, party affiliations it is, which leads to an interesting thought experiment.

When you read those four party names above, what things came most immediately to mind? For me, it was something like this:

  • Shooters, Fishers & Farmers: Don’t take our guns. Fuck those bastard Greenies.
  • Tasmanian Greens: Protect the natural environment. Renewable energy. Try not to kill anything. Might collaborate with Labor. Liberals are big money and bad news.
  • Tasmanian Labor: Mellifluous babble concerning health, education, housing, jobs, pokies and something about workers rights. Might collaborate with the Greens. Vehemently opposed to the Liberals.
  • Tasmanian Liberals: Mellifluous babble concerning jobs, health, infrastructure, safety and the Tasmanian way of life, peppered with something about small business and family values. Vehemently opposed to Labor and the Greens.

And because everyone usually automatically thinks in terms of binaries (e.g. good vs. evil, wrong vs. right, one vs. zero), we tend to end up imagining something like this:

  • Shooters, Fishers & Farmers vs. Greens
  • Labor vs. Liberal
  • …um. Maybe Labor and the Greens might work together…
  • …but really, it’s going to be Labor or Liberal in power (possibly with some sort of crossbench or coalition support from minor parties, despite claims from both that it’ll be majority government all the way).

It turns out that thinking in binaries is remarkably unhelpful, unless you’re programming a computer (it’s zeroes and ones all the way down), or are lost in the wilderness (is this plant food or poison? is this animal predator or prey?) The rest of the time, things tend to be rather more colourful (or grey, depending on your perspective), which leads back to my thought experiment: what do these “naturally opposed” parties have in common?

According to their respective web sites, the Shooters, Fishers & Farmers and the Greens have many interests in common, including agriculture, biosecurity, environmental protection, tourism, sustainable land management, health, education, telecommunications and addressing homelessness. There are differences in the policy details of course (some really are diametrically opposed), but in broad strokes these two groups seem to care strongly about – and even agree on – many of the same things.

Similarly, Labor and Liberal are both keen to tell a story about putting the people of Tasmania first, about health, education, housing, jobs and infrastructure. Honestly, for me, they just kind of blend into one another; sure there’s differences in various policy details, but really if someone renamed them Labal and Liberor I wouldn’t notice. These two are the status quo, and despite fighting it out with each other repeatedly, are, essentially, resting on their laurels.

Here’s what I’d like to see: a minority Tasmanian state government formed from a coalition of the Tasmanian Greens plus the Shooters, Fishers & Farmers party, with the Labor and Liberal parties together in opposition. It’ll still be stuck in that irritating Westminster binary mode, but at least the damn thing will have been mixed up sufficiently that people might actually talk to each other rather than just fighting.

CryptogramElection Security

I joined a letter supporting the Secure Elections Act (S. 2261):

The Secure Elections Act strikes a careful balance between state and federal action to secure American voting systems. The measure authorizes appropriation of grants to the states to take important and time-sensitive actions, including:

  • Replacing insecure paperless voting systems with new equipment that will process a paper ballot;

  • Implementing post-election audits of paper ballots or records to verify electronic tallies;

  • Conducting "cyber hygiene" scans and "risk and vulnerability" assessments and supporting state efforts to remediate identified vulnerabilities.

The legislation would also create needed transparency and accountability in elections systems by establishing clear protocols for state and federal officials to communicate regarding security breaches and emerging threats.

Worse Than FailureError'd: Everybody's Invited!

"According to Outlook, it seems that I accidentally invited all of the EU and US citizens combined," writes Wouter.


"Just an array a month sounds like a pretty good deal to me! And I do happen to have some arrays to spare..." writes Rutger W.


Lucas wrote, "VMWare is on the cutting edge! They can support TWICE as much Windows 10 as their competitors!"


"I just wish it was CurrentMonthName so that I could take advantage of the savings!" Ken wrote.


Mark B. "I had no idea that Redboxes were so cultured."


"I'm a little uncomfortable about being connected to an undefined undefined," writes Joel B.


[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Krebs on SecurityChase ‘Glitch’ Exposed Customer Accounts

Multiple customers have reported logging in to their bank accounts, only to be presented with another customer’s bank account details. Chase has acknowledged the incident, saying it was caused by an internal “glitch” Wednesday evening that did not involve any kind of hacking attempt or cyber attack.

Trish Wexler, director of communications for the retail side of JP Morgan Chase, said the incident happened Wednesday evening, for “a pretty limited number of customers” between 6:30 pm and 9 pm ET who “sporadically during that time while logged in to chase.com could see someone else’s account details.”

“We know for sure the glitch was on our end, not from a malicious actor,” Wexler said, noting that Chase is still trying to determine how many customers may have been affected. “We’re going through Tweets from customers and making sure that if anyone is calling us with issues we’re working one on one with customers. If you see suspicious activity you should give us a call.”

Wexler urged customers to “practice good security hygiene” by regularly reviewing their account statements, and promptly reporting any discrepancies. She said Chase is still working to determine the precise cause of the mix-up, and that there have been no reports of JPMC commercial customers seeing the account information of other customers.

“This was all on our side,” Wexler said. “I don’t know what did happen yet but I know what didn’t happen. What happened last night was 100 percent not the result of anything malicious.”

The account mix-up was documented on Wednesday by Fly & Dine, an online publication that chronicles the airline food industry. Fly & Dine included screenshots of one of their writer’s spouses logged into the account of a fellow Chase customer with an Amazon and Chase card and a balance of more than $16,000.

Kenneth White, a security researcher and director of the Open Crypto Audit Project, said the reports he’s seen on Twitter and elsewhere suggested the screwup was somehow related to the bank’s mobile apps. He also said the Chase retail banking app offered an update first thing Thursday morning.

Chase says the oddity occurred for both chase.com users and users of the Chase mobile app.

“We don’t have any evidence it was related to any update,” Wexler said.

“There’s only so many kind of logic errors where Kenn logs in and sees Brian’s account,” White said.  “It can be a devil to track down because every single time someone logs in it’s a roll of the dice — maybe they get something in the warmed up cache or they get a new hit. It’s tricky to debug, but this is like as bad as it gets in terms of screwup of the app.”

White said the incident is reminiscent of a similar glitch at online game giant Steam, which caused many customers to see account information for other Steam users for a few hours. He said he suspects the problem was a configuration error someplace within “caching servers,” which are designed to ease the load on a Web application by periodically storing some common graphical elements on the page — such as images, videos and GIFs.

“The images, the site banner, all that’s fine to be cached, but you never want to cache active content or raw data coming back,” White said. “If you’re CNN, you’re probably caching all the content on the homepage. But for a banking app that has access to live data, you never want that to be cached.”

“It’s fairly easy to fix once you identify the problem,” he added. “I can imagine just getting the basics of the core issue [for Chase] would be kind of tricky and might mean a lot of non techies calling your Tier 1 support people.”
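White’s distinction comes down to response headers. Here’s a minimal sketch (a hypothetical Express app, not Chase’s actual stack) of caching static assets while forbidding any cache from holding live account data:

// Hypothetical Express app illustrating the cache split White describes.
const express = require("express");
const app = express();

// Static images and banners: safe for proxies and CDNs to cache for a day.
app.use("/static", express.static("public", { maxAge: "1d" }));

// Live account data: no shared or persistent caching, ever.
app.get("/api/account", (req, res) => {
  res.set("Cache-Control", "no-store, private");
  res.json({ balance: 1234.56 }); // placeholder for a real per-user lookup
});

app.listen(3000);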

Update, 8:10 p.m. ET: Added comment from Chase about the incident affecting both mobile device and Web browser users.


Planet Linux AustraliaRussell Coker: Dell PowerEdge T30

I just did a Debian install on a Dell PowerEdge T30 for a client. The Dell web site is a bit broken at the moment; it didn’t list the price of that server or give useful specs when I was ordering it. I was under the impression that the server was limited to 8G of RAM, that’s unusually small but it wouldn’t be the first time a vendor crippled a low end model to drive sales of more expensive systems. It turned out that the T30 model I got has 4*DDR4 sockets with only one used for an 8G DIMM. It apparently can handle up to 64G of RAM.

It has space for 4*3.5″ SATA disks but only has 4*SATA connectors on the motherboard. As I never use the DVD in a server this isn’t a problem for me, but if you want 4 disks and a DVD then you need to buy a PCI or PCIe SATA card.

Compared to the PowerEdge T130 I’m using at home the new T30 is slightly shorter and thinner while seeming to have more space inside. This is partly due to better design and partly due to having 2 hard drives in the top near the DVD drive which are a little inconvenient to get to. The T130 I have (which isn’t the latest model) has 4*3.5″ SATA drive bays at the bottom which are very convenient for swapping disks.

It has two PCIe*16 slots (one of which is apparently quad speed), one shorter PCIe slot, and a PCI slot. For a cheap server a PCI slot is a nice feature, it means I can use an old PCI Ethernet card instead of buying a PCIe Ethernet card. The T30 cost $1002 so using an old Ethernet card saved 1% of the overall cost.

The T30 seems designed to be more of a workstation or personal server than a straight server. The previous iterations of the low end tower servers from Dell didn’t have built in sound and had PCIe slots that were adequate for a RAID controller but vastly inadequate for video. This one has built in line in and out for audio and has two DisplayPort connectors on the motherboard (presumably for dual-head support). Apart from the CPU (an E3-1225 which is slower than some systems people are throwing out nowadays) the system would be a decent gaming system.

It has lots of USB ports which is handy for a file server, I can attach lots of backup devices. Also most of the ports support “super speed”, I haven’t yet tested out USB devices that support such speeds but I’m looking forward to it. It’s a pity that there are no USB-C ports.

One deficiency of the T30 is the lack of a VGA port. It has one HDMI and two DisplayPort sockets on the motherboard, this is really great for a system on or under your desk, any monitor you would want on your desk will support at least one of those interfaces. But in a server room you tend to have an old VGA monitor that’s there because no-one wants it on their desk. Not supporting VGA may force people to buy a $200 monitor for their server room. That increases the effective cost of the system by 20%. It has a PC serial port on the motherboard which is a nice server feature, but that doesn’t make up for the lack of VGA.

The BIOS configuration has an option displayed for enabling charging devices from USB sockets when a laptop is in sleep mode. It’s disappointing that they didn’t either make a BIOS build for non-laptop systems or have the BIOS detect at run-time that it’s not running on laptop hardware and hide that option.


The PowerEdge T30 is a nice low-end workstation. If you want a system with ECC RAM because you need it to be reliable and you don’t need the greatest performance then it will do very well. It has Intel video on the motherboard with HDMI and DisplayPort connectors, this won’t be the fastest video but should do for most workstation tasks. It has a PCIe*16 quad speed slot in case you want to install a really fast video card. The CPU is slow by today’s standards, but Dell sells plenty of tower systems that support faster CPUs.

It’s nice that it has a serial port on the motherboard. That could be used for a serial console or could be used to talk to a UPS or other server-room equipment. But that doesn’t make up for the lack of VGA support IMHO.

One could say that a tower system is designed to be a desktop or desk-side system not run in any sort of server room. However it is cheaper than any rack mounted systems from Dell so it will be deployed in lots of small businesses that have one server for everything – I will probably install them in several other small businesses this year. Also tower servers do end up being deployed in server rooms, all it takes is a small business moving to a serviced office that has a proper server room and the old tower servers end up in a rack.

Rack vs Tower

One reason for small businesses to use tower servers when rack servers are more appropriate is the issue of noise. If your “server room” is the room that has your printer and fax then it typically won’t have a door and you just can’t have the noise of a rack mounted server in there. 1RU systems are inherently noisy because the small diameter of the fans means that they have to spin fast. 2RU systems can be made relatively quiet if you don’t have high-end CPUs but no-one seems to be trying to do that.

I think it would be nice if a company like Dell sold low-end servers in a rack mount form-factor (19 inches wide and 2RU high) that were designed to be relatively quiet. Then instead of starting with a tower server and ending up with tower systems in racks a small business could start with a 19 inch wide system on a shelf that gets bolted into a rack if they move into a better office. Any laptop CPU from the last 10 years is capable of running a file server with 8 disks in a ZFS array. Any modern laptop CPU is capable of running a file server with 8 SSDs in a ZFS array. This wouldn’t be difficult to design.

Google AdsenseIntroducing AdSense Auto ads

Finding the time to create great content for your users is an essential part of growing your publishing business. Today we are introducing AdSense Auto ads, a powerful new way to place ads on your site. Auto ads use machine learning to make smart placement and monetization decisions on your behalf, saving you time. Place one piece of code just once to all of your pages, and let Google take care of the rest.
Some of the benefits of Auto ads include:
  • Optimization: Using machine learning, Auto ads show ads only when they are likely to perform well and provide a good user experience.
  • Revenue opportunities: Auto ads will identify any available ad space and place new ads there, potentially increasing your revenue.
  • Easy to use: With Auto ads you only need to place the ad code on your pages once. When you’re ready to use new features and ad formats, simply turn them on and off with the flick of a switch -- there’s no need to change the code again.

How do Auto ads work?

  1. Select the ad formats you want to show on your pages by switching them on with a simple toggle
  2. Place the Auto ads code on your pages

Auto ads will now start working for you by analyzing your pages, finding potential ad placements, and showing new ads when they’re likely to perform well and provide a good user experience.
And if you want to have different formats on different pages you can use the new Advanced URL settings feature (e.g. you can choose to place In-feed ads on one section of your site but not another).
Getting started with AdSense Auto ads
Auto ads can work equally well on new sites and on those already showing ads.
Have you manually placed ads on your page?
There’s no need to remove them if you don’t want to. Auto ads will take into account all existing Google ads on your pages.

Already using Anchor or Vignette ads?
Auto ads include Anchor and Vignette ads and many more additional formats such as Text and display, In-feed, and Matched content. Note that all users that used Page-level ads are automatically migrated over to Auto ads without any need to add code to their pages again.

To get started with AdSense Auto ads:
  1. Sign in to your AdSense account.
  2. In the left navigation panel, visit My ads and select Get Started.
  3. On the "Choose your global settings" page, select the ad formats that you'd like to show and click Save.
  4. On the next page, click Copy code.
  5. Paste the ad code between the <head> and </head> tags of each page where you want to show Auto ads.
  6. Auto ads will start to appear on your pages in about 10-20 minutes.

We'd love to hear what you think about Auto ads in the comments section below this post.

Posted by:
Tom Long, AdSense Engineering Manager
Violetta Kalathaki, AdSense Product Manager

CryptogramHarassment By Package Delivery

People harassing women by delivering anonymous packages purchased from Amazon.

On the one hand, there is nothing new here. This could have happened decades ago, pre-Internet. But the Internet makes this easier, and the article points out that using prepaid gift cards makes this anonymous. I am curious how much these differences make a difference in kind, and what can be done about it.

Worse Than FailureCodeSOD: Functional IsFunction

Julio S recently had to attempt to graft a third-party document viewer onto an internal web app. The document viewer was from a company which specialized in enterprise “document solutions”, which can be purchased for enterprise-sized licensing fees.

Gluing the document viewer onto their internal app didn’t go terribly well. While debugging, and browsing through the vendor’s javascript, he saw a lot of calls to a function called IsFunction. It was loaded from a “utilities.js”-type do-everything library file. Curious, Julio pulled up the implementation.

function IsFunction ( func ) {
    var bChk=false;
    if (func != "undefined") bChk=true;
    else bChk=false;
    return bChk;
}
I cannot emphasize enough how beautiful this block of code is, by the standards of bad code. There’s so much there. One variable, bChk, uses Hungarian notation. Nothing else seems to. It’s a totally superfluous variable, as we could just do return func != "undefined".

Then again, why would we even do that? The real beauty, though, is how the name of the function and its implementation have no relationship to each other, and the implementation is utterly useless. For example:

IsFunction("Hello World"); //true
IsFunction({spam: "eggs"}); //true
IsFunction(function() {}); //true, but it was probably an accident
IsFunction(undefined); //true
IsFunction("undefined"); //false

Yes, the only time this function returns false is the specific case where you pass it the string “undefined”. Everything else IsFunction apparently. The useless function sounds important. Someone wrote it, probably as a quick attempt at vaguely defensive programming. “I should make sure my inputs are valid”. They didn’t test it. They certainly didn’t think about it. But they wrote it. And then someone else saw the function in use, and said, “Oh… I should probably use that, too.” Somewhere, there’s probably a “Style Guide”, which mandates that, before attempting to invoke a variable that should contain a function, you use IsFunction to confirm it does. It comes up in code reviews, and code has been held from going into production because someone didn't use IsFunction.
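For contrast, the check the author presumably wanted is a one-liner; a minimal sketch:

// typeof is the idiomatic JavaScript test for whether a value is a function.
function IsFunction(func) {
    return typeof func === "function";
}

IsFunction(function () {}); // true
IsFunction("Hello World");  // false
IsFunction(undefined);      // false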

And Julio probably is the first person to actually check the implementation since it was first written.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!


TEDRemembering pastor Billy Graham, and more news in brief

Behold, your recap of TED-related news:

Remembering Billy Graham. For more than 60 years, pastor Billy Graham inspired countless people around the world with his sermons. On Wednesday, February 21, he passed away at his home in North Carolina after struggling with numerous illnesses over the past few years. He was 99 years old. Raised on a dairy farm in N.C., Graham used the power of new technologies, like radio and television, to spread his message of personal salvation to an estimated 215 million people globally, while simultaneously reflecting on technology’s limitations. Reciting the story of King David to audiences at TED1998, “David found that there were many problems that technology could not solve. There were many problems still left. And they’re still with us, and you haven’t solved them, and I haven’t heard anybody here speak to that,” he said, referring to human evil, suffering, and death. To Graham, the answer to these problems was to be found in God. Even after his death, through the work of the Billy Graham Evangelistic Association, led by his son Franklin, his message of personal salvation will live on. (Watch Graham’s TED Talk)

Fashion inspired by Black Panther. TED Fellow and fashion designer Walé Oyéjidé draws on aesthetics from around the globe to create one-of-a-kind pieces that dismantle bias and celebrate often-marginalized groups. For New York Fashion Week, Oyéjidé designed a suit with a coat and scarf for a Black Panther-inspired showcase, sponsored by Marvel Studios. One of Oyéjidé’s scarves is also worn in the movie by its protagonist, King T’Challa. “The film is very much about the joy of seeing cultures represented in roles that they are generally not seen in. There’s villainy and heroes, tech genius and romance,” Oyéjidé told the New York Times, “People of color are generally presented as a monolithic image. I’m hoping it smashes the door open to show that people can occupy all these spaces.” (Watch Oyéjidé’s TED Talk)

Nuclear energy advocate runs for governor. Environmentalist and nuclear energy advocate Michael Shellenberger has launched his campaign for governor of California as an independent candidate. “I think both parties are corrupt and broken. We need to start fresh with a fresh agenda,” he says. Shellenberger intends to run on an energy and environmental platform, and he hopes to involve student environmental activists in his campaign. California’s gubernatorial election will be held in November 2018. (Watch Shellenberger’s TED Talk)

Can UV light help us fight the flu? Radiation scientist David Brenner and his research team at Columbia University’s Irving Medical Center are exploring whether a type of ultraviolet light known as far-UVC could be used to kill the flu virus. To test their theory, they released a strain of the flu virus called H1N1 in an enclosed chamber and exposed it to low doses of UVC. In a paper published in Nature’s Scientific Reports, they report that far-UVC successfully deactivated the virus. Previous research has shown that far-UVC doesn’t penetrate the outer layer of human skin or eyes, unlike conventional UV rays, which means that it appears to be safe to use on humans. Brenner suggests that far-UVC could be used in public spaces to fight the flu. “Think about doctors’ waiting rooms, schools, airports and airplanes—any place where there’s a likelihood for airborne viruses,” Brenner told Time. (Watch Brenner’s TED Talk.)

A beautiful sculpture for Madrid. For the 400th anniversary of Madrid’s Plaza Mayor, artist Janet Echelman created a colorful, fibrous sculpture, which she suspended above the historic space. The sculpture, titled “1.78 Madrid,” aims to provoke contemplation of the interconnectedness of time and our spatial reality. The title refers to the number of microseconds that a day on Earth was shortened as a result of the 2011 earthquake in Japan, which was so strong it caused the planet’s rotation to accelerate. At night, colorful lights are projected onto the sculpture, which makes it an even more dynamic, mesmerizing sight for the city’s residents. (Watch Echelman’s TED Talk)

A graduate program that doesn’t require a high school degree. Economist Esther Duflo’s new master’s program at MIT is upending how we think about graduate school admissions. Rather than requiring the usual test scores and recommendation letters, the program allows anyone to take five rigorous, online courses for free. Students only pay to take the final exam, the cost of which ranges from $100 to $1,000 depending on income. If they do well on the final exam, they can apply to MIT’s master’s program in data, economics and development policy. “Anybody could do that. At this point, you don’t need to have gone to college. For that matter, you don’t need to have gone to high school,” Duflo told WBUR. Already, more than 8,000 students have enrolled online. The program intends to raise significant aid to cover the cost of the master’s program and living in Cambridge, with the first class arriving in 2020. (Watch Duflo’s TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

CryptogramNew Spectre/Meltdown Variants

Researchers have discovered new variants of Spectre and Meltdown. The software mitigations for Spectre and Meltdown seem to block these variants, although the eventual CPU fixes will have to be expanded to account for these new attacks.

Worse Than FailureShiny Side Up


It feels as though disc-based media have always been with us, but the 1990s were when researchers first began harvesting these iridescent creatures from the wild in earnest, pressing data upon them to create the beast known as CD-ROM. Click-and-point adventure games, encyclopedias, choppy full-motion video ... in some cases, ambition far outweighed capability. Advances in technology made the media cheaper and more accessible, often for the worst. There are some US households that still burn America Online 7.0 CDs for fuel.

But we’re not here to delve into the late-90s CD marketing glut. We’re nestling comfortably into the mid-90s, when Internet was too slow and unreliable for anyone to upload installers onto a customer portal and call it a day. Software had to go out on physical media, and it had to be as bug-free as possible before shipping.

Chris, a developer fresh out of college, worked on product catalog database applications that were mailed to customers on CDs. It was a small shop with no Tech Support department, so he and the other developers had to take turns fielding calls from customers having issues with the admittedly awful VB4 installer. It was supposed to launch automatically, but if the auto-play feature was disabled in Windows 95, or the customer canceled the installer pop-up without bothering to read it, Chris or one of his colleagues was likely to hear about it.

And then came the caller who had no clue what Chris meant when he suggested, "Why don't we open up the CD through the file system and launch the installer manually?"

These were the days before remote desktop tools, and the caller wasn't the savviest computer user. Talking him through minimizing his open programs, double-clicking on My Computer, and browsing into the CD drive took Chris over half an hour.

"There's nothing here," the caller said.

So close to the finish line, and yet so far. Chris stifled his exasperation. "What do you mean?"

"I opened the CD like you said, and it's completely empty."

This was new. Chris frowned. "You're definitely looking at the right drive? The one with the shiny little disc icon?"

"Yes, that's the one. It's empty."

Chris' frown deepened. "Then I guess you got a bad copy of the CD. I'm sorry about that! Let me copy down your name and address, and I'll get a new one sent out to you."

The customer provided his mailing address accordingly. Chris finished scribbling it onto a Post-it square. "OK, lemme read that back to—"

"The shiny side is supposed to be turned upwards, right?" the customer blurted. "Like a gramophone record?"

Chris froze, then slapped the mute button before his laughter spilled out over the line. After composing himself, he returned to the call as the model of professionalism. "Actually, it should be shiny-side down."

"Really? Huh. The little icon's lying, then."

"Yeah, I guess it is," Chris replied. "Unfortunately, that's on Microsoft to fix. Let's turn the disc over and try again."

[Advertisement] Incrementally adopt DevOps best practices with BuildMaster, ProGet and Otter, creating a robust, secure, scalable, and reliable DevOps toolchain.

Planet Linux AustraliaColin Charles: MariaDB Developer’s unconference & M|18

Been a while since I wrote anything MySQL/MariaDB related here, but there’s the column on the Percona blog, that has weekly updates.

Anyway, I’ll be at the developer’s unconference this weekend in NYC. Even managed to snag a session on the schedule, MySQL features missing in MariaDB Server (Sunday, 12.15–13.00). Signup on meetup?

Due to the prevalence of “VIP tickets”, I too signed up for M|18. If you need a discount code, I’ll happily offer them up to you to see if they still work (though I’m sure a quick Google will solve this problem for you). I’ll publish notes, probably in my weekly column.

If you’re in New York and want to say hi, talk shop, etc. don’t hesitate to drop me a line.


CryptogramFacebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

It's not a great solution, but it's something:

The process of using postcards containing a specific code will be required for advertising that mentions a specific candidate running for a federal office, Katie Harbath, Facebook's global director of policy programs, said. The requirement will not apply to issue-based political ads, she said.

"If you run an ad mentioning a candidate, we are going to mail you a postcard and you will have to use that code to prove you are in the United States," Harbath said at a weekend conference of the National Association of Secretaries of State, where executives from Twitter Inc and Alphabet Inc's Google also spoke.

"It won't solve everything," Harbath said in a brief interview with Reuters following her remarks.

But sending codes through old-fashioned mail was the most effective method the tech company could come up with to prevent Russians and other bad actors from purchasing ads while posing as someone else, Harbath said.

It does mean a several-days delay between purchasing an ad and seeing it run.

Krebs on SecurityMoney Laundering Via Author Impersonation on Amazon?

Patrick Reames had no idea why Amazon.com sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone has been using it to peddle a $555 book that’s full of nothing but gibberish.

The phony $555 book sold more than 60 times on Amazon using Patrick Reames’ name and Social Security number.

Reames is a credited author on Amazon by way of several commodity industry books, although none of them made anywhere near the amount Amazon is reporting to the Internal Revenue Service. Nor does he have a personal account with Createspace.

But that didn’t stop someone from publishing a “novel” under his name. That word is in quotations because the publication appears to be little more than computer-generated text, almost like the gibberish one might find in a spam email.

“Based on what I could see from the ‘sneak peak’ function, the book was nothing more than a computer generated ‘story’ with no structure, chapters or paragraphs — only lines of text with a carriage return after each sentence,” Reames said in an interview with KrebsOnSecurity.

The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct 7, 2017.

Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.

“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”

Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.

Reames said after learning of the impersonation, he got curious enough to start looking for other examples of author oddities on Amazon’s Createspace platform.

“I have reviewed numerous Createspace titles and its clear to me that there may be hundreds if not thousands of similar fraudulent books on their site,” Reames said. “These books contain no real content, only dozens of pages of gibberish or computer generated text.”

For example, searching Amazon for the name Vyacheslav Grzhibovskiy turns up dozens of Kindle “books” that appear to be similar gibberish works — most of which have the words “quadrillion,” “trillion” or a similar word in their titles. Some retail for just one or two dollars, while others are inexplicably priced between $220 and $320.

Some of the “books” for sale on Amazon attributed to a Vyacheslav Grzhibovskiy.

“Its not hard to imagine how these books could be used to launder money using stolen credit cards or facilitating transactions for illicit materials or funding of illegal activities,” Reames said. “I can not believe Amazon is unaware of this and is unwilling to intercede to stop it. I also believe they are not properly vetting their new accounts to limit tax fraud via stolen identities.”

Reames said Amazon refuses to send him a corrected 1099, or to discuss anything about the identity thief.

“They say all they can do at this point is send me a letter acknowledging than I’m disputing ever having received the funds, because they said they couldn’t prove I didn’t receive the funds. So I told them, ‘If you’re saying you can’t say whether I did receive the funds, tell me where they went?’ And they said, “Oh, no, we can’t do that.’ So I can’t clear myself and they won’t clear me.”

Amazon said in a statement that the security of customer accounts is one of its highest priorities.

“We have policies and security measures in place to help protect them. Whenever we become aware of actions like the ones you describe, we take steps to stop them. If you’re concerned about your account, please contact Amazon customer service immediately using the help section on our website.”

Beware, however, if you plan to contact Amazon customer support via phone. Performing a simple online search for Amazon customer support phone numbers can turn up some dubious and outright fraudulent results.

Earlier this month, KrebsOnSecurity heard from a fraud investigator for a mid-sized bank who’d recently had several customers who got suckered into scams after searching for the customer support line for Amazon. She said most of these customers were seeking to cancel an Amazon Prime membership after the trial period ended and they were charged a $99 fee.

The fraud investigator said her customers ended up calling fake Amazon support numbers, which were answered by people with a foreign accent who proceeded to request all manner of personal data, including bank account and credit card information. In short order, the customers’ accounts were used to set up new Amazon accounts as well as accounts at a service that facilitates the purchase of virtual currencies like Bitcoin.

This Web site does a good job documenting the dozens of phony Amazon customer support numbers that are hoodwinking unsuspecting customers. Amazingly, many of these numbers seem to be heavily promoted using Amazon’s own online customer support discussion forums, in addition to third-party sites like

Interestingly, clicking on the Customer Help Forum link from the Amazon Support Options and Contact Us page currently sends visitors to the page pictured below, which displays a “Sorry, We Couldn’t Find That Page” error. Perhaps the company is simply cleaning things up after being notified last week by KrebsOnSecurity about the bogus phone numbers being promoted on the forum.

In any case, it appears some of these fake Amazon support numbers are being pimped by a number of dubious-looking e-books for sale on Amazon that are all about — you guessed it — how to contact Amazon customer support.

If you wish to contact Amazon by phone, the only numbers you should use are:

U.S. and Canada: 1-866-216-1072

International: 1-206-266-2992

Amazon’s main customer help page is here.

Update, 11:44 a.m. ET: Not sure when it happened exactly, but this notice says Amazon has closed its discussion boards.

Update, 4:02 p.m. ET: Amazon just shared the following statement, in addition to their statement released earlier urging people to visit a help page that didn’t exist (see above):

“Anyone who believes they’ve received an incorrect 1099 form or a 1099 form in error can contact and we will investigate.”

“This is the general Amazon help page:”

Update, 4:01 p.m. ET: Reader zboot has some good stuff. What makes Amazon a great cashout method for cybercrooks as opposed to, say, bitcoin cashouts, is that funds can be deposited directly into a bank account. He writes:

“It’s not that the darkweb is too slow, it’s that you still need to cash out at the end. Amazon lets you go from stolen funds directly to a bank account. If you’ve set it up with stolen credentials, that process may be faster than getting money out of a bitcoin exchange which tend to limit fiat withdraws to accounts created with the amount of information they managed to steal.”

Worse Than FailureCodeSOD: The Telltale Snippet

True! nervous, very, very dreadfully nervous I had been and am; but why will you say that I am mad? The disease had sharpened my senses, not destroyed, not dulled them. Above all was the sense of hearing acute. I heard all things in the heaven and in the earth. I heard many things in hell. How then am I mad? Hearken! and observe how healthily, how calmly I can tell you the whole story. - “The Telltale Heart” by Edgar Allan Poe

Today’s submitter credits themselves only as Too Afraid To Say (TATS). Why? Because like a steady “thump thump” from beneath the floorboards, they are haunted by their crimes. The haunting continues to this very day.

It is impossible to say how the idea entered TATS’s brain, but as a fresh-faced junior developer, they set out to write a flexible web-control in JavaScript. What they wanted was to dynamically add items to the control. Each item was a set of fields- an ID, a tool tip, a description, etc.

Think about how you might pass a list of objects to a method.

    ObjectLookupField.prototype._AddItems = function _AddItems(objItems) {
        if (objItems && objItems.length > 0) {
            var objItemIDs = [];
            var objTooltips = [];
            var objImages = [];
            var objTypes = [];
            var objDeleted = [];
            var objDescriptions = [];
            var objParentTreeCodes = [];
            var objHasChilderen = [];
            var objPath = [];
            var objMarked = [];
            var objLocked = [];

            var blnSkip;

            for (var intI = 0; intI < objItems.length; intI++) {
                objImages.push((objItems[intI].TypeIconURL ? objItems[intI].TypeIconURL : objItems[intI].IconURL));
                objTooltips.push(objItems[intI].Tooltip ? objItems[intI].Tooltip : '');
                objMarked.push(objItems[intI].Marked ? 'Marked' : '');

                // SNIP, not really related
            }
            // TATS also implemented `addItems`, which requires all these arrays
            window[this._strControlID].addItems([objItemIDs, objImages, objPath, objTooltips, objLocked, objMarked, objParentTreeCodes, objHasChilderen]);
        }
    };

TATS used the infamous “Arrject” pattern. Instead of having a list of objects, where each object has all of the fields it needs, the Arrject pattern has one array per field, and then we’ll hope that each index holds all the related data for a given item. For example:

    arrNames = {"Joebob", "Sallybob", "Suebob"};
    arrAddresses = {"123 Street St", "234 Road Rd", "345 Lane Ln"};
    arrPhones = {"555-1234", "555-2345", "555-3456"};

The 0th index of every array contains everything you want to know about Joebob.
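For contrast, here’s a minimal sketch of the conventional alternative: one array of objects, so everything about Joebob travels together under a single index.

    // One object per person; no parallel arrays to keep in sync.
    var people = [
        { name: "Joebob",   address: "123 Street St", phone: "555-1234" },
        { name: "Sallybob", address: "234 Road Rd",   phone: "555-2345" },
        { name: "Suebob",   address: "345 Lane Ln",   phone: "555-3456" }
    ];

    people[0].name;  // "Joebob"
    people[0].phone; // "555-1234"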

Most uses of the Arrject pattern end up in code that doesn’t use objects at all, but TATS adds their own little twist. They explode an object into a set of arrays, and then pass those arrays to their own method which creates the necessary DOM elements.

TATS smiled, for what did they have to fear? They bade the senior developers welcome: use my code. And they did.

Before long, this little bit of code propagated throughout their entire codebase; copied, pasted, dropped in, loaded as a JS dependency, hosted on a private CDN. It was everywhere. Time passed, and careers changed. TATS got promoted up to senior. Other seniors left and handed their code off to TATS. And that’s when the thumping beneath the floorboards became intolerable. That is why they are “Too Afraid to Say”. This little ghost, this reminder of their mistakes as a junior dev is always there, waiting beneath their feet, and it keeps. getting. louder.

“Villains!” I shrieked, “dissemble no more! I admit the deed!—tear up the planks!—here, here!—it is the beating of his hideous heart!”

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!


CryptogramOn the Security of Walls

Interesting history of the security of walls:

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even "defense in depth" security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification -- the art and science of protecting a place by imposing a barrier between you and an enemy -- is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: "We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures." The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There's been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian's Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.

Lots more at the link.

Krebs on SecurityIRS Scam Leverages Hacked Tax Preparers, Client Bank Accounts

Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms, using them to file phony refund requests. Once the Internal Revenue Service processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

In one version of the scam, criminals are pretending to be debt collection agency officials acting on behalf of the IRS. They’ll call taxpayers who’ve had fraudulent tax refunds deposited into their bank accounts, claim the refund was deposited in error, and threaten recipients with criminal charges if they fail to forward the money to the collection agency.

This is exactly what happened to a number of customers at a half dozen banks in Oklahoma earlier this month. Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said many financial institutions in the Oklahoma City area had “a good number of customers” who had large sums deposited into their bank accounts at the same time.

Dodd said the bank customers received hefty deposits into their accounts from the U.S. Treasury, and shortly thereafter were contacted by phone by someone claiming to be a collections agent for a firm calling itself DebtCredit and using the Web site name debtcredit[dot]us.

“We’re having customers getting refunds they have not applied for,” Dodd said, noting that the transfers were traced back to a local tax preparer who’d apparently gotten phished or hacked. Those banks are now working with affected customers to close the accounts and open new ones, Dodd said. “If the crooks have breached a tax preparer and can send money to the client, they can sure enough pull money out of those accounts, too.”

Several of the Oklahoma bank’s clients received customized notices from a phony company claiming to be a collections agency hired by the IRS.

The domain debtcredit[dot]us hasn’t been active for some time, but an exact copy of the site to which the bank’s clients were referred by the phony collection agency can be found at jcdebt[dot]com — a domain that was registered less than a month ago. The site purports to be associated with a company in New Jersey called Debt & Credit Consulting Services, but according to a record (PDF) retrieved from the New Jersey Secretary of State’s office, that company’s business license was revoked in 2010.

“You may be puzzled by an erroneous payment from the Internal Revenue Service but in fact it is quite an ordinary situation,” reads the HTML page shared with people who received the fraudulent IRS refunds. It includes a video explaining the matter, and references a case number, the amount and date of the transaction, and provides a list of personal “data reported by the IRS,” including the recipient’s name, Social Security Number (SSN), address, bank name, bank routing number and account number.

All of these details no doubt are included to make the scheme look official; most recipients will never suspect that they received the bank transfer because their accounting firm got hacked.

The scammers even supposedly assign the recipients an individual “appointed debt collector,” complete with a picture of the employee, her name, telephone number and email address. However, emails sent to the domain used in the email address from the screenshot above (debtcredit[dot]com) bounced, and no one answers at the provided telephone number.

Along with the Web page listing the recipient’s personal and bank account information, each recipient is given a “transaction error correction letter” with IRS letterhead (see image below) that includes many of the same personal and financial details on the HTML page. It also gives the recipient instructions on the account number, ACH routing and wire number to which the wayward funds are to be wired.

A phony letter from the IRS instructing recipients on how and where to wire the money that was deposited into their bank account as a result of a fraudulent tax refund request filed in their name.

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

On Feb. 2, 2018, the IRS issued a warning to tax preparers, urging them to step up their security in light of increased attacks. On Feb. 13, the IRS warned that phony refunds through hacked tax preparation accounts are a “quickly growing scam.”

“Thieves know it is more difficult to identify and halt fraudulent tax returns when they are using real client data such as income, dependents, credits and deductions,” the agency noted in the Feb. 2 alert. “Generally, criminals find alternative ways to get the fraudulent refunds delivered to themselves rather than the real taxpayers.”

The IRS says taxpayers who receive fraudulent transfers from the IRS should contact their financial institution, as the account may need to be closed (because the account details are clearly in the hands of cybercriminals). Taxpayers receiving erroneous refunds also should consider contacting their tax preparers immediately.

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Worse Than FailureCousin of ITAPPMONROBOT

Logitech Quickcam Pro 4000

Every year, Initrode Global was faced with further and further budget shortages in their IT department. This wasn't because the company was doing poorly—on the contrary, the company overall was doing quite well, hitting record sales every quarter. The only way to spin that into a smaller budget was to dream bigger. Thus, every quarter, the budget demanded greater and greater increases in sales, and the exceptional growth was measured against the desired phenomenal growth and found wanting.

IT, being a cost center, was always hit by budget cuts the hardest. What did they need money for? The lights were still on, the mainframes still churning; any additional funds would only encourage them to take wild risks and break things.

One of the things people were worried about breaking were the thin clients. These had been purchased some years ago from Smyrt, who had been acquired the previous year by Hell Computers. There would be no tech support or patching, not from Hell. The IT department was on their own to ensure the clients kept running.

Unfortunately, the things seemed to have a will of their own—and that will did not include remaining up for weeks on end. Every once in a while, when booting Linux on the thin clients, the Thin Film Transistor screen would turn dark as soon as the X server started. They would remain dark after that; however, when the helpdesk SSH'd into the system, the screen would of course render perfectly on their end. So there was nothing to do to troubleshoot except lug a thin client to their work area and test workarounds from there.

The worst part of this kind of troubleshooting is when the problem is an intermittent one. The only way they could think to reproduce the problem was to spend hours in front of the client, turning it off and back on again. In the face of budget cuts, the already understaffed desk had no manpower to do something so trivial and dull.

Tedium is the mother of invention. Many of the most ingenious pieces of automation were put in place when an enterprising programmer was faced with performing a mind-numbing task over and over for the foreseeable future. Such is the case in this instance. Lacking the support staff to power cycle the machine over and over, the staff instead built a robot.

A webcam was found in the back room, dusty and abandoned, the last vestige of a proposed work-from-home solution that never quite came to fruition years before. A sticker of transparent rubber someone found in their desk was placed over the metal rim of the camera so it wouldn't leave any scratches on the glass of the TFT screen. The webcam was placed up close against one strategically chosen corner of the screen, and attached to a Raspberry Pi someone brought from home.

The Pi was programmed to run a bash script, which in turn called a CLI image-grabbing tool and then applied some ImageMagick filters to determine the brightness value of the patch of screen it could see. This brightness value was compared against a known list of brightnesses to determine which state the machine was in: the boot menu, the Linux kernel messages scrolling past, the colorful login screen, or the solid black screen representing the problem. When the Pi detected a login screen, it would run a scripted reboot on the thin client using SSH and a keypair. If, instead, the screen remained dark for a long period of time, it would send an IM through the company messaging solution to alert the staff that they could begin their testing, then exit.
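
The submission doesn’t include the script itself, but the described pipeline is straightforward to sketch in bash. Everything below is an assumption for illustration: fswebcam stands in for whatever CLI grabber they used, the thresholds are invented, the hostname is hypothetical, and notify-helpdesk is a placeholder for the company IM hook. The sketch also collapses the four screen states the article describes into the two that matter:

    #!/bin/bash
    # Minimal sketch of the screen-watching monitor described above.
    CLIENT=thin-client.example.com   # hypothetical thin-client hostname
    dark_count=0

    while true; do
        # Grab one frame of the chosen screen corner.
        fswebcam -r 640x480 --no-banner /tmp/corner.jpg

        # ImageMagick: reduce the frame to a single mean-brightness value, 0.0-1.0.
        bright=$(convert /tmp/corner.jpg -colorspace Gray -format "%[fx:mean]" info:)

        if (( $(echo "$bright > 0.50" | bc -l) )); then
            # Bright, colorful corner: the login screen came up. Reboot and retry.
            dark_count=0
            ssh root@"$CLIENT" reboot
        elif (( $(echo "$bright < 0.05" | bc -l) )); then
            # Dark corner: possibly the bug. Alert only once it persists.
            dark_count=$((dark_count + 1))
            if (( dark_count >= 10 )); then
                echo "Screen stuck dark on $CLIENT" | notify-helpdesk  # placeholder IM hook
                exit 0
            fi
        fi
        sleep 30
    done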

We've seen machines with the ability to manipulate physical servers. Now, we have machines seeing and evaluating the world in front of them. How long before we reach peak Skynet potential here at TDWTF? And what would the robot revolution look like, with founding members such as these?

[Advertisement] Incrementally adopt DevOps best practices with BuildMaster, ProGet and Otter, creating a robust, secure, scalable, and reliable DevOps toolchain.


Don MartiThe tracker will always get through?

(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)

A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won't get better for users, because third-party tracking will just keep up. On this view, today's easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it's hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.

I doubt this is the case because we're playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the adfraud hackers. Right now adfraud is losing in some areas where they had been winning, and the resulting shift in adfraud is likely to shift the risks and rewards of tracking techniques.

Data center adfraud

Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent "cash out" sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites. If you wonder why so many sites made a big deal out of "pivot to video" but can't remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.

This version of adfraud has minimal impact on real users. Real users don't go to fraud sites, and fraudbots do their thing in data centers (doesn't everyone do their Christmas shopping while chilling out in the cold aisle at an Amazon AWS data center? Seems legit to me) and don't touch users' systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with adfraud for ad revenue. Adfraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.

What's new for adfraud

So what's changing? More fraudbots in data centers are getting caught, just because the adtech firms have mostly been shamed into filtering out the embarrassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user on them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long—your computer or mobile device. Expect adfraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps. (The Google Play Store has an ongoing problem with adfraud, which is content marketing gold for Check Point Software, if you like "shitty app did WHAT?" stories.) Adfraud makes way more money than cryptocurrency mining, using less CPU and battery.

So the bad news is that you're going to have to reformat your uncle's computer a lot this year, because more client-side fraud is coming. Data center IPs don't get by the ad networks as well as they once did, so adfraud is getting personal. The good news, is, hey, you know all that big, scary passive fingerprinting that's supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so they'll beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.

Users don't have to get protected from every possible tracking technique in order to shift the web advertising game from a hacking contest to a reputation contest. It often helps simply to shift the advertiser's ROI from negative-externality advertising below the ROI of positive-externality advertising.

Advertisers have two possible responses to adfraud: either try to out-hack it, or join the "flight to quality" and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.


Planet Linux AustraliaPia Waugh: An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies where people can meaningfully participate in shaping optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service has highly professional, empathetic and accountable multi-disciplinary experts on responsive collaborative policy, digital legislation, societal modeling, identifying necessary public digital infrastructure for investment, and well controlled but openly available data, rules and transactional functions of government to enable dynamic and third party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties on areas where they have passion, skills or experience to contribute.
  • The paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We have access to medical capabilities to address any form of disease or discomfort but we don’t use the technologies to just comply to a normative view of a human. People are free to choose their form and we culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Don MartiThis is why we can't have nice brands.

What if I told you that there was an Internet ad technology that...

  • can reach the same user on mobile and desktop

  • uses open-standard persistent identifiers for users

  • can connect users to their purchase history

  • reaches the users that the advertiser chooses, at the time the advertiser chooses

  • and doesn't depend on the Google/Facebook duopoly?

Don't go looking for it on the Lumascape.

I'm describing email spam.

Every feature that adtech is bragging on, or working toward? Email spam had it in the 1990s.

So why didn't brand advertisers jump all over spam? Why did they mostly leave it to low-reputation brands and scammers?

To be honest, it probably wasn't a decision decision in most cases, just corporate sloth. But staying away from spam was the right answer. In the email inbox, spam from a high-reputation brand doesn't look any different from spam that any fly-by-night operation can send. All spammers can do the same stuff:

They can sell to people...for a fraction of what marketing used to cost. And they can collect data on these consumers, track what they buy, what they love and hate about the experience, and market to them directly much more effectively.

Oh, wait. That one isn't about spam in the 1990s. That's about targeted advertising on social media sites today. The CEO of digital advertising's biggest trade group says most big marketers are screwed unless they completely change their business models.

It's the direct consumer relationships, and the use of consumer data, that is completely game-changing for the marketing world. And most big marketers, such as Procter & Gamble and Unilever, are not ready for this new reality, the IAB says.

But of course they're ready. The difference is that those established brand advertisers aren't any more ready than some guy who watched a YouTube video series on "growth hacking" and is ready to start buying targeted ads and drop-shipping.

The "new reality," the targeted advertising business that the IAB wants brands to join them in, is a place where you win based not on how much the audience trusts you, but on how well you can out-hack the competition. And like any information space organized by hacking skill, it's a hellscape of deceptive crap. Read The Strange Brands in Your Instagram Feed by Alexis C. Madrigal.

Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

Of course, not every brand that buys a social media ad or other targeted ad is crap.

But a social media ad is useless for telling crap brands from non-crap ones. It doesn't carry economic signal. There's no such thing as a free watch. (PDF)

Rory Sutherland writes, in Reducing activities to their core misses the point,

Many billions of pounds of advertising expenditure have been shifted from conventional media, most notably newspapers, and moved into digital media in a quest for targeted efficiency. If advertising simply works by the conveyance of messages, this would be a sensible thing to do. However, it is beginning to become apparent that not all, perhaps not even most, advertising works this way. It seems that a large part of advertising creates trust and conviction in its audience precisely because it is perceived to be costly.

If anyone knows that any seller can watch a few YouTube videos and do a certain activity, does that activity really help the audience distinguish a high-reputation seller from a low-reputation one?

And how does it affect a legit brand when its ads show up on the same medium with all the crappy ones? (Twitter has a solution that keeps its ads saleable: just don't show any ads to important people. I'm surprised they can get away with this, but given the mix of rip-off and real brand ads I keep seeing there, it seems to be working.)

Extremists and state-sponsored misinformation campaigns aren't "abusing" targeted advertising. They're just taking advantage of a system optimized for deception and using it normally.

Now, I don't want to blame targeted advertising for all of the problems of brand equity. When you put high-fructose corn syrup in your product, brand equity suffers. When you outsource or de-skill the customer support function, brand equity suffers. All the half-ass "looks good this quarter" stuff that established brands are doing is bad for brand equity. It just turns out that the kinds of advertising that you can do on the Internet today are all half-ass "looks good this quarter" stuff. If you want to send a credible economic signal, buy TV time or put a flagship store on some expensive real estate. The Internet's got nothing for you.

Failure to create signal-carrying ad units should be more of a concern for people who want to earn ad money on the Internet than it is. See Bob Hoffman's "refrigerator test." All that work that went into building the most complicated ad medium ever? It went into building an ad medium optimized for low-reputation advertisers. And that kind of ad medium tends to see rates go down over time. It doesn't hold value.

And the medium can't gain value until the users trust it, which means they have to trust the browser. In-browser tracking protection is going to have to enable the legit web advertising industry the same way that spam filters enable the legit email newsletter industry.

Here’s why the epidemic of malicious ads grew so much worse last year

Facebook and Google could lose $2B in ad revenue over ‘toxic content’

How I Cracked Facebook’s New Algorithm And Tortured My Friends

Wanted: Console Text Editor for Windows

Where Did All the Advertising Jobs Go?

Facebook patents tech to determine social class

The Mozilla Blog: A Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious

Breaking up with Facebook: users confess they're spending less time

Survey: Facebook is the big tech company that people trust least

The Perils of Paid Content


Unilever pledges to cut ties with ‘platforms that create division’

Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

The House That Spied on Me

Why Facebook's Disclosure to the City of Seattle Doesn't Add Up

Debunking common blockchain-saving-advertising myths

SF tourist industry struggles to explain street misery to horrified visitors

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

How Facebook Helped Ruin Cambodia's Democracy

Planet Linux AustraliaDonna Benjamin: Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal; they were like single drops of content. These days though, it's all about entities.

But hang on a minute, I'm using lots of buzzwords, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multilingual learn that the meanings of words are somewhat arbitrary, and that the same thing has different names. This is true for the web too. So the first thing to know about Site Building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's ok. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to SimplyTest.Me

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.


For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.


4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot


In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to login. 

You're now ready to practice some site building!

First, you'll need to create some content to play with.  Here's a short screencast that shows you how to log in, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.



You have 24 hours to experiment with the sandbox - after that it disappears.


Get in touch

If you want something more permanent, you might want to "try drupal" or contact us to discuss our Drupal services.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



CryptogramFriday Squid Blogging: Squid Pin

There's a squid pin on Kickstarter.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Rondam RamblingsYes, code is data, but that's not what makes Lisp cool

There has been some debate on Hacker News lately about what makes Lisp cool, in particular about whether the secret sauce is homoiconicity, or the idea that "code is data", or something else.  I've read through a fair amount of the discussion, and there is a lot of misinformation and bad pedagogy floating around.  Because this is a topic that is near and dear to my heart, I thought I'd take a

CryptogramNew National Academies Report on Crypto Policy

The National Academies has just published "Decrypting the Encryption Debate: A Framework for Decision Makers." It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Planet Linux AustraliaOpenSTEM: Australia at the Olympics

The modern Olympic games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

Worse Than FailureError'd: Preparing for the Future

George B. wrote, "Wait, so is it done...or not done?"


George B. (a different George, but in good company) is seeing nearly the same thing with Crash Plan Pro, where the backup is done ...maybe.


"I swear, that's the last time that I'm flying with Icarus Airlines" Allison V. writes.


"The best I can figure, someone wanted to see what the simulation app would do if executed in some far flung future where months don't matter and nothing makes any sense," writes M.C.


Joel C. wrote "I can't help it - Next time my train is late, I'm going to immediately think that it's because someone didn't click to dismiss a popup."


"I'm not sure what this means, but I guess it's to point out that there are website buttons, and then there are buttons on the website," Brian R. wrote.


[Advertisement] Atalasoft’s imaging SDKs come with APIs & pre-built controls for web viewing, browser scanning, annotating, & OCR/barcode capture. Try it for 30 days with included support.


Cory DoctorowDo We Need a New Internet?

I was one of the interview subjects on an episode of BBC’s Tomorrow’s World called Do We Need a New Internet? (MP3); it’s a fascinating documentary, including some very thoughtful commentary from Edward Snowden.

Cory DoctorowThe 2018 Locus Poll is open: choose your favorite science fiction of 2017!

Following the publication of its editorial board’s long-list of the best science fiction of 2017, science fiction publishing trade-journal Locus now invites its readers to vote for their favorites in the annual Locus Award. I’m honored to have won this award in the past, and doubly honored to see my novel Walkaway on the short list, and in very excellent company indeed.

While you’re thinking about your Locus List picks, you might also use the list as an aide-memoire in picking your nominees for the Hugo Awards.

Krebs on SecurityNew EU Privacy Law May Weaken Security

Companies around the globe are scrambling to comply with new European privacy regulations that take effect a little more than three months from now. But many security experts are worried that the changes being ushered in by the rush to adhere to the law may make it more difficult to track down cybercriminals and less likely that organizations will be willing to share data about new online threats.

On May 25, 2018, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires technology companies to get affirmative consent for any information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — is poised to propose changes to the rules governing how much personal information Web site name registrars can collect and who should have access to the data.

Specifically, ICANN has been seeking feedback on a range of proposals to redact information provided in WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. (Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free).
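
To make the stakes concrete, here is roughly what the two extremes look like at the command line. The record below is entirely invented (the field labels follow the common registrar format), and the redacted version mirrors the most restrictive proposal:

    $ whois example-widgets.com            # today, no privacy service
    Domain Name: EXAMPLE-WIDGETS.COM
    Registrant Name: Jane Q. Registrant
    Registrant Street: 123 Any Street
    Registrant City: Springfield
    Registrant Email: jane@example-widgets.com
    Registrant Phone: +1.5555551234

    $ whois example-widgets.com            # under full redaction
    Domain Name: EXAMPLE-WIDGETS.COM
    Registrant Name: REDACTED FOR PRIVACY
    Registrant Email: REDACTED FOR PRIVACY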

In a bid to help domain registrars comply with the GDPR regulations, ICANN has floated several proposals, all of which would redact some of the registrant data from WHOIS records. Its mildest proposal would remove the registrant’s name, email, and phone number, while allowing self-certified 3rd parties to request access to said data at the approval of a higher authority — such as the registrar used to register the domain name.

The most restrictive proposal would remove all registrant data from public WHOIS records, and would require legal due process (such as a subpoena or court order) to reveal any information supplied by the domain registrant.

ICANN’s various proposed models for redacting information in WHOIS domain name records.

The full text of ICANN’s latest proposed models (from which the screenshot above was taken) can be found here (PDF). A diverse ICANN working group made up of privacy activists, technologists, lawyers, trademark holders and security experts has been arguing about these details since 2016. For the curious and/or intrepid, the entire archive of those debates up to the current day is available at this link.


To drastically simplify the discussions into two sides, those in the privacy camp say WHOIS records are being routinely plundered and abused by all manner of ne’er-do-wells, including spammers, scammers, phishers and stalkers. In short, their view seems to be that the availability of registrant data in the WHOIS records causes more problems than it is designed to solve.

Meanwhile, security experts are arguing that the data in WHOIS records has been indispensable in tracking down and bringing to justice those who seek to perpetrate said scams, spams, phishes and….er….stalks.

Many privacy advocates seem to take a dim view of any ICANN system by which third parties (and not just law enforcement officials) might be vetted or accredited to look at a domain registrant’s name, address, phone number, email address, etc. This sentiment is captured in public comments made by the Electronic Frontier Foundation‘s Jeremy Malcolm, who argued that — even if such information were only limited to anti-abuse professionals — this also wouldn’t work.

“There would be nothing to stop malicious actors from identifying as anti-abuse professionals – neither would want to have a system to ‘vet’ anti-abuse professionals, because that would be even more problematic,” Malcolm wrote in October 2017. “There is no added value in collecting personal information – after all, criminals are not going to provide correct information anyway, and if a domain has been compromised then the personal information of the original registrant isn’t going to help much, and its availability in the wild could cause significant harm to the registrant.”

Anti-abuse and security experts counter that there are endless examples of people involved in spam, phishing, malware attacks and other forms of cybercrime who include details in WHOIS records that are extremely useful for tracking down the perpetrators, disrupting their operations, or building reputation-based systems (such as anti-spam and anti-malware services) that seek to filter or block such activity.

Moreover, they point out that the overwhelming majority of phishing is performed with the help of compromised domains, and that the primary method for cleaning up those compromises is using WHOIS data to contact the victim and/or their hosting provider.

Many commentators observed that, in the end, ICANN is likely to proceed in a way that covers its own backside, and that of its primary constituency — domain registrars. Registrars pay a fee to ICANN for each domain a customer registers, although revenue from those fees has been falling of late, forcing ICANN to make significant budget cuts.

Some critics of the WHOIS privacy effort have voiced the opinion that the registrars generally view public WHOIS data as a nuisance issue for their domain registrant customers and an unwelcome cost-center (from being short-staffed to field a constant stream of abuse complaints from security experts, researchers and others in the anti-abuse community).

“Much of the registrar market is a race to the bottom, and the ability of ICANN to police the contractual relationships in that market effectively has not been well-demonstrated over time,” commenter Andrew Sullivan observed.

In any case, sources close to the debate tell KrebsOnSecurity that ICANN is poised to recommend a WHOIS model loosely based on Model 1 in the chart above.

Specifically, the system that ICANN is planning to recommend, according to sources, would ask registrars and registries to display just the domain name, city, state/province and country of the registrant in each record; the public email addresses would be replaced by a form or message relay link that allows users to contact the registrant. The source also said ICANN plans to leave it up to the registries/registrars to apply these changes globally or only to natural persons living in the European Economic Area (EEA).

In addition, sources say non-public WHOIS data would be accessible via a credentialing system to identify law enforcement agencies and intellectual property rights holders. However, it’s unlikely that such a system would be built and approved before the May 25, 2018 effectiveness date for the GDPR, so the rumor is that ICANN intends to propose a self-certification model in the meantime.

ICANN spokesman Brad White declined to confirm or deny any of the above, referring me instead to a blog post published Tuesday evening by ICANN CEO Göran Marby. That post does not, however, clarify which way ICANN may be leaning on the matter.

“Our conversations and work are on-going and not yet final,” White wrote in a statement shared with KrebsOnSecurity. “We are converging on a final interim model as we continue to engage, review and assess the input we receive from our stakeholders and Data Protection Authorities (DPAs).”

But with the GDPR compliance deadline looming, some registrars are moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And it seems likely that other registrars will follow GoDaddy’s lead.


For my part, I can say without hesitation that few resources are as critical to what I do here at KrebsOnSecurity as the data available in the public WHOIS records. WHOIS records are incredibly useful signposts for tracking cybercrime, and they frequently allow KrebsOnSecurity to break important stories about the connections between and identities behind various cybercriminal operations and the individuals/networks actively supporting or enabling those activities. I also very often rely on WHOIS records to locate contact information for potential sources or cybercrime victims who may not yet be aware of their victimization.

In a great many cases, I have found that clues about the identities of those who perpetrate cybercrime can be found by following a trail of information in WHOIS records that predates their cybercriminal careers. Also, even in cases where online abusers provide intentionally misleading or false information in WHOIS records, that information is still extremely useful in mapping the extent of their malware, phishing and scamming operations.

Anyone looking for copious examples of both need only to search this Web site for the term “WHOIS,” which yields dozens of stories and investigations that simply would not have been possible without the data currently available in the global WHOIS records.

Many privacy activists involved in the WHOIS debate have argued that other data related to domain and Internet address registrations — such as name servers, Internet (IP) addresses and registration dates — should also be considered private information. My chief concern if this belief becomes more widely held is that security companies might stop sharing such information for fear of violating the GDPR, thus hampering the important work of anti-abuse and security professionals.

This is hardly a theoretical concern. Last month I heard from a security firm based in the European Union regarding a new Internet of Things (IoT) botnet they’d discovered that was unusually complex and advanced. Their outreach piqued my curiosity because I had already been working with a researcher here in the United States who was investigating a similar-sounding IoT botnet, and I wanted to know if my source and the security company were looking at the same thing.

But when I asked the security firm to share a list of Internet addresses related to their discovery, they told me they could not do so because IP addresses could be considered private data — even after I assured them I did not intend to publish the data.

“According to many forums, IPs should be considered personal data as it enters the scope of ‘online identifiers’,” the researcher wrote in an email to KrebsOnSecurity, declining to answer questions about whether their concern was related to provisions in the GDPR specifically.  “Either way, it’s IP addresses belonging to people with vulnerable/infected devices and sharing them may be perceived as bad practice on our end. We consider the list of IPs with infected victims to be private information at this point.”

Certainly as the Internet matures and big companies develop ever more intrusive ways to hoover up data on consumers, we also need to rein in the most egregious practices while giving Internet users more robust tools to protect and preserve their privacy. In the context of Internet security and the privacy principles envisioned in the GDPR, however, I’m worried that cybercriminals may end up being the biggest beneficiaries of this new law.

CryptogramElection Security

Good Washington Post op-ed on the need to use voter-verifiable paper ballots to secure elections, as well as risk-limiting audits.

Worse Than FailureIt's Called Abstraction, and It's a Good Thing

Steven worked for a company that sold “big iron” to big companies, for big bucks. These companies didn’t just want the machines, though, they wanted support. They wanted lots of support. With so many systems, processing so many transactions, installed at so many customer sites, Steven’s company needed a better way to analyze when things went squirrelly.

Thus was born a suite of applications called “DICS”- the Diagnostic Investigation Console System. It was, at its core, a processing pipeline. On one end, it would reach out to a customer’s site and download log files. The log files would pass through a series of analytic steps, and eventually reports would come out the other end. Steven mostly worked on the reporting side of things.

While working on reports, he’d sometimes hear about hiccups in the downloader portion of the pipeline, but as it was “not his circus, not his monkeys”, he didn’t pry too deeply. At least, he didn’t until one day, when his boss knocked on his cubicle divider.

“Hey, Steven. You know Perl, right?”

“Uh… sure.”

“And you’ve worked with XML files, right?”

“I… yes?”

“Great. Bob’s leaving. You’re going to need to take over the downloader portion of DICS. Talk to him ASAP. Great, thanks!”

Perl gets a reputation for being a “write only language”, which is at least partially undeserved. Bob was quite sensitive about that reputation, so he stressed, “I’ve worked really, really hard to keep the code as clean and clear as possible. Everything in the design is object oriented.”

Bob wasn’t kidding. Everything was wrapped up as a class. Everything. It was so class-happy it made the Spring framework jealous. JEE consultants would look at it and say, “Whoa, maybe slow down with the classes there.” A UML diagram of the architecture would drain ten printers worth of toner. The config file was stored in XML, and just for parsing out that file and storing the results, Bob had written 25 different classes, some as small as three lines. All in all, the whole downloader weighed in at about 5,000 lines of Perl code.

In the whirlwind tour, Steven asked Bob about the complexity. “It’s not complex. Each class is extremely simple. Well, aside from the config file wrapper, but it needs to have lots of methods because it has lots of data! There are so many fields in the XML file, and I needed to create getters and setters for them all! That way we can have Data Abstraction! That’s important! Data Abstraction is how we keep this project maintainable. What if the XML file format changes? It’s happened, you know. This will make it easy to keep our code in sync!”

Steven marveled at Bob’s ability to pronounce “data abstraction” as if it were in bold face, and resolved to touch the downloader script as little as possible. That resolution failed pretty much a week after Bob left, when the script fell down in production, leaving the DICS pipeline empty. Steven had to roll up his sleeves and get hands on with the code.

Now, one of Perl’s selling points is its rich library. While CPAN may have its own issues as a package manager, if you want to do something like parse an XML file, there’s a library that does it. There’s a dozen libraries that’ll do it. And they all follow a vaguely Perlish idiom: instead of classes, they favor associative arrays. That way, when you want to get something like the contents of the ip_addr tag from the config file, you can write code like this:

$ip_addr = $config->{hosts}[$n]{ip_addr}
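
For instance, here’s a minimal sketch using CPAN’s XML::Simple (one of those dozen libraries; the config layout below is invented for illustration, which is why the nesting differs slightly from the line above):

    use strict;
    use warnings;
    use XML::Simple;

    # Parse config.xml into plain nested hashes/arrays. ForceArray keeps the
    # <host> elements in a list even when the file contains only one of them.
    my $config = XMLin('config.xml', ForceArray => ['host'], KeyAttr => []);

    # Assumes <config><hosts><host><ip_addr>10.0.0.1</ip_addr></host>...</hosts></config>
    my $ip_addr = $config->{hosts}{host}[0]{ip_addr};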

This makes it easy to understand how the structure of the XML file relates to the Perl data structure, but that kind of mapping means that there isn’t any Data Abstraction, and thus was utterly the wrong approach. Instead, everything was done as a getter/setter method.

$ip_addr = $Config_object->host($n)->get_addr();

That doesn’t look too different, perhaps, but the devil is in the details. First, 90% of the getters were “thin”, so get_addr might look something like this:

sub get_addr { return $self->{Addr}; }

That raises questions about the value of these getters/setters for fetching config values, but the bigger problem was this: there was nothing in the config file called “Addr”. Does this method return the IP address? Or a string in the form “$ip_addr:$port”? Or maybe even an array, like [$ip_addr, $port]?

Throughout the whole API, it was a bit of a crapshoot as to what any given method might return. And as for checking the documentation- they’d created a system that provided Data Abstraction, they didn’t need documentation, did they?

To track any given getter back to the actual field in the XML file it was getting, Steven had to trace through half a dozen different classes. It was frustrating and tedious, and Steven had half a mind to just throw the whole thing out and start over, consequences be damned. When he saw the “Translation” subsystem, he decided that it really did need to be thrown out, entirely.

You see, Bob’s goal with Data Abstraction was to make it so that, if the XML file changed, it would be easy to adapt the code. But the code was a mess. So when the XML file did change a few years back, Bob couldn’t update the config handling classes in any way that worked. So he did the next best thing- he wrote a “translation” module that would, using regular expressions, convert the new-style XML files back into the old-style XML files. Then his config-file classes could load and parse the old-style files.
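
To get a feel for how brittle that is, such a shim would be built from substitutions along these lines (the tag and attribute names here are invented); a regex only ever approximates an XML parser, so any reordered attribute or stray newline breaks it:

    # Rewrite one new-style element back into the old style the parser expects.
    # Breaks the moment an attribute is reordered, quoted differently, or
    # wrapped across lines -- and every changed element needs its own rule.
    $xml =~ s{<host\s+addr="([^"]*)"\s*>}{<host><ip_addr>$1</ip_addr>}g;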

Steven sums it up perfectly:

Bob’s classes weren’t data abstraction. It was just… data abstracturbation.

When Steven was done reimplementing Bob's work, he had about 500 lines of code, and the downloader stopped failing every few days.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!


Sociological ImagesWhat’s Trending? Feeling the Love

Valentine’s Day is upon us, but in a world of hookups and breakups many people are concerned about the state of romance. Where do Americans actually stand on sex and relationships? We took a look at some trends from the General Social Survey. They highlight an important point: while Americans are more accepting of things like divorce and premarital sex, that doesn’t necessarily mean that both are running rampant in society.

For example, since the mid 1970s, Americans have become much more accepting of sex before marriage. Today more than half of respondents say it isn’t wrong at all.

However, these attitudes don’t necessarily mean people are having more sex. Younger Americans today actually report having no sexual partners more frequently than people of the same age in earlier surveys.

And what about marriage? Americans are more accepting of divorce now, with more saying a divorce should be easier to obtain.

But again, this doesn’t necessarily mean everyone is flying the coop. While self-reported divorce rates had been on the rise since the mid 1970s, they have largely leveled off in recent years.

It is important to remember that for core social practices like love and marriage, we are extra susceptible to moral panics when faced with social change. These trends show how changes in attitudes don’t always line up with changes in behavior, and they remind us that sometimes we can save the drama for the rom-coms.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Ryan Larson is a graduate student from the Department of Sociology, University of Minnesota – Twin Cities. He studies crime, punishment, and quantitative methodology. He is a member of the Graduate Editorial Board of The Society Pages, and his work has appeared in Poetics, Contexts, and Sociological Perspectives.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


CryptogramCan Consumers' Online Data Be Protected?

Everything online is hackable. This is true for Equifax's data and the federal Office of Personnel Management's data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.

But just because everything is hackable doesn't mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker, and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details, and doesn't care where he gets them from, or the Chinese military looking for specific data from a specific place.

The proper question isn't whether it's possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.

In most cases, it's impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.

We have a feeling that these big companies do better than smaller ones. But we're also surprised when a lone individual publishes personal data hacked from the infidelity site Ashley Madison, or when the North Korean government does the same with personal information in Sony's network.

Think about all the companies collecting personal data about you (the websites you visit, your smartphone and its apps, your Internet-connected car) and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.

So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.

Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don't protect our data online is that it's cheaper not to. Government policy is how we change that.

This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled "Privacy and the Internet."

Worse Than FailureCodeSOD: All the Rest Have Thirty One…

Aleksei received a bunch of notifications from their CI system, announcing a build failure. This was interesting, because no code had changed recently, so what was triggering the failure?

        private BillingRun CreateTestBillingRun(int billingRunGroupId, DateTime? billingDate, int? statusId)
        {
            return new BillingRun
            {
                BillingRunGroupId = billingRunGroupId,
                PeriodStart = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1),
                BillingDate = billingDate ?? new DateTime(DateTime.Today.Year, DateTime.Today.Month, 15),
                CreatedDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 30),
                ItemsPreparedDate = new DateTime(2017, 4, 7),
                CompletedDate = new DateTime(2017, 4, 8),
                DueDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 13),
                StatusId = statusId ?? BillingRunStatusConsts.Completed,
                ErrorCode = "ERR_CODE",
                Error = "Full error description",
                ModifiedOn = new DateTime(2017, 1, 1)
            };
        }

Take a look at the instantiation of CreatedDate. I imagine the developer’s internal monologue went something like this:

Okay, the Period Start is the beginning of the month, the Billing Date is the middle of the month, and Created Date is the end of the month. Um… okay, well, beginning is easy. That’s the 1st. Phew. Okay, but the middle of the month. That’s hard. Oh, wait, wait a second! It’s billing, so I bet the billing department has a day they always send out the bills. Let me send an email to Steve in billing… oh, look at that. It’s always the 15th. Great. Boy. This programming stuff is easy. Whew. Okay, so now the end of the month. This one’s tricky, because months have different lengths, sometimes 30 days, and sometimes 31. Let me ask Steve again, if they have any specific requirements there… oh, look at that. They don’t really care so long as it’s the last day or two of the month. Great. I’ll just use 30, then. Good thing there aren’t any months with a shorter length.
Y’know, I vaguely remember reading a thing that said tests should always use the same values, so that every run tests exactly the same combination of inputs. I think I saved a bookmark to read it later. Should I read it now? No! I should commit this code, let the CI build run, and then mark the requirement as complete.
Boy, this programming stuff is easy.
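
For the record, the crash is February’s fault: the DateTime constructor validates the day against the actual month, so new DateTime(2018, 2, 30) throws an ArgumentOutOfRangeException, and a test that builds dates from DateTime.Today starts failing every February with no code change at all. A minimal sketch of the conventional fix, using DateTime.DaysInMonth:

    using System;

    // The real end of the current month, February included:
    // DateTime.DaysInMonth(2018, 2) is 28.
    var today = DateTime.Today;
    var endOfMonth = new DateTime(
        today.Year, today.Month,
        DateTime.DaysInMonth(today.Year, today.Month));

Though, per the developer’s own half-remembered advice, a test like this would arguably be better off with entirely fixed dates, so that every CI run exercises exactly the same inputs.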


Sky CroeserMotherhood, hope, and 76% less snark

Oh, hi.

I had that baby I was growing in my last post. She’s an amazing little person. She’s learned to clap her hands in the last week, and I am full of wonder and delight. She’s been sick, and I fretted for hours about her rash. (Should I call the doctor? Should I not? Is it a purple rash? Is it getting worse?)

I’m back at work, sitting in my office, relieved to have time to read and write and teach, and missing her fiercely. I feel this all at once: the relief of time and space away, and the missing. I think about her all the time, but also get bored by the way motherhood enfolds me.

At home, we walk in endless circles around the house as she holds out a hand for mine, demands the other hand, then drags me off to open cupboards or visit each room in turn. (At the same time, I love to see her do this: so clearly show me what she wants, so clearly refuse if I put my right hand in her left, or give her only one hand.)


Motherhood has changed me, and I don’t know how I feel about that. (I don’t have much time to work out how I feel about anything.) It is almost physically painful to think of parents losing children to war or violence. Of wanting to feed a hungry child and not being able to. I have the luxury of being able to look away, to take a break from imagining these scenes.

For the last few months the change to my work has been in the time and energy available. Everything needs to be broken up into smaller, more digestible chunks, to manage in nap times and evenings and while so very tired most of the time.

As I finished my undergraduate, I decided to focus on researching movements that gave me hope. Imperfect, complex movements with many flaws, but nevertheless full of people trying to change things for the better. I wanted, and want, to believe that we have the potential to change this. That hungry children can be fed, that we can look after our neighbours, that we can resist and fight back against tides of hatred and fear.

Last year, I found myself writing a presentation and a book chapter that shifted to focusing on the flaws in these movements. I was tired, and I got snarky and impatient with the imperfection of activists (particularly white men) who didn’t listen and tried to define what counts as ‘radical’ and what doesn’t. I still feel that impatience, but that work was depressing. The snark of it was satisfying, but I’m not sure of the use of it, and frankly I am subject to many of the same critiques.

As I try to find my way back into research and writing, I’m trying to recommit to finding threads of hope. Critique is important, especially the critiques I need to listen to from the margins of academia and activism: of white women’s role in feminism(s), of settler societies, of academic power structures. In my own writing I want to be finding materials to stitch into alternatives. I want to be finding spaces where my voice can be useful, rather than just adding more noise.

And it’s a terrible cliche, but the urgency of it comes through when I look at this tiny person and imagine other parents doing the same, hoping for safety and flourishing and care for these wonders we are trying to nourish.



Krebs on SecurityMicrosoft Patch Tuesday, February 2018 Edition

Microsoft today released a bevy of security updates to tackle more than 50 serious weaknesses in Windows, Internet Explorer/Edge, Microsoft Office and Adobe Flash Player, among other products. A good number of the patches issued today ship with Microsoft’s “critical” rating, meaning the problems they fix could be exploited remotely by miscreants or malware to seize complete control over vulnerable systems — with little or no help from users.

February’s Patch Tuesday batch includes fixes for at least 55 security holes. Some of the scarier bugs include vulnerabilities in Microsoft Outlook, Edge and Office that could let bad guys or bad code into your Windows system just by getting you to click on a booby-trapped link or document, or to visit a compromised/hacked Web page.

As per usual, the SANS Internet Storm Center has a handy rundown on the individual flaws, neatly indexing them by severity rating, exploitability and whether the problems have been publicly disclosed or exploited.

One of the updates addresses a pair of serious vulnerabilities in Adobe Flash Player (which ships with the latest version of Internet Explorer/Edge). As KrebsOnSecurity warned last week, there are active attacks ongoing against these Flash vulnerabilities.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is for Windows, Mac, Linux and Chrome OS. But most users would probably be better off manually hobbling or removing Flash altogether, since so few sites still require it. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, though users can also disable Flash entirely here, or whitelist and blacklist specific sites.

People running Adobe Reader or Acrobat also need to update, as Adobe has shipped new versions of these products that fix at least 39 security holes. Adobe Reader users should know there are alternative PDF readers that aren’t so bloated or full of security issues. Sumatra PDF is a good, lightweight alternative.

Experience any issues, glitches or problems installing these updates? Sound off about it in the comments below.

TEDNew podcast alert: WorkLife with Adam Grant, a TED original, premieres Feb. 28

Adam Grant to Explore the Psychology of Unconventional Workplaces as Host of Upcoming New TED Original Podcast “WorkLife”

Organizational psychologist, professor, bestselling author and TED speaker Adam Grant is set to host a new TED original podcast series titled WorkLife with Adam Grant, which will explore unorthodox work cultures in search of surprising and actionable lessons for improving listeners’ work lives.

Beginning Wednesday, February 28, each weekly episode of WorkLife will center around one extraordinary workplace—from an award-winning TV writing team racing against the clock, to a sports team whose culture of humility propelled it to unexpected heights. In immersive interviews that take place in both the field and the studio, Adam brings his observations to vivid life – and distills useful insights in his friendly, accessible style.

“We spend a quarter of our lives in our jobs. This show is about making all that time worth your time,” says Adam, the bestselling author of Originals, Give and Take, and Option B with Sheryl Sandberg. “In WorkLife, we’ll take listeners inside the minds of some fascinating people in some truly unusual workplaces, and mix in fresh social science to reveal how we can lead more creative, meaningful, and generous lives at work.”

Produced by TED in partnership with Pineapple Street Media and Transmitter Media, WorkLife is TED’s first original podcast created in partnership with a TED speaker. Its immersive, narrative format is designed to offer audiences a new way to explore TED speaker ideas in depth. Adam’s talks “Are you a giver or a taker?” and “The surprising habits of original thinkers” have together been viewed more than 11 million times in the past two years.

The show marks TED’s latest effort to test new content formats beyond the nonprofit’s signature first-person TED talk. Other recent TED original content experiments include Sincerely, X, an audio series featuring talks delivered anonymously;  Small Thing Big Idea, a Facebook Watch video series about everyday designs that changed the world; and the Indian prime-time live-audience television series TED Talks India: Nayi Soch, hosted by Bollywood star and TED speaker Shah Rukh Khan.

“We’re aggressively developing and testing a number of new audio and video programs that support TED’s mission of ‘Ideas Worth Spreading,’” said TED head of media and WorkLife co-executive producer Colin Helms. “In every case, our speakers and their ideas remain the focus, but with fresh formats, styles and lengths, we can reach and appeal to even more curious audiences, wherever they are.”

WorkLife debuts Wednesday, February 28 on Apple Podcasts, the TED Android app, or wherever you like to listen to podcasts. Season 1 features eight episodes, roughly 30 minutes each, plus two bonus episodes. It’s sponsored by Accenture, Bonobos, JPMorgan Chase & Co., and Warby Parker. New episodes will be made available every Wednesday.

CryptogramJumping Air Gaps

Nice profile of Mordechai Guri, who researches a variety of clever ways to steal data from air-gapped computers.

Guri and his fellow Ben-Gurion researchers have shown, for instance, that it's possible to trick a fully offline computer into leaking data to another nearby device via the noise its internal fan generates, by changing air temperatures in patterns that the receiving computer can detect with thermal sensors, or even by blinking out a stream of information from a computer hard drive LED to the camera on a quadcopter drone hovering outside a nearby window. In new research published today, the Ben-Gurion team has even shown that they can pull data off a computer protected by not only an air gap, but also a Faraday cage designed to block all radio signals.

Here's a page with all the research results.

BoingBoing post.

Worse Than FailureBudget Cuts

Xavier was the head of a 100+ person development team. Like many enterprise teams, they had to support a variety of vendor-specific platforms, each with their own vendor-specific development environment and its own licensing costs. All the licensing costs were budgeted for at year’s end, when Xavier would submit the costs to the CTO. The approval was a mere formality, ensuring his team would have everything they needed for another year.

Unfortunately, that CTO left to pursue another opportunity. Enter Greg, a new CTO who joined the company from the financial sector. Greg was a penny-pincher on a level that would make the novelty coin-smasher you find at zoos and highway rest-stops jealous. Greg started cutting costs left and right immediately. When the time came for budgeting development tool licensing, Greg threw down the gauntlet on Xavier’s “wild” spending.

"By Grabthar's Hammer, what a savings." (Alan Rickman in Galaxy Quest, looking like his soul is dying forever.)

“Have a seat, X-man,” Greg offered, in a faux-friendly voice. “Let’s get to the point. I looked at your proposal for all of these tools your team supposedly ‘needs’. $40,000 is absurd! Do you think we print money? If your team were any good, they should be able to do everything they need without these expensive, gold-plated programs!”

Xavier was taken aback by Greg’s brashness, but he was prepared for a fight. “Greg, these tools are vital to our development efforts. There are maybe a few products we could do without, but most of them are absolutely required. Even the more ‘optional’ ones, like our refactoring and static analysis tools, save us money and time and improve code quality. Not having them would cost more than the licenses.”

Greg scowled and tented his fingers. “There is no chance I’m approving this as it stands. Go back and figure out what you can do without. If you don’t cut this cost down, I’ll find an easier way to reduce expenses… like by cutting bonuses… or staff.”

Xavier spent the next few days having an extensive tool review with his lead developers. Many of the vendor-specific tools had no alternative, but there were a few third party tools they could do without, or use an open-source equivalent. Across the team of 100+ developers, the net cost savings would be $4,000, or 10%.

Xavier didn’t expect that to make Greg happy, but it was the best they could do. The following morning, Xavier presented his findings in Greg’s office, and it went smoother than expected. “Listen, X. I want this cost down even more, but we’re running out of time to approve this year’s budget. Since I did so much work cutting costs in other ways, I’ll submit this to finance. But enjoy your last year of all these fancy tools! Next year, things will be different!”

Xavier was relieved he didn’t have to fight further. Perhaps, over the next year, he could further demonstrate the necessity of their tooling. With the budget resolved, Xavier had some much-overdue vacation time. He had saved up enough PTO to spend a month in the Australian Outback. Development tools and budgets would be the furthest thing from his mind.

Three great weeks in the land down under were enhanced by being mostly cut off from communication with anyone in the company. During a trip through a town with cell phone reception, Xavier decided to check his voicemail, to make sure the sky wasn’t falling. Dave, his #2 in command, had left an urgent message two days prior.

“Xavier!” Dave shouted on the other end. “You need to get back here soon. Greg never paid the invoices for anything in our stack. We’re sitting here with a huge pile of unlicensed stuff. We’ve been racking up unlicensed usage and support costs, and Greg is going to flip when he sees our monthly statements.” With deep horror, Dave added, “One of the licenses he didn’t pay was for Oracle!”

Xavier reluctantly left the land of dingoes and wallabies to head back home. He arrived just about the same time the first vendor calls demanding payment did. The costs from just three weeks of unlicensed usage of enterprise software were astronomical. Certainly more than just buying the licenses would have been in the first place. Xavier scheduled a meeting with Greg to decide what to do next.

The following Monday, the dreaded meeting was on. “Sit,” Greg said. “I have some good news, and some bad news. The good news is that I’ve found a way to pay these ridiculous charges your team racked up.” Xavier leaned forward in his chair, eager to learn how Greg had pulled it off. “The bad news is that I’ve identified a redundant position: yours.”

Xavier slumped into his chair.

Greg continued. “While you were gone, I realized we were in quite capable hands with Dave, and his salary is quite a bit lower than yours. Coincidentally, the original costs and these ridiculous penalties add up to an amount just a little less than your annual salary. I guess you’re getting your wish: the development team can keep the tools you insist they need to do their jobs. It seems you were right about saving money in the long run, too.”

Xavier left Greg’s office, stunned. On his way out for the last time, he stopped by Dave to congratulate him on the new promotion.

“Oh,” Dave said, sourly, “it’s not a promotion. They’re just eliminating your position. What, you think Greg would give me a raise?”


Don MartiTwo visions of GDPR

As far as I can tell, there are two sets of ambitious predictions about GDPR.

One is the VRM vision. Doc Searls writes, on ProjectVRM:

I am sure Google, Facebook and lesser purveyors of advertising online will find less icky ways to stay in business; but it is becoming clear that next May 25, when the GDPR goes into full effect, will be an extinction-level event for tracking-based advertising (aka adtech) as a business model.

Big impact? Not so fast. There's also a "business as usual" story, and that one, you'll find at Digital Advertising Consent.

Our complex ecosystem of companies must cooperate more closely than ever before to meet the transparency and consent requirements of European data protection law.

According to the adtech firms, well, maybe there will be more bureaucracy, more pointless dialogs that users have to click through, and one more line item, "GDPR compliance", to come out of the publisher's share, of course; but the second vision of GDPR is essentially just adtech/adfraud as usual. Upgrade to the new version of OpenRTB, and move along, nothing to see here.

Personally, I'm not buying either one of these GDPR visions. Because, just for fun and also because reasons, I run my own mail server.

And every little decision I have to make about how to configure the damn thing is based on playing a game with email spammers. Regulation is a part of my complete breakfast, but it's not the whole story.

The government doesn't give you freedom from spam. You have to take it for yourself, one filtering rule at a time. Or, do what most people do, and find a company that does it for you, but it has to be a company that you trust with your information.

A mail sender's decision to comply, or not comply, with some regulation is a bit of information. That feeds into the software that makes the final decision: inbox, spam folder, or reject. When a spam message complies with the regulations of some country, my mail server doesn't say, "Oh, wow, compliant! I can skip all the other checks and send this one straight to the inbox!" It uses the regulation compliance along with other information to make that decision.
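
To make that concrete, here's a toy score-based filter in the spirit of the above. It's purely hypothetical (no real filter exposes this interface), with regulatory compliance as just one weighted signal among several:

    // Hypothetical signals about an incoming message or request.
    record Signals(bool AuthValid, bool OnBlocklist, bool ClaimsCompliance);

    static class Filter
    {
        public static string Classify(Signals s)
        {
            var score = 0.0;
            if (!s.AuthValid) score += 2.5;        // failed authentication
            if (s.OnBlocklist) score += 5.0;       // known bad sender
            if (s.ClaimsCompliance) score -= 0.5;  // compliance counts for a little...
            // ...but it never short-circuits the other checks.
            if (score >= 5.0) return "reject";
            return score >= 2.0 ? "spam folder" : "inbox";
        }
    }

Compliance nudges the score; it doesn't buy a straight ticket to the inbox.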

So whatever extra consent forms that surveillance marketers are required to send by GDPR? They're not the final decision on What The User Must See. They're just data, coming over the network.

Some of that data will be interpreted to mean that this request is an obvious mismatch with how the user chooses to share their info. The user might not even see those consent forms, or the browser might pop up a notification:

4 requests to do creepy shit, that's obviously against your preferences, already denied. Isn't this the best browser ever?

(No, I don't write copy for browser notifications. But you get the idea.)

Browsers that implement tracking protection might end up with a feature where they detect requests for permission to do things that the user has already said no to—by turning on tracking protection in the first place—and auto-deny them.

Legit email senders had to learn "deliverability," the art and science of making legit mail look legit so that it can get past email spam filters. Legit advertisers will have to learn that users aren't identical and spherical, users choose tools to implement their data sharing preferences, and that regulatory compliance is only part of the job.

Should web browsers adopt Google’s new selective ad blocking tech?


Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

CryptogramCabinet of Secret Documents from Australia

This story of leaked Australian government secrets is unlike any other I've heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as "top secret" or "AUSTEO", which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, who is now publishing a bunch of it.

There's lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government's reaction to the incident: they're pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

"The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence," he said.

"That is to say, if you've got a filing cabinet that is full of classified information ... that means all the Crown has to prove if they're prosecuting you is that it is classified ­ nothing else.

"They don't have to prove that you knew it was classified, so knowledge is beside the point."


Many groups have raised concerns, including media organisations, who say the proposed laws unfairly target journalists trying to do their jobs.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets their funding from the government, and was very restrained in what they published. They waited months before publishing as they coordinated with the Australian government. They allowed the government to secure the files, and then returned them. From the government's perspective, they were the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don't want to appear weak on national security, so I'm not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.

CryptogramPoor Security at the UK National Health Service

The Guardian is reporting that "every NHS trust assessed for cyber security vulnerabilities has failed to meet the standard required."

This is the same NHS that was debilitated by WannaCry.

EDITED TO ADD (2/13): More news.

And don't think that US hospitals are much better.