Planet Russell


Planet Linux Australia: Russell Coker: Links September 2014

Matt Palmer wrote a short but informative post about enabling DNSSEC in a zone [1]. I really should set up DNSSEC on my own zones.

Paul Wayper has some insightful comments about the Liberal party’s nasty policies towards the unemployed [2]. We really need a Basic Income in Australia.

Joseph Heath wrote an interesting and insightful article about the decline of the democratic process [3]. While most of his points are really good I’m dubious of his claims about twitter. When used skillfully twitter can provide short insights into topics and teasers for linked articles.

Sarah O wrote an insightful article about NotAllMen/YesAllWomen [4]. I can’t summarise it well in a paragraph, I recommend reading it all.

Betsy Haibel wrote an informative article about harassment by proxy on the Internet [5]. Everyone should learn about this before getting involved in discussions about “controversial” issues.

George Monbiot wrote an insightful and interesting article about the referendum for Scottish independence and the failures of the media [6].

Mychal Denzel Smith wrote an insightful article “How to know that you hate women” [7].

Sam Byford wrote an informative article about Google’s plans to develop and promote cheap Android phones for developing countries [8]. That’s a good investment in future market share by Google and good for the spread of knowledge among people all around the world. I hope that this research also leads to cheap and reliable Android devices for poor people in first-world countries.

Deb Chachra wrote an insightful and disturbing article about the culture of non-consent in the IT industry [9]. This is something we need to fix.

David Hill wrote an interesting and informative article about the way that computer game journalism works and how it relates to GamerGate [10].

Anita Sarkeesian shares the most radical thing that you can do to support women online [11]. Wow, the world sucks more badly than I realised.

Michael Daly wrote an article about the latest evil from the NRA [12]. The NRA continues to demonstrate that claims about “good people with guns” are lies, the NRA are evil people with guns.

Tom Limoncelli: I'll be in Philly on Wednesday night

Hey Philly friends!

I will be speaking at the Philadelphia area Linux Users' Group (PLUG) meeting on Wednesday night (Oct 1st). They meet at the University of the Sciences in Philadelphia (USP). My topic will be "Highlights from The Practice of Cloud System Administration" and I'll have a few copies of the book to give away.

For more info, visit their website:

Hope to see you there!

O'Reilly Linux Planet: SELinux Cookbook

In SELinux Cookbook, we cover everything from how to build SELinux policies to the integration of the technology with other systems and look at a wide range of examples to assist in creating additional policies. The first set of recipes work around file labeling as one of the most common and important SELinux administrative aspects. Then, we move on to custom policy development, showing how this is done for web application confinement, desktop application protection, and custom server policies. Next, we shift our focus to the end user, restricting user privileges and setting up role-based access controls. After that, we redirect our focus to the integration of SELinux with Linux systems, aligning SELinux with existing security controls on a Linux system. Finally, we will learn how applications interact with the SELinux subsystem internally; ensuring that whatever the challenge, we will be able to find the best solution.

Mark Shuttleworth: Fixing the internet for confidentiality and security

“The Internet sees censorship as damage and routes around it” was a very motivating tagline during my early forays into the internet. Having grown up in Apartheid-era South Africa, where government control suppressed the free flow of ideas and information, I was inspired by the idea of connecting with people all over the world to explore the cutting edge of science and technology. Today, people connect with peers and fellow explorers all over the world not just for science but also for arts, culture, friendship, relationships and more. The Internet is the glue that is turning us into a super-organism, for better or worse. And yes, there are dark sides to that easy exchange – internet comments alone will make you cry. But we should remember that the brain is smart even if individual brain cells are dumb, and negative, nasty elements on the Internet are just part of a healthy whole. There’s no Department of Morals I would trust to weed ‘em out or protect me or mine from them.

Today, the pendulum is swinging back to government control of speech, most notably on the net. First, it became clear that total surveillance is the norm even amongst Western democratic governments (the “total information act” reborn).  Now we hear the UK government wants to be able to ban organisations without any evidence of involvement in illegal activities because they might “poison young minds”. Well, nonsense. Frustrated young minds will go off to Syria precisely BECAUSE they feel their avenues for discourse and debate are being shut down by an unfair and unrepresentative government – you couldn’t ask for a more compelling motivation for the next generation of home-grown anti-Western jihadists than to clamp down on discussion without recourse to due process. And yet, at the same time this is happening in the UK, protesters in Hong Kong are moving to peer-to-peer mechanisms to organise their protests precisely because of central control of the flow of information.

One of the reasons I picked the certificate and security business back in the 1990s was because I wanted to be part of letting people communicate privately and securely, for business and pleasure. I’m saddened now at the extent to which the promise of that security has been undermined by state pressure and bad actors in the business of trust.

So I think it’s time that those of us who invest time, effort and money in the underpinnings of technology focus attention on the defensibility of the core freedoms at the heart of the internet.

There are many efforts to fix this under way. The IETF is slowly becoming more conscious of the ways in which ideals can be undermined and the central role it can play in setting standards which are robust in the face of such inevitable pressure. But we can do more, and I’m writing now to invite applications for Fellowships at the Shuttleworth Foundation by leaders who are focused on these problems. TSF already has Fellows working on privacy in personal communications; we are interested in generalising that to the foundations of all communications. We already have a range of applications in this regard; I would welcome more. And I’d like to call attention to the Edgenet effort (distributing network capabilities, based on zero-mq) which is holding a sprint in Brussels October 30-31.

20 years ago, “Clipper” (a proposed mandatory US government back door, supported by the NSA) died on the vine thanks to a concerted effort by industry to show the risks inherent to such schemes. For two decades we’ve had the tide on the side of those who believe it’s more important for individuals and companies to be able to protect information than it is for security agencies to be able to monitor it. I’m glad that today, you are more likely to get into trouble if you don’t encrypt sensitive information in transit on your laptop than if you do. I believe that’s the right side to fight for and the right side for all of our security in the long term, too. But with mandatory back doors back on the table we can take nothing for granted – regulatory regimes can and do change, as often for the worse as for the better. If you care about these issues, please take action of one form or another.

Law enforcement is important. There are huge dividends to a society in which people can make long term plans, which depends on their confidence in security and safety as much as their confidence in economic fairness and opportunity. But the agencies in whom we place this authority are human and tend over time, like any institution, to be more forceful in defending their own existence and privileges than they are in providing for the needs of others. There has never been an institution in history which has managed to avoid this cycle. For that reason, it’s important to ensure that law enforcement is done by due process; there are no short cuts which will not be abused sooner rather than later. Checks and balances are more important than knee-jerk responses to the last attack. Every society, even today’s modern Western society, is prone to abusive governance. We should fear our own darknesses more than we fear others.

A fair society is one where laws are clear and crimes are punished in a way that is deemed fair. It is not one where thinking about crime is criminal, or one where talking about things that are unpalatable is criminal, or one where everybody is notionally protected from the arbitrary and the capricious. Over the past 20 years life has become safer, not more risky, for people living in an Internet-connected West. That’s no thanks to the listeners; it’s thanks to living in a period when the youth (the source of most trouble in the world) feel they have access to opportunity and ideas on a world-wide basis. We are pretty much certain to have hard challenges ahead in that regard. So for all the scaremongering about Chinese cyber-espionage and Russian cyber-warfare and criminal activity in darknets, we are better off keeping the Internet as a free-flowing and confidential medium than we are entrusting an agency with the job of monitoring us for inappropriate and dangerous ideas. And that’s something we’ll have to work for.

Planet Debian: Gunnar Wolf: Diego Gómez: Imprisoned for sharing

I got word via the Electronic Frontier Foundation about an act of injustice happening to a person for doing... Not only what I do day to day, but what I promote and believe to be right: Sharing academic articles.

Diego is a Colombian, working towards his Masters degree on conservation and biodiversity in Costa Rica. He is now facing up to eight years imprisonment for... Sharing a scholarly article he did not author on Scribd.

Many people lack the knowledge and skills to properly set up a venue to share their articles with people they know. Many people will hope for the best and expect academic publishers to be fundamentally good, not to send legal threats just for the simple, noncommercial act of sharing knowledge. Sharing knowledge is fundamental for science to grow, for knowledge to rise. Besides, most scholarly studies are funded by public money, and as the saying goes, they should benefit the public. And the public is everybody, is all of us.

And yes, if this sounds in any way like what drove Aaron Swartz to his sad suicide early last year... It is exactly the same thing. Thankfully (although, sadly, after the sad fact), thousands of people strongly stood on Aaron's side on that demand. Please sign the EFF petition to help Diego, share this, and try to spread the word on the real world needs for Open Access mandates for academics!

Some links with further information:

Sociological Images: This Month in SocImages (September 2014)

SocImages news:

On the heels of the release of Philip Cohen’s new textbook — The Family — comes one from yours truly and the extraordinary sociologist, Myra Marx Ferree: Gender: Ideas, Interactions, Institutions.

Thanks to Leland Bobbe, the photographer; Crystal Demure, the model; and the talent at W.W. Norton for the truly stunning cover!

Gender, by Wade and Ferree

Here are some ways to order, sample, and follow the book:

At SocImages this month…

You like!  Here are our most appreciated posts in September:

Thanks everybody!

Editor’s pick:

New Pinterest board!

  • Sexy what!? A collection of advertisements for products that shouldn’t be sexy, ever, but are. So help us God.

Social Media ‘n’ Stuff:

This is your monthly reminder that SocImages is on Twitter, Facebook, Tumblr, Google+, and Pinterest.  I’m on Facebook and most of the team is on Twitter: @lisawade, @gwensharpnv, @familyunequal, and @jaylivingston.

In other news…

Founder Gwen Sharp is the second SocImages sociologist featured at Cute Overload! Three kittens less than a week old were found nestled under a tire in the parking lot at Nevada State College. She’s been raising them ever since and documenting their progress at the NSC kittens tumblr.


Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet Linux Australia: Michael Still: Blueprints implemented in Nova during Juno

As we get closer to releasing the RC1 of Nova for Juno, I've started collecting a list of all the blueprints we implemented in Juno. This was mostly done because it helps me write the release notes, but I am posting it here because I am sure that others will find it handy too.


  • Reserve 10 sql schema version numbers for back ports of Juno migrations to Icehouse. launchpad specification

Ongoing behind the scenes work

Object conversion

  • Support sub-classing objects. launchpad specification
  • Stop using the scheduler run_instance method. Previously the scheduler would select a host, and then boot the instance. Instead, let the scheduler select hosts, but then return those so the caller boots the instance. This will make it easier to move the scheduler to being a generic service instead of being internal to nova. launchpad specification
  • Refactor the nova scheduler into being a library. This will make splitting the scheduler out into its own service later easier. launchpad specification
  • Move nova to using the v2 cinder API. launchpad specification
  • Move prep_resize to conductor in preparation for splitting out the scheduler. launchpad specification

  • Use JSON schema to strongly validate v3 API request bodies. Please note this work will later be released as v2.1 of the Nova API. launchpad specification
  • Provide a standard format for the output of the VM diagnostics call. This work will be exposed by a later version of the v2.1 API. launchpad specification
  • Move to the OpenStack standard name for the request id header, in a backward compatible manner. launchpad specification
  • Implement the v2.1 API on the V3 API code base. This work is not yet complete. launchpad specification

  • Refactor the internal nova API to make the nova-network and neutron implementations more consistent. launchpad specification

General features

Instance features


  • Extensible Resource Tracking. The set of resources tracked by nova is hard coded; this change makes that extensible, which will allow plug-ins to track new types of resources for scheduling. launchpad specification
  • Allow a host to be evacuated, but with the scheduler selecting destination hosts for the instances moved. launchpad specification
  • Add support for host aggregates to scheduler filters. launchpad specification

  • i18n enablement for Nova: turn on the lazy translation support from Oslo i18n and update Nova to adhere to the restrictions this adds to translatable strings. launchpad specification
  • Offload periodic task sql query load to a slave sql server if one is configured. launchpad specification
  • Only update the status of a host in the sql database when the status changes, instead of every 60 seconds. launchpad specification
  • Include status information in API listings of hypervisor hosts. launchpad specification
  • Allow API callers to specify more than one status to filter by when listing services. launchpad specification
  • Add quota values to constrain the number and size of server groups a user can create. launchpad specification

Hypervisor driver specific




  • Move the vmware driver to using the oslo vmware helper library. launchpad specification
  • Add support for network interface hot plugging to vmware. launchpad specification
  • Refactor the vmware driver's spawn functionality to be more maintainable. This work was internal, but is mentioned here because it significantly improves the supportability of the VMWare driver. launchpad specification

Tags for this post: openstack juno blueprints implemented


Planet Debian: Raphaël Hertzog: My Debian LTS report for September

Thanks to the sponsorship of multiple companies, I have been paid to work 11 hours on Debian LTS this month.

CVE triaging

I started by doing lots of triage in the security tracker (if you want to help, instructions are here) because I noticed that the dla-needed.txt list (which contains the list of packages that must be taken care of via an LTS security update) was missing quite a few packages that had open vulnerabilities in oldstable.

In the end, I pushed 23 commits to the security tracker. I won’t list the details each time but for once, it’s interesting to let you know the kind of things that this work entailed:

  • I reviewed the patches for CVE-2014-0231, CVE-2014-0226, CVE-2014-0118, CVE-2013-5704 and confirmed that they all affected the version of apache2 that we have in Squeeze. I thus added apache2 to dla-needed.txt.
  • I reviewed CVE-2014-6610 concerning asterisk and marked the version in Squeeze as not affected since the file with the vulnerability doesn’t exist in that version (this entails some checking that the specific feature is not implemented in some other file due to file reorganization or similar internal changes).
  • I reviewed CVE-2014-3596 and corrected the entry that said it was fixed in unstable. I confirmed that the version in squeeze was affected and added it to dla-needed.txt.
  • Same story for CVE-2012-6153 affecting commons-httpclient.
  • I reviewed CVE-2012-5351 and added a link to the upstream ticket.
  • I reviewed CVE-2014-4946 and CVE-2014-4945 for php-horde-imp/horde3, added links to upstream patches and marked the version in squeeze as unaffected since those concern javascript files that are not in the version in squeeze.
  • I reviewed CVE-2012-3155 affecting glassfish and was really annoyed by the lack of detailed information. I thus started a discussion on debian-lts to see whether this package should not be marked as unsupported security wise. It looks like we’re going to mark a single binary package as unsupported… the one containing the application server with the vulnerabilities; the rest is still needed to build multiple java packages.
  • I reviewed many CVE on dbus, drupal6, eglibc, kde4libs, libplack-perl, mysql-5.1, ppp, squid and fckeditor and added those packages to dla-needed.txt.
  • I reviewed CVE-2011-5244 and CVE-2011-0433 concerning evince and came to the conclusion that those had already been fixed in the upload 2.30.3-2+squeeze1. I marked them as fixed.
  • I dropped graphicsmagick from dla-needed.txt because the only CVE affecting it had been marked as no-dsa (meaning that we don’t estimate that a security update is needed, usually because the problem is minor and/or because fixing it has more chances to introduce a regression than to help).
  • I filed a few bugs when those were missing: #762789 on ppp, #762444 on axis.
  • I marked a bunch of CVE concerning qemu-kvm and xen as end-of-life in Squeeze since those packages are not currently supported in Debian LTS.
  • I reviewed CVE-2012-3541 and since the whole report is not very clear I mailed the upstream author. This discussion led me to mark the bug as no-dsa as the impact seems to be limited to some information disclosure. I invited the upstream author to continue the discussion on RedHat’s bugzilla entry.

And when I say “I reviewed” it’s a simplification for this kind of process:

  • Look up for a clear explanation of the security issue, for a list of vulnerable versions, and for patches for the versions we have in Debian in the following places:
    • The Debian security tracker CVE page.
    • The associated Debian bug tracker entry (if any).
    • The description of the CVE on and the pages linked from there.
    • RedHat’s bugzilla entry for the CVE (which often implies downloading source RPM from CentOS to extract the patch they used).
    • The upstream git repository and sometimes the dedicated security pages on the upstream website.
  • When that was not enough to be conclusive for the version we have in Debian (and unfortunately, it’s often the case), download the Debian source package and look at the source code to verify if the problematic code (assuming that we can identify it based on the patch we have for newer versions) is also present in the old version that we are shipping.

CVE triaging is often almost half the work in the general process: once you know that you are affected and that you have a patch, the process to release an update is relatively straightforward (sometimes there’s still work to do to backport the patch).

Once I was over that first pass of triaging, I had already spent more than the 11 hours paid but I still took care of preparing the security update for python-django. Thorsten Alteholz had started the work but got stuck in the process of backporting the patches. Since I’m co-maintainer of the package, I took over and finished the work to release it as DLA-65-1.


Planet Debian: Mario Lang: A simple C++11 concurrent workqueue

For a little toy project of mine (a wikipedia XML dump word counter) I wrote a little C++11 helper class to distribute work to all available CPU cores. It took me many years to overcome my fear of threading: In the past, whenever I toyed with threaded code, I ended up having a lot of deadlocks, and generally being confused. It appears that I finally have understood enough of this craziness to be able to come up with the small helper class below.

It makes use of C++11 threading primitives, lambda functions and move semantics. The idea is simple: You provide a function at construction time which defines how to process one item of work. To pass work to the queue, simply call the function operator of the object, repeatedly. When the destructor is called (once the object reaches the end of its scope), all remaining work is processed and all background threads are joined.

The number of threads defaults to the value of std::thread::hardware_concurrency(). This appears to work at least since GCC 4.9. Earlier tests have shown that std::thread::hardware_concurrency() always returned 1. I don't know when exactly GCC (or libstdc++, actually) started to support this, but at least since GCC 4.9, it is usable. Prerequisite on Linux is a mounted /proc.
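As an aside (this helper is not from the original post, just a sketch of the usual defensive fallback): the standard allows hardware_concurrency() to return 0 when the value cannot be determined, so code relying on it typically guards against that:

```cpp
#include <thread>

// Pick a worker count, falling back to 1 when the runtime cannot
// tell us: std::thread::hardware_concurrency() is only a hint and
// may return 0 if the value is not computable (e.g. /proc missing).
inline unsigned int worker_count() {
  unsigned int n = std::thread::hardware_concurrency();
  return n ? n : 1;
}
```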

The number of maximum items per thread in the queue defaults to 1. If the queue is full, calls to the function operator will block.

So the most basic usage example is probably something like:

int main() {
  typedef std::string item_type;
  distributor<item_type> process([](item_type &item) {
    // do work
  });

  while (/* input */) process(std::move(/* item */));

  return 0;
}
That is about as simple as it can get, IMHO.

The code can be found in the GitHub project mentioned above. However, since the class is relatively short, here it is.

#include <algorithm>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <stdexcept>
#include <thread>
#include <vector>

template <typename Type, typename Queue = std::queue<Type>>
class distributor: Queue, std::mutex, std::condition_variable {
  typename Queue::size_type capacity;
  bool done = false;
  std::vector<std::thread> threads;

public:
  template <typename Function>
  distributor( Function function
             , unsigned int concurrency = std::thread::hardware_concurrency()
             , typename Queue::size_type max_items_per_thread = 1
             )
  : capacity{concurrency * max_items_per_thread}
  {
    if (not concurrency)
      throw std::invalid_argument("Concurrency must be positive and non-zero");
    if (not max_items_per_thread)
      throw std::invalid_argument("Max items per thread must be positive and non-zero");

    for (unsigned int count {0}; count < concurrency; count += 1)
      threads.emplace_back(static_cast<void (distributor::*)(Function)>
                           (&distributor::consume), this, function);
  }

  distributor(distributor &&) = default;
  distributor(distributor const &) = delete;
  distributor& operator=(distributor const &) = delete;

  ~distributor() {
    {
      std::lock_guard<std::mutex> guard(*this);
      done = true;
      notify_all();
    }
    std::for_each(threads.begin(), threads.end(),
                  std::mem_fn(&std::thread::join));
  }

  void operator()(Type &&value) {
    std::unique_lock<std::mutex> lock(*this);
    while (Queue::size() == capacity) wait(lock);
    Queue::push(std::forward<Type>(value));
    notify_all();
  }

private:
  template <typename Function>
  void consume(Function process) {
    std::unique_lock<std::mutex> lock(*this);
    while (true) {
      if (not Queue::empty()) {
        Type item { std::move(Queue::front()) };
        Queue::pop();
        notify_all();
        lock.unlock();
        process(item);
        lock.lock();
      } else if (done) {
        break;
      } else {
        wait(lock);
      }
    }
  }
};
If you have any comments regarding the implementation, please drop me a mail.

Worse Than Failure: CodeSOD: Stringify All the Things!

When Justin submitted this C# code, he knew what line to include in the subject line of the email to get our attention:

if (String.Empty == null) GC.KeepAlive(string.Empty);

String is the string datatype. string is just an alias to that type, used to make the code look more like C. Empty is a constant that represents an empty string. And GC represents the garbage collector- if the Empty string constant has somehow become null, prevent that constant which is currently not pointing at anything from being garbage collected.

That’s the cleanest, most easy to understand line of code in this entire module.

Justin’s co-worker wanted to convert built in types to Strings, and back, and he wanted to do this in the most enterprise-y way possible. So he created:

public static class StructT
{
    /// <summary>Convert the given value type fo a string</summary>
    /// <typeparam name="TYPE">The type of the value to convert</typeparam>
    /// <param name="value">The value to convert</param>
    /// <returns>The string representation of the value, or null if HasValue is false</returns>
    /// <remarks>Only the types DateTime, Int32, Int64, Decimal, Single, Double, Char, Byte, Guid, Enum, Boolean are supported</remarks>
    public static string ToString<TYPE>(TYPE? value) where TYPE : struct
    {
        return StructT<TYPE>._ToString(value);
    }

    /// <summary>Convert the given value type fo a string</summary>
    /// <typeparam name="TYPE">The type of the value to convert</typeparam>
    /// <param name="value">The value to convert</param>
    /// <returns>The string representation of the value</returns>
    /// <remarks>Only the types DateTime, Int32, Int64, Decimal, Single, Double, Char, Byte, Guid, Enum, Boolean are supported</remarks>
    public static string ToString<TYPE>(TYPE value) where TYPE : struct
    {
        return StructT<TYPE>._ToString(value);
    }
}

This static class depends on a generic class with the same name. The underlying class has one of those delightfully clear constructors:

static StructT()
{
    _formatProvider = System.Globalization.CultureInfo.InvariantCulture;
    _ourType = typeof(TYPE);
    if (String.Empty == null) GC.KeepAlive(string.Empty);
    else if (_ourType == typeof(DateTime)) { _TryParse = MkDeP<DateTime>(ParseDate);   _ToString = MkDeS<DateTime>(StringizeDate); }
    else if (_ourType == typeof(Int32))    { _TryParse = MkDeP<Int32>(ParseInt32);     _ToString = MkDeS<Int32>(StringizeInteger); }
    else if (_ourType == typeof(Int64))    { _TryParse = MkDeP<Int64>(ParseInt64);     _ToString = MkDeS<Int64>(StringizeInteger); }
    else if (_ourType == typeof(Decimal))  { _TryParse = MkDeP<Decimal>(ParseDecimal); _ToString = MkDeS<Decimal>(StringizeFloat); }
    else if (_ourType == typeof(Single))   { _TryParse = MkDeP<Single>(ParseSingle);   _ToString = MkDeS<Single>(StringizeFloat); }
    else if (_ourType == typeof(Double))   { _TryParse = MkDeP<Double>(ParseDouble);   _ToString = MkDeS<Double>(StringizeFloat); }
    else if (_ourType == typeof(Char))     { _TryParse = MkDeP<Char>(ParseChar);       _ToString = StringizeOther; }
    else if (_ourType == typeof(Byte))     { _TryParse = MkDeP<Byte>(ParseByte);       _ToString = MkDeS<Byte>(StringizeInteger); }
    else if (_ourType == typeof(Guid))     { _TryParse = MkDeP<Guid>(ParseGuid);       _ToString = StringizeOther; }
    else if (_ourType == typeof(Boolean))  { _TryParse = MkDeP<Boolean>(ParseBool);    _ToString = MkDeS<Boolean>(StringizeBoolean); }
    else if (_ourType.IsEnum)              { _TryParse = ParseEnum;                    _ToString = StringizeOther; }
    else throw new InvalidOperationException();
}

And MkDeP and MkDeS?

private static Converter<string, TYPE?> MkDeP<UUUU>(System.Converter<string, UUUU?> loww) where UUUU : struct { return (System.Converter<string, TYPE?>)(Delegate)loww; }
private static Converter<TYPE?, string> MkDeS<UUUU>(System.Converter<UUUU?, string> loww) where UUUU : struct { return (System.Converter<TYPE?, string>)(Delegate)loww; }

I just like seeing datatypes named UUUU.

In addition to these methods, there’s a set of methods to Stringize and Parse each primitive data-type. For example, StringizeDate:

private static string StringizeDate(DateTime? value)
{
    if (!value.HasValue)
        return null;

    var s = String.Format(_formatProvider, "{0:yyyy}-{0:MM}-{0:dd}T{0:HH}:{0:mm}:{0:ss}", value.Value);
    if (value.Value.Kind == DateTimeKind.Utc) s += "Z";
    return s;
}
That is actually sane and makes good use of the String format library. Contrast that to ParseDate, which is what happens when someone wants to reinvent the wheel:

private static DateTime? ParseDate(string raw)
{
    if (String.IsNullOrEmpty(raw)) return null;
    bool utcInd = false, lenGood;
    if (raw.Length == 10) raw += "T00:00:00";
    if (raw.Length == 20 && (raw[19] == 'Z' || raw[19] == 'z')) { utcInd = true; lenGood = true; } else lenGood = raw.Length == 19;
    if (!lenGood) throw new FormatException();
    int year, month, day, hour, minute, second;
    try
    {
        year = Int32.Parse(raw.Substring(0, 4), _formatProvider);
        month = Int32.Parse(raw.Substring(5, 2), _formatProvider);
        day = Int32.Parse(raw.Substring(8, 2), _formatProvider);
        hour = Int32.Parse(raw.Substring(11, 2), _formatProvider);
        minute = Int32.Parse(raw.Substring(14, 2), _formatProvider);
        second = Int32.Parse(raw.Substring(17, 2), _formatProvider);
    }
    catch { throw new FormatException(); }
    return new DateTime(year, month, day, hour, minute, second, utcInd ? DateTimeKind.Utc : DateTimeKind.Unspecified);
}

The .NET DateTime type has built-in Parse and ParseExact methods; ParseExact can parse a date given exactly this kind of format string.

This is ugly, and mostly useless code. .NET already has methods that make it easy to turn data into strings and back. So what’s the real WTF? The real WTF is this: the original developer was allergic to letting native data-types cross module boundaries. Every function accepted all of its parameters as strings. Every function returned strings. Every function depended on this class to interact with values that could easily have been passed in natively.


Kelvin Thomson: Gas Reservation Policy

On September 19 I wrote in support of a gas reservation policy and the Australian Workers Union’s campaign to establish one.

Now a study by BIS Shrapnel warns that one in five manufacturers could shut down over the next five years because of spiralling gas prices. This confirms my view that we should establish a gas reserve in Australia in order to reduce costs for manufacturing and consumers. I welcome the support of National Party MP Andrew Broad for a gas reservation policy.

The AWU 'Reserve Our Gas' campaign comes amid reports that Australian gas prices will triple from July next year as LNG exports ramp up. Our resources should not be a zero-sum game where domestic needs are sacrificed on the altar of free-market ideology.

It is economic madness that we have a situation in which our abundant gas reserves are hurting Australian jobs and households instead of helping them, where we are giving up a national competitive advantage so gas exporters can make more profits while trashing our manufacturing sector.

I agree with AWU national secretary Scott McDine that Australia is out of step with other major nations, such as the United States, that reserve a percentage of gas for domestic use. Mr McDine is right when he says:
"Australians have a right to know their rapidly rising gas bills are actually completely preventable. We just need to do what every other gas-exporting nation does and bring in laws to look after the local population."

Australia should embrace a more practical economic policy where we have both a prosperous gas export industry and a competitive advantage for our local industry.

Planet DebianFrancois Marier: Encrypted mailing list on Debian and Ubuntu

Running an encrypted mailing list is surprisingly tricky. One of the first challenges is that you need to decide what the threat model is. Are you worried about someone compromising the list server? One of the subscribers stealing the list of subscriber email addresses? You can't just "turn on encryption", you have to think about what you're trying to defend against.

I decided to use schleuder. Here's how I set it up.


What I decided to create was a mailing list where people could subscribe and receive emails encrypted to them from the list itself. In order to post, they need to send an email encrypted to the list's public key and signed with the private key of a subscriber.

The list then decrypts the email and re-encrypts it individually for each subscriber. This protects the emails while in transit, but it is vulnerable to a compromise of the list server itself, since every list email passes through it in plain text at some point.
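As a rough illustration of that fan-out (my sketch, not schleuder's actual code), the snippet below models the decrypt-once, re-encrypt-per-subscriber flow. The `toy_encrypt` and `toy_decrypt` functions are placeholder names that stand in for real OpenPGP operations; they use a trivial base64 encoding and are NOT real cryptography:

```shell
# Toy stand-ins for "gpg --encrypt" / "gpg --decrypt" (NOT real crypto).
toy_encrypt() { printf '%s:%s' "$1" "$2" | base64; }       # args: key, message
toy_decrypt() { echo "$1" | base64 -d | sed "s/^$2://"; }  # args: ciphertext, key

list_key="listkey"
incoming=$(toy_encrypt "$list_key" "hello subscribers")

# The list decrypts the incoming mail once with its own key...
plaintext=$(toy_decrypt "$incoming" "$list_key")

# ...then encrypts it separately for each subscriber's key.
for subscriber_key in alice-key bob-key; do
  toy_encrypt "$subscriber_key" "$plaintext"
done
```

The plaintext necessarily exists on the server between the two steps, which is exactly the exposure described above.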

Installing the schleuder package

The first thing to know about installing schleuder on Debian or Ubuntu is that at the moment it unfortunately depends on ruby 1.8. This means that you can only install it on Debian wheezy or Ubuntu precise: trusty and jessie won't work (until schleuder is ported to a more recent version of ruby).

If you're running wheezy, you're fine, but if you're running precise, I recommend adding my ppa to your /etc/apt/sources.list to get a version of schleuder that actually lets you create a new list without throwing an error.

Then, simply install this package:

apt-get install schleuder

Postfix configuration

The next step is to configure your mail server (I use postfix) to handle the schleuder lists.

This may be obvious, but if you're like me and you're repurposing a server which hasn't had to accept incoming emails, make sure that postfix is set to the following in /etc/postfix/

inet_interfaces = all

Then follow the instructions from /usr/share/doc/schleuder/README.Debian and finally add the following line (thanks to the wiki instructions) to /etc/postfix/

local_recipient_maps = proxy:unix:passwd.byname $alias_maps $transport_maps

Creating a new list

Once everything is set up, creating a new list is pretty easy. Simply run schleuder-newlist and follow the instructions.

After creating your list, remember to update /etc/postfix/transports and run postmap /etc/postfix/transports.

Then you can test it by sending an email to the list. You should receive the list's public key.

Adding list members

Once your list is created, the list admin is the only subscriber. To add more people, you can send an admin email to the list or follow these instructions to do it manually:

  1. Get the person's GPG key: gpg --recv-key KEYID
  2. Verify that the key is trusted: gpg --fingerprint KEYID
  3. Add the person to the list's /var/lib/schleuder/HOSTNAME/LISTNAME/members.conf:
    - email:
      key_fingerprint: 8C470B2A0B31568E110D432516281F2E007C98D1
  4. Export the public key: gpg --export -a KEYID
  5. Paste the exported key into the list's keyring: sudo -u schleuder gpg --homedir /var/lib/schleuder/HOSTNAME/LISTNAME/ --import

Krebs on SecurityApple Releases Patches for Shellshock Bug

Apple has released updates to insulate Mac OS X systems from the dangerous “Shellshock” bug, a pervasive vulnerability that is already being exploited in active attacks.

Patches are available via Software Update, or from the following links for OS X Mavericks, Mountain Lion, and Lion.

After installing the updates, Mac users can check to see whether the flaw has been truly fixed by taking the following steps:

* Open Terminal, which you can find in the Applications folder (under the Utilities subfolder on Mavericks) or via Spotlight search.

* Execute this command:
bash --version

* The version after applying this update will be:

OS X Mavericks:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin13)
OS X Mountain Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin12)
OS X Lion:  GNU bash, version 3.2.53(1)-release (x86_64-apple-darwin11)
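Checking the version string is indirect. A widely circulated one-liner (the standard community test for CVE-2014-6271, not part of Krebs's article) exercises the bug itself:

```shell
# Pass a crafted function definition through the environment. A vulnerable
# bash executes the command appended after the function body and prints
# "vulnerable"; a patched bash ignores it.
output=$(env x='() { :;}; echo vulnerable' bash -c 'echo this is a test' 2>/dev/null)
echo "$output"
```

On a patched system the output is just `this is a test`; if `vulnerable` appears as well, bash still needs updating.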

Planet Linux AustraliaMichael Still: My candidacy for Kilo Compute PTL

This is mostly historical at this point, but I forgot to post it here when I emailed it a week or so ago. So, for future reference:

I'd like another term as Compute PTL, if you'll have me.

We live in interesting times. openstack has clearly gained a large
amount of mind share in the open cloud marketplace, with Nova being a
very commonly deployed component. Yet, we don't have a fantastic
container solution, which is our biggest feature gap at this point.
Worse -- we have a code base with a huge number of bugs filed against
it, an unreliable gate because of subtle bugs in our code and
interactions with other openstack code, and have a continued need to
add features to stay relevant. These are hard problems to solve.

Interestingly, I think the solution to these problems calls for a
social approach, much like I argued for in my Juno PTL candidacy
email. The problems we face aren't purely technical -- we need to work
out how to pay down our technical debt without blocking all new
features. We also need to ask for understanding and patience from
those feature authors as we try and improve the foundation they are
building on.

The specifications process we used in Juno helped with these problems,
but one of the things we've learned from the experiment is that we
don't require specifications for all changes. Let's take an approach
where trivial changes (no API changes, only one review to implement)
don't require a specification. There will of course sometimes be
variations on that rule if we discover something, but it means that
many micro-features will be unblocked.

In terms of technical debt, I don't personally believe that pulling
all hypervisor drivers out of Nova fixes the problems we face, it just
moves the technical debt to a different repository. However, we
clearly need to discuss the way forward at the summit, and come up
with some sort of plan. If we do something like this, then I am not
sure that the hypervisor driver interface is the right place to do
that work -- I'd rather see something closer to the hypervisor itself
so that the Nova business logic stays with Nova.

Kilo is also the release where we need to get the v2.1 API work done
now that we finally have a shared vision for how to progress. It took
us a long time to get to a good shared vision there, so we need to
ensure that we see that work through to the end.

We live in interesting times, but they're exciting as well.

I have since been elected unopposed, so thanks for that!

Tags for this post: openstack kilo compute ptl
Related posts: Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers


Planet DebianDirk Eddelbuettel: Rcpp 0.11.3

A new release 0.11.3 of Rcpp is now on the CRAN network for GNU R, and an updated Debian package has been uploaded too.

Rcpp has become the most popular way of enhancing GNU R with C++ code. As of today, 273 packages on CRAN depend on Rcpp for making analyses go faster and further.

This release brings a fairly large number of continued enhancements, fixes and polishing to Rcpp. These were provided by a total of seven different contributors---which is a new record as well.

See below for a detailed list of changes extracted from the NEWS file, but some highlights included in this release are

  • Several API cleanups, polishes and a pre-announced code removal
  • New InternalFunction interface, and new Timer functionality.
  • More robust functionality of Rcpp Attributes as well as a new dryRun option.
  • The Rcpp FAQ was updated, as was the main Description: in the DESCRIPTION file.
  • Rcpp.package.skeleton() can now deploy functionality from pkgKitten to create Rcpp packages that purr.

One sore point, however, is that we missed that packages using Rcpp Modules appear to require a rebuild. We are sorry for the inconvenience; this has highlighted a shortcoming in our fairly robust and extensive tests. While we test our packages against all known CRAN dependents, such tests check for the ability to compile and run freshly and not whether previously built packages still run. We intend to augment our testing in this direction to avoid a repeat occurrence of such a misfeature.

Changes in Rcpp version 0.11.3 (2014-09-27)

  • Changes in Rcpp API:

    • The deprecation of RCPP_FUNCTION_* which was announced with release 0.10.5 last year is proceeding as planned, and the file macros/preprocessor_generated.h has been removed.

    • Timer no longer records time between steps, but times from the origin. It also gains a get_timers(int) method that creates a vector of Timer objects that have the same origin. This is modelled on the Rcpp11 implementation and is more useful for situations where we use timers in several threads. Timer also gains a constructor taking a nanotime_t to use as its origin, and an origin method. This can be useful for situations where the number of threads is not known in advance but we still want to track what goes on in each thread.

    • A cast to bool was removed in the vector proxy code as inconsistent behaviour between clang and g++ compilations was noticed.

    • A missing update(SEXP) method was added thanks to pull request by Omar Andres Zapata Mesa.

    • A proxy for DimNames was added.

    • A no_init option was added for Matrices and Vectors.

    • The InternalFunction class was updated to work with std::function (provided a suitable C++11 compiler is available) via a pull request by Christian Authmann.

    • A new_env() function was added to Environment.h.

    • The return value of range eraser for Vectors was fixed in a pull request by Yixuan Qiu.

  • Changes in Rcpp Sugar:

    • In ifelse(), the returned NA type was corrected for operator[].

  • Changes in Rcpp Attributes:

    • Include LinkingTo in DESCRIPTION fields scanned to confirm that C++ dependencies are referenced by package.

    • Add dryRun parameter to sourceCpp.

    • Corrected issue with relative path and R chunk use for sourceCpp.

  • Changes in Rcpp Documentation:

    • The Rcpp-FAQ vignette was updated with respect to OS X issues.

    • A new entry in the Rcpp-FAQ clarifies the use of licenses.

    • Vignettes build results no longer copied to /tmp to please CRAN.

    • The Description in DESCRIPTION has been shortened.

  • Changes in Rcpp support functions:

    • The Rcpp.package.skeleton() function will now use pkgKitten package, if available, to create a package which passes R CMD check without warnings. A new Suggests: has been added for pkgKitten.

    • The modules=TRUE case for Rcpp.package.skeleton() has been improved and now runs without complaints from R CMD check as well.

  • Changes in Rcpp unit test functions:

    • Functions from the RUnit package are now prefixed with RUnit::

    • The testRcppModule and testRcppClass sample packages now pass R CMD check --as-cran cleanly without NOTES or WARNINGS.

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page, which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc. should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Kelvin ThomsonVictorian Liberal Government Should Be Ashamed to Sign East-West Link Contracts

The Victorian Liberal Government should be ashamed that it last night signed the contracts for the East-West Link Road Tunnel, the Royal Park Freeway. In signing the contracts, the Victorian Liberals have ignored the calls of many in our community who oppose this white elephant of a project, which will cause severe economic, social and environmental damage.

I urge the contractors to take note, while the ink is still wet, that the contract is not worth the paper it is written on. Victorian Labor has made it clear that it wants this election to be about better public transport, and should it win office in November, it will support the Moreland and Yarra Councils’ legal challenge against the project. Signing the contracts now is a reckless and clear act of arrogance on the part of the Victorian Liberals, who should hang their heads in shame for promising not to build the tunnel before an election and doing the opposite after.

The Victorian Liberal Government should have the integrity to take the East-West Link proposal to the Victorian people at the November election. This election should decide whether the people of Victoria want an $8 billion East-West Road Tunnel built through Royal Park. It will not solve Melbourne’s traffic congestion issues, it will increase carbon emissions, it will increase motor vehicle dependency, it will damage the beautiful open space fauna and flora at Royal Park and along the Moonee Ponds Creek, it will see residential homes compulsorily acquired; and it will consume much of the Victorian State Government transport budget for many years to come, and prejudice the ability of our outer urban communities to meet their legitimate transport needs.

As I told the Parliament last week, the Australian and Victorian Liberal Governments should undertake a publicly transparent full economic, social and environmental analysis before committing billions of taxpayers’ dollars. However, the known evidence demonstrates that economically the project does not stack up: it will reach capacity within a matter of years of being finalised; it is based on misleading assumptions regarding fuel prices, car prices and parking prices; it has a negative cost-benefit ratio for taxpayers; and it does not have the support of the Moreland or Yarra communities.
This issue should be turned over to the Victorian people to decide in November. The urgency behind getting the contracts signed represents contempt for the democratic process. It is a clear sign the Victorian Liberals know that the Victorian people have wised up to this white elephant proposal, and will not support it at the ballot box. Let’s invest precious taxpayer dollars in projects Victorians actually want and need.


TED7 reasons to get TED Live


Just in time for TEDGlobal next month, we’ve rebuilt our TED Live program, allowing you to watch the entire conference—or a single day of it—on a high-definition live stream anywhere in the world, anytime. 7 reasons to try it:

  1. You’ll gain insight into what’s going on in the world. The TEDGlobal 2014 program is designed to look at the forces underlying recent news stories — and at the larger phenomena shaping our global landscape. As you watch TED Live, you’ll see these big themes emerge, session by session. And you might even see a little breaking news.
  2. It’s cost-efficient. Watching all four days of the TEDGlobal conference on TED Live costs $500, and we’ve also added a day pass option, for $200. TED Live is a great way to support TED’s nonprofit mission of spreading ideas — and get a concentrated dose of inspiration in return.
  3. You’ll see every talk first. Talks from TEDGlobal will be released free to the world over the upcoming year. Just like always, we release one TED Talk per day, every weekday. TED Live allows you to see all the talks right as they happen.
  4. You can watch TEDGlobal whenever you like. With TED Live, you can choose to watch the sessions as they happen, or you can time-shift into your own time zone (or to a more convenient day altogether), using our online archive feature. It’s the TED Conference on your time.
  5. You can make it a social occasion. TED Conference sessions spark great conversations. Invite some friends or family members to watch a session with you over brunch — or host a dinner party you’ll never forget. Your TED Live membership allows you to display the feed to up to 10 friends. (If you’d like to throw a larger viewing party, write us.)
  6. Your feedback matters. With TED Live, you’ll be able to rate each speaker during the conference and let us know which ideas resonate the most with you. Your ratings will help us decide which talks end up on our homepage first as free TED Talks.
  7. It makes a cool gift. Certain people in your life have no need for more stuff. Why not give them a TED Live subscription? If you’re interested in doing this, email us with the names of recipients and their contact info, and we’ll contact you to complete your transaction. From there, you’ll just need to figure out how to put a bow on it.

Bonus. As a TED Live member you’ll join community groups, schools and universities throughout Rio de Janeiro who are watching the event for free. TEDGlobal Para Todos will bring a free conference livestream to groups all over the city, in return for Rio’s kind hospitality to TED. If you’re a community organization or nonprofit based in Rio and would like to share a free livestream with your community, get in touch.

Register for TED Live »

TEDHow a TEDx event spun up in Bhilwara

Harsh Agarwal is the organizer of TEDxBhilwara, which he hopes proves that his city is more than a textile hub. Photo: Daksh Baheti

Bhilwara is a city in Rajasthan, India, famous for its textile industry. Textiles dominate the local economy; after agriculture, it’s the leading employer in the city. The city is populated with spinning looms, fabric processing plants and weaving centers, producing huge swaths of yarn and thousands of tons of suiting fabric.

But seeing Bhilwara merely as a textiles hub prevents a real understanding of the city, says Harsh Agarwal, the 19-year-old who organized the first TEDx event here in July.

“Many do not know Bhilwara for its other special things,” Agarwal says, “like art and culture.” Too often, he says, his hometown is seen as stagnant and set in its ways, not a place where new ideas thrive. That’s why Agarwal decided to put together a TEDx event in Bhilwara centered on under-the-radar ideas, individuals and technology making waves in his hometown.

Agarwal learned about TED when his high-school teacher showed a few talks in class. Inspired by the breadth of ideas in the talks, Agarwal soon conspired to host a TEDx event at his high school with a friend. School and other activities won out, however, and his plans ended up on the back burner.

Flash forward to 2012, when a 17-year-old Agarwal received an invitation to attend TEDxTheCalorxSchool in Ahmedabad, a town near his own. He was touched by the experience. “Getting inspired from attending the TEDx event live, I felt honored and very special to be one among 100 attendees,” he said. “I then desperately wanted to know what goes on behind such a powerful event.”

After high school, Agarwal moved to Mumbai to attend university — and learn the ropes of TEDx event planning. As a student, he applied to work with local event TEDxGateway; was accepted; and began to assist in speaker curation. “I learned a lot from [TEDxGateway organizer] Yashraj Akashi,” Agarwal said, “and he inspired me to pull off a TEDx event in my hometown for the community members who have no idea about what TED is.”

The smiling audience at TEDxBhilwara. Photo: Courtesy of TEDxBhilwara

Working with TEDxGateway gave Agarwal the confidence and practice he needed to take on a TEDx event in Bhilwara. To drum up interest in his hometown, Agarwal hosted two TEDx Salon events — events centered around watching and discussing recorded TED Talks. This led to a burst of excitement for his first live event.

“[I knew] the first event would not [necessarily] bring a big change but a small one … but after the event, I have observed that people started talking about TED, TEDx and its importance in the community, which they never did before,” Agarwal said. “Many of them started looking at collaboration in a different way and started reaching out to local speakers for collaboration on projects.”

Another interesting side effect: People piped up with their own ideas on who should speak. “After the event, many media people gave us names of amazing people who could be chosen to speak for the next edition of TEDxBhilwara,” he says. “Attendees of the event left the place with a complete different thought process because they saw and heard things they have never before.”

Agarwal also noticed changes in his own outlook after the event. “As the organizer, my experience has been full of excitement. I got to learn a lot about myself. In the course of planning and preparation, I did things I never did in my life before,” he says. “I saw the city through a different angle after the event. I get to see so much potential and passion in the people. I get to see an air of change blowing in the city, positive change.”

Overall, Agarwal says, he’s impressed — and extremely motivated — by the excitement and acceptance toward new ideas he’s seen in his community. “It was a sheer pleasure seeing the people in this small town coming together and showing so much interest and support in a TEDx event and its values and vision,” he said. “There is so much talent in this small city, and I found nothing better than a TEDx event to spread ideas and knowledge at a global level.”

Agarwal will be attending TEDGlobal 2014 as a TEDxChange scholar, and is hoping to draw some inspiration for the next TEDxBhilwara while in Rio. “I never thought in my wildest dreams when I first watched a TED Talk that I would ever attend a conference live,” he said, but in just a few weeks, he’ll join TEDx organizers from around the globe for a week of big ideas from speakers surfacing the intelligence and insights of their own cities, both small and large.

This story originally ran on the TEDx Innovations Blog. More stories:

Attendees pose for the camera. Photo: Saurabh Bhatt

TEDInspiring words from TED@IBM: The best way to predict the future is to invent it

The program guide for TED@IBM, themed “Reimagine our world.” Photo: Marla Aufmuth/TED

By Laura McClure, Emily McManus and Kate Torgovnick May

Big data is already transforming our daily lives. And at TED@IBM, a TED Institute event, we got a glimpse of what’s next. Speakers revealed how data will change spaces from the kitchen to the emergency room, and how it will even help us react more quickly to the next Ebola-scale epidemic. Throughout the day, one message echoed: for the technologists creating this new future, this is both a tremendous opportunity and a big responsibility.

Below, some choice words from each of the TED@IBM talks:


“There is no business-to-business, there’s no business-to-consumer, there is only human-to-human. You don’t sell to a brand, you sell to a person. People identify with human experience, with human conversation.”

Bryan Kramer, CEO of Pure Matter, who spoke on the importance of sharing. He asked viewers to tweet their own idea using the hashtag #SharingInspires, and it trended on Twitter.


“Today in the ICU, medical teams are bombarded with data and noise. Nearly 150 million data points and over 1,300 alarms are generated per day by one patient … and 90% of those alarms are completely inactionable.”

Inhi Cho Suh, Vice President of Big Data Integration and Governance for IBM Software Group, who spoke on how data can lead to better healthcare


“Imagine a world where, no matter how far you live from an electricity grid or a pump station, you will have access to clean power, heating, refrigeration, drinkable water and even gasoline. Now, imagine that this technology can do it using everyday materials like concrete, reflective aluminum foil and air — plus the ultra-high-efficiency solar cells used in satellites.”

Gianluca Ambrosetti, Head of Research for Airlight Energy, which seeks to bring solar energy to remote parts of the planet using sunflower-shaped panels.


“An algorithm spotted the Ebola outbreak nine days before the World Health Organization recognized it. We could have put simple measures in effect to contain that disease early on. With the flood of big data and our capacity to make sense of it, we can now forecast the next humanitarian healthcare crisis.”

Monika Blaumueller, consultant at IBM, who spoke on how data can super-size humanitarian aid


“Why does tone matter? Tone reflects our intent, our frame of mind and our emotion. Tone is instinctive, but setting the right tone takes work … Tone can inspire and guide, or disrupt and hurt.”

Kareem Yusuf, of IBM’s Smarter Commerce initiative, who spoke about how businesses must think carefully about the tone they set with customers in the digital landscape


“A growing number of Americans work, play and live with people who think exactly like them — but ‘opportunity makers’ actively seek friendships and experiences with those who are different from them. Some of our most valuable allies are people who don’t act or think at all like us.”

Kare Anderson, columnist for Forbes, who spoke on the three traits that opportunity makers have in common


“We often hear that movies are dead or dying. I don’t believe the cinema is dying, but I do believe it’s wounded from a million paper cuts. There are two things that a cinema has that you never get at home. One, a screen the size of Cleveland. Two, the magic of the room watching it, where there sits a dream shared with a one-time-only mix of strangers, a gold mine of emotions waiting to collide in the dark. People come to movies ready to dream, and that’s a resource that’s too precious to squander.”

Brad Bird, Academy Award-winning director, who spoke on how filmmakers can keep capturing that magic

Kareem Yusuf shared the story of how he was able to bridge two very different working environments by striking the right tone. Photo: Marla Aufmuth/TED

“When my oldest daughter was three, she had to have tubes in her ears. The surgeon at the hospital told us they scheduled those operations on Tuesdays and Thursdays, on the same days of some of the most difficult adult procedures. Following those surgeries, the kids recover in a large and common recovery room – alongside the adults. Why? Because the adults recover faster when exposed to small children who are also recovering.”

Erick Brethenoux, of IBM’s Business Analytics Division, who spoke on what we can learn from emotional analytics


“With each generation, we don’t just pass along our DNA—we pass along our ideas. The innovations we uncover today become the building blocks for innovation for future generations.”

Lisa Seacat DeLuca, mobile software engineer for IBM’s Open Technology and Cloud Performance organization, who spoke on the “Internet of Everything”


“We’re bombarded with stories about how much data there is in the world. But much of our data doesn’t fit neatly into databases. It is highly subjective … Data doesn’t create meaning. We do. And we have to ask, ‘Did the data really show us this? Or does the conclusion make us feel more comfortable and successful?’”

Susan Etlinger, industry analyst with Altimeter Group, who spoke on the incredible responsibility that comes with interpreting data 


“Can computers help us create dishes that are healthy, taste good, and have never been seen before?”

Florian Pinel, Senior Software Engineer in the IBM Watson Group, who spoke about the potential of Chef Watson


“At some point, parents have to trust their teenagers. People who successfully parent teens into adulthood eventually tell them, ‘I know you can do this on your own.’ It’s time for us to learn how to do this at work as well … Leaders today need to trust that their employees have the information, experience and good judgment to make decisions that in the past would have been sent ‘up the ladder.’”

Charlene Li, CEO of the Altimeter Group, who spoke about the topic of her bestselling book, Open Leadership


“As more and more of our lives are captured in digital form, the insights that can be generated by big data analytics will become increasingly impactful. As a society we can choose to let these insights be used as a force for good or evil; to exploit individuals or enrich their lives. Our approach to privacy will have the single greatest impact on that outcome.”

Marie Wallace, Analytics Strategist for IBM and creator of the blog All Things Analytics, who spoke on the importance of privacy


“For the most part, we only study brains when there is something wrong with them … This technology allows us to study brains in conditions and contexts that are as diverse as those of everyday life.”

Tan Le, CEO of EMOTIV, who spoke about how her company’s headsets are advancing the understanding of the human brain

Tan Le wore a headset on stage that allowed attendees to look at her brain activity, live. Photo: Marla Aufmuth/TED

TEDHow do you animate cosmic rays? The story behind a TEDxCERN TED-Ed lesson


On September 24, TEDxCERN was hosted by physicist Brian Cox (watch his TED Talk: “CERN’s supercollider“), and the world was welcomed to watch for free. Below, an appetite-whetter that originally ran on the TEDx Innovations Blog.

Cosmic rays. Active galactic nuclei. Nucleosynthesis. For physicist Veronica Bindi, this is everyday vocabulary. A ten-year collaborator with AMS-02 — an experiment analyzing the data coming in from the Alpha Magnetic Spectrometer, a particle detector mounted on the International Space Station — Bindi deals with dark matter, solar activity, and the ins-and-outs of time of flight particle detectors with ease.

For someone without a double-digit career in particle physics, these topics can seem a bit intimidating. Bindi believes they shouldn’t be. Which is why when she was asked if she would contribute to a series of short physics-related lessons created by TED-Ed for TEDxCERN, she was both ecstatic and a bit daunted by the prospect. How would she make things like cosmic ray detection, collapsing stars, supernovae, black holes, and a years-long dissection of the building blocks of our universe come alive — in a video that clocks in at about four and a half minutes?

From Bindi's TED-Ed animation, "How cosmic rays help us understand the universe"

She was up for the challenge. She already knew about TED-Ed’s library of original animated lessons, having used other CERN-written lessons to get high school students excited about STEM. (Her favorite is “The beginning of the universe, for beginners” by CERN physicist Tom Whyntie.)

“Sometimes I’d dream about what I would do if I could have the opportunity to develop my own animation,” she says. “So you can imagine my surprise when I received an email from the head of TEDxCERN, Claudia Marcelloni, asking if I was interested in making a proposal for an animation.”

Bindi’s proposal? A primer on cosmic rays — those intriguing particles from outer space that help scientists understand space itself. But after the excitement came the questions: How do you transform a complex scientific concept into an easily-digestible lesson? How do you make astroparticle physics palpable … and palatable?

From Bindi's TED-Ed animation, "How cosmic rays help us understand the universe"

In tackling these questions, Bindi was not alone. She took on the script, while a team at TED-Ed — including lesson director and veteran animator Jeremiah Dickey — handled the animation. As a non-physicist, Dickey had his own challenges to face; mostly, translating the language of another field into that of his own. He had to transform science into art.

To figure out just how this happened, we spoke with both Bindi and Dickey via email. An edited version of the conversations follows:

Veronica, what was it like working with an animator?

VB: This is my first animation, but I’ve thought many times about doing this. My field of specialization is astroparticles, and more than general physics, it is not easily understandable to people not involved with it. That’s a pity. I believe that animations are a key pathway to draw people’s interest to a new topic. Animations reach where words fail; they allow people to easily understand concepts that would be so complex to understand otherwise. I really liked working with an animation team. I appreciated the opportunity to see the many, tiny details they take care of. And all the steps — and the many different people involved — that lead to the final product.

How involved were you in developing ideas for the animation?

VB: My task was the script, so I wrote the text. But the idea of making an animation was so fun that I ended up imagining and then proposing concepts. I really visualized it, frame after frame, in my mind.

When the TED-Ed team contacted me to show me the first draft of the actual animation, I was extremely excited, but also very scared at the same time. It was time to face the music. Of course, the animation was completely different from the one I had in mind: It was not so Star Wars, but it was much more fluid, more “universal,” and the message was really powerful and impactful. It was fascinating to see my story through the eyes of somebody else. It is amazing how everything changes when looking at it from different perspectives.

Was it difficult to turn a scientific concept into a short lesson?

VB: I’ve done my best. I can assure you it was not an easy task. You need to summarize so many concepts. “Where to start? What to say? When to conclude?” All in just a few minutes. Your mind starts to spin around adding concepts, then deleting them, getting excited and then completely frustrated. “Impossibile!” I said to myself in Italian. “I can’t just summarize all that in a few concepts.” But then after a lot of re-wording and rewriting, the animation takes its shape, word after word, the script is complete. You really just love it.

What was the most exciting part of the project for you? 

VB: Just being part of it. The day before you just dream about making an animation, and the next day you are working with a team of professional animators. Is it not fantastic? I get goosebumps.

From Bindi's TED-Ed animation, "How cosmic rays help us understand the universe"

Jeremiah, how do you go about turning a complex scientific concept into an animation? Did you have a clear idea of what you wanted to do at the start, or do things morph as you go on?

JD: It’s rare that I have a completely clear idea at the start, but I do try to choose lessons to work on that spark some visual ideas at first. Invariably, the next step is then improving my own understanding of the subject, which is often fuzzy at best. I start by printing out the lesson with a lot of space between the lines so I can doodle on the page as I read it over and over. These doodles are usually worthless, but they’re a starting point — they make obvious the areas of the lesson I need to focus on. Research to bridge those gaps, lots of re-reading and more sketching eventually will lead to a cohesive plan that, hopefully, flows well and communicates the material in a compelling way. Figuring out how to visualize the material parallels my own path from ignorance to understanding it.

From what did you draw inspiration while planning out the visuals?

JD: I’m really fond of mid-20th century “space age” art, a lot of which was really kind of propaganda for the Cold War space race between the U.S. and the U.S.S.R.  There’s a lot of great art that came out of that — on both sides — that is always a pleasure to delve into. I looked at the paintings of Robert McCall, who did a lot of fantastic concept art for NASA from the Apollo era onwards. There are also a couple of animated Disney films dealing with the science of space directed by Ward Kimball in the mid-1950s that always inspire. Not sure if I can say how any of these things directly influenced the animation, but they definitely informed it.

What was it like working with a scientist from CERN?

JD: It’s really an honor to get to work with people who are on the cutting edge of scientific discovery — and, to be honest, a little intimidating to be tasked with visualizing their lessons, as it’s likely how many are being introduced to the concepts they are studying at CERN. But Veronica’s lesson on cosmic rays does a great job presenting the material in a very clear way that really is the essence of good teaching — making the complicated understandable, and in this case, doing it in an appropriately mind-blowing context.


Read more about TED-Ed’s lessons and global network of educators »

Read more about TEDx organizers and events »

Watch the livestream of TEDxCERN »

TED: Amy Cuddy power-poses through pop culture

Amy Cuddy demonstrates a classic power pose used by humans and chimps alike — spreading your arms wide to appear more powerful. Photo: James Duncan Davidson

Power posing is always in style. So we were excited to see it featured in The New York Times Fashion & Style section this weekend in the article “Amy Cuddy takes a stand.” In the glowing piece about the wide influence of her TED Talk (watch: Your body language shapes who you are), the writer notes that it has affected “elementary school students, retirees, elite athletes, surgeons, politicians, victims of bullying and sexual assault, beleaguered refugees, people dealing with mental illness or physical limitations.”

This, we knew. (See a collection of emails that Cuddy received in the months after her talk went viral.) But what we didn’t know: “Power posing showed up twice in Dilbert comic strips, and Planters nuts and Secret deodorant have developed ad campaigns around it.”

Below, the evidence. First, Dilbert on power posing.


Amy Cuddy tweeted in response to this one, “Silly, Dilbert. THAT’s not the right way to power pose!”


And speaking of comics, power posing also appeared in the comic Betty.


Here, Mr. Peanut tries his hand at inspirational speaking, complete with his arms in the air.


And the Secret ad that promotes power posing:


Power posing also got awesomely spoofed in this Vooza video.


Earlier this summer, we spoke to actor Manish Dayal, who used power posing to fuel his performance in the movie The Hundred-Foot Journey. And The New York Times piece let us in on another actor using power posing as a form of method acting: Allison Williams of the show Girls. She tells the Times that she does the reverse of power posing. “Marnie generally has her shoulders forward, inched slightly up, and her arms folded as a line of defense,” she says.

Something none of the rest of us will ever do again, thanks to this talk that showed us how posture can make us feel about ourselves:


Krebs on Security: We Take Your Privacy and Security. Seriously.

“Please note that [COMPANY NAME] takes the security of your personal data very seriously.” If you’ve been on the Internet for any length of time, chances are very good that you’ve received at least one breach notification email or letter that includes some version of this obligatory line. But as far as lines go, this one is about as convincing as the classic break-up line, “It’s not you, it’s me.”


I was reminded of the sheer emptiness of this corporate breach-speak approximately two weeks ago, after receiving a snail mail letter from my Internet service provider — Cox Communications. In its letter, the company explained:

“On or about Aug. 13, 2014, we learned that one of our customer service representatives had her account credentials compromised by an unknown individual. This incident allowed the unauthorized person to view personal information associated with a small number of Cox accounts. The information which could have been viewed included your name, address, email address, your Secret Question/Answer, PIN and in some cases, the last four digits only of your Social Security number or driver’s license number.”

The letter ended with the textbook offer of free credit monitoring services (through Experian, no less), and the obligatory “Please note that Cox takes the security of your personal data very seriously.” But I wondered how seriously they really take it. So, I called the number on the back of the letter, and was directed to Stephen Boggs, director of public affairs at Cox.

Boggs said that the trouble started after a female customer account representative was “socially engineered” or tricked into giving away her account credentials to a caller posing as a Cox tech support staffer. Boggs informed me that I was one of just 52 customers whose information the attacker(s) looked up after hijacking the customer service rep’s account.

The nature of the attack described by Boggs suggested two things: 1) That the login page that Cox employees use to access customer information is available on the larger Internet (i.e., it is not an internal-only application); and that 2) the customer support representative was able to access that public portal with nothing more than a username and a password.

Boggs either did not want to answer or did not know the answer to my main question: Were Cox customer support employees required to use multi-factor or two-factor authentication to access their accounts? Boggs promised to call back with a definitive response. To Cox’s credit, he did call back a few hours later, and confirmed my suspicions.

“We do use multifactor authentication in various cases,” Boggs said. “However, in this situation there was not two-factor authentication. We are taking steps based on our investigation to close this gap, as well as to conduct re-training of our customer service representatives to close that loop as well.”

This sad state of affairs is likely the same across multiple companies that claim to be protecting your personal and financial data. In my opinion, any company — particularly one in the ISP business — that isn’t using more than a username and a password to protect their customers’ personal information should be publicly shamed.
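To illustrate what that missing second factor actually computes, here is a minimal sketch of the standard TOTP algorithm from RFC 6238 — the scheme behind most authenticator apps and hardware tokens, and not anything specific to Cox’s systems. The portal and the employee’s token share a secret, and both derive a short-lived code from it, so a phished password alone is useless:

```python
import hashlib
import hmac
import struct


def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """One-time code per RFC 6238: HMAC-SHA1 over the current time step,
    dynamically truncated to a short decimal code."""
    counter = struct.pack(">Q", int(for_time // step))  # 30-second time window
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset : offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)


# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59
print(totp(b"12345678901234567890", 59))  # "287082"
```

A login check would compare the code the user submits against `totp(secret, time.time())`, typically also accepting the adjacent time steps to tolerate clock drift.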

Unfortunately, most companies will not proactively take steps to safeguard this information until they are forced to do so — usually in response to a data breach.  Barring any pressure from Congress to find proactive ways to avoid breaches like this one, companies will continue to guarantee the security and privacy of their customers’ records, one breach at a time.

TED: JR pastes up the past, a conference on the cybersecurity of medical devices, and the start of “space taxis”

An image from JR’s “Unframed — Ellis Island.” Photo: Courtesy of JR

The past two weeks have been busy for the members of the TED community. Some news highlights:

Artist JR is bringing the past back to life with his new series “Unframed.” He has taken archival photos of the Ellis Island Immigrant Hospital in New York, closed since 1954, and pasted them up so that doctors who worked there appear to float in the surgical theater and kids treated there peer in through a window. “I let the walls decide what part of the image should appear,” he tells The New York Times. (Watch JR’s TED Talk, “Use art to turn the world inside out.”)

The United States Food and Drug Administration is holding a conference to talk about cybersecurity on medical devices — in other words, can you hack a pacemaker? (Yes.) It starts October 21 in Arlington, Virginia and is open to the public, according to The Washington Post. (Watch Avi Rubin’s talk, “All your devices can be hacked.”)

Gregoire Courtine and his team have released a new paper. The paralyzed rats he was working with? The little guys are now taking 1,000 successive steps without failure. “It is a little bit Frankenstein,” he tells the MIT Technology Review. (Watch Gregoire’s talk, “The paralyzed rat that walked.”)

Lewis Pugh shared with The New York Times what he did and didn’t see while swimming long tracts through the Seven Seas this summer. “When I swam in the Aegean, the sea floor was covered with litter; I saw tires and plastic bags, bottles, cans, shoes and clothing,” he says. “I saw no sharks, no whales, no dolphins. I saw no fish longer than 11 inches.” (Watch Lewis’ TED Talk, “My mind-shifting Everest swim.”)

Carl Zimmer opened the 7th annual Imagine Science Film Festival. (Watch Carl’s TED-Ed lesson, “How did feathers evolve?“)

NASA is partnering with Elon Musk’s SpaceX, as well as with Boeing, to create “space taxis.” These taxis will take astronauts to the International Space Station. (Watch Elon’s Q&A, “The man behind Tesla, SpaceX, SolarCity.”)

Amy Cuddy and power posing—the phenomenon of her creation—get a profile in The New York Times Fashion & Style section. The piece runs down how Cuddy’s advice to strike a powerful stance for two minutes has had a ripple effect through the world. (Watch Amy’s talk, “Your body language shapes who you are,” or check out our piece, “Amy Cuddy power-poses through popular culture.”)

Thomas Goetz and his healthcare startup, Iodine, have launched their consumer-friendly online database of prescription drugs. Goetz talks to The New York Times about how the service incorporates ratings on how people feel about the product, using information from Google Consumer Surveys. (Watch Thomas’ TED Talk, “It’s time to redesign medical data” or read our update with him: “What are your drugs trying to tell you?”)

Paleontologist Zeresenay Alemseged stars in this Nautilus profile of Ethiopia’s field researchers — an international group investigating humanity’s origins, with more than a little Indiana Jones flair. (Watch Zeresenay’s talk, “The search for humanity’s roots.”)

Charmian Gooch’s work at Global Witness gets a mention in a Washington Post story about how anonymous companies make it difficult for law enforcement to investigate crimes. (Watch Charmian’s TED Talk, “My wish: To launch a new era of open business.”)

Sally Kohn makes the Advocate’s list of the top 50 most influential LGBT people in media. (Watch Sally’s talks, “Let’s try emotional correctness” and “Don’t like clickbait? Don’t click.”)

Leslie Morgan Steiner penned a piece for The Washington Post titled “He held a gun to my head. And I love him.” It’s her explanation of why the notion that Janay Palmer Rice should just leave her husband is too simplistic. (Watch Leslie’s TED Talk, “Why domestic violence victims don’t leave.”)

Architect Bradley Cantrell, one of our TEDGlobal 2014 Fellows, creates landscapes that react in real time. Fast Company’s Co.Exist takes a look at his concepts, which could lead to levees that adapt to the needs of the river and parks that actively improve air quality. (Read about Bradley and the new class of TED Fellows.)

The disaster response system created by Morgan and Caitria O’Neill is being deployed in Weed, California, where wildfires have forced more than 2,000 residents to evacuate. (Watch Morgan and Caitria’s talk, “How to step up in the face of disaster.”)

Noah Wilson-Rich asks in The New York Times: “Are bees back up on their knees?” (Watch Noah’s TED Talk, “Every city needs healthy honey bees.”)

And finally, cool news for Sylvia Earle, who wished in 2009 for more ocean “hope spots.” Barack Obama is broadening the Pacific Remote Islands National Marine Monument from less than 100,000 square miles to close to 500,000 square miles. This is in addition to the nearly 700,000 square miles he pledged to protect in June. (Watch Sylvia’s talk, “My wish: Protect our oceans.”)

Planet Linux Australia: Colin Charles: Trip report: LinuxCon North America, CentOS Dojo Paris, WebExpo Prague

I had quite a good time at LinuxCon North America/CloudOpen North America 2014, alongside my colleague Max Mether – between us, we gave a total of five talks. I noticed that this year there was a database heavy track — Morgan Tocker from Oracle’s MySQL Team had a few talks as did Martin MC Brown from Continuent. 

The interest in MariaDB stems from the fact that people are starting to see it appear in CentOS 7, and it’s just everywhere (you can even get it from the latest Ubuntu LTS). This makes for interesting talks, since many are shipping MariaDB 5.5 as the default choice, but that’s something we released over 2 years ago; clearly there are many interesting new bits in MariaDB 10.0 that need attention!

Chicago is a fun place to be — the speaker gift was an architectural tour of Chicago by boat, probably one of the most useful gifts I’ve ever received (yes, I took plenty of photos!). The Linux Foundation team organised the event wonderfully as always, and I reckon the way the keynotes were set up with the booths in the same room was a clear winner — pity we didn’t have a booth there this year.

Shortly afterwards, I headed to Paris for the CentOS Dojo. The room was full (some 50 attendees?), most of whom were using CentOS. It’s clear that CentOS 7 comes with MariaDB, so this was a talk to get people up to speed on what’s different from MySQL 5.5, what’s missing from MySQL 5.6, and when to look at MariaDB 10. We want to build CentOS 7 packages for the MariaDB repository (10.0 is already available with MariaDB 10.0.14), so watch MDEV-6433 in the meantime for the latest 5.5 builds.

Then there was WebExpo Prague, with over 1,400 attendees, held in various theatres around Prague. Lots of people here are also using MariaDB, and there were some rather interesting conversations on having a redis front-end, how we power many sites, etc. It’s clear that there is a need for a meetup group here; there’s plenty of usage.

LongNow: David Brin, Bruce Sterling & Daniel Suarez – Manual for Civilization Lists

Photo by Patricia Chang

Our brickstarter drive for The Interval at Long Now ends October 1, 02014. Please consider a donation today to support completing The Interval, the home of the Manual for Civilization.

The Manual for Civilization is a crowd-curated collection of the 3500 books you would most want to sustain or rebuild civilization. It is also the library at The Interval, with about 1000 books on shelves floor-to-ceiling throughout the space. We are about a third of the way done with compiling the list and acquiring the selected titles.

We have a set of four categories to guide selections:

  • Cultural Canon: Great works of literature, nonfiction, poetry, philosophy, etc
  • Mechanics of Civilization: Technical knowledge, to build and understand things
  • Rigorous Science Fiction: Speculative stories about potential futures
  • Long-term Thinking, Futurism, and relevant history (Books on how to think about the future that may include surveys of the past)

Our list comes from suggestions by Interval donors, Long Now members, and some specially invited guests with particular expertise. All the book lists we’ve published so far are shown here, including lists from Brian Eno, Stewart Brand, Maria Popova, and Neal Stephenson. Interval donors will be the first to get the full list when it is complete.

Today we add selections from science fiction authors Bruce Sterling, David Brin, and Daniel Suarez. All three are known for using contemporary science and technology as a starting point from which to speculate on the future. And that type of practice is exactly why Science Fiction is one of our core categories.

David Brin is a scientist, futurist and author who has won science fiction’s highest honors including the Locus, Campbell, Nebula, and Hugo awards. His 01991 book Earth is filled with predictions for our technological future, many of which have already come true. He has served on numerous advisory committees for his scientific expertise.

David Brin (photo by Cheryl Brigham)

David Brin’s list

Bruce Sterling’s first novel was published in 01977. In 01985 he edited Mirrorshades, the defining cyberpunk anthology, and went on to win two Hugos and a Campbell award for his science fiction. His non-fiction writing, including his long-running column for Wired, is also influential. He spoke for Long Now in 02004.

Bruce Sterling (photo by Heisenberg Media)

Bruce Sterling’s list

Daniel Suarez made a huge stir with his 02006 self-published debut novel Daemon. Its success led to him speaking in 02008 for Long Now’s Seminar series and to a deal with a major publisher. In 02014 he published his fourth novel Influx.

Daniel Suarez (photo by Steve Payne)

Daniel Suarez’s list

Getting science fiction recommendations from great authors is an honor and a privilege. And we appreciate their support for The Interval, helping to give it the best library possible, as well as for The Long Now Foundation as a whole. Books from all three of these authors will appear in the Manual for Civilization, alongside these selections they’ve made of books that are important to them.

We hope that you will give us your list, too. If you’ve donated then you should have the link to submit books. And if you haven’t, then hurry up and give before October 1 at 5pm, your last chance to become a charter donor.


The Interval at Long Now in San FranciscoPhoto by Because We Can 

TED: 7 things learned from a day spent watching TEDxCERN


Wednesday marked the second-ever TEDxCERN, the event organized by the folks at CERN, the famed particle physics research center in Geneva, Switzerland, responsible for bringing us the World Wide Web, the Large Hadron Collider, and confirmation of the existence of the Higgs boson. You know, just a few minor things.

TEDxCERN brought together a mix of experts from across the sciences and the world, people all working to answer the question: “What are the big ideas in science that will help us address tomorrow’s major global problems?” Particle physicist (and three-time TED speaker) Brian Cox served as quippy host, while more than a thousand attendees watched live in a giant tent near CERN’s iconic Globe of Science and Innovation.

If you weren’t one of the lucky thousand, or were too swamped with work to catch the live webcast, don’t despair. We watched for you. And created a list of things we learned.

  1. Water is weird. So says water molecule expert Marcia Barbosa, who defended the continued study of the molecule by explaining that it has 70 anomalies—far more than silicon, a “sexier” subject due to its role in technological innovation. Barbosa closely studies the properties of water flow; and her research involving water and nanotubes could lead to better, faster methods of desalinating ocean water to meet future water demands (as science writer Marcos Pivetta explains here).
  2. Thanks to a particle detector mounted on the International Space Station, scientists are keeping tabs on a lot of cosmic rays. The number is over 54,000,000,000, and is increasing every second. CERN physicist Veronica Bindi explains how and why in this TED-Ed lesson.
  3. A surprising threat to the rainforest? Noise. When speaker Topher White began working to combat illegal logging in Sumatra, he and his team were stymied by the constant din provided by resident monkeys, birds and other creatures, which actually drowned out the sound of chainsaws being used by unauthorized loggers. To solve the problem, he invented a solar-powered device out of recycled cell phones that detects chainsaw noise and sends an alert to users’ inboxes. (Read about the device here.)
  4. The future of antibiotics may lie in silver nanoparticles. Oxford University’s Sonia Trigueros is one of the many people who has put considerable study into the material, which benefits from both silver’s natural antibacterial properties and its ability to kill unwanted cells via hydroxyl radicals. (Read a quite-technical analysis of her and others’ attempts to stabilize the nanoparticles in solution.)
  5. Cardiovascular medicine is becoming easier to get in Cameroon. This is thanks to young inventor Arthur Zang, who has created the Cardiopad — an electronic tablet that enables an electrocardiogram (ECG) to be performed on a patient almost anywhere, even in some of the most remote villages, with the results transmitted wirelessly for a specialist to assess. In a country where, in 2013, there was only one physician for every 12,500 people, this is a big deal.
  6. We owe our lives to aerosol particles. This has to do with clouds, sun and temperature, and not so much hairspray. TED-Ed explains it best in this enlightening lesson by CERN physicist Jasper Kirkby, member of the CLOUD experiment at CERN, which — appropriately — is searching out fantastic new information about our cumulus, cirrus and other cloud friends.
  7. Despite what it may seem at times, we are living in a hugely exciting moment. Julien Lesgourgues is a cosmologist and author of The Cosmic Linear Anisotropy Solving System (CLASS), a code cosmologists use in simulating the universe. Lesgourgues spoke of the massive thrill of living in a world where we are able to glean so much information about our universe from data, and encouraged the audience to “enjoy the privilege of being part of the first generation of humans who understand the secrets of our universe.” Though we’d bet money that there are more secrets to uncover.
Brian Cox hosted TEDxCERN, entertaining 1,200 attendees plus an online audience from around the globe. Photo: Courtesy of @TEDxCERN

The Globe of Science and Innovation at CERN, aka the European Organization for Nuclear Research. The famed science hub celebrated its 60th anniversary just as TEDxCERN was held. Photo: Courtesy of CERN


Topher White, creator of Rainforest Connection’s solar powered noise detectors. Photo: @TEDxCERN

Mark Shuttleworth: Cloud Foundry for the Ubuntu community?

Quick question: we have Cloud Foundry in private beta now. Is there anyone in the Ubuntu community who would like to use a Cloud Foundry instance if we were to operate that for Ubuntu members?

Racialicious: Funny Business: The Racialicious Review of Cantinflas

By Arturo R. García

It was perhaps inevitable that Sebastian del Amo’s Cantinflas would fit Charlie Chaplin into the proceedings. Much like Richard Attenborough before him, del Amo finds himself needing to make room for not just a performer, but a singular persona.

And there are moments when it feels like a more introspective film wants to burst through amid the usual hagiography. But a few choices do make this take on Mario Moreno and his life’s work more interesting than the trailer would have you believe.

SPOILERS under the cut

The film’s biggest asset, thankfully, is Óscar Jaenada in the title role. It might seem scandalous for Jaenada, a Spaniard, to inhabit the role of Mexico’s signature comedic character. But as both Moreno and Cantinflas, he buoys the film adroitly enough to placate concerns.

Crucially, Jaenada manages to recreate the signature rhythm of Cantinflas’ verbal riffing, though the film chalks the discovery of his voice up to a perhaps apocryphal encounter with a heckler during one of his first monologues. Once his act was fully developed, Moreno made it plausible that his lovable underdog persona could dominate rooms full of people, like this one in El Super Sabio.

[Embedded video]

The film leaps ahead in time in large part because its centerpiece — Moreno’s appearance in Around The World in 80 Days — takes place after Moreno has established himself as a labor activist and entrepreneur on top of his success as an actor.

[Embedded video]

Moreno’s inclusion is framed as the linchpin to Around The World being made, since he confirms his involvement alongside a literally from-out-of-nowhere Frank Sinatra, and their participation, it seems, entices Elizabeth Taylor (Barbara Mori — how’s that for a racebend?) to sign on.

But the decision is also positioned as his first step toward redemption after cheating on his wife Valentina (Ilse Salas) and allowing the Cantinflas brand to go from representing Mexico’s lower socioeconomic classes to making money off of them, as shown rather pointedly in a scene where celebrities attend the lush unveiling of a mural honoring the character, while the poor people he’s supposed to represent strain for a peek from outside the hall.

Óscar Jaenada as Mario Moreno as Cantinflas in “Cantinflas.”

It makes for a feel-good ending and a statement of balance between Moreno’s life and his work: we see him win the Golden Globe Award for Best Actor in a Comedy/Musical (this really happened) and announce that he’s both leaving Hollywood and adopting a child with Valentina. One of these statements is true: the couple did adopt a child, one he conceived with another woman. But they did remain together until she died in 1966.

In real life, however, Moreno didn’t immediately stop attempting to crack the U.S. market. Despite being warned not to do his schtick in English, Moreno attempted to do just that in 1957 with his second American feature, Pepe:

[Embedded video]

Unfortunately, not even appearances by Sinatra and Judy Garland, on top of a second Golden Globe nomination, could make Pepe a hit. Three years later, he appeared as the mystery guest on What’s My Line?:

[Embedded video]

The film has already been selected as the Mexican entry into next year’s Academy Awards, but as things stand, two factors hurt its chances: besides the historical omissions, del Amo and co-writer Edui Tijerina come up short in showing us Moreno in action as the fully-developed Cantinflas. We get snippets of directors learning to adjust (or else) to his verbal performances, but unfortunately, the only glimpse of him as a physical presence comes during the end credits, when Jaenada does his version of the eponymous sequence from El Bolero De Raquel. As he does throughout the movie, Jaenada does justice to the original, seen here:

[Embedded video]

We also don’t get any inkling of why U.S. stars like Sinatra and Taylor would hitch their wagons to Moreno’s talents, or why Chaplin would vouch for him to Around The World producer Michael Todd (Michael Imperioli, bravely battling both studio politics and an unflattering wig).

Showing Moreno gain credibility with “bigger” American performers would have fit in nicely with the narrative of the brand overtaking the man. And perhaps more importantly for Academy voters, the extra time could have helped del Amo present this story as the kind of epic a performer of Moreno’s stature deserves. After all, if Attenborough’s Chaplin biography could command 143 minutes, why limit Cantinflas to 102?

The post Funny Business: The Racialicious Review of Cantinflas appeared first on Racialicious - the intersection of race and pop culture.

Sociological ImagesIn One Year, the % of Americans Who See the Criminal Justice System as Racist Rose 9 Points

According to polling by the Public Religion Research Institute, the percentage of Americans who say that the criminal justice system treats black people unfairly rose by 9 percentage points in just one year. In fact, every category of person polled was more likely to think so in 2014 than in 2013, including Republicans, people over 65, and whites.

The biggest jump was among young people 18-29, 63% of whom believed the criminal justice system was unfair in 2014, compared to 42% in 2013. The smallest jump was among Democrats — just 3 percentage points — but they mostly thought the system was jacked to begin with.


America has a history of making changes once police violence is caught on tape and shared widely. One of the first instances was after police attacked peaceful Civil Rights marchers in Selma, Alabama. The television had just become a ubiquitous appliance and the disturbing images of brutality were hard to ignore when they flashed across living rooms.

The death of Mike Brown in Ferguson, MO, and the aftermath is the likely candidate for this change. If you do a quick Google Image search for the word “ferguson,” the dominant visual story of that conflict seems solidly on the side of the protesters, not the police.

Click to see these images larger and judge for yourself:


H/t @seanmcelwee.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet Linux AustraliaAndrew Pollock: [life] Day 243: Day care for a day

I had to resort to using Zoe's old day care today so I could do some more Thermomix Consultant training. Zoe's asked me on and off if she could go back to her old day care to visit her friends and her old teachers, so she wasn't at all disappointed when she could today. Megan was even there as well, so it was a super easy drop off. She practically hugged me and sent me on my way.

When I came back at 3pm to pick her up, she wanted to stay longer, but wavered a bit when I offered to let her stay for another hour and ended up coming home with me.

We made a side trip to the Valley to check my post office box, and then came home.

Zoe watched a bit of TV, and then Sarah arrived to pick her up. After some navel gazing, I finished off the day with a very strenuous yoga class.

CryptogramNSA Patents Available for License

There's a new article on NSA's Technology Transfer Program, a 1990s-era program to license NSA patents to private industry. I was pretty dismissive of the offerings in the article, and I didn't find anything interesting in the catalog either. Does anyone see something I missed?

My guess is that the good stuff remains classified, and isn't "transferred" to anyone.

Slashdot thread.

Worse Than FailureGridlock

In every global organization, there comes a point where someone figures out that all of those servers scattered throughout the planet aren't running at 100% capacity, and that they are sitting there going:

    Got anything for me to do?
    Got anything for me to do?
    Got anything for me to do?

It then occurs to these folks that these otherwise wasted CPU cycles can be leveraged to benefit the firm. Most people then realize that what they should do is buy a farm of machines and have some software to manage processes that are to run on the grid.

Most people.

At H. W.'s company, they designed virtual grids of virtual machines. Each one was carved out of some spare RAM and CPU cycles on an underutilized server somewhere in the firm. These could be dynamically allocated on an as-needed basis - within certain constraints. For example, you might only be allowed 4GB of RAM in your transient VM.

Once this home-grown grid software was built and turned loose in production, the team had to justify its effort by having some applications actually use the grid. Since all of the development teams had existing production hardware, they were loath to risk moving their otherwise stable production systems onto a new virtual platform. This meant that the grid folks could only target new application development.

To this end, they started from the top and applied political pressure downward to use the grid for anything new that was being built.

H. W.'s team was building a replacement system for an aging, difficult to maintain legacy system, and was informed that not only would they not be getting new hardware for their replacement system, but that they had to use the grid.

About 15 other teams were given the same directive.

Fast forward about a year and all of the new systems were ready to deploy on the grid. Each application comprised numerous up-front preparatory jobs, the main jobs, and the post-run clean-up jobs. The way you'd access the grid was to request a VM with certain specifications (MB of RAM, number of available cores, access to certain file systems). The grid management software would see what was available at that moment. If what you needed was free, it was yours. If it was not available, your request would block until the resources you needed could be provided.

The justification for this was that these were production jobs that absolutely, positively had to run to completion, so failing to allocate a required resource was simply not an option. If your job needed a machine, you'd get it as soon as it was available. Period!

The various applications were brought on-line, one at a time, independently. Although they fired up their series of jobs at different start times, there was significant overlap.

After a few weeks, several applications had been brought on-line. Then it started. Jobs would periodically, and seemingly randomly take an inordinately long period of time to finish. There were no errors in the application logs. Just incredibly long pauses between log statements. No amount of debugging in any of the applications could find anything wrong.

While this was going on, more applications were brought on-line. The pauses got longer, and jobs were not finishing within their allotted windows. Naturally, this triggered all sorts of redness on various dashboards, and managers started inquiring as to why these brand new applications were failing to complete on time. Again, no amount of debugging in any of the applications could explain the reason for the pauses where all processing within an application appeared to just stop dead.

Finally, it was H. W.'s turn to go live. This application read data from 33 different source systems, and allocated a lot of caches. Since these caches were all larger than the available VM RAM limits, they had to be broken down into sub-range caches (e.g.: A-E, F-J, ...). This forced the allocation of a lot of VM's. During scale testing, the application consistently finished its work in 30 minutes. When run on the grid, it took upward of 4 hours.

Then it happened. By random chance, all the applications on the grid stopped dead at the same time.

At this point, the source of the problem was not to be found in application logs, but in the logs of the grid VM server itself. It turns out that instead of requesting all of the resources that it would need up front, each application would grab VM's as needed. Of course, if application 1 grabbed 10% of the VM's, and application 2 grabbed 10% of the VM's, and ... application 10 grabbed 10% of the VM's, and then each of them needed to grab one more, all of them were blocked while waiting for one of the others to free up a VM. In perpetuity.
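The failure mode is the classic incremental-allocation deadlock. A toy model (not the firm's actual grid code; the 100-VM pool and the per-app counts are invented for illustration) shows why, once every app was holding its share and waiting for one more VM, no request could ever be satisfied:

```python
def can_any_app_proceed(total_vms, held_per_app, next_request=1):
    """An app's next request can only be met from the shared free pool,
    and no app releases anything until its whole job completes."""
    free = total_vms - sum(held_per_app)
    return free >= next_request

# 10 applications have each grabbed 10% of a 100-VM grid, and each now
# blocks waiting for one more VM: the free pool is empty, so nobody moves.
print(can_any_app_proceed(100, [10] * 10))       # -> False: deadlock, in perpetuity

# Had even a single VM been left free, one app could have finished and
# released its ten VMs, letting the rest drain through one at a time.
print(can_any_app_proceed(100, [10] * 9 + [9]))  # -> True
```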

Hilarity ensued when each of the development managers demanded that their application run to completion before the next application was allowed to start. Of course, there was no hardware for them to do an emergency migration as the hardware from the legacy systems had been re-purposed after the new applications were deployed.

The grid folks hacked together a change that allowed an application to specify a list of resources it would need up front and insisted that everyone make an emergency change to utilize it. Of course, an application that used a huge amount of resources would block anything else from running. You could also start two small applications leaving most of the grid free, but if there wasn't enough left for a big application, it wouldn't start, even though lots of resources were still available.
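The trade-off that hacked-in fix introduced can be sketched the same way (again an invented model with made-up numbers, not the grid team's actual code): all-or-nothing allocation removes the circular wait, but a big application now stalls even while most of the grid sits idle:

```python
def try_start(free_vms, needed):
    """All-or-nothing: grant an application's entire requirement up front,
    or grant nothing and make it wait. No partial holds, so no deadlock."""
    if needed <= free_vms:
        return free_vms - needed, True
    return free_vms, False

free = 100
free, small_a = try_start(free, 30)  # granted, 70 VMs left
free, small_b = try_start(free, 20)  # granted, 50 VMs left
free, big = try_start(free, 60)      # refused: 50 VMs sit idle, but not 60
print(small_a, small_b, big, free)   # -> True True False 50
```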

There are now 15 massive efforts to try and figure out a way to get each of these applications off the grid.

Photo credit: Foter / CC BY-SA

[Advertisement] Have you seen BuildMaster 4.3 yet? Lots of new features to make continuous delivery even easier; deploy builds from TeamCity (and other CI) to your own servers, the cloud, and more.

Planet DebianMarco d'Itri: CVE-2014-6271 fix for Debian sarge, etch and lenny

Very old Debian releases like sarge (3.1), etch (4.0) and lenny (5.0) are not supported anymore by the Debian Security Team and do not get security updates. Since some of our customers still have servers running these versions, I have built bash packages with the fix for CVE-2014-6271 (the "shellshock" bug) and Florian Weimer's patch which restricts the parsing of shell functions to specially named variables:

This work has been sponsored by my employer Seeweb, a hosting, cloud infrastructure and colocation provider.

Planet DebianJonathan Dowland: Letter to Starburst magazine

I recently read a few issues of Starburst magazine which is good fun, but a brief mention of the Man Booker prize in issue 404 stoked the fires of the age old SF-versus-mainstream argument, so I wrote the following:

Dear Starburst,

I found it perplexing that, in "Brave New Words", issue 404, whilst covering the Man Booker shortlist, Ed Fortune tried to simultaneously argue that genre readers "read broadly" yet only Howard Jacobson's novel would be of passable interest. Aside from the obvious logical contradiction, he is sadly overlooking David Mitchell's critically lauded and indisputably SF&F novel "The Bone Clocks", which it turned out was also overlooked by the short-listers. Still, Jacobson's novel made it, meaning SF&F represents 16% of the shortlist. Not too bad I'd say.

All the best & keep up the good work!

As it happens I'm currently struggling through "J". I'm at around the half-way mark.

Planet DebianDebConf team: DebConf15 dates are set, come and join us! (Posted by DebConf15 team)

At DebConf14 in Portland, Oregon, USA, next year’s DebConf team presented their conference plans and announced the conference dates: DebConf15 will take place from 15 to 22 August 2015 in Heidelberg, Germany. On the Opening Weekend on 15/16 August, we invite members of the public to participate in our wide offering of content and events, before we dive into the more technical part of the conference during the following week. DebConf15 will also be preceded by DebCamp, a time and place for teams to gather for intensive collaboration.

A set of slides from a quick show-case during the DebConf14 closing ceremony provide a quick overview of what you can expect next year. For more in-depth information, we invite you to watch the video recording of the full session, in which the team provides detailed information on the preparations so far, location and transportation to the venue at Heidelberg, the different rooms and areas at the Youth Hostel (for accommodation, hacking, talks, and social activities), details about the infrastructure that are being worked on, and the plans around the conference schedule.

We invite everyone to join us in organising this conference. There are different areas where your help could be very valuable, and we are always looking forward to your ideas. Have a look at our wiki page, join our IRC channels and subscribe to our mailing lists.

We are also contacting potential sponsors from all around the globe. If you know any organisation that could be interested, please consider handing them our sponsorship brochure or contact the fundraising team with any leads.

Let’s work together, as every year, on making the best DebConf ever!

Sky CroeserSocial Media & Society 2014 wrap up, part 1: gender and moments of grunching

This was my first attendance at Social Media and Society Conference, and sadly I could only participate in the first day, being keen to get back to Montreal to help Claire prepare for the oncoming arrival of BabyClaire. Despite feeling a little anxiety that BabyClaire might decide to make an early appearance, I enjoyed the opportunity to catch up on some of the latest research around social media use, particularly given the heavy focus on issues around social justice, race, and gender.

Click on the image for more Kate Beaton/awful velocipedestrienne excellence!

Click on the image for more Kate Beaton/awful velocipedestrienne excellence!

The morning opened with a keynote from Keith Hampton, which began with an amusing overview of some of the moral panics that have accompanied previous technological developments (including the horror of women on bicycles). After a discussion of ways in which social media facilitates increasing connection and other benefits, Hampton turned to addressing some of the costs of social media. Drawing on work by Noelle-Neumann on ‘The Spiral of Silence’, Hampton discussed recent research he’s carried out with others around the potential of social media to facilitate more lively online discourse. Surprisingly, research on Americans’ discussions of Snowden showed that only 0.3% of people were willing to discuss the topic online but not offline. Twitter and Facebook users who felt their online connections didn’t agree with their opinions were also less willing to talk about those opinions offline, across contexts. Overall, this undermines claims that people will turn to online forums to voice opinions that might be unpopular or controversial offline.

The second potential cost of social media that Hampton discussed was the increased stress that comes from learning more about bad news experienced by close connections. Results here were highly gendered, beginning with the base measures of stress: women are, on average, more stressed than men. (Race also plays a role, unsurprisingly – Jenny Korn noted the need for more discussion on this.) Men, on the whole, experience no changes in stress levels associated with increasing social media use, while women generally experienced lessened stress with more social media use. However, the contagion effects of bad news for close connections were significantly higher for women than for men.

This was interesting research (which my short summary does little justice to), but I did experience an odd moment of grunching during this talk – a sudden sensation of being othered. In discussing women’s higher levels of awareness of stressful events in close connections’ lives, Hampton made a throwaway joke about his wife having ‘some theories as to why this might be’. This is not, obviously, a glaring instance of sexism, but the smattering of polite laughter did, suddenly, throw me out of my sense of ease and curiosity about research. Some of the tweets that followed helped to catalyse the source of my unease: the expectation that we could all laugh along at the disproportionate burden of emotional labour that women bear, and the lack of interrogation about why we bear that burden, or how we might shift it.

I experienced a few other moments of this sudden grunching throughout the conference (including when a participant well above forty joked on the conference hashtag about the difficulty of verifying age of consent in singles bars). I’ve decided to start writing about them despite my anxiety that, as an early career researcher, such reflections will have negative impacts on my work, because I think it’s important to name and discuss these small moments of alienation and otherness, as well as the big ones.

After the keynote presentation, I presented Tim’s and my research in the ‘Politics’ stream (we’re currently working on writing this up, so hopefully we’ll be able to share more soon). Next up, ‘Mapping Iran’s Online Public’, by Xiaoyi Ma and Emad Khazraee, laid out a useful methodology for capturing and automatically categorising tweets in Farsi. While this research does tend to support the common assumption that Twitter in Iran is dominated by young progressives (probably because Twitter is banned in Iran), Khazraee noted that the Iranian blogosphere is much more evenly divided.

Catherine Dumas’ presentation on Political mobilisation of online communities through e-petitioning behaviour in WeThePeople focused on the wake of the Sandy Hook shooting, demonstrating signs of organised counter-mobilisation against gun control, including several e-petitions attempting to shift the focus to mental health services and armed guards in schools.

The final presentation of the session focused on issues of archiving and trust related to government use of social media, particularly around the Canadian Truth and Reconciliation Commission on residential schools. Elizabeth Shaffer spoke about the importance of archives to those trying to prove their experiences at residential schools and seek redress, and noted that records will continue to be important as we look back on the Commission over coming years. She suggested that social media is likely to play a key role in the discussions around the Commission, and has the potential to be used for more horizontal engagement and information sharing. This research is still at an early stage, albeit a fascinating one, bringing together literature on social media, archiving, and governance: I’m very curious to learn more about how the process of archiving social media around the Commission progresses, and whose voices are (and aren’t) included.

The next panel addressed Twitter and Privacy, with all three panelists noting that this issue is inherently gendered. Siobhan O’Flynn addressed the ways in which Twitter’s terms of service create a legal grey zone. O’Flynn argued, in part, that the existence of hashtags as a means of joining a broader conversation sets up an implicit expectation of privateness for non-hashtagged content – I’m curious about the empirical data around this, and whether users base their actions on this expectation. Nehal ElHadi, like O’Flynn, discussed the appropriation of tweets in response to Christine Fox‘s question to her followers about what they were wearing when they were assaulted, using this as a starting-point for exploring what it means for Twitter content to be ‘public’. ElHadi’s theoretical framework draws on a range of literature, including postcolonial work on the politicisation of space, bringing in vital attention to race and power online, which is often neglected in academia.

Finally, Ramona Pringle spoke briefly on some of her transmedia storytelling projects (including Avatar Secrets, which looks like a super-cool exploration of what it means to live in a wired world, told through a personal lens). Pringle emphasised that Twitter, like other social media, isn’t just a device like a VCR; it’s not a tool we read the manual for, operate, and then put down. Instead, it’s a space we hang out in – we may not understand all of the implications and potential consequences of being there, in much the same way that we may not understand all of the laws governing public spaces like a library or coffee shop. She also spoke about the inherent messiness of human relationships, which includes human relationships online, and why this means that it’s not reasonable to draw lines like, ‘adults just shouldn’t sext’, or ‘if you don’t want people to see naked images of you, don’t ever take them’.

In tomorrow’s installment of the SMSociety14 wrap up: cultural acceptance, social media use by unions, and Idle No More!

Kelvin ThomsonLocal Government Reform

Having been a Councillor in the City of Coburg, State Member for Pascoe Vale, and now a Federal MP, I have seen a steady erosion in our local government democratic system and its ability to effectively represent local communities.

The present local government system is a recipe for corruption. Councillors have a lot of power, a high workload, low salary, and courtesy of proportional representation, low accountability. These arrangements have created a perfect storm in which residents ultimately lose out.

The Local Government Electoral Review was generated following an unprecedented number of complaints, 456, received following the 2012 local council elections. The Review made several recommendations including capping campaign donations at $1,000, postal voting for all municipalities, candidates nominating in person and a review of local councillor allowances.

Candidates nominating in person and having to show they have a minimum number of supporters makes eminent sense.

Current councillor allowances should be reviewed in the context of population growth, resident expectations, councillor workloads and attracting better quality candidates. Councillors in Victoria are paid between $7,730 and $27,514 a year. Low pay combined with a high workload and pressure is not a good mix if we want to weed out corruption and attract high calibre candidates.

Councillor workloads are becoming increasingly difficult given Melbourne’s rapid population growth. As of 2014, the City of Moreland has 163,217 residents and 11 Councillors representing 3 wards. With multi-member council wards now almost the size of some state electorates, containing 30,000 to 50,000 residents, the workload is too high for what is at best a part time job.

When I was a Councillor, we had single member wards with around 8,000 to 10,000 residents per ward. The ward was small enough that as a Councillor you had time to get to know virtually everyone in your neighbourhood, and to be more proactive on emerging issues. Under this system residents could build a personal relationship with their local councillor, and councillors were more accountable and could not hide their decisions or work ethic. Under the multi member ward system, some councillors work harder than others, and councillors who do not put in the same effort, do not turn up to meetings, vote against the interests of their communities, or show disregard for the position they were elected to are less accountable to their communities.

We would be a lot better off moving back to single member wards for councils. This would improve Councillors’ ability to represent their communities, improve residents’ access to their representatives, and reduce the risk of corruption, inappropriate behaviour, and low accountability.
Local Government is the closest level of government to the people. The VEC, Local Government Electoral Review and the Ministers responsible, should be working to better connect residents with councillors, and encourage good quality council representatives.

Planet Linux AustraliaSonia Hamilton: Git and mercurial abort: revision cannot be pushed

I’ve been migrating some repositories from Mercurial to Git; as part of this migration process some users want to keep using Mercurial locally until they have time to learn git.

First install the hg-git tools; for example on Ubuntu:

sudo aptitude install python-setuptools python-dev
sudo easy_install hg-git

Make sure the following is in your ~/.hgrc (the entries must live under an [extensions] section or Mercurial will ignore them):

[extensions]
hgext.bookmarks =
hggit =

Then, in your existing mercurial repository, add a new remote that points to the git repository. For example for a BitBucket repository:

cd <mercurial repository>
cat .hg/hgrc
[paths]
# the original hg repository
default =
# the git version (on BitBucket in this case)
bbgit   = git+ssh://

Then you can run hg push bbgit to push from your local hg repository to the remote git repository.

mercurial abort: revision cannot be pushed

You may get the error mercurial abort: revision cannot be pushed since it doesn’t have a ref when pushing from hg to git, or you might notice that your hg work isn’t being pushed. The solution here is to reset the hg bookmark for git’s master branch:

hg book -f -r tip master
hg push bbgit

If you find yourself doing this regularly, this small shell function (in your ~/.bashrc) will help:

hggitpush () {
   # $1 is hg remote name in hgrc for repo
   # $2 is branch (defaults to master)
   hg book -f -r tip ${2:-master}
   hg push $1
}

Then from your shell you can run commands like:

hggitpush bbgit dev
hggitpush foogit      # defaults to pushing to master


Planet DebianEan Schuessler: RoboJuggy at JavaOne

A few months ago I was showing my friend Bruno Souza the work I had been doing with my childhood friend and robotics genius, David Hanson. I had been watching what David was going through in his process of creating life-like robots with the limited industrial software available for motor control. I had suggested to David that binding motors to Blender control structures was a genuinely viable possibility. David talked with his forward looking CEO, Jong Lee, and they were gracious enough to invite me to Hong Kong to make this exciting idea a reality. Working closely with the HRI team (Vytas, Gabrielos, Fabien and Davide) and with David’s friends and collaborators at OpenCog (Ben Goertzel, Mandeep, David, Jamie, Alex and Samuel), a month-long creative hack-fest yielded pretty amazing results.

Bruno is an avid puppeteer, a global organizer of java user groups and creator of Juggy the Java Finch, mascot of Java users and user groups everywhere. We started talking about how cool it would be to have a robot version of Juggy. When I was in China I had spent a little time playing with Mark Tilden’s RSMedia and various versions of David’s hobby servo based emotive heads. Bruno and I did a little research into the ROS Java bindings for the Robot Operating System and decided that if we could make that part of the picture we had a great and fun idea for a JavaOne talk.

Hunting and gathering

I tracked down a fairly priced RSMedia in Alaska, Bruno put a pair of rubber Juggy puppet heads in the mail and we were on our way.
We had decided that we wanted RoboJuggy to be able to run about untethered and the new RaspberryPi B+ seemed like the perfect low power brain to make that happen. I like the Debian based Raspbian distributions but had lately started using the “netinst” Pi images. These get your Pi up and running in about 15 minutes with a nicely minimalistic install instead of a pile of dependencies you probably don’t need. I’d recommend that anyone interested in duplicating our work start their journey there:

Raspbian UA Net Installer

Robots seem like an embedded application but ROS only ships packages for Ubuntu. I was pleasantly surprised that there are very good instructions for building ROS from source on the Pi. I ended up following these instructions:

Setting up ROS Hydro on the Raspberry Pi

Building from source means that your entire install ends up being “isolated” (in ROS speak) and your file locations and build instructions end up being subtly different. As explained in the linked article, this process is also very time consuming. One thing I would recommend once you get past this step is to use the UNIX dd command to back up your entire SD card to a desktop. This way if you make a mess of things in later steps you can restore your install to a pristine Raspbian+ROS install. If your SD drive was on /dev/sdb you might use something like this to do the job:

sudo dd bs=4M if=/dev/sdb | gzip > /home/your_username/image`date +%d%m%y`.gz

Getting Java in the mix

Once you have your Pi all set up with minimal Raspbian and ROS you are going to want a Java VM. The Pi runs an ARM CPU so you need the corresponding version of Java. I tried getting things going initially with OpenJDK and I had some issues with that. I will work on resolving that in the future because I would like to have a 100% Free Software kit for this, but since this was for JavaOne I also wanted JDK8, which isn't available in Debian yet. So, I downloaded the Oracle JDK8 package for ARM.

Java 8 JDK for ARM

At this point you are ready to start installing the ROS Java packages. I’m pretty sure the way I did this initially is wrong, but I was trying to reconcile the two install procedures for ROS Java and ROS Hydro for the Raspberry Pi. I started by following these directions for ROS Java, but with a few exceptions (you have to click the “install from source” link on the page to see the right stuff):

Installing ROS Java on Hydro

Now these instructions are good but this is a Pi running Debian and not an Ubuntu install. You won’t run the apt-get package commands because those tools were already installed in your earlier steps. Also, this creates its own workspace and we really want these packages all in one workspace. You can apparently “chain” workspaces in ROS but I didn’t understand this well enough to get it working so what I did was this:

> mkdir -p ~/rosjava 
> wstool init -j4 ~/rosjava/src
> source ~/ros_catkin_ws/install_isolated/setup.bash
> cd ~/rosjava # Make sure we've got all rosdeps and msg packages.
> rosdep update 
> rosdep install --from-paths src -i -y

and then copied the sources installed into ~/rosjava/src into my main ~/ros_catkin_ws/src. Once those were copied over I was able to run a standard build.

> catkin_make_isolated --install

Like the main ROS install this process will take a little while. The Java gradle builds take an especially long time. One thing I would recommend to speed up your workflow is to have an x86 Debian install (native desktop, QEMU instance, docker, whatever) and do these same “build from source” installs there. This will let you try your steps out on a much faster system before you try them out in the Pi. That can be a big time saver.

Putting together the pieces

Around this time my RSMedia had finally shown up from Alaska. At first I thought I had a broken unit because it would power up, complain about not passing system tests and then shut back down. It turns out that if you just put the D batteries in and miss the four AAs, it will kind of pretend to be working, so watch for that mistake. Here is a picture of the RSMedia when it first came out of the box (sorry that it’s rotated, I need to fix my WordPress install):



Other parts were starting to roll in as well. The rubber puppet heads had made their way through Brazilian customs, and my Pololu Mini Maestro 24 had also shown up, as well as my servo motors and pan-and-tilt camera rig. I had previously bought a set of 10 motors for goofing around, so I bought the pan-and-tilt rig by itself for about $5(!), but you can buy a complete set for around $25 from a number of eBay stores.

Complete pan and tilt rig with motors for $25

A bit more about the Pololu. This astonishing little motor controller costs about $25 and gives you control of 24 motors with an easy-to-use, high-level serial API. It is probably also possible to control these servos directly from the Pi and eliminate this board, but that would be genuinely difficult because of the real-time timing issues. For $25 this thing is a real gem and you won’t regret buying it.
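For flavor, the heart of that serial API is the Maestro’s “compact protocol”: a 0x84 command byte, the channel number, and a 14-bit target (in quarter-microseconds) split into two 7-bit bytes. A minimal sketch, not Pololu’s official library; the serial port path in the comment is an assumption and varies by system:

```python
def maestro_set_target(channel, target_us):
    """Build a Pololu Maestro 'set target' command (compact protocol).

    The Maestro expects servo targets in quarter-microseconds; the 14-bit
    value is split into a low 7-bit byte and a high 7-bit byte.
    """
    quarter_us = int(target_us * 4)
    return bytes([0x84, channel, quarter_us & 0x7F, (quarter_us >> 7) & 0x7F])

# Sending it to the board (requires pyserial; the device path is an
# assumption -- adjust for your system):
#   import serial
#   with serial.Serial("/dev/ttyACM0", 9600) as port:
#       port.write(maestro_set_target(0, 1500))  # centre a servo on channel 0
```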

Now it was time to start dissecting the RSMedia and getting control of its brain. Unfortunately a lot of great information about the RSMedia has floated away since it was in its heyday 5 years ago, but there is still some solid information out there that we need to round up and preserve. A great resource is the SourceForge-based website here:

That site has links to a number of useful sites. You will definitely want to check out their wiki. To disassemble the RSMedia I followed their instructions. I will say, it would be smart to take more pictures as you go, because they don’t take as many as they should. I took pictures of each board and its associated connections as I dismantled the unit, and that helped me get things back together later. Another important note: if all you want to do is solder onto the control board and not replace the head, then it’s feasible to solder the board in place without completely disassembling the unit. Here are some photos of the disassembly:


Now I also had to start adjusting the puppet head, building an armature for the motors to control it, and hooking it into the robot. I need to take some more photos of the actual armature. I like to use cardboard for this kind of stuff because it is so fast to work with and relatively strong. One trick I have learned about cardboard is that if you get something going with it and you need it to be a little more production-strength, you can paint it down with fiberglass resin from your local auto store. Once it dries it becomes incredibly tough, because the resin soaks through the fibers of the cardboard and hardens around them. You will want to do this in a well ventilated area, but it’s a great way to build super tough prototypes.

Another prototyping trick I can suggest is using a combination of Velcro and zip ties to hook things together. The result is surprisingly strong and still easy to take apart if things aren’t working out. Velcro self-adhesive pads stick to rubber like magic, and that is actually how I hooked the jaw servo onto the mask. You can see me torturing its initial connection here:

Since the puppet head had come all the way from Brazil, I decided to cook some chicken hearts in the churrascaria style while I worked on them in the garage. This may sound gross but I’m telling you, you need to try it! I soaked mine in soy sauce, Sriracha and Chinese cooking wine. Delicious, but I digress.



As I was eating my chicken hearts I was also connecting the pan and tilt armature onto the puppet’s jaw and eye assembly. It took me most of the evening to get all this going but by about one in the morning things were starting to look good!

I only had a few days left to hack things together before JavaOne and things were starting to get tight. I had so much to do and had also started to run into some nasty surprises with the ROS Java control software. It turns out that ROS Java is less than friendly with ROS message structures that are not “built in”. I had tried to follow the provided instructions but was not (and still am not) able to get that working.

Using “unofficial” messages with ROS Java

I still needed to get control of the RSMedia. Doing that required the delicate operation of soldering to its control board. On the board there are a set of pins that provide a serial interface to the ARM based embedded Linux computer that controls the robot. To do that I followed these excellent instructions:

Connecting to the RSMedia Linux Console Port

After some sweaty time bent over a magnifying glass I had success:


I had previously purchased the USB-TTL232 accessory described in the article from the awesome Tanner Electronics store in Dallas. If you are a geek I would recommend that you go there and say hi to its proprietor (and walking encyclopedia of electronics knowledge) Jim Tanner.

It was very gratifying when I started a copy of minicom, set it to 115200, N, 8, 1, plugged the serial widget into the RSMedia and booted it up. I was greeted with a clearly recognizable Linux startup and console prompt. At first I thought I had done something wrong because I couldn’t get it to respond to commands, but I quickly realized I had flow control turned on. Once it was turned off I was able to navigate around the file system, execute commands and have some fun. A little research and I found this useful resource, which let me get all kinds of body movements going:

A collection of useful commands for the RSMedia

At this point, I had a usable set of controls for the body as well as the neck armature. I had a controller running the industry’s latest and greatest robotics framework that could run on the RSMedia without being tethered to power and I had most of a connection to Java going.  Now I just had to get all those pieces working together. The only problem is that time was running out and I only had a couple of days until my talk and still had to pack and square things away at work.

The last day was spent doing things that I wouldn’t be able to do on the road. My brother Erik (a fantastic artist) came over to help paint up the Juggy head and fix the eyeball armature. He used a mix of oil paint and rubber cement, which stuck to the mask beautifully.

I bought battery packs for the USB Pi power and the 6v motor control and integrated them into a box that could sit below the neck armature. I fixed up a cloth neck sleeve that could cover everything. Luckily during all this my beautiful and ever so supportive girlfriend Becca had helped me get packed or I probably wouldn’t have made it out the door.

Welcome to San Francisco



Geek FeminismDamn the Man, Save the Linkspam! (28 September 2014)

  • You don’t know what you don’t know: How our unconscious minds undermine the workplace | Official Google Blog (September 25): Google runs research and analytics to try and combat unconscious bias that excludes minorities. “we need to help people identify and understand their biases so that they can start to combat them. So we developed a workshop, Unconscious Bias @ Work, in which more than 26,000 Googlers have taken part. And it’s made an impact: Participants were significantly more aware, had greater understanding, and were more motivated to overcome bias.”
  • Building a better and more diverse community | Blog – Hacker School (September 25): “The short: We now have need-based living expense grants for black and non-white Latino/a and Hispanic people, as well as people from many other groups traditionally underrepresented in programming. Etsy, Juniper, Perka, Stripe, Betaworks, and Fog Creek have partnered with us to fund the grants, and help make the demographics of Hacker School better reflect those of the US. Hacker School remains free for everyone.”
  • Science Has A Thomas Jefferson Problem… | Isis the Scientist… (September 19): “A large portion of the attacks against scientists are perpetrated by someone the victim knew, but many women in general know their attackers. So, at the crux of the stunning and shocking and eye opening is something that I find more insidious – it is the belief that science is somehow different than society at large.”
  • Read The Nasty Comments Women In Science Deal With Daily | The Huffington Post (September 25): [CW: Sexist and harassing language] “Curious to learn more about sexism in science, HuffPost Science reached out to women on the secret-sharing app Whisper. We asked whether anyone had ever said or done anything to discourage their interest in science–and, as you can see below, we were flooded with responses.”
  • Book Challenges Suppress Diversity | Diversity in YA (September 18): “It’s clear to me that books that fall outside the white, straight, abled mainstream are challenged more often than books that do not destabilize the status quo.”
  • Technology Isn’t Designed to Fit Women | Motherboard (September 12): “In some cases, making devices smaller necessarily requires waiting for further technological advancements; just think of how smartphones shrunk through the years as the tech was refined (before phablets took them in the other direction). But especially when it comes to devices that are implanted in the body, this has a disproportionate impact on people of smaller stature—which means women are more likely to be left behind.”
  • Building a Better Breast Pump | The Atlantic (September 25): “Until women have better support for breast-feeding, whether that manifests as paid maternity leave, safe and convenient places for pumping, or better access to lactation specialists, breast pumps aren’t likely to go the way of the Fitbit.”
  • Hope-less at Hope X | (September 18): “What Edward Snowden, Glenn Greenwald and Laura Poitras made possible, a couple of knuckleheads made impossible. The courage that Snowden has shown, the determination Poitras has shown, the persistence Greenwald has displayed — all these things made it possible for a woman who mostly doesn’t leave the house to … well, leave the house. I thought, for the first time in years, maybe this is a fight I should be fighting alongside the others.”
  • Goodbye, Ello: Privacy, Safety, and Why Ello Makes Me More Vulnerable to My Abusers and Harassers | Not Your Ex/Rotic (September 23): “Because the people I most want to avoid know my aliases. They are friends with people I know on Ello. They might already be on Ello (I’d be surprised if they weren’t) and are totally open to following me, reading me, tagging me, commenting on my posts. Hell, they can even find me through our mutual friends – any mutual activity pops up on their Friends feed. And, by the way Ello is currently set up, there is nothing I can do about it.”
  • The Victim, The Comforter, The Guy’s Girl… | Matter | Medium (September 23): “I’ve come to notice more and more how working within the particular masculine sexism of the tech industry has nudged the way I present myself, just a little. I’ve noticed how, very slowly, I’ve started to acquiesce into playing roles that get assigned to me. I’ve noticed how I disappear behind these masks.”
  • Apple Promised an Expansive Health App So Why Can’t I Track Menstruation? | The Verge (September 25): “Apple’s HealthKit can help you keep track of your blood alcohol content. If you’re still growing, it’ll track your height. And if you have an inhaler, it’ll help you track how often you use it. You can even use it to input your sodium intake, because “with Health, you can monitor all of your metrics that you’re most interested in,” said Apple Software executive Craig Federighi back in June. And yet, of all the crazy stuff you can do with the Health app, Apple somehow managed to omit a woman’s menstrual cycle.”
  • Why can’t you track periods in Apple’s Health app? | ntlk’s blog (September 26): “So why isn’t cycle tracking present in the Health app? I don’t know, but the only valid reason I can think of is that it didn’t occur to anyone to include it.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianJonathan Dowland: Puppet and filesystem mounts

Well, not long after writing my last post I've found some time to write up some of my puppet adventures, sooner than I imagined...

Outside work, I sys-admin a VPS instance that is shared by a few friends. We recently embarked in a project to migrate to a different VPS instance and I took the opportunity to revisit how we managed home directories.

I've got all the disk space allocated to the VM set up as LVM physical volumes. This has proven very useful for later expansion: we can do it all live. Each user on the VM may have one or more UNIX accounts that they use. Therefore, in the old scheme, for the jon user, we mounted an allocation of disk space at /home/jons, put the account home directories under it at e.g. /home/jons/jon, symlinked /home/jon -> /home/jons/jon for brevity, and set that as the home field in the passwd entry. This worked surprisingly well, but I was always uncomfortable with having a symlink in the home path (and some software was, too.)

For the new machine, I decided to try bind mounts. Short story: they just work. However, the mtab (and df output) can look a little cluttered, and mount order becomes quite important. To manage the set-up, I wrote a few puppet snippets. First, a convenience definition to make the actual bind-mounts a little less verbose.

define bindmount($device) {
  mount { $name:
    device  => $device,
    ensure  => mounted,
    fstype  => 'none',
    options => 'bind',
    dump    => 0,
    pass    => 2,
    require => File[$device],
  }
}

Once that was in place, we then needed to ensure that the directories to which the LV was to be mounted, and to which the user's home would be bind-mounted, actually existed; we also needed to mount the underlying LV and set up the bind mount. The dependency chain is actually a graph, but the majority of dependencies are quite linear:

define bindmounthome() {
  file { ["/home/${name}s", "/home/${name}"]:
    ensure  => directory,
  } -> # depended upon by
  mount { "/home/${name}s":
    device  => "LABEL=${name}",
    ensure  => mounted,
    fstype  => 'ext4',
    options => 'defaults',
    dump    => 0,
    pass    => 2,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device  => "/home/${name}s/${name}",
  }
  file { "/home/${name}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${name}s"]],
  }
}

That covers the underlying mounts and the "primary" accounts. However, the point of this exercise was to support the secondary accounts for each user. There's a bit of repetition here, and with some refactoring both this and the preceding bindmounthome definition could be a bit shorter, but I'm not sure whether that would be at the expense of legibility:

define seconduser($parent) {
  file { "/home/${name}":
    ensure => directory,
  } -> # depended upon by
  bindmount { "/home/${name}":
    device => "/home/${parent}s/${name}",
  }
  file { "/home/${parent}s/${name}":
    ensure  => directory,
    owner   => $name,
    group   => $name,
    mode    => 0701, # 0701/drwx-----x
    require => [User[$name], Group[$name], Mount["/home/${parent}s"]],
  }
}

I had to re-read the above a couple of times just now to convince myself that I hadn't missed the dependencies between the mount invocations towards the bottom, but they're there: so, puppet will always run the mount for /home/jons before /home/jons/jon. Since puppet is writing to the fstab, this means that the ordering is correct and a sequential start-up will work.
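For illustration, the fstab entries puppet ends up writing for the jon example would look roughly like this (label and paths follow the example above; ordering matters, since the underlying LV mount must precede the bind mount):

```
# /etc/fstab fragment (illustrative)
LABEL=jon       /home/jons  ext4  defaults  0  2
/home/jons/jon  /home/jon   none  bind      0  2
```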

If you want anything cleverer than serialised, one-at-a-time mounting at boot, I think one would have to use something other than trusty-old fstab for the job. I'm planning to look at Systemd's mount unit type, but there's no rush as this particular host is still running sysvinit for the time being.
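For reference, a bind mount expressed as a systemd mount unit might look like the sketch below. The paths are the jon example from above and purely illustrative; note that systemd requires the unit file name to match the mount point, with slashes replaced by dashes:

```
# /etc/systemd/system/home-jon.mount (illustrative)
[Unit]
Requires=home-jons.mount
After=home-jons.mount

[Mount]
What=/home/jons/jon
Where=/home/jon
Type=none
Options=bind
```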

Planet DebianClint Adams: Banana Pi is a real thing

Now that I've almost caught up with life after an extended stint on the West Coast, it's time to play.

Like Gunnar, I acquired a Banana Pi courtesy of LeMaker.

My GuruPlug (courtesy me) and my Excito B3 (courtesy the lovely people at Tor) are giving me a bit of trouble in different ways, so my intent is to decommission and give away the GuruPlug and Excito B3, leaving my DreamPlug and the Banana Pi to provide the services currently performed by the GuruPlug, Excito B3, and DreamPlug.

The Banana Pi is presently running Bananian on a 32G SDHC (Class 10) card. This is close to wheezy, and appears to have a mostly-sane default configuration, but I am not going to trust some random software downloaded off the Internet on my home network, so I need to be able to run Debian on it instead.

My preliminary belief is that the two main obstacles are Linux and U-Boot. Bananian 14.09 comes with Linux 3.4.90+ #1 SMP PREEMPT Fri Sep 12 18:13:45 CEST 2014 armv7l GNU/Linux, whatever that is, and U-Boot SPL 2014.04-10694-g2ae8b32 (Sep 03 2014 - 20:53:14). I don't yet know what the status of mainline/Debian support is.

Someone gave me a wooden cigar box to use as a case, which is not working out quite as hoped. I also found that my hack to power a 3.5" SATA drive does not work, so I'll either need to hammer on that some more or resolve to use a 2.5" drive instead.


             total       used       free     shared    buffers     cached
Mem:        993700      36632     957068          0       2248      11136
-/+ buffers/cache:      23248     970452
Swap:       524284       1336     522948


Processor       : ARMv7 Processor rev 4 (v7l)
processor       : 0
BogoMIPS        : 1192.96

processor       : 1
BogoMIPS        : 1197.05

Features        : swp half thumb fastmult vfp edsp neon vfpv3 tls vfpv4 idiva idivt 
CPU implementer : 0x41
CPU architecture: 7
CPU variant     : 0x0
CPU part        : 0xc07
CPU revision    : 4

Hardware        : sun7i
Revision        : 0000
Serial          : 03c32de75055484880485278165166c9

Planet DebianJonathan Dowland: What have I been up to?

It's been a little while since I've written about what I've been up to. The truth is I've been busy with moving house - and I'll write a bit more about that at another time. But aside from that there have been some bits and bobs.

I use a little tool called archivemail to tidy up old listmail (my policy is to retain 30 days of listmail for most lists). If I unsubscribe from a list, then eventually I end up with an empty mail folder corresponding to that list. I decided it would be nice to extend archivemail to delete mailboxes if, after the archiving has taken place, the mailbox is empty. Doing this properly means adding delete routines to Python's "mailbox" library, which is part of the Python standard library. I've therefore started work on a patch for Python.
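This is not the actual patch, but the idea can be sketched with today's mailbox module: after archiving, delete the on-disk mbox if nothing remains. The helper name remove_if_empty is hypothetical, and locking and non-mbox formats are ignored for brevity:

```python
import mailbox
import os

def remove_if_empty(path):
    """Delete an mbox file if it no longer contains any messages.

    A stand-in for the delete support being proposed for Python's
    mailbox library; returns True if the file was removed.
    """
    mb = mailbox.mbox(path)
    try:
        empty = (len(mb) == 0)
    finally:
        mb.close()
    if empty:
        os.remove(path)
    return empty
```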

Since this is an enhancement, Python would only accept a patch for Python 3. Therefore, eventually, I would also have to port archivemail from Python 2 to 3. "archivemail" is basically abandonware at the moment, and the principal Debian maintainer is MIA. There was a release critical bug filed against it, so I joined the Debian Python team to co-maintain archivemail in Debian. I've worked around the RC bug but a proper fix is still to come.

In other Debian news, I've been mostly quiet. A small patch for squishyball to get it to build on Hurd, and a temporary fix patch for lhasa to get it to build on the build daemons for all architectures (problems with the test suite). All three of lhasa, squishyball and archivemail need a little bit of love to get them into shape before the jessie freeze.

I've had plans to write up some of the more interesting technical things I've been up to at work, but with the huge successes of the School we've been so busy I haven't had time. Hopefully you can soon look forward to some of our further adventures with puppet, including evaluating Shibboleth modules, some stuff about handling user directories, bind mounts and LVM volumes and actually publishing some of our more useful internal modules; I hope we will also (soon) have some useful data to go with our experiments with Linux LXC containers versus KVM-powered virtual machines in some of our use-cases. I've also got a few bits and pieces on Systemd to write up.

Planet Linux AustraliaSridhar Dhanapalan: Twitter posts: 2014-09-22 to 2014-09-28

Planet DebianBenjamin Mako Hill: Community Data Science Workshops Post-Mortem

Earlier this year, I helped plan and run the Community Data Science Workshops: a series of three (and a half) day-long workshops designed to help people learn basic programming and tools for data science tools in order to ask and answer questions about online communities like Wikipedia and Twitter. You can read our initial announcement for more about the vision.

The workshops were organized by myself, Jonathan Morgan from the Wikimedia Foundation, long-time Software Carpentry teacher Tommy Guy, and a group of 15 volunteer “mentors” who taught project-based afternoon sessions and worked one-on-one with more than 50 participants. With overwhelming interest, we were ultimately constrained by the number of mentors who volunteered. Unfortunately, this meant that we had to turn away most of the people who applied. Although it was not emphasized in recruiting or used as a selection criteria, a majority of the participants were women.

The workshops were all free of charge and sponsored by the UW Department of Communication, who provided space, and the eScience Institute, who provided food.

The curriculum for all four sessions is online:

The workshops were designed for people with no previous programming experience. Although most of our participants were from the University of Washington, we had non-UW participants from as far away as Vancouver, BC.

Feedback we collected suggests that the sessions were a huge success, that participants learned enormously, and that the workshops filled a real need in the Seattle community. Between workshops, participants organized meet-ups to practice their programming skills.

Most excitingly, just as we based our curriculum for the first session on the Boston Python Workshop’s, others have been building off our curriculum. Elana Hashman, who was a mentor at the CDSW, is coordinating a set of Python Workshops for Beginners with a group at the University of Waterloo and with sponsorship from the Python Software Foundation using curriculum based on ours. I also know of two university classes that are tentatively being planned around the curriculum.

Because a growing number of groups have been contacting us about running their own events based on the CDSW — and because we are currently making plans to run another round of workshops in Seattle late this fall — I coordinated with a number of other mentors to go over participant feedback and to put together a long write-up of our reflections in the form of a post-mortem. Although our emphasis is on things we might do differently, we provide a broad range of information that might be useful to people running a CDSW (e.g., our budget). Please let me know if you are planning to run an event so we can coordinate going forward.

Planet Linux AustraliaDavid Rowe: SM1000 Part 6 – Noise and Radio Tests

For the last few weeks I have been debugging some noise issues in “analog mode”, and testing the SM1000 between a couple of HF radios.

The SM1000 needs to operate in “analog” mode as well as support FreeDV Digital Voice (DV mode). In analog mode, the ADC samples the mic signal, and sends it straight to the DAC where it is sent to the mic input of the radio. This lets you use the SM1000 for SSB as well as DV, without unplugging the SM1000 and changing microphones. Analog mode is a bit more challenging as electrical noise in the SM1000, if not controlled, makes it through to the transmit audio. DV mode is less sensitive, as the modem doesn’t care about low level noise.

Tracking down noise sources involves a lot of detail work, not very exciting but time consuming. For example, I can hear a noise in the received audio: is it from the DAC or ADC side? Write software so I can press a button to send zero samples to the DAC, letting me separate the DAC and ADC at run time. OK, it's the ADC side: is it the ADC itself or the microphone amplifier? Break the net and terminate the ADC with a 1k resistor to ground (thanks Matt VK5ZM for this suggestion). OK, it's the microphone amplifier, so is it on the input side or the op-amp itself? Does the noise level change with the mic gain control? No, then it must not be from the input. And so it goes.

I found noise due to the ADC, the mic amp, the mic bias circuit, and the 5V switcher. Various capacitors and RC filters helped reduce it to acceptable levels. The switcher caused high-frequency hiss; this was improved with a 100nF cap across R40 and a 1500 ohm/1nF RC filter between U9 and the ADC input on U1 (prototype schematic). The mic amp and mic bias circuit were picking up 50Hz noise at the frame rate of the DSP software; that was fixed with a 220uF cap across R40 and a 100 ohm/220uF RC filter in series with R39, the condenser mic bias network.
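As a quick sanity check on the RC values mentioned above, the -3 dB corner of a first-order RC low-pass is f_c = 1/(2πRC):

```python
from math import pi

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner frequency of a first-order RC filter."""
    return 1.0 / (2.0 * pi * r_ohms * c_farads)

# 1500 ohm / 1 nF ahead of the ADC: roughly 106 kHz, well above audio,
# so it knocks down switcher hiss without touching the mic signal.
print(round(rc_cutoff_hz(1500, 1e-9)))    # ≈ 106103 Hz

# 100 ohm / 220 uF on the mic bias: roughly 7 Hz, comfortably below
# the 50 Hz frame-rate noise it needs to filter out.
print(round(rc_cutoff_hz(100, 220e-6)))   # ≈ 7 Hz
```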

To further improve noise, Rick and I are also working on changes to the PCB layout. My analog skills are growing and I am now working methodically. It’s nice to learn some new skills, useful for other radio projects as well. Satisfying.

Testing Between Two Radios

Next step is to see how the SM1000 performs over real radios. In particular how does it go with nearby RF energy? Does the uC reset itself, is there RF noise getting into the sensitive microphone amplifier and causing runaway feedback in analog mode? Also user set up issues: how easy is it to interface to the mic input of a radio? Is the level reaching the radio mic input OK?

The first step was to connect the SM1000 to a FT817 as the transmit radio, then to a IC7200 via 100dB of attenuation. The IC7200 receive audio was connected to a laptop running FreeDV. The FT817 was set to 0.5W output so I wouldn’t let the smoke out of my little in-line attenuators. This worked pretty well, and I obtained SNRs of up to 20dB from FreeDV. It’s always a little lower through real radios, but that’s acceptable. The PTT control from the SM1000 worked well. It was at this point that I heard some noises using the SM1000 in “analog” mode that I chased down as described above.
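As a rough check on the levels in that setup, 0.5 W from the FT817 is about +27 dBm, so after 100 dB of attenuation the IC7200 sees about -73 dBm, which is right around an S9 signal on HF (assuming the attenuators total exactly 100 dB):

```python
from math import log10

def watts_to_dbm(p_watts):
    """Convert power in watts to dBm (dB relative to 1 mW)."""
    return 10.0 * log10(p_watts / 1e-3)

tx_dbm = watts_to_dbm(0.5)   # FT817 at 0.5 W output
rx_dbm = tx_dbm - 100        # after 100 dB of in-line attenuation
print(round(tx_dbm), round(rx_dbm))  # ≈ 27 -73
```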

At the IC7200 output I recorded this file demonstrating audio using the stock FT817 MH31 microphone, the SM1000 used in analog mode, and the SM1000 in DV mode. The audio levels are unequal (MH31 is louder), but I am satisfied there are no strange noises in the SM1000 audio (especially in analog mode) when compared to the MH31 microphone. The levels can be easily tweaked.

Then I swapped the configuration to use the IC7200 as the transmitter. This has up to 100W PEP output, so I connected it to an end fed dipole, and used the FT817 with the (non-resonant) VHF antenna as the receiver. It took me a while to get the basic radio configuration working. Even with the stock IC7200 mic I could hear all sorts of strange noises in the receive audio due to the proximity of the two radios. Separating them (walking up the street with the FT817) or winding the RF gain all the way down helped.

However the FreeDV SNR was quite low, a maximum of 15dB. I spent some time trying to work out why but didn’t get to the bottom of it. I suspect there is some transmit pass-band filtering in the IC7200, making some FDMDV carriers a few dB lower than others. Note the x-shaped scatter diagram and sloped spectrum below:

However the main purpose of these tests was to see how the SM1000 handled high RF fields. So I decided to move on.

I tested a bunch of different combinations, all with good results:

  • IC7200 with stock HM36 mic, SM1000 in analog mode, SM1000 in DV mode (high and low drive)
  • Radios tuned to 7.05, 14.235 and 28.5 MHz.
  • Tested with IC7200 and SM1000 running from the same 12V battery (breaking transformer isolation).
  • Had a 1m headphone cable plugged into the SM1000 act as an additional “antenna”.
  • Rigged up an adaptor to plug the FT817 MH31 mic into the CN5 “ext mic” connector on the SM1000. Total of 1.5m in mic lead, so plenty of opportunity for RF pick up.
  • Running full power into low and 3:1 SWR loads. (Matt, VK5ZM suggested high SWR loads is a harsh RF environment).

Here are some samples, SM1000 analog, stock IC7200 mic, SM1000 DV low drive, SM1000 high drive. There are some funny noises on the analog and stock mic samples due to the proximity of the rx to the tx, but they are consistent across both samples. No evidence of runaway RF feedback or obvious strange noises. Once again the DV level is a bit lower. All the nasty HF channel noise is gone too!

Change Control

Rick and I are coordinating our work with a change log text file that is under SVN version control. As I perform tests and make changes to the SM1000, I record them in the change log. Rick then works from this document to modify the schematic and PCB, making notes on the change log. I can then review his notes against the latest schematic and PCB files. The change log, combined with email and occasional Skype calls, is working really well, despite us being half way around the planet from each other.

SM1000 Enclosure

One open issue for me is what enclosure we provide for the Beta units. I’ve spoken to a few people about this, and am open to suggestions from you, dear reader. Please comment below on your needs or ideas for a SM1000 enclosure. My requirements are:

  1. Holes for loudspeaker, PTT switch, many connectors.
  2. Support operation in “hand held” or “small box next to the radio” form
  3. Be reasonably priced, quick to produce for the Qty 100 beta run.

It’s a little over two months since I started working on the SM1000 prototype, and just 6 months since Rick and I started the project. I’m very pleased with progress. We are on track to meet our goal of having Betas available in 2014. I’ve kicked off the manufacturing process with my good friend Edwin from Dragino in China, ordering parts and working together with Rick on the BOM.


Sociological ImagesSat Stat: Staggering Graph Reveals the Cooptation of Economic Recoveries by the Rich

The graph below represents the share of the income growth that went to the richest 10% of Americans in ten different economic recoveries.  The chart comes from economist Pavlina Tcherneva.


It’s quite clear from the far right blue and red columns that the top 10% have captured 100% of the income gains in the most recent economic “recovery,” while the bottom 90% have seen a decline in incomes even post-recession.

It’s also quite clear that the economic benefits of recoveries haven’t always gone to the rich, but that they have increasingly done so over time. None of this is inevitable; change our economic policies, change the numbers.

Via Andrew Sullivan.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet DebianDebConf team: Wrapping up DebConf14 (Posted by Paul Wise, Donald Norwood)

The annual Debian developer meeting took place in Portland, Oregon, from 23 to 31 August 2014. DebConf14 attendees participated in talks, discussions, workshops and programming sessions. Video teams captured many of the main talks and discussions, both for streaming to remote attendees and for the Debian video archive.

Beyond the videos, presentations, and handouts, coverage also came from the attendees themselves in blogs, posts, and project updates. We’ve gathered a few articles for your reading pleasure:

Gregor Herrmann and a few members of the Debian Perl group had an informal unofficial pkg-perl micro-sprint and were very productive.

Vincent Sanders shared an inspired gift in the form of a plaque given to Russ Allbery in thanks for his tireless work of keeping sanity in the Debian mailing lists. Pictures of the plaque and design scheme are linked in the post. Vincent also shared his experiences of the conference and hopes the organisers have recovered.

Noah Meyerhans’ adventuring to DebConf by train (Inter)netted some interesting IPv6 data for future road and rail warriors.

Hideki Yamane sent a gentle reminder for English speakers to speak more slowly.

Daniel Pocock posted of GSoC talks at DebConf14, highlights include the Java Project Dependency Builder and the WebRTC JSCommunicator.

Thomas Goirand gives us some insight into a working task list of accomplishments and projects he was able to complete at DebConf14, from the OpenStack discussion to tasksel talks, and completion of some things started last year at DebConf13.

Antonio Terceiro blogged about debci and the Debian Continuous Integration project, Ruby, Redmine, and Noosfero. His post also shares the atmosphere of being able to interact directly with peers once a year.

Stefano Zacchiroli blogged about a talk he did on debsources which now has its own HACKING file.

Juliana Louback penned: DebConf 2014 and How I Became a Debian Contributor.

Elizabeth Krumbach Joseph’s in-depth summary of DebConf14 is a great read. She discussed Debian Validation & CI, debci and the Continuous Integration project, Automated Validation in Debian using LAVA, and Outsourcing webapp maintenance.

Lucas Nussbaum by way of a blog post releases the very first version of Debian Trivia modelled after the TCP/IP Drinking Game.

François Marier’s shares additional information and further discussion on Outsourcing your webapp maintenance to Debian.

Joachim Breitner gave a talk on Haskell and Debian, created a new tool for binNMUs for Haskell packages which runs via cron job. The output is available for Haskell and for OCaml, and he still had a small amount of time to go dancing.

Jaldhar Harshad Vyas was not able to attend DebConf this year, but he did tune in to the videos made available by the video team and gives an insightful viewpoint to what was being seen.

Jérémy Bobbio posted about Reproducible builds in Debian in his recap of DebConf14. One of the topics at hand involved defining a canonical path where packages must be built and a BOF discussion on reproducible builds from where the conversation moved to discussions in both Octave and Groff. New helpers dh_fixmtimes and dh_genbuildinfo were added to BTS. The .buildinfo format has been specified on the wiki and reviewed. Lots of work is being done in the project, interested parties can help with the TODO list or join the new IRC channel #debian-reproducible on

Steve McIntyre posted a Summary from the d-i / debian-cd BoF at DC14, with some of the session video available online. The current jessie D-I needs some help with testing on less common architectures and languages, and release scheduling could be improved. Future plans: switching to a GUI by default for jessie, a default desktop and desktop choice, artwork, bug fixes and new architecture support. debian-cd: things are working well. Improvement discussions covered selecting which images to make (i.e. netinst, DVD, et al.), work in progress on http download support for debian-cd, and regular live test builds. Other discussions and questions revolved around which ARM platforms to support, specially-designed images, multi-arch CDs, and cloud-init based images. There is also a call for help, as the team needs help with testing, bug-handling, and translations.

Holger Levsen reports on the feedback from his LTS talk at DebConf14. LTS has been well received, fits a demand, and people are expecting it to continue; however, this is not without a few issues, as Holger explains in greater detail: the lacking gatekeeper mechanisms, and how contributions are needed in everything from finance to uploads. In other news, the security-tracker is now fixed to know about oldstable. Time is short for that fix, as once jessie is released the tracker will need to support stable, oldstable (which will be wheezy), and oldoldstable.

Jonathan McDowell’s summary of DebConf14 includes a fair perspective of the host city and the benefits of planning a good DebConf location. He also talks about the need for facetime in the Debian project, as it correlates with and improves everyone’s ability to work together. DebConf14 also provided the chance to set up a hard time frame for removing older 1024 bit keys from Debian keyrings.

Steve McIntyre posted a Summary from the “State of the ARM” BoF at DebConf14 with updates on the 3 current ports armel, armhf and arm64. armel, which targets the ARM EABI soft-float ARMv4t processor, may eventually be going away, while armhf, which targets the ARM EABI hard-float ARMv7, is doing well as the cross-distro standard. Debian has moved to a single armmp kernel flavour using Device Tree Blobs and should be able to run on a large range of ARMv7 hardware. The arm64 port recently entered the main archive and it is hoped to release with jessie with 2 official builds hosted at ARM. There is talk of laptop development with an arm64 CPU. Buildds and hardware are mentioned with acknowledgements for donated new machines, Banana Pi boards, and software by way of ARM’s DS-5 Development Studio - free for all Debian Developers. Help is needed! Join #debian-arm on and/or the debian-arm mailing list. There is an upcoming Mini-DebConf in November 2014 hosted by ARM in Cambridge, UK.

Tianon Gravi posted about the atmosphere and contrast between an average conference and a DebConf.

Joseph Bisch posted about meeting his GSOC mentors, attending and contributing to a keysigning event, and doing some work on debmetrics. Debmetrics provides a uniform interface for adding, updating, and viewing various metrics concerning Debian.

Harlan Lieberman-Berg’s DebConf Retrospective shared the feel of DebConf, and detailed some of the work on debugging a build failure, work with the pkg-perl team on a few uploads, and work on a javascript slowdown issue on codeeditor.

Ana Guerrero López reflected on Ten years contributing to Debian.

LongNowAdam Steltzner: Beyond Mars, Earth— A Seminar Flashback

In October 02013 NASA engineer Adam Steltzner spoke for Long Now about landing Curiosity on Mars. In Beyond Mars, Earth, Steltzner gives an insider’s view of previous Mars missions, leading up to his team’s incredible feat of landing the Curiosity rover safely on the planet’s surface. More broadly he ponders why humans have the need to explore and where we may go next.

Video of the 12 most recent Seminars is free for all to view. Beyond Mars, Earth is a recent SALT talk, free for public viewing until September 02014. Listen to SALT audio free on our Seminar pages and via podcast. Long Now members can see all Seminar videos in HD.


This month our Seminar About Long-term Thinking (SALT) “flashbacks” highlight Space-themed talks, as we lead up to Ariel Waldman’s The Future of Human Space Flight at The Interval, September 30th, 02014.

From Stewart Brand’s summary of Beyond Mars, Earth (in full here):

“With this kind of exploration,“ Steltzner said, “we’re really asking questions about ourselves. How great is our reach? How grand are we? Exploration of this kind is not practical, but it is essential.” He quoted Theodore Roosevelt: “Far better it is to dare mighty things, to win glorious triumphs even though checkered by failure, than to rank with those timid spirits who neither enjoy nor suffer much because they live in the gray twilight that knows neither victory nor defeat.”

After the epic subjects of his talk, Steltzner’s Q&A with Stewart Brand gets quite personal. A late-comer to science and engineering, one night he looked up at the stars and asked himself a question, and that led him to a whole new life.

Adam Steltzner is an engineer at NASA’s Jet Propulsion Laboratory (JPL) who has worked on the Galileo, Cassini, and Mars Pathfinder missions as well as the Shuttle-Mir Program. He was the lead engineer of the Curiosity rover’s “Entry, Descent, and Landing” phase.

Adam Steltzner at Long Now

The Seminars About Long-term Thinking series began in 02003 and is presented each month live in San Francisco. It is curated and hosted by Long Now’s President Stewart Brand. Seminar audio is available to all via podcast.

Everyone can watch full video of the last 12 Long Now Seminars (including this Seminar video until late June 02014). Long Now members can watch the full ten years of Seminars in HD. Membership levels start at $8/month and include lots of benefits.

You can join Long Now here.

Cory DoctorowMy In Real Life book-tour!

I'm heading out on tour with my new graphic novel In Real Life, adapted by Jen Wang from my story Anda's Game. I hope you'll come out and see us! We'll be in NYC, Princeton, LA, San Francisco, Seattle, Austin, Minneapolis and Chicago! (I'm also touring my new nonfiction book, Information Doesn't Want to Be Free, right after -- here's the whole schedule).

Planet DebianRitesh Raj Sarraf: Laptop Mode Tools 1.66

I am pleased to announce the release of Laptop Mode Tools at version 1.66.

This release fixes an important bug in the way Laptop Mode Tools is invoked: now, when users disable it in the config file, the tool will actually be disabled. Thanks to bendlas@github for narrowing it down. The GUI configuration tool has been improved, thanks to Juan. And there is a new power saving module for users with ATI Radeon cards. Thanks to M. Ziebell for submitting the patch.

Laptop Mode Tools development can be tracked @ GitHub




Planet DebianNiels Thykier: Lintian – Upcoming API making it easier to write correct and safe code

The upcoming version of Lintian will feature a new API that attempts to promote safer code. It is hardly a “ground-breaking discovery”, just a much needed feature.

The primary reason for this API is that writing safe and correct code is complicated enough that people get it wrong (including yours truly on occasion).  The second reason is that I feel it is a waste having to repeat myself when reviewing patches for Lintian.

Fortunately, the issues these mistakes create are usually minor information leaks, often with no chance of exploiting them remotely without the owner reviewing the output first[0].

Part of the complexity of writing correct code originates from the fact that Lintian must assume Debian packages to be hostile until otherwise proven[1]. Consider a simplified case where we want to read a file (e.g. the copyright file):

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  # BAD: This is an example of doing it wrong
  open(my $fd, '<', $info->unpacked($filename));

This has two trivial vulnerabilities[2].

  1. Any part of the path (usr, usr/share, …) can be a symlink to “somewhere else”, like /.
    1. Problem: Access to potentially any file on the system with the credentials of the user running Lintian.  But even then, Lintian generally never writes to those files, and the user has to (usually manually) disclose the report before any information leak can be completed.
  2. The target path can point to a non-file.
    1. Problem: Minor inconvenience by DoS of Lintian.  Examples include a named pipe, where Lintian will get stuck until a signal kills it.

Of course, we can do this right[3]:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
use Lintian::Util qw(is_ancestor_of);
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $root = $info->unpacked;
  my $path = $info->unpacked($filename);
  if ( -f $path and is_ancestor_of($root, $path)) {
    open(my $fd, '<', $path);

Where “is_ancestor_of” is the only available utility to assist you currently.  It hides away some 10-12 lines of code to resolve the two paths and correctly assert that $root is (an ancestor of) $path.  Prior to Lintian 2.5.12, you would have to do that ancestor check by hand in each and every check[4].

In the new version, the correct code would look something like this:

package Lintian::cpy_check;
use strict; use warnings; use autodie;
sub run {
  my ($pkg, undef, $info) = @_;
  my $filename = "usr/share/doc/$pkg/copyright";
  my $path = $info->index_resolved_path($filename);
  if ($path and $path->is_open_ok) {
    my $fd = $path->open;

Now, you may wonder how that promotes safer code.  At first glance, the checking code is not a lot simpler than the previous “correct” example.  However, the new code has the advantage of being safer even if you forget the checks.  The reasons are:

  1. The return value is entirely based on the “file index” of the package (think: tar vtf data.tar.gz).  At no point does it use the file system to resolve the path.  Whether or not your malicious package triggers an undef warning based on the return value of index_resolved_path, it leaks nothing about the host machine.
    1. However, it does take safe symlinks into account and resolves them for you.  If you ask for ‘foo/bar’ and ‘foo’ is a symlink to ‘baz’ and ‘baz/bar’ exists in the package, you will get ‘baz/bar’.  If ‘baz/bar’ happens to be a symlink, then it is resolved as well.
    2. Bonus: You are much more likely to trigger the undef warning during regular testing, since it also happens if the file is simply missing.
  2. If you attempt to call “$path->open” without calling “$path->is_open_ok” first, Lintian can now validate the call for you and stop it on unsafe actions.

It also has the advantage of centralising the code for asserting safe access, so bugs in it only need to be fixed in one place.  Of course, it is still possible to write unsafe code.  But at least, the new API is safer by default and (hopefully) more convenient to use.
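For readers unfamiliar with the containment check that “is_ancestor_of” performs, the idea can be sketched in Python (a hypothetical analogue for illustration, not Lintian’s actual Perl implementation): resolve both paths through the file system, then verify the resolved target still lives under the resolved root before opening it.

```python
import os

def is_ancestor_of(root, path):
    """Return True if the fully resolved `path` stays inside the
    resolved `root` directory (i.e. no symlink escapes it)."""
    real_root = os.path.realpath(root)
    real_path = os.path.realpath(path)
    # Both sides are resolved first, so symlinks anywhere in the
    # path cannot smuggle the target outside the root.
    return os.path.commonpath([real_root, real_path]) == real_root

def safe_open(root, relative_name):
    """Open a file under `root` only if it is a regular file that does
    not escape the unpacked tree via symlinks; else return None."""
    candidate = os.path.join(root, relative_name)
    # The -f style check also rejects named pipes and other non-files.
    if os.path.isfile(candidate) and is_ancestor_of(root, candidate):
        return open(candidate, "rb")
    return None
```

The order matters in the real tool too: check first, then open, which is exactly what the new “$path->is_open_ok” / “$path->open” pairing enforces.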


[0] being the primary exception here.

[1] This is in contrast to e.g. piuparts, which very much trusts its input packages by handing the package root access (albeit chroot’ed, but still).

[2] And also a bug.  Not all binary packages have a copyright file – instead some will have a symlink to another package.

[3] The code is hand-typed into the blog without prior testing (not even compile testing it).  The code may be subject to typos, brown-paper-bag bugs etc. which are all disclaimed (of course).

[4] Fun fact, our documented example for doing it “correctly” prior to implementing is_ancestor_of was in fact not correct.  It used the root path in a regex (without quoting the root path) – fortunately, it just broke lintian when your TMPDIR / LINTIAN_LAB contained certain regex meta-characters (which is pretty rare).

Chaotic IdealismLove

Someone at Wrong Planet recently posed the question, "What is love?" Here's my answer.

Love is having among your top priorities the well-being and happiness of another living creature. It means that their happiness brings you satisfaction and that you act in ways that benefit them, and that even when you don't feel emotionally connected, you still persist in making decisions with their interests in mind.

I've thought about it some, and I think that by this definition it is actually possible to love people who never do anything but annoy you--or people you have never even met.

There are other words that can be used to describe related things--words like "passion", the dizzy-headed emotional and erotic connection to a new partner; "fondness", the warm feeling that one has for someone they feel comfortable with and consider a friend; or "duty" (to family, friends, or community), the decision to work to benefit someone else even when doing so brings you no satisfaction. There's the parent-child bond that includes a lot of protectiveness and a good deal of identifying the child as very nearly part of oneself, which is heavily rooted in the instinct to care for one's young.

But love, in general, is more than an emotion--if it were only an emotion, then all you'd have to do to stop loving someone is to get upset with them. Love is more of a long-term behavioral pattern, a conscious or subconscious decision, or a habit--perhaps even a way of life.

Planet Linux AustraliaGlen Turner: Ubiquitous survelliance, VPNs, and metadata

My apologies for the lack of diagrams accompanying this post. I had not realised when I selected LiveJournal to host my blog that it did not host images.

There have been a lot of remarks, not the least by a minister, about the use of VPNs to avoid metadata collection. Unfortunately VPNs cannot be presumed to be effective in avoiding metadata collection, because of the sheer ubiquity of surveillance and the traffic analysis opportunities that ubiquity makes possible.

By ‘metadata’ I mean the production of flow records, one record per flow, with no sampling or aggregation.

By ‘ubiquitous surveillance’ I mean the ability to completely tap and record the ingress and egress data of a computer. Furthermore, the sharing of that data with other nations, such as via the Five Eyes programme. It is a legal quirk in the US and in Australia that a national spy agency may not, without a warrant or reasonable cause, be able to analyse the data of its own citizens directly, but can obtain that same information via a Five Eyes partner without a warrant or reasonable cause.

By ‘VPN service’ I mean an overseas service which sells subscriber-based access to an OpenVPN or similar gateway. The subscriber runs an OpenVPN client, the service runs an OpenVPN server. The traffic from within that encrypted VPN tunnel is then NATed and sent out the Internet-facing interface of the OpenVPN server. The traffic from the subscriber appears to have the IP address of the VPN server; this makes VPN services popular for avoiding geo-locked Internet content from Hulu, Netflix and BBC iPlayer.

The theory is that this IP address misdirection also defeats ubiquitous surveillance. An agency producing metadata from the subscriber's traffic sees only communication with the VPN service. An agency tapping the subscriber's traffic sees only the IP address of the subscriber exchanging encrypted content with the IP address of the VPN service.

Unfortunately ubiquitous surveillance is ubiquitous: if a national spy agency cannot tap the traffic itself then it can ask its Five Eyes partner to do the tap. This means that the traffic of the VPN service is also tapped. One interface contains traffic with the VPN subscribers; the other interface contains unencrypted traffic from all subscribers to the Internet. Recall that the content of the traffic with the VPN subscribers is encrypted.

Can a national spy agency relate the unencrypted Internet traffic back to the subscriber's connections? If so then it can tap content and metadata as if the VPN service was not being used.

Unfortunately it is trivial for a national spy agency to do this. ‘Traffic analysis’ is the examination of patterns of traffic. TCP traffic is very vulnerable to traffic analysis:

  • Examining TCP traffic we see a very prominent pattern at the start of every connection. This ‘TCP three-way handshake’ sends one small packet all by itself for the entire round-trip time, receives one small packet all by itself for the entire round trip time, then sends one large packet. Within a small time window we will see the same pattern in VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

  • Examining TCP traffic we see a very prominent pattern when a connection encounters congestion. This ‘TCP multiplicative decrease’ halves the rate of transmission when the sender has not received an Acknowledgement packet within the expected time. Within a small time window we will see the same pattern in VPN service's encrypted traffic with the subscriber and in the VPN service's unencrypted Internet traffic.

These are only the gross features. It doesn't take much imagination to see that the interval between Acks can be used to group connections with the same round-trip time. Or that the HTTP GET and response is also prominent. Or that jittering in web streaming connections is prominent.

In short, by using traffic analysis a national spy agency can — with a high probability — assign the unencrypted traffic on the Internet interface to the encrypted traffic from the VPN subscriber. That is, given traffic with (Internet site IP address, VPN service Internet-facing IP address) and (VPN service subscriber-facing IP address, Subscriber IP address) then traffic analysis allows a national spy agency to reduce that to (Internet site IP address, Subscriber IP address). That is, the same result as if the VPN service was not used.
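To illustrate how little machinery such correlation needs, here is a hedged Python sketch (the flow names and timestamps are invented for illustration, not drawn from any real capture): each encrypted-side flow is matched to the clear-side flow whose packet timing coincides best within a small jitter window.

```python
from bisect import bisect_left

def coincidence_score(times_a, times_b, window=0.05):
    """Fraction of packets in times_a that have a packet in the
    sorted list times_b within `window` seconds -- a crude
    timing-correlation metric."""
    hits = 0
    for t in times_a:
        i = bisect_left(times_b, t)
        # Check the nearest neighbours on either side of t.
        for j in (i - 1, i):
            if 0 <= j < len(times_b) and abs(times_b[j] - t) <= window:
                hits += 1
                break
    return hits / max(len(times_a), 1)

def match_flows(encrypted, clear, window=0.05):
    """Map each encrypted-side flow id to the clear-side flow id
    whose packet timing lines up best.  Both arguments map a flow
    id to a sorted list of packet timestamps."""
    return {
        enc_id: max(clear, key=lambda cid: coincidence_score(ts, clear[cid], window))
        for enc_id, ts in encrypted.items()
    }
```

A real analyst would use packet sizes, round-trip-time clustering, and the handshake and multiplicative-decrease signatures described above as well; the point is that timing alone already separates flows surprisingly well.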

The only remaining question is whether the premier national spy agencies are actually exchanging tables of (datetime, VPN service subscriber-facing IP address, Internet site IP address, Subscriber IP address) to allow national taps of (datetime, VPN server IP address, Subscriber IP address) to be transformed into (datetime, Internet site IP address, Subscriber IP address). There is nothing technical to prevent them from doing so. Based upon the revealed behaviour of the Five Eyes agencies it is reasonable to expect that this is being done.

Planet Linux AustraliaTim Serong: Dear ASIO

Since the Senate passed legislation expanding your surveillance powers on Thursday night, you’ve copped an awful lot of flack on Twitter. Part of the problem, I think – aside from the legislation being far too broad – is that we don’t actually know who you are, or what exactly it is you get up to. You could be part of a spy novel, a movie or a decades-long series of cock ups. You could be script kiddies with a budget. Or you could be something else entirely.

At times like this I try to remind myself to assume good faith; to remember that most people are basically decent and are trying to live a good life. Some people are even trying to make the world a better place, whatever that might mean.

For those of you then who are decent people, and who are trying to keep Australia safe from whatever mysterious threats are out there that we don’t know about – all without wishing to impinge on or risk destroying the freedoms that we enjoy here – you have my thanks.

For those of you involved in the formulation of The National Security Legislation Amendment Bill 2014 (No 1) – you who might be reading this post as I type it, rather than after I publish it – I have tried very, very hard to imagine that you honestly believe you are making the world a better place. And maybe you do actually think that, but for my part I cannot see the powers granted as anything other than a direct assault on our democracy. As Glenn Greenwald pointed out, I should be more worried about bathroom accidents, restaurant meals and lightning strikes than terrorism. As a careful bath user with a strong stomach and a sturdy house to hide in, I think I’m fairly safe on that front. Frankly I’m more worried about climate change. Do you have anyone on staff who can investigate that threat to our national security?

Anyway, thanks for reading, and I’ll take it as a kindness if you don’t edit this post without asking first.


Tim Serong


LongNowDrew Endy Seminar Media

This lecture was presented as part of The Long Now Foundation’s monthly Seminars About Long-term Thinking.

The iGEM Revolution

Tuesday September 16, 02014 – San Francisco

Audio is up on the Endy Seminar page, or you can subscribe to our podcast.


Massively collaborative synthetic biology – a summary by Stewart Brand

Natural genomes are nearly impossible to figure out, Endy began, because they were evolved, not designed. Everything is context dependent, tangled, and often unique. So most biotech efforts become herculean. It cost $25 million to develop a way to biosynthesize the malaria drug artemisinin, for example. Yet the field has so much promise that most of what biotechnology can do hasn’t even been imagined yet.

How could the nearly-impossible be made easy? Could biology become programmable? Endy asked Lynn Conway, the legendary inventor of efficient chip design and manufacturing, how to proceed. She said, “Go meta.” If the recrafting of DNA is viewed from a meta perspective, the standard engineering cycle—Design, Build, Test, Design better, etc.—requires a framework of DNA Synthesis, using Standards, understood with Abstraction, leading to better Synthesis, etc.

“In 2003 at MIT,” Endy said, “we didn’t know how to teach it, but we thought that maybe working with students we could figure out how to learn it.” It would be learning-by-building. So began a student project to engineer a biological oscillator—a genetic blinker—which led next year to several teams creating new life forms, which led to the burgeoning iGEM phenomenon. Tom Knight came up with the idea of standard genetic parts, like Lego blocks, now called BioBricks. Randy Rettberg declared that cooperation had to be the essence of the work, both within teams (which would compete) and among all the participants to develop the vast collaborative enterprise that became the iGEM universe—students creating new BioBricks (now 10,000+) and meeting at the annual Jamboree in Boston (this year there are 2,500 competitors from 32 countries). “iGEM” stands for International Genetically Engineered Machine.

Playfulness helps, Endy said. Homo faber needs homo ludens—man-the-player makes a better man-the-maker. In 2009 ten teenagers with $25,000 in sixteen weeks developed the ability to create E. coli in a variety of colors. They called it E. chromi. What could you do with pigmented intestinal microbes? “The students were nerding out.” They talked to designers and came up with the idea of using colors in poop for diagnosis. By 2049, they proposed, there could be a “Scatalog” for color matching of various ailments such as colon cancer. “It would be more pleasant than colonoscopy.”

The rationale for BioBricks is that “standardization enables coordination of labor among parties and over time.” For the system to work well depends on total access to the tools. “I want free-to-use language for programming life,” said Endy. The stated goal of the iGEM revolutionaries is “to benefit all people and the planet.” After ten years there are now over 20,000 of them all over the world guiding the leading edges of biotechnology in that direction.

During the Q&A, Endy told a story from his graduate engineering seminar at Dartmouth. The students were excited that the famed engineer and scientist Arthur Kantrowitz was going to lead a session on sustainability. They were shocked when he told them, “‘Sustainability‘ is the most dangerous thing I’ve ever encountered. My job today is to explain two things to you. One, pessimism is a self-fulfilling prophecy. Two, optimism is a self-fulfilling prophecy.”

Subscribe to our Seminar email list for updates and summaries.

CryptogramFriday Squid Blogging: Squid Fishing Moves North in California

Warmer waters are moving squid fishing up the California coast.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Planet DebianRichard Hartmann: Release Critical Bug report for Week 39

The UDD bugs interface currently knows about the following release critical bugs:

  • In Total: 1393
    • Affecting Jessie: 408 That's the number we need to get down to zero before the release. They can be split in two big categories:
      • Affecting Jessie and unstable: 360 Those need someone to find a fix, or to finish the work to upload a fix to unstable:
        • 50 bugs are tagged 'patch'. Please help by reviewing the patches, and (if you are a DD) by uploading them.
        • 20 bugs are marked as done, but still affect unstable. This can happen due to missing builds on some architectures, for example. Help investigate!
        • 290 bugs are neither tagged patch, nor marked done. Help make a first step towards resolution!
      • Affecting Jessie only: 48 Those are already fixed in unstable, but the fix still needs to migrate to Jessie. You can help by submitting unblock requests for fixed packages, by investigating why packages do not migrate, or by reviewing submitted unblock requests.
        • 0 bugs are in packages that are unblocked by the release team.
        • 48 bugs are in packages that are not unblocked.

Graphical overview of bug stats thanks to azhag:

TEDInvasion of the golden mussel: A TED Fellow wields genes to protect the Amazon

TED Fellow Marcela Uliano da Silva in her lab.

Back in the ’90s, the golden mussel (Limnoperna fortunei) hitched a ride on ships traveling from Asia to South America. In the past decade and a half, the mussel has proliferated through South America’s river systems, destroying the native habitat and disrupting the operation of power plants and water treatment facilities. This invasive species now threatens the delicate ecosystem of the Amazon.

Computational biologist and TEDGlobal 2014 Fellow Marcela Uliano da Silva is working to put a halt to this. A native of Brazil, she’s sequencing the golden mussel’s genome for the first time; she tells the TED Blog how she hopes to use information gleaned from its molecular profile to stop current invasions and forecast future ones.

Tell us about the golden mussel — why does it pose a problem to South America?

The golden mussel originates from Asia, and arrived in South America in the early 1990s, carried in ballast water of ships. The first golden mussels were deposited in La Plata estuary in Argentina, and began to spread via the Parana River, going up all the way to the Pantanal wetlands. In these basins, golden mussels reproduced at high rates, fouling and clogging up the pipelines in power plants and water treatment facilities, as well as taking habitat away from native species. The mussels have made their way to Itaipu — one of the biggest power plants in the world — and they also do damage to many power plants in São Paulo and Minas Gerais in Brazil.

But the golden mussel doesn’t only spread via ballast water and larvae that swim upstream — the public play an active role in the invasion, too. There are several famous fishery festivals in the Brazilian wetlands, and people come by car, towing private boats from the south. When they put the boats in the water, they introduce golden mussels to new rivers. That was how it was introduced in the wetlands. That’s why awareness-raising and education are important: we need to avoid introducing mussels in new locations.

How do the mussels affect the native ecosystem?

Scientists are now calling the golden mussel an “ecosystem engineer,” because unfortunately, it changes environments very efficiently. One of its characteristics is that it reproduces a lot, creating huge populations. It’s a filter feeder, so when there are many mussels in one area, water transparency increases. Sunlight penetrates the water more deeply, changing phytoplankton levels and the balance of species living at the surface of the water. In some rivers, there is evidence showing that the fish population has increased 20% because they have a new food resource in the mussels. But when you increase the number of fish, it has a domino effect, as they are at the top of the food chain. Ultimately, when the mussel invades, it transforms the ecosystem, decreasing biodiversity and homogenizing the environment.

Map of the mussel migration. The golden mussel originated from Asia, and was introduced into the river basin systems of South America in the 1990s via ballast water. Today it has proliferated throughout the region's wetlands and is threatening to reach the Amazon. Image: Julia Back


Are golden mussels a threat to the Amazon?

Yes, definitely, and that is one of our main concerns and reasons for developing a genetics-based solution. The Amazon is the most biodiverse environment in the world. So if the golden mussel gets there, it would modify the environment as it has done in the other basins in South America, unbalancing the ecosystem of the Amazon. This would be a disaster.

What’s kept the mussel from reaching the Amazon up to this point?

Beyond the educational campaigns to prevent the spread of hitchhiking mussels, there is a Brazilian regulation in place called NORMAM 20, which requires commercial ships to deballast twice before entering the Amazon river basin.

The waters of the Amazon basin also vary in their physicochemical characteristics, and to some extent this has helped prevent the golden mussel from establishing itself there. However, the so-called “white” waters — which have nearly neutral pH and a high content of suspended mineral solids — would be friendly to the golden mussel. The waters of the Paraná and the Paraguay and Uruguay basins, where the golden mussels have already spread, have similar characteristics.

Tell us about the paper you recently published. Why is it important?

So my work is to identify the golden mussel’s genetic data, and use our understanding of the mussel’s molecular profile in order to keep it from harming the environment.

The paper, “Gene Discovery through Transcriptome Sequencing for the Invasive Mussel Limnoperna fortunei,” was the first-ever molecular profile for the golden mussel to be published. Until it came out, there had been virtually no genetic information available for the golden mussel. Our goal was to use the power of next-generation sequencing technology to describe as many genes as possible with a first transcriptome sequencing approach.

We now have around 90,000 expressed sequences for the golden mussel, which allowed us to raise some hypotheses about the relationship between phenotypic plasticity and the success of the mussel as an invasive species and efficient ecosystem engineer.

Close-up of the invasive golden mussel, which proliferates quickly and densely, clogging up power plants, waterworks and destroying ecosystems. Photo: Marcela Uliano da Silva


Can you give us an example of what you’re looking for?

For example, the genome of the oyster Crassostrea gigas, a bivalve like the golden mussel, offered us some insights about what to look for in the golden mussel transcriptome. This oyster has an expansion in some gene families that are expressed to keep the cell alive in moments of stress, like the Heat Shock Protein 70 (HSP70). It has 88 copies of this gene, while humans have only 17. This represents different adaptations of different animal groups, which have different life habits. Our first analyses showed that L. fortunei expresses at least 55 different types of HSP70 isoforms. We’re double-checking its profile now that we are sequencing the genome. But one of our main hypotheses is that the golden mussel likely has a robust genotype, differently expressed in diverse environments, that would give it an advantage in dealing with the challenges of establishing itself in a variety of environments, such as the different basins where it thrives in South America.

So how can you apply the mussel’s molecular profile to actively prevent a future invasion?

Our next step is to sequence the entire genome and understand the golden mussel’s gene expression, so that we can see the ways it copes with various stressors. For example, we plan to expose mussels to a range of temperatures to see which genes express. Studying these profiles, we can find key genes, which may be the future targets for a gene therapy approach.

Through a series of processes, we can develop RNA interference that will keep these genes from expressing — rendering the mussels sterile, for example. Another reason to develop this project is our interest in evolutionary studies. The more genomes we sequence, the better we will be able to understand relationships among the living species. This project is also part of an international collaboration called GIGA, which aims to keep track of groups working in genomics and transcriptomics of invertebrate species around the world.

What if you end up killing off the golden mussel all over the world, even in places where it’s not invasive?

This is a good question. Risk assessment studies have to be done before applying such an approach in the wild, no doubt. Each invaded location is different, and we have to do a proper prior study. Nevertheless, new locations and controlled locations, like power plants, could benefit deeply from a gene therapy approach. The idea is to send a vector, like a virus, in controlled concentrations, so it would get to distant water bodies. This vector would contain a specific target for a specific golden mussel gene. This is much more efficient, and far less harmful, than chemicals, which kill not only mussels but all the biodiversity around them. Gene therapy using a vector to carry the RNA interference is not like creating a transgenic mussel. We won’t introduce any new genes into the golden mussel genome, so the environmental risk is small.

Is the mussel a threat anywhere else?

Yes. Apart from the impacts to the most biodiverse ecosystem in the world, if the golden mussel crosses the Americas and gets to the United States, ecologists predict it would have a greater impact than the zebra mussel (Dreissena polymorpha), a well-known invasive species in the U.S. and Europe that’s responsible for millions of dollars’ worth of damage.

What else do you need to move forward with this work?

We are moving forward amazingly well, considering the number of people involved in this project. But we still lack ideal computational power. My co-advisor, Dr. Francisco Prosdocimi, has a computer cluster that we use to assemble the genome, but it is a very busy machine, with a lot of projects running on it. Our current solution is to do what many other scientists are doing: use the cloud to compute our work. But that also costs money and time. It seems to me that the future of computer processing is crowdsourcing.

How would that work?

Nowadays we all have powerful computers in our pockets — our smartphones. At the moment, while people are sleeping and phones are recharging, all that processing power is wasted. Why not use this computing power to help scientific research? There are already some scientific groups that have developed apps to harness this idle processing power. One approach, created by a group of researchers in Canada, is a computational game. While remote users are playing the game, they are helping to align gene sequences. So it’s kind of like Tetris.

As a native Brazilian welcoming TED’s first major conference to Brazil, what would you like the TED community to know about your country?

When I think about Brazil, I have mixed feelings. The first feeling is of great love: Brazilians are so tolerant and friendly with people from all over the world, and that makes me feel very proud of our people. I’m also happy that Brazil’s major problem — huge social inequality — has decreased vastly these last 10 years or so. The hurtful part, nonetheless, is that there are still a lot of people without opportunities and living in precarious conditions, which I feel is unacceptable for a rich country like Brazil. And it’s really time for us to find a way to grow our economy in balance and accordance with preserving ecosystems. This is an urgent matter that has been neglected, in my opinion.

All that said, I can really say that TED attendees should look forward to the experience. Brazil, with all its richness and diversity, won’t disappoint you!

Uliano-Silva collects mussels at Jacuí River, in the city of Porto Alegre, southern Brazil. Photo: Rogério da Silva


Cryptogram: Medical Records Theft and Fraud

There's a Reuters article on new types of fraud using stolen medical records. I don't know how much of this is real and how much is hype, but I'm certain that criminals are looking for new ways to monetize stolen data.

Planet Debian: Steve Kemp: Next week I shall be mostly in Kraków

Next week my wife and I shall be mostly visiting Poland, and spending a week in Kraków.

It has been a while since I've had a non-Helsinki-based holiday, so I'm looking forward to the trip.

In other news I've been rationalising DNS entries and domain names recently, all being well this zone should be served by Amazon shortly, subject to the usual combination of TTLs and resolution-puns.

Krebs on Security: Signature Systems Breach Expands

Signature Systems Inc., the point-of-sale vendor blamed for a credit and debit card breach involving some 216 Jimmy John’s sandwich shop locations, now says the breach also may have jeopardized customer card numbers at nearly 100 other independent restaurants across the country that use its products.

Earlier this week, Champaign, Ill.-based Jimmy John’s confirmed suspicions first raised by this author on July 31, 2014: That hackers had installed card-stealing malware on cash registers at some of its store locations. Jimmy John’s said the intrusion — which lasted from June 16, 2014 to Sept. 5, 2014 — occurred when hackers compromised the username and password needed to remotely administer point-of-sale systems at 216 stores.

Those point-of-sale systems were produced by Newtown, Pa., based payment vendor Signature Systems. In a statement issued in the last 24 hours, Signature Systems released more information about the break-in, as well as a list of nearly 100 other stores — mostly small mom-and-pop eateries and pizza shops — that were compromised in the same attack.

“We have determined that an unauthorized person gained access to a user name and password that Signature Systems used to remotely access POS systems,” the company wrote. “The unauthorized person used that access to install malware designed to capture payment card data from cards that were swiped through terminals in certain restaurants. The malware was capable of capturing the cardholder’s name, card number, expiration date, and verification code from the magnetic stripe of the card.”

Meanwhile, there are questions about whether Signature’s core product — PDQ POS — met even the most basic security requirements set forth by the PCI Security Standards Council for point-of-sale payment systems. According to the council’s records, PDQ POS was not approved for new installations after Oct. 28, 2013. As a result, any Jimmy John’s stores and other affected restaurants that installed PDQ’s product after the Oct. 28, 2013 sunset date could be facing fines and other penalties.

This snapshot from the PCI Council shows that PDQ POS was not approved for new installations after Oct. 28, 2013.


What’s more, the company that performed the security audit on PDQ — a now-defunct firm called Chief Security Officers — appears to be the only qualified security assessment firm to have had their certification authority revoked (PDF) by the PCI Security Standards Council.

In response to inquiry from KrebsOnSecurity, Jimmy John’s noted that of the 216 impacted stores, 13 were opened after October 28, 2013.

“We understood, from our point of sale technology vendor, that payment systems installed in those stores, as with all locations, were PCI compliant,” Jimmy John’s said in a statement. “We are working independently, and moving as quickly as possible, to install PCI compliant stand-alone payment terminals in those 13 stores.  This is being overseen by Jimmy John’s director of information technology, who will confirm completion of this work directly with each location.  As part of our broader response to the security incident, action has already been taken in those 13 stores, as well as the other impacted locations, to remove malware, and to install and assure the use of dual-factor authentication for remote access and encrypted swipe technology for store purchases.  In addition, the systems used in all of our stores are scanned every day for malware.”

For its part, Signature Systems says it has been developing a new payment application that features card readers that utilize point-to-point encryption capable of blocking point-of-sale malware.

TED: Further reading and watching on Brazil, to get you in the spirit for TEDGlobal

A view of Rio de Janeiro, where TEDGlobal 2014 will take place. Photo: Thinkstock


TED has traveled to Tanzania, to Canada and to India. We’ve been to Germany, to Qatar and to assorted places throughout the United Kingdom and the state of California in the United States. But in a few weeks, we are headed for the first time to South America. TEDGlobal 2014 kicks off on Monday, October 6, in Rio de Janeiro, Brazil. 

So why Brazil? And why now? Curator Bruno Giussani explains. 

“We wanted to go to South America because it’s one of the regions of the world where we’ve never had an official TED event, but where TEDx and the Open Translation Project are really taking off in a significant way,” he says. “We went and visited several cities—not only in Brazil, but in other countries too. We looked at many different options—big cities, small cities, capital cities, far-off cities. In the end, Rio turned out to be not only a very attractive city, but also a vibrant hub of creativity and innovation. It’s a place where there is big thinking on the social and political issues affecting the world.”

Below, some further reading — and watching — to get you excited for this trip to Brazil. Not able to be there in person? Get TED Live to watch from afar, or follow our coverage here on the TED Blog.

Some stories to read … 

Why the World Cup was such a pivotal moment for Brazil: Misha Glenny on Brazil’s colonial heritage and its present realities »

How to survive in the urban jungle of São Paulo, from an insider »

Startups, street schools, and micro revolutions: The world’s most optimistic millennials call Brazil home »

A TED Fellow leads the charge against Brazil’s $2-billion-a-year illegal wildlife trade »

An insider’s tour of Brazil’s creativity and innovation culture »

A TED Fellow who’s working hard to keep an invasive species of mussels from entering the Amazon »

What you need to know about Neymar da Silva Santos Júnior, and racism at the World Cup »

Some talks to watch…

The 4 commandments of cities from Eduardo Paes, the mayor of Rio de Janeiro »

How Jaime Lerner reinvented urban space in his hometown—Curitiba, Brazil— and changed the way city planners think in the process »

Charles Leadbeater shares education innovations developed in Monkey Hill, one of Rio’s favelas »

Speaking of Rio’s favelas, artist JR on why he decided to travel there and turn the landscape into a canvas »

Brazilian social documentary photographer Sebastião Salgado shows the silent drama of his images »

And check out the playlist  “South America!” for talks on the creativity flowing through the continent, and the unique challenges it is facing »

And naturally, we’ve got lots for you to read on what to expect from TEDGlobal 2014 in general… 

How we chose the themes and speakers for TEDGlobal 2014 »

The full speaker lineup, arranged by discipline »

The favelas of Rio. Photo: iStockphoto


Another view of Rio. Photo: Thinkstock


Sociological Images: Happy Birthday, Gloria Anzaldúa!

We are happy to honor Gloria Anzaldúa.  Anzaldúa was a lesbian Chicana feminist of European and American Indian descent, born in Texas to parents of Mexican lineage.  This collection of identities informed her social theory and she is credited with articulating the importance of intersectionality, or the way in which multiple identities in a single individual inflect each other in powerful ways.  Two of her most famous works include This Bridge Called My Back, with Cherríe Moraga (1981) and Borderlands/La Frontera: The New Mestiza (1987).



Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at

Sociological Images: Hunters and Their Kills: Destroying or Taming Nature?

Flashback Friday.


Flipping through Safari magazine, I noticed something that struck me as odd. Because the magazine is aimed, primarily, at selling hunting safaris, the vast majority of the pictures were of people posing with their kills.

What I noticed was that, in nearly 100 percent of the pictures, the animals were posed so as to look alive: resting or sleeping.  Most often, the animal was on its belly with its legs folded naturally beneath it and, even, its head held or propped up.  The hunters posed behind the animal, often with a hand on it, as if they were simply petting the animal.   Further, there was almost never any evidence of the wound: no holes, no blood (though sometimes the weapon is included in the picture).  It is almost as if the people are at a petting zoo and the animal is blissfully enjoying the human attention.


Imagine for a minute how challenging this must be to pull off.  If you shoot an animal, it likely falls into any number of positions, many of which make it look like it’s just been shot (legs akimbo, head at an awkward angle, etc).  The hunter and his or her companions must have to wrangle this 500, 1,000, 1,500 pound dead weight into the position in which it appears in the images.

Why do they do it?

I don’t know. But maybe it has something to do with the relationship to nature that hunter culture endorses.  Instead of a destructive, violent relationship to nature that would be represented by picturing animals in their death poses, these pictures suggest a custodial relationship in which humans take care of or chaperone a nature to which they feel tenderly.

That is, they don’t destroy nature with their guns, they tame it.


Originally posted in 2009.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Planet Linux Australia: Linux Users of Victoria (LUV) Announce: LUV Main October 2014 Meeting: MySQL + CCNx

Oct 7 2014 19:00
Oct 7 2014 21:00

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

Stewart Smith, A History of MySQL

Hank, Content-Centric Networking

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus Parkville Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (Get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


Planet SE Linux: Dan Walsh: A follow up to the Bash Exploit and SELinux.

One of the advantages of a remote exploit is being able to set up and launch attacks on other machines.

I wondered if it would be possible to set up a botnet attack using the remote attack on an Apache server with the bash exploit.

Looking at my rawhide machine's policy

sesearch -A -s httpd_sys_script_t -p name_connect -C | grep -v ^D
Found 24 semantic av rules:
   allow nsswitch_domain dns_port_t : tcp_socket { recv_msg send_msg name_connect } ;
   allow nsswitch_domain dnssec_port_t : tcp_socket name_connect ;
ET allow nsswitch_domain ldap_port_t : tcp_socket { recv_msg send_msg name_connect } ; [ authlogin_nsswitch_use_ldap ]

The Apache script would only be allowed to connect to/attack a DNS server and an LDAP server. It would not be allowed to become a spam bot (no connection to mail ports) or even to attack other web services.
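To pull just the reachable port types out of output like that, one can filter the rule lines with awk (the file name and this particular filter are my own sketch, not from the original post):

```shell
# Save the allow rules shown above, then extract every *_port_t target type.
cat <<'EOF' > rules.txt
   allow nsswitch_domain dns_port_t : tcp_socket { recv_msg send_msg name_connect } ;
   allow nsswitch_domain dnssec_port_t : tcp_socket name_connect ;
ET allow nsswitch_domain ldap_port_t : tcp_socket { recv_msg send_msg name_connect } ; [ authlogin_nsswitch_use_ldap ]
EOF

# Print each field ending in _port_t, deduplicated: the only ports reachable.
awk '{for (i = 1; i <= NF; i++) if ($i ~ /_port_t$/) print $i}' rules.txt | sort -u
```

On a live system the same list comes straight from `sesearch -A -s httpd_sys_script_t -p name_connect` piped into the awk filter.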

Could an attacker leave a back door to be later connected to even after the bash exploit is fixed?

# sesearch -A -s httpd_sys_script_t -p name_bind -C | grep -v ^D

Nope!  On my box the httpd_sys_script_t process is not allowed to listen on any network ports.

I guess the crackers will just have to find a machine with SELinux disabled.

Planet Debian: Jakub Wilk: Pet peeves: debhelper build-dependencies (redux)

$ zcat Sources.gz | grep -o -E 'debhelper [(]>= 9[.][0-9]{,7}([^0-9)][^)]*)?[)]' | sort | uniq -c | sort -rn
    338 debhelper (>= 9.0.0)
     70 debhelper (>= 9.0)
     18 debhelper (>= 9.0.0~)
     10 debhelper (>= 9.0~)
      2 debhelper (>= 9.2)
      1 debhelper (>= 9.2~)
      1 debhelper (>= 9.0.50~)

Is it a way to protest against debhelper's current version scheme?
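For what it's worth, the `{,7}` bound in the pattern is what keeps debhelper's real date-based versions (of the form 9.YYYYMMDD, eight digits) out of the count, while catching the short dotted ones. A quick check of an equivalent pattern in Python (sample strings are invented; `{0,7}` is the same as `{,7}`, just spelled out):

```python
import re

# Equivalent of the grep -E pattern above; {0,7} == {,7}
pattern = re.compile(r'debhelper [(]>= 9[.][0-9]{0,7}([^0-9)][^)]*)?[)]')

samples = [
    "debhelper (>= 9.0.0)",      # counted by the pipeline
    "debhelper (>= 9.0~)",       # counted
    "debhelper (>= 9)",          # not counted: no dotted component
    "debhelper (>= 9.20120909)", # not counted: 8-digit date-based version
]
for s in samples:
    print(s, "->", bool(pattern.search(s)))
```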

Racialicious: Funeral For A Gladiator: The Racialicious Review of Scandal 4.1

By Arturo R. García

Aside from addressing many of the questions posed in last year’s finale, Scandal’s season premiere focused on two more: Who is Olivia Pope without her Associates? And does she even want to be Olivia Pope anymore?

SPOILERS under the cut

Given the circumstances, the elegiac tone permeating “Randy, Red, Superfreak and Julia” was appropriate, and possibly cathartic for the cast on some level. There was a “case of the week,” sort of — more on that in a second — but the centerpiece of the episode was the erstwhile Gladiators forcing themselves to reunite for Harrison’s funeral. Given Columbus Short’s real-life actions, this was not unexpected.


Jake (Scott Foley) knows what’s coming once Olivia (Kerry Washington) returns to D.C.

OK, so it wasn’t the first scene, but certainly the most important. The news that Harrison was indeed killed at the hands of B613 is enough to shake Olivia out of a life of island bliss as “Julia Baker” with poor genre-savvy Jake, who knows what will happen as soon as she gets a whiff of life in Washington again.

While Olivia is reacquainting herself with her old identity, most of the rest of her team has been trying to develop new ones: while Quinn seems to enjoy her post-Charlie life, Huck has resigned himself to life as “Randy the Smart Guy,” and Abby has found her attempt to be the Grant administration’s new Olivia (uh, in a professional capacity) blunted; not only is Abby not the new Olivia, she’s not even “Abby.”

The redefined balance between Abby and Olivia will no doubt be a focus of the upcoming year. As will the return of the Olitz teases, and the eventual return of Maya, and Charlie, and the question of what David will do with the scaaaary B613 files. But here’s the biggest question for the show after a slow-burn start: are viewers still interested in following this journey, or will How To Get Away With Murder steal Scandal’s thunder?

Abby (Darby Stanchfield) isn’t Olivia’s No. 2 anymore.

Scandalous Notions

  • A Republican arguing for equal pay? And this show’s not on Syfy?
  • So what does Jake do for a job now? It’s not like Rowan is going to hire him back … or is he?
  • Ominous Words, Part I: “Get some power and use it.” You sure you want to say that to an ex-partner who now has the goods on the whole government?
  • Ominous Words, Part II: “When you see her, you will tell me.” Who knows how many thinkpieces are about to be devoted to Mellie’s mental condition, but the old Whedonista in me heard this and thought: From beneath you, it devours. The implications look rather similar at this point.
  • Always good to see Portia de Rossi. Here’s to hoping “Lizzy Bear” and Olivia cross swords sooner rather than later.
  • As things stand, Olivia doesn’t get a very varied skillset with just Huck and Quinn back on the team. So let’s have some fun: which actors would you like to see emerge as Harrison and Abby 2.0?

The post Funeral For A Gladiator: The Racialicious Review of Scandal 4.1 appeared first on Racialicious - the intersection of race and pop culture.

Planet Linux Australia: Andrew Pollock: [life] Day 240: A day of perfect scheduling

Today was a perfectly lovely day, the schedule just flowed so nicely.

I started the day making a second batch of pizza sauce for the Riverfire party I'm hosting tomorrow night. Once that was finished, we walked around the corner to my dentist for a check up.

Zoe was perfect during the check up, she just sat in the corner of the room and watched and also played on her phone. The dentist commented on how well behaved she was. It blew my mind to run into Tanya there for the second time in a row. We're obviously on the same schedules, but it's just crazy to always wind up with back to back appointments.

After the appointment, we pretty much walked onto a bus to the city, so we could meet Nana for lunch. While we were on the bus, I called up and managed to get haircut appointments for both of us at 3pm. I figured we could make the return trip via CityCat, and the walk home would take us right past the hairdresser.

The bus got us in about 45 minutes early, so we headed up to the Museum of Brisbane in City Hall to see if we could get into the clock tower. We got really lucky, and managed to get onto the 11:45am tour.

Things have changed since I was a kid and my Nana used to take me up the tower. They no longer let you be up there when the bells chime, which is a shame, but apparently it's very detrimental to your hearing.

Zoe liked the view, and then we went back down to King George Square to wait for Nana.

We went to Jo Jo's for lunch, and they somehow managed to lose Zoe's and my lunch order, and after about 40 minutes of waiting, I chased it up, and it still took a while to sort out. Zoe was very patient waiting the whole time, despite being starving.

After lunch, she wanted to see Nana's work, so we went up there. On the way back out, she wanted to play with the Drovers statues on Ann Street for a bit. After that, we made our way to North Quay and got on a CityCat, which nicely got us to the hairdresser in time for our appointment.

After that, we walked home, and drove around to check out a few bulk food places that I've learned about from my Thermomix Consultant training. We checked out a couple in Woolloongabba, and they had some great stuff available to the public.

It was getting late, so after a failed attempt at finding one in West End, we returned home so I could put dinner on.

It was a smoothly flowing day today, and Zoe handled it so well.

Planet Debian: Holger Levsen: 20140925-reproducible-builds

Reproducible builds? I never did any - manually :)

I've never done a reproducible build attempt of any package, manually, ever. But what I've now done is set up reproducible builds on jenkins.d.n, which will build hundreds or thousands of packages, hopefully reproducibly, regularly in the future. Thanks to Lunar's and many other people's work, this was actually rather easy. If you want to do this manually, it should take you just a few minutes to set up a suitable build environment.

So three days ago, when I wasn't exactly bored, I decided that it was a good moment to implement some reproducible build jobs on jenkins.d.n, so I gave it a try; two hours later the basic implementation was working, and then it was an evening and a morning of fine-tuning until I was mostly satisfied. Since then there has been some polishing, but the basic setup is done and has been working since.

What's the result? One job, reproducible_setup, will just create a suitable environment for pbuilding reproducible packages, as documented so well on the Debian wiki. And as that job only runs for 3.5 minutes (to debootstrap from scratch), it's run daily.

And then there are currently 16 other jobs, which test reproducible builds in different areas: d-i, core, some six major desktops and some selected desktop applications, some security and privacy related packages, some build chains we have in Debian, libreoffice, and more. Most of these jobs run for several hours, but luckily not days. And they discover packages which still fail to build reproducibly, which has already caused some bugs to be filed, e.g. #762732 "libdebian-installer: please do not write timestamps in Doxygen generated documentation".

So this is the output from testing the reproducibility of all debian-installer packages: 72 packages were successfully built reproducibly, while 6 packages failed to do so. I was quite impressed by these numbers, as AFAIK no one has tried to build d-i reproducibly before.

72 packages successfully built reproducibly: userdevfs user-setup usb-discover udpkg tzsetup rootskel rootskel-gtk rescue preseed pkgsel partman-xfs partman-target partman-partitioning partman-nbd partman-multipath partman-md partman-lvm partman-jfs partman-iscsi partman-ext3 partman-efi partman-crypto partman-btrfs partman-basicmethods partman-basicfilesystems partman-base partman-auto partman-auto-raid partman-auto-lvm partman-auto-crypto partconf os-prober oldsys-preseed nobootloader network-console netcfg net-retriever mountmedia mklibs media-retriever mdcfg main-menu lvmcfg lowmem localechooser live-installer lilo-installer kickseed kernel-wedge kbd-chooser iso-scan installation-report installation-locale hw-detect grub-installer finish-install efi-reader dh-di debian-installer-utils debian-installer-netboot-images debian-installer-launcher clock-setup choose-mirror cdrom-retriever cdrom-detect cdrom-checker cdebconf-terminal cdebconf-entropy bterm-unifont base-installer apt-setup anna 
6 packages failed to build reproducibly: win32-loader libdebian-installer debootstrap console-setup cdebconf busybox 

What's also impressive: all packages for the newly introduced Cinnamon Desktop build reproducibly from the start!

The jenkins setup is configured via just three small files:

That's it, and that's enough to keep several cores busy for days. :-) But as each job only takes a few hours, each is scheduled twice a month, and more jobs and packages shall be added in the future (with some heuristics to schedule known-good packages less often...)

I guess it's an appropriate opportunity to say "many thanks to Profitbricks", who have been donating the powerful virtual machine this setup runs on since October 2012. I also want to say "many many thanks to Helmut" (Grohne), who has recently joined me in maintaining this jenkins setup. And then I'd like to thank "the KGB trio" (Gregor, Tincho and Dam!) for providing those KGB bots on IRC, which are very helpful for sending notifications to IRC channels, and last but not least thanks to everybody who has contributed so that reproducible builds got this far! Keep up the jolly good work!

And if you happen to know of failing packages not included in job-cfg/reproducible.yaml, I'd like to hear about them, so they'll get regularly tested and appear on the radar, until finally bugs are filed, fixed and migrated to stable. So one day all binary packages in Debian stable will be built reproducibly. An important step on this road is probably to have this defined as a release goal for Jessie+1. And then for Jessie+1, hopefully the first 10k packages will build reproducibly? Or a whopping 23k maybe? ;-) And maybe release Jessie+2 with 100%?!? We will see! Even Jessie already has quite a few packages (someone needs to count them...) which build reproducibly with just modified dpkg(-dev) and debhelper packages alone...

So let's fix all the bugs! That said, an easier start for most of you is probably the list of useful things you (yes, you!) can do! :-)

Oh, and last but surely not least in my book: many thanks too to the nice people hosting me so friendly in the last days! Keep on rockin'!

Planet DebianPetter Reinholdtsen: How to test Debian Edu Jessie despite some fatal problems with the installer

The Debian Edu / Skolelinux project provides a Linux solution for schools, including a powerful desktop with education software, a central server providing web pages, a user database, user home directories, central login and PXE boot of both diskless clients and the installer to install Debian Edu on machines with disks (and a few other services perhaps too small to mention here). We in the Debian Edu team are currently working on the Jessie based version, trying to get everything in shape before the freeze, to avoid having to maintain our own package repository in the future. The current status can be seen on the Debian wiki, and there is still heaps of work left. Some fatal problems break the installer and block testing, but it is possible to work around them and install anyway. Here is a recipe on how to get the installation limping along.

First, download the test ISO via ftp, http or rsync. The ISO build was broken on Tuesday, so we do not get a new ISO every 12 hours or so, but thankfully the ISO we already have can be installed with some tweaking.

When you get to the Debian Edu profile question, go to tty2 (use Alt-Ctrl-F2), run

nano /usr/bin/edu-eatmydata-install

and add 'exit 0' as the second line, disabling the eatmydata optimization. Return to the installation, select the profile you want and continue. Without this change, exim4-config will fail to install due to a known bug in eatmydata.
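If you prefer not to edit interactively, the same change can be scripted with sed. The snippet below demonstrates it on a stand-in copy of the script (the real file only exists inside the installer environment):

```shell
# Create a stand-in for /usr/bin/edu-eatmydata-install to demonstrate the
# edit; inside the installer you would operate on the real file instead.
printf '#!/bin/sh\necho "would run eatmydata install"\n' > /tmp/edu-eatmydata-install
# Insert "exit 0" after line 1, making the script a no-op from line 2 on.
sed -i '1a exit 0' /tmp/edu-eatmydata-install
sed -n 2p /tmp/edu-eatmydata-install   # -> exit 0
```

This is exactly the "add 'exit 0' as the second line" workaround, just non-interactive.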

When you get the grub question at the end, answer /dev/sda (or, if this does not work, figure out what your correct value would be). All my test machines need /dev/sda, so I have no advice if that does not fit your needs.

If you installed a profile including a graphical desktop, log in as root after the initial boot from hard drive, and install the education-desktop-XXX metapackage. XXX can be kde, gnome, lxde, xfce or mate. If you want several desktop options, install more than one metapackage. Once this is done, reboot and you should have a working graphical login screen. This workaround should no longer be needed once the education-tasks package version 1.801 enters testing in two days.

I believe the ISO build will start working again in two days, when the new tasksel package enters testing and Steve McIntyre gets a chance to update the debian-cd git repository. The eatmydata, grub and desktop issues are already fixed in unstable and testing, and should show up on the ISO as soon as the ISO build starts working again. Well, the eatmydata optimization is really just disabled; the proper fix requires an upload by the eatmydata maintainer applying the patch provided in bug #702711. The rest have proper fixes in unstable.

I hope this gets you going with the installation testing, as we are quickly running out of time trying to get our Jessie based installation ready before the distribution freeze in a month.

Worse Than FailureError'd: Free as in $5.29 Cheese Sticks

C.F. wrote, "I came really close to redeeming my 'free' cheese sticks coupon on Pizza Hut's site."


"Apparently, I'm going to get my server right before the end of WWI," wrote Mark.


"Condi notified me that there was an error with one of my actions," Julien writes, "I still don't think it was worth it."


"Barnes and Noble has secured my order by making it NaN," writes Kevin K.


"I know it's a new programming language and all, but I was surprised to see the language specification for Swift weigh in at 99 exabytes," writes Alastair.


"How much does it cost to visit Seattle? Well, according to the Barclay Card Travel Community, you're better off not asking," writes Vitaliy.


Roger A. writes, "Wait - so the next bus departs in 9 hours and how many minutes?!"


Johannes S. wrote, "I've heard that Americans aren't the best at geography, but I didn't think that it'd apply to their websites."


[Advertisement] Have you seen BuildMaster 4.3 yet? Lots of new features to make continuous delivery even easier; deploy builds from TeamCity (and other CI) to your own servers, the cloud, and more.

Planet Linux AustraliaMichael Still: The Decline and Fall of IBM: End of an American Icon?

ISBN: 0990444422
This book is quite readable, which surprises me given the relatively dry topic. Whilst obviously not everyone will agree with the author's thesis, it is clear that IBM hasn't been managed for long term success in a long time and there are a lot of very unhappy employees. The book is an interesting perspective on a complicated problem.

Tags for this post: book robert_cringely ibm corporate decline
Related posts: Phones; Your first computer?; Advertising inside the firewall; Corporate networks; Loyalty; Dead IBM DeveloperWorks

Planet DebianDirk Eddelbuettel: R and Docker

r and docker talk picture by @mediafly

Earlier this evening I gave a short talk about R and Docker at the September Meetup of the Docker Chicago group.

Thanks to Karl Grzeszczak for setting up the meeting, and for providing a pretty thorough intro talk regarding CoreOS and Docker.

My slides are now up on my presentations page.

Kelvin ThomsonMoreland and Moonee Valley Suffer Population and Planning Consequences, while Leafy Liberal Eastern Suburbs are Protected

It is an absolute disgrace that the Victorian Liberal Government is willing to protect its Liberal voting areas from the consequences of Melbourne’s rapid population growth (high-rise developments, multi-unit developments, traffic congestion, bottlenecks, overcrowded public transport and pressure on infrastructure and services), but will not extend the same courtesy to Moreland and Moonee Valley.

After a great deal of public debate, discussion and consideration, Moreland and Moonee Valley Councils submitted their draft new residential zones to the Victorian Government, following that Government’s requirement under Plan Melbourne for new residential zones to be developed.

Moonee Valley recommended the neighbourhood residential zone, which encourages the lowest rate of growth, across 75.5% of the municipality. Moreland Council suggested approximately 60% of the municipality be overlaid by the lowest rate of growth.

Glen Eira, Boroondara, and Bayside successfully recommended and had approved by the Minister that around 75%, 76%, and 83% of their municipalities, respectively, be covered by the lowest residential growth zones, the Neighbourhood Residential Zone. This had been recommended and supported by their respective residents and communities. If it is good enough for Glen Eira, Boroondara, and Bayside then it should also be good enough for Moreland and Moonee Valley.

Why is it that Melbourne’s Liberal supporting suburbs like Kew, Hawthorn, Camberwell, Brighton, Sandringham and Black Rock get two storey height limits, yet the sky is the limit in suburbs like Brunswick, Coburg, Pascoe Vale and Strathmore?

Rather than determine residential planning and growth zones based on party political considerations, the Planning Minister should be respecting and adhering to the views of local communities in Moreland and Moonee Valley.

The Minister has labelled the draft plans by Moonee Valley and Moreland as “fatally flawed”, saying anticipated growth rates ‘will have significant implications’; yet earlier this year he approved, virtually untouched, the draft plans put to him by Glen Eira, Boroondara and Bayside.

If it is good enough for these municipalities to have their residential zones supported, it also ought to be good enough for the northern suburbs.

The Victorian Liberal Government is mandating that Moreland and Moonee Valley absorb an unfair share of Melbourne’s rapid population growth. Why should Moreland and Moonee Valley have more high-rise developments, when our communities have made it abundantly clear we prefer sustainable development in line with community expectations, infrastructure and services?

I urge the Victorian Liberal Government to respect and adhere to the views of our local residents, just as they have done in some of Melbourne’s leafy eastern suburbs, and to not see Melbourne’s northern suburbs as a dumping ground for Melbourne’s rapid population growth, which is fuelling congestion, house prices, cost of living issues and infrastructure stress.
As reported in this week's Moreland Leader, house prices are surging across Moreland, with an increase of 22.1% in the median house price in Pascoe Vale South alone. Encouraging more population growth by implementing high-density development policies will continue fuelling housing unaffordability. The Victorian Government should be seeking to make housing more affordable by implementing a sustainable population and planning strategy.


Planet Linux AustraliaAndrew Pollock: [life] Day 239: Cousin catch up, ice skating and a TM5 pickup

My sister, brother-in-law and niece are in town for a wedding on the weekend, so after collecting Zoe from the train station, we headed out to Mum and Dad's for the morning to see them all. My niece, Emma, has grown heaps since I last saw her. She and Zoe had some nice cuddles and played together really well.

I'd also promised Zoe that I'd take her ice skating, so that dovetailed pretty well with the visit, as instead of going to Acacia Ridge, we went to Boondall after lunch and skated there.

Zoe was very confident this time on the ice. She spent more time without her penguin than with it, so I think next time she'll be fine without one at all. She only had a couple of falls, the first one I think was a bit painful for her and a shock, but after that she was skating around really well. I think she was quite proud of herself.

My new Thermomix had been delivered to my Group Leader's house, so after that, we drove over there so I could collect it and get walked through how I should handle deliveries for customers. Zoe napped in the car on the way, and woke up without incident, despite it being a short nap. She had a nice time playing with Maria's youngest daughter while Maria walked me through everything, which was really lovely.

Time got away on me a bit, and we hurried home so that Sarah could pick Zoe up. I then got stuck into making some pizza sauce for our Riverfire pizza party on Saturday night.

Krebs on Security‘Shellshock’ Bug Spells Trouble for Web Security

As if consumers weren’t already suffering from breach fatigue: Experts warn that attackers are exploiting a critical, newly-disclosed security vulnerability present in countless networks and Web sites that rely on Unix and Linux operating systems. Experts say the flaw, dubbed “Shellshock,” is so intertwined with the modern Internet that it could prove challenging to fix, and in the short run is likely to put millions of networks and countless consumer records at risk of compromise.

The bug is being compared to the recent Heartbleed vulnerability because of its ubiquity and sheer potential for causing havoc on Internet-connected systems — particularly Web sites. Worse yet, experts say the official patch for the security hole is incomplete and could still let attackers seize control over vulnerable systems.

The problem resides with a weakness in the GNU Bourne Again Shell (Bash), the text-based, command-line utility on multiple Linux and Unix operating systems. Researchers discovered that if Bash is set up to be the default command line utility on these systems, it opens those systems up to specially crafted remote attacks via a range of network tools that rely on it to execute scripts, from telnet and secure shell (SSH) sessions to Web requests.
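To see why network services are exposed, note how a CGI-style server copies request headers into environment variables before spawning a shell. This illustrative snippet, which is not from the original post and carries no exploit payload, shows only that header-to-variable plumbing:

```shell
# Illustration only, no exploit payload: CGI servers export request headers
# as environment variables (User-Agent becomes HTTP_USER_AGENT), so remote
# input reaches the environment of any shell the server spawns. Shellshock
# arises when a vulnerable bash parses such a variable as an exported
# function definition and executes the commands trailing it.
export HTTP_USER_AGENT="Mozilla/5.0 (example scanner)"
sh -c 'echo "child shell sees: $HTTP_USER_AGENT"'
```

Any attacker-controlled header thus becomes attacker-controlled environment data in the child process, which is exactly the channel the exploit rides.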

According to several security firms, attackers are already probing systems for the weakness, and at least two computer worms are actively exploiting the flaw to install malware. Jamie Blasco, labs director at AlienVault, has been running a honeypot on the vulnerability since yesterday to emulate a vulnerable system.

“With the honeypot, we found several machines trying to exploit the Bash vulnerability,” Blasco said. “The majority of them are only probing to check if systems are vulnerable. On the other hand, we found two worms that are actively exploiting the vulnerability and installing a piece of malware on the system. This malware turns the systems into bots that connect to a C&C server where the attackers can send commands, and we have seen the main purpose of the bots is to perform distributed denial of service attacks.”

The vulnerability does not impact Microsoft Windows users, but there are patches available for Linux and Unix systems. In addition, Mac users are likely vulnerable, although there is no official patch for this flaw from Apple yet. I’ll update this post if we see any patches from Apple.

Update, Sept. 29 9:06 p.m. ET: Apple has released an update for this bug, available for OS X Mavericks, Mountain Lion, and Lion.

The U.S.-CERT’s advisory includes a simple command line script that Mac users can run to test for the vulnerability. To check your system from a command line, type or cut and paste this text:

env x='() { :;}; echo vulnerable' bash -c "echo this is a test"

If the system is vulnerable, the output will be:

 this is a test

An unaffected (or patched) system will output:

 bash: warning: x: ignoring function definition attempt
 bash: error importing function definition for `x'
 this is a test

US-CERT has a list of operating systems that are vulnerable. Red Hat and several other Linux distributions have released fixes for the bug, but according to US-CERT the patch has an issue that prevents it from fully addressing the problem.

The Shellshock bug is being compared to Heartbleed because it affects so many systems; determining which are vulnerable and developing and deploying fixes to them is likely to take time. However, unlike Heartbleed, which only allows attackers to read sensitive information from vulnerable Web servers, Shellshock potentially lets attackers take control over exposed systems.

“This is going to be one that’s with us for a long time, because it’s going to be in a lot of embedded systems that won’t get updated for a long time,” said Nicholas Weaver, a researcher at the International Computer Science Institute (ICSI) and at the University of California, Berkeley. “The target computer has to be accessible, but there are a lot of ways that this turns accessibility into full local code execution. For example, one could easily write a scanner that would basically scan every Web site on the planet for vulnerable (Web) pages.”

Stay tuned. This one could get interesting very soon.

Planet SE LinuxDan Walsh: What does SELinux do to contain the bash exploit?

Do you have SELinux enabled on your Web Server?

Lots of people are asking me about SELinux and the Bash Exploit.

I did a quick analysis of one reported remote Apache exploit:

It shows an example of the bash exploit on an apache server.  It even shows that SELinux was enforcing when the exploit happened.

SELinux does not block the exploit itself, but it would prevent escalation from the confined domains.
Why didn't SELinux block it?

SELinux controls processes based on their types; if the process is doing what it was designed to do, then SELinux will not block it.

In the described exploit the apache server is running as httpd_t and it is executing a cgi script which would be labeled httpd_sys_script_exec_t.

When httpd_t executes a script labeled httpd_sys_script_exec_t SELinux will transition the new process to httpd_sys_script_t.

SELinux policy allows processes running as httpd_sys_script_t to write to /tmp, so it was successful in creating /tmp/aa.

If you did this and looked at the content in /tmp, it would be labeled httpd_tmp_t.


Let's look at which files httpd_sys_script_t is allowed to write to on my Rawhide box.

# sesearch -A -s httpd_sys_script_t -c file -p write -C | grep open | grep -v ^D
   allow httpd_sys_script_t httpd_sys_rw_content_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow httpd_sys_script_t anon_inodefs_t : file { ioctl read write getattr lock append open } ; 
   allow httpd_sys_script_t httpd_sys_script_t : file { ioctl read write getattr lock append open } ; 
   allow httpd_sys_script_t httpd_tmp_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 

httpd_sys_script_t is a process label, which only applies to content in /proc.  This means processes running as httpd_sys_script_t can write to their own process data.

anon_inodefs_t is an in-memory label, most likely not on your disk.

The only on-disk places it can write are files labeled httpd_sys_rw_content_t and /tmp.

# grep httpd_sys_rw_content_t /etc/selinux/targeted/contexts/files/file_contexts

or on my box

# find /etc -context "*:httpd_sys_rw_content_t:*"

With SELinux disabled, this hacked process would be allowed to write any content that is world writable on your system as well as any content owned by the apache user or group.

Let's look at what it can read.

# sesearch -A -s httpd_sys_script_t -c file -p read -C | grep open | grep -v ^D | grep -v exec_t
   allow domain locale_t : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t iso9660_t : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t httpd_sys_ra_content_t : file { ioctl read create getattr lock append open } ; 
   allow httpd_sys_script_t httpd_sys_rw_content_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow httpd_sys_script_t squirrelmail_spool_t : file { ioctl read getattr lock open } ; 
   allow domain ld_so_t : file { ioctl read getattr execute open } ; 
   allow httpd_sys_script_t anon_inodefs_t : file { ioctl read write getattr lock append open } ; 
   allow httpd_sys_script_t sysctl_kernel_t : file { ioctl read getattr lock open } ; 
   allow domain base_ro_file_type : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t httpd_sys_script_t : file { ioctl read write getattr lock append open } ; 
   allow nsswitch_domain cert_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type etc_runtime_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type fonts_cache_t : file { ioctl read getattr lock open } ; 
   allow domain mandb_cache_t : file { ioctl read getattr lock open } ; 
   allow domain abrt_t : file { ioctl read getattr lock open } ; 
   allow domain lib_t : file { ioctl read getattr lock execute open } ; 
   allow domain man_t : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t cifs_t : file { ioctl read getattr lock execute execute_no_trans entrypoint open } ; 
   allow domain sysctl_vm_overcommit_t : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t nfs_t : file { ioctl read getattr lock execute execute_no_trans entrypoint open } ; 
   allow kernel_system_state_reader proc_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain passwd_file_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain sssd_public_t : file { ioctl read getattr lock open } ; 
   allow domain cpu_online_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type public_content_rw_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain etc_runtime_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain hostname_etc_t : file { ioctl read getattr lock open } ; 
   allow domain ld_so_cache_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain sssd_var_lib_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type public_content_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain krb5_conf_t : file { ioctl read getattr lock open } ; 
   allow domain abrt_var_run_t : file { ioctl read getattr lock open } ; 
   allow domain textrel_shlib_t : file { ioctl read getattr execute execmod open } ; 
   allow httpd_sys_script_t httpd_tmp_t : file { ioctl read write create getattr setattr lock append unlink link rename open } ; 
   allow domain machineid_t : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t mysqld_etc_t : file { ioctl read getattr lock open } ; 
   allow domain rpm_script_tmp_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain samba_var_t : file { ioctl read getattr lock open } ; 
   allow domain sysctl_crypto_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain net_conf_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type etc_t : file { ioctl read getattr execute execute_no_trans open } ; 
   allow httpd_script_type fonts_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type ld_so_t : file { ioctl read getattr execute execute_no_trans open } ; 
   allow nsswitch_domain file_context_t : file { ioctl read getattr lock open } ; 
   allow httpd_sys_script_t httpd_squirrelmail_t : file { ioctl read getattr lock append open } ; 
   allow httpd_script_type base_ro_file_type : file { ioctl read getattr lock execute execute_no_trans open } ; 
   allow httpd_sys_script_t snmpd_var_lib_t : file { ioctl read getattr lock open } ; 
   allow nsswitch_domain samba_etc_t : file { ioctl read getattr lock open } ; 
   allow domain man_cache_t : file { ioctl read getattr lock open } ; 
   allow httpd_script_type bin_t : file { ioctl read getattr lock execute execute_no_trans open } ; 
   allow httpd_script_type lib_t : file { ioctl read getattr lock execute execute_no_trans open } ; 
   allow httpd_sys_script_t httpd_sys_content_t : file { ioctl read getattr lock execute execute_no_trans entrypoint open } ; 
   allow nsswitch_domain etc_t : file { ioctl read getattr lock open } ; 
ET allow nsswitch_domain cert_t : file { ioctl read getattr lock open } ; [ authlogin_nsswitch_use_ldap ]
ET allow nsswitch_domain slapd_cert_t : file { ioctl read getattr lock open } ; [ authlogin_nsswitch_use_ldap ]
ET allow nsswitch_domain net_conf_t : file { ioctl read getattr lock open } ; [ authlogin_nsswitch_use_ldap ]
ET allow domain sysctl_kernel_t : file { ioctl read getattr lock open } ; [ fips_mode ]

Looks like a lot of types, but most of these are system types (bin_t, lib_t, ld_so_t, man_t, fonts_t): most stuff under /usr, etc.

It would be allowed to read /etc/passwd (passwd_file_t) and most content in /etc.

It can read apache static content, like web page data.

Well what can't it read?

user_home_t - This is where I keep my credit card data
usr_tmp_t where an admin might have left something
Other content in /var
*db_t - No database data.
It cannot read most of apache's runtime data, like apache content in /var/lib, /var/log or /etc/httpd.

With SELinux disabled, this process would be allowed to read any content that is world readable on your system, as well as any content owned by the apache user or group.

We also need to look at which domains httpd_sys_script_t can transition to.

# sesearch -T -s httpd_sys_script_t -c process -C | grep -v ^D
Found 9 semantic te rules:
   type_transition httpd_sys_script_t httpd_rotatelogs_exec_t : process httpd_rotatelogs_t; 
   type_transition httpd_sys_script_t abrt_helper_exec_t : process abrt_helper_t; 
   type_transition httpd_sys_script_t antivirus_exec_t : process antivirus_t; 
   type_transition httpd_sys_script_t sepgsql_ranged_proc_exec_t : process sepgsql_ranged_proc_t; 
   type_transition httpd_sys_script_t sepgsql_trusted_proc_exec_t : process sepgsql_trusted_proc_t; 

SELinux would also block the process executing a setuid process to raise its capabilities.

Now this is a horrible exploit, but as you can see SELinux would probably have protected a lot, if not most, of the valuable data on your machine.  It would buy you time to patch your system.

Did you setenforce 1?

LongNowLarry Harvey Seminar Tickets


The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Larry Harvey presents Why The Man Keeps Burning



Monday October 20, 02014 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats, join today! General Tickets $15


About this Seminar:

“Scaling up will kill Burning Man.” “That new rule will kill Burning Man.” “The Bureau of Land Management will kill Burning Man.” “Selling tickets that way will kill Burning Man.” “Board infighting will kill Burning Man.” “Upscale turnkey camps will kill Burning Man.”


What if Burning Man is too fragile to be killed? What if celebrating ephemerality is the best guarantee of continuity? What if every year’s brand new suspension of disbelief has deep-down durability? What if conservatively radical principles and evolving rules are more robust over time than anything merely physical?

What really keeps the Man burning? If anyone knows, it should be the event’s primary founder, author of The Principles, and ongoing Chief Philosophical Officer, artist Larry Harvey.

CryptogramSecurity Trade-offs of Cloud Backup

This is a good essay on the security trade-offs with cloud backup:

iCloud backups have not eliminated this problem, but they have made it far less common. This is, like almost everything in tech, a trade-off:
  • Your data is far safer from irretrievable loss if it is synced/backed up, regularly, to a cloud-based service.

  • Your data is more at risk of being stolen if it is synced/backed up, regularly, to a cloud-based service.

Ideally, the companies that provide such services minimize the risk of your account being hijacked while maximizing the simplicity and ease of setting it up and using it. But clearly these two goals are in conflict. There's no way around the fact that the proper balance is somewhere in between maximal security and minimal complexity.

Further, I would wager heavily that there are thousands and thousands more people who have been traumatized by irretrievable data loss (who would have been saved if they'd had cloud-based backups) than those who have been victimized by having their cloud-based accounts hijacked (who would have been saved if they had only stored their data locally on their devices).

It is thus, in my opinion, terribly irresponsible to advise people to blindly not trust Apple (or Google, or Dropbox, or Microsoft, etc.) with "any of your data" without emphasizing, clearly and adamantly, that by only storing their data on-device, they greatly increase the risk of losing everything.

It's true. For most people, the risk of data loss is greater than the risk of data theft.

Planet DebianSteve Kemp: Today I mostly removed python

Much has already been written about the recent bash security problem, allocated the CVE identifier CVE-2014-6271, so I'm not even going to touch it.

It did remind me to double-check my systems to make sure that I didn't have any packages installed that I didn't need though, because obviously having fewer packages installed and fewer services running reduces the potential attack surface.

I had noticed in the past that I had python installed and just thought "Oh, yeah, I must have python utilities running". It turns out though that on 16 out of 19 servers I control, I had python installed solely for the lsb_release script!

So I hacked up a horrible replacement for `lsb_release` in pure shell, and then became cruel:

~ # dpkg --purge python python-minimal python2.7 python2.7-minimal lsb-release

That horrible replacement is horrible because it defers detection of all the names/numbers to /etc/os-release, which wasn't present in earlier versions of Debian. Happily all my Debian GNU/Linux hosts run Wheezy or later, so it all works out.
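A minimal sketch of what such a pure-shell replacement can look like, assuming only the os-release(5) key=value format; the sample file and field mapping below are illustrative, not Steve's actual script:

```shell
# Hypothetical stand-in for a pure-shell lsb_release: os-release uses plain
# key=value shell syntax, so it can simply be sourced. A sample file is
# used here; on a real wheezy+ host, point OS_RELEASE at /etc/os-release.
cat > /tmp/os-release.sample <<'EOF'
ID=debian
VERSION_ID="7"
PRETTY_NAME="Debian GNU/Linux 7 (wheezy)"
EOF
OS_RELEASE="${OS_RELEASE:-/tmp/os-release.sample}"
. "$OS_RELEASE"
echo "Distributor ID: $ID"
echo "Description:    $PRETTY_NAME"
echo "Release:        $VERSION_ID"
```

No interpreter beyond /bin/sh is needed, which is the whole point of dropping the python dependency.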

So that left three hosts that had a legitimate use for Python:

  • My mail-host runs offlineimap
    • So I purged it.
    • I replaced it with isync.
  • My host-machine runs KVM guests, via qemu-kvm.
    • qemu-kvm depends on Python solely for the script /usr/bin/kvm_stat.
    • I'm not pleased about that but will tolerate it for now.
  • The final host was my ex-mercurial host.
    • Since I've switched to git I just removed the package.

So now 1/19 hosts has Python installed. I'm not averse to the language, but given that I don't personally develop in it very often (read "once or twice in the past year") and by accident I had no python-scripts installed I see no reason to keep it on the off-chance.

My biggest surprise of the day was that even now that we can use dash as our default shell, we still can't purge bash, since it is marked as Essential. Perhaps in the future.

Planet DebianAigars Mahinovs: Distributing third party applications via Docker?

Recently the discussion around how to distribute third party applications for "Linux" has become a topic of the hour, and for a good reason - Linux is becoming mainstream outside of the free software world. While having each distribution ship a perfectly packaged, version-controlled and natively compiled version of each application, installable from a per-distribution repository in a simple and fully secured manner, is a great solution for popular free software applications, this model is slightly less ideal for less popular apps and for non-free software applications. In these scenarios the developers of the software would want to do the packaging into some form, distribute that to end-users (either directly or through some other channels, such as app stores) and have just one version that would work on any Linux distribution and keep working for a long while.

For me the topic really hit home at Debconf 14, where Linus voiced his frustrations with app distribution problems, some of which were also touched on by Valve. Looking back we can see passionate discussions and interesting ideas on the subject from systemd developers (another) and Gnome developers (part2 and part3).

After reading/watching all that I came away with the impression that I love many of the ideas expressed, but I am not as thrilled about the proposed solutions. The systemd managed zoo of btrfs volumes is something that I actually had a nightmare about.

There are far simpler solutions with existing code that you can start working on right now. I would prefer basing Linux applications on Docker. Docker is a convenience layer on top of Linux cgroups and namespaces. Docker stores its images in a datastore that can be based on AUFS or btrfs or devicemapper or even plain files. It already has a semantic for defining images, creating them, running them, explicitly linking resources and controlling processes.

Let's play out a simple scenario of how third party applications could work on Linux.

Third party application developer writes a new game for Linux. As his target he chooses one of the "application runtime" Docker images on Docker Hub. Let's say he chooses the latest Debian stable release. In that case he writes a simple Dockerfile that installs his build-dependencies and compiles his game in the "debian-app-dev:wheezy" container. The output of that is a new folder containing all the compiled game resources and another Dockerfile - this one describes the runtime dependencies of the game. Now when a Docker image is built from this compiled folder, it is based on the "debian-app:wheezy" image that no longer has any development tools and is optimized for speed and size. After this build is complete, the developer exports the Docker image into a file. This file can contain either the full system needed to run the new game or (after #8214 is implemented) just the filesystem layers with the actual game files and enough meta-data to reconstruct the full environment from public Docker repos. The developer can then distribute this file to the end user in whatever way is comfortable for them.
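The runtime Dockerfile from this scenario might look like the following minimal sketch; "debian-app:wheezy" is the hypothetical base image named above, while the library, game name and paths are my own illustrative assumptions:

```dockerfile
# Runtime Dockerfile shipped alongside the compiled game folder (sketch).
FROM debian-app:wheezy
# Runtime-only dependencies - no compilers or -dev packages:
RUN apt-get update && apt-get install -y libsdl1.2debian \
    && rm -rf /var/lib/apt/lists/*
# Copy in the pre-compiled game resources:
COPY . /opt/mygame
# The command the launcher GUI would run inside the container:
CMD ["/opt/mygame/bin/mygame"]
```

The developer would then run something like `docker build -t mygame .` in the compiled folder and `docker save mygame > mygame.img` to produce the single distributable file.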

The end user would download the game file (either through an app store app, an app store website or in any other way) and import it into the local Docker instance. For user convenience we would need to come up with a file extension and create some GUIs to launch on double click, similar to GDebi. Here the user would be able to review what permissions the app needs to run (like GL access, PulseAudio, webcam, storage for save files, ...). Enough metainfo and cooperation would have to exist to allow desktop menus to detect installed "apps" in Docker and show shortcuts to launch them. When the user does so, a new Docker container is launched running the command provided by the developer inside the container. Other metadata would determine other docker run options, such as whether to link over a socket for talking to PulseAudio, or whether to mount a folder into the container where the game would be able to keep its save files. Or even whether the application would be able to access X (or Wayland) at all.

Behind the scenes the application is running from the contained and stable libraries, but talking to a limited and restricted set of system level services. Those would need to be kept backwards compatible once we start this process.

On the sandboxing front, not only is our third party application running in a very limited environment, but we can also enhance our system services to recognize requests from such applications via cgroups. This can, for example, allow a window manager to mark all windows spawned by an application even if they are from a bunch of different processes. The window manager can then track all processes of a logical application from any of its windows.

For updates the developer can simply create a new image and distribute a file of the same size as before, or, if the purchase goes via some kind of app-store application, only the layers that actually changed can be rsynced over, thus creating a much faster update experience. Images with the same base can share data; this would encourage the creation of higher-level base images, such as "debian-app-gamegl:wheezy", that all GL game developers could use, thus getting a smaller installation package.

After a while the question of updating abandonware will come up. Say there is this cool game built on top of "debian-app-gamegl:wheezy", but now there is a security bug or some other issue that requires the base image to be updated, without requiring a recompile or a change to the game itself. If this Docker proposal is realized, then either the end user or a redistributor can easily re-base the old Docker image of the game on a new base. Using this mechanism it would also be possible to handle incompatible changes to system services - ten years down the line AwesomeAudio replaces PulseAudio, so we create a new "debian-app-gamegl:wheezy.14" version that contains a replacement libpulse that actually talks to the AwesomeAudio system service instead.

There is no need to re-invent everything, or to push everything - and now package management too - into systemd, or to push non-distribution application management into distribution tools. Separating things into logical blocks does not hurt their interoperability, but it allows recombining them in a different way for a different purpose, or replacing some part to create a system with radically different functionality.

Or am I crazy and we should just go and sacrifice Docker, apt, dpkg, FHS and non-btrfs filesystems on the altar of systemd?

P.S. You might get the impression that I dislike systemd. I love it! As an init system. And I love the ideas and talent of the systemd developers. But I think that systemd should have nothing to do with application distribution or processes started by users. I sometimes get an uncomfortable feeling that systemd is morphing towards replacing the whole of System V, jumping all the way to System D, and rewriting, obsoleting or absorbing everything between the kernel and Gnome. In my opinion it would be far healthier for the community if all of these projects were developed and usable separately from systemd, so that other solutions could compete on a level playing field. Or, maybe, we could just confess that what systemd is doing is creating a new Linux meta-distribution.

Planet DebianJan Wagner: Redis HA with Redis Sentinel and VIP

For a current project we decided to use Redis for several reasons. As availability is a critical part, we discovered that Redis Sentinel can monitor Redis and handle an automatic master failover to an available slave.

Setting up the Redis replication was straightforward, as was setting up Sentinel. Please keep in mind that if you configure Redis to require an authentication password, you also need to provide it for the replication process (masterauth) and for the Sentinel connection (auth-pass).

The more interesting part is how to migrate the clients over to the new master in case of a failover. While Redis Sentinel can also be used as a configuration provider, we decided not to use this feature, as the application would need to request the current master node from Redis Sentinel quite often, which might have a performance impact.
The first idea was to use some kind of VRRP, as implemented in keepalived or something like it. The problem with such a solution is that you need to notify the VRRP process when a Redis failover is in progress.
Well, Redis Sentinel has a configuration option called 'sentinel client-reconfig-script':

# When the master changed because of a failover a script can be called in
# order to perform application-specific tasks to notify the clients that the
# configuration has changed and the master is at a different address.
# The following arguments are passed to the script:
# <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# <state> is currently always "failover"
# <role> is either "leader" or "observer"
# The arguments from-ip, from-port, to-ip, to-port are used to communicate
# the old address of the master and the new address of the elected slave
# (now a master).
# This script should be resistant to multiple invocations.

This looks pretty good, and as a <role> is provided, I thought it would be a good idea to just call a script which evaluates this value and, based on its result, adds the VIP to the local network interface when we get 'leader' and removes it when we get 'observer'. It turned out that this was not working, as <role> didn't reliably return 'leader' when the local Redis instance became master and 'observer' when it became slave. This was pretty annoying and I was close to giving up.
Fortunately I stumbled upon a (possibly Chinese) post about Redis Sentinel where the same thing was done. On second look I recognized that the decision there was made on ${6}, which is <to-ip>: nothing more than the new IP of the Redis master instance. So I rewrote my tiny shell script, and after some other pitfalls this strategy worked out well.
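A sketch of such a client-reconfig-script, deciding on ${6} as described; the VIP, interface and addresses are made-up examples, and the ip(8) commands are only echoed here for illustration rather than executed:

```shell
#!/bin/sh
# Sketch of a 'sentinel client-reconfig-script' hook. Sentinel calls it as:
#   <master-name> <role> <state> <from-ip> <from-port> <to-ip> <to-port>
# The decision is based on argument 6 (<to-ip>, the new master's address),
# not on <role>. VIP, MY_IP and IFACE are illustrative assumptions.

VIP="192.0.2.100/24"
MY_IP="192.0.2.11"    # this node's own address
IFACE="eth0"

# In production these would run the ip commands directly; here we echo them.
add_vip()    { echo "ip addr add $VIP dev $IFACE"; }
remove_vip() { echo "ip addr del $VIP dev $IFACE"; }

handle_failover() {
    new_master_ip="$6"
    if [ "$new_master_ip" = "$MY_IP" ]; then
        add_vip       # this node was promoted: take over the VIP
    else
        remove_vip    # another node is master now: release the VIP
    fi
}

# Example invocation, as Sentinel would do after a failover:
handle_failover mymaster leader failover 192.0.2.10 6379 192.0.2.11 6379
```

As the Sentinel documentation warns, the script should be resistant to multiple invocations, which `ip addr add`/`del` mostly is apart from its exit codes.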

Some notes about convergence: it currently takes about 6-7 seconds for the VIP to migrate over to the new node after Redis Sentinel notices a broken master. This is not the best performance, but as we expect this to happen rarely, we need to design the application using our Redis setup to handle this (hopefully) rare scenario.

Falkvinge - Pirate PartyBitcoin, Open Source Movement For Decentralized Future

Bitcoin concept

Cryptocurrency – Nozomi Hayase: It all started with a white paper published in 2008. An unknown innovator under the pseudonym Satoshi Nakamoto outlined an open source protocol for a public ledger that has come to be known as Bitcoin – which among other things is a peer-to-peer form of digital cash. Five years since its inception, more people are coming to recognize the revolutionary force behind the underlying technology of the Bitcoin blockchain that enables this decentralized network to achieve consensus amongst strangers at a global scale.

The disruptive force this innovation brings to existing systems is enormous. From potentially replacing the remittance industry to empowering the underbanked and those whose currencies are tightly controlled and subject to hyperinflation, the world is just beginning to see its effects in the financial realm. This capacity of Bitcoin to transform society is attracting activists and global citizens dedicated to promoting equality and a more just world. In particular, the ability of the blockchain’s decentralized trust to build a truly peer-to-peer platform for transactions is a very powerful force for those who seek to flatten the current hierarchical system that favors the one percent oligarchs who have their hands on the levers of power.

Yet as with any new invention in its early phase, many are still skeptical and hesitant to get behind this technology. They call for a more critical examination of Bitcoin as a digital currency. One argument revolves around perceived inequalities embedded within the design of the currency that create a certain privilege for early adopters. The contention is that Bitcoin is centralized; that roughly half of all bitcoins belong to around 927 individuals. If true, this puts half of the world’s Bitcoin currency in the hands of a tenth of a percent of all accounts. The Washington Post published an article called “Forget the 1 percent. In the Bitcoin world, half the wealth belongs to the 0.1 percent”, which called out Bitcoin’s apparent inequality and highlighted the gap between those who own Bitcoin and those who don’t.

All new technology takes time for mainstream adoption and in the beginning it is inevitable that the user base and innovator pool are relatively small. This is certainly true with Bitcoin. Compared to the situation in its early stages, bitcoin adoption now is moving very quickly. That said, what lies beneath this concern about the imbalance between users appears to be a fear that Bitcoin early adopters could end up replacing the current 1% oligarchy of bankster gangs of Wall Street and Goldman Sachs and would simply recycle the old world order of robber-baron capitalism. So the question arises, could Bitcoin’s revolutionary potential be overshadowed by this imagined pitfall, or worse yet become just another tool for neoliberal forces? By engaging this question, we can deepen our understanding of the genius and revolutionary potential of this innovation.

Perhaps the ideal currency in the minds of some of those who criticize Bitcoin’s perceived design of inequality is one that could bring all of Bitcoin’s positive features without creating so-called ‘early adopters’. For this to happen, coins would need to be premined and distributed to all people equally while still achieving the massive hashing power needed to secure the system from any external force determined to hijack or compromise it. This all sounds good, but practically speaking, who has the resources to set it in motion and bring it to that point?

These kinds of efforts might be achieved at a local and smaller scale, like Auroracoin in Iceland, even though it was reported that the experiment stumbled with problems in its design that caused a dramatic drop in value. In addition to scale vulnerability of the blockchain hashing power with a smaller currency, the larger question remains regarding how to effectively challenge the current global cartel of Western financial institutions that have over the years undermined sovereignty of local communities and destroyed whole countries through limiting access to payment networks, debt peonage, rent seeking and money printing. How is it possible to create a common currency that connects people around the world in a truly peer-to-peer way without it being intermediated or hijacked by a patronage network of states and corporate banksters that regularly steal from the commons and act against the interests of the people?

Decentralized Organizing

The creation of Bitcoin and its ecosystem follows a trend of decentralization that has emerged in recent years, such as the Occupy movement’s leaderless horizontal organizing, the Free and Open Source Software movement and collaborative production like Wikipedia and Linux. In traditional movements, a group of individuals or a particular organization takes the lead and organizes the cause. Activists generally struggle to fund their efforts, as altruism does not get rewarded financially in the current extreme capitalistic environment. Across the board in struggles for social justice, funding is often the most challenging issue. Identified leaders communicate the purpose of a project and rely on existing networks and systems for funding and material support. Oftentimes for the sake of efficiency and lack of alternatives, such organizing tends to crystallize into another form of hierarchy.

In addition, over the years, the old methods of dissent and social change have been shown to be less and less effective. Establishment forces target leaders and recognized organizers as a point of control for co-option and weakening movements. This is likely one reason why the inventor of Bitcoin decided to be anonymous and minimize his influence in the operation. Any open declaration of resistance against state and corporate power will not go unnoticed and a direct confrontation with the establishment is inevitable if what is created is at all effective. Such efforts are often met with attacks and in most cases easily squashed.

In the case of building the Bitcoin network, the hashing infrastructure of the Bitcoin blockchain needed to attain a stable and impervious size before it could go truly viral and gain more mainstream appeal. Early on, Satoshi Nakamoto appeared to have strong concerns about protecting the development of the software against just such external threats. This revelation surfaced in Julian Assange’s latest book “When Google Met WikiLeaks”. In a footnote, the founder of WikiLeaks depicted an alliance with the Bitcoin community that goes back years before this new cryptographic invention matured into the currency of contagious courage it has become (bitcoin was eventually used to raise funds for WikiLeaks and Edward Snowden).

In December 2010, just after WikiLeaks faced the unlawful financial blockade imposed by Bank of America, Visa, MasterCard, PayPal and Western Union, a debate emerged on the Bitcoin Forum concerning the risk that using bitcoin for donations to WikiLeaks could “provoke unwanted government interest in the then nascent crypto-currency”. Responding to one poster who welcomed such a challenge, Nakamoto emphasized the importance of protecting the software's development at this early stage: “Bitcoin is a small beta community in its infancy. You would not stand to get more than pocket change and the heat you would bring would likely destroy us at this stage.” Six days later Nakamoto disappeared from the Bitcoin community. WikiLeaks read the analysis, agreed with his view and decided to postpone accepting Bitcoin donations until the currency attained stability.

Perhaps past protest and resistance movements can teach us something. In order for the creation of an alternative system to be truly effective, it needs to be subversive and under the radar until it gains strength. In a sense, Bitcoin is a living example of such an effective decentralized organization. After 5 years of existence, it has created the largest global network of supercomputing power. It has now grown to such a level that it is virtually impossible for one nation-state or corporation to undermine or hijack the network.

Swarm Effect

The idea of blockchain crypto-currency was put forward in 2008. The following year the Bitcoin software was launched and the first blockchain network came into existence by miners producing and transacting bitcoins. Someone had to do the early work of building of this open source ledger and securing the system at a time when few would even believe it possible or support this innovation. Where did the impetus for this work come from?

The unfolding process of the growing Bitcoin ecosystem can be looked at as an expression of a phenomenon called the “Swarm”. Rick Falkvinge, founder of the first Pirate Party, describes the Swarm as a new style of organizing. He explains how it differs both from traditional hierarchies and from a purely egalitarian way of working, as no single person or group has authority for guiding the process. He notes how it is a “scaffolding set up by a few individuals that enables tens of thousands of people to cooperate on a common goal in their life.” Falkvinge describes a Swarm as driven by voluntarism. People join the cause because they believe in the idea. There is no leader, but each person’s action inspires others and guides the Swarm in moving forward together.

Bitcoin is an open source project that generates a Swarm effect. From outside, this might appear mysterious or unsettling with its lack of a traditional core. Some have difficulty believing there is no controlling authority and are compelled to try to unveil a “ghost outside the machine”. Whether it is innovation or activism, people tend to first look for some subversion behind a seemingly progressive idea and then look for individuals or entities that are possibly pulling strings to divert the original intention. This can be a healthy skepticism, yet in the case of Bitcoin, there is no company, director or physical entity. There are no CEOs, no offices; no group of individuals running the operation.

At this time, the origin of the blockchain technology has been traced only to an anonymous unknown creator. The idea itself is not attached to a particular individual or group. It is somewhat similar to the way the online collective Anonymous express themselves as simply “ideas without origin” and claim that “there is no authorship … no control, no leadership, only influence, the influence of thought”.

Bitcoin is simply an ingenious idea and an open source computer code that anyone can read, take up, modify and develop. Whoever created it didn’t give it only to specific people, but instead made it accessible to everyone in the world. Doug Carrillo, founder of tweeted, “Satoshi’s greatest gift was uniting all the intelligent, altruistic, visionary people from all over the world working on Bitcoin”. The impulse that got this global currency enterprise going was this gift of protocol.


Incentive Structure

One notable characteristic that has emerged within the Swarm surrounding the Bitcoin ecosystem is a unique form of volunteerism. Volunteering is generally associated with charity; one gives time, resources and skills for free. Traditionally, one does not expect the work to be compensated. The Bitcoin ecosystem generates a new volunteerism in the form of innovation without permission, where those who engage both generate and expand new economic activity simply by creating, acquiring or using the currency. The value they create and the efforts that support it are rewarded in a way that strengthens the system as a whole, while further encouraging innovation on the edges. This all works with a network effect where voluntary peer-to-peer interaction creates and expands an autonomous zone impervious to patronage and monopolization. Each person moves toward something they believe in, while building a kind of common wealth that inspires further participation.

This is made possible through one particular feature of the Bitcoin currency. When we look at Bitcoin technology as a distributed trust foundation upon which global society is building a network that empowers everyone, we can begin to understand the brilliance of its design and why its first application is an open source mineable currency.

With a cap of 21M, Bitcoin has a fixed monetary policy. This design seems deflationary in fiat economy terms, yet Bitcoin’s fine divisibility (8 decimal places, and more if consensus is reached) makes it possible to accommodate any level of economic growth. Some view this design as an inherent flaw, arguing that it rewards early adopters and encourages hoarding, yet this element has played a crucial role in the development of the blockchain. It provided a way of building a global public asset ledger through integrating a reward structure by means of increasingly difficult proof of work tied to network capacity and value expansion.
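The divisibility argument can be checked with simple arithmetic (a back-of-envelope sketch using the figures above):

```shell
# The fixed 21M cap with 8 decimal places still yields a vast number of
# smallest units ("satoshi"), so divisibility can absorb economic growth.
cap_btc=21000000
satoshi_per_btc=100000000   # 10^8, i.e. 8 decimal places
total_satoshi=$((cap_btc * satoshi_per_btc))
echo "$total_satoshi"       # 2100000000000000 - 2.1 quadrillion units
```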

This all functions as an incentive for volunteerism by creating the Swarm, which Marc Andreessen, creator of Netscape and early web browsers, characterizes as “a four-sided network effect”, namely “four constituencies that participate in expanding the value of Bitcoin”. Andreessen explains these 4 participants as “(1) consumers who pay with Bitcoin, (2) merchants who accept Bitcoin, (3) “miners” who run the computers that process and validate all the transactions and enable the distributed trust network to exist, and (4) developers and entrepreneurs who are building new products and services with and on top of Bitcoin”. This design, combined with unparalleled flow and infinite divisibility, creates an open source network effect – not just squared, but cubed.

The Bitcoin incentive structures have proven very effective. Bitcoin is expanding its infrastructure with ATMs, exchanges and improved POS interfaces. When people are invested enough in something, it motivates them to solve the challenges that come along the way. In fact, it has generated enough incentive for people to keep the network decentralized and avoid the issue of a 51% attack (the concentration of a hashing majority and temporary takeover by one mining pool). Each member of the global mining pool has a strong incentive to strengthen and maintain the integrity of the system.

Many early adopters put resources into creating new start-ups to support the ecosystem. After becoming the first major retailer to accept Bitcoin globally, Overstock announced it would donate four percent of cryptocurrency revenue to foundations that promote the use of digital currency in the world. A new Bitcoin Exchange in Norway is reported to donate 5% of their profits to charities fighting poverty.

Bitcoin is an open source project that is self-organized and crowd-sourced by all who are involved in it. Early adopters are like shareholders who can also take responsibility for maintaining the system. Although the justification for the percentage of reward accruing to them may be debatable, it looks different if we consider how vital their role was in building the system and the risk they took early on in supporting the innovation. One should also note that their accumulation of bitcoin was not made through cheating, manipulation and exploitation, as with fiat and debt ownership, so we may be compelled to draw more nuanced conclusions about this perceived inequality.

Genesis Block and the Network of Affinity

So what was the idea encoded in the protocol that set this in motion? Bitcoin is a neutral technology, yet the genesis of the idea is not neutral. It had clear political implications. The Bitcoin protocol emerged during the financial crisis of 2008, and its creator was aware of how badly governments were handling monetary policy. The first block of the Bitcoin blockchain, known as the genesis block, includes the following quote in its metadata: “The Times 03/Jan/2009 Chancellor on brink of second bailout for banks.” Although this might have been intended as just a timestamp, his choice of this Times of London article as proof of date may give a sense of his background motivation.

Satoshi’s white paper pointed to the “inherent weakness of trust based model” of the existing financial system and proposed “an electronic payment system based on cryptographic proof” as a viable alternative. The Bitcoin open source protocol was a response to the crisis of the existing centralized financial system. Over time, this has created a network of affinity through voluntary association and mutual aid based on trust in the common person.

In this network, individuals out of themselves turn their computers into mines. Out of themselves they create start-ups and test the system. Some might be driven to earn bitcoin or build a cutting edge business, while others are motivated by a principle and vision of a decentralized future and redistribution of power. Whatever the motives driving users, miners and innovators, by choosing to be a part of Bitcoin network, they are all bound by one shared idea – that of supporting and maintaining the integrity of the public ledger for all the world to use.

This may remind us of what fueled the Occupy Movement. In fall of 2011, people from all walks of life came together in lower Manhattan. There were socialists, libertarians, housewives, anarchists and teachers. Those whose houses were foreclosed, students with onerous debt, businessmen – all came together because they could see how the system was rigged against them. They were not willing to put up with the current hierarchical financial and economic system that is run and controlled by the 1%. Their shared frustration became a network of affinity bringing people together to engage in efforts to solve problems of corrupt hierarchical institutions and false forms of representation. Through activating trust in one another, they attempted to create a peer-to-peer decentralized network. They were willing to work together and abide by this particular protocol of consensus and egalitarian form of decision-making.

Bitcoin is a similar social movement empowered by decentralized consensus decision-making processes. While the Occupy movement was crushed by a brutal police force as it tried to build decentralized networks on existing centralized corporate occupied territory, with Bitcoin, a whole new network is built upon a distributed trust platform that is structurally autonomous and out of reach of any private third party authority. We now have an alternative to over-centralized social forms that were programmed to manage wealth and resources for the rich and powerful and can enter into a network of peers impervious to these forces of control.

As noted, some see and criticize the existence of privilege within this Bitcoin network of affinity. Yet the issue of inequality was raised even within the 99%. Concerns surfaced within Occupy that it was not truly addressing issues of racism and class divides. In the U.S., the Occupy movement arose mostly out of white middle-class issues such as mortgages and student loans, while for people of color in US cities like Oakland and Detroit, the oppressive issues were more about police brutality, the absence of grocery stores in their communities, and the lack of basic work and means to fulfill the needs of everyday life.

The same thing might be said about the Bitcoin network, as it is potentially composed of people of different backgrounds, nationalities, cultures or economic classes, within which there is clear inequality. This is really a reflection of our current social structure that carries a long history of colonialism in the form of Western hegemony. Despite all the differences, there is one thing in common. Participation in the blockchain network is voluntary. Both with the Occupy network motto of “In Each Other We Trust” and with Bitcoin, those who are in it choose to join because they see what an improvement it is over the current system and how it could at least offer benefit beyond monopolized rent seeking for oligarchic powers.

Whatever incentives bring people into the network, by joining they are working to build on the genesis idea that brought them together. It is to eliminate the need for central authority and replace hierarchical third party based representative forms of governance with a truly peer-to-peer decentralized trust network. They do this by each person simply participating and becoming the change they wish to see in the world.

Beyond Levers of Control

Perhaps in transitioning into the Bitcoin ecosystem, the biggest challenge we face is the limit of our ability to imagine. Corporate colonization has not only created vast inequality of wealth by way of an economy based on exploitation and wars, but its real damage was creating a poverty of the mind that cuts humanity off from its connection to the earth. It captured the imagination into materialistic economy of exploitation and extraction and many no longer think with the earth as the First Nations used to do.

We are now so used to centrally planned and hierarchically organized societies, it is difficult for many to imagine how a truly decentralized society might work. People are conditioned by the old paradigm and tend to look for levers of control even where there are none. They cannot even imagine a system that does not create such points of control. So when faced with the idea of a Bitcoin 1%, so called early adopters, it is easy to automatically apply the current reality of the 1% that can print money at will, create debt slavery and rent seek at every choke point. Some fail to understand how radically different the Bitcoin ecosystem is from the existing centrally controlled economy and forms of governance.

Let’s examine this more closely. The existing fiat world is organized by physical and social confinement of populations within nation-state boundaries, where sovereignty of nations ostensibly determines currency and monetary policy. Creation of currency has been enforced by monopolies by means of taxation. This central authority creates levers of control that are now mostly co-opted by corporations. These levers prevent ordinary people from fully counting themselves in as the true source of legitimacy. It creates a chain of command where institutions and professionals can gain power to coerce others to serve the interests of those at the top of the hierarchy and make people work against their own interests.

In this environment, accumulation of money can be easily translated to power and corruption. We see how this has panned out in history. Those who have money can control the flow, get to the top of the system, buy politicians and even take over governments. As a result, we now have a massive inequality where a tiny fraction of the population, 0.001 percent, controls access to the majority of the wealth and transactional flow of the entire world.

In the last few decades, this corruption reached an extreme level after the dollar was taken off the gold standard, with the Federal Reserve and other private central banks (private corporations) using this power to print money out of thin air. With this and the monopolization of banks and payment systems, they maintained massive power to fund divisive resource wars and debase currency through inflation. This is now destroying the middle class and slowly turning the whole population toward a medieval-like debt servitude. Bitcoin completely challenges this system of control through enabling a decentralized network that cuts out third parties that create monopolies and insidious levers of control.

Money as Flow

In the Bitcoin distributed trust network, there are no choke- and checkpoints. Currency gains its true meaning – pure flow. What would it be like if currency functioned as flow rather than a form of control? As we move into a more decentralized future, the current hierarchical institutions and central authorities lose power. In the Bitcoin network, accumulation of money is not easily translated into power over others. All that those 1% Bitcoin adopters can do with their acquired money is to spend it. They cannot interfere with or undermine the integrity of the value transfer network. They cannot print more and debase the whole system and they can’t rent seek each transaction.

So let’s imagine a scenario where they try to buy up all the media, lobby politicians and buy real estate and land. With new blockchain based social media and forms of crowd-funding and micro-payments, it will become harder for big media organizations to own the airwaves, to control and dictate narratives. Also, it would be difficult for individuals and corporations to buy up politicians, as their transactions are transparent in the blockchain and private agendas would be harder to hide. Besides, politicians will themselves become increasingly irrelevant as taxation is completely transformed in a Bitcoin world.

In a Bitcoin ecosystem, the familiar world crumbles. It is an entirely new world. A 1% here looks a lot different than the 1% in the current fiat world. This is a world where people interact through voluntary association rather than coerced will. The current monopolies seen in the existing financial system would become difficult to maintain. It makes the government more transparent and eventually more accountable.

Perhaps the larger ramification of the blockchain invention is the potential of code to facilitate an equal application of law that ensures the principle of consensus. If a Bitcoin 1%er wants to buy land from farmers or even buy whole cities, the land owners would have to be willing to sell. People won’t as easily be manipulated and forced to act against their will. In a Bitcoin decentralized society, farmers and others have a choice to say no and this ability of each person to decide what to consent to defines personal power.

In the world created through this two-way voluntary participation, accumulation of money or goods means isolation. If one wants to create a society of control and domination, it would have to be a little pond kingdom where one can maintain an illusion of control. But the same rules of consensus apply and the participation in this pond is also voluntary. Everyone can freely choose what kind of community and people they wish to associate with.

As the two worlds interface at this early stage, some voice concern about the current 1% trying to buy up bitcoin. But if they do this in order to maintain the current power which they gained in the fiat world, they will shoot themselves in the foot. They cannot buy out the system, but can only buy into the new network just like everyone else, which would only lead to expanding and strengthening the decentralized network and accelerating the demise of the fiat system itself and the illegitimate authority that goes with it.

New Sovereignty

The Bitcoin network creates decentralized consensus at a large scale without anyone in the middle. Can we really understand the significance of this and imagine what a future created through a truly peer-to-peer network looks like?

With the invention of the blockchain, we are entering into a new era. Security expert and technologist Andreas Antonopoulos describes how, until 2008, sovereignty created currency. He notes how the world of sovereign currency ended in 2008 and that after 2008, currencies could be created by individuals. When broadly adopted, these currencies create their own sovereignty and purchasing power.

The current sociopolitical system is built on a long history of colonization. Western civilization has a dark past of violence, brutal subjugation of indigenous people to enforce domination and resource extraction. This unredeemed shadow is carried on even now as a force that has morphed into a pervasive globalized corporate power with its privatization and financialization of everyday life.

While the concept of sovereignty in the current nation-state paradigm is based on a colonial mentality where independence of a country was attained through conquest of others, the blockchain invention opens a door for a new kind of sovereignty, one that is not based on the logic of control and domination, but through mutually shared ideals and voluntary association.

Bitcoin is the world’s first stateless currency that transcends borders in a similar way as the Internet. Its unmediated flow delivers more power to the periphery. As a result it could dissolve the hegemony of U.S. empire and end the monarchy of the petrodollar that controls flows of oil, finance and global geopolitics. This could potentially shrink the wealth gap between the Global South and the North. For the first time in history, humanity has the option to really heal the wound of a long history of brutal colonization; to end major wars, transform poverty and inequality and move toward a more humane world. Humanity has a chance to embark on a new path, where the technology of Western society is used to serve the wisdom of indigenous cultures and together create a new civilization.

If we let the imagination follow its natural current freely, it leads into a future where collective creativity can solve the centralized problems of the old world. Even if Bitcoin as money dies tomorrow, that will only happen because a better designed blockchain cryptocurrency has come to replace it.

We don’t know exactly where this will go, as this kind of thing has never occurred before. But one thing is certain: The invention of the blockchain has already changed the world forever. No one can think of this world in the same way as before. The blockchain has already unleashed the flow of radical imagination, becoming the waves of uprising of a decentralized future.

Planet DebianGunnar Wolf: #bananapi → On how compressed files should be used

I am among the lucky people who got back home from DebConf with a brand new computer: a Banana Pi. Despite the name similarity, it is not affiliated with the very well known Raspberry Pi, although it is a comparable (and arguably better) machine: a dual-core ARM A7 system with 1GB RAM, several more on-board connectors, and the same form factor.

I have not yet been able to get it to boot, even from the images distributed on their site (although I cannot complain, I have not devoted more than an hour or so to the process!), but I do have a gripe about how the images are distributed.

I downloaded some images to play with: Bananian, Raspbian, a Scratch distribution, and Lubuntu. I know I have a long way to learn in order to contribute to Debian's ARM port, but if I can learn by doing... ☻

So, what is my gripe? That the images are distributed as archive files:

  0 gwolf@mosca『9』~/Download/banana$ ls -hl \
  > Lubuntu_For_BananaPi_v3.1.1.tgz Raspbian_For_BananaPi_v3.1.tgz \
  > Scratch_For_BananaPi_v1.0.tgz
  -rw-r--r-- 1 gwolf gwolf 222M Sep 25 09:52
  -rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz
  -rw-r--r-- 1 gwolf gwolf 1.3G Sep 25 10:01 Raspbian_For_BananaPi_v3.1.tgz
  -rw-r--r-- 1 gwolf gwolf 1.2G Sep 25 10:05 Scratch_For_BananaPi_v1.0.tgz

Now... that is quite an odd way to distribute image files! Especially when looking at their contents:

  0 gwolf@mosca『14』~/Download/banana$ unzip -l
  Archive:
  Length Date Time Name
  --------- ---------- ----- ----
  2032664576 2014-09-17 15:29 bananian-1409.img
  --------- -------
  2032664576 1 file
  0 gwolf@mosca『15』~/Download/banana$ for i in Lubuntu_For_BananaPi_v3.1.1.tgz \
  > Raspbian_For_BananaPi_v3.1.tgz Scratch_For_BananaPi_v1.0.tgz
  > do tar tzvf $i; done
  -rw-rw-r-- bananapi/bananapi 3670016000 2014-08-06 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img
  -rwxrwxr-x bananapi/bananapi 3670016000 2014-08-08 04:30 Raspbian_For_BananaPi_v3_1.img
  -rw------- bananapi/bananapi 3980394496 2014-05-27 01:54 Scratch_For_BananaPi_v1_0.img

And what is bad about them? That they force me to either have heaps of disk space available (2GB or 4GB for each image) or to spend valuable time extracting before recording the image each time.

Why not just compress the image file without archiving it? That is,

  0 gwolf@mosca『7』~/Download/banana$ tar xzf Lubuntu_For_BananaPi_v3.1.1.tgz
  0 gwolf@mosca『8』~/Download/banana$ xz Lubuntu_1404_For_BananaPi_v3_1_1.img
  0 gwolf@mosca『9』~/Download/banana$ ls -hl Lubun*
  -rw-r--r-- 1 gwolf gwolf 606M Aug 6 03:45 Lubuntu_1404_For_BananaPi_v3_1_1.img.xz
  -rw-r--r-- 1 gwolf gwolf 823M Sep 25 10:02 Lubuntu_For_BananaPi_v3.1.1.tgz

Now, wouldn't we need to decompress said files as well? Yes, but thanks to the magic of shell redirections, we can just do it on the fly. That is, instead of having 3×4GB+1×2GB files sitting on my hard drive, I just need to have several files ranging between 145M and I guess ~1GB. Then, it's as easy as doing:

  0 gwolf@mosca『8』~/Download/banana$ dd if=<(xzcat bananian-1409.img.xz) of=/dev/sdd

And the result should be the same: a fresh new card with Bananian ready to fly. Right, right, people using these files need to have xz installed on their systems, but... as it stands now, I suppose current prospective users of a Banana Pi won't fret about facing a standard Unix tool!
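The whole decompress-on-the-fly idea can be rehearsed safely without a real SD card. This is a self-contained sketch (all file names here are made up for the demo; it only requires the standard xz tools):

```shell
# Rehearse the pipeline against a throwaway file instead of /dev/sdd:
printf 'pretend disk image data\n' > /tmp/demo.img
xz -kf /tmp/demo.img                          # creates /tmp/demo.img.xz, keeps the original
xzcat /tmp/demo.img.xz | dd of=/tmp/restored.img bs=64K 2>/dev/null
cmp /tmp/demo.img /tmp/restored.img && echo "images match"
```

Once the pipeline looks right, swapping `of=/tmp/restored.img` for the real device is the only change needed.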

(Yes, I'll forward this rant to the Banana people, it's not just bashing on my blog :-P )

[update] Several people (thanks!) have contacted me stating that I use a bashism: The <(…) construct is specific to Bash. If you want to do this with any other shell, it can be done with a simple pipe:

  $ xzcat bananian-1409.img.xz | dd of=/dev/sdd

That means one less pipe for the kernel to handle, and is portable between different shells. Another possibility would be:

  $ xzcat bananian-1409.img.xz > /dev/sdd

Although that might not be desirable, as it bypasses the block-by-block nature of dd. I'm not sure whether it makes a real difference, but it's worth mentioning :)

And yes, some alternatives for not unarchiving the file — Here in the blog, an anon commenter suggests (respectively, for zip and .tar.gz files):

  $ dd if=<(unzip -p of=/dev/sdd
  $ dd if=<(tar -xOf Lubuntu_For_BananaPi_v3.1.1.tgz) of=/dev/sdd

And a commenter by IRC suggests:

  $ paxtar -xOaf Raspbian_For_BananaPi_v3.1.tgz Raspbian_For_BananaPi_v3_1.img | sudo dd bs=262144 of=/dev/


CryptogramNasty Vulnerability found in Bash

It's a big and nasty one.
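The bug in question is the "Shellshock" flaw (CVE-2014-6271): a vulnerable bash executes code smuggled in after a function definition stored in an environment variable. The widely circulated one-liner check looks like this:

```shell
# On a vulnerable bash this also prints "vulnerable"; a patched bash
# prints only "this is a test" (and may warn about the bad definition).
env x='() { :;}; echo vulnerable' bash -c 'echo this is a test'
```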

Invariably we're going to see articles pointing at this and at Heartbleed and claim a trend in vulnerabilities in open-source software. If anyone has any actual data other than these two instances and the natural human tendency to generalize, I'd like to see it.

Sociological ImagesHappy Birthday, bell hooks!

Gloria Jean Watkins (1952 – ) adopted her pen name, bell hooks, from her maternal great-grandmother Bell Blair Hooks. Her writing examines a broad range of topics, but one theme is the attention she draws to the interconnectivity of capitalism, race, and gender. Throughout her prolific career, she has repeatedly exposed the way these dimensions produce and perpetuate systems of oppression and domination.


Art by Andjelka Djukic. H/t Sociological Cinema.


Sociological ImagesSeeing Children’s Desire: Visibility and Sexual Orientation

In 2009, Benoit Denizet-Lewis wrote in the New York Times that youth were coming out as gay, bisexual, and lesbian at increasingly early ages. Coming out in middle school, though, often prompted parents to ask the classic question: “But how do you know you’re gay?”

The equally classic response to this question is, “Well, how do you know you’re not?” The response is meant to bring questioners’ attention to the invisible norm: heterosexuality. It’s a sexual orientation, too, and if a person must somehow determine that they are gay, then the same must be true of heterosexuality.

Of course, most heterosexuals simply respond: “I always knew.” At which point the gay or bisexual person just nods smugly. It’s very effective.

In any case, I was reminded of this when I came across a Buzzfeed collection of “painfully funny secrets” children think they’re hiding from their parents. A few of them were romantic or sexual secrets kept by four-, five- and six-year-olds.


I’m not saying that any of these secrets actually mean anything about these children’s sexual orientation, but they might. The first crush I can remember was in 2nd grade. His name was Brian and we cleaned up the teacher’s classroom after school in exchange for stickers. I never looked directly at him, nor him at me, but he was soooooo cuuuuuuuute!

Anyway, it’s interesting to me that parents have a difficult time believing that their children might have a pretty good idea who they like. The signs of their sexual and romantic interests start early. Then again, if parents are looking for signs that their children develop crushes on the other sex, it’s likely easier for them to see. The invisibility of heterosexuality as a sexual orientation can make it, paradoxically, impossible to miss, while the non-normativeness of homo- and bisexuality can make those orientations invisible.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


Racialicious#WeNeedDiverseBooks: Historical Fiction and Making Reading Fun

Gotta catch ‘em all– the history nerd’s pokemon

By Kendra James

Like most of my friends in elementary school, I was obsessed with the American Girl dolls and books. The dolls lacked comprehensive diversity back then, in that they had one single doll of colour until 1997. I owned Felicity Merriman, a white girl who lived in colonial Williamsburg, but received Addy Walker, a former slave who escapes from the South into Philadelphia, soon after she debuted in 1993. As per my mother’s rule, I read all six of Addy’s books before being gifted the doll. But unlike Felicity’s, I didn’t often revisit them for pleasure. In my constant search for American historical fiction with protagonists of colour written for young readers, I often come across the same problem I did when I was younger: it’s all really depressing.

Addy Walker’s story begins in Meet Addy while she’s still enslaved, and I have vivid memories of one paragraph where her overseer forces her to eat tobacco leaf worms. If you had asked me, when I was younger, to state a fact about Harriet Tubman I would have told you about the time her mistress threw a porcelain sugar bowl at her head. Meanwhile, Felicity’s biggest worry in life in Meet Felicity was saving a horse. My favourite young adult historical fiction author, Ann Rinaldi, wrote stories that spanned across races, but her romantic stories about southern belles and women of the revolutionary war were always more fun to read than her sanitised retellings of the Jeffersons and the Hemingses or of Sioux boarding schools.

In the pre-Mattel age, when the American Girl Doll franchise was still owned and partially run by Pleasant Rowland and her Pleasant Company, I devoured their 90-page novels about young girls scattered throughout various points of American history. Back then they were a genuinely decent source of early education and introduction into various facets of American history for an 8-year-old girl. I credit the dolls and their books for the love of middle and young adult historical fiction I took into my adult life, but that doesn’t mean they were all fun.

Maybe I fixated on strange things when I was younger, but it was always the worst elements of these books, American Girls and others, that stuck with me, and I get the feeling that’s not the experience for the little girls with a wider variety of characters who look like them to choose from.

White characters not only get a wider variety of books to choose from, but books in a wider variety of settings. Characters of colour in American hist-fic tend to exist strictly within certain boundaries of time or not at all. African-Americans exist within the boundaries of slavery, the Jim Crow South, or the Civil Rights movement. Native Americans exist in the mythical west until about 1870 or so, Asian-Americans exist during World War 2, only in the west (and only from Eastern countries), and I had to reach out to our followers to fill in the gaps my childhood reading material left when it came to Latin@s.

These stories need to be told, of course. Diverse literature for young readers is extremely important. The world needs YA literature about Japanese internment during the Second World War, but those shouldn’t be the only books Japanese-American children get to see themselves reflected in. This isn’t to encourage the erasure or minimisation of the realities that people of colour have historically faced, but rather a desire for authors and publishers to realise that all of us existed in America outside the times of our most publicised oppressions. And that, even during the most difficult times, we still had lives that didn’t necessarily completely revolve around the overarching political themes of the day.

With that in mind, and because I’m a 26-year-old woman who still reads almost exclusively YA and middle grade fiction, I’ve compiled a list (that is by no means complete) of historical fiction with POC characters that might allow young and middle adult readers to have a little more fun with their reading escapism.

American Fairy Trilogy, by Sarah Zettel: Having received an advanced copy of Dust Girl, the first book in this series, from Random House, I set it into my ‘Donate’ pile because neither the jacket flap nor the cover read as interesting to me. Callie, the book’s main protagonist, is a mixed race girl living with her mother in the middle of the Kansas dust bowl during The Great Depression. Not only is she mixed race (a white-passing mixed race Black girl whose hair is a dead giveaway), she’s also half-faerie. Now, only one of those things was obvious from the cover or the description of the first book, and I’ll let you guess which one that was. It wasn’t until the publisher sent the third book in the series that I peered at the cover and wondered to myself, “is this series about a Black girl?”

After a quick Google to confirm my suspicions I started the series and couldn’t put it down. Callie’s story goes from the Kansas dust bowl to the golden age of Hollywood, and out into jazz age Chicago as she searches for her father who’s been kidnapped by the Unseelie Fae. Actor Paul Robeson is a significant minor character, topics like minstrelsy, interracial relationships, and passing are discussed, the fantasy world is well constructed, and the fifteen year old characters act like fifteen year old characters.

Lesson learned: Don’t judge a book by its cover.

Aristotle and Dante Discover The Secrets of the Universe, by Benjamin Alire Sáenz: I haven’t gotten to Aristotle and Dante myself and would normally be hard pressed to consider a book set in 1987 to be historical fiction (I am not that old, thank you). But rave reviews from friends and suggestions from our readers prompted me to include it here. It’s described as a gay coming of age novel, one that doesn’t seem to have a dedicated plot, but instead tracks an evolving friendship between two boys in Texas.

From the official summary: “Aristotle is an angry teen with a brother in prison. Dante is a know-it-all who has an unusual way of looking at the world. When the two meet at the swimming pool, they seem to have nothing in common. But as the loners start spending time together, they discover that they share a special friendship—the kind that changes lives and lasts a lifetime. And it is through this friendship that Ari and Dante will learn the most important truths about themselves and the kind of people they want to be.”

The Diviners, by Libba Bray: Plucky girl psychic Evie O’Neill is the main character in Bray’s book, but much like in her last YA historical fantasy series, A Great And Terrible Beauty, she rounds out her cast of paranormally gifted main characters with an MOC, Memphis, a healer, and his younger brother Isaiah, a prophet. Both live in Harlem at the height of the renaissance. The Diviners’ greatest flaw is an annoying protagonist whose attitude and overuse of 1920s slang I never could quite accept. The rest of it — a richly painted New York City that ranges from the backstages of Broadway theatres to the abandoned mansions of Harlem, a plot filled with magic and murder, and a fun cast of supporting characters — makes that one flaw an easy enough one to overlook.

More than just long, this is a densely written book, with a lot of vivid detail for those of us really looking for the ‘historical’ in historical fiction. Some much younger readers may be turned off by how long it takes to get through a chapter, but for the rest I encourage you to read it before the sequel comes out in 2015.

…And Now Miguel, by Joseph Krumgold: I include this book with a caveat– it was written in 1953, based on a movie of the same name. While I remember enjoying it when I was younger, I don’t remember clearly whether or not it is written in a style that may reflect attitudes and language of the 1950s.

That said, this is definitely a book for younger readers. Mexican-American Miguel lives in New Mexico with his shepherding family and wants to go with the men in his family on their annual herding trip up the mountain. He prays to his town’s patron saint to allow him to go, and his wish comes true, but at a cost. His older brother is drafted into service for World War Two, and so Miguel has to go on the herding trip in his place. It’s an easy read with an easy, obvious moral for younger readers: Be careful what you wish for.

If I Ever Get Out Of Here, by Eric Gansworth: These days I don’t read many books with male protagonists (I know, I know– a misandrist to the end), but Gansworth’s book tells the story of two teenage boys (one Native American and one white) bonding over rock n’ roll in upstate New York in 1975. Given my love of 1970s rock I am, at the very least, intrigued enough to include it here. The summary reads:

Lewis “Shoe” Blake is used to the joys and difficulties of life on the Tuscarora Indian reservation in 1975: the joking, the Fireball games, the snow blowing through his roof. What he’s not used to is white people being nice to him — people like George Haddonfield, whose family recently moved to town with the Air Force. As the boys connect through their mutual passion for music, especially the Beatles, Lewis has to lie more and more to hide the reality of his family’s poverty from George. He also has to deal with the vicious Evan Reininger, who makes Lewis the special target of his wrath. But when everyone else is on Evan’s side, how can he be defeated? And if George finds out the truth about Lewis’s home — will he still be his friend?”

A Spy in the House, by Y.S. Lee: I’d hand this book to the teenager that’s already devoured the BBC’s Sherlock and/or loves Elementary. Taking place in Victorian London, Lee’s book is a slight departure from the rest of the list. Mary Quinn is an Asian-Irish orphan saved from the gallows by a school that specialises in training women spies. Her first mission has her going undercover as a lady’s maid in London to discover the whereabouts of stolen goods from India. Her work leads not only to her first successful mission, but the unlocking of her past.

Keisha Discovers Harlem (The Magic Attic Club), by Zoe Lewis: The Magic Attic Club books were similar to the American Girl Doll franchise, but existed at a slightly lower price point. The books revolved around a group of girls who discovered a steamer trunk of clothing and a magic, time-traveling mirror in a friend’s house. This was not the world’s best series (there’s a reason the company folded in 2007 while American Girl lives on), but Keisha got to do a lot, and the books tended to have a lighter tone than the AGD books, while still being equally as informative.

Flygirl, by Sherri Smith: This one is downloading onto my kindle as we speak. Elementary School Me was obsessed with World War Two and plowed through books like When Hitler Stole Pink Rabbit, Summer of My German Soldier, Number The Stars, Starring Sally J. Freedman as Herself, and several others. Like school curriculum, much of YA discourse focuses on the Holocaust and the European Theater. Literature about the American side of the war is heavily focused on white protagonists, with Under The Blood Red Sun and The Bracelet (a picture book) being the two Asian-American-focused stories that stick out from childhood.

Flygirl is about a mixed race girl named Ida who lives in Louisiana during the war. Her father was a pilot and all she wants to do is sign up for the Women Airforce Service Pilots. She could do so by passing as white, but has to consider what that means for her family, life, and identity. This is potentially heavy material, but I’m recommending it solely because this is exactly the kind of book I would have been looking for back in the fourth or fifth grade.

Mare’s War, by Tanita Davis: The same goes for Mare’s War, another book I haven’t read, but will since it’s about Black women serving in the Women’s Army Corps (something I still imagine myself doing). The summary reads as follows:

Meet Mare, a grandmother with flair and a fascinating past.

Octavia and Tali are dreading the road trip their parents are forcing them to take with their grandmother over the summer. After all, Mare isn’t your typical grandmother. She drives a red sports car, wears stiletto shoes, flippy wigs, and push-up bras, and insists that she’s too young to be called Grandma. But somewhere on the road, Octavia and Tali discover there’s more to Mare than what you see. She was once a willful teenager who escaped her less-than-perfect life in the deep South and lied about her age to join the African American battalion of the Women’s Army Corps during World War II.

Told in alternating chapters, half of which follow Mare through her experiences as a WAC member and half of which follow Mare and her granddaughters on the road in the present day, this novel introduces a larger-than-life character who will stay with readers long after they finish reading.”

Bud Not Buddy & The Mighty Miss Malone by Christopher Paul Curtis: Two books mired within the Great Depression with two equally spunky child characters searching for their fathers. I haven’t read Bud since middle school, but I’ve yet to ever go wrong recommending Curtis, a Coretta Scott King and Newbery Award winning author.



The Fire Horse Girl by Kay Honeyman: Another caveat: I haven’t read this one yet, and it’s only caught my eye because it deals with Chinese organised crime in the 1920s. Your 9th grader probably shouldn’t be watching Boardwalk Empire, but in case they do and they’d like a different take on organised crime during the same era, here we go. The summary:

Seventeen-year-old Jade Moon was born in 1906, the year of the Fire Horse, an ominous sign for Chinese girls. It signals willfulness, stubbornness, and impetuousness, all characteristics that embarrass her father and grandfather and cause derision and cruelty by her too-small village. So when Sterling Promise, a long-lost adopted cousin, appears and proposes she immigrate to America using false “paper son” papers, Jade Moon and her father agree to the plan. Jade Moon views this offer as escape and freedom; her father as the only opportunity to marry off his undesirable daughter. The interminable boat ride—and even more onerous imprisonment off California’s Angel Island—finally transitions to her treacherous entry into America. Jade Moon’s disguise as a young man and her homelessness pave the way for her involvement with the tong, a Chinese organized crime syndicate, and breathtaking danger at every turn.”


As I noted, this is by no means a complete list. Think of it as a jumping off point, rather than a comprehensive study guide. Here’s hoping it’s somewhat helpful to those of you looking to supply young readers (or yourselves) with some happier reading memories.


The post #WeNeedDiverseBooks: Historical Fiction and Making Reading Fun appeared first on Racialicious - the intersection of race and pop culture.

Planet DebianJulian Andres Klode: hardlink 0.3.0 released; xattr support

Today I not only submitted my bachelor thesis to the printing company, I also released a new version of hardlink, my file deduplication tool.

hardlink 0.3 now features support for extended attributes (xattrs), contributed by Tom Keel at Intel. If this does not work correctly, please blame him.

I also added support for a --minimum-size option.
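What a deduplicator like hardlink does can be sketched with plain coreutils (this is a conceptual sketch of the idea, not hardlink's actual code or command-line syntax): find files with identical content and replace the copies with hard links to a single inode.

```shell
# Two separate files with identical content:
mkdir -p /tmp/dedup-demo
printf 'same content\n' > /tmp/dedup-demo/a.txt
printf 'same content\n' > /tmp/dedup-demo/b.txt
# Only link them after verifying the contents really match:
cmp -s /tmp/dedup-demo/a.txt /tmp/dedup-demo/b.txt && \
    ln -f /tmp/dedup-demo/a.txt /tmp/dedup-demo/b.txt
# Both names now share one inode, so the data is stored only once:
stat -c '%i' /tmp/dedup-demo/a.txt /tmp/dedup-demo/b.txt
```

The hard part, which the tool automates, is doing the pairwise comparison efficiently across a whole tree (and, as of 0.3, deciding when differing xattrs should prevent the merge).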

Most of the other code has been tested since the upload of RC1 to experimental in September 2012.

The next major version will split up the code into multiple files and clean it up a bit. It’s getting a bit long now in a single file.

Filed under: Uncategorized

Krebs on Security$1.66M in Limbo After FBI Seizes Funds from Cyberheist

A Texas bank that’s suing a customer to recover $1.66 million spirited out of the country in a 2012 cyberheist says it now believes the missing funds are still here in the United States — in a bank account that’s been frozen by the federal government as part of an FBI cybercrime investigation.

In late June 2012, unknown hackers broke into the computer systems of Luna & Luna, LLP, a real estate escrow firm based in Garland, Texas. Unbeknownst to Luna, hackers had stolen the username and password that the company used to manage its account at Texas Brand Bank (TBB), a financial institution also based in Garland.

Between June 21, 2012 and July 2, 2012, fraudsters stole approximately $1.75 million in three separate wire transfers. Two of those transfers went to an account at the Industrial and Commercial Bank of China. That account was tied to the Jixi City Tianfeng Trade Limited Company in China. The third wire, in the amount of $89,651, was sent to a company in the United States, and was recovered by the bank.

Jixi is in the Heilongjiang province of China on the border with Russia, a region apparently replete with companies willing to accept huge international wire transfers without asking too many questions. A year before this cyberheist took place, the FBI issued a warning that cyberthieves operating out of the region had been the recipients of approximately $20 million in the year prior — all funds stolen from small to mid-sized businesses through a series of fraudulent wire transfers sent to Chinese economic and trade companies (PDF) on the border with Russia.

Luna became aware of the fraudulent transfers on July 2, 2012, when the bank notified the company that it was about to overdraw its accounts. The theft put Luna & Luna in a tough spot: The money the thieves stole was being held in escrow for the U.S. Department of Housing and Urban Development (HUD). In essence, the crooks had robbed Uncle Sam, and this was exactly the argument that Luna used to talk its bank into replacing the missing funds as quickly as possible.

“Luna argued that unless TBB restored the funds, Luna and HUD would be severely damaged with consequences to TBB far greater than the sum of the swindled funds,” TBB wrote in its original complaint (PDF). TBB notes that it agreed to reimburse the stolen funds, but that it also reserved its right to legal claims against Luna to recover the money.

When TBB later demanded repayment, Luna refused. The bank filed suit on July 1, 2013, in state court, suing to recover the approximately $1.66 million that it could not claw back, plus interest and attorney’s fees.

For the ensuing year, TBB and Luna wrangled in the courts over the venue of the trial. Luna also counterclaimed that the bank’s security was deficient because it only relied on a username and password, and that TBB should have flagged the wires to China as highly unusual.

TBB notes that per a written agreement with the bank, Luna had instructed the bank to process more than a thousand wire transfers from its accounts to third-party accounts. Further, the bank pointed out that Luna had been offered but refused “dual controls,” a security measure that requires two employees to sign off on all wire transfers before the money is allowed to be sent.

In August, Luna alerted (PDF) the U.S. District Court for the Northern District of Texas that in direct conversations with the FBI, an agent involved in the investigation disclosed that the $1.66 million in stolen funds were actually sitting in an account at JPMorgan Chase, which was the receiving bank for the fraudulent wires. Both Luna and TBB have asked the government to consider relinquishing the funds to help settle the lawsuit.

The FBI did not return calls seeking comment. The Office of the U.S. attorney for the Northern District of Texas, which is in the process of investigating potential criminal claims related to the fraudulent transfers, declined to comment except to say that the case is ongoing and that no criminal charges have been filed to date.

As usual, this cyberheist resulted from missteps by both the bank and the customer. Dual controls are a helpful — but not always sufficient — security control that Luna should have adopted, particularly given how often these cyberheists are perpetrated against title and escrow firms. But it is galling that it is easier to find more robust, customer-facing security controls at your average email or other cloud service provider than it is at one of thousands of financial institutions in the United States.

If you run a small business and are managing your accounts online, you’d be wise to expect a similar attack on your own accounts and prepare accordingly. That means taking your business to a bank that offers more than just usernames, passwords and tokens for security. Shop around for a bank that lets you secure your transfers with some sort of additional authentication step required from a mobile device. These security methods can be defeated of course, but they present an extra hurdle for the bad guys, who probably are more likely to go after the lower-hanging fruit at thousands of other financial institutions that don’t offer more modern security approaches.

But if you’re expecting your bank to protect your assets should you or one of your employees fall victim to a malware phishing scheme, you could be in for a rude awakening. Keep a close eye on your books, require that more than one employee sign off on all large transfers, and consider adopting some of these: Online Banking Best Practices for Businesses.

Worse Than FailureCodeSOD: A Pentester's Paradise

Tom works as a pentester and, as such, gets paid big bucks for finding flaws in his clients' websites, usually the less-than-obvious, 'gotcha'-level kind.

While testing a critical web application for a very large corporate client, he noticed some odd behavior surrounding a page that validates user logins.


Apparently, the original developer decided that it would be a good idea to send the database credentials to the client in a snippet of JavaScript and then use them to formulate a GET request to the server, presumably where the user is validated.
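The offending snippet isn't reproduced here, but the anti-pattern can be imagined along these lines. This is a purely hypothetical reconstruction; every name below is invented for illustration, not taken from the actual site:

```javascript
// Hypothetical reconstruction of the flaw: database credentials baked
// into the JavaScript served to the browser, then echoed back to the
// server in a GET query string. Anyone can read them via "view source",
// and they end up in proxy and server access logs too.
const DB_USER = "appuser";   // invented for illustration
const DB_PASS = "s3cret";    // invented for illustration

function buildValidationUrl(username, password) {
  return "/validateUser?dbUser=" + encodeURIComponent(DB_USER) +
         "&dbPass=" + encodeURIComponent(DB_PASS) +
         "&user=" + encodeURIComponent(username) +
         "&pass=" + encodeURIComponent(password);
}

console.log(buildValidationUrl("alice", "hunter2"));
```

Credentials belong on the server side only; the client should never see anything beyond a session token.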

I'm not sure what other surprises Tom found while working for this particular client, but I hope the developer's reach was mercifully limited.


Planet DebianPetter Reinholdtsen: Suddenly I am the new upstream of the lsdvd command line tool

I use the lsdvd tool to handle my fairly large DVD collection. It is a nice command line tool to get details about a DVD, like title, tracks, track length, etc., in XML, Perl or human readable format. But lsdvd has not seen any new development since 2006 and has a few irritating bugs affecting its use with some DVDs. Upstream seemed to be dead, and in January I sent a small probe asking for a version control repository for the project, without any reply. But I use it regularly and would like to get an updated version into Debian. So two weeks ago I tried harder to get in touch with the project admin, and after getting a reply from him explaining that he was no longer interested in the project, I asked if I could take over. And yesterday, I became project admin.

I've been in touch with a Gentoo developer and the Debian maintainer interested in joining forces to maintain the upstream project, and I hope we can get a new release out fairly quickly, collecting the patches spread around on the internet into one place. I've added the relevant Debian patches to the freshly created git repository, and expect the Gentoo patches to make it too. If you have a DVD collection and care about command line tools, check out the git source and join the project mailing list. :)

Sam VargheseDeath of a teenager: why were police not asked obvious questions?

THERE are obvious questions which should have been put to the police in the wake of the shooting of Numan Haider, an 18-year-old Muslim man, in the Melbourne suburb of Endeavour Hills on Tuesday (September 23) night.

But it’s doubtful that any reporter from the mainstream media – which appears to function more as a propaganda arm of government – will ask these questions.

Why did police ask a person who they acknowledge was under surveillance to come in for an interview at night, and alone?

Why did police search this man’s house without a warrant the same evening? If someone is suspected of doing something, does that equate to guilt?

Why did police agree to come out and meet this man in the car park? Where the hell did they receive training for dealing with people like this teenager?

Why did they not insist on meeting him in broad daylight, in the police station, along with a lawyer or someone else so that there would be witnesses to whatever happened?

Was the knife that Haider had on his person allowed under the prevailing laws in Melbourne? Or did it violate the existing laws?

And finally, why have journalists lost that one trait that should be a hallmark of their character – scepticism? Why do they swallow anything and everything that is dished out?

Mark ShuttleworthWhat Western media and politicians fail to mention about Iraq and Ukraine

Be careful of headlines, they appeal to our sense of the obvious and the familiar, they entrench rather than challenge established stereotypes and memes. What one doesn’t read about every day is usually more interesting than what’s in the headlines. And in the current round of global unease, what’s not being said – what we’ve failed to admit about our Western selves and our local allies – is central to the problems at hand.

Both Iraq and Ukraine, under Western tutelage, failed to create states which welcome diversity. Both Iraq and the Ukraine aggressively marginalised significant communities, with the full knowledge and in some cases support of their Western benefactors. And in both cases, those disenfranchised communities have rallied their cause into wars of aggression.

Reading the Western media one would think it’s clear who the aggressors are in both cases: Islamic State and Russia are “obvious bad actors” whose behaviour needs to be met with stern action. Russia clearly has no business arming rebels with guns they use irresponsibly to tragic effect, and the Islamic State are clearly “a barbaric, evil force”. If those gross simplifications, reinforced in the Western media, define our debate and discussion on the subject then we are destined to pursue some painful paths with little but frustration to show for the effort, and nasty thorns that fester indefinitely. If that sounds familiar it’s because yes, this is the same thing happening all over again. In a prior generation, only a decade ago, anger and frustration at 9/11 crowded out calm deliberation and a focus on the crimes in favour of shock and awe. Today, out of a lack of insight into the root cause of Ukrainian separatism and Islamic State’s attractiveness to a growing number across the Middle East and North Africa, we are about to compound our problems by slugging our way into a fight we should understand before we join.

This is in no way to say that the behaviour of Islamic State or Russia are acceptable in modern society. They are not. But we must take responsibility for our own behaviour first and foremost; time and history are the best judges of the behaviour of others.

In the case of the Ukraine, it’s important to know how miserable it has become for native Russian speakers born and raised in the Ukraine. People who have spent their entire lives as citizens of the Ukraine who happen to speak Russian at home, at work, in church and at social events have found themselves discriminated against by official decree from Kiev. Friends of mine with family in Odessa tell me that there have been systematic attempts to undermine and disenfranchise Russian speakers in the Ukraine. “You may not speak in your home language in this school”. “This market can only be conducted in Ukrainian, not Russian”. It’s important to appreciate that being a Russian speaker in Ukraine doesn’t necessarily mean one is not perfectly happy to be a Ukrainian. It just means that the Ukraine is a diverse cultural nation and has been throughout our lifetimes. This is a classic story of discrimination. Friends of mine who grew up in parts of Greece tell a similar story about the Macedonian culture being suppressed – schools being forced to punish Macedonian language spoken on the playground.

What we need to recognise is that countries – nations – political structures – which adopt ethnic and cultural purity as a central idea, are dangerous breeding grounds for dissent, revolt and violence. It matters not if the government in question is an ally or a foe. Those lines get drawn and redrawn all the time (witness the dance currently under way to recruit Kurdish and Iranian assistance in dealing with IS, who would have thought!) based on marriages of convenience and hot button issues of the day. Turning a blind eye to thuggery and stupidity on the part of your allies is just as bad as making sure you’re hanging with the cool kids on the playground even if it happens that they are thugs and bullies –  stupid and shameful short-sightedness.

In Iraq, the government installed and propped up with US money and materials (and the occasional slap on the back from Britain) took a pointedly sectarian approach to governance. People of particular religious communities were removed from positions of authority, disqualified from leadership, hunted and imprisoned and tortured. The US knew that leading figures in their Iraqi government were behaving in this way, but chose to continue supporting the government which protected these thugs because they were “our people”. That was a terrible mistake, because it is those very communities which have morphed into Islamic State.

The modern nation states we call Iraq and the Ukraine – both with borders drawn in our modern lifetimes – are intrinsically diverse, intrinsically complex, intrinsically multi-cultural parts of the world. We should know that a failure to create governments of that diversity, for that diversity, will result in murderous resentment. And yet, now that the lines for that resentment are drawn, we are quick to choose sides, precisely the wrong position to take.

What makes this so sad is that we know better and demand better for ourselves. The UK and the US are both countries who have diversity as a central tenet of their existence. Freedom of religion, freedom of expression, the right to a career and to leadership on the basis of competence rather than race or creed are major parts of our own identity. And yet we prop up states who take precisely the opposite approach, and wonder why they fail, again and again. We came to these values through blood and pain, we hold on to these values because we know first hand how miserable and how wasteful life becomes if we let human tribalism tear our communities apart. There are doors to universities in the UK on which have hung the bodies of religious dissidents, and we will never allow that to happen again at home, yet we prop up governments for whom that is the norm.

The Irish Troubles were a conflict nobody could win. They were resolved through dialogue. South African terrorism in the ’80s was a war nobody could win. It was resolved through dialogue and the establishment of a state for everybody. Time and time again, “terrorism” and “barbarism” are words used to describe fractious movements by secure, distant seats of power, and in most of those cases, allowing that language to dominate our thinking leads to wars that nobody can win.

Russia made a very grave error in arming Russian-speaking Ukrainian separatists. But unless the West holds Kiev to account for its governance, unless it demands an open society free of discrimination, the misery there will continue. IS will gain nothing but contempt from its demonstrations of murder – there is no glory in violence on the defenceless and the innocent – but unless the West bends its might to the establishment of societies in Syria and Iraq in which these religious groups are welcome and free to pursue their ambitions, murder will be the only outlet for their frustration. Politicians think they have a new “clean” way to exert force – drones and airstrikes without “boots on the ground”. Believe me, that’s false. Remote control warfare will come home to fester on our streets.


Planet DebianMike Hommey: So, hum, bash…

So, I guess you heard about the latest bash hole.

What baffles me is that the following still is allowed:

env echo='() { xterm;}' bash -c "echo this is a test"

Interesting replacements for “echo”, “xterm” and “echo this is a test” are left as an exercise to the reader.

Update: Another thing that bugs me: Why is this feature even enabled in posix mode? (the mode you get from bash --posix, or, more importantly, when running bash as sh) After all, export -f is a bashism.
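The bashism in question is easy to see in isolation. export -f serialises a function definition into the environment so that a child bash picks it up; the bug abused the fact that bash would parse (and execute trailing code in) any environment variable that merely looked like such a definition. This harmless demonstration works the same on patched and unpatched bash:

```shell
# A function exported with "export -f" survives into a child bash,
# because it travels as an environment variable.
bash -c 'greet() { echo "hi from an exported function"; }; export -f greet; bash -c greet'
```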

Planet Linux AustraliaCraige McWhirter: Enabling OpenStack Roles To Resize Volumes Via Policy

If you have volume backed OpenStack instances, you may need to resize them. In most usage cases you'll want to have un-privileged users resize the instances. This documents how you can modify the Cinder policy to allow tenant members assigned to a particular role to have permissions to resize volumes.


  • You've already created your OpenStack tenant.
  • You've already created your OpenStack user.
  • You know how to allocate roles to users in tenants.

Select the Role

You will need to create or identify a suitable role. In this example I'll use "Support".

Modify policy.json

Once the role has been created or identified, add these lines to the /etc/cinder/policy.json on the Cinder API server(s):

"context_is_support": [["role:Support"]],
"admin_or_support":  [["is_admin:True"], ["rule:context_is_support"]],

Modify "volume_extension:volume_admin_actions:reset_status" to use the new context:

"volume_extension:volume_admin_actions:reset_status": [["rule:admin_or_support"]],
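If you prefer to script the edit, a small Python sketch of the same change follows. The policy file path is deployment-specific and the helper name is my own; the rule syntax matches the old-style JSON policy format used above:

```python
import json

def add_support_rules(policy):
    """Add rules granting the "Support" role the reset_status action.

    `policy` is the dict loaded from the Cinder policy.json. Returns
    the same dict with the three rules from the guide applied.
    """
    policy["context_is_support"] = [["role:Support"]]
    policy["admin_or_support"] = [["is_admin:True"],
                                  ["rule:context_is_support"]]
    policy["volume_extension:volume_admin_actions:reset_status"] = \
        [["rule:admin_or_support"]]
    return policy

# Typical use (path assumed; back up the file first):
# with open("/etc/cinder/policy.json") as f:
#     policy = json.load(f)
# with open("/etc/cinder/policy.json", "w") as f:
#     json.dump(add_support_rules(policy), f, indent=4)
```

Remember to restart the Cinder API service afterwards so the new policy takes effect.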

Add users to the role

Add users who need privileges to resize volumes to the "Support" role in their tenant.

The users you have added to the "Support" role should now be able to resize volumes.

Planet DebianRuss Allbery: Review: Turn the Ship Around!

Review: Turn the Ship Around!, by L. David Marquet

Publisher: Portfolio
Copyright: 2012
ISBN: 1-101-62369-1
Format: Kindle
Pages: 272

Turn the Ship Around! (yes, complete with the irritating exclamation point in the title) is marketed to the business and management non-fiction market, which is clogged with books claiming to provide simple techniques to be a great manager or fix an organization. If you're like me, this is a huge turn-off. The presentation of the books is usually just shy of the click-bait pablum of self-help books. Many of the books are written by famous managers best known for doing horrible things to their staff (*cough* Jack Welch). It's hard to get away from the feeling that this entire class of books is an ocean of bromides covering a small core of outright evil.

This book is not like that, and Marquet is not one of those managers. It can seem that way at times: it is presented in a format that caters to short attention spans, with summaries of primary points at the end of every short chapter and occasionally annoying questions sprinkled throughout. I'm capable of generalizing information to my own life without being prompted by study questions, thanks. But that's just form. The core of this book is a surprisingly compelling story of Marquet's attempt to introduce a novel management approach into one of the most conservative and top-down of organizations: a US Navy nuclear submarine.

I read this book as an individual employee, and someone who has no desire to ever be a manager. But I recently changed jobs and significantly disrupted my life because of a sequence of really horrible management decisions, so I have strong opinions about, at least, the type of management that's bad for employees. A colleague at my former employer recommended this book to me while talking about the management errors that were going on around us. It did such a good job of reinforcing my personal biases that I feel like I should mention that as a disclaimer. When one agrees with a book this thoroughly, one may not have sufficient distance from it to see the places where its arguments are flawed.

At the start of the book, Marquet is assigned to take over as captain of a nuclear submarine that's struggling. It had a below-par performance rating, poor morale, and the worst re-enlistment rate in the fleet, and was not advancing officers and crew to higher ranks at anywhere near the level of other submarines. Marquet brought to this assignment some long-standing discomfort with the normal top-down decision-making processes in the Navy, and decided to try something entirely different: a program of radical empowerment, bottom-up decision-making, and pushing responsibility as far down the chain of command as possible. The result (as you might expect given that you can read a book about it) was one of the best-performing submarines in the fleet, with retention and promotion rates well above average.

There's a lot in here about delegated decision-making and individual empowerment, but Turn the Ship Around! isn't only about that. Those are old (if often ignored) rules of thumb about how to manage properly. I think the most valuable part of this book is where Marquet talks frankly about his own thought patterns, his own mistakes, and the places where he had to change his behavior and attitude in order to make his strategy successful. It's one thing to say that individuals should be empowered; it's quite another to stop empowering them (which is still a top-down strategy) and start allowing them to be responsible. To extend trust and relinquish control, even though you're the one who will ultimately be held responsible for the performance of the people reporting to you. One of the problems with books like this is that they focus on how easy the techniques presented in the book are. Marquet does a more honest job in showing how difficult they are. His approach was not complex, but it was emotionally quite difficult, even though he was already biased in favor of it.

The control, hierarchy, and authority parts of the book are the most memorable, but Marquet also talks about, and shows through specific examples from his command, some accompanying principles that are equally important. If everyone in an organization can make decisions, everyone has to understand the basis for making those decisions and understand the shared goals, which requires considerable communication and open discussion (particularly compared to a Navy ideal of an expert and taciturn captain). It requires giving people the space to be wrong, and requires empowering people to correct each other without blame. (There's a nice bit in here about the power of deliberate action, and while Marquet's presentation is more directly applicable to the sorts of physical actions taken in a submarine, I was immediately reminded of code review.) Marquet also has some interesting things to say about the power of, for lack of a better term, esprit de corps, how to create it, and the surprising power of acting like you have it until you actually develop it.

As mentioned, this book is very closely in alignment with my own biases, so I'm not exactly an impartial reviewer. But I found it fascinating the degree to which the management situation I left was the exact opposite of the techniques presented in this book in nearly every respect. I found it quite inspiring during my transition period, and there are bits of it that I want to read again to keep some of the techniques and approaches fresh in my mind.

There is a fair bit of self-help-style packaging and layout here, some of which I found irritating. If, like me, you don't like that style of book, you'll have to wade through a bit of it. I would have much preferred a more traditional narrative story from which I could draw my own conclusions. But it's more of a narrative than most books of this sort, and Marquet is humble enough to show his own thought processes, tensions, and mistakes, which adds a great deal to the presentation. I'm not sure how directly helpful this would be for a manager, since I've never been in that role, but it gave me a lot to think about when analyzing successful and unsuccessful work environments.

Rating: 8 out of 10

Geek FeminismQuick hit: “I’ll fight them as an engineer”

Thanks to a backchannel comment earlier, I had the thought that Peggy Seeger wrote a way better version of Lean In back in 1970, when Sheryl Sandberg was a baby. For those who didn’t spend their teen years listening to seventies folk music when all their peers were listening to rock and/or roll, here’s her song “I’m Gonna Be an Engineer”, with a bonus animation by Ken Wong:
(embedded video: Peggy Seeger, “I’m Gonna Be an Engineer”, animated by Ken Wong)

Oh, but now the times are harder and me Jimmy’s got the sack;
I went down to Vicker’s, they were glad to have me back.
But I’m a third-class citizen, my wages tell me that
But I’m a first-class engineer!

The boss he says “We pay you as a lady,
You only got the job because I can’t afford a man,
With you I keep the profits high as may be,
You’re just a cheaper pair of hands.”

Well, I listened to my mother and I joined a typing pool
Listened to my lover and I put him through his school
If I listen to the boss, I’m just a bloody fool
And an underpaid engineer
I been a sucker ever since I was a baby
As a daughter, as a mother, as a lover, as a dear
But I’ll fight them as a woman, not a lady
I’ll fight them as an engineer!

44 years later, Australian businessperson Evan Thornley — who was six years old when Seeger wrote “I’m Gonna Be an Engineer” — presented a slide at a startup conference that said: “Women: like men, only cheaper.”

The same week, Ashe Dryden wrote:

In a world where a business’s bottom-line comes before anything else, industries profit from the unequal treatment of their employees. Marginalized people often have to go above and beyond the work being done by their more privileged coworkers to receive the same recognition. The problem is readily apparent for women of color, who make between 10 and 53% less than their white male counterparts. The situation is such that compensating people equally is seen as a radical act. In maintaining an undervalued workforce, businesses create even more profit.

(Emphasis author’s.)

Thanks to Maco for reminding me both that the song exists and of how timely it is almost half a century later. There’s some good news, though: Peggy Seeger is alive and well, and still performing and releasing music. She turns 80 years old next year and according to her Twitter bio, she’s openly bi and poly. (Footnote: happy Bisexual Awareness Week! Yes, we get a whole week now.)


Planet DebianLaura Arjona: 10 short steps to contribute translations to free software for Android

This small guide assumes that you know how to create a public repository with git (or other version control system). Maybe some projects use other VCS, Subversion or whatever; the process would be similar although the commands will be different of course.

If you don’t want to use any VCS, you can just download the corresponding file, translate it, and send it by email or to the BTS of the project, but the commands required are very easy and you’ll see soon that using git (or any VCS) is quite comfortable and less scary than what it seems.

So, you were going to recommend a nice app that you use or found in F-Droid to your friend, but she does not understand English. Why not translate the app for her? And for everybody? It’s a job that can be done in 15 minutes or so (Android apps have very short strings, few menus, and so on). Let’s go!

1.- Search the app in the F-Droid website

You can do it by going to the URL:


Then, open the details of the app, and learn where’s the source code.

2.- Clone the source code

If you have an account in that forge, fork/clone the project into your account, and then, clone your fork/clone in local.

If you haven’t got an account in that forge, clone the project in local.

git clone URLofTheProjectOrYourClone

3.- In local, create a new branch and check it out.

cd nameofrepo

git checkout -b Spanish

4.- Then, copy the “res/values” folder into a “res/values-XX” folder (where XX is your language code)

cp -R ./res/values ./res/values-es

5.- Translate

Edit the “strings.xml” file that is in the “res/values-XX” folder, and change the English strings to your language (respect the XML format).
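For example, here is what a hypothetical strings.xml and its Spanish counterpart might look like (the string names and values are invented for illustration; only the text changes, never the name attributes):

```xml
<!-- res/values/strings.xml (original) -->
<resources>
    <string name="app_name">Notes</string>
    <string name="action_save">Save</string>
</resources>

<!-- res/values-es/strings.xml (translated: values change, names stay) -->
<resources>
    <string name="app_name">Notas</string>
    <string name="action_save">Guardar</string>
</resources>
```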

6.- Translate other files, or delete them

If there are more files in that folder (e.g. “arrays.xml”), review them to know if they have “translatable” strings. If yes, translate them. If not, delete the files.

7.- Commit

When you are finished, commit your changes:

git add res/values-es/*

git commit -a

(Message can be “Spanish translation” or so)

8.- Push your changes to your public repo

If you didn’t create a public clone of the repo in your forge, create a public repo and push your local stuff into there.

git push --all

9.- Request a merge to the original repo

(Using the web interface of the forge, if it is the same for the original repo and your clone, or sending an email or creating an issue and providing the URL of your repo). For example, open a new issue in the project’s BTS

Title: Spanish translation available for merging

Body: Hi everybody.

Thanks for your work in "nameofapp".

I have completed a Spanish translation, it's available for review/merge in the Spanish branch of my repo:


Best regards

10.- Congratulations!

Translations are new features, and having a new feature in your app for free is a great thing, so probably the app developer(s) will merge your translation soon.

Share your joy with your friends, so they begin to use the app you translated, and maybe become translators too!


You can comment on this post in this thread.

Filed under: Tools, Writings (translations) Tagged: Android, Contributing to libre software, English, Free Software, libre software, translations

Planet DebianJulian Andres Klode: APT 1.1~exp3 released to experimental: First step to sandboxed fetcher methods

Today, we worked, with the help of ioerror on IRC, on reducing the attack surface in our fetcher methods.

There are three things that we looked at:

  1. Reducing privileges by setting a new user and group
  2. chroot()
  3. seccomp-bpf sandbox

Today, we implemented the first of them. Starting with 1.1~exp3, the APT directories /var/cache/apt/archives and /var/lib/apt/lists are owned by the “_apt” user (username suggested by pabs). The methods switch to that user shortly after the start. The only methods doing this right now are: copy, ftp, gpgv, gzip, http, https.

If privileges cannot be dropped, the methods will fail to start. No fetching will be possible at all.

Known issues:

  • We drop all groups except the primary gid of the user
  • copy breaks if that group has no read access to the files

We plan to also add chroot() and seccomp sandboxing later on, to reduce the attack surface on untrusted files and protocol parsing.
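The drop itself follows a standard pattern. A minimal C sketch (illustrative only, not APT's actual code) shows the order of operations, including the supplementary-group behaviour noted in the known issues above:

```c
#include <grp.h>
#include <pwd.h>
#include <unistd.h>

/* Drop root privileges to the given user. Returns 0 on success,
 * -1 if we are not root (nothing to drop), -2 on any error.
 * Order matters: supplementary groups first, then gid, then uid;
 * after setuid() the other calls would no longer be permitted. */
int drop_to_user(const char *username)
{
    if (geteuid() != 0)
        return -1;
    struct passwd *pw = getpwnam(username);
    if (pw == NULL)
        return -2;
    /* Keeps only the primary gid: this is the "known issue" above,
     * since all other supplementary groups are discarded. */
    if (setgroups(1, &pw->pw_gid) != 0)
        return -2;
    if (setgid(pw->pw_gid) != 0)
        return -2;
    if (setuid(pw->pw_uid) != 0)
        return -2;
    /* Paranoia check: regaining root must now be impossible. */
    if (setuid(0) == 0)
        return -2;
    return 0;
}

/* Usage sketch, matching the "fail to start" behaviour described above:
 *   if (drop_to_user("_apt") != 0) { refuse to fetch anything }        */
```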


CryptogramJulian Sanchez on the NSA and Surveillance Reform

Julian Sanchez of the Cato Institute has a lengthy audio interview on NSA surveillance and reform. Worth listening to.

Krebs on SecurityJimmy John’s Confirms Breach at 216 Stores

More than seven weeks after this publication broke the news of a possible credit card breach at nationwide sandwich chain Jimmy John’s, the company now confirms that a break-in at one of its payment vendors jeopardized customer credit and debit card information at 216 stores.

On July 31, KrebsOnSecurity reported that multiple banks were seeing a pattern of fraud on cards that were all recently used at Jimmy John’s locations around the country. That story noted that the company was working with authorities on an investigation, and that multiple Jimmy John’s stores contacted by this author said they ran point-of-sale systems made by Newtown, Pa.-based Signature Systems.

In a statement issued today, Champaign, Ill.-based Jimmy John’s said customers’ credit and debit card data was compromised after an intruder stole login credentials from the company’s point-of-sale vendor and used these credentials to remotely access the point-of-sale systems at some corporate and franchised locations between June 16, 2014 and Sept. 5, 2014.

“Approximately 216 stores appear to have been affected by this event,” Jimmy John’s said in the statement. “Cards impacted by this event appear to be those swiped at the stores, and did not include those cards entered manually or online. The credit and debit card information at issue may include the card number and in some cases the cardholder’s name, verification code, and/or the card’s expiration date. Information entered online, such as customer address, email, and password, remains secure.”

The company has posted a listing on its Web site of the restaurant locations affected by the intrusion. There are more than 1,900 franchised Jimmy John’s locations across the United States, meaning this breach impacted roughly 11 percent of all stores.

The statement from Jimmy John’s doesn’t name the point of sale vendor, but company officials confirm that the point-of-sale vendor that was compromised was indeed Signature Systems. Officials from Signature Systems could not be immediately reached for comment, and it remains unclear if other companies that use its point-of-sale solutions may have been similarly impacted.

Point-of-sale vendors remain an attractive target for cyber thieves, perhaps because so many of these vendors enable remote administration on their hardware and yet secure those systems with little more than a username and password — and often easy-to-guess credentials to boot.

Last week, KrebsOnSecurity reported that a different hacked point-of-sale provider was the driver behind a breach that impacted more than 330 Goodwill locations nationwide. That breach, which targeted payment vendor C&K Systems Inc., persisted for 18 months, and involved two other as-yet unnamed C&K customers.

RacialiciousMust Read: Guernica’s take on Class

From Guernica


Guernica, the magazine of arts and culture, dedicated their latest special issue to the class divide. But, as most of us reading this blog know, race and class are not so easily separated. And despite people online and in activist circles arguing that the social issue of our time is no longer race, looking at only one issue in a vacuum means that our proposed solutions to societal ills will always feel incomplete.

Two essays in the issue beautifully and painfully explain the paradigm Patricia Hill Collins outlined in Black Feminist Thought. Race, class, and gender are interlocking systems of oppression:

Viewing relations of domination for Black women for any given sociohistorical context as being structured via a system of interlocking race, class, and gender oppression expands the focus of analysis from merely describing the similarities and differences distinguishing these systems of oppression and focuses greater attention on how they interconnect. Assuming that each system needs the others in order to function creates a distinct theoretical stance that stimulates the rethinking of basic social science concepts.

The first piece is Margo Jefferson’s “Scenes from a Life in Negroland.” A sample:

We thought of ourselves as the Third Race, poised between the masses of Negroes and all classes of Caucasians. Like the Third Eye, the Third Race possessed a wisdom, intuition, and enlightened knowledge the other two races lacked. Its members had education, ambition, sophistication, and standardized verbal dexterity.

—If, as was said, too many of us ached, longed, strove to be be be be White White White White WHITE;

—If (as was said) many of us boasted overmuch of the blood des blancs which for centuries had found blatant or surreptitious ways to flow, course, and trickle tepidly through our veins;

—If we placed too high a value on the looks, manners, and morals of the Anglo-Saxon…

…White people did too. They wanted to believe they were the best any civilization could produce. They wanted to be white just as much as we did. They worked just as hard at it. They failed just as often. But they could pass so no one objected.

“Negroland” is a complex, complicated piece. As I read I was turned off, infuriated, dismayed, delighted, aghast, and provoked enough to blast it out to my network and solicit more opinions. I suggest reading, sitting with it for a while, and sorting out your feelings a bit later.

Familiar in a different way is “Ghosts in the Land of Plenty.” Luis Alberto Urrea opens:

Why don’t we stop lying? Why don’t we deal with reality? Race is easy—class is hard. That politically incorrect, Mexican-excoriating bastard Edward Abbey told the truth: “The conservatives love their cheap labor; the liberals love their cheap cause. (Neither group, you will notice, ever invites the immigrants to move into their homes. Not into their homes!)” Immigration is so last century. But “illegal” immigration is still paranoiacally embraced in this country as a race issue. The “browning” of pristine white America. (Sorry, Crazy Horse.) Among my sisters and brothers bussing your lunch table, however, you will never see an Octavio Paz or the Mexican consul general of Dallas. You will see people of the lower class, running for their lives. Immigration was and is a class issue. Invisible people escape doom to serve us as extra-invisible people, made more invisible by language, skin color, and class. You can’t multiply a zero, but somehow they manage to become doubly nothing in the Land of Plenty.

I am an invisible man who refused to disappear.

Pointing a righteous pistol at the various liberal industries cropping up around aiding the poor, the brown, and the marginalized, Urrea dances through his narrative, occasionally turning his rhetorical barrel on himself. Read it, for nothing else but this:

See how we’re helping? We hugged an African-American on camera! They put these pictures up on social media so other well-meaning folks will send them more money. A year later, those kids wake up one day and ask, “What happened to those rich folks with the big program?” I know because I have been asked this question.

How can you hope to help someone whose humanity you don’t fully recognize?

The post Must Read: Guernica’s take on Class appeared first on Racialicious - the intersection of race and pop culture.

Sociological ImagesWhy Don’t Religious People Know More About Religion?

Economist Robin Hanson has an “it isn’t about” list. It begins:

  • Food isn’t about Nutrition
  • Clothes aren’t about Comfort

Also on the list is:

  • Church isn’t about God

Maybe church isn’t about religious ideas either.

I was reminded of this recently when I followed a link to a Pew quiz on religious knowledge. It’s a lite version of the 32-item quiz Pew used with a national sample in 2010.  One of the findings from that survey (the full report is here) was that people who went to church regularly and who said that religion was important in their lives didn’t do much better on the quiz than did those who had a weak attachment to church and religion.


The strongly committed averaged 17 correct answers out of the 32 questions; the uncommitted, 16.  This same pattern was repeated in the more recent 15-question quiz.


The committed may derive many things from their church attendance and faith, but knowledge of religion isn’t one of them.

To be fair, the quiz covers many religions, and people do know more about their own religion than they do about others.  “What was Joseph Smith’s religion?” Only about half the population gets that one right, but 93% of the Mormons nailed it. Mormons also knew more about the Ten Commandments. Catholics did better than others on the transubstantiation question.  But when it came to knowing who inspired the Protestant Reformation, Protestants got outscored by Jews and atheists.

Overall, nonbelievers, Jews, and Mormons did much better than did Protestants and Catholics.


One reason for their higher scores might be education – college graduates outscore high school or less by nearly 8 points out of 32.


It may be that nonbelievers, Jews, and Mormons are more likely to have finished college. Unfortunately, the Pew report does not give data that controls for education.

But another reason that these groups scored higher may be their position as religious minorities. Jews and Mormons have to explain to the flock how their ideas are different from those of the majority. Atheists and agnostics too, in their questioning and even rejecting,  have probably devoted more thought to religion, or more accurately, religions. On the questions about Shiva and Nirvana, they leave even the Jews and Mormons far behind.

For Protestants and Catholics, by contrast, learning detailed information about their religion is not as crucial. Just as White people in the US rarely ask what it means to be White, Christians need not worry about their differences from the mainstream. They are the mainstream. So going to church or praying can be much more about feelings – solidarity, transcendence, peace, etc.  That variety of religious experience need not include learning the history or even the tenets of the religion itself. As Durkheim said, the central element in religion is ritual – especially the feelings a ritual generates in the group. Knowing the actual beliefs might be a nice addition, but it’s not crucial.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.


CryptogramDetecting Robot Handwriting

Interesting article on the arms race between creating robot "handwriting" that looks human, and detecting text that has been written by a robot. Robots will continue to get better, and will eventually fool all of us.

Planet Linux AustraliaGabriel Noronha: EVSE for Sun Valley Tourist Park

So you might have seen a couple of posts about Sun Valley Tourist Park; that is because we go there a lot to visit grandma and grandpa (my wife’s parents). Because it’s outside our return range, we have to charge there to get home if we take the I-MIEV, but the Electric Vehicle Supply Equipment (EVSE) that comes with the car limits the charge rate to 10 amps. So we convinced the park to install a 32 amp EVSE, which allows us to charge at the I-MIEV’s full rate of 13 amps, about 30% faster.

Aeroviroment EVSE-RS at Sun Valley

If you want to know more about the EVSE, it’s an Aeroviroment EVSE RS. It should work fine with the Holden Volt, Mitsubishi Outlander PHEV, I-MIEV (2012 or later; it may not work with 2010 models) and the Nissan LEAF.

If you are in the central coast and want somewhere to charge, you can find the details on how to contact the park on plugshare. It’s available for public use during office hours, depending on how busy the park is, provided the driver phones ahead and pays a nominal fee.


Planet Linux AustraliaAndrew Pollock: [life] Day 238: Picnic play date in Roma Street Parklands with a side trip to the museum

School holidays are a good time for Zoe to have a weekday play date with my friend Kim's daughter Sarah, and we'd lined up a picnic in Roma Street Parklands today.

Zoe had woken up at about 1:30am with a nightmare, and subsequently slept in. It had taken me forever to get back to sleep, so I was pretty tired and slept a bit late too.

We got going eventually, and narrowly missed a train, so had to wait for the next one. We got into the Parklands pretty much on time, and despite the drizzly weather, had a nice morning making our way around the gardens.

The weather progressively improved by lunchtime, and after an early lunch, Kim and kids headed home, and we headed into the museum.

Unfortunately I was wrong about which station we had to get off to go to the museum, and we got off at Southbank rather than South Brisbane and had a long, slow walk of shame to get to the museum.

We used the freebie tickets I'd gotten to see the Deep Oceans exhibit, before heading home. I love the museum's free cloaking service, as it allowed me to divest myself of picnic blankets, my backpack and the Esky while we were at the museum.

While we were making the long walk of shame to the museum, I got a call from the car repairer to say that my car was ready, so after we returned to the rental car at the train station we drove directly to the repairer and collected the car, which involved a lot of shuffling of car contents and car seats. I then thought I'd lost my car key, and that involved an unnecessary second visit back to the car rental place on foot before I discovered it was in my pocket all along.

When we got home, Zoe wanted to play pirates again with our chocolate gold coins. What we wound up playing was a variant of "hide the thimble" in her bedroom, where she hid the chocolate gold coins all over the place, and then proceeded to show me where she'd hidden them all. It was very cute.

There was a tiny bit of TV before Sarah arrived to pick up Zoe.

Planet Linux AustraliaAndrew Pollock: [life] Day 237: A day with the grandparents and a lot of cooking

Yesterday was a pretty full on day. I had to drop the car off to get the rear bumper replaced, and I also had to get to my Thermomix Consultant practical training by 9:30am.

I'd arranged to drop the car off at 8am and then pick up a rental car, and Mum was coming to collect Zoe at 8:30am. Zoe woke up at a good time, and we managed to get going extra early, so I dropped the car off early and was picking up the rental car before 8am.

Mum also arrived extra early, so I used the additional time to swing by the Valley to check my PO box, as I had a suspicion my Thermomix Consultant kit might have arrived, and it had.

I then had to get over to my Group Leader's house to do the practical training, which consisted of watching and giving a demo, with a whole bunch of advice and feedback along the way. It was a long day of much cooking, but it was good to get all of the behind the scenes tricks on how to prepare for a demo, give the demo and have it all run smoothly and to schedule.

I then headed over to Mum and Dad's for dinner. Zoe had had a great day, and my Aunty Peggy was also down from Toowoomba. We stayed for dinner and then headed home. I managed to get Zoe to bed more or less on time.

Worse Than FailureForever Alone

Dan’s team had a large re-engineering project. They wanted to remove some Java dependencies and replace the UI layer with their new, in-house developed standard library. Like most large maintenance projects, it was big, had a few hidden traps, but was mostly time consuming tedium. For the tedious bits, they decided to bring on a new developer.

William was that developer. He radiated confidence like an LED bulb: cold, harsh, and efficient. He said all the right things in the interview. When Dan showed him their Git repository, William nodded sagely, “I know my way around Git quite well. I appreciate the distributed part of it. It gives me the freedom to work alone. I work best alone.”

Dan’s team wanted somebody who could work with minimal guidance, so William’s lone gunman motto seemed like a good idea. They brought him on, and Dan spent the first few days getting him set up, introducing him to the code base, and helping him with any questions he had. William didn’t have many, as he reminded Dan, “I don’t need you hovering over my shoulder. I work best alone.”

Dan took the leash off, and William got to work. For the next three weeks, the burndown chart resembled a gasoline-soaked house, and William was a lit match. Everyone was quite happy with the results they saw, and agreed with William: he worked best alone.

At least until all progress ground to a complete and total halt. “Hey, is there a problem?” Dan asked.

“Not really, I’m just fighting with Git. It’s so slow.”

“That doesn’t sound right. Do you want a hand?”

“No. I work better alone.”

Dan let William take a few more days to fight with whatever was slowing him down, but when nothing changed, he popped back. “Is there a problem?”

“Git is just frustrating. It’s so terrible at working with binary files!”

“Well, it’s not really meant to be good at that. If you want, I can take a look-”

“No, it’s fine. I work better alone.”

Dan was suspicious, but William’s work had been so good to start, he decided to let it slide for a few more days. Whatever specific problem William was having would clear itself up before long.

In a way, Dan was right. William’s problems did clear themselves up: he quit, without notice. He told Dan the best part of his new job: “I won’t have to fight with Git anymore. They just use FTP for everything.”

That was fine for William, but it meant Dan had to go pick up William’s work. He started a git clone and waited for the download to finish. And waited. And had a cup of coffee. And waited. By the time it was done, 2GB of data had been copied over. That was shocking, since the original repository was only a few megabytes.

One mysterious file drew Dan’s attention. Once he unzipped it, it was easy to see the problem. Each time William had changed or deleted a file, he added the original to the zip archive, renaming them from “foo.c” to “foo.1.c”, “foo.2.c”, etc. And each time, he’d restage and commit the archive back into the repository. The resulting delta was nearly as large as the archive itself, and was tracked in the Git history.
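The arithmetic behind the bloat can be modelled with made-up numbers: git stores each new revision of a file minus whatever its delta compression can reuse, and for an archive that is recompressed on every change the reusable fraction is close to zero, so the pack grows by roughly one full archive per commit.

```python
# Illustrative model (numbers are invented, not from Dan's repo):
# cumulative pack growth when a binary archive is recommitted each time.
def pack_growth_mb(archive_sizes_mb, delta_reuse=0.05):
    """Return total pack size in MB, assuming only a small fraction of
    each previous archive can be reused as a delta."""
    total = 0.0
    prev = 0.0
    for size in archive_sizes_mb:
        reusable = min(prev, size) * delta_reuse  # tiny for binary blobs
        total += size - reusable
        prev = size
    return total
```

With a 5% reuse rate, forty commits of an archive creeping up toward 100MB lands in the same ballpark as the 2GB clone Dan sat through, even though the final archive alone is a tiny fraction of that.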

Dan got to spend a lot of time fighting with the repository’s tree, trying to pick apart the zip file changes from the real changes in the code base so that he could shrink it back down to a reasonable size. It was complicated and fiddly work, and nobody could help him, which gave him lots of time to discover the joys of working alone.

[Advertisement] Have you seen BuildMaster 4.3 yet? Lots of new features to make continuous delivery even easier; deploy builds from TeamCity (and other CI) to your own servers, the cloud, and more.

Planet DebianVincent Sanders: I wanted to go to Portland because it's a really good book town.

Plane at Heathrow terminal 5 taking me to America for Debconf 14

Patti Smith is right: more than any other US city I have visited, Portland feels different. Although living in Cambridge, which sometimes feels like where books were invented, might give me a warped sense of a place.

Jo McIntyre getting on the tram at PDX
I have visited Portland a few times previously and I feel comfortable every time I arrive at PDX. Sure, the place still suffers from the American obsession with the car, but as in New York you can rely on public transport to get about.

On this occasion my visit was for the Debian Conference, which I was excited to attend having missed the previous one in Switzerland. This time the conference changed its format: it now runs for 10 days, mixing the developer time in with the more formal sessions.

The opening session gave Steve McIntyre and myself the opportunity to present a small token of our appreciation to Russ. The keynote speakers that afternoon were all very interesting, with both Stefano Zacchiroli and Gabriella Coleman giving food for thought on two very different subjects.

The sponsored accommodation rooms were pleasant
Several conferences in the past have experienced issues with sponsored accommodation and food, I am very pleased to report that both were very good this time. The room I was in had a small kitchen area, en-suite bathroom, desks and most importantly comfortable beds.

Andy and Patty in the Ondine dining area
The food provision was in the form of a buffet in the Ondine facility. The menu was not greatly varied but catered to all requirements including vegetarian and gluten free diets.

Neil, Rob, Jo, Steve, Neil, Daniel and Andy dining under the planes
Some of us went on a visit to the Evergreen air and space museum to look at some rare aircraft and rockets. I can thoroughly recommend a visit if you are in the area.

These are just the highlights of the week though; the time in the hack-labs was also productive, with several practical achievements including:

  • Uploading new packages, reducing the bug count
  • Sorting out getting an updated key into the Debian keyring

Overall I had a thoroughly enjoyable time and got a lot out of the conference this year. The new format suited me surprisingly well and as usual the social side was as valuable as the practical.

I hope the organisers have recovered enough to appreciate just how good a job they did and not get hung up on the small number of things that went wrong when the majority of things went perfectly to plan.

Planet Linux AustraliaTim Serong: Something Like a Public Consultation

The Australian government often engages in public consultation on a variety of matters. This is a good thing, because it provides an opportunity for us to participate in our governance. One such recent consultation was from the Attorney-General’s Department on Online Copyright Infringement. I quote:

On 30 July 2014, the Attorney-General, Senator the Hon George Brandis, and the Minister for Communications Malcolm Turnbull MP released a discussion paper on online copyright infringement.

Submissions were sought from interested organisations and individuals on the questions outlined in the discussion paper and on other possible approaches to address this issue.

Submissions were accepted via email, and there was even a handy online form where you could just punch in your answers to the questions provided. The original statement on publishing submissions read:

Submissions received may be made public on this website unless otherwise specified. Submitters should indicate whether any part of the content should not be disclosed to the public. Where confidentiality is requested, submitters are encouraged to provide a public version that can be made available.

This has since been changed to:

Submissions received from peak industry groups, companies, academics and non-government organisations that have not requested confidentiality are being progressively published on the Online copyright infringement—submissions page.

As someone who, in a fit of inspiration late one night (well, a fit of some sort, but I’ll call it inspiration), put in an individual submission, I am deeply disappointed that submissions from individuals are apparently not being published. Geordie Guy has since put in a Freedom of Information request for all individual submissions, but honestly the AGD should be publishing these. It was, after all, a public consultation.

For the record then, here’s my submission:

Question 1: What could constitute ‘reasonable steps’ for ISPs to prevent or avoid copyright infringement?

In our society, internet access has become a necessary public utility.  We communicate with our friends and families, we do our banking, we purchase and sell goods and services, we participate in the democratic process; we do all these things online.  It is not the role of gas, power or water companies to determine what their customers do with the gas, power or water they pay for.  Similarly, it is not the role of ISPs to police internet usage.

Question 2: How should the costs of any ‘reasonable steps’ be shared between industry participants?

Bearing in mind my answer to question 1, any costs incurred should rest squarely with the copyright owners.

Question 3: Should the legislation provide further guidance on what would constitute ‘reasonable steps’?

The legislation should explicitly state that:

  1. Disconnection is not a reasonable step given that internet access is a necessary public utility.
  2. Deep packet inspection, or any other technological means of determining the content, or type of content being accessed by a customer, is not a reasonable step as this would constitute a gross invasion of privacy.

Question 4: Should different ISPs be able to adopt different ‘reasonable steps’ and, if so, what would be required within a legislative framework to accommodate this?

Given that it is not the role of ISPs to police internet usage (see answer to question 1), there are no reasonable steps for ISPs to adopt.

Question 5: What rights should consumers have in response to any scheme or ‘reasonable steps’ taken by ISPs or rights holders? Does the legislative framework need to provide for these rights?

Consumers need the ability to freely challenge any infringement notice, and there must be a guarantee they will not be disconnected.  The fact that an IP address does not uniquely identify a specific person should be enshrined in legislation.  The customer’s right to privacy must not be violated (see point 2 of answer to question 3).

Question 6: What matters should the Court consider when determining whether to grant an injunction to block access to a particular website?

As we have seen with ASIC’s spectacularly inept use of section 313 of Australia’s Telecommunications Act to inadvertently block access to 250,000 web sites, such measures can and will result in wild and embarrassing unintended consequences.  In any case, any means employed in Australia to block access to overseas web sites is exceedingly trivial to circumvent using freely available proxy servers and virtual private networks.  Consequently the Court should not waste its time granting injunctions to block access to web sites.

Question 7: Would the proposed definition adequately and appropriately expand the safe harbour scheme?

The proposed definition would seem to adequately and appropriately expand the safe harbour scheme, assuming the definition of “service provider” extends to any person or entity who provides internet access of any kind to any other person or entity.  For example, if my personal internet connection is also being used by a friend, a family member or a random passerby who has hacked my wifi, I should be considered a service provider to them under the safe harbour scheme.

Question 8: How can the impact of any measures to address online copyright infringement best be measured?

I am deeply dubious of the efficacy and accuracy of any attempt to measure the volume and impact of copyright infringement.  Short of actively surveilling the communications of the entire population, there is no way to accurately measure the volume of copyright infringement at any point in time, hence there is no way to effectively quantify the impact of any measures designed to address online copyright infringement.

Even if the volume of online copyright infringement could be accurately measured, one cannot assume that an infringing copy equates to a lost sale.  At one end of the spectrum, a single infringing copy could have been made by someone who would never have been willing or able to pay for access to that work.  At the other end of the spectrum, a single infringing copy could expose a consumer to a whole range of new media, resulting in many purchases that never would have occurred otherwise.

Question 9: Are there alternative measures to reduce online copyright infringement that may be more effective?

There are several alternative measures that may be more effective, including:

  1. Content distributors should ensure that their content is made available to the Australian public at a reasonable price, at the same time as releases in other countries, and absent any Digital Restrictions Management technology (DRM, also sometimes erroneously termed Digital Rights Management, which does more to inconvenience legitimate purchasers than it does to curb copyright infringement).
  2. Content creators and distributors should be encouraged to update their business models to accommodate and take advantage of the realities of ubiquitous digital communications.  For example, works can be made freely available online under liberal licenses (such as Creative Commons Attribution Share-Alike) which massively increases exposure, whilst also being offered for sale, perhaps in higher quality on physical media, or with additional bonus content in the for-purchase versions.  Public screenings, performances, displays, commissions and so forth (depending on the media in question) will contribute further income streams all while reducing copyright infringement.
  3. Australian copyright law could be amended such that individuals making copies of works (e.g. downloading works, or sharing works with each other online) on a noncommercial basis does not constitute copyright infringement.  Changing the law in this way would immediately reduce online copyright infringement, because a large amount of activity currently termed infringement would no longer be seen as such.

Finally, as a member of Pirate Party Australia it would be remiss of me not to provide a link to the party’s rather more detailed and well-referenced submission, which thankfully was published by the AGD. We’ve also got a Pozible campaign running to raise funds for an English translation of the Dutch Pirate Bay blocking appeal trial ruling, which will help add to the body of evidence demonstrating that web site blocking is ineffective.

Planet Linux AustraliaCraige McWhirter: Resizing a Root Volume for an Openstack Instance

This documents how to resize an OpenStack instance that has its root partition backed by a volume. In this circumstance "nova resize" will not resize the disk space as expected.


Shutdown the instance you wish to resize

Check the status of the source VM and stop it if it's not already stopped:

$ nova list
| ID                                   | Name      | Status  | Task State | Power State | Networks  |
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | ACTIVE  | -          | Running     | Tutorial= |
$ nova stop ResizeMe0
$ nova list
| ID                                   | Name      | Status  | Task State | Power State | Networks  |
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | ResizeMe0 | SHUTOFF | -          | Shutdown    | Tutorial= |

Identify and extend the volume

Obtain the ID of the volume attached to the instance:

$ nova show ResizeMe0 | grep volumes
| os-extended-volumes:volumes_attached | [{"id": "616dbaa6-f5a5-4f06-9855-fdf222847f3e"}]         |

Set the volume's state to "available" so we can resize it:

$ cinder reset-state --state available 616dbaa6-f5a5-4f06-9855-fdf222847f3e
$ cinder show 616dbaa6-f5a5-4f06-9855-fdf222847f3e | grep " status "
| status | available |

Extend the volume to the desired size:

$ cinder extend 616dbaa6-f5a5-4f06-9855-fdf222847f3e 4

Set the status back to being in use:

$ cinder reset-state --state in-use 616dbaa6-f5a5-4f06-9855-fdf222847f3e

Start the instance back up again

Start the instance again:

$ nova start ResizeMe0

Voila! Your old instance is now running with an increased disk size as requested.
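For repeat use, the sequence above can be scripted. Here is a sketch that shells out to the same nova and cinder CLI clients (assuming they are installed and the usual OS_* credentials are exported); building the command list in a separate helper makes the ordering easy to check without touching a real cloud.

```python
# Sketch: automate the stop -> reset-state -> extend -> reset-state -> start
# sequence from this post. Assumes the nova/cinder CLIs and credentials
# are already set up; this is illustrative, not production tooling.
import subprocess

def resize_commands(instance, volume_id, new_size_gb):
    """Return the CLI invocations, in order, without running them."""
    return [
        ["nova", "stop", instance],
        ["cinder", "reset-state", "--state", "available", volume_id],
        ["cinder", "extend", volume_id, str(new_size_gb)],
        ["cinder", "reset-state", "--state", "in-use", volume_id],
        ["nova", "start", instance],
    ]

def resize_root_volume(instance, volume_id, new_size_gb):
    for cmd in resize_commands(instance, volume_id, new_size_gb):
        subprocess.check_call(cmd)
```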

Planet DebianRussell Coker: Cheap 3G Data in Australia

The Request

I was asked for advice about cheap 3G data plans. One of the people who asked me has a friend with no home Internet access, the friend wants access but doesn’t want to pay too much. I don’t know whether the person in question can’t use ADSL/Cable (maybe they are about to move house) or whether they just don’t want to pay for it.

3G data in urban areas in Australia is fast enough for most Internet use. But it’s not good for online games or VOIP. It’s also not very useful for YouTube and other online video. There are a variety of 3G speed testing apps for Android phones and there are presumably similar apps for the iPhone. Before signing up for 3G at home it’s probably best to get a friend who’s on the network in question to test Internet speed at your house; it would be annoying to sign up for an annual contract and then discover that your home is in a 3G dead spot.

Cheapest Offers

The best offer at the moment for moderate data use seems to be Amaysim with 10G for $99.90 and an expiry time of 365 days [1]. 10G in a year isn’t a lot, but it’s pre-paid so the user can buy another 10G of data whenever they want. At the moment $10 for 1G of data in a month and $20 for 2G of data in a month seem to be common offerings for 3G data in Australia. If you use exactly 1G per month then Amaysim isn’t any better than a number of other telcos, but if your usage varies (as it does with most people) then spreading the data use over several months offers significant savings without the need to save big downloads for the last day of the month.

For more serious Internet use Virgin has pre-paid offerings of 6G for $30 and 12G for $40, both of which have to be used within a month [2]. Anyone who uses an average of more than 3G per month will get better value from the Virgin offers.

If anyone knows of cheaper options than Amaysim and Virgin then please let me know.

Better Coverage

Both Amaysim and Virgin use the Optus network which covers urban areas quite well. I used Virgin a few years ago (and presume that it has only improved since then) and my wife uses Amaysim now. I haven’t had any great problems with either telco. If you need better coverage than the Optus network provides then Telstra is the only option. Telstra have a number of prepaid offers, the most interesting is $100 for 10G of data that expires in 90 days [3].

That Telstra offer is the same price as the Amaysim offer and only slightly more expensive than Virgin if you average 3.3G per month. At that rate of usage it’s a really good deal, as you can expect Telstra to be faster and have better coverage.
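The comparison above largely comes down to cost per gigabyte, with expiry time as the other variable. A quick sketch using the prices quoted in this post:

```python
# Cost per gigabyte for the plans discussed above (prices in AUD).
# Format: plan name -> (price, gigabytes included).
plans = {
    "Amaysim 10G/365 days": (99.90, 10),
    "Virgin 6G/month":      (30.00, 6),
    "Virgin 12G/month":     (40.00, 12),
    "Telstra 10G/90 days":  (100.00, 10),
}

for name, (price, gigs) in plans.items():
    print(f"{name}: ${price / gigs:.2f}/GB")
```

Per gigabyte the Virgin 12G plan is by far the cheapest, but only if you actually use close to 12G before the month ends; the point of the Amaysim offer is that its 365-day expiry means light or variable users waste far less of what they pay for.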

Which One to Choose?

I think that the best option for someone who is initially connecting their home via 3G is to start with Amaysim. Amaysim is the cheapest for small usage and they have an Android app and a web page for tracking usage. After using a few gig of data on Amaysim it should be possible to determine which plan is going to be most economical in the long term.

Connecting to the Internet

To get the best speed you need a 4G AKA LTE connection. But given that 3G is already fast enough to chew through expensive amounts of data, 4G doesn’t seem necessary to me. I’ve done a lot of work over the Internet with 3G from Virgin, Kogan, Aldi, and Telechoice and haven’t felt a need to pay for anything faster.

I think that the best thing to do is to use an old phone running Android 2.3 or iOS 4.3 as a Wifi access point. The cost of a dedicated 3G Wifi AP is enough to significantly change the economics of such Internet access and most people have access to old smart phones.

Planet DebianMatthew Garrett: My free software will respect users or it will be bullshit

I had dinner with a friend this evening and ended up discussing the FSF's four freedoms. The fundamental premise of the discussion was that the freedoms guaranteed by free software are largely academic unless you fall into one of two categories - someone who is sufficiently skilled in the arts of software development to examine and modify software to meet their own needs, or someone who is sufficiently privileged[1] to be able to encourage developers to modify the software to meet their needs.

The problem is that most people don't fall into either of these categories, and so the benefits of free software are often largely theoretical to them. Concentrating on philosophical freedoms without considering whether these freedoms provide meaningful benefits to most users risks these freedoms being perceived as abstract ideals, divorced from the real world - nice to have, but fundamentally not important. How can we tie these freedoms to issues that affect users on a daily basis?

In the past the answer would probably have been along the lines of "Free software inherently respects users", but reality has pretty clearly disproven that. Unity is free software that is fundamentally designed to tie the user into services that provide financial benefit to Canonical, with user privacy as a secondary concern. Despite Android largely being free software, many users are left with phones that no longer receive security updates[2]. Textsecure is free software but the author requests that builds not be uploaded to third party app stores because there's no meaningful way for users to verify that the code has not been modified - and there's a direct incentive for hostile actors to modify the software in order to circumvent the security of messages sent via it.

We're left in an awkward situation. Free software is fundamental to providing user privacy. The ability for third parties to continue providing security updates is vital for ensuring user safety. But in the real world, we are failing to make this argument - the freedoms we provide are largely theoretical for most users. The nominal security and privacy benefits we provide frequently don't make it to the real world. If users do wish to take advantage of the four freedoms, they frequently do so at a potential cost of security and privacy. Our focus on the four freedoms may be coming at a cost to the pragmatic freedoms that our users desire - the freedom to be free of surveillance (be that government or corporate), the freedom to receive security updates without having to purchase new hardware on a regular basis, the freedom to choose to run free software without having to give up basic safety features.

That's why projects like the GNOME safety and privacy team are so important. This is an example of tying the four freedoms to real-world user benefits, demonstrating that free software can be written and managed in such a way that it actually makes life better for the average user. Designing code so that users are fundamentally in control of any privacy tradeoffs they make is critical to empowering users to make informed decisions. Committing to meaningful audits of all network transmissions to ensure they don't leak personal data is vital in demonstrating that developers fundamentally respect the rights of those users. Working on designing security measures that make it difficult for a user to be tricked into handing over access to private data is going to be a necessary precaution against hostile actors, and getting it wrong is going to ruin lives.

The four freedoms are only meaningful if they result in real-world benefits to the entire population, not a privileged minority. If your approach to releasing free software is merely to ensure that it has an approved license and throw it over the wall, you're doing it wrong. We need to design software from the ground up in such a way that those freedoms provide immediate and real benefits to our users. Anything else is a failure.

(title courtesy of My Feminism will be Intersectional or it will be Bullshit by Flavia Dzodan. While I'm less angry, I'm solidly convinced that free software that does nothing to respect or empower users is an absolute waste of time)

[1] Either in the sense of having enough money that you can simply pay, having enough background in the field that you can file meaningful bug reports or having enough followers on Twitter that simply complaining about something results in people fixing it for you

[2] The free software nature of Android often makes it possible for users to receive security updates from a third party, but this is not always the case. Free software makes this kind of support more likely, but it is in no way guaranteed.

Planet DebianRobert Collins: what-poles-for-the-tent

So Monty and Sean have recently blogged about the structures (1, 2) they think may work better for OpenStack. I like the thrust of their thinking but had some mumblings of my own to add.

Firstly, I very much like the focus on social structure and needs – what our users and deployers need from us. That seems entirely right.

And I very much like the getting away from TC picking winners and losers. That was never an enjoyable thing when I was on the TC, and I don’t think it has made OpenStack better.

However, the thing that picking winners and losers did was that it allowed users to pick an API and depend on it. Because it was the ‘X API for OpenStack’. If we don’t pick winners, then there is no way to say that something is the ‘X API for OpenStack’, and that means that there is no forcing function for consistency between different deployer clouds. And so this appears to be why Ring 0 is needed: we think our users want consistency in being able to deploy their application to Rackspace or HP Helion. They want vendor neutrality, and by giving up winners-and-losers we give up vendor neutrality for our users.

That’s the only explanation I can come up with for needing a Ring 0 – because it’s still winners and losers: picking an arbitrary project (e.g. keystone) and grandfathering it in, if you will. If we really want to get out of the role of selecting projects, I think we need to avoid this. And we need to avoid it without losing vendor neutrality (or we need to give up the idea of vendor neutrality).

One might say that we must pick winners for the very core just by its nature, but I don’t think that’s true. If the core is small, many people will still want vendor neutrality higher up the stack. If the core is large, then we’ll have a larger % of APIs covered and stable, granting vendor neutrality. So a core with fixed APIs will be under constant pressure to expand: not just from developers of projects, but from users that want API X to be fixed and guaranteed available and working a particular way at [most] OpenStack clouds.

Ring 0 also fulfils a quality aspect – we can check that it all works together well in a realistic timeframe with our existing tooling. We are essentially proposing to pick functionality that we guarantee to users; and an API for that which they have everywhere, and the matching implementation we’ve tested.

To pull from Monty’s post:

“What does a basic end user need to get a compute resource that works and seems like a computer? (end user facet)

What does Nova need to count on existing so that it can provide that.” (Nova facet)

He then goes on to list a bunch of things, but most of them are not needed for that:

We need Nova (it’s the only compute API in the project today). We don’t need keystone (Nova can run in noauth mode and deployers could just have e.g. Apache auth on top). We don’t need Neutron (Nova can do that itself). We don’t need cinder (use local volumes). We need Glance. We don’t need Designate. We don’t need a tonne of stuff that Nova has in it (e.g. quotas) – end users kicking off a simple machine have -very- basic needs.

Consider the things that used to be in Nova: Deploying containers. Neutron. Cinder. Glance. Ironic. We’ve been slowly decomposing Nova (yay!!!) and if we keep doing so we can imagine getting to a point where there truly is a tightly focused code base that just does one thing well. I worry that we won’t get there unless we can ensure there is no pressure to be inside Nova to ‘win’.

So there’s a choice between a relatively large set of APIs that make the guaranteed available APIs comprehensive, or a small set that will give users what they need at the beginning but might not be broadly available, leaving us depending on some unspecified process for the deployers to agree and consolidate around which ones they make available consistently.

In short, one of the big reasons we were picking winners and losers in the TC was to consolidate effort around a single API – not implementation (keystone is already on its second implementation). All the angst about defcore and compatibility testing is going to be multiplied when there is lots of ecosystem choice around APIs above Ring 0, and the only reason that won’t be a problem for Ring 0 is that we’ll still be picking winners.

How might we do this?

One way would be to keep picking winners at the API definition level but not the implementation level, and make the competition be able to replace something entirely if they implement the existing API [and win hearts and minds of deployers]. That would open the door to everything being flexible – and it’s happened before with Keystone.

Another way would be to not even have a Ring 0. Instead have a project/program that is aimed at delivering the reference API feature-set built out of a single, flat Big Tent – and allow that project/program to make localised decisions about what components to use (or not). Testing that all those things work together is not much different than the current approach, but we’d have separated out as a single cohesive entity the building of a product (Ring 0 is clearly a product) from the projects that might go into it. Projects that have unstable APIs would clearly be rejected by this team; projects with stable APIs would be considered etc. This team wouldn’t be the TC : they too would be subject to the TC’s rulings.

We could even run multiple such teams – as hinted at by Dean Troyer in one of the email thread posts. Running with that I’d then be suggesting:

  • IaaS product: selects components from the tent to make OpenStack/IaaS
  • PaaS product: selects components from the tent to make OpenStack/PaaS
  • CaaS product (containers)
  • SaaS product (storage)
  • NaaS product (networking – but things like NFV, not the basic Neutron we love today). Things where the thing you get is useful in its own right, not just as plumbing for a VM.

So OpenStack/NaaS would have an API or set of APIs, and they’d be responsible for considering maturity, feature set, and so on, but wouldn’t ‘own’ Neutron, or ‘Neutron incubator’ or any other component – they would be a *cross project* team, focused at the product layer, rather than the component layer, which nearly all of our folk end up locked into today.

Lastly Sean has also pointed out that we have N^2 communication issues as N grows large – I think I’m proposing to drive the scope of any one project down to a minimum, which gives us more N, but shrinks the size within any project, so folk don’t burn out as easily, *and* so that it is easier to predict the impact of changes – clear contracts and APIs help a huge amount there.

Planet DebianRuss Allbery: Review: 2014 Hugos: Short Story Nominees

Review: 2014 Hugos: Short Story Nominees, edited by Loncon 3

Publisher: Loncon 3
Copyright: 2014
Format: Kindle

This is a bit of a weird "book review," since this is not a book. Rather, it's the collection of Hugo-nominated short stories for the 2014 Hugos (given for works published in 2013) at Loncon 3, the 2014 Worldcon. As such, the "editor" is the pool of attendees and supporting members who chose to nominate works, all of which had been previously edited by other editors in their original publication.

This is also not something that someone else can acquire; if you were not a supporting or attending member, you didn't get the voting packet. But I believe all of the stories here are available on-line for free in some form, a short search away.

"If You Were a Dinosaur, My Love" by Rachel Swirsky: The most common complaint about this story is that it's not really a story, and I have to agree. It's a word image of an alternate world in which the narrator's love is a human-sized dinosaur, starting with some surreal humor and then slowly shifting tone as it reveals the horrible event that's happened to the narrator's actual love, and that's sparked the wish for her love to have claws and teeth. It's reasonably good at what it's trying to do, but I wanted more of a story. The narrator's imagination didn't do much for me. (5)

"The Ink Readers of Doi Saket" by Thomas Olde Heuvelt: At least for me, this story suffered from being put in the context of a Hugo nominee. It's an okay enough story about a Thai village downstream from a ritual that involves floating wishes down the river, often with offerings in the improvised small boats. The background of the story is somewhat cynical: the villagers make some of the wishes come true, sort of, while happily collecting the offerings and trying to spread the idea that the wishes with better offerings are more likely to come true. The protagonist follows a familiar twist: he actually can make wishes come true, maybe, but is very innocent about his role in the world.

This is not a bad story, although stories written by people with western-sounding names about non-western customs worry me, and there were a few descriptions and approaches here (such as the nickname translations in footnotes and the villager archetypes) that made my teeth itch. But it is not a story that belongs on the Hugo nomination slate, at least in my opinion. It's either cute or mildly irritating, depending on one's mood when one meets it, not horribly original, and very forgettable. (5)

"Selkie Stories Are for Losers" by Sofia Samatar: I really liked this story for much of its length. It features a couple of young, blunt, and bitter women, and focuses on the players in the typical selkie story that don't get much attention. The selkie's story is one of captivity or freedom; her lover's story is the inverse, the captor or the lover. But I don't recall a story about the children before, and I think Samatar got the tone right. It has the bitterness of divorce and abandonment mixed with the disillusionment of fantasy turned into pain.

My problem with this story is the ending, or rather, the conclusion, since the story doesn't so much end as stop. There's a closing paragraph that gives some hint of the shape to come, but it gave me almost no closure, and it didn't answer any of the emotional questions that the rest of the story raised for me. I wanted something more, some sort of epiphany or clearer determination. (7)

"The Water That Falls on You from Nowhere" by John Chu: This was by far my favorite of the nominees, which is convenient since it won. I thought it was the only nominee that felt in the class of stories I would expect to win a Hugo.

I think this story needs one important caveat up front. The key conceit of the story is that, in this world, water falls on you out of nowhere if you tell any sort of lie. It does not explore the practical impact of that concept on the broader world. That didn't bother me; for some reason, I wasn't really expecting it to do so. But it did bother several other people I've seen comment on this story. They were quite frustrated that the idea was used primarily to shape a personal and family emotional dilemma, not to explore the impact on the world. So, go into this with the right expectations: if you want world-building or deep exploration of a change in physical laws, you will want a different story.

This story, instead, is a beautiful gem about honesty in relationships, about communication about very hard things and very emotional things, about coming out, about trusting people, and about understanding people. I thought it was beautiful. If you read Captain Awkward, or other discussion of how to deal with difficult families and the damage they cause to relationships, seek this one out. It surprised me, and delighted me, and made me cry in places, and I loved the ending. It's more fantasy than science fiction, and it uses the conceit as a trigger for a story about people instead of a story about worlds and technology, but I'm still very happy to see it win. (9)

Rating: 7 out of 10

Don MartiTreasuring clicks, trashing content

Matt Harty from Experian writes, Marketers Buy Clicks But Don’t Understand What They Get. More:

Clicks usually do not bring any other information with them. When the click hits the marketer’s site, the ability to value the differences (and related potential ROI) between these visitors is minimal.

Harty's proposed solution, not surprisingly, is to add another layer of Big Data intermediaries, to sell information about the users behind those clicks. This one will fix it for sure, right? But does online advertising have to be just a matter of piling up more and more layers of companies selling expensive math and sneakily-acquired PII?

If only there were something that you could attach an ad to, some work that people who were interested in a certain topic would naturally see as valuable and want to spend time with. Something that would make an ad pay its own way by sending the message, as Kevin Simler put it, that "an ad conveys valuable information simply by existing."

Yes, paying for something valuable to run the ad on would cost money, but that's part of how advertising really works. Advertising done right pays its way by carrying a signal to prospective buyers, one that they have an incentive to receive and process, not block. Simler also points out a kind of meta-signaling, or "cultural imprinting." When a brand establishes itself, it helps its customers send their own signals.

[B]rands carve out a relatively narrow slice of brand-identity space and occupy it for decades. And the cultural imprinting model explains why. Brands need to be relatively stable and put on a consistent "face" because they're used by consumers to send social messages, and if the brand makes too many different associations, (1) it dilutes the message that any one person might want to send, and (2) it makes people uncomfortable about associating themselves with a brand that jumps all over the place, firing different brand messages like a loose cannon.

Advertising isn't just a game of spam vs. spam filter, popup vs. popup blocker, and cookie vs. Privacy Badger. There's more to it than that, or there can be.

Meanwhile, Bob Hoffman writes,

Content is everything, and it's nothing. It's an artificial word thrown around by people who know nothing, describing nothing.

Good point. The audience's perception of how much it cost to place an ad is the way that the ad acquires its signaling power. The ad-supported resource, whether it's a TV show, an article with photos, or a story, amplifies the ad by its quality and apparent cost.

A famous byline on a magazine cover increases the magazine's reputation, which increases the signaling power of the ads inside, which makes ad space more valuable. Get a reputation for paying well, get more money from advertisers, and so on. Do it right and the more you pay people, the more advertisers pay you, the more you can pay people. (This is the positive feedback loop that pro sports is in. And not only is the sports audience not the product being sold, the audience is paying to be advertised to.)

Signaling through quality editorial product is the opportunity that online advertising is throwing away by programmatically buying ad units attached to crappy, infringing, or outright fraudulent "content". Somehow, people have gotten the idea that math matters, user data matters, but "content" doesn't.

What's the alternative? Some ideas at What can brands do now? and Solutions.

Bonus links

Malvertising Campaign Employs the Nuclear Option on Zedo A malicious Javascript file, unintentionally served last week by the Zedo advertising network, redirected victims to the Nuclear exploit kit which (under the right circumstances) delivered a punishing series of infections onto PCs.

Einbinder Flypaper, The brand you've gradually grown to trust over the course of three generations.

Geek FeminismLinkspams and Chocolate Milk (23 September 2014)

  • Official #teamharpy Statement on the case of Joseph Hawley Murphy vs. nina de jesus and Lisa Rabey | Team Harpy: [CW: retaliation against whistleblowers, sexual harassment] “We both also believe that women calling out harmful the behaviour of men is an act of free speech and of resistance to a culture that regularly reduces our bodies to sexual objects existing only to serve men. We have decided to fight this lawsuit, at great financial and emotional cost to ourselves, because we believe that all victims of sexual harassment should be supported and believed.” “Team Harpy” have also posted about Round 1 of fundraising efforts, for those who would like to offer support.
  • Privilege | Robot Hugs: Short comic explaining privilege, and how to manage it responsibly.
  • You Asked: Why aren’t more companies putting their weight behind diversity initiatives? | ashe dryden: “Industry-wide change isn’t coming from within companies but, increasingly, from people who are able to operate outside them where the risk of being harassed, fired, or pushed out of the industry all together seems lower. When we’re seeing companies actively punish marginalized people for speaking up, we have to question their mission statements proclaiming a commitment to diversity.”
  • What We Talk About When We Talk About What We Talk About When We Talk About Making | Quiet Babylon: “We too can access the tools of publishing and the means of production and find success as independents. We too can be small scale Tims Cook. Some of us who make the attempt will be made rich and some of us will be driven from our homes and some of us will putter along comfortably and all of us will have made our bed upon a great deal of human and environmental suffering.”
  • How to Detoxify the Web | The Kernel: [CW: discussion of abusive language / behavior] “A lot has changed in the world and online, but being 14 still sucks. Plenty of times, just being human sucks. Without someone to talk to or to help you figure out which end is up, it’s easy to push it down—like a man, right?—or fall for advice that doesn’t have your best interest at heart. Those readers still message me sometimes to tell me the impact my advice columns had on their lives. As much as I’d like to think it was because of my fantastic writing, I know it’s because of the community we created together.”
  • Geeks have become their own worst enemies | The Washington Post: “The essence of confidence is the ability to handle critiques and the existence of challengers with grace and security in your own position. If what deBoer is describing is a permanent state, though, then a certain subset of angry geeks will prove themselves to be exactly what the once-dominant culture said they were all along: myopic and insecure. The hysterical reactions to criticism and challenge do far more damage to the proposition that geek culture contains rich forms, stories and communities worth taking seriously than any critic ever could.”
  • GaymerX2: Internetting While Female | YouTube: Video: “Carolyn Petit, Katherine Cross, and Anita Sarkeesian discuss their experiences ‘Internetting While Female’ at GaymerX2 2014.”
  • Dear DC Comics, This Is Why You Shouldn’t Leave Creative Little Girls Behind | The Mary Sue: “Maybe statistically it’s more likely to be four boys playing, and they want to cater to that. But if so, it’s a self-fulfilling prophecy. If you market only to boys, don’t be surprised boys are your only market. And don’t be surprised if the boys with sisters and female friends end up playing something else entirely.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianJunichi Uekawa: Sending GCM send message from nodejs.

Sending GCM send message from nodejs. I wanted to send a message from emacs, but it seemed relatively difficult to send an HTTPS POST request from emacs, so I just decided to use handy nodejs. The 'data' payload is attached to the intent as extras, so it can be extracted by intent.getExtras().getString("message") in IntentService#onHandleIntent().
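The post doesn't include the script itself, but the request is just an HTTPS POST, so the same thing can be sketched in any language. A minimal Python sketch of the 2014-era GCM HTTP API (the API key and registration ID here are placeholders, and the endpoint and header names are my assumptions based on Google's documentation of the time, not taken from the post):

```python
import json
import urllib.request

GCM_URL = "https://android.googleapis.com/gcm/send"  # 2014-era GCM endpoint

def build_gcm_request(api_key, registration_ids, message):
    """Build (but don't send) a GCM downstream-message request.

    Keys under "data" become intent extras on the device, which is why
    the receiver can read intent.getExtras().getString("message").
    """
    body = json.dumps({
        "registration_ids": registration_ids,
        "data": {"message": message},
    }).encode("utf-8")
    headers = {
        "Authorization": "key=" + api_key,
        "Content-Type": "application/json",
    }
    return urllib.request.Request(GCM_URL, data=body, headers=headers)

req = build_gcm_request("PLACEHOLDER_API_KEY", ["PLACEHOLDER_REG_ID"], "hello")
# urllib.request.urlopen(req)  # uncommenting this would actually send it
```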


LongNowNew Book Explores the Legacy of Paul Otlet’s Mundaneum


In 02007, SALT speaker Alex Wright introduced us to Paul Otlet, the Belgian visionary who spent the first half of the twentieth century building a universal catalog of human knowledge, and who dreamed of creating a global information network that would allow anyone virtual access to this “Mundaneum.”

In June of this year, Wright released a new monograph that examines the impact of Otlet’s work and dreams within the larger history of humanity’s attempts to organize and archive its knowledge. In Cataloging The World: Paul Otlet and the Birth of the Information Age, Wright traces the visionary’s legacy from its idealistic mission through the Mundaneum’s destruction by the Nazis, to the birth of the internet and the data-driven world of the 21st century.

Otlet’s work on his Mundaneum went beyond a simple wish to collect and catalog knowledge: it was driven by a deeply idealistic vision of a world brought into harmony through the free exchange of information.

An ardent “internationalist,” Otlet believed in the inevitable progress of humanity towards a peaceful new future, in which the free flow of information over a distributed network would render traditional institutions – like state governments – anachronistic. Instead, he envisioned a dawning age of social progress, scientific achievement, and collective spiritual enlightenment. At the center of it all would stand the Mundaneum, a bulwark and beacon of truth for the whole world. (Wright 02014)

Otlet imagined a system of interconnected “electric telescopes” with which people could easily access the Mundaneum’s catalog of information from the comfort of their homes – a ‘world wide web’ that would bring the globe together in shared reverence for the power of knowledge. But sadly, his vision was thwarted before it could reach its full potential. Brain Pickings’ Maria Popova writes,

At the peak of Otlet’s efforts to organize the world’s knowledge around a generosity of spirit, humanity’s greatest tragedy of ignorance and cruelty descended upon Europe. As the Nazis seized power, they launched a calculated campaign to thwart critical thought by banning and burning all books that didn’t agree with their ideology … and even paved the muddy streets of Eastern Europe with such books so the tanks would pass more efficiently.

Otlet’s dream of open access to knowledge obviously clashed with the Nazis’ effort to control the flow of information, and his Mundaneum was promptly shut down to make room for a gallery displaying Third Reich art. Nevertheless, Otlet’s vision survived, and in many ways inspired the birth of the internet.

While Otlet did not by any stretch of the imagination “invent” the Internet — working as he did in an age before digital computers, magnetic storage, or packet-switching networks — nonetheless his vision looks nothing short of prophetic. In Otlet’s day, microfilm may have qualified as the most advanced information storage technology, and the closest thing anyone had ever seen to a database was a drawer full of index cards. Yet despite these analog limitations, he envisioned a global network of interconnected institutions that would alter the flow of information around the world, and in the process lead to profound social, cultural, and political transformations. (Wright 02014)

Still, Wright argues, some characteristics of today’s internet fly in the face of Otlet’s ideals even as they celebrate them. The modern world wide web is predicated on an absolute individual freedom to consume and contribute information, resulting in an amorphous and decentralized network of information whose provenance can be difficult to trace. In many ways, this defies Otlet’s idealistic belief in a single repository of absolute and carefully verified truths, open access to which would lead the world to collective enlightenment. Wright wonders,

Would the Internet have turned out any differently had Paul Otlet’s vision come to fruition? Counterfactual history is a fool’s game, but it is perhaps worth considering a few possible lessons from the Mundaneum. First and foremost, Otlet acted not out of a desire to make money — something he never succeeded at doing — but out of sheer idealism. His was a quest for universal knowledge, world peace, and progress for humanity as a whole. The Mundaneum was to remain, as he said, “pure.” While many entrepreneurs vow to “change the world” in one way or another, the high-tech industry’s particular brand of utopianism almost always carries with it an underlying strain of free-market ideology: a preference for private enterprise over central planning and a distrust of large organizational structures. This faith in the power of “bottom-up” initiatives has long been a hallmark of Silicon Valley culture, and one that all but precludes the possibility of a large-scale knowledge network emanating from anywhere but the private sector.

Nevertheless, Wright sees in Otlet’s vision a useful ideal to keep striving for:

Otlet’s Mundaneum will never be. But it nonetheless offers us a kind of Platonic object, evoking the possibility of a technological future driven not by greed and vanity, but by a yearning for truth, a commitment to social change, and a belief in the possibility of spiritual liberation. Otlet’s vision for an international knowledge network—always far more expansive than a mere information retrieval tool—points toward a more purposeful vision of what the global network could yet become. And while history may judge Otlet a relic from another time, he also offers us an example of a man driven by a sense of noble purpose, who remained sure in his convictions and unbowed by failure, and whose deep insights about the structure of human knowledge allowed him to peer far into the future…

Wright summarizes Otlet’s legacy with a simple question: are we better off when we safeguard the absolute individual freedom to consume and distribute information as we see fit, or should we be making a more careful effort to curate the information we are surrounded by? It’s a question that we see emerging with growing urgency in contemporary debates about privacy, data sharing, and regulation of the internet – and our answer to it is likely to play an important role in shaping the future of our information networks.

To learn more about Cataloging the World, please take a look at Maria Popova’s thoughtful review, or visit the book’s website.


Planet DebianSteve Kemp: Waiting for features upstream

I (grudgingly) use the Calibre e-book management software to handle my collection of books, and copy them over to my kindle-toy.

One thing that has always bothered me was the fact that when books are imported their ratings are too. If I receive a small sample of ebooks from a friend, their ratings are added to my collection.

I've always regarded ratings as things personal to me, rather than attributes of a book itself; as my tastes might not match yours, and vice-versa.

On that basis, the last time I was importing a small number of books and getting annoyed at having to manually reset all the imported ratings, I decided to do something about it. I started hacking and put together a simple Calibre plugin to automatically zero ratings when books are imported to the collection (i.e. set the rating to zero).

Sadly this work wasn't painless, despite the small size, as an unfortunate bug in Calibre meant my plugin method wasn't called. Happily Kovid Goyal helped me work through the problem, and he committed a fix that will be in the next Calibre release. For the moment I'm using today's git-snapshot and it works well.

Similarly I've recently started using extended file attributes to store metadata on my desktop system. Unfortunately the GNU findutils package doesn't allow you to do the obvious thing:

$ find ~/foo -xattr user.comment

There are several xattr patches floating around, but I had to bundle my own in debian/patches to get support for finding files that have particular attribute names.

Maybe one day extended attributes will be taken seriously. (rsync, cp, etc will preserve them. I'm hazy on the compatibility with tar, but most things seem to be working.)

TEDEmpathy paradise: Students at a Jewish Day School reflect on Zak Ebrahim’s experience growing up with an extremist Muslim father


Students at the Davis Academy in Georgia contemplated the differences—and similarities—between Zak Ebrahim’s experience and their own. Photo: Twitter/@sbbEZas123

With Rosh Hashanah fast approaching, Sara Beth Berman of the Davis Academy in Atlanta, Georgia, wanted to create a lesson for the school’s middle school students around the ideas of empathy and forgiveness.

“In the month preceding the Jewish New Year, we talk a lot about how to forgive, how to accept forgiveness, and how do you want to be better in the new year,” says Berman, the experiential educator at this Jewish day school. “I was working on finding something to teach on these topics. And I was coming up short.”

But an a-ha moment came when one of her colleagues, Judaic studies teacher Samara Schwartz, forwarded her a TED Talk all about the life-altering things that happen when we dare to have empathy: Zak Ebrahim’s “I am the son of a terrorist. Here’s how I chose peace.”

“I thought, ‘This is amazing,’” says Berman, who quickly jumped into a conversation with Schwartz, the school’s administrators and the school’s counseling team to discuss how to frame it for a lesson. “It’s this amazing person with such a positive message, who is a Muslim and whose father was an extremist terrorist. I watched it a couple of times, and I knew it was going to be powerful.”

On Friday, September 12—just three days after Zak’s talk was released (and the day after September 11)—about 200 middle school students at the academy took part in a lesson framed around Zak’s talk and the song “Change Your Mind” by Sister Hazel. In the classroom where Berman observed, the students sat at desks arranged in a big U-shape and watched the talk. When Ebrahim revealed that his father is El-Sayyid Nosair, who was convicted of planning the 1993 World Trade Center bombing and who assassinated Rabbi Meir Kahane, the leader of the Jewish Defense League, in 1990, the kids audibly gasped.

“They couldn’t believe it,” said Berman. “They said, ‘We didn’t know that you could have a father that’s a bad man and then be so good.’ That’s really an important lesson for us in terms of teaching them how to make their own decisions and grow and be their own people.”

The students in the class also had a big reaction to the part of the talk where Ebrahim describes going to the shooting range with his father and uncles, and the glee that erupted when a target burst into flames. “I’m not sure how many of them could hear this,” says Berman, “but Zak said the Arabic phrase ‘ibn abuh’ — like father, like son. It sounds close to father in Hebrew—‘abba.’”

After watching the talk, the students got up from their seats. Around the classroom, a series of questions were posted for them to consider: How do you feel Zak’s experience growing up was different from yours? How was Zak’s childhood the same as yours? Why was he able to be empathetic? How could you be more open and welcoming to your peers who have struggled like Zak has?

Quietly, the students walked around and wrote down their thoughts on Post-it notes, which they then stuck to the walls. Says Berman, “Many students were like, ‘I ran out of Post-its. Can I have more Post-its?’”


Students reflect on questions spurred by Zak Ebrahim’s TED Talk. Photo: Twitter/@sbbEZas123

Around the question, “Where do you learn stereotypes, and how can we bust them?” students posted answers like, “We learn stereotypes from the people around us, but we can bust them by doing what we think is right.”

The question, “Zak struggled as the new kid in class who was quiet and chubby. Have you ever felt like Zak?” also prompted some interesting answers. One female student wrote, “My brothers make fun of me for being small all the time. So I get what Zak is saying.”

Berman loved watching the students find common ground with Ebrahim. “There were some realizations that Zak was just like them—which was awesome. That’s all we want from our students: to realize that everybody is a human,” says Berman. “When you’re in middle school, it’s really hard to realize that there are other people around you that also have feelings. These are kids, so they aren’t on the terrorist path, but this reminded them that they also shouldn’t be on the bullying path. That they have choices.”

Overall, Berman calls the lesson “empathy paradise.”

The lesson also served as an important opportunity to talk to the kids about the realities of terrorism and about the importance of religious tolerance. “The majority of the kids were not born yet on September 11, 2001,” says Berman. “We really try to speak with kindness and to be authentic that we’re talking about a specific group of extremist terrorists—that we’re talking about ISIS, Hamas and Hezbollah, not Muslim people as a whole.”

Berman and her colleagues work hard to make sure that the students hear from people of many faiths. Last year, the school held a panel that brought together a rabbi, a Baptist preacher, a Presbyterian minister, and an imam, all senior clergy members from around Atlanta. Berman hopes that this talk will further help students be open-minded. “I enjoy finding things to help them have a nuanced understanding of Islam as a religion and Muslim people as a whole,” says Berman.

TED Talks are a popular teaching tool at the Davis Academy for students and teachers alike. And Berman hopes that this talk will have a lasting effect on how students think and act. “We want them to be able to have high-level conversations and to think about becoming a better person in the new year,” she says.

And for Zak Ebrahim, it was incredibly moving to see images of this lesson posted on Twitter. “Beautiful and humbling,” he wrote in response. “This is my dream.”


An up-close look at some answers to this question. Photo: Twitter/@sbbEZas123


In this classroom, students answer questions on a chalk board. Photo: Twitter/@rabbispen


A scene from the end of this lesson. Photo: Twitter/@rabbispen

TEDBooks to get you ready for TEDGlobal 2014


Can’t wait for TEDGlobal 2014? We’re here to help! Spend the next two weeks curling up with these books by the wonderful speakers who will grace the stage in Rio.


American Chica: Two Worlds, One Childhood, by Marie Arana. The writer’s classic, in which she shares her own experience growing up between Peru and the United States, which she describes as a “north-south collision.”

No Place to Hide, by Glenn Greenwald. In his new book, journalist Glenn Greenwald takes a detailed look at government snooping and tells the story of working with Edward Snowden to leak classified NSA documents.

Why Meditate: Working with Thoughts and Emotions, by Matthieu Ricard. Ricard, now a monk but once a biochemist, presents a clear guide to how to meditate.

Fascinating regions

Bolivar, by Marie Arana. The biographer and critic’s latest book follows Simón Bolívar’s career and life, from his campaign to liberate Colombia and Venezuela to his love affairs.

Indonesia, Etc.: Exploring the Improbable Nation, by Elizabeth Pisani. Pisani’s new book explores Indonesia in all its richness and contradiction.

The Balkans: Nationalism, War and the Great Powers, 1804-2011, by Misha Glenny. Glenny offers a comprehensive look at the history of the Balkans.

Big global problems

Migration and Remittances During the Global Financial Crisis and Beyond, edited by Dilip Ratha, Ibrahim Sirkeci and Jeffrey H. Cohen. In their 2012 book, the authors investigate the impact of the financial crisis on migrants from developing countries.

The Wisdom of Whores: Bureaucrats, Brothels and the Business of AIDS, by Elizabeth Pisani. Wielding wit and political incorrectness, Pisani takes a bold look at how governments are reluctant to fund effective HIV prevention.

McMafia: A Journey Through the Global Criminal Underworld, by Misha Glenny. The journalist unleashes a shocking fact—that illicit trade accounts for an estimated one-fifth of global gross domestic product. In a feat of reporting, he then takes us inside some of those illegal economies.

Global stability

Cops Across Borders, by Ethan Nadelmann. The drug policy reformer’s book, originally published in 1994, examines the United States’ role in international law enforcement.

Peaceland: Conflict Resolution and the Everyday Politics of International Intervention, by Séverine Autesserre. The political scientist’s book, which came out earlier this year, examines peacebuilding interventions.

Stabilization Operations, Security and Development: States of Fragility, edited by Robert Muggah. Muggah, a specialist in security and development, offers a review of international stabilization efforts.

Rights, Resources and the Politics of Accountability, edited by Joanna Wheeler and Peter Newell. This 2006 book examines ways in which marginalized groups mobilize to fight for their rights.


Ghana Must Go, by Taiye Selasi. This debut novel, published last year, follows a long-dispersed family as they gather in Ghana following their father’s death.



Before They Pass Away, by Jimmy Nelson. The photographer documents over 30 remote tribes in more than 500 images.

Vik Muniz: Le Musée Imaginaire, by Vik Muniz and Eric Mézil. This volume accompanied a show in which Muniz, an artist, responded to the Collection Lambert in Avignon.

The Forces in Architecture, by Alejandro Aravena. The architect takes a technical look at his field in a book designed for architecture students and professionals.


Novel Plant Bioresources, by Ameenah Gurib-Fakim. A biodiversity scientist presents a comprehensive resource for understanding under-utilized plant species.

Beyond Boundaries: The New Neuroscience of Connecting Brains with Machines—and How It Will Change Our Lives, by Miguel Nicolelis. The neuroscientist explores how the brain creates thought, and the role machines will play in this process.

Measuring and Modeling the Universe, Volume 2, edited by Wendy L. Freedman. The astronomer’s 2010 textbook covers theories on the evolution of the universe.

Gastrointestinal Imaging Cases, by Jorge A. Soto, Stephen Anderson and Christine Menias. This 2013 textbook surveys more than 150 gastrointestinal cases.

The environment

Illustrated Encyclopedia of the Ocean, edited by Fabien Cousteau. This beautifully illustrated book dives into the ocean’s mysteries.

The Shaman’s Apprentice: A Tale of the Amazon Rainforest, by Mark Plotkin and Lynne Cherry. Plotkin and his coauthor focus on one boy’s quest to become a tribal shaman.

Antarctica 2041: My Quest to Save the Earth’s Last Wilderness, by Robert Swan. Swan, a polar explorer and environmentalist, looks at the state of the earth in light of the looming deadline of 2041, when the treaty protecting Antarctica is up for review.

Business and philanthropy

Jugaad Innovation: Think Frugal, Be Flexible, Generate Breakthrough Growth, by Navi Radjou, Jaideep Prabhu, and Simone Ahuja. Radjou and his coauthors examine how to drive innovation in an increasingly unpredictable business landscape via jugaad, Hindi for a clever improvised solution.

Philanthrocapitalism: How Giving Can Save the World, by Michael Green and Matthew Bishop. The social progress expert and his coauthor look at the way billionaires are reshaping philanthropy.

The Seven-Day Weekend: Changing the Way Work Works, by Ricardo Semler. Semler argues for employee satisfaction over corporate goals in this classic book, published a decade ago.



Planet DebianGunnar Wolf: Can printing be so hard‽

Dear lazyweb,

I am tired of finding how to get my users to happily print again. Please help.

Details follow.

Several years ago, I configured our Institute's server to provide easy, nifty printing support for all of our users. Using Samba+CUPS, I automatically provided drivers to Windows client machines, integration with our network user scheme (allowing for group authorization — that means you can only print to your designated printer), and flexible printer management (i.e. I can change printers on the server side without the users even noticing — great when we get new hardware or printers get sent to repairs!)...

Then, this year the people in charge of client machines in the institute decided to finally ditch WinXP licenses and migrate to Windows 7. Sweet! How can it hurt?

Oh, it can hurt. Terribly.

Windows 7 uses a different driver model, and after quite a bit of hair loss, I was not able to convince Samba to deliver drivers to Win7 (FWIW, I think we are mostly using 64 bit versions). Not only that, it also barfs when we try to install drivers manually and print to a share. And of course, it barfs in the least useful way, so it took me quite a bit of debugging and Web reading to find out it was not only my fault.

So, many people have told me that Samba (or rather, Windows-type networking) is no longer regarded as a good idea for printing. The future is here, and it's called IPP. And it is simpler, because Windows can talk directly with CUPS! Not only that, CUPS allows me to set valid users+groups to each printer. So, what's there to lose?

Besides time, that is. It took me some more hair pulling to find out that Windows 7 is shipped by default (at least in the version I'm using) with the Internet Printing Server feature disabled. Duh. OK, enable it, and... Ta-da! It works with CUPS! Joy, happiness!

Only that... It works only when I use it with no authentication.

Windows has an open issue, with its corresponding hotfix even, because Win7 and 2008 fail to provide user credentials to print servers...

So, yes, I can provide site-wide printing capabilities, but I still cannot provide per-user or per-group authorization and accounting, which are needed here.
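For reference, the server-side half of that authorization is easy to express in CUPS; the queue name "laser-admin" and group "admins" below are hypothetical, and this only sketches what the Windows clients then fail to authenticate against:

```
# Attach an allow-list to a queue (lpadmin -u allow:{user|@group}{,...}):
lpadmin -p laser-admin -u allow:@admins

# Or require authentication for the queue in /etc/cups/cupsd.conf:
<Location /printers/laser-admin>
  AuthType Basic
  Require user @admins
</Location>
```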

I cannot believe this issue cannot be solved under Windows 7, several years after it hit the market. Or am I just too blunt and cannot find an obvious solution?

Dear lazyweb, I did my homework. Please help me!

CryptogramLesson in Successful Disaster Planning

I found the story of the Federal Reserve on 9/11 to be fascinating. It seems they just flipped a switch on all their Y2K preparations, and it worked.

Google AdsensePaired for success: An extended workforce for

Welcome to the second part of ‘Paired for success’, a blog series dedicated to the stories of publishers and Certified Partners who have joined forces to get the most out of Google AdSense.    

When Dimitriy was getting ready for his move to Germany, he collected a range of learning materials about its language, culture and traditions. In 2010, he decided to share his knowledge with others by setting up a portal with a wealth of useful information about all things German. Since the early days, Google AdSense has been part of the site's growth. The portal is managed by a small team, and it was this lack of in-house resources that led Dimitriy to approach YoulaMedia, an advertising agency and Google AdSense Certified Partner based in Saint Petersburg, Russia.

YoulaMedia quickly tackled Dimitriy’s challenge: increase advertising earnings without impairing their users’ experience. This partnership exceeded Dimitriy’s expectations, and he can now invest more time in creating interesting, high-quality content.  

Read the full story here.

Are you looking for a managed solution too? Find out what Google AdSense Certified Partners can do for your business or check out our partners worldwide.

Posted by Alicia Escriba, Inside AdSense team
Was this blog post useful? Share your feedback with us.

Planet DebianEnrico Zini: pressure


I've just stumbled on this bit that seems relevant to me:

Insist on using objective criteria

The final step is to use mutually agreed and objective criteria for evaluating the candidate solutions. During this stage they encourage openness and surrender to principle, not pressure.

I find the concept of "pressure" very relevant, and I like the idea of discussions being guided by content rather than pressure.

I'm exploring the idea of filing under this concept of "pressure" most of the things described in codes of conduct, and I'm toying with looking at gender or race issues from the point of view of making people surrender to pressure.

In that context, most codes of conduct seem to give a partial definition of "pressure". I've been uncomfortable at DebConf this year, because the conference PG12 code of conduct would cause me trouble for talking about what lessons Debian can learn from consent culture in BDSM communities, but it would still allow situations in which people would have to yield to pressure, as long as the pressure was applied while avoiding the behaviours blacklisted by the CoC.

Pressure could be the phrase "you are wrong" without further explanation, spoken by someone with more reputation than I have in a project. It could be someone with the time for writing ten emails a day discussing with someone with barely the time to write one. It could be someone using elaborate English discussing with someone who needs to look up every other word in a dictionary. It could be just ignoring emails from people who have issues different than mine.

I like the idea of having "please do not use pressure to bring your issues forward" written somewhere, rather than spend time blacklisting all possible ways of pressuring people.

I love how the Diversity Statement is elegantly getting all this where it says: «We welcome contributions from everyone as long as they interact constructively with our community.»

However, I also find it hard not to fall back to using pressure, even just for self-preservation: I have often found myself in the situation of having the responsibility to get a job done, and not having the time or emotional resources to even read the emails I get about the subject. All my life I've seen people in such a situation yell "shut up and let me work!", and I feel a burning thirst for other kinds of role models.

A CoC saying "do not use pressure" would not help me much here, but being around people who do that, learning to notice when and how they do it, and knowing that I could learn from them, that certainly would.

If you can link to examples, I'd like to add them here.

RacialiciousWho is Lucy Flores?


Midterms are coming.

Also known as the election years that most people don’t pay attention to, the midterm elections have an enormous impact on the lives of day to day people. Voter turnout tends to drop, but major political machinations happen while the sitting President is still in office.

This month, longtime friend of the blog Rebecca Traister wrote a stunning profile of candidate Lucy Flores for Elle Magazine. Flores, the Democratic hopeful for Lieutenant Governor of Nevada, decimates other political origin stories – she’s Mexican-American, one of 13 siblings, the child of immigrants, and a former gang member. She turned her life around, started at community college, became a lawyer, and decided to run for office. She’s unapologetically pro-choice (and one of the rare candidates who will share her own story). Domestic violence shaped her world – and her life experiences led to a very pro-populist platform.

But what really gives Flores’ story bite is her unique position in politics – not only who she is, but what she represents for the Democratic party:

When a governor steps down in the state [of Nevada], the lieutenant governor, who’s not necessarily of the same party, assumes the post. Nevada’s current governor is the immensely popular Republican Brian Sandoval, whom Politico Magazine dubbed “The Man Who Keeps Harry Reid Up at Night.” That’s because many believe he’ll challenge the majority leader for his Senate seat in 2016, if, that is, the person who’d take his place is a fellow Republican: Flores’ opponent Mark Hutchison. Which makes Flores, to use Politico-speak, “The Woman Who Could Save Harry Reid’s Hide—and Keep the Senate in Democratic Hands.”

Go read it. Read it all.

The post Who is Lucy Flores? appeared first on Racialicious - the intersection of race and pop culture.

Sociological ImagesThe Manliest Shoes You’ve Ever Seen (1971)

When you hear the phrase Hush Puppies, think of basset hounds, and see these shoes, do you think “rugged, masculine, virile”? Because that’s what the copy says. In fact, this ad argues that wearing these shoes might make a women’s rights advocate call you a male chauvinist pig because they’re that masculine.


If this isn’t evidence of the fact that masculinity is socially constructed and changes over time, I don’t know what is.

Found at Vintage Ads.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.


CryptogramKill Switches for Weapons

Jonathan Zittrain argues that our military weapons should be built with a kill switch, so they become useless when they fall into enemy hands.

TEDMeet Jennifer Brain, your new favorite TED speaker, and the 13-year-old student who drew her


When Cecilia Matei’s science teacher showed a TED Talk about the promise of stem cell research to her class at the American School of Milan, Italy, she was immediately intrigued.

“For me it was a completely new subject that I had never heard of before,” the 13-year-old says.

The talk inspired her to create this drawing (click on it to see a larger version), a cross between a comic strip and a movie storyboard. While the star of the image is named “Jennifer Brain,” she closely resembles real-life TED speaker Susan Solomon, who at TEDGlobal 2012 shared the advances being made toward creating lab-grown stem cell lines, which could accelerate many types of medical research.

Matei’s teacher, Joseph Leonetti, handpicked this talk to kick off a special project for his students — as they did a three-week lesson on cell structures and processes in class, they were tasked with researching stem cells at home. “They were responsible for finding out what stem cells are, where they come from, how they are used and the ethical issues surrounding them. They were guided in the right direction,” he says. “Almost each class session started with students eagerly discussing what they had found out about stem cells the night before.”

This image was one of Matei’s homework assignments from the lesson. She walks us through the action of it: “Famous scientist Dr. Brain is invited to give a TED Talk about her studies,” she says. “She’s trying to convince people that stem cell research could help cure many diseases.”

In general, Matei says that her favorite subjects in school are math, English and, of course, science. “I loved every lab and experiment in Mr. Leonetti’s class,” she says. As for her favorite TED Talks, she prefers ones “about new technological inventions.”

So we’ll have to wait and see what’s next for Dr. Jennifer Brain.

Planet Linux AustraliaAndrew McDonnell: Evaluating the security of OpenWRT (part 2) – bugfix

I had a bug applying the RELRO flag to busybox; this is fixed in GitHub now.

For some reason the build links the busybox binary a second time and I missed the flag.

Also an omission from my prior blog entry: uClibc already has RELRO turned on in its OpenWRT configuration, so it does not need the flags passed through to it. However, it is failing to build its libraries with RELRO in all cases, in spite of the flag. This problem doesn’t happen in a standalone uClibc build from the latest uClibc trunk, but I haven’t scoped how to get uClibc trunk into OpenWRT. This may have been unclear the way I described it.

RacialiciousRecap: We’re Gonna Have to Live Through At Least Two Seasons of This; Gotham, Pilot

An entirely accurate summation.


By Kendra James

Gotham 1×01 was not a good hour of television.

I am 99.9% sure, looking through completely objective and non-nostalgia-tinted lenses (she says, unconvincingly), that the Birds of Prey pilot from 2002 was better than the pilot FOX served up last night. Unlike my beloved BoP, the Jim-Gordon-cum-Gotham-City origin story is about two white men, and thus Gotham will most likely get more than 13 episodes to try and be great.

“Try” being the key word.

Normally I would attempt to find some beacon of hope mired deep in the muck of a pilot, but Gotham is a show that sounds like it's using a comic book script for its dialogue –and no, it’s not a Greg Rucka script– and looks like at least 30 minutes of it was shot through a sepia-tinted Instagram filter. While envisioning characters’ dialogue appearing in speech bubbles above their heads, trying to be obligatorily impressed when a familiar face appeared every ten minutes (“Hey, look, Poison Ivy!” / “Cool, it’s the Riddler!” / “Oh boy, Penguin!”), and watching the woman playing Jim Gordon’s fiancee ‘act’, I realised I’m not convinced that this show is ever going to be good.

Instead of grasping at straws to call this a win, let’s just quickly list the great things Fish Mooney (Jada Pinkett Smith) and Renee Montoya (Victoria Cartagena) did last night:

Jada Pinkett Smith as Fish Mooney

– Despite her name (and the fact that she’s wearing a wig), Fish is not about getting her hair wet, and she’s got white men in her employ to make sure it doesn’t happen. In this new Bat-verse where everyone is connected, a young Penguin (Robin Taylor) is in charge of keeping Fish’s hair laid while she beats her employees with a baseball bat in the rain. Penguin’s murmured “Sorry,” for failing to keep an umbrella over her head results in, “If you let this hair go frizzy you will be.”

– Fish also has the future Penguin rub her bare feet while she auditions an amateur standup comic in her club (“Look, guys! I bet they want us to think it’s The Joker!”). This is before she orders Jim Gordon to shoot him in the back and dump him in the river for betraying her to the GCPD earlier in the episode. This was not a good pilot, but so many on Twitter seemed to agree: we were all happy to see Jada Pinkett Smith kicking ass, taking names (adjusting her wig after both), and dominating the men who attempted to get in her way.

– We have no definitive proof, but it sure did sound like some of Smith’s line delivery was inspired by the late Eartha Kitt, the first Black Catwoman on the 1960s Batman TV show.

– Above all, Fish was introduced with more personality and outright motivation than either of the two white male stars of the show Jim Gordon or his partner, Harvey Bullock (Donal Logue). This could be because Gordon and Bullock are long established characters in the DC Universe, and between the comics and recent Nolan films, the writers are expecting viewers to come in with some prior knowledge. Smith, on the other hand, is originating the role of Fish Mooney. The focus on her character may peter out in subsequent episodes, but it was nice to see Smith handed something meaty to work with in the pilot episode.

(And it’s worth pointing out that women criminals who originate on Bat-verse shows have a history of going places.)

– Renee Montoya didn’t do much this week, but comic book fans (and everyone else) were probably able to pick up on the sledgehammer of a hint concerning her sexuality and her possible past relationship with Gordon’s fiancee. I was surprised there; we live in a world where fans have to fight NBC for John Constantine’s bisexuality and cigarettes (guess which fight they won), so I wouldn’t have been shocked to see Montoya’s lesbian relationships pushed to the side. Still, the way this pilot went? They’ll have to call me back when they introduce Kate Kane.


Being a seasoned expert of the medium, I understand that you generally can’t judge a show by a bad pilot. Gotham will get another episode or two from me to improve, but life is too short and Gotham Academy is coming out too soon for me to waste time in a subpar Bat-scape. All I can do is encourage those of you who came to Gotham for the WOC to also give the CW’s The Flash pilot, and Candice Patton’s Iris West, an equal chance.

(Spoiler Alert: It’s better. It’s so much better.)

The post Recap: We’re Gonna Have to Live Through At Least Two Seasons of This; Gotham, Pilot appeared first on Racialicious - the intersection of race and pop culture.

Worse Than FailureCodeSOD: Going Out of Style

The process of optimizing the CSS used in a web site can be quite complicated. The subtle interplay between selectors, attributes, specificity, inheritance and the DOM elements can significantly impact the outcome. Style guides can be a thing of elegant beauty, to be admired by many and revered by those steeped in the dark arts of styling.

Then there's the code that George found when he took on the task of migrating a 1990's-era web site. Nobody expects code from a 15-year-old web site to be up to current standards. But there are limits. George's spidey sense started tingling when he found a file named 'css.php'. A look inside didn't do anything to turn the alarm bells off.


It was with real trepidation that George cracked open the CSS_print_srnbi function.

function CSS_print_srnbi( $what){
  global $CSS_color;
  global $CSS_background_color;
  for( $i= 1; $i < 256; ++$i){
    echo "$what.";
    if( $i & 128) echo "f";
    if( $i & 64) echo "t";
    if( $i & 32) echo "c";
    if( $i & 16) echo "s";
    if( $i & 8) echo "r";
    if( $i & 4) echo "n";
    if( $i & 2) echo "b";
    if( $i & 1) echo "i";
    echo " { ";
    if( $i & 128) echo "border-width: 1px 0px 1px 0px; ";
    if( $i & 64) echo "vertical-align: top; ";
    if( $i & 32) echo "text-align: center; ";
    if( $i & 16) echo "font-variant: small-caps; ";
    if( $i & 8) echo "background-color: #$CSS_color; color: #$CSS_background_color; ";
    if( $i & 4) echo "font-stretch: narrower; ";
    if( $i & 2) echo "font-weight: bold; ";
    if( $i & 1) echo "font-style: italic; ";
    echo "}\n";
  }
}

Behold. Elegant beauty personified in 25 sublime lines of code.

Fortunately (or unfortunately, for those who want to see this technique taken to its still-unrealized conclusion...Unicode characters) only "b", "f", "fb" and "tc" were used as classes throughout the website.
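The scheme is a straightforward bitmask: each of the 8 letters corresponds to one bit, so the loop emits a rule for every one of the 255 non-empty combinations. A quick Python sketch of the same mapping (my own, just to illustrate the explosion):

```python
# The same bit-to-letter mapping the PHP loop uses, high bit first.
FLAGS = [(128, "f"), (64, "t"), (32, "c"), (16, "s"),
         (8, "r"), (4, "n"), (2, "b"), (1, "i")]

def class_name(i):
    """Return the class-name suffix the PHP code emits for bit pattern i."""
    return "".join(letter for bit, letter in FLAGS if i & bit)

# All 255 non-empty combinations get a CSS rule...
all_classes = [class_name(i) for i in range(1, 256)]
print(len(all_classes))   # 255
print(class_name(3))      # "bi" (bold + italic)

# ...but only these four were ever used on the site:
used = {"b", "f", "fb", "tc"}
print(used <= set(all_classes))   # True
```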


Planet DebianDariusz Dwornikowski: debrfstats software for RFS statistics

Last time I said that I would release the software I used to make the RFS stats plots. You can find it in my github repo -

The software contains a small class to get the data needed to generate plots, as well as to do some simple bug analysis. The software also contains an R script to make plots from a CSV file. For now debrfstats uses the SOAP interface to Debbugs, but I am now working on adding a UDD data source.

The software is written in Python 2 (SOAPpy does not come in a Python 3 flavour); some usage examples are in the file in the repository.

If you have any questions or wishes for debrfstats do not hesitate to contact me.

Geek FeminismWho moved my linkspam? (22 September 2014)

  • “You Cannot Be Mommy”: A Female Cook on Ratatouille | Rebecca Lynde-Scott at The Toast (Sept 15): “Notice that, while her position is never specified, she’s low enough on the totem pole that she’s given the job of training the despised plongeur (“garbage boy” in the film, actually dishwasher), a job only given to the person occupying the station the new person is moving into, so she’s pretty damn low. [...] Linguini announced his and Colette’s relationship to the press, “Inspiration has many names. Mine is named Colette.” That moment in the movie is supposed to be about how he’s betraying Remy by not being honest, but he’s betraying Colette nearly as much just by these two sentences. In eight words, he demotes her from competent cook on the way up to artist’s muse. As the former, she could keep working her way up. As the latter, she might never get another job in a really good kitchen again, if she and Linguini break up. That gets ignored, of course, shellacked over with Remy’s story, some sharp remarks, and that trademarked Disney happy-ever-after.”
  • [warning for discussion of harassment] Pushing Women and People of Color Out of Science Before We Go In | Jennifer Selvidge at Huffington Post (Sept 18): “The misogyny and racism I experienced and saw at MIT became more and more concerning [...] I know that even with close to straight As, I am still unwelcome in my scientific community and unwelcome as an engineer. I will be competing with white men with lower GPAs and less research experience who will likely be chosen over me, as professors on graduate committees. After all, some of those very same graduate school committee members probably remember fondly “the days when men were engineers and women were flight attendants.” The problems in STEM are the people in STEM. I shouldn’t have to play catch up, when I am already ahead.”
  • New FOSS Outreach Program internships for female technical contributors | Quim Gil at Wikimedia (Sept 18): “The Free and Open Source Software Outreach Program for Women offers paid internships to developers and other technical contributors working on projects together with free software organizations. [...] The application period starts on September 22nd and ends one month later on October 22nd.”
  • [warning for police brutality] Police killed a black man dressed up like an anime character |  Aja Romano at The Daily Dot (Sept 16): “For the second time in two months, a black man has been shot and killed by police officers while holding a toy weapon. The Utah police fatally shot 22-year-old Darrien Hunt on Wednesday.”
  • Participate in a Survey About Gender Diversity in Video Games | Carly Smith at The Escapist (Sept 16): “Student researcher Jennifer Allaway is examining the relationship between players’ desires for diversity and game developers’ understanding of that desire, among many other topics, for a GDC 2015 talk.” There are separate surveys for developers and consumers.
  • [warning for discussion of sexual harassment] Misogyny and the Atheist Movement | Comment by Hold your seahorses at Metafilter (Sept 15): “The article makes a passing mention of new “rules” for the “gender dynamic” and I think there’s actually something to that, as far as the reason why at least a subset of men get extravagantly, sometimes violently, upset and retaliatory when they run up against, or see someone run up against, those “rules”. Because yes. Absolutely, the rules are changing about what you “can” and “can’t” do with/to women (at cons, in public, online, in general). But the people getting upset about this tend to misunderstand what the idea of “the rules are changing” means. The “rules” – that set of norms that determined where you could and couldn’t acceptably transgress with someone – used to be much more liberal from the male perspective. [...] That sense of assurance, of insulation from consequences, is what’s been increasingly yoinked away from men as it becomes less and less acceptable to do these things.”
  • Time to Raise the Profile of Women and Minorities in Science | Brian Welle and Megan Smith at Scientific American (Sept 16): “over the past few years, we discovered some pretty ugly news about our beloved Google Doodles. We had been making these embellishments to the corporate logo on our home page, often in honor of specific people on their birthdays, ever since the company was founded in 1998. For the first seven years, we celebrated exactly zero women. We had not noticed the imbalance.”
  • why many women of colour within the so-called ‘Western countries’ and those outside are very alienated with the [mainstream] feminism | lesetoilesnoires at tumblr (Sept 20): “The idea that to show a White young woman in the West why and how she needs feminism, or why and how she has benefited from feminism, you have to appeal to the ‘tragic plight’ of Women of Colour ‘elsewhere’, turn these Women of Colour into caricatures of victimhood while contrasting it with White, middle-class women as ‘empowered subjects’, is simply condescending in the best case and outright racist in the worst case.”
  • Albert Einstein, Anti-Racist Activist | s.e. smith at this ain’t livin’ (Sept 22): “It is perhaps not surprising that Einstein’s contributions to anti-racism were erased at the time. It was easy to focus on the media-friendly physicist who amazed people with his mind, and to quietly skate around details of his personal life. His work can’t have made contemporary media comfortable, either, as he was unafraid when it came to specifically confronting white complicity and talking about what whites needed to do.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet Linux AustraliaSonia Hamilton: SaltStack Essential Reading

A list of ‘Essential Reading’ for SaltStack. A collection of useful links, mostly for myself but possibly helpful to others.

Planet Linux AustraliaCraige McWhirter: Converting an Instance to an Image in OpenStack

This documents how to convert an existing VM instance into an OpenStack image which can be used to boot new instances. In particular it documents doing so when you are using volume backed instances.


Create a snapshot of the instance

Check the status of the source VM and stop it if it's not already:

$ nova list
| ID                                   | Name      | Status | Task State | Power State | Networks  |
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | Tutorial1 | ACTIVE | -          | Running     | Tutorial= |
$ nova stop Tutorial1
$ nova list
| ID                                   | Name      | Status  | Task State | Power State | Networks  |
| 4fef1b97-901e-4ab1-8e1f-191cb2f75969 | Tutorial1 | SHUTOFF | -          | Running     | Tutorial= |

Take a snapshot and check the result:

$ nova image-create --poll Tutorial1 Tutorial1Snapshot
Server snapshotting... 100% complete
$ nova image-list
| ID                                   | Name              | Status | Server |
| 47e192f8-32b2-4839-8392-a18e3be1b9a6 | Tutorial1Snapshot | ACTIVE |        |

Convert that snapshot into an image

Obtain the snapshot ID from cinder:

$ cinder snapshot-list
|                  ID                  |              Volume ID               |  Status  |      Display Name       | Size |
| 6a09198d-3b14-438d-a8e2-0473331fa0b7 | 616dbaa6-f5a5-4f06-9855-fdf222847f3e | deleting | snapshot for Tutorial1  |  10  |

Create a volume from that snapshot:

$ cinder create --snapshot-id 6a09198d-3b14-438d-a8e2-0473331fa0b7 2

|       Property      |                Value                 |
|     attachments     |                  []                  |
|  availability_zone  |                MyZone                |
|       bootable      |                false                 |
|      created_at     |      2014-09-23T02:19:48.414823      |
| display_description |                 None                 |
|     display_name    |                 None                 |
|      encrypted      |                False                 |
|          id         | 8fc9e82d-bb57-4e74-a48a-93e20c94fe2f |
|       metadata      |                  {}                  |
|         size        |                  2                   |
|     snapshot_id     | 6a09198d-3b14-438d-a8e2-0473331fa0b7 |
|     source_volid    |                 None                 |
|        status       |               creating               |
|     volume_type     |                block                 |

Create and upload an image from that volume:

$ cinder upload-to-image 8fc9e82d-bb57-4e74-a48a-93e20c94fe2f TutorialInstance
|       Property      |                                                      
|   container_format  |                                                      
|     disk_format     |                                                      
| display_description |                                                      
|          id         |                                                      
|       image_id      |                                                      
|      image_name     |                                                      
|         size        |                                                      
|        status       |                                                      
|      updated_at     |                                                      
|     volume_type     | {u'name': u'block', u'qos_specs_id': None, u'deleted': False, u'created_at': u'2014-08-08T04:04:49.000000', u'updated_at': None, u'deleted_at': None, u'id': u'7a522201-7c27-4eaa-9d95-d70cfaaeb16a'} |

Export your network UUID and image UUID:

$ export OS_IMAGE=83ec0ea1-e41e-475e-b925-96e5f702fba5
$ export OS_NET=c4beeb1d-c04d-43f4-b8fb-b485bcfcf005

Boot an instance from your new image to ensure it works:

$ nova boot --key-name $OS_USERNAME --flavor m1.tiny --block-device source=image,id=$OS_IMAGE,dest=volume,size=2,shutdown=remove,bootindex=0 --nic net-id=$OS_NET --poll Tutorial0
| Property                             | Value                                           |
| OS-DCF:diskConfig                    | MANUAL                                          |
| OS-EXT-AZ:availability_zone          | MyZone                                          |
| OS-EXT-STS:power_state               | 0                                               |
| OS-EXT-STS:task_state                | scheduling                                      |
| OS-EXT-STS:vm_state                  | building                                        |
| OS-SRV-USG:launched_at               | -                                               |
| OS-SRV-USG:terminated_at             | -                                               |
| accessIPv4                           |                                                 |
| accessIPv6                           |                                                 |
| adminPass                            | Riuvai8PvHu3                                    |
| config_drive                         |                                                 |
| created                              | 2014-09-23T02:25:14Z                            |
| flavor                               | m1.tiny (1)                                     |
| hostId                               |                                                 |
| id                                   | ec354ce2-fed9-4196-829e-483ab7759203            |
| image                                | Attempt to boot from volume - no image supplied |
| key_name                             | DemoTutorial                                    |
| metadata                             | {}                                              |
| name                                 | Tutorial0                                       |
| os-extended-volumes:volumes_attached | []                                              |
| progress                             | 0                                               |
| security_groups                      | default                                         |
| status                               | BUILD                                           |
| tenant_id                            | djfj4574fn478fh69gk489fn239fn9rn                |
| updated                              | 2014-09-23T02:25:14Z                            |
| user_id                              | hy95g85nmf72bd0esdfj94582jd82j4f8               |
Server building... 100% complete

Your new image should now be waiting for you to log in.
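The CLI steps above can also be assembled programmatically. A hypothetical Python sketch follows (the helper name and structure are my own invention, not part of OpenStack; in a real run you must read the snapshot and volume IDs out of the intermediate command output before issuing the later steps):

```python
# Hypothetical helper assembling the CLI steps from this walkthrough
# into subprocess-ready command lists. The snapshot and volume IDs
# must be taken from the output of the earlier commands.
def instance_to_image_cmds(instance, snapshot_name,
                           cinder_snapshot_id, volume_id,
                           image_name, size_gb=2):
    return [
        ["nova", "stop", instance],
        ["nova", "image-create", "--poll", instance, snapshot_name],
        ["cinder", "create", "--snapshot-id", cinder_snapshot_id, str(size_gb)],
        ["cinder", "upload-to-image", volume_id, image_name],
    ]

cmds = instance_to_image_cmds(
    "Tutorial1", "Tutorial1Snapshot",
    "6a09198d-3b14-438d-a8e2-0473331fa0b7",
    "8fc9e82d-bb57-4e74-a48a-93e20c94fe2f",
    "TutorialInstance")
# Each entry can be passed to subprocess.check_call(cmd).
```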

Planet DebianKeith Packard: easymega-118k

Neil Anderson Flies EasyMega to 118k' At BALLS 23

Altus Metrum would like to congratulate Neil Anderson and Steve Cutonilli on the success of the two stage rocket, “A Money Pit”, which flew on Saturday the 20th of September on an N5800 booster followed by an N1560 sustainer.

“A Money Pit” used two Altus Metrum EasyMega flight computers in the sustainer, each one configured to light the sustainer motor and deploy the drogue and main parachutes.

Safely Staged After a 7 Second Coast

After the booster burned out, the rocket coasted for 7 seconds to 250m/s, at which point EasyMega was programmed to light the sustainer. As a back-up, a timer was set to light the sustainer 8 seconds after the booster burn-out. In both cases, the sustainer ignition would have been inhibited if the rocket had tilted more than 20° from vertical. During the coast, the rocket flew from 736m to 3151m, with speed going from 422m/s down to 250m/s.

This long coast, made safe by EasyMega's quaternion-based tilt sensor, allowed this flight to reach a spectacular altitude.
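The geometry behind such a tilt check is simple: from a unit attitude quaternion, the vertical component of the rocket's axis falls out directly. A minimal sketch (my own illustration of the math, not the EasyMega firmware):

```python
import math

def tilt_deg(w, x, y, z):
    """Angle between the body z-axis and vertical, from a unit quaternion.

    Rotating the body z-axis into the world frame gives a vector whose
    vertical component is 1 - 2*(x*x + y*y); the tilt is its arccos.
    """
    cz = max(-1.0, min(1.0, 1.0 - 2.0 * (x * x + y * y)))
    return math.degrees(math.acos(cz))

print(tilt_deg(1, 0, 0, 0))   # 0.0 : perfectly vertical

# 15 degrees of pitch about the x-axis: q = (cos(7.5deg), sin(7.5deg), 0, 0)
h = math.radians(7.5)
print(round(tilt_deg(math.cos(h), math.sin(h), 0, 0), 1))   # 15.0

# A flight computer would inhibit ignition when tilt_deg(...) > 20.
```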

Apogee Determined by Accelerometer

Above 100k', the MS5607 barometric sensor is out of range. However, as you can see from the graph, the barometric sensor continued to return useful data. EasyMega doesn't expect that to work, and automatically switched to accelerometer-only apogee determination mode.

Because off-vertical flight will under-estimate the time to apogee when using only an accelerometer, the EasyMega boards were programmed to wait for 10 seconds after apogee before deploying the drogue parachute. That turned out to be just about right; the graph shows the barometric data leveling off right as the apogee charges fired.

Fast Descent in Thin Air

Even with the drogue safely fired at apogee, the descent rate rose to over 200m/s in the rarefied air of the upper atmosphere. With increasing air density, the airframe slowed to 30m/s when the main parachute charge fired at 2000m. The larger main chute slowed the descent further to about 16m/s for landing.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.4.450.1.0

Continuing with his standard pace of approximately one new version per month, Conrad released a new minor release of Armadillo a few days ago. As before, I had created a GitHub-only pre-release which was tested against all eighty-seven (!!) CRAN dependents of our RcppArmadillo package and then uploaded RcppArmadillo 0.4.450.0 to CRAN.

The CRAN maintainers pointed out that under the R-development release, a NOTE was issued concerning the C-library's rand() call. This is a pretty new NOTE, but it means using the (sometimes poor quality) rand() generator is now a no-no. Now, Armadillo being as robustly engineered as it is offers a new random number generator based on C++11 as well as a fallback generator for those unfortunate enough to live with an older C++98 compiler. (I would like to note here that I find Conrad's continued support for both C++11, offering very useful modern language idioms, as well as the fallback code for continued deployment and usage by those constrained in their choice of compilers rather exemplary --- because contrary to what some people may claim, it is not a matter of one or the other. C++ always was, and continues to be, a multi-paradigm language which can be supported easily by several standards. But I digress...)

In any event, one cannot argue with CRAN about their prescription of a C++98 compiler. So Conrad and I discussed this over email, and came up with a scheme where a user-package (such as RcppArmadillo) can provide an alternate generator which Armadillo then deploys. I implemented a first solution which was then altered / reflected by Conrad in a revised version 4.450.1 of Armadillo. I packaged, and now uploaded, that version as RcppArmadillo 0.4.450.1.0 to both CRAN and into Debian.

Besides the RNG change already discussed, this release brings a few smaller changes from the Armadillo side. These are detailed below in the extract from the NEWS file. On the RcppArmadillo side, we now have support for pkgKitten which is both very exciting and likely the topic of another blog post with an example of creating an RcppArmadillo package that purrs. In the process, I overhauled and polished how new packages are created by RcppArmadillo.package.skeleton(). An upcoming blog post may provide an example.

Changes in RcppArmadillo version 0.4.450.1.0 (2014-09-21)

  • Upgraded to Armadillo release Version 4.450.1 (Spring Hill Fort)

    • faster handling of matrix transposes within compound expressions

    • expanded symmatu()/symmatl() to optionally disable taking the complex conjugate of elements

    • expanded sort_index() to handle complex vectors

    • expanded the gmm_diag class with functions to generate random samples

  • A new random-number implementation for Armadillo uses the RNG from R as a fallback (when C++11 is not selected so the C++11-based RNG is unavailable) which avoids using the older C++98-based std::rand

  • The RcppArmadillo.package.skeleton() function was updated to only set an "Imports:" for Rcpp, but not RcppArmadillo which (as a template library) needs only LinkingTo:

  • The RcppArmadillo.package.skeleton() function will now prefer pkgKitten::kitten() over package.skeleton() in order to create a working package which passes R CMD check.

  • The pkgKitten package is now a Suggests:

  • A manual page was added to provide documentation for the functions provided by the skeleton package.

  • A small update was made to the package manual page.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaGlen Turner: Installing OpenVSwitch and Mininet on Raspberry Pi


OpenVSwitch is a software defined networking switch for Linux. It supports its own protocol and also OpenFlow 1.3. OpenVSwitch is included in the Linux kernel and its user-space utilities ship in Debian Wheezy.

Mininet allows the simple creation of emulated networks, using Linux's namespace feature. Mininet is not packaged in Debian Wheezy.

Raspberry Pi kernel issue #377 enables the kernel features needed by OpenVSwitch and Mininet.

Installing OpenVSwitch

Since all the necessary parts are in packages, simply install the packages:

$ sudo apt-get install ovsdbmonitor openvswitch-switch openvswitch-controller openvswitch-pki openvswitch-ipsec

The packaging is done well, and automatically establishes the necessary databases and public key infrastructure.

Installing Mininet

The main Mininet installation instructions give three choices: we are using “Option 2: installation from source”.

Before going further enable memory control groups in the kernel. Edit the line in /boot/cmdline.txt to append:

cgroup_enable=memory swapaccount=1

Reboot so that those kernel parameters take effect.

Get the source:

$ sudo apt-get install git
$ git clone git://

There is an installation script in mininet/utils/. It won't run successfully, as the Raspberry Pi doesn't keep the Linux kernel in the expected package. In any case it tries to compile OpenVSwitch as a kernel module, which is no longer needed now that OpenVSwitch is part of the stock Linux kernel.

Looking at that script we can do the steps by hand. Starting with installing the runtime dependencies:

$ sudo apt-get install build-essential iperf telnet python-setuptools cgroup-bin ethtool help2man pyflakes pylint pep8 socat

Now install Mininet into /usr/local:

$ sudo make install

Finally, test that the installation worked:

$ sudo /etc/init.d/openvswitch-controller stop
$ sudo mn --test pingall
*** Creating network
*** Adding controller
*** Adding hosts:
h1 h2 
*** Adding switches:
*** Adding links:
(h1, s1) (h2, s1) 
*** Configuring hosts
h1 h2 
*** Starting controller
*** Starting 1 switches
*** Waiting for switches to connect
*** Ping: testing ping reachability
h1 -> h2 
h2 -> h1 
*** Results: 0% dropped (2/2 received)
*** Stopping 1 controllers
*** Stopping 1 switches
s1 ..
*** Stopping 2 links

*** Stopping 2 hosts
h1 h2 
*** Done
completed in 6.277 seconds
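If you want to script such smoke tests, the `Results` line is the one to check. A small sketch (my own, not part of Mininet) that parses it:

```python
import re

def ping_results(mn_output):
    """Extract (dropped %, received, sent) from `mn --test pingall` output.

    Matches lines like: *** Results: 0% dropped (2/2 received)
    """
    m = re.search(r"Results:\s*(\d+)% dropped \((\d+)/(\d+) received\)",
                  mn_output)
    if not m:
        raise ValueError("no Results line found")
    dropped, received, sent = map(int, m.groups())
    return dropped, received, sent

output = "*** Results: 0% dropped (2/2 received)"
print(ping_results(output))   # (0, 2, 2)
```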


LongNowThe Interval Brickstarter Funded: Support The Robot Stretch Goal

Photo by Christopher Michel

The Interval brickstarter ends on October 1 at 5pm–that’s the last chance to become an Interval Charter Donor. We’ve set two ambitious stretch goals to reach before it ends: raising $550,000 in total (about $42K to go) and having 1000 total donors.

Thanks to hundreds of supporters around the world we have funded our ‘brickstarter’ for The Interval’s construction! This achievement was possible thanks to more than 700 long-term thinkers (and counting) who have donated over the last 2 years.

Our supporters gave from around the US and the world: Atlanta to Zurich; Australia to Croatia; New Hampshire to Hawaii! Thank you all for your generosity in helping build a one-of-a-kind venue, The Long Now Foundation’s new home: The Interval at Long Now.

Photo by Because We Can

The Interval is now open to all 10AM to Midnight every day at Fort Mason Center in San Francisco. The names of our Charter Donors will soon be listed on our Donor Wall within The Interval as thanks for their support in making our new home a reality.

The money raised by October 1st will help fund The Interval’s two robots.

Let’s meet our Robots… 

The Bespoke Gin Robot will be stationed behind the bar and wields an array of 15 botanicals including coriander seed, lemon peel, and apricot kernels. Each botanical is individually distilled by the amazing folks at St. George Spirits and the bot’s components were made by Party Robotics. 

Gin Robot design image by Because We Can

Including the juniper spirit base, the Gin Robot can create custom gin to-order in 87,178,291,200 possible combinations. The Interval could serve up a different gin variation each day for the next 238 million years. It’s our hope that this cocktail possibility machine will be worthy of sitting next to Brian Eno’s Ambient Painting.
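As an aside, the quoted figure is exactly 14! (the number of orderings of 14 items), and the 238-million-year claim checks out arithmetically. A quick sanity check (my own, not from the post):

```python
import math

combos = 87_178_291_200
print(combos == math.factorial(14))   # True: the figure is exactly 14!

# One gin variation per day:
years = combos / 365.25
print(round(years / 1e6, 1))          # 238.7 (million years)
```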

Top 10 donors in the stretch phase get invited to a special tasting with the Gin Robot.

Our Chalkboard Robot has been designed by artist Jürg Lehni. The chalkboard itself is now up at The Interval. It awaits the artist’s arrival from Switzerland to install the bot, which will then write or draw by our command. Below you can see Viktor, our robot’s elder cousin, in action.

<iframe allowfullscreen="" height="315" src="" width="560"></iframe>
In addition to all our usual donor benefits, if we hit BOTH the $550K mark and reach 1000 donors, we have a couple of Long Now surprises planned to thank all of our charter donors.

Everyone who donates by October 1 will be a charter donor. All of our great donor perks like Challenge Coins, Long Now flasks, and bottle keep bottles of exclusive St George Bourbon, Single Malt, or Bristlecone Gin are still available.

Long Now Challenge Coin

The stretch goal of $550,000–adding $55K to our brickstarter total–will help cover costs associated with building and installing the robots. We’ve set a participation stretch goal of 1000 Donors (less than 300 to go!).

The Interval is intended to be both a community hub and a funding source for the Foundation going forward. Your donations help to defray our construction and startup costs, so your generosity is incredibly important to getting this endeavor off on a flying start toward profitability.

If you haven’t donated, please consider a gift by October 1, to become a Charter Donor and enjoy first chance to buy tickets to Interval events and be listed on the Donor Wall. You’ll also be a part of starting up a unique venue that helps get important ideas about long-term thinking into the world.

If you have donated, thank you! We’d appreciate your help spreading the word before the October 1 deadline. And remember gifts (and employer matches) are cumulative–consider going up a level? We have only days left to reach our stretch goal and fund these wonderful additions to The Interval’s array of mechanical wonders.

Photo by Because We Can

Planet DebianGunnar Wolf: One month later: How is the set of Debian keyrings faring?

OK, it's almost one month since we (the keyring-maintainers) gave our talk at DebConf14; how are we faring regarding key transitions since then? You can compare the numbers (the graphs, really) to those in our DC14 presentation.

Since the presentation, we have had two keyring pushes:

First of all, the Non-uploading keyring is all fine: As it was quite recently created, and as it is much smaller than our other keyrings, it has no weak (1024 bit) keys. It briefly had one in 2010-2011, but it's long been replaced.

Second, the Maintainers keyring: In late July we had 222 maintainers (170 with >=2048 bit keys, 52 with weak keys). By the end of August we had 221: 172 and 49 respectively, and by September 18 we had 221: 175 and 46.

As for the Uploading developers, in late July we had 1002 uploading developers (481 with >=2048 bit keys, 521 with weak keys). By the end of August we had 1002: 512 and 490 respectively, and by September 18 we had 999: 531 and 468.

Please note that these numbers do not say directly that six DMs or that 50 uploading DDs moved to stronger keys, as you'd have to factor in new people being added, keys migrating between different keyrings (mostly DM⇒DD), and people retiring from the project; you can get the detailed information looking at the public copy of our Git repository, particularly of its changelog.
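The percentages behind that crossing can be recomputed from the counts quoted above (a quick sketch using only the numbers in this post):

```python
# Strong (>=2048 bit) vs weak (1024 bit) key counts for the
# uploading-developers keyring, taken from the figures above.
uploading = {
    "late July":     (481, 521),
    "end of August": (512, 490),
    "September 18":  (531, 468),
}

for date, (strong, weak) in uploading.items():
    share = 100.0 * strong / (strong + weak)
    print("%-13s: %3d of %4d keys are strong (%.0f%%)"
          % (date, strong, strong + weak, share))
```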

And where does that put us?

Of course, I'm very happy to see that the lines in our largest keyring have already crossed. We now have more people with >=2048 bit keys. And a lot of work went into getting this far! But that still means... that in order not to lock a large proportion of Debian Developers and Maintainers out of the project, we still have a real lot of work to do. We would like to keep the replacement slope high (because, remember, on January 1st we will remove all small keys from the keyring).

And yes, we are willing to do the work. But we need you to push us for it: We need you to get a new key created, to gather enough (two!) DD signatures in it, and to request a key replacement via RT.

So, by all means: Do keep us busy!


Planet DebianKonstantinos Margaritis: EfikaMX updated wheezy and jessie images available

A while ago, I promised to some people in forum that I would provide bootable armhf images for wheezy but most importantly for jessie with an updated kernel. After a delay -I did have the images ready and working, but had to clean them up a bit- I decided to publish them here first.

So, here are the images: (559MB) (635MB)