Planet Russell

Planet Debian: Ian Campbell: qcontrol 0.5.6

  • Fix for kernels which have permissions 0200 (write-only) on gpio export device.
  • Updates to systemd unit files.
  • Update to README for (not so) new homepage (thanks to Martin Michlmayr).
  • Add a configuration option in the examples to handle QNAP devices which lack a fan (Debian bug #712841, thanks to Martin Michlmayr for the patch and to Axel Sommerfeldt).

Get it from git or http://www.hellion.org.uk/qcontrol/releases/0.5.6/.

The Debian package will be uploaded shortly.

Planet Debian: Antoine Beaupré: Diversity, education, privilege and ethics in technology

This is a rant I wrote while attending KubeCon Europe 2018. I do not know how else to frame the deep discomfort I have with the way one of the most cutting-edge projects in my community is moving. I see it as a symptom of so many things wrong in society at large, and figured it was as good a way as any to open the discussion regarding how free software communities seem to naturally evolve into corporate money-making machines with questionable ethics.

[Image: a white man looking at his phone while a hairdresser prepares him for a video shoot, with plants and audio-video equipment in the background. Caption: A white man groomed by a white woman]

Diversity and education

There is often a great point made of diversity at KubeCon, and that is something I truly appreciate. It's one of the places where I have seen the largest efforts towards that goal; I was impressed by the efforts made in Austin, and mentioned it in my overview of that conference back then. Yet it is still one of the least diverse places I've ever participated in: in comparison, PyCon "feels" more diverse, for example. And then there's real life out there, where women constitute basically half the population. This says something about the actual effectiveness of diversity efforts in our communities.

[Image: a large conference room full of people who mostly look like white men, with a speaker on a large stage illuminated in white. Caption: 4000 white men]

The truth is that, unlike in programmer communities, "operations" knowledge (sysadmin, SRE, DevOps, whatever it's called these days) comes not from institutional education, but from self-learning. Even though I have years of university training, the day-to-day knowledge I need in my work as a sysadmin comes not from the university, but from late-night experiments on my personal computer network. This was first on the Macintosh, then on the FreeBSD source code passed down as a magic word from an uncle, and finally through Debian, consecrated as the leftist's true computing way. Sure, my programming skills were useful there, but I acquired those before going to university: even there, teachers expected students to learn programming languages (such as C!) in between sessions.

[Image: a bunch of white geeks hanging out with their phones next to a sign that says 'Thanks to our Diversity Scholarship Sponsors', with a row of corporate logos. Caption: Diversity program]

The real solutions to the lack of diversity in our communities come not only from a change in culture, but also from real investments in society at large. The mega-corporations subsidizing events like KubeCon make sure they get a lot of good press from those diversity programs. However, the money they spend on those is nothing compared to the taxes they evade in their home states. As an example, Amazon recently put 7000 jobs on hold because of a tax the city of Seattle wanted to impose on corporations to help the homeless population. Google, Facebook, Microsoft, and Apple all evade taxes like gangsters. This is important because society changes partly through education, and that costs money. Education is how more traditional STEM sectors like engineering and medicine have changed: women, minorities, and poorer populations were finally allowed into schools after the epic social struggles of the 1970s finally yielded more accessible education. Just as the culture changes are seeing a backlash, the tide is turning here as well, and the trend is reversing towards more costly, less accessible education. But not everywhere. The impacts of education changes are long-lasting. By evading taxes, those companies are depriving the state of revenues that could level the playing field through affordable education.

Hell, any education in the field would help. There is basically no sysadmin education curriculum right now. Sure, you can follow Cisco CCNA or Microsoft MCSE private trainings. But anyone who's been seriously involved in running any computing infrastructure knows those are a scam: they will tie you down in a proprietary universe (Cisco and Microsoft, respectively) and probably lead only to "remote hands monkey" positions, rarely to executive ones.

Velocity

Besides, providing an education curriculum would require the field to slow down so that knowledge could settle and trickle into a curriculum. Configuration management is pretty old, but because the changes in tooling are fast, any curriculum built in the last decade (or even less) quickly becomes irrelevant. Puppet publishes a new release every 6 months, and Kubernetes, barely 4 years old now, is changing rapidly with a ~3 month release schedule.

Here at KubeCon, Mark Zuckerberg's mantra of "move fast and break things" is everywhere. We call it "velocity": where you are going does not matter as much as how fast you're going there. At one of the many keynotes, Abby Kearns from the Cloud Foundry Foundation boasted about how Home Depot, in trying to sell more hammers than Amazon, is now deploying code to production multiple times a day. I am still unclear as to whether this made Home Depot actually sell more hammers, or if it's something that we should even care about in the first place. Shouldn't we converge on selling fewer hammers? Making them more solid and reliable, so that they are passed down through generations instead of breaking and having to be replaced all the time?

[Image: slide from Kearns's keynote showing a woman with perfect nail polish considering a selection of paint colors, with the Home Depot logo and stats about 'speed' in their deployments. Caption: Home Depot ecstasy]

We're solving a problem that wasn't there, in some new absurd faith that code deployments will naturally make people happier, by making sure Home Depot sells more hammers. And that's after telling us that Cloud Foundry helped the USAF save $600M by moving their databases to the cloud. No one seems bothered by the idea that the most powerful military in existence would move state secrets into a private cloud, out of the control of any government. It's the name of the game at KubeCon.

[Image: a jet fighter flying over clouds, with the USAF logo and stats about the cost savings from their move to the cloud. Caption: USAF saves (money)]

In his keynote, Alexis Richardson, CEO of Weaveworks, presented the toaster project as an example of what not to do. "He did not use any sourced components, everything was built from scratch, by hand", obviously missing the fact that toasters are deliberately not built from reusable parts, as part of their planned-obsolescence design. The goal of the toaster experiment is also to show how fragile our civilization has become precisely because we depend on layers upon layers of parts. In this totalitarian view of the world, people are also "reusable" or, in this case, "disposable" components. Not just the white dudes in California, but also the workers outsourced out of the USA decades ago; it depends on precious metals and the miners of Africa, on the specialized labour and intricate knowledge of the factory workers in Asia, and on the flooded forests of the first nations powering this terrifying surveillance machine.

Privilege

[Image: the Toaster Project book, which shows a molten toaster that looks like it came out of an H.P. Lovecraft novel. Caption: "Left to his own devices he couldn't build a toaster. He could just about make a sandwich and that was it." -- Mostly Harmless, Douglas Adams, 1992]

Staying in a hotel room for a week, all expenses paid, certainly puts things in perspective. Rarely have I felt more privileged in my entire life: someone else makes my food, makes my bed, and magically cleans up the toilet when I'm gone. For me, this is extraordinary, but for many people at KubeCon, it's routine: traveling is part of the rock-star agenda of this community. People get used to being served, both directly in their day-to-day lives, and also through the complex supply chain of the modern technology that is destroying the planet.

[Image: an empty shipping container, probably made of cardboard, hanging over the IBM booth. Caption: Nothing is like corporate nothing.]

The nice little boxes and containers we call the cloud all abstract this away from us, and those dependencies are actively encouraged in the community. We like containers here, and their image is ubiquitous. We acknowledge that a single person cannot run a Kube shop, because the knowledge required is too broad for any one person to handle. While there are interesting collaborative and social ideas in that approach, I am deeply skeptical of its impact on civilization in the long run. We have already created systems so complex that we don't truly know who hacked the Trump election, or how. Many feel it was hacked, but it's really just a hunch: there were bots, maybe they were Russian, or maybe from Cambridge? The DNC emails: was that really WikiLeaks? Who knows! Never mind failing closed or open: the system has become so complex that we don't even know how we fail when we do. Even those in the highest positions of power seem unable to protect themselves; politics seems to have become a game of Russian roulette: we cock the bot, roll the secret algorithm, and see which dictator will shoot out.

Ethics

All this is to build a new Skynet; not this one or that one, those already exist. I was able to joke pleasantly about the AI takeover during breakfast with a random stranger without raising so much as an eyebrow: we know it will happen, oh well. I've skipped that track in my attendance, but multiple talks at KubeCon are about AI, TensorFlow (it's open source!), self-driving cars, and removing humans from the equation as much as possible, as a general principle. Kubernetes is often shortened to "Kube", which I always think of as a reference to the Borg's almighty ship in Star Trek, the "cube". This might actually make sense, given that Kubernetes is an open source version of Google's internal software incidentally called... Borg. To make such fleeting, tongue-in-cheek references to a totalitarian civilization is not harmless: it makes more acceptable the notion that AI domination is inescapable and that resistance truly is futile, the ultimate neo-colonial scheme.

[Image: Captain Jean-Luc Picard, played by Patrick Stewart, assimilated by the Borg as 'Locutus'. Caption: "We are the Borg. Your biological and technological distinctiveness will be added to our own. Resistance is futile."]

The "hackers" of our age are building this machine with conscious knowledge of the social and ethical implications of their work. At best, people admit to not knowing what they really are. In the worse case scenario, the AI apocalypse will bring massive unemployment and a collapse of the industrial civilization, to which Silicon Valley executives are responding by buying bunkers to survive the eventual roaming gangs of revolted (and now armed) teachers and young students coming for revenge.

Only the most privileged people in society could imagine such a scenario and actually opt out of society as a whole. Even the robber barons of the 19th century knew they couldn't survive the coming revolution: Andrew Carnegie built libraries after creating the steel empire that drove much of US industrialization near the end of that century, and John D. Rockefeller subsidized education, research and science. This is not because they were humanists: you do not become an oil tycoon by tending to the poor. Rockefeller said that "the growth of a large business is merely a survival of the fittest", a social darwinist approach he gladly applied to society as a whole.

But the 70's rebel beat offspring, the children of the cult of Job, do not seem to have the depth of analysis to understand what's coming for them. They want to "hack the system" not for everyone, but for themselves. Early on, we have learned to be selfish and self-driven: repressed as nerds and rejected in the schools, we swore vengeance on the bullies of the world, and boy are we getting our revenge. The bullied have become the bullies, and it's not small boys in schools we're bullying, it is entire states, with which companies are now negotiating as equals.

The fraud

[Image: a t-shirt from the Cloud Foundry booth that reads 'Freedom to create'. Caption: ...but what are you creating exactly?]

And that is the ultimate fraud: to make the world believe we are harmless little boys, so repressed that we can't communicate properly. We're so sorry we're awkward; it's because we're all somewhat on the autism spectrum. Isn't that, after all, a convenient affliction for people who would not dare to confront the oppression they are creating? It's too easy to hide behind such a real and serious condition, one that does affect people in our community, and also truly autistic people who simply cannot make it in the fast-moving world the magical rain man is creating. But the real con is hacking power and political control away from traditional institutions, seen as too slow-moving to really accomplish the "change" that is "needed". We are creating an inextricable technocracy that no one will understand, not even us "experts". Instead of serving the people, the machine is at the mercy of markets and powerful oligarchs.

A recurring ritual at Kubernetes conferences is the KubeCon chant, where Kelsey Hightower reluctantly engages the crowd:

When I say 'Kube!', you say 'Con!'

'Kube!' 'Con!' 'Kube!' 'Con!' 'Kube!' 'Con!'

Cube Con indeed...

I wish I had some wise parting thoughts about where to go from here or how to change this. The tide seems so strong that all I can do is observe and tell stories. My hope is that the people who need to hear this will take it the right way, but I somehow doubt it. With luck, it might just become irrelevant and everything will fix itself, but somehow I fear things will get worse before they get better.

Krebs on Security: Why Is Your Location Data No Longer Private?

The past month has seen one blockbuster revelation after another about how our mobile phone and broadband providers have been leaking highly sensitive customer information, including real-time location data and customer account details. In the wake of these consumer privacy debacles, many are left wondering who’s responsible for policing these industries? How exactly did we get to this point? What prospects are there for changes to address this national privacy crisis at the legislative and regulatory levels? These are some of the questions we’ll explore in this article.

In 2015, the Federal Communications Commission under the Obama Administration reclassified broadband Internet companies as telecommunications providers, which gave the agency authority to regulate broadband providers the same way as telephone companies.

The FCC also came up with so-called “net neutrality” rules designed to prohibit Internet providers from blocking or slowing down traffic, or from offering “fast lane” access to companies willing to pay extra for certain content or for higher quality service.

In mid-2016, the FCC adopted new privacy rules for all Internet providers that would have required providers to seek opt-in permission from customers before collecting, storing, sharing and selling anything that might be considered sensitive — including Web browsing, application usage and location information, as well as financial and health data.

But the Obama administration's new FCC privacy rules didn't become final until December 2016, a month after then-President-elect Trump was welcomed into office by a Republican-controlled House and Senate.

Congress still had 90 legislative days (when lawmakers are physically in session) to pass a resolution killing the privacy regulations, and on March 23, 2017 the Senate voted 50-48 to repeal them. Approval of the repeal in the House passed quickly thereafter, and President Trump officially signed it on April 3, 2017.

In an op-ed published in The Washington Post, Ajit Pai — a former Verizon lawyer and President Trump’s pick to lead the FCC — said “despite hyperventilating headlines, Internet service providers have never planned to sell your individual browsing history to third parties.”

FCC Commissioner Ajit Pai.

“That’s simply not how online advertising works,” Pai wrote. “And doing so would violate ISPs’ privacy promises. Second, Congress’s decision last week didn’t remove existing privacy protections; it simply cleared the way for us to work together to reinstate a rational and effective system for protecting consumer privacy.”

Sen. Bill Nelson (D-Fla.) came to a different conclusion, predicting that the repeal of the FCC privacy rules would allow broadband providers to collect and sell a “gold mine of data” about customers.

Sky Croeser: ICA18 Day 2: narrating voice, digital media and the body, feminist theorisation beyond western cultures, collective memory, and voices of freedom and constraint

Narrating Voice and Building Self on Digital and Social Media
'This is Lebanon': Narrating Migrant Labor to Resistive Public. Rayya El Zein, University of Pennsylvania. This research looks at the calling into being of an ideal political subject through social media. 'This is Lebanon' is a platform run by a Nepalese immigrant, Dipendra Upetry, where migrant workers have been sharing stories of labour abuses. The Lebanese system for migrant work is particularly conducive to labour abuses, as workers often have a 'sponsor' who they may also live with. El Zein is looking at how the voices of labourers affect the political imagination around what it means to be Lebanese. 'This is Lebanon' inverts a popular tourism hashtag, #thisislebanon, and when Lebanese citizens complain that "this isn't Lebanon", Upetry invites them to change working conditions if they want that to be true. The Kafa campaign, run by a Lebanese NGO in coordination with the International Labour Organization, shared a series of ads about a young couple trying to decide what the right thing to do is regarding the person doing domestic work for them, imagining change as coming from educated middle-class people who just need guidance. These are ideologically-inflected ideas of politics that position the individual as the mechanism of change.

Instagramming Persian Identity: Ritual Identity Negotiations of Iranians and Persians in/out of Iran. Samira Rajabi, University of Pennsylvania. This research came out of trying to understand why some people refer to themselves as Persians, and others as Iranians. Rajabi looked at how identity is being negotiated on social media, particularly Instagram, which led to exploring, in particular, the ways in which identity is written on women's bodies. Many women were part of the Iranian revolution, but they were the first losers after the revolution. Trauma has had a huge impact on how identity is negotiated, and tactical media can be one way to respond to the deep symbolic trauma many people from Iran have experienced.

Hijacking Religion on Facebook. Mona Abdel-Fadil, University of Oslo. This focuses on the Norwegian Cross-Case – a newsreader tried to wear a cross while reading the news, and was told she was in breach of guidelines. There's a Facebook group: "Yes to wearing the cross whenever I choose". This is a good case study for understanding identity politics, the role of social media users in amplifying conflicts about religion, modes of performing conflict (and understanding who they are performing to), and the politics of affect. The Facebook group is dominated by conservative Christians who are worried about losing Norway's Christian heritage; nationalists who see Norwegian identity as inextricably tied to Christianity; humanists (predominantly women) who try to bridge differences; fortified secularists, who argue ferociously, particularly against the nationalists; and ardent atheists (predominantly men), who tend to fan the flames by abusing religious people, then step back. The group is shaped by master narratives that require engagement: that wearing the cross is an act of defiance (often against Muslim attack); that Norwegian cultural heritage is under threat (with compliance from politicians). There's an intensification and amplification of conflict, including distorting and adding to the original conflict. We need to understand that for some people this is entertainment – an attraction to the tension in the group, and how easy it is to inflame emotions.

Discussion session: Lilie Chouliaraki, in responding, noted the role of trauma and victimhood, inviting speakers to reflect on the role of victimhood and self-victimhood in constituting subjects and identities here. Rajabi noted that trauma requires a different level of response – the stakes are different. But trauma is medicalised, we treat it as something to be dealt with individually rather than politically. Abdel-Fadil is trying to work out how to write from a place of vulnerability about this: how to take the sense of suffering expressed by these people who feel like Christianity or Norwegian identity is under threat seriously, while not necessarily accepting that they are actually victims.

Digital Media and the Body

Drawing from Abigail Selzer King

Towards a theory of projectilic media: Notes on Islamic State's Deployment of Fire. Marwan M. Kraidy, Annenberg, University of Pennsylvania. Kraidy asks why ISIS uses the symbolism of fire so frequently. There's a distinction between digital images; operative images (for example, drone footage) that are part of an operation; projectilic images (images as weapons); and prophylactic images (which build a sense of safety and security). In ISIS's symbolism, fire becomes a metaphor for sudden birth and sudden death, for the war machine, and for flames of justice. Speed is essential to the war machine, and to fire. A one-hour ISIS video would have about half an hour of projectilic sequences. ISIS uses a torch as a metaphor for the war machine, and the hearth as a metaphor for the utopian homeland. Fire activates new connections between words and images. Immolation confuses the customary chronology (for example, of beheading videos).

You Have Been Tagged: Incanting Names and Incarnating Bodies on Social Media. Paul Frosh, Hebrew University of Jerusalem. Tagging has become a prevalent technique for circulating images on social media, and serves various purposes for social media platforms (for example, adding more data). Naming and figuration are linked to the life of the self. Names aren't just linguistic designators – they're also signifiers of power. Names perform the entanglement of the social subject. Tagging requires a systematic circulation of the name (you must join the platform). Tagging interpellates us as subjects of a particular system, and revitalises the ancient magical power of action at a distance through naming. Tagging is a magical act of germination. Being tagged carries a social weight, prompting us to respond. Tagging sends social signals through others' images, as opposed to selfies. Tagging goes against the grain of networked selfhood in digital culture, re-centring the body. Tagging is the fleshing out of informational networks.

Selfies as Testimonies of the Flesh. Lilie Chouliaraki, London School of Economics and Political Science. Aesthetic corporeality becomes important when we think about vulnerable bodies. Digital testimonies produced in conflict zones are elements of a broader landscape of violence and suffering. How does the selfie mediate the faces of refugees? What does the remediation of these faces in Western news sites tell us? Three types of images: refugees photographed as they take selfies; refugee selfies with global leaders; and celebrities taking photos as if they were refugees. Chouliaraki notes that refugees taking selfies in Lesbos are celebrating not just having arrived, but also having survived the deadliest sea crossing. Refugee selfies are remediated through a series of disembodiments; their faces are, at best, an absent presence, or, at worst, fully absent.

Feminist Theorizations Beyond Western Cultures
Orientalism, Gender, and Media Representation: A Textual Analysis of Afghan Women in US, Afghan, and Chinese Media. Azeta Hatef, Pennsylvania State University and Luwei Rose Luqui, Hong Kong Baptist University. This study looks at media representations of women in Afghanistan, thinking about the purposes these images serve in relation to the war on Afghanistan. Media coverage in China is controlled by the government, but soft news is offered a bit more leeway than hard news outlets. Nevertheless, in China mainstream media conveys the same theme: Afghan women oppressed by brown men. Both US and Chinese media portray Afghanistan as backwards, with women's freedoms entirely limited. While violence against women in Afghanistan is worthy of attention, these media representations operate to amplify distinctions between "us" and "them", justifying intervention (and failing to recognise the violence done by that intervention).

Production of subject of politics through social media: a practice of Iranian women activists. Gilda Seddighi, University of Bergen. This research looked at an Iranian online network of mourning mothers, drawing on Butler's conceptualization of politicization. There was a group, "Supporters of Mourning Mothers Harstad", composed mainly of asylum seekers, connected by Facebook and other mechanisms. Motherhood can be seen here as a source of recognition of political subjects across national borders. The notion of motherhood was expanded to include children beyond their own. Nevertheless, many women interviewed spoke of their activism as apolitical, and belonging to a particular nation-state was taken for granted.

Subject Transformations: New Media, New Feminist Discourses. Nithila Kanagasabai, Tata Institute of Social Sciences. This research attempts to look at new strands of feminism in India, particularly in smaller towns in Tamil Nadu. Work from urban areas has tended to position Women's Studies as urban, upper-caste, middle-class, English-speaking, online, and speaking for marginalised groups. Students who Kanagasabai interviewed drew on 'the feminist canon' (for example, Virginia Woolf, Shulamith Firestone), but also on little magazines – small local literary magazines in regional dialects of Tamil, which previously circulated predominantly among unemployed, educated men. These magazines have shifted to allow women, Dalits, and people from scheduled tribes to express themselves. Little magazines open space for subjectivity, offering a critique of seemingly universal social norms, including casteism and gender roles. Students interviewed mention these magazines alongside sources like JSTOR and Economic and Political Weekly, which speaks to the development of new methodologies. Publishing in little magazines (as opposed to mainstream feminist journals) is seen not just as convenient, but also as a political decision. Moving online did not mean that little magazines transcended the local or temporal – readership remains limited and local, but they are still important spaces. Following feminists online has led to a deeper everyday engagement with feminist literature. Lurking needs to be viewed within the framework of collaborative learning, and engagement can happen during key moments. Most students didn't relate to the label of feminism (which they felt required a particular kind of academic competence), but instead related to women's studies.

Collective Identities and Memories
Collective Memory Matters: Mobilizing Activist Memory in Autonomous Media. Kamilla Petrick, Sandra Jeppesen, Ellen Craig, Cassidy Croft, & Sharmeen Khan, Lakehead University. Unpaid labour within collectives means that institutional memory isn’t actively shared, but instead embodied within long-term members (who may leave).

By Király-Seth – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=42295509

Emergent Voices in Material Memories: Conceptualizing Public Voices of Segregated Memories in Detroit. Scott Mitchell, Wayne State University. The 'Eight Mile wall' remains a visible reminder of the history of segregation in Detroit, also serving as a space of education and hope. The wall was constructed by developers to raise property values for the White area by separating it from Black communities. Grassroots efforts to add a mural have shifted its meaning.

 

Repertoires, Identities, and Issues of Collective Actions of the Candlelight Movements in S. Korea. Young-Gil Chae, Hankuk University of Foreign Studies; Inho Cho, Hanyang University; and Jaehee Cho, Chung-Ang University.

The Mnemonic Black Hole at Guantánamo: Memory and Counter-Memory Digital Practices on Twitter. Muira McCammon, Annenberg School for Communication at the University of Pennsylvania. Guantánamo is often left off maps: Johan Steyn has called it a "legal black hole". McCammon tried to go to the library at Guantánamo for detainees – being unsuccessful, she tried following the Joint Task Force for Guantánamo on Twitter. McCammon asks what some of the mnemonic strategies used on the Twitter feed are. Only images of higher-up command and celebrities are posted. Traces of Guantánamo as a 'space of exception' have been deleted (for example, tweets noting the lack of Internet connection). The official 'memory maker', when posting on Twitter, can't escape others' memory-making (for example, responses to an official tweet about sexual harassment training at Guantánamo which pointed out the tremendous irony). When studying these issues, there are few systematic ways to track and trace digital military memory makers.

The Voice of Silence: Practices of Participation Among East Jerusalem Palestinians. Maya de Vries, Hebrew University of Jerusalem. This research focuses on participation avoidance, for example the boycotting of Facebook over the ways in which it censors Palestinian content, as an active form of resistance. de Vries notes the complexity of power relations in working with Palestinians in East Jerusalem. Interviewees choose not to engage in anything political on Facebook, knowing that it is monitored by the Israeli state. This state monitoring affects their choices around Facebook. There is also kinship monitoring – knowing that family are reading. Self-monitoring also plays a role. One interviewee notes that when she had to put her location down, there was no option for “East Jerusalem, Palestine”. These layers of monitoring mean that Palestinians negotiate their engagement with Facebook cautiously, frequently choosing non-participation.

Voices of Freedom, Voices of Constraint: Race, Citizenship and Public Memory – Then and Now
Selected Research: “The Fire Next Time in the Civil Sphere: Literary Journalism and Justice in America 1963.” Kathy Roberts Forde, Associate Professor, Journalism Department, University of Massachusetts-Amherst. After the end of slavery, new systems were put in place to control Black people, and exploit their labour. Black resistance continued, building a vibrant Black public sphere and paving the way for the civil rights movement. James Baldwin wrote that the only thing that White people had that Black people needed was power. White people should not be a model for how to live. White people destroyed, and were destroying, thousands of lives, and did not know it, and did not want to know it. Baldwin’s writing was hugely influential.

Selected Research: Newspaper Wars: Civil Rights and White Resistance in South Carolina, 1935-1965, 2017. Sid Bedingfield, Assistant Professor, Hubbard School of Journalism and Mass Communication, University of Minnesota-Twin Cities. Bedingfield talks about NAACP leader Roy Wilkins’ 1964 opinion piece complaining about Black youth crime. This had parallels with segregationists’ narratives, and Wilkins had cordial communications with some segregationists. These narratives stripped away historical context and ongoing oppression when covering Black protests and expressions of anger and frustration.

Selected Research: Framing the Black Panthers: The Spectacular Rise of a Black Power Icon, 2017, 2nd edition; Rebel Media: Adventures in the History of the Black Public Sphere, In Progress; Jane Rhodes, Professor and Department Head, African American Studies, University of Illinois at Chicago. Almost everything Rhodes finds in the discourses of the 1960s is still relevant today in discourses of nationalism and race. Stuart Hall argues that each surge of social anxiety finds a temporary respite in the projection of fears onto compellingly anxiety-laden themes – like moral panics about Black people and other racialised others. US coverage of Britain in the 1960s tended to frame Britain as having issues with race, but an unwillingness to deal with it. Meanwhile, the British press seemed to have almost a lurid fascination with racial violence in the US (with an undercurrent of fear for white safety in the US, and subsequently in Britain). Deep-seated anxieties around race and social change aren't subtle. As Enoch Powell rose to prominence, the media seemed to be tangled in debates about whether US or UK racism was worse.

Planet Debian: Steve Kemp: On collecting metrics

Here are some brief notes about metric-collection, for my own reference.

Collecting server and service metrics is a good thing because it lets you spot degrading performance, and see the effect of any improvements you've made.

Of course it is hard to know what metrics you might need in advance, so the common approach is to measure everything, and the most common way to do that is via collectd.

To collect/store metrics the most common approach is to use carbon and graphite-web. I tend to avoid that as being a little more heavyweight than I'd prefer. Instead I'm all about the modern alternatives:

  • Collect metrics via go-carbon
    • This will listen on :2003 and write metrics beneath /srv/metrics
  • Export the metrics via carbonapi
    • This will talk to the go-carbon instance and export the metrics in a fashion compatible with what carbon would have done.
  • Finally you can view your metrics via grafana
    • This lets you make pretty graphs & dashboards.

Configuring all this is pretty simple. Install go-carbon, and give it a path to write data to (/srv/metrics in my world). Enable the receiver on :2003. Enable the carbonserver and make it bind to 127.0.0.1:8888.
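For illustration, here is a minimal sketch of what that description maps onto in go-carbon's TOML-style go-carbon.conf. The section and key names below are from a stock config as I remember it, so treat them as assumptions and check them against the file shipped with your version:

  [whisper]
  # Where the metric data gets written (the path mentioned above).
  data-dir = "/srv/metrics"
  enabled = true

  [tcp]
  # Plain-text carbon receiver for incoming metrics.
  listen = ":2003"
  enabled = true

  [carbonserver]
  # Serves the stored metrics so carbonapi can query them.
  listen = "127.0.0.1:8888"
  enabled = true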

Now configure the carbonapi with the backend of the server above:

  # Listen address, should always include hostname or ip address and a port.
  listen: "localhost:8080"

  # "http://host:port" array of instances of carbonserver stores
  # This is the *ONLY* config element in this section that MUST be specified.
  backends:
    - "http://127.0.0.1:8888"

And finally you can add a data source to grafana pointing at 127.0.0.1:8080, and graph away.

The only part that I'm disliking at the moment is the sheer size of collectd. Getting metrics from your servers (uptime, I/O performance, etc) is very useful, but it feels like installing 10MB of software to do that is a bit excessive.

I'm sure there must be more lightweight systems out there for collecting "everything". On the other hand I've added metrics exporting to my puppet-master and similar tools very easily, so I have lightweight support for that in the tools themselves.

I have had a good look at metricsd which is exactly the kind of tool I was looking for, but I've not searched too far afield for other alternatives and choices just yet.

I should write more about application-specific metrics in the future, because I've quizzed a few people recently:

  • What's the average response-time of your application? What's the effectiveness of your (gzip) compression?
    • You don't know?
  • What was the quietest time over the past 24 hours for your server?
    • You don't know?
  • What proportion of your incoming HTTP requests were plain HTTP rather than HTTPS?
    • Do you monitor HTTP-status-codes? Can you see how many times people were served redirects to the SSL version of your site? Will using HSTS save you bandwidth, and if so, how much?

Fun times. (Terrible pun is terrible, but I was talking to a guy called Tim. So I could have written "Fun Tims".)

Planet Debian: Sylvain Beucler: Testing GNU FreeDink in your browser

Ever wanted to try this weird GNU FreeDink game, but never had the patience to install it?
Today, you can play it with a single click :)

Play GNU FreeDink

This is a first version that can be polished further but it works quite well.
This is the original C/C++/SDL2 code with a few tweaks, cross-compiled to WebAssembly (and an alternate version in asm.js) with emscripten.
Nothing brand new I know, but things are getting smoother, and WebAssembly is definitely a performance boost.

I like distributed and autonomous tools, so I'm generally not inclined to web-based solutions.
In this case however, this is a local version of the game. There's no server side. Savegames are in your browser's local storage. Even importing D-Mods (game add-ons) is performed purely locally in the in-memory virtual FS with a custom .tar.bz2 extractor cross-compiled to WebAssembly.
And you don't have to worry about all these Store policies (and Distros policies^W^W^W.

I'm interested in feedback on how well this works for you in your browsers and devices:

I'm also interested in tips on how to place LibreJS tags - this is all free JavaScript.

Planet Debian: Steinar H. Gunderson: Debian XU4 images updated

I've updated my Debian images for the ODROID XU4; the newest build was done before stretch release, and a lot of minor adjustments have happened since then.

The XU4 is fairly expensive for a single-board computer ($59 plus PSU, storage and case), and it's getting a bit long in the tooth with 32-bit and all, but it's probably still the nicest choice among the machines Hardkernel have on offer. In particular, it's fairly fast, the eMMC option is so much better than SD, and these days, you can run a mainline kernel on them instead of some 3.10 build nobody cares about anymore. (Well, in Debian's kernel, you don't get HDMI, though…) It's not nearly as widely supported as the Raspberry Pi, of course, and it doesn't have the crazy huge ecosystem, but it's definitely faster. :-)

Debian doesn't officially support the XU4, but with only a small amount of non-free bits in the bootloader, you can get an almost vanilla image; Debian U-Boot (with GRUB!), Debian kernel, and a plain image that comes out of debootstrap with only some minor awkwardness for loading the device tree. My personal one runs sid, but stretch is a good start for a server and it's easy to dist-upgrade, so I haven't bothered making sid images. I probably will make buster images at some point, though.

Enjoy!

Cryptogram: Friday Squid Blogging: Squid Comic

It's not very good, but it has a squid in it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TED: Calling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. As a bonus during the Audacious Project session, we watched an astonishing performance of “New Second Line” from Camille A. Brown and Dancers. From left: The Bail Project’s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institute; Caroline Harper of Sight Savers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from the Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers; pianist Scott Patterson; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage). Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution. That can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above, which screened the next day at Skoll. And who knows? Perhaps you’ll be a part of the program in 2019.

TED: A behind-the-scenes view of TED2018, to inspire you to apply for The Audacious Project

What’s it like to stand in the wings, preparing to give your TED Talk and share a big idea to create ripples of change? This video, captured at TED2018, gives a taste of that. It follows the first speakers of The Audacious Project, TED’s new initiative to fund big ideas for global change. These speakers had a lot on the line as they gave their talks — in addition to a packed house at the conference, their talks were viewed around the world via Facebook Watch. And they all crushed it, sharing their ideas with unique power. (Want goosebumps? Watch Robin Steinberg’s talk about ending the injustice of the US bail system.)

Have an idea for the social good that feels in the same spirit? Apply to be a part of The Audacious Project next year. Applications are open now through June 10, 2018 — and the questionnaire is intentionally short to encourage you to apply. So go for it. Share your biggest, wildest vision for how to tackle one of the world’s most pressing problems.

Apply for The Audacious Project >>

Cryptogram: Security and Human Behavior (SHB 2018)

I'm at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions -- all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year's program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I'll be hosting the event at Harvard.

Planet Linux Australia: Jonathan Adamczewski: Modern C++ Randomness

This thread happened…

So I did a little digging to satisfy my own curiosity about the “modern C++” version, and have learned a few things that I didn’t know previously…

(this is a manually unrolled twitter thread that starts here, with slight modifications)

Nearly all of this I gleaned from the invaluable and . Comments about implementation refer specifically to the gcc-8.1 C++ standard library, examined using Compiler Explorer and the -E command line option.

std::random_device is a platform-specific source of entropy.

std::mt19937 is a parameterized typedef of std::mersenne_twister_engine

specifically:
std::mersenne_twister_engine<uint_fast32_t, 32, 624, 397, 31, 0x9908b0df, 11, 0xffffffff, 7, 0x9d2c5680, 15, 0xefc60000, 18, 1812433253>
(What do those numbers mean? I don’t know.)

And std::uniform_int_distribution produces uniformly distributed random numbers over a specified range, from a provided generator.

The default constructor for std::random_device takes an implementation-defined argument, with a default value.

The meaning of the argument is implementation-defined – but the type is not: std::string. (I’m not sure why a dynamically modifiable string object was the right choice to be the configuration parameter for an entropy generator.)

There are out-of-line private functions for much of this implementation of std::random_device. The constructor that calls the out-of-line init function is itself inline – so the construction and destruction of the default std::string param is also generated inline.

Also, peeking inside std::random_device, there is a union with two members:

void* _M_file, which I guess would be used to store a file handle for /dev/urandom or similar.

std::mt19937 _M_mt, which is a … parameterized std::mersenne_twister_engine object.

So it seems reasonable to me that if you can’t get entropy* from outside your program, generate your own approximation. It looks like it is possible that the entropy for the std::mersenne_twister_engine will be provided by a std::mersenne_twister_engine.

Unlike std::random_device, which has its implementation out of line, std::mersenne_twister_engine‘s implementation seems to be all inline. It is unclear what benefits this brings, but it results in a few hundred additional instructions generated.

And then there’s std::uniform_int_distribution, which seems mostly unsurprising. It is again fully inline, which (from a cursory eyeballing) may allow a sufficiently insightful compiler to avoid a couple of branches and function calls.

The code that got me started on this was presented in jest – but (std::random_device + std::mt19937 + std::uniform_int_distribution) is a commonly recommended pattern for generating random numbers using these modern C++ library features.
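For reference, a minimal sketch of that pattern looks something like this (the dice-roll range and the variable names are mine, purely for illustration):

  #include <iostream>
  #include <random>

  int main()
  {
      // Platform-specific entropy source, used here only to seed the engine.
      std::random_device rd;

      // Mersenne Twister engine, seeded once from the random_device.
      std::mt19937 gen(rd());

      // Uniformly distributed integers in the closed range [1, 6].
      std::uniform_int_distribution<int> dist(1, 6);

      for (int i = 0; i < 5; ++i)
          std::cout << dist(gen) << '\n';
  }

Each of those three names drags in the machinery described above, which is where the extra codegen comes from.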

My takeaways:
std::random_device is potentially very expensive to use – and doesn’t provide strong cross-platform guarantees about the randomness it provides. It is configured with an std::string – the meaning of which is platform dependent. I am not compelled to use this type.

std::mt19937 adds a sizeable chunk of codegen via its inline implementation – and there are better options than Mersenne Twister.

Bottom line: I’m probably going to stick with rand(), and if I need something a little fancier,  or one of the other suggestions provided as replies to the twitter thread.

Addition: the code I was able to gather, representing some relevant parts

Sky Croeser: ICA Day 1: Kurdish transnational media, racism online, digital labour, and public scholarship

My rough and very incomplete notes from the first day of ICA. There were a bunch of interesting points that I haven’t noted because I was distracted or tired or too busy listening, and great papers that I sadly missed. I mostly use these notes to follow up on work later, but if they’re useful to you too, that’s great!

Understanding Kurdish Media and Communications: Space, Place and Materiality
Theaters of Inhibition and Cinemas of Strategy: Censorship, Space, and Struggle at a Film Festival in Turkey. Josh Carney, American University of Beirut, spoke about Bakur (North), a film about the everyday life of PKK guerrillas. When the Turkish government banned screenings of Bakur, people met at the theatres anyway to discuss the censorship. The directors of Bakur will go on trial in a few days for ‘terrorist propaganda’. Struggles over censorship were tied to struggles over the city space of Istanbul, perhaps in response to the Turkish government’s attempts to erase ideas and spaces that it finds disagreeable. The government wanted to erase Bakur because it was a testament to the peace process, and to the government’s withdrawal from it. This censorship can be seen as an attempt to erase the promise and possibility of peace.

Cinematic Spaces of Solitude, Exile, and Resistance: Telling Kurdish Stories from Norway, Iran, and Turkey. Suncem Koçer, Kadir Has University, spoke on Kurdish filmmaking as a transnational platform for identity politics. Bahman Ghobadi talks about Kurds as a people on the move, and says that cinema as the art of movement is therefore the most suitable medium for documenting Kurdish stories.

Infrastructures, Colonialism and Struggle. Burce Celik, Loughborough University, argues that Kurdish transnational media is still embedded in historical, political, and territorial contexts. Technical and economic concerns, as well as national borders, also shape networks. State interventions can take place at multiple levels. For example, while the Turkish government may not be able to stop television transmissions from Europe, there are reports of police smashing satellite antennas in Kurdish villages. While there are no country-wide Internet shut-downs, there have been region-wide shut-downs in Kurdish provinces of Turkey. We need to consider the materiality of media infrastructures.

Questions: I asked if there were attempts to shift film screenings and other spaces that had been shut down online. Carney noted that film-makers were very resistant to doing this, as film screenings and movie festivals were seen as important. Bakur was leaked online, and the directors asked that people didn’t share or watch it. Koçer affirmed this, and said that censorship in a way also served a generative purpose for film-makers.

Racism in Digital Media Space

Racism in the Hybrid Media System: Analyzing the Finnish ‘Immigration Debate’. Gavan Titley, University of Helsinki; Mervi Pantti, University of Helsinki; and Kaarina Nikunen, University of Tampere. Pantti opens by noting that even naming racism as racism is often contentious. The ‘Hybra’ project looks at understandings of racism shaped and contested in the interactive everyday cultures of digital media. This paper looks particularly at Suomi24, ‘Finland 24’, one of the largest non-English-language commenting sites online. Anti-racist activism in the 1990s helped to fix racism in the public imagination as a result of movements of people, rather than deeper structures. ‘Racism’ is used broadly in Finnish public discourse to mean ‘discrimination’ (for example, ‘obesity racism’), which removes it from its particular context. Conservatives talk about “opinion racism”: claims that journalists and others with a ‘multicultural agenda’ are intolerant of other viewpoints. Politically, it’s very difficult to mobilise in terms of racism and anti-racism because of the ways in which this language works.

More Than Meets the Eye: Understanding Networks of Images in Controversies Around Racism on Social Media. Tim Highfield, Digital Media Research Centre, Queensland University of Technology, and Ariadna Matamoros-Fernandez, Queensland University of Technology. This research, focused on everyday visual representations of racism and counter-racism practices, comes out of the wider literature on racism online, which has largely focused on text. It draws on Matamoros-Fernandez’s conceptual work around platform racism. This article looks at the online responses to Adam Goodes’ war cry, many of which used images as a way to push the boundaries for racist viewpoints (often via homophobia). Indigenous social media users frequently added their own images to push back against the racism expressed against Goodes. Mainstream media, though, frequently reinforced hegemonic discourses of racism, rather than giving space to Indigenous voices. There were salient practices on Twitter that are interesting when thinking about platform racism: visual call-outs of racism, many of which were a way of performing distance from Australian racism, and which had the effect of amplifying racism. Rather than performing ‘white solidarity’ by amplifying racism, it would be useful to do more to share Indigenous voices and critiques of racism, and link this particular incident to broader structures of racism in Australian society. Visual cultures are an opportunity to understand covert and everyday racism on social media platforms. Even with changes introduced by various platforms to combat racism (after user pressure), there is a lack of consistency and transparency in responses to platformed racism.

Online Hate Speech: Genealogies, Tensions and Contentions. Eugenia Siapera, Dublin City University, Paloma Viejo, Dublin City University and Elena Moreo, Dublin City University.

Theorising Online Racism: The Stream, Affect and Power Laws. Sanjay Sharma, Brunel University. Racism isn’t an individual act, it’s embedded in material techno-social relations. Ambient racism creates an atmosphere of background hostility. Microaggressions may seem isolated and minor, but they can be all-pervasive.

Working it Out: Emergent Forms of Labor in the Global Digital Economy
Nothing left to lose: bureaucrats in Googleland: Vicki Mayer, Tulane. Stories about Google’s centrality to the economy are highly mediated, even for those working within the organisation. Bureaucrats aren’t meant to sell Google, but they have been pushed to ‘samenwerking’ (planned collaboration) to ‘solve problems’ individually with little structural support. Interviewees used the word “innovative” most often to describe how workers were trying to do more varied tasks with less time and money, while also trying to publicise their achievements. New companies come in all the time saying that they’ll create thousands of jobs, but with limited real results.

Developing a Farmworker Low-Power Radio Station in Southern California. Carlos Jimenez, University of Denver. Local Indigenous workers speak Mixteco and Zapotec (sp?) (which are very different from English and Spanish), and listen to Chilena songs – no radio stations in Oxnard catered to these languages or musical tastes. The Mixteco Indigena Community Organizing Project partnered with the community. When an application was made for Radio Indígena for a relatively low-powered antenna, another station fifty miles away, KDB 93.7 FM, registered a complaint. At first Radio Indígena organisers called to ask them to remove their complaint, but they refused until they received a letter from farmworkers in the area. After a while, the radio community wanted to try shifting towards online transmissions rather than through the radio antenna. But they found that farmworkers’ typical data plans would stop them from listening in. The cost of new media technologies places a greater burden on individual listeners, rather than on the broadcaster.

Production, moderation, representation: three ways of seeing women of color labor in digital culture, Lisa Nakamura, University of Michigan. The lower you go in the chain of production, the more people who aren’t white men you see. It is useful to ask whose labour we misattribute to white men, or even algorithms, on digital platforms. US digital work has been both outsourced and insourced, including to women on reservations. Fairchild ‘invaded’ reservations, and was one of the largest employers in the Navajo Nation until resistance to firings from the American Indian Movement, and unionisation, led to them leaving. The plant there had produced “high reliability” components, which needed very low failure rates. Employing Navajo workers allowed Fairchild to pay less than the minimum wage. Workers were told that they were building parts for televisions, radios, calculators, and so on (with military applications not mentioned). In a current analogue, moderation work on sites like Facebook is outsourced, sometimes to volunteers. We might also look at the ways in which people like Alexis Ohanian (of Reddit) took credit for the work of teenager Rayouf Alhumedhi in the creation of a hijab emoji.

Riot Practices: Immaterial Labor and the Prison-Industrial Complex. Li Cornfeld, Amherst College. There’s a ‘mock prison riot’ at the former state penitentiary in Moundsville yearly, which is a combination of a trade show and a training exercise for ‘correctional officers’. This isn’t what we think of when we consider ‘tech events’, but we should take its claims to be a tech event seriously. It’s a private event, with global attendees. This is one of the ways in which the US exports its technologies of control and norms. It’s also a space to incorporate participants in the tech development process (for example, adding cords to radios for places where batteries are scarce). Technologies of control aren’t just weapons; they include phones, wristbands, and other tracking technologies – many of these are marketed as being not just for prisons, but also for other settings, such as hospitals.

Moving Broadband From Sea to Land: Internet Infrastructure and Labor in Tanzania. Lisa Parks, Massachusetts Institute of Technology. Parks wanted to understand how the internet moves from sea to land, and what kinds of digital labor exist in Tanzania to help carry out these operations. She spoke to people who are both formal and informal IT workers, often carrying out risky forms of labour to make the internet more widely available. Drawing on Vicki Mayer, and Lobato and Thomas’ The Informal Media Economy. IT ‘development’ projects often lead to unused infrastructure – technology that’s in place, but left unpowered, disconnected, in need of assembly or repair. In Bunda, there are people working in vital jobs like repairing or charging phones. The cost of charging phones is scaled by income. Mobile phone repair workers have designed their own phone which they are going to ask Foxconn to manufacture.

Public Scholars: Engaging With Mainstream Media as Activism

This was a panel discussion, with Amy Adele Hasinoff, University of Colorado Denver; Charlton McIlwain, New York University; Jean Burgess, Queensland University of Technology; Victor W. Pickard and Maria Repnikova, Georgia State University.
The benefits of media engagement aren’t always direct and obvious – sometimes, for example, they connect unexpected groups and help build alliances. Framing material for a public audience with interventions from editors can be useful in thinking about how we communicate our research, including to other academics outside our own disciplines. Speakers were unsure about the benefits of engaging in hostile spaces – are there useful ways to engage with right-wing media, for example?

There was a lot of interest in the potential issues with engaging with the media. People’s experiences with engaging have differed – some speakers had been discouraged for engaging too much, others felt it was seen as a fundamental part of their job. However, there can be a problem keeping a balance between public scholarship (including dealing with hostile responses) and more traditional academic outputs. It’s important to discriminate between ‘high value’ engagement opportunities and junk.

University support for academics under attack can vary – sometimes they’ll provide legal support, but this isn’t necessarily reliable (or publicised). You’ll often only find out what the university responses to these issues are when a problem comes up. Many of the attacks academics face when speaking publicly aren’t necessarily overt: they might include subtle red-baiting, or questioning about how your background (for example, noting Maria Repnikova’s Russian surname) impacts on your ideas.

There were suggestions for those starting out with media engagement and not yet inundated with media requests:

  • Make sure your colleagues know that you’re interested in media engagement: they should be passing on relevant media queries;
  • Actively contact media when you have research that’s relevant and important – this might involve proposing stories to journalists/editors, or tweeting at journalists.
  • Have useful research to share (especially quantitative data).

How not to get fired? You can’t avoid making any controversial statements – if the press decide to go after you, they will. But aim to have evidence to back your point up, and hopefully aim to also have solidarity networks. (I’d add: maybe join your union!)

When engaging with the media, consider the formats that work for you: text, radio, or television?

Activism, Social Justice and the Role of Contemporary Scholarship
Sasha Costanza-Chock, Massachusetts Institute of Technology. Out of the Shadows, into the Streets! was the result of hands-on, participatory media processes. There isn’t a divide between scholarship and working with social justice organisations: it makes the work more accountable to the people working on the ground, and to their needs. Work with Out for Change led Costanza-Chock to shift their theoretical framework to one of transformative media: it’s about media-making as a healing and identity-forming process.

Kevin Michael Carragee, Suffolk University, began by making a distinction between activist scholarship and scholarship on activism. The former requires establishing partnerships with organisations and movements – there are more calls for this than actual examples. Carragee talked about his work with the Media Research and Action Project. One of the lessons of MRAP is that you want to try to increase the resources available to the group you’re working with. We need to recognise activists as lay scholars. Activists and scholars don’t share the same goals, discourses, and practices – we need to remember that.

Rosemary Clark-Parsons, The Annenberg School for Communication at the University of Pennsylvania. Clark-Parsons draws on feminist standpoint theory: all knowledge is contextually situated; marginalised communities are situated in ways that give them a broader view of power relations; research on those power relations should begin with and centre marginalised communities. To do participatory research, we must position ourselves with activists, but we have to be reflexive about what solidarity means and what power relationships are involved. It’s important to ground theory in practitioners’ perspectives.

Jack Linchuan Qiu, The Chinese University of Hong Kong, talked about the problems with the ‘engagement and impact’ framework, which doesn’t consider how our work has an impact, and to what ends. We need to have hope. As academics we have the luxury of finding hope, and using our classrooms and publications to share that hope.

Chenjerai Kumanyika, Rutgers University – School of Communication and Information. This kind of research offers a corrective to some of the tendencies that exist in our field. Everything Kumanyika has done that’s had an impact has been an “irresponsible job decision”. We have to push back against the priorities of the university, which are about extending empire. We have to push back against understanding class just as an identity parameter, as opposed to a relation between struggles. We need to sneak into the university, be in but not of it.

It was a wrench leaving this final panel of the day, but I had to go meet my partner and Nonsense Baby, so sadly I left before the end.

Cory DoctorowTalking the writers’ life with the Australia Broadcasting Company’s Green Room show

Earlier this spring, while I was on my Australia/NZ tour, I sat down with Australian author Nick Earls for his Green Room show (MP3) to gossip, complain, and daydream about the writer’s life.

Planet Linux AustraliaAnthony Towns: Buying in and selling out

I figured “Someday we’ll find it: the Bitcoin connection; the coders, exchanges, and me” was too long for a title. Anyhoo, since very late February I’ve been gainfully employed in the cryptocurrency space, as a developer on Bitcoin Core at Xapo (it always sounds pretentious to shorten that to “bitcoin core developer” to me).

I mentioned this to Rusty, whose immediate response (after “Congratulations”) was “Xapo is weird”. I asked if he could name a Bitcoin company that’s not weird — turns out that’s still an open research problem. A lot of Bitcoin is my kind of weird: open source, individualism, maths, intense arguments, economics, political philosophies somewhere between techno-libertarianism and anarcho-capitalism (“ancap”, which shouldn’t be confused with the safety rating), and a general “we’re going to make the world a better place with more freedom and cleverer technology” vibe of the thing. Xapo in particular is also my kind of weird. For one, it’s founded by Argentinians who have experience with the downsides of inflation (currently sitting at 20% pa, down from 40% and up from 10%), even if that pales in comparison to Venezuela, the world’s current socialist basket case suffering from hyperinflation; and Xapo’s CEO makes what I think are pretty good points about Bitcoin improving global well-being by removing a lot of discretion from monetary policy — as opposed to doing blockchains to make finance more financey, or helping criminals and terrorists out, or just generally getting rich quick. Relatedly, Xapo (seems to me to be) much more of a global company than many cryptocurrency places, which often seem very Silicon Valley focussed (or perhaps NYC, or wherever their respective HQ is); it might be a bit self-indulgent, but I really like being surrounded by people with oddly different cultures, and at least my general impression of a lot of Silicon Valley style tech companies these days is more along the lines of “dysfunctional monoculture” than anything positive. Xapo’s tech choices also seem to be fairly good, or at least in line with my preferences (python! using bitcoin core! microservices!). Xapo is also one of pretty few companies that’s got a strong Bitcoin focus, rather than trying to support every crazy new cryptocurrency or subtoken out there: I tend to think Bitcoin’s the only cryptocurrency that really has good technical and economic fundamentals; so I like “Bitcoin maximalism” in principle, though I guess I’m hard pressed to argue it’s optimal at the business level.

For anyone who follows Bitcoin politics, Xapo might seem a strange choice — Xapo not long ago was on the losing side of the S2X conflict, and why team up with a loser instead of the winners? I don’t take that view for a couple of reasons: I didn’t ever really think doubling the blocksize (the 2X part) was a fundamentally bad idea (not least, because segwit (the S part) already does that and more under some circumstances), but rather the problem was the implementation plan of doing it in just a few months, against the advice of all the most knowledgeable developers, and having an absolutely terrible response when problems with the implementation were found. But although that was probably unavoidable considering the mandate to activate S2X within just a few months, I think the majority of the blame is rightly put on the developers doing the shoddy work, and the solution is for companies to work with developers who can say “no” convincingly, or, preferably, can say “yes, and this is how” long enough in advance that solving the problem well is actually possible. So working with any (or at least most) of the S2X companies just seems like being part of the solution to me. And in any event, I want to live in a world where different viewpoints are welcome and disagreement is okay, and finding out that you’re wrong just means you learned something new, not that you get punished and ostracised.

Likewise, you could argue that anyone who wants to really use Bitcoin should own their private keys, rather than use something like Xapo as a wallet or even a vault, and that working on Xapo is kind-of opposed to the “be your own bank” philosophy at the heart of Bitcoin. My belief is that there’s still a use for banks with Bitcoin: safely storing valuables is hard even when they’re protected by maths instead of (or as well as) locks or guns; so it still makes sense for many people to want to outsource the work of maintaining private keys, and unless you’re an IT professional, it’s probably more sensible to do that to a company that looks kind of like a bank (ie, a custodial wallet like Xapo) rather than one that looks like a software vendor (bitcoin core, electrum, etc) or a hardware vendor (ledger or trezor, eg). In that case, the key benefit that Bitcoin offers is protection from government monetary policy, and, hopefully better/cheaper access or storage of your wealth, which isn’t nothing, even if it’s not fully autonomous control over your wealth.

For the moment, there’s plenty of things to work on at Xapo: I’ve been delaying writing this until I could answer the obvious “when segwit?” question (“now!”), but there’s still more bits to do there, and obviously there are lots of neat things to do improving the app, and even more non-development things to do like dealing with other financial institutions, compliance concerns, and what not. Mostly that’s stuff I help with, but not my focus: instead, the things I’m lucky enough to get to work on are the ones that will make a difference in months/years to come, rather than the next few weeks, which gives me an excuse to keep up to date with things like lightning and Schnorr signatures and work on open source bitcoin stuff in general. It’s pretty fantastic. The biggest risk as I see it is I end up doing too much work on getting some awesome new feature or project prototyped for Xapo and end up having to maintain it, downgrading this from dream job to just a motherforking fantastic one. I mean, aside from the bigger risks like cryptocurrency turns out to be a fad, or we all die from nuclear annihilation or whatever.

I don’t really think disclosure posts are particularly necessary — it’s better to assume everyone has undisclosed interests and biases and judge what they say and do on its own merits. But in the event they are a good idea: financially, I’ve got as yet unvested stock options in Xapo which I plan on exercising and hope will be worth something someday, and some Bitcoin which I’m holding onto and hope will still be worth something some day. I expect those to be highly correlated, so anything good for one will be good for the other. Technically, I think Bitcoin is fascinating, and I’ve put a lot of work into understanding it: I’ve looked through the code, I’ve talked with a bunch of the developers, I’ve looked at a bunch of the crypto, and I’ve even done a graduate diploma in economics over the last couple of years to have some confidence in my ability to judge the economics of it (though to be fair, that wasn’t the reason I had for enrolling initially), and I think it all makes pretty good sense. I can’t say the same about other cryptocurrencies, eg Litecoin’s essentially the same software, but the economics of having a “digital silver” to Bitcoin’s “digital gold” doesn’t seem to make a lot of sense to me, and while Ethereum aims at a bunch of interesting problems and gets the attention it deserves as a result, I’m a long way from convinced it’s got the fundamentals right, and a lot of other cryptocurrency things seem to essentially be scams. Oh, perhaps I should also disclose that I don’t have access to private keys for $10 billion worth of Bitcoin; I’m happily on the open source technology side of things, not on the access to money side.

Of course, my opinions on any of that might change, and my financial interests might change to reflect my changed opinions. I don’t expect to update this blog post, and may or may not post about any new opinions I might form. Which is to say that this isn’t financial advice, I’m not a financial advisor, and if I were, I’m certainly not your financial advisor. If you still want financial advice on crypto, I think Wences’s is reasonable: take 1% of what you’re investing, stick it in Bitcoin, and ignore it for a decade. If Bitcoin goes crazy, great, you’ve doubled your money and can brag about getting in before Bitcoin went up two orders of magnitude; if it goes terrible, you’ve lost next to nothing.
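
To make the arithmetic behind that advice concrete, here is a toy calculation (the numbers are hypothetical, and this is still not financial advice): a 1% allocation that appreciates by two orders of magnitude roughly doubles the whole portfolio, while losing the slice entirely costs only that 1%.

```python
# Toy illustration of the "put 1% in and ignore it for a decade" arithmetic.
# All figures are made up; this is arithmetic, not financial advice.
portfolio = 100_000            # total amount being invested
btc_slice = 0.01 * portfolio   # the 1% allocation

# "Two orders of magnitude" scenario: the 1% slice grows 100x.
moon = (portfolio - btc_slice) + btc_slice * 100
# Worst case: the slice goes to zero.
bust = portfolio - btc_slice

print(moon / portfolio)   # 1.99 -> roughly doubled overall
print(bust / portfolio)   # 0.99 -> lost next to nothing
```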

One interesting note: the press is generally reporting Bitcoin as doing terribly this year, maintaining a value of around $7000-$9000 USD after hitting highs of up to $19000 USD mid December. That’s not fake news, but it’s a pretty short term view: for comparison, Wences’s advice linked just above from less than 12 months ago (when the price was about $2500 USD) says “I have seen a number of friends buy at “expensive” prices (say, $300+ per bitcoin)” — but that level of “expensive” is still 20 or 30 times cheaper than today. As a result, in spite of the “bad” news, I think every cryptocurrency company that’s been around for more than a few months is feeling pretty positive at the moment, and most of them are hiring, including Xapo. So if you want to work with me on Xapo’s backend team we’re looking for Python devs. But like every Bitcoin company, expect it to be a bit weird.

CryptogramDetecting Lies through Mouse Movements

Interesting research: "The detection of faked identity using unexpected questions and mouse dynamics," by Merylin Monaro, Luciano Gamberini, and Giuseppe Sartori.

Abstract: The detection of faked identities is a major problem in security. Current memory-detection techniques cannot be used as they require prior knowledge of the respondent's true identity. Here, we report a novel technique for detecting faked identities based on the use of unexpected questions that may be used to check the respondent identity without any prior autobiographical information. While truth-tellers respond automatically to unexpected questions, liars have to "build" and verify their responses. This lack of automaticity is reflected in the mouse movements used to record the responses as well as in the number of errors. Responses to unexpected questions are compared to responses to expected and control questions (i.e., questions to which a liar also must respond truthfully). Parameters that encode mouse movement were analyzed using machine learning classifiers and the results indicate that the mouse trajectories and errors on unexpected questions efficiently distinguish liars from truth-tellers. Furthermore, we showed that liars may be identified also when they are responding truthfully. Unexpected questions combined with the analysis of mouse movement may efficiently spot participants with faked identities without the need for any prior information on the examinee.
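
For readers curious what "analyzed using machine learning classifiers" can look like in practice, here is a minimal sketch of that general approach — not the authors' actual pipeline — with made-up per-response trajectory features and an off-the-shelf scikit-learn classifier:

```python
# Minimal sketch (not the paper's pipeline): classify respondents as liars or
# truth-tellers from mouse-trajectory features plus error counts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: one row per respondent, with columns such as
# [mean trajectory deviation, time to first movement, path length,
#  errors on unexpected questions]. Real studies derive many more parameters.
X = np.array([
    [0.12, 0.35, 1.8, 0],   # truth-teller-like responses
    [0.10, 0.30, 1.7, 1],
    [0.45, 0.80, 2.9, 4],   # liar-like responses: slower, more curved, more errors
    [0.50, 0.75, 3.1, 3],
])
y = np.array([0, 0, 1, 1])  # 0 = truth-teller, 1 = liar

clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=2)  # toy cross-validation on toy data
print("mean accuracy:", scores.mean())
```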

Boing Boing post.

Worse Than FailureError'd: Go Home Google News, You're Drunk

"Well, it looks like Google News was inebriated as well!" Daniel wrote.

 

"(Translation: Given names similar to Otto) One must wonder which distance measure algorithm they used to decide that 'Faseaha' is more similar to Otto than Otto," writes Peter W.

 

Andrei V. writes, "What amazing discounts for rental cars offered by Air Baltic!"

 

"I know that Amazon was trying to tell me something about my Kindle author status, but the message appears to have been lost in translation," Bob wrote.

 

"I tried to sign up for severe weather alerts and I'm 100% sure I'm actually signed up. NOT!" writes, Eric R.

 

Lorens writes, "I think the cryptocurrency bubble may have exploded. Or imploded."

 


Planet DebianThomas Lange: Mini DebConf Hamburg

Last week I attended the Mini DebConf Hamburg. I worked on new releases of dracut and rinse. Dracut is an initramfs-tools replacement which now supports early microcode loading. Rinse is a tool similar to debootstrap for RPM distributions, which can now create Fedora 28 environments aka chroots.

On Sunday I gave a lightning talk (video) about how to try out dracut on your computer without removing initramfs-tools. In Debian, we still have not switched the default to dracut, and I would like to see more feedback on whether dracut works in your environment. Later I did a presentation on the FAI.me build service (video, slides). Many thanks to Juri, who implemented a switch on the FAI.me web page for changing between a basic and an advanced mode for the installation images. I've also worked on installing Ubuntu 18.04 LTS (Bionic) using FAI, which was quite simple, because changing the release name from xenial to bionic was most of the work. Yesterday I added some language support for Ubuntu into FAI, so I hope to release the next version soon.

The Mini DebConf Hamburg was very nice, with a great location, so I hope there will be more MiniDebConfs in Hamburg in the future.

Don MartiHappy GDPR day. Here's some sensitive data about me.

I know I haven't posted for a while, but I can't skip GDPR Day. You don't see a lot of personal info from me here on this blog. But just for once, I'm going to share something.

I'm a blood donor.

This doesn't seem like a lot of information. People sign up for blood drives all the time. But the serious privacy problem here is that when I give blood, they also test me for a lot of diseases, many of which could have a big impact on my life and how much of certain kinds of healthcare products and services I'm likely to need. The fact that I'm a blood donor might also help people infer something about my sex life but the health data is TMI already.

And I have some bad news. I recently got the ad info from my Facebook account and there it is, in the file advertisers_who_uploaded_a_contact_list_with_your_information.html. American Red Cross Blood Donors. Yes, it looks like the people I chose to trust with some of my most sensitive personal info have given it to the least trusted company on the Internet.

In today's marketing scene, the fact that my blood donor information leaked to Facebook isn't too surprising. The Red Cross clearly has some marketing people, and targeting the existing contact list on Facebook is just one of the things that marketing people do without thinking about it too much. Not thinking about privacy concerns is a problem for Marketing as a career field long-term. If everyone thinks of Marketing as the Department of Creepy Stuff it's going to be harder to recruit creative people.

So, wait a minute. Why am I concerned that Facebook has positive health info on me? Doesn't that help maintain my status in the data-driven economy? What's the downside? (Obvious joke about healthy-blood-craving Facebook board member Peter Thiel redacted—you're welcome.)

The problem is that my control over my personal data isn't just a problem for me. As Prof. Arvind Narayanan said (video), Poor privacy harms society as a whole. Can I trust Facebook to use my blood info just to target me for the Red Cross, and not to sort people by health for other purposes? Of course not. Facebook has crossed every creepy line that they have promised not to. To be fair, that's not just a Facebook thing. Tech bros do risky and mean things all the time without really thinking them through, and even when they do set appropriate defaults they half-ass the implementation and shit happens.

Will blood donor status get you better deals, or apartments, or jobs, in the future? I don't know. I do know that the Red Cross made a big point about confidentiality when they got me signed up. I'm waiting for a reply from the Red Cross privacy officer about this, and will post an update.

Anyway, happy GDPR Day, and, in case you missed it, Salesforce CEO Marc Benioff Calls for a National Privacy Law.

TEDIn Case You Missed It: The dawn of “The Age of Amazement” at TED2018

More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4.

Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED)

Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at AudaciousProject.org.

Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

“It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED)

Can we rediscover the humanity in our tech?  In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age; and “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, who they were transmitting to, what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. “Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?”
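
Mattu's router-level measurement boils down to counting who talks to whom, and how often. As a rough sketch of that kind of metadata analysis (hypothetical capture file, using the scapy library; not Gizmodo's actual setup), it might look like this:

```python
# Rough sketch of analysing smart-home "phone home" traffic from a packet
# capture taken on the household router. File name is hypothetical.
from collections import Counter
from scapy.all import rdpcap, IP

packets = rdpcap("home_network.pcap")
flows = Counter()
for pkt in packets:
    if IP in pkt:
        flows[(pkt[IP].src, pkt[IP].dst)] += 1

# The most frequent device-to-server conversations already hint at patterns
# like sleep schedules or TV binges, purely from timing and volume.
for (src, dst), count in flows.most_common(10):
    print(f"{src} -> {dst}: {count} packets")
```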

An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.”

Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta has led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, University of Hawaii’s Karen Meech saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things.  

Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED)

Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words. Architects Renzo Piano, Vishaan Chakbrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak  — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints.  

Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED)

Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits.

Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotchakorn Voraakhom realized that a park could be designed to allow her flood-prone city of Bangkok to mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

TEDIn Case You Missed It: Bold visions for humanity at day 4 of TED2018

Three sessions of memorable TED Talks covering life, death and the future of humanity made the penultimate day of TED2018 a remarkable space for tech breakthroughs and dispatches from the edges of culture.

Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

The future built on genetic code. DNA is built on four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the four letters of the genetic alphabet are not all that unique. He and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. And maybe soon, we’ll be able to use that expanded DNA alphabet to teleport. That’s right, you read it here first: teleportation is real. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit the most fundamental parts of who we are: our DNA. It’s called biological teleportation, and the idea is that biological entities including viruses and living cells can be reconstructed in a distant location if we can read and write the sequence of that DNA code. The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines.

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. (Photo: Jason Redmond / TED)

Dispatches from the fight against hate online. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. In 2016, Green collaborated with Moonshot CVE to pilot a new approach, the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups, and used what they learned to create targeted advertising aimed at people susceptible to ISIS’s recruiting — and counter those messages. In English and Arabic, the eight-week pilot program reached more than 300,000 people. “If technology has any hope of overcoming today’s challenges,” Green says, “we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” Dylan Marron is taking a different approach to the problem of hate on the internet. His video series, such as “Sitting in Bathrooms With Trans People,” have racked up millions of views, and they’ve also sent a slew of internet poison in his direction. He developed a coping mechanism: he calls up the people who leave hateful remarks, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace, he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years (he’s now just 18) he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — a machine can try every possible solution, even ones too absurd for a human to imagine, until it finds the thing that works best to solve a single discrete problem. Which really isn’t general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives. Picking up on the thread of pitfalls of current AI, artist and technology critic James Bridle describes how automated copycats on YouTube mimic trusted videos by using algorithmic tricks to create “fake news” for kids. End result: children exploring YouTube videos from their favorite cartoon characters are sent down autoplaying rabbit holes, where they can find eerie, disturbing videos filled with very real violence and very real trauma. Algorithms are touted as the fix, but as Bridle says, machine learning is really just what we call software that does things we don’t understand … and we have enough of that already, no?

Chetna Gala Sinha tells us about a bank in India that meets the needs of rural poor women who want to save and borrow. (Photo: Jason Redmond / TED)

Listen and learn. Takemia MizLadi Smith spoke up for the front-desk staffer, the checkout clerk, and everyone who’s ever been told they need to start collecting information from customers, whether it be an email, zip code or data about their race and gender. Smith makes the case to empower every front desk employee who collects data — by telling them exactly how that data will be used. Chetna Gala Sinha, meanwhile, started a bank in India that meets the needs of rural poor women who want to save and borrow — and whom traditional banks would not touch. How does the bank improve their service? As Chetna says: simply by listening. Meanwhile, sex educator Emily Nagoski talked about a phenomenon called emotional nonconcordance, where what your body seems to want runs counter to what you actually want. In an intimate situation, ahem, it can be hard to figure out which one to listen to, head or body. Nagoski gives us full permission and encouragement to listen to your head, and to the words coming out of the mouth of your partner. And Harvard Business School prof Frances Frei gave a crash course in trust — building it, keeping it, and the hardest, rebuilding it. She shares lessons from her stint as an embed at Uber, where, far from listening in meetings, staffers would actually text each other during meetings — about the meeting. True listening, the kind that builds trust, starts with putting away your phone.

Bionic man Hugh Herr envisions humanity soaring out of the 21st century. (Photo: Ryan Lash / TED)

A new way to heal our bodies … and build new ones. Optical engineer Mary Lou Jepsen shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it and doesn’t let it pass through. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. MIT professor Hugh Herr is working on a different way to heal — and augment — our bodies. He’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it “NeuroEmbodied Design,” a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend who lost a foot in a climbing accident. Using the Agonist-antagonist Myoneural Interface, or AAMI, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. What might be next? Maybe, the ability to fly.

Announcements! Back in 2014, space scientist Will Marshall introduced us to his company, Planet, and their proposed fleet of tiny satellites. The goal: to image the planet every day, showing us how Earth changes in near-real time. In 2018, that vision has come good: every day, a fleet of about 200 small satellites pictures every inch of the planet, taking 1.5 million 29-megapixel images every day (about 6T of data daily), gathering data on changes both natural and human-made. This week at TED, Marshall announced a consumer version of Planet, called Planet Stories, to let ordinary people play with these images. Start playing now here. Another announcement comes from futurist Ray Kurzweil: a new way to query the text inside books using something called semantic search — which is a search on ideas and concepts, rather than specific words. Called TalkToBooks, the beta-stage product uses an experimental AI to query a database of 120,000 books in about a half a second. (As Kurzweil jokes: “It takes me hours to read a hundred thousand books.”) Jump in and play with TalkToBooks here. Also announced today: “TED Talks India: Nayi Soch” — the wildly popular Hindi-language TV series, created in partnership with StarTV and hosted by Shah Rukh Khan — will be back for three more seasons.
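
The TalkToBooks idea of "a search on ideas and concepts, rather than specific words" is generally implemented by comparing vector embeddings of text rather than keywords. The post doesn't describe Kurzweil's actual system, but a generic sketch of embedding-based semantic search — assuming the open-source sentence-transformers package and a toy passage list — looks roughly like this:

```python
# Generic sketch of semantic search: rank passages by meaning similarity to a
# query, not by shared words. Not TalkToBooks' implementation; toy data only.
import numpy as np
from sentence_transformers import SentenceTransformer

passages = [
    "The hero finally forgives his father after years of silence.",
    "Compound interest rewards patience more than cleverness.",
    "The octopus changed colour to vanish against the coral.",
]
model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(passages, normalize_embeddings=True)

query = "reconciliation between a parent and child"
q_vec = model.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ q_vec        # cosine similarity, since vectors are normalized
best = int(np.argmax(scores))
print(passages[best])            # matches the forgiveness passage despite sharing no words
```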

TEDBody electric: Notes from Session 9 of TED2018

Mary Lou Jepsen demonstrates the ability of red light to scatter when it hits our bodies. Can we leverage this property to see inside ourselves? She speaks at TED2018 on April 13, 2018. Photo: Ryan Lash / TED

During the week of TED, it’s tempting to feel like a brain in a jar — to think on a highly abstracted, intellectual, hypertechnical level about every single human issue. But the speakers in this session remind us that we’re still just made of meat. And that our carbon-based life forms aren’t problems to be transcended but, if you will, platforms. Let’s build on them, explore them, and above all feel at home in them.

When red light means go. The last time Mary Lou Jepsen took the TED stage, she shared the science of knowing what’s inside another person’s mind. This time, the celebrated optical engineer shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. Her demo doubles as a crash course in optics, with red and green lasers and all kinds of cool gear (some of which juuuuust squeaked through customs in time). And it’s a wildly inspiring look at a bold effort to solve an old problem in a new way.

Floyd E. Romesberg imagines a couple new letters in DNA that might allow us to create … who knows what. Photo: Jason Redmond / TED

What if DNA had more letters to work with? DNA is built on only four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the letters of the genetic alphabet are not all that unique. For the problem of life, perhaps, “maybe we’re not the only solution, maybe not even the best solution — just a solution.” And maybe new parts can be built to work alongside the natural parts. Inspired by these insights, Romesberg and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. Worried about unintended consequences? Romesberg says that his augmented 6-letter DNA cannot be replenished within the body. As the unnatural genetic materials are depleted, the semi-synthetic cells die off, protecting us against nightmarish sci-fi scenarios of rogue microorganisms.

On the slide behind Dan Gibson: a teleportation machine, more or less. It’s a “printer” that can convert digital information into biological material, and it holds the promise of sending things like vaccines and medicines over the internet. Photo: Ryan Lash / TED

Beam our DNA up, Scotty. Teleportation is real. That’s right, you read it here first. This method isn’t quite like what the minds behind Star Trek brought to life, but the massive implications attached are just as futuristic. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit not our entire selves, but the most fundamental parts of who we are: our DNA. Or, simply put, biological teleportation. “The characteristics and functions of all biological entities including viruses and living cells are written into the code of DNA,” says Gibson. “They can be reconstructed in a distant location if we can read and write the sequence of that DNA code.” The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one literally worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines. The process takes weeks now, but could someday come down to 1–2 days. (And don’t worry: Gibson, his team and the government screen every synthesis order against a database to make sure viruses and pathogens aren’t being made.) He says: “For now, I will be satisfied beaming new medicines across the globe, fully automated and on-demand to save lives from emerging deadly infectious diseases and to create personalized cancer medicines for those who don’t have time to wait.”

In a powerful talk, sex educator Emily Nagoski educates us about emotional nonconcordance — when our body and our mind “say” different things in an intimate situation. Which to listen to? Photo: Ryan Lash / TED

Busting one of our most dangerous myths about sex. When it comes to pleasure, humans have something that’s often called “the reward center” — but, explains sex educator Emily Nagoski, that “reward center” is actually three intertwined, separate systems: liking, or whether it feels good or bad; wanting, which motivates us to move toward or away from a stimulus; and learning. Learning is best explained by Pavlov’s dogs, whom he trained to salivate when he rang a bell. Were the dogs hungry for the bell (wanting)? Did they find the bell delicious (liking)? Of course not: “What Pavlov did was make the bell food-related.” The separateness of these three things, wanting, liking and learning, helps explain a phenomenon called emotional nonconcordance, when our physiological response doesn’t match our subjective experience. This happens with all sorts of emotional and motivational systems, including sex. “Research over the last thirty years has found that genital blood flow can increase in response to sex-related stimuli, even if those sex-related stimuli are not also associated with a subjective experience of wanting and liking,” she says. The problem is that we don’t recognize nonconcordance when it comes to sex: in fact, there is a dangerous myth that even if someone says they don’t want it or don’t like it, their body can say differently, and the body is the one telling the “truth.” This myth has serious consequences for victims of unwanted and nonconsensual sexual contact, who are sometimes told that their nonconcordant genital response invalidates their experience … and who can even have that response held up as evidence in sexual assault cases. Nagoski urges all of us to share this crucial information with someone — judges, lawyers, your partners, your kids. “The roots of this myth are deep and they are entangled with some very dark forces in our culture, but with every brave conversation we have, we make the world that little bit better,” she says to one of the biggest standing Os in a standing-O-heavy session.

The musicians and songwriters of LADAMA perform and speak at TED2018. Photo: Ryan Lash / TED

Bringing Latin alternative music to Vancouver. Singing in Spanish, Portuguese and English, LADAMA enliven the TED stage with a vibrant, energizing and utterly danceable musical set. The multinational ensemble of women — Maria Fernanda Gonzalez from Venezuela, Lara Klaus from Brazil, Daniela Serna of Colombia, and Sara Lucas from the US — and their bass player collaborator combine traditional South American and Caribbean styles like cumbia, maracatu and joropo with pop, soul and R&B to deliver a pulsing musical experience. The group took attendees on a musical journey with their modern and soulful compositions, playing original songs “Night Traveler” and “Porro Maracatu.”

Hugh Herr lost both legs below the knee, but the new legs he built allow him once again to run, climb and even dance. Photo: Ryan Lash / TED

“The robot became part of me.” MIT professor Hugh Herr takes the TED stage, his sleek bionic legs conspicuous under his sharp grey suit. “I’m basically nuts and bolts from the knee down,” Herr says, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward realizing a goal that has long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it “NeuroEmbodied Design,” a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend of Herr’s who was in a climbing accident that resulted in the amputation of his foot. Using the Agonist-antagonist Myoneural Interface, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. “Jim moves and behaves as if the synthetic limb is part of him,” Herr says. And he’s even back climbing again. Taking a few moments to dream, Herr describes a future where humans have augmented their bodies in a way that fundamentally redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. “I believe humans will become superheroes,” Herr says. “During the twilight years of this century, I believe humans will be unrecognizable in morphology and dynamics from what we are today. Humanity will take flight and soar.”

Jim Ewing, left, lost a limb in a climbing accident; he partnered with MIT professor Hugh Herr, right, to build a limb that got him back up and climbing again. Photo: Ryan Lash / TED

,

Harald WelteOsmoCon 2018 CfP closes on 2018-05-30

One of the difficulties with OsmoCon2017 last year was that almost nobody submitted talks / discussions within the deadline, early enough to allow for proper planning.

This led to a situation where the sysmocom team had to come up with a schedule/agenda on their own. Later on, well after the CfP deadline, people then squeezed in talks, making the overall schedule too full.

It is up to you to avoid this situation again in 2018 at OsmoCon2018 by submitting your talk RIGHT NOW. We will be very strict regarding late submissions. So if you would like to shape the Agenda of OsmoCon 2018, this is your chance. Please use it.

We will have to create a schedule soon, as [almost] nobody will register for a conference unless the schedule is known. If there's not sufficient contribution in terms of CfP response from the wider community, don't complain later that 90% of the talks are from sysmocom team members and cover only Cellular Network Infrastructure topics.

You have been warned. Please make your CfP submission in time at https://pretalx.sysmocom.de/osmocon2018/cfp before the CfP deadline on 2018-05-30 23:59 (Europe/Berlin)

Harald Welteopenmoko.org archive down due to datacenter issues

Unfortunately, since about 11:30 am CEST on May 24, openmoko.org has been down due to some power outage related issues at Hetzner, the hosting company at which openmoko.org has been hosted for more than a decade now.

The problem seems to have caused quite a lot of fall-out for many servers (Hetzner hosts some 200k machines; it's not clear how many were affected, though), and Hetzner is anything but verbose when it comes to actually explaining what the issue is.

All they have published is https://www.hetzner-status.de/en.html#8842 - which is rather tight-lipped about some power grid issues. But then, what do you have UPSs for if not for "a strong voltage reduction in the local power grid"?

The openmoko.org archive machine is running in Hetzner DC10, by the way. This is where they've had the largest number of tickets.

In any case, we'll have to wait for them to resolve their tickets. They appear to be working day and night on that.

I have a number of machines hosted at Hetzner, and I'm actually rather happy that none of the more important systems were affected that long. Some machines simply lost their uplink connectivity for some minutes, while some others were rebooted (power outage). The openmoko.org archive is the only machine that didn't automatically boot after the outage; maybe the power supply needs replacement.

In any case, I hope the service will be back up again soon.

btw: Guess who's been paying for hosting costs ever since Openmoko, Inc. shut down? Yes, yours truly. It was OK for something like 9 years, but now I want to recursively pull the dynamic content through some cache, which can then be made permanent. The resulting static archive can then be moved to some VM somewhere, without requiring a dedicated root server. That should reduce the costs down to almost nothing.
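
As a rough sketch of what producing such a permanent static copy could look like (the URL and target directory below are placeholders, and the exact options would need tuning per service):

#!/usr/bin/env python3
# Sketch only: produce a static copy of a dynamic site by letting wget
# crawl it recursively. URL and target directory are placeholders; a real
# run would need per-service tuning (rate limits, login-only areas, ...).
import subprocess

SITE = "https://wiki.openmoko.org/"   # placeholder: whichever service gets archived
DEST = "/srv/archive/openmoko"        # placeholder target directory

subprocess.run([
    "wget",
    "--mirror",            # recursive download with timestamping
    "--page-requisites",   # also fetch CSS, images, ...
    "--adjust-extension",  # store HTML pages with an .html suffix
    "--convert-links",     # rewrite links so the copy works offline
    "--directory-prefix", DEST,
    SITE,
], check=True)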

Krebs on Security3 Charged In Fatal Kansas ‘Swatting’ Attack

Federal prosecutors have charged three men with carrying out a deadly hoax known as “swatting,” in which perpetrators call or message a target’s local 911 operators claiming a fake hostage situation or a bomb threat in progress at the target’s address — with the expectation that local police may respond to the scene with deadly force. While only one of the three men is accused of making the phony call to police that got an innocent man shot and killed, investigators say the other two men’s efforts to taunt and deceive one another ultimately helped point the gun.

Tyler “SWAuTistic” Barriss. Photo: AP

According to prosecutors, the tragic hoax started with a dispute over a match in the online game “Call of Duty.” The indictment says Shane M. Gaskill, a 19-year-old Wichita, Kansas resident, and Casey S. Viner, 18, had a falling out over a $1.50 game wager.

Viner allegedly wanted to get back at Gaskill, and so enlisted the help of another man — Tyler R. Barriss — a serial swatter known by the alias “SWAuTistic” who’d bragged of “swatting” hundreds of schools and dozens of private residences.

The federal indictment references transcripts of alleged online chats among the three men. In an exchange on Dec. 28, 2017, Gaskill taunts Barriss on Twitter after noticing that Barriss’s Twitter account (@swattingaccount) had suddenly started following him.

Viner and Barriss both allegedly say if Gaskill isn’t scared of getting swatted, he should give up his home address. But the address that Gaskill gave Viner to pass on to Barriss no longer belonged to him and was occupied by a new tenant.

Barriss allegedly then called the emergency 911 operators in Wichita and said he was at the address provided by Viner, that he’d just shot his father in the head, was holding his mom and sister at gunpoint, and was thinking about burning down the home with everyone inside.

Wichita police quickly responded to the fake hostage report and surrounded the address given by Gaskill. Seconds later, 28-year-old Andrew Finch exited his mom’s home and was killed by a single shot from a Wichita police officer. Finch, a father of two, was not a party to the gamers’ dispute and was simply in the wrong place at the wrong time.

Just minutes after the fatal shooting, Barriss — who is in Los Angeles  — is allegedly anxious to learn if his Kansas swat attempt was successful. Someone has just sent Barriss a screenshot of a conversation between Viner and Gaskill mentioning police at Gaskill’s home and someone getting killed. So Barriss allegedly then starts needling Gaskill via instant message:

Defendant BARRISS: Yo answer me this
Defendant BARRISS: Did police show up to your house yes or no
Defendant GASKILL: No dumb fuck
Defendant BARRISS: Lmao here’s how I know you’re lying

Prosecutors say Barriss then posted a screen shot showing the following conversation between Viner and Gaskill:

Defendant VINER: Oi
Defendant GASKILL: Hi
Defendant VINER: Did anyone show @ your house?
Defendant VINER: Be honest
Defendant GASKILL: Nope
Defendant GASKILL: The cops are at my house because someone ik just killed his dad

Barriss and Gaskill then allegedly continued their conversation:

Defendant GASKILL: They showed up to my old house retard
Defendant BARRISS: That was the call script
Defendant BARRISS: Lol
Defendant GASKILL: Your literally retarded
Defendant GASKILL: Ik dumb ass
Defendant BARRISS: So you just got caught in a lie
Defendant GASKILL: No I played along with you
Defendant GASKILL: They showed up to my old house that we own and rented out
Defendant GASKILL: We don’t live there anymore bahahaha
Defendant GASKILL: ik you just wasted your time and now your pissed
Defendant BARRISS: Not really
Defendant BARRISS: Once you said “killed his dad” I knew it worked lol
Defendant BARRISS: That was the call lol
Defendant GASKILL: Yes it did buy they never showed up to my house
Defendant GASKILL: You guys got trolled
Defendant GASKILL: Look up who live there we moved out almost a year ago
Defendant GASKILL: I give you props though you’re the 1% that can actually swat babahaha
Defendant BARRISS: Dude MY point is You gave an address that you dont live at but you were acting tough lol
Defendant BARRISS: So you’re a bitch

Later on the evening of Dec. 28, after news of the fatal swatting started blanketing the local television coverage in Kansas, Gaskill allegedly told Barriss to delete their previous messages. “Bape” in this conversation refers to a nickname allegedly used by Casey Viner:

Defendant GASKILL: Dm asap
Defendant GASKILL: Please it’s very fucking impi
Defendant GASKILL: Hello
Defendant BARRISS: ?
Defendant BARRISS: What you want
Defendant GASKILL: Dude
Defendant GASKILL: Me you and bape
Defendant GASKILL: Need to delete everything
Defendant GASKILL: This is a murder case now
Defendant GASKILL: Casey deleted everything
Defendant GASKILL: You need 2 as well
Defendant GASKILL: This isn’t a joke K troll anymore
Defendant GASKILL: If you don’t you’re literally retarded I’m trying to help you both out
Defendant GASKILL: They know it was swat call

The indictment also features chat records between Viner and others in which he admits to his role in the deadly swatting attack. In the following chat excerpt, Viner was allegedly talking with someone identified only as “J.D.”

Defendant VINER: I literally said you’re gonna be swatted, and the guy who swatted him can easily say I convinced him or something when I said hey can you swat this guy and then gave him the address and he said yes and then said he’d do it for free because I said he doesn’t think anything will happen
Defendant VINER: How can I not worry when I googled what happens when you’re involved and it said a eu [sic] kid and a US person got 20 years in prison min
Defendant VINER: And he didn’t even give his address he gave a false address apparently
J.D.: You didn’t call the hoax in…
Defendant VINER: Does t [sic] even matter ?????? I was involved I asked him to do it in the first place
Defendant VINER: I gave him the address to do it, but then again so did the other guy he gave him the address to do it as well and said do it pull up etc

Barriss is charged with multiple counts of making false information and hoaxes; cyberstalking; threatening to kill another or damage property by fire; interstate threats; conspiracy; and wire fraud. Viner and Gaskill were both charged with wire fraud, conspiracy and obstruction of justice. A copy of the indictment is available here.

The Associated Press reports that the most serious charge of making a hoax call carries a potential life sentence because it resulted in a death, and that some of the other charges carry sentences of up to 20 years.

The moment that police in Kansas fired a single shot that killed Andrew Finch.

As I told the AP, swatting has been a problem for years, but it seems to have intensified around the time that top online gamers started being able to make serious money playing games online and streaming those games live to thousands or even tens of thousands of paying subscribers. Indeed, Barriss himself had earned a reputation as someone who delighted in watching police kick in doors behind celebrity gamers who were live-streaming.

This case is not the first time federal prosecutors have charged multiple people in the same swatting attacks even if only one person was involved in actually making the phony hoax calls to police. In 2013, my home was the target of a swatting attack that thankfully ended without incident. The government ultimately charged four men — several of whom were minors at the time — with conducting that swat attack as well as many others they’d perpetrated against public figures and celebrities.

But despite spending considerable resources investigating those crimes, prosecutors were able to secure only light punishments for those involved in the swatting spree. One of those men, a serial swatter and cyberstalker named Mir Islam, was sentenced to just one year in jail for his role in multiple swattings. Another individual who was part of that group — Eric “Cosmo the God” Taylor — got three years of probation.

Something tells me Barriss, Gaskill and Viner aren’t going to be so lucky. Barriss has admitted his role in many swattings, and he admitted to his last, fatal swatting in an interview he gave to KrebsOnSecurity less than 24 hours after Andrew Finch’s murder — saying he was not the person who pulled the trigger.

Rondam RamblingsBlame where it's due

I can't say I'm even a little bit surprised that the summit with North Korea has fallen through.  I wouldn't even bother blogging about this except that back in April I expressed some cautious optimism that maybe, just maybe, Trump's bull-in-the-china-shop tactics could be working.  Nothing makes me happier than having my pessimistic prophecies be proven wrong, but alas, Donald Trump seems to be

Sociological ImagesEnglish/Gibberish

One major part of introducing students to sociology is getting to the “this is water” lesson: the idea that our default experiences of social life are often strange and worthy of examining. This can be challenging, because the default is often boring or difficult to grasp, but asking the right questions is a good start (with some potentially hilarious results).

Take this one: what does English sound like to a non-native speaker? For students who grew up speaking it, this is almost like one of those Zen koans that you can’t quite wrap your head around. If you intuitively know what the language means, it is difficult to separate that meaning from the raw sounds.

That’s why I love this video from Italian pop singer Adriano Celentano. The whole thing is gibberish written to imitate how English slang sounds to people who don’t speak it.


Another example to get class going with a laugh is the 1990s video game Fighting Baseball for the SNES. Released in Japan, the game didn’t have the licensing to use real players’ names, so they used names that sounded close enough. A list of some of the names still bounces around the internet:

The popular idea of the Uncanny Valley in horror and science fiction works really well for languages, too. The funny (and sometimes unsettling) feelings we get when we watch imitations of our default assumptions fall short is a great way to get students thinking about how much work goes into our social world in the first place.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramFont Steganography

Worse Than FailureImprov for Programmers: Just for Transformers

We're back again with a little something different, brought to you by Raygun. Once again, the cast of "Improv for Programmers" is going to create some comedy on the fly for you, and this time… you could say it's… transformative. Today's episode contains small quantities of profanity.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Harald WelteMailing List hosting for FOSS Projects

Recently I've encountered several occasions in which a FOSS project would have been interested in some reliable, independent mailing list hosting for their project communication.

I was surprised how difficult it was to find anyone running such a service.

From the user / FOSS project point of view, the criteria that I would have are:

  • operated by some respected entity that is unlikely to turn hostile, discontinue the service or go out of business altogether
  • free of any type of advertisements (we all know how annoying those are)
  • cares about privacy, i.e. doesn't sell the subscriber lists or non-public archives
  • use FOSS to run the service itself, such as GNU mailman, listserv, ezmlm, ...
  • an easy path to migrate away to another service (or self-hosting) as they grow or their requirements change. A simple mail forward to that new address for the related addresses is typically sufficient for that

If you think mailing lists serve no purpose these days anyways, and everyone is on github: Please have a look at the many thousands of FOSS project mailing lists out there still in use. Not everyone wants to introduce a dependency to the whim of a proprietary software-as-a-service provider.

I never had this problem as I always hosted my own mailman instance on lists.gnumonks.org anyway, and all the entities that I've been involved in (whether non-profit or businesses) had their own mailing list hosts. From franken.de in the 1990s to netfilter.org, openmoko.org and now osmocom.org, we all pride ourselves on self-hosting.

But then there are plenty of smaller projects that have neither the skills nor the funding available. So they go to yahoo groups or some other service that will then hold them hostage without a way to switch their list archives from private to public, without downloadable archives or forwarding in case they want to move away :(

Of course the larger FOSS projects also have their own list servers, starting from vger.kernel.org to Linux distributions like Debian GNU/Linux. But what if your FOSS project is not specifically Linux related?

The sort-of obvious candidates that I found all don't really fit:

Now don't get me wrong, I'm of course not expecting that there are commercial entities operating free-of-charge list hosting services where you pay neither with money, nor with your data, nor by becoming a spam receiver.

But still, in the wider context of the Free Software community, I'm seriously surprised that none of the various not-for-profit / non-commercial foundations or associations are offering a public mailing list hosting service for FOSS projects.

One can of course always pick one from the above list and ask for a mailing list even though it's, strictly speaking, off-topic for them. But who will do that, if he has to ask uninvited for a favor?

I think there's something missing. I don't have the time to set up a related service, but I would certainly want to contribute in terms of funding in case any existing FOSS related legal entity wanted to expand. If you already have a legal entity, abuse contacts, a team of sysadmins, then it's only half the required effort.

Planet DebianJonathan Dowland: Mastodon

I'm experimenting with Mastodon, an alternative to Twitter. My account is @jon@argh.club. I'm happy for recommendations on interesting people to follow!

Inspired by Iustin, I also started taking a look at Hakyll as a possible replacement for IkiWiki. (That's at grr.argh.club/~jon, although there's nothing to see yet.)

Planet DebianBenjamin Mako Hill: Natural experiment showing how “wide walls” can support engagement and learning

Seymour Papert is credited as saying that tools to support learning should have “high ceilings” and “low floors.” The phrase is meant to suggest that tools should allow learners to do complex and intellectually sophisticated things but should also be easy to begin using quickly. Mitchel Resnick extended the metaphor to argue that learning toolkits should also have “wide walls” in that they should appeal to diverse groups of learners and allow for a broad variety of creative outcomes. In a new paper, Sayamindu Dasgupta and I attempted to provide an empirical test of Resnick’s wide walls theory. Using a natural experiment in the Scratch online community, we found causal evidence that “widening walls” can, as Resnick suggested, increase both engagement and learning.

Over the last ten years, the “wide walls” design principle has been widely cited in the design of new systems. For example, Resnick and his collaborators relied heavily on the principle in the design of the Scratch programming language. Scratch allows young learners to produce not only games, but also interactive art, music videos, greeting cards, stories, and much more. As part of that team, Sayamindu was guided by the “wide walls” principle when he designed and implemented the Scratch cloud variables system in 2011-2012.

While designing the system, Sayamindu hoped to “widen walls” by supporting a broader range of ways to use variables and data structures in Scratch. Scratch cloud variables extend the affordances of the normal Scratch variable by adding persistence and shared-ness. A simple example of something possible with cloud variables, but not without them, is a global high-score leaderboard in a game (example code is below). After the system was launched, we saw many young Scratch users using the system to engage with data structures in new and incredibly creative ways.

Example of Scratch code that uses a cloud variable to keep track of high-scores among all players of a game.
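
For readers who cannot see the screenshot, the gist of such a script, rendered here as a Python sketch rather than Scratch blocks, is roughly the following (all names are made up for illustration; in Scratch the high score would live in a cloud variable that is persistent and shared by all players):

# Rough Python analogue of the kind of high-score script described in the
# caption above. The cloud variable is faked with a module-level value;
# in Scratch it would be persistent and shared among all players.
cloud_high_score = 0  # stands in for the "high score" cloud variable

def on_game_over(score):
    """Roughly what the Scratch script does when a player's game ends."""
    global cloud_high_score
    if score > cloud_high_score:   # "if score > high score then"
        cloud_high_score = score   # "set high score to score"
    print("Best score so far:", cloud_high_score)

on_game_over(42)
on_game_over(17)  # does not beat the stored high score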

Although these examples reflected powerful anecdotal evidence, we were also interested in using quantitative data to reflect the causal effect of the system. Understanding the causal effect of a new design in real world settings is a major challenge. To do so, we took advantage of a “natural experiment” and some clever techniques from econometrics to measure how learners’ behavior changed when they were given access to a wider design space.

Understanding the design of our study requires understanding a little bit about how access to the Scratch cloud variable system is granted. Although the system has been accessible to Scratch users since 2013, new Scratch users do not get access immediately. They are granted access only after a certain amount of time and activity on the website (the specific criteria are not public). Our “experiment” involved a sudden change in policy that altered the criteria for who gets access to the cloud variable feature. Through no act of their own, more than 14,000 users were given access to the feature, literally overnight. We looked at these Scratch users immediately before and after the policy change to estimate the effect of access to the broader design space that cloud variables afforded.

We found that use of data-related features was, as predicted, increased by both access to and use of cloud variables. We also found that this increase was not only an effect of projects that use cloud variables themselves. In other words, learners with access to cloud variables—and especially those who had used it—were more likely to use “plain-old” data-structures in their projects as well.

The graph below visualizes the results of one of the statistical models in our paper and suggests that we would expect that 33% of projects by a prototypical “average” Scratch user would use data structures if the user in question had never used cloud variables but that we would expect that 60% of projects by a similar user would if they had used the system.

Model-predicted probability that a project made by a prototypical Scratch user will contain data structures (w/o counting projects with cloud variables)

It is important to note that the estimated effect above is a “local average effect” among people who used the system because they were granted access by the sudden change in policy (this is a subtle but important point that we explain in some depth in the paper). Although we urge care and skepticism in interpreting our numbers, we believe our results are encouraging evidence in support of the “wide walls” design principle.

Of course, our work is not without important limitations. Critically, we also found that the rate of adoption of cloud variables was very low. Although it is hard to pinpoint the exact reason for this from the data we observed, it has been suggested that widening walls may have a potential negative side-effect of making it harder for learners to imagine what the new creative possibilities might be in the absence of targeted support and scaffolding. Also important to remember is that our study measures “wide walls” in a specific way in a specific context and that it is hard to know how well our findings will generalize to other contexts and communities. We discuss these caveats, as well as our methods, models, and theoretical background in detail in our paper, which is now available for download as an open-access piece from the ACM digital library.


This blog post, and the open access paper that it describes, is a collaborative project with Sayamindu Dasgupta. Financial support came from the eScience Institute and the Department of Communication at the University of Washington. Quantitative analyses for this project were completed using the Hyak high performance computing cluster at the University of Washington.

CryptogramSupermarket Shoplifting

Worse Than FailureBusiness Driven Development

Every now and then, you come across a special project. You know the sort, where some business user decides that they know exactly what they need and exactly how it should be built. They get the buy-in of some C-level shmoe by making sure that their lips have intimate knowledge of said C-level butt. Once they have funding, they have people hired and begin to bark orders.

Toonces, the Driving Cat

About 8 years ago, I had the privilege experience of being on such a project. When we were given the phase-I specs, all the senior tech people immediately said that there was no way to perform a sane daily backup and data-roll for the next day. The response was "We're not going to worry about backups and daily book-rolls until later". We all just cringed, made like good little HPCs and followed our orders to march onward.

Fast forward about 10 months and the project had a sufficient amount of infrastructure that the business user had no choice but to start thinking about how to close the books each day, and roll things forward for the next day. The solution he came up with was as follows:

   1. Shut down all application servers and the DB
   2. Remove PK/FK relationships and rename all the tables in the database from: xxx to: xxx.yyyymmdd
   3. Create all new empty tables in the database (named: xxx)
   4. Create all the PK/FK relationships, indices, triggers, etc.
   5. Prime the new: xxx tables with data from the: xxx.<prev-business-date> tables
   6. Run a job to mirror the whole thing to offsite DB servers
   7. Run the nightly backups (to tape)
   8. Fire up the DB and application servers

Naturally, all the tech people groaned, mentioning things like history tables, wasted time regenerating indices, nightmares if errors occurred while renaming tables, etc., but they were ignored.

Then it happened. As is usually the case when non-technical people try to do technical designs, the business user found himself designed into a corner.

The legitimate business-need came up to make adjustments to transactions for the current business day after the table-roll to the next business day had completed.

The business user pondered it for a bit and came up with the following:

    1. Shut down all application servers and the DB
    2. Remove PK/FK relationships and rename the post-roll tables of tomorrow from xxx to xxx.tomorrow
    3. Copy SOME of the xxx.yyyymmdd tables from the pre-roll current day back to: xxx
       (leaving the PK's and indices notably absent)
    4. Restart the DB and application servers (with some tables rolled and some not rolled)
    5. Let the users make changes as needed
    6. Shut down the application and DB servers
    7. Manually run ad-hoc SQL to propagate all changes to the xxx.tomorrow table(s)
    8. Rename the: xxx tables to: xxx.yyyymmdd.1 
       (or 2 or 3, depending upon how many times this happened per day)
    9. Rename the xxx.tomorrow tables back to: xxx
   10. Rebuild all the PK/FK relationships, create new indices and re-associate triggers, etc.
   11. Rerun the mirroring and backup scripts
   12. Restart the whole thing

When we pointed out the insanity of all of this, and the extremely high likelihood of any failure in the table-renaming/moving/manual-updating causing an uncorrectable mess that would result in losing the entire day of transactions, we were summarily terminated as our services were no longer required — because they needed people who knew how to get things done.

I'm the first to admit that there are countless things that I do not know, and the older I get, the more that list seems to grow.

I'm also adamant about not making mistakes I know will absolutely blow up in my face - even if it costs me a job. If you need to see inside of a gas tank, throwing a lit match into it will illuminate the inside, but you probably won't like how it works out for you.

Five of us walked out of there, unemployed and laughing hysterically. We went to our favorite watering hole and decided to keep tabs on the place for the inevitable explosion.

Sure enough, 5 weeks after they had junior offshore developers (who didn't have the spine to say "No") build what they wanted, someone goofed in the rollback, and then goofed again while trying to unroll the rollback.

It took them three days to figure out what to restore and in what sequence, then restore it, rebuild everything and manually re-enter all of the transactions since the last backup. During that time, none of their customers got the data files that they were paying for, and had to find alternate sources for the information.

When they finally got everything restored, rebuilt and updated, they went to their customers and said "We're back". In response, the customers told them that they had found other ways of getting the time-sensitive information and no longer required their data product.

Not only weren't the business users fired, but they got big bonuses for handling the disaster that they had created.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianVincent Bernat: Multi-tier load-balancing with Linux

A common solution to provide a highly-available and scalable service is to insert a load-balancing layer to spread requests from users to backend servers.1 We usually have several expectations for such a layer:

scalability
It allows a service to scale by pushing traffic to newly provisioned backend servers. It should also be able to scale itself when it becomes the bottleneck.
availability
It provides high availability to the service. If one server becomes unavailable, the traffic should be quickly steered to another server. The load-balancing layer itself should also be highly available.
flexibility
It handles both short and long connections. It is flexible enough to offer all the features backends generally expect from a load-balancer like TLS or HTTP routing.
operability
With some cooperation, any expected change should be seamless: rolling out new software on the backends, adding or removing backends, or scaling the load-balancing layer itself up or down.

The problem and its solutions are well known. From recently published articles on the topic, “Introduction to modern network load-balancing and proxying” provides an overview of the state of the art. Google released “Maglev: A Fast and Reliable Software Network Load Balancer” describing their in-house solution in detail.2 However, the associated software is not available. Basically, building a load-balancing solution with commodity servers consists of assembling three components:

  • ECMP routing
  • stateless L4 load-balancing
  • stateful L7 load-balancing

In this article, I describe and support a multi-tier solution using Linux and only open-source components. It should offer you the basis to build a production-ready load-balancing layer.

Update (2018.05)

Facebook just released Katran, an L4 load-balancer implemented with XDP and eBPF and using consistent hashing. It could be inserted in the configuration described below.

Last tier: L7 load-balancing🔗

Let’s start with the last tier. Its role is to provide high availability, by forwarding requests to only healthy backends, and scalability, by spreading requests fairly between them. Working in the highest layers of the OSI model, it can also offer additional services, like TLS-termination, HTTP routing, header rewriting, rate-limiting of unauthenticated users, and so on. Being stateful, it can leverage complex load-balancing algorithms. Being the first point of contact with backend servers, it should ease maintenance and minimize impact during daily changes.

L7 load-balancers
The last tier of the load-balancing solution is a set of L7 load-balancers receiving user connections and forwarding them to the backends.

It also terminates client TCP connections. This introduces some loose coupling between the load-balancing components and the backend servers with the following benefits:

  • connections to servers can be kept open for lower resource use and latency,
  • requests can be retried transparently in case of failure,
  • clients can use a different IP protocol than servers, and
  • servers do not have to care about path MTU discovery, TCP congestion control algorithms, avoidance of the TIME-WAIT state and various other low-level details.

Many pieces of software would fit in this layer and an ample literature exists on how to configure them. You could look at HAProxy, Envoy or Træfik. Here is a configuration example for HAProxy:

# L7 load-balancer endpoint
frontend l7lb
  # Listen on both IPv4 and IPv6
  bind :80 v4v6
  # Redirect everything to a default backend
  default_backend servers
  # Healthchecking
  acl dead nbsrv(servers) lt 1
  acl disabled nbsrv(enabler) lt 1
  monitor-uri /healthcheck
  monitor fail if dead || disabled

# IPv6-only servers with HTTP healthchecking and remote agent checks
backend servers
  balance roundrobin
  option httpchk
  server web1 [2001:db8:1:0:2::1]:80 send-proxy check agent-check agent-port 5555
  server web2 [2001:db8:1:0:2::2]:80 send-proxy check agent-check agent-port 5555
  server web3 [2001:db8:1:0:2::3]:80 send-proxy check agent-check agent-port 5555
  server web4 [2001:db8:1:0:2::4]:80 send-proxy check agent-check agent-port 5555

# Fake backend: if the local agent check fails, we assume we are dead
backend enabler
  server enabler [::1]:0 agent-check agent-port 5555

This configuration is the most incomplete piece of this guide. However, it illustrates two key concepts for operability:

  1. Healthchecking of the web servers is done both at HTTP-level (with check and option httpchk) and using an auxiliary agent check (with agent-check). The latter makes it easy to put a server into maintenance or to orchestrate a progressive rollout. On each backend, you need a process listening on port 5555 and reporting the status of the service (UP, DOWN, MAINT). A simple socat process can do the trick (a rough Python equivalent is sketched after this list):3

    socat -ly \
      TCP6-LISTEN:5555,ipv6only=0,reuseaddr,fork \
      OPEN:/etc/lb/agent-check,rdonly
    

    Put UP in /etc/lb/agent-check when the service is in nominal mode. If the regular healthcheck is also positive, HAProxy will send requests to this node. When you need to put it in maintenance, write MAINT and wait for the existing connections to terminate. Use READY to cancel this mode.

  2. The load-balancer itself should provide a healthcheck endpoint (/healthcheck) for the upper tier. It will return a 503 error either if there are no backend servers available or if the enabler backend has been put down through the agent check. The same mechanism as for regular backends can be used to signal the unavailability of this load-balancer.
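
For those who would rather run a long-lived agent than socat, here is a rough Python equivalent (a sketch only; the setup above just uses socat). It answers every TCP connection on port 5555 with the current content of /etc/lb/agent-check, which is what HAProxy's agent-check expects:

#!/usr/bin/env python3
# Sketch of a Python alternative to the socat agent: report the content of
# /etc/lb/agent-check (UP, DOWN, MAINT, READY, ...) to HAProxy agent-checks.
import socket
import socketserver

STATE_FILE = "/etc/lb/agent-check"

class AgentHandler(socketserver.StreamRequestHandler):
    def handle(self):
        try:
            with open(STATE_FILE, "rb") as f:
                state = f.read().strip() or b"DOWN"
        except OSError:
            state = b"DOWN"          # missing file: report the server as down
        self.wfile.write(state + b"\n")

class V6Server(socketserver.ThreadingTCPServer):
    address_family = socket.AF_INET6  # "::" also accepts IPv4 clients on most Linux setups
    allow_reuse_address = True

if __name__ == "__main__":
    with V6Server(("::", 5555), AgentHandler) as server:
        server.serve_forever()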

Additionally, the send-proxy directive enables the proxy protocol to transmit the real clients’ IP addresses. This protocol also works for non-HTTP connections and is supported by a variety of servers, including nginx:

http {
  server {
    listen [::]:80 default ipv6only=off proxy_protocol;
    root /var/www;
    set_real_ip_from ::/0;
    real_ip_header proxy_protocol;
  }
}

As is, this solution is not complete. We have just moved the availability and scalability problem somewhere else. How do we load-balance the requests between the load-balancers?

First tier: ECMP routing🔗

On most modern routed IP networks, redundant paths exist between clients and servers. For each packet, routers have to choose a path. When the cost associated with each path is equal, incoming flows4 are load-balanced among the available destinations. This characteristic can be used to balance connections among available load-balancers:

ECMP routing
ECMP routing is used as a first tier. Flows are spread among available L7 load-balancers. Routing is stateless and asymmetric. Backend servers are not represented.

There is little control over the load-balancing, but ECMP routing brings the ability to scale both tiers horizontally. A common way to implement such a solution is to use BGP, a routing protocol to exchange routes between network equipment. Each load-balancer announces to its connected routers the IP addresses it is serving.

If we assume you already have BGP-enabled routers available, ExaBGP is a flexible solution to let the load-balancers advertise their availability. Here is a configuration for one of the load-balancers:

# Healthcheck for IPv6
process service-v6 {
  run python -m exabgp healthcheck -s --interval 10 --increase 0 --cmd "test -f /etc/lb/v6-ready -a ! -f /etc/lb/disable";
  encoder text;
}

template {
  # Template for IPv6 neighbors
  neighbor v6 {
    router-id 192.0.2.132;
    local-address 2001:db8::192.0.2.132;
    local-as 65000;
    peer-as 65000;
    hold-time 6;
    family {
      ipv6 unicast;
    }
    api services-v6 {
      processes [ service-v6 ];
    }
  }
}

# First router
neighbor 2001:db8::192.0.2.254 {
  inherit v6;
}

# Second router
neighbor 2001:db8::192.0.2.253 {
  inherit v6;
}

If /etc/lb/v6-ready is present and /etc/lb/disable is absent, all the IP addresses configured on the lo interface will be announced to both routers. If the other load-balancers use a similar configuration, the routers will distribute incoming flows between them. Some external process should manage the existence of the /etc/lb/v6-ready file by checking the health of the load-balancer (using the /healthcheck endpoint for example). An operator can remove a load-balancer from the rotation by creating the /etc/lb/disable file.
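
This external process is left unspecified here; as an illustration only, a small Python watcher could poll the local /healthcheck endpoint and create or remove /etc/lb/v6-ready accordingly (the URL and polling interval are assumptions):

#!/usr/bin/env python3
# Illustrative watcher: keep /etc/lb/v6-ready around only while the local
# HAProxy /healthcheck endpoint answers 200, so that ExaBGP advertises the
# service IP only when the load-balancer is actually healthy.
import os
import time
import urllib.request

READY_FLAG = "/etc/lb/v6-ready"
HEALTH_URL = "http://[::1]:80/healthcheck"  # assumes HAProxy is reachable on localhost

def healthy():
    try:
        # a 503 raises HTTPError (a subclass of OSError) and is caught below
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

while True:
    if healthy():
        open(READY_FLAG, "a").close()   # touch the flag file
    else:
        try:
            os.unlink(READY_FLAG)
        except FileNotFoundError:
            pass
    time.sleep(5)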

To get more details on this part, have a look at “High availability with ExaBGP.” If you are in the cloud, this tier is usually implemented by your cloud provider, either using an anycast IP address or a basic L4 load-balancer.

Unfortunately, this solution is not resilient when an expected or unexpected change happens. Notably, when adding or removing a load-balancer, the number of available routes for a destination changes. The hashing algorithm used by routers is not consistent and flows are reshuffled among the available load-balancers, breaking existing connections:

Stability of ECMP routing 1/2
ECMP routing is unstable when a change happens. An additional load-balancer is added to the pool and the flows are routed to different load-balancers, which do not have the appropriate entries in their connection tables.

Moreover, each router may choose its own routes. When a router becomes unavailable, the second one may route the same flows differently:

Stability of ECMP routing 2/2
A router becomes unavailable and the remaining router load-balances its flows differently. One of them is routed to a different load-balancer, which does not have the appropriate entry in its connection table.

If you think this is not an acceptable outcome, notably if you need to handle long connections like file downloads, video streaming or websocket connections, you need an additional tier. Keep reading!

Second tier: L4 load-balancing🔗

The second tier is the glue between the stateless world of IP routers and the stateful land of L7 load-balancing. It is implemented with L4 load-balancing. The terminology can be a bit confusing here: this tier routes IP datagrams (no TCP termination) but the scheduler uses both destination IP and port to choose an available L7 load-balancer. The purpose of this tier is to ensure all members take the same scheduling decision for an incoming packet.

There are two options:

  • stateful L4 load-balancing with state synchronization across the members, or
  • stateless L4 load-balancing with consistent hashing.

The first option increases complexity and limits scalability. We won’t use it.5 The second option is less resilient during some changes but can be enhanced with a hybrid approach using a local state.

We use IPVS, a performant L4 load-balancer running inside the Linux kernel, with Keepalived, a frontend to IPVS with a set of healthcheckers to kick out an unhealthy component. IPVS is configured to use the Maglev scheduler, a consistent hashing algorithm from Google. Among its family, this is a great algorithm because it spreads connections fairly, minimizes disruptions during changes and is quite fast at building its lookup table. Finally, to improve performance, we let the last tier—the L7 load-balancers—send back answers directly to the clients without involving the second tier—the L4 load-balancers. This is referred to as direct server return (DSR) or direct routing (DR).

Second tier: L4 load-balancing
L4 load-balancing with IPVS and consistent hashing as a glue between the first tier and the third tier. Backend servers have been omitted. Dotted lines represent the path for the return packets.
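
To give an intuition of why a Maglev-style scheduler minimizes disruption, here is a toy Python rendition of how its lookup table is populated. It follows the published algorithm rather than the actual IPVS mh code; table size and hash functions are arbitrary choices for the example:

#!/usr/bin/env python3
# Toy illustration of Maglev-style lookup table population (sketch of the
# algorithm from the Maglev paper, not the IPVS implementation).
import hashlib

M = 65537  # table size: a prime much larger than the number of backends

def h(name, salt):
    digest = hashlib.sha256((salt + name).encode()).digest()
    return int.from_bytes(digest[:8], "big")

def maglev_table(backends):
    offset = {b: h(b, "offset") % M for b in backends}
    skip = {b: h(b, "skip") % (M - 1) + 1 for b in backends}
    nxt = {b: 0 for b in backends}
    table = [None] * M
    filled = 0
    while filled < M:
        for b in backends:
            # walk this backend's preference list until a free slot is found
            while True:
                slot = (offset[b] + nxt[b] * skip[b]) % M
                nxt[b] += 1
                if table[slot] is None:
                    table[slot] = b
                    filled += 1
                    break
            if filled == M:
                break
    return table

# A flow is then mapped to table[hash(flow) % M]. Adding or removing one
# backend only reshuffles a small fraction of the slots, which is why most
# existing flows keep their L7 load-balancer.
table = maglev_table(["lb1", "lb2", "lb3"])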

With such a setup, we expect packets from a flow to be able to move freely between the components of the two first tiers while sticking to the same L7 load-balancer.

Configuration🔗

Assuming ExaBGP has already been configured like described in the previous section, let’s start with the configuration of Keepalived:

virtual_server_group VS_GROUP_MH_IPv6 {
  2001:db8::198.51.100.1 80
}
virtual_server group VS_GROUP_MH_IPv6 {
  lvs_method TUN  # Tunnel mode for DSR
  lvs_sched mh    # Scheduler: Maglev
  sh-port         # Use port information for scheduling
  protocol TCP
  delay_loop 5
  alpha           # All servers are down on start
  omega           # Execute quorum_down on shutdown
  quorum_up   "/bin/touch /etc/lb/v6-ready"
  quorum_down "/bin/rm -f /etc/lb/v6-ready"

  # First L7 load-balancer
  real_server 2001:db8::192.0.2.132 80 {
    weight 1
    HTTP_GET {
      url {
        path /healthcheck
        status_code 200
      }
      connect_timeout 2
    }
  }

  # Many others...
}

The quorum_up and quorum_down statements define the commands to be executed when the service becomes available and unavailable respectively. The /etc/lb/v6-ready file is used as a signal to ExaBGP to advertise the service IP address to the neighbor routers.

Additionally, IPVS needs to be configured to keep routing packets of a flow moved over from another L4 load-balancer. It should also keep routing packets of existing connections to destinations that have become unavailable, to ensure we can properly drain a L7 load-balancer.

# Schedule non-SYN packets
sysctl -qw net.ipv4.vs.sloppy_tcp=1
# Do NOT reschedule a connection when destination
# doesn't exist anymore
sysctl -qw net.ipv4.vs.expire_nodest_conn=0
sysctl -qw net.ipv4.vs.expire_quiescent_template=0

The Maglev scheduling algorithm will be available with Linux 4.18, thanks to Inju Song. For older kernels, I have prepared a backport.6 Use of source hashing as a scheduling algorithm will hurt the resilience of the setup.

DSR is implemented using the tunnel mode. This method is compatible with routed datacenters and cloud environments. Requests are tunneled to the scheduled peer using IPIP encapsulation. It adds a small overhead and may lead to MTU issues. If possible, ensure you are using a larger MTU for communication between the second and the third tier.7 Otherwise, it is better to explicitly allow fragmentation of IP packets:

sysctl -qw net.ipv4.vs.pmtu_disc=0

You also need to configure the L7 load-balancers to handle encapsulated traffic:8

# Setup IPIP tunnel to accept packets from any source
ip tunnel add tunlv6 mode ip6ip6 local 2001:db8::192.0.2.132
ip link set up dev tunlv6
ip addr add 2001:db8::198.51.100.1/128 dev tunlv6

Evaluation of the resilience🔗

As configured, the second tier increases the resilience of this setup for two reasons:

  1. The scheduling algorithm is using a consistent hash to choose its destination. Such an algorithm reduces the negative impact of expected or unexpected changes by minimizing the number of flows moving to a new destination. “Consistent Hashing: Algorithmic Tradeoffs” offers more details on this subject.

  2. IPVS keeps a local connection table for known flows. When a change impacts only the third tier, existing flows will be correctly directed according to the connection table.

If we add or remove a L4 load-balancer, existing flows are not impacted because each load-balancer takes the same decision, as long as they see the same set of L7 load-balancers:

L4 load-balancing instability 1/3
Losing a L4 load-balancer has no impact on existing flows. Each arrow is an example of a flow. The dots are flow endpoints bound to the associated load-balancer. If they had moved to another load-balancer, the connection would have been lost.

If we add a L7 load-balancer, existing flows are not impacted either because only new connections will be scheduled to it. For existing connections, IPVS will look at its local connection table and continue to forward packets to the original destination. Similarly, if we remove a L7 load-balancer, only existing flows terminating at this load-balancer are impacted. Other existing connections will be forwarded correctly:

L4 load-balancing instability 2/3
Losing a L7 load-balancer only impacts the flows bound to it.

We need to have simultaneous changes on both levels to get a noticeable impact. For example, when adding both a L4 load-balancer and a L7 load-balancer, only connections moved to a L4 load-balancer without state and scheduled to the new load-balancer will be broken. Thanks to the consistent hashing algorithm, other connections will stay bound to the right L7 load-balancer. During a planned change, this disruption can be minimized by adding the new L4 load-balancers first, waiting a few minutes, then adding the new L7 load-balancers.

L4 load-balancing instability 3/3
Both a L4 load-balancer and a L7 load-balancer come back to life. The consistent hash algorithm ensures that only one fifth of the existing connections would be moved to the incoming L7 load-balancer. Some of them continue to be routed through their original L4 load-balancer, which mitigates the impact.

Additionally, IPVS correctly routes ICMP messages to the same L7 load-balancers as the associated connections. This notably ensures that path MTU discovery works and that there is no need for smart workarounds.

Tier 0: DNS load-balancing🔗

Optionally, you can add DNS load-balancing to the mix. This is useful if your setup spans multiple datacenters or multiple cloud regions, or if you want to break a large load-balancing cluster into smaller ones. It is not intended to replace the first tier as it doesn’t share the same characteristics: load-balancing is unfair (it is not flow-based) and recovery from a failure is slow.

Complete load-balancing solution
A complete load-balancing solution spanning two datacenters.

gdnsd is an authoritative-only DNS server with integrated healthchecking. It can serve zones from master files using the RFC 1035 zone format:

@ SOA ns1 ns1.example.org. 1 7200 1800 259200 900
@ NS ns1.example.com.
@ NS ns1.example.net.
@ MX 10 smtp

@     60 DYNA multifo!web
www   60 DYNA multifo!web
smtp     A    198.51.100.99

The special RR type DYNA will return A and AAAA records after querying the specified plugin. Here, the multifo plugin implements an all-active failover of monitored addresses:

service_types => {
  web => {
    plugin => http_status
    url_path => /healthcheck
    down_thresh => 5
    interval => 5
  }
  ext => {
    plugin => extfile
    file => /etc/lb/ext
    def_down => false
  }
}

plugins => {
  multifo => {
    web => {
      service_types => [ ext, web ]
      addrs_v4 => [ 198.51.100.1, 198.51.100.2 ]
      addrs_v6 => [ 2001:db8::198.51.100.1, 2001:db8::198.51.100.2 ]
    }
  }
}

In nominal state, an A request will be answered with both 198.51.100.1 and 198.51.100.2. A healthcheck failure will update the returned set accordingly. It is also possible to administratively remove an entry by modifying the /etc/lb/ext file. For example, with the following content, 198.51.100.2 will not be advertised anymore:

198.51.100.1 => UP
198.51.100.2 => DOWN
2001:db8::c633:6401 => UP
2001:db8::c633:6402 => UP

You can find all the configuration files and the setup of each tier in the GitHub repository. If you want to replicate this setup at a smaller scale, it is possible to collapse the second and the third tiers by using either localnode or network namespaces. Even if you don’t need its fancy load-balancing services, you should keep the last tier: while backend servers come and go, the L7 load-balancers bring stability, which translates to resiliency.


  1. In this article, “backend servers” are the servers behind the load-balancing layer. To avoid confusion, we will not use the term “frontend.” ↩︎

  2. A good summary of the paper is available from Adrian Colyer. From the same author, you may also have a look at the summary for “Stateless datacenter load-balancing with Beamer.” ↩︎

  3. If you feel this solution is fragile, feel free to develop your own agent. It could coordinate with a key-value store to determine the wanted state of the server. It is possible to centralize the agent in a single location, but you may get a chicken-and-egg problem to ensure its availability. ↩︎

  4. A flow is usually determined by the source and destination IP and the L4 protocol. Alternatively, the source and destination port can also be used. The router hashes this information to choose the destination. For Linux, you may find more information on this topic in “Celebrating ECMP in Linux.” ↩︎

  5. On Linux, it can be implemented by using Netfilter for load-balancing and conntrackd to synchronize state. IPVS only provides active/backup synchronization. ↩︎

  6. The backport is not strictly equivalent to its original version. Be sure to check the README file to understand the differences. Briefly, in Keepalived configuration, you should:

    • not use inhibit_on_failure
    • use sh-port
    • not use sh-fallback

    ↩︎

  7. At least 1520 for IPv4 and 1540 for IPv6. ↩︎

  8. As is, this configuration is insecure. You need to ensure only the L4 load-balancers will be able to send IPIP traffic. ↩︎

Planet DebianJoachim Breitner: The diameter of German+English

Languages never map directly onto each other. The English word fresh can mean frisch or frech, but frisch can also be cool. Jumping from one word to another like this yields entertaining sequences that take you to completely different things. Here is one I came up with:

frech – fresh – frisch – cool – abweisend – dismissive – wegwerfend – trashing – verhauend – banging – Geklopfe – knocking – …

And I could go on … but how far? So here is a little experiment I ran:

  1. I obtained a German-English dictionary. Conveniently, after registration, you can get dict.cc’s translation file, which is simply a text file with three columns: German, English, Word form.

  2. I wrote a program that takes these words and first canonicalizes them a bit: Removing attributes like [ugs.] [regional], {f}, the to in front of verbs and other embellishment.

  3. I created the undirected, bipartite graph of all these words. This is a pretty big graph – ~750k words in each language, a million edges. A path in this graph is precisely a sequence like the one above.

  4. In this graph, I tried to find a diameter. The diameter of a graph is the longest shortest path, i.e. the greatest distance between any two nodes that can be connected at all (a rough sketch of steps 3 and 4 follows below).
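
As referenced in step 4, a rough Python sketch of the graph construction and the diameter search could look like this. This is not the author's actual program: the file name, the column layout and the brute-force BFS are assumptions for illustration, and the canonicalization from step 2 is omitted.

#!/usr/bin/env python3
# Sketch of steps 3 and 4: build the bipartite translation graph from a
# dict.cc-style tab-separated dump and search for the longest shortest path.
from collections import defaultdict, deque

def load_graph(path="dictcc.tsv"):
    graph = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip("\n").split("\t")
            if len(parts) < 2 or line.startswith("#"):
                continue
            de, en = parts[0], parts[1]
            graph[("de", de)].add(("en", en))
            graph[("en", en)].add(("de", de))
    return graph

def bfs_farthest(graph, start):
    # Breadth-first search; returns the farthest node and its distance.
    dist = {start: 0}
    queue = deque([start])
    far = start
    while queue:
        node = queue.popleft()
        for neighbour in graph[node]:
            if neighbour not in dist:
                dist[neighbour] = dist[node] + 1
                queue.append(neighbour)
                if dist[neighbour] > dist[far]:
                    far = neighbour
    return far, dist[far]

def diameter(graph):
    # Exact but slow: one BFS per node. Sampling start nodes gives a
    # lower bound much faster on a graph with ~1.5M nodes.
    best = (0, None, None)
    for start in graph:
        far, d = bfs_farthest(graph, start)
        if d > best[0]:
            best = (d, start, far)
    return best

if __name__ == "__main__":
    print(diameter(load_graph()))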

Because the graph is big (and my code maybe not fully optimized), it ran a few hours, but here it is: The English expression be annoyed by sb. and the German noun Icterus are related by 55 translations. Here is the full list:

  • be annoyed by sb.
  • durch jdn. verärgert sein
  • be vexed with sb.
  • auf jdn. böse sein
  • be angry with sb.
  • jdm. böse sein
  • have a grudge against sb.
  • jdm. grollen
  • bear sb. a grudge
  • jdm. etw. nachtragen
  • hold sth. against sb.
  • jdm. etw. anlasten
  • charge sb. with sth.
  • jdn. mit etw. [Dat.] betrauen
  • entrust sb. with sth.
  • jdm. etw. anvertrauen
  • entrust sth. to sb.
  • jdm. etw. befehlen
  • tell sb. to do sth.
  • jdn. etw. heißen
  • call sb. names
  • jdn. beschimpfen
  • abuse sb.
  • jdn. traktieren
  • pester sb.
  • jdn. belästigen
  • accost sb.
  • jdn. ansprechen
  • address oneself to sb.
  • sich an jdn. wenden
  • approach
  • erreichen
  • hit
  • Treffer
  • direct hit
  • Volltreffer
  • bullseye
  • Hahnenfuß-ähnlicher Wassernabel
  • pennywort
  • Mauer-Zimbelkraut
  • Aaron's beard
  • Großkelchiges Johanniskraut
  • Jerusalem star
  • Austernpflanze
  • goatsbeard
  • Geißbart
  • goatee
  • Ziegenbart
  • buckhorn plantain
  • Breitwegerich / Breit-Wegerich
  • birdseed
  • Acker-Senf / Ackersenf
  • yellows
  • Gelbsucht
  • icterus
  • Icterus

Pretty neat!

So what next?

I could try to obtain an even longer chain by forgetting whether a word is English or German (and lower-casing everything), thus allowing wild jumps like hat – hut – hütte – lodge.

Or write a tool where you can enter two arbitrary words and it finds such a path between them, if there exists one. Unfortunately, it seems that the terms of the dict.cc data dump would not allow me to create such a tool as a web site (but maybe I can ask).

Or I could throw in additional languages!

What would you do?

,

Planet DebianJonathan McDowell: Home Automation: Graphing MQTT sensor data

So I’ve set up an MQTT broker and I’m feeding it temperature data. How do I actually make use of this data? Turns out collectd has an MQTT plugin, so I went about setting it up to record temperature over time.

The first problem was that although the plugin supports MQTT/TLS, it didn’t support it for subscriptions until 5.8, so I had to backport the fix to the 5.7.1 packages my main collectd host is running.

The other problem is that collectd is picky about the format it accepts for incoming data. The topic name should be of the format <host>/<plugin>-<plugin_instance>/<type>-<type_instance> and the data is <unixtime>:<value>. I modified my MQTT temperature reporter to publish to collectd/mqtt-host/mqtt/temperature-study, changed the publish line to include the timestamp:

publish.single(pub_topic, str(time.time()) + ':' + str(temp),
            hostname=Broker, port=8883,
            auth=auth, tls={})

and added a new collectd user to the Mosquitto configuration:

mosquitto_passwd -b /etc/mosquitto/mosquitto.users collectd collectdpass

And granted it read-only access to the collectd/ prefix via /etc/mosquitto/mosquitto.acl:

user collectd
topic read collectd/#

(I also created an mqtt-temp user with write access to that prefix for the Python script to connect to.)

Then, on the collectd host, I created /etc/collectd/collectd.conf.d/mqtt.conf containing:

LoadPlugin mqtt

<Plugin "mqtt">
        <Subscribe "ha">
                Host "mqtt-host"
                Port "8883"
                User "collectd"
                Password "collectdpass"
                CACert "/etc/ssl/certs/ca-certificates.crt"
                Topic "collectd/#"
        </Subscribe>
</Plugin>

I had some initial problems when I tried setting CACert to the Let’s Encrypt certificate; it actually wants to point to the “DST Root CA X3” certificate that signs that. Or using the full set of installed root certificates as I’ve done works too. Of course the errors you get back are just of the form:

collectd[8853]: mqtt plugin: mosquitto_loop failed: A TLS error occurred.

which is far from helpful. Once that was sorted collectd started happily receiving data via MQTT and producing graphs for me:

Study temperature

This is a pretty long-winded way of ending up with some temperature graphs - I could have just graphed the temperature sensor using collectd on the Pi to send it to the monitoring host, but it has allowed a simple MQTT broker, publisher + subscriber setup with TLS and authentication to be constructed and confirmed as working.

Planet DebianEddy Petrișor: rust for cortex-m7 baremetal

This is a reminder for myself: if you want to install rust for a bare-metal Cortex-M7 target, this seems to be a tier 3 platform:

https://forge.rust-lang.org/platform-support.html

Highlighting the relevant part:

Target                  std  rustc  cargo  notes
...
msp430-none-elf         *                  16-bit MSP430 microcontrollers
sparc64-unknown-netbsd                     NetBSD/sparc64
thumbv6m-none-eabi      *                  Bare Cortex-M0, M0+, M1
thumbv7em-none-eabi     *                  Bare Cortex-M4, M7
thumbv7em-none-eabihf   *                  Bare Cortex-M4F, M7F, FPU, hardfloat
thumbv7m-none-eabi      *                  Bare Cortex-M3
...
x86_64-unknown-openbsd                     64-bit OpenBSD

In order to enable the relevant support, use the nightly build and add the relevant target:
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
If not using nightly, switch to that:

eddy@feodora:~/usr/src/rust-uc$ rustup default nightly-x86_64-unknown-linux-gnu
info: using existing install for 'nightly-x86_64-unknown-linux-gnu'
info: default toolchain set to 'nightly-x86_64-unknown-linux-gnu'

  nightly-x86_64-unknown-linux-gnu unchanged - rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Add the needed target:
eddy@feodora:~/usr/src/rust-uc$ rustup target add thumbv7em-none-eabi
info: downloading component 'rust-std' for 'thumbv7em-none-eabi'
  5.4 MiB /   5.4 MiB (100 %)   5.1 MiB/s ETA:   0 s               
info: installing component 'rust-std' for 'thumbv7em-none-eabi'
eddy@feodora:~/usr/src/rust-uc$ rustup show
Default host: x86_64-unknown-linux-gnu

installed toolchains
--------------------

stable-x86_64-unknown-linux-gnu
nightly-x86_64-unknown-linux-gnu (default)

installed targets for active toolchain
--------------------------------------

thumbv7em-none-eabi
x86_64-unknown-linux-gnu

active toolchain
----------------

nightly-x86_64-unknown-linux-gnu (default)
rustc 1.28.0-nightly (cb20f68d0 2018-05-21)
Then compile with --target.
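For example, a no_std crate set up for embedded use can then be built for this target with something like:

cargo build --target thumbv7em-none-eabi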

Cory DoctorowWhere to find me at Phoenix Comics Fest this week

I’m heading to Phoenix Comics Fest tomorrow (going straight to the airport from my daughter’s elementary school graduation) (!), and I’ve got a busy schedule so I thought I’d produce a comprehensive list of the places you can find me in Phoenix:


Wednesday, May 23: Elevenageddon at Poisoned Pen books, 4014 N Goldwater Blvd, Scottsdale, AZ 85251, 7-8PM (“A Multi-Author Sci-Fi Event”)

Thursday, May 24:

Transhumans and Transhumanism in Fiction, North 126AB, with Emily Devenport and Sylvain Neuvel, 12PM-1PM

Prophets of Sci-Fi, North 125AB, with Emily Devenport, Sylvain Neuvel and John Scalzi, 3PM-4PM

Tor Authors Signing, Exhibitor Hall Author Signing area, 4:30PM-5:30PM

Building a Franken-Book, North 126C, with Bob Beard, Joey Eschrich and Ed Finn


Friday, May 25:

Two Truths and a Lie, North 122ABC, with Myke Cole, Emily Devenport, K Arsenault Rivera and John Scalzi, 10:30AM-11:30AM

Solo Presentation, North 122ABC, 1:30PM-2:30PM

Signing, Exhibitor Hall Author Signing Area, 3PM-4PM

Saturday, May 26:

Cory Doctorow & John Scalzi in Conversation about Politics in Sci Fi and Fantasy, North 125AB, 12PM-1PM

Signing, North 124AB, 1:15PM-2:15PM

Rondam RamblingsA quantum mechanics puzzle, part drei

[This post is the third part of a series.  You should read parts one and two before reading this or it won't make any sense.] So we have two more cases to consider: Case 3: we pulse the laser with very short pulses, emitting only one photon at a time.  This is actually not possible with a laser, but it is possible with something like this single-photon-emitting light source (which was actually

Krebs on SecurityMobile Giants: Please Don’t Share the Where

Your mobile phone is giving away your approximate location all day long. This isn’t exactly a secret: It has to share this data with your mobile provider constantly to provide better call quality and to route any emergency 911 calls straight to your location. But now, the major mobile providers in the United States — AT&T, Sprint, T-Mobile and Verizon — are selling this location information to third party companies — in real time — without your consent or a court order, and with apparently zero accountability for how this data will be used, stored, shared or protected.

Think about what’s at stake in a world where anyone can track your location at any time and in real-time. Right now, to be free of constant tracking the only thing you can do is remove the SIM card from your mobile device and never put it back in unless you want people to know where you are.

It may be tough to put a price on one’s location privacy, but here’s something of which you can be sure: The mobile carriers are selling data about where you are at any time, without your consent, to third-parties for probably far less than you might be willing to pay to secure it.

The problem is that as long as anyone but the phone companies and law enforcement agencies with a valid court order can access this data, it is always going to be at extremely high risk of being hacked, stolen and misused.

Consider just two recent examples. Earlier this month The New York Times reported that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks. Then it emerged that Securus had been hacked, its database of hundreds of law enforcement officer usernames and passwords plundered. We also found out that Securus’ data was ultimately obtained from a California-based location tracking firm LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

Xiao said it took him all of about 15 minutes to discover that LocationSmart’s lookup tool could be used to track the location of virtually any mobile phone user in the United States.

Securus seems equally clueless about protecting the priceless data to which it was entrusted by LocationSmart. Over the weekend KrebsOnSecurity discovered that someone — almost certainly a security professional employed by Securus — has been uploading dozens of emails, PDFs, password lists and other files to Virustotal.com — a service owned by Google that can be used to scan any submitted file against dozens of commercial antivirus tools.

Antivirus companies willingly participate in Virustotal because it gives them early access to new, potentially malicious files being spewed by cybercriminals online. Virustotal users can submit suspicious files of all kind; in return they’ll see whether any of the 60+ antivirus tools think the file is bad or benign.

One basic rule that all Virustotal users need to understand is that any file submitted to Virustotal is also available to customers who purchase access to the service’s file repository. Nevertheless, for the past two years someone at Securus has been submitting a great deal of information about the company’s operations to Virustotal, including copies of internal emails and PDFs about visitation policies at a number of local and state prisons and jails that made up much of Securus’ business.

Some of the many, many files uploaded to Virustotal.com over the years by someone at Securus Technologies.

One of the files, submitted on April 27, 2018, is titled “38k user pass microsemi.com – joomla_production.mic_users_blockedData.txt”.  This file includes the names and what appear to be hashed/scrambled passwords of some 38,000 accounts — supposedly taken from Microsemi, a company that’s been called the largest U.S. commercial supplier of military and aerospace semiconductor equipment.

Many of the usernames in that file do map back to names of current and former employees at Microsemi. KrebsOnSecurity shared a copy of the database with Microsemi, but has not yet received a reply. Securus also has not responded to requests for comment.

These files that someone at Securus apparently submitted regularly to Virustotal also provide something of an internal roadmap of Securus’ business dealings, revealing the names and login pages for several police departments and jails across the country, such as the Travis County Jail site’s Web page to access Securus’ data.

Check out the screen shot below. Notice that forgot password link there? Clicking that prompts the visitor to enter their username and to select a “security question” to answer. There are but three questions: “What is your pet’s name? What is your favorite color? And what town were you born in?” There don’t appear to be any limits on the number of times one can attempt to answer a secret question.

Choose wisely and you, too, could gain the ability to look up anyone’s precise mobile location.

Given such robust, state-of-the-art security, how long do you think it would take for someone to figure out how to reset the password for any authorized user at Securus’ Travis County Jail portal?

Yes, companies like Securus and LocationSmart have been careless with securing our prized location data, but why should they care if their paying customers are happy and the real-time data feeds from the mobile industry keep flowing?

No, the real blame for this sorry state of affairs comes down to AT&T, Sprint, T-Mobile and Verizon. T-Mobile was the only one of the four major providers that admitted providing Securus and LocationSmart with the ability to perform real-time location lookups on their customers. The other three carriers declined to confirm or deny that they did business with either company.

As noted in my story last Thursday, LocationSmart included the logos of the four carriers on their home page — in addition to those of several other major firms (that information is no longer available on the company’s site, but it can still be viewed by visiting this historic record of it over at the Internet Archive).

Now, don’t think for a second that these two tiny companies are the only ones with permission from the mobile giants to look up such sensitive information on demand. At a minimum, each one of these companies can in theory resell (or leak) this information and access to others. On 15 May, ZDNet reported that Securus was getting its data from the carriers by going through an intermediary: 3Cinteractive, which was getting it from LocationSmart.

However, it is interesting that the first insight we got that the mobile firms were being so promiscuous with our private location data came in the Times story about law enforcement officials seeking the ability to access any mobile device’s location data in real time.

All technologies are double-edged swords, which means that each can be used both for good and malicious ends. As much as police officers may wish to avoid the hassle and time constraints of having to get a warrant to determine the precise location of anyone they please whenever they wish, those same law enforcement officers should remember that this technology works both ways: It also can just as easily be abused by criminals to track the real-time movements of police and their families, informants, jurors, witnesses and even judges.

Consider the damage that organized crime syndicates — human traffickers, drug smugglers and money launderers — could inflict armed with an app that displays the precise location of every uniformed officer from within 300 ft to across the country. All because they just happened to know the cell phone number tied to each law enforcement official.

Maybe you have children or grandchildren who — like many of their peers these days — carry a mobile device at all times for safety and for quick communication with parents or guardians. Now imagine that anyone in the world has the instant capability to track where your kid is at any time of day. All they’d need is your kid’s digits.

Maybe you’re the current or former target of a stalker, jilted ex-spouse, or vengeful co-worker. Perhaps you perform sensitive work for the government. All of the above-mentioned parties and many more are put at heightened personal risk by having their real-time location data exposed to commercial third parties.

Some people might never sell their location data for any price: I suspect most of us would like this information always to be private unless and until we change the defaults (either in a binary “on/off” way or app-specific). On the other end of the spectrum there are probably plenty of people who don’t care one way or another provided that sharing their location information brings them some real or perceived financial or commercial benefit.

The point is, for many of us location privacy is priceless because, without it, almost everything else we’re doing to safeguard our privacy goes out the window.

And this sad reality will persist until the mobile providers state unequivocally that they will no longer sell or share customer location data without having received and validated some kind of legal obligation — such as a court-ordered subpoena.

But even that won’t be enough, because companies can and do change their policies all the time without warning or recourse (witness the current reality). It won’t be enough until lawmakers in this Congress step up and do their jobs — to prevent the mobile providers from selling our last remaining bastion of privacy in the free world to third party companies who simply can’t or won’t keep it secure.

The next post in this series will examine how we got here, and what Congress and federal regulators have done and might do to rectify the situation.

Update, May 23, 12:34 am ET: Securus responded with the following comment:

“Securus Technologies does not use the Google tool, Virustotal.com as part of our normal business practice for confidential information.  We use other antivirus tools that meet our high standards for security and reliability.  Importantly, Virustotal.com will associate a file with a URL or domain merely because the URL or domain is included in the file.  Our initial review concluded that the overwhelming majority of files that Virustotal.com associates with www.securustech.net were not uploaded by Securus.  Our review also showed that a few employees accessed the site in an abundance of caution to verify that outside emails were virus free.  As a result, many of the files indicated in your article were not directly uploaded by Securus and/or are not Securus documents. A vast majority of files merely mention our URL.  Our review also determined that the Microsemi file mentioned in your article is only associated with Securus because two Securus employee email addresses were included in the file, and not because Securus uploaded the file.”

“Because we take the security of information very seriously, we are continuing to look into this matter to ensure proper procedures are followed to protect company and client information. We will update you if we learn that procedures were not followed.”

CryptogramAnother Spectre-Like CPU Vulnerability

Google and Microsoft researchers have disclosed another Spectre-like CPU side-channel vulnerability, called "Speculative Store Bypass." Like the others, the fix will slow the CPU down.

The German tech site Heise reports that more are coming.

I'm not surprised. Writing about Spectre and Meltdown in January, I predicted that we'll be seeing a lot more of these sorts of vulnerabilities.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown.

I still predict that we'll be seeing lots more of these in the coming months and years, as we learn more about this class of vulnerabilities.

Cory DoctorowThe paperback of Walkaway is out today, along with reissues of all my adult novels in matching covers!

Today marks the release of the paperback of Walkaway, along with reissues of my five other adult novels, all in matching covers designed by the incredible Will Stahle (and if ebooks are your thing, check out my fair-trade ebook store, where you can get all my audiobooks and ebooks sold on the same terms as physical editions, with no DRM and no license agreements!).

Worse Than FailureRepresentative Line: Aggregation of Concatenation

A few years back, JSON crossed the “really good hammer” threshold. It has a good balance of being human readable, relatively compact, and simple to parse. It thus has become the go-to format for everything. “KoHHeKT” inherited a service which generates some JSON from an in-memory tree structure. This is exactly the kind of situation where JSON shines, and it would be trivial to employ one of the many JSON serialization libraries available for C# to generate JSON on demand.

Orrrrr… you could use LINQ aggregations, string formatting and trims…

private static string GetChildrenValue(int childrenCount)
{
        string result = Enumerable.Range(0, childrenCount).Aggregate("", (s, i) => s + $"\"{i}\",");
        return $"[{result.TrimEnd(',')}]";
}

Now, the concatenation and trims and all of that is bad. But I’m mostly stumped by what this method is supposed to accomplish. It’s called GetChildrenValue, but it doesn’t return a value- it returns an array of numbers from 0 to children count. Well, not an array, obviously- a string that can be parsed into an array. And they’re not actually numbers- they’re enclosed in quotes, so it’s actually text, not that any JavaScript client would care about the difference.

Why? How is this consumed? KoHHeKT couldn’t tell us, and we certainly aren’t going to figure it out from this block. But it is representative of the entire JSON-constructing library- aggregations and concatenations with minimal exception handling and no way to confirm that it outputs syntactically valid JSON, because nothing sanitizes its inputs.


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #160

Here’s what happened in the Reproducible Builds effort between Sunday May 13 and Saturday May 19 2018:

Packages reviewed and fixed, and bugs filed

In addition, build failure bugs were reported by Adrian Bunk (2) and Gilles Filippini (1).

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages.

reprotest development

reprotest is our tool to build software and check it for reproducibility.

  • kpcyrd:
  • Chris Lamb:
    • Update references to Alioth now that the repository has migrated to Salsa. (1, 2, 3)

jenkins.debian.net development

There were a number of changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Levente Polyak and Mattia Rizzolo & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Linux AustraliaOpenSTEM: Nellie Bly – investigative journalist extraordinaire!

May is the birth month of Elizabeth Cochrane Seaman, better known as “Nellie Bly“. Here at OpenSTEM, we have a great fondness for Nellie Bly – an intrepid 19th century journalist and explorer, who emulated Jules Verne’s fictional character, Phileas Fogg, in racing around the world in less than 80 days in 1889/1890. Not only […]

,

Planet DebianDima Kogan: More Vnlog demos

More demos of vnlog and feedgnuplot usage! This is pretty pointless, but should be a decent demo of the tools at least. This is a demo, not documentation; so for usage details consult the normal docs.

Each Wednesday night I join a group bike ride. This is an organized affair, and each week an email precedes the ride, very roughly describing the route. The two organizers alternate leading the ride each week, and consequently the emails alternate also. I was getting the feeling that some of the announcements show up in my mailbox more punctually than others, and after a recent 20-minutes-before-the-ride email, I decided this just had to be quantified.

The emails all go to a google-group email. The google-groups people are a wheel-reinventing bunch, so talking to the archive can't be done with normal tools (NNTP? mbox files? No?). A brief search revealed somebody's home-grown tool to programmatically grab the archive:

https://github.com/icy/google-group-crawler.git

The docs look funny, but are actually correct: you really do run the script to download stuff and generate another script; and then run that script to download the rest of the stuff.

Anyway, I used that tool to grab all the emails that are available. Then I wrote a quick/dirty script to parse out the data I care about and dump everything into a vnlog:

#!/usr/bin/perl
use strict;
use warnings;

use feature ':5.10';

my %daysofweek = ('Mon' => 0,
                  'Tue' => 1,
                  'Wed' => 2,
                  'Thu' => 3,
                  'Fri' => 4,
                  'Sat' => 5,
                  'Sun' => 6);
my %months = ('Jan' => 1,
              'Feb' => 2,
              'Mar' => 3,
              'Apr' => 4,
              'May' => 5,
              'Jun' => 6,
              'Jul' => 7,
              'Aug' => 8,
              'Sep' => 9,
              'Oct' => 10,
              'Nov' => 11,
              'Dec' => 12);


say '# path ridenum who whenwedh date wordcount subject';

for my $path (<mbox/m.*>)
{
    my ($ridenum,$who,$date,$whenwedh,$subject);

    my $wordcount = 0;
    my $inbody    = undef;

    open FD, '<', $path;
    while(<FD>)
    {
        if( !$inbody && /^From: *(.*?)\s*$/ )
        {
            $who = $1;
            if(   $who =~ /sean/i)   { $who = 'sean'; }
            elsif($who =~ /nathan/i) { $who = 'nathan'; }
            else                     { $who = 'other'; }
        }
        if( !$inbody &&
            /^Subject: \s*
             (?:=\?UTF-8\?Q\?)?
             (.*?) \s* $/x )
        {
            $subject = $1;
            ($ridenum) = $subject =~ /^(?: \# | (?:=\?ISO-8859-1\?Q\?=23) )
                                      ([0-9]+)/x;
            $subject =~ s/[\s#]//g;
        }
        if( !$inbody && /^Date: *(.*?)\s*$/ )
        {
            $date = $1;

            my ($zone) = $date =~ / (\(.+\) | -0700 | -0800) /x;
            if( !defined $zone)
            {
                die "No timezone in: '$date'";
            }
            if( $zone !~ /PST|PDT|-0700|-0800/)
            {
                die "Unexpected timezone: '$zone'";
            }

            my ($Dayofweek,$D,$M,$Y,$h,$m,$s) = $date =~ /^(...),? +(\d+) +([a-zA-Z]+) +(20\d\d) +(\d\d):(\d\d):(\d\d)/;
            if( !(defined $Dayofweek && defined $h && defined $m && defined $s) )
            {
                die "Unparseable date '$date'";
            }
            my $dayofweek = $daysofweek{$Dayofweek} // die "Unparseable day-of-week '$Dayofweek'";

            my $t     = $dayofweek*24 + $h + ($m + $s/60)/60;
            my $twed0 = 2*24; # start of wed
            $M = $months{$M} // die "Unknown month '$M'. Line: '$_'";
            $date = sprintf('%04d%02d%02d', $Y,$M,$D);

            $whenwedh = $t - $twed0;
        }

        if( !$inbody && /^[\r\n]*$/ )
        {
            $inbody = 1;
        }
        if( $inbody )
        {
            if( /------=_Part/ || /Content-Type:/)
            {
                last if $wordcount > 0;
                $inbody = undef;
                next;
            }
            my @words = /(\w+)/g;
            $wordcount += @words;
        }
    }
    close FD;

    $who      //= '-';
    $subject  //= '-';
    $ridenum  //= '-';
    $date     //= '-';
    $whenwedh //= '-';

    say "$path $ridenum $who $whenwedh $date $wordcount $subject";
}

The script isn't important, and the resulting data is here. Now that I have a log on disk, I can do stuff with it. The first few lines of the log look like this:

dima@scrawny:~/projects/passagemining/google-group-crawler/the-passage-announcements$ < rides.vnl head

# path ridenum who whenwedh date wordcount subject
mbox/m.-EF1u5bbw5A.SywitKQ3y1sJ 265 sean 1.40722222222222 20140903 190 265-Coasting
mbox/m.-JdiiTIvyYs.Jgy_rCiwAGAJ 151 sean 18.6441666666667 20120606 199 151-FinalsWeek
mbox/m.-l6z9-1WC78.SgP3ytLsDAAJ 312 nathan 19.5394444444444 20150812 189 312-SpaceFilling
mbox/m.-vfVuoUxJ0w.FwpRRWC7EgAJ 367 nathan 18.1766666666667 20160831 164 367-Dislocation
mbox/m.-YHTEvmbIyU.HHWjbs_xpesJ 110 sean 10.9108333333333 20110810 407 110-SouslesParcs,laPoubelle
mbox/m.0__GMaUD_O8.Pjupq0AwBAAJ 404 sean 13.5255555555556 20170524 560 404-Bumped
mbox/m.0CT9ybx3uIU.sdZGwo8rSQUJ 53 sean -23.1402777777778 20100629 223 53WeInventedtheRemix
mbox/m.0FtQxCkxVHA.AjhGJ7mgAwAJ 413 nathan 20.4155555555556 20170726 178 413-GradientAssent
mbox/m.0haCNC_N2fY.bJ-93LQSFQAJ 337 nathan 57.3708333333333 20160205 479 337-TheCronutRide

I can align the columns to make it more human-readable:

dima@scrawny:~/projects/passagemining/google-group-crawler/the-passage-announcements$ < rides.vnl head | vnl-align

#             path              ridenum   who       whenwedh        date   wordcount           subject          
mbox/m.-EF1u5bbw5A.SywitKQ3y1sJ 265     sean     1.40722222222222 20140903 190       265-Coasting               
mbox/m.-JdiiTIvyYs.Jgy_rCiwAGAJ 151     sean    18.6441666666667  20120606 199       151-FinalsWeek             
mbox/m.-l6z9-1WC78.SgP3ytLsDAAJ 312     nathan  19.5394444444444  20150812 189       312-SpaceFilling           
mbox/m.-vfVuoUxJ0w.FwpRRWC7EgAJ 367     nathan  18.1766666666667  20160831 164       367-Dislocation            
mbox/m.-YHTEvmbIyU.HHWjbs_xpesJ 110     sean    10.9108333333333  20110810 407       110-SouslesParcs,laPoubelle
mbox/m.0__GMaUD_O8.Pjupq0AwBAAJ 404     sean    13.5255555555556  20170524 560       404-Bumped                 
mbox/m.0CT9ybx3uIU.sdZGwo8rSQUJ  53     sean   -23.1402777777778  20100629 223       53WeInventedtheRemix       
mbox/m.0FtQxCkxVHA.AjhGJ7mgAwAJ 413     nathan  20.4155555555556  20170726 178       413-GradientAssent         
mbox/m.0haCNC_N2fY.bJ-93LQSFQAJ 337     nathan  57.3708333333333  20160205 479       337-TheCronutRide          
dima@scrawny:~/projects/passagemining/google-group-crawler/the-passage-announcements$

If memory serves, we're at around ride 450 right now. Is that right?

$ < rides.vnl vnl-sort -nr -k ridenum | head -n2 | vnl-filter -p ridenum

# ridenum
452

Cool. This command was longer than it needed to be in order to produce nicer output. If I was exploring the dataset, I'd save keystrokes and do this instead:

$ < rides.vnl vnl-sort -nrk ridenum | head

# path ridenum who whenwedh date wordcount subject
mbox/m.7TnUbcShAz8.67KgwBGhAAAJ 452 nathan 20.7694444444444 20180502 175 452-CastingtoType
mbox/m.ej7Oz6sDzgc.bEnN04VEAQAJ 451 sean 0.780833333333334 20180425 258 451-Recovery
mbox/m.LWfydBtpd_s.35SgEJEqAgAJ 450 nathan 67.9608333333333 20180420 659 450-AnotherGreenWorld
mbox/m.3mv-Cm0EzkM.oAm3MkNYCAAJ 449 sean 17.5875 20180411 290 449-DoYouHaveRockNRoll?
mbox/m.AEV4ukSjO5U.IPlUabfEBgAJ 448 nathan 20.6138888888889 20180404 175 448-TheThirdString
mbox/m.bYTM6kgxtJs.5iHcVQKPBAAJ 447 sean 15.8355555555556 20180328 196 447-PassParticiple
mbox/m.tHMqRWp9o_Y.FQ8hFvnqCQAJ 446 nathan 20.5213888888889 20180321 139 446-Chiaroscuro
mbox/m.jr0SBsDBzgk.UHrbCv4VBQAJ 445 sean 15.3280555555556 20180314 111 445-85%
mbox/m.K2Yg_FRXuAo.SyViTwXXAQAJ 444 nathan 19.6180555555556 20180307 171 444-BackintheLoop

OK, how far back does the archive go? I do the same thing as before, but sort in the opposite order to find the earliest rides

$ < rides.vnl vnl-sort -n -k ridenum | head -n2 | vnl-filter -p ridenum

# ridenum

Nothing. That's odd. Let me look at whole records, and at more than just the first two lines

$ < rides.vnl vnl-sort -n -k ridenum | head | vnl-align

#             path              ridenum   who       whenwedh       date   wordcount                       subject                      
mbox/m.2gywN9pxMI4.40UBrDjnAwAJ -       nathan  17.6572222222222 20171206  95       Noridetonight;daytimeridethisSaturday!             
mbox/m.49fZsvZac_U.a0CazPinCAAJ -       sean   -34.495           20170320 463       Extraridethisweekend+Passage400save-the-date       
mbox/m.5gJd21W24vo.ICDEHrnQJvcJ -       nathan  12.1063888888889 20130619 172       NoPassageRideTonight;GalleryOpeningTomorrowNight   
mbox/m.7qEbhBWSN1U.Cx6cxYTECgAJ -       nathan  17.7891666666667 20180418 134       Noridetonight;Passage450onSaturday!                
mbox/m.DVssP4Th__4.jXzzu9clZLQJ -       sean    20.9138888888889 20101222 209       TheWrathofTlaloc                                   
mbox/m.E6etBSqEQIc.C35-SkBllHoJ -       sean    50.7575          20131220 292       Noridenextweek;seeyounextyear                      
mbox/m.GyJ16HiK8Ds.z6yNC4W5SeUJ -       sean   -11.5666666666667 20120529 228       NoRideThisWeek!...AIDS/Lifecycle...ThirdAnniversary
mbox/m.H3QGBvjeTfM.CS-xRn1WDQAJ -       sean    17.0180555555555 20171227 257       Noridetonight;nextride1/6                          
mbox/m.K2P6D_BGfYU.ve6a_8l6AAAJ -       sean    37.8166666666667 20170223 150       RemainingPassageRouteMapShirtsAvailableforPurchase

Aha. A bunch of emails aren't announcing a ride, but are announcing that there's no ride that week. Let's ignore those

$ < rides.vnl vnl-filter -p +ridenum | vnl-sort -n -k ridenum | head -n2

# ridenum
52

Bam. So we have emails going back to ride 52. Good enough. All right. I'm aiming to create a time histogram for Sean's emails and another for Nathan's emails. What about emails that came from neither one? In theory there shouldn't be any of those, but there could be a parsing error, or who knows what.

$ < rides.vnl vnl-filter 'who == "other"'

# path ridenum who whenwedh date wordcount subject
mbox/m.A-I0_i9-YOs.QRX1P99_uiUJ 65 other 65.1413888888889 20100917 330 65-LosAngelesRidesItself+specialscreening
mbox/m.pHpzsjH7H68.O7CP_v6bcEoJ 67 other 16.5663888888889 20101006 50 67Sortition,NotSaturation

OK. Exactly 2 emails out of hundreds. That's not bad, and I'll just ignore those. Out of curiosity, what happened? Is this a parsing error?

$ grep From: $(< rides.vnl vnl-filter 'who == "other"' --eval '{print path}')

mbox/m.A-I0_i9-YOs.QRX1P99_uiUJ:From: The Passage Announcements <the-passage-...@googlegroups.com>
mbox/m.pHpzsjH7H68.O7CP_v6bcEoJ:From: The Passage Announcements <the-passage-...@googlegroups.com>

So on rides 65 and 67 "The Passage Announcements" emailed themselves. Oops. Since the ride leaders alternate, I can infer who actually sent these by looking at the few rides around this one:

$ < rides.vnl vnl-filter 'ridenum > 60 && ridenum < 70' -p ridenum,who | vnl-sort -n -k ridenum

# ridenum who
61 sean
62 nathan
63 sean
64 nathan
65 other
66 nathan
67 other
68 nathan
69 sean

That's pretty conclusive: clearly these emails came from Sean. I'm still going to ignore them, though.

The ride is on Wed evening, and the emails generally come in the day or two before then. Does my data set contain any data outside this reasonable range? Hopefully very little, just like the "other" author emails.

$ < rides.vnl vnl-filter --has ridenum -p whenwedh | feedgnuplot --histo 0 --binwidth 1 --xlabel 'Hour (on Wed)' --ylabel 'Email frequency'

frequency-all.svg

The ride starts at 21:00 on Wed, and we see a nice spike immediately before. The smaller cluster prior to that is the emails that go out the night before. There's a tiny number of stragglers going out the previous day (that I'm simply going to ignore). And there're a number of emails going out after Wed. These likely announce an occasional weekend ride that I will also ignore. But let's do check. How many are there?

$ < rides.vnl vnl-filter --has ridenum 'whenwedh > 22' | wc -l

16

Looking at these manually, most are indeed weekend rides, with a small number of actual extra-early announcements for Wed. I can parse the email text more fancily to pull those out, but that's really not worth my time.

OK. I'm now ready for the main thing.

$ < rides.vnl | 
    vnl-filter --has ridenum 'who != "other"' -p who,whenwedh |
    feedgnuplot --dataid --autolegend \
                --histo sean,nathan --binwidth 0.5 \
                --style sean   'with boxes fill transparent solid 0.3 border lt -1' \
                --style nathan 'with boxes fill transparent pattern 1 border lt -1' \
                --xmin -12 --xmax 24 \
                --xlabel "Time (hour)" --ylabel 'Email frequency' \
                --set 'xtics ("12\n(Tue)" -12,"16\n(Tue)" -8,"20\n(Tue)" -4,"0\n(Wed)" 0,"4\n(Wed)" 4,"8\n(Wed)" 8,"12\n(Wed)" 12,"16\n(Wed)" 16,"21\n(Wed)" 21,"0\n(Thu)" 24)' \
                --set 'arrow from 21, graph 0 to 21, graph 1 nohead lw 3 lc "red"' \
                --title "Passage email timing distribution"

frequency-zoomed.svg

This looks verbose, but most of the plotting command is there to make things look nice. When analyzing stuff, I'd omit most of that. Anyway, I can now see what I suspected: Nathan is a procrastinator! His emails almost always come in on Wed, usually an hour or two before the deadline. Sean's emails are bimodal: one set comes in on Wed afternoon, and another in the extreme early morning on Wed. Presumably he sleeps in-between.

We have more data, so we can make more pointless plots. For instance, what does the verbosity of the emails look like? Is one sender more verbose than another?

$ < rides.vnl vnl-sort -n -k ridenum |
  vnl-filter 'who != "other"' -p +ridenum,who,wordcount |
  feedgnuplot --lines --domain --dataid --autolegend \
              --xlabel 'Ride number' --ylabel 'Words per email'

verbosity_unfiltered.svg

$ < rides.vnl vnl-filter 'who != "other"' --has ridenum -p who,wordcount |
  feedgnuplot --dataid --autolegend \
              --histo sean,nathan --binwidth 20 \
              --style sean   'with boxes fill transparent solid 0.3 border lt -1' \
              --style nathan 'with boxes fill transparent pattern 1 border lt -1' \
              --xlabel "Words per email" --ylabel 'frequency' \
              --title "Passage verbosity distribution"

verbosity_histogram.svg

The time series doesn't obviously say anything, but from the histogram, it looks like Sean is a bit more verbose, maybe? What's the average?

$ < rides.vnl vnl-filter --eval 'ridenum != "-" { if(who == "sean")   { Ns++; Ws+=wordcount; }
                                                  if(who == "nathan") { Nn++; Wn+=wordcount; } }
                                 END { print "Mean verbosity sean,nathan: "Ws/Ns, Wn/Nn }'

Mean verbosity sean,nathan: 304.955 250.425

Indeed. Is the verbosity time-dependent? Is anybody getting more or less verbose over the years? The time-series plot above is pretty noisy, so it's not clear. Let's filter it to reduce the noise. We're getting into an area that's too complicated for these tools, and moving to something more substantial at this point would be warranted. But I'll do one more thing with these tools, and then stop. I can implement a half-assed filter by time-shifting the verbosity series, re-joining the shifted series, and computing the mean. I do this separately for the two email authors, and then re-combine the series. I could join these two, but simply catting the two data sets together is sufficient here.

$ < rides.vnl vnl-sort -n -k ridenum |
    vnl-filter 'who == "nathan"' --has ridenum |
    vnl-filter -p ridenum,idx=NR,wordcount > nathanrp0

$ < rides.vnl vnl-sort -n -k ridenum |
    vnl-filter 'who == "nathan"' --has ridenum |
    vnl-filter -p ridenum,idx=NR-1,wordcount > nathanrp-1

$ < rides.vnl vnl-sort -n -k ridenum |
    vnl-filter 'who == "nathan"' --has ridenum |
    vnl-filter -p ridenum,idx=NR+1,wordcount > nathanrp+1

$ ... same for Sean ...

$ cat <(vnl-join --vnl-suffix2 after --vnl-sort n -j idx
                 <(vnl-join --vnl-suffix2 before --vnl-sort n -j idx
                            nathanrp{0,-1})
                 nathanrp+1 |
        vnl-filter -p ridenum,who='"nathan"','wordcountfiltered=(wordcount+wordcountbefore+wordcountafter)/3') \
      <(vnl-join --vnl-suffix2 after --vnl-sort n -j idx
                 <(vnl-join --vnl-suffix2 before --vnl-sort n -j idx
                            seanrp{0,-1})
                 seanrp+1 |
        vnl-filter -p ridenum,who='"sean"','wordcountfiltered=(wordcount+wordcountbefore+wordcountafter)/3') |
  feedgnuplot --lines --domain --dataid --autolegend \
              --xlabel 'Ride number' --ylabel 'Words per email'

verbosity_filtered.svg

Whew. Clearly this was doable, but that's a one-liner that has clearly gotten out of hand, and pushing it further would be unwise. Looking at the data there isn't any obvious time dependence. But what you can clearly see is the extra verbiage around the round-number rides 100, 200, 300, 350, 400, etc. These were often a special weekend ride, with the email containing lots of extra instructions and such.

This was all clearly a waste of time, but as a demo of vnlog workflows, this was ok.

Planet DebianDaniel Pocock: OSCAL'18 Debian, Ham, SDR and GSoC activities

Over the weekend I've been in Tirana, Albania for OSCAL 2018.

Crowdfunding report

The crowdfunding campaign to buy hardware for the radio demo was successful. The gross sum received was GBP 110.00, there were Paypal fees of GBP 6.48 and the net amount after currency conversion was EUR 118.29. Here is a complete list of transaction IDs for transparency so you can see that if you donated, your contribution was included in the total I have reported in this blog. Thank you to everybody who made this a success.

The funds were used to purchase an Ultracell UCG45-12 sealed lead-acid battery from Tashi in Tirana, here is the receipt. After OSCAL, the battery is being used at a joint meeting of the Prishtina hackerspace and SHRAK, the amateur radio club of Kosovo on 24 May. The battery will remain in the region to support any members of the ham community who want to visit the hackerspaces and events.

Debian and Ham radio booth

Local volunteers from Albania and Kosovo helped run a Debian and ham radio/SDR booth on Saturday, 19 May.

The antenna was erected as a folded dipole with one end joined to the Tirana Pyramid and the other end attached to the marquee sheltering the booths. We operated on the twenty meter band using an RTL-SDR dongle and upconverter for reception and a Yaesu FT-857D for transmission. An MFJ-1708 RF Sense Switch was used for automatically switching between the SDR and transceiver on PTT and an MFJ-971 ATU for tuning the antenna.

I successfully made contact with 9A1D, a station in Croatia. Enkelena Haxhiu, one of our GSoC students, made contact with Z68AA in her own country, Kosovo.

Anybody hoping that Albania was a suitably remote place to hide from media coverage of the British royal wedding would have been disappointed as we tuned in to GR9RW from London and tried unsuccessfully to make contact with them. Communism and royalty mix like oil and water: if a deceased dictator was already feeling bruised about an antenna on his pyramid, he would probably enjoy water torture more than a radio transmission celebrating one of the world's most successful hereditary monarchies.

A versatile venue and the dictator's revenge

It isn't hard to imagine communist dictator Enver Hoxha turning in his grave at the thought of his pyramid being used for an antenna for communication that would have attracted severe punishment under his totalitarian regime. Perhaps Hoxha had imagined the possibility that people may gather freely in the streets: as the sun moved overhead, the glass facade above the entrance to the pyramid reflected the sun under the shelter of the marquees, giving everybody a tan, a low-key version of a solar death ray from a sci-fi movie. Must remember to wear sunscreen for my next showdown with a dictator.

The security guard stationed at the pyramid for the day was kept busy chasing away children and more than a few adults who kept arriving to climb the pyramid and slide down the side.

Meeting with Debian's Google Summer of Code students

Debian has three Google Summer of Code students in Kosovo this year. Two of them, Enkelena and Diellza, were able to attend OSCAL. Albania is one of the few countries they can visit easily and OSCAL deserves special commendation for the fact that it brings otherwise isolated citizens of Kosovo into contact with an increasingly large delegation of foreign visitors who come back year after year.

We had some brief discussions about how their projects are starting and things we can do together during my visit to Kosovo.

Workshops and talks

On Sunday, 20 May, I ran a workshop Introduction to Debian and a workshop on Free and open source accounting. At the end of the day Enkelena Haxhiu and I presented the final talk in the Pyramid, Death by a thousand chats, looking at how free software gives us a unique opportunity to disable a lot of unhealthy notifications by default.

CryptogramJapan's Directorate for Signals Intelligence

The Intercept has a long article on Japan's equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work -- even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister's inner circle, are kept in the dark about the directorate's activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK -- produced in collaboration with The Intercept -- reveals for the first time details about the inner workings of Japan's opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency's intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

Planet DebianDaniel Silverstone: Runtime typing

I have been wrestling with a problem for a little while now and thought I might send this out into the ether for others to comment upon. (Or, in other words, Dear Lazyweb…)

I am writing a system which collects data from embedded computers in my car (ECUs) over the CAN bus, using the on-board diagnostics port in the vehicle. This requires me to generate packets on the CAN bus, listen to responses, including managing flow control, and then interpret the resulting byte arrays.

I have sorted everything but the last little bit of that particular data pipeline. I have a prototype which can convert the byte arrays into "raw" values by interpreting them either as bitfields and producing booleans, or as anything from an unsigned 8 bit integer to a signed 32 bit integer in either endianness. Fortunately none of the fields I'd need to interpret are floats.

This is, however, pretty clunky and nasty. Since I asked around and a majority of people would prefer that I keep the software configurable at runtime rather than doing meta-programming to describe these fields, I need to develop a way to have the data produced by reading these byte arrays (or by processing results already interpreted out of the arrays) type-checked.

As an example, one field might be the voltage of the main breaker in the car. It's represented as a 16 bit big-endian unsigned field, in tenths of a volt. So the field must be divided by ten and then given the type "volts". Another field is the current passing through that main breaker. This is a 16 bit big-endian signed value measured in tenths of an amp, so must be interpreted as as such, divided by ten, and then given the type "amps". I intend for all values handled beyond the raw byte arrays themselves to simply be floats, so there'll be signedness available regardless.
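To make that concrete, here is a rough sketch (in Python, just to illustrate the decoding, not the actual prototype) of how those two example fields would be pulled out of a response buffer; the offsets match the example configuration further below, and raw_bmc stands in for the received byte array:

import struct

def u16_be(raw, offset):
    # 16 bit big-endian unsigned field
    return struct.unpack_from('>H', raw, offset)[0]

def i16_be(raw, offset):
    # 16 bit big-endian signed field
    return struct.unpack_from('>h', raw, offset)[0]

raw_bmc = bytes(16)                        # stand-in for a real response
main_voltage = u16_be(raw_bmc, 14) / 10    # tenths of a volt -> volts
main_current = i16_be(raw_bmc, 12) / 10    # tenths of an amp -> amps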

What I'd like, is to later have a "computed" value, let's call it "power flow", which is the voltage multiplied by the current. Naturally this would need to be given the type 'watts'. What I'd dearly love is to build into my program the understanding that volts times amps equals watts, and then have the reader of the runtime configuration type-check the function for "power flow".

I'm working on this in Rust, though for now the language is less important than the algorithms involved in doing this (unless you know of a Rust library which will help me along). I'd dearly love it if someone out there could help me to understand the right way to handle such expression type checking without having to build up a massively complex type system.

Currently I am considering things (expressed for now in yaml) along the lines of:

- name: main_voltage
  type: volts
  expr: u16_be(raw_bmc, 14) / 10
- name: main_current
  type: amps
  expr: i16_be(raw_bmc, 12) / 10
- name: power_flow
  type: watts
  expr: main_voltage * main_current

What I'd like is for each expression to be type-checked. I'm happy for untyped scalars to end up auto-labelled (so the u16_be() function would return an untyped number which then ends up marked as volts since 10 is also untyped). However when power_flow is typechecked, it should be able to work out that the type of the expression is volts * amps which should then typecheck against watts and be accepted. Since there's also consideration needed for times, distances, booleans, etc. this is not a completely trivial thing to manage. I will know the set of valid types up-front though, so there's that at least.
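As a rough illustration of the algebra involved (a Python sketch, purely to show one possible approach rather than a proposed implementation): each declared type can be a map from base unit to exponent, multiplication adds exponents, division subtracts them, and derived units such as watts are declared as products up front.

# Illustrative only: a tiny unit algebra where a "type" is a dict mapping
# base units to integer exponents.
UNITS = {
    'volts': {'volts': 1},
    'amps':  {'amps': 1},
    'watts': {'volts': 1, 'amps': 1},   # declared up front: watts = volts * amps
}

def combine(a, b, sign=1):
    # Multiply (sign=+1) or divide (sign=-1) two unit maps by adding/subtracting exponents.
    out = dict(a)
    for unit, exp in b.items():
        out[unit] = out.get(unit, 0) + sign * exp
        if out[unit] == 0:
            del out[unit]
    return out

def typecheck(declared, inferred):
    if UNITS[declared] != inferred:
        raise TypeError('expected %s, got %r' % (declared, inferred))

# main_voltage * main_current: untyped scalars (like the literal 10) contribute
# an empty map and therefore drop out of the algebra.
typecheck('watts', combine(UNITS['volts'], UNITS['amps']))

An expression evaluator would then just thread such maps through the parse tree of each expr and compare the result against the declared type.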

If you have any ideas, ping me on IRC or perhaps blog a response and then drop me an email to let me know about it.

Thanks in advance.

Planet DebianSune Vuorela: Managing cooking recipes

I like to cook. And sometimes store my recipes. Over the years I have tried KRecipes, kept my recipes in BasKet notes, in KJots notes, in more or less random word processor documents.

I liked the free-form entering of recipes in various notes applications and word processor documents, but I lacked some kind of indexing of them. What I wanted was free-ish text for writing recipes, and something that could help me find them by the tags I give them. By Title. By how I organize them. And maybe by Ingredient if I don’t know how to get rid of the soon-to-be-bad things in my refrigerator.

Given I’m a software developer, maybe I should try to scratch my own itch. And I did in the last month and a half, during some evenings. This is also where my latest Qt and modern C++ blog posts come from.

The central bit is basically a markdown viewer, and the file format is some semi structured markdown in one file per recipe. Structured in the file system however you like it.

There is a recipes index which simply is a file system view with pretty titles on top.

There is a way to insert tags into recipes.

I can find them by title.

And I can find recipes by ingredients.

Given it is plain text, it can easily be synced using Git or NextCloud or whatever solution you want for that.

You can give it a spin if you want. It lives here https://cgit.kde.org/scratch/sune/kookbook.git/. There is a blueprint for a windows installer here: https://phabricator.kde.org/D12828

There is a markdown file describing the specifics of the file format. It is not declared 100% stable yet, but I need good reasons to break stuff.

My recipe collection is in my native language Danish, so I’m not sure sharing it for demo purposes makes too much sense.

Worse Than FailureThe New Guy (Part I)

After working mind-numbing warehouse jobs for several years, Jesse was ready for a fresh start in Information Technology. The year 2015 brought him a newly-minted Computer and Networking Systems degree from Totally Legit Technical Institute. It would surely help him find gainful employment, all he had to do was find the right opportunity.

Seeking the right opportunity soon turned into any opportunity. Jesse came across a posting for an IT Systems Administrator that piqued his interest, but the requirements and responsibilities left a lot to be desired. They sought someone with C++ and Microsoft Office experience who would perform "General IT Admin Work" and "Other Duties as assigned". None of those things seemed to fit together, but he applied anyway.

During the interview, it became clear that Jesse and this small company were essentially in the same boat. While he was seeking any IT employment, they were seeking any IT Systems admin. Their lone admin recently departed unexpectedly and barely left any documentation of what he actually did. Despite several red flags about the position, he decided to accept anyway. Jesse was assured of little oversight and freedom to do things his way - an extreme rarity for a young IT professional.

Jesse got to work on his first day determined to map out the minefield he was walking into. The notepad with all the admin passwords his predecessor left behind was useful for logging in to things. Over the next few days, he prodded through the network topology to uncover all the horrors that lie within. Among them:

  • The front-end of their most-used internal application was using Access 97 that interfaced with a SQL Server 2008 machine
  • The desktop computers were all using Windows XP (Half of them upgraded from NT 4.0)
  • The main file server and domain controller were still running on NT 4.0
  • There were two other mystery servers that didn't seem to perform any discernible function. Jesse confirmed this by unplugging them and leaving them off

While sorting through the tangled mess he inherited, Jesse got a high priority email from Ralph, the ancient contracted Networking Admin whom he hadn't yet had the pleasure of meeting. "U need to fix the website. FTP not working." While Ralph wasn't one for details, Jesse did learn something from him - they had a website, it used FTP for something, and it was on him to fix it.

Jesse scanned the magic password notepad and came across something called "Website admin console". He decided to give that a shot, only to be told the password was expired and needed to be reset. Unfortunately the reset email was sent to his predecessor's deactivated account. He replied to Ralph telling him he wasn't able to get to the admin console to fix anything.

All that he got in return was a ticket submitted by a customer explaining the problem and the IP address of the FTP server. It seemed they were expecting to be able to fetch PDF reports from an FTP location and were no longer able to. He went to the FTP server and didn't find anything out of the ordinary, other than the fact that it should really be using SFTP. Despite the lack of security, something was still blocking the client from accessing it.

Jesse suddenly had an idea born of inexperience for how to fix the problem. When he was having connectivity issues on his home WiFi network, all he had to do was reboot the router and it would work! That same logic could surely apply here. After tracking down the router, he found the outlet wasn't easily accessible. So he decided to hit the (factory) Reset button on the back.

Upon returning to his desk, he was greeted by nearly every user in their small office. Nobody's computer worked any more. After turning a deep shade of red, Jesse assured everyone he would fix it. He remembered something from TL Tech Institute called DNS that was supposed to let computers talk to each other. He went around and set everyone's DNS server to 192.168.1.0, the address they always used in school. It didn't help.

Jesse put in a call to Ralph and explained the situation. All he got was a lecture from the gravelly-voiced elder on the other end, "You darn kids! Why don't ye just leave things alone! I've been working networks since before there were networks! Give me a bit, I'll clean up yer dang mess!" Within minutes, Ralph managed to restore connectivity to the office. Jesse checked his DNS settings out of curiosity to find that the proper setting was 2.2.2.0.

The whole router mishap made him completely forget about the original issue - the client's FTP. Before he could start looking at it again, Ralph forwarded him an email from the customer thanking them for getting their reports back. Jesse had no idea how or why that was working now, but he was willing to accept the praise. He solved his first problem, but the fun was just beginning...

To be continued...


Planet DebianSteve Kemp: This month has been mostly golang-based

This month has mostly been about golang. I've continued work on the protocol-tester that I recently introduced:

This has turned into a fun project, and now all my monitoring is done with it. I've simplified the operation, such that everything uses Redis for storage, and there are now new protocol-testers for finger, nntp, and more.

Sample tests are as basic as this:

  mail.steve.org.uk must run smtp
  mail.steve.org.uk must run smtp with port 587
  mail.steve.org.uk must run imaps
  https://webmail.steve.org.uk/ must run http with content 'Prayer Webmail service'

Results are stored in a redis-queue, where they can be picked off and announced to humans via a small daemon. In my case alerts are routed to a central host via HTTP POSTs, and eventually reach me via Pushover.
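Purely as an illustration of that flow (the queue name, payload layout and alert endpoint below are invented, not those of the real tool), such a daemon can be very small:

import json
import redis
import requests

r = redis.Redis(host='localhost', port=6379)

while True:
    # Block until a test result appears on the (hypothetical) queue
    _, raw = r.blpop('protocol-tester.results')
    result = json.loads(raw)
    if result.get('error'):
        # Forward failures to a central host via HTTP POST
        requests.post('https://alerts.example.com/notify', json=result)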

Beyond the basic network testing though I've also reworked a bunch of code - so the markdown sharing site is now golang powered, rather than running on the previous perl-based code.

As a result of this rewrite, and a little more care, I now score 99/100 + 100/100 on Google's pagespeed testing service. A few more of my sites do the same now, thanks to inline-CSS, inline-JS, etc. Nothing I couldn't have done before, but this was a good moment to attack it.

Finally my "silly" Linux security module, for letting user-space decide if binaries should be executed, can-exec has been forward-ported to v4.16.17. No significant changes.

Over the coming weeks I'll be trying to move more stuff into the cloud, rather than self-hosting. I'm doing a lot of trial-and-error at the moment with Lamdas, containers, and dynamic-routing to that end.

Interesting times.

Planet Debianbisco: First GSoC Report

To whom it may concern, this is my report on the first few weeks of GSoC under the umbrella of the Debian project. I’m writing this on my way back from the minidebconf in Hamburg, which was a nice experience; maybe there will be another post about that ;) So, the goal of my GSoC project is to design and implement a new SSO solution for Debian. But that only touches one part of the project’s deliverables.

Planet DebianMartin Pitt: De-Googling my phone, reloaded

Three weeks ago I blogged about how to get rid of non-free Google services and move to free software on my Android phone. I’ve got a lot of feedback via email, lwn, and Google+, many thanks to all of you for helpful hints! As this is obviously important to many people, I want to tie up some loose ends and publish the results of these discussions.

Alternative apps and stores

  • Yalp is a free app that is able to search, install, and update installed apps from the Google Play Store. It doesn’t even need you to have a Google account, although you can use it to install already paid apps (however, you can’t buy apps within Yalp). I actually prefer that over uptodown now.

  • I moved from FreeOTP to AndOTP. The latter offers backing up your accounts with password or GPG encryption, which is certainly much more convenient than what I’ve previously been doing with noting down the accounts and TOTP secrets in an encrypted file on my laptop.

  • We often listen to internet radio at home. I replaced the non-free ad-ware TuneIn with Transistor, a simple and free app that even has convenient launcher links for a chosen station, so it’s exactly what we want. It does not have a builtin radio station list/search, but if you care about that, take a look at RadioDroid (but that doesn’t have the convenient quick starters).

Transport

In this area the situation is now much happier than my first post indicated. As promised I used trainline.eu for booking some tickets (both for Deutsche Bahn and also on Thalys), and indeed this does a fine job. Same price, European rebate cards like BahnCard 50 are supported, and being able to book with a lot of European train services with just one provider is really neat. However, I’m missing a lot of DB navigator’s great features: realtime information and alternatives, seat selection, car position indicator, regional tariffs, or things like “Länderticket”.

Fortunately it turns out that DB Navigator works just great with a trick: Disable the “Karte anzeigen” option in the menu, and it will immediately stop complaining about missing Play Services after each action. Also, logging in with your DB account never finishes, but after terminating and restarting the app you are logged in and everything works fine. That might be a “regular” bug or just a side effect without Play Services.

Wrt. rental bikes: citybik.es is an awesome project and freely available API that shows available bikes on a map all over Europe. The OpenBikeSharing app uses that on Android. That plus the ordinary Nextbike app works well enough.

microG

A lot of people pointed out microG as a free implementation of Google Play Service APIs. Indeed I did try this even before my first blog post; but I didn’t mention it as I wanted to find out which apps actually need this API.

Also, this really appears to be something for the dauntless: On my rooted Nexus 4 with LineageOS I didn’t get it to work, even after installing the handful of hacks that you need for signature spoofing; and I daresay that on a standard vendorized installation without root/replaced bootloader it’s outright impossible.

Fortunately there are LineageOS builds with microG included, which gets you much further. But even with that, e. g. location still does not work out of the box; one needs to hunt down and install various providers. I’ve heard from several people that they use this successfully, but as this wasn’t the point of my exercise I just gave up after that.

A really useful piece of functionality of Play Services is tracking and remote-controlling (lock, warn tone, erase) lost or stolen phones. With having backup, encryption and proper locking, a stolen phone is not the end of the world, but it’s still relatively important for me (even though I never had to actually use it yet). The only alternative that I found is Cerberus which looks quite comprehensive. It’s not free though (neither as in beer nor in speech), so unless you particularly distrust Google and are not a big company, it might just be better to keep using Play Services for this functionality.

Calendar and Contacts

I’m really happy with DAVDroid and radicale after using them for over a month. But most people don’t have a personal server to run these. etesync looks like an interesting alternative which provides the hosting for you for five coffees a year, and also offers (free) self-hosting for those who can and want to.

,

Planet DebianAndrej Shadura: Porting inputplug to XCB

5 years ago I wrote inputplug, a tiny daemon which connects to your X server and monitors its input devices, running an external command each time a device is connected or disconnected.

I have used a custom keyboard layout and fairly non-standard settings for my pointing devices since 2012. It always annoyed me that those settings would be re-set every time the device was disconnected and reconnected again, for example, when the laptop was brought back up from the suspend mode. I usually solved that by putting commands to reconfigure my input settings into the resume hook scripts, but that obviously didn’t solve the case of connecting external keyboards and mice. At some point those hook scripts stopped working because they would run too early, when the keyboard and mice were not there yet, so I decided to write inputplug.

Inputplug was the first program I ever wrote which used X at a low level, and I had to use Xlib to access the low-level features I needed. More specifically, inputplug uses the XInput X extension and listens to XIHierarchyChanged events. In June 2014, Vincent Bernat contributed a patch to rely on XInput2 only.

During the MiniDebCamp, I had a typical case of yak shaving despite not having any yaks around: I wanted to migrate inputplug’s packaging from Alioth to Salsa, and I had an idea to update the package itself as well. I thought of adding optional systemd user session integration, and the easiest way to do that would be to have inputplug register a D-Bus service. However, if I just registered the service, introspecting it would cause annoying delays since it wouldn’t respond to any of the messages the clients would send to it. Handling messages would require me to integrate polling into the event loop, and it turned out it’s not easy to do while sticking to Xlib, so I decided to try and port inputplug to XCB.

For those unfamiliar with XCB, here’s a bit of background: XCB is a library which implements the X11 protocol and operates on a slightly lower level than Xlib. Unlike Xlib, it only works with structures which map directly to the wire protocol. The functions XCB provides are really atomic: in Xlib, it is not unusual for a function to perform multiple X transactions or to juggle the elements of the structures a bit. In XCB, most of the functions are relatively thin wrappers to enable packing and unpacking of the data. Let me give you an example.

In Xlib, if you wanted to check whether the X server supports a specific extension, you would write something like this:

XQueryExtension(display, "XInputExtension", &xi_opcode, &event, &error)

Internally, XQueryExtension would send a QueryExtension request to the X server, wait for a reply, parse the reply and return the major opcode, the first event code and the first error code.

With XCB, you need to separately send the request, receive the reply and fetch the data you need from the structure you get:

const char ext[] = "XInputExtension";

xcb_query_extension_cookie_t qe_cookie;
qe_cookie = xcb_query_extension(conn, strlen(ext), ext);

xcb_query_extension_reply_t *rep;
rep = xcb_query_extension_reply(conn, qe_cookie, NULL);

At this point, rep has its field present set to true if the extension is present. The rest of the data is in the structure as well, which you have to free yourself after use.
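
Continuing the snippet above, the check and cleanup could look roughly like this (a sketch, not inputplug’s actual code; the field names are those of xcb_query_extension_reply_t):

if (rep && rep->present) {
    int xi_opcode = rep->major_opcode;
    /* rep->first_event and rep->first_error carry the other two
       values that XQueryExtension used to return. */
    (void) xi_opcode;
}
free(rep);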

Things get a bit more tricky with requests returning arrays, like XIQueryDevice. Since the xcb_input_xi_query_device_reply_t structure is difficult to parse manually, XCB provides an iterator, xcb_input_xi_device_info_iterator_t which you can use to iterate over the structure: xcb_input_xi_device_info_next does the necessary parsing and moves the pointer so that each time it is run the iterator points to the next element.

Since replies in the X protocol can have variable-length elements, e.g. device names, XCB also provides wrappers to make accessing them easier, like xcb_input_xi_device_info_name.
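
To illustrate the iterator pattern, listing all devices could look roughly like the sketch below. The function and field names follow my reading of the generated xcb-xinput headers (and XCB_INPUT_DEVICE_ALL is assumed to correspond to XIAllDevices), so treat this as an approximation rather than inputplug’s real code:

#include <stdio.h>
#include <stdlib.h>
#include <xcb/xcb.h>
#include <xcb/xinput.h>

static void list_devices(xcb_connection_t *conn)
{
    /* Ask the server about all input devices. */
    xcb_input_xi_query_device_cookie_t cookie =
        xcb_input_xi_query_device(conn, XCB_INPUT_DEVICE_ALL);
    xcb_input_xi_query_device_reply_t *reply =
        xcb_input_xi_query_device_reply(conn, cookie, NULL);
    if (!reply)
        return;

    /* Walk the variable-length list of device infos with the iterator. */
    xcb_input_xi_device_info_iterator_t it =
        xcb_input_xi_query_device_infos_iterator(reply);
    for (; it.rem; xcb_input_xi_device_info_next(&it)) {
        xcb_input_xi_device_info_t *info = it.data;
        /* Device names are variable-length, hence the accessor functions. */
        printf("%d: %.*s\n", info->deviceid,
               xcb_input_xi_device_info_name_length(info),
               xcb_input_xi_device_info_name(info));
    }
    free(reply);
}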

Most of the code of XCB is generated: there is an XML description of the X protocol which is used in the build process, and the C code to parse and generate the X protocol packets is generated each time the library is built. This means, unfortunately, that the documentation is quite useless, and there aren’t many examples online, especially if you’re going to use rarely used functions like XInput hierarchy change events.
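
For what it’s worth, subscribing to hierarchy change events looks roughly like the sketch below. The struct layout and constant names are my guess at what the generated xcb-xinput headers provide (conn and root are assumed to be an existing connection and root window), so take it as an illustration only:

#include <stdint.h>
#include <xcb/xcb.h>
#include <xcb/xinput.h>

static void select_hierarchy_events(xcb_connection_t *conn, xcb_window_t root)
{
    /* The mask bits follow the xcb_input_event_mask_t header directly,
       so a small wrapper struct is a convenient way to lay them out. */
    struct {
        xcb_input_event_mask_t head;
        uint32_t               mask;
    } mask;

    mask.head.deviceid = XCB_INPUT_DEVICE_ALL;  /* all devices */
    mask.head.mask_len = 1;                     /* length in 4-byte units */
    mask.mask = XCB_INPUT_XI_EVENT_MASK_HIERARCHY;

    xcb_input_xi_select_events(conn, root, 1, &mask.head);
    xcb_flush(conn);
}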

I decided to do the porting the hard way, changing Xlib calls to XCB calls one by one, but there’s an easier way: since Xlib is now actually based on XCB, you can #include <X11/Xlib-xcb.h> and use XGetXCBConnection to get an XCB connection object corresponding to the Xlib’s Display object. Doing that means there will still be a single X connection, and you will be able to mix Xlib and XCB calls.
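
A minimal sketch of that mixed approach (error handling omitted; the function name is just for illustration) looks like this:

#include <X11/Xlib.h>
#include <X11/Xlib-xcb.h>
#include <xcb/xcb.h>

static xcb_connection_t *open_shared_connection(Display **dpy_out)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy)
        return NULL;
    /* Let XCB own the event queue, so events are read with
       xcb_wait_for_event() instead of XNextEvent(). */
    XSetEventQueueOwner(dpy, XCBOwnsEventQueue);
    *dpy_out = dpy;
    /* Both Xlib and XCB calls now go over this single connection. */
    return XGetXCBConnection(dpy);
}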

When porting, it often is useful to have a look at the sources of Xlib: it becomes obvious what XCB functions to use when you know what Xlib does internally (thanks to Mike Gabriel for pointing this out!).

Another thing to remember is that the constants and enums Xlib and XCB define usually have the same values (mandated by the X protocol) despite having slightly different names, so you can mix them too. For example, since inputplug passes the XInput event names to the command it runs, I decided to keep the names as Xlib defines them, and since I’m creating the corresponding strings by using a C preprocessor macro, it was easier for me to keep using XInput2.h instead of defining those strings by hand.

If you’re interested in the result of this porting effort, have a look at the code in the Mercurial repo. Unfortunately, it cannot be packaged for Debian yet since the Debian package for XCB doesn’t ship the module for XInput (see bug #733227).

P.S. Thanks again to Mike Gabriel for providing me important help — and explaining where to look for more of it ;)

Planet DebianSune Vuorela: Where KDEInstallDirs points to

The other day, some user of Extra CMake Modules (a collection of utilities and find modules created by KDE) asked if there was an easy way to query CMake for where the KDEInstallDirs variables point to (KDEInstallDirs is a set of default paths that mostly are good for your system, iirc based upon GNUInstallDirs but with some extensions for various Qt, KDE and XDG common paths, as well as some cross platform additions). I couldn’t find an easy way of doing it without writing a couple of lines of CMake code.

Getting the KDE_INSTALL_(full_)APPDIR with default options is:

$ cmake -DTYPE=APPDIR ..
KDE_INSTALL_FULL_APPDIR:/usr/local/share/applications

and various other options can be set as well.

$ cmake -DCMAKE_INSTALL_PREFIX=/opt/mystuff -DTYPE=BINDIR ..
KDE_INSTALL_FULL_BINDIR: /opt/mystuff/bin

This is kind of simple, but let’s just share it with the world:

cmake_minimum_required(VERSION 3.0)
find_package(ECM REQUIRED)
set (CMAKE_MODULE_PATH ${ECM_MODULE_PATH})

include(KDEInstallDirs)

message("KDE_INSTALL_FULL_${TYPE}: " ${KDE_INSTALL_FULL_${TYPE}})

I don’t think it is complex enough to claim any sorts of copyrights, but if you insist, you can use it under one of the following licenses: CC0, Public Domain (if that’s in your jurisdiction), MIT/X11, WTFPL (any version), 3-clause BSD, GPL (any version), LGPL (any version) and .. erm. whatever.

I was trying to get it to work as a cmake -P script, but some of the find_package calls require a working CMakeCache. Comments welcome.

Planet DebianHolger Levsen: 20180520-Debian-is-wrong

So, the MiniDebConf Hamburg 2018 is about to end, it's sunny, no clouds are visible and people seem to be happy.

And, I have time to write this blog post! So, just as a teaser for now, I'll present to you the content of some slides of our "Reproducible Buster" talk today. Watch the video!

Debian is wrong

93% is a lie. We need infrastructure, processes and policies. (And testing. Currently we only have testing and a vague goal.)

With the upcoming list of bugs (skipped here) we don't want to fingerpoint at individual teams, instead I think we can only solve this if we as Debian decide we want to solve it for buster.

I think this is not happening because people believe things have been sorted out and we take care of them. But we are not, we can't do this alone.

Debian stretch

the 'reproducibly in theory but not in practice' release

Debian buster

the 'we should be reproducible but we are not' release?

Debian bullseye

the 'we are almost there but still haven't sorted out...' release???


I rather hope for:

Debian buster

the release is still far away and we haven't frozen yet! ;-)

Planet DebianDirk Eddelbuettel: Rcpp 0.12.17: More small updates

Another bi-monthly update and the seventeenth release in the 0.12.* series of Rcpp landed on CRAN late on Friday following nine (!!) days in gestation in the incoming/ directory of CRAN. And no complaints: we just wish CRAN were a little more forthcoming with what is happening when, and/or would let us help by supplying additional test information. I do run a fairly insane amount of backtests prior to releases, only to then have to wait another week or more, which is ... not ideal. But again, we all owe CRAN an immense amount of gratitude for all they do, and do so well.

So once more, this release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, the 0.12.12 release in July 2017, the 0.12.13 release in late September 2017, the 0.12.14 release in November 2017, the 0.12.15 release in January 2018 and the 0.12.16 release in March 2018, making it the twenty-first release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1362 packages on CRAN depend on Rcpp for making analytical code go faster and further, along with another 138 in the current BioConductor release 3.7.

Compared to other releases, this release contains again a relatively small change set, but between them Kevin and Romain cleaned a few things up. Full details are below.

Changes in Rcpp version 0.12.17 (2018-05-09)

  • Changes in Rcpp API:

    • The random number Generator class no longer inherits from RNGScope (Kevin in #837 fixing #836).

    • A spurious parenthesis was removed to please gcc8 (Dirk fixing #841)

    • The optional Timer class header now undefines FALSE which was seen to have side-effects on some platforms (Romain in #847 fixing #846).

    • Optional StoragePolicy attributes now also work for string vectors (Romain in #850 fixing #849).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJoerg Jaspert: Mini DebConf Hamburg

Since Friday around noon time, I and my 6-year-old son have been at the Mini DebConf in Hamburg. Attending together with my son is quite a different experience than attending alone or with my wife around as well. Though he is doing pretty well, it mostly means the day ends for me around 2100 when he needs to go to sleep.

Friday

Friday we had a nice train trip up here, with a change to the schedule: we needed to switch to local trains to actually get where we wanted. Still, we arrived in time for lunch, which is always good; afterwards we first went to buy drinks for the days and discovered a nice playground just around the corner.

The evening, besides dinner, consisted of chatting, hacking and getting Nils busy with something - for the times he came to me. He easily found others around and is fast in socialising with people, so free hacking time for me.

Saturday

The day started with a little bit of a hurry, as Nils suddenly got the offer to attend a concert in the Elbphilharmonie and I had to get him over there fast. He says he liked it, even though it didn’t make much sense. Met him later for lunch again, followed by a visit to the playground, and then finally hacking time again.

While Nils was off looking after other conference attendees (and apparently getting ice cream too), after attending the Salsa talk I could hack on stuff, and that meant dozens of merge requests for dak got processed (waldi and lamby are on a campaign against flake8 errors, it appears).

Apropos Salsa: the GitLab instance is the best thing that happened to Debian in terms of collaboration for a long time. It allows so much better handling of any git-related stuff; it's worlds apart between earlier and now.

Holger showed Nils and me the venue, including climbing up one of the towers, quite an adventure for Nils, but a real nice view from up there.

In the evening the dak master branch was ready to get merged into our deploy branch - and as such automagically deployed on all machines where we run. It consisted of 64 commits and apparently a bug, for which I thankfully found a fixing merge request from waldi in the morning.

Oh, and the most important thing: THERE HAVE BEEN PANCAKES!

Sunday

Started the morning, after breakfast, with merging the fixup for the bug, and getting it into the deploy branch. Also asked DSA to adjust group rights for the ftpteam, today we got one promotion from ftptrainee to ftpteam, everybody tell your condolences to waldi. Also added more ftptrainees as we got more volunteers, and removed inactive ones.

Soon we have to start our way back home, but I am sure to come back for another Mini Conf, if it happens again here.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV June 2018 Workshop: Being an Acrobat: Linux and PDFs

Jun 16 2018 12:30
Jun 16 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Portable Document Format (PDF) is a file format first specified by Adobe Systems in 1993. It was a proprietary format until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization.

This workshop presentation will provide various ways that PDF files can be efficiently manipulated in Linux and other free software that may not be easy in proprietary operating systems or applications.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

June 16, 2018 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV June 2018 Main Meeting: VoxxedDays conference report

Jun 5 2018 18:30
Jun 5 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, June 5, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Andrew Pam, Voxxed Days conference report

Andrew will report on a conference he recently attended, covering Language-Level Virtualization with GraalVM, Aggressive Web Apps and more.

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

June 5, 2018 - 18:30

Planet DebianBen Hutchings: Help the Debian kernel team to help you

I gave the first talk this morning at Mini-DebConf Hamburg, titled "Help the kernel team to help you". I briefly described several ways that Debian users and developers can make it easier (or harder) for us to deal with their requests. The slides are up on my talks page, and video should be available soon.

Planet DebianRuss Allbery: California state election

Hm, I haven't done one of these in a while. Well, time to alienate future employers and make awkward mistakes in public that I have to explain if I ever run for office! (Spoiler: I'm highly unlikely to ever run for office.)

This is only of direct interest to California residents. To everyone else, RIP your feed reader, and I'm sorry for the length. (My hand-rolled blog software doesn't do cut tags.) I'll spare you all the drill-down into the Bay Area regional offices. (Apparently we elect our coroner, which makes no sense to me.)

Propositions

I'm not explaining these because this is already much too long; those who aren't in California and want to follow along can see the voter guide.

Proposition 68: YES. Still a good time to borrow money, and what we're borrowing money for here seems pretty reasonable. State finances are in reasonable shape; we have the largest debt of any state for the obvious reason that we have the most people and the most money.

Proposition 69: YES. My instinct is to vote no because I have a general objection to putting restrictions on how the state manages its budget. I don't like dividing tax money into locked pools for the same reason that I stopped partitioning hard drives. That said, this includes public transit in the spending pool from gasoline taxes (good), the opposition is incoherent, and there are wide-ranging endorsements. That pushed me to yes on the grounds that maybe all these people understand something about budget allocations that I don't.

Proposition 70: NO. This is some sort of compromise with Republicans because they don't like what cap-and-trade money is being spent on (like high-speed rail) and want a say. If I wanted them to have a say, I'd vote for them. There's a reason why they have to resort to backroom tricks to try to get leverage over laws in this state, and it's not because they have good ideas.

Proposition 71: YES. Entirely reasonable change to say that propositions only go into effect after the election results are final. (There was a real proposition where this almost caused a ton of confusion, and prompted this amendment.)

Proposition 72: YES. I'm grumbling about this because I think we should get rid of all this special-case bullshit in property taxes and just readjust them regularly. Unfortunately, in our current property tax regime, you have to add more exemptions like this because otherwise the property tax hit (that would otherwise not be incurred) is so large that it kills the market for these improvements. Rainwater capture is to the public benefit in multiple ways, so I'll hold my nose and vote for another special exception.

Federal Offices

US Senator: Kevin de León. I'll vote for Feinstein in the general, and she's way up on de León in the polls, but there's no risk in voting for the more progressive candidate here since there's no chance Feinstein won't get the most votes in the primary. De León is a more solidly progressive candidate than Feinstein. I'd love to see a general election between the two of them.

State Offices

I'm omitting all the unopposed ones, and all the ones where there's only one Democrat running in the primary. (I'm not going to vote for any Republican except for one exception noted below, and third parties in the US are unbelievably dysfunctional and not ready to govern.) For those outside the state, California has a jungle primary where the top two vote-getters regardless of party go to the general election, so this is more partisan and more important than other state primaries.

Governor: Delaine Eastin. One always has to ask, in our bullshit voting system, whether one has to vote tactically instead of for the best candidate. But, looking at polling, I think there's no chance Gavin Newsom (the second-best candidate and the front-runner) won't advance to the general election, so I get to vote for the candidate I actually want to win, even though she's probably not going to. Eastin is by far the most progressive candidate running who actually has the experience required to be governor. (Spoiler: Newsom is going to win, and I'll definitely vote for him in the general against Villaraigosa.)

Lieutenant Governor: Eleni Kounalakis. She and Bleich are the strongest candidates. I don't see a ton of separation between them, but Kounalakis's endorsements are a bit stronger for me. She's also the one candidate who has a specific statement about what she plans to do with the lieutenant governor role of oversight over the university system, which is almost its only actual power. (This political office is stupid and we should abolish it.)

Secretary of State: Alex Padilla. I agree more with Ruben Major's platform (100% paper ballots is the correct security position), but he's an oddball outsider and I don't think he can accomplish as much. Padilla has an excellent track record as the incumbent and is doing basically the right things, just less dramatically.

Treasurer: Fiona Ma. I like Vivek Viswanathan and support his platform, but Fiona Ma has a lot more political expertise and I think will be more effective. I look forward to voting for Viswanathan for something else someday.

Attorney General: Dave Jones. Xavier Becerra hasn't been doing a bad job fighting off bad federal policy, but that seems to be all that he's interested in, and he's playing partisan games with the office. He has an air of amateurishness and political hackery. Dave Jones holds the same positions in a more effective way, is more professional, and has done a good job as Insurance Commissioner.

Insurance Commissioner: Steve Poizner. I'm going to vote for the (former) Republican here. Poizner expressed some really bullshit views on immigration when he ran for governor (which he's apologized for). I wouldn't support him for a more political office. But he was an excellent insurance commissioner (see, for instance, the response to Blue Cross's rate increase request). I'm closer to Ricardo Lara politically, but in his statements to the press he comes across as a clown: self-driving car insurance problems, cannabis insurance, climate change insurance, and a bunch of other nonsense that makes me think he doesn't understand the job. The other democrat, Mahmood, seems like less of a partisan hack, but he's a virtual unknown. If this were an important partisan office, I'd hold my nose and vote for Lara, but the job of insurance commissioner is more to be an auditor and negotiator, and Poizner was really good at it.

Superintendent of Public Instruction: Tony Thurmond. The other front-runner is Marshall Tuck, who is a charter school advocate. I hate charter schools with the passion of a burning sun.

Local Measures

Regional Measure 3: YES. Even more hyper-local than the rest of this post, but mentioning it because it was a narrow call. Bridge tolls are regressive, and I'm not a big fan of raising them as opposed to, say, increasing property taxes (yes please) or income taxes. That said, taxing cars to pay for (largely) public transit is the direction the money should flow. It was thinly balanced for me, but the thrust of the projects won out over the distaste at the regressive tax.

,

Planet DebianRuss Allbery: Free software log (April 2018)

This is rather late since I got distracted by various other things including, ironically, releasing a bunch of software. This is for April, so doesn't include the releases from this month.

The main release I worked on was remctl 3.14, which fixed a security bug introduced in 3.12 with the sudo configuration option. This has since been replaced by 3.15, which has more thorough maintainer testing infrastructure to hopefully prevent this from happening again.

I also did the final steps of the release process for INN 2.6.2, although as usual Julien ÉLIE did all of the hard work.

On the Debian side, I uploaded a new rssh package for the migration to GitLab (salsa.debian.org). I have more work to do on that front, but haven't yet had the time. I've been prioritizing some of my own packages over doing more general Debian work.

Finally, I looked at my Perl modules on CPANTS (the CPAN testing service) and made note of a few things I need to fix, plus filed a couple of bugs for display issues (one of which turned out to be my fault and fixed in Git). I also did a bit of research on the badges that people in the Rust community use in their documentation and started adding support to DocKnot, some of which made it into the subsequent release I did this month.

Planet DebianDirk Eddelbuettel: RcppGSL 0.3.5

A maintenance update of RcppGSL just brought version 0.3.5 to CRAN, a mere twelve days after the RcppGSL 0.3.4 release. Just like yesterday's upload of inline 0.3.15 it was prompted by a CRAN request to update the per-package manual page; see the inline post for details.

The RcppGSL package provides an interface from R to the GNU GSL using the Rcpp package.

No user-facing new code or features were added. The NEWS file entries follow below:

Changes in version 0.3.5 (2018-05-19)

  • Update package manual page using references to DESCRIPTION file [CRAN request].

Courtesy of CRANberries, a summary of changes to the most recent release is available.

More information is on the RcppGSL page. Questions, comments etc should go to the issue tickets at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianMartín Ferrari: MiniDebConf Hamburg - Friday/Saturday

MiniDebCamp Hamburg - Friday 18/5, Saturday 19/5

Friday and Saturday have been very productive days, I love events where there is time to hack!

I had more chats about contributors.d.o with Ganneff and Formorer, and if all goes according to plan, soon salsa will start streaming commit information to contributors and populate information about different teams: not only about normal packaging repos, but also about websites, tools, native packages, etc.

Note that the latter require special configuration, and the same goes if you want to have separate stats for your team (like for the Go team or the Perl team). So if you want to offer proper attribution to members of your team, please get in touch!


I spent loads of time working on Prometheus packages, and finally today (after almost a year) I uploaded a new version of prometheus-alertmanager to experimental. I decided to just drop the whole web interface, as packaging all the Elm framework would take me months of work. If anybody feels like writing a basic HTML/JS interface, I would be happy to include it in the package!

While doing that, I found bugs in the CI pipeline for Go packages in Salsa. Solving these will hopefully make the automatic testing more reliable, as API breakage is sadly a big problem in the Go ecosystem.


I am loving the venue here. Apart from hosting some companies and associations, there is an art gallery which currently has a photo exhibition called Echo park; there were parties happening last night, and tonight apparently there will be more. This place is amazing!

Planet DebianThorsten Glaser: Progress report from the Movim packaging sprint at MiniDebconf

Nik wishes you to know that the Movim packaging sprint (sponsored by the DPL, thank you!) is handled under the umbrella of the Debian Edu sprint (similarly sponsored) since this package is handled by the Teckids Debian Task Force, personnel from Teckids e.V.

After arriving, I’ve started collecting knowledge first. I reviewed upstream’s composer.json file and Wiki page about dependencies and, after it quickly became apparent that we need much more information (e.g. which versions are in sid, what the package names are, and, most importantly, recursive dependencies), a Wiki page of our own grew. Then I made a hunt for information about how to package stuff that uses PHP Composer upstream, and found the, ahem, wonderfully abundant, structured, plentiful and clear documentation from the Debian PHP/PEAR Packaging team. (Some time and reverse-engineering later I figured out that we just ignore composer and read its control file in pkg-php-tools converting dependency information to Debian package relationships. Much time later I also figured out it mangles package names in a specific way and had to rename one of the packages I created in the meantime… thankfully before having uploaded it.) Quickly, the Wiki page grew listing the package names we’re supposed to use. I created a package which I could use as template for all others later.

The upstream Movim developer arrived as well — we have quite an amount of upstream developers of various projects attending MiniDebConf, to the joy of the attendees actually directly involved in Debian, and this makes things much easier, as he immediately started removing dependencies (to make our job easier) and fixing bugs and helping us understand how some of those dependencies work. (I also contributed code upstream that replaces some Unicode codepoints or sequences thereof, such as 3⃣ or ‼ or 👱🏻‍♀️, with <img…/> tags pointing to the SVG images shipped with Movim, with a description (generated from their Unicode names) in the alt attribute.)

Now, Saturday, all dependencies are packaged so far, although we’re still waiting for maintainer feedback for those two we’d need to NMU (or have them upload or us take the packages over); most are in NEW of course, but that’s no problem. Now we can tackle packaging Movim itself — I guess we’ll see whether those other packages actually work then ☺

We also had a chance to fix bugs in other packages, like guacamole-client and musescore.

In the meantime we’ve also had the chance to socialise, discuss, meet, etc. other Debian Developers and associates and enjoy the wonderful food and superb coffee of the “Cantina” at the venue; let me hereby express heartfelt thanks to the MiniDebConf organisation for this good location pick!

Update, later this night: we took over the remaining two packages with permission from their previous team and uploader, and have already started with actually packaging Movim, discovering untold gruesome things in the upstream of the two webfonts it bundles.

Planet DebianMike Hommey: Announcing git-cinnabar 0.5.0 beta 3

Git-cinnabar is a git remote helper to interact with Mercurial repositories. It allows you to clone, pull and push from/to Mercurial remote repositories, using git.

Get it on github.

These release notes are also available on the git-cinnabar wiki.

What’s new since 0.5.0 beta 2?

  • Fixed incompatibilities with Mercurial >= 4.4.
  • Miscellaneous metadata format changes.
  • Move more operations to the helper, hopefully making things faster.
  • Updated git to 2.17.0 for the helper.
  • Properly handle clones with bundles when the repository doesn’t contain anything newer than the bundle.
  • Fixed tag cache, which could lead to missing tags.

Planet DebianDirk Eddelbuettel: inline 0.3.15

A maintenance release of the inline package arrived on CRAN today. inline facilitates writing code in-line in simple string expressions or short files. The package is mature and in maintenance mode: Rcpp used it greatly for several years but then moved on to Rcpp Attributes, so we have a much more limited need for extensions to inline. But a number of other packages have a hard dependency on it, so we do of course look after it as part of the open source social contract (which is a name I just made up, but you get the idea...)

This release was triggered by a (as usual very reasonable) CRAN request to update the per-package manual page which had become stale. We now use Rd macros, you can see the diff for just that file at GitHub; I also include it below. My pkgKitten package-creation helper uses the same scheme, I wholeheartedly recommend it -- as the diff shows, it makes things a lot simpler.

Some other changes reflect two user-contributed pull requests, as well as standard minor package update issues. See below for a detailed list of changes extracted from the NEWS file.

Changes in inline version 0.3.15 (2018-05-18)

  • Correct requireNamespace() call (thanks to Alexander Grueneberg in #5).

  • Small simplification to .travis.yml; also switch to https.

  • Use seq_along instead of seq(along=...) (Watal M. Iwasaki in #6).

  • Update package manual page using references to DESCRIPTION file [CRAN request].

  • Minor packaging updates.

Courtesy of CRANberries, there is a comparison to the previous release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianJoey Hess: fridge 0.1

Imagine something really cool, like a fridge connected to a powerwall, powered entirely by solar panels. What could be cooler than that?

How about a fridge powered entirely by solar panels without the powerwall? Zero battery use, and yet it still preserves your food.

That's much cooler, because batteries, even hyped ones like the powerwall, are expensive and inefficient and have limited cycles. Solar panels are cheap and efficient now. With enough solar panels that the fridge has power to cool down most days (even cloudy days), and a smart enough control system, the fridge itself becomes the battery -- a cold battery.

I'm live coding my fridge, with that goal in mind. You can follow along in this design thread on secure scuttlebutt, and my git commits, and you can watch real-time data from my fridge.

Over the past two days, which were not especially sunny, my 1 kilowatt of solar panels has managed to cool the fridge down close to standard fridge temperatures. The temperature remains steady overnight thanks to added thermal mass in the fridge. My food seems safe in it, despite it being powered off for 14 hours each night.

graph of fridge temperature, starting at 13C and trending downwards to 5C over 24 hours

(Numbers in this graph are running higher than the actual temps of food in the fridge, for reasons explained in the scuttlebutt thread.)

Of course, the longterm viability of a fridge that never draws from a battery is TBD; I'll know within a year if it works for me.

bunch of bananas resting on top of chest freezer fridge conversion

I've written about the coding side of this project before, in my haskell controlled offgrid fridge. The reactive-banana-automation library is working well in this application. My AIMS inverter control board and easy-peasy-devicetree-squeezy were other groundwork for this project.

CryptogramFriday Squid Blogging: Flying Squid

Flying squid are real.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianAndrej Shadura: Goodbye Octopress, hello Pelican

Hi from MiniDebConf in Hamburg!

As you may have noticed, I don’t update this blog often. One of the reasons why this was happening was that until now it was incredibly difficult to write posts. The software I used, Octopress (based on Jekyll) was based on Ruby, and it required quite specific versions of its dependencies. I had the workspace deployed on one of my old laptops, but when I attempted to reproduce it on the laptop I currently use, I failed to. Some dependencies could not be installed, others failed, and my Ruby skills weren’t enough to fix that mess. (I have to admit my Ruby skills improved insignificantly since the time I installed Octopress, but that wasn’t enough to help in this case.)

I’ve spent some time during this DebCamp to migrate to Pelican, which is written in Python, packaged in Debian, and whose dependencies are quite straightforward to install. I had to install (and write) a few plugins to make the migration easier, and port my custom Octopress Bootstrap theme to Pelican.

I no longer include any scripts from Twitter or Facebook (I made Tweet and Share button static links), and the Disqus comments are loaded only on demand, so reading this blog will respect your privacy better than before.

See you at MiniDebConf tomorrow!

Krebs on SecurityT-Mobile Employee Made Unauthorized ‘SIM Swap’ to Steal Instagram Account

T-Mobile is investigating a retail store employee who allegedly made unauthorized changes to a subscriber’s account in an elaborate scheme to steal the customer’s three-letter Instagram username. The modifications, which could have let the rogue employee empty bank accounts associated with the targeted T-Mobile subscriber, were made even though the victim customer already had taken steps recommended by the mobile carrier to help minimize the risks of account takeover. Here’s what happened, and some tips on how you can protect yourself from a similar fate.

Earlier this month, KrebsOnSecurity heard from Paul Rosenzweig, a 27-year-old T-Mobile customer from Boston who had his wireless account briefly hijacked. Rosenzweig had previously adopted T-Mobile’s advice to customers about blocking mobile number port-out scams, an increasingly common scheme in which identity thieves armed with a fake ID in the name of a targeted customer show up at a retail store run by a different wireless provider and ask that the number be transferred to the competing mobile company’s network.

So-called “port out” scams allow crooks to intercept your calls and messages while your phone goes dark. Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves who have already stolen a target’s password(s) can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In this case, however, the perpetrator didn’t try to port Rosenzweig’s phone number: Instead, the attacker called multiple T-Mobile retail stores within an hour’s drive of Rosenzweig’s home address until he succeeded in convincing a store employee to conduct what’s known as a “SIM swap.”

A SIM swap is a legitimate process by which a customer can request that a new SIM card (the tiny, removable chip in a mobile device that allows it to connect to the provider’s network) be added to the account. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

However, thieves and other ne’er-do-wells can abuse this process by posing as a targeted mobile customer or technician and tricking employees at the mobile provider into swapping in a new SIM card for that customer on a device that they control. If successful, the SIM swap accomplishes more or less the same result as a number port out (at least in the short term) — effectively giving the attackers access to any text messages or phone calls that are sent to the target’s mobile account.

Rosenzweig said the first inkling he had that something wasn’t right with his phone was on the evening of May 2, 2018, when he spotted an automated email from Instagram. The message said the email address tied to the three-letter account he’d had on the social media platform for seven years — instagram.com/par — had been changed. He quickly logged in to his Instagram account, changed his password and then reverted the email on the account back to his original address.

By this time, the SIM swap conducted by the attacker had already been carried out, although Rosenzweig said he didn’t notice his phone displaying zero bars and no connection to T-Mobile at the time because he was at home and happily surfing the Web on his device using his own wireless network.

The following morning, Rosenzweig received another notice — this one from Snapchat — stating that the password for his account there (“p9r”) had been changed. He subsequently reset the Instagram password and then enabled two factor authentication on his Snapchat account.

“That was when I realized my phone had no bars,” he recalled. “My phone was dead. I couldn’t even call 611” [the mobile short number that all major wireless providers make available to reach their customer service departments].

It appears that the perpetrator of the SIM swap abused not only internal knowledge of T-Mobile’s systems, but also a lax password reset process at Instagram. The social network allows users to enable notifications on their mobile phone when password resets or other changes are requested on the account.

But this isn’t exactly two-factor authentication because it also lets users reset their passwords via their mobile account by requesting a password reset link to be sent to their mobile device. Thus, if someone is in control of your mobile phone account, they can reset your Instagram password (and probably a bunch of other types of accounts).

Rosenzweig said even though he was able to reset his Instagram password and restore his old email address tied to the account, the damage was already done: All of his images and other content he’d shared on Instagram over the years was still tied to his account, but the attacker had succeeded in stealing his “par” username, leaving him with a slightly less sexy “par54384321,” (apparently chosen for him at random by either Instagram or the attacker).

As I wrote in November 2015, short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them. Known as “OG” (short for “original” and also “original gangster”) in certain circles online, these can be usernames for virtually any service, from email accounts at Webmail providers to social media services like Instagram, Snapchat, Twitter and YouTube.

People who traffic in OG accounts prize them because they can make the account holder appear to have been a savvy, early adopter of the service before it became popular and before all of the short usernames were taken.

Rosenzweig said a friend helped him work with T-Mobile to regain control over his account and deactivate the rogue SIM card. He said he’s grateful the attackers who hijacked his phone for a few hours didn’t try to drain bank accounts that also rely on his mobile device for authentication.

“It definitely could have been a lot worse given the access they had,” he said.

But throughout all of this ordeal, it struck Rosenzweig as odd that he never once received an email from T-Mobile stating that his SIM card had been swapped.

“I’m a software engineer and I thought I had pretty good security habits to begin with,” he said. “I never re-use passwords, and it’s hard to see what I could have done differently here. The flaw here was with T-Mobile mostly, but also with Instagram. It seems like by having the ability to change one’s [Instagram] password by email or by mobile alone negates the second factor and it becomes either/or from the attackers point of view.”

Sources close to the investigation say T-Mobile is investigating a current or former employee as the likely culprit. The mobile company also acknowledged that it does not currently send customers an email to the email address on file when SIM swaps take place. A T-Mobile spokesperson said the company was considering changing the current policy, which sends the customer a text message to alert them about the SIM swap.

“We take our customers privacy and security very seriously and we regret that this happened,” the company said in a written statement. “We notify our customers immediately when SIM changes occur, but currently we do not send those notifications via email. We are actively looking at ways to improve our processes in this area.”

In summary, when a SIM swap happens on a T-Mobile account, T-Mobile will send a text message to the phone equipped with the new SIM card. But obviously that does not help someone who is the target of a SIM swap scam.

As we can see, just taking T-Mobile’s advice to place a personal identification number (PIN) on your account to block number port out scams does nothing to flag one’s account to make it harder to conduct SIM swap scams.

Rather, T-Mobile says customers need to call in to the company’s customer support line and place a separate “SIM lock” on their account, which can only be removed if the customer shows up at a retail store with ID (or, presumably, anyone with a fake ID who also knows the target’s Social Security Number and date of birth).

I checked with the other carriers to see if they support locking the customer’s current SIM to the account on file. I suspect they do, and will update this piece when/if I hear back from them. In the meantime, it might be best just to phone up your carrier and ask.

Please note that a SIM lock on your mobile account is separate from a SIM PIN that you can set via your mobile phone’s operating system. A SIM PIN is essentially an additional layer of physical security that locks the current SIM to your device, requiring you to input a special PIN when the device is powered on in order to call, text or access your data plan on your phone. This feature can help block thieves from using your phone or accessing your data if you lose your phone, but it won’t stop thieves from physically swapping in their own SIM card.

iPhone users can follow these instructions to set or change a device’s SIM PIN. Android users can see this page. You may need to enter a carrier-specific default PIN before being able to change it. By default, the SIM PIN for all Verizon and AT&T phones is “1111;” for T-Mobile and Sprint it should default to “1234.”

Be advised, however, that if you forget your SIM PIN and enter the wrong PIN too many times, you may end up having to contact your wireless carrier to obtain a special “personal unlocking key” (PUK).

At the very least, if you haven’t already done so please take a moment to place a port block PIN on your account. This story explains exactly how to do that.

Also, consider reviewing twofactorauth.org to see whether you are taking full advantage of any multi-factor authentication offerings so that your various accounts can’t be trivially hijacked if an attacker happens to guess, steal, phish or otherwise know your password.

One-time login codes produced by mobile apps such as Authy, Duo or Google Authenticator are more secure than one-time codes sent via automated phone call or text — mainly because crooks can’t steal these codes if they succeed in porting your mobile number to another service or by executing a SIM swap on your mobile account [full disclosure: Duo is an advertiser on this blog].

Update, May 19, 3:16 pm ET: Rosenzweig reports that he has now regained control over his original Instagram account name, “par.” Good on Instagram for fixing this, but it’s not clear the company has a real strong reporting process for people who find their usernames are hijacked.

Planet DebianJoachim Breitner: Proof reuse in Coq using existential variables

This is another technical post that is of interest only to Coq users.

TL;DR: Using existential variable for hypotheses allows you to easily refactor a complicated proof into an induction schema and the actual proofs.

Setup

As a running example, I will use a small theory of “bags”, which you can think of as lists represented as trees, to allow an O(1) append operation:

Require Import Coq.Arith.Arith.
Require Import Psatz.
Require FunInd.

(* The data type *)
Inductive Bag a : Type :=
  | Empty : Bag a
  | Unit  : a -> Bag a
  | Two   : Bag a -> Bag a -> Bag a.

Arguments Empty {_}.
Arguments Unit {_}.
Arguments Two {_}.

Fixpoint length {a} (b : Bag a) : nat :=
  match b with
  | Empty     => 0
  | Unit _    => 1
  | Two b1 b2 => length b1 + length b2
  end.

(* A smart constructor that ensures that a [Two] never
   has [Empty] as subtrees. *)
Definition two {a} (b1 b2 : Bag a) : Bag a := match b1 with
  | Empty => b2
  | _ => match b2 with | Empty => b1
                       | _ => Two b1 b2 end end.

Lemma length_two {a} (b1 b2 : Bag a) :
  length (two b1 b2) = length b1 + length b2.
Proof. destruct b1, b2; simpl; lia. Qed.

(* A first non-trivial function *)
Function take {a : Type} (n : nat) (b : Bag a) : Bag a :=
  if n =? 0
  then Empty
  else match b with
       | Empty     => b
       | Unit x    => b
       | Two b1 b2 => two (take n b1) (take (n - length b1) b2)
       end.

The theorem

The theorem that I will be looking at in this post describes how length and take interact:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).

Before I dive into it, let me point out that this example itself is too simple to warrant the techniques that I will present in this post. I have to rely on your imagination to scale this up to appreciate the effect on significantly bigger proofs.

Naive induction

How would we go about proving this lemma? Surely, induction is the way to go! And indeed, this is provable using induction (on the Bag) just fine:

Proof.
  intros.
  revert n.
  induction b; intros n.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + rewrite Nat.min_0_r. reflexivity.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. lia.
  * simpl.
    destruct (Nat.eqb_spec n 0).
    + subst. rewrite Nat.min_0_l. reflexivity.
    + simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

But there is a problem: A proof by induction on the Bag argument immediately creates three subgoals, one for each constructor. But that is not how take is defined, which first checks the value of n, independent of the constructor. This means that we have to do the case-split and the proof for the case n = 0 three times, although they are identical. It’s a one-line proof here, but imagine something bigger...

Proof by fixpoint

Can we refactor the proof to handle the case n = 0 first? Yes, but not with a simple invocation of the induction tactic. We could do well-founded induction on the length of the argument, or we could do the proof using the more primitive fix tactic. The latter is a bit hairy: you won’t know if your proof is accepted until you do Qed (or check with Guarded), but when it works it can yield some nice proofs.

Proof.
  intros a.
  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n. rewrite Nat.min_0_l. reflexivity.
  + destruct b.
    * rewrite Nat.min_0_r. reflexivity.
    * simpl. lia.
    * simpl. rewrite length_two, !IH. lia.
Qed.

Nice: we eliminated the duplication of proofs!

A functional induction lemma

Again, imagine that we jumped through more hoops here ... maybe some well-founded recursion with a tricky size measure and complex proofs that the measure decreases ... or maybe you need to carry around an invariant about your arguments and you have to work hard to satisfy the assumption of the induction hypothesis.

As long as you do only one proof about take, that is fine. As soon as you do a second proof, you will notice that you have to repeat all of that, and it can easily make up most of your proof...

Wouldn’t it be nice if you could do the common parts of the proofs only once, obtain a generic proof scheme that you can use for (most) proofs about take, and then just fill in the blanks?

Incidentally, the Function command provides precisely that:

take_ind
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall (n : nat) (b : Bag a), (n =? 0) = true -> P n b Empty) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> b = Empty -> P n Empty b) ->
       (forall (n : nat) (b : Bag a), (n =? 0) = false -> forall x : a, b = Unit x -> P n (Unit x) b) ->
       (forall (n : nat) (b : Bag a),
        (n =? 0) = false ->
        forall b1 b2 : Bag a,
        b = Two b1 b2 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

which is great if you can use Function (although not perfect – we’d rather see n = 0 instead of (n =? 0) = true), but often Function is not powerful enough to define the function you care about.

Extracting the scheme from a proof

We could define our own take_ind' by hand, but that is a lot of work, we may not get it right easily, and when we change our functions, there is now this big proof statement to update.

Instead, let us use existentials, which are variables whose type Coq infers from how we use them, so we don’t have to declare them. Unfortunately, Coq does not support writing just

Lemma take_ind':
  forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
  forall (IH1 : ?) (IH2 : ?) (IH3 : ?) (IH4 : ?),
  forall n b, P n b (take n b).

where we just leave out the types of the assumptions (Isabelle does allow this), but we can fake it using some generic technique.

We begin by stating an auxiliary lemma using a sigma type to say “there exist some assumptions that are sufficient to show the conclusion”:

Lemma take_ind_aux:
  forall a (P : _ -> _ -> _ -> Prop),
  { Hs : Prop |
    Hs -> forall n (b : Bag a), P n b (take n b)
  }.

We use the eexists tactic (“existential exists”, https://coq.inria.fr/refman/proof-engine/tactics.html#coq:tacv.eexists) to construct the sigma type without committing to the type of Hs yet.

Proof.
  intros a P.
  eexists.
  intros Hs.

This gives us an assumption Hs : ?Hs – note the existential type. We need four of those, which we can achieve by writing

  pose proof Hs as H1. eapply proj1 in H1. eapply proj2 in Hs.
  pose proof Hs as H2. eapply proj1 in H2. eapply proj2 in Hs.
  pose proof Hs as H3. eapply proj1 in H3. eapply proj2 in Hs.
  rename Hs into H4.

We now have this goal state:

1 subgoal
a : Type
P : nat -> Bag a -> Bag a -> Prop
H4 : ?Goal2
H1 : ?Goal
H2 : ?Goal0
H3 : ?Goal1
______________________________________(1/1)
forall (n : nat) (b : Bag a), P n b (take n b)

At this point, we start reproducing the proof of length_take: The same approach to induction, the same case splits:

  fix IH 2.
  intros.
  rewrite take_equation.
  destruct (Nat.eqb_spec n 0).
  + subst n.
    revert b.
    refine H1.
  + rename n0 into Hnot_null.
    destruct b.
    * revert n Hnot_null.
      refine H2.
    * rename a0 into x.
      revert x n Hnot_null.
      refine H3.
    * assert (IHb1 : P n b1 (take n b1)) by apply IH.
      assert (IHb2 : P (n - length b1) b2 (take (n - length b1) b2)) by apply IH.
      revert n b1 b2 Hnot_null IHb1 IHb2.
      refine H4.
Defined. (* Important *)

Inside each case, we move all relevant hypotheses into the goal using revert and refine with the corresponding assumption, thus instantiating it. In the recursive case (Two), we assert that P holds for the subterms, by induction.

It is important to end this proof with Defined, and not Qed, as we will see later.

In a next step, we can remove the sigma type:

Definition take_ind' a P := proj2_sig (take_ind_aux a P).

The type of take_ind' is as follows:

take_ind'
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       proj1_sig (take_ind_aux a P) ->
       forall n b, P n b (take n b)

This looks almost like an induction lemma. The assumptions of this lemma have the not very helpful type proj1_sig (take_ind_aux a P), but we can already use this to prove length_take:

Theorem length_take:
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind' with (P := fun n b r => length r = min n (length b)).
  repeat apply conj; intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

In this case I have to explicitly state P where I invoke take_ind', because Coq cannot figure out this instantiation on its own (it requires higher-order unification, which is undecidable and unpredictable). In other cases I had more luck.

After I apply take_ind', I have this proof goal:

______________________________________(1/1)
proj1_sig (take_ind_aux a (fun n b r => length r = min n (length b)))

which is the type that Coq inferred for Hs above. We know that this is a conjunction of a bunch of assumptions, and we can split it as such, using repeat apply conj. At this point, Coq needs to look inside take_ind_aux; this would fail if we used Qed to conclude the proof of take_ind_aux.

This gives me four goals, one for each case of take, and the remaining proof really only deals with the specifics of length_take – no more worrying about getting the induction right or doing the case-splitting the right way.

Also note that, very conveniently, Coq uses the same name for the induction hypotheses IHb1 and IHb2 that we used in take_ind_aux!

Making it prettier

It may be a bit confusing to have this proj1_sig in the type, especially when working in a team where others will use your induction lemma without knowing its internals. But we can resolve that, and also turn the conjunctions into normal arrows, using a bit of tactic support. This is completely generic, so if you follow this procedure, you can just copy most of that:

Lemma uncurry_and: forall {A B C}, (A /\ B -> C) -> (A -> B -> C).
Proof. intros. intuition. Qed.
Lemma under_imp:   forall {A B C}, (B -> C) -> (A -> B) -> (A -> C).
Proof. intros. intuition. Qed.
Ltac iterate n f x := lazymatch n with
  | 0 => x
  | S ?n => iterate n f uconstr:(f x)
end.
Ltac uncurryN n x :=
  let n' := eval compute in n in
  lazymatch n' with
  | 0 => x
  | S ?n => let uc := iterate n uconstr:(under_imp) uconstr:(uncurry_and) in
            let x' := uncurryN n x in
            uconstr:(uc x')
end.

With this in place, we can define our final proof scheme lemma:

Definition take_ind'' a P
  := ltac:(let x := uncurryN 3 (proj2_sig (take_ind_aux a P)) in exact x).
Opaque take_ind''.

The type of take_ind'' is now exactly what we’d wish for: All assumptions spelled out, and the n =? 0 already taken care of (compare this to the take_ind provided by the Function command above):

take_ind''
     : forall (a : Type) (P : nat -> Bag a -> Bag a -> Prop),
       (forall b : Bag a, P 0 b Empty) ->
       (forall n : nat, n <> 0 -> P n Empty Empty) ->
       (forall (x : a) (n : nat), n <> 0 -> P n (Unit x) (Unit x)) ->
       (forall (n : nat) (b1 b2 : Bag a),
        n <> 0 ->
        P n b1 (take n b1) ->
        P (n - length b1) b2 (take (n - length b1) b2) ->
        P n (Two b1 b2) (two (take n b1) (take (n - length b1) b2))) ->
       forall (n : nat) (b : Bag a), P n b (take n b)

At this point we can mark take_ind'' as Opaque, to hide how we obtained this lemma.

Our proof does not change a lot; we merely no longer have to use repeat apply conj:

Theorem length_take''':
  forall {a} n (b : Bag a),
  length (take n b) = min n (length b).
Proof.
  intros a.
  intros.
  apply take_ind'' with (P := fun n b r => length r = min n (length b)); intros.
  * rewrite Nat.min_0_l. reflexivity.
  * rewrite Nat.min_0_r. reflexivity.
  * simpl. lia.
  * simpl. rewrite length_two, IHb1, IHb2. lia.
Qed.

Is it worth it?

It was in my case: Applying this trick in our ongoing work of verifying parts of the Haskell compiler GHC separated a somewhat involved proof into a re-usable proof scheme (go_ind), making the actual proofs (go_all_WellScopedFloats, go_res_WellScoped) much neater and to the point. It saved “only” 60 lines (if I don’t count the 20 “generic” lines above), but the pay-off will increase as I do even more proofs about this function.

CryptogramMaliciously Changing Someone's Address

Someone changed the address of UPS corporate headquarters to his own apartment in Chicago. The company discovered it three months later.

The problem, of course, is that in the US there isn't any authentication of change-of-address submissions:

According to the Postal Service, nearly 37 million change-of-address requests, known as PS Form 3575, were submitted in 2017. The form, which can be filled out in person or online, includes a warning below the signature line that "anyone submitting false or inaccurate information" could be subject to fines and imprisonment.

To cut down on possible fraud, post offices send a validation letter to both an old and new address when a change is filed. The letter includes a toll-free number to call to report anything suspicious.

Each year, only a tiny fraction of the requests are ever referred to postal inspectors for investigation. A spokeswoman for the U.S. Postal Inspection Service could not provide a specific number to the Tribune, but officials have previously said that the number of change-of-address investigations in a given year typically totals 1,000 or fewer.

While fraud involving change-of-address forms has long been linked to identity thieves, the targets are usually unsuspecting individuals, not massive corporations.

Worse Than FailureError'd: Perfectly Technical Difficulties

David G. wrote, "For once, I'm glad to see technical issues being presented in a technical way."


"Springer has a very interesting pricing algorithm for downloading their books: buy the whole book at some 10% of the sum of all its individual chapters," writes Bernie T.


"While browsing PlataGO! forums, I noticed the developers are erasing technical debt...and then some," Dariusz J. writes.


Bill K. wrote, "Hooray! It's an 'opposite sale' on Adidas' website!"


"A trail camera disguised at a salad bowl? Leave that at an all you can eat buffet and it'll blend right in," wrote Paul T.


Brian writes, "Amazon! That's not how you do math!"


[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianMartín Ferrari: MiniDebConf Hamburg - Thursday

MiniDebCamp Hamburg - Thursday 17/5

I missed my flight on Wednesday, and for a moment I thought I would have to cancel my attendance, but luckily I was able to buy a ticket for Thursday for a good price.

I arrived at the venue just in time for a "stand-up" meeting, where people introduced themselves and shared what they are working on / planning to work on. That gave me a great feeling: having an idea of what other people are doing gave me motivation to work on my own projects.

The venue seems to be some kind of cooperative, with office space for different associations; there is also a small guest house (where I am sleeping) and a "cantina". The building seems very pretty, but it is going through some renovations, so the scaffolding does not let you see much of it. It also has a big outdoor area, which is always welcome.

I had a good chat about mapping support in IkiWiki, so my rewrite of the OSM plugin might get some users even before it is completely merged!

I also worked for a while on Prometheus packages, I am hoping to finally get a new version of prometheus-alertmanager packaged soon.

I realised I still had some repos in my home directory on alioth, so I moved these over to salsa. In the same vein, I started discussions about migrating my data-collection scripts for contributors.d.o to salsa; this is quite important if we want to keep contributors.d.o relevant and useful.


Planet DebianOlivier Berger: Virtualized lab demonstration using a tweaked Labtainers running in a container

I’ve recorded a screencast: Labtainers in docker demonstration (embedded below) demonstrating how I’ve tweaked Labtainers so as to run it inside its own Docker container.

I’m currently very much excited by the Labtainers framework for delivering virtual labs, for instance in the context of MOOCs.

Labtainers is quite interesting as it allows isolating a lab in several containers running in their own dedicated virtual network, which helps distributing a lab without needing to install anything locally.

My tweak makes it possible to run what I called the “master” container, which contains the labtainers scripts, instead of having to install labtainers on a Linux host. This should help with the installation and distribution of labtainers, as well as with deploying it on cloud platforms some day soon. In the meantime, the lab containers run with privileges, so it’s advisable to be careful, and running the whole set of containers in a VM may be safer. Maybe Labtainers will evolve in the future to integrate a containerization of its scripts. My patches are pending, but the upstream authors are currently focused on some other priorities.

Another interesting property of labtainers shown in the demo is the auto-grading feature, which uses traces of what the student performed inside the lab environment to evaluate the activities. Here, the telnetlab that I’ve shown is evaluated by looking at text input on the command line or messages appearing on stdout or in logs: the student launched both telnet and ssh, some failed logins appeared, etc.

However, the demo is a bit confusing, in that I recorded a second lab execution after having previously made a first attempt at the same telnetlab. In labtainers, traces of execution can accumulate: the student will make a first attempt and restart later, before sending it all to the professor (unless a redo.py is issued). This explains why the grading appears to give a different result than what I performed in the screencast.

Stay tuned for more news about my Labtainers adventures.

P.S. thanks to labtainers authors, and obs-studio folks for the screencast recording tool 🙂

Planet DebianLouis-Philippe Véronneau: Running systemd in the Gitlab docker runner CI

At the DebConf videoteam, we use ansible to manage our machines. Last fall in Cambridge, we migrated our repositories to salsa.debian.org and I started playing with the Gitlab CI. It's pretty powerful and helped us catch a bunch of errors we had missed.

As it was my first time playing with continuous integration and docker, I had trouble whenever our playbooks used systemd in one way or another, and I couldn't figure out a way to have systemd run in the Gitlab docker runner.

Fast forward a few months and I lost another day and a half working on this issue. I haven't been able to make it work (my conclusion is that it's not currently possible), but I thought I would share what I learned in the process with others. Who knows, maybe someone will have a solution!

10 steps to failure

I first started by creating a privileged Gitlab docker runner on a machine that is dedicated to running Gitlab CI runners. To run systemd in docker you either need to run privileged docker instances or to run them with the --cap-add=SYS_ADMIN capability.

If you were trying to run a docker container that runs with systemd directly, you would do something like:

$ docker run -it --cap-add SYS_ADMIN -v /sys/fs/cgroup:/sys/fs/cgroup:ro debian-systemd

I tried replicating this behavior with the Gitlab runner by mounting the right volumes in the runner and giving it the right cap permissions.

The thing is, normally your docker container runs an entrypoint command such as CMD ["/lib/systemd/systemd"]. To run its CI scripts, the Gitlab runner takes that container but replaces the entrypoint command by:

sh -c 'if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec /bin/sh \nelse\n\techo shell not found\n\texit 1\nfi\n\n'

That is to say, it tries to run bash.

If you try to run commands that require systemd such as systemctl status, you'll end up with this error message since systemd is not running:

Failed to get D-Bus connection: Operation not permitted

Trying to run systemd manually once the container has been started won't work either, since systemd needs to be PID 1 in order to work (and PID 1 is bash). You end up with this error:

Trying to run as user instance, but the system has not been booted with systemd.

At this point, I came up with a bunch of creative solutions to try to bypass Gitlab's entrypoint takeover. Turns out you can tell the Gitlab runner to override the container's entrypoint with your own. Sadly, the runner then appends its long bash command right after.

For example, if you run a job with this gitlab-ci entry:

image:
  name: debian-systemd
  entrypoint: "/lib/systemd/systemd"
script:
- /usr/local/bin/my-super-script

You will get this entrypoint:

/lib/systemd/systemd sh -c 'if [ -x /usr/local/bin/bash ]; then\n\texec /usr/local/bin/bash \nelif [ -x /usr/bin/bash ]; then\n\texec /usr/bin/bash \nelif [ -x /bin/bash ]; then\n\texec /bin/bash \nelif [ -x /usr/local/bin/sh ]; then\n\texec /usr/local/bin/sh \nelif [ -x /usr/bin/sh ]; then\n\texec /usr/bin/sh \nelif [ -x /bin/sh ]; then\n\texec /bin/sh \nelse\n\techo shell not found\n\texit 1\nfi\n\n'

This obviously fails. I then tried to be clever and use this entrypoint: ["/lib/systemd/systemd", "&&"]. This does not work either, since docker requires the entrypoint to be only one command.

Someone pointed out to me that you could try to use exec /lib/systemd/systemd to replace the PID 1 bash process with systemd, but that also fails with an error telling you the system has not been booted with systemd.

One more level down

Since it seems you can't run systemd in the Gitlab docker runner directly, why not try to run systemd in docker in docker (dind)? dind is used quite a lot in the Gitlab CI to build containers, so we thought it might work.

Sadly, we haven't been able to make this work either. You need to mount volumes in docker to run systemd properly, and it seems docker doesn't like to mount volumes from a docker container that have already been mounted from the docker host... Oof.

If you have been able to run systemd in the Gitlab docker runner, please contact me!

Paths to explore

The only Gitlab runner executor I've used at the moment is the docker one, since it's what most Gitlab instances run. There is also an LXC executor; I have no experience with it, but it might make it possible to run Gitlab CI tests with systemd.

Planet DebianLouis-Philippe Véronneau: Join us in Hamburg for the Hamburg Mini-DebConf!

Thanks to Debian, I have the chance to be able to attend the Hamburg Mini-DebConf, taking place in Hamburg from May 16th to May 20th. We are hosted by Dock Europe in the amazing Viktoria Kaserne building.

Viktoria Kaserne

As always, the DebConf videoteam has been hard at work! Our setup is working pretty well and we only have minor fixes to implement before the conference starts.

For those of you who couldn't attend the mini-conf, you can watch the live stream here. Videos will be uploaded shortly after to the DebConf video archive.

Olasd resting on our makeshift cubes podium

Planet Linux AustraliaMichael Still: How to maintain a local mirror of github repositories


Similarly to yesterday’s post about mirroring ONAP’s git, I also want to mirror all of the git repositories for certain github projects. In this specific case, all of the Kubernetes repositories.

So once again, here is a script based on something Tony Breeds and I cooked up a long time ago for OpenStack…

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

from github import Github as github


GITHUB_ACCESS_TOKEN = '...use yours!...'


def get_github_projects():
    g = github(GITHUB_ACCESS_TOKEN)
    for user in ['kubernetes']:
        for repo in g.get_user(login=user).get_repos():
            yield('https://github.com', repo.full_name)


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = []
for res in list(get_github_projects()):
    if len(res) == 3:
        projects.append(res)
    else:
        projects.append((res[0], res[1], res[1]))
    
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(starting_dir)

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

This script is basically the same as the ONAP one, but it understands how to get a project list from github and doesn’t need to handle ONAP’s slightly strange repository naming scheme.

I hope it is useful to someone other than me.


The post How to maintain a local mirror of github repositories appeared first on Made by Mikal.

Planet DebianNorbert Preining: Docker, cron, environment variables, and Kubernetes

I recently mentioned that I am running cron in some of the docker containers I need for a new service. Now that we moved to Kubernetes and Rancher for deployment, I moved most of the configuration into Kubernetes ConfigMaps, and expose the key/value pairs there as environment variables. Sounded like a good idea, but …

but well, reading the documentation would have helped. Cron scripts do not see the normal environment of the surrounding process (cron, init, whatever), but get a cleaned-up environment. As a consequence, none of the configuration keys available in the environment showed up in the cron jobs – which of course made them fail badly 😉

After some thinking and reading, I came up with two solutions, one “Dirty Harry^WNorby” solution and one clean and nice, but annoying solution.

Dirty Harry^WNorby solution

What is available in the environment of the cron jobs is minimal: in fact, more or less what is defined in /etc/environment plus whatever the shell sets (if it is a shell script). So the solution was to add the necessary variable definitions to /etc/environment so that they are properly set. For that, I added the following code to the start-syslog-cron script that is the entry point of the container:

# prepare for export of variables to cron jobs
if [ -r /env-vars-to-be-exported ]
then
  for i in `cat /env-vars-to-be-exported`
  do
    echo "$i=${!i}" >> /etc/environment
  done
fi

Meaning, if the container contains a file /env-vars-to-be-exported then the lines of it are considered variable names and are set in the environment with the respective values at the time of invocation.

Using this quick and dirty trick it is now dead easy to get the ConfigMap variables into the cron jobs’ environment by adding the necessary variable names to the file /env-vars-to-be-exported. Thus, no adaptation of the original source code was necessary – a big plus!

Be warned, there is no error checking etc, so one can mess up the container quite easily 😉

Standards solution

The more standard and clean solution is mounting the ConfigMap and reading the values from the exported files. This works, has the big advantage that one can change the values without restarting the containers (mounted ConfigMaps are updated when the ConfigMaps change – besides a few corner cases), and involves no nasty trickery in the initialization.

The disadvantage is that the code of the cron jobs needs to be changed to read the variables from the config files instead of the environment, along the lines of the sketch below.
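
As an illustration only, here is a minimal Python sketch of what such a change could look like, assuming the ConfigMap is mounted at /config with one file per key (the mount path and the key name are made up for this example):

import os

CONFIG_DIR = '/config'  # hypothetical mount point of the ConfigMap volume


def get_config(key, default=None):
    """Read a single ConfigMap value from its mounted file."""
    path = os.path.join(CONFIG_DIR, key)
    try:
        with open(path) as f:
            return f.read().strip()
    except OSError:
        return default


# Inside a cron job, instead of os.environ['DB_HOST'] one would then write:
db_host = get_config('DB_HOST', 'localhost')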

,

Krebs on SecurityTracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.

On May 15, ZDnet.com ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.

Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.

Several hours before the Motherboard story went live, KrebsOnSecurity heard from Robert Xiao, a security researcher at Carnegie Mellon University who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.

LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.

Once that consent is obtained, LocationSmart texts the subscriber their approximate longitude and latitude, plotting the coordinates on a Google Street View map. [It also potentially collects and stores a great deal of technical data about your mobile device. For example, according to their privacy policy that information “may include, but is not limited to, device latitude/longitude, accuracy, heading, speed, and altitude, cell tower, Wi-Fi access point, or IP address information”].

But according to Xiao, a PhD candidate at CMU’s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving. By pinging the friend’s mobile network multiple times over several minutes, he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.

“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.

Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Xiao was able to determine within a few seconds of querying the public LocationSmart service the near-exact location of the mobile phone belonging to all five of my sources.

LocationSmart’s demo page.

One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his current location. The remaining three sources said the location returned for their phones was between approximately 1/5 to 1/3 of a mile at the time.

Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.

“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”

LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”

LocationSmart’s home page lists many partners.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017. This link from The Internet Archive suggests the service may have existed under a different company name — loc-aid.com — since mid-2011, but it’s unclear if that service used the same code. Loc-aid.com is one of four other sites hosted on the same server as locationsmart.com, according to Domaintools.com.

LocationSmart’s privacy policy says the company has security measures in place “…to protect our site from the loss or misuse of information that we have collected. Our servers are protected by firewalls and are physically located in secure data facilities to further increase security. While no computer is 100% safe from outside attacks, we believe that the steps we have taken to protect your personal information drastically reduce the likelihood of security problems to a level appropriate to the type of information involved.”

But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors.

Although the LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.

API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.

In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”

“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”

Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”

Securus did not respond to requests for comment.

THE CARRIERS RESPOND

It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).

None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.

AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.

“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.

T-Mobile referred me to their privacy policy, which says T-Mobile follows the “best practices” document (PDF) for subscriber location data as laid out by the CTIA, the international association for the wireless telecommunications industry.

A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus and LocationSmart.

“We take the privacy and security of our customers’ data very seriously,” the company said in a written statement. “We have addressed issues that were identified with Securus and LocationSmart to ensure that such issues were resolved and our customers’ information is protected. We continue to investigate this.”

Verizon also referred me to their privacy policy.

Sprint officials shared the following statement:

“Protecting our customers’ privacy and security is a top priority, and we are transparent about our Privacy Policy. To be clear, we do not share or sell consumers’ sensitive information to third parties. We share personally identifiable geo-location information only with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?

Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.

But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third-parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.

“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”

Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.

“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”

“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”

For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.

“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.

Sen. Wyden issued a statement on Friday in response to this story:

“This leak, coming only days after the lax security at Securus was exposed, demonstrates how little companies throughout the wireless ecosystem value Americans’ security. It represents a clear and present danger, not just to privacy but to the financial and personal security of every American family. Because they value profits above the privacy and safety of the Americans whose locations they traffic in, the wireless carriers and LocationSmart appear to have allowed nearly any hacker with a basic knowledge of websites to track the location of any American with a cell phone.”

“The threats to Americans’ security are grave – a hacker could have used this site to know when you were in your house so they would know when to rob it. A predator could have tracked your child’s cell phone to know when they were alone. The dangers from LocationSmart and other companies are limitless. If the FCC refuses to act after this revelation then future crimes against Americans will be the commissioners’ heads.”


Sen. Mark Warner (D-Va.) also issued a statement:

“This is one of many developments over the last year indicating that consumers are really in the dark on how their data is being collected and used,” Sen. Warner said. “It’s more evidence that we need 21st century rules that put users in the driver’s seat when it comes to the ways their data is used.”

In a statement provided to KrebsOnSecurity on Friday, LocationSmart said:

“LocationSmart provides an enterprise mobility platform that strives to bring secure operational efficiencies to enterprise customers. All disclosure of location data through LocationSmart’s platform relies on consent first being received from the individual subscriber. The vulnerability of the consent mechanism recently identified by Mr. Robert Xiao, a cybersecurity researcher, on our online demo has been resolved and the demo has been disabled. We have further confirmed that the vulnerability was not exploited prior to May 16th and did not result in any customer information being obtained without their permission.”

“On that day as many as two dozen subscribers were located by Mr. Xiao through his exploitation of the vulnerability. Based on Mr. Xiao’s public statements, we understand that those subscribers were located only after Mr. Xiao personally obtained their consent. LocationSmart is continuing its efforts to verify that not a single subscriber’s location was accessed without their consent and that no other vulnerabilities exist. LocationSmart is committed to continuous improvement of its information privacy and security measures and is incorporating what it has learned from this incident into that process.”

It’s not clear who LocationSmart considers “customers” in the phrase, “did not result in any customer information being obtained without their permission,” since anyone whose location was looked up through abuse of the service’s buggy API could not fairly be considered a “customer.”

Update, May 18, 11:31 AM ET: Added comments from Sens. Wyden and Warner, as well as updated statements from LocationSmart and T-Mobile.

Sociological Images“I Felt Like Destroying Something Beautiful”

When I was eight, my brother and I built a card house. He was obsessed with collecting baseball cards and had amassed thousands, taking up nearly every available corner of his childhood bedroom. After watching a particularly gripping episode of The Brady Bunch, in which Marsha and Greg settled a dispute by building a card house, we decided to stack the cards in our favor and build. Forty-eight hours later a seven-foot monstrosity emerged…and it was glorious.

I told this story to a group of friends as I ran a stack of paper coasters through my fingers. We were attending Oktoberfest 2017 in a rural university town in the Midwest. They collectively decided I should flex my childhood skills and construct a coaster card house. Supplies were in abundance and time was no constraint. 

I began to construct. Four levels in, people around us began to take notice; a few snapped pictures. Six levels in, people began to stop, actively take pictures, and inquire as to my progress and motivation. Eight stories in, a small crowd emerged. Everyone remained cordial and polite. At this point it became clear that I was too short to continue building. In solidarity, one of my friends stood on a chair to encourage the build. We built the last three levels together, atop chairs, in the middle of the convention center. 

Where inquiries had been friendly in the early stages of building, the mood soon turned. The moment chairs were used to facilitate the building process was the moment nearly everyone in attendance began to take notice. As the final tier went up, objects began flying at my head. Although women remained cordial throughout, a fraction of the men in the crowd became more and more aggressive. Whispers of “I bet you $50 that you can’t knock it down” or “I’ll give you $20 if you go knock it down” were heard throughout. A man chatted with my husband, criticizing the structural integrity of the house and offering insight as to how his house would be better…if he were the one building. Finally, a group of very aggressive men began circling like vultures. One man chucked empty plastic cups from a few tables away. The card house was complete for a total of two minutes before it fell. The life of the tower ended as such: 

Man: “Would you be mad if someone knocked it down?”

Me: “I’m the one who built it so I’m the one who gets to knock it down.”

Man: “What? You’re going to knock it down?”

The man proceeded to punch the right side of the structure; a quarter of the house fell. Before he could strike again, I stretched out my arms knocking down the remainder. A small curtsey followed, as if to say thank you for watching my performance. There was a mixture of cheers and boos. Cheers, I imagine from those who sat in nearby tables watching my progress throughout the night. Boos, I imagine, from those who were denied the pleasure of knocking down the structure themselves.

As an academic it is difficult to remove my everyday experiences from research analysis.  Likewise, as a gender scholar the aggression displayed by these men was particularly alarming. In an era of #metoo, we often speak of toxic masculinity as enacting masculine expectations through dominance, and even violence. We see men in power, typically white men, abuse this very power to justify sexual advances and sexual assault. We even see men justify mass shootings and attacks based on their perceived subordination and the denial of their patriarchal rights.

Yet toxic masculinity also exists on a smaller scale, in men's everyday social worlds. Hegemonic masculinity is a more apt description for this destructive behavior, rather than outright violent behavior, as hegemonic masculinity describes a system of cultural meanings that gives men power — it is embedded in everything from religious doctrines, to wage structures, to mass media. As men learn hegemonic expectations by way of popular culture—from Humphrey Bogart to John Wayne—one cannot help but think of the famous line from the hyper-masculine Fight Club (1999), “I just wanted to destroy something beautiful.”

Power over women through hegemonic masculinity may best explain the actions of the men at Oktoberfest. Alcohol consumption at the event allowed men greater freedom to justify their destructive behavior. Daring one another to physically remove a product of female labor, and their surprise at a woman’s choice to knock the tower down herself, are both in line with this type of power over women through the destruction of something “beautiful”.

Physical violence is not always a key feature of hegemonic masculinity (Connell 1987: 184). When we view toxic masculinity on a smaller scale, away from mass shootings and other high-profile tragedies, we find a form of masculinity that embraces aggression and destruction in our everyday social worlds, but is often excused as being innocent or unworthy of discussion.

Sandra Loughrin is an Assistant Professor at the University of Nebraska at Kearney. Her research areas include gender, sexuality, race, and age.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowTalking education and technology with the Future Trends Forum

“Science fiction writer and cyberactivist Cory Doctorow joined the Future Trends Forum to explore possibilities for technology and education.”

CryptogramWhite House Eliminates Cybersecurity Position

The White House has eliminated the cybersecurity coordinator position.

This seems like a spectacularly bad idea.

Worse Than FailureImprov for Programmers: Inventing the Toaster

We always like to change things up a little bit here at TDWTF, and thanks to our sponsor Raygun, we've got a chance to bring you a little treat, or at least something a little different.

We're back with a new podcast, but this one isn't a talk show or storytelling format, or even a radio play. Remy rounded up some of the best comedians in Pittsburgh who were also in IT, and bundled them up to do some improv, using articles from our site and real-world IT news as inspiration. It's… it's gonna get weird.

Thanks to Erin Ross, Ciarán Ó Conaire, and Josh Cox for lending their time and voices to this project.

Music: "Happy Happy Game Show" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/

Raygun gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool. Not only does it make your applications better, with Raygun APM, it proactively identifies performance issues and builds a workflow for solving them. Raygun APM sorts through the mountains of data for you, surfacing the most important issues so they can be prioritized, triaged and acted on, cutting your Mean Time to Resolution (MTTR) and keeping your users happy.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaMichael Still: How to maintain a local mirror of ONAP’s git repositories


For various reasons, I like to maintain a local mirror of git repositories I use a lot, in this case ONAP. This is mostly because of the generally poor network connectivity in Australia, but its also because it makes cloning a new repository super fast.

Tony Breeds and I baked up a script to do this for OpenStack repositories a while ago. I therefore present a version of that mirror script which does the right thing for ONAP projects.

One important thing to note here that differs from OpenStack — ONAP projects aren’t named in a way where they will consistently sit in a directory structure together. For example, there is an “oom” repository, as well as an “oom/registrator” repository. We therefore need to normalise repository names on clone to ensure they don’t clobber each other — I do that by replacing path separators with underscores.

So here’s the script:

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

ONAP_GIT_BASE = 'ssh://mikal@gerrit.onap.org:29418'


def get_onap_projects():
    data = subprocess.check_output(
               ['ssh', 'gerrit.onap.org', 'gerrit',
                'ls-projects']).split('\n')
    for project in data:
        yield (ONAP_GIT_BASE, project,
               'onap/%s' % project.replace('/', '_'))


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = list(get_onap_projects())
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(os.path.abspath(starting_dir))

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

Note that your ONAP gerrit username probably isn’t “mikal”, so you might want to change that.

This script will checkout all ONAP git repositories into a directory named “onap” in your current working directory. A second run will add any new repositories, as well as updating the existing ones. Note that these are clones intended to be served with a local git server, instead of being clones you’d edit directly. To clone one of the mirrored repositories for development, you would then do something like:

$ git clone onap/aai_babel development/aai_babel

Or similar.


The post How to maintain a local mirror of ONAP’s git repositories appeared first on Made by Mikal.

,

Planet DebianJonathan McDowell: Home Automation: Raspberry Pi as MQTT temperature sensor

After setting up an MQTT broker I needed some data to feed it. It made sense to start basic and gradually build up bits and pieces that would form a bigger home automation setup. As it happened I have an old Raspberry Pi B (original rev 1 [2 if you look at /proc/cpuinfo] with 256MB RAM) and some DS18B20 1-Wire temperature sensors lying around, so I decided to make a heavyweight temperature sensor (long term I’m hoping to do something with some ESP8266s).

There are plenty of guides out there about hooking up the DS18B20 to the Pi; Adafruit has a reasonable one. The short version is that GPIO4 can be easily configured to be a 1-Wire bus and you hook the DS18B20 up with a 4k7Ω resistor across the data + 3v3 power pins. An initial check can be performed by enabling the DT overlay on the fly:

sudo dtoverlay w1-gpio

Detection of 1-Wire devices is automatic so you should see an entry in dmesg looking like:

w1_master_driver w1_bus_master1: Attaching one wire slave 28.012345678abcd crc ef

You can then do

$ cat /sys/bus/w1/devices/28-*/w1_slave
1e 01 4b 46 7f ff 0c 10 18 : crc=18 YES
1e 01 4b 46 7f ff 0c 10 18 t=17875

Which shows a current temperature of 17.875°C in my study. Once that’s working (and you haven’t swapped GND and DATA like I did on the first go) you can make the Pi boot up with 1-Wire enabled by adding a dtoverlay=w1-gpio line to /boot/config.txt. The next step is to get that fed into the MQTT broker. A simple Python client seemed like the right approach. Debian has paho-mqtt but sadly not in a stable release. Thankfully the python3-paho-mqtt 1.3.1-1 package in testing installed just fine on the Raspbian stretch image my Pi is running. I dropped the following in /usr/local/sbin/mqtt-temp:

#!/usr/bin/python3

import glob
import time
import paho.mqtt.publish as publish

Broker = 'mqtt-host'
auth = {
    'username': 'user2',
    'password': 'bar',
}

pub_topic = 'test/temperature'

base_dir = '/sys/bus/w1/devices/'
device_folder = glob.glob(base_dir + '28-*')[0]
device_file = device_folder + '/w1_slave'

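# Read the sensor via sysfs: only return a temperature (in °C) once the
# kernel's CRC line reports YES, otherwise return None.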
def read_temp():
    valid = False
    temp = 0
    with open(device_file, 'r') as f:
        for line in f:
            if line.strip()[-3:] == 'YES':
                valid = True
            temp_pos = line.find(' t=')
            if temp_pos != -1:
                temp = float(line[temp_pos + 3:]) / 1000.0

    if valid:
        return temp
    else:
        return None


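# Publish a reading to the broker over TLS roughly once a minute.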
while True:
    temp = read_temp()
    if temp is not None:
        publish.single(pub_topic, str(temp),
                hostname=Broker, port=8883,
                auth=auth, tls={})
    time.sleep(60)

And finished it off with a systemd unit file - I know a lot of people complain about systemd, but it really does make it easy to just spin up a minimal service as a unique non-privileged user. The following went in /etc/systemd/system/mqtt-temp.service:

[Unit]
Description=MQTT Temperature sensor
After=network.target

[Service]
# Hack because Python can't cope with a DynamicUser with no HOME
Environment="HOME=/"
ExecStart=/usr/local/sbin/mqtt-temp

DynamicUser=yes
MemoryDenyWriteExecute=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectControlGroups=true
RestrictRealtime=true
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Start it up and enable for subsequent reboots:

systemctl start mqtt-temp
systemctl enable mqtt-temp

And then watch on my Debian test box as before:

$ mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo
test/temperature 17.875
test/temperature 17.937
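
If nothing shows up on the broker, the unit’s journal is the obvious first place to look; this is just the standard systemd debugging step rather than anything specific to this setup:

journalctl -u mqtt-temp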

Planet DebianJonathan Carter: Video Channel Updates

Last month, I started doing something that I’ve been meaning to do for years, and that’s to start a video channel and make some free software related videos.

I started out uploading to my YouTube channel which has been dormant for a really long time, and then last week, I also uploaded my videos to my own site, highvoltage.tv. It’s a MediaDrop instance, a video hosting platform written in Python.

I’ll still keep uploading to YouTube, but ultimately I’d like to make my self-hosted site the primary source for my content. Not sure if I’ll stay with MediaDrop, but it does tick a lot of boxes, and if it’s easy enough to extend, I’ll probably stick with it. MediaDrop might also be a good platform for viewing the Debian meetings videos like the DebConf videos. 

My current topics are very much Debian related, but that doesn’t exclude any other types of content from being included in the future. Here’s what I have so far:

  • Video Logs: Almost like a blog, in video format.
  • Howto: Howto videos.
  • Debian Package of the Day: Exploring packages in the Debian archive.
  • Debian Package Management: Howto series on Debian package management, a precursor to a series that I’ll do on Debian packaging.
  • What’s the Difference: Just comparing 2 or more things.
  • Let’s Internet: Read stuff from Reddit, Slashdot, Quora, blogs and other media.

It’s still early days and there’s a bunch of ideas that I still want to implement, so the content will hopefully get a lot better as time goes on.

I also quit Facebook last month, so I dusted off my old Mastodon account and started posting there again: https://mastodon.xyz/@highvoltage

You can also subscribe to my videos via RSS: https://highvoltage.tv/latest.xml

Other than that I’m open to ideas, thanks for reading :)

Planet DebianShirish Agarwal: FOSS game community slump and question about getting images in palepeli

There is a thread in freegamedev.net which I have been following for the past few weeks.

In the back-and-forth there, I believe most of the arguments shared were somewhat wrong.

While we have AAA projects like 0ad and others, the mainstay of our games should be ones which don’t need high-quality textures and still do the work.

I have been looking at a Let’s Play playlist of an indie gem called ‘Dead in Vinland’.

Youtube playlist link – https://www.youtube.com/playlist?list=PL8eo1fAeRpyl47_TRuQrtkBn2bNRaZtlT

If you look at the game, it doesn’t have much in terms of animation apart from bits of role-playing in encounters, and is more oriented towards rolls of the dice.

The characters in the game are sort of cut-out characters, very much like the cut-out cardboard/paperboard characters that we used to play with as children.

Where the game innovates is more in terms of an expansive dialog-tree and at the most 100-200 images of the characters. It basically has a group of 4 permanent characters; if any one of them dies, the player is defeated.

If anybody has played rogue, or any of the games in the new Debian games-rogue task, they would know that FOSS offers a variety of stories even in the RPG world.

$ aptitude show games-rogue
Package: games-rogue
Version: 2.3
State: installed
Automatically installed: no
Priority: optional
Section: metapackages
Maintainer: Debian Games Team
Architecture: all
Uncompressed Size: 24.6 k
Depends: games-tasks (= 2.3)
Recommends: allure, angband, crawl, gearhead, gearhead2, hearse, hyperrogue, lambdahack, meritous, moria, nethack-x11, omega-rpg, slashem
Description: Debian's roguelike games
This metapackage will install dungeon crawling games in the spirit of Rogue.
Homepage: https://blends.debian.org/games/tasks/

I picked RPGs because those are the kind of games I have always liked, even turn-based ones, although I do hate that I can’t save-scum the way I can in most traditional RPGs.

Variety and Palapeli

I now turn my attention to a package called variety.

While looking at it, I also filed a wishlist bug for packaging the new version, which would fix a bug I reported about a month back. It was in the same conversation that I came to know that the upstream releaser and the downstream maintainer are one and the same.

I have also been talking with upstream about various features or what could be done to make variety better.

Now while that’s all well and good, and variety does a good job of being a wallpaper changer, I want the wallpapers that variety outputs to become the input of palapeli, but without all the manual intervention it currently requires. Variety does a pretty good job of giving good-quality wallpapers, and goes beyond that by saving the metadata of the images, unlike many online image services. To make things easier for myself, I made a directory called Variety in Pictures and copied everything from ~/.config/variety/Favorites by doing –

~/.config/variety/Favorites$ cp -r --preserve . /home/shirish/Pictures/Variety/

And this method works out fine. The --preserve option is essential, as can be seen from the cp manpage –

--preserve[=ATTR_LIST]
preserve the specified attributes (default: mode,ownership,timestamps), if possible additional
attributes: context, links, xattr, all

Even the metadata of the images is pretty good, as can be seen from any random picture –

~/Pictures/Variety$ exiftool 7527881664_024e44f8bf_o.jpg
ExifTool Version Number : 10.96
File Name : 7527881664_024e44f8bf_o.jpg
Directory : .
File Size : 1175 kB
File Modification Date/Time : 2018:04:09 03:56:15+05:30
File Access Date/Time : 2018:05:14 22:59:22+05:30
File Inode Change Date/Time : 2018:05:15 08:01:42+05:30
File Permissions : rw-r--r--
File Type : JPEG
File Type Extension : jpg
MIME Type : image/jpeg
Exif Byte Order : Little-endian (Intel, II)
Image Description :
Make : Canon
Camera Model Name : Canon EOS 5D
X Resolution : 240
Y Resolution : 240
Resolution Unit : inches
Software : Adobe Photoshop Lightroom 3.6 (Windows)
Modify Date : 2012:07:08 17:58:13
Artist : Peter Levi
Copyright : Peter Levi
Exposure Time : 1/50
F Number : 7.1
Exposure Program : Aperture-priority AE
ISO : 160
Exif Version : 0230
Date/Time Original : 2011:05:03 12:17:50
Create Date : 2011:05:03 12:17:50
Shutter Speed Value : 1/50
Aperture Value : 7.1
Exposure Compensation : 0
Max Aperture Value : 4.0
Metering Mode : Multi-segment
Flash : Off, Did not fire
Focal Length : 45.0 mm
User Comment :
Focal Plane X Resolution : 3086.925795
Focal Plane Y Resolution : 3091.295117
Focal Plane Resolution Unit : inches
Custom Rendered : Normal
Exposure Mode : Auto
White Balance : Auto
Scene Capture Type : Standard
Owner Name : Tsvetan ROUSTCHEV
Serial Number : 1020707385
Lens Info : 24-105mm f/?
Lens Model : EF24-105mm f/4L IS USM
XP Comment :
Compression : JPEG (old-style)
Thumbnail Offset : 908
Thumbnail Length : 18291
XMP Toolkit : XMP Core 4.4.0-Exiv2
Creator Tool : Adobe Photoshop Lightroom 3.6 (Windows)
Metadata Date : 2012:07:08 17:58:13+03:00
Lens : EF24-105mm f/4L IS USM
Image Number : 1
Flash Compensation : 0
Firmware : 1.1.1
Format : image/jpeg
Version : 6.6
Process Version : 5.7
Color Temperature : 4250
Tint : +8
Exposure : 0.00
Shadows : 2
Brightness : +65
Contrast : +28
Sharpness : 25
Luminance Smoothing : 0
Color Noise Reduction : 25
Chromatic Aberration R : 0
Chromatic Aberration B : 0
Vignette Amount : 0
Shadow Tint : 0
Red Hue : 0
Red Saturation : 0
Green Hue : 0
Green Saturation : 0
Blue Hue : 0
Blue Saturation : 0
Fill Light : 0
Highlight Recovery : 23
Clarity : 0
Defringe : 0
Gray Mixer Red : -8
Gray Mixer Orange : -17
Gray Mixer Yellow : -21
Gray Mixer Green : -25
Gray Mixer Aqua : -19
Gray Mixer Blue : +8
Gray Mixer Purple : +15
Gray Mixer Magenta : +4
Split Toning Shadow Hue : 0
Split Toning Shadow Saturation : 0
Split Toning Highlight Hue : 0
Split Toning Highlight Saturation: 0
Split Toning Balance : 0
Parametric Shadows : 0
Parametric Darks : 0
Parametric Lights : 0
Parametric Highlights : 0
Parametric Shadow Split : 25
Parametric Midtone Split : 50
Parametric Highlight Split : 75
Sharpen Radius : +1.0
Sharpen Detail : 25
Sharpen Edge Masking : 0
Post Crop Vignette Amount : 0
Grain Amount : 0
Color Noise Reduction Detail : 50
Lens Profile Enable : 1
Lens Manual Distortion Amount : 0
Perspective Vertical : 0
Perspective Horizontal : 0
Perspective Rotate : 0.0
Perspective Scale : 100
Convert To Grayscale : True
Tone Curve Name : Medium Contrast
Camera Profile : Adobe Standard
Camera Profile Digest : 9C14C254921581D1141CA0E5A77A9D11
Lens Profile Setup : LensDefaults
Lens Profile Name : Adobe (Canon EF 24-105mm f/4 L IS USM)
Lens Profile Filename : Canon EOS-1Ds Mark III (Canon EF 24-105mm f4 L IS USM) - RAW.lcp
Lens Profile Digest : 0387279C5E7139287596C051056DCFAF
Lens Profile Distortion Scale : 100
Lens Profile Chromatic Aberration Scale: 100
Lens Profile Vignetting Scale : 100
Has Settings : True
Has Crop : False
Already Applied : True
Document ID : xmp.did:8DBF8F430DC9E11185FD94228EB274CE
Instance ID : xmp.iid:8DBF8F430DC9E11185FD94228EB274CE
Original Document ID : xmp.did:8DBF8F430DC9E11185FD94228EB274CE
Source URL : https://www.flickr.com/photos/93647178@N00/7527881664
Source Type : flickr
Author : peter-levi
Source Name : Flickr
Image URL : https://farm9.staticflickr.com/8154/7527881664_024e44f8bf_o.jpg
Author URL : https://www.flickr.com/photos/93647178@N00
Source Location : user:www.flickr.com/photos/peter-levi/;user_id:93647178@N00;
Creator : Peter Levi
Rights : Peter Levi
Subject : paris, france, lyon, lyonne
Tone Curve : 0, 0, 32, 22, 64, 56, 128, 128, 192, 196, 255, 255
History Action : derived, saved
History Parameters : converted from image/x-canon-cr2 to image/jpeg, saved to new location
History Instance ID : xmp.iid:8DBF8F430DC9E11185FD94228EB274CE
History When : 2012:07:08 17:58:13+03:00
History Software Agent : Adobe Photoshop Lightroom 3.6 (Windows)
History Changed : /
Derived From :
Displayed Units X : inches
Displayed Units Y : inches
Current IPTC Digest : 044f85a540ebb92b9514cab691a3992d
Coded Character Set : UTF8
Application Record Version : 4
Date Created : 2011:05:03
Time Created : 12:17:50+03:00
Digital Creation Date : 2011:05:03
Digital Creation Time : 12:17:50+03:00
By-line : Peter Levi
Copyright Notice : Peter Levi
Headline : IMG_0779
Keywords : paris, france, lyon, lyonne
Caption-Abstract :
Photoshop Thumbnail : (Binary data 18291 bytes, use -b option to extract)
IPTC Digest : d8a6ddc0b5eacb05874d4b676f4cb439
Profile CMM Type : Linotronic
Profile Version : 2.1.0
Profile Class : Display Device Profile
Color Space Data : RGB
Profile Connection Space : XYZ
Profile Date Time : 1998:02:09 06:49:00
Profile File Signature : acsp
Primary Platform : Microsoft Corporation
CMM Flags : Not Embedded, Independent
Device Manufacturer : Hewlett-Packard
Device Model : sRGB
Device Attributes : Reflective, Glossy, Positive, Color
Rendering Intent : Perceptual
Connection Space Illuminant : 0.9642 1 0.82491
Profile Creator : Hewlett-Packard
Profile ID : 0
Profile Copyright : Copyright (c) 1998 Hewlett-Packard Company
Profile Description : sRGB IEC61966-2.1
Media White Point : 0.95045 1 1.08905
Media Black Point : 0 0 0
Red Matrix Column : 0.43607 0.22249 0.01392
Green Matrix Column : 0.38515 0.71687 0.09708
Blue Matrix Column : 0.14307 0.06061 0.7141
Device Mfg Desc : IEC http://www.iec.ch
Device Model Desc : IEC 61966-2.1 Default RGB colour space - sRGB
Viewing Cond Desc : Reference Viewing Condition in IEC61966-2.1
Viewing Cond Illuminant : 19.6445 20.3718 16.8089
Viewing Cond Surround : 3.92889 4.07439 3.36179
Viewing Cond Illuminant Type : D50
Luminance : 76.03647 80 87.12462
Measurement Observer : CIE 1931
Measurement Backing : 0 0 0
Measurement Geometry : Unknown
Measurement Flare : 0.999%
Measurement Illuminant : D65
Technology : Cathode Ray Tube Display
Red Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
Green Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
Blue Tone Reproduction Curve : (Binary data 2060 bytes, use -b option to extract)
DCT Encode Version : 100
APP14 Flags 0 : [14]
APP14 Flags 1 : (none)
Color Transform : YCbCr
Image Width : 1920
Image Height : 1280
Encoding Process : Baseline DCT, Huffman coding
Bits Per Sample : 8
Color Components : 3
Y Cb Cr Sub Sampling : YCbCr4:4:4 (1 1)
Aperture : 7.1
Date/Time Created : 2011:05:03 12:17:50+03:00
Digital Creation Date/Time : 2011:05:03 12:17:50+03:00
Image Size : 1920x1280
Megapixels : 2.5
Scale Factor To 35 mm Equivalent: 1.0
Shutter Speed : 1/50
Thumbnail Image : (Binary data 18291 bytes, use -b option to extract)
Circle Of Confusion : 0.030 mm
Field Of View : 43.5 deg
Focal Length : 45.0 mm (35 mm equivalent: 45.1 mm)
Hyperfocal Distance : 9.51 m
Lens ID : Canon EF 24-105mm f/4L IS USM
Light Value : 10.6

One of the things I found wanting is a licence field: I don’t know whether there is one for, say, CC-SA 3.0 (Creative Commons Share-Alike 3.0). Apart from that, it has everything you need to know about how a particular picture was taken.

I use the images I found as input in palapeli.

$ aptitude show palapeli
Package: palapeli
Version: 4:17.12.2-1
State: installed
Automatically installed: no
Priority: optional
Section: games
Maintainer: Debian/Kubuntu Qt/KDE Maintainers
Architecture: amd64
Uncompressed Size: 1,138 k
Depends: palapeli-data (>= 4:17.12.2-1), kio, libc6 (>= 2.14), libkf5archive5 (>= 4.96.0), libkf5completion5 (>= 4.97.0), libkf5configcore5 (>= 4.98.0), libkf5configgui5 (>= 4.97.0), libkf5configwidgets5 (>= 4.96.0), libkf5coreaddons5(>= 4.100.0), libkf5crash5 (>= 5.15.0), libkf5i18n5 (>= 4.97.0), libkf5itemviews5 (>= 4.96.0), libkf5kiowidgets5(>= 4.99.0), libkf5notifications5 (>= 4.96.0), libkf5service-bin, libkf5service5 (>= 4.99.0), libkf5widgetsaddons5 (>= 4.96.0), libkf5xmlgui5 (>= 4.98.0), libqt5core5a (>= 5.9.0~beta), libqt5gui5 (>=5.8.0), libqt5svg5 (>= 5.6.0~beta), libqt5widgets5 (>= 5.7.0~), libstdc++6 (>= 4.4.0)
Recommends: khelpcenter, qhull-bin
Description: jigsaw puzzle game
Palapeli is a jigsaw puzzle game. Unlike other games in that genre, you are not limited to aligning pieces on imaginary
grids. The pieces are freely moveable.

Palapeli is the Finnish word for jigsaw puzzle.

This package is part of the KDE games module.

Now to make a new puzzle for palapeli, it needs at least 3 pieces of information:

a. The name of the image. For many images you need to make one up, as photographers often are either not imaginative or do not have the time to give a meaningful, descriptive name. I have had an interesting discussion on a similar topic with the authors of palapeli. There is a lot to be done in getting descriptions right; at least in free software I don’t know of any way to process images so that descriptions can be worked out automatically.

b. A comment – This is optional, but if you have captured a memorable image, this is the best way to record it. I hate to bring up Bing wallpapers, but I have seen that they have some of the best descriptions of the kind I’m trying to describe –

shirish@debian:~/Pictures/Variety$ exiftool ManateeMom_EN-US9983570199_1920x1080.jpg | grep Caption-Abstract
Caption-Abstract : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)
shirish@debian:~/Pictures/Variety$ exiftool ManateeMom_EN-US9983570199_1920x1080.jpg | grep Comment
User Comment : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)
XP Comment :
Comment : West Indian manatee mom and baby at Three Sisters Springs, Florida (© James R.D. Scott/Getty Images)

c. Name of the Author – Many a time I just write either ‘nameless’ or ‘unknown’ as the author hasn’t taken any pains to share who they are.

The reason I have shared and talked about palapeli is that they are the only ones I know of who have done extensive work on unique slicers, so that the shapes of the puzzle pieces all come out different. But it’s still a trial-and-error method, as it doesn’t have a preview mode.

While there’s a lot that could be improved in both of the above projects, for the moment I would be happy just to have a script which can take the images as input, add or fill in details (although actual details would be best), and do the work.

I am sure that if I put my mind to it, I’ll probably find ways in which even exiftool can be improved, but I can’t thank the people who have made such a wonderful tool enough.

For instance, while looking at the metadata of quite a few pictures, I found that many people use arbitrary fields to say where the picture was shot: some use Headline, some use Comment or Description, while others use Subject. If somebody wanted to write a script it would be difficult, as there may be images which have all three fields, each (possibly) with different content whose context is known only to the author.
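
To make that concrete, here is a rough sketch in Python of the kind of helper script I have in mind: it shells out to exiftool’s JSON output and picks the first non-empty value from a list of candidate fields for the name, comment and author. The field lists are assumptions drawn from the pictures above, not a definitive mapping.

#!/usr/bin/python3
# Rough sketch: pull a usable name/comment/author out of image metadata
# by trying a list of candidate exiftool fields in order.

import json
import subprocess
import sys

# Candidate tags in order of preference; these are assumptions based on
# the images I have looked at, adjust to taste.
FIELDS = {
    'name': ['Headline', 'Title', 'ObjectName', 'FileName'],
    'comment': ['Caption-Abstract', 'Description', 'ImageDescription',
                'UserComment', 'Comment'],
    'author': ['Artist', 'Creator', 'By-line'],
}


def first_non_empty(metadata, candidates):
    for tag in candidates:
        value = metadata.get(tag)
        if value:
            return str(value)
    return 'unknown'


def describe(path):
    # exiftool -j prints a JSON array with one object per file.
    raw = subprocess.check_output(['exiftool', '-j', path])
    metadata = json.loads(raw.decode('utf-8'))[0]
    return {key: first_non_empty(metadata, tags)
            for key, tags in FIELDS.items()}


if __name__ == '__main__':
    for path in sys.argv[1:]:
        print(path, describe(path))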

The good thing is we have got the latest version on Debian testing –

$ exiftool -ver
10.96

Peace out.

Planet DebianJonathan Dowland: Imaging DVD-Rs, Step 2: Initial Import

This is part 2 in a series about a project to read/import a large collection of home-made optical media. Part 1 was Imaging DVD-Rs: Overview and Step 1; the summary page for the whole project is imaging discs.

Last time we prepared for the import by gathering all our discs together and organising storage for them in two senses: real-world (i.e. spindles) and a more future-proof digital storage system for the data, in my case, a NAS. This time we're actually going to read some discs. I suggest doing a quick first pass over your collection to image all the trouble-free discs (and identify the ones that are going to be harder to read). We will return to the troublesome ones in a later part.

For reading home-made optical discs, you could simply use cp:

cp /dev/sr0 disc-image.iso

This has the attraction of being a very simple solution but I don't recommend it, because of a lack of options for error handling. Instead I recommend using GNU ddrescue. It is designed to be fault tolerant and retries bad sectors in various ways to try and coax every last byte out of the medium. Crucially, a partially imported disc image can be further added to by subsequent runs of ddrescue, even on a separate computer.

For the first import, I recommend the suggested options from the ddrescue manual:

ddrescue -n -b2048 /dev/cdrom cdimage.iso cdimage.log

This will create a cdimage.iso file, hopefully containing your data, and a map file cdimage.log, describing what ddrescue managed to achieve. You should archive both!

This will either complete reasonably quickly (within one to two minutes), or will run potentially indefinitely. Once you've got a feel for how long a successful extraction takes, I'd recommend terminating any attempt that lasts much longer than that, and putting those discs to one side in a "needs attention" pile, to be re-attempted later. If ddrescue does finish, it will tell you if it couldn't read any of the disc. If so, put that disc in the "needs attention" pile too.
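
Should you want to give one of those discs a second chance straight away, ddrescue can pick up where it left off using the same image and map file; a follow-up pass with direct disc access and a few retries looks something like this (the exact options are a suggestion rather than gospel):

ddrescue -d -r3 -b2048 /dev/cdrom cdimage.iso cdimage.log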

commercially-made discs

Above, I wrote that I recommend this approach for home-made data discs. Broadly, I am assuming that such discs use a limited set of options and features available to disc authors: they'll either be single session, or multisession but you aren't interested in any files that are masked by later sessions; they won't be mixed mode (no Audio tracks); there won't be anything unusual or important stored in the disc metadata, title, or subcodes; etcetera.

This is not always the case for commercial discs, or audio CDs or video DVDs. For those, you may wish to recover more information than is available to you via ddrescue. These aren't my focus right now, so I don't have much advice on how to handle them, although I might in the future.

labelling and storing images

If your discs are labelled as poorly or inconsistently as mine, it might not be obvious what filename to give each disc image. For my project I decided to append a new label to all imported discs, something like "blahX", where X is an incrementing number. So, a disc labelled "my files" that happened to be the fifth one imported would get the image name my_files.blah5.iso. If you are keeping the physical discs after importing them, you could also mark the disc itself with "blah5".

where are we now

You should now have a pile of discs that you have successfully imported, a corresponding collection of disc image files/ddrescue log file pairs, and possibly a pile of "needs attention" discs.

In future parts, we will look at how to explore what's actually on the discs we have imaged: how to handle partially read or corrupted disc images; how to map the files on a disc to the sectors you have read, to identify which files are corrupted; and how to try to coax successful reads out of troublesome discs.

CryptogramAccessing Cell Phone Location Information

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

Worse Than FailureCodeSOD: Return of the Mask

Sometimes, you learn something new, and you suddenly start seeing it show up anywhere. The Baader-Meinhof Phenomenon is the name for that. Sometimes, you see one kind of bad code, and the same kind of bad code starts showing up everywhere. Yesterday we saw a nasty attempt to use bitmasks in a loop.

Today, we have Michele’s contribution, of a strange way of interacting with bitmasks. The culprit behind this code was a previous PLC programmer, even if this code wasn’t running straight on the PLC.

public static bool DecodeBitmask(int data, int bitIndex)
{
        var value = data.ToString();
        var padding = value.PadLeft(8, '0');
        return padding[bitIndex] == '1';
}

Take a close look at the parameters there- data is an int. That’s about what you’d expect here… but then we call data.ToString() which is where things start to break down. We pad that string out to 8 characters, and then check and see if a '1' happens to be in the spot we’re checking.

This, of course, defeats the entire purpose and elegance of bit masks, and worse, doesn’t end up being any more readable. Passing a number like 2 isn’t going to return true for any index.

Why does this work this way?

Well, let’s say you wanted a bitmask in the form 0b00000111. You might say, “well, that’s a 7”. What Michele’s predecessor said was, "that’s text… "00000111". But the point of bitmasks is to use an int to pass data around, so this developer went ahead and turned "00000111" into an integer by simply parsing it, creating the integer 111. But there’s no possible way to check if a certain digit is 1 or not, so we have to convert it back into a string to check the bitmask.
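
For contrast, a conventional bit test needs no string round-trip at all. Here is a quick sketch in Python rather than C#, using the usual convention that bit 0 is the least significant bit, which is not the left-to-right string indexing the original uses:

def decode_bitmask(data, bit_index):
    # True when the bit at position bit_index (0 = least significant) is set.
    return ((data >> bit_index) & 1) == 1

# 0b00000111 is 7: bits 0, 1 and 2 are set, bit 3 is not.
assert decode_bitmask(7, 0) and decode_bitmask(7, 2)
assert not decode_bitmask(7, 3)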

Unfortunately, the software is so fragile and unreliable that no one is willing to let the developers make any changes beyond “it’s on fire, put it out”.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

LongNowThe Role of Art in Addressing Climate Change: An interview with José Luis de Vicente

“Sounds super depressing,” she texted. “That’s why I haven’t gone. Sort of went full ostrich.”

That was my friend’s response when I asked her if she had attended Després de la fi del món (After the End of the World), the exhibition on the present and future of climate change at the Center of Contemporary Culture in Barcelona (CCCB).

Burying one’s head in the sand when it comes to climate change is a widespread impulse. It is, to put it brusquely, a bummer story — one whose drama is slow-moving, complex, and operating at planetary scale. The media, by and large, underreports it. Politicians who do not deny its existence struggle to coalesce around long-term solutions. And while a majority of people are concerned about climate change, few talk about it with friends and family.

Given all of this, it would seem unlikely that art, of all things, can make much of a difference in how we think about that story.

José Luis de Vicente, the curator of Després de la fi del món, believes that it can.

“The arts can play a role of fleshing out social scenarios showing that other worlds are possible, and that we are going to be living in them,” de Vicente wrote recently. “Imagining other forms of living is key to producing them.”

Scenes from “After the End of the World.” Via CCCB.

The forms of living on display at Després de la fi del món are an immersive, multi-sensory confrontation. The show consists of nine scenes, each a chapter in a spatial essay on the present and future of the climate crisis by some of the foremost artists and thinkers contemplating the implications of the anthropocene.

“Mitigation of Shock” by Superflux. Via CCCB.

In one, I find myself in a London apartment in the year 02050.¹ The familiar confines of cookie-cutter IKEA furniture give way to an unsettling feeling as the radio on the kitchen counter speaks of broken food supply chains, price hikes, and devastating hurricanes. A newspaper on the living room table asks “HOW WILL WE EAT?” The answer is littered throughout the apartment, in the form of domestic agriculture experiments glowing under purple lights, improvised food computers, and recipes for burgers made out of flies.

“Overview” by Benjamin Grant. Via Daily Overview.

In another, I am surrounded by satellite imagery of the Earth that reveals the beauty of human-made systems and their impact on the planet.

“Win><Win” by Rimini Protokoll. Via CCCB.

The most radical scene, Rimini Protokoll’s “Win><Win,” is one de Vicente has asked me not to disclose in detail, so as to not ruin the surprise when Després de la fi del món goes on tour in the United Kingdom and Singapore. All I can say is that it has something to do with jellyfish, and that it is one of the most remarkable pieces of interactive theater I have ever seen.

A “decompression chamber” featuring philosopher Timothy Morton. Via CCCB.

Visitors transition between scenes via waiting rooms that de Vicente describes as “decompression chambers.” In each chamber, the Minister Of The Future, played by philosopher Timothy Morton, frames his program. The Minister claims to represent the interests of those who cannot exert influence on the political process, either because they have not yet been born, or because they are non-human, like the Great Barrier Reef.

“Aerocene” by Tomás Seraceno. Via Aerocene Foundation.

A key thesis of Després de la fi del món is that knowing the scientific facts of climate change is not enough to adequately address its challenges. One must be able to feel its emotional impact, and find the language to speak about it.

My fear—and the reason I go “full ostrich”—has long been that such a feeling would come about only once we experience climate change’s deleterious effects as an irrevocable part of daily life. My hope, after attending the exhibition and speaking with José Luis de Vicente, is that it might come, at least in part, through art.


“This Civilization is Over. And Everybody Knows It.”

The following interview has been edited for length and clarity.

AHMED KABIL: I suspect that for a lot of us, when we think about climate change, it seems very distant — both in terms of time and space. If it’s happening, it’s happening to people over there, or to people in the future; it’s not happening over here, or right now. The New York Times, for example, published a story finding that while most in the United States think that climate change will harm Americans, few believe that it will harm them personally. One of the things that I found most compelling about Després de la fi del món was how the different scenes of the exhibition made climate change feel much more immediate. Could you say a little bit about how the show was conceived and what you hoped to achieve?

José Luis de Vicente. Photo by Ahmed Kabil.

JOSÉ LUIS DE VICENTE: We wanted the show to be a personal journey, but not necessarily a cohesive one. We wanted it to be like a hallucination, like the recollection of a dream where you’re picking up the pieces here and there.

We didn’t want to do a didactic, encyclopedic show on the science and challenge of climate change. Because that show has been done many, many times. And also, we thought the problem with the climate crisis is not a problem of information. We don’t need to be told more times things that we’ve been told thousands of times.

“Unravelled” by Unknown Fields Division. Via CCCB.

We wanted something that would address the elephant in the room. And the elephant in the room for us was: if this is the most important crisis that we face as a species today, if it transcends generations, if this is going to be the background crisis of our lives, why don’t we speak about it? Why don’t we know how to relate to it directly? Why does it not lead newspapers in five columns when we open them in the morning? That emotional distance was something that we wanted to investigate.

One of the reasons that distance happens is because we’re living in a kind of collective trauma. We are still in the denial phase of that trauma. The metaphor I always like to use is, our position right now is like the one you’re in when you go to the doctor, and the doctor gives you a diagnosis saying that actually, there’s a big, big problem, and yet you still feel the same. You don’t feel any different after being given that piece of news, but at the same time intellectually you know at that point that things are never going to be the same. That’s where we are collectively when it comes to climate change. So how do we transition out of this position of trauma to one of empathy?

“Win><Win” by Rimini Protokoll. Via CCCB.

We also wanted to look at why this was politically an unmanageable crisis. And there’s two reasons for that. One is because it’s a political message no politician will be able to channel into a marketable idea, which is: “We cannot go on living the way we live.” There is no political future for any way you market that idea.

The other is—and Timothy Morton’s work was really influential in this idea—the notion that: “What if simply our senses and communicative capacities are not tuned to understanding the problem because it moves in a different resolution, because it proceeds on a scale that is not the scale of our senses?”

Morton’s notion of the hyper-object—this idea that there are things that are too big and move too slow for us to see—was very important. The title of the show comes from the title of his book Hyperobjects: An Ecology of Nature After the End of the World (02013).

AHMED KABIL: One of the recent instances of note where climate change did make front-page news was the 02015 Paris Agreement. In Després de la fi del món, the Paris Agreement plays a central role in framing the future of climate change. Why?

JOSÉ LUIS DE VICENTE: If we follow the Paris Agreement to its final consequences, what it’s saying is that, in order to prevent global temperature from rising from 3.6 to 4.8 median degrees Celsius by the end of the 21st century, we have to undertake the biggest transformation that we’ve ever done. And even doing that will mean that we’re only halfway to our goal of having global temperatures not rise more than 2 degrees, ideally 1.5, and we’re already at 1 degree. So that gives a sense of the challenge. And we need to do it for the benefit of the humans and non-humans of 02100, who don’t have a say in this conversation.

“Overview” by Benjamin Grant. Via CCCB.

There are two possibilities here: either we make the goals of the Paris Agreement—the bad news here being that this problem is much, much bigger than just replacing fossil fuels with renewable energies. The Tesla way of going at it, of replacing every car in the world with a Tesla—the numbers just don’t add up. We’re going to have to rethink most systems in society to make this a possibility. That’s possibility number one.

Possibility number two: if we don’t make the goals of the Paris Agreement, we know that there’s no chance that life in the end of the 21st century is going to look remotely similar to today. We know that the kind of systemic crises we have are way more serious than the ones that would allow essential normalcy as we understand it today. So whether we make the goals of the Paris Agreement or not, there is no way that life in the second part of the 21st century looks as it does today.

That’s why we open the exhibition with McKenzie Wark’s quote.

“This civilization is over. And everybody knows it.” — McKenzie Wark

This civilization is over, not in the apocalyptic sense that the end of the world is coming, but that the civilization we built from the mid-nineteenth century onward on this capacity of taking fossil fuels out of the Earth and turning that into a labor force and turning that into an equation of “growth equals development equals progress” is just not sustainable.

“Environmental Health Clinic” by Natalie Jeremijenko. Via CCCB.

So with all these reference points, the show asks: What does it mean to understand this story? What does it mean to be citizens acknowledging this reality? What are possible scenes that look at either aspects of the anthropocene planet today or possible post-Paris futures?

This show should mean different things for you whether you’re fifty-five or you’re twelve. Because if you’re fifty-five, these are all hypothetical scenarios for a world that you’re not going to see. But if you’re twelve this is the world that you’re going to grow up into.

02100 may seem very far away, but the people who will see the world of 02100 are already born.

AHMED KABIL: What role will technology play in our climate change future?

JOSÉ LUIS DE VICENTE: Technology will, of course, play a role, but I think we have to be non-utopian about what that role will be.

The climate crisis is not a technological or socio-cultural or political problem; it’s all three. So the problem can only be solved at the three axes. The one that I am less hopeful about is the political axis, because how do we do it? How do we break that cycle of incredibly short-term incentives built into the political power structure? How do we incorporate the idea of: “Okay, what you want as my constituent is not the most important thing in the world, so I cannot just give you what you want if you vote for me and my position of power.” Especially when we’re seeing the collapse of systems and mechanisms of political representation.

“Sea State 9: Proclamation” by Charles Lim. Via CCCB.

I want to believe—and I’m not a political scientist—that huge social transformations translate to political redesigns, in spite of everything. I’m not overly optimistic or utopian about where we are right now. But our capacity to coalesce and gather around powerful ideas that transmit very easily to the masses allows for shifts of paradigm better than previously. Not only good ones, but bad ones as well.

AHMED KABIL: Is there a case for optimism on climate change?

JOSÉ LUIS DE VICENTE: I cannot be optimistic looking at the data on the table and the political agendas, but I am in the sense of saying that incredible things are happening in the world. We’re witnessing a kind of political awakening. These huge social shifts can happen at any moment.

And I think, for instance, that the fossil fuel industry knows that it’s the end of the party. What we’re seeing now is their awareness that their business model is not going to be viable for much longer. And obviously neither Putin nor Trump are good news for the climate, but nevertheless these huge shifts are coming.

“Mitigation of Shock” by Superflux. Via CCCB.

Kim Stanley Robinson always mentions this “pessimism of the intellect, optimism of the will.” I think that’s where you need to be, knowing that big changes are possible. Of course, I have no utopian expectations about it—this is going to be the backstory for the rest of our lives and we’re going to have traumatic, sad things happening because they’re already happening. But I’m quite positive that the world will definitely not look like this one in many aspects, and many things that big social revolutions in the past tried to make possible will be made possible.

If this show has done anything I hope it’s made a small contribution in answering the question of how we think about the future of climate change, how we talk about it, and how we understand what it means. We have to exist on timescales more expansive than the tiny units of time of our lives. We have to think of the world in ways that are non-anthropocentric. We have to think that the needs and desires of the humans of now are not the only thing that matters. That’s a huge philosophical revolution. But I think it’s possible.


Notes

[1] The Long Now Foundation uses five digit dates to serve as a reminder of the time scale that we endeavor to work in. Since the Clock of the Long Now is meant to run well past the Gregorian year 10,000, the extra zero is to solve the deca-millennium bug which will come into effect in about 8,000 years.

Learn More

  • Stay updated on the After The End of The World exhibition.
  • Read The Guardian’s 02015 profile of Timothy Morton.
  • Watch Benjamin Grant’s upcoming Seminar About Long-Term Thinking, “Overview: Earth and Civilization in the Macroscope.”
  • Watch Kim Stanley Robinson’s 02016 talk at The Interval At Long Now on how climate will evolve government and society.
  • Read José Luis de Vicente’s interview with Kim Stanley Robinson.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, April 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In April, about 183 work hours have been dispatched among 13 paid contributors. Their reports are available:

  • Abhijith PA did 5 hours (out of 10 hours allocated, thus keeping 5 extra hours for May).
  • Antoine Beaupré did 12h.
  • Ben Hutchings did 17 hours (out of 15h allocated + 2 remaining hours).
  • Brian May did 10 hours.
  • Chris Lamb did 16.25 hours.
  • Emilio Pozuelo Monfort did 11.5 hours (out of 16.25 hours allocated + 5 remaining hours, thus keeping 9.75 extra hours for May).
  • Holger Levsen did nothing (out of 16.25 hours allocated + 16.5 hours remaining, thus keeping 32.75 extra hours for May). He did not get hours allocated for May and is expected to catch up.
  • Hugo Lefeuvre did 20.5 hours (out of 16.25 hours allocated + 4.25 remaining hours).
  • Markus Koschany did 16.25 hours.
  • Ola Lundqvist did 11 hours (out of 14 hours allocated + 9.5 remaining hours, thus keeping 12.5 extra hours for May).
  • Roberto C. Sanchez did 7 hours (out of 16.25 hours allocated + 15.75 hours remaining, but immediately gave back the 25 remaining hours).
  • Santiago Ruano Rincón did 8 hours.
  • Thorsten Alteholz did 16.25 hours.

Evolution of the situation

The number of sponsored hours did not change. But a few sponsors interested in having more than 5 years of support should join LTS next month since this was a pre-requisite to benefit from extended LTS support. I did update Freexian’s website to show this as a benefit offered to LTS sponsors.

The security tracker currently lists 20 packages with a known CVE and the dla-needed.txt file lists 16. At two weeks from Wheezy’s end-of-life, the number of open issues is close to an historical low.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianNorbert Preining: Specification and Verification of Software with CafeOBJ – Part 3 – First steps with CafeOBJ

This blog continues Part 1 and Part 2 of our series on software specification and verification with CafeOBJ.

We will go through basic operations like starting and stopping the CafeOBJ interpreter, getting help, doing basic computations.

Starting and leaving the interpreter

If CafeOBJ is properly installed, a call to cafeobj will greet you with information about the current version of CafeOBJ, as well as build dates and which build system has been used. The following is what is shown on my Debian system with the latest version of CafeOBJ installed:

$ cafeobj
-- loading standard prelude

            -- CafeOBJ system Version 1.5.7(PigNose0.99) --
                   built: 2018 Feb 26 Mon 6:01:31 GMT
                         prelude file: std.bin
                                  ***
                      2018 Apr 19 Thu 2:20:40 GMT
                            Type ? for help
                                  ***
                  -- Containing PigNose Extensions --
                                  ---
                             built on SBCL
                             1.4.4.debian
CafeOBJ>

After the initial information there is the prompt CafeOBJ> indicating that the interpreter is ready to process your input. By default several files (the prelude, as it is called above) are loaded, which define certain basic sorts and operations.

If you have had enough of playing around, simply press Ctrl-D (the Control key and d at the same time), or type in quit:

CafeOBJ> quit
$

Getting help

Besides the extensive documentation available at the website (reference manual, user manual, tutorials, etc), the reference manual is also always at your fingertips within the CafeOBJ interpreter using the ? group of commands:

  • ? – gives general help
  • ?com class – shows available commands classified by ‘class’
  • ? name – gives the reference manual entry for name
  • ?ex name – gives examples (if available) for name
  • ?ap name – (apropos) searches the reference manual for appearances of name

To give an example of the usage, let us search for the term operator and then look at the documentation concerning one of the results:

CafeOBJ> ?ap op
Found the following matches:
 . `:theory <name> : <arity> -> <coarity> { assoc | comm | id: <term> }`
...
 . `op <name> : <arity> -> <coarity> { <attribute-list> }`
 . on-the-fly declaration
...

CafeOBJ> ? op
`op <name> : <arity> -> <coarity> { <attribute-list> }`
Defines an operator by its domain, co-domain, and the term construct.
`<arity>` is a space separated list of sort names, `<coarity>` is a
single sort name.
...

I have shortened the output a bit, as indicated by ....

Simple computations

By default, CafeOBJ is just a barren landscape, meaning that there are no rules or axioms active. Everything is encapsulated into so called modules (which in mathematical terms are definitions of order-sorted algebras). One of these modules is NAT which allows computations in the natural numbers. To activate a module we use open:

CafeOBJ> open NAT .
...
%NAT>

The ... again indicate quite some output of the CafeOBJ interpreter loading additional files.

There are two things to note in the above:

  • One finishes a command with a literal dot . – this is necessary due to the completely free syntax of the CafeOBJ language and indicates the end of a statement, similar to semicolons in other programming languages.
  • The prompt has changed to %NAT> to indicate that the playground (context) we are currently working in is the natural numbers.

To actually carry out computations we use the command red or reduce. Recall from the previous post that the computational model of CafeOBJ is rewriting, and in this setting reduce means kicking off the rewrite process. Let us do this for a simple computation:

%NAT> red 2 + 3 * 4 .
-- reduce in %NAT : (2 + (3 * 4)):NzNat
(14):NzNat
(0.0000 sec for parse, 0.0000 sec for 2 rewrites + 2 matches)

%NAT>

Things to note in the above output:

  • Correct operator precedence: CafeOBJ correctly computes 14 due to the proper use of operator precedence. If you want to override the parsing you can use additional parentheses.
  • CafeOBJ even gives a sort (or type) information for the return value: (14):NzNat, indicating that the return value of 14 is of sort NzNat, which refers to non-zero natural numbers.
  • The interpreter tells you how much time it spent in parsing and rewriting.

If we have had enough of this playground, we close the opened module with close, which returns us to the original prompt:

%NAT> close .
CafeOBJ>

Now if you think this is not so interesting, let us do some more fun things, like computations with rational numbers, which are provided by CafeOBJ in the RAT module. Rational numbers can be written as slashed expressions: a / b. If we don’t want to actually reduce a given expression, we can use parse to tell CafeOBJ to parse the next expression and give us the parsed expression together with a sort:

CafeOBJ> open RAT .
...
%RAT> parse 3/4 .
(3/4):NzRat
%RAT>

Again, CafeOBJ correctly determined that the given value is a non-zero rational number. More complex expressions can be parsed the same way, as well as reduced to a minimal representation:

%RAT> parse 2 / ( 4 * 3 ) .
(2 / (4 * 3)):NzRat

%RAT> red 2 / ( 4 * 3 ) .
-- reduce in %RAT : (2 / (4 * 3)):NzRat
(1/6):NzRat
(0.0000 sec for parse, 0.0000 sec for 2 rewrites + 2 matches)

%RAT>

NAT and RAT are not the only built-in sorts, there are several more, and others can be defined easily (see next blog). The currently available data types, together with their sort order (recall that we are in order sorted algebras, so one sort can contain others):
NzNat < Nat < NzInt < Int < NzRat < Rat
which refer to non-zero natural numbers, natural numbers, non-zero integers, integers, non-zero rational numbers, rational numbers, respectively.

Then there are other data types unrelated (not ordered) to any other:
Triv, Bool, Float, Char, String, 2Tuple, 3Tuple, 4Tuple.

Functions

CafeOBJ does not have functions in the usual sense, but operators defined via their arity and a set of (rewriting) equations. Let us take a look at two simple functions in the natural numbers: square, which takes one argument and returns the square of it, and a function sos, which takes two arguments and returns the sum of the squares of the arguments. In mathematical writing: square(a) = a * a and sos(a,b) = a*a + b*b.

This can be translated into CafeOBJ code as follows (from now on I will be leaving out the prompts):

open NAT .
vars A B : Nat
op square : Nat -> Nat .
eq square(A) = A * A .
op sos : Nat Nat -> Nat .
eq sos(A, B) = A * A + B * B .

This first declares two variables A and B to be of sort Nat (note that the module names and sort names are not the same, but the module names are usually the uppercase of the sort names). Then the operator square is introduced by providing its arity. In general an operator can have several input variables, and for each of them as well as the return value we have to provide the sorts:

  op  NAME : Sort1 Sort2 ... -> Sort

defines an operator NAME with several input parameters of the given sorts, and the return sort Sort.

The next line gives one (the only necessary) equation governing the computation rules of square. Equations are introduced by eq (and some variants of it), followed by an expression, an equal sign, and another expression. This indicates that CafeOBJ may rewrite the left expression to the right expression.

In our case we told CafeOBJ that it may rewrite an expression of the form square(A) to A * A, where A can be anything of sort Nat (for now we don't go into the details of how order-sorted rewriting works in general).

The next two lines do the same for the operator sos.

Having this code in place, we can easily do computations with it by using the already introduced reduce command:

red square(10) .
-- reduce in %NAT : (square(10)):Nat
(100):NzNat

red sos(10,20) .
-- reduce in %NAT : (sos(10,20)):Nat
(500):NzNat

What to do if one equation does not suffice? Let us look at a typical recursive definition of the sum of natural numbers: sum(0) = 0 and for a > 0 we have sum(a) = a + sum(a-1). This can be easily translated into CafeOBJ as follows:

open NAT .
op sum : Nat -> Nat .
eq sum(0) = 0 .
eq sum(A:NzNat) = A + sum(p A) .
red sum(10) .

where p (for predecessor) indicates the next smaller natural number. This operator is only defined on non-zero natural numbers, though.

In the above fragment we also see a new style of declaring variables, on the fly: The first occurrence of a variable in an equation can carry a sort declaration, which extends all through the equation.

Running the above code we get, not surprisingly, 55; in particular:

-- reduce in %NAT : (sum(10)):Nat
(55):NzNat
(0.0000 sec for parse, 0.0000 sec for 31 rewrites + 41 matches)

As a challenge, the reader might try to give definitions of the factorial function and the Fibonacci function; the next blog will present solutions for them.


This concludes this part. In the next part we will look at defining modules (aka algebras aka theories) and use them to define lists.

Planet DebianEnrico Zini: Starting user software in X

There are currently many ways of starting software when a user session starts.

This is an attempt to collect a list of pointers to piece the big picture together. It's partial and some parts might be imprecise or incorrect, but it's a start, and I'm happy to keep it updated if I receive corrections.

x11-common

man xsession

  • Started by the display manager, for example /usr/share/lightdm/lightdm.conf.d/01_debian.conf or /etc/gdm3/Xsession
  • Debian specific
  • Runs scripts in /etc/X11/Xsession.d/
  • /etc/X11/Xsession.d/40x11-common_xsessionrc sources ~/.xsessionrc which can do little more than set env vars, because it is run at the beginning of X session startup
  • At the end, it starts the session manager (gnome-session, xfce4-session, and so on)

systemd --user

  • https://wiki.archlinux.org/index.php/Systemd/User
  • Started by pam_systemd, so it might not have a DISPLAY variable set in the environment yet
  • Manages units in:
    • /usr/lib/systemd/user/ where units provided by installed packages belong.
    • ~/.local/share/systemd/user/ where units of packages that have been installed in the home directory belong.
    • /etc/systemd/user/ where system-wide user units are placed by the system administrator.
    • ~/.config/systemd/user/ where the users put their own units.
  • A trick to start a systemd user unit when the X session has been set up and the DISPLAY variable is available, is to call systemctl start from a .desktop autostart file.
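
A minimal sketch of that autostart trick, assuming a user unit called mysession-extras.service (the unit name is made up); drop something like this into ~/.config/autostart/start-mysession-extras.desktop:

[Desktop Entry]
Type=Application
Name=Start mysession-extras user unit
Exec=systemctl --user start mysession-extras.service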

dbus activation

X session manager

xdg autostart

Other startup notes

~/.Xauthority

To connect to an X server, a client needs to send a token from ~/.Xauthority, which proves that they can read the user's private data.

~/.Xauthority contains a token generated by the display manager and communicated to X at startup.

To view its contents, use xauth -i -f ~/.Xauthority list

CryptogramSending Inaudible Commands to Voice Assistants

Researchers have demonstrated the ability to send inaudible commands to voice assistants like Alexa, Siri, and Google Assistant.

Over the last two years, researchers in China and the United States have begun demonstrating that they can send hidden commands that are undetectable to the human ear to Apple's Siri, Amazon's Alexa and Google's Assistant. Inside university labs, the researchers have been able to secretly activate the artificial intelligence systems on smartphones and smart speakers, making them dial phone numbers or open websites. In the wrong hands, the technology could be used to unlock doors, wire money or buy stuff online ­-- simply with music playing over the radio.

A group of students from University of California, Berkeley, and Georgetown University showed in 2016 that they could hide commands in white noise played over loudspeakers and through YouTube videos to get smart devices to turn on airplane mode or open a website.

This month, some of those Berkeley researchers published a research paper that went further, saying they could embed commands directly into recordings of music or spoken text. So while a human listener hears someone talking or an orchestra playing, Amazon's Echo speaker might hear an instruction to add something to your shopping list.

Worse Than FailureCodeSOD: A Bit Masked

The “for-case” or “loop-switch” anti-pattern creates some hard to maintain code. You know the drill: the first time through the loop, do one step, the next time through the loop, do a different step. It’s known as the “Anti-Duff’s Device”, which is a good contrast: Duff’s Device is a clever way to unroll a loop and turn it into a sequential process, while the “loop-switch” takes a sequential process and turns it into a loop.

Ashlea inherited an MFC application. It was worked on by a number of developers in Germany, some of whom used English to name identifiers and some of whom used German, creating a new language called “Deunglish”. Or “Engleutch”? Whatever you call it, Ashlea has helpfully translated all the identifiers into English for us.

Buried deep in a thousand-line “do everything” method, there’s this block:

if(IS_SOMEFORMATNAME()) //Mantis 24426
{
  if(IS_CONDITION("RELEASE_4"))
  {
    m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL="";
    CString strKey;
    for (unsigned int i=1; i<16; i++) // Test all combinations
    {
      strKey="#W#X#Y#Z";
      if(i & 1)
        strKey.Replace("#W", m_strActualPromo);  // MANTIS 45587: Search with and without promotion code
      if(i & 2)
        strKey.Replace("#X",m_BAR.m_TLC_FIELDS.OBJECTCODE);
      if(i & 4)
        strKey.Replace("#Y",TOKEN(strFoo,H_BAZCODE));
      if(i & 8)
        strKey.Replace("#Z",TOKEN(strFoo,H_CHAIN));

      strKey.Replace("#W","");
      strKey.Replace("#X","");
      strKey.Replace("#Y","");
      strKey.Replace("#Z","");

      if(m_lDistributionchannel.GetFirst(strKey))
      {
        m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL="R";
        break;
      }
    }
  }
  else
    m_BAR.m_TLC_FIELDS.DISTRIBUTIONCHANNEL=m_lDistributionchannel.GetFirstLine(m_BAR.m_TLC_FIELDS.OBJECTCODE+m_strActualPromo);
}

Here, we see a rather unique approach to the for-case: using bitmasks to combine steps on each iteration of the loop. From what I can tell, they have four things which can combine to make an identifier, but which might get combined in many different ways. So they try every possible combination, and if a matching entry exists, they set the DISTRIBUTIONCHANNEL field.

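Stripped of the string templating, the loop is enumerating every non-empty subset of four key fragments and probing a lookup table with each concatenation. Here’s a minimal sketch of that idea in standard C++ (std::string and std::set stand in for the MFC CString and the m_lDistributionchannel container; the function and parameter names are invented):

#include <set>
#include <string>

// Sketch only: probe the lookup with every non-empty combination of the four
// key parts, preserving the original #W#X#Y#Z ordering.
bool hasDistributionChannel(const std::set<std::string>& channels,
                            const std::string& promo,
                            const std::string& objectCode,
                            const std::string& bazCode,
                            const std::string& chain)
{
    const std::string parts[4] = { promo, objectCode, bazCode, chain };
    for (unsigned mask = 1; mask < 16; ++mask) {      // the 15 non-empty subsets
        std::string key;
        for (unsigned bit = 0; bit < 4; ++bit)
            if (mask & (1u << bit))
                key += parts[bit];
        if (channels.count(key))                      // original code then sets DISTRIBUTIONCHANNEL = "R"
            return true;
    }
    return false;
}
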
The original is ugly and awful, and certainly a WTF, but honestly, that’s not what leapt out to me. It was this line:

if(IS_CONDITION("RELEASE_4"))

It’s quite clear that, as new versions of the software were released, they needed to control which features were enabled and which weren’t. This is probably related to a database, and thus the database may or may not be upgraded to the same release version as the code. So scattered throughout the code are checks like this, which enable blocks of code at runtime based on which versions match with these flags.

Debugging that must be a joy.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianWouter Verhelst: Digitizing my DVDs

I have a rather sizeable DVD collection. The database that I created of them a few years back after I'd had a few episodes where I accidentally bought the same movie more than once claims there's over 300 movies in the cabinet. Additionally, I own a number of TV shows on DVD, which, if you count individual disks, will probably end up being about the same number.

A few years ago, I decided that I was tired of walking to the DVD cabinet, taking out a disc, and placing it in the reader. That instead, I wanted to digitize them and use kodi to be able to watch a movie whenever I felt like it. So I made some calculations, and came up with a system with enough storage (on ZFS, of course) to store all the DVDs without needing to re-encode them.

I got started on ripping most of the DVDs using dvdbackup, but it quickly became apparent that I'd made a miscalculation; where I thought that most of the DVDs would be 4.7G ones, it turns out that most commercial DVDs are actually of the 9G type. Come to think of it, that does make a lot of sense. Additionally, now that I had a home server that had some significant redundant storage, I found that I had some additional uses for such things. The storage that I had, vast enough though it may be, wouldn't suffice.

So, I gave this some more thought, but then life interfered and nothing happened for a few years.

Recently however, I've picked it up again, changing my workflow. I started using handbrake to re-encode the DVDs so they wouldn't take up quite so much space; having chosen VP9 as my preferred codec, I end up storing the DVDs as about 1 to 2 G per main feature, rather than the 8 to 9 that it used to be -- a significant gain. However, my first workflow wasn't very efficient; I would run the handbrake GUI from my laptop on ssh -X sessions to multiple machines, encoding the videos directly from DVD that way. That worked, but it meant I couldn't shut down my laptop to take it to work without interrupting work that was happening; also, it meant that if a DVD finished encoding in the middle of the night, I wouldn't be there to replace it, so the system would be sitting idle for several hours. Clearly some form of improvement was necessary if I was going to do this in any reasonable amount of time.

So after fooling around a bit, I came up with the following:

  • First, I use dvdbackup -M -r a to read the DVD without re-encoding anything. This can be done at the speed of the optical medium, and can therefore be done much more efficiently than to use handbrake directly from the DVD. The -M option tells dvdbackup to read everything from the DVD (to make a mirror of it, in effect). The -r a option tells dvdbackup to abort if it encounters a read error; I found that DVDs sometimes can be read successfully if I eject the drive and immediately reinsert it, or if I give the disk another clean, or even just try again in a different DVD reader. Sometimes the disk is just damaged, and then using dvdbackup's default mode of skipping the unreadable blocks makes sense, but not in a first attempt.
  • Then, I run a small little perl script that I wrote. It basically does two things:

    1. Run HandBrakeCLI -i <dvdbackup output> --previews 1 -t 0, parse its stderr output, and figure out what the first and the last titles on the DVD are.
    2. Run qsub -N <movie name> -v FILM=<dvdbackup output> -t <first title>-<last title> convert-film
  • The convert-film script is a bash script, which (in its first version) did this:

    mkdir -p "$OUTPUTDIR/$FILM/tmp"
    HandBrakeCLI -x "threads=1" --no-dvdnav -i "$INPUTDIR/$FILM" -e vp9 -E copy -T -t $SGE_TASK_ID --all-audio --all-subtitles -o "$OUTPUTDIR/$FILM/tmp/T${SGE_TASK_ID}.mkv"
    

    Essentially, that converts a single title to a VP9-encoded matroska file, with all the subtitles and audio streams intact, and forcing it to use only one thread -- having it use multiple threads is useful if you care about a single DVD converting as fast as possible, but I don't, and having four DVDs on a four-core system all convert at 100% CPU seems more efficient than having two convert at about 180% each. I did consider using HandBrakeCLI's options to only extract the "interesting" audio and subtitle tracks, but I prefer to not have dubbed audio (to have subtitled audio instead); since some of my DVDs are originally in non-English languages, doing so gets rather complex. The audio and subtitle tracks don't take up that much space, so I decided not to bother with that in the end.

The use of qsub, which submits the script into gridengine, allows me to hook up several encoder nodes (read: the server plus a few old laptops) to the same queue.

That went pretty well, until I wanted to figure out how far along something was going. HandBrakeCLI provides progress information on stderr, and I can just do a tail -f of the stderr output logs, but that really works well only for one DVD at a time, not if you're trying to follow along with about a dozen of them.

So I made a database, and wrote another perl script. This latter will parse the stderr output of HandBrakeCLI, fish out the progress information, and put the completion percentage as well as the ETA time into a database. Then it became interesting:

CREATE OR REPLACE FUNCTION transjob_tcn() RETURNS trigger AS $$
BEGIN
  IF (TG_OP = 'INSERT') OR (TG_OP = 'UPDATE' AND (NEW.progress != OLD.progress) OR NEW.finished = TRUE) THEN
    PERFORM pg_notify('transjob', row_to_json(NEW)::varchar);
  END IF;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER transjob_tcn_trigger
  AFTER INSERT OR UPDATE ON transjob
  FOR EACH ROW EXECUTE PROCEDURE transjob_tcn();

This uses PostgreSQL's asynchronous notification feature to send out a notification whenever an interesting change has happened to the table.

#!/usr/bin/perl -w

use strict;
use warnings;

use Mojolicious::Lite;
use Mojo::Pg;

...

helper dbh => sub { state $pg = Mojo::Pg->new->dsn("dbi:Pg:dbname=transcode"); };

websocket '/updates' => sub {
    my $c = shift;
    $c->inactivity_timeout(600);
    my $cb = $c->dbh->pubsub->listen(transjob => sub { $c->send(pop) });
    $c->on(finish => sub { shift->dbh->pubsub->unlisten(transjob => $cb) });
};

app->start;

This uses the Mojolicious framework and Mojo::Pg to send out the payload of the "transjob" notification (which we created with the FOR EACH ROW trigger inside PostgreSQL earlier, and which contains the JSON version of the table row) over a WebSocket. Then it's just a small matter of programming to write some javascript which dynamically updates the webpage whenever that happens, and Tadaa! I have an online overview of the videos that are transcoding, and how far along they are.

That only requires me to keep the queue non-empty, which I can easily do by running dvdbackup a few times in parallel every so often. That's a nice Saturday afternoon project...

,

CryptogramDetails on a New PGP Vulnerability

A new PGP vulnerability was announced today. Basically, the vulnerability makes use of the fact that modern e-mail programs allow for embedded HTML objects. Essentially, if an attacker can intercept and modify a message in transit, he can insert code that sends the plaintext in a URL to a remote website. Very clever.

The EFAIL attacks exploit vulnerabilities in the OpenPGP and S/MIME standards to reveal the plaintext of encrypted emails. In a nutshell, EFAIL abuses active content of HTML emails, for example externally loaded images or styles, to exfiltrate plaintext through requested URLs. To create these exfiltration channels, the attacker first needs access to the encrypted emails, for example, by eavesdropping on network traffic, compromising email accounts, email servers, backup systems or client computers. The emails could even have been collected years ago.

The attacker changes an encrypted email in a particular way and sends this changed encrypted email to the victim. The victim's email client decrypts the email and loads any external content, thus exfiltrating the plaintext to the attacker.

A few initial comments:

1. Being able to intercept and modify e-mails in transit is the sort of thing the NSA can do, but is hard for the average hacker. That being said, there are circumstances where someone can modify e-mails. I don't mean to minimize the seriousness of this attack, but that is a consideration.

2. The vulnerability isn't with PGP or S/MIME itself, but in the way they interact with modern e-mail programs. You can see this in the two suggested short-term mitigations: "No decryption in the e-mail client," and "disable HTML rendering."

3. I've been getting some weird press calls from reporters wanting to know if this demonstrates that e-mail encryption is impossible. No, this just demonstrates that programmers are human and vulnerabilities are inevitable. PGP almost certainly has fewer bugs than your average piece of software, but it's not bug free.

4. Why is anyone using encrypted e-mail anymore, anyway? Reliably and easily encrypting e-mail is an insurmountably hard problem for reasons having nothing to do with today's announcement. If you need to communicate securely, use Signal. If having Signal on your phone will arouse suspicion, use WhatsApp.

I'll post other commentaries and analyses as I find them.

EDITED TO ADD (5/14): News articles.

Slashdot thread.

Cory DoctorowPodcast: Petard, Part 02

Here’s the second part of my reading (MP3) of Petard (part one), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Krebs on SecurityDetecting Cloned Cards at the ATM, Register

Much of the fraud involving counterfeit credit, ATM debit and retail gift cards relies on the ability of thieves to use cheap, widely available hardware to encode stolen data onto any card’s magnetic stripe. But new research suggests retailers and ATM operators could reliably detect counterfeit cards using a simple technology that flags cards which appear to have been altered by such tools.

A gift card purchased at retail with an unmasked PIN hidden behind a paper sleeve. Such PINs can be easily copied by an adversary, who waits until the card is purchased to steal the card’s funds. Image: University of Florida.

Researchers at the University of Florida found that account data encoded on legitimate cards is invariably written using quality-controlled, automated facilities that tend to imprint the information in uniform, consistent patterns.

Cloned cards, however, usually are created by hand with inexpensive encoding machines, and as a result feature far more variance or “jitter” in the placement of digital bits on the card’s stripe.

Gift cards can be extremely profitable and brand-building for retailers, but gift card fraud creates a very negative shopping experience for consumers and a costly conundrum for retailers. The FBI estimates that while gift card fraud makes up a small percentage of overall gift card sales and use, approximately $130 billion worth of gift cards are sold each year.

One of the most common forms of gift card fraud involves thieves tampering with cards inside the retailer’s store — before the cards are purchased by legitimate customers. Using a handheld card reader, crooks will swipe the stripe to record the card’s serial number and other data needed to duplicate the card.

If there is a PIN on the gift card packaging, the thieves record that as well. In many cases, the PIN is obscured by a scratch-off decal, but gift card thieves can easily scratch those off and then replace the material with identical or similar decals that are sold very cheaply by the roll online.

“They can buy big rolls of that online for almost nothing,” said Patrick Traynor, an associate professor of computer science at the University of Florida. “Retailers we’ve worked with have told us they’ve gone to their gift card racks and found tons of this scratch-off stuff on the ground near the racks.”

At this point the cards are still worthless because they haven’t yet been activated. But armed with the card’s serial number and PIN, thieves can simply monitor the gift card account at the retailer’s online portal and wait until the cards are paid for and activated at the checkout register by an unwitting shopper.

Once a card is activated, thieves can encode that card’s data onto any card with a magnetic stripe and use that counterfeit to purchase merchandise at the retailer. The stolen goods typically are then sold online or on the street. Meanwhile, the person who bought the card (or the person who received it as a gift) finds the card is drained of funds when they eventually get around to using it at a retail store.

The top two gift cards show signs that someone previously peeled back the protective sticker covering the redemption code. Image: Flint Gatrell.

Traynor and a team of five other University of Florida researchers partnered with retail giant WalMart to test their technology, which Traynor said can be easily and quite cheaply incorporated into point-of-sale systems at retail store cash registers. They said the WalMart trial demonstrated that researchers’ technology distinguished legitimate gift cards from clones with up to 99.3 percent accuracy.

While impressive, that rate means the technology could still generate a “false positive” — erroneously flagging a legitimate customer as using a fraudulently obtained gift card in a non-trivial number of cases. But Traynor said the retailers they spoke with in testing their equipment all indicated they would welcome any additional tools to curb the incidence of gift card fraud.

“We’ve talked with quite a few retail loss prevention folks,” he said. “Most said even if they can simply flag the transaction and make a note of the person [presenting the cloned card] that this would be a win for them. Often, putting someone on notice that loss prevention is watching is enough to make them stop — at least at that store. From our discussions with a few big-box retailers, this kind of fraud is probably their newest big concern, although they don’t talk much about it publicly. If the attacker does any better than simply cloning the card to a blank white card, they’re pretty much powerless to stop the attack, and that’s a pretty consistent story behind closed doors.”

BEYOND GIFT CARDS

Traynor said the University of Florida team’s method works even more accurately in detecting counterfeit ATM and credit cards, thanks to the dramatic difference in jitter between bank-issued cards and those cloned by thieves.

The magnetic material on most gift cards bears a quality that’s known in the industry as “low coercivity.” The stripe on so-called “LoCo” cards is usually brown in color, and new data can be imprinted on them quite cheaply using a machine that emits a relatively low or weak magnetic field. Hotel room keys also rely on LoCo stripes, which is why they tend to so easily lose their charge (particularly when placed next to something else with a magnetic charge).

In contrast, “high coercivity” (HiCo) stripes like those found on bank-issued debit and credit cards are usually black in color, hold their charge much longer, and are far more durable than LoCo cards. The downside of HiCo cards is that they are more expensive to produce, often relying on complex machinery and sophisticated manufacturing processes that encode the account data in highly uniform patterns.

These graphics illustrate the difference between original and cloned cards. Source: University of Florida.

Traynor said tests indicate their technology can detect cloned bank cards with virtually zero false-positives. In fact, when the University of Florida team first began seeing positive results from their method, they originally pitched the technique as a way for banks to cut losses from ATM skimming and other forms of credit and debit card fraud.

Yet, Traynor said fellow academicians who reviewed their draft paper told them that banks probably wouldn’t invest in the technology because most financial institutions are counting on newer, more sophisticated chip-based (EMV) cards to eventually reduce counterfeit fraud losses.

“The original pitch on the paper was actually focused on credit cards, but academic reviewers were having trouble getting past EMV — as in, “EMV solves this and it’s universally deployed – so why is this necessary?'”, Traynor said. “We just kept getting reviews back from other academics saying that credit and bank card fraud is a solved problem.”

The trouble is that virtually all chip cards still store account data in plain text on the magnetic stripe on the back of the card — mainly so that the cards can be used in ATM and retail locations that are not yet equipped to read chip-based cards. As a result, even European countries whose ATMs all require chip-based cards remain heavily targeted by skimming gangs because the data on the chip card’s magnetic stripe can still be copied by a skimmer and used by thieves in the United States.

The University of Florida researchers recently were featured in an Associated Press story about an anti-skimming technology they developed and dubbed the “Skim Reaper.” The device, which can be made cheaply using a 3D printer, fits into the mouth of an ATM’s card acceptance slot and can detect the presence of extra card reading devices that skimmer thieves may have fitted on top of or inside the cash machine.

The AP story quoted a New York Police Department financial crimes detective saying the Skim Reapers worked remarkably well in detecting the presence of ATM skimmers. But Traynor said many ATM operators and owners are simply uninterested in paying to upgrade their machines with their technology — in large part because the losses from ATM card counterfeiting are mostly assumed by consumers and financial institutions.

“We found this when we were talking around with the cops in New York City, that the incentive of an ATM bodega owner to upgrade an ATM is very low,” Traynor said. “Why should they go to that expense? Upgrades required to make these machines [chip-card compliant] are significant in cost, and the motivation is not necessarily there.”

Retailers also could choose to produce gift cards with embedded EMV chips that make the cards more expensive and difficult to counterfeit. But doing so likely would increase the cost of manufacturing by $2 to $3 per card, Traynor said.

“Putting a chip on the card dramatically increases the cost, so a $10 gift card might then have a $3 price added,” he said. “And you can imagine the reaction a customer might have when asked to pay $13 for a gift card that has a $10 face value.”

A copy of the University of Florida’s research paper is available here (PDF).

The FBI has compiled a list of recommendations for reducing the likelihood of being victimized by gift card fraud. For starters, when buying in-store don’t just pick cards right off the rack. Look for ones that are sealed in packaging or stored securely behind the checkout counter. Also check the scratch-off area on the back to look for any evidence of tampering.

Here are some other tips from the FBI:

-If possible, only buy cards online directly from the store or restaurant.
-If buying from a secondary gift card market website, check reviews and only buy from or sell to reputable dealers.
-Check the gift card balance before and after purchasing the card to verify the correct balance on the card.
-The re-seller of a gift card is responsible for ensuring the correct balance is on the gift card, not the merchant whose name is listed. If you are scammed, some merchants in some situations will replace the funds. Ask for, but don’t expect, help.
-When selling a gift card through an online marketplace, do not provide the buyer with the card’s PIN until the transaction is complete.
-When purchasing gift cards online, be leery of auction sites selling gift cards at a steep discount or in bulk.

CryptogramCritical PGP Vulnerability

EFF is reporting that a critical vulnerability has been discovered in PGP and S/MIME. No details have been published yet, but one of the researchers wrote:

We'll publish critical vulnerabilities in PGP/GPG and S/MIME email encryption on 2018-05-15 07:00 UTC. They might reveal the plaintext of encrypted emails, including encrypted emails sent in the past. There are currently no reliable fixes for the vulnerability. If you use PGP/GPG or S/MIME for very sensitive communication, you should disable it in your email client for now.

This sounds like a protocol vulnerability, but we'll learn more tomorrow.

News articles.

CryptogramRay Ozzie's Encryption Backdoor

Last month, Wired published a long article about Ray Ozzie and his supposed new scheme for adding a backdoor in encrypted devices. It's a weird article. It paints Ozzie's proposal as something that "attains the impossible" and "satisfies both law enforcement and privacy purists," when (1) it's barely a proposal, and (2) it's essentially the same key escrow scheme we've been hearing about for decades.

Basically, each device has a unique public/private key pair and a secure processor. The public key goes into the processor and the device, and is used to encrypt whatever user key encrypts the data. The private key is stored in a secure database, available to law enforcement on demand. The only other trick is that for law enforcement to use that key, they have to put the device in some sort of irreversible recovery mode, which means it can never be used again. That's basically it.

I have no idea why anyone is talking as if this were anything new. Several cryptographers have already explained why this key escrow scheme is no better than any other key escrow scheme. The short answer is (1) we won't be able to secure that database of backdoor keys, (2) we don't know how to build the secure coprocessor the scheme requires, and (3) it solves none of the policy problems around the whole system. This is the typical mistake non-cryptographers make when they approach this problem: they think that the hard part is the cryptography to create the backdoor. That's actually the easy part. The hard part is ensuring that it's only used by the good guys, and there's nothing in Ozzie's proposal that addresses any of that.

I worry that this kind of thing is damaging in the long run. There should be some rule that any backdoor or key escrow proposal be a fully specified proposal, not just some cryptography and hand-waving notions about how it will be used in practice. And before it is analyzed and debated, it should have to satisfy some sort of basic security analysis. Otherwise, we'll be swatting pseudo-proposals like this one, while those on the other side of this debate become increasingly convinced that it's possible to design one of these things securely.

Already people are using the National Academies report on backdoors for law enforcement as evidence that engineers are developing workable and secure backdoors. Writing in Lawfare, Alan Z. Rozenshtein claims that the report -- and a related New York Times story -- "undermine the argument that secure third-party access systems are so implausible that it's not even worth trying to develop them." Susan Landau effectively corrects this misconception, but the damage is done.

Here's the thing: it's not hard to design and build a backdoor. What's hard is building the systems -- both technical and procedural -- around them. Here's Rob Graham:

He's only solving the part we already know how to solve. He's deliberately ignoring the stuff we don't know how to solve. We know how to make backdoors, we just don't know how to secure them.

A bunch of us cryptographers have already explained why we don't think this sort of thing will work in the foreseeable future. We write:

Exceptional access would force Internet system developers to reverse "forward secrecy" design practices that seek to minimize the impact on user privacy when systems are breached. The complexity of today's Internet environment, with millions of apps and globally connected services, means that new law enforcement requirements are likely to introduce unanticipated, hard to detect security flaws. Beyond these and other technical vulnerabilities, the prospect of globally deployed exceptional access systems raises difficult problems about how such an environment would be governed and how to ensure that such systems would respect human rights and the rule of law.

Finally, Matthew Green:

The reason so few of us are willing to bet on massive-scale key escrow systems is that we've thought about it and we don't think it will work. We've looked at the threat model, the usage model, and the quality of hardware and software that exists today. Our informed opinion is that there's no detection system for key theft, there's no renewability system, HSMs are terrifically vulnerable (and the companies largely staffed with ex-intelligence employees), and insiders can be suborned. We're not going to put the data of a few billion people on the line in an environment where we believe with high probability that the system will fail.

EDITED TO ADD (5/14): An analysis of the proposal.

Worse Than FailureCodeSOD: CONDITION_FAILURE

Oliver Smith sends this representative line:

bool long_name_that_maybe_distracted_someone()
{
  return (execute() ? CONDITION_SUCCESS : CONDITION_FAILURE);
}

Now, we’ve established my feelings on the if (condition) { return true; } else { return false; } pattern. This is just an iteration on that theme, using a ternary, right?

That’s certainly what it looks like. But Oliver was tracking down an unusual corner-case bug and things just weren’t working correctly. As it turns out, CONDITION_SUCCESS and CONDITION_FAILURE were both defined in the StatusCodes enum.

Screenshot of the intellisense which shows CONDITION_FAILURE defined as 2

Yep- CONDITION_FAILURE is defined as 2. The method returns a bool. Guess what happens when you coerce a non-zero integer into a boolean in C++? It turns into true. This method only ever returns true. Ironically, the calling method would then do its own check against the return value, looking to see if it were CONDITION_SUCCESS or CONDITION_FAILURE.

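A hedged reconstruction of the bug (only CONDITION_FAILURE = 2 is confirmed by the screenshot; the other value and the surrounding scaffolding are invented) shows why every path returns true:

#include <iostream>

// Only CONDITION_FAILURE = 2 is known from the screenshot; CONDITION_SUCCESS
// is a guess -- but any non-zero value reproduces the bug.
enum StatusCodes { CONDITION_SUCCESS = 1, CONDITION_FAILURE = 2 };

bool execute() { return false; }   // stand-in for the real call

bool long_name_that_maybe_distracted_someone()
{
    // Both enumerators are non-zero, so the implicit conversion to bool
    // yields true whichever branch the ternary takes.
    return (execute() ? CONDITION_SUCCESS : CONDITION_FAILURE);
}

int main()
{
    std::cout << std::boolalpha
              << long_name_that_maybe_distracted_someone() << "\n";   // prints "true"
    return 0;
}
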
[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet Linux AustraliaClinton Roy: Actively looking for work

I am now actively looking for work, ideally something with Unix/C/Python in the research/open source/not-for-profit space. My long out of date resume has been updated.

,

Planet Linux AustraliaFrancois Marier: Running mythtv-setup over ssh

In order to configure a remote MythTV server, I had to run mythtv-setup remotely over an ssh connection with X forwarding:

ssh -X mythtv@machine

For most config options, I can either use the configuration menus inside of mythfrontend (over a vnc connection) or the Settings section of MythWeb, but some of the backend and tuner settings are only available through the main setup program.

Unfortunately, mythtv-setup won't work over an ssh connection by default and prints the following error in the terminal:

$ mythtv-setup
...
W  OpenGL: Could not determine whether Sync to VBlank is enabled.
Handling Segmentation fault
Segmentation fault (core dumped)

The fix for this was to specify a different theme engine:

mythtv-setup -O ThemePainter=qt

Sky CroeserMothering

Today I am thinking about mothering as a way in which we can make the world (in all its messiness and difficulty) better.

“Children are the ways that the world begins again and again. If you fasten upon that concept of their promise, you will have trouble finding anything more awesome, and also anything more extraordinarily exhilarating, than the opportunity or/and the obligation to nurture a child into his or her own freedom.” – June Jordan

Mothering is often treated by our society as an inherently conservative activity, something that’s about preserving the past (past traditions, past family structures, past values). But I’m learning from so many people (including people who aren’t biological mothers) who are knitting together strands from the past and hopes for the future.

Care for nature, for the world around us, for our mothers’ and grandmothers’ knowledge and experience. And dreams of more space for children to be who they want to be, to welcome and nurture others, to grow freely.

My mother and grandmother taught me so much, and still do. They are kind and fierce and have managed change and dislocation while always providing me with a steady point in the world.

My beautiful friends who are mothers teach me every day through their examples and their honesty about the difficult moments as well as the wonderful ones.

And I learn from mothers beyond my little circles, too.

From Noongar mothers, and other Aboriginal mothers who fought for recognition of the kidnapping of their children, and who are working today to build a society where their children will be safe and valued as they should be.

From Black mothers like June Jordan, Alexis Pauline Gumbs, and others in the ‘Revolutionary Mothering’ collection, which I return to again and again. They have done so much to help me understand other mothers’ experiences, and to see the possibilities and work that I should be taking up. And others, like Sylvia Federici, who have helped me see what I might not have, otherwise.

From mothers who must be brave enough to leave war or economic insecurity, hoping for safety, even though it also means leaving behind family and friends and home and the language and culture that has been held dear.

From mothers who work quietly and consistently and without recognition, from mothers who are sometimes difficult because of the work they do, from mothers who struggle with their own pasts, and who nevertheless keep trying to create the world anew, more full of love and possibility than before.

,

Planet Linux AustraliaMichael Still: Head On

A sequel to Lock In, this book is a quick and fun read of a murder mystery. It has Scalzi’s distinctive style which has generally meshed quite well for me, so it’s no surprise that I enjoyed this book.

 

Title: Head On
Author: John Scalzi
Genre: Fiction
Publisher: Tor Books
Published: April 19, 2018
Pages: 336

To some left with nothing, winning becomes everything. In a post-virus world, a daring sport is taking the US by storm. It's frenetic, violent and involves teams attacking one another with swords and hammers. The aim: to obtain your opponent's head and carry it through the goalposts. Impossible? Not if the players have Hayden's Syndrome. Unable to move, Hayden's sufferers use robot bodies, which they operate mentally. So in this sport anything goes, no one gets hurt - and crowds and competitors love it. Until a star athlete drops dead on the playing field. But is it an accident? FBI agents Chris Shane and Leslie Vann are determined to find out. In this game, fortunes can be made - or lost. And both players and owners will do whatever it takes to win, on and off the field. John Scalzi returns with Head On, a chilling near-future SF with the thrills of a gritty cop procedural. Head On brings Scalzi's trademark snappy dialogue and technological speculation to the future world of sports.

The post Head On appeared first on Made by Mikal.

Don MartiCan markets for intent data even be a thing?

Doc Searls is optimistic that surveillance marketing is going away, but what's going to replace it? One idea that keeps coming up is the suggestion that prospective buyers should be able to sell purchase intent data to vendors directly. This seems to be appealing because it means that the Marketing department will still get to have Big Data and stuff, but I'm still trying to figure out how voluntary transactions in intent data could even be a thing.

Here's an example. It's the week before Thanksgiving, and I'm shopping for a kitchen stove. Here are two possible pieces of intent information that I could sell.

  • "I'm cutting through the store on the way to buy something else. If a stove is on sale, I might buy it, but only if it's a bargain, because who needs the hassle of handling a stove delivery the week before Thanksgiving?"

  • "My old stove is shot, and I need one right away because I have already invited people over. Shut up and take my money."

On a future intent trading platform, what's my incentive to reveal which intent is the true one?

If I'm a bargain hunter, I'm willing to sell my intent information, because it would tend to get me a lower price. But in that case, why would any store want to buy the information?

If I need the product now, I would only sell the information for a price higher than the expected difference between the price I would pay and the price a bargain hunter would pay. But if the information isn't worth more than the price difference, why would the store want to buy it?

So how can a market for purchase intent data happen?

Or is the idea of selling access to purchase intent only feasible if the intent data is taken from the "data subject" without permission?

Anyway, I can see how search advertising and signal-based advertising can assume a more important role as surveillance marketing becomes less important, but I'm not sure about markets for purchase intent. Maybe user data sharing will be not so much a stand-alone thing but a role for trustworthy news and cultural sites, as people choose to share data as part of commenting and survey completion, and that data, in aggregated form, becomes part of a site's audience profile.

,

CryptogramFriday Squid Blogging: How the Squid Lost Its Shell

Squids used to have shells.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Sociological ImagesWho Gets a Ticket?

The recent controversial arrests at a Philadelphia Starbucks, where a manager called the police on two Black men who had only been in the store a few minutes, are an important reminder that bias in the American criminal justice system creates both large scale, dramatic disparities and little, everyday inequalities. Research shows that common misdemeanors are a big part of this, because fines and fees can pile up on people who are more likely to be policed for small infractions.

A great example is the common traffic ticket. Some drivers who get pulled over get a ticket, while others get let off with a warning. Does that discretion shake out differently depending on the driver’s race? The Stanford Open Policing Project has collected data on over 60 million traffic stops, and a working paper from the project finds that Black and Hispanic drivers are more likely to be ticketed or searched at a stop than white drivers.

To see some of these patterns in a quick exercise, we pulled the project’s data on over four million stop records from Illinois and over eight million records from South Carolina. These charts are only a first look—we split the recorded outcomes of stops across the different codes for driver race available in the data and didn’t control for additional factors. However, they give a troubling basic picture about who gets a ticket and who drives away with a warning.

[Charts: share of stopped drivers who received a ticket versus a warning, by driver race, in Illinois and South Carolina]

These charts show more dramatic disparities in South Carolina, but a larger proportion of white drivers who were stopped got off with warnings (and fewer got tickets) in Illinois as well. In fact, with millions of observations in each data set, differences of even a few percentage points can represent hundreds, even thousands of drivers. Think about how much revenue those tickets bring in, and who has to pay them. In the criminal justice system, the little things can add up quickly.

(View original at https://thesocietypages.org/socimages)

CryptogramAirline Ticket Fraud

New research: "Leaving on a jet plane: the trade in fraudulently obtained airline tickets:"

Abstract: Every day, hundreds of people fly on airline tickets that have been obtained fraudulently. This crime script analysis provides an overview of the trade in these tickets, drawing on interviews with industry and law enforcement, and an analysis of an online blackmarket. Tickets are purchased by complicit travellers or resellers from the online blackmarket. Victim travellers obtain tickets from fake travel agencies or malicious insiders. Compromised credit cards used to be the main method to purchase tickets illegitimately. However, as fraud detection systems improved, offenders displaced to other methods, including compromised loyalty point accounts, phishing, and compromised business accounts. In addition to complicit and victim travellers, fraudulently obtained tickets are used for transporting mules, and for trafficking and smuggling. This research details current prevention approaches, and identifies additional interventions, aimed at the act, the actor, and the marketplace.

Blog post.

Worse Than FailureError'd: Kind of...but not really

"On occasion, SQL Server Management Studio's estimates can be just a little bit off," writes Warrent B.

 

Jay D. wrote, "On the surface, yeah, it looks like a good deal, but you know, pesky laws of physics spoil all the fun."

 

"When opening a new tab in Google Chrome I saw a link near the bottom of the screen that suggested I 'Explore the world's iconic locations in 3D'," writes Josh M., "Unfortunately, Google's API felt differently."

 

Stuart H. wrote, "I think I might have missed out on this deal, the clock was counting up, no I mean down, I mean negative AHHHH!"

 

"Something tells me this site's programmer is learning how to spell the hard(est) way," Carl W. writes.

 

"Why limit yourself with one particular resource of the day when you can substitute any resource you want," wrote Ari S.

 

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaBlueHackers: Vale Janet Hawtin Reid

Janet Hawtin Reid (@lucychili) sadly passed away last week.

A mutual friend called me earlier in the week to tell me, for which I’m very grateful.  We both appreciate that BlueHackers doesn’t ever want to be a news channel, so I waited to write about it here until other friends, just like me, would have also had a chance to hear via more direct and personal channels. I think that’s the way these things should flow.

I knew Janet as a thoughtful person, with strong opinions particularly on openness and inclusion.  And as an artist and generally creative individual, a lover of nature.  In recent years I’ve also seen her produce the most awesome knitted Moomins.

Short diversion as I have an extra connection with the Moomin stories by Tove Jansson: they have a character called My, after whom Monty Widenius’ eldest daughter is named, which in turn is how MySQL got named.  I used to work for MySQL AB, and I’ve known that My since she was a little smurf (she’s an adult now).

I’m not sure exactly when I met Janet, but it must have been around 2004 when I first visited Adelaide for Linux.conf.au.  It was then also that Open Source Industry Australia (OSIA) was founded, for which Janet designed the logo.  She may well have been present at the founding meeting in Adelaide’s CBD, too.  Anyhow, Janet offered to do the logo in a conversation with David Lloyd, and things progressed from there. On the OSIA logo design, Janet wrote:

I’ve used a star as the current one does [an earlier doodle incorporated the Southern Cross]. The 7 points for 7 states [counting NT as a state]. The feet are half facing in for collaboration and half facing out for being expansive and progressive.

You may not have realised this as the feet are quite stylised, but you’ll definitely have noticed the pattern-of-7, and the logo as a whole works really well. It’s a good looking and distinctive logo that has lasted almost a decade and a half now.

As Linux Australia’s president Kathy Reid wrote, Janet also helped design the ‘penguin feet’ logo that you see on Linux.org.au.  Just reading the above (which I just retrieved from a 2004 email thread) there does seem to be a bit of a feet-pattern there… of course the explicit penguin feet belong with the Linux penguin.

So, Linux Australia and OSIA actually share aspects of their identity (feet with a purpose), through their respective logo designs by Janet!  Mind you, I only realised all this when looking through old stuff while writing this post, as the logos were done at different times and only a handful of people have ever read the rationale behind the OSIA logo until now.  I think it’s cool, and a fabulous visual legacy.

Fir tree in clay, by Janet Hawtin Reid. Done in “EcoClay”, brought back to Adelaide from OSDC 2010 (Melbourne) by Kim Hawtin, Janet’s partner.

Which brings me to a related issue that’s close to my heart, and I’ve written and spoken about this before.  We’re losing too many people in our community – where, in case you were wondering, too many is defined as >0.  Just like in a conversation on the road toll, any number greater than zero has to be regarded as unacceptable. Zero must be the target, as every individual life is important.

There are many possible analogies with trees as depicted in the above artwork, including the fact that we’re all best enabled to grow further.

Please connect with the people around you.  Remember that connecting does not necessarily mean talking per-se, as sometimes people just need to not talk, too.  Connecting, just like the phrase “I see you” from Avatar, is about being thoughtful and aware of other people.  It can just be a simple hello passing by (I say hi to “strangers” on my walks), a short email or phone call, a hug, or even just quietly being present in the same room.

We all know that you can just be in the same room as someone, without explicitly interacting, and yet feel either connected or disconnected.  That’s what I’m talking about.  Aim to be connected, in that real, non-electronic, meaning of the word.

If you or someone you know needs help or someone to talk to right now, please call 1300 659 467 (in Australia – they can call you back, and you can also use the service online).  There are many more resources and links on the BlueHackers.org website.  Take care.

Planet Linux AustraliaDavid Rowe: FreeDV 700D Part 4 – Acquisition

Since 2012 I have built a series of modems (FDMDV, COHPSK, OFDM) for HF Digital voice. I always get stuck on “acquisition” – demodulator algorithms that acquire and lock onto the received signal. The demod needs to rapidly estimate the frequency offset and “coarse” timing – the position where the modem frame starts in the sequence of received samples.

For my application (Digital Voice over HF), it’s complicated by the low SNR and fading HF channels, and the requirement for fast sync (a few hundred ms). For Digital Voice (DV) we need something fast enough to emulate Push To Talk (PTT) operation. In comparison HF data modems have it easy – they can take many lazy seconds to synchronise.

The latest OFDM modem has been no exception. I’ve spent several weeks messing about with acquisition algorithms to get half decent performance. Still some tuning to do but for my own sanity I think I’ll stop development here for now, write up the results, and push FreeDV 700D out for general consumption.

Acquisition and Sync Requirements

  1. Sync up quickly (a few 100ms) with high SNR signals.
  2. Sync up eventually (a few seconds is OK) for low SNR signals over poor channels. Sync eventually is better than none on channels where even SSB is struggling.
  3. Detect false sync and get out of it quickly. Don’t stay stuck in a false sync state forever.
  4. Hang onto sync through fades of a few seconds.
  5. Assume the operator can tune to within +/- 20Hz of a given frequency.
  6. Assume the radio drifts no more than +/- 0.2Hz/s (12 Hz a minute).
  7. Assume the sample clock offset (difference in ADC/DAC sample rates) is no more than 500ppm.

Actually the last three aren’t really requirements, it’s just what fell out of the OFDM modem design when I optimised it for low SNR performance on HF channels! The frequency stability of modern radios is really good; sound card sample clock offset less so but perhaps we can measure that and tell the operator if there is a problem.

Testing Acquisition

The OFDM modem sends pilot (known) symbols every frame. The demodulator correlates (compares) the incoming signal with the pilot symbol sequence. When it finds a close match it has a coarse timing candidate. It can then try to estimate the frequency offset. So we get a coarse timing estimate, a metric (called mx1) that says how close the match is, and a frequency offset estimate.

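As a toy illustration of that correlation step (this is not the codec2/FreeDV implementation; the names, types and normalisation are invented, and C++ is used here rather than the project's C and Octave), coarse timing can be estimated by sliding the known pilot sequence over the received samples and keeping the offset with the strongest normalised match:

#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Toy sketch: return the sample offset where the received signal best matches
// the known pilot sequence, using a normalised cross-correlation.
std::size_t coarse_timing(const std::vector<std::complex<float>>& rx,
                          const std::vector<std::complex<float>>& pilot)
{
    std::size_t best_offset = 0;
    float best_metric = 0.0f;                     // plays the role of "mx1"
    for (std::size_t t = 0; t + pilot.size() <= rx.size(); ++t) {
        std::complex<float> corr(0.0f, 0.0f);
        float energy = 1e-12f;                    // avoid divide-by-zero
        for (std::size_t k = 0; k < pilot.size(); ++k) {
            corr   += rx[t + k] * std::conj(pilot[k]);
            energy += std::norm(rx[t + k]);
        }
        const float metric = std::abs(corr) / std::sqrt(energy);
        if (metric > best_metric) { best_metric = metric; best_offset = t; }
    }
    return best_offset;                           // coarse timing candidate
}
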
Estimating frequency offsets is particularly tricky; I’ve experienced “much wailing and gnashing of teeth” with these nasty little algorithms in the past (stop laughing Matt). The coarse timing estimator is more reliable. The problem is that if you get an incorrect coarse timing or frequency estimate the modem can lock up incorrectly and may take several seconds, or operator intervention, before it realises its mistake and tries again.

I ended up writing a lot of GNU Octave functions to help develop and test the acquisition algorithms in ofdm_dev.

For example the function below runs 100 tests, measures the timing and frequency error, and plots some histograms. The core demodulator can cope with about +/- 1.5Hz of residual frequency offset and a few samples of timing error. So we can generate probability estimates from the test results. For example if we do 100 tests of the frequency offset estimator and 50 are within 1.5Hz of being correct, then we can say we have a 50% (0.5) probability of getting the correct frequency estimate.

octave:1> ofdm_dev
octave:2> acquisition_histograms(fin_en=0, foff_hz=-15, EbNoAWGN=-1, EbNoHF=3)
AWGN P(time offset acq) = 0.96
AWGN P(freq offset acq) = 0.60
HF P(time offset acq) = 0.87
HF P(freq offset acq) = 0.59

Here are the histograms of the timing and frequency estimation errors. These were generated using simulations of noisy HF channels (about 2dB SNR):


The x axis of timing is in samples, x axis of freq in Hz. They are both a bit biased towards positive errors. Not sure why. This particular test was with a frequency offset of -15Hz.

Turns out that as the SNR improves, the estimators do a better job. The next function runs a bunch of tests at different SNRs and frequency offsets, and plots the acquisition probabilities:

octave:3> acquisition_curves




The timing estimator also gives us a metric (called mx1) that indicates how strong the match was between the incoming signal and the expected pilot sequence. Here is a busy little plot of mx1 against frequency offset for various Eb/No (effectively SNR):

So as Eb/No increases, the mx1 metric tends to get bigger. It also falls off as the frequency offset increases. This means sync is tougher at low Eb/No and larger frequency offsets. The -10dB value was thrown in to see what happens with pure noise and no signal at the input. We’d prefer not to sync up to that. Using this plot I set the threshold for a valid signal at 0.25.

Once we have a candidate time and freq estimate, we can test sync by measuring the number of bit errors in a set of 10 Unique Word (UW) bits spread over the modem frame. Unlike the payload data in the modem frame, these bits are fixed, and known to the transmitter and receiver. In my initial approach I placed the UW bits right at the start of the modem frame. However I discovered a problem – with certain frequency offsets (e.g. multiples of the modem frame rate like +/- 6Hz) – it was possible to get a false sync with no UW errors. So I messed about with the placement of the UW bits until I had a UW that would not give any false syncs at any incorrect frequency offset. To test the UW I wrote another script:

octave:4> debug_false_sync

Which outputs a plot of UW errors against the residual frequency offset:

Note how at any residual frequency offset other than -1.5 to +1.5 Hz there are at least two bit errors. This allows us to reliably detect a false sync due to an incorrect frequency offset estimate.

State Machine

The estimators are wrapped up in a state machine to control the entire sync process (a rough code sketch follows the list below):

  1. SEARCHING: look at a buffer of incoming samples and estimate timing, freq, and the mx1 metric.
  2. If mx1 is big enough, let’s jump to TRIAL.
  3. TRIAL: measure the number of Unique Word bit errors for a few frames. If they are bad this is probably a false sync so jump back to SEARCHING.
  4. If we get a low number of Unique Word errors for a few frames it’s high fives all round and we jump to SYNCED.
  5. SYNCED: We put up with up to two seconds of high Unique Word errors, as this is life on a HF channel. More than two seconds, and we figure the signal is gone for good so we jump back to SEARCHING.

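As a rough code sketch of those transitions (not the actual modem code; the 0.25 mx1 threshold and the two-UW-error rule come from this post, while the frame counts and all names are invented):

// Hedged sketch of the SEARCHING / TRIAL / SYNCED logic described above.
enum class SyncState { Searching, Trial, Synced };

struct SyncMachine {
    SyncState state  = SyncState::Searching;
    int trial_frames = 0;   // consecutive good frames while in TRIAL
    int bad_frames   = 0;   // consecutive bad frames while SYNCED

    // Called once per received modem frame.
    void update(double mx1, int uw_errors, int frames_in_two_seconds) {
        switch (state) {
        case SyncState::Searching:
            if (mx1 > 0.25) { state = SyncState::Trial; trial_frames = 0; }
            break;
        case SyncState::Trial:
            if (uw_errors >= 2)               // false sync: wrong freq gives 2+ UW errors
                state = SyncState::Searching;
            else if (++trial_frames >= 4) {   // "a few frames" of clean UW bits
                state = SyncState::Synced;
                bad_frames = 0;
            }
            break;
        case SyncState::Synced:
            bad_frames = (uw_errors >= 2) ? bad_frames + 1 : 0;
            if (bad_frames > frames_in_two_seconds)   // fade longer than two seconds
                state = SyncState::Searching;
            break;
        }
    }
};
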
Reading Further

HF Modem Frequency Offset Estimation, an earlier look at freq offset estimation for HF modems
COHPSK and OFDM waveform design spreadsheet
Modems for HF Digital Voice Part 1
Modems for HF Digital Voice Part 2
README_ofdm.txt, including specifications of the OFDM modem.

,

CryptogramSupply-Chain Security

Earlier this month, the Pentagon stopped selling phones made by the Chinese companies ZTE and Huawei on military bases because they might be used to spy on their users.

It's a legitimate fear, and perhaps a prudent action. But it's just one instance of the much larger issue of securing our supply chains.

All of our computerized systems are deeply international, and we have no choice but to trust the companies and governments that touch those systems. And while we can ban a few specific products, services or companies, no country can isolate itself from potential foreign interference.

In this specific case, the Pentagon is concerned that the Chinese government demanded that ZTE and Huawei add "backdoors" to their phones that could be surreptitiously turned on by government spies or cause them to fail during some future political conflict. This tampering is possible because the software in these phones is incredibly complex. It's relatively easy for programmers to hide these capabilities, and correspondingly difficult to detect them.

This isn't the first time the United States has taken action against foreign software suspected to contain hidden features that can be used against us. Last December, President Trump signed into law a bill banning software from the Russian company Kaspersky from being used within the US government. In 2012, the focus was on Chinese-made Internet routers. Then, the House Intelligence Committee concluded: "Based on available classified and unclassified information, Huawei and ZTE cannot be trusted to be free of foreign state influence and thus pose a security threat to the United States and to our systems."

Nor is the United States the only country worried about these threats. In 2014, China reportedly banned antivirus products from both Kaspersky and the US company Symantec, based on similar fears. In 2017, the Indian government identified 42 smartphone apps that China subverted. Back in 1997, the Israeli company Check Point was dogged by rumors that its government added backdoors into its products; other of that country's tech companies have been suspected of the same thing. Even al-Qaeda was concerned; ten years ago, a sympathizer released the encryption software Mujahedeen Secrets, claimed to be free of Western influence and backdoors. If a country doesn't trust another country, then it can't trust that country's computer products.

But this trust isn't limited to the country where the company is based. We have to trust the country where the software is written -- and the countries where all the components are manufactured. In 2016, researchers discovered that many different models of cheap Android phones were sending information back to China. The phones might be American-made, but the software was from China. In 2016, researchers demonstrated an even more devious technique, where a backdoor could be added at the computer chip level in the factory that made the chips -- without the knowledge of, and undetectable by, the engineers who designed the chips in the first place. Pretty much every US technology company manufactures its hardware in countries such as Malaysia, Indonesia, China and Taiwan.

We also have to trust the programmers. Today's large software programs are written by teams of hundreds of programmers scattered around the globe. Backdoors, put there by we-have-no-idea-who, have been discovered in Juniper firewalls and D-Link routers, both of which are US companies. In 2003, someone almost slipped a very clever backdoor into Linux. Think of how many countries' citizens are writing software for Apple or Microsoft or Google.

We can go even farther down the rabbit hole. We have to trust the distribution systems for our hardware and software. Documents disclosed by Edward Snowden showed the National Security Agency installing backdoors into Cisco routers being shipped to the Syrian telephone company. There are fake apps in the Google Play store that eavesdrop on you. Russian hackers subverted the update mechanism of a popular brand of Ukrainian accounting software to spread the NotPetya malware.

In 2017, researchers demonstrated that a smartphone can be subverted by installing a malicious replacement screen.

I could go on. Supply-chain security is an incredibly complex problem. US-only design and manufacturing isn't an option; the tech world is far too internationally interdependent for that. We can't trust anyone, yet we have no choice but to trust everyone. Our phones, computers, software and cloud systems are touched by citizens of dozens of different countries, any one of whom could subvert them at the demand of their government. And just as Russia is penetrating the US power grid so they have that capability in the event of hostilities, many countries are almost certainly doing the same thing at the consumer level.

We don't know whether the risk of Huawei and ZTE equipment is great enough to warrant the ban. We don't know what classified intelligence the United States has, and what it implies. But we do know that this is just a minor fix for a much larger problem. It's doubtful that this ban will have any real effect. Members of the military, and everyone else, can still buy the phones. They just can't buy them on US military bases. And while the US might block the occasional merger or acquisition, or ban the occasional hardware or software product, we're largely ignoring that larger issue. Solving it borders on somewhere between incredibly expensive and realistically impossible.

Perhaps someday, global norms and international treaties will render this sort of device-level tampering off-limits. But until then, all we can do is hope that this particular arms race doesn't get too far out of control.

This essay previously appeared in the Washington Post.

Worse Than FailureCodeSOD: A Quick Replacement

Lucio Crusca was doing a bit of security auditing when he found this pile of code, and it is indeed a pile. It is PHP, which doesn’t automatically make it bad, but it makes use of a feature of PHP so bad that they’ve deprecated it in recent versions: the create_function method.

Before we even dig into this code, the create_function method takes a string, runs eval on it, and returns the name of the newly created anonymous function. Prior to PHP 5.3.0 this was their method of doing lambdas. And while the function is officially deprecated as of PHP 7.2.0… it’s not removed. You can still use it. And I’m sure a lot of code probably still does. Like this block…

        public static function markupToPHP($content) {
                if ($content instanceof phpQueryObject)
                        $content = $content->markupOuter();
                /* <php>...</php> to <?php...? > */
                $content = preg_replace_callback(
                        '@<php>\s*<!--(.*?)-->\s*</php>@s',
                        array('phpQuery', '_markupToPHPCallback'),
                        $content
                );
                /* <node attr='< ?php ? >'> extra space added to save highlighters */
                $regexes = array(
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(\')([^\']*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^\']*)\'@s',
                        '@(<(?!\\?)(?:[^>]|\\?>)+\\w+\\s*=\\s*)(")([^"]*)(?:&lt;|%3C)\\?(?:php)?(.*?)(?:\\?(?:&gt;|%3E))([^"]*)"@s',
                );
                foreach($regexes as $regex)
                        while (preg_match($regex, $content))
                                $content = preg_replace_callback(
                                        $regex,
                                        create_function('$m',
                                                'return $m[1].$m[2].$m[3]."<?php "
                                                        .str_replace(
                                                                array("%20", "%3E", "%09", "&#10;", "&#9;", "%7B", "%24", "%7D", "%22", "%5B", "%5D"),
                                                                array(" ", ">", "       ", "\n", "      ", "{", "$", "}", \'"\', "[", "]"),
                                                                htmlspecialchars_decode($m[4])
                                                        )
                                                        ." ?>".$m[5].$m[2];'
                                        ),
                                        $content
                                );
                return $content;
        }

From what I can determine from the comments and the code, this is taking some arbitrary content in the form <php>PHP CODE HERE</php> and converting it to <?php PHP CODE HERE ?>. I don’t know what happens after this function is done with it, but I’m already terrified.

The inner-loop fascinates me. while (preg_match($regex, $content)) implies that we need to call the replace function multiple times, but preg_replace_callback by default replaces all instances of the matching regex, so there’s absolutely no reason for the while loop. Then, of course, there’s the use of create_function, which is itself a WTF; it’s also worth noting that there’s no need to do this dynamically -- you could just as easily have declared a callback function like they did above with _markupToPHPCallback.
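
For readers more at home in Python than PHP, the same two points translate directly: a callback-based replace already handles every match in a single call, and an ordinary named function works perfectly well as the callback. A toy sketch of my own (not the phpQuery code):

import re

def shout(match):
    # An ordinary named function as the callback -- no need to build one
    # from a string at runtime.
    return match.group(0).upper()

# One call replaces every non-overlapping match; no outer loop required.
print(re.sub(r"\w+", shout, "one two three"))  # prints: ONE TWO THREE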

Lucio adds:

I was looking for potential security flaws: well, I’m not sure this is actually exploitable, because even black hats have limited patience!

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

TEDTED en Español: TED’s first-ever Spanish-language speaker event in NYC

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

Thursday marked the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event featured eight speakers, a musical performance, five short films and fifteen one-minute talks given by members of the audience.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina, TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and — as of earlier this month — an original podcast created in partnership with Univision Communications.

Should we automate democracy? “Is it just me, or are there other people here that are a little bit disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor wants to make sure we have elected governments that truly represent our values and wishes. His solution: What if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. So once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business failed in Ancient Greece, you’d have to stand in the town square with a basket over your head. Thankfully, we’ve come a long way — or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to convert the guilt and shame of a business venture gone bad into a catalyst for growth. Thus was born “Fuckup Nights” (FUN), a global movement and event series for sharing stories of professional failure, and The Failure Institute, a global research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than just music to the TED en Español stage. Inviting the audience to dance with them, Venezuelan Maria Fernanda Gonzalez, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms that range from South American to Caribbean-infused styles. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a place of music worth spreading.

Gastón Acurio shares stories of the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power that food has to change people’s lives. As ceviche started appearing in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and become proud of its own culture. But food hasn’t always been used to bring good to the world. With the industrial revolution and the rise of consumerism, “more people in the world are dying from obesity than hunger,” he notes, and many people’s lifestyles aren’t sustainable.
By interacting with and caring about the food we eat, Gastón says, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have all the answers on how to make this a systematic movement that politicians can get behind, but world-renowned cooks are already taking these ideas into their kitchens. He tells the stories of a restaurant in Peru that supports native people by sourcing ingredients from them, a famous chef in NYC who’s fighting against the use of monocultures and an emblematic restaurant in France that has barred meat from the menu. “Cooks worldwide are convinced that we cannot wait for others to make changes and that we must jump into action,” he says. But professional cooks can’t do it all. If we want real change to happen, Gastón urges, we need home cooking to be at the center of everything.

The interconnectedness of music and life. Chilean musical director Paolo Bortolameolli wraps his views on music within his memory of crying the very first time he listened to live classical music. Sharing the emotions music evoked in him, Bortolameolli presents music as a metaphor for life — full of the expected and the unexpected. He thinks that we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and he simultaneously suggests that every time we listen to a musical piece, we enliven the music, imbuing it with the potential to be not just recognized but rediscovered.

We reap what we sow — let’s sow something different. Up until the mid-’80s, the average incomes in major Latin American countries were on par with those in Korea. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enriquez, lies in a national prioritization of brainpower — and in identifying, educating and celebrating the best minds. What if in Latin America we started selecting for academic excellence the way we would for an Olympic soccer team? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds thirst for nourishment, competition and achievement — and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent a lifetime juggling various identities, Hwang can attest that having a blended background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people from around the globe. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their unique backgrounds and to create a world where identities are used to bring people together, not alienate them.

Marine ecologist Enric Sala wants to protect the last wild places in the ocean. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans. If you jumped in the ocean at any random spot, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone — a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. He proposes a radical solution to help protect the oceans by focusing on our high seas, advocating for the creation of a reserve that would include two-thirds of the world’s ocean. By safeguarding our high seas, Sala believes we will restore the ecological, economic and social benefits of the ocean — and ensure that when our grandchildren jump into any random spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up … In an improvised rap performance with plenty of well-timed dance moves, psychologist and dance therapist César Silveyra closes the session with 15 of what he calls “nano-talks.” In a spectacular showdown of his skills, Silveyra ties together ideas from previous speakers at the event, including Enric Sala’s warnings about overfished oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout for speaker Rebeca Hwang’s grandmother … all the while “feeling like Beyoncé.”

Geek FeminismInformal Geek Feminism get-togethers, May and June

Some Geek Feminism folks will be at the following conferences and conventions in the United States over the next several weeks, in case contributors and readers would like to have some informal get-togethers to reminisce and chat about inheritors of the GF legacy:

If you’re interested, feel free to comment below, and to take on the step of initiating open space/programming/session organizing!

CryptogramVirginia Beach Police Want Encrypted Radios

This article says that the Virginia Beach police are looking to buy encrypted radios.

Virginia Beach police believe encryption will prevent criminals from listening to police communications. They said officer safety would increase and citizens would be better protected.

Someone should ask them if they want those radios to have a backdoor.

Krebs on SecurityThink You’ve Got Your Credit Freezes Covered? Think Again.

I spent a few days last week speaking at and attending a conference on responding to identity theft. The forum was held in Florida, one of the major epicenters for identity fraud complaints in the United States. One gripe I heard from several presenters was that identity thieves increasingly are finding ways to open new mobile phone accounts in the names of people who have already frozen their credit files with the big-three credit bureaus. Here’s a look at what may be going on, and how you can protect yourself.

Carrie Kerskie is director of the Identity Fraud Institute at Hodges University in Naples. A big part of her job is helping local residents respond to identity theft and fraud complaints. Kerskie said she’s had multiple victims in her area recently complain of having cell phone accounts opened in their names even though they had already frozen their credit files at the big three credit bureaus -- Equifax, Experian and Trans Union (as well as distant fourth bureau Innovis).

The freeze process is designed so that a creditor should not be able to see your credit file unless you unfreeze the account. A credit freeze blocks potential creditors from being able to view or “pull” your credit file, making it far more difficult for identity thieves to apply for new lines of credit in your name.

But Kerskie’s investigation revealed that the mobile phone merchants weren’t asking any of the four credit bureaus mentioned above. Rather, the mobile providers were making credit queries with the National Consumer Telecommunications and Utilities Exchange (NCTUE), or nctue.com.

Source: nctue.com

“We’re finding that a lot of phone carriers — even some of the larger ones — are relying on NCTUE for credit checks,” Kerskie said. “It’s mainly phone carriers, but utilities, power, water, cable, any of those, they’re all starting to use this more.”

The NCTUE is a consumer reporting agency founded by AT&T in 1997 that maintains data such as payment and account history, reported by telecommunication, pay TV and utility service providers that are members of NCTUE.

Who are the NCTUE’s members? If you call the 800-number that NCTUE makes available to get a free copy of your NCTUE credit report, the option for “more information” about the organization says there are four “exchanges” that feed into the NCTUE’s system: the NCTUE itself; something called “Centralized Credit Check Systems“; the New York Data Exchange; and the California Utility Exchange.

According to a partner solutions page at Verizon, the New York Data Exchange is a not-for-profit entity created in 1996 that provides participating exchange carriers with access to local telecommunications service arrears (accounts that are unpaid) and final account information on residential end user accounts.

The NYDE is operated by Equifax Credit Information Services Inc. (yes, that Equifax). Verizon is one of many telecom providers that use the NYDE (and recall that AT&T was the founder of NCTUE).

The California Utility Exchange collects customer payment data from dozens of local utilities in the state, and also is operated by Equifax (Equifax Information Services LLC).

Google has virtually no useful information available about an entity called Centralized Credit Check Systems. It’s possible it no longer exists. If anyone finds differently, please leave a note in the comments section.

When I did some more digging on the NCTUE, I discovered…wait for it…Equifax also is the sole contractor that manages the NCTUE database. The entity’s site is also hosted out of Equifax’s servers. Equifax’s current contract to provide this service expires in 2020, according to a press release posted in 2015 by Equifax.

RED LIGHT. GREEN LIGHT. RED LIGHT.

Fortunately, the NCTUE makes it fairly easy to obtain any records they may have on Americans.  Simply phone them up (1-866-349-5185) and provide your Social Security number and the numeric portion of your registered street address.

Assuming the automated system can verify you with that information, the system then orders an NCTUE credit report to be sent to the address on file. You can also request to be sent a free “risk score” assigned by the NCTUE for each credit file it maintains.

The NCTUE also offers an online process for freezing one’s report. Perhaps unsurprisingly, however, the process for ordering a freeze through the NCTUE appears to be completely borked at the moment, thanks no doubt to Equifax’s well documented abysmal security practices.

Alternatively, it could all be part of a willful or negligent strategy to continue discouraging Americans from freezing their credit files (experts say the bureaus make about $1 for each time they sell your file to a potential creditor).

On April 29, I had an occasion to visit Equifax’s credit freeze application page, and found that the site was being served with an expired SSL certificate from Symantec (i.e., the site would not let me browse using https://). This happened because I went to the site using Google Chrome, and Google announced a decision in September 2017 to no longer trust SSL certs issued by Symantec prior to June 1, 2016.

Google said it would do this starting with Google Chrome version 66. It did not keep this plan a secret. On April 18, Google pushed out Chrome 66.  Despite all of the advance warnings, the security people at Equifax apparently missed the memo and in so doing probably scared most people away from its freeze page for several weeks (Equifax fixed the problem on its site sometime after I tweeted about the expired certificate on April 29).

That’s because when one uses Chrome to visit a site whose encryption certificate is validated by one of these unsupported Symantec certs, Chrome puts up a dire security warning that would almost certainly discourage most casual users from continuing.

The insecurity around Equifax’s own freeze site likely discouraged people from requesting a freeze on their credit files.

On May 7, when I visited the NCTUE’s page for freezing my credit file with them, I was presented with the very same SSL connection security alert from Chrome, warning of an invalid Symantec certificate and that any data I shared with the NCTUE’s freeze page would not be encrypted in transit.

The security alert generated by Chrome when visiting the freeze page for the NCTUE, whose database (and apparently web site) also is run by Equifax.

When I clicked through past the warnings and proceeded to the insecure NCTUE freeze form (which is worded and stylized almost exactly like Equifax’s credit freeze page), I filled out the required information to freeze my NCTUE file. See if you can guess what happened next.

Yep, I was unceremoniously declined the opportunity to do that. “We are currently unable to service your request,” read the resulting Web page, without suggesting alternative means of obtaining its report. “Please try again later.”

The message I received after trying to freeze my file with the NCTUE.

This scenario will no doubt be familiar to many readers who tried (and failed in a similar fashion) to file freezes on their credit files with Equifax after the company divulged that hackers had relieved it of Social Security numbers, addresses, dates of birth and other sensitive data on nearly 150 million Americans last September. I attempted to file a freeze via the NCTUE’s site with no fewer than three different browsers, and each time the form reset itself upon submission or took me to a failure page.

So let’s review. Many people who have succeeded in freezing their credit files with Equifax have nonetheless had their identities stolen and new accounts opened in their names thanks to a lesser-known credit bureau that seems to rely entirely on credit checking entities operated by Equifax.

“This just reinforces the fact that we are no longer in control of our information,” said Kerskie, who is also a founding member of Griffon Force, a Florida-based identity theft restoration firm.

I find it difficult to disagree with Kerskie’s statement. What chaps me about this discovery is that countless Americans are in many cases plunking down $3-$10 per bureau to freeze their credit files, and yet a huge player in this market is able to continue to profit off of identity theft on those same Americans.

EQUIFAX RESPONDS

I asked Equifax why the very same credit bureau operating the NCTUE’s data exchange (and those of at least two other contributing members) couldn’t detect when consumers had placed credit freezes with Equifax. Put simply, Equifax’s wall of legal verbiage below says mainly that NCTUE is a separate entity from Equifax, and that NCTUE doesn’t include Equifax credit information.

Here is Equifax’s full statement on the matter:

- The National Consumer Telecom and Utilities Exchange, Inc. (NCTUE) is a nationwide, member-owned and operated, FCRA-compliant consumer reporting agency that houses both positive and negative consumer payment data reported by its members, such as new connect requests, payment history, and historical account status and/or fraudulent accounts. NCTUE members are providers of telecommunications and pay/satellite television services to consumers, as well as utilities providing gas, electrical and water services to consumers.

- This information is available to NCTUE members and, on a limited basis, to certain other customers of NCTUE’s contracted exchange operator, Equifax Information Services, LLC (Equifax) – typically financial institutions and insurance providers. NCTUE does not include Equifax credit information, and Equifax is not a member of NCTUE, nor does Equifax own any aspect of NCTUE. NCTUE does not provide telecommunications, pay/satellite television or utility services to consumers, and consumers do not apply for those services with NCTUE.

- As a consumer reporting agency, NCTUE places and lifts security freezes on consumer files in accordance with the state law applicable to the consumer. NCTUE also maintains a voluntary security freeze program for consumers who live in states which currently do not have a security freeze law.

- NCTUE is a separate consumer reporting agency from Equifax and therefore a consumer would need to independently place and lift a freeze with NCTUE.

- While state laws vary in the manner in which consumers can place or lift a security freeze (temporarily or permanently), if a consumer has a security freeze on his or her NCTUE file and has not temporarily lifted the freeze, a creditor or other service provider, such as a mobile phone provider, generally cannot access that consumer’s NCTUE report in connection with a new account opening. However, the creditor or provider may be able to access that consumer’s credit report from another consumer reporting agency in order to open a new account, or decide to open the account without accessing a credit report from any consumer reporting agency, such as NCTUE or Equifax.

PLACING THE FREEZE

I was able to successfully place a freeze on my NCTUE report by calling their 800-number — 1-866-349-5355. The message said the NCTUE might charge a fee for placing or lifting the freeze, in accordance with state freeze laws.

Depending on your state of residence, the cost of placing a freeze on your credit file at Equifax, Experian or Trans Union can run between $3 and $10 per credit bureau, and in many states the bureaus also can charge fees for temporarily “thawing” and removing a freeze (according to a list published by Consumers Union, residents of four states — Indiana, Maine, North Carolina, South Carolina — do not need to pay to place, thaw or lift a freeze).

While my home state of Virginia allows the bureaus to charge $10 to place a freeze, for whatever reason the NCTUE did not assess that fee when I placed my freeze request with them. When and if your freeze request does get approved using the NCTUE’s automated phone system, make sure you have pen and paper or a keyboard handy to jot down the freeze PIN, which you will need in the event you ever wish to lift the freeze. When the system read my freeze PIN, it was read so quickly that I had to hit “*” on the dial pad several times to repeat the message.

It’s frankly absurd that consumers should ever have to pay to freeze their credit files at all, and yet a recent study indicates that almost 20 percent of Americans chose to do so at one or more of the three major credit bureaus since Equifax announced its breach last fall. The total estimated cost to consumers in freeze fees? $1.4 billion.

A bill in the U.S. Senate that looks likely to pass this year would require credit-reporting firms to let consumers place a freeze without paying. The free freeze component of the bill is just a tiny provision in a much larger banking reform bill — S. 2155 — that consumer groups say will roll back some of the consumer and market protections put in place after the Great Recession of the last decade.

“It’s part of a big banking bill that has provisions we hate,” said Chi Chi Wu, a staff attorney with the National Consumer Law Center. “It has some provisions not having to do with credit reporting, such as rolling back homeowners disclosure act provisions, changing protections in [current law] having to do with systemic risk.”

Sen. Jack Reed (D-RI) has offered a bill (S. 2362) that would invert the current credit reporting system by making all consumer credit files frozen by default, forcing consumers to unfreeze their files whenever they wish to obtain new credit. Meanwhile, several other bills would impose slightly less dramatic changes to the consumer credit reporting industry.

Wu said that while S. 2155 appears steaming toward passage, she doubts any of the other freeze-related bills will go anywhere.

“None of these bills that do something really strong are moving very far,” she said.

I should note that NCTUE does offer freeze alternatives. Just like with the big four, NCTUE lets consumers place a somewhat less restrictive “fraud alert” on their file indicating that verbal permission should be obtained over the phone from a consumer before a new account can be opened in their name.

Here is a primer on freezing your credit file with the big three bureaus, as well as with Innovis. This tutorial also includes advice on placing a security alert at ChexSystems, which is used by thousands of banks to verify customers that are requesting new checking and savings accounts. In addition, consumers can opt out of pre-approved credit offers by calling 1-888-5-OPT-OUT (1-888-567-8688), or by visiting optoutprescreen.com.

Oh, and if you don’t want Equifax sharing your salary history over the life of your entire career, you might want to opt out of that program as well.

Equifax and its ilk may one day finally be exposed for the digital dinosaurs that they are. But until that day, if you care about your identity you now may have another freeze to worry about. And if you decide to take the step of freezing your file at the NCTUE, please sound off about your experience in the comments below.

Cory DoctorowTalking privacy and GDPR with Thomson Reuters

Thomson Reuters interviewed me for their new series on data privacy and the EU General Data Protection Regulation; here’s the audio!


What if you just said when you breach, the damages that you owe to the people whose data you breached cannot be limited to the immediate cognizable consequences of that one breach but instead has to take recognition of the fact that breaches are cumulative? That the data that you release might be merged with some other set that was previously released either deliberately by someone who thought that they’d anonymized it because key identifiers had been removed that you’ve now added back in or accidentally through another breach? The merger of those two might create a harm.

Now you can re-identify a huge number of those prescriptions. That might create all kinds of harms that are not immediately apparent just by releasing a database of people’s rides, but when merged with maybe that NIH or NHS database suddenly becomes incredibly toxic and compromising.

If for example we said, “Okay, in recognition of this fact that once that data is released it never goes away, and each time it’s released it gets merged with other databases to create fresh harms that are unquantifiable in this moment and should be assumed to exceed any kind of immediate thing that we can put our finger on, that you have to pay fairly large statutory damages if you’re found to have mishandled data.” Well, now I think the insurance companies are going to do a lot of our dirty work for us.

We don’t have to come up with rules. We just have to wait for the insurance companies to show up at these places that they’re writing policies for and say, “Tell me again, why we should be writing you a policy when you’ve warehoused all of this incredibly toxic material that we’re all pretty sure you’re going to breach someday, and whose liability is effectively unbounded?” They’re going to make the companies discipline themselves.

Worse Than FailureExponential Backup

The first day of a new job is always an adjustment. There's a fine line between explaining that you're unused to a procedure and constantly saying "At my old company...". After all, nobody wants to be that guy, right? So you proceed with caution, trying to learn before giving advice.

But some things warrant the extra mile. When Samantha started her tenure at a mid-sized firm, it all started out fine. She got a computer right away, which is a nice plus. She met the team, got settled into a desk, and was given a list of passwords and important URLs to get situated. The usual stuff.

After changing her Windows password, she decided to start by browsing the source code repository. This company used Subversion, so she went and downloaded the whole repo so she could see the structure. It took a while, so she got up and got some coffee; when she got back, it had finished, and she was able to see the total size: 300 GB. That's... weird. Really weird. Weirder still, when she glanced over the commit history, it only dated back a year or so.

What could be taking so much space? Were they storing some huge binaries tucked away someplace that the code depended on? She didn't want to make waves, but this just seemed so... inefficiently huge. Now curious, she opened the repo, browsing the folder structure.

Subversion bases everything on folder structure; there is only really one "branch" in Git's thinking, but you can check out any subfolder without taking the whole repository. Inside of each project directory was a layout that is common to SVN repos: a folder called "branches", a folder called "tags", and a folder called "trunk" (Subversion's primary branch). In the branches directory there were folders called "fix" and "feature", and in each of those there were copies of the source code stored under the names of the branches. Under normal work, she'd start her checkout from one of those branch folders, thus only pulling down the code for her branch, and merge into the "trunk" copy when she was all done.

But there was one folder she didn't anticipate: "backups". Backups? But... this is version control. We can revert to an earlier version any time we want. What are the backups for? I must be misunderstanding. She opened one and was promptly horrified to find a series of zip files, dated monthly, all at revision 1.

Now morbidly curious, Samantha opened one of these zips. The top level folder inside the zip was the name of the project; under that, she found branches, tags, trunk. No way. They can't have-- She clicked in, and there it was, plain as day: another backups folder. And inside? Every backup older than the one she'd clicked. Each backup included, presumably, every backup prior to that, meaning that in the backup for October, the backup from January was included nine times, the backup from February eight times, and so on and so forth. Within two years, a floppy disk worth of code would fill a terabyte drive.
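
The arithmetic behind that claim is easy to sketch. A rough model, assuming the code itself stays a constant 1.44 MB (one floppy) and each monthly backup is a zip of the full working copy, prior backups included, with no deduplication or real compression savings:

# Rough model of the nested-backup growth: each month's zip contains the
# current code plus every earlier zip, so the total roughly doubles monthly.
code_mb = 1.44                       # "a floppy disk worth of code"
backups = []                         # sizes of the monthly backup zips, in MB
for month in range(1, 25):
    backups.append(code_mb + sum(backups))   # current code + every prior zip
    total_mb = sum(backups)
    if total_mb > 1_000_000:                 # roughly one terabyte
        print("Month %d: ~%.1f TB of backups" % (month, total_mb / 1e6))
        break

Under those assumptions the terabyte mark arrives around month 20 -- comfortably inside the two years.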

Samantha asked her boss, "What will you do when the repo gets too big to be downloaded onto your hard drive?"

His response was quick and entirely serious: "Well, we back it up, then we make a new one."

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

Krebs on SecurityMicrosoft Patch Tuesday, May 2018 Edition

Microsoft today released a bundle of security updates to fix at least 67 holes in its various Windows operating systems and related software, including one dangerous flaw that Microsoft warns is actively being exploited. Meanwhile, as it usually does on Microsoft’s Patch Tuesday — the second Tuesday of each month — Adobe has a new Flash Player update that addresses a single but critical security weakness.

First, the Flash Tuesday update, which brings Flash Player to v. 29.0.0.171. Some (present company included) would argue that Flash Player is itself “a single but critical security weakness.” Nevertheless, Google Chrome and Internet Explorer/Edge ship with their own versions of Flash, which get updated automatically when new versions of these browsers are made available.

You can check if your browser has Flash installed/enabled and what version it’s at by pointing your browser at this link. Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability.

Google Chrome blocks Flash from running on all but a handful of popular sites, and then only after user approval. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites. If you spot an upward pointing arrow to the right of the address bar in Chrome, that means there’s an update to the browser available, and it’s time to restart Chrome.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis.

Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits. Microsoft users will need to install this month’s batch of patches to get the latest Flash version for IE/Edge, where most of the critical updates in this month’s patch batch reside.

According to security vendor Qualys, one Microsoft patch in particular deserves priority over others in organizations that are testing updates before deploying them: CVE-2018-8174 involves a problem with the way the Windows scripting engine handles certain objects, and Microsoft says this bug is already being exploited in active attacks.

Some other useful sources of information on today’s updates include the Zero Day Initiative and Bleeping Computer. And of course there is always the Microsoft Security Update Guide.

As always, please feel free to leave a comment below if you experience any issues applying any of these updates.

Rondam RamblingsA quantum mechanics puzzle, part deux

This post is (part of) the answer to a puzzle I posed here.  Read that first if you haven't already. To make this discussion concrete, let's call the time it takes for light to traverse the short (or Small) arm of the interferometer Ts, the long (or Big) arm Tb (because Tl looks too much like T1). So there are five interesting cases here.  Let's start with the easy one: we illuminate the

Mark ShuttleworthCue the Cosmic Cuttlefish

With our castor now out for all to enjoy, and the Twitterverse delighted with the new minimal desktop and smooth snap integration, it’s time to turn our attention to the road ahead to 20.04 LTS, and I’m delighted to say that we’ll kick off that journey with the Cosmic Cuttlefish, soon to be known as Ubuntu 18.10.

Each of us has our own ideas of how the free stack will evolve in the next two years. And the great thing about Ubuntu is that it doesn’t reflect just one set of priorities, it’s an aggregation of all the things our community cares about. Nevertheless I thought I’d take the opportunity early in this LTS cycle to talk a little about the thing I’m starting to care more about than any one feature, and that’s security.

If I had one big thing that I could feel great about doing, systematically, for everyone who uses Ubuntu, it would be improving their confidence in the security of their systems and their data. It’s one of the very few truly unifying themes that crosses every use case.

It’s extraordinary how diverse the uses are to which the world puts Ubuntu these days, from the heart of the mainframe operation in a major financial firm, to the raspberry pi duck-taped to the back of a prototype something in the middle of nowhere, from desktops to clouds to connected things, we are the platform for ambitions great and small. We are stewards of a shared platform, and one of the ways we respond to that diversity is by opening up to let people push forward their ideas, making sure only that they are excellent to each other in the pushing.

But security is the one thing that every community wants – and it’s something that, on reflection, we can raise the bar even higher on.

So without further ado: thank you to everyone who helped bring about Bionic, and may you all enjoy working towards your own goals both in and out of Ubuntu in the next two years.

CryptogramThe US Is Unprepared for Election-Related Hacking in 2018

This survey and report is not surprising:

The survey of nearly forty Republican and Democratic campaign operatives, administered through November and December 2017, revealed that American political campaign staff -- primarily working at the state and congressional levels -- are not only unprepared for possible cyber attacks, but remain generally unconcerned about the threat. The survey sample was relatively small, but nevertheless the survey provides a first look at how campaign managers and staff are responding to the threat.

The overwhelming majority of those surveyed do not want to devote campaign resources to cybersecurity or to hire personnel to address cybersecurity issues. Even though campaign managers recognize there is a high probability that campaign and personal emails are at risk of being hacked, they are more concerned about fundraising and press coverage than they are about cybersecurity. Less than half of those surveyed said they had taken steps to make their data secure and most were unsure if they wanted to spend any money on this protection.

Security is never something we actually want. Security is something we need in order to avoid what we don't want. It's also more abstract, concerned with hypothetical future possibilities. Of course it's lower on the priorities list than fundraising and press coverage. They're more tangible, and they're more immediate.

This is all to the attackers' advantage.

Worse Than FailureYes == No

For decades, I worked in an industry where you were never allowed to say no to a user, no matter how ridiculous the request. You had to suck it up and figure out a way to deliver on insane requests, regardless of the technical debt they inflicted.

Users are a funny breed. They say things like "I don't care if the input dialog you have works; the last place I worked had a different dialog to do the same thing, and I want that dialog here!" With only one user saying stuff like that, it's semi-tolerable. When you have 700+ users and each of them wants a different dialog to do the same thing, and nobody in management will say no, you need to start creating table-driven dialogs (x-y coordinates, width, height, label phrasing, field layout within the dialog, different input formats, fonts, colors and so forth). Multiply that by the number of dialogs in your application and it becomes needlessly, pointlessly, impossibly difficult.
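
To make the scale concrete, here is a minimal sketch of what per-user, table-driven dialog definitions end up looking like (every name and field below is hypothetical, not taken from any real system):

# One row per (dialog, user): geometry, field order, formats, labels...
# Every "small" per-user request adds or mutates a row, and the validation
# logic has to cope with all of them at once.
DIALOG_TABLE = {
    ("order_entry", "default"): {
        "pos": (100, 80), "size": (400, 300),
        "fields": [("qty", "int"), ("price", "decimal:2")],
        "labels": {"qty": "Quantity", "price": "Unit price"},
    },
    ("order_entry", "user_ann"): {   # the dialog from Ann's old job
        "pos": (20, 20), "size": (640, 480),
        "fields": [("price", "decimal:4"), ("qty", "int"), ("discount", "percent")],
        "labels": {"qty": "Qty", "price": "Price", "discount": "Disc."},
    },
    # ...multiplied by 700+ users and every dialog in the application.
}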

But it never stops there. Often, one user will request that you move a field from another dialog onto their dialog - just for them. This creates all sorts of havoc with validation logic. Multiply it by hundreds of users and you're basically creating a different application for each of them - each with its own validation logic, all in the same application.

After just a single handful of users demanding changes like this, it can quickly become a nightmare. Worse, once it starts, the next user to whom you say no tells you that you did it for the other guy and so you have to do it for them too! After all, each user is the most important user, right?

It doesn't matter that saying no is the right thing to do. It doesn't matter that it will put a zero-value load on development and debugging time. It doesn't matter that sucking up development time to do it means there are fewer development hours for bug fixes or actual features.

When management refuses to say no, it can turn your code into a Pandora's-Box-o-WTF™

However, there is hope. There is a way to tell the users no without actually saying no. It's by getting them to say it for you and then withdrawing their urgent, can't-live-without-it, must-have-or-the-world-will-end request.

You may ask how?

The trick is to make them see the actual cost of implementing their teeny tiny little feature.

Yes, we can add that new button to provide all the functionality of Excel in an in-app
calculator, but it will take x months (years) to do it, AND it will push back all of the
other features in the queue. Shall I delay the next release and the other feature requests
so we can build this for you, or would you like to schedule it for a future release?

Naturally you'll have to answer questions like "But it's just a button; why would it take that much effort?"

This is a good thing because it forces them down the rabbit hole into your world where you are the expert. Now you get to explain to them the realities of software development, and the full cost of their little request.

Once they realize the true cost that they'd have to pay, the urgency of the request almost always subsides to "nice to have" and it gets pushed back so as not to delay the scheduled release.

And because you got them to say it for you, you didn't have to utter the word no.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet Linux AustraliaMichael Still: Adding oslo privsep to a new project, a worked example

You’ve decided that using sudo to run command lines as root is lame and that it is time to step up and do things properly. How do you do that? Well, here’s a simple guide to adding oslo privsep to your project!

In a previous post I showed you how to add a new method that ran with escalated permissions. However, that’s only helpful if you already have privsep added to your project. This post shows you how to do that thing to your favourite python project. In this case we’ll use OpenStack Cinder as a worked example.

Note that Cinder already uses privsep because of its use of os-brick, so the instructions below skip adding oslo.privsep to requirements.txt. If your project has never ever used privsep at all, you’ll need to add a line like this to requirements.txt:

oslo.privsep

For reference, this post is based on OpenStack review 566,479, which I wrote as an example of how to add privsep to a new project. If you’re after a complete worked example in a more complete form than this post then the review might be useful to you.

As a first step, let’s add the code we’d want to write to actually call something with escalated permissions. In the Cinder case I chose the cgroups throttling code for this example. So first off we’d need to create the privsep directory with the relevant helper code:

diff --git a/cinder/privsep/__init__.py b/cinder/privsep/__init__.py
new file mode 100644
index 0000000..7f826a8
--- /dev/null
+++ b/cinder/privsep/__init__.py
@@ -0,0 +1,32 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+"""Setup privsep decorator."""
+
+from oslo_privsep import capabilities
+from oslo_privsep import priv_context
+
+sys_admin_pctxt = priv_context.PrivContext(
+ 'cinder',
+ cfg_section='cinder_sys_admin',
+ pypath=__name__ + '.sys_admin_pctxt',
+ capabilities=[capabilities.CAP_CHOWN,
+ capabilities.CAP_DAC_OVERRIDE,
+ capabilities.CAP_DAC_READ_SEARCH,
+ capabilities.CAP_FOWNER,
+ capabilities.CAP_NET_ADMIN,
+ capabilities.CAP_SYS_ADMIN],
+)

This code defines the permissions that our context (called cinder_sys_admin in this case) has. These specific permissions in the example above should correlate with those that you’d get if you ran a command with sudo. There was a bit of back and forth about what permissions to use and how many contexts to have while we were implementing privsep in OpenStack Nova, but we’ll discuss those in a later post.
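
Once a context like this exists, ordinary unprivileged code calls a decorated helper exactly as it would any other Python function; oslo.privsep marshals the call into a helper daemon that is started transparently on first use. Here is a minimal sketch of what the calling side will look like once we add the helper in the next snippet (setup_throttling() is purely illustrative, not real Cinder code):

import cinder.privsep.cgroup


def setup_throttling(group_name):
    # This looks like a normal function call, but the body of cgroup_create()
    # runs inside the privsep helper daemon with the capabilities listed in
    # the cinder_sys_admin context. The first such call in a process starts
    # the daemon (via rootwrap in our configuration); later calls reuse it.
    cinder.privsep.cgroup.cgroup_create(group_name)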

Next we need the code that actually does the privileged thing:

diff --git a/cinder/privsep/cgroup.py b/cinder/privsep/cgroup.py
new file mode 100644
index 0000000..15d47e0
--- /dev/null
+++ b/cinder/privsep/cgroup.py
@@ -0,0 +1,35 @@
+# Copyright 2016 Red Hat, Inc
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
+#
+#    Licensed under the Apache License, Version 2.0 (the "License"); you may
+#    not use this file except in compliance with the License. You may obtain
+#    a copy of the License at
+#
+#         http://www.apache.org/licenses/LICENSE-2.0
+#
+#    Unless required by applicable law or agreed to in writing, software
+#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+#    License for the specific language governing permissions and limitations
+#    under the License.
+
+"""
+Helpers for cgroup related routines.
+"""
+
+from oslo_concurrency import processutils
+
+import cinder.privsep
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_create(name):
+    processutils.execute('cgcreate', '-g', 'blkio:%s' % name)
+
+
+@cinder.privsep.sys_admin_pctxt.entrypoint
+def cgroup_limit(name, rw, dev, bps):
+    processutils.execute('cgset', '-r',
+                         'blkio.throttle.%s_bps_device=%s %d' % (rw, dev, bps),
+                         name)

Here we just provide two methods which manipulate cgroups. That allows us to make this change to the throttling implementation in Cinder:

diff --git a/cinder/volume/throttling.py b/cinder/volume/throttling.py
index 39cbbeb..3c6ddaa 100644
--- a/cinder/volume/throttling.py
+++ b/cinder/volume/throttling.py
@@ -22,6 +22,7 @@ from oslo_concurrency import processutils
 from oslo_log import log as logging
 
 from cinder import exception
+import cinder.privsep.cgroup
 from cinder import utils
 
 
@@ -65,8 +66,7 @@ class BlkioCgroup(Throttle):
         self.dstdevs = {}
 
         try:
-            utils.execute('cgcreate', '-g', 'blkio:%s' % self.cgroup,
-                          run_as_root=True)
+            cinder.privsep.cgroup.cgroup_create(self.cgroup)
         except processutils.ProcessExecutionError:
             LOG.error('Failed to create blkio cgroup \'%(name)s\'.',
                       {'name': cgroup_name})
@@ -81,8 +81,7 @@ class BlkioCgroup(Throttle):
 
     def _limit_bps(self, rw, dev, bps):
         try:
-            utils.execute('cgset', '-r', 'blkio.throttle.%s_bps_device=%s %d'
-                          % (rw, dev, bps), self.cgroup, run_as_root=True)
+            cinder.privsep.cgroup.cgroup_limit(self.cgroup, rw, dev, bps)
         except processutils.ProcessExecutionError:
             LOG.warning('Failed to setup blkio cgroup to throttle the '
                         'device \'%(device)s\'.', {'device': dev})

These last two snippets should be familiar from the previous post about privsep in this series. Finally, for the actual implementation of privsep, we need to make sure that rootwrap has permissions to start the privsep helper daemon. You’ll get one daemon per unique security context, but in this case we only have one of those so we’ll only need one rootwrap entry. Note that I also remove the previous rootwrap entries for cgcreate and cgset while I’m here.

diff --git a/etc/cinder/rootwrap.d/volume.filters b/etc/cinder/rootwrap.d/volume.filters
index abc1517..d2d1720 100644
--- a/etc/cinder/rootwrap.d/volume.filters
+++ b/etc/cinder/rootwrap.d/volume.filters
@@ -43,6 +43,10 @@ lvdisplay4: EnvFilter, env, root, LC_ALL=C, LVM_SYSTEM_DIR=, LVM_SUPPRESS_FD_WAR
 # This line ties the superuser privs with the config files, context name,
 # and (implicitly) the actual python code invoked.
 privsep-rootwrap: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, os_brick.privileged.default, --privsep_sock_path, /tmp/.*
+
+# Privsep calls within cinder iteself
+privsep-rootwrap-sys_admin: RegExpFilter, privsep-helper, root, privsep-helper, --config-file, /etc/(?!\.\.).*, --privsep_context, cinder.privsep.sys_admin_pctxt, --privsep_sock_path, /tmp/.*
+
 # The following and any cinder/brick/* entries should all be obsoleted
 # by privsep, and may be removed once the os-brick version requirement
 # is updated appropriately.
@@ -93,8 +97,6 @@ ionice_1: ChainingRegExpFilter, ionice, root, ionice, -c[0-3], -n[0-7]
 ionice_2: ChainingRegExpFilter, ionice, root, ionice, -c[0-3]
 
 # cinder/volume/utils.py: setup_blkio_cgroup()
-cgcreate: CommandFilter, cgcreate, root
-cgset: CommandFilter, cgset, root
 cgexec: ChainingRegExpFilter, cgexec, root, cgexec, -g, blkio:\S+
 
 # cinder/volume/driver.py

And because we’re not bad people we’d of course write a release note about the changes we’ve made…

diff --git a/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
new file mode 100644
index 0000000..e78fb00
--- /dev/null
+++ b/releasenotes/notes/privsep-rocky-35bdfe70ed62a826.yaml
@@ -0,0 +1,14 @@
+---
+security:
+  - |
+    Privsep transitions. Cinder is transitioning from using the older style
+    rootwrap privilege escalation path to the new style Oslo privsep path.
+    This should improve performance and security of Cinder in the long term.
+  - |
+    privsep daemons are now started by Cinder when required. These daemons can
+    be started via rootwrap if required. rootwrap configs therefore need to
+    be updated to include new privsep daemon invocations.
+upgrade:
+  - |
+    The following commands are no longer required to be listed in your rootwrap
+    configuration: cgcreate; and cgset.

This code will now work. However, we’ve left out one critical piece of the puzzle — testing. If this code was uploaded like this, it would fail in the OpenStack gate, even though it probably passed on your desktop. This is because many of the gate jobs are setup in such a way that they can’t run rootwrapped commands, which in this case means that the rootwrap daemon won’t be able to start.

I found this quite confusing in Nova when I was implementing things and had missed a step. So I wrote a simple test fixture that warns me when I am being silly:

diff --git a/cinder/test.py b/cinder/test.py
index c8c9e6c..a49cedb 100644
--- a/cinder/test.py
+++ b/cinder/test.py
@@ -302,6 +302,9 @@ class TestCase(testtools.TestCase):
         tpool.killall()
         tpool._nthreads = 20
 
+        # NOTE(mikal): make sure we don't load a privsep helper accidentally
+        self.useFixture(cinder_fixtures.PrivsepNoHelperFixture())
+
     def _restore_obj_registry(self):
         objects_base.CinderObjectRegistry._registry._obj_classes = \
             self._base_test_obj_backup
diff --git a/cinder/tests/fixtures.py b/cinder/tests/fixtures.py
index 6e275a7..79e0b73 100644
--- a/cinder/tests/fixtures.py
+++ b/cinder/tests/fixtures.py
@@ -1,4 +1,6 @@
 # Copyright 2016 IBM Corp.
+# Copyright 2017 Rackspace Australia
+# Copyright 2018 Michael Still and Aptira
 #
 #    Licensed under the Apache License, Version 2.0 (the "License"); you may
 #    not use this file except in compliance with the License. You may obtain
@@ -21,6 +23,7 @@ import os
 import warnings
 
 import fixtures
+from oslo_privsep import daemon as privsep_daemon
 
 _TRUE_VALUES = ('True', 'true', '1', 'yes')
 
@@ -131,3 +134,29 @@ class WarningsFixture(fixtures.Fixture):
                     ' This key is deprecated. Please update your policy '
                     'file to use the standard policy values.')
         self.addCleanup(warnings.resetwarnings)
+
+
+class UnHelperfulClientChannel(privsep_daemon._ClientChannel):
+    def __init__(self, context):
+        raise Exception('You have attempted to start a privsep helper. '
+                        'This is not allowed in the gate, and '
+                        'indicates a failure to have mocked your tests.')
+
+
+class PrivsepNoHelperFixture(fixtures.Fixture):
+    """A fixture to catch failures to mock privsep's rootwrap helper.
+
+    If you fail to mock away a privsep'd method in a unit test, then
+    you may well end up accidentally running the privsep rootwrap
+    helper. This will fail in the gate, but it fails in a way which
+    doesn't identify which test is missing a mock. Instead, we
+    raise an exception so that you at least know where you've missed
+    something.
+    """
+
+    def setUp(self):
+        super(PrivsepNoHelperFixture, self).setUp()
+
+        self.useFixture(fixtures.MonkeyPatch(
+            'oslo_privsep.daemon.RootwrapClientChannel',
+            UnHelperfulClientChannel))

Now if you fail to mock a privsep’ed call, then you’ll get something like this:

==============================
Failed 1 tests - output below:
==============================

cinder.tests.unit.test_volume_throttling.ThrottleTestCase.test_BlkioCgroup
--------------------------------------------------------------------------

Captured traceback:
~~~~~~~~~~~~~~~~~~~
    Traceback (most recent call last):
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/mock/mock.py", line 1305, in patched
        return func(*args, **keywargs)
      File "cinder/tests/unit/test_volume_throttling.py", line 66, in test_BlkioCgroup
        throttle = throttling.BlkioCgroup(1024, 'fake_group')
      File "cinder/volume/throttling.py", line 69, in __init__
        cinder.privsep.cgroup.cgroup_create(self.cgroup)
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 206, in _wrap
        self.start()
      File "/srv/src/openstack/cinder/.tox/py27/local/lib/python2.7/site-packages/oslo_privsep/priv_context.py", line 217, in start
        channel = daemon.RootwrapClientChannel(context=self)
      File "cinder/tests/fixtures.py", line 141, in __init__
        raise Exception('You have attempted to start a privsep helper. '
    Exception: You have attempted to start a privsep helper. This is not allowed in the gate, and indicates a failure to have mocked your tests.

The last bit is the most important. The fixture we installed has detected that you’ve failed to mock a privsep’ed call and has informed you. So, the last step of all is fixing our tests. This normally involves changing where we mock, as many unit tests just lazily mock the execute() call. I try to be more granular than that. Here’s what that looked like in this throttling case:

diff --git a/cinder/tests/unit/test_volume_throttling.py b/cinder/tests/unit/test_volume_throttling.py
index 82e2645..edbc2d9 100644
--- a/cinder/tests/unit/test_volume_throttling.py
+++ b/cinder/tests/unit/test_volume_throttling.py
@@ -29,7 +29,9 @@ class ThrottleTestCase(test.TestCase):
             self.assertEqual([], cmd['prefix'])
 
     @mock.patch.object(utils, 'get_blkdev_major_minor')
-    def test_BlkioCgroup(self, mock_major_minor):
+    @mock.patch('cinder.privsep.cgroup.cgroup_create')
+    @mock.patch('cinder.privsep.cgroup.cgroup_limit')
+    def test_BlkioCgroup(self, mock_limit, mock_create, mock_major_minor):
 
         def fake_get_blkdev_major_minor(path):
             return {'src_volume1': "253:0", 'dst_volume1': "253:1",
@@ -37,38 +39,25 @@ class ThrottleTestCase(test.TestCase):
 
         mock_major_minor.side_effect = fake_get_blkdev_major_minor
 
-        self.exec_cnt = 0
+        throttle = throttling.BlkioCgroup(1024, 'fake_group')
+        with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
+                             cmd['prefix'])
 
-        def fake_execute(*cmd, **kwargs):
-            cmd_set = ['cgset', '-r',
-                       'blkio.throttle.%s_bps_device=%s %d', 'fake_group']
-            set_order = [None,
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024),
-                         # a nested job starts; bps limit are set to the half
-                         ('read', '253:0', 512),
-                         ('read', '253:2', 512),
-                         ('write', '253:1', 512),
-                         ('write', '253:3', 512),
-                         # a nested job ends; bps limit is resumed
-                         ('read', '253:0', 1024),
-                         ('write', '253:1', 1024)]
-
-            if set_order[self.exec_cnt] is None:
-                self.assertEqual(('cgcreate', '-g', 'blkio:fake_group'), cmd)
-            else:
-                cmd_set[2] %= set_order[self.exec_cnt]
-                self.assertEqual(tuple(cmd_set), cmd)
-
-            self.exec_cnt += 1
-
-        with mock.patch.object(utils, 'execute', side_effect=fake_execute):
-            throttle = throttling.BlkioCgroup(1024, 'fake_group')
-            with throttle.subcommand('src_volume1', 'dst_volume1') as cmd:
+            # a nested job
+            with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
                 self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
                                  cmd['prefix'])
 
-                # a nested job
-                with throttle.subcommand('src_volume2', 'dst_volume2') as cmd:
-                    self.assertEqual(['cgexec', '-g', 'blkio:fake_group'],
-                                     cmd['prefix'])
+        mock_create.assert_has_calls([mock.call('fake_group')])
+        mock_limit.assert_has_calls([
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024),
+            # a nested job starts; bps limit are set to the half
+            mock.call('fake_group', 'read', '253:0', 512),
+            mock.call('fake_group', 'read', '253:2', 512),
+            mock.call('fake_group', 'write', '253:1', 512),
+            mock.call('fake_group', 'write', '253:3', 512),
+            # a nested job ends; bps limit is resumed
+            mock.call('fake_group', 'read', '253:0', 1024),
+            mock.call('fake_group', 'write', '253:1', 1024)])

…and we’re done. This post has been pretty long so I am going to stop here for now. However, hopefully I’ve demonstrated that it’s actually not that hard to implement privsep in a project, even with some slight testing polish.


The post Adding oslo privsep to a new project, a worked example appeared first on Made by Mikal.

,

Cory DoctorowDonald Trump is a pathogen evolved to thrive in an attention-maximization ecosystem


My latest Locus column is The Engagement-Maximization Presidency, and it proposes a theory to explain the political phenomenon of Donald Trump: we live in a world in which communications platforms amplify anything that gets “engagement” and provides feedback on just how much your message has been amplified so you can tune and re-tune for maximum amplification.


Peter Watts’s 2002 novel Maelstrom illustrates a beautiful, terrifying example of this, in which a mindless, self-modifying computer virus turns itself into a chatbot that impersonates patient zero in a world-destroying pandemic; even though the virus doesn’t understand what it’s doing or how it’s doing it, it’s able to use feedback to refine its strategies, gaining control over more resources with which to try more strategies.
It’s a powerful metaphor for the kind of cold reading we see Trump engaging in at his rallies, and for the presidency itself. I think it also explains why getting Trump off Twitter is impossible: it’s his primary feedback tool, and without it, he wouldn’t know what kinds of rhetoric to double down on and what to quietly sideline.

Maelstrom is concerned with a pandemic that is started by its protagonist, Lenie Clark, who returns from a deep ocean rift bearing an ancient, devastating pathogen that burns its way through the human race, felling people by the millions.

As Clark walks across the world on a mission of her own, her presence in a message or news story becomes a signal of the utmost urgency. The filters (firewalls that give priority to some packets and suppress others as potentially malicious) are programmed to give highest priority to any news that might pertain to Lenie Clark, as the authorities try to stop her from bringing death wherever she goes.

Here’s where Watts’s evolutionary biology shines: he posits a piece of self-modifying malicious software – something that really exists in the world today – that automatically generates variations on its tactics to find computers to run on and reproduce itself. The more computers it colonizes, the more strategies it can try and the more computational power it can devote to analyzing these experiments and directing its random walk through the space of all possible messages to find the strategies that penetrate more firewalls and give it more computational power to devote to its task.

Through the kind of blind evolution that produces predator-fooling false eyes on the tails of tropical fish, the virus begins to pretend that it is Lenie Clark, sending messages of increasing convincingness as it learns to impersonate patient zero. The better it gets at this, the more welcoming it finds the firewalls and the more computers it infects.

At the same time, the actual pathogen that Lenie Clark brought up from the deeps is finding more and more hospitable hosts to reproduce in: thanks to the computer virus, which is directing public health authorities to take countermeasures in all the wrong places. The more effective the computer virus is at neutralizing public health authorities, the more the biological virus spreads. The more the biological virus spreads, the more anxious the public health authorities become for news of its progress, and the more computers there are trying to suck in any intelligence that seems to emanate from Lenie Clark, supercharging the computer virus.

Together, this computer virus and biological virus co-evolve, symbiotes who cooperate without ever intending to, like the predator that kills the prey that feeds the scavenging pathogen that weakens other prey to make it easier for predators to catch them.


The Engagement-Maximization Presidency [Cory Doctorow/Locus]


(Image: Kevin Dooley, CC-BY; Trump’s Hair)

Krebs on SecurityStudy: Attack on KrebsOnSecurity Cost IoT Device Owners $323K

A monster distributed denial-of-service attack (DDoS) against KrebsOnSecurity.com in 2016 knocked this site offline for nearly four days. The attack was executed through a network of hacked “Internet of Things” (IoT) devices such as Internet routers, security cameras and digital video recorders. A new study that tries to measure the direct cost of that one attack for IoT device users whose machines were swept up in the assault found that it may have cost device owners a total of $323,973.75 in excess power and added bandwidth consumption.

My bad.

But really, none of it was my fault at all. It was mostly the fault of IoT makers for shipping cheap, poorly designed products (insecure by default), and the fault of customers who bought these IoT things and plugged them onto the Internet without changing the things’ factory settings (passwords at least.)

The botnet that hit my site in Sept. 2016 was powered by the first version of Mirai, a malware strain that wriggles into dozens of IoT devices left exposed to the Internet and running with factory-default settings and passwords. Systems infected with Mirai are forced to scan the Internet for other vulnerable IoT devices, but they’re just as often used to help launch punishing DDoS attacks.

By the time of the first Mirai attack on this site, the young masterminds behind Mirai had already enslaved more than 600,000 IoT devices for their DDoS armies. But according to an interview with one of the admitted and convicted co-authors of Mirai, the part of their botnet that pounded my site was a mere slice of firepower they’d sold for a few hundred bucks to a willing buyer. The attack army sold to this ne’er-do-well harnessed the power of just 24,000 Mirai-infected systems (mostly security cameras and DVRs, but some routers, too).

These 24,000 Mirai devices clobbered my site for several days with data blasts of up to 620 Gbps. The attack was so bad that my pro-bono DDoS protection provider at the time — Akamai — had to let me go because the data firehose pointed at my site was starting to cause real pain for their paying customers. Akamai later estimated that the cost of maintaining protection for my site in the face of that onslaught would have run into the millions of dollars.

We’re getting better at figuring out the financial costs of DDoS attacks to the victims (5-, 6- or 7-digit dollar losses) and to the perpetrators (zero to hundreds of dollars). According to a report released this year by DDoS mitigation giant NETSCOUT Arbor, fifty-six percent of organizations last year experienced a financial impact from DDoS attacks of between $10,000 and $100,000, almost double the proportion from 2016.

But what if there were also a way to work out the cost of these attacks to the users of the IoT devices which get snared by DDoS botnets like Mirai? That’s what researchers at University of California, Berkeley School of Information sought to determine in their new paper, “rIoT: Quantifying Consumer Costs of Insecure Internet of Things Devices.”

If we accept the UC Berkeley team’s assumptions about costs borne by hacked IoT device users (more on that in a bit), the total cost of added bandwidth and energy consumption from the botnet that hit my site came to $323,973.75. This may sound like a lot of money, but remember that broken down among 24,000 attacking drones the per-device cost comes to just $13.50.
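
To make that per-device arithmetic explicit, here is a trivial back-of-the-envelope check (my own illustration, not part of the study's methodology):

total_cost = 323973.75   # UC Berkeley team's estimated total, in USD
devices = 24000          # Mirai-infected devices used in the attack

print(round(total_cost / devices, 2))   # -> 13.5, i.e. roughly $13.50 each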

So let’s review: The attacker who wanted to clobber my site paid a few hundred dollars to rent a tiny portion of a much bigger Mirai crime machine. That attack would likely have cost millions of dollars to mitigate. The consumers in possession of the IoT devices that did the attacking probably realized a few dollars in losses each, if that. Perhaps forever unmeasured are the many Web sites and Internet users whose connection speeds are often collateral damage in DDoS attacks.

Image: UC Berkeley.

Anyone noticing a slight asymmetry here in either costs or incentives? IoT security is what’s known as an “externality,” a term used to describe “positive or negative consequences to third parties that result from an economic transaction. When one party does not bear the full costs of its actions, it has inadequate incentives to avoid actions that incur those costs.”

In many cases negative externalities are synonymous with problems that the free market has a hard time rewarding individuals or companies for fixing or ameliorating, much like environmental pollution. The common theme with externalities is that the pain points to fix the problem are so diffuse and the costs borne by the problem so distributed across international borders that doing something meaningful about it often takes a global effort with many stakeholders — who can hopefully settle upon concrete steps for action and metrics to measure success.

The paper’s authors explain the misaligned incentives on two sides of the IoT security problem:

-“On the manufacturer side, many devices run lightweight Linux-based operating systems that open doors for hackers. Some consumer IoT devices implement minimal security. For example, device manufacturers may use default username and password credentials to access the device. Such design decisions simplify device setup and troubleshooting, but they also leave the device open to exploitation by hackers with access to the publicly-available or guessable credentials.”

-“Consumers who expect IoT devices to act like user-friendly ‘plug-and-play’ conveniences may have sufficient intuition to use the device but insufficient technical knowledge to protect or update it. Externalities may arise out of information asymmetries caused by hidden information or misaligned incentives. Hidden information occurs when consumers cannot discern product characteristics and, thus, are unable to purchase products that reflect their preferences. When consumers are unable to observe the security qualities of software, they instead purchase products based solely on price, and the overall quality of software in the market suffers.”

The UC Berkeley researchers concede that their experiments — in which they measured the power output and bandwidth consumption of various IoT devices they’d infected with a sandboxed version of Mirai — suggested that the scanning and DDoSsing activity prompted by a Mirai malware infection added almost negligible amounts in power consumption for the infected devices.

Thus, most of the loss figures cited for the 2016 attack rely heavily on estimates of how much the excess bandwidth created by a Mirai infection might cost users directly, and as such I suspect the $13.50 per machine estimates are on the high side.

No doubt, some Internet users get online via an Internet service provider that imposes a daily “bandwidth cap,” such that over-use of the allotted daily bandwidth amount can incur overage fees and/or relegate the customer to a slower, throttled connection for some period after the overage.

But for a majority of high-speed Internet users, the added bandwidth use from a router or other IoT device on the network being infected with Mirai probably wouldn’t show up as an added line charge on their monthly bills. I asked the researchers about the considerable wiggle factor here:

“Regarding bandwidth consumption, the cost may not ever show up on a consumer’s bill, especially if the consumer has no bandwidth cap,” reads an email from the UC Berkeley researchers who wrote the report, including Kim Fong, Kurt Hepler, Rohit Raghavan and Peter Rowland.

“We debated a lot on how to best determine and present bandwidth costs, as it does vary widely among users and ISPs,” they continued. “Costs are more defined in cases where bots cause users to exceed their monthly cap. But even if a consumer doesn’t directly pay a few extra dollars at the end of the month, the infected device is consuming actual bandwidth that must be supplied/serviced by the ISP. And it’s not unreasonable to assume that ISPs will eventually pass their increased costs onto consumers as higher monthly fees, etc. It’s difficult to quantify the consumer-side costs of unauthorized use — which is likely why there’s not much existing work — and our stats are definitely an estimate, but we feel it’s helpful in starting the discussion on how to quantify these costs.”

Measuring bandwidth and energy consumption may turn out to be a useful and accepted tool to help more accurately measure the full costs of DDoS attacks. I’d love to see these tests run against a broader range of IoT devices in a much larger simulated environment.

If the Berkeley method is refined enough to become accepted as one of many ways to measure actual losses from a DDoS attack, the reporting of such figures could make these crimes more likely to be prosecuted.

Many DDoS attack investigations go nowhere because targets of these attacks fail to come forward or press charges, making it difficult for prosecutors to prove any real economic harm was done. Since many of these investigations die on the vine for a lack of financial damages reaching certain law enforcement thresholds to justify a federal prosecution (often $50,000 – $100,000), factoring in estimates of the cost to hacked machine owners involved in each attack could change that math.

But the biggest levers for throttling the DDoS problem are in the hands of the people running the world’s largest ISPs, hosting providers and bandwidth peering points on the Internet today. Some of those levers I detailed in the “Shaming the Spoofers” section of The Democratization of Censorship, the first post I wrote after the attack and after Google had brought this site back online under its Project Shield program.

By the way, we should probably stop referring to IoT devices as “smart” when they start misbehaving within three minutes of being plugged into an Internet connection. That’s about how long your average cheapo, factory-default security camera plugged into the Internet has before getting successfully taken over by Mirai. In short, dumb IoT devices are those that don’t make it easy for owners to use them safely without being a nuisance or harm to themselves or others.

Maybe what we need to fight this onslaught of dumb devices are more network operators turning to ideas like IDIoT, a network policy enforcement architecture for consumer IoT devices that was first proposed in December 2017.  The goal of IDIoT is to restrict the network capabilities of IoT devices to only what is essential for regular device operation. For example, it might be okay for network cameras to upload a video file somewhere, but it’s definitely not okay for that camera to then go scanning the Web for other cameras to infect and enlist in DDoS attacks.
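
To make the IDIoT idea a little more concrete, here is a hypothetical, heavily simplified allowlist-style policy check in Python. The device class, hostnames and ports are invented for illustration; this is a sketch of the general approach, not the actual IDIoT implementation:

# Hypothetical allowlist: each device class may only talk to the
# destinations it needs for normal operation.
POLICY = {
    "network_camera": {
        ("storage.example.com", 443),  # upload video clips
        ("ntp.example.com", 123),      # time synchronisation
    },
}

def is_allowed(device_class, dest_host, dest_port):
    """Return True only if this flow is on the device class allowlist."""
    return (dest_host, dest_port) in POLICY.get(device_class, set())

# Uploading footage is fine; the same camera scanning telnet (port 23)
# on a random host (classic Mirai behaviour) would be dropped.
assert is_allowed("network_camera", "storage.example.com", 443)
assert not is_allowed("network_camera", "198.51.100.7", 23)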

So what does all this mean to you? That depends on how many IoT things you and your family and friends are plugging into the Internet and your/their level of knowledge about how to secure and maintain these devices. Here’s a primer on minimizing the chances that your orbit of IoT things become a security liability for you or for the Internet at large.

Sociological ImagesPocket-sized Politics

Major policy issues like gun control often require massive social and institutional changes, but many of these issues also have underlying cultural assumptions that make the status quo seem normal. By following smaller changes in the way people think about issues, we can see gradual adjustments in our culture that ultimately make the big changes more plausible.

Photo Credit: Emojipedia

For example, today’s gun debate even drills down to the little cartoons on your phone. There’s a whole process for proposing and reviewing new emoji, but different platforms have their own control over how they design the cartoons in coordination with the formal standards. Last week, Twitter pointed me to a recent report from Emojipedia about platform updates to the contested “pistol” emoji, moving from a cartoon revolver to a water pistol:

In an update to the original post, all major vendors have committed to this design change for “cross-platform compatibility.”

There are a couple ways to look at this change from a sociological angle. You could tell a story about change from the bottom-up, through social movements like the March For Our Lives, calling for gun reform in the wake of mass shootings. These movements are drawing attention to the way guns permeate American culture, and their public visibility makes smaller choices about the representation of guns more contentious. Apple didn’t comment directly on the intentions behind the redesign when it came out, but it has weighed in on the politics of emoji design in the past.

You could also tell a story about change from the top-down, where large tech companies have looked to copy Apple’s innovation for consistency in a contentious and uncertain political climate (sociologists call this “institutional isomorphism”). In the diagram, you can see how Apple’s early redesign provided an alternative framework for other companies to take up later on, just like Google and Microsoft adopted the dominant pistol design in earlier years.

Either way, if you favor common sense gun reform, redesigning emojis is obviously not enough. But cases like this help us understand how larger shifts in social norms are made up of many smaller changes that challenge the status quo.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureCodeSOD: CHashMap

There’s a phenomenon I think of as the “evolution of objects” and it impacts novice programmers. They start by having piles of variables named things like userName0, userName1, accountNum0, accountNum1, etc. This is awkward and cumbersome, and then they discover arrays. string* userNames, int[] accountNums. This is also awkward and cumbersome, and then they discover hash maps, and can do something like Map<string, string>* users. Most programmers go on to discover “wait, objects do that!”

Not so Brian’s co-worker, Dagny. Dagny wanted to write some C++, but didn’t want to learn that pesky STL or have to master templates. Dagny also considered themselves a “performance junkie”, so they didn’t want to bloat their codebase with peer-reviewed and optimized code, and instead decided to invent that wheel themselves.

Thus was born CHashMap. Now, Brian didn’t do us the favor of including any of the implementation of CHashMap, claiming he doesn’t want to “subject the readers to the nightmares that would inevitably arise from viewing this horror directly”. Important note for submitters: we want those nightmares.

Instead, Brian shares with us how the CHashMap is used, and from that we can infer a great deal about how it was implemented. First, let’s simply look at some declarations:

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

Note that CHashMap does not take any type parameters. This is because it’s “type ignorant”, which is like being type agnostic, but with more char*. For example, if you want to get, say, the “amount_due” field, you might write code like this:

    double amount = 0;
    amount = Atof(bills.Item("amount_due"));

Yes, everything, keys and values, is simply a char*. And, as a bonus, in the interests of code clarity, we can see that Dagny didn’t do anything dangerous, like overload the [] operator. It would certainly be confusing to be able to index the hash map like it were any other collection type.

Now, since everything is stored as a char*, it’s onto you to convert it back to the right type, but since chars are just bytes if you don’t look too closely… you can store anything at that pointer. So, for example, if you wanted to get all of a user’s billing history, you might do something like this…

    CHashMap bills;
    CHashMap billcols;
    CHashMap summary;
    CHashMap billinfo;

    int nbills = dbQuery (query, bills, billcols);
    if (nbills > 0) {
        // scan the bills for amounts and/or issues
        double amount;
        double amountDue = 0;
        int unresolved = 0;

        for (int r=0; r<nbills; r++) {
            if (Strcmp(bills.Item("payment_status",r),BILL_STATUS_REMITTED) != 0) {
                billinfo.Clear();
                amount = Atof(bills.Item("amount_due",r));
                if (amount >= 0) {
                    amountDue += amount;
                    if (Strcmp(bills.Item("status",r),BILL_STATUS_WAITING) == 0) {
                        unresolved += 1;
                        billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                        billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                        billinfo.AddItem ("account", bills.Item("account_number",r));
                        billinfo.AddItem ("amount", amount);
                    }
                }
                else {
                    amountDue += 0;
                    unresolved += 1;
                    billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))));
                    billinfo.AddItem ("biller", bills.Item("account_display_name",r));
                    billinfo.AddItem ("account", bills.Item("account_number",r));
                    billinfo.AddItem ("amount", "???");
                }
                summary.AddItem ("", &billinfo);
            }
        }
    }

Look at that summary.AddItem ("", &billinfo) line. Yes, that is an empty key. Yes, they’re pointing it at a reference to the billinfo (which also gets Clear()ed a few lines earlier, so I have no idea what’s happening there). And yes, they’re doing this assignment in a loop, but don’t worry! CHashMap allows multiple values per key! That "" key will hold everything.

So, you have multi-value keys which can themselves point to nested CHashMaps, which means you don’t need any complicated JSON or XML classes, you can just use CHashMap as your hammer/foot-gun.

    //code which fetches account details from JSON
    CHashMap accounts;
    CHashMap details;
    CHashMap keys;

    rc = getAccounts (userToken, accounts);
    if (rc == 0) {
        for (int a=1; a<=accounts.Count(); a++) {
            cvt.jsonToKeys (accounts.Item(a), keys);
            rc = getAccountDetails (userToken, keys.Item("accountId"), details);
        }
    }
    // Details of getAccounts
    int getAccounts (const char * user, CHashMap& rows) {
      // <snip>
      AccountClass account;
      for (int a=1; a<=count; a++) {
        // Populate the account class
        // <snip>
        rows.AddItem ("", account.jsonify(t));
      }

With this kind of versatility, is it any surprise that pretty much every function in the application depends on a CHashMap somehow? If that doesn’t prove its utility, I don’t know what will. How could you do anything better? Use classes? Don’t make me laugh!

As a bonus, remember this line above? billinfo.AddItem ("duedate", FormatTime("YYYY-MM-DD hh:mm:ss",cvtUTC(bills.Item("due_date",r))))? Well, Brian has this to add:

it’s worth mentioning that our DB stores dates in the typical format: “YYYY-MM-DD hh:mm:ss”. cvtUTC is a function that converts a date-time string to a time_t value, and FormatTime converts a time_t to a date-time string.


,

Planet Linux AustraliaDavid Rowe: FreeDV 700D and SSB Comparison

Mark, VK5QI, has just performed an SSB versus FreeDV 700D comparison between his home in Adelaide and the Manly Warringah Radio Society WebSDR in Sydney, about 1200km away. The band was 40m, and the channel very poor, with some slow fading. Mark used SVN revision 3581, built himself on Ubuntu, with an interleaver setting (Tools-Options menu) of 1 frame. Transmit power for SSB and FreeDV 700D was about the same.

I’m still finishing off FreeDV 700D integration and tuning the mode – but this is a very encouraging start. Thanks Mark!

Don MartiUnlocking the hidden European mode in web ads

It would make me really happy to be able to yellow-list Google web ads in Privacy Badger. (Yellow-listed domains are not blocked, but have their cookies restricted in order to cut back on cross-site tracking.) That's because a lot of news and cultural sites use DoubleClick for Publishers and other Google services to deliver legit, context-based advertising. Unfortunately, as far as I can tell, Google mixes in-context ads with crappy, spam-like, targeted stuff. What I want is something like Doc Searls style ads: Just give me ads not based on tracking me.

Until now, there has been no such setting. There could have been, if Do Not Track (DNT) had turned out to be a thing, but no. But there is some good news. Instead of one easy-to-use DNT, sites are starting to give us harder-to-find, but still usable, settings, in order to enable GDPR-compliant ads for Europe. Here's Google's: Ads personalization settings in Google’s publisher ad tags - DoubleClick for Publishers Help.

Wait a minute? Google respects DNT now?

Sort of. GDPR-compliant terms written by Google aren't exactly the same as EFF's privacy-friendly Do Not Track (DNT) Policy (all these different tracking policies are reminding me of open source licenses for some reason), but close enough. The catch is that as an end user, you can't just turn on Google's European mode. You have to do some JavaScript. I think I figured out how to do this in a simple browser extension to unlock secret European status.

Google doesn't appear to have their European mode activated yet, so I added a do-nothing "European mode" to the Aloodo project, for testing. I'm not able to yellow-list Google yet, but when GDPR takes effect later this month I'll test it some more.

In the meantime, I'll keep looking for other examples of hidden European mode, and see if I can figure out how to activate them.

,

Rondam RamblingsIn your face, liberal haters!

The New York Times reports that California is now the world's 5th largest economy.  Only the U.S. as a whole, China, Japan and Germany are bigger.  On top of that, the vast majority of that growth came from the coastal areas, where the liberals live. Meanwhile, in Kansas, the Republican experiment in stimulating economic growth by cutting taxes has gone down in screaming flames: The experiment with

,

CryptogramFriday Squid Blogging: US Army Developing 3D-Printable Battlefield Robot Squid

The next major war will be super weird.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux AustraliaDavid Rowe: FreeDV 1600 Sample Clock Offset Bug

So I’m busy integrating FreeDV 700D into the FreeDV GUI program. The 700D modem works on larger frames (160ms) than the previous modes (e.g. 20ms for FreeDV 1600) so I need to adjust FIFO sizes.

As a reference I tried FreeDV 1600 between two laptops (one tx, one rx) and noticed it was occasionally losing frame sync, generating bit errors, and producing the occasional bloop in the audio. After a little head scratching I discovered a bug in the FreeDV 1600 FDMDV modem! Boy, is my face red.

The FDMDV modem was struggling with sample clock differences between the mod and demod. I think the bug was introduced when I did some (too) clever refactoring to reduce FDMDV memory consumption while developing the SM1000 back in 2014!

Fortunately I have a trail of unit test programs, leading back from FreeDV GUI, to the FreeDV API (freedv_tx and freedv_rx), then individual unit tests for each modem (fdmdv_mod/fdmdv_demod), and finally Octave simulation code (fdmdv.m, fdmdv_demod.m and friends) for the modem.

Octave (or an equivalent vector based scripting language like Python/numpy) is much easier to work with than C for complex DSP problems. So after a little work I reproduced the problem using the Octave version of the FDMDV modem – bit errors happening every time there was a timing jump.

The modulator sends parallel streams of symbols at about 50 baud. These symbols are output at a sample rate of 8000 Hz. Part of the demodulator’s job is to estimate the best place to sample each received modem symbol; this is called timing estimation. When the tx and rx are separate, the two sample clocks are slightly different – your 8000 Hz clock will be a few Hz different to mine. This means the timing estimate is a moving target, and occasionally we need to compensate by taking a few more or a few fewer samples from the 8000 Hz sample stream.

In the plot below the Octave demodulator was fed with a signal that is transmitted at 8010 Hz instead of the nominal 8000 Hz, so the tx is sampling faster than the rx. The y axis is the timing estimate in samples, the x axis time in seconds. For FreeDV 1600 there are 160 samples per symbol (50 baud at 8 kHz). The timing estimate at the rx drifts forwards until we hit a threshold, set at +/- 40 samples (a quarter of a symbol). To avoid the timing estimate drifting too far, we take a one-off larger block of samples from the input; the timing estimate takes a step backwards, then starts drifting up again.
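
The compensation step itself is simple in principle. Here is a rough plain-Python sketch of the idea, with an arbitrary step size chosen for illustration; it is not the actual fdmdv.m or C implementation:

M = 160          # samples per symbol (50 baud at 8000 Hz)
THRESH = M // 4  # +/- 40 samples, a quarter of a symbol

def samples_to_take(rx_timing):
    """Decide how many input samples to consume for the next symbol."""
    nin = M
    if rx_timing > THRESH:
        nin = M + M // 8   # tx clock faster than rx: take a larger block
    elif rx_timing < -THRESH:
        nin = M - M // 8   # tx clock slower than rx: take a smaller block
    return nin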

Back to the bug. After some head scratching, messing with buffer shifts, and rolling back phases I eventually fixed the problem in the Octave code. Next step is to port the code to C. I used my test framework that automatically compares a bunch of vectors (states) in the Octave code to the equivalent C code:

octave:8> system("../build_linux/unittest/tfdmdv")
sizeof FDMDV states: 40032 bytes
ans = 0
octave:9> tfdmdv
tx_bits..................: OK
tx_symbols...............: OK
tx_fdm...................: OK
pilot_lut................: OK
pilot_coeff..............: OK
pilot lpf1...............: OK
pilot lpf2...............: OK
S1.......................: OK
S2.......................: OK
foff_coarse..............: OK
foff_fine................: OK
foff.....................: OK
rxdec filter.............: OK
rx filt..................: OK
env......................: OK
rx_timing................: OK
rx_symbols...............: OK
rx bits..................: OK
sync bit.................: OK
sync.....................: OK
nin......................: OK
sig_est..................: OK
noise_est................: OK

passes: 46 fails: 0

Great! This system really lets me move fast once the Octave code is written and tested. Next step is to test the C version of the FDMDV modem using the command line tools. Note how I used sox to insert a sample rate offset by changing the sample rate of the raw sample stream:

build_linux/src$ ./fdmdv_get_test_bits - 30000 | ./fdmdv_mod - - | sox -t raw -r 8000 -s -2 - -t raw -r 7990 - | ./fdmdv_demod - - 14 demod_dump.txt | ./fdmdv_put_test_bits -
-----------------+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+---+
bits 29568  errors 0  BER 0.0000

Zero errors, despite 10Hz sample clock offset. Yayyyyy. The C demodulator outputs a bunch of vectors that can be plotted with an Octave helper program:

octave:6> fdmdv_demod_c("../build_linux/src/demod_dump.txt",28000)

The FDMDV modem is integrated with Codec 2 in the FreeDV API. This can be tested using the freedv_tx/freedv_rx programs. For convenience, I generated some 60 second test files at different sample rates. Here is how I test using the freedv_rx program:

./freedv_rx 1600 ~/Desktop/ve9qrp_1600_8010.raw - | aplay -f S16

The output audio sounds good, no bloops, and by examining the freedv_rx_log.txt file I can see the demodulator didn’t lose sync. Cool.

Here is a table of the samples I used for testing:

No clock offset
Simulates Tx sample rate 10Hz slower than Rx
Simulates Tx sampling 10Hz faster than Rx

Finally, the FreeDV API is linked with the FreeDV GUI program. Here is a video of me testing different sample clock offsets using the raw files in the table above. Note there is no audio in this video as my screen recorder fights with FreeDV for use of sound cards. However the decoded FreeDV audio should be uninterrupted, there should be no re-syncs, and zero bit errors:

The fix has been checked into codec2-dev SVN rev 3556, and will make its way into FreeDV GUI 1.3, to be released in late May 2018.

Reading Further

FDMDV modem
README_fdmdv.txt
Steve Ports an OFDM modem from Octave to C, some more on the Octave/C automated test framework and porting complex DSP algorithms.
Testing a FDMDV Modem. Early blog post on the FDMDV modem with some more discussion on sample clock offsets
Timing Estimation for PSK modems, talks a little about how we generate a timing estimate

CryptogramDetecting Laptop Tampering

Micah Lee ran a two-year experiment designed to detect whether or not his laptop was ever tampered with. The results are inconclusive, but demonstrate how difficult it can be to detect laptop tampering.

Worse Than FailureError'd: Version-itis

"No thanks, I'm holding out for version greater than or equal to 3.6 before upgrading," writes Geoff G.

 

"Looks like Twilio sent me John Doe's receipt by mistake," wrote Charles L.

 

"Little do they know that I went back in time and submitted my resume via punch card!" Jim M. writes.

 

Richard S. wrote, "I went to request a password reset from an old site that is sending me birthday emails, but it looks like the reCAPTCHA is no longer available and the site maintainers have yet to notice."

 

"It's nice to see that this new Ultra Speed Plus™ CD burner lives up to its name, but honestly, I'm a bit scared to try some of these," April K. writes.

 

"Sometimes, like Samsung's website, you have to accept that it's just ok to fail sometimes," writes Alessandro L.

 


Planet Linux AustraliaSimon Lyall: Audiobooks – April 2018

Viking Britain: An Exploration by Thomas Williams

Pretty straightforward. Covers the up-to-date research (no Winged Helmets 😢) and is easy to follow (easier if you have a map of the UK). 7/10

Contact by Carl Sagan

I’d forgotten how different it was from the movie in places. A few extra characters and plot twists, and many more details and explanations of the science. 8/10

The Path Between the Seas: The Creation of the Panama Canal, 1870-1914 by David McCullough

My monthly McCullough book. Great as usual. Good picture of the project and people. 8/10

Winter World: The Ingenuity of Animal Survival by Bernd Heinrich

As per the title this spends much of the time on [varied strategies for] Winter adaptation vs Summer World’s more general coverage. A great listen 8/10

A Man on the Moon: The Voyages of the Apollo Astronauts by Andrew Chaikin

Great overview of the Apollo missions. The Author interviewed almost all the astronauts. Lots of details about the missions. Excellent 9/10

Walkaway by Cory Doctorow

Near future Sci Fi. Similar feel to some of his other books like Makers. Switches between characters & audiobook switches narrators to match. Fastforward the Sex Scenes 💤. Mostly works 7/10

The Neanderthals Rediscovered: How Modern Science Is Rewriting Their Story by Michael A. Morse

Pretty much what the subtitle advertises. Covers discoveries from the last 20 years which make other books out of date. Tries to be Neanderthals-only. 7/10

The Great Quake: How the Biggest Earthquake in North America Changed Our Understanding of the Planet by Henry Fountain

Straightforward story of the 1964 Alaska Earthquake. Follows half a dozen characters & concentrates on worst damaged areas. 7/10


Rondam RamblingsI don't know where I'm a gonna go when the volcano blows

Hawaii's Kilauea volcano is erupting.  So is one in Vanuatu.  And there is increased activity in Yellowstone.  Hang on to your hats, folks, Jesus's return must be imminent. (In case you didn't know, the title is a line from a Jimmy Buffett song.)

,

Planet Linux AustraliaMichael Still: How to make a privileged call with oslo privsep


Once you’ve added oslo privsep to your project, how do you make a privileged call? It’s actually really easy to do. In this post I will assume you already have privsep running for your project, which at the time of writing limits you to OpenStack Nova in the OpenStack universe.

The first step is to write the code that will run with escalated permissions. In Nova, we have chosen to only have one set of escalated permissions, so it’s easy to decide which set to use. I’ll document how we reached that decision and alternative approaches in another post.

In Nova, all code that runs with escalated permissions is in the nova/privsep directory, which is a pattern I’d like to see repeated in other projects. This is partially because privsep maintains a whitelist of methods that are allowed to be run this way, but it’s also because it makes it very obvious to callers that the code being called is special in some way.

Let’s assume that we’re going to add a simple method which manipulates the filesystem of a hypervisor node as root. We’d write a method like this in a file inside nova/privsep:

import nova.privsep

...

@nova.privsep.sys_admin_pctxt.entrypoint
def update_motd(message):
    with open('/etc/motd', 'w') as f:
        f.write(message)

This method updates /etc/motd, which is the text which is displayed when a user interactively logs into the hypervisor node. “motd” stands for “message of the day” by the way. Here we just pass a new message of the day which clobbers the old value in the file.

The important thing is that entrypoint decorator at the start of the method. That’s how privsep decides to run this method with escalated permissions, and decides what permissions to use. In Nova at the moment we only have one set of escalated permissions, which we called sys_admin_pctxt because we’re artists. I’ll discuss in a later post how we came to that decision and what the other options were.

We can then call this method from anywhere else in Nova like this:

import nova.privsep.motd

...

nova.privsep.motd.update_motd('This node is currently idle')

Note that we do imports for privsep code slightly differently. We always import the entire path, instead of creating a shortcut to just the module we’re using. In other words, we don’t do:

from nova.privsep import motd

...

motd.update_motd('This node is a banana')

The above code would work, but is frowned on because it is less obvious here that the update_motd() method runs with escalated permissions — you’d have to go and read the imports to tell that.

That’s really all there is to it. The only other thing to mention is that there is a bit of a wart — code with escalated permissions can only use Nova code that is within the privsep directory. That’s been a problem when we’ve wanted to use a utility method from outside that path inside escalated code. The restriction happens for good reasons, so instead what we do in this case is move the utility into the privsep directory and fix up all the other callers to call the new location. It’s not perfect, but it’s what we have for now.

There are some simple review criteria that should be used to assess a patch which implements new code that uses privsep in OpenStack Nova. They are:

  • Don’t use imports which create aliases. Use the “import nova.privsep.motd” form instead.
  • Keep methods with escalated permissions as simple as possible. Remember that these things are dangerous and should be as easy to understand as possible.
  • Calculate paths to manipulate inside the escalated method — so, don’t let someone pass in a full path and the contents to write to that file as root, instead let them pass in the name of the network interface or whatever it is that you are manipulating and then calculate the path from there. That will make it harder for callers to use your code to clobber random files on the system (see the sketch after this list).
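
As a rough illustration of that last review point, here is what the “calculate the path inside the escalated method” pattern might look like. This is a hypothetical sketch, not code from Nova itself; the sysfs path and the validation are mine:

import os

import nova.privsep


@nova.privsep.sys_admin_pctxt.entrypoint
def set_interface_mtu(interface, mtu):
    # Callers pass a constrained interface name and a value, never a
    # full path. The path is calculated here, so this method can't be
    # used to clobber arbitrary files on the system.
    if not interface or '/' in interface:
        raise ValueError('invalid interface name: %r' % interface)
    path = os.path.join('/sys/class/net', interface, 'mtu')
    with open(path, 'w') as f:
        f.write(str(mtu))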

Adding new code with escalated permissions is really easy in Nova now, and much more secure and faster than it was when we only had sudo and root command lines to do these sorts of things. Let me know if you have any questions.


The post How to make a privileged call with oslo privsep appeared first on Made by Mikal.

Krebs on SecurityTwitter to All Users: Change Your Password Now!

Twitter just asked all 300+ million users to reset their passwords, citing the exposure of user passwords via a bug that stored passwords in plain text — without protecting them with any sort of encryption technology that would mask a Twitter user’s true password. The social media giant says it has fixed the bug and that so far its investigation hasn’t turned up any signs of a breach or that anyone misused the information. But if you have a Twitter account, please change your account password now.

Or if you don’t trust links in blogs like this (I get it) go to Twitter.com and change it from there. And then come back and read the rest of this. We’ll wait.

In a post to its company blog this afternoon, Twitter CTO Parag Agrawal wrote:

“When you set a password for your Twitter account, we use technology that masks it so no one at the company can see it. We recently identified a bug that stored passwords unmasked in an internal log. We have fixed the bug, and our investigation shows no indication of breach or misuse by anyone.

A message posted this afternoon (and still present as a pop-up) warns all users to change their passwords.

“Out of an abundance of caution, we ask that you consider changing your password on all services where you’ve used this password. You can change your Twitter password anytime by going to the password settings page.”

Agrawal explains that Twitter normally masks user passwords using a state-of-the-art password hashing function called “bcrypt,” which replaces the user’s password with a random-looking set of numbers and letters that are stored in Twitter’s system.

“This allows our systems to validate your account credentials without revealing your password,” said Agrawal, who says the technology they’re using to mask user passwords is the industry standard.

“Due to a bug, passwords were written to an internal log before completing the hashing process,” he continued. “We found this error ourselves, removed the passwords, and are implementing plans to prevent this bug from happening again.”
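
For readers curious what that hashing step looks like in practice, here is a minimal sketch using the widely used Python bcrypt library (purely illustrative; this is obviously not Twitter's code):

import bcrypt

password = b"correct horse battery staple"

# Store only the salted hash, never the plaintext.
hashed = bcrypt.hashpw(password, bcrypt.gensalt())

# Credentials can later be validated without the plaintext ever being
# stored; that is why writing passwords to a log *before* hashing, as
# in the bug described above, defeats the whole scheme.
assert bcrypt.checkpw(password, hashed)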

Agrawal wrote that while Twitter has no reason to believe password information ever left Twitter’s systems or was misused by anyone, the company is still urging all Twitter users to reset their passwords NOW.

A letter to all Twitter users posted by Twitter CTO Parag Agrawal

Twitter advises:
-Change your password on Twitter and on any other service where you may have used the same password.
-Use a strong password that you don’t reuse on other websites.
-Enable login verification, also known as two-factor authentication. This is the single best action you can take to increase your account security.
-Use a password manager to make sure you’re using strong, unique passwords everywhere.

This may be much ado about nothing disclosed out of an abundance of caution, or further investigation may reveal different findings. It doesn’t matter for right now: If you’re a Twitter user and if you didn’t take my advice to go change your password yet, go do it now! That is, if you can.

Twitter.com seems responsive now, but for some period of time Thursday afternoon Twitter had problems displaying many Twitter profiles, or even its homepage. Just a few moments ago, I tried to visit the Twitter CTO’s profile page and got this (ditto for Twitter.com):

What KrebsOnSecurity and other Twitter users got when we tried to visit twitter.com and the Twitter CTO’s profile page late in the afternoon ET on May 3, 2018.

If for some reason you can’t reach Twitter.com, try again soon. Put it on your to-do list or calendar for an hour from now. Seriously, do it now or very soon.

And please don’t use a password that you have used for any other account you use online, either in the past or in the present. A non-comprehensive list (note to self) of some password tips is here.

I have sent some more specific questions about this incident in to Twitter. More updates as available.

Update, 8:04 p.m. ET: Went to reset my password at Twitter and it said my new password was strong, but when I submitted it I was led to a dead page. But after logging in again at twitter.com the new password worked (and the old didn’t anymore). Then it prompted me to enter one-time code from app (you do have 2-factor set up on Twitter, right?) Password successfully changed!

Rondam RamblingsA quantum mechanics puzzle

Time to take a break from politics and sociology and geek out about quantum mechanics for a while. Consider a riff on a Michelson-style interferometer that looks like this: A source of laser light shines on a half-silvered mirror angled at 45 degrees (the grey rectangle).  This splits the beam in two.  The two beams are in actuality the same color as the original, but I've drawn them in