Planet Russell


Planet Linux Australia – Russell Coker: Men Commenting on Women’s Issues

A lecture at LCA 2011 which included some inappropriate slides was followed by long discussions on mailing lists. In February 2011 I wrote a blog post debunking some of the bogus arguments in two lists [1]. One of the noteworthy incidents in the mailing list discussion concerned Ted Ts’o (an influential member of the Linux community) debating the definition of rape. My main point on that issue in Feb 2011 was that it’s insensitive to needlessly debate the statistics.

Recently Valerie Aurora wrote about another aspect of this on The Ada Initiative blog [2] and on her personal blog. Some of her significant points are that conference harassment doesn’t end when the conference ends (it can continue on mailing lists etc), that good people shouldn’t do nothing when bad things happen, and that free speech doesn’t mean freedom from consequences or the freedom to use private resources (such as conference mailing lists) without restriction.

Craig Sanders wrote a very misguided post about the Ted Ts’o situation [3]. One of the many things wrong with his post is his statement “I’m particularly disgusted by the men who intervene way too early – without an explicit invitation or request for help or a clear need such as an immediate threat of violence – in womens’ issues”.

I believe that as a general rule when any group of people are involved in causing a problem they should be involved in fixing it. So when we have problems that are broadly based around men treating women badly the prime responsibility should be upon men to fix them. It seems very clear that no matter what scope is chosen for fixing the problems (whether it be lobbying for new legislation, sociological research, blogging, or directly discussing issues with people to change their attitudes) women are doing considerably more than half the work. I believe that this is an indication that overall men are failing.

Asking for Help

I don’t believe that members of minority groups should have to ask for help. Asking isn’t easy; having someone spontaneously offer help because it’s the right thing to do can be a lot easier to accept psychologically than having to beg for help. There is a book named “Women Don’t Ask” which has a page on the geek feminism Wiki [4]. I think the fact that so many women relate to a book named “Women Don’t Ask” is an indication that we shouldn’t expect women to ask directly, particularly in times of stress. The Wiki page notes a criticism of the book that some specific requests are framed as “complaining”, so I think we should consider a “complaint” from a woman as a direct request to do something.

The geek feminism blog has an article titled “How To Exclude Women Without Really Trying” which covers many aspects of one incident [5]. Near the end of the article is a direct call for men to be involved in dealing with such problems. The geek feminism Wiki has a page on “Allies” which includes “Even a blog post helps” [6]. It seems clear from public web sites run by women that women really want men to be involved.

Finally, when I get blog comments and private email from women who thank me for my posts, I take it as an implied request to do more of the same.

One thing that we really don’t want is to have men wait and do nothing until there is an immediate threat of violence. There are two massive problems with that plan: one is that being saved from a violent situation isn’t a fun experience; the other is that an immediate threat of violence is most likely to happen when there is no-one around to intervene.

Men Don’t Listen to Women

Rebecca Solnit wrote an article about being ignored by men titled “Men Explain Things to Me” [7]. When discussing women’s issues the term “Mansplaining” is often used for that sort of thing; the geek feminism Wiki has some background [8]. It seems obvious that the men who have the greatest need to be taught some things related to women’s issues are the ones who are least likely to listen to women. This implies that other men have to teach them.

Craig says that women need “space to discover and practice their own strength and their own voices”. I think that the best way to achieve that goal is to listen when women speak. Of course that doesn’t preclude speaking as well, just listen first, listen carefully, and listen more than you speak.

Craig claims that when men like me and Matthew Garrett comment on such issues we are making “women’s spaces more comfortable, more palatable, for men”. From all the discussion on this it seems quite obvious that what would make things more comfortable for men would be for the issue to never be discussed at all. It seems to me that two of the ways of making such discussions uncomfortable for most men are to discuss sexual assault and to discuss what should be done when you have a friend who treats women in a way that you don’t like. Matthew has covered both of those so it seems that he’s doing a good job of making men uncomfortable – I think that this is a good thing, a discussion that is “comfortable and palatable” for the people in power is not going to be any good for the people who aren’t in power.

The Voting Aspect

It seems to me that when certain issues are discussed we have a social process that is some form of vote. If one person complains then they are portrayed as crazy. When other people agree with the complaint then their comments are marginalised to try and preserve the narrative of one crazy person. It seems that in the case of the discussion about Rape Apology and LCA 2011 most men who comment regard it as one person (either Valerie Aurora or Matthew Garrett) causing a dispute. There is even some commentary which references my blog post about Rape Apology [9] but somehow manages to ignore me when it comes to counting more than one person agreeing with Valerie. For reference David Zanetti was the first person to use the term “apologist for rapists” in connection with the LCA 2011 discussion [10]. So we have a count of at least three men already.

These same patterns play out every time, so making a comment in support makes a difference. It doesn’t have to be insightful, long, or well written; merely “I agree” and a link to a web page will help. Note that a blog post is much better than a comment in this regard: comments are much like conversation, while a blog post is a stronger commitment to a position.

I don’t believe that the majority is necessarily correct. But an opinion which is supported by too small a minority isn’t going to be considered much by most people.

The Cost of Commenting

The Internet is a hostile environment; when you comment on a contentious issue there will be people who demonstrate their disagreement in uncivilised and even criminal ways. S. E. Smith wrote an informative post for Tiger Beatdown about the terrorism that feminist bloggers face [11]. I believe that men face fewer threats than women when they write about such things and the threats are less credible. I don’t believe that any of the men who have threatened me have the ability to carry out their threats but I expect that many women who receive such threats will consider them to be credible.

The difference in the frequency and nature of the terrorism (and there is no other word for what S. E. Smith describes) experienced by men and women gives a vastly different cost to commenting. So when men fail to address issues related to the behavior of other men, that isn’t helping women in any way. It’s imposing a significant cost on women for covering issues which could be addressed by men for minimal cost.

It’s interesting to note that there are men who consider themselves to be brave because they write things which will cause women to criticise them or even accuse them of misogyny. I think that the women who write about such issues even though they will receive threats of significant violence are the brave ones.

Not Being Patronising

Craig raises the issue of not being patronising, which is of course very important. I think that the first thing to do to avoid being perceived as patronising in a blog post is to cite adequate references. I’ve spent a lot of time reading what women have written about such issues and cited the articles that seem most useful in describing the issues. I’m sure that some women will disagree with my choice of references and some will disagree with some of my conclusions, but I think that most women will appreciate that I read what women write (it seems that most men don’t).

It seems to me that a significant part of feminism is about women not having men tell them what to do. So when men offer advice on how to go about feminist advocacy it’s likely to be taken badly. It’s not just that women don’t want advice from men, but that advice from men is usually wrong. There are patterns in communication which mean that the effective strategies for women communicating with men are different from the effective strategies for men communicating with men (see my previous section on men not listening to women). Also there’s a common trend of men offering simplistic advice on how to solve problems, one thing to keep in mind is that any problem which affects many people and is easy to solve has probably been solved a long time ago.

Often when social issues are discussed there is some background in the life experience of the people involved. For example Rookie Mag has an article about the street harassment women face which includes many disturbing anecdotes (some of which concern primary school students) [12]. Obviously anyone who has lived through that sort of thing (which means most women) will instinctively understand some issues related to threatening sexual behavior that I can’t easily understand even when I spend some time considering the matter. So there will be things which don’t immediately appear to be serious problems to me but which are interpreted very differently by women. The non-patronising approach to such things is to accept the concerns women express as legitimate, to try to understand them, and not to argue about it. For example the issue that Valerie recently raised wasn’t something that seemed significant when I first read the email in question, but I carefully considered it when I saw her posts explaining the issue and what she wrote makes sense to me.

I don’t think it’s possible for a man to make a useful comment on any issue related to the treatment of women without consulting multiple women first. I suggest that a prerequisite for any man who wants to write any sort of long article about the treatment of women is to have conversations with multiple women who have relevant knowledge. I’ve had some long discussions with more than a few women who are involved with the FOSS community. This has given me a reasonable understanding of some of the issues (I won’t claim to be any sort of expert). I think that if you just go and imagine things about a group of people who have a significantly different life-experience then you will be wrong in many ways and often offensively wrong. Just reading isn’t enough; you need to have conversations with multiple people so that they can point out the things you don’t understand.

This isn’t any sort of comprehensive list of ways to avoid being patronising, but it’s a few things which seem like common mistakes.

Anne Onne wrote a detailed post advising men who want to comment on feminist blogs etc [13], most of it applies to any situation where men comment on women’s issues.

Planet Debian – Ian Donnelly: Wrapping Up

Hi Everybody!

I have been keeping very busy on this blog the past few days with some very exciting updates! Everything is now in place for my Google Summer of Code project! Elektra now has the ability to merge KeySets, ucf has been patched to allow custom merge commands, we included a new elektra-merge script to use in conjunction with the new ucf command, and we wrote a great tutorial on how to use Elektra to merge configuration files in any package using ucf. There are a few things I have missed updating you all on, or that are still wrapping up.

First of all, I would like to address the Debian packages. While Elektra includes everything needed for my Google Summer of Code project, it must be built from source right now. Unfortunately, with the rapid development Elektra has seen the past few months, we did not pay enough attention to the Debian packages and they became dusty and riddled with bugs. Fortunately, we have a solution and his name is Pino Toscano. Pino has, very graciously, agreed to help us get our Debian packages back into shape. If you wish to see the current progress of the packages, you can check his repo. Pino has already made fantastic progress creating the Debian packages. I will post here when the packages are all fixed up and the latest versions of Elektra are in the Debian repo.

Some great news that we just received is that Elektra 0.8.7 has just been accepted into Debian unstable! This is huge progress for our team and means that users can now download our packages directly from Debian’s unstable repo and test out these new features. Obviously, the normal caveats apply for any packages in unstable but this is still an amazing piece of news and I would like to thank Pino again for all the support he has provided.

Another worthy piece of news that I unfortunately haven’t had time to cover is that, thanks to Felix Berlakovich, Elektra has a new plug-in called ini. In case you couldn’t guess, this new plug-in is used to mount ini files, and the good news is that it is a vast improvement on our older simpleini plug-in. While it is still a work in progress, this new plug-in is much more powerful than simpleini. The new plug-in supports sections and comments and works with many more ini files than the old plug-in. You may have noticed that I used this new plug-in to mount smb.conf in my technical demo of Samba using Elektra for configuration merging. Since smb.conf follows the same syntax as an ini file, this new plug-in works great for mounting this file into the Elektra Key Database (see the quick illustration below).
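To see why smb.conf is such a natural fit for an ini plug-in, here is a tiny sanity check using nothing but Python’s standard-library configparser. This only illustrates the file format, not Elektra itself, and the section names and values are invented:

    import configparser

    # Samba config files are ini-syntax: [sections] holding "key = value"
    # lines. Interpolation is disabled because Samba uses % macros (e.g. %h).
    parser = configparser.ConfigParser(interpolation=None)
    parser.read_string("""
    [global]
    workgroup = WORKGROUP
    server string = %h server

    [homes]
    browseable = no
    """)

    print(parser.sections())              # ['global', 'homes']
    print(parser["global"]["workgroup"])  # WORKGROUP

That same section/key structure is what an ini-style plug-in can map into a key hierarchy.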

I hope my blogs so far have been very informative and helpful.

Sincerely,
Ian S. Donnelly

Planet Linux Australia – Michael Still: Juno nova mid-cycle meetup summary: the next generation Nova API

This is the final post in my series covering the highlights from the Juno Nova mid-cycle meetup. In this post I will cover our next generation API, which used to be called the v3 API but is largely now referred to as the v2.1 API. Getting to this point has been one of the more painful processes I think I've ever seen in Nova's development history, and I think we've learnt some important things about how large distributed projects operate along the way. My hope is that we remember these lessons next time we hit something as contentious as our API re-write has been.

Now on to the API itself. It started out as an attempt to improve our current API to be more maintainable and less confusing to our users. We deliberately decided that we would not focus on adding features, but instead attempt to reduce as much technical debt as possible. This development effort went on for about a year before we realized we'd made a mistake. The mistake we made is that we assumed that our users would agree it was trivial to move to a new API, and that they'd do that even if there weren't compelling new features, which it turned out was entirely incorrect.

I want to make it clear that this wasn't a mistake on the part of the v3 API team. They implemented what the technical leadership of Nova at the time asked for, and were very surprised when we discovered our mistake. We've now spent over a release cycle trying to recover from that mistake as gracefully as possible, but the upside is that the API we will be delivering is significantly more future proof than what we have in the current v2 API.

At the Atlanta Juno summit, it was agreed that the v3 API would never ship in its current form, and that what we would instead do is provide a v2.1 API. This API would be 99% compatible with the current v2 API, with the incompatible things being what we call 'input validation': if you pass a malformed parameter to the API, we will now tell you instead of silently ignoring it. The other thing we are going to add in the v2.1 API is a system of 'micro-versions', which allows a client to specify what version of the API it understands, and lets the server gracefully degrade to older versions if required.

This micro-version system is important, because the next step is to then start adding the v3 cleanups and fixes into the v2.1 API, but as a series of micro-versions. That way we can drag the majority of our users with us into a better future, without abandoning users of older API versions. I should note at this point that the mechanics for deciding the minimum micro-version that a given release of Nova will support are largely undefined at the moment. My instinct is that we will tie it to stable release versions in some way; if your client dates back to a release of Nova that we no longer support, then we might expect you to upgrade. However, that hasn't been debated yet, so don't take my thoughts on that as rigid truth. A sketch of how the negotiation might work follows.
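To make the micro-version idea concrete, here is a minimal Python sketch of what server-side negotiation could look like. The header name, the supported range, and the error behaviour are all assumptions for the sake of illustration; as noted above, the real mechanics were still being worked out:

    # Hypothetical micro-version negotiation; the header name and the
    # supported range are illustrative assumptions, not the Nova design.
    MIN_VERSION = (2, 1)   # oldest micro-version this server still speaks
    MAX_VERSION = (2, 7)   # newest micro-version this server implements

    def negotiate(headers):
        """Choose the micro-version to use when answering one request."""
        raw = headers.get("X-Compute-API-Version")
        if raw is None:
            return MIN_VERSION           # legacy client: degrade gracefully
        requested = tuple(int(part) for part in raw.split("."))
        if not MIN_VERSION <= requested <= MAX_VERSION:
            raise ValueError("406 Not Acceptable: version %s" % raw)
        return requested                 # serve exactly what was asked for

    print(negotiate({}))                                  # -> (2, 1)
    print(negotiate({"X-Compute-API-Version": "2.5"}))    # -> (2, 5)

The important property is the graceful degradation path: an old client that sends nothing still gets a response it understands, while a new client can opt in to newer behaviour per request.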

Frustratingly, the intent of the v2.1 API has been agreed and unchanged since the Atlanta summit, yet we're late in the Juno release and most of the work isn't done yet. This is because we got bogged down in the mechanics of how micro-versions will work, and how the translation for older API versions will work inside the Nova code later on. We finally unblocked this at the mid-cycle meetup, which means this work can finally progress again.

The main concern that we needed to resolve at the mid-cycle was the belief that if the v2.1 API was implemented as a series of translations on top of the v3 code, then the translation layer would be quite thick and complicated. This raises issues of maintainability, as well as the amount of code we need to understand. The API team has now agreed to produce an API implementation that is just the v2.1 functionality, and will then layer things on top of that. This is actually invisible to users of the API, but it leaves us with an implementation where changes after v2.1 are additive, which should be easier to maintain.

One of the other changes in the original v3 code is that we stopped proxying functionality for Neutron, Cinder and Glance. With the decision to implement a v2.1 API instead, we will need to rebuild that proxying implementation. To unblock v2.1, and based on advice from the HP and Rackspace public cloud teams, we have decided to delay implementing these proxies. So, the first version of the v2.1 API we ship will not have proxies, but later versions will add them in. The current v2 API implementation will not be removed until all the proxies have been added to v2.1. This is prompted by the belief that many advanced API users don't use the Nova API proxies, and therefore could move to v2.1 without them being implemented.

Finally, I want to thank the Nova API team, especially Chris Yeoh and Kenichi Oomichi for their patience with us while we have worked through these complicated issues. It's much appreciated, and I find them a consistent pleasure to work with.

That brings us to the end of my summary of the Nova Juno mid-cycle meetup. I'll write up a quick summary post that ties all of the posts together, but apart from that this series is now finished. Thanks for following along.

Tags for this post: openstack juno nova mid-cycle summary api v3 v2.1
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots



Planet Debian – Dirk Eddelbuettel: RcppEigen 0.3.2.2.0

A new upstream release of the Eigen C++ template library for linear algebra was released a few days ago. And Yixuan Qiu did some really nice work rolling this into a new RcppEigen release and then sent me a nice pull request. The new version is now on CRAN, and I will prepare a Debian package in a moment too.

Upstream changes for Eigen are summarized in their changelog. On the RcppEigen side, Yixuan also rolled in some more changes on as() and wrap() converters as noted below in the NEWS entry.

Changes in RcppEigen version 0.3.2.2.0 (2014-08-19)

  • Updated to version 3.2.2 of Eigen

  • Rcpp::as() now supports the conversion from R vector to “row array”, i.e., Eigen::Array<T, 1, Eigen::Dynamic>

  • Rcpp::as() now supports the conversion from dgRMatrix (row oriented sparse matrices, defined in Matrix package) to Eigen::MappedSparseMatrix<T, Eigen::RowMajor>

  • Conversion from R matrix to Eigen::MatrixXd and Eigen::ArrayXXd using Rcpp::as() no longer gives compilation errors

Courtesy of CRANberries, there are diffstat reports for the most recent release.

Questions, comments etc about RcppEigen should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Long Now – Time Bottled in a Dozen 50-Milliliter Flasks

The 12 evolving E. coli populations

Photo by Michigan State University

For most living organisms, 60,000 generations is an extensive amount of time. Go back that many human generations, or about 1,500,000 years, and there are fossils suggesting Homo erectus were widespread in East and Southeast Asia at that time. Even for the fruit flies, which geneticists have studied for over a century because of their conveniently short lifespans, 60,000 generations equals about 3,750 years. But biologist Richard E. Lenski has observed 60,000 generations in under 27 years–all from a single strain of Escherichia coli (E. coli), the common gut microbe.

On February 24, 01988 Lenski and his team at Michigan State University embarked on an ongoing long-term evolution experiment (LTEE) to gauge the importance of chance and history on the adaptation of bacteria.  He started 12 genetically identical “lines” in 50-milliliter flasks from a single strain of E. coli. The bacteria reproduced every few hours. In April of this year, the population reached the milestone of 60,000 generations. In an interview with previous SALT Speaker Carl Zimmer back in 02009, Lenski explained:

I’ve always been fascinated by this tension between chance and necessity, the randomness of mutation and the predictable aspects of natural selection.

Evolutionary biologists think about natural selection as a never-ending process because the environment is always changing. However, the LTEE takes place under much different circumstances than the “real world.” It is a very simple environment with no other species present. Researchers can expose populations to the same daily environmental stresses: a boom-and-bust cycle. Every 24 hours the bacteria are transferred to fresh glucose medium for 6 hours or so, followed by 18 hours of starvation.

This constant laboratory environment allows for basic and rather abstract questions. How reproducible or repeatable is evolution? How long can fitness keep increasing and how high can it go? Do organisms ever reach their peak? And while the selective pressures and unchanging environment are not typically found in nature, Lenski argues there is still high value to his experiment:

The fact that the real world is a changing environment and not sort of this artificial constant environment we’ve made in the lab is a really important issue. But it doesn’t really tell us the answer in the baseline case, what would happen if the world did not change? And at least to my mind, science often progresses by coming to grips with these special cases, that don’t necessarily exist outside the lab….It’s really hard to make sense of the complicated, constantly changing world around us if we can’t make sense of these special, really simple cases.

The most obvious strength of using bacteria for experimental evolution is the speed of generations, but an even more important advantage is that E. coli can be frozen. Lenski and his team have frozen the bacteria every 500 generations, creating what they call a “frozen fossil record.” Lenski explained in an interview for Science Podcast:

At different time points along the way we freeze the cells down and the frozen cells are actually viable, so we can bring them back from the freezer, we can resurrect them, revive them. That allows us to directly compete organisms that lived at different points in time.  So in effect, it allows us to do time travel.  The dream of any evolutionary biologist.

Petri dishes of E. coli

Photo via Beacon

In the November 02013 issue of Science, Lenski and two members of his lab – Michael J. Wiser and Noah Ribeck – published their most recent work looking at fitness over the 50,000 generations. They measured how much the evolved bacteria have improved relative to their ancestors under the same environmental setup.

They found that all 12 lines show consistent responses to selective pressures. For example, their descendants now grow faster in their standard sugary broth, and all populations show an increase in cell size.

Yet variation lies hidden underneath these parallel changes. The fitness increases were nearly uniform in all 12 lineages, but not exact; the cell size grew in all of the populations, but by different amounts. When Lenski and his colleagues studied the bacteria’s DNA, they found that after thousands of generations, the populations’ genomes were full of alterations. These changes were different in each population and had accumulated at very different rates, suggesting a prominent role of chance in setting evolution’s course.

In November 02013, after hitting the 50,000 generation mark, Lenski published a blog piece thinking about the long-term fate of his long-term experiment. He questions who will take over when he retires, and how the experiment will be sustained. He imagines his experiment being carried out by another 49,999 generations of scientists, each one overseeing another 50,000 bacterial generations. That is 50,000² generations, or 2.5 billion generations in total, and would take about a million years to achieve. If this were to happen, Lenski predicts that the bacteria will reduce their doubling time from their ancestors’ ~55 minutes to ~23 minutes–which would also require a lot of freezer space. Lenski writes:

I’d really like science to test this prediction!  How often does evolutionary biology make quantitative predictions that extend a million years into the future?  Maybe the LTEE won’t last that long, but I see no reason that, with some proper support, it can’t reach 250,000 generations.  That would be less than a century from now.  If the experiment gets that far, I’d like to propose that it be renamed the VLTEE – the very long-term evolution experiment.
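A rough check of the million-year arithmetic above, using the experiment’s observed pace of about 60,000 generations in 27 years (roughly 2,200 generations per year):

    \frac{2.5 \times 10^{9}\ \text{generations}}{60{,}000 / 27 \approx 2{,}222\ \text{generations per year}} \approx 1.1 \times 10^{6}\ \text{years}

which is indeed about a million years.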


Richard Lenski examines the growth of E. coli. Photo by G.L. Kohuth/Michigan State University

 

Planet Linux Australia – Michael Still: Don't Tell Mum I Work On The Rigs




ISBN: 1741146984
LibraryThing
I read this book while on a flight a few weeks ago. It's surprisingly readable and relatively short -- you can knock it over in a single long haul flight. The book covers the memoirs of an oil rig worker, from childhood right through to middle age. That's probably the biggest weakness of the book: it just kind of stops when the writer reaches the present day. I felt there wasn't really a conclusion, which was disappointing.
An interesting, fun read however.

Tags for this post: book paul_carter oil rig memoir
Related posts: Extreme Machines: Eirik Raude; New Orleans and sea level; Kern County oil wells on I-5; What is the point that people's morals evaporate?

TED – Liu Bolin turns friends into money, a solar cell from recycled car batteries, and fascinating looks at race in America

[Embedded video: http://www.youtube.com/watch?v=5tAXJc_D9HM]

As always, many members of the TED community have been popping in the news this week. Below, some highlights.

Liu Bolin uses paint to make himself disappear into backgrounds. In this video from The Creators Project, Bolin shows the next evolution—group versions of his signature work. Watch him turn his friends into money in the video above, painting them as bills and coins. (Watch Liu’s talk, “The invisible man.”)

A few TED-related items from the indispensable IEEE Spectrum: First, a solar cell made from recycled car batteries—that’s the promise of a new paper from battery researcher Angela Belcher and her co-authors at MIT. (Watch Angela’s TED Talk, “Using nature to grow batteries.”)

And — remember Dave Gallo’s famous cuttlefish who disappeared? The research behind it, from Roger Hanlon at Woods Hole, has now inspired electronic camouflage. There’s a jaw-dropping video—not so much for what it can do now, as for what it may be able to do one day. (Watch David’s talk, “Underwater astonishments.”)

Sally Kohn writes a powerful piece for the Daily Beast about the different experiences of black children and white children in the United States. Her point: that 6-year-olds and 9-year-olds should not be treated as potential threats because of their skin color. (Watch Sally’s talk, “Let’s try emotional correctness.”)

Former Missouri state senator and native St. Louisan Jeff Smith reflects on the region’s complicated racial history, and his own experiences navigating race in St. Louis. (Watch Jeff’s TED Talk, “Lessons in business … from prison.”)

Mark Ronson gave a TED Talk about the joys of sampling. Jonny Wilson—better known by his DJ name, Eclectic Method—has taken that bait and remixed the talk into a track. Sing-songy, super-catchy! (Watch Mark’s talk, “A brief history of sampling.”)

Clint Smith’s poetry slam team clinched the National Poetry Slam, becoming the first delegation from Washington D.C. to do so. (Watch Clint’s TED Talk, “The danger of silence.”)

Milton Glaser, the designer behind “I Heart NY,” has created a logo to rebrand the climate movement. The accompanying slogan is intense: “It’s not warming, it’s dying.” (Watch Milton’s talk, “Using design to make ideas new.”)

Blue whale researcher Asha de Vos is a finalist for the JCI Ten Outstanding Young Persons of the World award. Find out how to vote for her. (Read about Asha’s work.)

And speaking of TED Fellows, applications are open for the TED2015 class. Find out more about applying, and about what makes a great TED Fellow.



TED – A filmmaker dives into Sylvia Earle’s underwater world in the doc Mission Blue


The new documentary Mission Blue charts the life of oceanographer Sylvia Earle. Director Fisher Stevens set out to make a film about her work, and ended up being fascinated by the person. Photo: Mission Blue

Fisher Stevens produced The Cove, the 2009 Oscar-winning documentary about dubious Japanese dolphin-hunting practices, and his latest film takes him back into familiar waters. Stevens co-directed the newly-released doc Mission Blue, spending the last four years trekking around the world—from the Galapagos Islands to the Chesapeake Bay to the Great Barrier Reef—with unstoppable oceanographer Sylvia Earle.

Mission Blue is now on Netflix. Watch it here »

This film, which Stevens co-directed with Robert Nixon (Gorillas in the Mist), follows the venerable marine biologist, eco-activist and 2009 TED Prize winner as she campaigns to create “hope spots,” underwater areas protected by law because of how critical they are to the health of the ocean. The film sheds light on the dire environmental impact of the commercial fishing industry and reflects on the state of the world’s oceans. But while the film began with his deep respect for the ocean, Stevens says that as he and Nixon dug deeper, it evolved into something of a feminist tale. Mission Blue portrays Earle as a female pioneer in the stodgy male-dominated world of science, showing how time and time again she charges through glass ceilings—and submarines—with fierceness and determination. “She did all of this in a man’s world,” says Stevens.

We sat down with Stevens to talk to him about capturing Earle’s relentless efforts to save the oceans on film. Below, an edited transcript of our conversation.

Making a film about ocean conservation seems like a huge endeavor. Did you stick with your original script or did you change things up once you started filming?

We totally changed directions. We basically threw the original movie away and started from scratch. At first, Bob and I were making a film about “hope spots” and to be honest, the film felt a bit boring. Then Sylvia’s daughter sent us a box of archives—old movies of Sylvia swimming when she was a teenager in the Gulf of Mexico. All of them were 16mm and we had to process it all. But the footage was amazing. Since Sylvia is such an inspiration, we decided we had to make the movie about her. And oddly: she was not into it. She did not want to go into her personal life. But that’s what makes the movie work, at least for me; you get emotionally attached to Sylvia and you see the ocean through her eyes.

What were some of the challenges you faced during filming?

Trying to weave together all the different storylines. It was tricky to intermix the problems of the ocean—like overfishing, coral depletion, carbon acidification—into this one movie about this one woman. That’s why it took us more than four years to finish. There were lots of old tapes that needed to be processed and cleaned up, so there were some technical hurdles. The other challenge was making sure the film was entertaining. We had to work hard on keeping the science entertaining. We didn’t want too many talking heads.

Mission Blue is a global campaign. How did you decide what and where to film?

That was another challenge: where to go? We wanted to go everywhere. We wanted to go to China. We wanted to go to India. There’s this great reef in Cuba that’s completely protected, and I really regret not going there. Cuba is a place where, because they’re communists, they don’t allow commercial fishing and as a result it’s the last real reef in the Caribbean that’s perfect. Between Sylvia’s schedule and my various projects, we were all traveling all the time, so sometimes it was hard to sync up. There were other places we could’ve gone. We could have kept shooting this movie forever.

Some of the footage is so gorgeous. Was it all real? For example, the sparkling fish underwater?

That’s all real. That wasn’t taken in the ‘70s. That’s new technology. We bought that from Edith Widder, who was on our Galapagos trip. She has cameras that go down 2,000 feet and she is able to capture those images. They’re mind-blowingly beautiful. They are like little planets, each one of them. It’s really moving.

Did you come away from this learning anything new or shocking about the ocean?  

I had a lot of stark realizations while filming. In the last scene in the film, when we went to Australia, we really went to find a beautiful place to show in the movie. We went 100 miles out into the Coral Sea, off the northeast coast of Australia, basically in the middle of nowhere, and there was nothing to see. The coral had been depleted and there were no fish!

Another thing I realized was that even though I knew about industrial fishing, I was shocked by the scale of it. The amount of oil wells also blew me away. And I was also shocked that Tony Abbott, the Prime Minister of Australia who got into office while we were filming, reversed a decision that was taken to have marine protected areas in the country. People like him are the ones who need to see this film.


Sylvia Earle, on the set of Mission Blue. That’s Fisher Stevens on the right. Photo: Mission Blue

Did you know a great deal about the fishing industry and the state of the ocean before you started filming?

I’m a scuba diver and, through diving, I learned a lot about how bad the state of the ocean can be. I’ve dived in gorgeous places and seen amazing stuff, but I’ve also been in places where there’s nothing but bleached coral. That was basic education for me. I was also a pescatarian for a while—until I got mercury poisoning. Ten years ago, there was a mercury epidemic because we were eating so much sushi. Then I met Louie Psihoyos who was directing the film The Cove and mercury was a big part of the film. So yeah, you could say I knew a fair bit before working on Mission Blue.

Were there any hairy moments while filming? For example, you swim with a lot of sharks in the film. Was that nerve-wracking?

Sharks never scare me. The hairy thing was watching Sylvia swimming in the Chesapeake Bay in Virginia with the Atlantic menhaden and a bunch of fishing boats. I mean she could have gotten squished—she was pretty close in there. The fishermen had no idea what we were up to. They saw the cameras and were a bit perturbed. They kept telling us to get away from them. They were right in that it was dangerous and that Sylvia and Bryce Groark, our cameraman, could have easily been crushed by the two boats or gotten sucked up by the vacuum, as they actually almost did. Shows you how brave Sylvia is—that at 78 years old, or at any age, she would be willing to go in there and get those shots and have that experience. She was amazing—she just went in. The police were waiting for us when we got back to the dock, questioning our motives and wanting to look at our footage, but we did nothing wrong. It was tense.

Australia was also pretty hairy. I got so seasick, and then sad seeing the reefs looking the way they did. The Great Barrier Reef was shocking to me—how much it’s been depleted, and that’s a protected zone. Part of it was like a graveyard. How do you go out to Holmes Reef, a reef 100+ miles outside of any populated place, and not see any fish? That means two things: the coral is dead so the fish don’t go to that coral, and giant fishing boats have trawled the ocean and taken everything.

What do you think it will take to get people to see ocean conservation as a huge issue and to act on it?

There needs to be a really big shift. And that shift means that the environment has to become the biggest political issue in the world. The UN is trying to make it a priority, but does the UN have teeth? Do we need to create a whole new environmental body? We know the facts. There’s going to be 10 billion people in 2050. How are we going to sustain 10 billion people at this rate? We can’t. There are people, such as the US Ambassador to Palau and the president of Palau, who are trying to show that you can end industrial fishing and still have tourism. If we can use Palau as an example of how to make it work, then we’re making good progress. It takes people with power. Think of how the French protect Tahiti. When you go diving in Rangiroa, why are there so many sharks? Why are there so many stingrays? It’s because the French protect it. The tourists flock there to see these fish. It makes financial sense. The locals fish for their dinner, but there’s no large-scale fishing. It all boils down to economics in the end. They’re finally saying that the climate is changing so much that it’s affecting weather patterns, and that all these tornadoes, hurricanes and typhoons are affecting the economy. That’s what it’s going to take, sadly.

In your opinion, what’s the worst-case scenario?

I don’t even want to think about it. Water’s going to be an issue. Drought is an issue and then if we pollute too much of the ocean, we’re not going to be able to breathe. People are going to get respiratory problems, or get sick from eating fish that’s toxic.

What about those of us who like eating fish?

It should be treated as a special occasion. There are certain species of fish that people should stay away from, such as bluefin tuna, Chilean sea bass, and red snapper. There are two issues to think about: toxicity and sustainability. I’ll eat certain sushi, like wild salmon, if I’m in a local place and I’ll do it maybe four times a year. I make it a real treat. This year, I’ve had sushi zero times.

So the environment is an issue that’s really close to your heart?

All conservation is close to my heart. I’m working with Louie Psihoyos who did The Cove on a film about extinction. Talk about depressing. It covers all species, including us. What are we doing? Why are we wiping everything out? And are we next? That kind of thing.

What is the most beautiful moment you’ve ever had in the water?

Without a doubt, it was in Hawaii. I had a couple of days off and a friend of mine invited me to swim with dolphins in Maui. Instead we ended up seeing humpback whales—and I swam with them. To me, that was the most spiritual, most beautiful moment I’ve ever had in the water because they are just so content and gentle and you just feel their beauty. I’ve also swum with dolphins in Tahiti and seen huge hammerhead sharks. I’ve been in a kayak surrounded by like 30 pilot whales in Papua New Guinea. I’ve been lucky.


Sylvia Earle where she’s most at home, underwater. Photo: Mission Blue


Krebs on Security – Stealthy, Razor Thin ATM Insert Skimmers

An increasing number of ATM skimmers targeting banks and consumers appear to be of the razor-thin insert variety. These card-skimming devices are made to fit snugly and invisibly inside the throat of the card acceptance slot. Here’s a look at a stealthy new model of insert skimmer pulled from a cash machine in southern Europe just this past week.

The bank that shared these photos asked to remain anonymous, noting that the incident is still under investigation. But according to an executive at this financial institution, the skimmer below was discovered inside the ATM’s card slot by a bank technician after the ATM’s “fatal error” alarm was set off, warning that someone was likely tampering with the cash machine.


A side view of the stainless steel insert skimmer pulled from a European ATM.

“It was discovered in the ATM’s card slot and the fraudsters didn’t manage to withdraw it,” the bank employee said. “We didn’t capture any hidden camera [because] they probably took it. There were definitely no PIN pad [overlays]. In all skimming cases lately we see through the videos that fraudsters capture the PIN through [hidden] cameras.”

Here’s a closer look at the electronics inside this badboy, which appears to be powered by a simple $3 Energizer Lithium Coin battery (CR2012):


The backside of the insert skimmer reveals a small battery (top) and a tiny data storage device (far left).

Flip the device around and we get another look at the battery and the data storage component. The small area circled in red on the left in the image below appears to be the component that’s made to read the data from the magnetic stripe of cards inserted into the compromised ATM.


Virtually all European banks issue chip-and-PIN cards (also called Europay, MasterCard and Visa, or EMV), which make it far more expensive for thieves to duplicate and profit from counterfeit cards. Even still, ATM skimming remains a problem for European banks mainly because several parts of the world — most notably the United States and countries in Asia and South America — have not yet adopted this standard.

For reasons of backward compatibility with ATMs that aren’t yet in line with EMV, many EMV-compliant cards issued by European banks also include a plain old magnetic stripe. The weakness here, of course, is that thieves can still steal card data from Europeans using skimmers on European ATMs, but they need not fabricate chip-and-PIN cards to withdraw cash from the stolen accounts: They simply send the card data to co-conspirators in the United States who use it to fabricate new cards and to pull cash out of ATMs here, where the EMV standard is not yet in force.


This angle shows the thinness of this insert skimmer a bit better.

According to the European ATM Security Team (EAST), a nonprofit that represents banks in 29 countries with a total deployment of more than 640,000 cash machines, European financial institutions are increasingly moving to “geo-blocking” on their issued cards. In essence, more European banks are beginning to block the usage of cards outside of designated EMV chip liability shift areas.

“Fraud counter-measures such as Geo-blocking and fraud detection continue to improve,” EAST observed in a report produced earlier this year. “In twelve of the reporting countries (two of them major ATM deployers) one or more card issuers have now introduced some form of Geo-blocking.”


Source: European ATM Security Team (EAST).

As this and other insert skimmer attacks show, it’s getting tougher to spot ATM skimming devices. It’s best to focus instead on protecting your own physical security while at the cash machine. If you visit an ATM that looks strange, tampered with, or out of place, try to find another ATM. Use only machines in public, well-lit areas, and avoid ATMs in secluded spots.

Last, but certainly not least, cover the PIN pad with your hand when entering your PIN: that way, even if the thieves somehow skim your card, there is less chance that they will be able to snag your PIN as well. You’d be amazed at how many people fail to take this basic precaution. Yes, there is still a chance that thieves could use a PIN-pad overlay device to capture your PIN, but in my experience these are far less common than hidden cameras (and quite a bit more costly for thieves who aren’t making their own skimmers).

Are you as fascinated by ATM skimmers as I am? Check out my series on this topic, All About Skimmers.

Sociological Images – Peach Panties and a New Pinterest Board: Sexy What!?

@zeyneparsel and Stephanie S. both sent in a link to a new craze in China: peach panties. I totally made the craze part up – I have no idea about that – but the peach panties are real and there is a patent pending.


I thought they were a great excuse to make a new Pinterest board featuring examples of marketing that uses sex to sell decidedly unsexy — or truly sex-irrelevant — things.  It’s called Sexy What!? and I describe it as follows:

This board is a collection of totally random stuff being made weirdly and unnecessarily sexual by marketers who — I’m gonna say it — have run out of ideas.

My favorites are the ads for organ donation, hearing aids, CPR, and sea monkeys.  Enjoy!

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet Linux Australia – Andrew Pollock: [life] Day 204: Workshops Rail Museum

Zoe had a fabulous night's sleep and so did I. Despite that, I felt a bit tired today. Might have been all the unusual exercise yesterday.

After a leisurely start, we headed off in the direction of the Workshops Rail Museum for the day. We dropped past OfficeWorks on the way to return something I got months ago and didn't like, and I used the opportunity to grab a couple of cute little A5-sized clipboards. I'm going to keep one in the car and one in my Dad bag, so Zoe can doodle when we're on the go. I also discovered that one can buy reams of A5 paper.

We arrived at the Workshops, which were pretty quiet, except for a school excursion. Apparently they're also filming a movie there somewhere at the moment too (not in the museum part).

Despite Zoe's uninterrupted night's sleep, she fell asleep in the car on the way there, which was highly unusual. I let her sleep for a while in the car once we got there, before I woke her up. She woke up a bit grumpy, but once she realised where we were, she was very excited.

We had a good time doing the usual things, and then had a late lunch, and a brief return to the museum before heading over to Kim's place before she had to leave to pick up Sarah from school. Zoe and I looked after Tom and played with his massive pile of glitter play dough until Kim got back with Sarah.

Zoe and Sarah had their usual fabulous time together for about an hour before we had to head home. I'd had dinner going in the slow cooker, so it was nice and easy to get dinner on the table once we got home.

Despite her nap, Zoe went to bed easily. Now I have to try and convince Linux to properly print two-up on A4 paper. The expected methods aren't working for me.

Cryptogram – Disguising Exfiltrated Data

There's an interesting article on a data exfiltration technique.

What was unique about the attackers was how they disguised traffic between the malware and command-and-control servers using Google Developers and the public Domain Name System (DNS) service of Hurricane Electric, based in Fremont, Calif.

In both cases, the services were used as a kind of switching station to redirect traffic that appeared to be headed toward legitimate domains, such as adobe.com, update.adobe.com, and outlook.com.

[...]

The malware disguised its traffic by including forged HTTP headers of legitimate domains. FireEye identified 21 legitimate domain names used by the attackers.

In addition, the attackers signed the Kaba malware with a legitimate certificate from a group listed as the "Police Mutual Aid Association" and with an expired certificate from an organization called "MOCOMSYS INC."

In the case of Google Developers, the attackers used the service to host code that decoded the malware traffic to determine the IP address of the real destination and redirect the traffic to that location.

Google Developers, formerly called Google Code, is the search engine's website for software development tools, APIs, and documentation on working with Google developer products. Developers can also use the site to share code.

With Hurricane Electric, the attacker took advantage of the fact that its domain name servers were configured so that anyone could register for a free account with the company's hosted DNS service.

The service allowed anyone to register a DNS zone, which is a distinct, contiguous portion of the domain name space in the DNS. The registrant could then create A records for the zone and point them to any IP address.
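To see why attacker-controlled A records make such a convenient switching station, consider this minimal sketch (Python standard library only; the hostname is invented). The malware hard-codes an innocuous-looking name, and the attacker can repoint the real command-and-control address at any time simply by editing the zone:

    import socket

    # Hypothetical hostname in a zone the attackers registered with the
    # free hosted-DNS service; it looks like an ordinary update server.
    BEACON_HOST = "update.example-cdn.net"

    def find_c2():
        """Resolve the beacon hostname to the current C&C address.

        The malware never embeds an IP address: when the attackers change
        the A record in the hosted zone, every infected machine follows.
        """
        return socket.gethostbyname(BEACON_HOST)

    print("C&C address is currently:", find_c2())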

Honestly, this looks like a government exfiltration technique, although it could be evidence that the criminals are getting even more sophisticated.

Worse Than Failure – Non-Restorative Restoration

Jeremy’s employer, SwissMedia, was upgrading their proprietary CMS to run on new, shiny PHP5. They planned for bumps in the road, but assembled a rugged upgrade plan with a steel chassis. When the time came to upgrade their largest client, French-Haitian News, Jeremy was behind the wheel.

The first step in the plan was for Jeremy to take a copy of their production database that he could experiment with and work out the kinks. He would then prove it worked with the PHP5 application, and get the stamp of approval to go to production. SwissMedia outsourced their data storage, so he contacted Sebastien at Datamaniaks to handle that part.

Somewhere between getting the data and making it work with the PHP5 application, Jeremy committed the dreaded “forgotten WHERE clause” boner. His local copy of the French-Haitian News DB became unusable. He immediately reached out to Sebastien to help remedy the situation.
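For anyone who has not had the pleasure, here is a minimal sketch of the mistake, using Python’s built-in sqlite3 module; the table and values are invented for illustration:

    import sqlite3

    # An invented schema standing in for the news site's articles table.
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
    db.executemany("INSERT INTO articles (title) VALUES (?)",
                   [("Morning edition",), ("Evening edition",)])

    # Intended: retitle only article 1.
    #   UPDATE articles SET title = 'Corrected' WHERE id = 1
    # The dreaded version, missing its WHERE clause, rewrites every row:
    db.execute("UPDATE articles SET title = 'Corrected'")

    print(db.execute("SELECT id, title FROM articles").fetchall())
    # -> [(1, 'Corrected'), (2, 'Corrected')]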

“Hey, Sebastien,” his email started, “I’m sure I’ll have a good laugh about this later, but I totally trashed the test DB. Could you send me today’s 2AM backup when you get a chance?” He laid out the specific database backup he wanted, and where it needed to be delivered. Jeremy then skipped out for an early lunch while waiting for Sebastien to give him the goods.

Upon his return from lunch, Jeremy didn’t find any “goods”, but plenty of bads. The website director from the French-Haitian News had stuffed Jeremy’s voice-mail inbox full of angry messages. “Our content is out of date! We’ve lost ALL OF TODAY’S ARTICLES!” Jeremy pulled up the F-HN site and confirmed the issue- the last article had a timestamp of 0150.

Jeremy called Le Directeur and assured him they were looking into the problem. All he got in return was audible venom. Jeremy managed to tame the cobra by suggesting they start working on re-uploading all the content they had posted since 1:50AM. In the meantime, Jeremy could get to the root of the problem.

Jeremy opened up Outlook, ready to fire off an email to Datamaniaks with a big red exclamation point. Waiting for him was an email from Sebastien: “No problem, Jeremy! You just need to be more careful. LOL! I’ve restored the production database from the 2AM backup so you should be all set.”

The thud from Jeremy’s jaw hitting his desk could be heard across the SwissMedia office. Jeremy engaged his Caps Lock key and replied, “THAT IS NOT WHAT I ASKED FOR SEBASTIEN. I WANTED YOU TO SEND ME A COPY OF THE PRODUCTION BACKUP, NOT RESTORE FROM BACKUP! My customer is uploading the articles again, but if you send me the production database that I ORIGINALLY ASKED FOR, I can probably fix it faster than they can, then restore that version to production. Get me the database IMMEDIATELY and I’ll work things out with the client.”

Five agonizing minutes later, Sebastien replied, “Sorry for the confusion… I’m starting the production DB upload now!” As soon as the SQL dump was done transferring, Jeremy’s phone began to ring. It was Le Directeur again, and he had discovered an entirely new dimension of pissed off. “I NO LONGER HAVE A NEWS WEBSITE YOU *CONNARD! [long string of French obscenities redacted]” Jeremy went to the F-HN website, only to get a database error staring back at him.

After the stunned silence passed, Jeremy pulled up the directory where Sebastien had supposedly uploaded the database backup. It was not a backup. There sat the actual database file. Sebastien had apparently cut and pasted it to SwissMedia, removing it from where it was supposed to be, and leaving the F-HN website dead.

Jeremy let out his own extensive string of “pardon my French” obscenities. It was a long road to cleaning up the disaster. The aftermath saw much anticipation: there were two contracts everyone wanted to see expire, the one SwissMedia had with Datamaniaks and the one French-Haitian News had with SwissMedia.

Image sources: Database, Trash


Planet Debian – Steve Kemp: Updating Debian Administration

Recently I've been getting annoyed with the Debian Administration website; too often it would be slower than it should be considering the resources behind it.

As a brief recap I have six nodes:

  • 1 x MySQL Database - The only MySQL database I personally manage these days.
  • 4 x Web Nodes.
  • 1 x Misc server.

The misc server is designed to display events. There is a node.js listener which receives UDP messages and stores them in a rotating buffer. The messages might contain things like "User bob logged in", "Slaughter ran", etc. It's a neat hack which gives a good feeling of what is going on cluster-wide.

I need to rationalize that code - but there's a very simple predecessor posted on github for the curious.
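For a flavour of the idea, here is a minimal Python re-sketch of such a listener (the real one is node.js; the port number is an arbitrary choice): UDP datagrams are appended to a fixed-size rotating buffer, and old entries simply fall off the end.

    import socket
    from collections import deque

    BUFFER_SIZE = 100                    # keep only the most recent events
    events = deque(maxlen=BUFFER_SIZE)   # old entries drop off automatically

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 4433))

    while True:
        data, addr = sock.recvfrom(1024)
        # Messages are free-form text, e.g. "User bob logged in".
        events.append("%s: %s" % (addr[0], data.decode("utf-8", "replace")))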

Anyway, enough diversions: the database is tuned, and "small". The misc server is almost entirely irrelevant, non-public, and not explicitly advertised.

So what do the web nodes run? Well they run a lot. Potentially.

Each web node has four services configured:

  • Apache 2.x - All nodes.
  • uCarp - All nodes.
  • Pound - Master node.
  • Varnish - Master node.

Apache runs the main site, listening on *:8080.

One of the nodes will be special and will claim a virtual IP provided via ucarp. The virtual IP is actually the end-point visitors hit, meaning we have:

The master host runs:

  • Apache.
  • Pound.
  • Varnish.

The other hosts run only:

  • Apache.

Pound is configured to listen on the virtual IP and perform SSL termination. That means that incoming requests get proxied from "vip:443 -> vip:80". Varnish listens on "vip:80" and proxies to the back-end apache instances.

The end result should be high availability. In the typical case all four servers are alive, and all is well.

If one server dies, and it is not the master, then it will simply be dropped as a valid back-end. If a single server dies and it is the master then a new one will appear, thanks to the magic of ucarp, and the remaining three will be used as expected.
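
For the curious, a ucarp invocation for this kind of setup looks roughly like the following; the interface, addresses, password and script paths are placeholders, not the real values:

ucarp -i eth0 -s 10.0.0.11 -v 1 -p s3kr1t -a 10.0.0.100 \
    --upscript=/etc/ucarp/vip-up.sh --downscript=/etc/ucarp/vip-down.sh

The up/down scripts would typically add or remove the virtual IP (and start or stop Pound and Varnish) as this node wins or loses the election.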

I'm sure there is a pathological case when all four hosts die, and at that point the site will be down, but that's something that should be atypical.

Yes, I am prone to over-engineering. The site doesn't have any availability requirements that justify this setup, but it is good to experiment and learn things.

So, with this setup in mind, with incoming requests (on average) being divided at random onto one of four hosts, why is the damn thing so slow?

We'll come back to that in the next post.

(Good news though; I fixed it ;)

Planet DebianWouter Verhelst: Multiarchified eID libraries, now public

Yesterday, I spent most of the day finishing up the work I'd been doing on introducing multiarch to the eID middleware, and did another release of the Linux builds. As such, it's now possible to install 32-bit versions of the eID middleware on a 64-bit Linux distribution. For more details, please see the announcement.

Learning how to do multiarch (or biarch, as the case may be) for three different distribution families has been a, well, learning experience. Being a Debian Developer, figuring out the technical details for doing this on Debian and its derivatives wasn't all that hard. You just make sure the libraries are installed to the multiarch-safe directories (i.e., /usr/lib/<gnu arch triplet>), you add some Multi-Arch: foreign or Multi-Arch: same headers where appropriate, and you're done. Of course the devil is in the details (define "where appropriate"), but all in all it's not that difficult and fairly deterministic.
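
For illustration, a multiarched library stanza in debian/control looks something like this (hypothetical package, not the actual eID packaging):

Package: libexample1
Architecture: any
Multi-Arch: same
Pre-Depends: ${misc:Pre-Depends}
Depends: ${shlibs:Depends}, ${misc:Depends}
Description: example shared library, multiarch-enabled

together with a .install file that ships the library into /usr/lib/*/, which debhelper expands to the GNU triplet directory.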

The Fedora (and derivatives, like RHEL) approach to biarch is that 64-bit distributions install into /usr/lib64 and 32-bit distributions install into /usr/lib. This goes for any architecture family, not just the x86 family; the same method works on ppc and ppc64. However, since Fedora doesn't do powerpc anymore, that part is a detail of little relevance.

Once that's done, yum has some heuristics whereby it will prefer native-architecture versions of binaries when asked, and may install both the native-architecture and foreign-architecture version of a particular library package at the same time. Since RPM already has support for installing multiple versions of the same package on the same system (a feature that was originally created, AIUI, to support the installation of multiple kernel versions), that's really all there is to it. It feels a bit fiddly and somewhat fragile, since there isn't really a spec and some parts seem fairly undefined, but all in all it seems to work well enough in practice.
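
In practice that means you can ask yum for an explicit architecture, and both builds of a library can coexist (package name made up):

yum install libexample.x86_64 libexample.i686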

The openSUSE approach is vastly different to the other two. Rather than installing the foreign-architecture packages natively, as in the Debian and Fedora approaches, openSUSE wants you to take the native foo.ix86.rpm package and convert that to a foo-32bit.x86_64.rpm package. The conversion process filters out non-unique files (only allows files to remain in the package if they are in library directories, IIUC), and copes with the lack of license files in /usr/share/doc by adding a dependency header on the native package. While the approach works, it feels like unnecessary extra work and bandwidth to me, and obviously also wouldn't scale beyond biarch.

It also isn't documented very well; when I went to openSUSE IRC channels and started asking questions, the reply was something along the lines of "hand this configuration file to your OBS instance". When I told them I wasn't actually using OBS and had no plans of migrating to it (because my current setup is complex enough as it is, and replacing it would be far too much work for too little gain), it suddenly got eerily quiet.

Eventually I found out that the part of OBS which does the actual build is a separate codebase, and integrating just that part into my existing build system was not that hard to do, even though it doesn't come with a specfile or RPM package and wants to install files into /usr/bin and /usr/lib. With all that and some more weirdness I've found in the past few months that I've been building packages for openSUSE I now have... Ideas(TM) about how openSUSE does things. That's for another time, though.

(disclaimer: there's a reason why I'm posting this on my personal blog and not on an official website... don't take this as an official statement of any sort!)

Planet DebianGunnar Wolf: Walking without crutches


I still consider myself a newbie teacher. I'm just starting my fourth semester. And yes, I really enjoy it.

Now, how did I come to teaching? Well, my training has been mostly on stage at different conferences. More technical, more social, whatever — I have been giving ~10 talks a year for ~15 years, and I must have learnt something from that.

Some good things, some bad habits.

When giving presentations, a most usual technique is to prepare a set of slides to follow/support the ideas. And yes, that’s what I did for my classes: Since my first semester, I prepared a nice set of slides, thematically split in 17 files, with ~30 to ~110 pages each (yes, huge variation). Given the course spans 32 classes (72 hours, 2¼ hours per class), each slide deck lasts for about two classes.

But, yes, this tends to make the class much less dynamic, much more scripted, rigid, and... Boring. From my feedback, I understand the students don't think I am a bad teacher, but still, I want to improve!

So, today I was to give the introduction to memory management. Easy topic, with few diagrams and numbers, mostly talking about the intuitive parts of a set of functions. I started scribbling and shortening the main points on a piece of paper (yes, the one in the picture). I am sure I can get down to more reduction — But this does feel like an improvement!

The class was quite successful. I didn't present 100% of the material (which is one of the reasons I cling to my presentations — I don't want to skip important material), and at some points I felt I was going in circles a bit. However, Operating Systems is a very intuitive subject, and getting the students to sketch by themselves the answers that describe the working of real operating systems was a very pleasant experience!

Of course, when I use my slides I do try to make it as interactive and collaborative as possible. But it is often unfeasible when I'm following a script. Today I was able to go around with the group's questions and find my way back to the outline I prepared.

I don't think I'll completely abandon my slides, especially for some subjects which include many diagrams or pictures. But I'll try to have this alternative closer to my mind.

Planet Linux AustraliaDavid Rowe: SM1000 Part 3 – Rx Working

After an hour of messing about it turns out a bad solder joint meant U6 wasn’t connected to the ADC1 pin on the STM32F4 (schematic). This was probably the source of “noise” in some of my earlier unit tests. I found it useful to write a program to connect the ADC1 input to the DAC2 output (loudspeaker) and “listen” to the noise. Software signal tracer. Note to self: I must add that sort of analog loopback as a SM1000 menu option. I “cooked” the bad joint for 10 seconds with the soldering iron and some fresh flux and the rx side burst into life.

Here’s a video walk through of the FreeDV Rx demo:

I am really excited by the “analog” feel to the SM1000. Power up and “off air” speech is coming out of the speaker a few hundred milliseconds later! Benefits of no operating system (so no boot delay) and the low latency, fast sync FreeDV design that veterans like Mel Whitten K0PFX have refined after years of pioneering HF DV.

The SM1000 latency is significantly lower than the PC version of FreeDV. It’s easy to get “hard” real time performance without an operating system, so it’s safe to use nice small audio buffers. Although to be fair, optimising latency in x86 FreeDV is not something I have explored to date.

The top level of the receive code is pretty simple:


/* ADC1 is the demod in signal from the radio rx, DAC2 is the SM1000 speaker */
 
nin = freedv_nin(f);  
nout = nin;
f->total_bit_errors = 0;
 
if (adc1_read(&adc16k[FDMDV_OS_TAPS_16K], 2*nin) == 0) {
  GPIOE->ODR = (1 << 3);
  fdmdv_16_to_8_short(adc8k, &adc16k[FDMDV_OS_TAPS_16K], nin);
  nout = freedv_rx(f, &dac8k[FDMDV_OS_TAPS_8K], adc8k);
  //for(i=0; i<FREEDV_NSAMPLES; i++)
  //   dac8k[FDMDV_OS_TAPS_8K+i] = adc8k[i];
  fdmdv_8_to_16_short(dac16k, &dac8k[FDMDV_OS_TAPS_8K], nout);              
  dac2_write(dac16k, 2*nout);
  //led_ptt(0); led_rt(f->fdmdv_stats.sync); led_err(f->total_bit_errors);
  GPIOE->ODR &= ~(1 << 3);
}

We read “nin” modem samples from the ADC, change the sample rate from 16 to 8 kHz, then call freedv_rx(). We then re-sample the “nout” output decoded speech samples to 16 kHz, and send them to the DAC, where they are played out of the loudspeaker.

The commented out “for” loop is the analog loopback code I used to “listen” to the ADC1 noise. There is also some commented out code for blinking LEDs (e.g. if we have sync, bit errors) that I haven’t tested yet (indeed the LEDs haven’t been loaded onto the PCB). I like to hit the highest risk tasks on the check list first.

The “GPIOE->ODR” is the GPIO Port E output data register, that’s the code to take the TP8 line high and low for measuring the real time CPU load on the oscilloscope.

Running the ADC and DAC at 16 kHz means I can get away without analog anti-aliasing or reconstruction filters. I figure the SSB radio’s filtering can take care of that.

OK. Time to load up the switches and LEDs and get the SM1000 switching between Tx and Rx via the PTT button.

I used this line to compress the 250MB monster 1080p video from my phone to an 8MB file that was fast to upload to YouTube:

david@bear:~/Desktop$ ffmpeg -i VID_20140821_113318.mp4 -ab 56k -ar 22050 -b 300k -r 15 -s 480x360 VID_20140821_113318.flv

,

Planet DebianIan Donnelly: Technical Demo

Hi Everybody,

Today I wanted to talk a bit about our technical demo. We patched a version of Samba to use our elektra-merge script in order to handle its configuration file, smb.conf. Using the steps from my previous tutorial, we patched Samba to use this new technique of config merging. This patched version of Samba mounts its configuration to system/samba/smb in the Elektra Key Database. Then, during package upgrades, it uses the new --threeway-merge-command option with elektra-merge as the specified command. The result is automatic handling of smb.conf that is conffile-like (thanks ucf!) and the ability to have a powerful, automatic, three-way merge solution on package upgrades.
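
As a rough sketch, the moving parts described above look something like this on the command line; the exact paths and option spellings are my assumptions from the description, not copied from the patch:

# mount smb.conf into the Elektra Key Database using the ini backend
kdb mount /etc/samba/smb.conf system/samba/smb ini
# at upgrade time, ucf performs the merge using elektra-merge
ucf --three-way --threeway-merge-command elektra-merge \
    /usr/share/samba/smb.conf.dist /etc/samba/smb.conf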

The main thing I would like to discuss is how this project is an improvement upon the current implementation of three-way merges in ucf. Before this project, ucf could attempt three-way merges on files it controlled using the diff3 tool. The main limitation of tools like diff3 is that they are line-based and don’t inherently understand the files they are dealing with. Elektra, on the other hand, allows for a powerful system of backends which use plug-ins to understand configuration files. Elektra doesn’t store configuration data on a line-by-line basis, but in a more abstract way that is tailored to each configuration file using backends. smb.conf is a great example of this because it uses the syntax of an ini file, so Elektra can mount it in a way that is intuitive for an ini file. Since data is stored as key=data within ini files, Elektra stores this data in a similar way: for each key in smb.conf, Elektra stores a Key whose value holds that key’s data as a string. Then, during a merge, we can compare Keys in each version of smb.conf and easily see which ones changed and how they need to be merged into the result. diff3, on the other hand, has no concept of ini files or keys; it just compares the different versions line by line, which results in many more conflicts than using elektra-merge. Moreover, a traditional weakness of diff is data being moved around. While diff3 does a very good job of handling this, it’s not perfect. In Elektra, Keys are named in an intelligent way based on their backend, so for smb.conf the line workgroup = HOME would always be saved under system/samba/smb/workgroup. It doesn’t matter if the lines are moved between versions, because Elektra just has to check for the Key and its value.

My favorite example is a shortcoming in the diff3 algorithm (at least in my opinion). If something is changed to the same value in ours and theirs, but they differ from base, diff3 reports a conflict. On the other hand elektra-merge can easily handle this problem. A simple example of this would be changing the max log size value in Samba. Here is that line in each version of smb.conf:
Base:
max log size = 1000
Ours:
max log size = 2000
Theirs:
max log size = 2000

Obviously, in the merged version, result, one would expect this line to be:
max log size = 2000

Let’s check the result from elektra-merge:
max log size = 2000

Great! How about diff3:
<<<<<<< smb.conf.base
max log size = 1000
=======
max log size = 2000
>>>>>>> smb.conf.theirs

Whoops! As I mentioned, the diff3 algorithm can’t handle this type of change; it just results in a conflict. Note that smb.conf.base is just representative of the file used as base, and smb.conf.theirs of the file used as theirs. The file names were changed for the sake of clarity.
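
The rule that makes this case trivial for a key-based merge is the classic three-way rule, applied per key instead of per line. A minimal sketch in C (my illustration, not Elektra’s actual code):

#include <stdio.h>
#include <string.h>

/* Classic three-way rule for a single key's value.
   Returns the merged value, or NULL on a genuine conflict. */
static const char *merge_key(const char *base, const char *ours, const char *theirs)
{
    if (strcmp(ours, theirs) == 0) return ours;   /* both sides agree: the case diff3 gets wrong */
    if (strcmp(ours, base) == 0)   return theirs; /* only theirs changed */
    if (strcmp(theirs, base) == 0) return ours;   /* only ours changed */
    return NULL;                                  /* both changed, differently: real conflict */
}

int main(void)
{
    /* the "max log size" example from above */
    const char *merged = merge_key("1000", "2000", "2000");
    printf("%s\n", merged ? merged : "CONFLICT");
    return 0;
}

Applied independently to every key that a storage plug-in extracts, this rule never runs into the line-ordering and same-change problems that trip up diff3.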

There are many other examples of the benefits of storing configuration data in a database of Keys that can better conform to actual data, as opposed to storing configuration parameters in files where they can only be compared on a line-by-line basis. With the help of storage plug-ins, Elektra can ‘understand’ the configurations stored in its Key Database. Since we store the data in a way that makes sense for configuration data, we can easily merge actual configuration data as opposed to just lines in a file. A good example of this is in-line comments. Many of our storage plug-ins understand the difference between comments and actual configuration data. So if a configuration file has an inline comment like so:
max log size = 10000 ; Controls the size of the log file (in KiB)
we can compare the actual key-value pairs between versions (max log size = 10000) and deal with the comments separately.

As a result, if we have a base:
max log size = 1000 ; Size in KiB

Ours:
max log size = 10000 ; Size in KiB

Theirs:
max log size = 1000 ; Controls the size of the log file (in KiB)

The result using elektra-merge would be:
max log size = 10000 ; Controls the size of the log file (in KiB)

Obviously, this line would cause a conflict on any line-based merge algorithm such as diff3 or git. It is worth noting that the ability of elektra-merge is directly related to the quality of the storage plug-ins that Elektra relies on. elektra-merge only looks at the name, value, and any metadata affiliated with each key. As a result, using the line plug-in would result in a merge only as powerful as any other line-based merge. Yet by using the ini plug-in on an ini file we get a much more advanced merge like the one described above.

As you can tell, this new method offers clear advantages to the traditional method of using diff3 to handle configuration merges. Also, I hope this demo shows how easy it is to implement these great features into your Debian packages. The community can only benefit if maintainers take advantage of these great features. I am glad to say that my Google Summer of Code Project has been a success, even if we had to do a little change of plans. The ucf integration ended up working great and is really easy for maintainers to implement. Hope you enjoyed this demo and better understand the power of using Elektra.

Sincerely,
Ian S. Donnelly

Planet Linux AustraliaDavid Rowe: SM1000 Part 2 – Embedded FreeDV Tx Working

Just now I fired up the full, embedded FreeDV “tx side”. So speech is sampled from the SM1000 microphone, processed by the Codec 2 encoder, then sent to the FDMDV modulator, then out of the DAC as modem tones. It worked, and used only about 25% of the STM32F4 CPU! A laptop running the PC version of FreeDV is the receiver.

Here is the decoded speech from a test “transmission” which to my ear sounds about the same as FreeDV running on a PC. I am relieved that there aren’t too many funny noises apart from artefacts of the Codec itself (which are funny enough).

The scatter plot is really good – better than I expected. Nice tight points and a SNR of 25 dB. This shows that the DAC and line interface hardware is working well:

For the past few weeks I have been gradually building up the software for the SM1000. Codec 2 and the FDMDV modem needed a little tweaking to reduce the memory and CPU load required. It’s really handy that I am the author of both!

The hardware seems to be OK although there is some noise in the analog side (e.g. microphone amplifier, switching power supply) that I am still looking into. Thanks Rick Barnich KA8BMA for an excellent job on the hardware design.

I have also been working on various drivers (ADC, DAC, switches and LEDs), and getting my head around developing on a “bare metal” platform (no operating system). For example if I run out of memory it just hangs, and when I Ctrl-C in gdb the stack is corrupted and it’s in an infinite loop. Anyway, it’s all starting to make sense now, and I’m nearing the finish line.

The STM32F4 is a curious thing: a “router” class CPU that doesn’t have an operating system. By “router” class I mean a CPU found inside a DSL router, like a WRT54G, that runs embedded Linux. The STM32F4 is much faster (168MHz) and more capable than the smaller chips we usually call a “uC” (e.g. a PIC or AVR). Much to my surprise I’m not missing embedded Linux. In some ways an operating system complicates life, for example random context switches, i-cache thrashing, needing lots of RAM and Flash, large and complex build systems, and on the hardware side an external address and data bus which means high speed digital signals and PCB area.

I am now working on the Rx side. I need to work out a way to extract demod information so I can determine that the analog line in, ADC, and demod are working correctly. At this stage nothing is coming out of U6, the line interface op-amp (schematic here). Oh well, I will take a look at that tomorrow.

Planet DebianAurelien Jarno: MIPS Creator CI20

I have received two MIPS Creator CI20 boards, thanks to Imagination Technologies. It’s a small MIPS32 development board:


As you can see it comes in a nice packaging with a world-compatible power adapter. It uses an Ingenic JZ4780 SoC with a dual core MIPS32 CPU running at 1.2GHz with a PowerVR SGX540 GPU. The board is fitted with 1GB of RAM, 8GB of NOR flash, HDMI output, USB 2.0 ports, Ethernet + Wi-Fi + BlueTooth, SD card slot, IR receiver, expansion headers and more. The schematics are available. The Linux kernel and the U-Boot bootloader sources are also available.

Powering this board with a USB keyboard, a USB mouse and a HDMI display, it boots off the internal flash into a Debian Wheezy system, up to the XFCE environment. Besides the kernel, the Wi-Fi + Bluetooth firmware, and very few configuration changes, it runs a vanilla Debian. Unfortunately I haven’t found time to play more with it yet, but it looks already quite promising.

The board has not been formally announced yet, so I do not know when it will become available, nor the price, but if you are interested I’ll bring it to DebConf14. Don’t hesitate to ask me if you want to look at it or play with it.

Geek FeminismI won’t leave Berlin until investors quit being pigs. Deal?

We’ve seen over the years through conference anti-harassment work that when people who have experienced harassment speak up, others are often empowered to share their own experiences of harassment from the same perpetrator, or other perpetrators in the same community.

Yesterday, a courageous New York-based entrepreneur named Geshe Haas came forward about having been the target of what one Valleywag commenter called an “entitled demand” for sex from an investor named Pavel Curda. Today, Berlin-based Lucie Montel also published a screenshot of a very similar advance from Curda:

Tech industry magazine The Next Web summed up the reports and stated that they will no longer publish Curda’s writing.

While there has been previous discussion of women entrepreneurs being sexually harassed by investors, those who have experienced harassment talk of widespread fear about naming names, even anonymously. The price of speaking up can be very high even without the particular power that investors hold over entrepreneurs, who lack even the bare minimum protection that employment law provides. Much of what we know about the gender climate in the investment and venture capital fields comes from whispered one-on-one conversations between women, as well as some details from lawsuits around such harassment in the VC industry.

To my knowledge, Haas and Montel are the first to come forward about this kind of harassment outside the context of legal action. In a climate where prominent incubators must “remind” their investors not to harass participants, this is a huge step forward. I hope that their brave examples will make it easier for other women to speak up in the future.

Rondam RamblingsThis is what real religious persecution looks like, part 2

Religious persecution does not look like this.  It looks like this: Saudi Arabia’s Commission for the Promotion of Virtue and Prevention of Vice has asked the interior ministry to arrest several people for apostasy and atheism.  The commission did not divulge the number of people whose arrest it requested, but it said that they insulted God and Prophet Mohammad (PBUH).  And this: In 13

Rondam RamblingsThe many elephants in the room in Ferguson, Missouri

As long as I'm pointing out the obvious, I figure I should point out a few of the proverbial elephants in the living room in Ferguson, Missouri: the town is 70% black, but the government is overwhelmingly white.  There are only two possible reasons for this: either blacks think that having their town run by whites is just hunky dory, or blacks in Ferguson don't vote.  Unsurprisingly, the latter

Krebs on SecurityCounterfeit U.S. Cash Floods Crime Forums

One can find almost anything for sale online, particularly in some of the darker corners of the Web and on the myriad cybercrime forums. These sites sell everything from stolen credit cards and identities to hot merchandise, but until very recently one illicit good I had never seen for sale on the forums was counterfeit U.S. currency.

Counterfeit Series 1996 $100 bill.

That changed in the past month with the appearance on several top crime boards of a new fraudster who goes by the hacker alias “MrMouse.” This individual sells counterfeit $20s, $50s and $100s, and claims that his funny money will pass most of the tests that merchants use to tell bogus bills from the real thing.

MrMouse markets his fake funds as “Disney Dollars,” and in addition to blanketing some of the top crime forums with Flash-based ads for his service he has boldly paid for a Reddit stickied post  in the official Disney Market Place.

Judging from images of his bogus bills, the fake $100 is a copy of the Series 1996 version of the note — not the most recent $100 design released by the U.S. Treasury Department in October 2013. Customers who’ve purchased his goods say the $20 notes feel a bit waxy, but that the $50s and $100s are quite good fakes.

MrMouse says his single-ply bills do not have magnetic ink, and so they won’t pass machines designed to look for the presence of this feature. However, this fraudster claims his $100 bill includes most of the other security features that store clerks and cashiers will look for to detect funny money, including the watermark, the pen test, and the security strip.

MrMouse’s ads for counterfeit $20s, $50s and $100s now blanket many crime forums.

In addition, MrMouse says his notes include “microprinting,” tiny lettering that can only be seen under magnification (“USA 100” is repeated within the number 100 in the lower left corner, and “The United States of America” appears as a line in the left lapel of Franklin’s coat). The sourdough vendor also claims his hundreds sport “color-shifting ink,” an advanced feature that gives the money an appearance of changing color when held at different angles.

I checked with the U.S. Secret Service and with counterfeiting experts, none of whom had previously seen serious counterfeit currency marketed and sold on Internet crime forums.

“That’s a first for me, but I guess they can sell anything online these days,” said Jason Kersten, author of The Art of Making Money: The Story of a Master Counterfeiter, a true crime story about a counterfeiter who made millions before his capture by the Secret Service.

Kersten said that outside of so-called “supernote” counterfeits made by criminals within North Korea, it is rare to find vendors advertising features that MrMouse is claiming on his C-notes, including Intaglio (pronounced “in-tal-ee-oh”) and offset printing. Both features help give U.S. currency a certain tactile feel, and it is rare to find that level of quality in fake bills, he said.

Fake money is supposed to leave a black mark with the pen; yellow/gold means the bill passes.

“What you really need to do is feel the money, because a digital image can be doctored in ways that real money cannot,” Kersten said. “With Intaglio, for example, the result is that when the ink dries, you feel a raised surface on the bill.”

The counterfeiting expert said most bogus cash will sell for between 30 and 50 percent of the face value of the notes, with higher-quality counterfeits typically selling toward the upper end of that scale. MrMouse charges 45 percent of the actual dollar amount, with a minimum order of $225 ($500 in bogus Benjamins) – payable in Bitcoins, of course.

According to Kersten, most businesses are ill-prepared to detect counterfeits, beyond simply using a cheap anti-counterfeit pen that checks for the presence of acid in the paper.

“The pen can be fooled if [the counterfeits] are printed on acid-free paper,” Kersten said. “Most businesses are woefully unprepared to spot counterfeits.”

Thankfully, counterfeits are fairly rare; according to a 2010 study (PDF) by the Federal Reserve Bank of Chicago, the incidence of counterfeits that cannot be detected with minimal authentication effort is likely on the order of about three in 100,000.

Kersten said he’s not surprised that it’s taken this long for funny money to be offered in a serious and organized fashion on Internet crime forums: While passing counterfeit notes is extremely risky (up to 20 years in prison plus fines for the attempted use of fake currency with the intent to defraud), anyone advertising on multiple forums that they are printing and selling fake currency is going to quickly attract a great deal of attention from federal investigators.

“The Secret Service does not have a sense of humor about this at all,” Kersten said. “They really don’t.”

MrMouse showcases the ultraviolet security strip in his fake $100 bills. The WillyClock bit is just an image watermark.

TEDUnbelievable stats from our project to “clean” every video file

For Project Cleans, we’ve gone through 3,595 tapes of original files. Here, one of the shelves where these tapes are stored. There’s VHS, BetaMax, LaserDisc—you name it.

A while back, we told you about a big initiative underway in the TED office: Project Cleans. Essentially, we are taking each and every one of our 1,800+ talk video files and stripping them of the text and add-ons that were originally baked in, re-encoding the video, and saving the “clean” files in a new media back-end. Cleaning up these files is incredibly tedious — but it’s absolutely necessary, so we can share our talks around the world through our growing list of partners on the web, TV and radio, each with different needs, in 105 languages. Project Cleans will allow us to share our content farther and wider than ever before, without adding to the workload of our moderately small staff.

This project is big. Shockingly big. Below, just a few of the numbers involved in this process:

50. The number of TED staffers who’ve been in some way involved in Project Cleans. That’s about 1/3 of the company across the production, distribution, partnership and editorial teams.

10. The number of freelance video editors working on the project.

1. A single brave freelancer has watched every single talk in our library for archival and quality control.

475. The number of hours he spent doing that over the course of seven months.

1035. The number of talks that are somewhere along the pipeline of getting cleaned.

703. The number of talks still to go.

10,500. The number of individual file components for our 1,800+ talks.

430. The number of hard drives backed up during this process.

40. The number of big boxes crammed full of tapes that we had to pull out of storage for Project Cleans.

3,595. The number of tapes contained in those boxes.

11. The number of types of media formats in those boxes. This includes 1,372 DVDs, 615 VHS tapes, 182 Betamax tapes, plus uncounted miniDVs, HD Cams, CDs, D-Betas, U-Matics, DVCAMs, DVC Pros and S-VHSs. And LaserDisc.

2 years. The length of time that this project has been in the works, total.

5 months. The length of time before we expect to finish.

1 petabyte. The amount of storage TED bought for 2014 so that we’d be set for this project. To put that in scale, that’s a million gigabytes or about 1,000 terabytes.

745 terabytes. The amount of storage we’ve used so far.

A look at one shelf of our drive and tape library. Photo: Michael Glass

One of the 40 boxes of tapes we pulled out of storage for Project Cleans.

TED staffer Michael Glass surveys boxes of tapes from the TED Archive.


Geek FeminismWednesday Geek Woman: Sofia Samatar, Author, Poet, and Editor

In addition to being the poetry and nonfiction editor for the literary journal Interfictions, Sofia Samatar is the winner of this year’s John W. Campbell Award for Best New Writer.

Her novel, A Stranger in Olondria, has gotten rave reviews. It won the Crawford Fantasy Award; it was a finalist for the Nebula Award and the Locus Award for Best First Novel; and it’s still a finalist for the British Fantasy Award and the World Fantasy Award. You can read an excerpt over on Tor.com.

Her short story “Selkie Stories Are For Losers” has been appearing on a lot of awards shortlists, too. It was a finalist for the Hugo, Nebula, and the British Science Fiction Awards, and is still a finalist for the World Fantasy Awards. You can find links to more of her short fiction (and her poetry!) on her website.

Sociological ImagesBathing Suit Fashion and the Project of Gender

I came across this ad for bathing suits from the 1920s and was struck by how similarly the men’s and women’s suits were designed.  Hers might have some extra coverage up top and feature a tight skirt over shorts instead of just shorts, but, compared to what you see on beaches today, they are essentially the same bathing suit.

So, why are the designs for men’s and women’s bathing suits so different today? Honestly, either one could be gender-neutral. Male swimmers already wear Speedos; the fact that the man in the ad above is covering his chest is evidence that there is a possible world in which men do so. I can see men in bikinis. Likewise, women go topless on some beaches and in some countries and it can’t be any more ridiculous for them to swim in baggy knee-length shorts than it is for men to do so.

But, that’s not how it is.  Efforts to differentiate men and women through fashion have varied over time.  It can be a response to a collective desire to emphasize or minimize difference, like these unisex pants marketed in the 1960s and 70s.  It can also be, however, a backlash to those same impulses.  When differences between men and women in education, leisure, and work start to disappear – as they are right now – some might cling even tighter to the few arenas in which men and women can be made to seem very different.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianOlivier Berger: Building a lab VM based on Debian for a MOOC, using Vagrant + VirtualBox

We’ve been busy setting up a Virtual Machine (VM) image to be used by participants of a MOOC that’s opening in early September on Relational Databases at Telecom SudParis.

We’ve chosen to use Vagrant and VirtualBox which are used to build, distribute and run the box, providing scriptability (reproducibility) and making it portable on most operating systems.

The VM itself contains a Debian (jessie) minimal system which runs (in the background) PostgreSQL, Apache + mod_php, phpPgAdmin, and a few applications of our own to play with example databases already populated in PostgreSQL.

As the MOOC’s language will be French, we expect the box to be used mostly on machines with AZERTY keyboards. This and other context elements led us to add some customizations (locale, APT mirror) in provisioning scripts run during the box creation.
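
To give an idea of the mechanics, a trimmed-down Vagrantfile for such a box might look like this; the box name, forwarded port, mirror and locale below are placeholders rather than the actual MOOC values:

Vagrant.configure("2") do |config|
  config.vm.box = "mooc-db-jessie64"               # placeholder box name
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 1024
  end
  # reach phpPgAdmin / the SQL exerciser from a browser on the host
  config.vm.network "forwarded_port", guest: 80, host: 8080
  # customizations of the kind mentioned above (APT mirror, locale)
  config.vm.provision "shell", inline: <<-SHELL
    sed -i 's/http.debian.net/ftp.fr.debian.org/' /etc/apt/sources.list
    sed -i 's/^# *fr_FR.UTF-8/fr_FR.UTF-8/' /etc/locale.gen && locale-gen
  SHELL
end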

At the moment, we generate 2 variants of the box, one for 32 bits kernel (i686) and one for 64 bits kernel (amd64) which (once compressed) represent betw. 300 and 350 Mb.

The resulting boxes are uploaded to a self-hosting site, and distributed through vagrantcloud. Once the VMs are created in VirtualBox, the typical VMDK drive file is around 1.3 GB.

We use our own Debian base boxes containing a minimal Debian jessie/testing, instead of relying on someone else’s, and recreate them using (the development branch version of) bootstrap-vz. This ensures we can put more trust in the content, as it’s a native Debian package installation without MITM intervention.

The VMs are meant to be run headless for the moment, keeping their size to the minimum, even though we also provide a script to install and configure a desktop environment based on XFCE4.

The applications are either used through vagrant ssh, for instance for the SQL command line in psql, or in the Web browser, for our own Web based SQL exerciser, or phpPgAdmin (see a demo screencast (in French, with English subtitles)), which can then be used even off-line by the participants. This also means no server availability is required from our IT staff.

The MOOC includes a section on PHP + SQL programming, whose exercises can be performed using a shared sub-folder of /vagrant/ which allows editing on the host with the favourite native editor/IDE, while running PHP inside the VM’s Apache + mod_php.

The sources of our environment are available as free software, if you’re interested to replicate a similar environment for another project.

As we’re still polishing the environment before the MOOC opening (on September 10th), I’m not mentioning the box URLs, but they shouldn’t be too hard to find if you’re investigating (referring to the fusionforge project’s web site).

We don’t know yet how suitable this environment will be for learning SQL and database design and programming, and if Vagrant will bring more difficulties than benefits. Still we hope that the participants will find this practical, allowing them to work on the lab / exercises whenever and wherever they choose, removing the pain of installing and configuring an RDBMS on their machines, or the need to be connected to a cloud or to our overloaded servers. Of course, one limitation will be the requirements on the host machines, that will need to be reasonably modern, in order to run a virtualized Linux system. Another is access to high bandwidth for downloading the boxes, but this is kind of a requirement already for downloading/watching the videos of the MOOC classes ;-)

Big thanks go to our intern Stéphane Germain, who joined us this summer to work on this virtualized environment.

Planet Linux AustraliaAndrew Pollock: [life] Day 203: Kindergarten, a run and cleaning

I started the day off with a run. It was just a very crappy 5 km run, but it was nice to be feeling well enough to go for a run, and have the weather cooperate as well. I look forward to getting back to 10 km in my near future.

I had my chiropractic adjustment and then got stuck into cleaning the house.

Due to some sort of scheduling SNAFU, I didn't have a massage today. I'm still not quite sure what happened there, but I biked over and everything. The upside was it gave me some more time to clean.

It also worked out well, because I'd booked a doctor's appointment pretty close after my massage, so it was going to be tight to get from one place to the other.

With my rediscovered enthusiasm for exercise, and cooperative weather, I decided to bike to Kindergarten for pick up. Zoe was very excited. I'd also forgotten that Zoe had a swim class this afternoon, so we only had about 30 minutes at home before we had to head out again (again by bike) to go to swim class. I used the time to finish cleaning, and Zoe helped mop her bathroom.

Zoe wanted to hang around and watch Megan do her swim class, so we didn't get away straight away, which made for a slightly late dinner.

Zoe was pretty tired by bath time. Hopefully she'll have a good sleep tonight.

Worse Than FailureCodeSOD: Misguided Optimization

States and their abbreviations are among my favorite kinds of data - they almost never ever change and, as such, you can hard code all that information into your app. I mean, why bother fetching it from the database every page load? That's just wasted CPU cycles.

So, I can find merit in the hard-coded approach that the below code takes that Alex E. sent our way. However, I definitely believe that it takes guts for anybody to make a claim about the efficiency of strcmp() when you perform a linear search on an ordered list.

const char *StAbbrs[] = {".", "AA", "AB", "AE", "AK", "AL", "AP", "AR",
"AS", "AZ", "BC", "CA", "CI", "CO", "CT", "CZ", "DC", "DE", "FL", "GA",
"GU", "HI", "IA", "ID", "IL", "IN", "KS", "KY", "LA", "MA", "MB", "MD",
"ME", "MI", "MN", "MO", "MP", "MS", "MT", "NB", "NC", "ND", "NE", "NF",
"NH", "NJ", "NM", "NS", "NV", "NY", "OH", "OK", "ON", "OR", "PA", "PE",
"QC", "PR", "RI", "SC", "SD", "SK", "TN", "TX", "US", "UT", "VA", "VI",
 "VT", "WA", "WI", "WV", "WY"};

const char *StateNames[] = {"Foreign Address", "Americas", "Alberta",
 "Europe", "Alaska", "Alabama", "Pacific", "Arkansas", "American Samoa",
 "Arizona", "British Columbia", "California", "Cayman Islands",
 "Colorado", "Connecticut", "Canal Zone", "Dist. of Columbia",
 "Delaware", "Florida", "Georgia", "Guam", "Hawaii", "Iowa", "Idaho",
 "Illinois", "Indiana", "Kansas", "Kentucky", "Louisiana", 
 "Massachusetts", "Manitoba", "Maryland", "Maine", "Michigan",
 "Minnesota", "Missouri", "Mariana Island", "Mississippi", "Montana", 
 "New Brunswick", "North Carolina", "North Dakota", "Nebraska", "Newfoundland", "New Hampshire", "New Jersey", "New Mexico", 
 "Nova Scotia", "Nevada", "New York", "Ohio", "Oklahoma", "Ontario", 
 "Oregon", "Pennsylvania", "Prince Edward Is.", "Quebec", "Puerto Rico", 
 "Rhode Island", "South Carolina", "South Dakota", "Saskatchewan", 
 "Tennessee", "Texas", "Federal", "Utah", "Virginia", "Virgin Islands", 
 "Vermont", "Washington", "Wisconsin", "West Virginia", "Wyoming"};

const char *StateName(const char *StAbbr) {
  for(short index = 0; index < MaxNumStates; index++)
  {		
    // It is faster to compare the two character strings 
    // as shorts than do a strcmp() on them.

    if(*(short *)StAbbr == *(short *)(StAbbrs[index]))
      return StateNames[index];

  }
  return "";
}
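
For contrast, here is roughly what the conventional approach hinted at above looks like: bsearch() plus strcmp() over the table. It reuses the StAbbrs and StateNames arrays from the submission, and assumes the abbreviation table really is sorted, which, with "QC" listed before "PR", it currently is not:

#include <stdlib.h>
#include <string.h>

/* bsearch comparator: key is the abbreviation we are looking for,
   elem points at one entry of the StAbbrs array */
static int cmp_abbr(const void *key, const void *elem)
{
    return strcmp((const char *)key, *(const char **)elem);
}

const char *StateNameSane(const char *StAbbr)
{
    const char **hit = bsearch(StAbbr, StAbbrs,
                               sizeof StAbbrs / sizeof StAbbrs[0],
                               sizeof StAbbrs[0], cmp_abbr);
    return hit ? StateNames[hit - StAbbrs] : "";
}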

CryptogramUS Air Force is Focusing on Cyber Deception

The US Air Force is focusing on cyber deception next year:

Background: Deception is a deliberate act to conceal activity on our networks, create uncertainty and confusion against the adversary's efforts to establish situational awareness and to influence and misdirect adversary perceptions and decision processes. Military deception is defined as "those actions executed to deliberately mislead adversary decision makers as to friendly military capabilities, intentions, and operations, thereby causing the adversary to take specific actions (or inactions) that will contribute to the accomplishment of the friendly mission." Military forces have historically used techniques such as camouflage, feints, chaff, jammers, fake equipment, false messages or traffic to alter an enemy's perception of reality. Modern day military planners need a capability that goes beyond the current state-of-the-art in cyber deception to provide a system or systems that can be employed by a commander when needed to enable deception to be inserted into defensive cyber operations.

Relevance and realism are the grand technical challenges to cyber deception. The application of the proposed technology must be relevant to operational and support systems within the DoD. The DoD operates within a highly standardized environment. Any technology that significantly disrupts or increases the cost to the standard of practice will not be adopted. If the technology is adopted, the defense system must appear legitimate to the adversary trying to exploit it.

Objective: To provide cyber-deception capabilities that could be employed by commanders to provide false information, confuse, delay, or otherwise impede cyber attackers to the benefit of friendly forces. Deception mechanisms must be incorporated in such a way that they are transparent to authorized users, and must introduce minimal functional and performance impacts, in order to disrupt our adversaries and not ourselves. As such, proposed techniques must consider how challenges relating to transparency and impact will be addressed. The security of such mechanisms is also paramount, so that their power is not co-opted by attackers against us for their own purposes. These techniques are intended to be employed for defensive purposes only on networks and systems controlled by the DoD.

Advanced techniques are needed with a focus on introducing varying deception dynamics in network protocols and services which can severely impede, confound, and degrade an attacker's methods of exploitation and attack, thereby increasing the costs and limiting the benefits gained from the attack. The emphasis is on techniques that delay the attacker in the reconnaissance through weaponization stages of an attack and also aid defenses by forcing an attacker to move and act in a more observable manner. Techniques across the host and network layers or a hybrid thereof are of interest in order to provide AF cyber operations with effective, flexible, and rapid deployment options.

More discussion here.

Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: nova-network to Neutron migration

This will be my second last post about the Juno Nova mid-cycle meetup, which covers the state of play for work on the nova-network to Neutron upgrade.

First off, some background information. Neutron (formerly Quantum) was developed over a long period of time to replace nova-network, and added to the OpenStack Folsom release. The development of new features for nova-network was frozen in the Nova code base, so that users would transition to Neutron. Unfortunately the transition period took longer than expected. We ended up having to unfreeze development of nova-network, in order to fix reliability problems that were affecting our CI gating and the reliability of deployments for existing nova-network users. Also, at least two OpenStack companies were carrying significant feature patches for nova-network, which we wanted to merge into the main code base.

You can see the announcement at http://lists.openstack.org/pipermail/openstack-dev/2014-January/025824.html. The main enhancements post-freeze were a conversion to use our new objects infrastructure (and therefore conductor), as well as features that were being developed by Nebula. I can't find any contributions from the other OpenStack company in the code base at this time, so I assume they haven't been proposed.

The nova-network to Neutron migration path has come to the attention of the OpenStack Technical Committee, who have asked for a more formal plan to address Neutron feature gaps and deprecate nova-network. That plan is tracked at https://wiki.openstack.org/wiki/Governance/TechnicalCommittee/Neutron_Gap_Coverage. As you can see, there are still some things to be merged which are targeted for juno-3. At the time of writing this includes grenade testing; Neutron being the devstack default; a replacement for nova-network multi-host; a migration plan; and some documentation. They are all making good progress, but until these action items are completed, Nova can't start the process of deprecating nova-network.

The discussion at the Nova mid-cycle meetup was around the migration planning item in the plan. There is a Nova specification that outlines one possible plan for live upgrading instances (i.e., no instance downtime) at https://review.openstack.org/#/c/101921/, but this will probably now be replaced with a simpler migration path involving cold migrations. This is prompted by not being able to find a user that absolutely has to have live upgrade. There was some confusion, because of a belief that the TC was requiring a live upgrade plan. But as Russell Bryant says in the meetup etherpad:

"Note that the TC has made no such statement on migration expectations other than a migration path must exist, both projects must agree on the plan, and that plan must be submitted to the TC as a part of the project's graduation review (or project gap review in this case). I wouldn't expect the TC to make much of a fuss about the plan if both Nova and Neutron teams are in agreement."


The current plan is to go forward with a cold upgrade path, unless a user comes forward with an absolute hard requirement for a live upgrade, and a plan to fund developers to work on it.

At this point, it looks like we are on track to get all of the functionality we need from Neutron in the Juno release. If that happens, we will start the nova-network deprecation timer in Kilo, with my expectation being that nova-network would be removed in the "M" release. There is also an option to change the default networking implementation to Neutron before the deprecation of nova-network is complete, which will mean that new deployments are defaulting to the long term supported option.

In the next (and probably final) post in this series, I'll talk about the API formerly known as Nova API v3.

Tags for this post: openstack juno nova mid-cycle summary nova-network neutron migration
Related posts: Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: containers

Planet Linux AustraliaTridge on UAVs: First flight of ArduPilot on Linux

I'm delighted to announce that the effort to port ArduPilot to Linux reached an important milestone yesterday with the first flight of a fixed wing aircraft on a Linux based autopilot running ArduPilot.

As I mentioned in a previous blog post, we've been working on porting ArduPilot to Linux for a while now. There are lots of reasons for wanting to do this port, not least of which is that it is an interesting challenge!

For the test flight yesterday I used a PXF cape which is an add-on to a BeagleBoneBlack board which was designed by Philip Rowse from 3DRobotics. The PXF cape was designed as a development test platform for Linux based autopilots, and includes a rich array of sensors. It has 3 IMUs (a MPU6000, a MPU9250 and a LSM9DSO), plus two barometers (MS5611 on SPI and I2C), 3 I2C connectors for things like magnetometers and airspeed sensors plus a pile of UARTs, analog ports etc.

All of this sits on top of a BeagleBoneBlack, which is a widely used embedded Linux board with 512M RAM, a 1GHz ARM CPU and 2 GByte of EMMC for storage. We're running the Debian Linux distribution on the BeagleBoneBlack, with a 3.8.13-PREEMPT kernel. The BBB also has two nice co-processors called PRUs (programmable realtime units) which are ideal for timing critical tasks. In the flight yesterday we used one PRU for capturing PPM-SUM input from a R/C receiver, and the other PRU for PWM output to the aircraft's servos.

Summer of code project

The effort to port ArduPilot to Linux got a big boost a few months ago when we teamed up with Victor, Sid and Anuj from the BeaglePilot project as part of a Google Summer of Code project. Victor was sponsored by GSoC, while Sid and Anuj were sponsored as summer students in 3DRobotics. Together they have put a huge amount of effort in over the last few months, which culminated in the flight yesterday. The timing was perfect, as yesterday was also the day that student evaluations were due for the GSoC!

PXF on a SkyWalker

For the flight yesterday I used a 3DR SkyWalker, with the BBB+PXF replacing the usual Pixhawk. Because the port of ArduPilot to Linux used the AP_HAL hardware abstraction layer all of the hardware specific code is abstracted below the flight code, which meant I was able to fly the SkyWalker with exactly the same parameters loaded as I have previously used with the Pixhawk on the same plane.

For this flight we didn't use all of the sensors on the PXF however. Some issues with the build of the initial test boards meant that only the MPU9250 was fully functional, but that was quite sufficient. Future revisions of the PXF will fix up the other two IMUs, allowing us to gain the advantages of multiple IMUs (specifically it gains considerable robustness to accelerometer aliasing).

I also had a digital airspeed sensor (on I2C) and an external GPS/Compass combo to give the full set of sensors needed for good fixed wing flight.

Debugging at the field

As with any experimental hardware you have to expect some issues, and the PXF indeed showed up a problem when I arrived at the flying field. At home I don't get GPS lock due to my metal roof so I hadn't done much testing of the GPS and when I was doing pre-flight ground testing yesterday I found that I frequently lost the GPS. With a bit of work using valgrind and gdb I found the bug, and the GPS started to work correctly. It was an interesting bug in the UART layer in AP_HAL_Linux which also may affect the AP_HAL_PX4 code used on a Pixhawk (although with much more subtle effect), so it was an important fix, and really shows the development benefit of testing on multiple platforms.

After that issue was fixed the SkyWalker was launched, and as expected it flew perfectly, exactly as it would fly with any other ArduPilot based autopilot. There was quite a strong wind (about 15 knots, gusting to 20) which was a challenge for such a small foam plane, but it handled it nicely.

Lots more photos of the first flight are available here. Thanks to Darrell Burkey for braving a cold Canberra morning to come out and take some photos!

Next Steps

Now that we have ArduPilot on PXF flying nicely the next step is a test flight with a multi-copter (I'll probably use an Iris). I'm also looking forward to hearing about first flight reports from other groups working on porting ArduPilot to other Linux based boards, such as the NavIO.

This project follows in the footsteps of quite a few existing autopilots that run on Linux, both open source and proprietary, including such well known projects as Paparazzi, the AR-Drone and many research autopilots at universities around the world. Having the ability to run ArduPilot on Linux opens up some interesting possibilities for the ArduPilot project, including things like ROS integration, tightly integrated SLAM and lots of computationally intensive vision algorithms. I'm really looking forward to ArduPilot on Linux being widely available for everyone to try.

All of the code needed to fly ArduPilot on Linux is in the standard ArduPilot git repository.

Thanks

Many thanks to 3DRobotics for sponsoring the development of the PXF cape, and to Victor, Sid and Anuj for their efforts over the last few months! Special thanks to Philip Rowse for designing the board, and for putting up with lots of questions as we worked on the port, and to Craig Elder and Jeff Wurzbach for providing engineering support from the 3DR US offices.

Planet DebianGunnar Wolf: Bigger than the cloud

Summer is cool in Mexico City.

It is cool because, unlike Spring, this is our rainy season — And rains are very predictable. Almost every day we wake up with a gorgeous, clean, blue sky.

Cool, nice temperature, around 15°C. The sun slowly evaporates the rain throughout the morning; when I go out for lunch, the sky is no longer so blue, giving way to a seemingly dirty white/grayish tint. No, it's not our world-famous pollution: It's just yesterday's rain.

Rain starts falling usually between 4 and 7 PM. Sometimes it starts as a light rain, sometimes it starts with all of its thunder, all of its might. But anyway, almost every night, there is a moment of awe, of not believing how much rain we are getting today.

It slowly fades away during the late night. And when I wake up, early next morning, everything is wet and still smells fresh.

Yes, I love our summer, even though it makes shy away from my much enjoyed cycling to work and school. And I love taking some minutes off work, look through the window of my office (located ~70m over the level of our mostly flat city) and watching how different parts of the city have sun or rain; learning to estimate the distance to the clouds, adding it to the direction and guessing which of my friends have which weather.

But I didn't realize our city had so clearly defined micro-climates... (would they really be *micro*-climates?) In fact, it even goes against my knowledge of Mexico City's logic — I always thought Coyoacán, towards the South of the city, got more rain than the Center and North because we are near the mountains, and the dominant air currents go Southwards, "clumping" the clouds by us.

But no, or at least, not this year. Regina (still in the far South — Far because she's too far away from me and I'm too egocentric; she returns home after DebConf) often asks me about the weather, as do our friends working nearer the center of the city. According to the photos they post on their $social_media_of_the_day accounts, rains are really heavier there.

Today I heard on the radio accounts of yesterday's chaos after the rain. This evening, at ESIME-Culhuacán, I saw one of the reported fallen trees (of course, I am not sure if it's from yesterday's rain). And the media pushes galleries of images of a city covered in hail... While in Copilco we only had a regular rain, I'd even say a mild one.

This city is bigger than any cloud you can throw at it.


Planet DebianHolger Levsen: 20140819-lts-july-2014

Debian LTS - impressions and thoughts from my first month involvement

About LTS - we want feedback and more companies supporting it financially

Squeeze LTS, and even more Debian LTS, is a pretty young project, started in May 2014, so it's still a bit unclear where exactly we'll be going :) One purpose of this post is to spread some information about the initiative and invite you to tell us what you think about it or what your needs are.

LTS stands for "Long Term Support" and the goal of the project is to extend the security support for Squeeze (aka the current oldstable distribution) by two years. If it weren't for Squeeze LTS, security support for it would have stopped in May 2014 (=one year after the release of the current stable distribution), which for many is too short a timespan after its release in February 2011. It's an experiment; we hope that there will be a similar Wheezy LTS initiative in future, but the future is unwritten and things will change based on our experiences and your needs.

If you have feedback on the direction LTS should take (or anything else LTS related), please comment on the lts mailing list. For immediate feedback there is also the #debian-lts IRC channel.

Another quite pragmatic way to express your needs is to read more about how to financially contribute to LTS and then do exactly that - and unsurprisingly, we prioritize the updates based on the needs expressed by our paying customers.

My LTS work in July 2014

So, "somehow" I started working for money on Debian LTS in July, which means there were 10h I got paid, and probably another 10h where I did LTS related work unpaid. I used those to release four updates for squeeze-lts (linux-2.6, file, munin and reportbug) fixing 22 CVEs in total.

The unpaid work was mostly spent on unsuccessfully working on security updates and on adding support for LTS to the security team tracker, which I improved but couldn't fully address and which I haven't properly shared / committed yet... but at least I have a local instance of the tracker now, which - for LTS - is more useful than the .debian.org one. Hopefully during DebConf14 we'll manage to fix the tracker for good.

Without success I've looked at libtasn1-3 (where the first fixes applied easily, but then the code in squeeze had diverged too much from the available patches, so I gave up) and libxstream-java (which is at version 1.3, while patches exist for the upstream 1.4 and 2.x branches, but those need a newer Java to build; maybe if I spend two more hours I can get it to build, and then I'll have to find a useful test case, which looked quite difficult on a brief look... so maybe I'll give up on libxstream-java too). OTOH, if you use it and can do some testing, please do tell me.

Working on all these updates turned out to involve more team work than expected, and a lot of work on code (which I did expect) - often code which I'd normally not look at... similarly with tools: one has to deal with tools one doesn't like, e.g. I had to install cdbs... :-) And as I usually like challenges, this has actually been a lot of fun! Though it's also pretty cool to use common best practices and easy, understandable workflows. I love README.Source (or better yet, when it's not needed). And while git is of course really really great, it's still very desirable if your package builds twice (=many times) in a row, without resetting it via git.

Some more observations

The first 16 updates (until July 19th) didn't have a DLA ID, until I suggested introducing them and insisted until we agreed. So now we have agreed to put the DLA ID in the subject of the announcement mails, and there is also some tool support for generating the templates/mails, but proper subjects are not enforced, as silent bounces are useless (and non-silent ones can be abused too easily). I'm not really happy about this, but who is happy with the way email works today? And I agree, it's not the end of the world if an LTS announcement goes out without a proper ID - it looks unprofessional and could be avoided, but we have more important work to do. One day, though, we should automate this better.

Another detail I'm not entirely happy about is the policy/current decision that "almost everything is fine to upload if deemed sensible by the uploader" (which is everyone in the Debian upload keyring(s)). In discussions before the archive actually existed, some people expressed a desire to upload new upstream versions too (e.g. newer kernels, iceweasel or other software needed to keep running a squeeze desktop in the modern world). I sincerely hope that for the most intrusive new upstream versions squeeze-(sloppy-)backports is used instead, and squeeze-lts rather not. Luckily, so far all uploads have been (IMHO) sensible, so right now I will just say that I hope it stays this way. And it's true, one also has to install these upgrades in the first place. But people do that blindly all the time...

So by design/desire there is currently no gatekeeping mechanism whatsoever (no NEW, no proposed-updates), except that only a select "few" can upload. What is uploaded (and signed correctly) gets pushed to the archive, the buildds and the mirrors, and hopefully someone will then send an announcement. So far this has worked remarkably well - and it's also the way the Debian Security team works, as I'm told. Looking at this from a process quality / automation perspective, all this manual and error-prone labour seems very strange to me. ;-)

And then there is another thing: as already mentioned, the people working paid hours on this are prioritizing their work based on customer requests. So we did two updates (python-scipy and file) which are not fixed in wheezy yet. I think this is unfortunate, and while I could probably prepare the wheezy DSA for file, I don't really want to join the Security Team... or maybe I want/should join the Security Team and be a seldom-active member (e.g. fixing file in wheezy now...)

A note related to this: out of those 37 uploads done until today, 16 were done by the two people being paid, while the other 21 uploads were done by 10 volunteers (or at least people not paid by Debian LTS). It will be interesting to see how these statistics evolve over time.

Last, but not least, there is also this can of worms (aka: the discussion) about paying people to work on Debian... I do agree it's work we didn't find volunteers for, and I also see how the (financial side of the) setup is done outside of Debian (and done well too, btw!), but... then we also use Debian resources like buildds, the archive itself and official mailing lists. Also, I'm currently biased in this discussion, as I directly (and happily) profit from it. I'm mentioning this here because I believe it's important that we discuss it and come to both good and practical conclusions. FWIW, we have also discussed this on the list; feel free to search the archives for it.

To bring this post to an end: for those attending DebConf14 (directly or thanks to some ninjas), there will be an event about LTS in Portland, though I'm not sure yet what I'll have to say that hasn't already been covered here :-) But that probably means it will be a good opportunity for you to do lots of talking instead! I'm curious what you will have to say!

Thanks for reading this far. I think I can promise that my next LTS report will be shorter :-D

Planet Linux AustraliaAndrew Pollock: [life] Day 202: Kindergarten, a lot of administrative running around, tennis

Yesterday was a pretty busy day. I hardly stopped, and on top of a poor night's sleep, I was pretty exhausted by the end of the day.

I started the day with a yoga class, because a few extraordinary things had popped up on my schedule, meaning this was the only time I could get to a class this week. It was a beginner's class, but it was nice to have a slower pace for a change, and an excellent way to start the day off.

I drove to Sarah's place to pick up Zoe and take her to Kindergarten. I made a bad choice of route, and the traffic was particularly bad, so we got to Kindergarten a bit later than normal.

After I dropped Zoe off, I headed straight to the post office to get some passport photos for the application for my certificate of registration. I also noticed that they now had some post office boxes available (I was a bit miffed, because I'd been actively discouraged from putting my name down for one earlier in the year because of the purported length of the wait list). I discovered that one does not simply open a PO box in the name of a business; one needs letters of authority, printouts of ABNs and whatnot. So after I got my passport photos and made a few other impulse purchases (USB speakers for $1.99?!) I headed back home to gather the other documentation I needed.

By the time I'd done that and a few other bits and pieces at home, it was time to pop back to get my yoga teacher to certify my photos. Then I headed into the city to lodge the application in person. I should get the piece of paper in 6 weeks or so.

Then I swung past the post office to complete my PO box application (successfully this time) and grab some lunch, and update my mailing address with the bank. By the time I'd done all that, I had enough time to swing past home to grab Zoe's tennis racquet and a snack for her and head to Kindergarten to pick her up.

Today's tennis class went much better. Giving her a snack before the class started was definitely the way to go. She'd also eaten a good lunch, which would have helped. I just need to remember to get her to go to the toilet, then she should be all good for an interruption-free class.

I dropped Zoe directly back to Sarah after tennis class today, and then swung by OfficeWorks to pick up some stationery on the way home.

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.4.400.0

After two pre-releases in the last few days, Conrad finalised a new Armadillo version 4.400 today. I had kept up with the pre-releases, tested twice against all eighty (!!) CRAN dependents of RcppArmadillo and have hence uploaded RcppArmadillo 0.4.400.0 to CRAN and into Debian.

This release brings a number of new upstream features, which are detailed below. Also included is a bugfix for sparse matrix creation at the RcppArmadillo end, which was found by the ASAN tests at CRAN --- tests similar to the sanitizer tests I recently blogged about. I was able to develop and test the fix in the very docker r-devel-san images I had written about, which was nice. Special thanks also to Ryan Curtin for help with the fix.

Changes in RcppArmadillo version 0.4.400.0 (2014-08-19)

  • Upgraded to Armadillo release Version 4.400 (Winter Shark Alley)

    • added gmm_diag class for statistical modelling using Gaussian Mixture Models; includes multi-threaded implementation of k-means and Expectation-Maximisation for parameter estimation

    • added clamp() for clamping values to be between lower and upper limits

    • expanded batch insertion constructors for sparse matrices to add values at repeated locations

    • faster handling of subvectors by dot()

    • faster handling of aliasing by submatrix views

  • Corrected a bug (found by the g++ Address Sanitizer) in sparse matrix initialization where space for a sentinel was allocated, but the sentinel was not set; with extra thanks to Ryan Curtin for help

  • Added a few unit tests for sparse matrices

Courtesy of CRANberries, there is also a diffstat report for the most recent release. As always, more detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaArjen Lentz: Two Spaces After a Period: Why You Should Never, Ever Do It | Slate.com

,

Planet DebianJoey Hess: using a debian package as the remote for a local config repo

Today I did something interesting with the Debian packaging for propellor, which seems like it could be a useful technique for other Debian packages as well.

Propellor is configured by a directory, which is maintained as a local git repository. In propellor's case, it's ~/.propellor/. This contains a lot of Haskell files, in fact the entire source code of propellor! That's really unusual, but I think this can be generalized to any package whose configuration is maintained in its own git repository on the user's system. From now on, I'll refer to this as the config repo.

The config repo is set up the first time a user runs propellor. But, until now, I didn't provide an easy way to update the config repo when the propellor package was updated. Nothing would break, but the old version would be used until the user updated it themselves somehow (probably by pulling from a git repository over the network, bypassing apt's signature validation).

So, what I wanted was a way to update the config repo, merging in any changes from the new version of the Debian package, while preserving the user's local modifications. Ideally, the user could just run git merge upstream/master, where the upstream repo was included in the Debian package.

But, that can't work! The Debian package can't reasonably include the full git repository of propellor with all its history. So, any git repository included in the Debian binary package would need to be a synthetic one, that only contains probably one commit that is not connected to anything else. Which means that if the config repo was cloned from that repo in version 1.0, then when version 1.1 came around, git would see no common parent when merging 1.1 into the config repo, and the merge would fail horribly.

To solve this, let's assume that the config repo's master branch has a parent commit that can be identified, somehow, as coming from a past version of the Debian package. It doesn't matter which version, although the last one merged with will be best. (The easy way to do this is to set refs/heads/upstream/master to point to it when creating the config repo.)

Once we have that parent commit, we have three things:

  1. The current content of the config repo.
  2. The content from some old version of the Debian package.
  3. The new content of the Debian package.

Now git can be used to merge #3 onto #2, with -Xtheirs, so the result is a git commit with parents of #3 and #2, and content of #3. (This can be done using a temporary clone of the config repo to avoid touching its contents.)

Such a git commit can be merged into the config repo, without any conflicts other than those the user might have caused with their own edits.

So, propellor will tell the user when updates are available, and they can simply run git merge upstream/master to get them. The resulting history looks like this:

* Merge remote-tracking branch 'upstream/master'
|\  
| * merging upstream version
| |\  
| | * upstream version
* | user change
|/  
* upstream version
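
For the curious, here's a rough sketch of that dance in Python, driving git through subprocess. This follows the description above but is only illustrative, not propellor's actual code; the function names and the exact ref layout are mine.

import subprocess
import tempfile

def git(repo, *args):
    # Run a git command inside the given repository checkout.
    subprocess.check_call(["git", "-C", repo] + list(args))

def refresh_upstream_branch(config_repo, new_upstream_repo):
    # Build the synthetic merge commit: parents are the old and new
    # upstream commits, content is (essentially) the new upstream's.
    # Work in a temporary clone so the user's checkout is never touched.
    with tempfile.TemporaryDirectory() as tmp:
        subprocess.check_call(["git", "clone", "--quiet", config_repo, tmp])
        git(tmp, "fetch", "--quiet", new_upstream_repo, "master")
        git(tmp, "checkout", "--quiet", "origin/upstream/master")  # old upstream
        git(tmp, "merge", "--quiet", "-Xtheirs",
            "-m", "merging upstream version", "FETCH_HEAD")        # new upstream wins
        # Publish the result as the config repo's upstream/master, so the
        # user can then simply run: git merge upstream/master
        git(tmp, "push", "--quiet", "origin",
            "HEAD:refs/heads/upstream/master")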

So, generalizing this, if a package has a lot of config files, and creates a git repository containing them when the user uses it (or automatically when it's installed), this method can be used to provide an easily mergable branch that tracks the files as distributed with the package.

It would perhaps not be hard to get from here to a full git-backed version of ucf. Note that the Debian binary package doesn't have to ship a git repository; it can just as easily ship the current version of the config files somewhere in /usr, and check them into a new empty repository as part of the generation of the upstream/master branch.

LongNowThe NSA reaches out

This lecture was presented as part of The Long Now Foundation’s monthly Seminars About Long-term Thinking.

Inside the NSA

Wednesday August 6, 02014 – San Francisco

Video is up on the Neuberger Seminar page.

*********************

Audio is up on the Neuberger Seminar page, or you can subscribe to our podcast.

*********************

The NSA reaches out – a summary by Stewart Brand

Of her eight great-grandparents, seven were murdered at Auschwitz. “So my family’s history burned into me a fear of what occurs when the power of a state is turned against its people or other people.”

Seeking freedom from threats like that brought her parents from Hungary to America. By 1976 they had saved up to take their first flight abroad. Their return flight from Tel Aviv was hijacked by terrorists and landed at Entebbe Airport in Uganda. Non-Jewish passengers were released and the rest held hostage. The night before the terrorists were to begin shooting the hostages, a raid by Israeli commandos saved most of the passengers.

Anne Neuberger was just a baby in 1976. “My life would have looked very different had a military operation not brought my parents home. It gives me a perspective on the threats of organized terror and the role of intelligence and counterterrorism.” When she later entered government service, she sought out intelligence, where she is now the principal advisor to the Director for managing NSA’s work with the private sector.

The NSA, Neuberger said, has suffered a particularly “long and challenging year” dealing with the public loss of trust following the Snowden revelations. The agency is reviewing all of its activities to determine how to regain that trust. One change is more open engagement with the public. “This presentation is a starting point.”

“My family history,” she said, “instilled in me almost parallel value systems – fear of potential for overreach by government, and belief that sometimes only government, with its military and intelligence, can keep civilians safe. Those tensions shape the way I approach my work each day. I fully believe that the two seemingly contradictory factors can be held in balance. And with your help I think we can define a future where they are.”

The National Security Agency, she pointed out, actively fosters the growth of valuable new communication and computing technology and at the same time “needs the ability to detect, hopefully deter, and if necessary disable lethal threats.” To maintain those abilities over decades and foster a new social contract with the public, Neuberger suggested contemplating 5 tensions, 3 scenarios, and 3 challenges.

The tensions are… 1) Cyber Interdependencies (our growing digital infrastructure is both essential and vulnerable); 2) Intelligence Legitimacy Paradox (to regain trust, the NSA needs publicly understood powers to protect and checks on that power); 3) Talent Leverage (“the current surveillance debates have cast NSA in a horrible light, which will further hamper our recruiting efforts”); 4) Personal Data Norms (the growing Internet-of-things—Target was attacked through its air-conditioning network—opens vast new opportunities for tracking individual behavior by the private as well as public sector); 5) Evolving Internet Governance (the so-far relatively free and unpoliticized Internet could devolve into competing national nets).

Some thirty-year scenarios… 1) Intelligence Debilitated (with no new social contract of trust and thus the loss of new talent, the government cannot keep up with advancing technology and loses the ability to manage its hazards); 2) Withering Nation (privacy obsession hampers commercial activity and government oversight, and nations develop their own conflicting Internets); 3) Intelligent America (new social contract with agreed privacy norms and ongoing security assurance).

Initiatives under way from NSA… 1) Rebuild US Trust (move on from “quiet professionals” stance and actively engage the public); 2) Rebuild Foreign Trust (“extend privacy protections previously limited to US citizens to individuals overseas”); 3) Embrace Collective Oversight (reform bulk collection programs in response to the President’s Privacy and Civil Liberties Oversight Board).

As technology keeps advancing rapidly, the US needs to stay at the forefront in terms of inventing the leading technical tools to provide public services and maintain public security, plus the policy tools to balance civil liberties with protection against ever-evolving threats. “My call to action for everyone in this audience is get our innovative minds focussed on the full set of problems.”

A flood of QUESTION CARDS came to the stage, only a few of which we could deal with live. Anne Neuberger wanted to take all the questions with her to share with NSA colleagues, so Laura Welcher at Long Now typed them up. I figure that since the questioners wanted their questions aired on the stage to the live and video audience, they would like to have them aired here as well. And it would be in keeping with the NSA’s new openness to public discourse. Ms. Neuberger agreed…

I have a general (unfocused) question about transparency – which
hasn’t been mentioned thus far. What is the NSA’s rationale around
hiding its activities from the American people? What can you tell us
about the issue of transparency going forward?

What are the key questions NSA is discussing following the Snowden
releases? And what is the NSA doing to address these issues?

Germany is very, very upset. What could we have done, and what should
we do in the future, to fulfill our many responsibilities while also
respecting our most valuable international relationships?

How can we work toward a new social contract when the intelligence
agency directors repeatedly lie to the Congress and to the public?

Is it true you can still find one-star generals playing Magic the
Gathering in the NSA canteen during lunch hour?

The failures of 9-11 were not technical failures, but failures of
individuals and organizations to work together toward a common goal.
What concrete steps can you describe in the intelligence community
that have been taken to remedy this?

What is the NSA doing to make the scope of its data collection efforts
as transparent as possible, while still achieving its goals w.r.t.
national security?

Is it an acceptable outcome that NSA fails at securing us in the
service of privacy considerations?

If the Snowden incident hadn’t happened, would the NSA have hired the
civil liberties expert? What structural changes will make this role
actually effective?

Has the real tension been between the NSA needing to protect its own
systems while ensuring that everybody else’s are vulnerable? Is this
inevitable?

Do you believe the mission of the NSA can be accomplished without
building a record of all worldwide communications and activities? Why?

Is the NSA embedding backdoor or surveillance capability in any
commercial integrated circuits?

If you want to address the damage to public trust, and improve the
social contract, why not applaud the work Edward Snowden has done to
demonstrate how your agency has gone astray?

Do you consider the NSA’s role in weakening the RSA random number
generator to be a violation of the NSA’s existing social contract?
How do you think about its exploitability by criminal elements?

What do you tell American corporate tech leaders who are concerned
about lowered trust and security of their services and products? Lack
of trust based on national security letters, for example, or
weaknesses introduced into RSA crypto by the NSA?

What is the best mechanism for an intelligence agency to prevent
themselves from using “national security secrecy” to cover up an
embarrassment? Is there something better than whistleblowers?

Secure information and privacy need to be balanced – please give an
example of when you feel the NSA worked at its best in this balancing
act. Please be specific :-)

How much is your presentation a reflection of NSA or your personal views?

Should the NSA play a role in devising the new rules for cyberwar?
(Since the old rules for war don’t work in the digital universe.) How
do we citizens participate?

Do you personally feel that the leaks of the last year have revealed
serious overreach by your agency? Or, do you feel as though the NSA
has simply been unfairly painted and that the leaks have been
damaging?

Privacy is, logically, implied (4th, and 5th and 10th Amendments).
Should it be an explicit right? If so, how should it be architected?

Amnesty for Snowden?

When Russia invaded Ukraine, it seemed to take us by surprise. Have
Snowden’s revelations damaged our ability to anticipate sudden moves
by rivals and adversaries?

How can the NSA build an effective social contract when it destroys
evidence in an active case and when its decisions are made in a secret
court without public scrutiny?

How can the public make informed decisions if NSA keeps secret what it
is doing from its public rulers viz the abuses exposed by Snowden?

Can you give an example of a credible “cyber threat” thwarted by the NSA?

Why did NSA dissolve its Chief Scientist Office? So too FBI. This
Office funded the disk drive and speech recognition.

How do you reconcile your stated goal of improving the security of
private sector products with NSA’s documented practice of
intentionally weakening encryption standards and adding backdoors to
exported network devices that facilitate billions of dollars of
e-commerce?

How does surveillance directed towards the United States’s closest
allies help deter terrorist threats, and how does the damage of our
relationship with Germany and other allies offset the benefits of
conducting such surveillance?

I am an American, legally, politically, culturally, economically. I
was born in Pakistan and am a young male. My demographics are the
prime target of the NSA. I have no recourse if the NSA sees that I
have visited the “wrong” links. I am afraid that the NSA deems me a
suspect. Your response?

Balancing the needs of ‘security, society and business’ leaves most of
us with 1 vote in 3. Given the shared interest in big data by
security agencies and business, how do the rest of us keep from
getting outvoted 2-to-1 every time?

Your fears seem to be based on a highly competitive scarcity-based
economy. What is your role in a post-scarcity society?

In what ways do public, crowdsourced prediction markets help to
resolve the tension between public trust and the need for
sophisticated intel?

Does the government have either a duty or a need to be open and honest
in its communication with the public?

How does the NSA approach biological data? Synthetic biology applications?

You never use the word law.

How many more leaks would it take to make your mission impossible?
Personally I look forward to this particular point in time.

Please share your thoughts on: Re: ‘talent leverage’ impact on world
stage. We are all one family on spaceship earth, and we have grave
system failures in the ship. If the U.S. gov’t can shift from empire
to universal economic empowerment, based on natural carrying capacity
of each ecosystem. Then, trust can be restored that this is not a
gov’t of and for the military-industrial complex, and the most
powerful corporations.

What are three basic reasons that make the NSA assume that it doesn’t
need to obey the law?

Surveillance and security are mutually contradictory goals. Shouldn’t
these functions of the NSA be split into different agencies?

Was Snowden a hero or a damaging rogue? Did he catalyze changes to
keep NSA from being the “KGB”?

Do we live in a democracy when there are no checks and balances in the
intelligence community? –> CIA/Senate, –> Snowden/NSA?

You described the importance of a social contract in determining the
appropriate balance between privacy and intelligence gathering. But
contracts require all parties to be well-informed and to trust each
other. How can the American public trust the intelligence community
when all of the reforms you mentioned only occurred because a
concerned patriot chose to blow the whistle (and now faces
prosecution)?

How are we to maintain the creative outliers and risk takers (things
that have been known to create growth and brilliance) if we are
keeping / tracking ‘norms’ as acceptable – or the things we accept. –
How will we know if we are wrong?

Can or does the NSA influence or seek to influence immigration policy
so that the US could retain foreign workers here on expiring H1Bs?

What does the NSA see as some of the greatest emerging technologies
(quantum decryption for example) that can create the future
“Intelligent America”?

What are the factors that determine whether the NSA ‘quietly assists’ in
improving a company’s product security, or weakens or promotes
weaker crypto standards / algorithms / tech?

Please talk about the recent large scale hacking from Russia.

Why frame this as “how can laws keep up with technology” instead of
“how do we keep the NSA from exceeding the law?”

1) Was NSA interdiction of a sovereign leader’s aircraft a violation
of international law? 2) Does NSA believe they can mill and drill a
database to find potential terrorists?

The NSA paid a private security firm, RSA, to introduce a weakness
into its security software. Spying is one matter. But making our
defenses weaker is another. How do you defend this?

What is your biggest fear about NSA overreaching in its power [?]

How many real, proven terrorist threats to the U.S. have been
uncovered by NSA surveillance of email / cell phone activity of
private citizens in the last few years (4-8)?

Your list of tensions omitted any mention of corporate or otherwise
economic fallout that may result or have resulted from the Snowden
revelations. What relief mechanism do you foresee maintaining
corporate trust in the American government?

You mentioned during slide 14 that the Director of the NSA is
declassifying more information to promote “transparency”. Can you
please elaborate on how we might find these recently declassified
documents?

Long ago we created a “privilege” for priests, doctors and lawyers,
fearing we could not use them without it. Today, our computers know
us better than our priests, but they have no privilege and can betray
us to surveillance. How do we fix that?

What systems are in place to prevent further leaks?

1) Is it ok for a foreign entity to collect and intercept President
Obama’s communications without our knowledge? 2) Do you think William
Binney and Thomas Drake are heroes?

How do we build a world of transparency, while also enabling security
for our broader society?

As we grow more connected, the sense of distance embodied in national
patriotism and the otherness of the world shrinks. How is a larger
NSA a reasonable response in terms of a social contract?

Describe the culture that says it’s ok to monitor and read US
citizens’ email (pre-revelation) [?]

How can the NSA enable more due process during the review of approvals
of modern “wire taps” (i.e. translating big data searches to
individuals)?

In the next 10 years there will be breakthroughs in math creating
radical changes in data mining. What are the social risks of that
being dominated by NGO’s vs. government?

Has the NSA performed criminally illegal wiretapping? If so, when
will those responsible be prosecuted?

Can you define what unlocking Big Data responsibly really means and
give examples? Can NSA regulate Facebook in terms of privacy and
ownership of users’ data?

How do other governments deal with similar problems?

What prevents NSA from trusting “Intelligent America” revealing that
linking information but not the content was broadly collected could
have been understood and well presented. Funded [?] “Intelligent
Ingestion of Information” …[?] DARPA 1991-1995.

Please address the spying upon and the filing of criminal charges
against US Senators and their staff by the USA, particularly in the
case of Senator Diane Feinstein of California.

Does the NSA’s legitimacy depend more on the safety of citizens or
ensuring the continuity of the Constitutional system?

Can you shed any light on why Pres. Obama has indicted more
whistleblowers than all previous presidents combined?

When will Snowden be recognized as a hero? When will Clapper go to
jail for perjury? Actions speak louder than buzz words.

Does NSA make available the algorithms for natural language processing
used by the data analysis systems?

In the long term view, it would seem freedom is a higher priority
value than safety so why is safety the highest value here? Why isn’t
the USA working primarily to ensure our continued freedom?

How do you protect sources and methods while forging the new social contract?

How can any company trust cybercommand when the same chief runs NSA
where the focus is attack? How can we trust the Utah Data Center
after such blatant lies of “targeted surveillance?”

Now that the mass surveillance programs have to some extent been
revealed, can we see some verifiable examples of their worth? If not,
will NSA turn back towards strengthening security instead of
undermining it?

The terrorist attacks of 9/11 encouraged our govt. leaders to adopt
aggressive surveillance laws and regulations and demands from the
intelligence communities. How do we reverse these policies adopted
under duress?

Subscribe to our Seminar email list for updates and summaries.

TEDMission Blue chronicles the life, loves and calling of ocean champion Sylvia Earle

[Embedded video: http://www.youtube.com/watch?v=B1wp2MQCsfQ]

Sylvia Earle is a fearless 78-year-old woman. In the new documentary Mission Blue, we watch her dive with sharks in the deep blue sea and fearlessly dodge fishing nets as she swims through the middle of a major fishing operation. The film offers a bold new view of the famed oceanographer whose relentless pursuit of saving the ocean takes her from the mythical expanse of Australia’s Great Barrier Reef to the swirling schools of the Chesapeake Bay menhaden fishery to the bustling fish markets of Tokyo.

Mission Blue is now available on Netflix. Watch it here »

Directed by Fisher Stevens (The Cove) and Bob Nixon (Gorillas in the Mist), Mission Blue serves up visually stunning underwater footage. But beyond that, it weaves an inspiring storyline that focuses on Earle herself. Mission Blue could have easily been a documentary about the devastation of the ocean, but Stevens and Nixon felt wary of making a film full of scientific data and talking heads for an audience not already enthusiastic about ocean conservation — and likely feeling compassion fatigue. So Stevens and Nixon tossed out their first script, which focused on Earle’s concept of “hope spots,” underwater areas so critical to the health of the ocean that they need to be protected by law. Instead, the filmmakers turned their focus on the legendary eco-activist herself.

With icon status as a National Geographic Explorer-in-Residence, Earle was a female biologist in “a time of bearded scientists,” one whose ongoing efforts to save the ocean have been recognized by presidents Bill Clinton, George W. Bush and Barack Obama. Meanwhile, her sweeping romantic history offers human interest. “That’s what makes the movie work, at least for me; you get emotionally attached to Sylvia and you see the ocean through her eyes,” says Stevens, who met Earle on a TED-led expedition to the Galapagos Islands after she was awarded the TED Prize in 2009. “I spent one week with her and I was hooked. After the Galapagos trip, I really didn’t want to leave Sylvia’s world, so I didn’t.”

The film opens with Earle and a team of scientists surveying whale sharks off the coast of Louisiana, about 60 miles from the site of the 2010 Gulf of Mexico oil spill, the worst in US history. In her scuba gear, which is like a second skin for her, Earle plunges into the water and swims alongside these majestic creatures, which can be up to 40 feet in length. She makes no secret of her love of sharks and dispels the widely accepted belief that we, as human beings, are on their lunch menu. “They’ve been living here for millions of years. We’re newcomers in their backyard,” she says. “I love being a part of their world. They’re completely innocent of anything humans do.” By innocent she means, for example, that they haven’t had a role in building the more than 33,000 oil drill sites in the Gulf, even though their habitats have been adversely affected by them.

With no problem being in the water for 12 hours at a time, Earle finds the ocean as much a comfort zone as land is for the rest of us, and it’s heartbreaking for her to have witnessed its decline. “Sixty years ago, when I began exploring the ocean, no one imagined that we could do anything to harm it,” she says. “But now we’re facing paradise lost.”

As Earle sees the narrative of the ocean, human beings are in some ways the bad guys. The film takes a hard look at how our global appetite for seafood has brought many species to the edge of extinction. A heart-stopping moment comes toward the end of the film, when Earle returns to a location 100 miles into the Coral Sea that she visited decades before and remembers for its vibrant array of ocean wildlife. On this dive, though, there are barely any fish, only coral reef ruins. It looks like a graveyard.

Mission Blue escorts us through Earle’s youth – which opened up when her family moved from New Jersey to Florida when she was 12 years old. Antique footage of Earle swimming in the Gulf as a young woman and as an up-and-coming scientist diving on early expeditions gives a nostalgic twist, and reaffirms the sense that Earle is doing the work she was born to do. “As a kid, I had complete freedom. To spend all day out [in nature] just fooling around on my own,” she says. She became “entranced by the idea of submarines” after reading the book Half Mile Down by marine biologist William Beebe, who she regarded “as a soul mate.” She was also inspired by Jacques Cousteau. “His silent world made me want to see what he saw,” she says. “To meet fish swimming in something other than butter and lemon slices on a plate.”

Sylvia Earle shares her TED Prize wish in 2009. Photo: James Duncan Davidson

Throughout the film, Earle is both charming and a force to be reckoned with. On the road 300 days of the year, she spends her time campaigning, meeting with world leaders, guest lecturing and, of course, diving—she can’t help but make the rest of us feel like we’re slacking off. Self-effacing at her core, her giggly charisma is contagious, yet she can also be serious when she needs to be. In the film, there’s a snippet of an interview with Stephen Colbert, who teases her that the ocean “is deep and dark and full of sharks who want to eat us,” so why should he care about it? Earle’s response elicits a chilling reaction: “Think of the world without an ocean. You’ve got a planet a lot like Mars.”

Most admirable, perhaps, is Earle’s intolerance for bureaucratic faffing on environmental change that doesn’t lead to concrete action, and that can even conceal the severity of ocean degradation. She is not afraid to ruffle feathers if it means saving more gills. Her brief stint as the Chief Scientist at the National Oceanic and Atmospheric Administration (NOAA) proved too stifling for her—we see her as she resigns with dignity, preferring to venture out on her own, with the freedom to speak her mind rather than maintain silence on matters close to her heart.

“The thing that’s impressive to me about Sylvia is that she is not afraid to point fingers, and say ‘you know what you’re doing and it’s wrong,’” says Jeremy Jackson of the Smithsonian Institution, one ocean activist interviewed in the film. Director James Cameron (whose journey 36,000 feet down is chronicled in his new documentary Deepsea Challenge 3D) meanwhile describes Earle as “the Joan of Arc of the ocean.”

Getting media coverage has never been difficult for Earle, but the coverage was somewhat sexist in the earlier part of her career. Take, for example, the headline “Sylvia Sails Away with 70 Men (But She Expects No Problems)” after her first scientific expedition to the Indian Ocean. As much as the media played up the fact that she was a woman in a man’s world, when it came to her love life, Earle says: “I wasn’t interested in anybody who wasn’t interested in what I was interested in. I was attracted to the nerdy types who loved talking about stars and space, or about diving.” This probably explains how she met her second husband at what she calls “a scientific meeting about fish.” A true feminist, Earle juggled being a mother, a wife and a scientist, among other things, but no matter the obstacles she faced soaring through the proverbial glass ceiling, Earle never saw herself as a victim. “That’s life!” she says with her signature infectious optimism.

In the film, Earle takes on criticism that she is a radical. Most people, she reminds us, haven’t had the opportunity to spend thousands of hours underwater like she has, and to see the kind of destruction to the ocean that she’s seen over the course of her lifetime. With fifty percent less coral in the ocean now than there was in 1950, Earle says simply, “the ocean is dying,” with a deep sadness. While only 5% of the ocean has been seen, let alone mapped and explored, “if we continue business as usual, we’re in real trouble,” she says.

Yet Mission Blue is not all doom and gloom, as Earle is hopeful: “We still have a chance to fix things,” she says, noting that it’s a matter of getting everyone committed to the cause of ocean conservation. Viewers come away with a bigger, more pronounced understanding of the catastrophic impact of human behavior on the world’s oceans—and what we can do to start changing it.

In the end, what are the takeaways from Mission Blue? That following your passion makes for an exciting life; that sometimes you have to go against the grain to make a difference. And finally, that we’re all interconnected on this planet, which means that we all need to be mindful of the consequences of our choices.

Sylvia Earle amidst jellyfish. Photo: Courtesy of Mission Blue

This piece originally ran on the TED Prize Blog. Head there to read more about our annual award to a dreamer with a wish for the world »


TEDHow to take care of pet jellyfish

Meet the TED jellyfish: Jellius Caesar, Jelvis Presley and Sting. Photo: Dian Lofton

Last year, we in the TED office successfully raised an ant farm. So this year, we thought we were ready for a more ambitious project—taking care of three pet jellyfish. They may not look like it, but jellyfish can be real divas. Still, what doesn’t kill you makes you stronger, and the jellies (and the snails that live with them) are now a part of the TED family.

Allow us to take you on a journey into the wonderful world of jellyfish care …

1. Get a round tank with an air pump. Jellyfish have to have circular tanks because they can get stuck in the corners of rectangular tanks. (They don’t have brains, and thus they aren’t very smart.) We highly recommend the tanks from JellyfishArt.com or PetJellyfish.co.uk. They look cool, and come with most of what you need to get started.

2. Set up the tank to mimic the ocean’s salinity. Definitely follow the preparation instructions recommended by your tank retailer. But here is what we did to get our tank ready: We washed the substrate (that’s a fancy word for the little round balls of gravel you put in the tank), collected six gallons of distilled water, added marine salt to the water (you want the salinity to be like seawater, between 30 and 34 ppt), hooked up the tank and let it bubble for 24 hours.
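
If you'd rather calculate than eyeball it, a handy rule of thumb is that one gram of marine salt per litre of fresh water raises salinity by roughly 1 ppt. Here's a quick sketch of that arithmetic (a hypothetical helper of our own, not from any tank vendor; trust your hydrometer over the math):

GALLON_IN_LITRES = 3.785

def marine_salt_grams(gallons, target_ppt=32):
    # Roughly 1 g of marine salt per litre adds about 1 ppt of salinity,
    # so grams ~= litres * target ppt.
    return gallons * GALLON_IN_LITRES * target_ppt

# Six gallons mixed to 32 ppt calls for on the order of 700 g of salt.
print(round(marine_salt_grams(6)))  # about 727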

Some of the supplies you’ll need to get your jellyfish tank ready for business. Photo: Asia Lindsay

3. Add the jellies! Because we are TED, we got our jellies from the internet. We opted for a kit from Jellyfish Art, the company that provided the tank—and who sent the jellies overnight by FedEx once everything was set up for them. Jellyfish are delicate creatures and are very sensitive to water changes, so you have to acclimate them gently to their new tank. The jellyfish arrive in a plastic bag, so just put the sealed bag into the tank to let the jellies get used to the temperature. Then add a cup of the tank’s water into the plastic bag. Continue to do this every 10 minutes or so for a couple of hours. The jellyfish will probably stop moving at this stage—and it will be absolutely terrifying. But fear not. They probably aren’t dead; they’re just adjusting. Once you release the jellyfish into the tank, they will continue to float around listlessly as their little world has been turned upside down. They should perk up in a couple of hours, when they get used to the new environment.

4. Feed your jellyfish. Jellyfish eat special food made from planktonic eggs, which are high in HUFAs (highly unsaturated fatty acids). You can buy this food separately, or get it as part of a kit like we did. Either way, you’ll have to feed your little darlings daily, but it is so much fun you could hardly count it as a chore. As I mentioned earlier, jellyfish aren’t terribly bright. So the best way to feed them is to inject the food directly into their stomachs with a turkey baster. My role at TED is nothing if not varied.

This is how you feed pet jellyfish. Demonstrated by me, Asia Lindsay—in my jellyfish dress. Photo: Dian Lofton

5. Watch the little guys very carefully for seven days. The first week is the hardest for the jellies. So do a water change two days in—remove about a quarter of the water at a time and then replace it with fresh salt water mix—and then another one five days after that. Frequent water changes help keep the jellies healthy—most of the illness afflicting jellyfish can be traced back to poor water quality. Now you’re in the home stretch of jellyfish parenthood! After this, you’ll only have to do a water change once a week. We do it on Thursdays, for some reason.

6. Name your jellyfish. Now that you’ve got the tank nicely set up and the feeding schedule down, it’s time to name your pets! (Sure, you could do it earlier—but the other steps seemed just a little more important, so I favored them earlier in this list.) At TED, we are a democracy so we put it to the staff and let people suggest names on a huge piece of paper on our staff refrigerator. We had some pun-tastic suggestions: Jelly Furtado, Dame Jellen Mirren, Jel-Z and Jellyoncé, Jelonius Monk, Anjellina Jellie, PB n Jelly, Duke Jellington. You get the picture. In the end, Jellius Caesar, Jelvis Presley and Sting emerged victorious.

We asked the office to suggest names for our jellyfish. Photo: Thaniya Keereepart

7. Watch out for overfeeding and a cloudy tank. Once your jellyfish have made it through their first week, these are the two biggest dangers. We’ve tried a few methods to get the water clean, but in the end we added a bunch of crabs and snails who essentially work as the jellyfish tank’s garbage disposal, eating up the food that would otherwise sink to the bottom and form dreaded jellyfish tank gunk. Said gunk happens when you overfeed the jellies, so if you see it, remember less is more!

We’ve had the jellies for a couple of months now, and watching them pulse about in their little disco tank is now a frequent procrastination method in the TED office. We’ve had a few dramas (involving more than one lunchtime dash to the pet shop to buy water cleaning supplies, and the sad death of two additional jellies we purchased to keep our originals company—did you know that jellyfish simply disappear into thin air when they die?) but the project overall has been deeply fulfilling. Having jellyfish is a rollercoaster of emotions—but we wouldn’t have it any other way.

If you have any unusual pets, tell us below—we’d love to hear what animals TEDsters keep in their homes. In the meantime, if you want to follow the adventures of our jellyfish, check out the hashtag #TEDjellies.


Planet DebianJulien Danjou: Tracking OpenStack contributions in GitHub

I've switched my Git repositories to GitHub recently, and started to watch my contribution statistics, which were very low considering I spend my days hacking on open source software, especially OpenStack.

OpenStack hosts its Git repositories on its own infrastructure at git.openstack.org, but also mirrors them on GitHub. Logically, I was expecting GitHub to track my commits there too, as I'm using the same email address everywhere.

It turns out that this was not the case, and the help page about that on GitHub describes the rules in place to compute statistics. Indeed, according to GitHub, I had no relationship with the OpenStack repositories, as I had never forked them nor opened a pull request against them (OpenStack uses Gerrit).

Starring a repository is enough to build a relationship between a user and a repository, so this was the only thing needed to inform GitHub that I have contributed to those repositories. Considering OpenStack has hundreds of repositories, I decided to star them all with a small Python script using pygithub, sketched below.
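
The post doesn't include the script itself, but a minimal version might look something like this (a sketch using pygithub; the token placeholder and the list of OpenStack organisations are illustrative):

from github import Github

gh = Github("YOUR_GITHUB_TOKEN")
me = gh.get_user()

# OpenStack mirrors its repositories under several GitHub organisations.
for org_name in ("openstack", "openstack-dev", "openstack-infra"):
    for repo in gh.get_organization(org_name).get_repos():
        me.add_to_starred(repo)  # starring creates the user/repo relationship
        print("Starred %s" % repo.full_name)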

And voilà, my statistics are now including all my contributions to OpenStack!

Geek FeminismThe Linkspammer’s Guide to the Galaxy (19 August 2014)

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet Linux AustraliaRusty Russell: POLLOUT doesn’t mean write(2) won’t block: Part II

My previous discovery that poll() indicating an fd was writable didn’t mean write() wouldn’t block led to some interesting discussion on Google+.

It became clear that there is much confusion over read and write; e.g. Linus thought read() was like write(), whereas I thought (prior to my last post) that write() was like read(). Both wrong…

Both Linux and v6 UNIX always returned from read() once data was available (v6 didn’t have sockets, but they had pipes). POSIX even suggests this:

The value returned may be less than nbyte if the number of bytes left in the file is less than nbyte, if the read() request was interrupted by a signal, or if the file is a pipe or FIFO or special file and has fewer than nbyte bytes immediately available for reading.

But write() is different. Presumably so simple UNIX filters didn’t have to check the return and loop (they’d just die with EPIPE anyway), write() tries hard to write all the data before returning. And that leads to a simple rule.  Quoting Linus:

Sure, you can try to play games by knowing socket buffer sizes and look at pending buffers with SIOCOUTQ etc, and say “ok, I can probably do a write of size X without blocking” even on a blocking file descriptor, but it’s hacky, fragile and wrong.

I’m travelling, so I built an Ubuntu-compatible kernel with a printk() into select() and poll() to see who else was making this mistake on my laptop:

cups-browsed: (1262): fd 5 poll() for write without nonblock
cups-browsed: (1262): fd 6 poll() for write without nonblock
Xorg: (1377): fd 1 select() for write without nonblock
Xorg: (1377): fd 3 select() for write without nonblock
Xorg: (1377): fd 11 select() for write without nonblock

This first one is actually OK; fd 5 is an eventfd (which should never block). But the rest seem to be sockets, and thus probably bugs.

What’s worse is the Linux select() man page:

       A file descriptor is considered ready if it is possible to
       perform the corresponding I/O operation (e.g., read(2)) without
       blocking.
       ... those in writefds will be watched to see if a write will
       not block...

And poll():

	POLLOUT
		Writing now will not block.

Man page patches have been submitted…
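
In the meantime, the robust userspace pattern is to mark the fd non-blocking before trusting POLLOUT, and to cope with partial writes. A quick sketch in Python (the helper names are mine; the same logic applies in C):

import errno
import fcntl
import os
import select

def set_nonblocking(fd):
    # POLLOUT only promises that some progress is possible; without
    # O_NONBLOCK a large write() may still block.
    flags = fcntl.fcntl(fd, fcntl.F_GETFL)
    fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)

def send_all(sock, data):
    set_nonblocking(sock.fileno())
    poller = select.poll()
    poller.register(sock.fileno(), select.POLLOUT)
    view = memoryview(data)
    while view:
        poller.poll()               # wait until the socket looks writable
        try:
            sent = sock.send(view)  # may be a partial write
        except OSError as e:
            if e.errno in (errno.EAGAIN, errno.EWOULDBLOCK):
                continue            # spurious wakeup: poll again
            raise
        view = view[sent:]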

Cory DoctorowNeal Stephenson and Cory speaking at Seattle’s Town Hall, Oct 26


We're getting together to talk about Hieroglyph: Stories and Visions for a Better Future, a project that Stephenson kicked off -- I've got a story in it called "The Man Who Sold the Moon."

The project's mission is to promote "Asimovian robots, Heinleinian rocket ships, Gibsonian cyberspace… plausible, thought-out pictures of alternate realities in which... compelling innovation has taken place." Tickets are $5.


Neal Stephenson and Cory Doctorow: Reigniting Society’s Ambition with Science Fiction

Sociological ImagesOKCupid Experiments on Its Users, Makes Us Hate Ourselves

In the aftermath of the revelation that Facebook has been manipulating our emotions – the one that prompted Jenny Davis to write a post titled Newsflash: Facebook Has Always Been Manipulating Your Emotions – the folks at OkCupid admitted that they’d been doing it, too.

I’ll let you debate the ethics. Here’s what Christian Rudder and his team found out about attractiveness. Let me warn you, it’s not pretty.

OkCupid originally gave users the opportunity to rate each other twice: once for personality and once for looks. The two were strikingly correlated.

Do better looking people have more fabulous personalities?  No. Here’s a hint: a woman with a personality rating in the 99th percentile whose profile contained no text at all.

Perhaps people were judging both looks and personality by looks alone. They ran a test: some people got to see a user’s profile picture and the text, while others saw just the picture. The ratings correlated, which means, as Rudder put it: “Your picture is worth that fabled thousand words, but your actual words are worth… almost nothing.”

Their second “experiment” involved removing all of the pictures from the site for one full workday.  In response, users said something to the effect of hell no.  Here’s a graph showing the traffic on that day (in red) compared to a normal Tuesday (the dotted line):

When they put the pictures back up, the conversations that had started petered out much more aggressively than usual. As Rudder put it:  “It was like we’d turned on the bright lights at the bar at midnight.”  This graph shows that conversations started during the blackout had a shorter life expectancy than conversations normally did.

It’s too bad that people are putting such an emphasis on looks, because other data that OkCupid collected suggests looks aren’t as important as we think they are. This figure shows the odds that a woman reported having a good time with someone she was set up with on a blind date. The odds are pretty even whether she and the guy are equally good looking, he’s much better looking, or she is. Rudder says that the results for men are similar.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet Linux AustraliaAndrew McDonnell: Unleashed GovHack – an Adelaide Adventure in Open Data

Last month I attended Unleashed GovHack, our local contribution to the Australian GovHack hackathon. Essentially, GovHack is a chance for makers, hackers, designers, artists, and researchers to team up with government ‘data custodians’ and build proof of concept applications (web or mobile), software tools, video productions or presentations (data journalism) in a way that […]

Planet Linux AustraliaLev Lafayette: A Source Installation of gzip

GNU zip is a compression utility free from patented algorithms. Software patents are stupid, and patented compression algorithms are especially stupid.

CryptogramThe Security of al Qaeda Encryption Software

The web intelligence firm Recorded Future has posted two stories about how al Qaeda is using new encryption software in response to the Snowden disclosures. NPR picked up the story a week later.

Former NSA Chief Counsel Stewart Baker uses this as evidence that Snowden has harmed America. Glenn Greenwald calls this "CIA talking points" and shows that al Qaeda was using encryption well before Snowden. Both quote me heavily, Baker casting me as somehow disingenuous on this topic.

Baker is conflating my stating of two cryptography truisms. The first is that cryptography is hard, and you're much better off using well-tested public algorithms than trying to roll your own. The second is that cryptographic implementation is hard, and you're much better off using well-tested open-source encryption software than you are trying to roll your own. Admittedly, they're very similar, and sometimes I'm not as precise as I should be when talking to reporters.

This is what I wrote in May:

I think this will help US intelligence efforts. Cryptography is hard, and the odds that a home-brew encryption product is better than a well-studied open-source tool are slight. Last fall, Matt Blaze said to me that he thought that the Snowden documents will usher in a new dark age of cryptography, as people abandon good algorithms and software for snake oil of their own devising. My guess is that this is an example of that.

Note the phrase "good algorithms and software." My intention was to invoke both truisms in the same sentence. That paragraph is true if al Qaeda is rolling their own encryption algorithms, as Recorded Future reported in May. And it remains true if al Qaeda is using algorithms like my own Twofish and rolling their own software, as Recorded Future reported earlier this month. Everything we know about how the NSA breaks cryptography is that they attack the implementations far more successfully than the algorithms.

My guess is that in this case they don't even bother with the encryption software; they just attack the users' computers. There's nothing that screams "hack me" more than using specially designed al Qaeda encryption software. There's probably a QUANTUMINSERT attack and FOXACID exploit already set on automatic fire.

I don't want to get into an argument about whether al Qaeda is altering its security in response to the Snowden documents. Its members would be idiots if they did not, but it's also clear that they were designing their own cryptographic software long before Snowden. My guess is that the smart ones are using public tools like OTR and PGP and the paranoid dumb ones are using their own stuff, and that the split was the same both pre- and post-Snowden.

Worse Than FailureInheritance

In life, you will inherit all sorts of things: traits from your direct ancestors, knick-knacks from relatives you tolerated, and sometimes, even money! Of course, there are other things in life that you inherit that you might not even want. The gene for some debilitating disease. The urn filled with the ashes of a relative you particularly despised. Code.

Gerhardt was employed at a C++ shop. Their main product used a third party library. Perhaps used is not quite right; abused is more apt. Every single field that was public (whether it looked like it should be public or not) was ab/used to the max.

At some point, the vendor upgraded the library, and much to the chagrin of all involved, lots of those formerly public variables and methods were now protected. Some of you will say that perhaps they should just change their code to use the library as the vendor intended. Real programmers™ with any real experience in OO languages will immediately think "OK, we can just wrap the protected stuff with our own classes and continue to access the formerly public stuff as before."

class SecretiveLegator {
  ...
  protected:
      TopSecretType   topSecretMember;
};

// Derive solely to re-expose the now-protected member.
class LegacyHunter : public SecretiveLegator {
  public:
      void setTopSecretMember(const TopSecretType &value) {
          topSecretMember = value;
      }

      const TopSecretType & getTopSecretMember() const {
          return topSecretMember;
      }
};

Now, with an available work-around worthy of Wile E. Coyote, the forbidden fruit was once again hanging within reach of all who wanted it:

static_cast<LegacyHunter &> (secretiveLegator).setTopSecretMember(newValue);

Of course, if the vendor ever demoted those now-protected fields and methods to private...

Gerhardt inherited another piece of nifty engineering: the getOrSet method. Basically, it allowed you to either get the value of a variable, or set the value of that variable at any given time. What a marvel, you only needed one method! Of course, if you had never seen it and casually came upon it, you had one of those moments when you looked at something, had no clue what it was or why it existed, and mumbled those three magic little words under your breath. Then you look at the source code and wish you hadn't:

public:
    void getOrSet(bool getOrSet, TheType &x) {
        if (getOrSet) {
            this->x = x;   // "set": copy the argument into the member
        } else {
            x = this->x;   // "get": copy the member out through the reference
        }
    }

Inheritance is usually a static thing. You inherit a trait from mommy or daddy, not some random stranger. Gerhardt's company employed the truly innovative pattern of dynamic inheritance. This is not dynamic casting, mind you, no! There was a BaseClass that consisted of a giant union of all of their structs, allowing code to read and write the underlying data by means of any one of those structs.

Each of the union's member structs represented one or more (possibly nested) derivations from BaseClass. BaseClass declared a list of virtual and pure-virtual methods, and an array of these objects was conjured up like so:

  BaseClass *elements = (BaseClass *) new char[nElements * sizeof(BaseClass)]; // raw bytes; no constructor ever runs

To initialize the elements array, the proper union struct was used to write the appropriate kind of data to the element. Then the powdered unicorn dust was applied. They had several global variables, one for each subclass of BaseClass. The pointer in the array that represented a given sub-class was initialized like this:

  *(void **) p = *(void **) &globalInstanceOfTheSubclassOfBaseClass;

For those of you not up on C++ mojo, that copies the virtual function table pointer from the sample subclass to the start of the memory that they wanted to represent the subclass instance. By overwriting a given pointer with the value from a different subclass, you could completely change all the virtual functions that would be invoked for that particular instance. Calling a virtual method would run the code of whichever subclass the pointer had last been set to.
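
For comparison, here is a rough analogue in Python (not the C++ code itself): reassigning an object's __class__ redirects all subsequent method dispatch, much as overwriting the vtable pointer did:

class Base:
    def describe(self):
        return "base"

class SubA(Base):
    def describe(self):
        return "sub A"

class SubB(Base):
    def describe(self):
        return "sub B"

obj = SubA()
print(obj.describe())  # "sub A"
obj.__class__ = SubB   # swap the dispatch, as the vtable overwrite did
print(obj.describe())  # "sub B"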

You may ask, how was this creature used? The GUI had the ability to specify the type of object at any given node in their system. Once the pointer for the object was overwritten, the virtual function table was effectively patched and the resultant code would then behave as though the data in the union/struct was that of the selected subclass.

Of course, virtual function tables aren't the only pointers that can be abused. Gerhardt found where they were using member-pointers (as quasi enums) to tell the method (not what it had to do, but) who called it. The method would then deduce what it was supposed to do based upon who was invoking it.

This was particularly fascinating when inline functions were on a call stack that spanned different DLLs (where the address of the function was different in each and every one).

After a long period of the most bewildering crashes, he stripped out all of the function pointers, and switched the entire mechanism to an ordinary enum.

[Advertisement] Have you seen BuildMaster 4.3 yet? Lots of new features to make continuous delivery even easier; deploy builds from TeamCity (and other CI) to your own servers, the cloud, and more.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main September 2014 Meeting: AGM + lightning talks

Sep 2 2014 19:00
Sep 2 2014 21:00
Location: 

The Buzzard Lecture Theatre. Evan Burge Building, Trinity College, Melbourne University Main Campus, Parkville.

AGM + lightning talks

Notice of LUV Annual General Meeting, 2nd September 2014, 19:00.

Linux Users of Victoria, Inc., registration number
A0040056C, will be holding its Annual General Meeting at
7pm on Tuesday, 2nd September 2014, in the Buzzard
Lecture Theatre, Trinity College.

The AGM will be held in conjunction with our usual
September Main Meeting. As is customary, after the AGM
business we will have a series of lightning talks by
members on a recent Linux experience or project.

The Buzzard Lecture Theatre, Evan Burge Building, Trinity College Main Campus, Parkville. Melways Map: 2B C5

Notes: Trinity College's Main Campus is located off Royal Parade. The Evan Burge Building is located near the Tennis Courts. See our Map of Trinity College. Additional maps of Trinity and the surrounding area (including its relation to the city) can be found at http://www.trinity.unimelb.edu.au/about/location/map

Parking can be found along or near Royal Parade, Grattan Street, Swanston Street and College Crescent. Parking within Trinity College is unfortunately only available to staff.

For those coming via Public Transport, the number 19 tram (North Coburg - City) passes by the main entrance of Trinity College (Get off at Morrah St, Stop 12). This tram departs from the Elizabeth Street tram terminus (Flinders Street end) and goes past Melbourne Central. Timetables can be found on-line at:

http://www.metlinkmelbourne.com.au/route/view/725

Before and/or after each meeting those who are interested are welcome to join other members for dinner. We are open to suggestions for a good place to eat near our venue. Maria's on Peel Street in North Melbourne is currently the most popular place to eat after meetings.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc. is an incorporated association, registration number A0040056C.



Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: slots

If I had to guess what would be a controversial topic from the mid-cycle meetup, it would have to be this slots proposal. I was actually in a Technical Committee meeting when this proposal was first made, but I'm told there were plenty of people in the room keen to give this idea a try. Since the mid-cycle, Joe Gordon has written up a more formal proposal, which can be found at https://review.openstack.org/#/c/112733.

If you look at the last few Nova releases, core reviewers have been drowning under code reviews, so we need to control the review workload. What is currently happening is that everyone throws up their thing into Gerrit, and then each core tries to identify the important things and review them. There is a list of prioritized blueprints in Launchpad, but it is not used much as a way of determining what to review. The result of this is that there are hundreds of reviews outstanding for Nova (500 when I wrote this post). Many of these will get a review, but it is hard for authors to get two cores to pay attention to a review long enough for it to be approved and merged.

If we could rate limit the number of proposed reviews in Gerrit, then cores would be able to focus their attention on the smaller number of outstanding reviews, and land more code. Because each review would merge faster, we believe this rate limiting would help us land more code rather than less, as our workload would be better managed. You could argue that this will mean we just say 'no' more often, but that's not the intent, it's more about bringing focus to what we're reviewing, so that we can get patches through the process completely. There's nothing more frustrating to a code author than having one +2 on their code and then hitting some merge freeze deadline.

The proposal is therefore to designate a number of blueprints that can be under review at any one time. The initial proposal was for ten, and the term 'slot' was coined to describe the available review capacity. If your blueprint was not allocated a slot, then it would either not be proposed in Gerrit yet, or if it was it would have a procedural -2 on it (much like code reviews associated with unapproved specifications do now).

The number of slots is arbitrary at this point. Ten is our best guess of how much we can dilute cores' focus without losing efficiency. We would tweak the number as we gained experience if we went ahead with this proposal. Remember, too, that a slot isn't always a single code review. If the VMware refactor was in a slot, for example, we might find that there were also ten code reviews associated with that single slot.

How do you determine what occupies a review slot? The proposal is to groom the list of approved specifications more carefully. We would collaboratively produce a ranked list of blueprints in the order of their importance to Nova and OpenStack overall. As slots become available, the next highest ranked blueprint with code ready for review would be moved into one of the review slots. A blueprint would be considered 'ready for review' once the specification is merged, and the code is complete and ready for intensive code review.

What happens if code is in a slot and something goes wrong? Imagine if a proposer goes on vacation and stops responding to review comments. If that happened, we would bump the code out of the slot, but would put it back on the backlog in the location dictated by its priority. In other words, there is no penalty for being bumped; you just need to wait for a slot to reappear when you're available again.
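
To make the mechanics concrete, here is a hypothetical Python sketch of the slot bookkeeping as described (names and numbers are illustrative, not an actual Nova implementation):

from collections import namedtuple

Blueprint = namedtuple("Blueprint", ["name", "priority"])

SLOTS = 10      # the initially proposed number of review slots
backlog = []    # ranked blueprints waiting for a slot (highest priority first)
in_review = []  # blueprints currently occupying a slot

def fill_slots():
    backlog.sort(key=lambda bp: bp.priority, reverse=True)
    while len(in_review) < SLOTS and backlog:
        in_review.append(backlog.pop(0))

def bump(blueprint):
    # Proposer went quiet: free the slot, but requeue at the old priority,
    # so there is no penalty beyond waiting for a slot to reappear.
    in_review.remove(blueprint)
    backlog.append(blueprint)
    fill_slots()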

We also talked about whether we were requiring specifications for changes which are too simple. If something is relatively uncontroversial and simple (a better tag for internationalization for example), but not a bug, it falls through the cracks of our process at the moment and ends up needing to have a specification written. There was talk of finding another way to track this work. I'm not sure I agree with this part, because a trivial specification is a relatively cheap thing to do. However, it's something I'm happy to talk about.

We also know that Nova needs to spend more time paying down its accrued technical debt, which you can see in the huge number of bugs we have outstanding at the moment. There is no shortage of people willing to write code for Nova, but there is a shortage of people fixing bugs and working on strategic things instead of new features. If we could reserve slots for technical debt, it would help us get people to work on those aspects, because they wouldn't spend time on a less interesting problem and then discover they can't even get their code reviewed. We even talked about having an alternating focus for Nova releases: a release focused on paying down technical debt and stability, and then the next release focused on new features. The Linux kernel does something quite similar to this and it seems to work well for them.

Using slots would allow us to land more valuable code faster. Of course, it also means that some patches will get dropped on the floor, but if the system is working properly, those features will be ones that aren't important to OpenStack. Considering that right now we're not landing many features at all, this would be an improvement.

This proposal is obviously complicated, and everyone will have an opinion. We haven't fully thought through all the mechanics yet, and it's certainly not a done deal at this point. The ranking process seems to be the most contentious point. We could encourage the community to help us rank things by priority, but it's not clear how that process would work. Regardless, I feel like we need to be more systematic about what code we're trying to land. It's embarrassing how little has landed in Juno for Nova, and we need to be working on that. I would like to continue discussing this as a community to make sure that we end up with something that works well and that everyone is happy with.

This series is nearly done, but in the next post I'll cover the current status of the nova-network to neutron upgrade path.

Tags for this post: openstack juno nova mid-cycle summary review slots blueprint priority project management
Related posts: Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: containers


,

Planet DebianJelmer Vernooij: Using Propellor for configuration management

For a while, I've been wanting to set up configuration management for my home network. With half a dozen servers, a VPS and a workstation it is not big, but large enough to make it annoying to manually log into each machine for network-wide changes.

Most of the servers I have are low-end ARM machines, each responsible for a couple of tasks. Most of my machines run Debian or something derived from Debian. Oh, and I'm a member of the declarative school of configuration management.

Propellor

Propellor caught my eye earlier this year. Unlike some other configuration management tools, it doesn't come with its own custom language but it is written in Haskell, which I am already familiar with. It's also fairly simple, declarative, and seems to do most of the handful of things that I need.

Propellor is essentially a Haskell application that you customize for your site. It works much like e.g. xmonad: you write a bit of Haskell configuration code that uses the upstream library code, and when you run the application it builds a binary from your code and the upstream libraries.

Each host on which Propellor is used keeps a clone of the site-local Propellor git repository in /usr/local/propellor. Every time propellor runs (either because of a manual "spin", or from a cronjob it can set up for you), it fetches updates from the main site-local git repository, compiles the Haskell application and runs it.

Setup

Propellor was surprisingly easy to set up. Running propellor creates a clone of the upstream repository under ~/.propellor with a README file and some example configuration. I copied config-simple.hs to config.hs, updated it to reflect one of my hosts and within a few minutes I had a basic working propellor setup.

You can use ./propellor <host> to trigger a run on a remote host.

At the moment I have propellor working for some basic things - having certain Debian packages installed, a specific network configuration, mail setup, basic Kerberos configuration and certain SSH options set. This took surprisingly little time to set up, and it's been great being able to take full advantage of Haskell.

Propellor comes with convenience functions for dealing with some commonly used packages, such as Apt, SSH and Postfix. For a lot of the other packages, you'll have to roll your own for now. I've written some extra code to make Propellor deal with Kerberos keytabs and Dovecot, which I hope to submit upstream.

I don't have a lot of experience with other Free Software configuration management tools such as Puppet and Chef, but for my use case Propellor works very well.

The main disadvantage of propellor for me so far is that it needs to build itself on each machine it runs on. This is fine for my workstation and high-end servers, but it is somewhat more problematic on e.g. my Raspberry Pis. Compilation takes a while, and the Haskell compiler and libraries it needs amount to 500MB worth of disk space on the tiny root partition.

In order to work with Propellor, some Haskell knowledge is required. The Haskell in the configuration file is reasonably easy to understand if you keep it simple, but once the compiler spits out error messages, I suspect you'll have a hard time without any Haskell knowledge.

Propellor relies on having a central repository with the configuration that it can pull from as root. Unlike Joey, I am wary of publishing the configuration of my home network, and I don't have a highly available local git server set up.

Planet DebianDaniel Pocock: Is WebRTC private?

With the exciting developments at rtc.debian.org, many people are starting to look more closely at browser-based real-time communications.

Some have dared to ask: does it solve the privacy problems of existing solutions?

Privacy is a relative term

Perfect privacy and its technical manifestations are hard to define. I had a go at it in a blog post on the Gold Standard for free communications technology on 5 June 2013. By pure coincidence, a few hours later, the first Snowden leaks appeared and this particular human right was suddenly thrust into the spotlight.

WebRTC and ICE privacy risk

WebRTC does not give you perfect privacy.

At least one astute observer at my session at Paris mini-DebConf 2014 questioned the privacy of Interactive Connectivity Establishment (ICE, RFC 5245).

In its most basic form, ICE scans all the local IP addresses on your machine and NAT gateway and sends them to the person calling you so that their phone can find the optimal path to contact you. This clearly has privacy implications as a caller can work out which ISP you are connected to and some rough details of your network topology at any given moment in time.

What WebRTC does bring to the table

Some of this can be mitigated, though: an ICE implementation can be tuned so that it only advertises the IP address of a dedicated relay host. If you can afford a little latency, your privacy is safe again. This privacy-protecting step could be taken by a browser vendor such as Mozilla, or it can be done in JavaScript by a softphone such as JSCommunicator.
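
In sketch form, that tuning amounts to filtering which ICE candidates get advertised; the candidate structure below is simplified and hypothetical:

# Advertise only relay candidates, so the caller learns the relay's address
# rather than your host and NAT gateway addresses.
def candidates_to_advertise(candidates, relay_only=True):
    if relay_only:
        return [c for c in candidates if c["type"] == "relay"]
    return candidates

candidates = [
    {"type": "host", "address": "192.168.1.10"},    # local LAN address
    {"type": "srflx", "address": "203.0.113.7"},    # address learned via STUN
    {"type": "relay", "address": "198.51.100.99"},  # dedicated relay host
]
print(candidates_to_advertise(candidates))          # only the relay is exposed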

Many individuals are now using a proprietary softphone to talk to family and friends around the world. The softphone in question has virus-like properties, siphoning away your private information. This proprietary softphone is also an insidious threat to open source and free operating systems on the desktop. WebRTC is a positive step back from the brink. It gives people a choice.

WebRTC is a particularly relevant choice for business. Can you imagine going to a business and asking them to route all their email through Hotmail? When a business starts using a particular proprietary softphone, how is that any different? WebRTC offers a solution that is actually easier for the user and can be secured back to the business network using TLS.

WebRTC is based on open standards, particularly HTML5. Leading implementations, such as the SIP over WebSocket support in reSIProcate, JSCommunicator and the DruCall module for Drupal are fully open source. Not only is it great to be free, it is possible to extend and customize any of these components.

What is missing

There are some things that are not quite there yet and require a serious effort from the browser vendors. At the top of the list for privacy:

  • ZRTP support - browsers currently support DTLS-SRTP, which is based on X.509. ZRTP is more like PGP, a democratic and distributed peer-to-peer privacy solution without needing to trust some central certificate authority.
  • TLS with PGP - the TLS protocol used to secure the WebSocket signalling channel is also based on X.509 with the risk of a central certificate authority. There is increasing chatter about the need for TLS to use PGP instead of X.509 and WebRTC would be a big winner if this were to eventuate and be combined with ZRTP.

You may think "I'll believe it when I see it". Each of these features, including WebRTC itself, is a piece of the puzzle and even solving one piece at a time brings people further out of danger from the proprietary mess the world lives with today.

To find out more about practical integration of WebRTC into free software solutions, consider coming to my talk at xTupleCon in October.

TEDTED Ideas in Business aims to shake up the same old thinking on professional development

TED Ideas in Business are playlists that bring together talks of interest to professional audiences. Here, the art for “Hidden Trends and Systems” and “Skillful Presentation.”

For many, the words “professional development” conjure up memories of sitting in a human resources office, watching a series of awkward training videos and then taking a mandatory quiz. The TED Distribution Team realized: it doesn’t need to be this way. Earlier this year, they started to think about how companies could use TED Talks to get people thinking about their professional lives.

The team is now rolling out TED Ideas in Business, a collection of 25+ playlists curated around big topics in the professional world, like effective leadership, career development, the future of work, and good decision-making. The playlists range from “The Psychology of Success” to “Democratizing Innovation” to “Invasion of the Cyber-Workers.” Each list contains talks that can help crystallize goals, start conversations and spark collaborations.

“TED Talks offer so many ideas that are great for a business audience,” says Janet Lee, our Content Distribution Editor. “The hope with this collection is that it’s not just useful for c-level executives, but for anyone who is looking to better themselves professionally.”

Yahoo! Japan is the first of TED’s partners to offer this programming format. The full collection of TED Ideas in Business playlists is available to readers of the business-oriented website Yahoo! Newsbiz.

“TED offers inspiration,” says Rui Nakamura, of Yahoo! Japan’s Media Service Division. “We’ve gotten comments on the playlists that say things like ‘encouraged,’ and even, ‘I decided to try something new.’”

Meanwhile, Wells Fargo handpicked 85 talks from the TED Ideas in Business collection to make available to employees through their in-house intranet.

“Since we launched our first 45 titles in mid-January, they have been viewed over 40,000 times. They are getting watched a lot,” says Vanessa Walsh, the Learning & Development Manager for the company. “Meeting each of our over 265,000 team members where they are in their development requires new approaches. Using video is a powerful, accessible tool. TED allows us to quickly bring in compelling, relevant videos.”

At Wells Fargo, employees watch talks on their own (an online tool recommends specific talks for employees based on their development goals), and in meetings too. Walsh has gotten feedback that Chimamanda Ngozi Adichie’s “The danger of a single story” sparked a great discussion about diversity at a meeting, and that watching Shawn Achor’s “The happy secret to better work” at an offsite session got a team laughing together. Walsh says, “Leaders are saying, ‘These videos inspired me to change the way I lead.’ It’s helping them to break the mold.”

At the same time, TED’s Distribution Team is working on one more effort to make sure that the ideas from TED Talks find their way to those who can use them in the business world. They recently launched a partnership with getAbstract to take talks, from both TED Ideas in Business and from the TED library at large, and distill them down into short takeaways that busy business leaders have time to read. “It’s a compact distillation of a talk,” says Lee. ”We work closely with them to make sure that each summary really captures the main ideas in a talk.”

TED Ideas in Business are not your typical business content, and this is evident even in the art that accompanies these playlists. The distribution team let loose when commissioning art, and our designer turned out fresh, brightly colored illustrations. “We took a really playful approach,” Lee says. “We realized that in businesses, people are used to seeing dry stock imagery. We wanted to make it fresher and more relatable. Something that a global audience can understand.”


CryptogramQUANTUM Technology Sold by Cyberweapons Arms Manufacturers

Last October, I broke the story about the NSA's top secret program to inject packets into the Internet backbone: QUANTUM. Specifically, I wrote about how QUANTUMINSERT injects packets into existing Internet connections to redirect a user to an NSA web server codenamed FOXACID to infect the user's computer. Since then, we've learned a lot more about how QUANTUM works, and general details of many other QUANTUM programs.

These techniques make use of the NSA's privileged position on the Internet backbone. It has TURMOIL computers directly monitoring the Internet infrastructure at providers in the US and around the world, and a system called TURBINE that allows it to perform real-time packet injection into the backbone. Still, there's nothing about QUANTUM that anyone else with similar access can't do. There's a hacker tool called AirPwn that basically performs a QUANTUMINSERT attack on computers on a wireless network.

A new report from Citizen Lab shows that cyberweapons arms manufacturers are selling this type of technology to governments around the world: the US DoD contractor CloudShield Technologies, Italy's Hacking Team, and Germany's and the UK's Gamma International. These programs intercept web connections to sites like Microsoft and Google -- YouTube is specially mentioned -- and inject malware into users' computers.

Turkmenistan paid a Swiss company, Dreamlab Technologies -- somehow related to the cyberweapons arms manufacturer Gamma International -- just under $1M for this capability. Dreamlab also installed the software in Oman. We don't know what other countries have this capability, but the companies here routinely sell hacking software to totalitarian countries around the world.

There's some more information in this Washington Post article, and this essay on the Intercept.

In talking about the NSA's capabilities, I have repeatedly said that today's secret NSA programs are tomorrow's PhD dissertations and the next day's hacker tools. This is exactly what we're seeing here. By developing these technologies instead of helping defend against them, the NSA -- and GCHQ and CSEC -- are contributing to the ongoing insecurity of the Internet.

Related: here is an open letter from Citizen Lab's Ron Deibert to Hacking Team about the nature of Citizen Lab's research and the misleading defense of Hacking Team's products.

Planet DebianJulien Danjou: OpenStack Ceilometer and the Gnocchi experiment

A little more than 2 years ago, the Ceilometer project was launched inside the OpenStack ecosystem. Its main objective was to measure OpenStack cloud platforms in order to provide data and mechanisms for functionalities such as billing, alarming or capacity planning.

In this article, I would like to relate what I've been doing with other Ceilometer developers over the last 5 months. I've reduced my direct involvement in Ceilometer itself to concentrate on solving one of its biggest issues at the source, and I think it's high time to take a break and talk about it.

Ceilometer early design

For the last few years, Ceilometer didn't change its core architecture. Without diving too deeply into all its parts, one of the early design decisions was to build the metering around a data structure we called samples. A sample is generated each time Ceilometer measures something. It is composed of a few fields, such as the resource id that is metered, the user and project ids owning that resource, the meter name, the measured value, a timestamp and a few free-form metadata. Each time Ceilometer measures something, one of its components (an agent, a pollster…) constructs and emits a sample headed for the storage component that we call the collector.
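
As a rough illustration (field names approximated from the description above, not Ceilometer's exact schema), a sample looks something like this:

sample = {
    "resource_id": "instance-0001",        # the resource being metered
    "user_id": "alice",
    "project_id": "demo",
    "meter_name": "cpu_util",
    "value": 42.0,
    "timestamp": "2014-08-20T10:00:00Z",
    "metadata": {"flavor": "m1.small"},    # free-form key/value pairs
}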

This collector is responsible for storing the samples into a database. The Ceilometer collector uses a pluggable storage system, meaning that you can pick any database system you prefer. Our original implementation has been based on MongoDB from the beginning, but we then added a SQL driver, and people contributed things such as HBase or DB2 support.

The REST API exposed by Ceilometer allows you to execute various read requests on this data store. It can return the list of resources that have been measured for a particular project, or compute statistics on metrics. Supporting such a wide range of possibilities with such a flexible data structure lets you do a lot of different things with Ceilometer, as you can query the data in almost any way you want.

The scalability issue

We soon started to encounter scalability issues in many of the read requests made via the REST API. A lot of the requests require the data storage to do full scans of all the stored samples. Indeed, the fact that the API allows you to filter on any field and also on the free-form metadata (meaning non-indexed key/value tuples) has a terrible cost in terms of performance (as pointed out before, the metadata are attached to each sample generated by Ceilometer and stored as-is). That basically means that the sample data structure is stored in most drivers in just one table or collection, in order to be able to scan it at once, and there's no good "perfect" sharding solution, making data storage scalability painful.

It turns out that the Ceilometer REST API is unable to handle most of the requests in a timely manner, as most operations are O(n) where n is the number of samples recorded (see big O notation if you're unfamiliar with it). That number of samples can grow very rapidly in an environment of thousands of metered nodes and with a data retention of several weeks. There are a few optimizations that make things smoother in general cases, fortunately, but as soon as you run specific queries, the API becomes barely usable.

During this last year, as the Ceilometer PTL, I experienced these issues first hand, since a lot of people were giving me exactly this kind of feedback. We started several blueprints to improve the situation, but it was soon clear to me that this was not going to be enough anyway.

Thinking outside the box

Unfortunately, the PTL job doesn't leave you enough time to work on the actual code, nor to play with anything new. I was handling most of the project bureaucracy and wasn't able to work on any good solution to tackle the issue at its root. Still, I had a few ideas that I wanted to try, and as soon as I stepped down from the PTL role, I stopped working on Ceilometer itself to try something new and to think a bit outside the box.

When one takes a look at what has been brought into Ceilometer recently, one can see that Ceilometer actually needs to handle 2 types of data: events and metrics.

Events are data generated when something happens: an instance starts, a volume is attached, or an HTTP request is sent to a REST API server. These are events that Ceilometer needs to collect and store. Most OpenStack components are able to send such events using the notification system built into oslo.messaging.

Metrics are data that Ceilometer needs to store but that are not necessarily tied to an event. Think about an instance's CPU usage, a router's network bandwidth usage, the number of images that Glance is storing for you, etc… These are not events, since nothing is happening. These are facts, states we need to meter.

Computing statistics for billing or capacity planning requires both of these data sources, but they should be distinct. Based on that assumption, and the fact that Ceilometer was getting support for storing events, I started to focus on getting the metric part right.

I had been a system administrator for a decade before jumping into OpenStack development, so I know a thing or two on how monitoring is done in this area, and what kind of technology operators rely on. I also know that there's still no silver bullet – this made it a good challenge.

The first thing that came to my mind was to use some kind of time-series database, and export its access via a REST API – as we do in all OpenStack services. This should cover the metric storage pretty well.

Cooking Gnocchi

A cloud of gnocchis!

At the end of April 2014, this led me to start a new project code-named Gnocchi. For the record, the name was picked after misreading the OpenStack Marconi project as OpenStack Macaroni so many times. At least one OpenStack project should have a "pasta" name, right?

The point of starting a new project rather than sending patches to Ceilometer was that, first, I had no clue whether I was going to build something that would be any better, and second, it let me iterate more rapidly without being strongly coupled to the release process.

The first prototype started around the following idea: what you want is to meter things. That means storing a list of (timestamp, value) tuples for each of them. I've named these things "entities", as no assumptions are made about what they are. An entity can represent the temperature in a room or the CPU usage of an instance. The service shouldn't care and should be agnostic in this regard.

One feature that we had discussed over several OpenStack summits in the Ceilometer sessions was the idea of doing aggregation: aggregating samples over a period of time so as to store only a smaller number of them. Time-series formats such as RRDtool's have been doing this on the fly for a long time, and I decided it was a good trail to follow.

I assumed that this was going to be a requirement when storing metrics into Gnocchi. The user would need to specify what kind of archiving they need: 1-second precision over a day, 1-hour precision over a year, or even both.
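
Pictured as data, such an archive definition might look like this (an illustrative structure, not Gnocchi's final API):

# Illustrative only: keep 1-second points for a day and 1-hour points for a year.
archive_policy = [
    {"granularity": "1 second", "timespan": "1 day"},
    {"granularity": "1 hour", "timespan": "1 year"},
]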

The first driver written to achieve that and store those metrics inside Gnocchi was based on whisper. Whisper is the file format used to store metrics for the Graphite project. For the actual storage, the driver uses Swift, which has the advantage of being part of OpenStack and scalable.

Storing the metrics for each entity in a different whisper file and putting them in Swift turned out to have a fantastic algorithmic complexity: it was O(1). Indeed, the complexity needed to store and retrieve metrics doesn't depend on the number of metrics you have, nor on the number of things you are metering. This is already a huge win compared to the current Ceilometer collector design.

However, it turned out that whisper has a few limitations that I was unable to circumvent in any manner. I needed to patch it to remove a lot of its assumptions about manipulating files, or that everything is relative to now (time.time()). I started to hack on that in my own fork, but… then everything broke. The whisper project code base is, well, not the state of the art, and has zero unit tests. I was staring at a huge effort to transform whisper into the time-series format I wanted, without being sure I wasn't going to break everything (remember, no test coverage).

I decided to take a break and look into alternatives, and stumbled upon Pandas, a data manipulation and statistics library for Python. It turns out that Pandas supports time series natively and can do a lot of the smart computation needed in Gnocchi. I built a new file format leveraging Pandas for computing the time series and named it carbonara (a wink to both the Carbon project and pasta, how clever!). The code is quite small (a third of whisper's, 200 SLOC vs 600 SLOC), does not have many of the whisper limitations and… it has test coverage. These Carbonara files are then, in the same fashion, stored into Swift containers.
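
The aggregation itself is the kind of thing Pandas makes short work of; a minimal sketch of the idea (not Carbonara's actual code):

import pandas as pd

# Raw measures with irregular timestamps.
measures = pd.Series(
    [0.5, 0.7, 0.6, 0.9],
    index=pd.to_datetime([
        "2014-08-20 10:00:30", "2014-08-20 10:12:00",
        "2014-08-20 11:05:00", "2014-08-20 11:47:00",
    ]),
)

# Downsample to 1-hour precision, as an archive policy might request.
print(measures.resample("1h").mean())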

Anyway, Gnocchi's storage driver system is designed in the same spirit as the rest of the OpenStack and Ceilometer storage driver systems. It's a plug-in system with an API, so anyone can write their own driver. Eoghan Glynn has already started to write an InfluxDB driver, working closely with the upstream developer of that database. Dina Belova has started to write an OpenTSDB driver. This helps to make sure the API is designed the right way from the start.

Handling resources

Measuring individual entities is great and needed, but you also need to link them with resources. When measuring the temperature and the number of people in a room, it is useful to link these 2 separate entities to a resource, in this case the room, and to give a name to these relations, so one is able to identify what attribute of the resource is actually measured. It is also important to provide the possibility to store attributes on these resources, such as their owners, the time they started and ended their existence, etc.

Relationship of entities and resources

Once this list of resources is collected, the next step is to list and filter them, based on any criteria. One might want to retrieve the list of resources created last week, or the list of instances hosted on a particular node right now.

Resources also need to be specialized. Some resources have attributes that must be stored in order for filtering to be useful. Think about an instance name or a router network.

All of these requirements led to the design of what's called the indexer. The indexer is responsible for indexing entities and resources, and for linking them together. The initial implementation is based on SQLAlchemy and should be pretty efficient. It's easy enough to index the most requested attributes (columns), and they are also correctly typed.
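
A minimal SQLAlchemy sketch of that idea might look like this (table and column names are illustrative, not Gnocchi's actual schema):

from sqlalchemy import Column, DateTime, ForeignKey, String
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()

class Resource(Base):
    __tablename__ = "resource"
    id = Column(String(36), primary_key=True)
    user_id = Column(String(36), index=True)      # typed, indexed attributes
    project_id = Column(String(36), index=True)
    started_at = Column(DateTime)
    ended_at = Column(DateTime)
    entities = relationship("Entity", back_populates="resource")

class Entity(Base):
    __tablename__ = "entity"
    id = Column(String(36), primary_key=True)
    resource_id = Column(String(36), ForeignKey("resource.id"))
    name = Column(String(64))                     # e.g. "temperature"
    resource = relationship("Resource", back_populates="entities")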

We plan to establish a model for all known OpenStack resources (instances, volumes, networks, …) to store and index them into the Gnocchi indexer in order to request them in an efficient way from one place. The generic resource class can be used to handle generic resources that are not tied to OpenStack. It'd be up to the users to store extra attributes.

Dropping the free form metadata we used to have in Ceilometer makes sure that querying the indexer is going to be efficient and scalable.

The indexer classes and their relations

REST API

All of this is exported via a REST API that was partially designed and documented in the Gnocchi specification in the Ceilometer repository, though the spec is not up-to-date yet. We plan to auto-generate the documentation from the code as we are currently doing in Ceilometer.

The REST API is pretty easy to use, and you can use it to manipulate entities and resources, and request the information back.

Macroscopic view of the Gnocchi architecture

Roadmap & Ceilometer integration

This entire plan was presented to and discussed with the Ceilometer team during the last OpenStack summit in Atlanta in May 2014, for the Juno release. I led a session about the concept, and convinced the team that using Gnocchi for our metric storage would be a good approach to solving the Ceilometer collector scalability issue.

It was decided to conduct this experiment in parallel with the current Ceilometer collector for the time being, and see where it leads the project.

Early benchmarks

Some engineers from Mirantis did a few benchmarks around Ceilometer and also against an early version of Gnocchi, and Dina Belova presented them to us during the mid-cycle sprint we organized in Paris in early July.

The following graph sums up the current Ceilometer performance issue pretty well: the more metrics you feed it, the slower it becomes.

For Gnocchi, while the numbers themselves are not fantastic, what is interesting is that all the graphs below show stable performance, with no correlation to the number of resources, entities or measures. This proves that, indeed, most of the code is built around a complexity of O(1), and not O(n) anymore.

Next steps

Clément drawing the logo

While the Juno cycle is being wrapped up for most projects, including Ceilometer, Gnocchi development is still ongoing. Fortunately, the composite architecture of Ceilometer allows a lot of its features to be replaced by other code dynamically. That, for example, enables Gnocchi to provide a Ceilometer dispatcher plugin for its collector, without having to ship the actual code in Ceilometer itself. That should keep Gnocchi's development from being slowed down by the release process for now.

The Ceilometer team aims to provide Gnocchi as a sort of technology preview with the Juno release, allowing it to be deployed alongside and plugged into Ceilometer. We'll discuss how to integrate it into the project in a more permanent and robust manner, probably during the OpenStack Summit for Kilo that will take place next November in Paris.

Sociological ImagesWho Are Habitats For? Electrified Nature in Zoo Exhibits

What do you see?


While it hasn’t always been the case, most well-funded zoos today feature pleasant-enough looking habitats for their animals.  They are typically species-appropriate, roomy enough to look less-than-totally miserable, and include trees and shrubs and other such natural features that make them attractive.

How, though, a friend of mine recently asked, does that landscaping stay nice? “Why don’t [the animals] eat it, lie down on it, rip it to shreds for fun, or poop all over it?”

Because, she told me, some of it is hot-wired to give them a shock if they touch it. These images are taken from the website Total Habitat, a source of electrified grasses and vines.  


Laurel Braitman writes about these products in her book, Animal Madness.  When she goes to zoos, she says, she doesn’t “marvel at the gorilla… but instead at the mastery of the exhibit itself.”  She writes:

The more naturalistic the cages, the more depressing they can be because they are that much more deceptive. To the mandrill on the other side of the glass, the realistic foliage that frames his favorite perch doesn’t help him one bit if it has been hot-wired so that he doesn’t destroy it… Some of the new natural looking exhibits may be even worse for their inhabitants than the old cement ones, as the new plants and other features can shrink the animals’ usable space.

The take-home message is that these attractive, naturalistic environments are more for us than they are for the animal.  They teach us what the animal’s natural habitat might look like and they soothe us emotionally, reassuring us that the animal must be living a nice life.

I don’t know the extent to which zoos use electrified grasses and vines, but next time you visit one you might be inspired to look a little more closely.

Photo of elephants from wikimedia commons.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet Linux AustraliaAndrew Pollock: [life] Day 201: Kindergarten, some startup stuff, car wash and a trip to the vet ophthalmologist

Zoe woke up at some point in the night and ended up in bed with me. I don't even remember when it was.

We got going reasonably quickly this morning, and Zoe wanted porridge for breakfast, so I made a batch of that in the Thermomix.

She was a little bit clingy at Kindergarten for drop off. She didn't feel like practising writing her name in her sign-in book. Fortunately Miss Sarah was back from having done her prac elsewhere, and Zoe had been talking about missing her on the way to Kindergarten, so that was a good distraction, and she was happy to hang out with her while I left.

I used the morning to do some knot practice for the rock climbing course I've signed up for next month. It was actually really satisfying doing some knots that previously I'd found to be mysterious.

I had a lunch meeting over at the Royal Brisbane Women's Hospital to bounce a startup idea off a couple of people, so I headed over there for a very useful lunch discussion and then briefly stopped off at home before picking up Zoe from Kindergarten.

Zoe had just woken up from a nap before I arrived, and was a bit out of sorts. She perked up a bit when we went to the car wash and had a babyccino while the car got cleaned. Sarah was available early, so I dropped Zoe around to her straight after that.

I'd booked Smudge in for a consult with a vet ophthalmologist to get her eyes looked at, so I got back home again, crated her and drove to Underwood to see the vet. He said that she had the most impressive case of eyelid agenesis he'd ever seen. She also had persistent pupillary membranes in each eye. He said that eyelid agenesis is a pretty common birth defect in what he called "dumpster cats", which, for all we know, is exactly what Smudge (or more importantly, her mother) was. He also said that other eye defects, like the membranes she had, were common in cases where there was eyelid agenesis.

The surgical fix was going to come in at something in the order of $2,000 an eye, be a pretty long surgery, and involve some crazy transplanting of lip tissue. Cost aside, it didn't sound like a lot of fun for Smudge, and given her age and that she's been surviving the way she is, I can't see myself wanting to spend the money to put her through it. The significantly cheaper option is to just religiously use some lubricating eye gel.

After that, I got home with enough time to eat some dinner and then head out to crash my Thermomix Group Leader's monthly team meeting to see what it was like. I got a good vibe from that, so I'm happy to continue with my consultant application.

Planet DebianJunichi Uekawa: sigaction bit me.

sigaction bit me. There's a system call and a libc function with similar names (sigaction vs rt_sigaction), but they behave differently.

CryptogramNSA/GCHQ/CSEC Infecting Innocent Computers Worldwide

There's a new story on the c't magazin website about a 5-Eyes program to infect computers around the world for use as launching pads for attacks. These are not target computers; these are innocent third parties.

The article actually talks about several government programs. HACIENDA is a GCHQ program to port-scan entire countries, looking for vulnerable computers to attack. According to the GCHQ slide from 2009, they've completed port scans of 27 different countries and are prepared to do more.

The point of this is to create ORBs, or Operational Relay Boxes. Basically, these are computers that sit between the attacker and the target, and are designed to obscure the true origins of an attack. Slides from the Canadian CSEC talk about how this process is being automated: "2-3 times/year, 1 day focused effort to acquire as many new ORBs as possible in as many non 5-Eyes countries as possible." They've automated this process into something codenamed LANDMARK, and together with a knowledge engine codenamed OLYMPIA, 24 people were able to identify "a list of 3000+ potential ORBs" in 5-8 hours. The presentation does not go on to say whether all of those computers were actually infected.

Slides from the UK's GCHQ also talk about ORB detection, as part of a program called MUGSHOT. It, too, is happy with the automatic process: "Initial ten fold increase in Orb identification rate over manual process." There are also NSA slides that talk about the hacking process, but there's not much new in them.

The slides never say how many of the "potential ORBs" CSEC discovers or the computers that register positive in GCHQ's "Orb identification" are actually infected, but they're all stored in a database for future use. The Canadian slides talk about how some of that information was shared with the NSA.

Increasingly, innocent computers and networks are becoming collateral damage, as countries use the Internet to conduct espionage and attacks against each other. This is an example of that. Not only do these intelligence services want an insecure Internet so they can attack each other, they want an insecure Internet so they can use innocent third parties to help facilitate their attacks.

The story contains formerly TOP SECRET documents from the US, UK, and Canada. Note that Snowden is not mentioned at all in this story. Usually, if the documents the story is based on come from Snowden, the reporters say that. In this case, the reporters have said nothing about where the documents come from. I don't know if this is an omission -- these documents sure look like the sorts of things that come from the Snowden archive -- or if there is yet another leaker.

Worse Than FailureCodeSOD: The Constant Bomb

On one hand, this Java class Jim found is just another instance where somebody made constants like this:

	public static final String NO_SPACE = "";
	public static final String SINGLE_SPACE = " ";
	public static final String DOUBLE_SPACE = "  ";
	public static final String ZERO = "0";
	public static final String FLAG_Y = "Y";
	public static final String FLAG_N = "N";

We’ve all seen this particular anti-pattern before, and we’re bored of it. But Jim’s co-worker really, really went all out on this one. You want to format dates?

/******************* DATE AND TIME FORMAT Starts ********************/
	public static final String DATE_PATTERN_WITH_MONTH_NAME = "dd-MMM-yy";
	public static final String DAY_FORMAT = "dd";
	public static final String DATE_PATTERN_WITH_ONLY_MONTH = "MMM";
	public static final String FULL_MONTH_FORMAT = "MMMMMMMM";
	public static final String YEAR_FORMAT = "yyyy";
	public static final String DATE_TIME_PATTERN_FULL = "MM/dd/YYYY hh:mm:ss";
// and so many more

And what about capitals vs. lowercase?

	public static final String ARRIVAL_CAPS_TEXT = "ARRIVAL";
	public static final String ARRIVAL_TEXT = "Arrival";

	public static final String ALL = "ALL";
	public static final String CHECKED = "checked";
	public static final String UNCHECKED = "unchecked";
	public static final String UNCHECKED_C = "UNCHECKED";
	public static final String CHECKED_C = "CHECKED";

I don’t know why “ALL” doesn’t need two versions, but I’m glad that they didn’t bother to be consistent. The constants go on to include all of the labels for various navigation menus, including public static final String SUBMENU_GENERICFALLOUT = "Generic fallout";, which I assume is named because this code is threatening to cause cancer.

What about database stuff?

/*************************** ORACLE STORED PROCEDURE NAME Starts ****************************/
	public static final String GET_LEVEL_RATE_PROCEURE = "MW_CV_TYPES_PKG.GET_LEVEL_RATE_CODE";
	public static final String GET_CORP_INFO_RPT_PROCEURE = "MW_CV2_TYPES_PKG.GET_CORP_INFO_RPT";
	public static final String PROC_MW_TADETAIL = "MW_UI_REPORTS_PKG.mw_tadetail_p";

We’ve got database stuff. Hundreds of lines of it, and with some great database naming conventions to go with.

And it goes on and on and on and on and on.



public final class Constants {

    /**
     * Private constructor is added here so that this class cannot be
     * instantiated anywhere else.
     */
    private Constants() {
    }

    /*************** COMMON CONSTANTS for all Module Starts ***************/
    public static final String CALL = "{CALL ";
    public static final String FUNCTION_CALL = " {? = call";
    public static final String END_CALL = "}";
    public static final String NO_SPACE = "";
    public static final String SINGLE_SPACE = " ";
    public static final String DOUBLE_SPACE = "  ";
    public static final String ZERO = "0";
    public static final String FLAG_Y = "Y";
    public static final String FLAG_N = "N";
    public static final String FLAG_YES_CAPS = "YES";
    public static final String FLAG_NO_CAPS = "NO";
    public static final String FLAG_YES = "Yes";
    public static final String FLAG_NO = "No";
    public static final boolean BOOLEAN_TRUE = true;
    public static final boolean BOOLEAN_FALSE = false;
    public static final String PIPE_DELIMITER = "|";
    public static final String SWITCH_ON = "On";
    public static final String SWITCH_OFF = "Off";
    public static final String SWITCH_TRUE = "True";
    public static final String A_TEXT = "A";
    public static final String B_TEXT = "B";
    public static final String C_TEXT = "C";
    public static final String D_TEXT = "D";
    public static final String E_TEXT = "E";
    public static final String F_TEXT = "F";
    public static final String G_TEXT = "G";
    public static final String I_TEXT = "I";
    public static final String M_TEXT = "M";
    public static final String P_TEXT = "P";
    public static final String R_TEXT = "R";
    public static final String S_TEXT = "S";
    public static final String T_TEXT = "T";
    public static final String ARRIVAL_CAPS_TEXT = "ARRIVAL";
    public static final String ARRIVAL_TEXT = "Arrival";
    public static final String BOOKING_TEXT = "Booking";
    public static final String ALL = "ALL";
    public static final String CHECKED = "checked";
    public static final String UNCHECKED = "unchecked";
    public static final String UNCHECKED_C = "UNCHECKED";
    public static final String CHECKED_C = "CHECKED";
    public static final String ALL_PRODUCTS="All Products";
    public static final String NO_PRODUCTS="*** No Products Exist ***";
    public static final String PROD_SELECTED = "** Select Product **";
    public static final String PROD_SEL="SEL";
    /*************** COMMON CONSTANTS for all Module Ends. ***************/

    /******************* DATE AND TIME FORMAT Starts ********************/
    public static final String DATE_PATTERN_WITH_MONTH_NAME = "dd-MMM-yy";
    public static final String DAY_FORMAT = "dd";
    public static final String DATE_PATTERN_WITH_ONLY_MONTH = "MMM";
    public static final String FULL_MONTH_FORMAT = "MMMMMMMM";
    public static final String YEAR_FORMAT = "yyyy";
    public static final String DATE_TIME_PATTERN_FULL = "MM/dd/YYYY hh:mm:ss";
    public static final String DATE_TIME_PATTERN = "DDMMMYY";
    public static final String DATE_PATTERN = "MM.DD.YYYY";
    public static final String DATE_PATTERN_MMMddyyyy = "MMM dd yyyy";
    public static final String DATE_PATTERN_ddMMMYYYY = "ddMMMYYYY";
    public static final String DATE_PATTERN_dd_MMM_YYYY = "dd-MMM-YYYY";
    public static final String DATE_PATTERN_YYYY_MM_DD = "yyyy-MM-dd";
    public static final String DATE_PATTERN_hh_mma_dd_MMM_YYYY = "hh:mma dd-MMM-YYYY";
    public static final String DATE_PATTERN_ddMMMYY_HHmmss = "ddMMMYY HH:mm:ss";
    public static final String DATE_PATTERN_MM_YYYY = "MM/YYYY";
    public static final String WEEK_DAY_FORMAT = "EEE";
    public static final String FOOTER_TIMESTAMP = "MMMM dd, YYYY hh:mm:ss a";
    public static final String DATE_PATTERN_DD_MM_YYYY = "dd/MM/yyyy";
    /******************* DATE AND TIME FORMAT Starts ********************/

    /* Exception Handling :Start */
    public static final String JAVA_EXCEPTION_CODE = "9999";
    public static final String ERRORPAGE = "error";
    public static final String DB_ERROR_TYPE = "DB";
    public static final String JAVA_ERROR_TYPE = "AS";
    public static final String PROD_BEGIN_DT_ERROR="begDtError";
    public static final String PROD_END_DT_ERROR="endDtError";
    /* Exception Handling :End */

    /***** Properties File URL Starts ******/
    public static final String COMMON_PROP_PATH = "com/hotelcodata/memberweb/properties/mwcommon.properties";
    public static final String MW_PROP_PATH = "com/hotelcodata/memberweb/properties/memberweb.properties";
    /***** Properties File URL Ends ******/

    /**************************** LEFT NAVIGATION Starts *****************************/
    public static final String MENU_REVMGMT = "Revenue Mgmt";
    public static final String SUBMENU_DAILYMONITOR = "Daily Monitor";
    public static final String SUBMENU_RATEMONITOR = "Rate Monitor";
    public static final String SUBMENU_VIEWCATEGORIES = "View Categories";
    public static final String SUBMENU_MODROOMALLOC = "Modify Room Alloc";
    public static final String SUBMENU_MODPLANALLOC = "Modify Plan Alloc";
    public static final String SUBMENU_VIEWRATEPLANS = "View Rate Plans";
    public static final String SUBMENU_VIEWRATERULES = "View Rate Rules";
    public static final String SUBMENU_MODIFYRATES = "Modify Rates";
    public static final String SUBMENU_MODIFYGUARANTEES = "Modify Guarantees";
    public static final String SUBMENU_LEVELMONITOR = "Level Monitor";
    public static final String SUBMENU_VIEWRATELEVELS = "View Rate Levels";
    public static final String SUBMENU_MODIFYLEVELSETUP = "Modify Level Setup";
    public static final String SUBMENU_MODLVLALLOC = "Modify Level Allocat";
    public static final String SUBMENU_MODLVLROOMTYPE = "Modify Level Room Ty";
    public static final String SUBMENU_MODIFYBARLEVELROOM = "Modify BAR Level Roo";
    public static final String SUBMENU_MODIFYLEVELSTATUS = "Modify Level Status";
    public static final String SUBMENU_MODIFYSTATUS = "Modify Status";
    public static final String SUBMENU_MODIFYOVERALLALLOCATION = "Modify Overall Alloc";
    public static final String SUBMENU_MODIFYCATALLOC = "Modify Cat Alloc";
    public static final String SUBMENU_VIEWHISTORY = "View History";
    public static final String SUBMENU_VIEWEXCEPTIONREPORT = "View Exception Repor";
    public static final String SUBMENU_FORECASTTOOL = "Forecast Tool";
    public static final String SUBMENU_NEWVIEWHISTORY = "New View History";
    public static final String SUBMENU_PITEST = "PI TEST";
    public static final String SUBMENU_GENERICFALLOUT = "Generic fallout";
    public static final String MENU_RESERVATION = "Reservations";
    public static final String SUBMENU_ARRIVALLIST = "Arrivals List";
    public static final String SUBMENU_ARRLISTWITHCC = "Arr List with CC";
    public static final String SUBMENU_CCPROCESSING = "CC Processing";
    public static final String SUBMENU_BWIEXTRAFORM = "BWI Extra Form";
    public static final String SUBMENU_PROPERTYTOPROPERTY = "Property-to-Property";
    public static final String SUBMENU_RESVBOOKEMPLOYEERATE = "Book Employee Rate";
    public static final String SUBMENU_RESVTRANSACTIONLOG = "Transaction Log";
    public static final String SUBMENU_RESVTRANSLOGWITHCC = "Trans Log with CC";
    public static final String MENU_TRAVELCARD = "Travel Card";
    public static final String SUBMENU_CARDPROCESSING = "Card Processing";
    public static final String FILEDOWNLOAD_KEY = "key";
    public static final String FILEDOWNLOAD_PDF = "pdf";
    public static final String FILEDOWNLOAD_XML = "xml";
    public static final String MENU_REPORT = "Reports";
    public static final String SUBMENU_DAILYDENIALS = "Daily Denials";
    public static final String SUBMENU_MEMBERREPORTS = "Member Reports";
    public static final String SUBMENU_BWSTATEMENTS = "BW Statements";
    public static final String SUBMENU_LEVELINVSTATUSREQ = "Level Inv Status Req";
    public static final String SUBMENU_LEVELINVSTATUSRES = "Level Inv Status Res";
    public static final String SUBMENU_INVSTATUSREQ = "Inv Status Request";
    public static final String SUBMENU_INVSTATUSRES = "Inv Status Results";
    public static final String SUBMENU_AFFILIATEREPORTS = "Affiliate Reports";
    public static final String SUBMENU_NEWMENUITEM = NO_SPACE;
    public static final String SUBMENU_PERFSTATSADVANCED = "Perf Stats - Advance";
    public static final String SUBMENU_PERFSTATSMARKETING = "Perf Stats - Marketi";
    public static final String SUBMENU_PERFSTATSDAILYO = "Perf Stats - Daily O";
    public static final String SUBMENU_CORPORATEACCOUNTS = "Corp Accounts";
    public static final String SUBMENU_DETTRAVAG = "Det by Trav Agt";
    public static final String SUBMENU_INVREQUESTREPORT = "INV REquest Report";
    public static final String CBM_DET_REP_C = "CBM Billing Credit Report  ";
    public static final String CBM_DET_REP_D = "CBM Billing Debit Report  ";
    public static final String TOPCRPCRIT1="Net Revenue sorted by Fiscal YTD totals";
    public static final String MENU_FREQUENTGUEST = "Frequent Guest";
    public static final String SUBMENU_PENDREWARD = "Pending Rewards";
    public static final String SUBMENU_REWARD = "Rewards";
    public static final String SUBMENU_MILESONLY = "Miles Only";
    public static final String SUBMENU_MEMBERSEARCH = "Member Search";
    public static final String SUBMENU_REPORT = "Reports";
    public static final String SUBMENU_ENROLL = "Enroll";
    public static final String SUBMENU_REDEEMPTION = "Redemption";
    public static final String SUBMENU_REGISFORPROMOS = "Register for Promo's";
    public static final String SUBMENU_FRONTDESKINCENTIVE = "Front Desk Incentive";
    public static final String SUBMENU_SPECIALOFFERLINK = "Special Offers";
    public static final String SUBMENU_REDEEMPOINTS = "Redeem Points";
    public static final String MENU_BESTCHEQUECOMM = "BestCheque Commissions";
    public
static final String SUBMENU_VIEWEDITCOMM = "View/Edit Comm"; public static final String SUBMENU_ADDPROPERTYDIRECT = "Add Property Direct"; public static final String SUBMENU_TAXDEFINITIONS = "Tax Definitions"; public static final String MENU_LEADS = "Leads"; public static final String SUBMENU_ELEADS = "eLeads"; public static final String MENU_PROPERTYDATA = "Property Data"; public static final String SUBMENU_PDS = "PDS"; public static final String SUBMENU_PDSTUTORIAL = "PDS Tutorial"; public static final String SUBMENU_HOTELGUIDEPROOF = "Hotel Guide Proof"; public static final String SUBMENU_PHOTOSVIRTUALTOUR = "Photos/Virtual Tours"; public static final String MENU_PRODUCT = "Products"; public static final String SUBMENU_PRODUCTMONITOR = "Product Monitor"; public static final String SUBMENU_PRODVIEWHISTORY = "View History"; public static final String MENU_SECURITY = "Security"; public static final String SUBMENU_WHOSON = "Who's On"; public static final String SUBMENU_PICKPROPERTY = "Pick a Property"; public static final String SUBMENU_MAINTAINAFFIUSERS = "Maintain AffilUsers"; public static final String MENU_HELP = "Help"; public static final String SUBMENU_FAQTITLE = "FAQ"; public static final String SUBMENU_QUICKREFERENCE = "Quick Reference"; public static final String SUBMENU_TUTORIALTITLE = "Tutorial"; public static final String SUBMENU_USETITLE = "Do Not Use"; public static final String MENU_CUSTOMERCARE = "Customer Care"; public static final String SUBMENU_VIEWOPENINQUIRIES = "View Open Inquiries"; public static final String SUBMENU_RESOLVEDINQUIRIES = "Resolved Inquiries"; public static final String SUBMENU_FCRINQUIRIES = "FCR Inquiries"; public static final String SUBMENU_MONTHLYCOMPLAINT = "Monthly Complaint"; public static final String SUBMENU_ROLLINGMONTHS = "Rolling 12 Months"; /**************************** LEFT NAVIGATION Ends *****************************/ /*************************** ORACLE STORED PROCEDURE NAME Starts ****************************/ public static final String GET_LEVEL_RATE_PROCEURE = "MW_CV_TYPES_PKG.GET_LEVEL_RATE_CODE"; public static final String GET_CORP_INFO_RPT_PROCEURE = "MW_CV2_TYPES_PKG.GET_CORP_INFO_RPT"; public static final String PROC_MW_TADETAIL = "MW_UI_REPORTS_PKG.mw_tadetail_p"; public static final String GET_RATE_PLAN_PROCEURE = "MW_4_PKG.mw_vrmain_p"; public static final String GET_RATE_CATEGORY_PROCEURE = "MW_CV_TYPES_PKG.GET_RATE_CATEGORY"; public static final String GET_RESORT_INFO_PROCEURE = "MW_CV_TYPES_PKG.GET_RESORT_INFO"; public static final String DO_RATE_CAT_ALTER_PROCEDURE = "MW_4_PKG.mw_doratecatalter_p"; public static final String GET_PLAN_ALLOC_PROCEDURE = "MW_4_PKG.mw_qrcd_p"; public static final String SUBMIT_PLAN_ALLOC_PROCEDURE = "MW_4_PKG.MW_DORATECODEALTER_P"; public static final String GET_CA_DET_BOOKINGS = "MW_CA_DET_PKG.GET_CA_DET_BOOKINGS"; public static final String GET_CITY_DENIALS = "MW_CITY_DEN_PKG.GET_CITY_DENIALS"; public static final String PROC_MW_DENIAL = "MW_UI_REPORTS_PKG.MW_DAILY_DENIALS_MAIN_P"; public static final String PROC_REDEMPTION = "mw_ui_frequentguest_pkg.mw_fmgcrr_p"; public static final String PROC_REV_MGMT_VIEW_CATEGORIES = "MW_CV3_TYPES_PKG.GET_CATEGORY_CODES"; public static final String PROC_REV_MGMT_VIEW_RATE_RULES = "MW_CV3_TYPES_PKG.GET_RESORT_RATE_RULES"; public static final String GET_LEVEL_ALLOC_PROCEDURE = "MW_4_PKG.mw_lmqrlalloc_p"; public static final String SUBMIT_LEVEL_ALLOC_PROCEDURE = "mw_4_pkg.mw_dolevelallocalter_p"; public static final String GET_MODIFY_STATUS_INFO_PROC = 
"MW_4_PKG.mw_qs_p"; public static final String SUBMIT_MODIFY_STATUS_INFO_PROCEDURE = "mw_4_pkg.mw_dostatusalter_p"; public static final String VIEW_TRANSACTION_HISTORY = "mw_4_pkg.mw_rmth_p"; public static final String TRANSACTION_HISTORY_INPUT_DISP = "mw_4_pkg.mw_vhs_p"; public static final String LEVEL_REASSIGN_HIST_DISP = "mw_4_pkg.mw_rmth_level_p"; public static final String PROC_FDIEMPDATA = "MW_UI_FREQUENTGUEST_PKG. mw_frntdsk_incntv_p"; public static final String PROC_FDIPREFIX = "MW_FM_PKG.GET_PX_CNT_CLUB_P"; public static final String PROC_FDISAVENEWMEM = "MW_FM_PKG.fm_gc_enroll_emp"; public static final String PROC_FDIUPDATEMEMSTATUS = "MW_FM_PKG.fm_update_front_desk_emp"; public static final String MODIFY_BAR_LEVEl_ROOM_INFO = "mw_4_pkg.mw_lmqrtstat_p"; public static final String SUBMIT_MODIFY_BAR_LEVEl_ROOM_INFO = "mw_4_pkg.mw_dolevelroomstatusalter_p"; public static final String MODIFY_RATE_LEVEl_ROOM_INFO = "mw_4_pkg.mw_lmqrtalloc_p"; public static final String PROC_REPORTS_SOURCE_OF_BOOKING = "MW_UI_REPORTS_PKG.mw_bksrc_p"; public static final String SUBMIT_MODIFY_LEVEl_ROOM_ALLOC = "mw_4_pkg.mw_dolevelroomallocalter_p"; public static final String PROC_ZETA_FAX_DETAIL_BY_PROPERTY = "mw_zeta_fax_pkg.get_zeta_fax_detail"; public static final String PROC_ZETA_FAX_RATE_BILLING = "mw_zeta_fax_pkg.get_zeta_fax_rate_billing"; public static final String PROC_INTL_TRAFFIC_PROP = "mw_intl_traffic_pkg.get_intl_traffic_info"; public static final String PROC_MW_TA_DET_P = "mw_ui_reports_pkg.mw_ta_det_p"; public static final String GET_AFFILIATE_REPORT = "mw_ui_reports_pkg.mw_affrpts_p"; public static final String PROC_MW_TA_CO_D = "mw_ta_co_d_pkg.get_ta_co_d_bookings"; public static final String ZERO_ALLOC_PROC = "MW_UI_REPORTS_PKG.mw_zeroalloc_p"; public static final String VIEW_RATE_PROC = "MW_UI_REPORTS_PKG.mw_viewrate_p"; public static final String PROC_MW_CA_D = "MW_CA_D_PKG.GET_CA_D_BOOKINGS"; public static final String PROC_AFF_INV_DIS ="mw_ui_reports_pkg.mw_aff_req_p"; public static final String PROC_AFF_INV_SUBMIT ="mw_ui_reports_pkg.mw_aff_requests_p"; public static final String PROC_AFF_INV_UNPROCESSED ="mw_inv_adhoc_report_pkg.GETUNPROCESSEDAFFREQUESTS"; public static final String PROC_AFF_INV_DELETE ="MW_INV_ADHOC_REPORT_PKG.DELETEAFFREPORTREQUEST"; /******* RESERVATION's Stored Procedure Name Starts *******/ public static final String ARRIVALS_LIST_PROC = "MW_UI_RESERVATIONS_PKG.MW_SLOA_P"; public static final String RESV_GTD_SET_AMT_PROC = "MW_UI_RESERVATIONS_PKG.MW_SET_AMOUNT_P"; public static final String CANCL_RESV_PROC = "MW_UI_RESERVATIONS_PKG.MW_CANCEL_LEG_P"; public static final String INSERT_FAX_REQ_PROC = "MW_CV3_TYPES_PKG.INSERT_FAX_REQUEST"; public static final String CCPROCCESS_BWI_PROC = "MW_UI_RESERVATIONS_PKG.MW_SET_BWI_REP_P"; public static final String CCPROCCESS_CREDIT_PROC = "MW_UI_RESERVATIONS_PKG.MW_CC_RESV_REVIEW_REP_P"; public static final String TRANS_LOG_PROC = "MW_UI_RESERVATIONS_PKG.MW_TRAN_LOG_REPORT_DISP_P"; public static final String TRANS_LINK_PROC = "MW_UI_RESERVATIONS_PKG.MW_TRANS_LINK_P"; public static final String UPDATE_NOTIF_PROC = "MW_CV3_TYPES_PKG.UPDATE_STAR_NOTIF"; public static final String RESV_DETAILS_PROC = "MW_UI_RESERVATIONS_PKG.MW_RESV_DETAILS_P"; public static final String PRINT_RESV_DET_PROC = "MW_UI_RESERVATIONS_PKG.MW_PRINT_RESV_DETAILS_P"; public static final String RESORT_DET_PROC = "MW_CV_TYPES_PKG.GET_RESORT_INFO"; /******* RESERVATION's Stored Procedure Name Ends *******/ /******* SECURITY's Stored Procedure Name 
Starts *******/ public static final String WHOS_ON_PROC = "MW_CV_TYPES_PKG.GET_ACTIVE_USER_INFO"; public static final String GET_PICK_UP_PROPERTY = "MW.MW_UI_LOGIN_LOGOUT_PKG.MW_GET_USER_PROPERTY_P"; public static final String PICK_UP_PROPERTY_SUBMIT = "MW.MW_UI_LOGIN_LOGOUT_PKG.MW_AUTH_USER_PROPERTY_P"; public static final String MW_LOGOUT = "MW.MW_UI_LOGIN_LOGOUT_PKG.MW_USER_LOGOUT_P"; public static final String SPINNING_SYMBOL_INFO = "MW.MW_UI_LOGIN_LOGOUT_PKG.MW_GET_PROP_CURR_LOGO_P"; /******* SECURITY's Stored Procedure Name Ends *******/ /*************************** ORACLE STORED PROCEDURE NAME Ends ****************************/ /*************************** MODULE SPECIFIC Constants Starts ****************************/ /******* RESERVATION's Constants Starts *******/ public static final String RESV_ALLOWED_TEXT = "ALLOWED"; public static final String RESV_NOT_ALLOWED_TEXT = "NOTALLOWED"; public static final String RESV_TRANLOG_TEXT = "TRANLOG"; public static final String RESV_BWI_REPORT = "bwireport"; public static final String RESV_CREDIT_REPORT = "creditreport"; public static final String AL_LOOKUP_TYPE = "LOA_SORTED_BY"; public static final String RESV_PRODUCTS_TEXT = "PRODUCTS"; public static final String RESV_GCCI_TEXT = "GCCI"; public static final String RESV_BASE_TEXT = "BASE"; public static final String RESV_PLATINUM_TEXT = "PLATINUM"; public static final String RESV_DIAMOND_TEXT = "DIAMOND"; public static final String RESV_GOLDELITE_TEXT = "GOLD ELITE"; public static final String RESV_GOLD_TEXT = "GOLD"; public static final String RESV_CANCELLED_TEXT = "CANCELLED"; public static final String RESV_PTH_TEXT = "PTH"; public static final String RESV_DATE_TEXT = "DATE"; public static final String RESV_DESC_TEXT = "DESC"; public static final String RESV_TOTAL_TEXT = "TOTAL"; public static final String RESV_AMOUNT_SETTLED_TEXT = "AS"; public static final String RESV_AMOUNT_CREDITED_TEXT = "AC"; public static final String RESV_MESSAGE_TEXT = "message"; public static final String RESV_SELL_CHANGE_TEXT = "sellchange"; public static final String RESV_CANCEL_TEXT = "cancel"; public static final String RESV_BWI_REPORTVAL = "BWI"; public static final String RESV_CREDIT_REPORTVAL = "Credit"; public static final String RESV_LEFT_BRACKET = "  ("; public static final String RESV_RIGHT_BRACKET = ")"; public static final String RESV_COMMA = ","; public static final String RESV_CARAT_SYMBOL = "^"; public static final String RESV_DASH_SYMBOL = "-"; public static final String RESV_AT_SYMBOL = "@"; public static final String RESV_AGES_TEXT = "(Ages "; public static final String RESV_ON_TEXT = "On:"; public static final String RESV_CANCEL_BY_DEFAULT = "4:00PM"; public static final String RESV_STAR_NOTIF_HELPER = " STAR_NOTIFICATIONS_HELPER"; public static final String RESV_STAR_NOTIF = "STAR_NOTIFICATIONS"; /******* RESERVATION's Constants Ends *******/ // Frequent Guest Pending Rewards public static final String PROC_MW_FREQGST_PR = "MW_UI_FREQUENTGUEST_PKG.mw_fmppm_p"; public static final String PROC_MW_FMT_PR = "MW_UI_FREQUENTGUEST_PKG.mw_fmt_p"; public static final String PROC_MW_PR_SUBMIT = "mw_fm_pkg.update_pending_transaction_p"; public static final String PROC_MW_PR_NOSHOW = "MW_FM_PKG.no_show_cancel_reservation"; public static final String PROC_MW_PR_STATUS = "MW_BWR_PKG.MW_FM_FVRP_CHECK"; public static final String PROC_MW_PR_PNDN_TRAN = "MW_FM_PKG.iu_mw_fm_pending_transaction"; public static final String PROC_MW_PR_TRAN = "MW_UI_FREQUENTGUEST_PKG.mw_fm_tran_p"; public static final String 
MW_FG_PR_MANUAL = "MANUAL"; public static final String MW_FG_PR_SYSTEM = "SYSTEM"; public static final String MW_FG_PR_STYLE_1 = "style_name_1"; public static final String MW_FG_PR_STYLE_2 = "style_name_2"; public static final String MW_FG_PR_STYLE_3 = "style_name_3"; public static final String MW_FG_PR_STYLE_4 = "style_name_4"; public static final String MW_FG_PR_SW = "SW"; public static final String MW_FG_PR_BY = "BY"; public static final String MW_FG_PR_BN = "BN"; public static final String MW_FG_PR_STYLE_PTS_1 = "style_pts_1"; public static final String MW_FG_PR_STYLE_PTS_2 = "style_pts_2"; public static final String MW_FG_PR_STYLE_PTS_3 = "style_pts_3"; public static final String MW_FG_PR_STYLE_RP_1 = "style_rp_1"; public static final String MW_FG_PR_STYLE_RP_2 = "style_rp_2"; public static final String MW_FG_PR_STYLE_AP_1 = "style_ap_1"; public static final String MW_FG_PR_STYLE_AP_2 = "style_ap_2"; public static final String MW_FG_PR_STYLE_AP_3 = "style_ap_3"; public static final String MW_FG_PR_NOSHOW = "NOSHOW"; public static final String MW_FG_BWR_POPUP_MSG = "Edit Mode - change fields as needed and click 'Submit'."; public static final String MW_FG_BWR_POPUP = "bwrpopup"; public static final String MW_FG_BWR_INLINE = "display: inline;"; public static final String MW_FG_BWR_NONE = "display: none;"; public static final String MW_FG_BWR_ACTION = "INSERT"; public static final String MW_FG_BWR_ACTION_U = "UPDATE"; public static final String MW_FG_BWR = "BWR"; public static final String MW_FG_BWR_ANSICD = "600663"; public static final String MW_FG_BWR_ERROR = "Exception occured during submission."; public static final String MW_FG_PARTNER = ">Partner<"; public static final String MW_FG_MINUS = "-"; public static final String MW_FG_TILDE = "~"; public static final String MW_FG_TILDE_X = "X~"; public static final String GET_RESORT_INFO = "MW_CV_TYPES_PKG.GET_RESORT_INFO" ; public static final String VENDOR_CODE_LH = "LH"; public static final String VENDOR_CODE_AB = "AB"; public static final String VENDOR_CODE_SW = "SW"; public static final String VENDOR_CODE_SK = "SK"; public static final String VENDOR_CODE_AM = "AM"; public static final String VENDOR_CODE_VA = "VA"; public static final String VENDOR_CODE_TK = "TK"; public static final String PARTNER_REWARDS_0 = "0"; public static final String PARTNER_REWARDS_20 = "20"; public static final String PARTNER_REWARDS_40 = "40"; public static final String PARTNER_REWARDS_60 = "60"; public static final String PARTNER_REWARDS_80 = "80"; public static final String PARTNER_REWARDS_200 = "200"; public static final String PARTNER_REWARDS_250 = "250"; public static final String PARTNER_REWARDS_500 = "500"; public static final String PARTNER_REWARDS_750 = "750"; public static final String PARTNER_REWARDS_1000 = "1000"; public static final String PARTNER_REWARDS_1500 = "1500"; public static final String PARTNER_REWARDS_2000 = "2000"; public static final String PARTNER_REWARDS_2500 = "2500"; public static final String PARTNER_REWARDS_3000 = "3000"; public static final String PARTNER_REWARDS_3500 = "3500"; public static final String PARTNER_REWARDS_4000 = "4000"; public static final String PARTNER_REWARDS_4500 = "4500"; public static final String FMPPM = "fmppm"; public static final String FMGC = "fmgc"; public static final String MERGED = "MERGED"; public static final String RESTRICT = "RESTRICT"; public static final String CARDNOTFOUND = "CARDNOTFOUND"; public static final String DELETED="DELETED"; public static final String WARN_TEXT = "WARN"; // 
Frequent Guest Member Search public static final String MEM_SRCH_PROMO = "NASCAR"; public static final String MEM_SRCH_STYLE_1 = "style_1"; public static final String MEM_SRCH_STYLE_2 = "style_2"; public static final String PROC_MW_FREQGST_MS_VENDOR = "MW_FM_PKG.get_fm_vendor_lov"; public static final String PROC_MW_FG_MS_GCCMEM = "MW_FM_PKG.getGccMembers"; // Frequent Guest Miles Only public static final String PROC_FG_MILES_P = "mw_ui_frequentguest_pkg.mw_fmmiles_p"; public static final String PROC_FG_MILES_TRAN = "MW_FM_PKG.iu_mw_fm_transaction"; public static final String MW_FG_USD = "USD"; // FreguentGuest Report public static final String GET_FMRM_PROCEURE = "MW_UI_FREQUENTGUEST_PKG.mw_fmrm_p"; public static final String GET_BWR_ENROLLMENT_GOAL_PROCEURE = "MW_FM_PKG.get_fmbpegreport_info"; public static final String GET_FM_BILLS_MILES_INFO_PROCEURE = "MW_FM_PKG.get_fm_billmiles_info"; public static final String GET_FM_BILL_POINT_INFO_PROCEURE = "MW_FM_PKG.get_fm_billpoints_info"; public static final String GET_FM_BILL_GC_FNV_PROCEURE = "MW_FM_PKG.get_fm_billgcfnv_info"; public static final String GET_FMRL_PROCEURE = "MW_UI_FREQUENTGUEST_PKG.mw_fmrl_p"; public static final String GET_PROPERTY_OFFER_PROCEURE = "MW_FM_PKG.get_fmPOReport_info"; public static final String GET_BWR_PROPERTY_ENROLLMENT = "MW_FM_PKG.GET_BWR_ENROLL_INFO"; public static final String GET_ENROLL_INFO = "MW_FM_PKG.doMemberInsert"; public static final String GET_ENROLL_DROPDOWN = "mw_ui_frequentguest_pkg.mw_enroll_p"; public static final String GET_WELCOME_LETTER = "mw_ui_frequentguest_pkg.mw_enroll_msg_p"; public static final String GET_FM_DO_Propoff_DEL = "MW_FM_PKG.doPropoffDelete"; public static final String BWR_PT_PTR_REJ = "ET"; public static final String FREE_NTS_VOU_REJ = "ER"; public static final String PTR_RWD_BILL = "BA"; public static final String BWR_PTS_BILL_BY_MONTH = "BG"; public static final String REDEEM_BILL_BY_MONTH = "BR"; public static final String PROP_SPE_OFFER = "PO"; public static final String BWR_PROP_ENROLL = "PE"; public static final String BWR_PROP_ENROLL_GOAL = "BPEG"; public static final String TRN_TYPE_GF = "GF"; public static final String GCFNV = "GC_FNVMW"; public static final String E = " E "; public static final String PTS = "pts"; public static final String RWDS = "rwds"; public static final String AND = "and"; public static final String GRAND_TOTALS = "Grand Totals:"; public static final String GCCPTS = "GCCPTS"; public static final String GCCI_TOTAL = "GCCI Totals:"; public static final String FNVMW = "FNVMW"; public static final int MAX_ROWS = 500; public static final int END_ROWS = 100; public static final String RPT_T = "BWR Points/Partner Rewards Awarded by Date"; public static final String RPT_ET = "BWR Points/Partner Rewards Rejects"; public static final String RPT_R = "Free Night Vouchers Redeemed"; public static final String RPT_ER = "Free Night Voucher Rejects"; public static final String RPT_BG = "BWR Points Billing by Month/Year"; public static final String RPT_BA = "Partner Rewards Billing by Month/Year"; public static final String RPT_BR = "Redemption Billing by Month/Year"; public static final String RPT_A = "Lynx Arrivals with Frequent Guest Info"; public static final String RPT_PO = "Property Special Offers"; public static final String RPT_PE = "BWR Property Enrollments"; public static final String RPT_BPEG = "BWR Property Enrollment Goal"; public static final String TRAN_KEY = "TRAN"; public static final String TRAN_VAL = "Transaction Date"; public static 
final String ARR_KEY = "ARR"; public static final String ARR_VAL = "Arrival Date"; public static final String DEP_KEY = "DEP"; public static final String DEP_VAL = "Departure Date"; public static final String REP_TA_CA_SUMMARY = " summary records."; public static final String REP_PROD_PROP_NOT_FOUND = "Property information not found"; public static final String ARRDATE_KEY = "Name~ArrDate"; public static final String ARRDATE_VAL = "Guest Name, Arrival Date"; public static final String ARRDATE_NAME_KEY = "ArrDate~Name"; public static final String ARRDATE_NAME_VAL = "Arrival Date, Guest Name"; public static final String USERDATE_KEY = "User~Date"; public static final String USER_INIT_VAL = "User Id, Transaction Date"; public static final String VENDOR_KEY = "Vendor~Name"; public static final String VENDOR_VAL = "Partner Code, Guest Name"; public static final String NAME_DEP_KEY = "Name~DepDate"; public static final String NAME_DEP_VAL = "Guest Name, Departure Date"; public static final String DEPDATE_NAME_KEY = "DepDate~Name"; public static final String DEPDATE_NAME_VAL = "Departure Date, Guest Name"; public static final String VEN_ARR_KEY = "Vendor~ArrDate"; public static final String VEN_ARR_VAL = "Partner Code, Arrival Date"; public static final String REF_DATE_KEY = "Ref~RedDate"; public static final String REF_DATE_VAL = "Reference, Redeem Date"; public static final String RED_DATE_KEY = "RedDate~Ref"; public static final String RED_DATE_VAL = "Redeem Date, Reference"; public static final String USER_DATE_KEY = "User~Date"; public static final String USER_DATE_VAL = "User Id, Transaction Date"; public static final String RED_KEY = "RED"; public static final String RED_VAL = "Redeem Date"; public static final String PERIOD_KEY = "Period"; public static final String ENR_KEY = "ENR"; public static final String ENR_VAL = "Enrollment Date"; public static final String REDDATE_KEY = "RedDate"; public static final String REDDATE_VAL = "Redemption Type, Redeem Date"; public static final String VOUID_KEY = "VouchId"; public static final String VOUID_VAL = "Redemption Type, Voucher Number"; public static final String BILL_KEY = "BILL"; public static final String BILL_VAL = "Billing Month/Year"; public static final String STA_BEG_DATE_KEY = "st-pobegdate"; public static final String STA_BEG_DATE_VAL = "Status, Offer Begin Date"; public static final String PRP_OFF_KEY = "POFF"; public static final String PRP_OFF_VAl = "Property Offers Date"; public static final String OFF_BEG_DATE_KEY = "pobegdate"; public static final String OFF_BEG_DATE_VAL = "Offer Begin Date"; public static final String EMP_LAST_KEY = "Employee~Name"; public static final String EMP_LAST_VAL = "Employee Last Name"; public static final String GET_FM_REDEEM_POINTS = "MW_UI_FREQUENTGUEST_PKG.mw_fmfnv_p"; public static final String SUBMIT_FM_REDEEM_POINTS = "mw_ui_frequentguest_pkg.mw_redeempntsinfo_p"; public static final String FIRST_NIGHT = "FIRST NIGHT"; public static final String SECOND_NIGHT = "SECOND NIGHT"; public static final String THIRD_NIGHT = "THIRD NIGHT"; public static final String FOURTH_NIGHT = "FOURTH NIGHT"; public static final String FIFTH_NIGHT = "FIFTH NIGHT"; public static final String DATE_FORMAT = "ddMMMyyyy"; public static final String Inactivate = "Inactivate"; public static final String Indicator = "'I'"; public static final String reactivate = "Re-activate"; public static final String reindicate = "'A'"; public static final String DATE_FORMAT_TWO = "ddMMMyy"; public static final String 
GET_FM_SPECIAL_OFFER = "mw_ui_frequentguest_pkg.mw_fmpomenu_p"; public static final String GET_FM_PROP_OFFER = "mw_fm_pkg.iu_mw_fm_prop_offers"; public static final String GET_FM_SPECIAL_OFFER_DEL = "mw_fm_pkg.dopropofftempdelete"; public static final String GET_FM_SPECIAL_OFFER_DETAIL = "mw_ui_frequentguest_pkg.mw_fmpodetail_p"; public static final String UPADTE_PROP_SPE_OFFER = "mw_ui_frequentguest_pkg.mw_special_offer_p"; public static final String SPECIAl_OFFER_MENU = "specialOfferSuccess"; public static final String FMRM = "fmrm"; // frequency guest Promotion Registration public static final String GET_FMPRP_PROCEURE = "MW_UI_FREQUENTGUEST_PKG.mw_fmpr_p"; public static final String GET_TERM_CONDITION = "GC.GC_PROMO_PKG.get_promo_term_and_cond"; public static final String DISPLAY = "DISPLAY"; public static final String UPDATE = "UPDATE"; public static final String NUM600663 = "600663"; // frequent guest Special Offer public static final String ACT_E = "e"; public static final String PENDING = "Pending"; public static final String ACT_W = "w"; public static final String ACT_N = "n"; public static final String ACT_C = "c"; public static final String ACT_V = "v"; public static final String BOTH = "both"; public static final String POINTS = "points"; public static final String MILES = "miles"; public static final String ROOM_RATE = "Room Rate Only"; public static final String ROOM_FOOD = "Room and Food"; public static final String TOTAL_BILL = "Total Bill"; public static final String STANDARD = "Standard"; public static final String DOUBLE = "Double"; public static final String TRIPLE = "Triple"; public static final String QUAD = "Quadruple"; /************ Stored Function names ********************/ public static final String GET_PROPERTY_DATE = "cpm.GET_PROPERTY_DATE"; public static final String GET_MONTHLY_SUMMARY_REPORT_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_mon_com_p"; public static final String GET_RESOLVED_INQUIRY_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_ring_p"; public static final String GET_VIEW_INQUIRY_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_view_open_inq_p"; public static final String GET_ROLLING_INQUIRY_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_rolling_month_p"; public static final String GET_INQUIRY_NOTIFY_INFO_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_csincdet_p"; public static final String REVIEW_FILE_WRAPPER_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_update_cs_inc_p"; public static final String REVIEW_FILE_PROCEDURE = "MW_INQ_PKG.UPDATE_CS_INC_REVIWED_P"; public static final String SUBMIT_COMMENT_PROCEDURE = " mw_inq_pkg.add_cs_inc_comment_p"; public static final String GET_FCR_INQUIRY_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_fcr_inquiry_p"; public static final String GET_FCR_INQUIRY_INFO_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_fcr_inquiry_dtl_p"; public static final String GET_VIEW_INQUIRY_INFO_PROCEDURE = "mw_ui_customercare_pkg.mw_cust_mgnt_view_opn_inqdl_p"; public static final String PROC_MW_TOPTRAVLTA = "MW_UI_REPORTS_PKG.MW_TA_SUM_CO_L_P"; public static final String PROC_MW_TOPCRPCA = "MW_UI_REPORTS_PKG.MW_CA_SUM_CO_L_P"; public static final String PROC_MW_PRODBYPROP_ALL = "MW_UI_REPORTS_PKG.MW_PRODPROP_P"; public static final String GET_PRODUCT_MONITOR_PROCEDURE = "MW_UI_PRODUCTS_PKG.MW_PRODUCT_MONITOR_P"; public static final String GET_PRODUCT_MONITOR_POPUP = "MW_UI_PRODUCTS_PKG.MW_PMMOD_P"; public static final String GET_PRODUCT_MONITOR_ALLOC = "MW_UI_PRODUCTS_PKG.MW_PMSA_P"; public static final 
String AIRLINE_MILAGE_BY_PROP = "MW_UI_REPORTS_PKG.mw_detbill_p"; public static final String GET_PRODUCT_MONITOR_RATE = "MW_UI_PRODUCTS_PKG.MW_PMR_P"; public static final String GET_PRODUCT_HISTORY_PROCEDURE = "MW_UI_PRODUCTS_PKG.MW_HISTORY_RESULT_P"; public static final String GET_BCC_CV_STATUS = "MW_6_PKG.get_ta_types_f"; public static final String FETCH_QUICK_EDIT_SP = "MW_6_Pkg.quick_edit_bc_comm"; public static final String FETCH_BESTCHEQUE_COMM = "MW_UI_BESTCHEQUE_PKG.MW_GET_BC_COMMISSION_P"; public static final String FETCH_COMM_DETAIL = "MW_UI_BESTCHEQUE_PKG.MW_COMMISSION_DTL_P"; public static final String FETCH_PRODUCT_STATUS_POPUP = "MW_UI_PRODUCTS_PKG.MW_PMS_P"; public static final String FETCH_BCC_HISTORY_POPUP = "MW_UI_BESTCHEQUE_PKG.MW_BEST_CHEQUE_HISTORY_P"; public static final String PROPERTY_PERFORMANCE_RATE = "MW_UI_REPORTS_PKG.mw_rate_p"; public static final String FETCH_BCC_FALLOUT_SP = "MW_6_Pkg.iu_bc_comm"; public static final String FETCH_BCC_ADD_PROP = "mw_ui_hotelcocheque_pkg.mw_bcpdb_p"; public static final String FETCH_BCC_TAX = "mw_cv2_types_pkg.get_bc_taxdef_info"; public static final String FETCH_ONE_TAX = "mw_cv2_types_pkg.get_one_bc_taxdef_info"; public static final String FETCH_BCC_EDIT_POPUP = "mw_ui_hotelcocheque_pkg.mw_bstchk_edit_p"; public static final String FETCH_BCC_UPDATE_POPUP = "mw_ui_hotelcocheque_pkg.mw_hotelco_cheque_update_dtl"; public static final String BCC_ADJUST_FALLOUT = "mw_6_pkg.bc_insert_adjustment"; public static final String BCC_TAX_FLAT = "Flat"; public static final String FETCH_BCC_ADJUST_POPUP = "mw_ui_hotelcocheque_pkg.mw_bstchk_adjst_p"; public static final String UPDATE_TAX_FALLOUT= "mw_6_pkg.bc_updatetaxdef"; public static final String ADD_TAX_FALLOUT= "mw_6_pkg.bc_addTaxDef"; public static final String BCC_EDIT_LINK = "edit"; public static final String BCC_ADJUST_LINK = "adjust"; public static final String PROPERTY_SPECIAL_OFFER = "MW_FM_PKG.get_fmPOReport_info"; public static final String CBM_BILLING_DT = "MW_RESSTAT_PKG.get_aff_cbmbillingdt_info"; public static final String CBM_BILLING_CR = "MW_RESSTAT_PKG.get_aff_cbmbillingcr_info"; public static final String CBM_BILLING_ORIGIN = "MW_RESSTAT_PKG.origin"; /*** Stored Procedure Name -- Modify Rates -- Revenue Management ***/ public static final String PROC_REV_MGMT_SHOW_MODIFY_RATES = "MW_4_PKG.MW_QR_P"; public static final String PROC_REV_MGMT_GET_PRE_DEF_RATE_PLAN = "MW_5_PKG.GET_PRE_DEF_RATE_PLAN_P"; public static final String PROC_REV_MGMT_DO_RATE_ALTER = "mw_4_pkg.mw_doratealter_p"; /***** Product URL ***/ public static final String PRODUCT_MONITOR_PAGE = "productMonitorPage"; public static final String PRODUCT_MONITOR_ATTR_POPUP = "productMonitorAttrPopUp"; public static final String PRODUCT_MONITOR_ALLOC_POPUP = "productMonitorAllocationPopUp"; /***** Revenue Management -- View Rate Rules -- Texts *****/ public static final String VIEW_RATE_RULES_HEADER_ADJUST = "Adjust - System will auto-adjust secondary rate plans to ensure rate integrity compliance"; public static final String VIEW_RATE_RULES_HEADER_REJECT = "Reject - System does not allow secondary rate adjustments that violate critical business rules."; public static final String VIEW_RATE_RULES_HEADER_WARNING = "Warning - Secondary rate plan alter allowed but is not reccommended."; public static final String VIEW_RATE_RULES_ACTION_ADJUEST = "ADJUST"; public static final String VIEW_RATE_RULES_ACTION_REJECT = "REJECT"; public static final String VIEW_RATE_RULES_ACTION_WARNING = "WARNING"; public static final 
String VIEW_RATE_RULES_LAST_ACTION_START = "start"; public static final String VIEW_RATE_RULES_MUST_MUST_TEXT = "must"; public static final String VIEW_RATE_RULES_MUST_SHOULD_TEXT = "should"; public static final String VIEW_RATE_RULES_CONDITION_CODE_H = "H"; public static final String VIEW_RATE_RULES_RATE_TEXT = "Rate "; public static final String VIEW_RATE_RULES_BE_TEXT = " be "; public static final String VIEW_RATE_RULES_BE_EQUAL_TO_LESS_THAN_TEXT = " be equal to or less than"; public static final String VIEW_RATE_RULES_CURRENCY_UNITS_LESS_THAN_TEXT = " currency units less than"; public static final String VIEW_RATE_RULES_LESS_THAN_TEXT = "% less than"; public static final String VIEW_RATE_RULES_BE_LESS_THAN_EQUAL_TO_GREATER_THAN_TEXT = " be less than, equal to or greater than"; public static final String VIEW_RATE_RULES_SPECIAL_RULES_APPLY_TEXT = "Special rules apply"; /***** Revenue Management -- Modify Rates *****/ public static final String MODIFY_RATES_COUNTER_TYPE_LN_S_CTR = "ln_s_ctr"; public static final String MODIFY_RATES_COUNTER_TYPE_LN_PD_CTR = "ln_pd_ctr"; public static final String MODIFY_RATES_COUNTER_TYPE_LN_P_CTR = "ln_p_ctr"; public static final String MODIFY_RATES_COUNTER_TYPE_LN_F_CTR = "ln_f_ctr"; public static final String MODIFY_RATES_UI_NBSP = "&nbsp;"; public static final String MODIFY_RATES_MODNAME = "ModifyRates"; public static final String MODIFY_RATES_SUBMODNAME_QR = "QR"; public static final String MODIFY_RATES_SUBMODNAME_DMR = "DMR"; public static final String MODIFY_RATES_SUBMODNAME_RMR = "RMR"; public static final String GENERATE_MW_SESS_ID = "MW_2_PKG.genSessionId()"; public static final String PHOTO_VIRTUAL_TOUR = "MW_UI_PROPERTYDATA_PKG.MW_PROP_DATA_GET_URL_P"; public static final String MODCATALLOC_MODNAME = "ModifyCatAlloc"; public static final String SUCCESS_RESULT = "success"; public static final String FAILURE_RESULT = "failure"; public static final String MODLEVELALLOC_MODNAME = "ModifyLevelAlloc"; public static final String MODSTATUS_MODNAME = "ModifyStatus"; public static final String MODLEVELALLOC_SUBMODNAME_TWO = "LMSA"; public static final String MODSTATUS_SUBMODNAME_TWO = "DMS"; public static final String RATE_CATEGORY_PROC_INP_ONE = "RATECATEGORY"; public static final String MODCATALLOC_RATECATEGORY_NAME = "GROUP"; public static final String MODPLANALLOC_SUBMODNAME_NAME = "ModifyPlanAlloc"; public static final String MODBARLEVELROOMTYPESTS_MODNAME = "ModifyBarLevelRoomTypeStatus"; public static final String MODBARLEVELROOMTYPESTS_SUBMODNAME_TWO = "LMS"; public static final String MODLEVELROOMTYPEALLOC_MODNAME = "ModifyLevelRoomTypeAllocation"; public static final String PROC_MODIFY_LEVEl_STATUS_1 = "mw_4_pkg.mw_lmqrlstat_p"; public static final String PROC_MODIFY_LEVEl_STATUS_2 = "MW_4_PKG.MW_DOLEVELSTATUSALTER_P"; public static final String MODIFY_LEVEl_STATUS_NAME = "ModifyLevelStatus"; public static final String PROC_MODIFY_OVERALL_ALLOC_ONE = "mw_cv_types_pkg.get_rate_code"; public static final String PROC_MODIFY_OVERALL_ALLOC_TWO = "MW_4_PKG.mw_dooverallalter_p"; public static final String MODIFY_OVERALL_ALLOC_NAME = "ModifyOverallAllocation"; public static final String MODIFY_OVERALL_ALLOC_INPUT_ONE = "OVERALL"; /***** Revenue Management -- VIEW MODIFICATION HISTORY *****/ public static final String LEVELS_DISABLE = "DISABLED"; /** Reports - AffiliateReports - TotalDataBySourceOfBooking */ public static final String PROPERTY = "Property"; public static final String TOTALS = "Totals:"; public static final String LIGHT_BLUE_BOX = 
"light_blue_box tright"; public static final String TRIGHT = "tright"; public static final String PROP_INFO = "prodByProp prodType"; public static final String PROP_TYPE = "prodType"; public static final String RETSPECIFICPROPERTYSTATUS = "specificpropsrcofbkgdetrslt"; public static final String RETTOTALDATABYSRCSTATUS = "totaldatabysrcofbkgdetrslt"; public static final String DET_TRAV_AGT_REPORT = "dettravelagentreport"; public static final String PROPERTY_INFO_NOT_FOUND = "Property information not found"; public static final String DET_TRAV_AGT_DEAFULT_STATUS = "CYTD"; public static final String DET_TRAV_AGT_DEAFULT_STATUS2 = "FYTD"; public static final String XML_RESV_FILE_NAME1 = "1_extract"; public static final String XML_RESV_FILE_NAME2 = "sob_extract"; public static final String FALLOUT_VIEW = "falloutview"; public static final String ERROR_VIEW = "errorview"; public static final String REPORT_FORMAT_ONE = "01"; public static final String FORMAT_ONE = "1"; public static final String REPORT_FORMAT_COMMA = ","; public static final String REPORT_FORMAT_COMMA_SPACE = ", "; public static final String MEMBER_REP_NAME_ONE = "rtb"; public static final String MEMBER_REP_NAME_TWO = "RTB"; public static final String MEMBER_REP_NAME_THREE = "trbill"; public static final String MEMBER_REP_NAME_FOUR = "TRBILL"; public static final String GLOBAL_FILE_NAME = "gfbill_det"; public static final String COST_SHARING_FILE_NAME = "cost"; public static final String AFF_BILLING_FILE_NAME_ONE = "fmbill"; public static final String AFF_BILLING_FILE_NAME_TWO = "gcci"; public static final String CONCAT_STRING_HYPHEN = " - "; public static final String AFFILIATE_INFO = "Translation Cost Detail"; public static final String RTB_INFO = "Reservation Traffic Bulletin"; public static final String REPORT_STMT = "STMT"; public static final String REPORT_BACKUP = "BACKUP"; public static final String RESORT_ID = "resortId"; public static final String COUNTRY_CODE = "countryCode"; public static final String DEPARTURE = "Departure"; public static final String DATES_FROM = "dates from"; public static final String REP_FREQUENT_GST = "for Frequent Guest"; public static final String REP_CORPORTE_ID = "for Corporate ID"; public static final String REP_TRAVEL_ID = "for Travel Agent"; public static final String REP_TO = "to"; public static final String REP_DENIAL_DATE_OF = "Date of"; public static final String REP_DENIAL = "Denial"; public static final String REP_FREQ_NUM = "600663"; public static final String REP_FREQ_QES = "?"; public static final String SEMI_COLON = ";"; public static final String GET_ROOM_ALLOC_INFO = "MW_4_PKG.MW_QSA_P"; public static final String MODROOMALLOC_MODNAME = "ModifyRoomAlloc"; public static final String SUBMIT_ROOM_ALLOC_PROCEDURE = "mw_4_pkg.mw_dospecificalter_p"; public static final String GET_GUARANTEE_INFO = "MW_4_PKG.MW_QG_P"; public static final String GET_GUARANTEE_MODNAME = "ModifyGuarantee"; public static final String SUBMIT_GUARANTEE_INFO_PROCEDURE = "MW_4_PKG.MW_DOGUARANTEE_P"; public static final String filename = "eFile"; public static final String filepath = "filePath"; public static final String contypeone = "contentType"; public static final String disposition = "Content-disposition"; public static final String dispositioninline = "inline; filename=\""; public static final String conappndone = "/"; public static final String conappndtwo = "\""; public static final String contypetwo = "application/octet-stream"; public static final String dispositionattachment = "attachment; 
filename=\""; /* Rev Mgmt - Rate Monitor : Start */ public static final String RATECODEPOPUPLINK = "rateCodeLink"; public static final String RATELEVELPOPUPLINK = "rateLevelLink"; public static final String OVERALLALLOCPOPUPLINK = "overallAlterLink"; public static final String MODIFYRATESPOPUPLINK = "modifyRatesLink"; public static final String GET_RATE_MONITOR_PROCEDURE = "mw_4_pkg.mw_rmmeat_p"; public static final String GET_RMRLVL_DET_PROCEDURE = "mw_4_pkg.mw_rmrlevel_det_p"; public static final String GET_RATE_CATEGORY_PROCEDURE = "MW_4_PKG.mw_revenue_rate_cd_los_p"; public static final String GET_RMRPL_DET_PROCEDURE = "MW_4_PKG.mw_rmrplan_det_p"; public static final String GET_RM_RMO_PROCEDURE = "MW_4_PKG.mw_rmo_p"; public static final String GET_RM_RMR_PROCEDURE = "MW_4_PKG.mw_rmr_p"; public static final String DO_LEVEL_OVERALL_ALTER_PROCEDURE = "MW_4_PKG.mw_doleveloverallalter_p"; public static final String RM_MODINVALLOC_MODNAME = "ModifyInvAlloc"; public static final String TIMEZONE = "PHOENIX_CRO"; public static final String MODIFYOVERALL_SUBMODULE_ONE = "LMO"; public static final String MODIFYOVERALL_SUBMODULE_TWO = "RMO"; public static final String RATE_MONITOR_MODULE = "RATE_MONITOR"; public static final String MONITOR_FALLOUT_PAGE = "monitorfalloutview"; public static final String RATE_CATEGORY_ENDS = "-Ends: "; public static final String RATE_MONITOR_TYPE_LEVEL = "LEVEL"; public static final String RATE_MONITOR_TYPE_RATE = "RATE"; public static final String RATE_MONITOR_TYPE_ROOM = "ROOM"; public static final String DELIMITER_EQUAL_TO = "="; public static final String RATE_MONITOR_STOP = "STOP"; public static final String RATES_STAND_ALONE = "STAND_ALONE"; public static final String RATES_UNIT_OFF = "UNIT_OFF"; public static final String PERCENT_OFF = "PERCENT_OFF"; public static final String DATE_FORMAT_ONLY_DAY = "EE"; /* Rev Mgmt - Rate Monitor : End */ /***** Revenue Management -- MODIFY LEVEL SET UP -- START *****/ public static final String SHOW_MODIFY_RATE_LEVEL_SETUP_PROCEDURE = "MW_4_PKG.mw_rate_setup_p"; public static final String SHOW_MODIFY_RATE_LEVEL_ASSIGNMENT_PROCEDURE = "MW_CV_TYPES_PKG.GET_RATE_LEVEL_TO_MOVE"; public static final String SUBMIT_MODIFY_RATE_LEVEL_ASSIGNMENT_PROCEDURE = "MW_5_PKG.DOLEVELREASSIGN"; public static final String SUBMIT_MODIFY_RATE_LEVEL_NAMES_PROCEDURE = "MW_5_PKG.DOLEVELNAMEALTER"; public static final String POPUP_FALLOUT_VIEW = "falloutpopupview"; public static final String POPUP_ERROR_VIEW = "errorpopupview"; public static final String MODIFY_RATE_LEVEL_SETUP_SUBMODNAME_LMS = "LMS"; public static final String MODIFY_RATE_LEVEL_SETUP_MODULE_NAME = "ModifyLevelSetup"; /***** Revenue Management -- MODIFY LEVEL SET UP -- END *****/ public static final String PMODEU = "U"; public static final String DELIMITER = "%"; public static final String SPACE_160 = "&#160;"; /* Airline Mileage Report */ public static final String BILLING_DATE = "Billing date"; public static final String ZERO_ONE = "01"; public static final String FOR_AFFILIATE = "for Affiliate"; public static final String FOR_PROPERTY = "for Property"; public static final String AIRLINEMILEAGESUMMARYRSLT = "airlinemileagesummarybyproprslt"; public static final String AIRLINEMILEAGEDETAILRSLT = "airlinemileagedetbyproprslt"; public static final String PROPERTYPERFORMANCEBYRATEPLAN = "propertyperformancebyrateplan"; public static final String PROPERTYPERFORMANCEBYRATEPLANRSLT = "propertyperformancebyrateplanrslt"; public static final String CT_CITY_REQUESTED = "~*~"; public static 
final String SEL = "SEL"; public static final String NONE = "None"; public static final String NOT_AVAIL = "N/A"; public static final String BREAK_DELIMETER = "<BR/>"; public static final String CUST_FALLOUT = "fallout"; public static final String NONE_CAPS = "NONE"; public static final String GET_LM_NONACTIVE_PROCEDURE = "MW_CV_TYPES_PKG.GET_INACTIVE_DETAIL"; public static final String NONACTIVEPOPUPTYPE = "nonActivePopupType"; /***** Revenue Management -- LEVEL MONITOR -- START *****/ public static final String REV_MGMT_LEVEL_MONITOR = "levelmonitorview"; public static final String OVERALLPOPUPLINK = "overallAlterLink"; public static final String OVERALLLVLALLOCPOPUPLINK = "overallLvlAlterLink"; public static final String OVERALLRATEPOPUPTYPE = "overallRateType"; public static final String OVERALLROOMPOPUPTYPE = "overallRoomType"; public static final String OUTSTATUSPOPUPLINK = "outStatusAlterLink"; public static final String OUTAVAILPOPUPLINK = "outAvailAlterLink"; public static final String REV_MGMT_RATE_MONITOR = "ratemonitorview"; public static final String GET_LEVEL_MONITOR_PROCEDURE = "mw_4_pkg.mw_lmmeat_p"; public static final String GET_LM_RMO_PROCEDURE = "MW_4_PKG.mw_lmrlalloc_p"; public static final String GET_LM_STAT_PROCEDURE = "MW_4_PKG.mw_lmqrlstat_p"; public static final String CONCAT_STRING_EQUAL = "="; public static final String LEVEL_MONITOR_MODULE = "LEVEL_MONITOR"; public static final String TOTAL_INV = "TOTAL_INV"; public static final String STOP = "STOP"; public static final String NBSP = "|nbsp"; /***** Revenue Management -- LEVEL MONITOR -- END *****/ /***** Affiliate Report -- START *****/ public static final String PROC_REPORTS_HOTEL_REVENUE_BY_RATE_PLAN = "MW_UI_REPORTS_PKG.mw_ratep_p"; public static final String PROPERTY_TO_PROPERTY_COMM_RATE = "MW_RESSTAT_PKG.get_aff_ptp_comm_info"; public static final String PROPERTY_AND_RATE_PLAN = "MW_UI_REPORTS_PKG.mw_prate_p"; public static final String MONTHLY_SCAT_INCIDENT_DETAIL = "MW_UI_REPORTS_PKG.mw_scaffil_p"; public static final String PROC_REPORTS_INVENTORY_STATUS_REQ = "MW_UI_REPORTS_PKG.mw_afflt_lvl_inv_stat_rqst_p"; public static final String PROC_REPORTS_INVENTORY_STATUS = "MW_UI_REPORTS_PKG.mw_aff_lvlrqst_p"; public static final String INV_GET_PROCESSED_REQUEST = "MW_INV_ADHOC_REPORT_PKG.getprocessedrequests"; public static final String LVL_INV_GET_UNPROCESSED_REQUEST = "MW_INV_ADHOC_REPORT_PKG.getlvlunprocessedrequests"; public static final String INV_DELETE_REPORT_REQUEST = "MW_INV_ADHOC_REPORT_PKG.deletereportrequest"; public static final String LVL_INV_INSERT_REPORT_REQUEST = "MW_INV_ADHOC_REPORT_PKG.insertlvlreportrequest"; public static final String LVL_INV_GET_MIN_MAX_PROPERTY = "MW_INV_ADHOC_REPORT_PKG.GETMINMAXPROPERTY"; public static final String LVL_INV_VALIDATION_REPORT_REQUEST = "mw_inv_adhoc_report_pkg.getvalidations"; public static final String INV_GET_UNPROCESSED_REQUEST = "MW_INV_ADHOC_REPORT_PKG.getunprocessedrequests"; public static final String INV_INSERT_REPORT_REQUEST = "MW_INV_ADHOC_REPORT_PKG.insertreportrequest"; public static final String OPENING_BRACE = "("; public static final String CLOSING_BRACE = ")"; public static final String NO_RECORDS_FOUND = "No Records found"; public static final String STYLE_CLASS1 = "step1"; public static final String STYLE_CLASS2 = "step2"; public static final String STYLE_CLASS3 = "step3"; public static final String STYLE_CLASS4 = "step4"; public static final String TOTAL_FOR = "Total for"; public static final String EQUALS = "="; public static final 
String COUNTABLE_COMPLAINTS = " Countable Complaints"; public static final String BETWEEN = "Between"; public static final String XML_BROCHURE_FILE_NAME = "brochure"; public static final String XML_HIST_RESV_FILE_NAME = "cr_extract"; public static final String UNDERSCORE_DELIMITER = "_"; public static final String LRP = "LRP"; public static final String RTL = "RTL"; public static final String LRT = "LRT"; public static final String RP = "RP"; public static final String RT = "RT"; /***** Affiliate Report -- END *****/ // Added for Daily Monitor - Start public static final String DM_DEFAULT_RATE_PLAN = "RACK|Y|STANDARD|||RACK RATE|D|Y|RACK|RACK|N|"; public static final String DM_RATE_PROC = "mw_4_pkg.mw_dmrate_p"; public static final String DM_MEAT_PROC = "mw_4_pkg.mw_dmmeat_p"; public static final String DM_RATE_CAT_PROC = "mw_4_pkg.mw_dmrcat_p"; public static final String DM_VIEW_RATE_PLAN_PROC = "mw_4_pkg.mw_dmrt_p"; public static final String DM_HOLD_CXL_PROC = "mw_4_pkg.mw_dmg_p"; public static final String DM_MODIFY_ROOM_STATUS_PROC = "mw_4_pkg.mw_dms_p"; public static final String DM_MODIFY_ROOM_RATES_PROC = "mw_4_pkg.mw_dmr_p"; public static final String DM_NON_ACTIVE_DETAIL_PROC = "MW_CV_TYPES_PKG.GET_INACTIVE_DETAIL"; public static final String DM_DOOVERALLALTER_PROC = "mw_4_pkg.mw_dooverallalter_p"; public static final String DM_DORATECATALTER_PROC = "mw_4_pkg.mw_doratecatalter_p"; public static final String DM_DORATECODEALTER_PROC = "mw_4_pkg.mw_DORATECODEALTER_p"; public static final String DM_DOGUARANTEE_PROC = "mw_4_pkg.mw_doguarantee_p"; public static final String DM_DOSPECIFICALTER_PROC = "mw_4_pkg.mw_dospecificalter_p"; public static final String DM_DOSTATUSALTER_PROC = "mw_4_pkg.mw_dostatusalter_p"; public static final String DM_DORATEALTER_PROC = "mw_4_pkg.mw_doratealter_p"; public static final String SPACE = "&nbsp"; public static final String SPACE_IMAGE = "SPACE"; public static final String STOP_IMAGE = "STOP"; public static final String RATE_CATEGORY = "Rate Category"; public static final String CATEGORY = "Cat"; public static final String FRONT_SLASH = " / "; public static final String DM_POPUP_TOTAL_INV = "totalInvLink"; public static final String DM_POPUP_RATE_CAT = "rateCatLink"; public static final String DM_POPUP_RATE_CD = "rateCdLink"; public static final String DM_POPUP_HOLD_CXL = "holdCxlLink"; public static final String DM_POPUP_ROOM_TYPE = "roomTypeLink"; public static final String DM_POPUP_ROOM_RATES = "roomRatesLink"; public static final String DM_POPUP_NON_ACTIVE = "nonActiveLink"; public static final String DELIMETER_PIPE = "|"; public static final String STARTING_BRACE = " - ("; public static final String HYPERLINK_DELIMETER = "~"; public static final String HYPHEN = "-"; public static final String STAR = "*"; public static final String SELECTED = "SELECTED"; public static final String THREE_SPACE = "&nbsp;&nbsp;&nbsp;"; public static final String TOTAL_INV_CLASS = "TOTAL_INV"; public static final String AMT_TYPE_STANDALONE = "STAND_ALONE"; public static final String AMT_TYPE_UNITOFF = "UNIT_OFF"; public static final String AMT_TYPE_PERCENTOFF = "PERCENT_OFF"; public static final String DAILY_MONITOR_MODNAME = "DAILY_MONITOR"; public static final String DM_TOTALINV_ORIGINATOR_1 = "QO"; public static final String DM_TOTALINV_ORIGINATOR_2 = "DMO"; public static final String DM_RATECAT_ORIGINATOR_1 = "QRCAT"; public static final String DM_RATECAT_ORIGINATOR_2 = "DMRCAT"; public static final String DM_RATEPLAN_ORIGINATOR_1 = "QRCD"; public static final 
String DM_RATEPLAN_ORIGINATOR_2 = "DMRCD"; public static final String DM_HOLDCXL_ORIGINATOR_1 = "QG"; public static final String DM_HOLDCXL_ORIGINATOR_2 = "DMG"; public static final String DM_ROOMALLOC_ORIGINATOR_1 = "QSA"; public static final String DM_ROOMALLOC_ORIGINATOR_2 = "DMSA"; public static final String DM_ROOMSTATUS_ORIGINATOR_1 = "QS"; public static final String DM_ROOMSTATUS_ORIGINATOR_2 = "DMS"; public static final String DM_ROOMRATE_ORIGINATOR_1 = "QR"; public static final String DM_ROOMRATE_ORIGINATOR_2 = "DMR"; public static final String GREATER_SIGN = ">"; public static final String LESSER_SIGN = "<"; public static final String NO_DATA = "blank.png"; public static final String EXTENSION = ".gif"; // Added for Daily Monitor - End public static final String HIDE = "hide"; public static final String BLANK = "blank"; public static final String secretKey = "mE3BrW5B"; public static final String TRBILL = "trbill"; public static final String RTB = "rtb"; public static final String SOFT_CLOSE = "SOFT_CLOSE"; public static final String SFCLS = "SFCLS"; /**************************** MODULE SPECIFIC Constants Ends *****************************/ /****** Duplicated Literals Starts..... To be removed soon ******/ public static final String RESV_YES_TEXT = "Yes"; public static final String RESV_NO_TEXT = "No"; public static final String RESV_Y_TEXT = "Y"; public static final String RESV_N_TEXT = "N"; public static final String AL_ADMIN_FLAG = "N"; public static final String ALCC_ADMIN_FLAG = "Y"; public static final String TL_ADMIN_FLAG = "N"; public static final String TLCC_ADMIN_FLAG = "Y"; public static final String RESV_DISABLE_CHECKBOX = "D"; public static final String RESV_ENABLE_CHECKBOX = "E"; public static final String RESV_ZERO_TEXT = "0"; public static final String RESV_ARRIVAL_TEXT = "ARRIVAL"; public static final String RESV_ARRIVAL_SM_TEXT = "Arrival"; public static final String RESV_BOOKING_TEXT = "Booking"; public static final String RESV_TL_DD_FORMAT = "dd"; public static final String RESV_TL_MMM_FORMAT = "MMM"; public static final String RESV_PIPE = "|"; public static final String ARRIVAL = "Arrival"; public static final String BOOKING = "Booking"; public static final String REP_DENIAL_CAP_ARRIVAL = "ARRIVAL"; public static final String MW_FG_PR_Y = "Y"; public static final String MW_FG_PR_N = "N"; public static final String MEM_SRCH_SPEED = "Y"; public static final String VIEW_TRANS_CONDITION_CODE_Y = "Y"; public static final String VIEW_TRANS_CONDITION_CODE_N = "N"; public static final String RESV_ACTIVE_RESV = "A"; public static final String RESV_ALL_RESV = "B"; public static final String RESV_CANC_RESV = "C"; public static final String LYNX_ARR = "A"; public static final String OPTION_TYPE = "A"; public static final String VIEW_RATE_RULES_CONDITION_CODE_A = "A"; public static final String VIEW_RATE_RULES_CONDITION_CODE_B = "B"; public static final String VIEW_RATE_RULES_CONDITION_CODE_C = "C"; public static final String MODIFY_LEVEL = "A"; public static final String RATE_CATEGORY_PROC_INP_TWO = "A"; public static final String MODIFY_OVERALL_ALLOC_INPUT_TWO = "A"; public static final String ARRIVAL_DATA_TYPE = "A"; public static final String BOOKING_DATA_TYPE = "B"; public static final String PMODEA = "A"; public static final String PMODEC = "C"; public static final String RATE_LEVEL_A = "A"; public static final String MONTH_FORMAT = "MMM"; public static final String YEAR_TYPE = "C"; public static final String RESORT_DOM_INDICATOR = "D"; public static final String 
PMODED = "D"; public static final String TRN_TYPE_F = "F"; public static final String RATE_MONITOR_FLAT_RATE = "F"; public static final String FLAT_PERCENT_F = "F"; public static final String TRN_TYPE_G = "G"; public static final String VIEW_RATE_RULES_CONDITION_CODE_G = "G"; public static final String RESORT_IND_INDICATOR = "I"; public static final String RATE_MONITOR_IMAGE = "I"; public static final String IMAGE = "I"; public static final String MW_FG_PR_M = "M"; public static final String MEM_STATUS_M = "M"; public static final String TRN_TYPE_M = "M"; public static final String TRN_TYPE_P = "P"; public static final String RATE_MONITOR_PERCENT_RATE = "P"; public static final String FLAT_PERCENT_P = "P"; public static final String MEM_STATUS_R = "R"; public static final String FREE_NTS_VOU_REDEEM = "R"; public static final String VIEW_TRANS_CONDITION_CODE_S = "S"; public static final String ACT_S = "S"; public static final String BWR_PT_PTR_RWD = "T"; public static final String FLAG_T = "T"; public static final String EMPTY_SPACES = " "; public static final String MW_FG_ZERO = "0"; public static final String RATE_MONITOR_TYPE_ALL = "ALL"; public static final String ON = "On"; public static final String OFF = "Off"; public static final String MODIFY_OVERALL_ALLOC_SUBMODNAME_ONE = "QO"; public static final String MODIFY_OVERALL_ALLOC_SUBMODNAME_TWO = "DMO"; public static final String MODCATALLOC_SUBMODNAME_ONE = "QRCAT"; public static final String MODCATALLOC_SUBMODNAME_TWO = "DMRCAT"; public static final String MODPLANALLOC_SUBMODNAME_ONE = "QRCD"; public static final String MODPLANALLOC_SUBMODNAME_TWO = "DMRCD"; public static final String GET_GUARANTEE_SUBMODNAME_ONE = "QG"; public static final String MODLEVELALLOC_SUBMODNAME_ONE = "QSA"; public static final String MODROOMALLOC_SUBMODNAME_ONE = "QSA"; public static final String MODSTATUS_SUBMODNAME_ONE = "QS"; public static final String MODBARLEVELROOMTYPESTS_SUBMODNAME_ONE = "QS"; public static final String MODIFY_RATE_LEVEL_SETUP_SUBMODNAME_QS = "QS"; public static final String MODBARLEVELROOMTYPESTS_SUBMODNAME_THREE = "LMO"; public static final String MODBARLEVELROOMTYPESTS_SUBMODNAME_FOUR = "RMO"; public static final String CALL_PROC = "{CALL "; public static final String GET_GUARANTEE_SUBMODNAME_TWO = "LMSA"; public static final String MODROOMALLOC_SUBMODNAME_TWO = "LMSA"; public static final String COMM= "Commissionable"; public static final String PARTCOMM= "Partially Payable"; public static final String NONCOMM= "Non-Commissionable"; public static final String CANCEL= "Cancel"; public static final String NOSHOW= "No Show"; /****** Duplicated Literals Ends..... To be removed soon ******/ public static final String ERRORPOPUPPAGE = "errorpopup"; public static final String RM_MODIFY_RATE_LINK = "Modify Rate"; public static final String ALPHANUMEXP = "[A-Z0-9\\-\\.\\s\\,\\/\\~\\$\\&\\*\\_\\|\\;]+"; public static final String valErrorMsg = "An error has occurred during processing - Please try again."; public static final String NULL = "null"; public static final String LOGOUT_URL_KEY = "LOGOUT_URL"; public static final String ENGLISH = "ENGLISH"; public static final String GC_CHECK_DUPLICATE = "GC_MEMBER_PKG.GC_CHECK_DUPLICATE_P"; public static final String CBM_AFFILIATE_PROC = "mw_ui_reports_pkg.mw_cbmbilling_p";

}

Planet Linux Australia Craige McWhirter: Introduction to Managing OpenStack Via the CLI

Assumptions:

Introduction:

There's a great deal of ugliness in OpenStack, but what I enjoy the most is the relative elegance of driving an OpenStack deployment from the comfort of my own workstation.

Once you've configured your workstation as an OpenStack Management Client, these are some of the commands you can run from your workstation against an OpenStack deployment.

There are client commands for each of the projects, which makes it rather simple to match the commands you want to run to the service you need to work with, e.g.:

$ PROJECT --version

$ cinder --version
1.0.8
$ glance --version
0.12.0
$ heat --version
0.2.9
$ keystone --version
0.9.0
$ neutron --version
2.3.5
$ nova --version   
2.17.0

Getting by With a Little Help From Your Friends

The first slice of CLI joy when using these OpenStack clients is the CLI help that is available for each of the clients. When each client is called with --help, a comprehensive list of options and sub commands is dumped to STDOUT, which is useful, if not entirely expected.

The question you usually find yourself asking is, "How do I use those sub commands?", which is answered by utilising the following syntax:

$ PROJECT help subcommand

This will dump all the arguments for the specified subcommand to STDOUT. I've used the below example for its brevity:

$ keystone help user-create
usage: keystone user-create --name <user-name> [--tenant <tenant>]
                            [--pass [<pass>]] [--email <email>]
                            [--enabled <true|false>]

Create new user

Arguments:
  --name <user-name>    New user name (must be unique).
  --tenant <tenant>, --tenant-id <tenant>
                        New user default tenant.
  --pass [<pass>]       New user password; required for some auth backends.
  --email <email>       New user email address.
  --enabled <true|false>
                        Initial user enabled status. Default is true.

Getting Behind the Wheel

Before you can use these commands, you will need to set some appropriate environment variables. When you completed configuring your workstation as an OpenStack Management Client, you will have created a short file that sets the username, password, tenant name and authentication URL for your OpenStack clients. Now is the time to source that file:

$ source <username-tenant>.sh

I have one of these for each OpenStack deployment, user account and tenant that I wish to work with, and I source the relevant one before I commence a body of work.

Turning the Keystone

Keystone provides the authentication service. Assuming you have appropriate privileges, you will need to:

Create a Tenant (referred to as a Project via the Web UI).

$ keystone tenant-create --name DemoTenant --description "Don't forget to \
delete this tenant"
+-------------+------------------------------------+
|   Property  |               Value                |
+-------------+------------------------------------+
| description | Don't forget to delete this tenant |
|   enabled   |                True                |
|      id     |  painguPhahchoh2oh7Oeth2jeh4ahMie  |
|     name    |             DemoTenant             |
+-------------+------------------------------------+

$ keystone tenant-list
+----------------------------------+----------------+---------+
|                id                |      name      | enabled |
+----------------------------------+----------------+---------+
| painguPhahchoh2oh7Oeth2jeh4ahMie |   DemoTenant   |   True  |
+----------------------------------+----------------+---------+

Create / add a user to that tenant.

$ keystone user-create --name DemoUser --tenant DemoTenant --pass \
Tahh9teih3To --email demo.tenant@example.tld    
+----------+----------------------------------+
| Property |              Value               |
+----------+----------------------------------+
|  email   |     demo.tenant@example.tld      |
| enabled  |               True               |
|    id    | ji5wuVaTouD0ohshoChoohien3Thaibu |
|   name   |             DemoUser             |
| tenantId | painguPhahchoh2oh7Oeth2jeh4ahMie |
| username |             DemoUser             |
+----------+----------------------------------+

$ keystone user-role-list --user DemoUser --tenant DemoTenant                               
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| eiChu2Lochui7aiHu5OF2leiPhai6nai | _member_ | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
+----------------------------------+----------+----------------------------------+----------------------------------+

Provide that user with an appropriate role (defaults to member)

$ keystone user-role-add --user DemoUser --role admin --tenant DemoTenant                   
$ keystone user-role-list --user DemoUser --tenant DemoTenant                               
+----------------------------------+----------+----------------------------------+----------------------------------+
|                id                |   name   |             user_id              |            tenant_id             |
+----------------------------------+----------+----------------------------------+----------------------------------+
| eiChu2Lochui7aiHu5OF2leiPhai6nai | _member_ | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
| ieDieph0iteidahjuxaifi6BaeTh2Joh |  admin   | ji5wuVaTouD0ohshoChoohien3Thaibu | painguPhahchoh2oh7Oeth2jeh4ahMie |
+----------------------------------+----------+----------------------------------+----------------------------------+
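
If you're not sure which roles are available in your deployment, you can list them first:

$ keystone role-list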

Taking a Glance at Images

Glance provides the service for discovering, registering and retrieving virtual machine images. It is via Glance that you will be uploading VM images to OpenStack. Here's how you can upload a pre-existing image to Glance:

Note: If your back end is Ceph then the images must be in RAW format.
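
If your source image is qcow2 and you need RAW, qemu-img can convert it (the file names here are just examples):

$ qemu-img convert -f qcow2 -O raw debian-7-amd64-vm.qcow2 \
debian-7-amd64-vm.raw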

$ glance image-create --name DemoImage --file /tmp/debian-7-amd64-vm.qcow2 \
--progress --disk-format qcow2 --container-format bare \
--checksum 05a0b9904ba491346a39e18789414724                       
[=============================>] 100%
+------------------+--------------------------------------+
| Property         | Value                                |
+------------------+--------------------------------------+
| checksum         | 05a0b9904ba491346a39e18789414724     |
| container_format | bare                                 |
| created_at       | 2014-08-13T06:00:01                  |
| deleted          | False                                |
| deleted_at       | None                                 |
| disk_format      | qcow2                                |
| id               | mie1iegauchaeGohghayooghie3Zaichd1e5 |
| is_public        | False                                |
| min_disk         | 0                                    |
| min_ram          | 0                                    |
| name             | DemoImage                            |
| owner            | gei6chiC3hei8oochoquieDai9voo0ve     |
| protected        | False                                |
| size             | 2040856576                           |
| status           | active                               |
| updated_at       | 2014-08-13T06:00:28                  |
| virtual_size     | None                                 |
+------------------+--------------------------------------+

$ glance image-list
+--------------------------------------+---------------------------+-------------+------------------+------------+--------+
| ID                                   | Name                      | Disk Format | Container Format | Size       | Status |
+--------------------------------------+---------------------------+-------------+------------------+------------+--------+
| mie1iegauchaeGohghayooghie3Zaichd1e5 | DemoImage                 | qcow2       | bare             | 2040856576 | active |
+--------------------------------------+---------------------------+-------------+------------------+------------+--------+

Starting Something New With Nova

Build yourself an environment file with the new credentials:

export OS_USERNAME=DemoUser
export OS_PASSWORD=Tahh9teih3To
export OS_TENANT_NAME=DemoTenant
export OS_AUTH_URL=http://horizon.my.domain.tld:35357/v2.0

Then source it.
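
For example, assuming you saved it as DemoTenant.sh:

$ source DemoTenant.sh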

By now you just want a VM, so let's knock one up for the user and tenant you just created:

$ nova boot --flavor m1.small --image DemoImage DemoVM                                     
+--------------------------------------+--------------------------------------------------+
| Property                             | Value                                            |
+--------------------------------------+--------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                           |
| OS-EXT-AZ:availability_zone          | nova                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-0000009f                                |
| OS-EXT-STS:power_state               | 0                                                |
| OS-EXT-STS:task_state                | scheduling                                       |
| OS-EXT-STS:vm_state                  | building                                         |
| OS-SRV-USG:launched_at               | -                                                |
| OS-SRV-USG:terminated_at             | -                                                |
| accessIPv4                           |                                                  |
| accessIPv6                           |                                                  |
| adminPass                            | W3kd5WzYD2tE                                     |
| config_drive                         |                                                  |
| created                              | 2014-08-13T06:41:19Z                             |
| flavor                               | m1.small (2)                                     |
| hostId                               |                                                  |
| id                                   | 248a247d-83ff-4a52-b9b4-4b3961050e94             |
| image                                | DemoImage (d51001c2-bfe3-4e8a-86d8-e2e35898c0f3) |
| key_name                             | -                                                |
| metadata                             | {}                                               |
| name                                 | DemoVM                                           |
| os-extended-volumes:volumes_attached | []                                               |
| progress                             | 0                                                |
| security_groups                      | default                                          |
| status                               | BUILD                                            |
| tenant_id                            | c6c88f8dbff34b60b4c8e7fad1bda869                 |
| updated                              | 2014-08-13T06:41:19Z                             |
| user_id                              | cb040e80138c4374b46f4d31da38be68                 |
+--------------------------------------+--------------------------------------------------+

Now you can use nova show DemoVM to find the IP address of the host, or access the console via the Horizon dashboard.
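
For example, to list your instances and then pull up the details of the new one:

$ nova list
$ nova show DemoVM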

Krebs on SecurityLorem Ipsum: Of Good & Evil, Google & China

Imagine discovering a secret language spoken only online by a knowledgeable and learned few. Over a period of weeks, as you begin to tease out the meaning of this curious tongue and ponder its purpose, the language appears to shift in subtle but fantastic ways, remaking itself daily before your eyes. And just when you are poised to share your findings with the rest of the world, the entire thing vanishes.

This fairly describes my roller coaster experience of curiosity, wonder and disappointment over the past few weeks, as I’ve worked alongside security researchers in an effort to understand how “lorem ipsum” — common placeholder text on countless Web sites — could be transformed into so many apparently geopolitical and startlingly modern phrases when translated from Latin to English using Google Translate. (If you have no idea what “lorem ipsum” is, skip ahead to a brief primer here).

Admittedly, this blog post would make more sense if readers could fully replicate the results described below using Google Translate. However, as I’ll explain later, something important changed in Google’s translation system late last week that currently makes the examples I’ll describe impossible to reproduce.

CHINA, NATO, SEXY, SEXY

It all started a few months back when I received a note from Lance James, head of cyber intelligence at Deloitte. James pinged me to share something discovered by FireEye researcher Michael Shoukry and another researcher who wished to be identified only as “Kraeh3n.” They noticed a bizarre pattern in Google Translate: When one typed “lorem ipsum” into Google Translate, the default results (with the system auto-detecting Latin as the language) returned a single word: “China.”

Capitalizing the first letter of each word changed the output to “NATO” — the acronym for the North Atlantic Treaty Organization. Reversing the words in both lower- and uppercase produced “The Internet” and “The Company” (the “Company” with a capital “C” has long been a code word for the U.S. Central Intelligence Agency). Repeating and rearranging the word pair with a mix of capitalization generated even stranger results. For example, “lorem ipsum ipsum ipsum Lorem” generated the phrase “China is very very sexy.”

Until very recently, the words on the left were transformed to the words on the right using Google Translate.

Kraeh3n said she discovered the strange behavior while proofreading a document for a colleague, a document that had the standard lorem ipsum placeholder text. When she began typing “l-o-r..e..” and saw “China” as the result, she knew something was strange.

“I saw words like Internet, China, government, police, and freedom and was curious as to how this was happening,” Kraeh3n said. “I immediately contacted Michael Shoukry and we began looking into it further.”

And so the duo started testing the limits of these two words using a mix of capitalization and repetition. Below is just one of many pages of screenshots taken from their results:

The researchers wondered: What was going on here? Has someone outside of Google figured out how to map certain words to different meanings in Google Translate? Was it a secret or covert communications channel? Perhaps a form of communication meant to bypass the censorship erected by the Chinese government with the Great Firewall of China? Or was this all just some coincidental glitch in the Matrix?

For his part, Shoukry checked in with contacts in the U.S. intelligence industry, quietly inquiring if divulging his findings might in any way jeopardize important secrets. Weeks went by and his sources heard no objection. One thing was for sure, the results were subtly changing from day to day, and it wasn’t clear how long these two common but obscure words would continue to produce the same results.

“While Google translate may be incorrect in the translations of these words, it’s puzzling why these words would be translated to things such as ‘China,’ ‘NATO,’ and ‘The Free Internet,’” Shoukry said. “Could this be a glitch? Is this intentional? Is this a way for people to communicate? What is it?”

When I met Shoukry at the Black Hat security convention in Las Vegas earlier this month, he’d already alerted Google to his findings. Clearly, it was time for some intense testing, and the clock was already ticking: I was convinced (and unfortunately, correct) that much of it would disappear at any moment.

A BRIEF HISTORY OF LOREM IPSUM

Cicero.

Search the Internet for the phrase “lorem ipsum,” and the results reveal why this strange phrase has such a core connection to the lexicon of the Web. Its origins in modernity are murky, but according to multiple sites that have attempted to chronicle the history of this word pair, “lorem ipsum” was taken from a scrambled and altered section of “De finibus bonorum et malorum,” (translated: “Of Good and Evil,”) a 1st-Century B.C. Latin text by the great orator Cicero.

According to Cecil Adams, curator of the Internet trivia site The Straight Dope, the text from that Cicero work was available for many years on adhesive sheets in different sizes and typefaces from a company called Letraset.

“In pre-desktop-publishing days, a designer would cut the stuff out with an X-acto knife and stick it on the page,” Adams wrote. “When computers came along, Aldus included lorem ipsum in its PageMaker publishing software, and you now see it wherever designers are at work, including all over the Web.”

This pair of words is so common that many Web content management systems deploy it as default text. Case in point: Lorem Ipsum even shows up on healthcare.gov. According to a story published Aug. 15 in the Daily Mail, more than a dozen apparently dormant healthcare.gov pages carry the dummy text. (Click here if you skipped ahead to this section).

FURTHER TESTING

Things began to get even more interesting when the researchers started adding other words from the Cicero text from which the “lorem ipsum” bit was taken, including: “Neque porro quisquam est qui dolorem ipsum quia dolor sit amet, consectetur, adipisci velit . . .”  (“There is no one who loves pain itself, who seeks after it and wants to have it, simply because it is pain …”).

Adding “dolor” and “sit” and “consectetur,” for example, produced even more bizarre results. Translating “consectetur Sit Sit Dolor” from Latin to English produces “Russia May Be Suffering.” “sit sit dolor dolor” translates to “He is a smart consumer.” An example of these sample translations is below:

Latin is often dismissed as a “dead” language, and whether or not that is fair or true, it seems pretty clear that there should not be Latin words for “cell phone,” “Internet” and other mainstays of modern life in the 21st Century. However, this incongruity helps to shed light on one possible explanation for such odd translations: Google Translate simply doesn’t have enough Latin texts available to have thoroughly learned the language.

In an introductory video titled Inside Google Translate, Google explains how the translation engine works, the sources of the engine’s intelligence, and its limitations. According to Google, its Translate service works “by analyzing millions and millions of documents that have already been translated by human translators.” The video continues:

“These translated texts come from books, organizations like the United Nations, and Web sites from all around the world. Our computers scan these texts looking for statistically significant patterns. That is to say, patterns between the translation and the original text that are unlikely to occur by chance. Once the computer finds a pattern, you can use this pattern to translate similar texts in the future. When you repeat this process billions of times, you end up with billions of patterns, and one very smart computer program.”

Here’s the rub:

“For some languages, however, we have fewer translated documents available, and therefore fewer patterns that our software has detected. This is why our translation quality will vary by language and language pair.”

Still, this doesn’t quite explain why Google Translate would include so many references specific to China, the Internet, telecommunications, companies, departments and other odd couplings in translating Latin to English.

In any case, we may never know the real explanation. Just before midnight, Aug. 16, Google Translate abruptly stopped translating the word “lorem” into anything but “lorem” from Latin to English. Google Translate still produces amusing and peculiar results when translating Latin to English in general.

A spokesman for Google said the change was made to fix a bug with the Translate algorithm (aligning ‘lorem ipsum’ Latin boilerplate with unrelated English text) rather than a security vulnerability.

Kraeh3n said she’s convinced that the lorem ipsum phenomenon is not an accident or chance occurrence.

“Translate [is] designed to be able to evolve and to learn from crowd-sourced input to reflect adaptations in language use over time,” Kraeh3n said. “Someone out there learned to game that ability and use an obscure piece of text no one in their right mind would ever type in to create totally random alternate meanings that could, potentially, be used to transmit messages covertly.”

Meanwhile, Shoukry says he plans to continue his testing for new language patterns that may be hidden in Google Translate.

“The cleverness of hiding something in plain sight has been around for many years,” he said. “However, this is exceptionally brilliant because these templates are so widely used that people are desensitized to them, and because this text is so widely distributed that no one bothers to question why, how and where it might have come from.”

Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: scheduler

This post is in a series covering the discussions at the Juno Nova mid-cycle meetup. This post will cover the current state of play of our scheduler refactoring efforts. The scheduler refactor has been running for a fair while now, dating back to at least the Hong Kong summit (so about 1.5 release cycles ago).

The original intent of the scheduler sub-team's effort was to pull the scheduling code out of Nova so that it could be rapidly iterated on its own, with the eventual goal being to support a single scheduler across the various OpenStack services. For example, the scheduler that makes placement decisions about your instances could also be making decisions about the placement of your storage resources and could therefore ensure that they are co-located as much as possible.

During this process we realized that a big bang replacement is actually much harder than we thought, and the plan has morphed into being a multi-phase effort. The first step is to make the interface for the scheduler more clearly defined inside the Nova code base. For example, in previous releases, it was the scheduler that launched instances: the API would ask the scheduler to find available hypervisor nodes, and then the scheduler would instruct those nodes to boot the instances. We need to refactor this so that the scheduler picks a set of nodes, but then the API is the one which actually does the instance launch. That way, when the scheduler does move out it's not trusted to perform actions that change hypervisor state, and the Nova code does that for it. This refactoring work is under way, along with work to isolate the SQL database accesses inside the scheduler.

I would like to set expectations that this work is what will land in Juno. It has little visible impact for users, but positions us to better solve these problems in Kilo.

We discussed the need to ensure that any new scheduler is at least as fast and accurate as the current one. Jay Pipes has volunteered to work with the scheduler sub-team to build a testing framework to validate this work. Jay also has some concerns about the resource tracker work that is being done at the moment that he is going to discuss with the scheduler sub-team. Since the mid-cycle meetup there has been a thread on the openstack-dev mailing list about similar resource tracker concerns (here), which might be of interest to people interested in scheduler work.

We also need to test our assumption at some point that other OpenStack services such as Neutron and Cinder would be even willing to share a scheduler service if a central one was implemented. We believe that Neutron is interested, but we shouldn't be surprising our fellow OpenStack projects by just appearing with a complete solution. There is a plan to propose a cross-project session at the Paris summit to cover this work.

In the next post in this series we'll discuss possibly the most controversial part of the mid-cycle meetup: the proposal for "slots" for landing blueprints during Kilo.

Tags for this post: openstack juno nova mid-cycle summary scheduler
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: containers

Comment

Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: bug management

Welcome to the next exciting installment of the Nova Juno mid-cycle meetup summary. In the previous chapter, our hero battled a partially complete cells implementation, by using his +2 smile of good intentions. In this next exciting chapter, watch him battle our seemingly never ending pile of bugs! Sorry, now that I'm on to my sixth post in this series I feel like it's time to get more adventurous in the introductions.

For at least the last cycle, and probably longer, Nova has been struggling with the number of bugs filed in Launchpad. I don't think the problem is that Nova has terrible code; it is instead that we have a lot of users filing bugs, and the team working on triaging and closing bugs is small. The complexity of the deployment options with Nova makes this problem worse, and that complexity increases as we allow new drivers for things like different storage engines to land in the code base.

The increasing number of permutations possible with Nova configurations is a problem for our CI systems as well, as we don't cover all of these options and this sometimes leads us to discover that they don't work as expected in the field. CI is a tangent from the main intent of this post though, so I will reserve further discussion of our CI system until a later post.

Tracy Jones and Joe Gordon have been doing good work in this cycle trying to get a grip on the state of the bugs filed against Nova. For example, a very large number of bugs (hundreds) were for problems we'd fixed, but where the bug bot had failed to close the bug when the fix merged. Many other bugs were waiting for feedback from users, but had been waiting for longer than six months. In both those cases the response was to close the bug, with the understanding that the user can always reopen it if they come back to talk to us again. Doing "quick hit" things like this has reduced our open bug count to about one thousand bugs. You can see a dashboard that Tracy has produced that shows the state of our bugs at http://54.201.139.117/nova-bugs.html. I believe that Joe has been working towards moving this onto OpenStack-hosted infrastructure, but this hasn't happened yet.

At the mid-cycle meetup, the goal of the conversation was to try and find other ways to get our bug queue further under control. Some of the suggestions were largely mechanical, like tightening up our definitions of the confirmed (we agree this is a bug) and triaged (and we know how to fix it) bug states. Others were things like auto-abandoning bugs which are marked incomplete for more than 60 days without a reply from the person who filed the bug, or unassigning bugs when the review that proposed a fix is abandoned in Gerrit.

Unfortunately, we have more ideas for how to automate dealing with bugs than we have people writing automation. If there's someone out there who wants to have a big impact on Nova, but isn't sure where to get started, helping us out with this automation would be a super helpful place to start. Let Tracy or me know if you're interested.

We also talked about having more targeted bug days. This was prompted by our last bug day being largely unsuccessful. Instead we're proposing that the next bug day have a really well defined theme, such as moving things from the "undecided" to the "confirmed" state, or similar. I believe the current plan is to run a bug day like this after J-3 when we're winding down from feature development and starting to focus on stabilization.

Finally, I would encourage people fixing bugs in Nova to do a quick search for duplicate bugs when they are closing a bug. I wouldn't be at all surprised to discover that there are many bugs where you can close duplicates at the same time with minimal effort.

In the next post I'll cover our discussions of the state of the current scheduler work in Nova.

Tags for this post: openstack juno nova mid-cycle summary bugs
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Michael's surprisingly unreliable predictions for the Havana Nova release; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues

Comment

,

Planet Linux AustraliaSridhar Dhanapalan: Twitter posts: 2014-08-11 to 2014-08-17

Kelvin ThomsonPolice Association Links Crime to Population Growth

The Victorian Police Association has called for an extra 1880 first response officers to deal with rapidly rising demand on a stretched Police Force. The Sunday Herald Sun has reported police force fears that ghettos and no-go zones could emerge unless Victoria Police gets more police.

Police Association Secretary Detective Ron Iddles said that population growth and crime went hand-in-hand. He said "Population is the main driver of demand for police resources and it is no surprise that crime rates are rising when Victoria's population is growing at the fastest rate in decades".

He is absolutely right. There would not be the increasing levels of crime and the need for more and more police if Victoria was not running such rapid population growth. Furthermore it is unfair that ordinary Victorians, who have not asked for and are not the beneficiaries of rapid population growth, should be expected to pay for its consequences, such as a big increase in police numbers.

It is the population boosters such as the Housing Industry Association, and the property developers who make a killing through population driven rising land prices, who should pay for these costs, not ordinary Victorians.

Geek FeminismThe Hugo Awards!

It was a good year for women in Science Fiction and Fantasy at this year’s Hugo Awards, which were presented this evening in London, at the 2014 WorldCon.

Here are this year’s winners:

I’m thrilled that the Hugo Award for Best Novel went to Ann Leckie’s Ancillary Justice. Leckie has been sweeping the genre’s major awards this year for her compelling tale of vengeance and identity. Ancillary Justice does interesting things with gender, and deftly handles social issues from drug addiction to colonization–wrapping it all up in a richly-detailed galactic epic. I can’t recommend it enough.

She was nominated alongside Charles Stross for Neptune’s Brood, Mira Grant for Parasite, The Wheel of Time, by Robert Jordan and Brandon Sanderson, and Larry Correia’s Warbound, listed in order of votes received.

The award for Best Novella went to Charles Stross’s “Equoid.” It was nominated alongside Catherynne M. Valente’s Six-Gun Snow White, “Wakulla Springs,” by Andy Duncan and Ellen Klages, Brad Torgersen’s “The Chaplain’s Legacy,” and Dan Wells’s The Butcher of Khardov.

Mary Robinette Kowal’s “The Lady Astronaut of Mars” won the Hugo for Best Novelette. It’s available for free at Tor.com if you haven’t had a chance to read it yet, and it’s another one that I can’t recommend highly enough (full disclosure: I’ve been a student in two of Kowal’s writing courses and I think she’s a delightful human being).

“The Lady Astronaut of Mars” was nominated alongside Ted Chiang’s “The Truth of Fact, the Truth of Feeling,” Aliette de Bodard’s “The Waiting Stars,” and Brad Torgersen’s “The Exchange Officers.” The voters decided not to award a fifth-place in the category, voting ‘No Award’ ahead of Theodore Beale’s “Opera Vita Aeterna.”

The award for Best Short Story went to “The Water That Falls on You from Nowhere,” by John Chu. Chu gave a touching acceptance speech, thanking the many people who have supported and encouraged him as he faced racism and heterosexism to pursue his writing career. His work was nominated alongside “Selkie Stories Are for Losers”, by Sofia Samatar, “If You Were a Dinosaur, My Love” by Rachel Swirsky, and “The Ink Readers of Doi Saket” by Thomas Olde Heuvelt.

We Have Always Fought: Challenging the Women, Cattle and Slaves Narrative, by Kameron Hurley won the Hugo for Best Related Work. This is an excellent and well-deserving essay on the history of women in combat, challenging the common narrative that women can’t be heroes of genre fiction because it’s ahistorical. Definitely worth a read if you haven’t seen it yet.

It was nominated alongside Jeff VanderMeer and Jeremy Zerfoss’s Wonderbook: The Illustrated Guide to Creating Imaginative Fiction, Writing Excuses Season 8, by Brandon Sanderson, Dan Wells, Mary Robinette Kowal, Howard Tayler, and Jordan Sanderson, Queers Dig Time Lords: A Celebration of Doctor Who by the LGBTQ Fans Who Love It, edited by Sigrid Ellis and Michael Damian Thomas, and Speculative Fiction 2012: The Best Online Reviews, Essays and Commentary, by Justin Landon and Jared Shurin.

In the Best Graphic Story category, Randall Munroe won for xkcd: Time, a four-month-long comic that was updated at the rate of one frame an hour. He couldn’t make it to London to accept the award, so Cory Doctorow accepted on his behalf–wearing the cape and goggles in which he’s depicted as a character in xkcd.

Also nominated in the category: Saga, Vol 2, by Brian K. Vaughan and Fiona Staples, Girl Genius, Volume 13: Agatha Heterodyne & The Sleeping City, by Phil and Kaja Foglio and Cheyenne Wright, “The Girl Who Loved Doctor Who,” by Paul Cornell and Jimmy Broxton, and The Meathouse Man, by George R. R. Martin and Raya Golden.

The award for Best Dramatic Presentation, Long Form went to Gravity, written by Alfonso Cuarón & Jonás Cuarón and directed by Alfonso Cuarón.

Best Dramatic Presentation, Short Form went to Game of Thrones: “The Rains of Castamere”, written by David Benioff and D.B. Weiss and directed by David Nutter.

Ellen Datlow took home the Hugo for Best Editor, Short Form. She was nominated alongside John Joseph Adams, Neil Clarke, Jonathan Strahan, and Sheila Williams.

In the Best Editor, Long Form category, the award went to Ginjer Buchanan, nominated alongside Sheila Gilbert, Liz Gorinsky, Lee Harris, and Toni Weisskopf.

Julie Dillon won this year’s award for Best Professional Artist, nominated alongside Daniel Dos Santos, John Picacio, John Harris, Fiona Staples, and Galen Dara.

Lightspeed Magazine was this year’s winner for Best Semiprozine, nominated alongside Strange Horizons, Apex Magazine, Interzone, and Beneath Ceaseless Skies.

In the Best Fanzine category, the winner was A Dribble of Ink, nominated alongside The Book Smugglers, Pornokitsch, Journey Planet, and Elitist Book Reviews.

The award for Best Fancast went to SF Signal Podcast, nominated alongside The Coode Street Podcast, Galactic Suburbia Podcast, Tea and Jeopardy, The Skiffy and Fanty Show, Verity!, and The Writer and the Critic.

This year’s award for Best Fan Writer went to Kameron Hurley, author of insightful and incisive feminist commentary on the history and future of SFF as a genre and a community. She was nominated alongside Abigail Nussbaum, Foz Meadows, Liz Bourke, and Mark Oshiro.

The Best Fan Artist award went to Sarah Webb. Also nominated in the category: Brad W. Foster, Mandie Manzano, Spring Schoenhuth, and Steve Stiles.

Worldcon also presents the John W. Campbell Award for Best New Writer, which is not a Hugo, but is administered with the Hugos. This year’s winner was Sofia Samatar.

Samatar was nominated alongside Wesley Chu, Ramez Naam, Benjanun Sriduangkaew, and Max Gladstone. I’m thrilled to see Samatar go home with a Hugo, but I’m also pleased that fandom chose to recognize the talents of so many writers of color this year. If you haven’t checked out their work yet, it comes highly recommended. I’m personally really enjoying Samatar’s A Stranger In Olondria.

For more information on this year’s Hugo voting, check out the LonCon 3 site’s detailed vote breakdown [PDF link].


I wrote previously about the attempt to stuff the ballot box for political reasons this year. I’m glad that fandom saw fit to reject this politicization of its biggest award, but since I’ve already seen folks trolling about ‘social justice warriors,’ this is your reminder that we have a strictly-enforced comment policy.

Planet DebianAndrew Pollock: [tech] Solar follow up

Now that I've had my solar generation system for a little while, I thought I'd write a follow up post on how it's all going.

Energex came out a week ago last Saturday and swapped my electricity meter over for a new digital one that measures grid consumption and excess energy exported. Prior to that point, it was quite fun to watch the old analog meter going backwards. I took a few readings after the system was installed, through to when the analog meter was disconnected, and the meter had a value 26 kWh lower than when I started.

I've really liked how the excess energy generated during the day has effectively masked any relatively small overnight power consumption.

Now that I have the new digital meter things are less exciting. It has a meter measuring how much power I'm buying from the grid, and how much excess power I'm exporting back to the grid. So far, I've bought 32 kWh and exported 53 kWh excess energy. Ideally I want to minimise the excess because what I get paid for it is about a third of what I have to pay to buy it from the grid. The trick is to try and shift around my consumption as much as possible to the daylight hours so that I'm using it rather than exporting it.

On a good day, it seems I'm generating about 10 kWh of energy.

I'm still impatiently waiting for PowerOne to release their WiFi data logger card. Then I'm hoping I can set up something automated to submit my daily production to PVOutput for added geekery.

Planet DebianJamie McClelland: Getting to know systemd

Update 2014-08-20: acpid needs tweaking. See update section below.

Somehow both my regular work laptop and home entertainment computers (both running Debian Jessie) were switched to systemd without me noticing. Judging from my dpkg.log it may have happened quite a while ago. I'm pretty sure that's a compliment to the backwards compatibility efforts made by the systemd developers and a criticism of me (I should be paying closer attention to what's happening on my own machines!).

In any event, I've started trying to pay more attention now - particularly learning how to take advantage of this new software. I'll try to keep this blog updated as I learn. For now, I have made a few changes and discoveries.

First - I have a convenient bash wrapper I use that both starts my OpenVPN client to a samba server and then mounts the drive. I only connect when I need to and rarely do I properly disconnect (the OpenVPN connection does not start automatically). So, I've written the script to carefully check if my openvpn connection is present and either restart or start depending on the results.

I had something like this:

# quote the pattern so the shell can't glob it; [o] stops egrep matching itself
if ps -eFH | egrep '[o]penvpn'; then
  sudo /etc/init.d/openvpn restart
else
  sudo /etc/init.d/openvpn start
fi

One of the advertised advantages of systemd is the ability to more accurately detect if a service is running. So, first I changed this to:

if systemctl -q is-active openvpn.service; then
  sudo systemctl restart openvpn.service
else
  sudo systemctl start openvpn.service
fi

However, after reviewing the man page I realized I can shorten it further to simply:

  sudo systemctl restart openvpn.service

According to the man page, restart means:

Restart one or more units specified on the command line. If the units are not
running yet, they will be started.

After discovering this meaning for "restart" in systemd, I tested and realized that "restart" works the same way for openvpn using the old sysv style init system. Oh well. At least there's a man page and a stronger guarantee that it will work with all services, not just the ones that happen to respect that convention in their init.d scripts.

The next step was to disable openvpn on start up. I confess, I never bothered to really learn update-rc.d. Every time I read the manpage I ended up throwing up my hands and renaming symlinks by hand. In the case of openvpn I had previously edited /etc/default/openvpn to indicate that "none" of the virtual private networks should be started.

Now, I've returned that file to the default configuration and instead I ran:

systemctl disable openvpn.service
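
You can confirm the change, and whether the service is currently running, with:

$ systemctl is-enabled openvpn.service
$ systemctl is-active openvpn.service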

UPDATES

2014-08-20: I've recently noticed strange behavior when I wake my laptop. It seems to sometimes go right back to sleep. After doing some digging I traced the problem to some customizations I have made to my laptop's acpid behavior, combined with systemd taking over some acpi events.

Up to now, I have created my own /etc/acpi files so I have full control over the acpi events. In particular, I don't want my laptop to suspend when I close the lid. I only want it to suspend when I press the suspend key. And, when it does suspend, I want it to run my own personal suspend script so I can do things like lock the screen and restart tint2.

I've found that systemd launches its own acpi event monitoring that ignores my /etc/acpi rules (the systemd "unit" that monitors events is called acpid.socket, which exists in addition to acpid.service). The systemd reaction to events is controlled by systemd-logind.service, which has a configuration file: /etc/systemd/logind.conf. By default, systemd-logind.service will put my laptop to sleep when the lid is closed and when the suspend button is pushed. systemd seems to get the signal first, putting the laptop to sleep. After I wake it up, acpid gets the signal - so it goes right back to sleep.

Reading man logind.conf helps. I was able to restore my desired behavior by adding these lines to /etc/systemd/logind.conf:

HandleSuspendKey=ignore
HandleLidSwitch=ignore

Then: sudo systemctl restart systemd-logind.service.
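
To double-check that logind came back up with the new settings applied:

$ systemctl status systemd-logind.service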

Sociological ImagesSunday Fun: The Best Thesis Defense…

…is a good offense.

Congratulations to all the August thesis and dissertation defenders out there!

And thanks to xkcd for the ongoing higher ed humor.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Don MartiOriginal bug, not original sin

Ethan Zuckerman calls advertising The Internet's Original Sin. But sin is overstating it. Advertising has an economic and social role, just as bacteria have an important role in your body. Many kinds of bacteria can live on and around you just fine, and only become a crisis when your immune system is compromised.

The bad news is that the Internet's immune system is compromised. Quinn Norton summed it up: Everything is Broken. The same half-assed approach to security that lets random trolls yell curse words on your baby monitor is also letting a small but vocal part of the ad business claim an unsustainable share of Internet-built wealth at the expense of original content.

But email spam didn't kill email, and surveillance marketing won't kill the Web. Privacy tech is catching up. AdNews has a good piece on the progress of ad blocking, but I'm wondering about how accurate any measurement of ad blocking can be in the presence of massive fraud. Fraudulent traffic is a big part of the picture, and nobody has an incentive to run an ad blocker on that. The results from the combination of fraud and use of privacy tools are unpredictable. Paywalls are the obvious next step, but there are ways for sites to work with privacy tools, not against them.

What Ethan calls pay-for-performance is the smaller, and less valuable, part of advertising. Online ads are stuck in that niche not so much because of original sin, but because of an original bug. When the browsers of Dot-Com Boom 1.0 came out in a rush with support for privacy antifeatures such as third-party tracking, the Web excluded itself from lucrative branding or signaling advertising. The Web became a direct-response medium like email spam or direct mail. Bob Hoffman said, The web is a much better yellow pages and a much worse television. But that's not inherent in the medium. The Web is able to carry better and more signalful ads as the privacy level goes up. That's a matter of fixing the privacy bugs that allow for tracking, not a sin to expiate.

Recent news, from Kate Tummarello at The Hill: Tech giants at odds over Obama privacy bill. Microsoft is coming in on one side, and a group of mostly surveillance marketing firms calling itself the united voice of the Internet economy is on the other. There's no one original sin here, but there's plenty of opportunity in fixing bugs.

Bonus links

Jeff Jarvis: Absolution? Hell, no

Jason Dorrier: Burger Robot Poised to Disrupt Fast Food Industry

BOB HOFFMAN: Confusing Gadgetry With Behavior

Planet Linux AustraliaAndrew Pollock: [life] Day 198: Dentist, play date and some Science Friday

First up on Friday morning was Zoe's dentist appointment. When Sarah dropped her off, we jumped in the car straight away and headed out. It's a long way to go for a 10 minute appointment, but it's worth it for the "Yay! I can't wait!" reaction I got when I told her she was going a few days prior.

Having a positive view of dental care is something that's very important to me, as teeth are too permanent to muck around with. The dentist was very happy with her teeth, and sealed her back molars. Apparently it's all the rage now, as these are the ones that hang around until she's 12.

Despite this being her third appointment with this dentist, Zoe was feeling a bit shy this time, so she spent the whole time reclining on me in the chair. She otherwise handled the appointment like a trooper.

After we got home, I had a bit of a clean up before Zoe's friend from Kindergarten, Vaeda and her Mum came over for lunch. The girls had a good time playing together really nicely for a couple of hours afterwards.

I was flicking through 365 Science Experiments, looking for something physics-related for a change, when I happened on the perfect thing. The girls were already playing with a bunch of balloons that I'd blown up for them, so I just charged one up with static electricity and made their hair stand on end, and also picked up some torn up paper. Easy.

After Vaeda left, we did the weekend grocery shop early, since we were going away for the bulk of the weekend.

It was getting close to time to start preparing dinner after that. Anshu came over for dinner and we all had a nice dinner together.

Geek FeminismA Linkspam in Time (17 August 2014)

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianFrancesca Ciceri: Adventures in Mozillaland #4

Yet another update from my internship at Mozilla, as part of the OPW.

An online triage workshop

One of the most interesting things I've done during the last weeks has been to hold an online Bug Triage Workshop on the #testday channel at irc.mozilla.org.
That was a first for me: I had been a moderator for a series of training sessions on IRC organized by Debian Women, but never a "speaker".
The experience turned out to be a good one: creating the material for the workshop had me basically summarize (not too much, I'm way too verbose!) all that I've learned in these past months about triaging in Mozilla, and speaking of it on IRC was a sort of challenge to my usual shyness.

And I was so very lucky that a participant was able to reproduce the bug I picked as an example, thus confirming it! How cool is that? ;)

The workshop was about the very basics of triaging for Firefox, and we mostly focused on a simplified lifecycle of bugs, a guided tour of bugzilla (including the quicksearch and the advanced one, the list view, the individual bug view) and an explanation of the workflow of the triager. I still have my notes, and I plan to upload them to the wiki, sooner or later.

I'm pretty satisfied with the outcome: the only regret is that the promotion wasn't enough, so we had few participants.
Will try to promote it better next time! :)

about:crashes

Another thing that had me quite busy in the last weeks was to learn more about crashes and stability in general.
If you are unfortunate enough to experience a crash with Firefox, you're probably familiar with the Mozilla Crash Reporter dialog box asking you to submit the crash report.

But how does it work?

From the client side, Mozilla uses Breakpad as a set of libraries for crash reporting. The Mozilla-specific implementation adds to that a crash-reporting UI, a server to collect and process crash report data (and particularly to convert raw dumps into readable stack traces) and a web interface, Socorro, to view and parse crash reports.

Curious about your crashes? The about:crashes page will show you a list of the submitted and unsubmitted crash reports. (And by the way, try to type about:about in the location bar, to find all the super-secret about pages!)

For the submitted ones clicking on the CrashID will take you to the crash report on crash-stats, the website where the reports are stored and analyzed. The individual crash report page on crash-stats is awesome: it shows you the reported bug numbers if any bug summaries match the crash signature, as well as many other information. If crash-stats does not show a bug number, you really should file one!

The CrashKill team works on these reports, tracking the general stability of the various channels, triaging the top crashes, and ensuring that the crash bugs have enough information and are reproducible and actionable by the devs.
The crash-stats site is a mine of information: take a look at the Top Crashes for Firefox 34.0a1.
If you click on an individual crash, you will see lots of details about it: just on the first tab ("Signature Summary") you can find a breakdown of the crashes by OS, by graphic vendors or chips or even by uptime range.
A very useful one is the number of crashes per install, so that you know how widespread the crashing is for that particular signature. You can also check the comments the users have submitted with the crash report, on the "Comments" tab.

One and Done tasks review

Last week I helped the awesome group of One and Done developers, doing some reviewing of the tasks pages.

One and Done is a brilliant idea to help people contribute to the Mozilla QA teams.
It's a website offering the user a series of tasks of different difficulty and on different topics to contribute to Mozilla. Each task is self-contained and can last a few minutes or be a bit more challenging. The team has worked hard on developing it and they have definitely done an awesome job! :)

I'm not a coding person, so I just know that they're using Django for it, but if you are interested in all the dirty details take a look at the project repository. My job has been only to check all the existing tasks and verify that the descriptions and instructions are correct, that the tasks are properly tagged and so on. My impression is that this is an awesome tool, well written and well thought out, with a lot of potential for helping people in their first steps into Mozilla. Something that other projects should definitely imitate (cough Debian cough).

What's next?

Next week I'll be back to working on bugs. I kind of love bugs, I have to admit it. And not squashing them: not being a coder makes me less of a violent person toward digital insects. Herding them is enough for me. I'm feeling extremely non-violent toward bugs.

I'll try to help Liz with the Test Plan for Firefox 34, on the triaging/verifying bugs part.
I'll also try to triage/reproduce some accessibility bugs (thanks Mario for the suggestion!).

Planet DebianAndreas Metzler: progress

The GnuTLS28 transition is making progress, more than 60% done:

GnuTLS progress diagram

Thanks to a national holiday combined with rainy weather this should look even better soon:
ametzler@argenau:~$ w3m -dump https://ftp-master.debian.org/deferred.html | grep changes | wc -l
26
ametzler@argenau:~$ w3m -dump https://ftp-master.debian.org/deferred.html | grep Metzler | wc -l
18

Planet DebianMatt Brown: GPG Key Transition

Firstly, thanks to all who responded to my previous rant. It turns out exactly what I wanted does exist in the form of an ID-000 format smartcard combined with a USB reader. Perfect. No idea why I couldn’t find that on my own prior to ranting, but very happy to have found it now.

Secondly, now that I’ve got my keys and management practices in order, it is time to begin transitioning to my new key.

Click this link to find the properly signed, full transition statement.

I’m not going to paste the full statement into this post, but my new key is:

pub   4096R/A48F065A 2014-07-27 [expires: 2016-07-26]
      Key fingerprint = DBB7 64A1 797D 2E21 C799  3CD7 A628 CB5F A48F 065A
uid                  Matthew Brown <matt@mattb.net.nz>
uid                  Matthew Brown <mattb@debian.org>
sub   4096R/1937883F 2014-07-27 [expires: 2016-07-26]

If you signed my old key, I’d very much appreciate a signature on my new key, details and instructions in the transition statement. I’m happy to reciprocate if you have a similarly signed transition statement to present.
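
If you'd like to do so, the usual workflow looks roughly like this (a sketch - it assumes your gpg is already configured with a keyserver):

gpg --recv-keys A48F065A    # fetch the new key
gpg --sign-key A48F065A     # sign it once you've verified the transition statement
gpg --send-keys A48F065A    # publish the signature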

,

LongNowDaniel Kahneman: Thinking Fast and Slow — A Seminar Flashback

In August 02013 Nobel Prize-winning psychologist Daniel Kahneman spoke for Long Now about two types of thinking he’s identified and their implications. The pioneer of behavioral economics gave an insightful and humor-filled presentation on how we think and make decisions. Kahneman contrasted his pessimism with Stewart Brand’s characteristic optimism in their on-stage conversation after the talk (which was ended prematurely by a fire alarm). Twice a month we highlight a Seminar About Long-term Thinking (SALT) from our archives.

Video of the 12 most recent Seminars is free for all to view. Thinking Fast and Slow is a recent SALT talk, free for public viewing until September 02014. SALT audio is free for everyone on our Seminar pages and via podcast. Long Now members can see all Seminar videos in HD.

From Stewart Brand’s summary of this Seminar (in full here):

Before a packed house, Kahneman began with the distinction between what he calls mental “System 1”—fast thinking, intuition—and “System 2”—slow thinking, careful consideration and calculation. System 1 operates on the illusory principle: What you see is all there is. System 2 studies the larger context. System 1 works fast (hence its value) but it is unaware of its own process. Conclusions come to you without any awareness of how they were arrived at. System 2 processes are self-aware, but they are lazy and would prefer to defer to the quick convenience of System 1.

Daniel Kahneman is professor emeritus of psychology and public affairs at Princeton University’s Woodrow Wilson School. His books include the best selling Thinking, Fast and Slow (02011). He won the Nobel Memorial Prize in Economics for his work in prospect theory in 02002 and was awarded the Presidential Medal of Freedom in 02014.

Daniel Kahneman

The Seminars About Long-term Thinking series began in 02003 and is presented each month live in San Francisco. It is curated and hosted by Long Now’s President Stewart Brand. Seminar audio is available to all via podcast.

Everyone can watch full video of the last 12 Long Now Seminars (including this Seminar video until late June 02014). Long Now members can watch the full ten years of Seminars in HD. Membership levels start at $8/month and include lots of benefits.

You can join Long Now here.

Planet DebianIan Donnelly: How-To: Integrate elektra-merge Into a Debian Package

Hi Everybody,

So I already explained that we decided to go in a new direction and patch ucf to allow automatic configuration file merging with any custom command. Today I wanted to explain how to use ucf’s new `--three-way-merge-command` functionality in conjunction with Elektra, in order to utilize Elektra’s powerful tools to allow automatic three-way merges of your package’s configuration during upgrades, in a way that is more reliable than a diff3 merge. This guide assumes that you are familiar with ucf already and are just trying to implement the --three-way-merge-command option using Elektra.

The addition of the --three-way-merge-command option was a part of my Google Summer of Code Project. This option takes the form:

--three-way-merge-command command [New File] [Destination]

Where command is the command you would like to use for the merge. New File and Destination are the same as always.

We added a new script to Elektra called elektra-merge for use with this new option in ucf. This script acts as a liaison between ucf and Elektra, allowing a regular ucf command to run a kdb merge even though ucf commands only pass New File and Destination whereas kdb merge requires ourpath, theirpath, basepath, and resultpath. Since ucf already performs a three-way merge, it keeps track of all the necessary files to do so, even though it only takes in New File and Destination.

In order to use elektra-merge, the current configuration file must be mounted to KDB to serve as ours in the merge. The script automatically mounts theirs, base, and result using the kdb remount command so that they use the same backend as ours (since all versions of the same file should use the same backend anyway); this way users don’t need to worry about specifying the backend for each version of the file. Then the script attempts a merge on the newly mounted KeySets. Once this is finished, successfully or not, the script finishes by unmounting all but our copy of the file to clean up KDB. Then, if the merge was successful, ucf will replace ours with the result, providing the package with an automatically merged configuration which will also be updated in KDB itself.
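
As a rough illustration, once all four versions of the file are mounted, the underlying merge boils down to a single kdb call; the system/mypkg/... mountpoints below are hypothetical placeholders, not paths that elektra-merge actually hardcodes:

kdb merge system/mypkg/ours system/mypkg/theirs system/mypkg/base system/mypkg/result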

Additionally, we added two other scripts, elektra-mount and elektra-umount, which act as simple wrappers for kdb mount and kdb umount. They work identically but are more script-friendly.

The full command to use elektra-merge to perform a three-way merge on a file managed by ucf is:

ucf --three-way --three-way-merge-command elektra-merge [New File] [Destination]

That’s it! As described above, elektra-merge is smart enough to run the whole merge off of the information from that command, utilizing the new kdb remount command to do so.

Integrating elektra-merge into a package that already uses ucf is very easy! In postinst you should have a line similar to:

ucf [New File] [Destination]

or perhaps:

ucf --three-way [New File] [Destination]

All you must do is, in postinst when run with the configure option, mount the config file to Elektra:

kdb elektra-mount [New File] [Mounting Destination] [Backend]

Next, you must update the line containing ucf with the options --three-way and --three-way-merge-command like so:

ucf --three-way --three-way-merge-command elektra-merge [New File] [Destination]

Then, in your postrm script, during a purge, you must umount the config file before deleting it:

kdb elektra-umount [Name]

That’s it! With those small changes you can use Elektra to perform automatic three-way merges on any files that your package uses ucf to handle!

I just wanted to show a quick example, below is a diff representing the changes we made to the samba-common package in order to allow automatic configuration merging for smb.conf using Elektra. We chose this package because it already
uses ucf to handle smb.conf but it frequently requires users to manually merge changes across versions. Here is the patch showing what we changed:

diff samba_orig/samba-3.6.6/debian/samba-common.postinst samba/samba-3.6.6/debian/samba-common.postinst
92c92,93
< ucf --three-way --debconf-ok "$NEWFILE" "$CONFIG"
---
> kdb elektra-mount "$CONFIG" system/samba/smb ini
> ucf --three-way --three-way-merge-command elektra-merge --debconf-ok "$NEWFILE" "$CONFIG"
Only in samba/samba-3.6.6/debian/: samba-common.postinst~
diff samba_orig/samba-3.6.6/debian/samba-common.postrm samba/samba-3.6.6/debian/samba-common.postrm
4a5
> kdb elektra-umount system/samba/smb

As you can see, all we had to do was add the line to mount smb.conf during install, update the ucf command to include the new --three-way-merge-command option, and unmount system/samba/smb during a purge. It really is that easy!

Sincerely,
Ian S. Donnelly

Planet DebianDaniel Pocock: WebRTC: what works, what doesn't

With the release of the latest rtc.debian.org portal update, there are numerous improvements but there are still some known problems too.

The good news is that if you have a web browser, you can probably make successful WebRTC calls from one developer to another without any need to install or configure anything else.

The bad news is that not every permutation of browser and client will work. Here I list some of the limitations so people won't waste time on them.

The SIP proxy supports any SIP client

Just about any SIP client can connect to the proxy server and register. This does not mean that every client will be able to call each other. Generally speaking, modern WebRTC clients will be able to call each other. Standalone softphones or deskphones will call each other. Calling from a normal softphone or deskphone to a WebRTC browser, or vice-versa, will not work though.

Some softphones, like Jitsi, have implemented most of the protocols to communicate with WebRTC but have yet to put the finishing touches on it.

Chat should just work for any combination of clients

The new WebRTC frontend supports SIP chat messaging.

There is no presence or buddy list support yet.

You can even use a tool like sipsak to accept or send SIP chats from a script.
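
For instance, sending a one-off chat from the shell might look something like this (the target address is just a placeholder):

sipsak -M -B "Hello from a script" -s sip:somebody@rtc.debian.org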

Chat works for any client new or old. Although a WebRTC user can't call a softphone user, for example, they can send chats to each other.

WebRTC support in Iceweasel 24 on wheezy systems is very limited

On a wheezy system, the most recent Iceweasel update is version 24.7.

This version supports most of WebRTC but does not support TURN relay servers to help you out of a NAT network.

If you call between two wheezy machines on the same NAT network it will work. If the call has to traverse a NAT boundary it will not work.

Wheezy users need to either download a newer Firefox version or use Chromium.

JsSIP doesn't handle ICE elegantly

Interactive Connectivity Establishment (ICE, RFC 5245) is meant to prevent calls from being answered with missing audio or video streams.

ICE is a mandatory part of WebRTC.

When correctly implemented, the JavaScript application will exchange ICE candidates and run the connectivity checks before alerting anybody that a call is ringing. If the checks fail (for example, with Iceweasel 24 and NAT), the caller should be told the call can't be made and the callee shouldn't be disturbed at all.

JsSIP is not operating in this manner though. It alerts the callee before telling the browser to start the connectivity checks. Then it even waits for the callee to answer. Only then does it tell the browser to start checking connectivity. This is not a fault with the ICE standard or the browser, it is an implementation problem.

Therefore, until this is fully fixed, people may still see some calls that appear to answer but don't have any media stream. After this is fixed, such calls really will be a thing of the past.

Debian RTC testing is more than just a pipe dream

Although these glitches are not ideal for end users, there is a clear roadmap to resolve them.

There is also a growing collection of workarounds to minimize the inconvenience. For example, JSCommunicator has a hack to detect when somebody is using Iceweasel 24 and just refuse to make the call. See the option require_relay_candidate in the config.js settings file. This also ensures that it will refuse to make a call if the TURN server is offline. Better to give the user a clear error than a call without any audio or video stream.

require_relay_candidate is enabled on freephonebox.net because it makes life easier for end users. It is not enabled on rtc.debian.org because some DDs may be willing to tolerate this issue when testing on a local LAN.

To find out more about practical integration of WebRTC into free software solutions, consider coming to my talk at xTupleCon in October.

Planet DebianMatthias Klumpp: AppStream/DEP-11 Debian progress

There hasn’t been a progress report on DEP-11 for some time, but that doesn’t mean no work was going on.

DEP-11 is Debian’s implementation of AppStream, as well as an effort to enhance the metadata available about software in Debian. While AppStream was initially only about applications, DEP-11 was designed with a larger scope, to collect data about libraries, binaries and things like Python modules. Since AppStream 0.6, DEP-11 and AppStream have essentially the same scope, with the difference that DEP-11 metadata is described in YAML, while official AppStream data is XML. That was due to a request by our ftpmasters team, which doesn’t like XML (a format which, unlike YAML, is not otherwise used in Debian). But this doesn’t mean that people will have to deal with the YAML file format: the libappstream library will simply take DEP-11 data as another data source for its Xapian database, allowing anything using libappstream to access that data just like the XML-based data. Richard’s libappstream-glib will also receive support for the DEP-11 format soon, filling its in-memory data cache and enabling the use of GNOME-Software on Debian.

So, what has been done so far? Over the past months, my Google Summer of Code student, Abhishek Bhattacharjee, has been working hard to integrate DEP-11 support into dak, the Debian Archive Kit, which maintains the whole Debian archive. The result will be an additional metadata table in our internal Postgres database, storing detailed information about the software available in a Debian package, as well as “Components-<arch>.yml.gz” files in the Debian repositories. Dak will also produce an application icon-cache and a screenshots repository. During the time of the SoC, Abhishek focused mainly on the applications part of things, and less on the other components (like extracting data about Python modules or libraries) – these things can easily be implemented later.
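
Once these files land in the archive, peeking at the raw metadata should be as simple as something like the following (the exact archive path is not final yet, so treat this as a sketch):

curl -s http://ftp.debian.org/debian/dists/jessie/main/Components-amd64.yml.gz | zcat | head -n 20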

The remaining steps will be to polish the code and make it merge-ready for Debian’s dak (as soon as it has received enough testing, we will likely give it a try on the Tanglu Debian derivative). Following that, Apt will be extended to fetch the DEP-11 data on-demand on systems where it is useful (which is currently mostly desktop systems) – if you want to save a little bit of space, you will be able to disable downloading this extra metadata in Apt. From there, libappstream will take the data for its Xapian database. This will lead to the removal of the much-hated (from the ftpmasters’ and maintainers’ side) app-install-data package, which has not been updated for two years and only contains a small fraction of the metadata provided by DEP-11.

What Debian will ultimately gain from this effort is support for software centers like GNOME-Software, and improved support for tools like Apper and Muon in displaying applications. Long-term, with more metadata available, it would be cool to add support for it to “specialized package managers” like Python’s pip, npm or gem, to make them fetch information about available distribution software and install that instead of their own copies from 3rd-party repositories, where possible. This should ultimately lead to less code duplication across distributions and will likely result in fewer security issues, since the officially maintained and integrated distribution packages can be used where possible. This is no attempt to make tools like pip obsolete, but an attempt to have the different tools installing software on your machine communicate better, instead of creating parallel worlds in terms of software management. Another nice side effect of more metadata will be the option to search the software repositories for tools handling a given mimetype (in case you can’t open a file), smart software centers that install missing firmware, and automatic suggestions for developers about which software they need to install in order to build a specific package. Also, the data allows us to match software across distributions; on that front I will have some news soon (not sure how soon though, as I am currently in thesis-writing mode, and therefore don’t have much spare time). Since the goal is to have these features available on all distributions supporting AppStream, it will take longer to realize – but we are on a good way.

So, if you want some more information about my student’s awesome work, you can read his blog post about it. He will also be at DebConf’14 (Portland). (I can’t make it this time, but I surely won’t miss the next DebConf.)

Sadly, I only see a very small chance to have the basic DEP-11 stuff land in time for Jessie (lots of review work needs to be done, and some more code needs to be written), but we will definitely have it in Jessie+1.

A small example of how this data will look can be found here – a larger, actual file is available here. Any questions and feedback are highly appreciated.

Sociological ImagesSaturday Stat: The Invention of the “Illegal Immigrant”

Citing immigration scholar Francesca Pizzutelli, Fabio Rojas explains that the phrase “illegal immigrant” wasn’t a part of the English language before the 1930s. More often, people used the phrase “irregular immigrant.” Instead of an evaluative term, it was a descriptive one, referring to people who moved around and often crossed borders for work.

[Graph: frequency of the phrases “irregular immigrant,” “illegal immigrant,” and “illegal alien” in English-language books over time]

Rojas points out that the language began to change after anti-immigration laws were passed by Congress in the 1920s.  The graph above also reveals a steep climb in both “illegal immigrant” and “illegal alien” beginning in the ’70s.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianBits from Debian: Debian turns 21!

Today is Debian's 21st anniversary. Plenty of cities are celebrating Debian Day. If you are not close to any of those cities, there's still time for you to organize a little celebration!

Debian 21

Happy 21st birthday Debian!

,

CryptogramFriday Squid Blogging: Te Papa Museum Gets a Second Colossal Squid

That's two more than I have. They're hoping it's a male.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

TEDOn origami, Alzheimer’s & kindness: Global health expert Alanna Shaikh rethinks preparing for dementia


Alanna Shaikh at TED2013, about six months after giving a powerful talk about Alzheimer’s disease and the three strategies she was putting in place in case she should ever get it. Photo: Ryan Lash/TED

Global health expert Alanna Shaikh gave an unexpected and moving talk at TEDGlobal 2012, called “How I’m preparing to get Alzheimer’s.” In it, she told the story of her father’s struggle with the disease, and outlined some strategies she’d devised in case dementia struck her later in life, too. The TED Blog was curious: How is her experiment going?

While most of Shaikh’s goals haven’t exactly gone as planned, in the process, she’s had a lightbulb moment about how to think about dementia—and learned to be a better person, to boot. Here, a conversation about the relationship between kindness and health, and living an enjoyable life in the present while planning for the future.

What have you been up to since your talk went live two years ago?

I talked about three things I was trying to do to prepare for Alzheimer’s: physically preparing by becoming stronger and more flexible, cultivating hobbies that would stick with me through the illness and trying to change who I am to be better and nicer. What really succeeded, weirdly enough, is I honestly think I am a better person. By deliberately choosing to be kind over and over again, it seems to now come naturally to me.

What were you like before?

Very judgmental and critical. I was committed to being a good person, but I wasn’t particularly worried about being a nice person. One of my friends in college told me that his favorite thing about me was I always had something bitchy to say about someone. This is someone who loves me—he meant it as a positive. I don’t think anybody who’s known me in the last couple of years would say that now. Dealing with my dad made me realize how much nice actually matters. And kindness. I had never really thought about what kindness and niceness have to do with each other.

I’ve never thought about that. What is the difference between nice and kind?

Being nice is not making a fuss and letting things happen to you. Not protesting. Whereas kindness is about deliberately giving the best of yourself, and deliberately looking for ways to find the positive in things. The example I give sometimes is this: the office building I used to work in didn’t have enough elevators. So if you wanted to leave the building at any time between 5 and 6pm, it was just packed—the elevator would stop on every floor, it would take forever and it was all sweaty. There were these people on the third floor, and they were always laughing and flirting and holding the elevator for each other, and you’d end up crammed in the corner for five minutes while you waited for them to stop saying goodbye to each other and hugging and whatever. At the beginning, I was like, “Those damned idiots on the third floor—why can’t they just take the stairs?” And then I started deliberately thinking, “No, these are young people enjoying life.” And so I started to think of them as the happy people on the third floor, and then realized that they are just thinking about their lives, not necessarily thinking too much about what it meant to be crammed into the elevator while they said goodbye. I started to try to take that approach to everything, to really look for the positive perspective.

Sounds like generosity of spirit, in a way.

I guess so. Because I’m an expat, I move a lot. So each new place you live is a chance to be the person you are right then. I realized that people who know me where I’m living now in Kyrgyzstan think of me as this very funny, positive, kind person. I love that. It doesn’t feel fake. I think I really am that person now, and I love that I was able to do that. It was the hardest thing for me, thinking, “I can pretend that I’m nice, but can I really become nice?”

Have you thought about kindness and its role in healing and health? Do you think it’s better for us to be kind?

I’ve never thought about that before, but I’m sure it is. For one thing, I think it takes a lot less emotional energy to be kind. Think of me getting off that elevator thinking about the happy people around me, versus me getting off that elevator being all, “Grrrr.” It has to be better for my heart. It has to be better not to get all that cortisol revved up inside of me.


Alanna’s father in a happy moment, long after his Alzheimer’s had set in. Photo: Alanna Shaikh

There’s also the question of kindness in the healing professions — the idea that patients are more likely to respond well to compassionate doctors and healers who touch their patients.

I think that’s probably true. In my day job, I’ve been part of a lot of different trainings for physicians, and one of the amazing things we’ve discovered is that the part physicians really love is the interpersonal skills, learning how to talk to their patients gently and kindly. We started including that in basically everything we teach, whether we’re teaching infection control or HIV care or breastfeeding support or whatever. The first component is always, “How do you talk to patients so they’ll listen?” The doctors absolutely love that, because it turns out they’ve been yearning to connect kindly; they just didn’t have the tools. That is the first thing they see results from: talking to their patients differently brings them different results as medical professionals. It seems to bring better outcomes. Often, doctors are afraid that if they are kind they’ll lose their authority, or patients won’t take them seriously, so it’s valuable to have an outsider validate the idea that you can be a respected professional and still be kind and generous to people, and that you don’t have to be stern and harsh to be an authority figure.

Are you still doing the same exercises that you discussed in the talk?

The hobbies didn’t work out as well as I wanted. It turns out I only like making origami boxes, but I really have no interest in making any other kind of origami—zebras or cranes or anything. Everybody who saw the TED Talk gave me origami stuff. I have four books, I have all this paper — and I just make a lot of origami boxes.

That’s probably fine from a cognitive perspective. At this point, I can have a piece of paper in my hands, and be watching TV and look down, and I’ve made a box. So clearly, this is being hardwired into me, and that’s good. That’s probably better than being able to make lots of different things, from a what-if-I-get-dementia perspective. But I thought I was going to have this whole fleet of little animals, and it turns out that that’s not me. I can become kinder, but I can’t become a person who likes making origami.

The same thing happened with the knitting. I never made it past being able to knit a blob. I’ve done better with drawing, though. I still draw, and it’s really enjoyable. Connecting to that part of me has been great. And I’ve found myself also taking a lot of pictures, because drawing has me thinking visually. Photography’s not a particularly useful what-if-I-get-Alzheimer’s hobby, but it’s a sign that I’m thinking visually.

When you have Alzheimer’s, what happens when you go to take a picture? Your brain just doesn’t take in what’s on the screen?

At the very end, if you handed my dad a camera, he would’ve held it upside-down or sideways. He just wouldn’t have known what to do with it. But if you gave him a pencil, he could sign his name. My dad was a college professor in a state system, so if you gave him paperwork, he would fill it out, right up until the end. If he saw something that was obviously some sort of bureaucratic form, he’d scribble nonsense on all the lines. So he still knew what to do with a pencil. A pencil was comfortable. But a camera was alien.

What about exercise?

I’ve kept up with weightlifting. Not as regularly as I should, but often enough—I feel like I’m maintaining muscle mass. I’m still a strong and muscular person. And I stopped doing regular yoga, but I miss it, and I’m going back. I was on a very committed schedule, and then my yoga teacher moved, and I was like, “Oh, I’ll use videos,” but it turned out I wouldn’t. Now I do the sun salutation every other morning, and that’s sort of the extent of it.

In your talk, you seemed pretty positive that you were going to get Alzheimer’s. But what are the statistics, really?

I kind of tune that out, because there are so many unknowns in terms of how exactly my father developed Alzheimer’s. I’m going to have genetic testing done next time I’m in the US long enough to get it. They can determine whether you have the gene that makes you much more susceptible to developing the disease. Basically, if you have the mutation for early-onset Alzheimer’s — which is what my father had — it’s almost inevitable that you’ll get the disease. Beyond that, testing can’t tell you much.


Alanna Shaikh enjoying drawing with her son. Photo: Alanna Shaikh

Would it give you an idea of when onset would be?

No, they can’t do that yet.

How old was your father when he started developing symptoms?

In his early 50s.

That’s sobering.

Yeah. I mean, we didn’t know really what the symptoms meant at the time, but in retrospect, you can very clearly see the Alzheimer’s developing.

What were some of the symptoms?

For him, it was disinhibition. He just started acting weird. We thought maybe he had bipolar disorder, as he had some manic episodes, and he started telling dirty jokes he never told before. He started talking about his childhood in Pakistan and India, which he never talked about because it was really traumatic. Those are also things that can happen if you’re having a midlife crisis, so we didn’t recognize it as dementia. It’s not your classic pattern.

It turns out that for people who are highly intelligent, it doesn’t necessarily manifest in the same way, because they’re really good at compensating. They have enough excess cognitive capacity to make up for dropping and losing things, for example. If they’re forgetting words or names, they have the ability to develop mnemonics — that sort of thing.

Have you been doing work with Alzheimer’s since the talk? Did people start approaching you?

They did, and that was one of the greatest things about doing the talk. I’d really never thought about it before as anything beyond my personal story. I thought, if you’re going to give a TED Talk, you have to tell your best story—and this was my best story, maybe it’d be useful to someone else. It turned out it was really useful to others. I get emailed probably once a week from someone telling me that they saw the talk, and it helped them. That’s just the best feeling, because if you watch the talk, you’ll see it was really, really hard for me to give. It’s good to know that something that was that difficult for me was worth it.

I’ve also met with the Alzheimer’s Association of California about talking to people for them. I’ve been part of a group that’s working to increase attention to neurological disorders. I’ve been contacted by other people who want me to get involved in outreach. I’ve been thinking a lot about how I can get involved in Alzheimer’s advocacy.

But I’ve also been thinking bigger. It was interesting coming at this as an international development person, because all of the people I know in the professional international development sphere saw the talk, came back, and said, “You realize that you’re basically talking about disaster risk reduction. In a lot of ways, you’re talking about the same resilience that you want to build in a community. You’re talking about what do you do if you live in a place that tends to be hit by tornadoes.” So I started thinking more broadly about how to think about a future that isn’t the future you choose. How can you build a life that you’re living right now that prepares you for both the best possible future and the worst possible future? It’s a really, really big topic, and it might be the one I think about for the rest of my life. It’s created this lens for me to look at the world, and to think about the work I do with global health, and how that all comes together into this idea of how to have a good life in the present that also prepares a good life in the future.

Where will you take this idea?

I’m actually in the beginning stages of writing a book, and it’s one of the big themes. When you read the comments on my talk, a lot of people say, “How can you let the future affect your life like that?” “She’s given up, she’s making a mistake.” It was really interesting to me, because people seem to have this idea that your life now would be inherently terrible if you thought about your future too much as you live your daily life. But it seems like the “you” in the future is really going to regret that choice. The future Alanna is going to come back and slap me upside the head if I pretend that she doesn’t exist right now. It seems to be a surprising idea that you can live a good life now that prepares you for a good future. People think of it as a trade-off.

It seems your talk helped you evolve the work you already do into something much bigger.

Yes, I wasn’t expecting that. When I did the talk, I had really never considered Alzheimer’s in the context of global health, even though dementia and its effects on society are part of the global health discussion. My area of specialty is a lot more about primary health care and building health systems. So they were totally separate things in my world. It was almost like people had to point it out to me.

The disconnect between personal and professional was so strong that when people asked me to do advocacy for Alzheimer’s, I’d say no at first. My thought, even as someone who’s at risk, was that I’d rather that money be spent on vaccination for children or something that seems like it would help more people. But as time went on, connecting to so many people about aging and dementia and the future of the health system, I finally realized that there are many things that can be done to help people with dementia that help everyone. It’s not an either/or trade-off. If you help caregivers, then you’re helping moms with young babies and people taking care of the elderly. If you teach health care providers to treat people with kindness, that benefits everyone. Taking health care and the future seriously benefits everyone. There are ways to think about dementia that are not dementia-exclusive. And I don’t think I would ever have had any of those thoughts if it hadn’t been for all the conversations I’ve had since the TED Talk.

I actually wrote the talk at TEDIndia in 2009, while watching one of the speakers. I started thinking, “If I were going to give a TED Talk, what would I say? What about me is interesting?” I realized I was actually doing this thing that’s fairly interesting. And so I wrote down the title, “How I’m Preparing to Get Alzheimer’s,” and then I wrote the entire TED Talk, sitting right there in the audience. I started crying as I wrote everything down.

And what happened with your father?

My dad died about two months after the talk. It’s hard, because people do always ask after my dad, and I have to tell them that. But he saw the talk, and I am glad. I don’t know how much he understood, but he knew it was me on a big stage talking to people — and he was proud of me, and that made him happy.



Geek FeminismSurely You’re Joking, Mr Linkspam

  • Many Women Leave Engineering, Blame The Work Culture | All Tech Considered, NPR (August 12): “Conventional wisdom says that women in engineering face obstacles such as the glass ceiling, a lack of self-confidence and a lack of mentors. But psychologists who delved deeper into the issue with a new study found that the biggest pushbacks female engineers receive come from the environments they work in.”
  • Survey of Academic Field Experiences (SAFE): Trainees Report Harassment and Assault | PLOS ONE (July 16): “Codes of conduct and sexual harassment policies were not regularly encountered by respondents, while harassment and assault were commonly experienced by respondents during trainee career stages. Women trainees were the primary targets; their perpetrators were predominantly senior to them professionally within the research team.”
  • Harassment in Science, Replicated | New York Times (August 11): “More than half of the female respondents said they weren’t taken seriously because of their gender, one in three had experienced delayed career advancement, and nearly half said they had not received credit for their ideas. Almost half said they had encountered flirtatious or sexual remarks, and one in five had experienced uninvited physical contact.”
  • Guardians of the Galaxy, We Need To Talk | Tor.com (August 13): “Here’s the thing. You can’t give me Gamora then spend the whole movie slut-shaming her and locking her into an unnecessary romance, then expect me to be grateful a woman was even allowed a prominent role.”
  • “Guardians of the Galaxy” passes the Bechdel test, still fails women | Salon (August 6): “In this context, it’s pretty easy to imagine what happened with Guardians of the Galaxy: Gunn genuinely went out to create a film with “strong female characters” and was savvy enough to include a basic Bechdel pass. But then secure in the knowledge that he was meeting that goal, he failed to realize that jokes about prostitution and background characters like the Collector’s assistant and Peter Quill’s one-night-stands would serve to undermine those intentions.”
  • A Female Superhero Pitches a Movie | Adventures of Angelfire, YouTube (August 8): A funny take on how movie executives react to the idea of a movie starring a female superhero.
  • Where Are the Superheroines of STEM on the Silver Screen? A Wishlist of Amazing Women | Autostraddle (August 9): “My point is, there are enough lady STEMers to be getting on with, moviemakers and film-shakers. And I have a few suggestions.”
  • I Desire to Be More Sensitive | Satifice (July 16): “After a rocky start to the morning, I can tell you the absolute last thing I had any desire to do during my one hour lunch break was to engage in the emotional and intellectual labour of teaching SWC how to do things better. “
  • We Have a Rape Gif Problem and Gawker Media Won’t Do Anything About It | Jezebel (August 11): “If this were happening at another website, if another workplace was essentially requiring its female employees to manage a malevolent human pornbot, we’d report the hell out of it here and cite it as another example of employers failing to take the safety of its female employees seriously. But it’s happening to us. It’s been happening to us for months. And it feels hypocritical to continue to remain silent about it.”
  • What Gawker Media is Doing About Our Rape Gif Problem | Jezebel (August 13): “But you, Troll, also did this company a favor: you have forced Gawker Media to give the problem — the state of our commenting system and specifically its failures — the attention it deserves. You’ve actually done some good around here, Troll!”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet DebianPaul Tagliamonte: PyGotham 2014

I’ll be there this year!

The talks look amazing, and the conference looks really well organized! The schedule has a bunch of sessions I want to hit; I hope they’re recorded so I can watch the rest later!

If anyone’s heading to PyGotham, let me know, I’ll be there both days, likely floating around the talks.

Krebs on SecurityWhy So Many Card Breaches? A Q&A

The news wires today are buzzing with stories about another potentially major credit/debit card breach at yet another retail chain: This time, the apparent victim is AB Acquisition, which operates Albertsons stores under a number of brands, including ACME Markets, Jewel-Osco, Shaw’s and Star Markets. Today’s post includes no special insight into this particular retail breach, but rather seeks to offer answers to some common questions regarding why we keep hearing about them.

Q: Why do we keep hearing about breaches involving bricks-and-mortar stores?

Credit and debit cards stolen from bricks-and-mortar stores (called “dumps”) usually sell for at least ten times the price of cards stolen from online merchants (referred to in the underground as “CVVs” or just “credit cards”). As a result, dumps are highly prized by today’s cyber crooks, and there are dozens of underground “card shops” online that will happily buy the cards from hackers and resell them on the open market. For a closer look at how these shops work (and how, for example, the people responsible for these retail break-ins are very often also running the card shops themselves), see Peek Inside a Carding Shop.

Okay, I’ll bite: Why are dumps so much more expensive and valuable to attackers?

A big part of the price difference has to do with the number of steps it takes for the people buying these stolen cards (a.k.a. “carders”) to “cash out” or gain value from the stolen cards. For example, which of these processes is likely to be more successful, hassle-free and lucrative for the bad guy?

1. Armed with a stack of dumps, a carder walks into a big box store and walks out with high-priced electronics or gift cards that he can easily turn into cash.

2. Armed with a list of CVVs, a carder searches online for stores that will ship to an address that is different from the one on the card. Assuming the transaction is approved, he has the goods shipped to a guy he knows at another address who will take a cut of the action. That is, *if* the fraudulently purchased goods don’t get stopped or intercepted along the way by the merchant or shipping company when someone complains about a fraudulent transaction.

If you guessed #1, you’re already thinking like a carder!

Snap! But it seems like these breaches are becoming more common. Is that true?

It’s always hard to say whether something is becoming more common, or if we’re just becoming more aware of the thing in question. I think it’s safe to say that more people are looking for patterns that reveal these retail breaches (including yours truly, but somehow this one caught me — and just about everyone I’ve asked — unawares).

Certainly, banks — which shoulder much of the immediate cost from such breaches — are out for blood and seem more willing than ever to dig deep into their own fraud data for patterns that would reveal which merchants got hacked. Visa and MasterCard each have systems in place for the banks to recover at least a portion of the costs associated with retail credit and debit card fraud (such as the cost of re-issuing compromised cards), but the banks still need to be able to tie specific compromised cards to specific merchant breaches.

Assuming we are seeing an increased incidence of this type of fraud, why might that be the case?

One possible answer is that fraudsters realize that the clock is ticking and that U.S. retailers may not always be such a lucrative target. Much of the retail community is working to meet an October 2015 deadline put in place by MasterCard and Visa to move to chip-and-PIN enabled card terminals at their checkout lanes. Somewhat embarrassingly, the United States is the last of the G20 nations to adopt this technology, which embeds a small computer chip in each card that makes it much more expensive and difficult (but not impossible) for fraudsters to clone stolen cards.

That October 2015 deadline comes with a shift in liability for merchants who haven’t yet adopted chip-and-PIN (i.e., those merchants not in compliance could find themselves responsible for all of the fraudulent charges on purchases involving chip-enabled cards that were instead merely swiped through a regular mag-stripe card reader at checkout time).

When is enough enough already for the bad guys? 

I haven’t found anyone who seems to know the answer to this question, but I’ll take a stab: There appears to be a fundamental disconnect between the fraudsters incentivizing these breaches/selling these cards and the street thugs who end up buying these stolen cards.

Trouble is, in the wake of large card breaches at Target, Michaels, Sally Beauty, P.F. Chang’s, et. al., the underground market for these cards would appear to most observers to be almost completely saturated.

For example, in my own economic analysis of the 40 million cards stolen in the Target breach, I estimate that the crooks responsible for that breach managed to sell only about 2-4 percent of the cards they stole. But that number tells only part of the story. I also spoke with a number of banks and asked them: Of the cards that you were told by Visa and MasterCard were compromised in the Target breach, what percentage of those cards did you actually see fraud on? The answer: only between three and seven percent!

So, while the demand for all but a subset of cards issued by specific banks may be low (the crooks buying stolen cards tend to purchase cards issued by smaller banks that perhaps don’t have such great fraud detection and response capabilities), the hackers responsible for these breaches don’t seem to care much about the basic laws of supply and demand. That’s because even a two to four percent sales ratio is still a lot of money when you’re talking about a breach involving millions of cards that each sell for between $10 to $30.
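
Some illustrative arithmetic using the figures above (my round numbers, not hard data): on a Target-scale breach of 40 million cards, a 2 percent sell-through at $20 per dump still works out to 40,000,000 × 0.02 × $20 = $16 million. Even at the low end of all of these estimates, a single intrusion grosses millions of dollars.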

Got more questions? Fire away in the comments section. I’ll do my best to tackle them when time permits.

Here is a link to AB Acquisition LLC’s statement on this latest breach.

Planet DebianAurelien Jarno: Intel about to disable TSX instructions?

Last time I changed my desktop computer I bought a CPU from the Intel Haswell family, the one available on the market at that time. I carefully selected the CPU to make sure it supports as many instruction extensions as possible in this family (Intel likes segmentation; even high-end CPUs like the Core i7-4770k do not support all possible instructions). I ended up choosing the Core i7-4771 as it supports the “Transactional Synchronization Extensions” (Intel TSX) instructions, which provide transactional memory support. Support for them has recently been added to the GNU libc, and has been activated in Debian. By choosing this CPU, I wanted to be sure that I can debug this support in case of a bug report, as for example in bug#751147.

Recently some computing websites started to mention that the TSX instructions have bugs on the Xeon E3 v3 family (and likely on the Core i7-4771, as they share the same silicon and stepping), quoting this Intel document. Indeed one can read on page 49:

HSW136. Software Using Intel TSX May Result in Unpredictable System Behavior

Problem: Under a complex set of internal timing conditions and system events, software using the Intel TSX (Transactional Synchronization Extensions) instructions may result in unpredictable system behavior.
Implication: This erratum may result in unpredictable system behavior.
Workaround: It is possible for the BIOS to contain a workaround for this erratum.

And later on page 51:

Due to Erratum HSw136, TSX instructions are disabled and are only supported for software development. See your Intel representative for details.

The same websites report that Intel is going to disable the TSX instructions via a microcode update. I hope it won’t come to that and that they will instead be able to find a microcode fix. Otherwise it would mean I will have to upgrade my desktop computer earlier than expected. It’s a bit expensive to upgrade it every year, and that’s the reason why I skipped the Ivy Bridge generation, which didn’t bring much from the instruction set point of view. Alternatively I can also skip microcode and BIOS updates, in the hope that I won’t need another fix from them at some point.
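
In the meantime, a quick way to check whether a CPU and microcode combination still advertises TSX is to look for the hle and rtm flags that the Linux kernel exposes; if a future microcode update disables TSX, these flags should disappear:

$ grep -owE 'hle|rtm' /proc/cpuinfo | sort -u
hle
rtm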

TEDWhat can the American and British education systems learn from classrooms in the developing world?


Students in Phaltan, India, research the answer to a big question at one of Sugata Mitra’s School in the Cloud labs. According to Mitra and his Microsoft Work Wonders Project partner, Adam Braun, there’s quite a bit that Western schools can learn from classrooms in the developing world.

Adam Braun went to school in the US and now runs a nonprofit that builds schools in Ghana, Laos, Nicaragua and Guatemala. In contrast, Sugata Mitra—the winner of the 2013 TED Prize—went to school in India and now is a professor in the UK, where his research on self-directed learning routinely brings him into elementary schools. Both of these education activists have seen how typical classrooms function in the Western world, and both have seen how typical classrooms function in the developing world. And both say, the West isn’t always better.

Braun and Mitra have teamed up through Microsoft’s Work Wonders Project to bring Mitra’s School in the Cloud learning platform into Braun’s Pencils of Promise schools. As the two pilot their partnership in a school in rural Ghana, we got them together via Skype to talk through a bold question: what can the West learn from the developing world when it comes to education? Their conversation is packed with insights.

To start us off, can each of you share three lessons that the developing world can teach the developed world when it comes to education?

Adam Braun: I think that, in the developed world, we tend to assume that we have all the answers and that those will trickle down to the people at the base of the pyramid. But there’s a lot to be learned from unexpected places too. Three things that our staff and team have observed:

  1. In the American education system, the teacher is usually assumed to be the expert. We have this traditional model where one teacher stands in front of 30 kids. But the act of teaching is actually one of the most valuable ways to learn. It’s nice to see environments where children can be teachers. That’s something that Sugata has really expanded on with his Self-Organized Learning Environments (SOLEs), which a lot of times remove the teacher altogether and allow children to learn from one another and teach one another simultaneously.
  2. In the United States, there is the expectation that students are supposed to sit still. You’re told not to fidget and to focus. But scientific research shows that brain activity is significantly heightened after 20 minutes of physical activity. There’s significant value in what you see in the developing world—in between classes, kids run in a field, play in a river, climb a mountain. And because they don’t always have proper desks, they’re often learning while sitting on the floor or moving about in the classroom. That can actually lead to better retention and synthesis of information.
  3. The third thing is about the way we learn to read. As much as we think of reading as the act of simply turning letters into sounds in our head, literacy is actually the act of converting a symbol into sound, and that symbol can take several different forms. When we expect kids to learn, we usually activate two different components—the auditory and visual, so they hear things aloud and observe through their eyes. But kids can learn better when you also activate a third plane—the spatial—through things like sign language. We’ve been piloting programs where we have kids create symbols with their hands and it’s leading to phenomenal literacy gains.

[Video: http://www.youtube.com/watch?v=ZRE1eBptVy8]

Sugata Mitra: Those are such valuable observations. I do hesitate to say, “What can developing countries teach the developed countries about education?” but I would like to frame it as, “What can children teach us about learning?” Because it’s kind of stupid to think that for thousands of years we never asked them. Here are three counterintuitive things that I’ve learned:

  1. My first observation, I learned quite accidentally: a reduction in resources brings increased cooperation. It sounds obvious that if there’s one computer and five children, they can fight, in which case nobody gets to do much, or they can come to an agreement about what they want to do. I think it’s very healthy for children to find agreement. That’s a lesson that I brought from India and hesitantly tried in England—and to my absolute delight, it worked exactly the same way, like magic. At first, the kids asked, “Why are you turning computers off?” I said, “You guys are not going to talk if everybody has a computer.” As soon as I did that, the good old hole in the wall from the slums of Delhi suddenly appeared in the U.S. and U.K. with exactly the same results.
  2. The act of cooperating around the Internet amplifies reading comprehension. Reading has always been taught as a one-to-one thing. With reading on paper, it isn’t easy to read together, whereas on a nice, big computer screen, you can. When children do that, they learn to read adult-level text as a result. People sometimes don’t believe me. They say, “How could they possibly read the Harvard Business Review and understand anything?” To which my answer is, “I don’t know, but they seem to understand it.” Now, I’m measuring this, and have a couple of students who are studying it too. It’s a very exciting finding, and one that could be very relevant in the United States, where reading comprehension is a problem. If this works, it could be a simple solution. I mean, what can be simpler than saying, “Shut down a few computers and read together.”
  3. The third thing is just what Adam said: children in classrooms in the developing world move around a lot because they’re not being supervised. They run about, they disturb each other, they do all the things you’re not supposed to do inside a classroom—but the results are good. I was in a school in London, doing a SOLE with 9-year olds, and they were making a tremendous amount of noise. At the end of it, I asked, “Weren’t you disturbed by all the noise?” One little girl said, “When I hear the voices of my friends, I feel relaxed.” That was a revelation—I had never thought of that. She was concentrating better because she heard voices around her. It was actually aiding the learning process.

All of these observations are so interesting. Let’s start with one Sugata mentioned: that a reduction in resources can lead to more cooperation. Adam, is that something that you’ve seen?

Adam Braun: I definitely agree with that idea. As an entrepreneur, I think a lot from a business standpoint, and time and time again the most effective exercises in the manifestation of ideas happen when you limit resources. It’s common practice in off-site exercises for people to be given limited resources and be told, “Okay, go solve this problem.” You see unexpected, innovative solutions emerge. I think that applies to the learning process as well.

Sugata Mitra: You got me thinking about an activity that’s very popular with children in India: airplanes. In the cities, children can buy perfect scale models of airplanes to play with. But in the villages and the slums, the children make an airplane out of two ice cream sticks. Isn’t that far more creative and imaginative?

[Video: http://www.youtube.com/watch?v=KPCg0CdguAI]

In School in the Cloud classrooms, are there other resources in the room besides computer stations, or do you keep it scaled down to those to get that cooperation effect?

Sugata Mitra: The most prominent equipment in the room are the computers—there are usually six or seven of them with good, high-definition, large screens. In two of the schools, I also installed Xboxes. The teachers thought I was out of my mind. They said, “The kids are not going to do anything besides play with the Xbox.” But I think there’s another way to look at it: If they’re playing all the time, that means that whatever we’ve asked them to do is not as interesting as the Xbox. In which case, we’d better rethink the tasks that we’re asking them to do. We tried an experiment—I gave the students a really nice SOLE question and they got to working, and I purposely said, “Oh, by the way, there’s an Xbox.” As the teacher predicted, everybody started playing. But there were three little boys who were busily working on the question. I went up to them and said, “I’ve got a serious problem. If everyone keeps playing with the Xbox, my SOLE isn’t going to happen properly. Can you help me?” One of these kids got up, went straight to the Xbox, stood in front of the screen, and said, “We are in the middle of an important educational experiment. Do you guys mind getting back to work?” And you know what—the other kids listened.

That’s interesting—they started playing teacher. Adam, you’d mentioned earlier the importance of kids getting the chance to teach. Is that something you see happening frequently in classrooms in the developing world?

Adam Braun: You see it a tremendous amount. Even outside the classroom, it’s just in the way of life. Increasingly in Western culture, parents feel protective of their kids, so there’s constantly this need for an adult to be around and, with the competition for colleges and whatnot, kids have multiple layers of teachers—once they leave the school, they have the tutor, then once they’re done with the tutor, they work with the parent. It’s rare for one child to teach another child. And yet in the developing world, what you see is that the parents go out, spend the day in the field, and the kids are kind of left to their own devices. Oftentimes, it’s expected that the older children will look after the younger children. It’s part of the norm of the culture. I think that the learner is able to relate better, because they’re closer to a peer and they see the world with the same lens. And the child who is teaching develops an advanced sense of responsibility and a mastery of whatever content they’re teaching because, as I think we all know, you can’t teach something unless you know it intimately. I’ve seen this in dozens of countries.

Sugata Mitra: I see it all the time in India as well, the children having to mind each other. I want to add one thing, Adam, which might interest you. There’s one situation where a child becoming the teacher doesn’t work: in the classroom itself. If you take a classroom and say, “Okay, Adam is the teacher,” what many children do is they get a stick and they hold it up and they say, “Everybody sit quietly. I don’t want a single word to be spoken.” The child’s idea of what a teacher is comes from that military, colonial background. But when they’re in the field or the house and the parents are not there, then they help each other. If you ask them, they’ll say, “I’m helping him tie his shoelaces.” They won’t say, “I’m teaching him to tie his shoelaces.” I think there’s a huge difference between the two. What both Adam and I are talking about as an alternative to what’s currently going on in schools—the whole idea of converting “teaching” into “helping you learn.” If that were the ethos, I think children would learn a lot better.

[Video: http://www.youtube.com/watch?v=qIo3OAH4fcQ]

Is there a difference in attitude about going to school? I feel like in the United States, there was always a sense of, “School is something we have to do.” Not of it being something exciting or a real opportunity. Is that different in India, Ghana, and other countries you’ve seen?

Sugata Mitra: In India, it’s exactly the same. The children are reluctant—they have gloomy faces in the morning. If you really probe, they say they don’t quite understand why they’re going to school except to see their friends—that’s the big delight. It’s not that they don’t want to learn things, it’s just that they are not given an option to learn it their way. They’re told to learn in a particular way, and I think that’s the reason why all of us in the morning used to feel a little glum. We knew we would be told to sit quietly, take notes, memorize things, and we also knew that these are not fun things to do.

Adam Braun: I’ve seen all sides of the spectrum. I’ve seen communities where education is just not a high priority; part of that might be that the parents themselves weren’t educated. They have never really seen the benefits that can be accrued, and they have more pressing daily concerns. More often than not, the kids’ attitudes stem from the attitude of the surrounding community. It takes a catalyst to ignite some type of commitment—that can come from a parent, an engaged teacher in the town, a grandparent, a village chief, an older sibling. Once that child feels like they’re supported in this endeavor, things just take off. When you see a catalyst in the community, you tend to see far greater commitment to education and the opportunities that it provides. I think that applies to kids in the Western world too.

Sugata Mitra: There are self-motivated children. By about 11, 12, 13 years of age, you find—in India and the U.K., certainly—children who say, “I must do well in school so that I can become an engineer.” But for the 6-year-olds and 7-year-olds, that goal is too far away. They think of school as a place where they teach things; and, most of the time, they don’t like the way in which it is taught. So I think parents and other adults telling the child, “You have to go to school,” may not be the right approach. If school could be exciting—if the school had an Xbox, for example—I think it wouldn’t be that difficult to get children to want to go. In the hole-in-the-wall experiment, parents used to complain to me that their children weren’t coming home. I was thinking to myself, “These are the same children who hate going to school, but they’re not coming back from their roadside computer. What’s the difference?” The difference was the freedom to learn their way.

Are there any skills, habits or abilities that you’ve seen in students in India and Ghana that would be useful for students in the Western world to learn?

Sugata Mitra: We just did an experiment together, which you can see in the video above, where we had children of similar ages in Ghana and the United States answer the same question—“Why is the blue whale so big?”—in a Self-Organized Learning Environment. We got nearly identical results, which I think is a lesson in itself: when it comes to motivated, self-directed learning, children really are not different from country to country. That is terrific news. We don’t have to think of different solutions for different socio-economic strata. If we do it right, then children will engage in the same way, whether it’s in the United States or in Ghana.

Adam Braun: There’s one basic thing I’ve seen that’s different in students in the developing world: resilience. It’s not to say you don’t see it in Western culture, but students in the developing world face significantly more potential obstacles than most kids in the Western world. There are so many hurdles to becoming a university student when you start out living in a bamboo hut without running water, when nobody in your family has even completed secondary school before. It requires a tremendous amount of resilience to get to the point where you graduate. In the developing world, I think there is an appreciation for the difficulties of the journey. Because of that, there’s an expectation that, once someone succeeds, they give back to their community in some capacity. I hope that continues—and it’s something that would be very beneficial to Western culture as well. Right now, we have this romanticizing of the person who goes off and leaves behind where they came from, but it’s really beautiful and powerful to see that in the developing world, even if a person doesn’t move back to the community, they find a way to help lift others out of impoverished situations. I don’t want to say it doesn’t happen in Western culture, but it’s something that I see happen consistently in the developing world.

Sugata Mitra: I’ve spent most of my life in India; it’s only the last eight or nine years that I’ve been outside. The sentiment that Adam’s describing—that’s the first thing that I saw missing in Western society. I wondered why, but there is a very simple answer: because there isn’t a shortage. If you take a young man or woman who is earning good money, they think, “My parents are okay—I don’t really need to send them money.” And they are correct. But in a way, it’s very ironic. Young people eventually stop aspiring. What Adam is saying, very politely, is that poverty drives a certain value system—a good value system—but poverty is not a nice thing, so we have to invent a new way to have that value system without the actual poverty.


Students at the Omega School in Ghana talk to Adam Braun and Sugata Mitra remotely. Through Microsoft’s Work Wonders Project, Pencils of Promise and the School in the Cloud are teaming up to develop a new model of education.

What is a lesson that each of you has personally learned from a student you’ve met in India, Ghana, or another country?

Adam Braun: I remember very distinctly being in a multiple-acre dump in Phnom Penh, Cambodia. It’s a horrific, hellish place that’s pretty devastating to see—communities live on the edge of it and they send the kids out to collect sacks of hard plastic from the garbage, for which they can get something like 10 or 15 cents. When I was 21 years old, I got involved with an organization called the Cambodian Children’s Fund, which took kids out of that situation and put them into a facility that provided quality food, healthcare, and a full education. I went back at one point and I was walking around the outskirts of this dump and I found this kid who was living there. I asked him and his friend what they wanted to be when they grew up. They couldn’t have been more than 10 years old; they’re garbage pickers. And one of them looked at me and said he wanted to be a lawyer, and the other one said he wanted to be the Prime Minister. I remember being so awestruck by the size of these kids’ dreams. As adults, particularly adults from the Western world, we have this expectation that kids understand the limits of their situation and that, because of that, they’ll be inhibited in how big they dream. But these kids were very serious. They weren’t willing to limit themselves to the confines of the situation they were in at that moment. That was an immense lesson that always stuck with me. When you hear a child in a situation like that say they want to be the head of their country, it’s extraordinarily humbling and motivating and inspiring at the same time.

Sugata Mitra: The lesson I want to share is of a completely different kind. I learned it from a child who was about 5 years old in the United States. He was being taught how to multiply—to write down two-digit numbers, one below the other, and then you do this and that. I was chatting with him and I asked, “Why are you struggling with multiplying?” Without batting an eyelid, he said, “Because I could use a phone. Why do I have to multiply numbers like that on a piece of paper?” He was really annoyed. And at that moment, I realized something important: that the answer to his question was, “They are wasting your time.” I gave a lecture a couple of days later, and I brought this up. I said, “Maybe it’s time we stopped teaching paper arithmetic.” And boy, that community of teachers erupted—they could have killed me. “How can you not have arithmetic? It’s a pillar of primary education.” But who is primary education for? Five-year-olds and six-year-olds. And there’s my 5-year-old friend, who was so extraordinarily annoyed by it. I learned a lot from that conversation with him.

It is still early days for your collaboration, but could you give an update on where things are with bringing SOLEs into Pencils of Promise schools? 

Sugata Mitra: We’re moving this project forward. This is where our meeting ground is: the SOLE as a method is effective and inexpensive, because it works best with just a few computers. So if we take the SOLE idea into Adam’s school and restructure a little bit, then we would get a new kind of model for a school, which others may want to look at. Where Adam’s work is invaluable to me is that his schools—and he has made a lot of them—are currently up and running whereas my holes in the wall are not. I have not been able to solve the problem of sustainability, and I want to find some of the right techniques to get there. It will be exciting to see where we can go together. It should be a few months before the first one or two start to function.

Adam Braun: Leslie Engle is our Director of Impact and she’s been working with a gentleman on Sugata’s side to formalize what this program can look like. We’re excited to find a middle ground in which our commitment and experience with building sustainable schools can mesh with Sugata’s brilliance around SOLEs, and to bring it to life on the ground. Obviously it takes time and it takes iteration and it takes willingness to put forward new ideas. But we’re both committed to it and I’m excited to see where it develops through the rest of the year.

Sugata Mitra: It’s interesting—with the Schools in the Cloud program that I’m doing, my biggest problem, surprisingly, is not pedagogy, or student engagement. My biggest problem is getting quality Internet connections and quality electricity—not having to spend hundreds of thousands of dollars on solar energy, only to have it not function properly. We need equipment that is designed for the tropics. If you design a pair of shoes to work in New York City and then take it to the paddy fields of Vietnam, in an hour, there will be no shoe left. But if you’ve designed a shoe that works in the paddy fields of Vietnam, that will work forever in New York City. I think that if Adam takes his model, builds it and then brings it out of Africa, we’ll get a model of education that works everywhere.

Adam Braun: I would like to echo that. The most sustainable projects and products are those that are built in challenging environments. That’s an exciting place for us to be, knowing that we have the ability and the staff and the relationships to actually do something on the ground. The hope is that if it works out there, then we can expand it and it can propagate across various other parts of the world.


Krebs on SecurityHow Secure is Your Security Badge?

Security conferences are a great place to learn about the latest hacking tricks, tools and exploits, but they also remind us of important stuff that was shown to be hackable in previous years yet never really got fixed. Perhaps the best example of this at last week’s annual DefCon security conference in Las Vegas came from hackers who built on research first released in 2010 to show just how trivial it still is to read, modify and clone most HID cards — the rectangular white plastic “smart” cards that organizations worldwide distribute to employees for security badges.

HID iClass proximity card.

Nearly four years ago, researchers at the Chaos Communication Congress (CCC), a security conference in Berlin, released a paper (PDF) demonstrating a serious vulnerability in smart cards made by Austin, Texas-based HID Global, by far the largest manufacturer of these devices. The CCC researchers showed that the card reader device that HID sells to validate the data stored on its then-new line of iClass proximity cards includes the master encryption key needed to read data on those cards.

More importantly, the researchers proved that anyone with physical access to one of these readers could extract the encryption key and use it to read, clone, and modify data stored on any HID cards made to work with those readers.

At the time, HID responded by modifying future models of card readers so that the firmware stored inside them could not be so easily dumped or read (i.e., the company removed the external serial interface on new readers). But according to researchers, HID never changed the master encryption key for its readers, likely because doing so would require customers using the product to modify or replace all of their readers and cards — a costly proposition by any measure given HID’s huge market share.

Unfortunately, this means that anyone with a modicum of hardware hacking skills, an eBay account, and a budget of less than $500 can grab a copy of the master encryption key and create a portable system for reading and cloning HID cards. At least, that was the gist of the DefCon talk given last week by the co-founders of Lares Consulting, a company that gets hired to test clients’ physical and network security.

Lares’ Joshua Perrymon and Eric Smith demonstrated how an HID parking garage reader capable of reading cards up to three feet away was purchased off of eBay and modified to fit inside of a common backpack. Wearing this backpack, an attacker looking to gain access to a building protected by HID’s iClass cards could obtain that access simply by walking up to an employee of the targeted organization and asking for directions, a light for a cigarette, or some other pretext.

Card cloning gear fits in a briefcase. Image: Lares Consulting.

Card cloning gear fits in a briefcase. Image: Lares Consulting.

Perrymon and Smith noted that, thanks to software tools available online, it’s easy to take card data gathered by the mobile reader and encode it onto a new card (also broadly available on eBay for a few pennies apiece). Worse yet, the attacker is then also able to gain access to areas of the targeted facility that are off-limits to the legitimate owner of the card that was cloned, because the ones and zeros stored on the card that specify that access level also can be modified.
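
To make the "ones and zeros" point concrete: many proximity badges carry nothing more than a facility code and a card number packed into a fixed-width frame. The sketch below packs the widely documented 26-bit "H10301" Wiegand layout in Python; it illustrates how little data is involved, it is not HID's proprietary iClass encoding, and the example values are made up.

def encode_h10301(facility: int, card: int) -> int:
    # 26 bits total: even parity bit, 8-bit facility code,
    # 16-bit card number, odd parity bit.
    assert 0 <= facility < 2**8 and 0 <= card < 2**16
    payload = (facility << 16) | card                  # 24 data bits
    bits = [(payload >> i) & 1 for i in range(23, -1, -1)]
    even = sum(bits[:12]) % 2                          # even parity over first 12 data bits
    odd = 1 - (sum(bits[12:]) % 2)                     # odd parity over last 12 data bits
    return (even << 25) | (payload << 1) | odd

print(f"{encode_h10301(42, 1337):026b}")               # hypothetical badge

Flipping a handful of those bits changes the facility code or the card number, which is exactly why a cloned-and-edited card can claim an access level its legitimate owner never had.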

Smith said he and Perrymon wanted to revive the issue at DefCon to raise awareness about a widespread vulnerability in physical security.  HID did not respond to multiple requests for comment.

“Until recently, no one has really demonstrated properly what the risk is to a business here,” Smith said. “SCADA installations, hospitals, airports…a lot of them use HID cards because HID is the leader in this space, but they’re using compromised technology. Your card might not have data center or HR access but I can get into those places within your organization just by coming up to some employee standing outside the building and bumming a light off of him.”

Organizations that are vulnerable have several options. Probably the cheapest involves the use of some type of sleeve for the smart cards. The wireless communications technology that these cards use to transmit data — called radio-frequency identification or RFID — can be blocked when not in use by storing the key cards inside a special RFID-shielding sleeve or wallet. Of course, organizations can replace their readers with newer (perhaps non-HID?) technology, and/or add biometric components to card readers, but these options could get pricey in a hurry.

A copy of the slides from Perrymon and Smith’s DefCon talk is available here.

Sociological ImagesWhat Does It Mean to be Authentically Cajun?

Flashback Friday.

The term “Cajun” refers to a group of people who settled in Southern Louisiana after being exiled from Acadia (now Nova Scotia, New Brunswick, and Prince Edward Island) in the mid 1700s.  For a very long time, being Cajun meant living, humbly, off the land and bayou (small-scale agriculture, hunting, fishing, and trapping).  Unique cuisine and music developed among these communities.

In Blue Collar Bayou, Jacques Henry and Carl Bankston III explain that today more than 70% live in urban areas and most work in blue collar jobs in service industries, factories, or the oil industry. “Like other working-class and middle-class Americans,” they write, “the Southwestern Louisianan of today is much more likely to buy dinner at the Super Kmart than to trap it in the bayou” (p. 188).


But they don’t argue that young Cajuns who live urban lifestyles and work in factories are no longer authentically Cajun.  Instead, they suggest that the whole notion of ethnic authenticity is dependent on economic change.

When our economy was a production economy (that is, who you are is what you make), it made sense that Cajun-ness was linked to how one made a living.  But, today, in a consumption economy (when our identities are tied up with what we buy), it makes sense that Cajun-ness involves consumption of products like food and music.

Of course, commodifying Cajun-ness (making it something that you can buy) means that, now, anyone can purchase and consume it.  Henry and Bankston see this more as a paradox than a problem, arguing that the objectification and marketing of “Cajun” certainly makes it sellable to non-Cajuns, but does not take away from its meaningfulness to Cajuns themselves.  Tourism, they argue, “encourages Cajuns to act out their culture both for commercial gain and cultural preservation” (p. 187).

Photos borrowed from GQ, EW, and My New Orleans.  Originally posted in 2009.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Planet DebianSteinar H. Gunderson: Blenovo part III

I just had to add this to the saga:

I got an email from Lenovo Germany today, saying they couldn't reach me (and that the case would be closed after two days if I don't contact them back). I last sent them the documents they asked for on July 3rd.

I am speechless.

Update, Aug 19: They actually called again today (not closing the case), saying that they had received the required documents and could repair my laptop under the ThinkPad Protection Program. I told him in very short terms what had happened (that Lenovo Norway had needed seven minutes to do what they needed three months for) and that this was the worst customer service experience I'd ever had, and asked them to close the case. He got a bit meek.

Planet DebianSteve Kemp: A tale of two products

This is a random post inspired by recent purchases. Some things we buy are practical, others are a little arbitrary.

I tend to avoid buying things for the sake of it, and have explicitly started decluttering our house over the past few years. That said sometimes things just seem sufficiently "cool" that they get bought without too much thought.

This entry is about two things.

A couple of years ago my bathroom was ripped apart and refitted. Gone was the old and nasty room, and in its place was a glorious space. There was only one downside to the new bathroom - you turn on the light and the fan comes on too.

When your wife works funny shifts at the hospital you can find that the (quiet) fan sounds very loud in the middle of the night and wakes you up.

So I figured we could buy a couple of LED lights and scatter them around the place - when it is dark the movement sensors turn on the lights.

These things are amazing. We have one sat on a shelf, one velcroed to the bottom of the sink, and one on the floor, just hidden underneath the toilet.

Due to the shiny-white walls of the room they're all you need in the dark.

By contrast my second purchase was a mistake - The Logitech Harmony 650 Universal Remote Control should be great. It clearly has the features I want - Able to power:

  • Our TV.
  • Our Sky-box.
  • Our DVD player.

The problem is solely due to the horrific software. You program the device via an application/website which works only under Windows.

I had to resort to installing Windows in a virtual machine to make it run:

# Get the bus and device number for the USB device
# (strip leading zeros and the trailing colon, but keep interior zeros).
bus=$(lsusb | grep -i Harmony | awk '{print $2}' | sed 's/^0*//')
id=$(lsusb | grep -i Harmony | awk '{sub(/:$/,""); print $4}' | sed 's/^0*//')

# pass to kvm
kvm -localtime ..  -usb -device usb-host,hostbus=$bus,hostaddr=$id ..

That allows the device to be passed through to windows, though you'll later have to jump onto the Qemu console to re-add the device as the software disconnects and reconnects it at random times, and the bus changes. Sigh.

I guess I can pretend it works, and it has cut down on the number of remotes sat on our table, but the overwhelmingly negative setup and configuration process has really soured me on it.

There is a linux application which will take a configuration file and squirt it onto the device, when attached via a USB cable. This software, which I found during research prior to buying it, is useful but not as much as I'd expected. Why? Well the software lets you upload the config file, but to get a config file you must fully complete the setup on Windows. It is impossible to configure/use this device solely using GNU/Linux.

(Apparently there is MacOS software too, I don't use macs. *shrugs*)

In conclusion - Motion-activated LED lights, more useful than expected, but Harmony causes Discord.

Worse Than FailureError'd: The Ouroboros Request

Rob wrote, "I was trying to raise a service request for an Office 365 issue but the service request functionality was broken. I could of course try and report this... by raising a service request."

 

"I tried to pay my toll charges today, like a good citizen, but it looks like the State of California considers it an error that the credit card request was approved," Jonathan writes.

 

Giulio C. wrote, "Babelfish seems quite sure of its successful translation, but I don't really think that is an Italian sentence."

 

"Seems as if there are some people in Germany and the US that still live in AOL," wrote Thilo.

 

Hamish writes, "Breaking news! Your Mac needs to restart!"

 

"After we upgraded our Fibre_Channel-SAN we received the following," Henry K. wrote, "Now, THAT'S what I call an upgrade!"

 

Philipp H. wrote, "This is certainly not an error that I'd expect to see on Linux Today's site."

 

"Saw this when looking at some laptops on dell.com...do I need to somehow solve for X?" asks Indrek.

 


Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: cells

This is the next post summarizing the Juno Nova mid-cycle meetup. This post covers the cells functionality used by some deployments to scale Nova.

For those unfamiliar with cells, it's a way of combining smaller Nova installations into a thing which feels like a single large Nova install. So for example, Rackspace deploys Nova in cells of hundreds of machines, and these cells form a Nova availability zone which might contain thousands of machines. The cells in one of these deployments form a tree: users talk to the top level of the tree, which might only contain API services. That cell then routes requests to child cells which can actually perform the operation requested.
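
For the curious, this split shows up directly in configuration: in the cells v1 era, each cell ran with a small [cells] section in nova.conf. The fragment below is a sketch from memory, so treat the option names (enable, name, cell_type) and values as assumptions to verify against the documentation for your release.

# Top-level cell: runs the API services and routes requests to children.
[cells]
enable = True
name = api
cell_type = api

# A child cell (its own nova.conf): runs the compute hosts that do the work.
[cells]
enable = True
name = cell01
cell_type = compute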

There are a few reasons why Rackspace does this. Firstly, it keeps the MySQL databases smaller, which can improve the performance of database operations and backups. Additionally, cells can contain different types of hardware, which are then partitioned logically. For example, OnMetal (Rackspace's Ironic-based baremetal product) instances come from a cell which contains OnMetal machines and only publishes OnMetal flavors to the parent cell.

Cells was originally written by Rackspace to meet its deployment needs, but is now used by other sites as well. However, I think it would be a stretch to say that cells is commonly used, and it is certainly not the deployment default. In fact, most deployments don't run any of the cells code, so you can't really call them even a "single cell install". One of the reasons cells isn't more widely deployed is that it doesn't implement the entire Nova API, which means some features are missing. As a simple example, you can't live-migrate an instance between two child cells.

At the meetup, the first thing we discussed regarding cells was a general desire to see cells finished and become the default deployment method for Nova. Perhaps most people end up running a single cell, but in that case at least the cells code paths are well used. The first step to get there is improving the Tempest coverage for cells. There was a recent openstack-dev mailing list thread on this topic, which was discussed at the meetup. There was commitment from several Nova developers to work on this, and notably not all of them are from Rackspace.

It's important that we improve the Tempest coverage for cells, because it positions us for the next step in the process, which is bringing feature parity to cells compared with a non-cells deployment. There is some level of frustration that the work on cells hasn't really progressed in Juno, and that it is currently incomplete. At the meetup, we made a commitment to bringing a well-researched plan to the Kilo summit for implementing feature parity for a single cell deployment compared with a current default deployment. We also made a commitment to make cells the default deployment model when this work is complete. If this doesn't happen in time for Kilo, then we will be forced to seriously consider removing cells from Nova. A half-done cells deployment has so far stopped other development teams from trying to solve the problems that cells addresses, so we either need to finish cells, or get out of the way so that someone else can have a go. I am confident that the cells team will take this feedback on board and come to the summit with a good plan. Once we have a plan we can ask the whole community to rally around and help finish this effort, which I think will benefit all of us.

In the next blog post I will cover something we've been struggling with for the last few releases: how we get our bug count down to a reasonable level.

Tags for this post: openstack juno nova mid-cycle summary cells
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots


Planet Linux AustraliaMichael Still: More bowls and pens

The pens are quite hard to make by the way -- the wood is only a millimeter or so thick, so it tends to split very easily.

[Photos: wood-turned bowls and pens.]

Tags for this post: wood turning 20140805-woodturning photo


Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: DB2 support

This post is one part of a series discussing the OpenStack Nova Juno mid-cycle meetup. It's a bit shorter than most of the others, because the next thing on my list to talk about is DB2, and that's relatively contained.

IBM is interested in adding DB2 support as a SQL database for Nova. Theoretically, this is a relatively simple thing to do because we use SQLAlchemy to abstract away the specifics of the SQL engine. However, in reality, the abstraction is leaky. The obvious example in this case is that DB2 has different rules for foreign keys than other SQL engines we've used. So, in order to be able to make this change, we need to tighten up our schema for the database.

The change that was discussed is the requirement that the UUID column on the instances table be not null. This seems like a relatively obvious thing to require, given that UUID is the official way to identify an instance, and has been for a really long time. However, there are a few things which make this complicated: we need to understand the state of databases that might have been through a long chain of upgrades from previous Nova releases, and we need to ensure that the schema alterations don't cause significant performance problems for existing large deployments.

As an aside, people sometimes complain that Nova development is too slow these days, and they're probably right, because things like this slow us down. A relatively simple change to our database schema requires a whole bunch of performance testing and negotiation with operators to ensure that it's not going to be a problem for people. It's good that we do these things, but sometimes it's hard to explain to people why forward progress is slow in these situations.

Matt Riedemann from IBM has been doing a good job of handling this change. He's written a tool that operators can run before the change lands in Juno that checks if they have instance rows with null UUIDs. Additionally, the upgrade process has been well planned, and is documented in the specification available on the fancy pants new specs website.
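
The gist of such a pre-flight check is small. Here is a hypothetical sketch in Python of the kind of query involved; it is not Matt's actual tool, and the connection URL and credentials are placeholders.

import sqlalchemy

engine = sqlalchemy.create_engine("mysql://nova:password@dbhost/nova")  # placeholder DSN

with engine.connect() as conn:
    null_uuids = conn.execute(
        sqlalchemy.text("SELECT COUNT(*) FROM instances WHERE uuid IS NULL")
    ).scalar()

# Any non-zero count needs fixing before the NOT NULL migration can land.
print(f"instance rows with NULL uuid: {null_uuids}")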

We had a long discussion about this change at the meetup, and how it would impact on large deployments. Both Rackspace and HP were asked if they could run performance tests to see if the schema change would be a problem for them. Unfortunately HP's testing hardware was tied up with another project, so we only got numbers from Rackspace. For them, the schema change took 42 minutes for a large database. Almost all of that was altering the column to be non-nullable; creating the new index was only 29 seconds of runtime. However, the Rackspace database is large because they don't currently purge deleted rows; if they can get that done before running this schema upgrade, the impact will be much smaller.

So the recommendation here for operators is that it is best practice to purge deleted rows from your databases before an upgrade, especially when schema migrations need to occur at the same time. There are some other takeaways for operators as well: if we know that operators have a large deployment, then we can ask if an upgrade will be a problem. This is why being active on the openstack-operators mailing list is important. Additionally, if operators are willing to donate a dataset to Turbo-Hipster for DB CI testing, then we can use that in our automation to try and make sure these upgrades don't cause you pain in the future.

In the next post in this series I'll talk about the future of cells, and the work that needs to be done there to make it a first class citizen.

Tags for this post: openstack juno nova mid-cycle summary sql database sqlalchemy db2
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: containers


,

Planet DebianJuliana Louback: JSCommunicator 2.0 (Beta) is Live!

This is the last week of Google Summer of Code 2014 - all good things must come to an end. To wrap things up, I’ve merged all my work on JSCommunicator into a new version with all the added features. You can now demo the new and improved (or at least so I hope) JSCommunicator on rtc.debian.org!

JSCommunicator 2.0 has an assortment of new add-ons, the most important new features are the Instant Messaging component and the internationalization support.

The UI has been reorganized but we are currently not using a skin for color scheme - will be posting about that in a bit. The idea is to have a more neutral look that can be easily customized and integrated with other web apps.

A chat session is automatically opened when you begin a call with someone - unless you already started a chat session with said someone. Sound alerts for new incoming messages are optional in the config file; visual alerts occur when an inactive chat tab receives a new message. Future work includes multiple user chat sessions and adapting the layout to a large number of chat tabs. Currently it only handles 6. (Should I allow more? Who chats with more than 6 people at once? 14 year old me would, but now I just can’t handle that. Anyway, I welcome advice on how to go about this. Should we do infinite tabs or if not, what’s the cut-off?)

About internationalization, I’m uber proud to say we currently run in 6 languages! The 6 are English (default), Spanish, French, Portuguese, Hebrew and German. But one thing I must mention is that since I added new stuff to JSCommunicator, some of the new stuff doesn’t have a translation. I took care of the Portuguese translation and Yehuda Korotkin quickly turned in the Hebrew translation, but we are still missing an update for Spanish, French and German. If you can contribute, please do. There are about 10 new labels to translate, you can fix the issue here. Or if you’re short on time, shoot me an email with the translation for what’s on the right side of the ‘=’:

welcome = Welcome,

call = Call

chat = Chat

enter_contact = Enter contact

type_to_chat = type to chat…

start_chat = start chat

me = me

logout = Logout

no_contact = Please enter a contact.

remember_me = Remember me

I’ll merge it myself but I’ll be sure to add you to the authors list.

LongNowDrew Endy Seminar Tickets

 

The Long Now Foundation’s monthly

Seminars About Long-term Thinking

Drew Endy presents “The iGEM Revolution”


Tuesday September 16, 02014 at 7:30pm SFJAZZ Center

Long Now Members can reserve 2 seats, join today! General Tickets $15

 

About this Seminar:

Drew Endy helped start the newest engineering major, bioengineering, at both MIT and Stanford. His research teams pioneered the redesign of genomes and invented the transcriptor, a simple DNA element that allows living cells to implement Boolean logic.

In 02013 President Obama recognized Endy for his work with the BioBricks Foundation to bootstrap a free-to-use language for programming life. He has been working with designers, social scientists, and others to transcend the industrialization of nature, most recently co-authoring Synthetic Aesthetics (MIT Press, 02014).

Drew is also a co-founder of Gen9, Inc., a DNA construction company, and the iGEM competition. Esquire magazine named Endy one of the 75 most influential people of the 21st century.

Planet DebianGregor Herrmann: RC bugs 2014/13 - 2014/33

perl 5.20 got uploaded to debian unstable a few minutes ago; be prepared for some glitches when upgrading sid machines/chroots in the next days, while all 557 reverse dependencies are rebuilt via binNMUs.

how does this relate to this blog post's title? it does, since during the last weeks I was mostly trying to help with the preparation of this transition. & we managed to fix quite a few bugs while they were not bumped to serious yet, otherwise the list below would be a bit longer :)

anyway, here are the RC bugs I've worked on in the last 20 or so weeks:

  • #711614 – src:libscriptalicious-perl: "libscriptalicious-perl: FTBFS with perl 5.18: test hang"
    upload new upstream release (pkg-perl)
  • #711616 – src:libtest-refcount-perl: "libtest-refcount-perl: FTBFS with perl 5.18: test failures"
    build-depend on fixed version (pkg-perl)
  • #719835 – libdevel-findref-perl: "libdevel-findref-perl: crash in XS_Devel__FindRef_find_ on Perl 5.18"
    upload new upstream release (pkg-perl)
  • #720021 – src:libhtml-template-dumper-perl: "libhtml-template-dumper-perl: FTBFS with perl 5.18: test failures"
    mark fragile test as TODO (pkg-perl)
  • #720271 – src:libnet-jabber-perl: "libnet-jabber-perl: FTBFS with perl 5.18: test failures"
    add patch to sort hash (pkg-perl)
  • #726948 – libmath-bigint-perl: "libmath-bigint-perl: uninstallable in sid - obsoleted by perl 5.18"
    upload new upstream release (pkg-perl)
  • #728634 – src:fusesmb: "fusesmb: FTBFS: configure: error: Please install libsmbclient header files."
    finally upload to DELAYED/2 with patch from November (using pkg-config)
  • #730936 – src:libaudio-mpd-perl: "libaudio-mpd-perl: FTBFS: Tests errors"
    upload new upstream release (pkg-perl)
  • #737434 – src:libmojomojo-perl: "[src:libmojomojo-perl] Sourceless file (minified)"
    add unminified version of javascript file to source package (pkg-perl)
  • #739505 – libcgi-application-perl: "libcgi-application-perl: CVE-2013-7329: information disclosure flaw"
    upload with patch prepared by carnil (pkg-perl)
  • #739809 – src:libgtk2-perl: "libgtk2-perl: FTBFS: Test failure"
    add patch from Colin Watson (pkg-perl)
  • #743086 – src:libmousex-getopt-perl: "libmousex-getopt-perl: FTBFS: Tests failures"
    add patch from CPAN RT (pkg-perl)
  • #743099 – src:libclass-refresh-perl: "libclass-refresh-perl: FTBFS: Tests failures"
    upload new upstream release (pkg-perl)
  • #745792 – encfs: "[PATCH] Fixing FTBFS on i386 and kfreebsd-i386"
    use DEB_HOST_MULTIARCH to find libraries, upload to DELAYED/2
  • #746148 – src:redshift: "redshift: FTBFS: configure: error: missing dependencies for VidMode method"
    add missing build dependency, upload to DELAYED/2
  • #747771 – src:bti: "bti: FTBFS: configure: line 3571: syntax error near unexpected token `PKG_CHECK_MODULES'"
    add missing build dependency
  • #748996 – libgd-securityimage-perl: "libgd-securityimage-perl: should switch to use libgd-perl"
    update (build) dependency (pkg-perl)
  • #749509 – src:visualvm: "visualvm: FTBFS: debian/visualvm/...: Directory nonexistent"
    use override_dh_install-indep in debian/rules (pkg-java)
  • #749825 – src:libtime-parsedate-perl: "libtime-parsedate-perl: trying to overwrite '/usr/share/man/man3/Time::ParseDate.3pm.gz', which is also in package libtime-modules-perl 2011.0517-1"
    add missing Breaks/Replaces (pkg-perl)
  • #749938 – libnet-ssh2-perl: "libnet-ssh2-perl: FTBFS: libgcrypt20 vs. libcrypt11"
    upload package with fixed build-dep, prepared by Daniel Lintott (pkg-perl)
  • #750276 – libhttp-async-perl: "libhttp-async-perl: FTBFS: Tests failures"
    upload new upstream release prepared by Daniel Lintott (pkg-perl)
  • #750283 – src:xacobeo: "xacobeo: FTBFS: Tests failures when network is accessible"
    add missing build dependency (pkg-perl)
  • #750305 – src:libmoosex-app-cmd-perl: "libmoosex-app-cmd-perl: FTBFS: Tests failures"
    add patch to fix test regexps (pkg-perl)
  • #750325 – src:libtemplate-plugin-latex-perl: "libtemplate-plugin-latex-perl: FTBFS: Tests failures"
    upload new upstream releases prepared by Robert James Clay (pkg-perl)
  • #750341 – src:cpanminus: "cpanminus: FTBFS: Trying to write outside builddir"
    set HOME for tests (pkg-perl)
  • #750564 – obexftp: "missing license in debian/copyright"
    add missing license to debian/copyright, QA upload
  • #750770 – libsereal-decoder-perl: "libsereal-decoder-perl: FTBFS on various architectures"
    upload new upstream development release (pkg-perl)
  • #751044 – packaging-tutorial: "packaging-tutorial: FTBFS - File `bxcjkjatype.sty' not found."
    send a patch (updated build-depends) to the BTS
  • #751563 – src:tuxguitar: "tuxguitar: depends on xulrunner which is no more"
    do some triaging (pkg-java)
  • #752171 – src:pcp: "pcp: Build depends on autoconf"
    upload NMU prepared by Xilin Sun, adding missing build dependency
  • #752347 – highlight: "highlight: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752349 – src:nflog-bindings: "nflog-bindings: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752469 – clearsilver: "clearsilver: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752470 – ekg2: "ekg2: hardcodes /usr/lib/perl5"
    calculate perl lib path at build time, QA upload
  • #752472 – fwknop: "fwknop: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752476 – handlersocket: "handlersocket: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, QA upload
  • #752704 – lcgdm: "lcgdm: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, upload to DELAYED/5
  • #752705 – libbuffy-bindings: "libbuffy-bindings: hardcodes /usr/lib/perl5"
    pass value of $Config{vendorarch} to dh_install in debian/rules, upload to DELAYED/5
  • #752710 – liboping: "liboping: hardcodes /usr/lib/perl5"
    use executable .install file for perl library path, upload to DELAYED/5
  • #752714 – lockdev: "lockdev: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752716 – ming: "ming: hardcodes /usr/lib/perl5"
    NMU with the minimal changes from the next release
  • #752799 – obexftp: "obexftp: hardcodes /usr/lib/perl5"
    calculate perl lib path at build time, QA upload
  • #752810 – src:razor: "razor: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752812 – src:redland-bindings: "redland-bindings: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #752815 – src:stfl: "stfl: hardcodes /usr/lib/perl5"
    create .install from .install.in at build time, upload to DELAYED/5
  • #752924 – libdbix-class-perl: "libdbix-class-perl: FTBFS: Failed test 'Cascading delete on Ordered has_many works'"
    add patch from upstream git (pkg-perl)
  • #752928 – libencode-arabic-perl: "libencode-arabic-perl: FTBFS with newer Encode: Can't locate object method "export_to_level" via package "Encode""
    add patch from Niko Tyni (pkg-perl)
  • #752982 – src:libwebservice-musicbrainz-perl: "libwebservice-musicbrainz-perl: hardcodes /usr/lib/perl5"
    pass create_packlist=0 to Build.PL, upload to DELAYED/5
  • #752988 – libnet-dns-resolver-programmable-perl: "libnet-dns-resolver-programmable-perl: broken with newer Net::DNS"
    add patch from CPAN RT (pkg-perl)
  • #752989 – libio-callback-perl: "libio-callback-perl: FTBFS with Perl 5.20: alternative dependencies"
    versioned close (pkg-perl)
  • #753026 – libje-perl: "libje-perl: FTBFS with Perl 5.20: test failures"
    upload new upstream release (pkg-perl)
  • #753038 – libplack-test-anyevent-perl: "libplack-test-anyevent-perl: FTBFS with Perl 5.20: alternative dependencies"
    versioned close (pkg-perl)
  • #753057 – libinline-java-perl: "libinline-java-perl: broken symlinks when built under perl 5.20"
    fix symlinks to differing paths in perl 5.18 vs. 5.20 (pkg-perl)
  • #753144 – src:net-snmp: "net-snmp: FTBFS on kfreebsd-amd64 - 'struct kinfo_proc' has no member named 'kp_eproc'"
    add patch from Niko Tyni, upload to DELAYED/5, later rescheduled to 0-day with maintainer's approval
  • #753214 – src:license-reconcile: "license-reconcile: FTBFS: Tests failures"
    make (build) dependency versioned (pkg-perl)
  • #753237 – src:libcgi-application-plugin-ajaxupload-perl: "libcgi-application-plugin-ajaxupload-perl: Tests failures"
    make (build) dependency versioned (pkg-perl)
  • #754125 – libimager-perl: "libimager-perl: FTBFS on s390x"
    close bug, package builds again after libpng upload (pkg-perl)
  • #754691 – src:libio-interface-perl: "libio-interface-perl: FTBFS on kfreebsd-*: invalid storage class for function 'XS_IO__Interface_if_flags'"
    add patch which adds a missing } (pkg-perl)
  • #754993 – libdevice-usb-perl: "libdevice-usb-perl: FTBFS with newer Inline(::C)"
    workaround an Inline bug in debian/rules
  • #755028 – src:libtk-tablematrix-perl: "libtk-tablematrix-perl: hardcodes /usr/lib/perl5"
    use $Config{vendorarch} in debian/rules, upload to DELAYED/5
  • #755324 – src:pinto: "pinto: FTBFS: Tests failures"
    add patch to "use" required module (pkg-perl)
  • #755332 – src:libdevel-nytprof-perl: "libdevel-nytprof-perl: FTBFS: Tests failures"
    mark failing tests temporarily as TODO (pkg-perl)
  • #757754 – obexftp: "obexftp: FTBFS: format not a string literal and no format arguments [-Werror=format-security]"
    add patch with format argument, QA upload
  • #757774 – src:libwx-glcanvas-perl: "libwx-glcanvas-perl: hardcodes /usr/lib/perl5"
    build-depend on new libwx-perl (pkg-perl)
  • #757855 – libwx-perl: "libwx-perl: embeds exact wxWidgets version, needs stricter dependencies"
    use virtual package provided by alien-wxwidgets (pkg-perl)
  • #758127 – src:libwx-perl: "libwx-perl: FTBFS on arm*"
    report and try to debug new build failure (pkg-perl)

p.s.: & now, go & enjoy the new perl 5.20 features :)

Planet Linux AustraliaMichael Still: Review priorities as we approach juno-3

I just sent this email out to openstack-dev, but I am posting it here in case it makes it more discoverable to people drowning in email:

To: openstack-dev
Subject: [nova] Review priorities as we approach juno-3

Hi.

We're rapidly approaching j-3, so I want to remind people of the
current reviews that are high priority. The definition of high
priority I am using here is blueprints that are marked high priority
in launchpad that have outstanding code for review -- I am sure there
are other reviews that are important as well, but I want us to try to
land more blueprints than we have so far. These are listed in the
order they appear in launchpad.

== Compute Manager uses Objects (Juno Work) ==

https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/compute-manager-objects-juno,n,z

This is ongoing work, but if you're after some quick code review
points they're very easy to review and help push the project forward
in an important manner.

== Move Virt Drivers to use Objects (Juno Work) ==

I couldn't actually find any code out for review for this one apart
from https://review.openstack.org/#/c/94477/, is there more out there?

== Add a virt driver for Ironic ==

This one is in progress, but we need to keep going at it or we won't
get it merged in time.

* https://review.openstack.org/#/c/111223/ was approved, but a rebase
ate it. Should be quick to re-approve.
* https://review.openstack.org/#/c/111423/
* https://review.openstack.org/#/c/111425/
* ...there are more reviews in this series, but I'd be super happy to
see even a few reviewed

== Create Scheduler Python Library ==

* https://review.openstack.org/#/c/82778/
* https://review.openstack.org/#/c/104556/

(There are a few abandoned patches in this series, I think those two
are the active ones but please correct me if I am wrong).

== VMware: spawn refactor ==

* https://review.openstack.org/#/c/104145/
* https://review.openstack.org/#/c/104147/ (Dan Smith's -2 on this one
seems procedural to me)
* https://review.openstack.org/#/c/105738/
* ...another chain with many more patches to review

Thanks,
Michael


The actual email thread is at http://lists.openstack.org/pipermail/openstack-dev/2014-August/043098.html.

Tags for this post: openstack juno review nova ptl
Related posts: Juno Nova PTL Candidacy; Thoughts from the PTL; Havana Nova PTL elections; Expectations of core reviewers; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots


Sociological Images#IfTheyGunnedMeDown Attacks Portrayals of Black Men Killed by Police

This has been a hard week.  Another young, unarmed black man was killed by police. The Root added Michael Brown’s face to a slideshow of such incidents, started after a black man named Eric Garner died after being put in a chokehold by officers less than one month ago.  This week’s guilty verdict in the trial of the man who shot Renisha McBride left me feeling numb.  Nothing good could come of it, but at least I didn’t feel worse.

The shooting of Michael Brown, however, is still undergoing trial by media and the verdict is swayed by the choices made by producers and directors as to how to portray him. When Mark Duggan was killed by police in London, media outlets often featured pictures in which he looked menacing, including ones that had been cropped in ways that enhanced that impression.

Left: Photo of Duggan frequently used by media; right: uncropped photo in which he holds a plaque commemorating his deceased daughter.


As the media coverage of Brown’s death heated up, the image that first circulated of Brown was this:

[Image: the photo of Michael Brown that first circulated in the media.]

Reports state that this was his current Facebook profile picture, with the implication that media actors just picked the first or most prominent picture they saw.  Or, even, that somehow it’s Brown’s fault that this is the image they used.

Using the image above, though, is not neutrality.  At best, it’s laziness; they simply decided not to make a conscious, careful choice.  It’s their job to pick a photograph and I don’t know exactly what the guidelines are but “pick the first one you see” or “whatever his Facebook profile pic was on the day he died” is probably not among them.

There are consequential choices to be made.  As an example, here are two photos that have circulated since criticism of his portrayal began — the top more obviously sympathetic and the bottom more neutral:

[Images: two alternative photos of Michael Brown, one sympathetic and one neutral.]

Commenting on this phenomenon, Twitter user @CJ_musick_lawya released two photos of himself, hashtagged with #iftheygunnedmedown, and asked readers which photo they thought media actors would choose.

Top: Wearing a cap and gown with former President Clinton; bottom: in sunglasses posing with a bottle and a microphone.


The juxtaposition brilliantly revealed how easy it is to demonize a person, especially if they are a member of a social group stereotyped as violence-prone, and how important representation is.  It caught on and the imagery was repeated to powerful effect. A summary at The Root featured examples like these:

[Images: example posts from the #IfTheyGunnedMeDown hashtag.]

The New York Times reports that the hashtag has been used more than 168,000 times as of  August 12th.  I want to believe that conversations like these will educate and put pressure on those with the power to represent black men and all marginalized peoples to make more responsible and thoughtful decisions.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

Sam VargheseEmma Alberici strikes again

EMMA ALBERICI: And the question is: can air strikes drive the Islamic State out of the Middle East? – The ABC’s Lateline programme on August 13, 2014

I KID you not. This was a serious question put to David Kilcullen, a so-called counter-insurgency expert, by Emma Alberici, one of the most glorious examples of incompetence at the Australian national broadcaster.

Now Alberici, one would assume, has some idea about the size of the Middle East. One would also assume that she is aware that in no conflict has air power, no matter how awesome, been able to drive an enemy out of a battle zone.

How did she ask such a dumb question?

Despite her stupidity, this is the woman chosen to front one of the ABC’s national programmes twice or thrice a week. She draws a salary of around $190,000 per annum and sits there, tilting her head from side to side, and asking stupid questions. And this is not the first time I have had occasion to point this out.

The discussion revolved around the Islamic State in Iraq and the Levant – which now calls itself Islamic State – a militant group which has made rapid gains in taking over towns and cities in Iraq, and some parts of Syria. It is also fighting in the south of Lebanon. The US has launched air strikes on the group to protect minority sects which are being terrorised and fleeing their residences.

The choice of Kilcullen to discuss matters relating to militancy is questionable. According to a genuine expert, Kilcullen was one of those who, along with John Nagl and other counter-insurgency “experts”, devised a strategy in Afghanistan that aimed to unite Afghans by trying to Westernise them via popular elections, installing women’s rights, dismantling tribalism, introducing secularism and establishing NGO-backed bars and whorehouses in Kabul. When the West finally leaves that war-torn country later this year, the Taliban will be back within another six months.

But let’s leave that alone; maybe the choice of Kilcullen was made by someone else. However, no matter who chooses the guest to be interviewed, it is the presenter’s choice to do some preparation and not end up looking stupid. Alberici is a master of the art of putting her foot in her mouth.

A week ago, a young man named Steve Cannane presented Lateline. He had as his guest Martin Chulov, the Middle East correspondent for the Guardian. Chulov is an old hand in the Mideast and very sound on the subject. Cannane did not put a foot wrong; he had prepared well and asked intelligent questions. The whole interview was gripping and highly informative stuff.

And then we have Alberici. Why, oh why, can the ABC not find a better presenter? In the past, the likes of Maxine McKew and Virginia Trioli were excellent presenters on the same programme; Tony Jones does an adequate job on other nights of the week now.

What is the hold that Alberici has on the ABC top brass? She was a miserable failure at hosting a programme called Business Breakfast which gave many people indigestion. For that, she has been made the presenter of what is arguably the ABC’s second-most important news and current affairs programme after 7.30. At the ABC, it would appear, nothing succeeds like failure.

Planet Linux AustraliaAndrew Pollock: [life] Day 197: Ekka outing

I started the day with a yoga class. It was good to be back. I missed last week's class because I was under the weather, and was really missing yoga. It was just me and one other student this morning, so it was nice.

I picked up Zoe from Sarah's place this morning. She was a bit wrecked from an outing to the Ekka yesterday with Sarah, but adamant that she wasn't tired, and still wanted to go to the Ekka with me today.

I decided that given she was up for it, and tomorrow's schedule didn't really permit going, it had to be today or bust, so I figured we'd just go and take it gently, and go home at the first sign of trouble.

The day worked out perfectly fine. We caught the train in, and stopped at the animal nursery first. Zoe got to hand feed some lambs, goats and calves, as well as hold a baby chicken.

The main goal for the day was the rides, so we headed over there, after I'd gotten propositioned by the Surf Life Savers for a raffle ticket, and located some fairy floss for Zoe. Suitably sugared up, we hit the kids rides area.

I'd prepurchased a $40 ride card, at a $5 discount, and tried to impress upon Zoe that once it was exhausted, we were done with the rides. She seemed to get that. I did less well with convincing her to check out everything on offer before we started blowing money on rides.

The first thing she wanted to go on was the Magic Circus, which they mercifully only charged us one entry for. It was a pretty cool multi-level physical sensory sort of thing. It was fun to go with her.

After that we waited in line for an eternity for bungy trampoline. This was where I wished we'd scouted around a bit first, because we waited in line for ages for a single trampoline, where there were another four standing idle a bit further down. I used the wait to grab a bit of food and share it with Zoe.

Next, we went on the dodgem cars. That was heaps of fun. Zoe couldn't reach the pedal, but she could steer (with a little bit of help occasionally). She seems to really enjoy rides where she gets thrown around. She's going to be a total adrenaline junkie when she's bigger I think.

What I thought was going to be her last ride was the Big Bubble Bump, those big air-inflated balls on a wading pool. She was very keen for that one, and the line was short. She had lots of fun tumbling all over the place.

With a little bit of extra assistance, we managed to squeeze one more go on the Magic Circus out of the ride card, which made her very happy.

After the obligatory strawberry sundae, she was pretty much done, and we'd managed to avoid the rain, so we headed home. I thought she was going to fall asleep on the train on the way home, but she didn't, and perked up by the time we got home. I tried to convince her to nap in my bed while I read a book, but she ended up just playing with Smudge while I read for a bit.

After the quiet time, we went for a scooter ride around the block, via the Hawthorne Garage, to collect some produce for making a fresh batch of vegetable stock concentrate, and then Sarah arrived to pick Zoe up.

It was a really good day, and Zoe went really well. I bet she crashes tonight.

Worse Than FailureCodeSOD: Securing Input

We all know that many developers have difficulty in dealing with built-in concepts like dates and times, and that for and switch statements don't necessarily have to be used with each other. However, validating a piece of input is usually more straightforward. You compare what you got to what was expected.

Mathieu was tasked with migrating some Groovy scripts. While the technical migration was fairly easy, he found it necessary to rewrite certain portions of the input validation routines. For example, the task of validating the month portion of a date string seemed straightforward enough...

  def MONTHLIST = "JANFEBMARAPRMAYJUNJULAUGSEPOCTNOVDEC" 
  /* ... */ 
  def MonthOk = MONTHLIST.contains(date.substring(1, 4).toUpperCase())

This works well for months like ANF and POC.
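
A straightforward repair -- offered here as a hedged sketch, not Mathieu's actual rewrite, and reusing the original's substring bounds since the exact date format isn't shown -- is to test membership in a list of month tokens, so fragments spanning two months no longer pass:

  // Sketch only: a list makes contains() match whole tokens,
  // so boundary-straddling fragments like "ANF" and "POC" are rejected.
  def MONTHS = ["JAN", "FEB", "MAR", "APR", "MAY", "JUN",
                "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"]
  def MonthOk = MONTHS.contains(date.substring(1, 4).toUpperCase())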

Of course, this was written by the same highly skilled person who decided that you needed scratch variables, a try-catch block and a thrown exception just to see if something was null...

def noNullString(str) { 
  def rc = str 
  try { 
      def x = rc.trim(); 
  } 
  catch (NullPointerException e) { 
      rc = "" 
  } 
  return rc; 
}

I guess we should all count our blessings that the developer of that code limited himself to strings; imagine what he might have done with a boolean?!
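
For what it's worth, Groovy's Elvis operator does the same job in a single line -- a sketch, not anything from the codebase Mathieu inherited:

  // Returns str unchanged when it is non-null, "" otherwise.
  def noNullString(str) { str ?: "" }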


Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: ironic

Welcome to the third in my set of posts covering discussion topics at the nova juno mid-cycle meetup. The series might never end, to be honest.

This post will cover the progress of the ironic nova driver. This driver is interesting as an example of a large contribution to the nova code base for a few reasons -- it's an official OpenStack project instead of a vendor driver, which means we should already have well aligned goals. The driver has been written entirely using our development process, so it's already been reviewed to OpenStack standards, instead of being a large code dump from a separate development process. Finally, it's forced us to think through what merging a non-trivial code contribution should look like, and I think that formula will be useful for later similar efforts, the Docker driver for example.

One of the sticking points with getting the ironic driver landed is exactly how upgrade for baremetal driver users will work. The nova team has been unwilling to just remove the baremetal driver, as we know that it has been deployed by at least a few OpenStack users -- the largest deployment I am aware of is over 1,000 machines. Now, this is unfortunate because the baremetal driver was always intended to be experimental. I think what we've learnt from this is that any driver which merges into the nova code base has to be supported for a reasonable period of time -- nova isn't the right place for experiments. Now that we have the stackforge driver model I don't think that's too terrible, because people can iterate quickly in stackforge, and when they have something stable and supportable they can merge it into nova. This gives us the best of both worlds, while providing a strong signal to deployers about what the nova team is willing to support for long periods of time.

The solution we came up with for upgrades from baremetal to ironic is that the deployer will upgrade to juno, and then run a script which converts their baremetal nodes to ironic nodes. This script is "off line" in the sense that we do not expect new baremetal nodes to be launchable during this process, nor after it is completed. All further launches would be via the ironic driver.

These nodes that are upgraded to ironic will exist in a degraded state. We are not requiring ironic to support their full set of functionality on these nodes, just the bare minimum that baremetal did, which is listing instances, rebooting them, and deleting them. Launch is excluded for the reasons described above.

We have also asked the ironic team to help us provide a baremetal API extension which knows how to talk to ironic, but this was identified as a need fairly late in the cycle and I expect it to be a request for a feature freeze exception when the time comes.

The current plan is to remove the baremetal driver in the Kilo release.

Previously in this post I alluded to the review mechanism we're using for the ironic driver. What does that actually look like? Well, what we've done is ask the ironic team to propose the driver as a series of smallish (500 line) changes. These changes are broken up by functionality, for example the code to boot an instance might be in one of these changes. However, because of the complexity of splitting existing code up, we're not requiring a tempest pass on each step in the chain of reviews. We're instead only requiring this for the final member in the chain. This means that we're not compromising our CI requirements, while maximizing the readability of what would otherwise be a very large review. To stop the reviews from merging before we're comfortable with them, there's a marker review at the beginning of the chain which is currently -2'ed. When all the code is ready to go, I remove the -2 and approve that first review and they should all merge together.

In the next post I'll cover the state of adding DB2 support to nova.

Tags for this post: openstack juno nova mid-cycle summary ironic
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: containers


Planet DebianHideki Yamane: New Debian T-shirts (2014 summer)

Every 4 or 5 years, Jun Nogata makes Debian T-shirts, and today I got the 2014 summer version (thanks! :-) -- it looks good.


I'll take 2 or 3 Japanese large-size ones to DebConf14 in Portland. Please let me know if you want one. (Update: all T-shirts are reserved now, thanks)

Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: containers

This is the second in my set of posts discussing the outcomes from the OpenStack nova juno mid-cycle meetup. I want to focus in this post on things related to container technologies.

Nova has had container support for a while in the form of libvirt LXC. While it can be argued that this support isn't feature complete and needs more testing, it's certainly been around for a while. There is renewed interest in testing libvirt LXC in the gate, and a team at Rackspace appears to be working on this as I write this. We have already seen patches from this team as they fix issues they find on the way. There are no plans to remove libvirt LXC from nova at this time.

The plan going forward for LXC tempest testing is to add it as an experimental job, so that people reviewing libvirt changes can request the CI system to test LXC by using "check experimental". This hasn't been implemented yet, but will be advertised when it is ready. Once we've seen good stable results from this experimental check we will talk about promoting it to be a full blown check job in our CI system.

We have also had prototype support for Docker for some time, and by all reports Eric Windisch has been doing good work at getting this driver into a good place since it moved to stackforge. We haven't started talking about specifics for when this driver will return to the nova code base, but I think at this stage we're talking about Kilo at the earliest. The driver has CI now (although it's still working through stability issues, to my understanding) and progresses well. I expect there to be a session at the Kilo summit in the nova track on the current state of this driver, and we'll decide whether to merge it back into nova then.

There was also representation from the containers sub-team at the meetup, and they spent most of their time in a break out room coming up with a concrete proposal for what container support should look like going forward. The plan looks a bit like this:

Nova will continue to support "lowest common denominator containers": by this I mean that things like the libvirt LXC and docker driver will be allowed to exist, and will expose the parts of containers that can be made to look like virtual machines. That is, a caller to the nova API should not need to know if they are interacting with a virtual machine or a container; it should be as opaque to them as possible. There is some ongoing discussion about the minimum functionality we should expect from a hypervisor driver, so we can expect this minimum level of functionality to move over time.

The containers sub-team will also write a separate service which exposes a more full featured container experience. This service will work by taking a nova instance UUID, and interacting with an agent within that instance to create containers and manage them. This is interesting because it is the first time that a compute project will have an in-operating-system agent, although other projects have had these for a while. There was also talk about the service being able to start an instance if the user didn't already have one, or being able to declare an existing instance to be "full" and then create a new one for the next incremental container. These are interesting design issues, and I'd like to see them explored more in a specification.

This plan met with general approval within the room at the meetup, with the suggestion being that it move forward as a stackforge project as part of the compute program. I don't think much code has been implemented yet, but I hope to see something come of these plans soon. The first step here is to create some specifications for the containers service, which we will presumably create in the nova-specs repository for want of a better place.

Thanks for reading my second post in this series. In the next post I will cover progress with the Ironic nova driver.

Tags for this post: openstack juno nova mid-cycle summary containers docker lxc
Related posts: Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: social issues; Juno nova mid-cycle meetup summary: slots


Planet Linux AustraliaMichael Still: Juno nova mid-cycle meetup summary: social issues

Summarizing three days of the Nova Juno mid-cycle meetup is a pretty hard thing to do -- I'm going to give it a go, but just in case I miss things, there is an etherpad with notes from the meetup at https://etherpad.openstack.org/p/juno-nova-mid-cycle-meetup. I'm also going to do it in the form of a series of posts, so as to not hold up any content at all in the wait for perfection. This post covers the mechanics of each day at the meetup, reviewer burnout, and the Juno release.

First off, some words about the mechanics of the meetup. The meetup was held in Beaverton, Oregon at an Intel campus. Many thanks to Intel for hosting the event -- it is much appreciated. We discussed possible locations and attendance for future mid-cycle meetups, and the consensus is that these events should "always" be in the US because that's where the vast majority of our developers are. We will consider other host countries when the mix of Nova developers changes. Additionally, we talked about the expectations of attendance at these events. The Icehouse mid-cycle was an experiment, but now that we've run two of these I think they're clearly useful events. I want to be clear that we expect nova-drivers members to attend these events if at all possible, and strongly prefer to have all nova-cores at the event.

I understand that sometimes life gets in the way, but that's the general expectation. To assist with this, I am going to work on advertising these events much earlier than we have in the past to give time for people to get travel approval. If any core needs me to go to the Foundation and ask for travel assistance, please let me know.

I think that co-locating the event with the Ironic and Containers teams helped us a lot this cycle too. We can't co-locate with every other team working on OpenStack, but I'd like to see us pick a couple of teams -- who we might be blocking -- each cycle and invite them to co-locate with us. It's easy at this point for Nova to become a blocker for other projects, and we need to be careful not to get in the way unless we absolutely need to.

The process for each of the three days: we met at Intel at 9am, and started each day by trying to cherry pick the most important topics from our grab bag of items at the top of the etherpad. I feel this worked really well for us.

Reviewer burnout

We started off talking about core reviewer burnout, and what we expect from core. We've previously been clear that we expect a minimum level of reviews from cores, but we are increasingly concerned about keeping cores "on the same page". The consensus is that, at least, cores should be expected to attend summits. There is a strong preference for cores making it to the mid-cycle if at all possible. It was agreed that I will approach the OpenStack Foundation and request funding for cores who are experiencing budget constraints if needed. I was asked to communicate these thoughts on the openstack-dev mailing list. This openstack-dev mailing list thread is me completing that action item.

The conversation also covered whether it was reasonable to make trivial updates to a patch that was close to being acceptable. For example, consider a patch which is ready to merge apart from its commit message needing a trivial tweak. It was agreed that it is reasonable for the second core reviewer to fix the commit message, upload a new version of the patch, and then approve that for merge. It is a good idea to leave a note in the review history about this when these cases occur.

We expect cores to use their judgement about what is a trivial change.

I have an action item to remind cores that this is acceptable behavior. I'm going to hold off on sending that email for a little bit because there are a couple of big conversations happening about Nova on openstack-dev. I don't want to drown people in email all at once.

Juno release

We also took a look at the Juno release, with j-3 rapidly approaching. One outcome was to try to find a way to focus reviewers on landing code that is a project priority. At the moment we signal priority with the priority field in the launchpad blueprint, which can be seen in action for j-3 here. However, high priority code often slips away because we currently let reviewers review whatever seems important to them.

There was talk about picking project sponsored "themes" for each release -- with the obvious examples being "stability" and "features". One problem here is that we haven't had a lot of luck convincing developers and reviewers to actually work on things we've specified as project goals for a release. The focus needs to move past specific features important to reviewers. Contributors and reviewers need to spend time fixing bugs and reviewing priority code. The harsh reality is that this hasn't been a glowing success.

One solution we're going to try is using more of the Nova weekly meeting to discuss the status of important blueprints. The meeting discussion should then be turned into a reminder on openstack-dev of the current important blueprints in need of review. The side effect of rearranging the weekly meeting is that we'll have less time for the current sub-team updates, but people seem ok with that.

A few people have also suggested various interpretations of a "review day". One interpretation is a rotation through nova-core of reviewers who spend a week of their time reviewing blueprint work. I think these ideas have merit. I have an action item to call for volunteers to sign up for blueprint-focused reviewing.

Conclusion

As I mentioned earlier, this is the first in a series of posts. In this post I've tried to cover the social aspects of nova -- the mechanics of the Nova Juno mid-cycle meetup and reviewer burnout -- and our current position in the Juno release cycle. There was also discussion of how to manage our workload in Kilo, but I'll leave that for another post; it's already been alluded to on the openstack-dev mailing list in this post and the subsequent proposal in gerrit. If you're dying to know more about what we talked about, don't forget the relatively comprehensive notes in our etherpad.

Tags for this post: openstack juno nova mid-cycle summary core review social
Related posts: Juno nova mid-cycle meetup summary: slots; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler; Juno nova mid-cycle meetup summary: ironic; Juno nova mid-cycle meetup summary: DB2 support; Juno nova mid-cycle meetup summary: containers


CryptogramNew Snowden Interview in Wired

There's a new article on Edward Snowden in Wired. It's written by longtime NSA watcher James Bamford, who interviewed Snowden in Moscow.

There's lots of interesting stuff in the article, but I want to highlight two new revelations. One is that the NSA was responsible for a 2012 Internet blackout in Syria:

One day an intelligence officer told him that TAO -- a division of NSA hackers -- had attempted in 2012 to remotely install an exploit in one of the core routers at a major Internet service provider in Syria, which was in the midst of a prolonged civil war. This would have given the NSA access to email and other Internet traffic from much of the country. But something went wrong, and the router was bricked instead -- rendered totally inoperable. The failure of this router caused Syria to suddenly lose all connection to the Internet -- although the public didn't know that the US government was responsible....

Inside the TAO operations center, the panicked government hackers had what Snowden calls an "oh shit" moment. They raced to remotely repair the router, desperate to cover their tracks and prevent the Syrians from discovering the sophisticated infiltration software used to access the network. But because the router was bricked, they were powerless to fix the problem.

Fortunately for the NSA, the Syrians were apparently more focused on restoring the nation’s Internet than on tracking down the cause of the outage. Back at TAO's operations center, the tension was broken with a joke that contained more than a little truth: "If we get caught, we can always point the finger at Israel."

Other articles on Syria.

The other is something called MONSTERMIND, which is an automatic strike-back system for cyberattacks.

The program, disclosed here for the first time, would automate the process of hunting for the beginnings of a foreign cyberattack. Software would constantly be on the lookout for traffic patterns indicating known or suspected attacks. When it detected an attack, MonsterMind would automatically block it from entering the country -- a "kill" in cyber terminology.

Programs like this had existed for decades, but MonsterMind software would add a unique new capability: Instead of simply detecting and killing the malware at the point of entry, MonsterMind would automatically fire back, with no human involvement.

A bunch more articles and stories on MONSTERMIND.

And there's this 2011 photo of Snowden and former NSA Director Michael Hayden.

Planet DebianDaniel Pocock: Bug tracker or trouble ticket system?

One of the issues that comes up from time to time in many organizations and projects (both community and commercial ventures) is the question of how to manage bug reports, feature requests and support requests.

There are a number of open source solutions and proprietary solutions too. I've never seen a proprietary solution that offers any significant benefit over the free and open solutions, so this blog only looks at those that are free and open.

Support request or bug?

One common point of contention is the distinction between support requests and bugs. Users do not always know the difference.

Some systems, like the Github issue tracker, gather all the requests together in a single list. Calling them "Issues" invites people to submit just about anything, such as "I forgot my password".

At the other extreme, some organisations are so keen to keep support requests away from their developers that they operate two systems, and a designated support team copies genuine bugs from the customer-facing trouble-ticket/CRM system to the bug tracker. This reduces the amount of spam that hits the development team, but there is overhead in running multiple systems and in having staff do cut-and-paste work.

Will people use it?

Another common problem is that a full bug report template is overkill for some issues. If a user is asking for help with some trivial task, and the tool asks them to answer twenty questions about their system and application version, submit log files and meet other requirements, then they won't use it at all and may just revert to sending emails or making phone calls.

Ideally, it should be possible to demand such details only when necessary. For example, if a support engineer routes a request to a queue for developers, then the system may guide the support engineer to make sure the ticket includes attributes that a ticket in the developers' queue should have.

Beyond Perl

Some of the most well known systems in this space are Bugzilla, Request Tracker and OTRS. All of these solutions are developed in Perl.

These days, Python, JavaScript/Node.JS and Java have taken more market share and Perl is chosen less frequently for new projects. Perl skills are declining and younger developers have usually encountered Python as their main scripting language at university.

My personal perspective is that this hinders the ability of Perl projects to attract new blood or leverage the benefits of new Python modules that don't exist in Perl at all.

Bugzilla has fallen out of the Debian and Ubuntu distributions after squeeze due to its complexity. In contrast, Fedora carries the Bugzilla packages and also uses it as their main bug tracker.

Evaluation

I recently started having a look at the range of options in the Wikipedia list of bug tracking systems.

Some of the trends that appear:

  • Many appear to be bug tracking systems rather than issue tracking / general-purpose support systems. How well do they accept non-development issues and keep them from spamming the developers, while still providing useful features for the subset of users who are doing development?
  • A number of them try to bundle other technologies, like wiki or FAQ systems: but how well do they work with existing wikis? This trend towards monolithic products is slightly dangerous. In my own view, a wiki embedded in some other product may not be as well supported as one of the leading purpose-built wikis.
  • Some of them also appear to offer various levels of project management. For development tasks, it is just about essential for dependencies and a roadmap to be tightly integrated with the bug/feature tracker but does it make the system more cumbersome for people dealing with support requests? Many support requests, like "I've lost my password", don't really have any relationship with project management or a project roadmap.
  • Not all appear to handle incoming requests by email. Bug tracking systems can be purely web/form-based, but email is useful for helpdesk systems.

Questions

This leaves me with some of the following questions:

  • Which of these systems can be used as a general purpose help-desk / CRM / trouble-ticket system while also being a full bug and project management tool for developers?
  • For those systems that don't work well for both use cases, which combinations of trouble-ticket system + bug manager are most effective, preferably with some automated integration?
  • Which are more extendable with modern programming practices, such as Python scripting and using Git?
  • Which are more future proof, with choice of database backend, easy upgrades, packages in official distributions like Debian, Ubuntu and Fedora, scalability, IPv6 support?
  • Which of them are suitable for the public internet and which are only considered suitable for private access?


TEDFrom supermodel to managing editor: How Cameron Russell’s TED Talk inspired her to start a magazine


Cameron Russell and art director Hannah Assebe think over an idea for the redesign of the Interrupt magazine website. They can’t wait to show off the final product in September. Photo: Becky Chung

In the chaotic heart of downtown Brooklyn, a door greets you with an illustration of a man in a pink tutu and leopard print leggings tossing up a peace sign. Welcome to Space-Made—the art lab where Cameron Russell and her collaborators create Interrupt, a magazine that lets marginalized communities tell their own stories. It’s a concept that the supermodel felt compelled to launch after her TED Talk, “Looks aren’t everything. Believe me, I’m a model,” went viral.

At a table, near the bubblegum machine and shelves full of photography books, Russell and Interrupt’s art director, Hannah Assebe, go over the latest wireframes for their website redesign. Fresh copies of Interrupt’s fourth issue, themed “I Live For That!,” are stacked next to them. For this issue, which explores the bounds of LGBTQ love, Russell and Assebe tried something unusual: rather than curate the magazine themselves, they appointed two outside groups as co-editors-in-chief—Project SOL, an LGBTQ teen group, and HOLAAfrica, a pan-African Queer Feminist Collective. The result was a fiery, fresh publication like no other. Russell and Assebe learned a lot from this collaboration, and are using the experiment to help them with the next iterations of Interrupt.

But before we talk about where they’re going from here, let’s rewind a little to where it all started: at TEDxMidAtlantic in 2012. Russell gave a bold talk at the event, admitting that becoming a model was easy because she happened to have won the “genetic lottery” of being white, pretty and privileged. In the talk, Russell shared her surprise at how often young girls want to know how they too could be models. “Why?” Russell asked on-stage, “You can be anything. You could be the President of the United States, or the inventor of the next Internet, or a ninja cardio-thoracic surgeon poet, which would be awesome, because you’d be the first one.” In her talk, Russell stressed that modeling is not a sustainable career path. “You don’t have any creative control,” she said.

As Russell hopped off the stage, she started cooking up an idea: to create her own magazine. She initially wanted to create a publication for audiences outside of mainstream fashion. “I felt like fashion was this really tiny world. You know everybody. You don’t work with new people very often. It’s the same characters,” she remembers. “I noticed many fashion bloggers, who were the same people being marginalized from mainstream media, and I thought a magazine format could pull them all together.”

Russell ended up moving away from this vision. But the idea of featuring new voices and faces paved the way for Interrupt.

For Russell, the conditions for growing an experimental magazine seemed perfect. Before her TED Talk, she and a small team had already been playing with community storytelling through her art-meets-activism collective Space-Made. “We thought art really engages a massive number of people. What if that could be translated to a political action?” she explains. Space-Made’s mission prioritized the voices of women, people of color, LGBTQ, low-income and other artists from often-marginalized communities. Over the years, their projects included a writing workshop for 16- and 17-year-olds following the Zimmerman verdict, interactive technology courses for seniors and an art hack day around the topic of campaign finance reform.


Cameron Russell shot down the idea that it’s “hard” to be a model at TEDxMidAtlantic. Her talk quickly went viral. Photo: Courtesy of TEDxMidAtlantic

Meanwhile, as the views of her TED Talk climbed, Russell noticed a power stemming from the avalanche of media interest around her talk. “I didn’t see the point in rehashing the same narrative, my own story of being a model. It was getting stale,” she says. Instead, while she had the spotlight, she thought she could show what others were working on instead. She posted a letter on Tumblr: “Can we reroute this press deluge… to create opportunities for more of our voices to get heard in the average news cycle? Let’s try and use it as a platform to say what we want. Let’s put forward a diversity of radical ideas; let’s showcase the programs and organizations we make strong.”

Russell found herself flooded with stories from all walks of life—stories that cracked open social perceptions often seen in mainstream media and that highlighted voices on the fringe. Russell explains, “I was thinking about access to media and it clicked: What if we build a space for media-makers who don’t have it?”

In the Interrupt office, Russell and Assebe tell the story of how their “participatory” magazine, which devotes its pages to the work of these media outsiders, came to be. They also reminisce about how they first met—when Russell hired Assebe to make the slides for her TED Talk. “I didn’t think you were a model when I first met you,” recalls Assebe.

“I get that a lot, actually,” says Russell. “Tons of people come up to me to ask about my TED Talk now, not about [modeling]. I feel it’s easier to remember someone if you’ve been staring at them speak, than if you’ve flipped past their photo in an ad. You’ve invested time in hearing what they have to say.”

With the help of Assebe and the Space-Made team, Russell launched the first issue of Interrupt Magazine, “Put Me On TV!” with essays and videos from women all around the world who are working to improve media representation. In subsequent issues, they let readers vote on themes. Issue two, called “Lips, Tits, Zits, Thighs, Eyes and Muffin Tops,” their first print publication, was designed to look like a tabloid, but with more thoughtful content than what you’d find in People or InTouch. The cover teased stories about plus-sized models, journal entries of young girls who love their bodies and an essay and photo documentary by writer H. Tucker Rosenbrok on his transition from female to male. Issue three was a digital collection of stories on race.

With issue four came the co-editors-in-chief experiment. Russell and the team decided not just to feature work of outsiders, but to have community leaders curate their own issue. With this, the purpose and editorial strategy of Interrupt Magazine have begun to shift. “I have resources—filmmakers, editors and graphic designers. Why not pass those off to a different editor-in-chief every time?” asks Russell. “Like elected officials have term limits, we decided to have them for our editors too, to keep the ideas and the voice of Interrupt fresh and bring new audiences each time around.”

Editors-in-chief have full freedom over how their stories are told, says Russell. They even have full control over the theme. This new strategy also includes thinking of an issue as a content package, or an experience, with Interrupt at its center. To complement issue four, Space-Made simultaneously launched “We Are The Youth,” a book of portrait photos of LGBTQ youth in the United States and their stories in the form of as-told-to essays.

Because Interrupt focuses on communities and groups who rarely see their stories told in magazine form, Russell aims to distribute issues directly to their respective communities. Copies of issue four, for example, will be available in LGBTQ youth shelters in New York.


The cover illustration for issue four, “I Live For That!” was drawn by Mohammed “MoJuicy” Fayaz. (Who also made the illustration on the door of Space-Made.) In an editors’ note, Project SOL and HOLAAfrica! write, “This issue is an effort to represent the way we see the world and ourselves. It is not meant to speak for all LGBTQ youth, but we hope to inspire people to create their own stories and media.” Photo: Becky Chung

“I’m totally fascinated by undervalued leaders and experts,” says Russell, “Why does our media ignore them, why does our electoral system ignore them? I wanted to build a sustainable platform for them—be it a magazine, a media outlet or a physical space, a network. I think there are a lot of different iterations.”

Whether digital or print, each issue of Interrupt feels substantial. And each feels reminiscent of the zine era thanks to its size, its DIY sensibility and its small, concentrated doses of content that kick you in the head. Class mobility, foster care and DIY feminism will be themes of future issues, says Russell, each with its own unique editor-in-chief and distribution strategy.

While working on the upcoming issue about class mobility, the newest editor-in-chief Lynn Cyrin, an advocate for the rights of transwomen and homeless women, told Russell a story: Most homeless shelters in San Francisco do not have wifi—a huge roadblock because it prevents the homeless from having access to the rest of the world via the Internet. When she was homeless, she taught herself how to program. Russell noticed a thread with other editors-in-chief that Interrupt is working with: “Because they are experts of their communities, they are identifying places where resources are not going.”

Russell and her team think a lot about how their platform can keep helping creators lead the conversation, and how they can continually support the editors in their future work and causes. Interrupt’s redesigned website, to be revealed in the upcoming months, will further explore this idea. Not all issues will have a print or an interactive component—but all will be on the site. Each new issue will take center stage while older issues will exist on a separate page, so that they can continue to “interrupt” the conversation. Russell and her team are also brainstorming the best ways to help readers dip their toes into the activism waters. The current iteration of the site helps readers learn how to get involved in the social causes brought up in different issues, but they think they can do more.

In addition to running this ambitious, big-vision magazine, Russell is still modeling. “In the last two years I’ve been more successful as a model than I ever have been,” says Russell. “Eleven years of it later, I’m still trying to find how it’s useful.”

When she started modeling, Russell assumed that the job would phase out after college—she thought it would be something with a definitive expiration date. But while it’s never what she imagined doing for her career, she maintains that it has been useful for giving her access to experiences, and that it has financially helped her get an education and start projects like Interrupt. In the end, she’s embraced it as a part of her identity.

Russell is a model, but she’s also an activist, a writer, an editor, a curator and a publisher. As she moves forward with Interrupt, she wants to help leaders become media-makers and media-makers become leaders. Even when she’s contemplating her own career path, she returns to thinking about the work of her collaborators. “We hope our investment in their editorship could be a sustainable one that will matter beyond the scope of the issue,” she says. “We want it to keep on having an impact—on their careers and on their communities.”


At the end of this issue, there are helpful listings for local New York-based LGBTQ youth groups, programs, hotlines, homeless services and health care assistance. This stack may end up in one of those locations. And for a reader who might not be familiar with terminology, a mini glossary defines everything from “cisgender” to “two spirit.” Photo: Becky Chung


Planet DebianIan Donnelly: The New Deal: ucf Integration

 

Hi Everybody,

A few days ago I posted an entry on this blog called dpkg Woes where I explained that due to a lack of response, we were abandoning our plan to patch dpkg for my Google Summer of Code project, and I explained that we had a new solution. Well today I would like to tell you about that solution. Instead of patching dpkg, which would take a long time and seemed like it would never make it upstream, we have added some new features to ucf which will allow my Google Summer of Code project to be realized.

If you don’t know, ucf, which stands for Update Configuration File, is a popular Debian package whose goal is to “preserve user changes to config files.” It is meant to act as an alternative to considering a configuration file a conffile on systems that use dpkg. Instead, package maintainers can use ucf to handle these files in a conffile-like way. Where conffiles must work on all systems, because they are shipped with the package, configuration files that use ucf can be handled by maintainer scripts and can vary between systems. ucf exists as a script that allows conffile-like handling of non-conffile configuration files and allows much more flexibility than dpkg’s conffile system. In fact, ucf even includes an option to perform a three-way merge on files it manages, it currently only uses diff3 for the task though.

As you can see, ucf has a goal that while different than ours, seems naturally compatible to our goal of automatic conffile merging. Obviously, since ucf is a different tool than dpkg we had to re-think how we were going to integrate with ucf. Luckily, integration with ucf proved to be much more simple than integration with dpkg. All we had to do to integrate with ucf was to add a generic hook to attempt a three way merge using any tool created for the task such as Elektra and kdb merge. Felix submitted a pull request with the exact code almost a week ago and we have talked with Manoj Srivastava, the developer for ucf, and he seemed to really like the idea. The only changes we made are to add an option for a three-way merge command, and if one is present, the merge is attempted using the specified command. It’s all pretty simple really.

Since we decided to include a generic hook for a three-way merge command instead of an Elektra-specific one (which would be less open and would create a dependency on Elektra), we also had to add functionality to Elektra to work with this hook. We ended up writing a new script, called elektra-merge which is now included in our repository. All this script does is act as a liaison between the ucf --three-way-merge-command option and Elektra itself. The script automatically mounts the correct files for theirs and base and dest using the new remount command.

Since the only parameters that are passed to the ucf merge command are the paths of ours, theirs, base and result, we were missing vital information on how to mount these files. Our solution was to create the remount command which mirrors the backend configuration of an existing mountpoint to create a new mountpoint using a new file. So if ours is mounted to system/ours using ini, kdb remount /etc/theirs system/theirs system/ours will mount /etc/theirs to system/theirs using the same backend as ours. Since theirs, base, and result should all have the same backend as ours, we can use remount to mount these files even if all we know is their paths.
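
Putting those pieces together, the liaison logic can be pictured roughly like this. This is only a sketch of the idea described above -- the argument order that ucf passes and the exact kdb merge invocation are my assumptions, not the published elektra-merge script:

  #!/bin/sh
  # Sketch of the liaison idea, not the real elektra-merge. ucf passes the
  # paths of ours, theirs, base and result; `ours` is assumed to already be
  # mounted at system/ours, and remount mirrors its backend onto the rest.
  OURS="$1"; THEIRS="$2"; BASE="$3"; RESULT="$4"
  kdb remount "$THEIRS" system/theirs system/ours
  kdb remount "$BASE"   system/base   system/ours
  kdb remount "$RESULT" system/result system/ours
  # The argument order for kdb merge is assumed here.
  kdb merge system/ours system/theirs system/base system/result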

Now, package maintainers can edit their scripts to utilize this new feature. If they want, package maintainers can specify a command to use to merge files using ucf during package upgrades. I will soon be posting a tutorial about how to integrate this feature into a package and how to use Elektra in your scripts in order to allow for automatic three-way merges during package upgrade. I will post a link to the tutorial here once it is published.
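
To make that concrete, a maintainer script might invoke the hook along these lines -- a minimal sketch with a hypothetical package name, which may well differ from what the tutorial will show:

  #!/bin/sh
  # Hypothetical postinst fragment: hand the shipped default config to ucf
  # and ask it to attempt an automatic three-way merge via elektra-merge.
  set -e
  ucf --three-way --three-way-merge-command elektra-merge \
      /usr/share/mypackage/mypackage.conf /etc/mypackage.conf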

Sincerely,
Ian S. Donnelly

CryptogramSecurity as Interface Guarantees

This is a smart and interesting blog post:

I prefer to think of security as a class of interface guarantee. In particular, security guarantees are a kind of correctness guarantee. At every interface of every kind -- user interface, programming language syntax and semantics, in-process APIs, kernel APIs, RPC and network protocols, ceremonies -- explicit and implicit design guarantees (promises, contracts) are in place, and determine the degree of “security” (however defined) the system can possibly achieve.

Design guarantees might or might not actually hold in the implementation -- software tends to have bugs, after all. Callers and callees can sometimes (but not always) defend themselves against untrustworthy callees and callers (respectively) in various ways that depend on the circumstances and on the nature of caller and callee. In this sense an interface is an attack surface -- but properly constructed, it can also be a defense surface.

[...]

But also it’s an attempt to re-frame security engineering in a way that allows us to imagine more and better solutions to security problems. For example, when you frame your interface as an attack surface, you find yourself ever-so-slightly in a panic mode, and focus on how to make the surface as small as possible. Inevitably, this tends to lead to cat-and-mouseism and poor usability, seeming to reinforce the false dichotomy. If the panic is acute, it can even lead to nonsensical and undefendable interfaces, and a proliferation of false boundaries (as we saw with Windows UAC).

If instead we frame an interface as a defense surface, we are in a mindset that allows us to treat the interface as a shield: built for defense, testable, tested, covering the body; but also light-weight enough to carry and use effectively. It might seem like a semantic game; but in my experience, thinking of a boundary as a place to build a point of strength rather than thinking of it as something that must inevitably fall to attack leads to solutions that in fact withstand attack better while also functioning better for friendly callers.

I also liked the link at the end.

Planet DebianRichard Hartmann: Slave New World

Ubiquitous surveillance is a given these days, and I am not commenting on the crime or the level of stupidity of the murderer, but the fact that the iPhone even logs when you turn your flashlight on and off is scary.

Very, very scary in all its myriad of implications.

But at least it's not as if both your phone and your carrier wouldn't log your every move anyway.

Because Enhanced 911 and its ability to silently tell the authorities your position was not enough :)

Planet DebianDaniel Pocock: WebRTC in CRM/ERP solutions at xTupleCon 2014

In October this year I'll be visiting the US and Canada for some conferences and a wedding. The first event will be xTupleCon 2014 in Norfolk, Virginia. xTuple make the popular open source accounting and CRM suite PostBooks. The event kicks off with a keynote from Apple co-founder Steve Wozniak on the evening of October 14. On October 16 I'll be making a presentation about how JSCommunicator makes it easy to add click-to-call real-time communications (RTC) to any other web-based product without requiring any browser plugins or third party softphones.

Juliana Louback has been busy extending JSCommunicator as part of her Google Summer of Code project. When finished, we hope to quickly roll out the latest version of JSCommunicator to other sites including rtc.debian.org, the WebRTC portal for the Debian Developer community. Juliana has also started working on wrapping JSCommunicator into a module for the new xTuple / PostBooks web-based CRM. Versatility is one of the main goals of the JSCommunicator project and it will be exciting to demonstrate this in action at xTupleCon.

xTupleCon discounts for developers

xTuple has advised that they will offer a discount to other open source developers and contributors who wish to attend any part of their event. For details, please contact xTuple directly through this form. Please note it is getting close to their deadline for registration and discounted hotel bookings.

Potential WebRTC / JavaScript meet-up in Norfolk area

For those who don't or can't attend xTupleCon there has been some informal discussion about a small WebRTC-hacking event at some time on 15 or 16 October. Please email me privately if you may be interested.

TEDHow TEDx is spreading through rural Colombia—thanks largely to one organizer


TEDxGuatavita took place in rural Colombia and focused on the “enormous hidden potential of projects germinating in the countryside.” Photo: Courtesy of Felipe Spath

If you were to map out all 10 TEDx events that Felipe Spath has organized, you’d have a constellation of waypoints zigzagging through Colombia’s countryside—passing farms, mountains, lakes, coffee fields and mines. The tireless Colombian native organized his first TEDx event in 2012 and has been going nonstop ever since, working to bring ideas from TEDx to small rural communities that, without him, may not have heard of TED.

Spath, an anthropologist, grew up in the country’s capital, Bogotá, and studied at the local university, Universidad de los Andes. After working in the restaurant business for several years, he decided to quit his job, leave the city and spend three years visiting rural areas in Asia. When he returned to Colombia, it was with a different mindset—he wanted to see his own country and spend time in some of its less-appreciated areas. He moved to a small village, Guatavita, and started a farm with his brother, a photographer who documents the lives and traditions of rural Colombia.

Spath first heard about TEDx from a friend, Juan Pablo Calderón, the organizer of the first TEDx event in Colombia, TEDxCeiba, held in Bogotá. “I attended TEDxCeiba and was deeply inspired and overwhelmed by the quantity and quality of the networks that were created that day,” says Spath. “This was several years ago, and to this day that first contact with TEDx shines through all we do at events now.”

But Spath didn’t think to hold his own TEDx event until a year later, when at a dinner with Calderón, the veteran organizer encouraged the newbie to take on a different kind of TEDx event, one that would focus on the “enormous hidden potential of projects germinating in the countryside.” Spath was loving life in Guatavita, and decided to take on the challenge. 

“TEDxGuatavita was born that day,” Spath says. For it, he dreamed up the theme “Hay Campo en el Campo,” which translates to, “There is space and opportunity in the countryside.” The stage was decorated with hay bales, farming tools and boots filled with plants, a perfect setting for talks about the innovative projects sprouting in Colombia’s rural areas. Spath’s team even created a “rural billboard” to promote the event with the help of his brother’s travelling cinema project. Using a sheet, rope and logs, they created a pop-up billboard onto which a giant invitation to the event was projected, even at night, asking local farmers to attend.


The TEDxGuatavita stage was especially notable for its design—bales of hay, farm equipment, and plants in boots. Photo: Courtesy of Felipe Spath

TEDxGuatavita was a success because people made it their own, says Spath. A cattle farmer discussed his sustainable cattle breeding model; a member of the National Federation of Coffee Growers of Colombia discussed the intricate fabric of the coffee industry; the head of the oldest cooperative for female artisans in rural Colombia explained how the group was born and how it affects the lives of women now.

After TEDxGuatavita, Spath caught the TEDx bug. He went on to organize a TEDxYouth event in Guatavita (TEDxYouth@Guatavita), a TEDxLive event (TEDxGuatavitaLive), and six more TEDx events in rural communities nearby. “My inspiration to keep holding events comes from the deep belief of how effective TEDx events are for positively affecting our world,” he says. “As an organizer, you regularly meet people who tell you about a project started at a TEDx event, or how their lives, their families, the world was made a better place after those ideas or connections entered their lives.” 

Though he organizes events in towns outside his own, Spath sees himself more as a mentor to local people — a cheerleader to help people get through the difficult but rewarding work of hosting a TEDx event. Many of the events Spath works on—like TEDxAguaLinda in Sesquile, Colombia; TEDxZapatero in Cartagena de Indias; TEDxCazuca in Soacha; and TEDxLaCalera in La Calera—are TEDx in a Box events. The TEDx program created these ready-made boxes that contain everything needed to host an event—a projector, a collection of subtitled TED Talks, a sound system, microphones, a how-to guide—to support events in underserved communities. These boxes make it much easier to host an event in areas where technology and sponsorship are hard to come by.

“[My team and I] work with different communities, but they organize the events. The event itself happens because of the work and creativity of local folks; we are there for resources and guidance,” Spath says. “Often in rural or peripheral communities, outsiders arrive to inspire, or teach or talk about ideas which will ‘change their lives’ — but rarely do. It is an amazingly powerful moment when the person up onstage is not an outsider, but your grandmother, or your neighbor. This greatly empowers communities, and has the potential for so much more.”

Speakers at these TEDx events have included a documentary filmmaker who is filming the lives of Colombian homeschooling families; a journalist working to develop community centers with accessible technology for people with disabilities; and the founder of a rural, open seed library, which allows anyone in the area to borrow or contribute seeds to diversify their crop production.

“Each time we are so surprised by the stories told, and by the diverse approaches for solving problems and turning them into opportunities,” Spath says. “TEDx is an amazing channel for generating social transformation. That second when the host starts talking and the show is on, you understand how all of what you have worked for is the beginning of great things to happen.”


The very colorful stage of TEDxAguaLinda, one of the 10 events that Felipe Spath has helped organize in Colombia. Photo: Courtesy of Felipe Spath


Planet DebianRiku Voipio: Booting Linaro ARMv8 OE images with Qemu

A quick update - Linaro ARMv8 OpenEmbedded images work just fine with qemu 2.1 as well:

$ wget http://releases.linaro.org/14.07/openembedded/aarch64/Image
$ wget http://releases.linaro.org/14.07/openembedded/aarch64/vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ gunzip vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img.gz
$ qemu-system-aarch64 -m 1024 -cpu cortex-a57 -nographic -machine virt \
-kernel Image -append 'root=/dev/vda2 rw rootwait mem=1024M console=ttyAMA0,38400n8' \
-drive if=none,id=image,file=vexpress64-openembedded_lamp-armv8-gcc-4.9_20140727-682.img \
-netdev user,id=user0 -device virtio-net-device,netdev=user0 -device virtio-blk-device,drive=image
[ 0.000000] Linux version 3.16.0-1-linaro-vexpress64 (buildslave@x86-64-07) (gcc version 4.8.3 20140401 (prerelease) (crosstool-NG linaro-1.13.1-4.8-2014.04 - Linaro GCC 4.8-2014.04) ) #1ubuntu1~ci+140726114341 SMP PREEMPT Sat Jul 26 11:44:27 UTC 20
[ 0.000000] CPU: AArch64 Processor [411fd070] revision 0
...
root@genericarmv8:~#
Quick benchmarking with age-old ByteMark nbench:

Index      Qemu     Foundation   Host
Memory     4.294    0.712        44.534
Integer    6.270    0.686        41.983
Float      1.463    1.065        59.528

Baseline (LINUX) : AMD K6/233*
Qemu is up to 8x faster than the Foundation model on integer operations, but only 50% faster on floating point. Meanwhile, the host PC is 7-40x slower emulating ARMv8 than executing native instructions.

Sociological ImagesWhy Can’t Conservatives See the Benefits of Affordable Child Care?

Ross Douthat is puzzled. He seems to sense that a liberal policy might actually help, but his high conservative principles and morality keep him from taking that step. It’s a political version of Freudian repression – the conservative superego forcing tempting ideas to remain out of awareness.

In his column, Douthat recounts several anecdotes of criminal charges brought against parents whose children were unsupervised for short periods of time.  The best-known of these criminals of late is Debra Harrell, the mother in South Carolina who let her 9-year-old daughter go to a nearby playground while she (Debra) worked at her job at McDonald’s. The details of the case make it clear that this was not a bad mom – not cruel, not negligent. The playground was the best child care she could afford.

One solution should be obvious – affordable child care.  But the U.S. is rather stingy when it comes to kids. Other countries are way ahead of us on public spending for children.


Conservatives will argue that child care should be private not public and that local charities and churches do a better job than do state-run programs. Maybe so. The trouble is that those private programs are not accessible to everyone. If Debra Harrell had been in France or Denmark, the problem would never have arisen.

The other conservative U.S. policy that put Debra Harrell in the arms of the law is “welfare reform.”  As Douthat explains, in the U.S., thanks to changes in the welfare system much lauded by conservatives, the U.S. now has “a welfare system whose work requirements can put a single mother behind a fast-food counter while her kid is out of school.”

That’s the part that perplexes Douthat. He thinks that it’s a good thing for the government to force poor women to work, but it’s a bad thing for those women not to have the time to be good mothers. The two obvious solutions – affordable day care or support for women who stay home to take care of kids – conflict with the cherished conservative ideas: government bad, work good.

This last issue presents a distinctive challenge to conservatives like me, who believe such work requirements are essential. If we want women like Debra Harrell to take jobs instead of welfare, we have to also find a way to defend their liberty as parents, instead of expecting them to hover like helicopters and then literally arresting them if they don’t.

As he says, it’s a distinctive challenge, but only if you cling so tightly to conservative principles that you reject solutions – solutions that seem to be working quite well in other countries – just because they involve the government or allow poor parents not to work.

Conservatives love to decry “the nanny state.”  That means things like government efforts to improve kids’ health and nutrition. (Right wingers make fun of the first lady for trying to get kids to eat sensibly and get some exercise.)

A nanny is a person who is paid to look after someone else’s kids. Well-off people hire them privately (though they still prefer to call them au pairs). But for the childcare problems of low-income parents, what we need is more of a nanny state, or more accurately, state-paid nannies.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at http://thesocietypages.org/socimages)

Sam VargheseRather than sell his budget, Tony is busy grandstanding to boost his poll numbers

WOULD Tony Abbott be indulging in all the grandstanding he is doing abroad if his government had brought down a budget that was, in the main, acceptable to the people and had cleared parliament with a few changes here and there?

One really has to wonder.

After the budget in May, the popularity of the prime minister dropped like a stone. Little wonder that this happened, given that the budget had several measures that would hit the poorer classes. All of it was done in the name of sorting out a budget crisis which the government insisted existed. Financial experts are still trying to find the reason for the use of the word “crisis”.

Three months later, the budget is still hanging around the government’s neck like an albatross. But Abbott’s poll numbers are up as he has grasped every possible chance to boost them.

The poll standings of any leader tend to rise during periods when the country is under threat. So Abbott has manufactured one; the Islamic militancy in Iraq and the emergence of Australian citizens playing a role in it has given him a handy prop.

He’s also announced a data retention scheme – though what will be retained is unclear. Never mind, it adds to security, says Abbott. The presence of the US Secretary of State and Defence Secretary this week, for the annual bilateral ministerial talks, hasn’t hurt.

But before that, the downing of a Malaysian passenger plane, killing 298 people including 38 Australians, came as a godsend to Abbott. He fronted up to indulge in some chest-thumping and fuming against Russia, whom he accused of being responsible. The missile that shot down the plane came from an area in Ukraine which wants to revert to Russian control, hence Abbott’s claims.

Abbott made his foreign minister, Julie Bishop, a show-pony of the highest order, take the lead in pushing a UN Security Council resolution condemning Russia. And as soon as he could, he imposed sanctions on Moscow. Never mind that Russia’s retaliation, which cuts off something like $500 million of imports from Australia, is going to hurt a lot of small farmers.

Now Abbott has dashed off to the Netherlands, to express gratitude to the Dutch for taking the lead in getting the bodies of the plane crash victims back for examination.

Tony is also threatening to send troops to Iraq – for humanitarian reasons, he says, because the Islamic militants there are threatening a tribe called the Yazidis who live in the north. The fact that the US, which has begun bombing the militants to protect the Yazidis, has ruled out sending ground troops doesn’t bother Tony one bit.

There have been plenty of false leads thrown here and there but with the Murdoch media firmly in his pocket, Tony is going places.

And the budget? Oh, don’t bother, that’s Joe Hockey’s baby. Tony has bigger fish to fry.

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Beginners August Meeting : MythTV

Aug 16 2014 12:30
Aug 16 2014 16:30
Location: RMIT Building 91, 110 Victoria Street, Carlton South

MythTV is a free and open source home entertainment application with a simplified "10-foot user interface" designed for the living-room TV. It turns a computer with the necessary hardware into a network streaming digital video recorder, a digital multimedia home entertainment system, or a home theatre personal computer. It runs on various operating systems, primarily Linux, Mac OS X and FreeBSD.

This introduction to MythTV, with live examples, will be presented by LUV Committee member Deb Henry.

LUV would like to acknowledge Red Hat for their help in obtaining the Buzzard Lecture Theatre venue and VPAC for hosting, and BENK Open Systems for their financial support of the Beginners Workshops.

Linux Users of Victoria Inc., is an incorporated association, registration number A0040056C.


CryptogramAutomatic Scanning for Highly Stressed Individuals

This borders on ridiculous:

Chinese scientists are developing a mini-camera to scan crowds for highly stressed individuals, offering law-enforcement officers a potential tool to spot would-be suicide bombers.

[...]

"They all looked and behaved as ordinary people but their level of mental stress must have been extremely high before they launched their attacks. Our technology can detect such people, so law enforcement officers can take precautions and prevent these tragedies," Chen said.

Officers looking through the device at a crowd would see a mental "stress bar" above each person's head, and the suspects highlighted with a red face.

The researchers said they were able to use the technology to distinguish high blood-oxygen levels produced by stress from those produced by mere physical exertion.

I'm not optimistic about this technology.

Worse Than FailureCodeSOD: Day After Übermorgen

While working on his company's reservation manager, Stephaan stumbled upon some PHP code that calculated the date values for tomorrow ($morgen) and the day after tomorrow ($ubermorgen). Something about the code struck him as ... wrong.

```
// FORMAT DATE
// detect this day and this month (without 0)
$today = date("j") ;
$thismonth = date("n") ;
$manyday = date("t") ;

// define morgen and ubermorgen dates 
if ($manyday == 30 && $today == 30) // for 30. from 30 days's month 
{ 
    $morgen = 1 ; 
    $morgenmonth = $thismonth+1 ; 
    $ubermorgen = 2 ; 
    $ubermorgenmonth = $thismonth+1 ; 
} 
elseif ($manyday == 31 && $today == 30) // for 30. from 31 days's month 
{ 
    $morgen = 31 ; 
    $morgenmonth = $thismonth ; 
    $ubermorgen = 1 ; 
    $ubermorgenmonth = $thismonth+1 ; 
} 
elseif ($manyday == 29 && $today == 28) // for 28 february 
{ 
    $morgen = 29 ; 
    $morgenmonth = $thismonth ; 
    $ubermorgen = 1 ; 
    $ubermorgenmonth = $thismonth+1 ; 
} 
elseif ($manyday == 29 && $today == 29) // for 29 february 
{ 
    $morgen = 1 ; 
    $morgenmonth = $thismonth ; 
    $ubermorgen = 2 ; 
    $ubermorgenmonth = $thismonth+1 ; 
} 
elseif ($today == 30 && $thismonth == 12) // for 30 december 
{ 
    $morgen = 31 ; 
    $morgenmonth = $thismonth ; 
    $ubermorgen = 1 ; 
    $ubermorgenmonth = 1 ; 
} 
elseif ($today == 31 && $thismonth == 12) // for 31 december 
{ 
    $morgen = 1 ; 
    $morgenmonth = 1 ; 
    $ubermorgen = 2 ; 
    $ubermorgenmonth = 1 ; 
} 
elseif ($today == 31) // for 31. from 31 days's month 
{ 
    $morgen = 1 ; 
    $morgenmonth = $thismonth+1 ; 
    $ubermorgen = 2 ; 
    $ubermorgenmonth = $thismonth+1 ; 
} 
else // normal days 
{ 
    $morgen = $today+1 ; 
    $morgenmonth = $thismonth ; 
    $ubermorgen = $today+2 ; 
    $ubermorgenmonth = $thismonth ; 
} 

```

In an experimental mood, Stephaan decided to test the code. It worked for any date given on a leap year, but broke on February 27th of any other year. Typically, errors in date-calculating code happen on leap years; in this instance, the code only worked perfectly during leap years.

Stephaan could think of several solutions. He could calculate $morgen as ($today + 1) % $manyday. He could then create a function, morgen(date), and write ubermorgen(date) as the recursive morgen(morgen(date)). However, PHP already has built-in functions for calculating dates, so he could simply write $übermorgen = strtotime('+ 2 days').
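A minimal sketch of that built-in approach (assuming the caller wants the same four day and month variables the original populates) might look like this:

```
// Sketch: let PHP's own date arithmetic handle month lengths and leap years.
$morgenTs     = strtotime('+1 day');
$ubermorgenTs = strtotime('+2 days');

$morgen          = date('j', $morgenTs);   // day of month, no leading zero
$morgenmonth     = date('n', $morgenTs);   // month, no leading zero
$ubermorgen      = date('j', $ubermorgenTs);
$ubermorgenmonth = date('n', $ubermorgenTs);
```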

But Stephaan did none of these things, as the code wasn't actually being referenced anywhere else in the application. $morgen and $ubermorgen were quietly put to pasture, and no one was the wiser.


Planet Linux AustraliaMichael Still: More turning

Some more pens, and then I went back to bowls for a bit.

The attraction of pens is that I can churn out a pen in about 30 minutes, whereas a bowl can take twice that. Therefore when I have a small chance to play in the garage I'll do a pen, whereas when I have more time I might do a bowl.


Tags for this post: wood turning 20140718-woodturning photo


Planet Linux AustraliaMichael Still: I've been making pens

So, I've been making pens for the last few months. Here are some examples. The first two are my first "commission", in that my brother-in-law asked for some pens for a friend's farewell gift.


Tags for this post: wood turning 20140628-woodturning photo


Krebs on SecurityTenn. Firm Sues Bank Over $327K Cyberheist

An industrial maintenance and construction firm in Tennessee that was hit by a $327,000 cyberheist is suing its financial institution to recover the stolen funds, charging the bank with negligence and breach of contract. Court-watchers say the lawsuit — if it proceeds to trial — could make it easier and cheaper for cyberheist victims to recover losses.

In May 2012, Kingsport, Tenn.-based Tennessee Electric Company Inc. (now TEC Industrial) was the target of a corporate account takeover that saw cyber thieves use a network of more than four dozen money mules to siphon $327,804 out of the company’s accounts at TriSummit Bank.

TriSummit was able to claw back roughly $135,000 of those unauthorized transfers, leaving Tennessee Electric with a loss of $192,656. Earlier this month, the company sued TriSummit in state court, alleging negligence, breach of contract, gross negligence and fraudulent concealment.

Both companies declined to comment for this story. But as Tennessee Electric’s complaint (PDF) notes (albeit by misspelling my name), I called Tennessee Electric on May 10, 2012 to alert the company about a possible cyberheist targeting its accounts. I’d contacted the company after speaking with a money mule who’d acknowledged receiving thousands of dollars pulled from the firm’s accounts at TriSummit.

According to the complaint, the attackers first struck on May 8, after Tennessee Electric’s controller tried, unsuccessfully, to log into the bank’s site and upload that week’s payroll batch (typically from $200,000 to $240,000 per week). When the controller called TriSummit to inquire about the site problems, the bank said the site was probably undergoing maintenance and that the controller was welcome to visit the local bank branch and upload the file there. The controller did just that, uploading four payroll batches worth $202,664.47.

[SIDE NOTE: When I spoke with Tennessee Electric's controller back in 2012, the controller for the company told me she was asked for and supplied the output of a one-time token upon login. This would make sense given the controller's apparent problems accessing the bank's Web site. Cyber thieves involved in these heists typically use password-stealing malware to control what the victim sees in his or her browser; when a victim logs in at a bank that requires a one-time token, the malware will intercept that token and then redirect the victim's browser to an error page or a "down for maintenance" message -- all the while allowing the thieves to use the one-time token and the victim's credentials to log in as the legitimate user.]

On May 9, Tennessee Electric alleges, TriSummit Bank called to confirm the $202,664.47 payroll batch — as per an agreement the bank and the utility had which called for the bank to verbally verify all payment orders by phone. But according to Tennessee Electric, the bank for some reason had already approved a payroll draft of $327,804 to be sent to 55 different accounts across the United States — even though the bank allegedly never called to get verification of that payment order.

Tennessee Electric alleges that the bank only called to seek approval for the fraudulent batch on May 10, more than a day after having approved it and after I contacted Tennessee Electric to let them know they’d been robbed by the Russian cyber mob.

ANALYSIS

This lawsuit, if it heads to trial, could help set a more certain and even standard for figuring out who’s at fault when businesses are hit by cyberheists (for better or worse, most such legal challenges are overwhelmingly weighted toward banks and quietly settled for a fraction of the loss).

Consumers who bank online are protected by Regulation E, which dramatically limits the liability for consumers who lose money from unauthorized account activity online (provided the victim notifies their financial institution of the fraudulent activity within 60 days of receiving a disputed account statement).

Businesses, however, do not enjoy such protections. States across the country have adopted the Uniform Commercial Code (UCC), which holds that a payment order received by the [bank] is “effective as the order of the customer, whether or not authorized, if the security procedure is a commercially reasonable method of providing security against unauthorized payment orders, and the bank proves that it accepted the payment order in good faith and in compliance with the security procedure and any written agreement or instruction of the customer restricting acceptance of payment orders issued in the name of the customer.”

Under state interpretations of the UCC, the most that a business hit with a cyberheist can hope to recover is the amount that was stolen. That means that it’s generally not in the business’s best interests to sue their bank unless the amount of theft was quite high, because the litigation fees required to win a court battle can quickly equal or surpass the amount stolen.

Recent cyberheist cases in other states have brought mixed (if modest) results for the plaintiffs. But Charisee Castagnoli, an adjunct professor of law at the John Marshall Law School, said those decisions may end up helping Tennessee Electric’s case because they hold open the possibility that courts could hear one of these cases using something other than a strict interpretation of the UCC or contract law – such as fraud or negligence claims. And that could lead to courts awarding punitive damages, which can often amount to several times the plaintiff’s actual losses.

“We’re still seeing lawyers who are hunting for their best argument in terms of financial recovery, but what they’re really searching for is a way to get this out of the UCC and out of contract law, because under those you only get actual damages,” Castagnoli said. “And there’s really no way under the UCC and contract law theory to apply an economic recovery that will be an incentive for banks to change their behavior.”

Most recently, for example, Missouri-based Choice Escrow & Land Title unsuccessfully sued its bank to recover $440,000 stolen in a 2010 cyberheist. Choice’s attorneys failed to convince the first court that the bank’s online security procedures weren’t commercially reasonable. An appeals court confirmed that ruling, and went a step further by affirming that the bank could recover its attorney’s fees from Choice Escrow.

In the case of Patco Construction, a company in Maine that was hit by a $588,000 cyberheist in 2009, a lower court ruled the security at Patco’s bank was commercially reasonable. But an appeals court in Boston called the bank’s security systems “commercially unreasonable,” reversing the lower court.  Castagnoli said the appeals court in the Patco case also left open what the victim’s obligations and responsibilities are in the event that the bank’s security measures fail.

“Even though it looks like from a victim business’s perspective that the Patco case is good and the Choice decision bad, there may be enough good language in both of those cases [to help] Tennessee Electric’s case,” Castagnoli said. “You’d think with a harmonized statute [like the UCC] which exists across all 50 states that we’d have some clarity in terms of plaintiff rights of recovery in these cases, but we really don’t.”

Do you run your own business and bank online but aren’t willing to place all of your trust in your bank’s online security? Consider adopting some of the advice I laid out in Online Banking Best Practices for Businesses and Banking on a Live CD.

Planet Linux AustraliaMichael Still: Thoughts from the PTL

I sent this through to the openstack-dev mailing list (you can see the thread here), but I want to put it here as well for people who don't actively follow the mailing list.

First off, thanks for electing me as the Nova PTL for Juno. I find the
outcome of the election both flattering and daunting. I'd like to
thank Dan and John for running as PTL candidates as well -- I strongly
believe that a solid democratic process is part of what makes
OpenStack so successful, and that isn't possible without people being
willing to stand up during the election cycle.

I'm hoping to send out regular emails to this list with my thoughts
about our current position in the release process. It's early in the
cycle, so the ideas here aren't fully formed yet -- however I'd rather
get feedback early and often, in case I'm off on the wrong path. What
am I thinking about at the moment? The following things:

* a mid cycle meetup. I think the Icehouse meetup was a great success,
and I'd like to see us do this again in Juno. I'd also like to get the
location and venue nailed down as early as possible, so that people
who have complex travel approval processes have a chance to get travel
sorted out. I think it's pretty much a foregone conclusion this meetup
will be somewhere in the continental US. If you're interested in
hosting a meetup in approximately August, please mail me privately so
we can chat.

* specs review. The new blueprint process is a work of genius, and I
think it's already working better than what we've had in previous
releases. However, there are a lot of blueprints there in review, and
we need to focus on making sure these get looked at sooner rather than
later. I'd especially like to encourage operators to take a look at
blueprints relevant to their interests. Phil Day from HP has been
doing a really good job at this, and I'd like to see more of it.

* I promised to look at mentoring newcomers. The first step there is
working out how to identify what newcomers to mentor, and who mentors
them. There's not a lot of point in mentoring someone who writes a
single drive-by patch, so working out who to invest in isn't as
obvious as it might seem at first. Discussing this process for
identifying mentoring targets is a good candidate for a summit
session, so have a ponder. However, if you have ideas let's get
talking about them now instead of waiting for the summit.

* summit session proposals. The deadline for proposing summit sessions
for Nova is April 20, which means we only have a little under a week
to get that done. So, if you're sitting on a summit session proposal,
now is the time to get it in.

* business as usual. We also need to find the time for bug fix code
review, blueprint implementation code review, bug triage and so forth.
Personally, I'm going to focus on bug fix code review more than I have
in the past. I'd like to see cores spend 50% of their code review time
reviewing bug fixes, to make the Juno release as solid as possible.
However, I don't intend to enforce that, it's just me asking real nice.

Thanks for taking the time to read this email, and please do let me
know if you think this sort of communication is useful.


Tags for this post: openstack juno ptl nova
Related posts: Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Havana Nova PTL elections; Expectations of core reviewers; Juno nova mid-cycle meetup summary: ironic; Merged in Havana: fixed ip listing for single hosts


Planet Linux AustraliaMichael Still: Juno Nova PTL Candidacy

This is a repost of an email to the openstack-dev list, which is mostly here for historical reasons.

Hi.

I would like to run for the OpenStack Compute PTL position as well.

I have been an active nova developer since late 2011, and have been a
core reviewer for quite a while. I am currently serving on the
Technical Committee, where I have recently been spending my time
liaising with the board about how to define what software should be
able to use the OpenStack trade mark. I've also served on the
vulnerability management team, and as nova bug czar in the past.

I have extensive experience running Open Source community groups,
having served on the TC, been the Director for linux.conf.au 2013, as
well as serving on the boards of various community groups over the
years.

In Icehouse I hired a team of nine software engineers who are all
working 100% on OpenStack at Rackspace Australia, developed and
deployed the turbo hipster third party CI system along with Joshua
Hesketh, as well as writing nova code. I recognize that if I am
successful I will need to rearrange my work responsibilities, and my
management is supportive of that.

The future
--------------

To be honest, I've thought for a while that the PTL role in OpenStack
is poorly named. Specifically, it's the T that bothers me. Sure, we
need strong technical direction for our programs, but putting it in
the title raises technical direction above the other aspects of the
job. Compute at the moment is in an interesting position -- we're
actually pretty good on technical direction and we're doing
interesting things. What we're not doing well on is the social aspects
of the PTL role.

When I first started hacking on nova I came from an operations
background where I hadn't written open source code in quite a while. I
feel like I'm reasonably smart, but nova was certainly the largest
python project I'd ever seen. I submitted my first patch, and it was
rejected -- as it should have been. However, Vishy then took the time
to sit down with me and chat about what needed to change, and how to
improve the patch. That's really why I'm still involved with
OpenStack, Vishy took an interest and was always happy to chat. I'm
told by others that they have had similar experiences.

I think that's what compute is lacking at the moment. For the last few
cycles we've focused on the technical, and now the social aspects are
our biggest problem. I think this is a pendulum, and perhaps in a
release or two we'll swing back to needing to re-emphasise
technical aspects, but for now we're doing poorly on social things.
Some examples:

- we're not keeping up with code reviews because we're reviewing the
wrong things. We have a high volume of patches which are unlikely to
ever land, but we just reject them. So far in the Icehouse cycle we've
seen 2,334 patchsets proposed, of which we approved 1,233. Along the
way, we needed to review 11,747 revisions. We don't spend enough time
working with the proposers to improve the quality of their code so
that it will land. Specifically, whilst review comments in gerrit are
helpful, we need to identify up and coming contributors and help them
build a relationship with a mentor outside gerrit. We can reduce the
number of reviews we need to do by improving the quality of initial
proposals.

- we're not keeping up with bug triage, or worse actually closing
bugs. I think part of this is that people want to land their features,
but part of it is also that closing bugs is super frustrating at the
moment. It can take hours (or days) to replicate and then diagnose a
bug. You propose a fix, and then it takes weeks to get reviewed. I'd
like to see us tweak the code review process to prioritise bug fixes
over new features for the Juno cycle. We should still land features,
but we should obsessively track review latency for bug fixes. Compute
fails if we're not producing reliable production grade code.

- I'd like to see us focus more on consensus building. We're a team
after all, and when we argue about solely the technical aspects of a
problem we ignore the fact that we're teaching the people involved a
behaviour that will continue on. Ultimately if we're not a welcoming
project that people want to code on, we'll run out of developers. I
personally want to be working on compute in five years, and I want the
compute of the future to be a vibrant, friendly, supportive place. We
get there by modelling the behaviour we want to see in the future.

So, some specific actions I think we should take:

- when we reject a review from a relatively new contributor, we should
try and pair them up with a more experienced developer to get some
coaching. That experienced dev should take point on code reviews for
the new person so that they receive low-latency feedback as they
learn. Once the experienced dev is ok with a review, nova-core can
pile on to actually get the code approved. This will reduce the
workload for nova-core (we're only reviewing things which are of a
known good standard), while improving the experience for new
contributors.

- we should obsessively track review performance for bug fixes, and
prioritise them where possible. Let's not ignore features, but let's
agree that each core should spend at least 50% of their review time
reviewing bug fixes.

- we should work on consensus building, and tracking the progress of
large blueprints. We should not wait until the end of the cycle to
re-assess the v3 API and discover we have concerns. We should be
talking about progress in the weekly meetings and making sure we're
all on the same page. Let's reduce the level of surprise. This also
flows into being clearer about the types of patches we don't want to
see proposed -- for example, if we think that patches that only change
whitespace are a bad idea, then let's document that somewhere so
people know before they put a lot of effort in.

Thanks for taking the time to read this email!


Tags for this post: openstack juno ptl nova election
Related posts: Havana Nova PTL elections; Review priorities as we approach juno-3; Thoughts from the PTL; Expectations of core reviewers; Juno nova mid-cycle meetup summary: ironic; Merged in Havana: fixed ip listing for single hosts


Planet Linux AustraliaMichael Still: Expectations of core reviewers

One of the action items from the nova midcycle was that I was asked to make nova's expectations of core reviewers more clear. This blog post is an attempt at that.

Nova expects a minimum level of sustained code reviews from cores. In the past this has been generally held to be in the order of two code reviews a day, which is a pretty low bar compared to the review workload of many cores. I feel that existing cores understand this requirement well, and I am mostly stating it here for completeness.

Additionally, there are increasing levels of concern that cores need to be on the same page about the criteria we hold code to, as well as the overall direction of nova. While the weekly meetings help here, it was agreed that summit attendance is really important to cores. It's the way we decide where we're going for the next cycle, as well as a chance to make sure that people are all pulling in the same direction and trust each other.

There is also a strong preference for midcycle meetup attendance, although I understand that can sometimes be hard to arrange. My stance is that I'd like cores to try to attend, but understand that sometimes people will miss one. In response to the increasing importance of midcycles over time, I commit to trying to get the dates for these events announced further in advance.

Given that we consider these physical events so important, I'd like people to let me know if they have travel funding issues. I can then approach the Foundation about funding travel if that is required.

Tags for this post: openstack juno ptl nova
Related posts: Juno Nova PTL Candidacy; Review priorities as we approach juno-3; Thoughts from the PTL; Havana Nova PTL elections; Juno nova mid-cycle meetup summary: nova-network to Neutron migration; Juno nova mid-cycle meetup summary: scheduler


Planet Linux AustraliaMichael Still: Juno TC Candidacy

Another email archived for historical reasons.

I'd also like to announce my TC candidacy. I am currently a member of
the TC, and I would like to continue to serve.

I first started hacking on Nova during the Diablo release, with my
first code contributions appearing in the Essex release. Since then
I've hacked mostly on Nova and Oslo, although I have also contributed
to many other projects as my travels have required. For example, I've
tried hard to keep various projects in sync with their imports of
parts of Oslo I maintain.

I work full time on OpenStack at Rackspace, leading a team of
developers who work solely on upstream open source OpenStack. I am a
Nova and Oslo core reviewer and the Nova PTL.

I have been serving on the TC for the last year, and in the Icehouse
release started acting as the liaison for the board "defcore"
committee along with Anne Gentle. "defcore" is the board effort to
define what parts of OpenStack we require vendors to ship in order to
be able to use the OpenStack trade mark, so it involves both the board
and the TC. That liaison relationship is very new and only starting to
be effective now, so I'd like to keep working on that if you're
willing to allow it.


Tags for this post: openstack juno tc election
Related posts: Juno Nova PTL Candidacy; Havana Nova PTL elections


,

Geek FeminismQuick hit: Maryam Mirzakhani wins the Fields Medal

(Image of Maryam Mirzakhani. CC-BY-SA 3.0 image by Ehsan Tabari.)

The Fields Medal is the highest award in the field of mathematics. Some people have called it the math equivalent of the Nobel Prize, though it’s not a perfect analogy since Fields medalists must be younger than 40 years old. Fifty people received the Fields Medal between 1936 and 2010 (the award is given every four years to between two and four mathematicians). All of them were men.

Today, Stanford math professor Maryam Mirzakhani (born in 1977) became the first woman, and the first person of Iranian descent, to win the Fields Medal. (It was also awarded to Artur Avila, Manjul Bhargava, and Martin Hairer.) Her work lies in the intersection of geometry, topology, and dynamical systems.

You can read more about Dr. Mirzakhani in a profile of her by Erica Klarreich:

Mirzakhani likes to describe herself as slow. Unlike some mathematicians who solve problems with quicksilver brilliance, she gravitates toward deep problems that she can chew on for years. “Months or years later, you see very different aspects” of a problem, she said. There are problems she has been thinking about for more than a decade. “And still there’s not much I can do about them,” she said.

Mirzakhani doesn’t feel intimidated by mathematicians who knock down one problem after another. “I don’t get easily disappointed,” she said. “I’m quite confident, in some sense.”

Planet DebianIan Donnelly: How-To: kdb import

Hi everybody,

Today I wanted to go over what I think is a very useful command in the kdb tool, kdb import. As you know, the kdb tool allows users to interact with the Elektra Key Database (KDB) via the command line. Today I would like to explain the import function of kdb.

The command to use kdb import is:

kdb import [options] destination [format]

In this command, destination is the point in the Key Database below which the imported Keys are stored. For instance, kdb import system/imported would store all the imported keys below system/imported. This command reads Keys from stdin and stores them in the KDB. Typically, this command is used with a pipe to read in the Keys from a file.

The format argument you see above can be a very powerful option to use with kdb import. The format argument allows a user to specify which plug-in is used to import the Keys into the Key Database. The user can specify any storage plug-in to serve as the format for the Keys to be imported. For instance, if a user wanted to import an /etc/hosts file into KDB without mounting it, they could use the command cat /etc/hosts | kdb import system/hosts hosts. This command would essentially copy the current hosts file into KDB, much like mounting it. Unlike mounting it, however, changes to the Keys would not be reflected in the hosts file and vice versa.

If no format is specified, the format dump will be used instead. The dump format is the standard way of expressing Keys and all their relevant information. This format is intended to be used only within Elektra. The dump format is a good means of backing up Keys from the Key Database for later use with Elektra, such as reimporting them. As of this writing, dump is the only way to fully preserve all parts of the KeySet.

It is very important to note that dump does not rename keys, by design. If a user exports a KeySet with dump using a command such as kdb export system/backup > backup.ecf, they can only import that KeySet back into system/backup using a command like cat backup.ecf | kdb import system/backup.

The kdb import command only takes one special option:

-s --strategy

which is used to specify a strategy to use if Keys already exist in the specified destination.
The current strategies are:

preserve: any keys already in the destination will not be overwritten
overwrite: any keys already in the destination will be overwritten if a new key has the same name
cut: all keys already in the destination will be removed, then new keys will be imported

If no strategy is specified, the command defaults to the preserve strategy so as not to be destructive to any previous keys.
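For instance, assuming a backup.ecf file exported earlier (as in the example below), a restore that lets the incoming keys win any conflicts could be written as:

cat backup.ecf | kdb import -s overwrite system/restore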

An example of using the kdb import command is as follows:

cat backup.ecf | kdb import system/restore

This command would import all keys stored in the file backup.ecf into the Key Database under system/restore.

In this example, backup.ecf was exported from the KeySet using the dump format by using the command:
kdb export system/backup > backup.ecf

backup.ecf contains all the information about the keys below system/backup:

$ cat backup.ecf
kdbOpen 1
ksNew 3
keyNew 19 0
system/backup/key1
keyMeta 7 1
binary
keyEnd
keyNew 19 0
system/backup/key2
keyMeta 7 1
binary
keyEnd
keyNew 19 0
system/backup/key3
keyMeta 7 1
binary
keyEnd
ksEnd

Before the import command, system/backup does not exist and no keys are contained there.
After the import command, running the command kdb ls system/backup prints:

system/backup/key1
system/backup/key2
system/backup/key3

As you can see, the kdb import command is a very useful tool included as part of Elektra. I also wrote a tutorial on the kdb export command. Please go read that as well because those two commands go hand in hand and allow some very powerful usage of Elektra.

Sincerely,
Ian S. Donnelly

Planet DebianIan Donnelly: How-To: kdb export

Hi everybody,

I recently posted a tutorial on the kdb import command. Well, I also wanted to go over its sibling function, kdb export. These two commands work very similarly, but there are some differences.

First of all, the command to use kdb export is:

kdb export [options] source [format]

In this command, source is the root key below which Keys are exported. For instance, kdb export system/export would export all the keys below system/export. Additionally, this command exports keys under the system/elektra directory by default. It does this so that information about the keys stored under this directory will be included if the Keys are later imported into an Elektra Key Database. This command writes the exported Keys to stdout. Typically, the export command is used with redirection to write the Keys to a file.

As we discussed already, the format argument can be a very powerful option to use with kdb export. Just like with kdb import the format argument allows a user to specify which plug-in is used to export the Keys from the Key Database. The user can specify any storage plug-in to serve as the format for the exported Keys. For instance, if a user mounted their hosts file to system/hosts using kdb mount /etc/hosts system/hosts hosts they would be able to export these Keys using the hosts format by using the command kdb export system/hosts hosts > hosts.ecf. This command would essentially create a backup of their current /etc/hosts file in a valid format for /etc/hosts.

If no format is specified, the format dump will be used instead. The dump format is the standard way of expressing Keys and all their relevant information. This format is intended to be used only within Elektra. The dump format is a good means of backing up Keys from the Key Database for later use with Elektra, such as reimporting them. As of this writing, dump is the only way to fully preserve all parts of the KeySet.

The kdb export command takes one special option, but it’s different from the one for kdb import; that option is:

-E --without-elektra which omits the system/elektra directory of keys

An example of using the kdb export command is as follows:

kdb export system/backup > backup.ecf

This command would export all keys stored under system/backup, along with relevant Keys in system/elektra, into a file called backup.ecf.
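If the Elektra-internal keys are not wanted in the output, the -E option described above can be added; a small sketch (the output file name is purely illustrative):

kdb export -E system/backup > backup-without-elektra.ecf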

As you can see, the kdb export command is a very useful tool, just like its sibling, kdb import. If you haven’t yet, please go read the tutorial I wrote for kdb import because these two commands are best used together and can enable some really great features of Elektra.

Sincerely,
Ian S. Donnelly

Krebs on SecurityAdobe, Microsoft Push Critical Security Fixes

Adobe and Microsoft today each independently released security updates to fix critical problems with their products. Adobe issued patches for Adobe Reader/Acrobat, Flash Player and AIR, while Microsoft pushed nine security updates to address at least 37 security holes in Windows and related software.


Microsoft’s recommended patch deployment priority for enterprises, Aug. 2014.

Two of the nine update bundles Microsoft released today earned the company’s most-dire “critical” label, meaning the vulnerabilities fixed in the updates can be exploited by bad guys or malware without any help from users. A critical update for Internet Explorer accounts for the bulk of flaws addressed this month, including one that was actively being exploited by attackers prior to today, and another that was already publicly disclosed, according to Microsoft.

Other Microsoft products fixed in today’s release include Windows Media Center, OneNote, SQL Server and SharePoint. Check out the Technet roundup here and the Microsoft Bulletin Summary Web page at this link.

There are a couple other important changes from Microsoft this month: The company announced that it will soon begin blocking out-of-date ActiveX controls for Internet Explorer users, and that it will support only the most recent versions of the .NET Framework and IE for each supported operating system (.NET is a programming platform required by a great many third-party Windows applications and is therefore broadly installed).

These changes are both worth mentioning because this month’s patch batch also includes Flash fixes (an ActiveX plugin on IE) and another .NET update. I’ve had difficulties installing large Patch Tuesday packages along with .NET updates, so I try to update them separately. To avoid any complications, I would recommend that Windows users install all other available recommended patches except for the .NET bundle; after installing those updates, restart Windows and then install any pending .NET fixes.

Finally, I should note that Microsoft released a major new version (version 5) of its Enhanced Mitigation Experience Toolkit (EMET), a set of tools designed to protect Windows systems even before new and undiscovered threats against the operating system and third-party software are formally addressed by security updates and antimalware software. I’ll have more on EMET 5.0 in an upcoming blog post (my review of EMET 4 is here) but this is a great tool that can definitely help harden Windows systems from attacks. If you already have EMET installed, you’ll want to remove the previous version and reboot before upgrading to 5.0.

ADOBE

Adobe’s critical update for Flash Player fixes at least seven security holes in the program. Which version of Flash you should have on your system in order to get the protection from these latest fixes depends on which operating system and which browser you use, so consult the (admittedly complex) chart below for your appropriate version number.

To see which version of Flash you have installed, check this link. IE10/IE11 on Windows 8.x and Chrome should auto-update their versions of Flash, although my installation of Chrome says it is up-to-date and yet is still running v. 14.0.0.145 (with no outstanding updates available, and no word yet from Chrome about when the fix might be available).

The most recent versions of Flash are available from the Flash home page, but beware potentially unwanted add-ons, like McAfee Security Scan. To avoid this, uncheck the pre-checked box before downloading, or grab your OS-specific Flash download from here.

Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera). If you have Adobe AIR installed (required by some programs like Tweetdeck and Pandora Desktop), you’ll want to update this program. AIR ships with an auto-update function that should prompt users to update when they start an application that requires it; the newest, patched version is v. 14.0.0.137 for Windows, Mac, and Android.

(Chart: Adobe Flash and AIR version numbers, August 2014.)

Adobe said it is not aware of any exploits in the wild that target any of the issues addressed in this month’s Flash update. However, the company says there are signs that attackers are already targeting the lone bug fixed in an update released today for Windows versions of Adobe Reader and Acrobat (Adobe Reader and Acrobat for Apple’s OS X are not affected).

(Chart: Adobe Reader and Acrobat version numbers, August 2014.)

Experience technical issues during or after applying any of these updates, or with the instructions above? Please feel free to sound off in the comments below.

Update, 6:52 p.m. ET: In the second paragraph, corrected the number of updates Microsoft released today.

TEDHow did Nick Hanauer get onto TED’s home page?!


Nick Hanauer and I are both really bad at holding grudges. Here, we talk before an in-office event last week. Photo: Ryan Lash

There’s a bit of a back story behind today’s TED Talk, in which Nick Hanauer issues a powerful warning to his fellow zillionaire ‘plutocrats’ that it’s time to take the inequality issue seriously, and makes the case to dramatically raise the minimum wage. Some of you may remember that two years ago there was an online spat between Nick and TED over a prior talk of his, also about inequality. We liked the talk, and agreed with its sentiments, but saw a few key problems with it that kept us from posting it on our home page (though we did post it on YouTube.) We were accused of censoring him, and the row generated an extraordinary level of heat.

Roll the clock forward two years, and worries about growing economic inequality have only increased. Nick has become a leading voice on the topic. He wrote a widely circulated article for Politico and is credited as a key force behind Seattle’s decision to adopt a $15 minimum wage. He and I ran into each other last month and discovered that we’re both really bad at holding grudges. So I invited him to come and give a new, longer talk on inequality at our office theater, reflecting his latest thinking. That happened last week (right after we ceremonially buried the hatchet .. see the pic below!). The talk was terrific: honest, surprising and important. I’m proud to be posting it today.

(Embedded video: Nick Hanauer, “Beware, fellow plutocrats, the pitchforks are coming”: https://embed-ssl.ted.com/talks/nick_hanauer_beware_fellow_plutocrats_the_pitchforks_are_coming.html)

Nick and I officially bury the hatchet in the TED office last week. Photo: Ryan Lash


Sociological ImagesJulie Chen Explains Why She Underwent “Westernizing” Surgery

Eyelid surgery is the third most common cosmetic procedure in the world.  Some are necessary for drooping eyelids that interfere with vision, others are undertaken in order to enable people to look younger, but many people choose these surgeries to make their eyes look more Western or whiter, a characteristic often conflated with attractiveness.

Recently Julie Chen — a TV personality and news anchor — revealed that she had undergone eyelid and other surgeries almost 20 years ago in order to comply with the standards of beauty and “relatability” demanded by her bosses. She released these photos in tandem with the story:

(Photos released by Julie Chen.)

Chen said that she was torn about whether to get the surgeries.  Her entire family got involved in the conversation and they split, too, arguing about whether the surgeries represented a rejection of her Chinese ancestry.

Ultimately, though, Chen was under a lot of pressure from her bosses.  One told her “you will never be on this anchor desk, because you’re Chinese.” He went on:

Let’s face it, Julie, how relatable are you to our community? How big of an Asian community do we have in Dayton? On top of that, because of your heritage, because of your Asian eyes, sometimes I’ve noticed when you’re on camera and you’re interviewing someone, you look disinterested, you look bored.

Another man, a “big time agent,” told her: “I cannot represent you unless you get plastic surgery to make your eyes look bigger.”

While cosmetic surgeries are often portrayed as vanity projects, Chen’s story reveals that they are also often about looking “right” in a competitive industry. Whether it’s erotic dancers getting breast implants, waitresses getting facelifts, or aspiring news anchors getting eyelid surgery, often economic pressures — mixed with racism and sexism — drive these decisions.

Lisa Wade is a professor of sociology at Occidental College and the co-author of Gender: Ideas, Interactions, Institutions. You can follow her on Twitter and Facebook.

(View original at http://thesocietypages.org/socimages)

TEDRemembering Robin Williams


Robin Williams hijacks the TED2008 stage before the BBC World Debate. Photo: Andrew Heavens

It’s 2008, moments before a BBC broadcast live from the stage at TED. But something’s gone wrong. The house lights are still up, the camera ops are looking at one another, official-looking folks are wandering at the stage apron muttering into headsets, and the panelists are sitting patiently onstage but looking, increasingly, baffled. Minutes go by.

And then a voice rises from the audience, wondering “why at a technology conference everything is running so shittily”! As Kim Zetter wrote: “at least that’s the word I think he used; it was hard to hear the last word through the audience’s laughter.” It was Robin Williams, who’d spent the day watching TED, and who now jumped out of the audience to grab the mic and reel off 10 or 15 minutes — reports vary — of improvised comedy about the day of ideas, TED in general and his own wide-ranging future shock.

The BBC shot the whole thing while waiting for their own production to come back online, and they eventually posted the monologue, cut into 3 minutes of breathtaking tightrope work.

And when I read the news today, I watched it again, and it reminded me of what we just lost — but it also gave me 3 minutes of pure, wild joy. Just watch him go.

(Embedded video: http://www.youtube.com/embed/_q790fmirQk)

See our Community Director Tom Rielly’s reflections on Robin Williams »


Planet Linux AustraliaAndrew Pollock: [life] Day 195: Kindergarten, tennis, play date

Today should have been more focused on my real estate licence training than it was, but life intruded.

Zoe had a good sleep, almost bang on 11 hours. She was happy but very congested when she woke up. Today was pajama day at Kindergarten, and she was very excited. As a result, I got her to Kindergarten quite quickly.

After I got home, I gave Zoe's bunk bed drawers another coat of sealant and by the time I was done with that, I pretty much had to jump on a bus to the city for a lunch meeting. I got a tiny bit of work done on my next course assessment on the bus.

After my lunch meeting I jumped in a taxi back home to meet with my Thermomix Group Leader to give her my application to become a consultant and then it was time to pick up Zoe from Kindergarten.

I almost got to Kindergarten before I realised that in my haste, I hadn't grabbed a change of clothes for Zoe to do tennis in. Fortunately I'd repacked her spare clothes for Kindergarten that morning, so after some brief dithering, I decided to run with that set of clothes. It worked out okay.

I think with tennis, it's a battle between being hungry after Kindergarten and tired after Kindergarten. Zoe did a better job of being focused, but still was ready to pack it in a little bit before the end of class. Still fighting off a cold can't have helped either. Next week I'll try and remember to bring a quick snack for her to eat between Kindergarten and tennis.

Zoe was desperate for Megan to come over for a play date after tennis. Jason had some stuff to do first, so we came home, and Zoe watched a bit of TV, and then Jason dropped Megan off and dashed off to Bunnings for a bit.

I ended up having dinner ready by the time he was due back, so I suggested they all stay for dinner, which they did, and then they went home afterwards.

Zoe was pretty tired, so I got her to bed a bit early.

Worse Than FailureNuclear Internship

Before he could graduate, Grigori’s Russian university program required him to complete a large-scale, real-world project. Like most of his peers, he planned to use this as an opportunity for job experience, which meant partnering with an outside company. Since Grigori did low-level development and microelectronic engineering, he found a paid internship position with the Russian Automation Institute. RAI has one major client: the company responsible for managing Russia’s nuclear reactors and supplying parts for nuclear weapons.

Before Grigori could start working, his soon-to-be mentor assigned him a “test”. “Before you begin, you must implement this conversion.” The conversion in question was to turn IEEE754 floating points into a “secret” format. The spec document was a three-column spreadsheet: a floating point number, a binary32 floating point number, and the “secret” format.
(Image: Haigerloch nuclear reactor, from Wikimedia Commons: http://commons.wikimedia.org/wiki/File:Haigerloch-nuclear-reactor.JPG)

“Can I have more details?” Grigori asked Aleksandr. “Is there any documentation about the format?”

“Not that you can have, no. We use custom CPUs, and they are very secret. You must work from this document.”

It wasn’t a huge challenge. By comparing the columns in the spreadsheet, Grigori was able to discover that this “secret” format used a 6-bit exponent instead of binary32’s 8 bits. He handed his program over to Aleksandr, who showered him with praise. “Amazing. No one else has done this yet. This will go into immediate use.”
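Such a conversion boils down to pulling the binary32 fields apart and re-packing them. A hypothetical sketch (the story doesn't say what language Grigori used, and the secret format's full layout is unknown, so the re-biased 6-bit exponent below is an assumption):

```
// Hypothetical sketch only: assumes the sign bit and 23-bit mantissa stay
// as in IEEE754 binary32, and the 8-bit exponent (bias 127) is narrowed
// to 6 bits (bias 31). The real "secret" format may differ.
function toSecretFormat(float $f): int {
    $bits = unpack('N', pack('G', $f))[1];         // raw big-endian binary32 bits
    $sign     = ($bits >> 31) & 0x1;
    $exponent = (($bits >> 23) & 0xFF) - 127 + 31; // re-bias the exponent
    $mantissa = $bits & 0x7FFFFF;
    // Overflow/underflow handling for the narrower exponent is omitted.
    return ($sign << 29) | (($exponent & 0x3F) << 23) | $mantissa;
}
```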

Grigori, still young and naive and lacking job-experience, just accepted the praise with a smile. He didn’t think about what it implied. Instead, still beaming from the ego boost, he showed up to work the following Monday.

The building he worked in had been designed in a particularly paranoid phase of the USSR. It was an anonymous, windowless cube, surrounded by a twisting intestine of barbed wire and security guards. After a series of searches, ID checks, logbook sign-ins, and lots of dark looks from men with automatic weapons, the bowels of the security apparatus released Grigori into the building. Aleksandr floated by the door, waiting for him.

Aleksandr didn’t give him a tour, except to make clear that Grigori should only ever open the door to his office and the restroom, and under no circumstances should ever touch anything else in the building. Grigori’s office looked more like an interrogation chamber: there was a single folding chair, a desk that had last seen use in 1967, and a naked fluorescent bulb. A pair of binders were chained to the desk. They were the documentation.

“Your computer comes soon. We got you a new one,” Aleksandr said. “In the meantime, read the documentation.”

The documentation was surprisingly helpful, in part because it hadn’t been made by RAI. RAI had purchased the chip designs from a third party in the mid–90s, and had spent the next 15 years getting their own assembly line off the ground. The chips were a fairly standard 80186, aside from their unusual floating point implementation.

When Grigori’s computer finally arrived, with its 13" CRT and 256MB of RAM, his first task was to move some of the documented circuit diagrams to SCADE. It was dull and brainless, and after a week, Grigori was ready to quit. That’s when Aleksandr called Grigori and several other interns in for a special project.

“Our circuit designs haven’t changed since we bought them,” Aleksandr explained, “but they are very expensive to manufacture. We must analyze the designs and find ways to remove components.”

“Is that really going to make them cheaper?” Grigori asked. “Won’t that change entail a lot of testing? Won’t the whole assembly line process need to be retooled? And what about building new test masks for the boards?”

“We are engineers, not accountants. We must find ways to make the boards cheaper.”

Over the next few weeks, they started to find ways to remove components. Grigori identified one mildly expensive capacitor that could be replaced. His peers found some resistors that could be safely removed. Aleksandr took that as inspiration, and decided that instead of having a pull-down resistor for each sensitive component (to protect against variations in current), he could have a single resistor on the entire board which would protect all the components.

Pleased with the success of his dream-team, Aleksandr promised them all a bonus, then sent the designs down to manufacturing. Manufacturing immediately sent them back, along with their objections:

  • The new designs would require extensive testing before they could be used.
  • Entirely new test-masks needed to be made, which would cost millions OR they’d need to do a slow and expensive manual test of key components.
  • The line would need to be retooled OR many assembly steps would have to be manually performed.

Aleksandr looked at these objections. “Testing? These are little modifications. Our designs are good. I vouch for them as safe. And manual assembly? Manual testing? I have a solution for that.”

His solution was to shove Grigori and the other interns behind a desk with an oscilloscope, multimeter, and soldering iron. “You need this to graduate. Do the work.”

The promised bonus never arrived, but Grigori did eventually escape and graduate. It was at his graduation that Grigori saw Aleksandr again. The mentor had cornered the dean of the engineering school. “You send us good students, but could you maybe send us more next semester? A lot more?”


Geek FeminismExtremely Loud and Incredibly Linkspam (12th August, 2014)

  • this has been going on a long time | Filling the Well (August 6): “Women have always spoken out against sexism and the injustice done to them for no other reason than their identity as women.” Quotes from outspoken feminists throughout the ages.
  • Twitter Won’t Stop Harassment on Its Platform, So Its Users Are Stepping In (August 6): “The network tells abused individuals to shut up (‘abusive users often lose interest once they realize that you will not respond’), unfollow, block, and—in extreme cases—get off Twitter, pick up the phone, and call the police. Twitter opts to ban abusive users from its network only when they issue ‘direct, specific threats of violence against others.’ That’s a criminal standard stricter than the code you’d encounter at any workplace, school campus, or neighborhood bar.”
  • Why are the media so obsessed with female scientists’ appearance? | theguardian.com (August 10): “Yet another profile of Susan Greenfield feels the need to dwell on her ‘long, youthfully blond hair’. Why are the media so rubbish at covering women in science?”
  • In Science, It Matters That Women Come Last | FiveThirtyEight (August 5): “The news is both good and bad. When a female scientist writes a paper, she is more likely to be first author than the average author on that paper. But she is less likely to be last author, writes far fewer papers and is especially unlikely to publish papers on her own. Because she writes fewer papers, she ends up more isolated in the network of scientists, with additional consequences for her career.”
  • Five Aussie women with apps to their name | The Age (August 9): “Apple alone has clocked more than 1.2 million apps and 75 billion user downloads from its App Store worldwide, while Google Play lists 1.3 million apps. Most of those, people assume, were built by men. But for women, who’ve long been a minority group in the tech sector, the apps market is proving fertile, according to Miriam Hochwald, founder of Girl Geek Coffees, a networking group.”
  • Backing diversity lowers the bar? | SC Magazine (August 4): “when we as an industry only make room for ‘tough’ people, people who are willing to put up with sexist, racist, ageist, ableist and LGBTQ-phobic behavior, we pass up brilliant minds and incredible talent. Some of the smartest people I know are not people that would be described as alpha. When you pass them up because they’re not exactly what you had in mind, you’re the one losing out.”

We link to a variety of sources, some of which are personal blogs.  If you visit other sites linked herein, we ask that you respect the commenting policy and individual culture of those sites.

You can suggest links for future linkspams in comments here, or by using the “geekfeminism” tag on Pinboard, Delicious or Diigo; or the “#geekfeminism” tag on Twitter. Please note that we tend to stick to publishing recent links (from the last month or so).

Thanks to everyone who suggested links.

Planet Linux Australia: Silvia Pfeiffer: Progress with rtc.io

At the end of July, I gave a presentation about WebRTC and rtc.io at the WDCNZ Web Dev Conference in beautiful Wellington, NZ.

[Image: webrtc_talk]

Putting that talk together reminded me about how far we have come in the last year both with the progress of WebRTC, its standards and browser implementations, as well as with our own small team at NICTA and our rtc.io WebRTC toolbox.

[Slide: WDCNZ presentation, page 5]

One of the most exciting opportunities is still under-exploited: the data channel. When I talked about the above slide and pointed out Bananabread, PeerCDN, Copay, PubNub and, later, WebTorrent, that’s where I really started to get Web developers excited about WebRTC. They can totally see the paradigm shift towards peer-to-peer applications, away from the server-based architecture of the current Web.
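
To make the data channel concrete, here is a minimal sketch using the standard browser API, with both peers living in the same page so that no signalling server is needed. This is purely illustrative: a real application would relay the offer, answer and ICE candidates over the network, and 2014-era browsers still required prefixed constructors.

// Two peers in the same page, wired back-to-back for illustration.
const a = new RTCPeerConnection();
const b = new RTCPeerConnection();

// Hand ICE candidates straight across instead of via a signalling server.
a.onicecandidate = (e) => { if (e.candidate) b.addIceCandidate(e.candidate); };
b.onicecandidate = (e) => { if (e.candidate) a.addIceCandidate(e.candidate); };

// Peer A opens a data channel; peer B listens for it.
const channel = a.createDataChannel('chat');
b.ondatachannel = (e) => {
  e.channel.onmessage = (msg) => console.log('B received:', msg.data);
};
channel.onopen = () => channel.send('hello, peer-to-peer world!');

// The offer/answer dance that signalling would normally relay.
a.createOffer()
  .then((offer) => a.setLocalDescription(offer))
  .then(() => b.setRemoteDescription(a.localDescription))
  .then(() => b.createAnswer())
  .then((answer) => b.setLocalDescription(answer))
  .then(() => a.setRemoteDescription(b.localDescription));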

Many were also excited to learn more about rtc.io, our own npm-modules-based approach to a JavaScript API for WebRTC.

[Image: rtcio_modules]

We believe that the world of JavaScript has reached a critical stage where we can no longer code by copy-and-pasting JavaScript snippets from all over the Web. We need a more structured approach to module reuse in JavaScript. Node, with JavaScript on the back end, really only kick-started this development; we have needed it for a long time on the front end, too. One big library (jQuery, anyone?) that does everything anyone could ever need on the front end isn’t going to work any longer with the amount of functionality that we now expect Web applications to support. Just look at the insane growth of npm compared to other module collections:

[Chart: Packages per day across popular platforms (shamelessly copied from http://blog.nodejitsu.com/npm-innovation-through-modularity/)]

For those who, like myself, found it difficult to understand how to tap into the sheer power of npm modules as a front-end developer: simply use browserify. npm modules are written following the CommonJS module definition spec, which browserify works with natively, “compiling” all the dependencies of an npm module into a single bundle.js file that you can use on the front end through a script tag, just as you would in plain HTML. You can learn more about browserify and module definitions, and about how to use browserify.
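
As a minimal sketch of that workflow (the file and function names here are made up for illustration):

// greet.js: a tiny CommonJS module.
module.exports = function greet(name) {
  return 'Hello, ' + name + '!';
};

// main.js: the entry point that browserify starts from.
var greet = require('./greet.js');
document.body.textContent = greet('WebRTC');

Running browserify main.js -o bundle.js resolves the require() calls and emits a single bundle.js, which you then load with an ordinary <script src="bundle.js"></script> tag.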

For those of you not quite ready to dive in with browserify, we have prepared the rtc module, which exposes the most commonly used packages of rtc.io through an “RTC” object from a browserified JavaScript file. You can also directly download the JavaScript file from GitHub.

Using rtc.io rtc JS library

So, I hope you enjoy rtc.io, my slides, and the large collection of interesting links inside the deck – and of course: enjoy WebRTC! Thanks to Damon, Jeff, Cathy, Pete and Nathan – you’re an awesome team!

On a side note, I was really excited to meet the author of browserify, James Halliday (@substack), at WDCNZ. His talk on “building your own tools” seemed to take me back to the times when everything was done on the command line. I think James is using Node and the Web in a way that would appeal to a Linux kernel developer. Fascinating!

Planet Linux Australia: Peter Lieverdink: iPhotography Widget

A while ago, I was contacted by MobileZap, a reseller of mobile phone accessories, and asked if I was interested in reviewing an iPhone zoom lens attachment. Unfortunately the widget only attached to the iPhone 5S - mine's a 5c - so I wasn't able to. I also mostly do wide angle photography (landscapes), so a zoom lens would be sort of wasted on me anyway.

When browsing the website, I did stumble across the olloclip wide-angle/fisheye/macro lens kit, which piqued my interest. When I mentioned this, they sent me one and asked me to write a blog post about my experiences using it. Happily, I was about to go on a road trip past some very large holes in the ground, where it would come in very handy indeed!

The (to give it its full and proper name) olloclip iPhone 5S / 5 Fisheye, Wide-angle, Macro Lens Kit comes in a fully recyclable plastic and paper package. It includes the phone adapter with lenses, an insert to make the adapter fit iPods, and a fabric pouch to keep the lenses free of scratches when not in use. The pouch also doubles as a lens cloth.


First Light: Royal Park

I've been taking an image of Melbourne's CBD once a day (when I'm in the country) from the same spot in Royal Park for close to a year, so I thought I'd start by using the olloclip for the same image:

Royal Park - iPhone 5c

iPhone 5c standard.


Royal Park - olloclip Wide Angle

iPhone 5c with olloclip wide angle lens.


Royal Park - olloclip Fisheye

iPhone 5c with the olloclip fisheye lens.


Oops, it turns out the lens kit doesn't really fit the iPhone 5c! The adapter is made for the 5S model, and the rounded edge on the 5c means it doesn't slide all the way on, so the lens and the camera don't quite align. Mind you, a little bit of image editing to trim this image still results in something usable for blogs and Twitter :-)

To give you an idea of the field of view of each of the lens adapters, I've stacked the three images on top of each other at the approximate same size:

Royal Park - olloclip combined

Relative sizes of the fields of view of the olloclip wide angle and fisheye lenses.


Big Things: Road Trip

The main reason I agreed to review this lens kit was to play with it on the road trip, which went through the south-eastern USA, from Los Angeles to Austin. Happily, that included a few choice large-hole-in-the-ground subjects for wide angle photography, as well as a friend with an iPhone 5S, on which the olloclip fit just fine.

Sedona - olloclip Fisheye

Cathedral, Sedona, AZ. iPhone 5S with olloclip fish-eye lens.


Barringer Crater - olloclip fisheye

Barringer Crater, Flagstaff, AZ. iPhone 5S with olloclip fish-eye lens. 


Grand Canyon - olloclip wide angle

Grand Canyon South Rim, Desert View, AZ. iPhone 5c with olloclip wide angle lens.


As you can see, the olloclip fits just fine on the iPhone 5S - there is no asymmetric distortion like there was in the iPhone 5c fish-eye image.


Small Things: Macro

The wide angle lens consists of two lenses, the top one of which you can unscrew and remove to make a macro lens. I didn't really have anything to take photos of, until a house move left me with a large pile of small change to sort through.

It turns out that some old Australian coins have minting errors or oversights, which make them sought after by collectors. Specifically, some of the 2 cent coins are missing the designer's initials (SD).

This was a lovely way to try out the macro lens. It works fine as a magnifying glass, too!

2 cents - olloclip macro

Australian 2 cent piece with 'SD' initials (just left of the lizard's toe), iPhone 5c with olloclip macro lens. 


2 cents - olloclip macro

 Australian 2 cent piece without initials, iPhone 5c with olloclip macro lens.


a penny - olloclip macro

Australian 1917 penny, iPhone 5c with olloclip macro lens.


Conclusion

The macro attachment has come in incredibly handy for close-up images of items to put on eBay. The fact that it doesn't quite fit the iPhone 5c has not been a hindrance, in the way it was for the fish-eye lens.

All in all, I've found the olloclip to be a nifty little attachment and nice to have handy.


Full disclosure

I am not affiliated with olloclip or MobileZap. MobileZap provided me with a free olloclip to review.

,

TED: Gallery: Watercolor sketchnotes from TEDSalon Berlin

[Image: TEDSalon Berlin dromedary sketchnote]

The animal you see in this watercolor-washed illustration is not a camel. This is, technically, a dromedary. Anja Kantowsky, a communications consultant who lives in Germany, created this image at the TEDSalon Berlin in June as she watched a talk from nine-time TED speaker Hans Rosling and his son Ola Rosling that used the differences between the two animals to make a point.

“A camel has two humps, and a dromedary just one,” explains Kantowsky. “Hans and Ola used the two to paint a picture of how wealth is distributed across the population. In a camel-hump-shaped world, there are many poor, no middle class and some rich; in a dromedary-hump-shaped economy, the majority of people are in the middle with few very poor and very rich. I was interested to hear that the world’s wealth is distributed in a dromedary-hump pattern right now. I wouldn’t have guessed that.”

Kantowsky sought to capture this point, and others made by the pair in their talk (which will appear on TED.com in the fall) in her sketch above. It’s one of 12 that she drew freehand during the TEDSalon Berlin, using the app Paper. “It has a watercolor pencil—it’s their killer feature,” says Kantowsky. “If you have a look at #madewithpaper on Twitter, you’ll realize that it’s kind of like a popular Instagram filter. You see a lot created with it.”

In her work, Kantowsky often uses drawing as a tool, and she recently took a workshop that got her especially interested in sketchnoting. So she was excited to sketch talks in the TEDSalon Berlin. “Having watched only single talks before, the salon experience was new to me,” says Kantowsky. “All the talks came together to form the theme ‘Bits of Knowledge.’ It brought together narratives on data, networks and their disruptive power.”

But the Roslings’ talk was far and away Kantowsky’s favorite. “I admired their technique—a quiz—to teach us that we have false assumptions about how the world works,” she says. “They were very effective in showing me that I have to re-assess my view of the world. They would be great communications consultants!”

See all of Kantowsky’s sketches below:

[Slideshow of Kantowsky’s 12 sketches: https://www.slideshare.net/slideshow/embed_code/36458214]


Planet Debian: Cyril Brulebois: Mark a mail as read across maildirs

Problem

Discussions are sometimes started by mailing a few different mailing lists so that all relevant parties have a chance to be aware of a new topic. It’s all nice when people can agree on a single venue to send their replies to, but that doesn’t happen every time.

Case in point, I’m getting 5 copies of a bunch of mails, through the following debian-* lists: accessibility, boot, cd, devel, project.

Needless to say, reading a given mail, or marking it as read, once per maildir rapidly becomes a burden.

Solution

I know some people use a duplicate killer at procmail time (hello gregor) but I’d rather keep all mails in their relevant maildirs.

So here’s mark-as-read-everywhere.pl, which seems to do the job just fine for my particular setup: all maildirs below ~/mails/* with the usual cur, new, tmp subdirectories.

Basically: given a mail piped from mutt, compute a hash over various headers, look at all new mails (in the new subdirectories), and mark the matching ones as read (move them to the nearby cur subdirectories, changing the maildir suffix from :2, to :2,S).
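
The script itself is Perl; as a rough JavaScript (Node) sketch of the same idea, matching on Message-ID only rather than a hash over several headers, and assuming the ~/mails layout described above:

// A rough Node sketch: mark a mail, piped in from mutt on stdin,
// as read in every maildir below ~/mails that holds a copy of it.
const fs = require('fs');
const path = require('path');
const os = require('os');

// Pull the Message-ID out of the mail on stdin.
const input = fs.readFileSync(0, 'utf8');
const m = input.match(/^Message-ID:\s*(<[^>]+>)/im);
if (!m) process.exit(1);
const msgid = m[1];

const root = path.join(os.homedir(), 'mails');
for (const box of fs.readdirSync(root)) {
  const newDir = path.join(root, box, 'new');
  if (!fs.existsSync(newDir)) continue;
  for (const file of fs.readdirSync(newDir)) {
    const copy = fs.readFileSync(path.join(newDir, file), 'utf8');
    if (!copy.includes(msgid)) continue;
    // Move it to cur/ and append the maildir "seen" flag.
    const seen = file.split(':')[0] + ':2,S';
    fs.renameSync(path.join(newDir, file),
                  path.join(root, box, 'cur', seen));
  }
}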

Mutt key binding (where X is short for cross post):

macro index X "<pipe-message>~/bin/mark-as-read-everywhere.pl<enter>"

This isn’t pretty or bulletproof but it already started saving time!

Now to wonder: was it worth the time to automate that?

Planet Debian: Cyril Brulebois: How to serve Perl source files

I noticed a while ago that a Perl script file included on my blog wasn’t served properly: since the charset wasn’t announced, web browsers didn’t display it correctly. The received file was still valid UTF-8 (hello, little © character), at least!

First, wrong intuition

Reading Apache’s /etc/apache2/conf.d/charset, it looks like the following directive might help:

AddDefaultCharset UTF-8

but the comments there suggest reading the documentation! And indeed, that alone isn’t sufficient, since it would only affect text/plain and text/html. The above directive would have to be combined with something like this in /etc/apache2/mods-enabled/mime.conf:

AddType text/plain .pl

Real solution

To avoid any side effects on other file types, the easiest way forward seems to be to avoid setting AddDefaultCharset, and instead to associate the UTF-8 charset with .pl files, keeping the text/x-perl MIME type, with this single directive (again in /etc/apache2/mods-enabled/mime.conf):

AddCharset UTF-8 .pl

Looking at the response headers (wget -d), we’re moving from:

Content-Type: text/x-perl

to:

Content-Type: text/x-perl; charset=utf-8

Conclusion

Nothing really interesting, or new. Just a small reminder that tweaking options too hastily is sometimes a bad idea. In other news, another Perl script is coming up soon. :)