Planet Russell


Cryptogram: Yet Another Russian Hack of the NSA -- This Time with Kaspersky's Help

The Wall Street Journal has a bombshell of a story. Yet another NSA contractor took classified documents home with him. Yet another Russian intelligence operation stole copies of those documents. The twist this time is that the Russians identified the documents because the contractor had Kaspersky Labs anti-virus installed on his home computer.

This is a huge deal, both for the NSA and Kaspersky. The Wall Street Journal article contains no evidence, only unnamed sources. But I am having trouble seeing how the already embattled Kaspersky Labs survives this.

WSJ follow up. Four more news articles.

Worse Than Failure: Error'd: Sorry for the Inconvenience

"Yeah, I'm kinda sorry that I have to use Visual Studio too," wrote Kevin D.

 

"Turns out, the Office 365 Dev Center isn't as helpful as one would expect," wrote John A.

 

"I'm not sure what I saved, but it sure feels good to be 18 more than average!" writes Bob.

 

Kevin M. wrote, "Thanks, Verizon, for being incredibly precise! Now, if only there were some way to round numbers off..."

 

David E. writes, "And just like that, our IT department becomes a tremendous profit center."

 

"Well done Dell! This will keep out those bots for sure," writes Stephan H.

 


Planet Debian: Raphaël Hertzog: My Free Software Activities in September 2017

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

Debian LTS

This month I was allocated 12h but I only spent 10.5h. During this time, I continued my work on exiv2. I finished reproducing all the issues and then went on to do code reviews to confirm that vulnerabilities were not present when the issue was not reproducible. I found two CVEs where the vulnerability was present in the wheezy version and I posted patches in the upstream bug tracker: #57 and #55.

Then another batch of 10 CVEs appeared and I started the process over… I’m currently trying to reproduce the issues.

While doing all this work on exiv2, I also uncovered a build failure in the package in experimental (reported here).

Misc Debian/Kali work

Debian Live. I merged 3 live-build patches prepared by Matthijs Kooijman and added an armel fix to cope with the rename of the orion5x image into the marvell one. I also uploaded a new live-config to fix a bug with the keyboard configuration. Finally, I also released a new live-installer udeb to cope with a recent live-build change that broke the locale selection during the installation process.

Debian Installer. I prepared a few patches on pkgsel to merge a few features that had been added to Ubuntu, most notably the possibility to enable unattended-upgrades by default.

More bug reports. I dug much further into my problem with non-booting qemu images when they are built by vmdebootstrap in a chroot managed by schroot (cf #872999), and while we have much more data, it’s not yet clear why it doesn’t work. But we have a working work-around…

While investigating issues seen in Kali, I opened a bunch of reports on the Debian side:

  • #874657: pcmanfm: should have explicit recommends on lxpolkit | polkit-1-auth-agent
  • #874626: bin-nmu request to complete two transitions and bring back some packages in testing
  • #875423: openssl: Please re-enable TLS 1.0 and TLS 1.1 (at least in testing)

Packaging. I sponsored two uploads (dirb and python-elasticsearch).

Debian Handbook. My work on updating the book mostly stalled. The only thing I did was to review the patch about wireless configuration in #863496. I must really get back to work on the book!

Thanks

See you next month for a new summary of my activities.


Planet Linux Australia: OpenSTEM: This Week in HASS – term 4, week 1

The last term of the school year – traditionally far too short and crowded with many events, both at and outside of school. OpenSTEM’s® Understanding Our World® program for HASS + Science ensures that not only are the students kept engaged with interesting material, but that teachers can relax, knowing that all curriculum-relevant material is […]


Cryptogram: Replacing Social Security Numbers

In the wake of the Equifax breach, I've heard calls to replace Social Security numbers. Steve Bellovin explains why this is hard.

Planet Debian: Ross Gammon: My FOSS activities for August & September 2017

I am writing this from my hotel room in Bologna, Italy before going out for a pizza. After a successful Factory Acceptance Test today, I might also allow myself to celebrate with a beer. But anyway, here is what I have been up to in the FLOSS world for the last month and a bit.

Debian

  • Uploaded gramps (4.2.6) to stretch-backports & jessie-backports-sloppy.
  • Started working on the latest release of node-tmp. It needs further work due to new documentation being included etc.
  • Started working on packaging the latest goocanvas-2.0 package. Everything is ready except for producing some autopkgtests.
  • Moved node-coffeeify from experimental to unstable.
  • Updated the Multimedia Blends Tasks with all the latest ITPs etc.
  • Reviewed doris for Antonio Valentino, and sponsored it for him.
  • Reviewed pyresample for Antonio Valentino, and sponsored it for him.
  • Reviewed a new parlatype package for Gabor Karsay, and sponsored it for him.

Ubuntu

  • Successfully did my first merge using git-ubuntu for the Qjackctl package. Thanks to Nish for patiently answering my questions, reviewing my work, and sponsoring the upload.
  • Refreshed the gramps backport request to 4.2.6. Still no willing sponsor.
  • Tested Len’s rewrite of ubuntustudio-controls, adding a CPU governor option in particular. There are a couple of minor things to tidy up, but we have probably missed the chance to get it finalised for Artful.
  • Tested the First Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes. Also drafted my first release announcement on the Ubuntu Studio website, which Eylul reviewed and published.
  • Refreshed the ubuntustudio-meta package and requested sponsorship. This was done by Steve Langasek. Thanks Steve.
  • Tested the Final Beta release of Ubuntu Studio 17.10 Artful and wrote the release notes.
  • Started working on a new Carla package, starting from where Víctor Cuadrado Juan left it (ITP in Debian).

TED: Break the mold: The talks of TED@BCG 2017

The main stage at TED@BCG at East End Studios, October 4, 2017, in Milan, Italy. Photo: Richard Hadley / TED

Complex times require a bold embrace of diversity and difference — and an ability to turn the unknown into an advantage. How can we tap into the unexpected?

For a sixth year, BCG has partnered with TED to bring experts in education, diversity, AI, biology and more to the stage to share ideas from the forefront of innovation. At this year’s TED@BCG — held on October 4, 2017, at East End Studios in Milan, Italy — 17 creators, leaders and innovators invited us to challenge preconceived ideas, grapple with real problems and open our minds to new ways of thinking.

After opening remarks from Rich Lesser, president and CEO of BCG, the talks of Session 1 kicked off.

“Can I be myself today?” Erica Joy Baker speaks to the questions that underrepresented people may carry with them at work, creating anxiety their coworkers don’t feel. She took the stage at TED@BCG in Milan. Photo: Richard Hadley / TED

Bridging the “anxiety gap.” “Most of us come to work with a general set of questions and concerns: How do I make an impact? Will I meet my goals today?” says Erica Joy Baker, senior engineering manager at Patreon and an advocate for diversity and inclusion in tech. “But people from underrepresented groups have a different set of questions: Am I being paid fairly? How do I avoid sexual harassment? Can I be myself today?” Baker calls these ever-looming concerns the “anxiety gap” — a gulf created by the issues that she faces in the workplace as a woman of color, and one which she must always navigate before she can do the job she was hired for. She believes that companies need to recognize this phenomenon and make changes so all of their employees can thrive. For starters, she says, bosses should think about a time when they felt like an outsider and then figure out how to prevent their workers from experiencing those feelings on the job. They should make sure employees know they’re fairly compensated (by publishing salary ranges), and they should set up a safe, confidential method to report harassment. And all people, regardless of job title or pay grade, can speak up on behalf of coworkers who are struggling to traverse the anxiety gap. “Show them you are not only their ally,” Baker suggests, but that “you are their advocate and accomplice.”

Are diverse companies smarter and more creative? You may think, instinctively, well of course they are … but Rocío Lorenzo and her team actually did the study to prove the effect is real. She speaks at TED@BCG in Milan. Photo: Richard Hadley / TED

Why diversity in the workplace is a competitive advantage. As a business advisor, Rocío Lorenzo noticed that the more diverse companies she worked with often produced fresher, more creative ideas than less diverse ones. So she wanted to know, was there a link? Were diverse companies really more innovative? Lorenzo and her team surveyed 171 companies from Germany, Austria and Switzerland to find out. At the onset of the study, she and her team doubted that they would find anything significant, but after the data came in, the answer was a clear yes. “More diverse companies are simply more innovative. Period,” she says. Since that study, Lorenzo has worked with many organizations who want to increase their innovation through strategic hiring and promotions, and she’s seen incredible results. How can your company follow suit? According to Lorenzo, who you hire and who you promote matter. Change the face of leadership and make it more diverse, she says, because diversity won’t happen naturally, and one woman in the boardroom won’t cut it.

Superheroes wanted. With great design power comes great problem-solving ability. “Design unlocks solutions to our problems,” says design champion Kevin Bethune. “I believe that design is not just important but absolutely necessary to achieve success in business.” Bracketed by examples from the design world, his own company and his personal inspirations, Bethune reveals the four superpowers that all designers possess: x-ray vision (understanding the implicit behaviors of people ), shapeshifting (emulating behavior of their subjects and turning feelings into products and services), extrasensory perception, or ESP (tapping into people’s senses) and the ability to make others superhuman (guiding people into a state of flow). Don’t believe it? Give designers room to truly thrive in the workplace and see firsthand what happens when these powers are released.

Humans and bacteria: an unlikely partnership. When it comes to decreasing our fossil-fuel dependence, we need to think beyond just how we power our homes and businesses. Nearly all the products we use today, including things like the fabric in our clothes, rely on petroleum and require huge amounts of water. Natsai Audrey Chieza, founder and creative director of R&D studio Faber Futures, is inventing new ways to engineer the things we want and need by fusing biology, technology and design. Consider fermentation, which she calls an “advanced technological toolkit for our survival.” Humans have long used this bacterial process to create things like cheese and beer; in the 1920s, Alexander Fleming employed it to create penicillin. After Chieza noticed the bacterial strain streptomyces coelicolor produces striking pigments, she harnessed it to dye textiles. The resulting fabric is colorfast and chemical-free (bonus: the process takes very little water). Chieza is now trying to scale up her methods, but what excites her most are the similar efforts that are underway in other labs and design studios. One startup is making “leather” from mushrooms, while another is using yeast to create a protein-based, super-strong yarn. The possibilities represented by such bio-based industries are both thrilling and dizzying, and we need to think about how we can best build, distribute and regulate them. As she says: “This is the material future that we must be bold enough to shape.”

The most mysterious microbes on earth. How deep can we go into the earth and find a living thing? We still don’t know the answer to this very basic question about life, says microbiologist Karen Lloyd. But in the 1980s, a scientist named John Parkes discovered living microbes buried in mud deep in the seafloor (a discovery confirmed with a subsequent expedition in 2001). “They’re not like anything we’ve seen before,” Lloyd says, which makes them extremely tricky to study. No one has even managed to grow them in a petri dish. “Those microbes have a fundamentally different relationship with time and energy than we do,” she says, “but if we can continue to find creative ways to study them, then maybe we’ll finally understand what life — all of life — is like on Earth.”

How great ideas happen. “We have all probably wondered how great minds achieved what they achieved,” says physicist Vittorio Loretto, opening up Session 2. The more astonishing their feats, the more we assume that they’re geniuses and unlike us. But is that really true? Are advances like the ones made by Newton and Einstein achieved by great leaps or by something else? How do we really conceive of something new? As a possible answer, Loretto introduces the concept of the adjacent possible: everything that is one step away from what already exists. “You can achieve them through incremental modifications and recombinations of the existing material,” he says. The adjacent possible is continuously shaped by our actions and our choices, helping us push the boundaries of what’s possible: “Impossible missions might not be so impossible after all.”

Getting the most bang for your buck on R&D. No matter what sector you’re in, most good ideas exist outside your organization — and most bad ones too. It’s up to your internal experts to help tell the difference between the two, says innovation instigator Michael Ringel. Ringel and his team found that the biggest drivers for a company’s success in R&D are a mixture of three things: external innovation, having access to great scientific information and actually listening to the internal experts who dispense that data. It’s the lack of follow-through on the listening portion that often makes productivity elusive. Ringel suggests seriously contemplating (rather than ignoring) internal research, offering positive personal incentives — and embracing the spirit of collaboration to cultivate the best ideas.

Where are people shopping in China? On their phones, says Angela Wang at TED@BCG in Milan. Photo: Richard Hadley / TED

The future of shopping? China, the world’s most populous country, is “like a huge laboratory generating all sorts of experiments and innovations,” says retail expert Angela Wang. And in this lab, everything is taking place on people’s phones. In China, 500 million customers — equal to the entire populations of the US, UK and Germany — are regularly making purchases via mobile platforms, where shopping and payment are seamlessly linked. Mobile payment is the norm even in brick-and-mortar stores. What should retailers know? People want shopping to be ultra-convenient (a Shanghai-based supermarket will deliver any of 4,000 items ordered via mobile to homes in 30 minutes), ultra-flexible (fashion retailers are responding to celebrity style and social postings by using “microstudios” to produce mini lines of a few dozen garments in 3-4 days) and ultra-social (shopping is embedded in social interactions, whether it’s customers sharing recommendations — and buying — via chat or the relationships that social influencers and 24/7 e-commerce shopping assistants are forging with customers). “We’re at the very beginning of a huge transformation,” says Wang, and this seismic shift in retail is reshaping not just the point of purchase but also supply chains, distribution and marketing. These rapid-fire, ongoing changes are the new business-as-usual, leaving retailers with a simple choice: adapt or die.

If you ever feel lost, stop and listen for your song. Music gives a soul to the universe and flight to the imagination, says student, musician and TED-Ed Clubs superstar Anika Paulson. Guitar in hand, she plays through the beats of her life, exploring how music connects us and makes us what we are — and how it can help us find our rhythm when we’ve lost it. “Where music has rhythm, we have routines and habits, things that help us remember what to do and how to stay on track,” Paulson says, strumming her guitar. Friends and family create harmonic structure in your life, and you’re the melody, she continues. In times of change, the new and off-tempo noises that enter your life might change your melody, but it’s still the same song — your song. “Music is my way of coping with the changes in my life,” Paulson says in a new movement of her song. “It changes and it builds and it diminishes, but it’s always there, surrounding us, connecting us to each other and showing us the beauty of the universe.”

Philipp Gerbert tells us what’s next in AI at TED@BCG in Milan. Photo: Richard Hadley / TED

AI in practice. AI isn’t an abstract, mysterious force for experts only; instead, it’s for all of us to use and benefit from, says AI pathfinder Philipp Gerbert. The basic principles of AI are actually rather simple, Gerbert says: today, AI is a fast intuition and calculation machine with improving vision and language skills, whose intelligence we need to nurture with lots of data and feedback. And it solves problems really differently from the way a human might. It doesn’t just help on major, complex applications like self-driving cars; it can help with everyday tasks as well, Gerbert says. “When people — and by that I mean all us — start applying their knowledge of AI, we have seen the applications explode,” he says. If you’re working on tasks from translation to recruiting to cyber security and much more, you could use an AI assistant. “AI is not this destructive, mysterious force,” he concludes. “With this understanding, we can start moving from mere spectators, or prey, to becoming actors in the AI world.”

The danger of AI bias. As a research scientist at Google, Margaret Mitchell works on helping computers talk about what they see and understand. One day, she showed a computer an image of a house burning down, and the computer remarked on what a spectacular view it was. “I realized that as I worked on improving AI task by task, data set by data set, I was creating massive gaps, holes and blind spots in what it could understand,” she says, “and while doing so, I was creating all kinds of biases.” She realized that we need to think deeply about how the technology we create today will look in the future. “We can be proactive around the outcomes of AI and begin tracing out the evolutionary path that we’d like it to follow,” she says. But this isn’t something that only large tech companies can contribute to. “The math, the models, the basic building blocks of artificial intelligence are something that we can all access and work with,” she says. “We have open-source tools for machine learning and intelligence that we can contribute to.” We can also start a conversation about technology — what concerns and excites us, what aspects could be more beneficial or more problematic as time goes on. “If we want AI to evolve in a way that helps humans, then we need to define the goals and strategies that enable that path now,” she concludes.

Krumping to Mozart. Krumping is a free, expressive, exaggerated street dance that originated in African American communities in Los Angeles and has evolved into a global art form. Opening Session 3, French artist Wolf escaped into what felt like a moment of spiritual transcendence onstage, as he krumped with exacting precision to the “Lacrimosa” from Mozart’s Requiem aeternam.

A new paradigm of social responsibility. How can we make lasting and significant progress on the big challenges in our world? We need business to drive the solutions, says social impact strategist Wendy Woods. CSR, or corporate social responsibility, is the norm today — but it’s not strong or durable enough to drive solutions, she suggests, because it’s an incremental cost — and it’s almost always the first thing to be cut in bad times. Woods has a new paradigm: TSI, or total societal impact. “It’s the sum of all of the ways a business can affect society by thinking about their product design, manufacturing and distribution,” she explains. “And it can actually create core business benefits while solving meaningful societal problems.” That is, companies that perform strongly on social and environmental areas do better in the long run. Sharing new data from a study that looked at the companies that have done well on TSI, Woods shows how companies across industries like oil and gas, pharmaceuticals and banking see better margins and valuations when they do positive things like minimizing their impact on the environment and maintaining strong occupational safety programs. “One of the best ways for businesses to help ensure their own growth and longevity is to meet some of the hardest challenges in our society, and to do so profitably,” Woods concludes.

How do you do the Ice Bucket Challenge in a desert country? With a bucket full of sand, says Lana Mazahreh as she speaks to the world’s evolving water crisis at TED@BCG in Milan. Photo: Richard Hadley / TED

Hopeful solutions to the water crisis. According to the Food and Agriculture Organization of the United Nations, today nearly one in three people live in a country that is facing a water crisis. Water conservation activist Lana Mazahreh grew up in Jordan, a parched country that has experienced absolute water scarcity since 1973, and she learned how to conserve water as soon as she was old enough to learn how to write her name. What can the rest of the world learn from parched countries on how to save water and address what’s fast becoming a global crisis? Mazahreh shares three lessons from how water-poor countries have survived and thrived: Tell people how much water they really have, so they can take responsibility; empower people to save water, with simple tools like tap and showerhead regulators and toolkits with water-saving techniques; and look below the surface for water savings in unexpected places, such as in Australia, where they’re recycling wastewater. “If we just look at what water-poor countries have done, the solutions are out there,” Mazahreh says. “Now it’s really just up to all of us to take action.”

“Good” and “bad” are often incomplete stories. When we describe events as “good” or “bad,” we dilute the complexity of the human experience, says writer Heather Lanier. After her daughter was diagnosed with Wolf-Hirschhorn syndrome, a genetic condition that results in developmental delays, Lanier soon came to realize that a short-sighted perspective diminished what it means to live a full life — especially when it comes to individuals with physical and mental disabilities, who are often stripped of being seen as multidimensional people. “When we label people as tragic or angelic, bad or good, we rob them of their humanity, along with not only the messiness and complexity that come with it, but the rights and dignities as well,” she says. Instead of fixating on pity or attempting to normalize things, we should question the cultural values used to mark success and failure in life. Lanier takes life as it comes and watches her daughter’s life unfold for what it is — beautiful, complicated, joyful, hard — and responds to every new experience with the words from an ancient parable: “Good or bad, hard to say.”

The global learning crisis — and what to do about it. We are in the midst of a learning crisis, says Amel Karboul, an education pioneer and Secretary-General of the Maghreb Education Commission. Globally, a quarter billion of the world’s children are out of school, and an additional 330 million are in school but failing to learn. If nothing changes, that number will only grow, but it doesn’t have to be this way. Karboul shares two important ways we can improve our education systems so that every child is in school and learning within just one generation: have countries learn from others within their same income level, and divide teaching between content teachers and tutoring teachers. We can implement these ideas worldwide by bringing stakeholders together, relentlessly following up to make sure progress is happening and finding new ways for countries to borrow money for education, Karboul says. “Education is the human rights struggle of our generation,” she concludes, “Quality education for all: that’s the freedom fight we’ve got to win.”

Unfairness can make a workplace into an unhappy place, says Marco Alverà at TED@BCG in Milan. Photo: Richard Hadley / TED

How to promote a fair workplace. What is it about unfairness? Whether it’s not being invited to a friend’s wedding (when other people who barely know her are) or getting reprimanded for an honest mistake, unfairness often makes us so upset that we can’t think straight. But unfairness isn’t just a personal issue; it’s also bad news for business, says Marco Alverà, CEO of Italian natural gas infrastructure company Snam. As Alverà explains, partial treatment or unwarranted penalties in the workplace often make workers unhappy and unengaged, leading to millions lost in productivity each year. So how do you promote a fair workplace? Alverà explains that organizations can create a culture of fairness by rewarding employees for doing what they feel is right, instead of what’s selfish or quick. In a brief interview following the talk, he offers a final tip for the rest of us: “You know what’s right — go for it!”


Google Adsense: AdSense now understands Bengali (Bangla)

Today, we’re excited to announce the addition of Bengali (Bangla), a language spoken by millions in Bangladesh, India and many other countries around the world, to the family of AdSense supported languages.


Interest in Bengali-language content has been growing steadily over the last few years. AdSense provides an easy way for publishers to monetize the content they create in Bengali, and helps advertisers looking to connect with the growing online Bengali audience reach them with relevant ads.


To start monetizing your Bengali (Bangla) content website with Google AdSense:


  1. Check the AdSense program policies and make sure your website is compliant.
  2. Add the AdSense code to start displaying relevant ads to your users.


Welcome to AdSense! Sign Up now!

Posted by: AdSense Internationalization Team

Cryptogram: HP Shared ArcSight Source Code with Russians

Reuters is reporting that HP Enterprise gave the Russians a copy of the ArcSight source code.

The article highlights that ArcSight is used by the Pentagon to protect classified networks, but the security risks are much broader. Any weaknesses the Russians discover could be used against any ArcSight customer.

What is HP Enterprise thinking? Near as I can tell, they only gave it away because the Russians asked nicely.

Supply chain security is very difficult. The article says that Russia demands source code because it's worried about supply chain security: "One reason Russia requests the reviews before allowing sales to government agencies and state-run companies is to ensure that U.S. intelligence services have not placed spy tools in the software." That's a reasonable thing to worry about, considering what we know about NSA's interdiction of commercial hardware and software products. But how can Group A convince Group B of the integrity and security of hardware/software without putting itself at risk from Group B?

This is one of the areas where open-source software has a security edge. If everyone has access to the source code -- and security doesn't depend on its secrecy -- then there's no advantage in getting a copy. As long as companies rely on obscurity for their security, these sorts of attacks are possible and profitable.

I wonder what sorts of assurances HP Enterprise gave its customers that it would secure its source code, and if any of those customers have negligence options against HP Enterprise.

News articles.

EDITED TO ADD (10/5): Commentary.

Planet Debian: Wouter Verhelst: Patching Firefox

At work, I help maintain a smartcard middleware that is provided to Belgian citizens who want to use their electronic ID card to, e.g., log on to government websites. This middleware is a piece of software that hooks into various browsers and adds a way to access the smartcard in question, through whatever APIs the operating system and the browser in question provide for that purpose. The details of how that is done differ between each browser (and in the case of Google Chrome, for the same browser between different operating systems); but for Firefox (and Google Chrome on free operating systems), this is done by way of a PKCS#11 module.

For Firefox 57, mozilla decided to overhaul much of their browser. The changes are large and massive, and in some ways revolutionary. It's no surprise, therefore, that some of the changes break compatibility with older things.

One of the areas in which breaking changes were made is in the area of extensions to the browser. Previously, Firefox had various APIs available for extensions; right now, all APIs apart from the WebExtensions API are considered "legacy" and support for them will be removed from Firefox 57 going forward.

Since installing a PKCS#11 module manually is a bit complicated, and since the legacy APIs provided a way to do so automatically provided the user would first install an add-on (or provided the installer of the PKCS#11 module sideloads it), most parties who provide a PKCS#11 module for use with Firefox will provide an add-on to automatically install it. Since the alternative involves entering the right values in a dialog box that's hidden away somewhere deep in the preferences screen, the add-on option is much more user friendly.

I'm sure you can imagine my dismay when I found out that there was no WebExtensions API to provide the same functionality. So, after asking around a bit, I filed bug 1357391 to get a discussion started. While it took some convincing initially to get people to understand the reasons for wanting such an API, eventually the bug was assigned the "P5" priority -- essentially, a "we understand the need and won't block it, but we don't have the time to implement it. Patches welcome, though" statement.

Since having an add-on was something that work really wanted, and since I had the time, I got the go-ahead from management to look into implementing the required code myself. I made it obvious rather quickly that my background in Firefox was fairly limited, though, and so was assigned a mentor to help me through the process.

Having been a Debian Developer for the past fifteen years, I do understand how to develop free software. Yet, the experience was different enough that I still learned some new things about free software development, which was somewhat unexpected.

Unfortunately, the process took much longer than I had hoped, which meant that the patch was not ready by the time Firefox 57 was branched off mozilla's "central" repository. The result of that is that while my patch has been merged into what will eventually become Firefox 58, it looks strongly as though it won't make it into Firefox 57. That's going to cause some severe headaches, which I'm not looking forward to; and while I can certainly understand the reasons for not wanting to grant the exception for the merge into 57, I can't help but feeling like this is a missed opportunity.

Anyway, writing code for the massive Open Source project that mozilla is has been a load of fun, and in the process I've learned a lot -- not only about Open Source development in general, but also about this weird little thing that Javascript is. That might actually be useful for this other project that I've got running here.

In closing, I'd like to thank Tomislav 'zombie' Jovanovic for mentoring me during the whole process, without whom it would have been doubtful if I would even have been ready by now. Apologies for any procedural mistakes I've made, and good luck in your future endeavours! :-)

Planet Linux Australia: Maxim Zakharov: MS Gong ride

I returned to cycling a couple of weeks ago and I am taking part in the MS Sydney to the Gong Ride - The Ride to Fight Multiple Sclerosis.

Though it will be huge fun and a great challenge to ride over 80km along the Sydney coast, this is a fundraising event and the entry fee only covers event staging costs. Every dollar you DONATE will go directly to ensuring the thousands of Australians with multiple sclerosis are able to receive the support and care they need to live well.

Please DONATE now to support my ride and change the lives of Australians living with multiple sclerosis.

Make a Donation!

Thank you for your support.

PS: Please visit fund raising pages of my friends Natasha and Eric who have inspired me to return to cycling and take this ride!

Worse Than Failure: Sponsor Post: Hired: State of Contracting

Our sponsor, Hired, passed us a report they just published: “The State of Contract Work”. I said to myself, “Wait a second, I’m a contractor!” Well, technically, I’m more of a consultant or sometimes a trainer: one of those evil highly paid consultants who swing in, tell developers how to do their jobs, and leave behind nothing more than the smell of brimstone and invoices.

The bad thing about this line of work, at least from the perspective of a TDWTF article, is that if I encounter a real WTF, it’s because someone wants me to fix it. A WTF that is getting fixed isn’t really a WTF anymore. That doesn’t mean I don’t encounter some real head-scratchers from time to time.

For example, I had a client that wanted to figure out best practices around using the Cassandra database. For the unfamiliar, Cassandra is a trendy “big-data” tool, a massively distributed database with no single point of failure and limited guarantees about how consistent the data is across replicas. It’s good for blisteringly fast writes, good for reliability, and absolutely terrible for any sort of ad-hoc query and data analysis.

So, I talked with them a bit about their Cassandra needs, rolled into the office, and that’s when I started getting the real picture: a pointy-haired boss heard that Cassandra was cool, that FaceBook and Netflix used it a lot, and thus… they were going to use it. For everything. All of their applications, from their legacy mainframe apps, to their one-off SQL server DBs for intranet apps, to their massive supply-chain and retail business were going to run on Cassandra. They started by adopting it for the massive supply-chain and retail portion of their business, and thus were actually quite successful: it was the right tool for the right job.

Thus armed with a wrecking ball and a single success, they started to see every problem as a building that needed to be knocked down. This led to a lot of conversations like this:

Client: So, we need to run ad-hoc reports out of Cassandra. How do we do that?
Me: You… don’t. You either need to know your query needs up front, so you can build tables and materialized views to support it, or you use something like Hadoop to run map-reduce jobs.
Client: Right, but we’re not using a tool like that. How do we do this in Cassandra?
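
To make the “know your query needs up front” point concrete, here’s a minimal, hypothetical sketch of query-first modeling using the Python cassandra-driver (the keyspace, table and query are invented for illustration; they are not the client’s actual schema):

  # Query-first modeling sketch with the Python cassandra-driver.
  # Keyspace, table and query are hypothetical.
  from cassandra.cluster import Cluster

  cluster = Cluster(["127.0.0.1"])   # assumes a local test node
  session = cluster.connect()

  session.execute("""
      CREATE KEYSPACE IF NOT EXISTS retail
      WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
  """)

  # The table exists to serve exactly one known query:
  # "orders for a given store, newest first".
  session.execute("""
      CREATE TABLE IF NOT EXISTS retail.orders_by_store (
          store_id   text,
          order_time timestamp,
          order_id   uuid,
          total      decimal,
          PRIMARY KEY (store_id, order_time)
      ) WITH CLUSTERING ORDER BY (order_time DESC)
  """)

  # A fast, partition-keyed read -- the only kind Cassandra is happy with.
  rows = session.execute(
      "SELECT order_id, total FROM retail.orders_by_store "
      "WHERE store_id = %s LIMIT 10", ("store-42",))
  for row in rows:
      print(row.order_id, row.total)

Anything you did not model for up front is a job for a batch or analytics layer, not for Cassandra itself.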

These efforts are still ongoing, but it sounds like the “pick the right tool for the job,” speech is starting to sink in. They’re still determined to move all their mainframe applications, and all their mainframe developers onto Cassandra though, so maybe I’m just overly optimistic. I suspect that, in another year, the energy in the effort will peter out, the organization will decide that it’s not that they misused Cassandra, but that Cassandra is just bad, and the highly paid consultant who they brought in to talk about Cassandra is the real villain, but until then… I’m at their site often enough that the front-desk clerk at the hotel invited me to her wedding.

Well, maybe you don’t want to be a highly paid consultant, but if you do want to do some sort of contracting, Hired has good news for you: there’s about $1 trillion in the IT contracting market. Since they’re placing a lot of workers, and their business is driven by analytics, they’ve got some insights into the contracting market.

A chart highlighting salaries for contractors around the US, and the markets- SF is the big market, Engineering Managers can expect to make $118 an hour

13% of the companies using Hired want to find contractors, as it’s a quick way to staff up with highly specialized skills to accomplish a specific project. It also means they don’t have to worry about any sort of benefits: freelance contractors don’t get a 401K or dental. What they do get is more money.

How much more? It’s variable, but someone with 10 years of experience could be looking at over $100 an hour, with the added benefit that they’re getting paid by the hour. Unhealthy companies (or 90% of Silicon Valley) love to run their employees through 130-hour week death marches, and those employees aren’t getting extra pay. Hired’s contractors, on the other hand, work an average of 22 hours a week.

A table contrasting the benefits of full-time employment and contracting, with the expected details- benefits vs. flexibility

Speaking personally, it’s that kind of flexibility that I find attractive about being a contractor. The downside, of course, is the lack of benefits. The average premium for just health benefits is about $4,700/year if you buy it for yourself, while a full-time employee’s health plan costs them 1/4 that amount.
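
For a rough back-of-the-envelope feel for those numbers, the arithmetic looks like this (the 48 billable weeks are my assumption for illustration; the hourly rate, weekly hours and premium are the figures quoted above):

  # Back-of-the-envelope comparison; 48 billable weeks is an assumption,
  # the other figures are the ones quoted in the article.
  contract_rate  = 100      # $/hour, ~10 years of experience
  hours_per_week = 22       # average contractor workload per Hired
  weeks_per_year = 48       # assumed billable weeks

  contractor_gross  = contract_rate * hours_per_week * weeks_per_year
  contractor_health = 4700          # self-bought premium, $/year
  employee_health   = 4700 / 4      # roughly what an employee pays

  print(f"Contractor gross:       ${contractor_gross:,.0f}/year")
  print(f"Extra health-plan cost: ${contractor_health - employee_health:,.0f}/year")

That works out to roughly $105,000 gross for a 22-hour week, minus a few thousand extra for benefits you now buy yourself; how it nets out against a salaried offer depends on your market.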

The best markets are places you would usually expect- Seattle, the Bay Area, and Austin. But that doesn’t mean you have to pack up and leave for those locales- remote contract work is big, and that’s the other benefit for a contractor.

Hired’s report sums this up with a pretty typical, “who has it better”, and decides that, “it depends”. I’m glad I’m a contractor, despite the feast-or-famine aspect (and honestly, “feast” is worse for me: I get burned out real quick), but I certainly wouldn’t say that everyone can or should do that, and certainly, I’ve been able to do it through a combination of lucky networking and just plain luck.

Read the entire report, and then let Hired help you find your next job.



Planet Debian: Steve Kemp: Tracking aircraft in real-time, via software-defined-radio

So my last blog-post was about creating a digital-radio, powered by an ESP8266 device, there's a joke there about wireless-control of a wireless. I'm not going to make it.

Sticking with a theme this post is also about radio, software-defined radio. I know almost nothing about SDR, except that it can be used to let your computer "do stuff" with radio. The only application I've ever read about that seemed interesting was tracking aircraft.

This post is about setting up a Debian GNU/Linux system to do exactly that: show aircraft in real-time above your head! This was almost painless to set up.

  • Buy the hardware.
  • Plug in the hardware.
  • Confirm it is detected.
  • Install the appropriate sdr development-package(s).
  • Install the magic software.
    • Written by @antirez, no less, you know it is gonna be good!

So I bought this USB device from AliExpress for the grand total of €8.46. I have no idea if that URL is stable, but I suspect it is probably not. Good luck finding something similar if you're living in the future!

Once I connected the Antenna to the USB stick, and inserted it into a spare slot it showed up in the output of lsusb:

  $ lsusb
  ..
  Bus 003 Device 043: ID 0bda:2838 Realtek Semiconductor Corp. RTL2838 DVB-T
  ..

In more detail I see the major/minor numbers:

  idVendor           0x0bda Realtek Semiconductor Corp.
  idProduct          0x2838 RTL2838 DVB-T

So far, so good. I installed the development headers/library I needed:

  # apt-get install librtlsdr-dev libusb-1.0-0-dev

Once that was done I could clone antirez's repository, and build it:

  $ git clone https://github.com/antirez/dump1090.git
  $ cd dump1090
  $ make

And run it:

  $ sudo ./dump1090 --interactive --net

This failed initially as a kernel-module had claimed the device, but removing that was trivial:

  $ sudo rmmod dvb_usb_rtl28xxu
  $ sudo ./dump1090 --interactive --net

Once it was running I'd see live updates on the console, every second:

  Hex    Flight   Altitude  Speed   Lat       Lon       Track  Messages Seen       .
  --------------------------------------------------------------------------------
  4601fc          14200     0       0.000     0.000     0     11        1 sec
  4601f2          9550      0       0.000     0.000     0     58        0 sec
  45ac52 SAS1716  2650      177     60.252    24.770    47    26        1 sec

And opening a browser pointing at http://localhost:8080/ would show that graphically, like so:

NOTE: In this view I'm in Helsinki, and the airport is at Vantaa, just outside the city.
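
The --net flag does more than feed the map: antirez's dump1090 also exposes the decoded messages over plain TCP, including an SBS/BaseStation-style CSV feed (by default on port 30003, if memory serves; verify the port and field positions against your build). That makes it easy to consume the data programmatically, for example with a few lines of Python:

  # Minimal consumer for dump1090's SBS/BaseStation CSV feed.
  # Assumes dump1090 is running with --net; port 30003 and the field
  # positions below are the usual defaults, but check your build.
  import socket

  with socket.create_connection(("localhost", 30003)) as sock:
      buf = b""
      while True:
          data = sock.recv(4096)
          if not data:          # dump1090 went away
              break
          buf += data
          *lines, buf = buf.split(b"\n")
          for line in lines:
              fields = line.decode(errors="replace").split(",")
              if len(fields) > 15 and fields[0] == "MSG":
                  hexident, callsign = fields[4], fields[10].strip()
                  altitude, lat, lon = fields[11], fields[14], fields[15]
                  print(hexident, callsign or "?", altitude, lat, lon)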

Of course there are tweaks to be made:

  • With the right udev-rules in place it is possible to run the tool as non-root, and blacklist the default kernel module.
  • There are other forks of the dump1090 software that are more up-to-date to explore.
  • SDR can do more than track planes.

Planet Debian: Daniel Silverstone: F/LOSS (in)activity, September 2017

In the interests of keeping myself "honest" regarding F/LOSS activity, here's a report; sadly, it's not very good.

Unfortunately, September was a poor month for me in terms of motivation and energy for F/LOSS work. I did some amount of Gitano work, merging a patch from Richard Ipsum for help text of the config command. I also submitted another patch to the STM32F103xx Rust repository, though it wasn't a particularly big thing. Otherwise I've been relatively quiet on the Rust/USB stuff and have otherwise kept away from projects.

Sometimes one needs to take a step away from things in order to recuperate and care for oneself rather than the various demands on one's time. This is something I had been feeling I needed for a while, and with a lack of motivation toward the start of the month I gave myself permission to take a short break.

Next weekend is the next Gitano developer day and I hope to pick up my activity again then, so I should have more to report for October.

TED: The big idea: What your casual online behavior reveals to hackers (and what to do about it)

It seems these days, everybody’s getting hacked.

With so much of our most sensitive information stored on servers in some remote part of the world, it seems concerningly easy for malicious hackers to worm their way past secure firewalls and into bank accounts, credit card databases, corporate emails and even hospital systems.

On a global average, these hacks cost companies about $3.6 million, according to IBM’s annual Cost of Data Breach Study, with 2016 being a record-breaking year for data breaches in the United States alone — which is shocking, seeing as many breaches still go unreported.

This isn’t exactly news: if you are a digital citizen, then you’re probably aware that nothing is safe on the internet. But beyond hacking, it turns out you’re revealing more than you think with your most casual online behavior.

So, let’s find out: what are you sharing about yourself online?

Your Wi-fi may reveal your secrets

A basic but surprisingly telling way to learn about the less public aspects of your life is through the wireless networks you connect to daily. Get ready to shift your cell phone to airplane mode …

Whenever you connect to new Wi-fi, your smartphone also beams out a list of the networks you’ve previously connected to, even if you’re not actively using wireless internet, warns cybersecurity expert James Lyne. For hackers, this list is relatively easy to access and exploit. Many companies name their internet after themselves — which means your previous Wi-fi connections can disclose the last hotel you stayed in, the gym you go to and the coffee shops you frequent. This doesn’t include the threats possible when you connect to unsecured hotspots, like when you’re jonesing for free internet while waiting at the airport.
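
To see just how visible that list is, a short Scapy sketch is enough to print the network names nearby phones are asking for. This assumes root privileges and a wireless card already in monitor mode; "wlan0mon" is a placeholder interface name:

  # Sketch: print the SSIDs that nearby devices probe for.
  # Requires Scapy, root privileges and a monitor-mode interface.
  from scapy.all import sniff, Dot11ProbeReq, Dot11Elt

  def show_probe(pkt):
      if pkt.haslayer(Dot11ProbeReq):
          ssid = pkt[Dot11Elt].info.decode(errors="replace")
          if ssid:  # broadcast probes carry an empty SSID
              print(f"{pkt.addr2} is looking for '{ssid}'")

  sniff(iface="wlan0mon", prn=show_probe, store=False)

Each line pairs a device's MAC address with a network it has joined before: exactly the hotel, gym and coffee-shop trail described above.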

Example from University of Wisconsin

Depending on how uniquely you name your home network, a person might also figure out where you live. For example, say you name your Wi-fi JonesOnElmSt. If a hacker knows some simple information about your household — perhaps the family dog’s name — then she may be able to guess that password too.

So, here’s something for you to think about: “As we adopt these new applications and mobile devices, as we play with these shiny new toys, how much are we trading off convenience for privacy and security?” says Lyne. “Next time you install something, look at the settings and ask yourself, ‘Is this information that I want to share? Would someone be able to abuse it?’”

Social media is an obvious culprit

Frank Abagnale Jr. (whose crime spree inspired the movie “Catch Me If You Can”) summed it up neatly in a recent interview with The Wall Street Journal.

“Technology breeds crime—it always has and it always will. There’s always going to be people willing to use technology in a negative, self-serving way. So today it’s much easier, whether it’s forging checks or getting information,” he says. “People go on Facebook and tell you what car they drive, their mother’s name, their wife’s maiden name, children’s name, where they’re going on vacation, where they’ve been on vacation. There’s nothing you can’t research in a matter of a couple of minutes and find out about someone.”

By sharing small aspects of yourself — like the names of your nieces, the high school you attended or the street you grew up on — there’s a possibility that you’re offering up answers to your security questions. It’s hard to avoid sharing these things, but it might be in your best interest to scrub your Facebook and similar accounts and make sure your family and personal information is private or erased.

But say that advice doesn’t resonate, that you’ve been ultra social media savvy and have your accounts under lock and key; you can still give information about yourself quite regularly — not without care, but without thought.

In a fascinating conversation between computer scientist Jennifer Golbeck (TED Talk: Your social media “likes” expose more than you think) and privacy economist Alessandro Acquisti (TED Talk: What will a future without secrets look like?), the two experts spoke at length about the little-discussed aspects of online privacy.

Here’s a boiled down version of their most interesting points:

  • The most casual and random behavior reveals a lot. “You can ‘like’ Facebook pages, you can post these things about yourself, and then we can infer a completely unrelated trait about you based on a combination of likes or the type of words that you’re using, or even what your friends are doing, even if you’re not posting anything. It’s things that are inherent in what you’re sharing that reveal these other traits, which may be things you want to keep private and that you had no idea you were sharing.”
  • Having social media (and apps) means you forfeit your personal data. “There are terms of service that regulate the sites you use, like on Facebook and Twitter and Pinterest — though those can change — but even within those, you’re essentially handing control of your data over to the companies. And they can kind of do what they want with it, within reason. You don’t have the legal right to request that data be deleted, to change it, to refuse to allow companies to use it. Some companies may give you that right, but you don’t have a natural, legal right to control your personal data. So if a company decides they want to sell it or market it or release it or change your privacy settings, they can do that,” says Golbeck.
  • Policymaking for privacy in the US needs work. “The policymaking effort in the U.S. focuses almost exclusively on control and transparency, i.e. telling users how their data is used and giving them some degree of control. And those are important things! However, they are not sufficient means of privacy protection, in that there are a number of ways in which transparency control can be bypassed or muted. What we are missing from the Fair Information Practices are other principles, such as purpose specification (the reason data is being gathered should be specified before or at the time of collection), use limitation (subsequent uses of data should be limited to specific purposes) and security safeguards.”“To be clear, I’m not suggesting that all this information will be used negatively, or that online disclosures are inherently negative. That’s not at all the point,” says Acquisti. “The point is, we really don’t know how this information will be used.”

Case in point is Luke R. DuBois’ project “A More Perfect Union.” DuBois created profiles on 21 different dating services and used the data given by other users to piece together a compelling cartographical tapestry of adjectives that describe towns across the US.

Los Angeles’ word is “acting,” and the words for the towns around it are similar Hollywood terms like “director,” “film,” “blonde” and “career.”

For a sampling of what he learned, check out the TED Ideas blog article or watch the talk to learn more about the data visualization, plus other projects he’s worked on:

Don’t fret — just be vigilant and proactive

Most of us are vulnerable because of weak code and the exploitation of human nature.

Privacy researcher and TED Fellow Christopher Soghoian (TED Talk: How to avoid surveillance … with the phone in your pocket) details five easy ways to keep your data safe:

  1. Outsource your passwords to a robot. The human brain can only remember so many passwords, and too often we just reuse passwords across Facebook, our favorite shopping sites … and our bank. This is a Very Bad Idea. Once hackers break into one website and steal a database of email addresses and passwords, they’ll try to use those same email+password combinations to log in to other sites. The solution: Use a password manager, a software tool for computers and mobile devices, which will pick random, long passwords for each site you visit, and synchronize them across your many devices. Some popular password managers are 1Password, Dashlane and LastPass.
  2. Get a U2F key — and/or use two-factor authentication wherever possible. Make sure that even if someone learns your password, they won’t be able to log in. To do this, you’ll want to enable two-factor authentication, a security feature that can be added to many online accounts. For some sites, this step can take the form of a random number sent to your phone by text message, or a special app on your smartphone that generates one-time login codes (a short sketch of how such codes are derived appears after this list). Google has pledged to upgrade their two-factor authentication in light of the many recent high-profile attacks. If you’re traveling, get a U2F security key, a thumb-drive-sized device that fits into the USB port. When you log in to an account from a new computer, the U2F key handles your two-factor authentication. It costs about $15.
  3. Enable disk encryption. If you lose your laptop or your phone and it doesn’t have disk encryption turned on, whoever finds the device can get all your data too. On the iPhone and iPad, disk encryption is turned on by default, but for Windows, Android or Mac OS you need to make the effort to switch it on. Here’s how to encrypt your disk drive like you mean it.
  4. Put a sticker over your webcam. There are software tools used by criminals, stalkers and generally creepy people that allow them to turn on your webcam without your knowledge. Granted, this doesn’t happen millions of times a year, but the horror stories are real and terrifying. One simple sticker means you use your webcam when you choose to use it. (You may also want to cover your microphone.)
  5. Encrypt your telephone calls and text messages. The voice and text message services provided by phone companies are not secure and can be spied on with relatively inexpensive equipment. Your own government, a foreign government, criminals, hackers and stalkers could listen to your phone calls and read your text messages. Some Internet-based mobile apps that you likely already use are much more secure, enabling you to talk privately to your loved ones and colleagues, and don’t require that you do anything or turn on any special features to get the added security protections — Apple’s FaceTime and WhatsApp on Android are both good. If you want an even stronger level of security, there is a fantastic free tool called Signal available on Apple’s App Store.
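
For a feel of what those one-time login codes are under the hood, here is a tiny sketch using the third-party pyotp library. In real life the shared secret is provisioned once by the service (usually via a QR code) rather than generated on the spot:

  # TOTP sketch with the third-party pyotp library: a shared secret plus
  # the current time yields the six-digit code your authenticator shows.
  import pyotp

  secret = pyotp.random_base32()   # normally provisioned by the service
  totp = pyotp.TOTP(secret)

  print("Shared secret:", secret)
  print("Current code: ", totp.now())            # rotates every 30 seconds
  print("Verifies?    ", totp.verify(totp.now()))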

In 2016, Kevin Roose, a tech columnist for the New York Times, hired hackers to tear apart his online life to learn firsthand how to avoid such a harrowing, paralyzing experience. In his article, How to Not Get Hacked, According to Expert Hackers, Roose outlines some key strategies to keep yourself from being raked over hot coals by some strangers online — a lot of which lines up with Soghoian’s advice. He suggests things like using a VPN (Virtual Private Network) for $3.99/month if you use hotel or coffee shop Wi-fi or familiarizing yourself with urlquery.net to avoid potentially sketchy websites.

… Or if you don’t care about keeping your information secure, you could do what artist Hasan Elahi did and share everything about yourself all the time and post it to a website in real-time.

Whatever your preference, managing your digital life is difficult and sometimes feels impossible with the number of steps you have to take in order to achieve some sort of peace of mind. Take these facts and steps in stride, do what you can to protect yourself and in the meantime draw your own selfie — using your personal data. Good night and good luck.


Worse Than Failure: CodeSOD: The Anty Pattern

An anti-pattern that shows up from time to time here is the old “our IDE’s build output is mapped to a network drive on the web server”, but “Drummer” shows us a novel variation on that theme.

It all started when a co-worker asked them, “How do I change the compiler version?” The code was built using Ant, so “Drummer” opened the build file and searched through it for a javac element, the Ant task that runs the Java compiler.

They didn’t find anything, but after a more manual search, they found this:

    <target name="create_xxx_jar" depends="get_svn_info">
        <jar destfile="dist/${xxx.jarfile}" manifest="manifest.mf" >
            <fileset dir="bin"/>
            <fileset file=".classpath"/>
            <fileset file=".project"/>
            <fileset file="manifest.mf"/>
        </jar>
    </target>

This bit of scripting code creates the output jar file containing all the compiled classes. Note that it does this by pulling them straight out of the bin folder. How do they get into the bin folder? Because Eclipse was configured to compile on every save. Note, the script doesn’t check that there’s anything in the bin folder. It doesn’t check that the compile was successful. It doesn’t wait for a build to complete. By default, those are debug builds.

And this output jar is exactly what gets shipped to the customer. You’ll be shocked to learn that there’s no automated testing or CI here.

That is their deployment process. Hit save. Run Ant. Scoop up the jar and ship it.


Planet DebianIain R. Learmonth: MAC Catching

As we walk around with mobile phones in our pockets, there are multiple radios each with identifiers that can be captured and recorded just through their normal operation. Bluetooth and WiFi devices have MAC addresses and can advertise their presence to other devices merely by sending traffic, or by probing for devices to connect to if they’re not connected.

I found a simple tool, probemon, that allows anyone with a WiFi card to track who is at which location at any given time. You could deploy a few of these with Raspberry Pis, or go even cheaper with a number of ESP8266s.

In the news recently was a report from TfL about their WiFi data collection. Sky News reported that TfL “plans to make £322m by collecting data from passengers’ mobiles”. TfL later denied this, but the fact remains that collecting this data is trivial.

I’ve been thinking about ideas for spoofing mass amounts of wireless devices making the collected data useless. I’ve found that people have had success in using Scapy to forge WiFi frames. When I have some free time I plan to look into some kind of proof-of-concept for this.
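
To give a rough idea of how low the bar is, here is a minimal sketch of forging probe requests with random source MACs using Scapy. This is my own sketch, not code from the post, and it assumes root privileges and a wireless card already in monitor mode; the interface name "wlan0mon" is an assumption.

# Sketch only: forge 802.11 probe requests with randomised source MACs.
from scapy.all import RadioTap, Dot11, Dot11ProbeReq, Dot11Elt, RandMAC, sendp

def send_fake_probes(iface="wlan0mon", count=100):
    for _ in range(count):
        fake_mac = str(RandMAC())                    # new random MAC each frame
        frame = (RadioTap() /
                 Dot11(type=0, subtype=4,            # management / probe request
                       addr1="ff:ff:ff:ff:ff:ff",    # broadcast destination
                       addr2=fake_mac,               # spoofed source address
                       addr3="ff:ff:ff:ff:ff:ff") /
                 Dot11ProbeReq() /
                 Dot11Elt(ID=0, info=""))            # ID 0 = SSID, left empty
        sendp(frame, iface=iface, verbose=False)

Flooding frames like this is still radio transmission, so the usual caveats about spectrum use and interference apply.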

On the Underground this is the way to do it, but above ground I’ve also heard of systems that use the TMSI from 3G/4G, rather than WiFi data, to identify mobile phones. You’ll have to be a bit braver if you want to forge these (please don’t: unless you are using alternative licensed frequencies, you may interfere with mobile service and prevent 999 calls).

If you wanted to spy on mobile phones near to you, you can do this with the gr-gsm package now available in Debian.

Krebs on SecurityFear Not: You, Too, Are a Cybercrime Victim!

Maybe you’ve been feeling left out because you weren’t among the lucky few hundred million or billion who had their personal information stolen in either the Equifax or Yahoo! breaches. Well buck up, camper: Both companies took steps to make you feel better today.

Yahoo! announced that, our bad!: It wasn’t just one billion users who had their account information filched in its record-breaking 2013 data breach. It was more like three billion (read: all) users. Meanwhile, big three credit bureau Equifax added 2.5 million more victims to its roster of 143 million Americans who had their Social Security numbers and other personal data stolen in a breach earlier this year. At the same time, Equifax’s erstwhile CEO informed Congress that the breach was the result of even more bone-headed security than was first disclosed.

To those still feeling left out by either company after this spate of bad news, I have only one thing to say (although I feel a bit like a broken record in repeating this): Assume you’re compromised, and take steps accordingly.

If readers are detecting a bit of sarcasm and cynicism in my tone here, it may be that I’m still wishing I’d done almost anything else today besides watching three hours worth of testimony from former Equifax CEO Richard Smith before lawmakers on a panel of the House Energy & Commerce Committee.

While he is no longer the boss of Equifax, Smith gamely agreed to submit to several days’ worth of grilling from legislators in both houses of Congress this week. It was clear from the questions that lawmakers didn’t ask in Round One, however, that Smith was far more prepared for the first batch of questioning than they were, and that the entire ordeal would amount to only a gentle braising.

Nevertheless, Smith managed to paint an even more dismal picture than was already known about the company’s failures to secure the very data that makes up the core of its business. Helpfully, Smith clarified early on in the hearing that the company’s customers are in fact banks and other businesses — not consumers.

Smith told lawmakers that the breach stemmed from a combination of technological error and a human error, casting it as the kind of failure that could have happened to anyone. In reality, the company waited 4.5 months to fix a dangerous security flaw that it should have known was being exploited on Day One (~March 6 or 7, 2017), only patching it after it discovered the breach in late July 2017.

“The human error involved the failure to apply a software patch to a dispute portal in March 2017,” Smith said. He declined to explain (and lawmakers inexplicably failed to ask) how 145.5 million Americans — nearly 60 percent of the adult population of the United States — could have had their information tied up in a dispute portal at Equifax. “The technological error involved a scanner which failed to detect a vulnerability on that particular portal.”

As noted in this Wired.com story, Smith admitted that the data compromised in the breach was not encrypted:

When asked by representative Adam Kinzinger of Illinois about what data Equifax encrypts in its systems, Smith admitted that the data compromised in the customer-dispute portal was stored in plaintext and would have been easily readable by attackers. “We use many techniques to protect data—encryption, tokenization, masking, encryption in motion, encrypting at rest,” Smith said. “To be very specific, this data was not encrypted at rest.”

It’s unclear exactly what of the pilfered data resided in the portal versus other parts of Equifax’s system, but it turns out that also didn’t matter much, given Equifax’s attitude toward encryption overall. “OK, so this wasn’t [encrypted], but your core is?” Kinzinger asked. “Some, not all,” Smith replied. “There are varying levels of security techniques that the team deploys in different environments around the business.”

Smith also sought to justify the company’s historically poor breach response after it publicly disclosed the break-in on Sept. 7 — roughly 40 days after Equifax’s security team first became aware of the incident (on July 29). As many readers here are well familiar, KrebsOnSecurity likened that breach response to a dumpster fire — noting that it was perhaps the most haphazard and ill-conceived of any major data breach disclosure in history.

Smith artfully dodged questions of why the company waited so long to notify the public, and about the perception that Equifax sought to profit off of its own data breach. One lawmaker noted that Smith gave two public speeches in the second and third weeks of August in which he was quoted as saying that fraud was “a huge opportunity for Equifax,” and that it was a “massive, growing business” for the company.

Smith interjected that he had “no indication” that consumer data was compromised at the time of the Aug. 11 speech. As for the Aug. 17 address, he said “we did not know how much data was compromised, what data was compromised.”

Follow-up questions from lawmakers on the panel revealed that Smith didn’t ask for a briefing about what was then allegedly only classified internally as “suspicious activity” until August 15, almost two weeks after the company hired outside cybersecurity experts to examine the issue.

Smith also maneuvered around questions about why Equifax chose to disclose the breach on the very day that Hurricane Irma was dominating front-page news with an imminent landfall on the eastern seaboard of the United States.

However, Smith did blame Irma in explaining why the company’s phone systems were simply unable to handle the call volume from U.S. consumers concerned about the Category Five data breach, saying that Irma took down two of Equifax’s largest call centers days after the breach disclosure. He said the company handled over 420 million consumer visits to the portal designed to help people figure out whether they were victimized in the breach, underscoring how so many American adults were forced to revisit the site again and again because it failed to give people consistent answers about whether they were affected.

Just a couple of hours after the House Commerce panel hearing ended, Politico ran a story noting that the Internal Revenue Service opted to award Equifax a $7.25 million no-bid contract to provide identity-proofing and anti-fraud services to the tax bureau. Bear in mind that Equifax’s poor security contributed to an epidemic of tax refund fraud at the IRS in the 2015 and 2016 tax years, when fraudsters took advantage of weak security questions provided to the IRS by Equifax to file and claim phony tax refund requests on behalf of hundreds of thousands of taxpayers.

Don’t forget that tax fraudsters exploited this same lax security at Equifax’s TALX payroll division to steal employee tax records from an as-yet undisclosed number of companies between April 2016 and March 2017.

Finally, much of today’s hearing centered around questions about the difference between a security freeze — a right that was hard-won on a state-by-state level over several years — and the “credit lock” services being pushed instead by Equifax and the big bureaus. Lawmakers on today’s panel seemed content with Smith’s answer that the two things were effectively the same, only that a freeze was more cumbersome and costly, whereas credit locks were free and far more consumer-friendly.

To those still wavering on which is better, I have only to point to reasoning by Christina Tetreault, a staff attorney on the financial services team of Consumers Union — the policy arm of Consumer Reports. Tetreault notes that perhaps the main reason a security freeze is the better option is that its promise to guard your credit accounts is guaranteed by law, whereas a credit lock is simply an agreement between you and the credit monitoring company.

“Having a contractual agreement is not as strong as having protections under law,” Tetreault said. “The contract may be unclear, may include provisions that allow the other party to change it, or include provisions that you may be better off not agreeing to, such as an arbitration agreement.”

What’s more, placing a freeze on your file is exactly what Equifax and the other bureaus do not want you to do, because it prevents them from making money by selling your credit file to banks and others (including ID thieves) who wish to grant new lines of credit in your name. If that’s not the best reason for opting for a freeze, I don’t know what is.

If anyone needs more convincing on this front, check out the testimony given in other committees today by representatives from banking behemoth Wells Fargo, which is under fire for signing up tens of thousands of auto loan customers for insurance they did not need and in some cases couldn’t afford. That scandal comes on the heels of another debacle in which Wells Fargo was found to have created more than 3.5 million bank accounts without consumers’ permission between 2009 and 2016.

Mr. Smith is slated to testify before at least three other committees in the House and Senate this week before he’s off the hot seat. On Friday, KrebsOnSecurity published a lengthy list of questions that lawmakers should consider asking the former Equifax CEO. Here’s hoping our elected representatives don’t merely use these additional opportunities for more grandstanding and regurgitating the same questions.

Planet Linux AustraliaSimon Lyall: DevOps Days Auckland 2017 – Wednesday Session 3

Sanjeev Sharma – When DevOps met SRE: From Apollo 13 to Google SRE

  • Author of Two DevOps Books
  • Apollo 13
    • Who were the real heroes? The guys back at mission control. The astronauts just had to keep breathing and not die
  • Best Practice for Incident management
    • Prioritize
    • Prepare
    • Trust
    • Introspect
    • Consider Alternatives
    • Practice
    • Change it around
  • Big Hurdles to adoption of DevOps in Enterprise
    • Literature is Only looking at one delivery platform at a time
    • Big enterprise have hundreds of platforms with completely different technologies, maturity levels, speeds. All interdependent
    • He Divides
      • Industrialised Core – Value High, Risk Low, MTBF
      • Agile/Innovation Edge – Value Low, Risk High, Rapid change and delivery, MTTR
      • Need normal distribution curve of platforms across this range
      • Need to be able to maintain products at both ends in one IT organisation
  • 6 capabilities needed in IT Organisation
    • Planning and architecture.
      • Your Delivery pipeline will be as fast as the slowest delivery pipeline it is dependent on
    • APIs
      • Modernizing to Microservices based architecture: Refactoring code and data and defining the APIs
    • Application Deployment Automation and Environment Orchestration
      • Devs are paid to code, not to maintain deployment and config scripts
      • Ops must provide env that requires devs to do zero setup scripts
    • Test Service and Environment Virtualisation
      • If you are doing 2-week sprints but it takes 3 weeks to get a test server, how long are your sprints really?
    • Release Management
      • No good if 99% of software works but last 1% is vital for the business function
    • Operational Readiness for SRE
      • Shift between MTBF to MTTR
      • MTTR  = Mean time to detect + Mean time to Triage + Mean time to restore
      • + Mean time to pass blame
    • Antifragile Systems
      • Things that are neither fragile nor robust, but rather thrive on chaos
      • Cattle not pets
      • Servers may go red, but services are always green
    • DevOps: “Everybody is responsible for delivery to production”
    • SRE: “(Everybody) is responsible for delivering Continuous Business Value”

Share

Planet DebianNorbert Preining: Einstein and Freud’s letters on “Why War?” – 85th anniversary

85 years ago, on 30 July 1932, Albert Einstein sent a letter to Sigmund Freud discussing the question: Why War? Freud answered this letter in early September 1932. To commemorate the 85th anniversary, the German typographer Harald Geisler has started a project on Kickstarter to recreate the letters sent back then. Over the last weeks the two letters have arrived at my place in Japan:

Not only were the letters reproduced, they were typeset in the original digitized handwriting and sent from the original locations. Harald Geisler crafted fonts based on the handwriting of Einstein and Freud, and laid out the pages of the letters according to the originals. Since the letters were originally written in German, an English translation, also typeset in the handwriting font, was added.

In addition to Freud's somewhat archaic handwriting, which even many German natives will not be able to read, the German text of his letter has been included in normal print style. Not only that, Harald Geisler even managed to convince the Sigmund Freud Museum to let the letters rest for one night in the very office where the original letter was written, so all the letters sent out actually came from Freud's office.

This project was one of the first Kickstarter projects I supported. I really liked the idea, and would like to thank Harald Geisler for realizing it. These kinds of activities, combining typography, history, action and dedication, keep our culture and history alive. Thanks.

Harald Geisler also invites us all to continue the dialog on Why War?, which is getting more and more relevant again, with war-mongering becoming respected practice.

Planet DebianDirk Eddelbuettel: RProtoBuf 0.4.11

RProtoBuf provides R bindings for the Google Protocol Buffers ("ProtoBuf") data encoding and serialization library used and released by Google, and deployed fairly widely in numerous projects as a language and operating-system agnostic protocol.

A new release, RProtoBuf 0.4.11, appeared on CRAN earlier today. Not unlike the other recent releases, it is mostly a maintenance release which switches two of the vignettes over to using the pinp package and its template for vignettes.

Changes in RProtoBuf version 0.4.11 (2017-10-03)

  • The RProtoBuf-intro and RProtoBuf-quickref vignettes were converted to Rmarkdown using the templates and style file from the pinp package.

  • A few minor internal upgrades

CRANberries also provides a diff to the previous release. The RProtoBuf page has copies of the (older) package vignette, the 'quick' overview vignette, a unit test summary vignette, and the pre-print for the JSS paper. Questions, comments etc should go to the GitHub issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaSimon Lyall: DevOps Days Auckland 2017 – Wednesday Session 2

Marcus Bristol (Pushpay) – Moving fast without crashing

  • Low tolerance for errors in production due to being in finance
  • Deploy twice per day
  • Just Culture – Balance safety and accountability
    • What rule?
    • Who did it?
    • How bad was the breach?
    • Who gets to decide?
  • Example of Retributive Culture
    • KPIs reflect incidents.
    • If more than 10% of deploys are bad then it affects the bonus
    • Reduced number of deploys
  • Restorative Culture
  • Blameless post-mortem
    • Can give detailed account of what happened without fear or retribution
    • Happens after every incident or near-incident
    • Written Down in Wiki Page
    • So everybody has the chance to have a say
    • Summary, Timeline, impact assessment, discussion, Mitigations
    • Mitigations become highest-priority work items
  • Our Process
    • Feature Flags
    • Science
    • Lots of small PRs
    • Code Review
    • Testers paired to devs so bugs can be fixed as soon as found
    • Automated tested
    • Pollination (reviews of code between teams)
    • Bots
      • Posts to Slack when feature flag has been changed
      • Nags about feature flags that seems to be hanging around in QA
      • Nags about Flags that have been good in prod for 30+ days
      • Every merge
      • PRs awaiting reviews for long time (days)
      • Missing postmortem migrations
      • Status of builds in build farm
      • When deploy has been made
      • Health of API
      • Answer queries on team member list
      • Create ship train of PRs into a build and user can tell bot to deploy to each environment

Share

,

Planet Linux AustraliaSimon Lyall: DevOps Days Auckland 2017 – Wednesday Session 1

Michael Coté – Not actually a DevOps Talk

Digital Transformation

  • Goal: deliver value, weekly reliably, with small patches
  • Management must be the first to fail and transform
  • Standardize on a platform: special snowflakes are slow, expensive and error prone (see his slide, a good list of stuff that should be standardized)
  • Ramping up: “Pilot low-risk apps, and ramp-up”
  • Pair programming/working
    • Half the advantage is people spend less time on reddit “research”
  • Don’t go to meetings
  • Automate compliance: have what you do automatically get logged and create compliance docs rather than building them manually.
  • Crafting Your Cloud-Native Strategy

Sajeewa Dayaratne – DevOps in an Embedded World

  • Challenges on Embedded
    • Hardware – resource constrained
    • Debugging – OS bugs, Hardware Bugs, UFO Bugs – Oscilloscopes and JTAG connectors are your friend.
    • Environment – Thermal, Moisture, Power consumption
    • Deploy to product – Multi-month cycle, hard or impossible to send updates to ships at sea.
  • Principles of DevOps apply equally to embedded
    • High Frequency
    • Reduce overheads
    • Improve defect resolution
    • Automate
    • Reduce response times
  • Navico
    • Small Sonar, Navigation for medium boats, Displays for sail (eg Americas cup). Navigation displays for large ships
    • Dev around world, factory in Mexico
  • Codebase
    • 5 million lines of code
    • 61 Hardware Products supported – Increasing steadily, very long lifetimes for hardware
    • Complex network of products – lots of products on boat all connected, different versions of software and hardware on the same boat
  • Architecture
    • Old codebase
    • Backward compatible with old hardware
    • Needs to support new hardware
    • Desire new features on all products
  • What does this mean
    • Defects were found too late
    • Very high cost of bugs found late
    • Software stabilization taking longer
    • Manual test couldn’t keep up
    • Cost increasing , including opportunity cost
  • Does CI/CD provide answer?
    • But will it work here?
    • Case Study from HP. Large-Scale Agile Development by Gary Gruver
  • Our Plan
    • Improve tools and architecture
    • Build Speeds
    • Automated testing
    • Code quality control
  • Previous VCS
    • Proprietary tool with limit support and upgrades
    • Limited integration
    • Lack of CI support
    • No code review capacity
  • Move to git
    • Code reviews
    • Integrated CI
    • Supported by tools
  • Architecture
    • Had a configurable codebase already
    • Fairly common hardware platform (only 9 variations)
    • Had runtime feature flags
    • But
      • Cyclic dependencies – 1.5 years to clean these up
      • Singletons – cut down
      • Promote unit testability – worked on
      • Many branches – long lived – mega merges
  • Went to a single Branch model, feature flags, smaller batch sizes, testing focused on single branch
  • Improve build speed
    • Started at 8 hours to build the Linux platform, 2 hours for each app, 14+ hours to build and package a release
    • Options
      • Increase speed
      • Parallel Builds
    • What did
      • ccache, clcache
      • IncrediBuild
      • distcc
    • 4-5hs down to 1h
  • Test automation
    • Existing tests were mock-ups of the hardware, so not typical
    • Started with micro-test
      • Unit testing (simulator)
      • Unit testing (real hardware)
    • Build Tools
      • Software tools (n2k simulator, remote control)
      • Hardware tools (mimic real-world data, repurpose existing stuff)
    • UI Test Automation
      • Build or Buy
      • Functional testing vs API testing
      • HW Test tools
      • Took 6 hours to do full test on hardware.
  • PipeLine
    • Commit -> pull request
    • Automated Build / Unit Tests
    • Daily QA Build
  • Next?
    • Configuration as code
    • Code Quality tools
    • Simulate more hardware
    • Increase analytics and reporting
    • Fully simulated test env for dev (so the devs don’t need the hardware)
    • Scale – From internal infrastructure to the cloud
    • Grow the team
  • Lessons Learnt
    • Culture!
    • Collect Data
    • Get Executive Buy in
    • Change your tools and processes if needed
    • Test automation is the key
      • Invest in HW
      • Simulate
      • Virtualise
    • Focus on good software design for Everything

Share

Planet DebianChristoph Egger: Observations on Catalunya

Some things I don't really understand reading in German media

  • Suddenly the electoral system becomes a legitimacy problem. While it has never been a problem for any of the previous decisions of the Catalunyan regional government, suddenly "only 48% of people voted for the government" results in the decisions being illegitimate? This is also a property of many governments (Greece and the US president being obvious examples, but also the German Bundestag can have a majority government without the majority of votes). Is this just the media trying to find something they can blame on "the other side"?

  • How can you ever possibly excuse violence against people peacefully and non-violently doing whatever they're doing? Sure, this referendum was considered illegal (and it may be legitimate to ignore the result, or to legally prosecute the initiators), but how can that ever possibly be an excuse for violence against half a population peacefully doing whatever they are about to do? How can you possibly claim that "both sides are to blame" for the violence? "Die Zeit" seems to be the only one with a somewhat convincing argument ("Deciding to press on despite the obviously happening violence") while "Welt", "Spiegel" and "Süddeutsche" all try to blame the regional government for the violence, with as much of an argument as asking people to do something illegal in a totally peaceful way. Possibly an argument for legal consequences, sure -- but for violence?

Too bad I didn't keep the links / articles from Sunday night.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #127

Here's what happened in the Reproducible Builds effort between Sunday September 24 and Saturday September 30 2017:

Development and fixes in key packages

Kai Harries did an initial packaging of the Nix package manager for Debian. You can track his progress in #877019.

Uploads in Debian:

Packages reviewed and fixed, and bugs filed

Patches sent upstream:

Reproducible bugs (with patches) filed in Debian:

QA bugs filed in Debian:

Reviews of unreproducible packages

103 package reviews have been added, 153 have been updated and 78 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (177)
  • Andreas Beckmann (2)
  • Daniel Schepler (1)

diffoscope development

Mattia Rizzolo uploaded version 87 to stretch-backports.

  • Holger Levsen:
    • Bump standards version to 4.1.1, no changes needed.

strip-nondeterminism development

  • Holger Levsen:
    • Bump Standards-Version to 4.1.1, no changes needed.

reprotest development

  • Ximin Luo:
    • New features:
      • Add a --env-build option for testing different env vars. (In-progress, requires the python-rstr package awaiting entry into Debian.)
      • Add a --source-pattern option to restrict copying of source_root.
    • Usability improvements:
      • Improve error messages in some common scenarios.
      • Output hashes after a successful --auto-build.
      • Print a warning message if we reproduced successfully but didn't vary everything.
      • Update examples in documentation.
    • Have dpkg-source extract to different build dir iff varying the build-path.
    • Pass --debug to diffoscope if verbosity >= 2.
    • Pass --exclude-directory-metadata to diffoscope(1) by default.
    • Much refactoring to support the other work and several minor bug fixes.
  • Holger Levsen:
    • Bump standards version to 4.1.1, no changes needed.

tests.reproducible-builds.org

  • Holger Levsen:
    • Fix scheduler to not send empty scheduling notifications in the rare cases nothing has been scheduled.
    • Fix colors in 'amount of packages build each day on $ARCH' graphs.

reproducible-website development

  • Holger Levsen:
    • Fix up HTML syntax
    • Announce that RWS3 will happen at Betahaus, Berlin

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann, Holger Levsen and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianChristoph Egger: Another Xor (CSAW 2017)

A short while ago, FAUST participated in this year's CSAW qualification and -- as usual -- I was working on the Crypto challenges again. The first puzzle I worked on was called "Another Xor" -- and, while there are quite a few write-ups already, our solution was somewhat different (maybe even the intended solution given how nice things worked out) and certainly interesting.

The challenge provides a cipher-text. It's essentially a stream cipher with key repeated to generate the key stream. The plain-text was plain + key + checksum.

p = this is a plaintextThis is the keyfa5d46a2a2dcdeb83e0241ee2c0437f7
k = This is the keyThis is the keyThis is the keyThis is the keyThis i

Key length

Our first step was figuring out the key length. Let's assume for now the key was This is the key. Notice that the key is also part of the plain-text and we know something about its location -- it ends at 32 characters from the back. If we only take a look at the encrypted key it should have the following structure:

p' = This is the key
k' = he keyThis is t

The thing to notice here is that every character in the Key appears both in the plain-text and key stream sequence. And the cipher-text is the XOR (⊕) of both. Therefore XOR over the cipher-text sequence encrypting the key should equal 0 (⊕(p') ⊕ ⊕(k') = 0). So remove the last 32 characters and find all suffixes that result in a XOR of 0. Fortunately there is exactly one such suffix (there could be multiple) and therefore we know the key size: 67.
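
A minimal sketch of that suffix search (my own reconstruction, not code from the challenge write-up):

# Strip the 32-byte hex md5 suffix, then report every suffix length n of the
# remainder whose bytes XOR to zero -- these are candidates for the key length.
def candidate_key_lengths(cipher):
    body = cipher[:-32]
    lengths = []
    for n in range(1, len(body) + 1):
        acc = 0
        for b in body[-n:]:
            acc ^= b
        if acc == 0:
            lengths.append(n)
    return lengths   # for this cipher-text there is exactly one candidate: 67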

Key recovery

Now the nice thing to notice is that the length of the key (67) is a prime (and 38, the plain-text length, is a generator). As a result, we only need to guess one byte of the key:

Assume you know one byte of the key (and the position). Now you can use that one byte of the key to decrypt the next byte of the key (using the area where the key is part of the plain-text). Due to the primeness of the key length this allows recovery of the full key.

Finally you can either print all 256 options and look for the one that looks reasonable or you can verify the md5sum which will give you the one valid solution, flag{sti11_us3_da_x0r_for_my_s3cratz}.

Code
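
The snippet below relies on an md5 import and two small helpers, xor and repeat, that are not shown in the write-up; a minimal version of them, as I understand their intended behaviour, would be:

from hashlib import md5

def xor(a, b):
    # byte-wise XOR, truncated to the shorter argument
    return bytes(x ^ y for x, y in zip(a, b))

def repeat(s, n):
    # repeat s until it is at least n bytes long, then cut to exactly n bytes
    return (s * (n // len(s) + 1))[:n]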


cipher = b"'L\x10\x12\x1a\x01\x00I[P-U\x1cU\x7f\x0b\x083X]\x1b'\x03\x0bR(\x04\r7SI\n\x1c\x02T\x15\x05\x15%EQ\x18\x00\x19\x11SJ\x00RV\n\x14YO\x0b\x1eI\n\x01\x0cE\x14A\x1e\x07\x00\x14aZ\x18\x1b\x02R\x1bX\x03\x05\x17\x00\x02\x07K\n\x1aLAM\x1f\x1d\x17\x1d\x00\x15\x1b\x1d\x0fH\x0eI\x1e\x02I\x01\x0c\x15\x00P\x11\\PXPCB\x03B\x13TBL\x11PC\x0b^\tM\x14IW\x08\rDD%FC"

def keycover(guess):
    # Recover the whole key from a single guessed key byte. The key occupies
    # plaintext offsets 38..104; since its length (67) is prime, stepping
    # pos -> (pos % 67) + 38 visits every key position exactly once.
    key = dict()
    pos = 38
    key[38] = guess

    for i in range(67):
        newpos = (pos % 67) + 38
        # cipher[pos] is the XOR of the key byte already known as key[pos]
        # and the key byte stored as key[newpos], so one new byte falls out
        key[newpos] = xor(cipher[pos:], key[pos])
        pos = newpos

    try:
        return b''.join([ key[i] for i in range(38, 105, 1) ])
    except:
        return b'test'

# Brute-force the single unknown key byte; the md5 suffix of the candidate
# plain-text identifies the one valid key.
for guess in range(256):
    keycand = keycover(bytes([guess]))

    plaincand = xor(cipher, repeat(keycand, len(cipher)))

    if md5(plaincand[:-32]).hexdigest().encode() == plaincand[-32:]:
        print(keycand, plaincand)

Planet DebianChristoph Egger: Looking for a mail program + desktop environment

Seems it is now almost a decade since I migrated from Thunderbird to GNUS. And GNUS is an awesome mail program that I still rather like. However GNUS is also heavily quirky. It's essentially single-threaded and synchronous, which means you either have to wait for the "IMAP check for new mails" to finish or you have to C-g abort it if you want the user interface to work; you have to wait for the "Move mail" to complete (which can take a while -- especially with dovecot-antispam training the filter) before you can continue working. It has its funny ways around TLS and certificate validation. And it seems to hang from time to time until it is C-g interrupted.

So when I set up my new desktop machine I decided to try something else. My first try was claws-mail, which seems OK but totally fails in the asynchronous area. While the GUI stays reactive, all actions that require IMAP interactions become incredibly slow when a background IMAP refresh is running. I do have quite a few mailboxes, and waiting the 5+ minutes after opening claws, or whenever it decides to do a refresh, is just too much.

Now my last try has been Kmail -- also driven by the idea of having a more integrated setup with CalDAV and CardDAV around and similar goodies. And Kmail really compares nicely to claws in many ways. After all, I can use it while it's doing its things in the background. However the KDE folks seem to have dropped all support for the \recent IMAP flag, which I heavily rely on. I do -- after all -- keep a GNUS-like workflow where all unread mail (ref \seen) still needs to be acted upon, which means there can easily be quite a few unread messages when I'm busy, so just having a quick look at the new (ref \recent) mail to see if there's something super-urgent is essential.

So I'm now looking for useful suggestions for a mail program (ideally with desktop integration) with the following essential features:

  • It stays usable at all times -- which means smarter queuing than claws -- so foreground actions are not delayed by any background task the mail program might be up to and tasks like moving mail are handled in the background.
  • Decent support for filtering. Apart from some basic stuff I need shortcut filtering for \recent mail.
  • Option to hide \seen mail (and ideally hide all folders that only contain \seen mail). Hopefully toggle-able by some hotkey. "Age in days" would be an acceptable approximation, but Kmail doesn't seem to allow that in search (it's available as a filter though).

Sociological ImagesThoughts, Prayers, and Political Skeptics

The harrowing mass shooting in Las Vegas this week is part of a tragic pattern, and it raises big questions about how we deal with such tragedies in public life. In the face of such horror, many people rightly turn to their deep convictions for comfort and strength, and their leaders are no different. Referencing religion is a common choice for politicians, especially in troubling times. Experimental evidence shows these references draw voters in, but lately it seems like the calls for comfort may have gotten a little…rehearsed.

For a growing number of Americans, calls for “thoughts and prayers” ring especially hollow. About a fifth of the U.S. population has no religious affiliation, and new experimental research shows we may be drastically underestimating the number of atheists in the population as well. Despite these trends, we don’t often see direct challenges to religious beliefs and practices in policy debates. Healthcare reform advocates don’t usually argue that we should keep people alive and well because “there probably isn’t an afterlife.” While the battle to legalize same sex marriage discussed the separation of church and state, we didn’t see many large advocacy groups arguing for support on the grounds that biblical claims simply weren’t true.

In lieu of prayer, calls for concrete action on gun control in the face of mass shootings are a new challenge to these cultural norms. In the wake of the 2015 San Bernardino shooting,  the New York Daily News ran this cover:

Now, in press conferences and on the floor of Congress, more political leaders are openly saying that thoughts and prayers are not enough to solve this problem. Sociologists know that the ways we frame issues matter, and here we might be seeing a new framing strategy emerging from the gun control debate that could reshape the role of religion in American politics in the long term.

 

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianDimitri John Ledkov: An interesting bug - network-manager, glibc, dpkg-shlibdeps, systemd, and finally binutils

Not so long ago I went to effectively recompile NetworkManager and fix up a minor bug in it. It built fine across all architectures and was considered installable, etc. And I was expecting it to just migrate across. At the time, glibc was at 2.26 in artful-proposed and NetworkManager was built against it. However the release pocket was at glibc 2.24. In Ubuntu we have a ProposedMigration process in place which ensures that newly built packages do not regress in the number of architectures built for; installable on; and do not regress themselves or any reverse dependencies at runtime.

Thus before my build of NetworkManager was considered for migration, it was tested in the release pocket against packages in the release pocket. Specifically, since package metadata only requires glibc 2.17 NetworkManager was tested against glibc currently in the release pocket, which should just work fine....
autopkgtest [21:47:38]: test nm: [-----------------------
test_auto_ip4 (__main__.ColdplugEthernet)
ethernet: auto-connection, IPv4 ... FAIL ----- NetworkManager.log -----
NetworkManager: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.25' not found (required by NetworkManager)
At first I only saw failing tests, which I thought was a transient failure. Thus they were retried a few times. Then I looked at the autopkgtest log and saw the above error messages. Perplexed, I started an lxd container with ubuntu artful, enabled proposed and installed just network-manager from artful-proposed, and indeed a simple `NetworkManager --help` failed with the above error from the linker.

I am too young to know what dependency-hell means, since ever since I used Linux (Ubuntu 7.04) all glibc symbols were versioned, and dpkg-shlibdeps would generate correct minimum dependencies for a package. Alas in this case readelf confirmed that indeed /usr/sbin/NetworkManager requires 2.25 and dpkg depends is >= 2.17.

Reading the readelf output further, I checked that all of the glibc symbols used are 2.17 or lower, and only the "Version needs section '.gnu.version_r'" referenced a GLIBC_2.25 symbol. Inspecting the dpkg-shlibdeps code I noticed that it does not parse that section and only searches through the dynamic symbols used to establish the minimum required version.

Things started to smell fishy. On one hand, I trust dpkg-shlibdeps to generate the right dependencies. On the other hand I also trust linker to not tell lies either. Hence I opened a Debian BTS bug report about this issue.

At this point, I really wanted to figure out where the reference to 2.25 comes from. Clearly it was not from any private symbols as then the reference would be on 2.26. Checking glibc abi lists I found there were only a handful of symbols marked as 2.25
$ grep 2.25 ./sysdeps/unix/sysv/linux/x86_64/64/libc.abilist
GLIBC_2.25 GLIBC_2.25 A
GLIBC_2.25 __explicit_bzero_chk F
GLIBC_2.25 explicit_bzero F
GLIBC_2.25 getentropy F
GLIBC_2.25 getrandom F
GLIBC_2.25 strfromd F
GLIBC_2.25 strfromf F
GLIBC_2.25 strfroml F
Blindly grepping for these in the network-manager source tree I found the following:
$ grep explicit_bzero -r configure.ac src/
configure.ac: explicit_bzero],
src/systemd/src/basic/string-util.h:void explicit_bzero(void *p, size_t l);
src/systemd/src/basic/string-util.c:void explicit_bzero(void *p, size_t l) {
src/systemd/src/basic/string-util.c:        explicit_bzero(x, strlen(x));
First of all it seems like network-manager includes a partial embedded copy of systemd. Secondly that code is compiled into a temporary library and has autoconf detection logic to use explicit_bzero. It also has an embedded implementation of explicit_bzero when it is not available in libc, however it does not have a FORTIFY_SOURCES implementation of said function (__explicit_bzero_chk) as was later pointed out to me. And whilst this function is compiled into an intermediary noinst library, no functions that use explicit_bzero end up being used by the NetworkManager binary. To prove this, I dropped all code that uses explicit_bzero, rebuilt the package against glibc 2.26, and voilà: it only had a version reference on glibc 2.17, as expected from the end-result usage of shared symbols.

At this point a toolchain bug was a suspect. It seems like, whilst the explicit_bzero shared symbol got optimised out, the version reference on 2.25 persisted in the linked binaries. At that point a snapshot version of binutils was in use in the archive. And in fact forcefully downgrading binutils resulted in correct compilation, with the versions table referencing only glibc 2.17.

Matthias then took over a tarball of object files and filed an upstream bug report against binutils: "[2.29 Regression] ld.bfd keeps a version reference in .gnu.version_r for symbols which are optimized out". The discussion in that bug report is a bit beyond me as, to me, binutils is black magic. All I understood there was "we moved sweep and pass to another place due to some bugs", doing that introduced this bug, thus do multiple sweeps and passes to make sure we fix old bugs and don't regress this either. Or something like that. Comments / better descriptions of the binutils fix are welcome.

Binutils got fixed by upstream developers, cherry-picked into Debian and Ubuntu, network-manager got rebuilt and everything is wonderful now. However, it does look like unused / dead-end code paths tripped up optimisations in the toolchain, which managed to slip by distribution package dependency generation and needlessly require a higher version of glibc. I guess the lesson here is: do not embed/compile unused code. Also I'm not sure why network-manager uses networkd internals like this, and maybe systemd should expose more APIs or serialise more state into /run, as most other things query things over dbus, a private socket, or by establishing watches on /run/systemd/netif. I'll look into that another day.

Thanks a lot to Guillem Jover, Matthias Klose, Alan Modra, H.J. Lu, and others for getting involved. I would not be able to raise, debug, or fix this issue all by myself.

Planet DebianIain R. Learmonth: Facebook Lies

In the past, I had a Facebook account. Long ago I “deleted” this account through the procedure outlined on their help pages. In theory, 14 days after I used this process my account would be irrevocably gone. This was all lies.

My account was not deleted and yesterday I received an email:

Screenshot of the email I received from Facebook

It took me a moment to figure it out, but what had happened here is someone had logged into my Facebook account using my email address and password. Facebook simply reactivated the account, which had not had its data deleted, as if I had logged in.

This was possible because:

  1. Facebook was clinging to the hope that I would like to return
  2. The last time I used Facebook I didn’t know what a password manager was and was using the same password for basically everything

When I logged back in, all I needed to provide to prove I was me was my date of birth. Given that old Facebook passwords are readily available from dumps (people think their accounts are gone, so why should they be changing their passwords?) and my date of birth is not secret either, this is not great.

I followed the deletion procedure again and in 2 weeks (you can’t immediately request deletion apparently) I’ll check to see if the account is really gone. I’ve updated the password so at least the deletion process can’t be interrupted by whoever has that password (probably lots of people - it’ll be in a ton of dumps where databases have been hacked).

If it’s still not gone, I hear you can just post obscene and offensive material until Facebook deletes you. I’d rather not have to take that route though.

If you’re interested to see if you’ve turned up in a hacked database dump yourself, I would recommend hibp.

Update (2017-10-04): Thanks for all the comments. Sorry I haven’t been able to reply to all of them. Discussion around this post occured at Hacker News if you would like to read more there. You can also read about a similar, and more frustrating, case that came up in the HN discussion.

CryptogramE-Mail Tracking

Interesting survey paper on the privacy implications of e-mail tracking:

Abstract: We show that the simple act of viewing emails contains privacy pitfalls for the unwary. We assembled a corpus of commercial mailing-list emails, and find a network of hundreds of third parties that track email recipients via methods such as embedded pixels. About 30% of emails leak the recipient's email address to one or more of these third parties when they are viewed. In the majority of cases, these leaks are intentional on the part of email senders, and further leaks occur if the recipient clicks links in emails. Mail servers and clients may employ a variety of defenses, but we analyze 16 servers and clients and find that they are far from comprehensive. We propose, prototype, and evaluate a new defense, namely stripping tracking tags from emails based on enhanced versions of existing web tracking protection lists.

Blog post on the research.
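
As a rough illustration of the kind of defense the paper proposes, here is a minimal sketch (mine, not the authors') that flags likely tracking pixels in an HTML mail body using only Python's standard library; real filtering would also consult tracking-protection lists.

# Flag <img> tags that look like tracking pixels: remote images declared as
# 0x0 or 1x1.
from html.parser import HTMLParser

class PixelFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.suspects = []

    def handle_starttag(self, tag, attrs):
        if tag != "img":
            return
        a = dict(attrs)
        tiny = a.get("width") in ("0", "1") and a.get("height") in ("0", "1")
        if a.get("src", "").startswith("http") and tiny:
            self.suspects.append(a["src"])

finder = PixelFinder()
finder.feed('<p>Hello</p><img src="https://tracker.example/p.gif" width="1" height="1">')
print(finder.suspects)   # ['https://tracker.example/p.gif']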

Planet Linux AustraliaBen Martin: Ikea wireless charger in CNC mahogany case

I notice that Ikea sell their wireless chargers without a shell for insertion into desks. The "desk" I chose is a curve cut profile in mahogany that just happens to have the same fit as an LG G3/4/5 type phone. The design changed along the way to a more upright one which then required a catch to stop the phone sliding off.


This was done in Fusion360 which allows bringing in STL files of things like phones and cutting those out of another body. It took a while to work out the ball end toolpath but I finally worked out how to get something that worked reasonably well. The chomps in the side allow fingers to securely lift the phone off the charger.

It will be interesting to play with sliced objects in wood. Layering 3D cuts to build up objects that are 10cm (or about 4 layers) tall.

Worse Than FailureThe Porpoise of Comment Easter Eggs

Today's submitter writes: I wonder how many developers out there have managed, intentionally or otherwise, to have a comment Easter egg go viral within a project.

It seems in the late '90's he was working on a project codenamed "Dolphin." This wasn't the GameCube; it was an ASP/VB6 N-Tier system, also known as "way less fun." One of the first phases of the project involved a few web-based forms. The architects provided them with some simple standard templates to use, such as the method header comment block. This comment block included a Purpose field, which in a moment of self-amusement our submitter changed to Porpoise throughout the VB6 classes and ASP scripts he'd written.

The first phase was released, and after code review, that particular implementation was cited as the paragon that other implementations should follow. Of course, this led to rampant copy-pasta throughout the entire system. By the end of phase 2, the code comments for the Dolphin project were inextricably filled with Porpoises. Being a subtle word change, it largely went unnoticed. Every once in a while, a developer would actually notice and nearly keel over laughing.

Of course, there's also a famous instance of a code comment going properly viral. Deep within the bowels of the Unix kernel, there is a method responsible for saving the CPU context when processes are switched—any time a time slice is used up, an interrupt signal is caught, a system call is made, or a page fault occurs. The code to do this in an efficient manner is horrifically complicated, so it's commented with, You are not expected to understand this. This comment can now be found on buttons, mousepads, t-shirts, hoodies, and tons of other merchandise. It's become a rallying cry of the Unix geeks, a smug way of saying, "I understand where this is from. Do you?"

Have any of you ever written something that went viral, either locally within your company or across the broader Internet community? Let us know in the comments or—if you've got a good one—drop us a submission.

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

Planet Linux AustraliaSimon Lyall: DevOps Days Auckland 2017 – Tuesday Session 3

Mirror, mirror, on the wall: testing Conway’s Law in open source communities – Lindsay Holmwood

  • The map between the organisation structure and the technical structure.
  • Easy to find who owns something, don’t have to keep two maps in your head
  • Needs flexibility of the organisation structure in order to support flexibility in a technical design
  • Conway’s “Law” is really just an adage
  • Complexity frequently takes the form of hierarchy
  • Organisations that mirror perform badly in rapidly changing and innovative environments

Metrics that Matter – Alison Polton-Simon (Thoughtworks)

  • Metrics Mania – Lots of focus on it everywhere ( fitbits, google analytics, etc)
  • How to help teams improve CD process
  • Define CD
    • Software consistently in a deployable state
    • Get fast, automated feedback
    • Do push-button deployments
  • Identifying metrics that mattered
    • Talked to people
    • Contextual observation
    • Rapid prototyping
    • Pilot offering
  • 4 big metrics
    • Deploy ready builds
    • Cycle time
    • Mean time between failures
    • Mean time to recover
  • Number of Deploy-ready builds
    • How many builds are ready for production?
    • Routine commits
    • Testing you can trust
    • Product + Development collaboration
  • Cycle Time
    • Time it takes to go from a commit to a deploy
    • Efficient testing (test subset first, faster testing)
    • Appropriate parallelization (lots of build agents)
    • Optimise build resources
  • Case Study
    • Monolithic Codebase
    • Hand-rolled build system
    • Unreliable environments ( tests and builds fail at random )
    • Validating a Pull Request can take 8 hours
    • Coupled code: isolated teams
    • Wide range of maturity in testing (some no test, some 95% coverage)
    • No understanding of the build system
    • Releases routinely delayed (10 months!) or done “under the radar”
  • Focus in case study
    • Reducing cycle time, increasing reliability
    • Extracted services from monolith
    • Pipelines configured as code
    • Build infrastructure provisioned as docker and ansible
    • Results:
      • Cycle time for one team 4-5h -> 1:23
      • Deploy ready builds 1 per 3-8 weeks -> weekly
  • Mean time between failures
    • Quick feedback early on
    • Robust validation
    • Strong local builds
    • Should not be done by reducing number of releases
  • Mean time to recover
    • How long back to green?
    • Monitoring of production
    • Automated rollback process
    • Informative logging
  • Case Study 2
    • 1.27 million lines of code
    • High cyclomatic complexity
    • Tightly coupled
    • Long-running but frequently failing testing
    • Isolated teams
    • Pipeline run duration 10h -> 15m
    • MTTR Never -> 50 hours
    • Cycle time 18d -> 10d
    • Created a dashboard for the metrics
  • Meaningless Metrics
    • The company will build whatever the CEO decides to measure
    • Lines of code produced
    • Number of Bugs resolved. – real life duplicates Dilbert
    • Developers Hours / Story Points
    • Problems
      • Lack of team buy-in
      • Easy to game
      • Unintended consequences
      • Measuring inputs, not impacts
  • Make your own metrics
    • Map your path to production
    • Highlights pain points
    • collaborate
    • Experiment

 

Share

Planet Linux AustraliaSimon Lyall: DevOps Days Auckland 2017 – Tuesday Session 2

Using Bots to Scale incident Management – Anthony Angell (Xero)

  • Who we are
    • Single Team
    • Just a platform Operations team
  • SRE team is formed
    • Ops teams plus performance Engineering team
  • Incident Management
    • In the bad old days – 600 people on a single chat channel
    • Created Framework
    • what do incidents look like, post mortems, best practices,
    • How to make incident management easy for others?
  • ChatOps (Based on Hubot)
    • Automated tour guide
    • Multiple integrations – anything with Rest API
    • Reducing time to restore
    • Flexibility
  • Release register – API hook to when changes are made
  • Issue report form
    • Summary
    • URL
    • User-ids
    • how many users & location
    • when started
    • anyone working on it already
    • Anything else to add.
  • Chat Bot for incident
    • Populates for an pushes to production channel, creates pagerduty alert
    • Creates new slack channel for incident
    • Can automatically update status page from chat and page senior managers
    • Can Create “status updates” which record things (eg “restarted server”), or “yammer updates” which get pushed to social media team
    • Creates a task list automatically for the incident
    • Page people from within chat
    • At the end: Gives time incident lasted, archives channel
    • Post Mortem
  • More integrations
    • Report card
    • Change tracking
    • Incident / Alert portal
  • High Availability – dockerisation
  • Caching
    • Pageduty
    • AWS
    • Datadog

 

Share

,

Planet DebianAntoine Beaupré: My free software activities, September 2017

Debian Long Term Support (LTS)

This is my monthly Debian LTS report. I mostly worked on the git, git-annex and ruby packages this month but didn't have time to completely use my allocated hours because I started too late in the month.

Ruby

I was hoping someone would pick up the Ruby work I submitted in August, but it seems no one wanted to touch that mess, understandably. Since then, new issues came up, and not only did I have to work on the rubygems and ruby1.9 package, but now the ruby1.8 package also had to get security updates. Yes: it's bad enough that the rubygems code is duplicated in one other package, but wheezy had the misfortune of having two Ruby versions supported.

The Ruby 1.9 also failed to build from source because of test suite issues, which I haven't found a clean and easy fix for, so I ended up making test suite failures non-fatal in 1.9, which they were already in 1.8. I did keep a close eye on changes in the test suite output to make sure tests introduced in the security fixes would pass and that I wouldn't introduce new regressions as well.

So I published the following advisories:

  • ruby 1.8: DLA-1113-1, fixing CVE-2017-0898 and CVE-2017-10784. 1.8 doesn't seem affected by CVE-2017-14033 as the provided test does not fail (but it does fail in 1.9.1). The test suite was, before the patch:

    2199 tests, 1672513 assertions, 18 failures, 51 errors
    

    and after patch:

    2200 tests, 1672514 assertions, 18 failures, 51 errors
    
  • rubygems: uploaded the package prepared in August as is in DLA-1112-1, fixing CVE-2017-0899, CVE-2017-0900, CVE-2017-0901. here the test suite passed normally.

  • ruby 1.9: here I used the 2.2.8 release tarball to generate a patch that would cover all issues and published DLA-1114-1 that fixes the CVEs of the two packages above. The test suite was, before patches:

    10179 tests, 2232711 assertions, 26 failures, 23 errors, 51 skips
    

    and after patches:

    10184 tests, 2232771 assertions, 26 failures, 23 errors, 53 skips
    

Git

I also quickly issued an advisory (DLA-1120-1) for CVE-2017-14867, an odd issue affecting git in wheezy. The backport was tricky because it wouldn't apply cleanly and the git package had a custom patching system which made it tricky to work on.

Git-annex

I did a quick stint on git-annex as well: I was able to reproduce the issue and confirm an approach to fixing the issue in wheezy, although I didn't have time to complete the work before the end of the month.

Other free software work

New project: feed2exec

I should probably make a separate blog post about this, but ironically, I don't want to spend too much time writing those reports, so this will be quick.

I wrote a new program, called feed2exec. It's basically a combination of feed2imap, rss2email and feed2tweet: it allows you to fetch RSS feeds and send them in a mailbox, but what's special about it, compared to the other programs above, is that it is more generic: you can basically make it do whatever you want on new feed items. I have, for example, replaced my feed2tweet instance with it, using this simple configuration:

[anarcat]
url = https://anarc.at/blog/index.rss
output = feed2exec.plugins.exec
args = tweet "%(title)0.70s %(link)0.70s"

The sample configuration file also has examples to talk with Mastodon, Pump.io and, why not, a torrent server to download torrent files available over RSS feeds. A trivial configuration can also make it work as a crude podcast client. My main motivation to work on this was that it was difficult to extend feed2imap to do what I needed (which was to talk to transmission to download torrent files) and rss2email didn't support my workflow (which is delivering to feed-specific mail folders). Because both projects also seemed abandoned, it seemed like a good idea at the time to start a new one, although the rss2email community has now restarted the project and may produce interesting results.

As an experiment, I tracked my time working on this project. It turns out it took about 45 hours to write that software. Considering feed2exec is about 1400 SLOC, that's 30 lines of code per hour. I don't know if that's slow or fast, but it's an interesting metric for future projects. It sure seems slow to me, but we need to keep in mind those 30 lines of code don't include documentation and repeated head banging on the keyboard. For example, I found two issues with the upstream feedparser package which I use to parse feeds which also seems unmaintained, unfortunately.

Feed2exec is beta software at this point, but it's working well enough for me and the design is much simpler than the other programs of the kind. The main issue people can expect from it at this point is formatting issues or parse errors on exotic feeds, and noisy error messages on network errors, all of which should be fairly easy to fix in the test suite. I hope it will be useful for the community and, as usual, I welcome contributions, help and suggestions on how to improve the software.

More Python templates

As part of the work on feed2exec, I cleaned up a few things in the ecdysis project, mostly hooking the tests up in CI, improving the advancedConfig logger and doing some more general cleanup.

While I was there, I ended up building a pretty decent basic CI configuration for Python on GitLab. Whereas the previous templates only had a non-working Django example, you should now be able to choose a Python template when you configure CI on GitLab 10 and above, which hooks you up with the normal Python setup procedures like setup.py install and setup.py test.
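The actual template shipped by GitLab may differ in the details, but it boils down to a .gitlab-ci.yml along these lines (the image and job name here are just examples):

# minimal Python CI sketch, not the exact GitLab template
image: python:latest

test:
  script:
    - python setup.py install
    - python setup.py test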

Selfspy

I mentioned working on a monitoring tool in my last post, because it was a feature from Workrave missing in SafeEyes. It turns out there is already such a tool called selfspy. I did an extensive review of the software to make sure it wouldn't leak out confidential information before using it, and it looks, well... kind of okay. It crashed on me at least once so far, which is too bad because then it loses track of the precious activity. I have used it at least once to figure out what the heck I worked on during the day, so it's pretty useful. I particularly used it to backtrack my work on feed2exec as I didn't originally track my time on the project.

Unfortunately, selfspy seems unmaintained. I have proposed a maintenance team and hopefully the project maintainer will respond and at least share access so we don't end up in a situation like linkchecker. I also sent a bunch of pull requests to fix some issues like being secure by default and fixing the build. Apart from the crash, the main issue I have found with the software is that it doesn't detect idle time, which means certain apps are disproportionately represented in the statistics. There are also some weaknesses in the crypto that should be addressed for people who encrypt their database.

Next step is to package selfspy in Debian which should hopefully be simple enough...

Restic documentation security

As part of a documentation patch on the Restic backup software, I have improved on my previous Perl script to snoop on process commandline arguments. A common flaw in shell scripts and cron jobs is how secret material gets passed around: in the environment it is usually safe, but on the commandline it is definitely not. The challenge, in this peculiar case, was the env binary, but the last time I encountered such an issue was with the Drush commandline tool, which was passing database credentials in the clear to the mysql binary. Using my Perl sniffer, I could get to 60 checks per second (or 60Hz). After reimplementing it in Python, this number went up to 160Hz, which still wasn't enough to catch the elusive env command, which is much faster at hiding arguments than MySQL, in large part because it simply does an execve() once the environment is set up.

Eventually, I just went crazy and rewrote the whole thing in C, which was able to get 700-900Hz and did catch the env command about 10-20% of the time. I could probably have rewritten this by simply walking /proc myself (since this is what all those libraries do in the end) to get better results, but then my point was made. I was able to prove to the restic author the security issues that warranted the warning. It's too bad I need to repeat this again and again, but then my tools are getting better at proving that issue... I suspect it's not the last time I have to deal with this issue and I am happy to think that I can come up with an even more efficient proof of concept tool the next time around.
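For the curious, here is a minimal Python sketch of the idea, not the actual Perl, Python or C tools mentioned above: it simply polls /proc for command lines and prints any that contain a given string.

#!/usr/bin/python3
# Minimal /proc commandline sniffer: polls every process and prints
# command lines matching a pattern. A sketch of the idea only; the
# real tools poll much faster and more cleverly.
import os
import sys
import time

pattern = sys.argv[1] if len(sys.argv) > 1 else "password"
seen = set()

while True:
    for pid in os.listdir('/proc'):
        if not pid.isdigit():
            continue
        try:
            # arguments are NUL-separated in /proc/PID/cmdline
            with open('/proc/%s/cmdline' % pid, 'rb') as f:
                cmdline = f.read().replace(b'\0', b' ').decode(errors='replace')
        except OSError:
            continue  # process exited or permission denied
        if pattern in cmdline and (pid, cmdline) not in seen:
            seen.add((pid, cmdline))
            print(pid, cmdline)
    time.sleep(0.001)  # even at ~1000 polls a second, short-lived processes slip by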

Ansible 101

After working on documentation last month, I ended up writing my first Ansible playbook this month, converting my tasksel list to a working Ansible configuration. This was a useful exercise: it allowed me to find a bunch of packages which have been removed from Debian, and it provides much better usability than tasksel. For example, it provides a --diff argument that shows which packages are missing from a given setup.
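To give an idea of the shape of the thing, here is a minimal sketch of such a playbook; the package list and host target are illustrative only, not my actual tasksel conversion:

# minimal tasksel-replacement sketch; package names are examples only
- hosts: localhost
  become: true
  tasks:
    - name: install my standard set of packages
      apt:
        name: "{{ item }}"
        state: present
      with_items:
        - vim
        - git
        - rsync

Presumably the --diff report mentioned above comes from something like ansible-playbook --check --diff, which previews what would change without applying anything.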

I am still unsure about Ansible. Manifests do seem really verbose and I still can't get used to the YAML DSL. I could probably have done the same thing with Puppet and just run puppet apply on the resulting config. But I must admit my bias towards Python is showing here: I can't help but think Puppet is going to be way less accessible with its rewrite in Clojure and C (!)... But then again, I really like Puppet's approach of having generic types like package or service rather than Ansible's clunky apt/yum/dnf/package/win_package types...

Pat and Ham radio

After responding (too late) to a request for volunteers to help in Puerto Rico, I realized that my amateur radio skills were somewhat lacking in the "packet" (data transmission in ham jargon) domain, as I wasn't used to operating a Winlink node. Such a node can receive and transmit actual emails over the airwaves, for free, without direct access to the internet, which is very useful in disaster relief efforts. Through some cursory research, I stumbled upon the new and very promising Pat project, which provides one of the first user-friendly Linux-compatible Winlink programs. I provided improvements to the documentation and asked some questions regarding compatibility issues, which are still pending.

But my pet issue is the establishment of pat as a normal internet citizen by using standard protocols for receiving and sending email. Not sure how that can be implemented, but we'll see. I am also hoping to upload an official Debian package and hopefully write more about this soon. Stay tuned!

Random stuff

I ended up fixing my Kodi issue by starting it as a standalone systemd service instead of through gdm3, which is now completely disabled on the media box. I simply used the following /etc/systemd/system/kodi.service file:

[Unit]
Description=Kodi Media Center
After=systemd-user-sessions.service network.target sound.target

[Service]
User=xbmc
Group=video
Type=simple
TTYPath=/dev/tty7
StandardInput=tty
ExecStart=/usr/bin/xinit /usr/bin/dbus-launch --exit-with-session /usr/bin/kodi-standalone -- :1 -nolisten tcp vt7
Restart=on-abort
RestartSec=5

[Install]
WantedBy=multi-user.target

The downside of this is that it needs Xorg to run as root, whereas modern Xorg can now run rootless. Not sure how to fix this or where... But if I put needs_root_rights=no in Xwrapper.config, I get the following error in .local/share/xorg/Xorg.1.log:

[  2502.533] (EE) modeset(0): drmSetMaster failed: Permission denied

After fooling around with iPython, I ended up trying the xonsh shell, which is supposed to provide a bash-compatible Python shell environment. Unfortunately, I found it pretty unusable as a shell: it works fine to do Python stuff, but all my environment and legacy bash configuration files were basically ignored, so I couldn't get up and running quickly. This is too bad because the project looked very promising...

Finally, one of my TLS hosts using a Let's Encrypt certificate wasn't renewing properly, and I figured out why. It turns out the ProxyPass directive was passing everything to the backend, including the /.well-known requests, which obviously broke ACME verification. The solution was simple enough: disable the proxy for that path:

ProxyPass /.well-known/ !
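Note that Apache checks ProxyPass rules in configuration order and the first match wins, so the exclusion has to come before the general rule; here is a minimal sketch with a made-up backend address:

# the ! exclusion must precede the catch-all ProxyPass
ProxyPass        /.well-known/ !
ProxyPass        / http://127.0.0.1:8080/
ProxyPassReverse / http://127.0.0.1:8080/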

Planet Linux AustraliaSimon Lyall: DevOps Days Auckland 2017 – Tuesday Session 1

DevSecOps – Anthony Rees

“When Anthrax and Public Enemy came together, It was like Developers and Operations coming together”

  • Everybody is trying to get things out fast, sometimes we forget about security
  • Structural efficiency and optimised flow
  • Compliance putting roadblock in flow of pipeline
    • Even worse scanning in production after deployment
  • Compliance guys using Excel, Security using Shell-scripts, Developers and Operations using Code
  • Chef security compliance language – InSpec
    • Insert Sales stuff here
  • inspec.io
  • Lots of pre-written configs available

Immutable SQL Server Clusters – John Bowker (from Xero)

  • Problem
    • Pet Based infrastructure
    • Not in cloud, weeks to deploy new server
    • Hard to update base infrastructure code
  • 110 Prod Servers (2 regions).
  • 1.9PB of Disk
  • Octopus Deploy: SQL Schemas, Also server configs
  • Half of team in NZ, Half in Denver
    • Data Engineers, Infrastructure Engineers, Team Lead, Product Owner
  • Where we were – The Burning Platform
    • Changed mid-Migration from dedicated instances to dedicated Hosts in AWS
    • Big saving on software licensing
  • Advantages
    • Already had Clustered HA
    • Existing automation
    • 6 day team, 15 hours/day due to multiple locations of team
  • Migration had to have no downtime
    • Went with node swaps in cluster
  • Split team. Half doing migration, half creating code/system for the node swaps
  • We learnt
    • Dedicated hosts are cheap
    • Dedicated host automation not so good for Windows
    • Discovery service not so good.
    • Syncing data took up to 24h due to large dataset
    • Powershell debugging is hard (moving away from powershell a bit, but powershell has lots of SQL server stuff built in)
    • AWS services can timeout, allow for this.
  • Things we Built
    • Lots Step Templates in Octopus Deploy
    • Metadata Store for SQL servers – Dynamite (Python, Lambda, Flask, DynamoDB) – Hope to Open source
    • Lots of PowerShell Modules
  • Node Swaps going forward
    • Working towards making this completely automated
    • New AMI -> Node swap onto that
    • Avoid upgrade in place or running on old version


Krebs on SecurityUSPS ‘Informed Delivery’ Is Stalker’s Dream

A free new service from the U.S. Postal Service that provides scanned images of incoming mail before it is slated to arrive at its destination address is raising eyebrows among security experts who worry about the service’s potential for misuse by private investigators, identity thieves, stalkers or abusive ex-partners. The USPS says it hopes to have changes in place by early next year that could help blunt some of those concerns.

The service, dubbed “Informed Delivery,” has been available to select addresses in several states since 2014 under a targeted USPS pilot program, but it has since expanded to include many ZIP codes nationwide, according to the Postal Service. U.S. residents can tell if their address is eligible by visiting informeddelivery.usps.com.

Image: USPS

According to the USPS, some 6.3 million accounts have been created via the service so far. The Postal Service says consumer feedback has been overwhelmingly positive, particularly among residents who travel regularly and wish to keep close tabs on any mail being delivered while they’re on the road.

But a review of the methods used by the USPS to validate new account signups suggests the service is wide open to abuse by a range of parties, mainly because of weak authentication and because it is not easy to opt out of the service.

Signing up requires an eligible resident to create a free user account at USPS.com, which asks for the resident’s name, address and an email address. The final step in validating residents involves answering four so-called “knowledge-based authentication” or KBA questions. KrebsOnSecurity has relentlessly assailed KBA as an unreliable authentication method because so many answers to the multiple-guess questions are available on sites like Spokeo and Zillow, or via social networking profiles.

Once signed up, a resident can view scanned images of the front of each piece of incoming mail in advance of its arrival. Unfortunately, because of the weak KBA questions (provided by recently-breached big-three credit bureau Equifax, no less) stalkers, jilted ex-partners, and private investigators also can see who you’re communicating with via the Postal mail.

Perhaps this wouldn’t be such a big deal if the USPS notified residents by snail mail when someone signs up for the service at their address, but it doesn’t.

Peter Swire, a privacy and security expert at Georgia Tech and a senior counsel at the law firm of Alston & Bird, said strong authentication relies on information collected from multiple channels — such as something you know (a password) and something you have (a mobile phone). In this case, however, the USPS has opted not to leverage a channel that it uniquely controls, namely the U.S. Mail system.

“The whole service is based on a channel they control, and they should use that channel to verify people,” Swire said. “That increases user trust that it’s a good service. Multi-channel authentication is becoming the industry norm, and the U.S. Postal Service should catch up to that.” 

I also wanted to know whether there was any way for households to opt out of having scanned images of their mail sent as part of this offering. The USPS replied that consumers may contact the Informed Delivery help desk to request that the service not be presented to anyone in their household. “Each request is individually reviewed and assessed by members of the Postal Service Informed Delivery, Privacy and Legal teams,” the Postal Service replied.

There does not appear to be any limit on the number of people who can sign up for the service at any one address, except that one needs to know the names and KBA question answers for a valid resident of that address.

“Informed Delivery may be accessed by any adult member of a household,” the USPS wrote in response to questions. “Each member of the household must be able to complete the identity proofing process implemented by the Postal Service.”

The Postal Service said it is not possible for an address occupant to receive emailed, scanned images of incoming mail at more than one email address. In other words, if you wish to prevent others from signing up in your name or in the name of any other adults at the address, the surest way to do that may be to register your own account and then urge all other adult residents at the address to create their own accounts.

A highly positive story about Informed Delivery published by NBC in April 2017 suggests another use for the service: Reducing mail theft. However, without stronger authentication, this service could let local ID thieves determine with pinpoint accuracy exactly when mail worth stealing is set to arrive.

The USPS says businesses are not currently eligible to sign up as recipients of Informed Delivery. However, people running businesses out of their home could also be the target of competitors hoping to steal away customers, or to pose as partner firms in demanding payment for outstanding invoices.

Informed Delivery seems like a useful service for those residents who wish to take advantage of it. But lacking stronger consumer validation the service seems ripe for abuse. The USPS should use its own unique communications channel (snail mail) to alert Americans when their physical address has been signed up for this service.

Bob Dixon, the executive program director for Informed Delivery, said the Postal Service is working on an approach that it hopes to make available to the public in January 2018 which would allow USPS to send written notification to addresses when someone at that residence signs up for Informed Delivery.

Dixon said that capability will build on technology already in place to notify Americans via mail when a change of address is requested. Currently, the USPS allows address changes via the USPS Web site or in-person at any one of more than 3,000 post offices nationwide. When a request is processed, the USPS sends a confirmation letter to both the old address and the new address.

If someone already signed up for Informed Delivery later posts a change of address request, the USPS does not automatically transfer the Informed Delivery service to the new address: Rather, it sends a mailer with a special code tied to the new address and to the username that requested the change. To resume Informed Delivery at the new address, that code needs to be entered online using the account that requested the address change.

“Part of coming up with a mail-based verification system will also let us do some additional notification that, candidly, we just haven’t built yet,” Dixon said. “It is our intent to have this ready by January 2018, and it is one of our higher priorities to get it done by then.”

There is a final precaution that should block anyone from signing up as you: Readers who have taken my advice to freeze their credit files with the four major consumer credit reporting bureaus (Equifax, Experian, Innovis and Trans Union) will find they are not able to sign up for Informed Delivery online. That’s because having a freeze in place should block Equifax from being able to ask you the four KBA questions.

By the way, this same dynamic works with other services that you may not wish to use but which require you otherwise to plant your flag of identity to prevent others from doing so on your behalf, such as managing your relationship to the Internal Revenue Service online and the Social Security Administration. For more information on why you should get a freeze and how to do that, see this piece.

Update, 3:48 p.m. ET: Added bit about how a freeze can block someone from signing up in your name.

Update, Oct. 4, 11:01 a.m.: Several readers have written in to say that although the Postal Service says citizens can opt out of Informed Delivery at a specific address by contacting the Informed Delivery Help Desk, none of those readers have successfully been able to achieve this result. One reader forwarded a response from the Help Desk folks that stated emphatically, “I do understand your concern about fraud and theft but there is no way to make your home address ineligible for Informed Delivery.” No way, that is, except to register as every adult at your address, as stated above.

Planet DebianJonathan Dowland: PhD

I'm very excited to (finally) announce that I've embarked upon a part-time PhD in Computing Science at Newcastle University!

I'm at the very beginning of a journey that is expected to last about six years. The area I am going to be working in is functional stream processing and distributed systems architecture, in the context of IoT. This means investigating and working with technologies such as Apache Spark; containers (inc. Docker); Kubernetes and OpenShift; but also Haskell. My supervisor is Prof. Paul Watson. This would not be possible without the support of my employer, Red Hat, for which I am extremely grateful.

I hope to write much more about this topic here in the near future, so watch this space!

Planet DebianLars Wirzenius: Attracting contributors to a new project

How do you attract contributors to a new free software project?

I'm in the very early stages of a new personal project. It is irrelevant for this blog post what the new project actually is. Instead, I am thinking about the following question:

Do I want the project to be mainly for myself, and maybe a handful of others, or do I want to try to make it a more generally useful, possibly even a well-known, popular project? In other words, do I want to just solve a specific problem I have or try to solve it for a large group of people?

If it's a personal project, I'm all set. I can just start writing code. (In fact, I have.) If it's the latter, I'll need to attract contributions from others, and how do I do that?

I asked that question on Twitter and Mastodon and got several suggestions. This is a summary of those, with some editorialising from me.

  • The most important thing is probably that the project should aim for something that interests other people. The more people it interests, the easier it will be to attract contributors. This should be written up and displayed prominently: what the software does (or will do) and what it can be used for.

  • Having something that kind of works, and is easy to improve, also seems to be key. An empty project is daunting to do anything with. Part of this is that the software the project is producing should be easy to install and get running. It doesn't have to be fully featured. It doesn't even have to be alpha level quality. It needs to do something.

    If the project is about producing a spell checker, say, and it doesn't even try to read an input file, it's probably too early for anyone else to contribute. A spell checker that lists every word in the input file as badly spelt is probably more attractive to contribute to.

  • It helps to document where a new contributor should start, and how they would submit their contribution. A list of easy things to work on may also help. Having a roadmap of near-future development steps and a long-term vision will make things easier. Having an architectural document to explain how the system hangs together will help.

  • A welcoming, constructive atmosphere helps. People should get quick feedback to questions, issues, and patches, in order to build momentum. Make it fun for people to contribute, and they'll contribute more.

  • A public source code repository, and a public ticketing system, and public discussion forums (mailing lists, web forums, IRC channels, etc) will help.

  • Share the power in the project. Give others the power to make decisions, or merge things from other contributors. Having a clear, functioning governance structure from the start helps.

I don't know if these things are all correct, or whether they're enough to grow a successful, popular project.

Karl Fogel's seminal book Producing Open Source Software should also be mentioned.

CryptogramRemote Malware Attacks on ATMs

This report discusses the new trend of remote malware attacks against ATMs.

Worse Than FailureCodeSOD: Dashboard Confessional

Three years ago, this XKCD comic captured a lot of the problems we have with gathering requirements:

A comic where a customer asks a developer to a) Take a photo and determine if it's in a national park (easy says the dev), b) determine if it's of a bird (I need a research team and 5 years)

Our users have no idea which kinds of problems are hard and which kinds are easy. This isn’t just for advanced machine learning classification projects: I’ve had users who assumed changing the color of an element on a page was hard (it wasn’t), and users who assumed wiring up our in-house ERP to a purchased ERP was the simplest thing ever (it wasn’t).

Which brings us to Christopher Shankland’s contribution. He works for a game company, and while that often means doing game development, it also often means doing tooling and platform management for the design team, like providing fancy dashboards for the designers to review how users play the game so that they can tweak the play.

That led to this conversation:

Game Designer: I want to see how players progress through the game missions
Christopher: Great. I’ll add a funnel chart to our dashboard app, which can query data from the database!
Game Designer: Also, I need to change the order the missions display in all the time…
Christopher: Okay, that’ll require a data change every time you want to flip the order…
Game Designer: Fine, but I shouldn’t have to ask anyone else to do it…
Christopher: Um… I’d have to bolt a UI onto the database, it’s not really meant-
Game Designer: That sounds time consuming. I need this data YESTERDAY.
Christopher: I could-
Game Designer: YESTERDAY. GIVE ME DATA. NOW.

So Christopher hacked together a solution. Between fighting with the designer’s fluid and ever-changing demands, the fact that what the designer wanted didn’t mesh well with how the dashboard system assumed analytics would be run, the demand that it be done in the dashboard system anyway, and the unnecessary time pressure, Christopher didn’t do his best work. He sends us this code, as penance. It’s long, it’s convoluted, and it uses lots of string concatenation to generate SQL statements.

As Chris rounded out his message to us: “This is why I drink.”

-- Create syntax for 'chart_first_map_daily'

DROP PROCEDURE IF EXISTS `chart_first_map_daily`;

DELIMITER ;;
CREATE DEFINER=`megaforce_stats`@`%` PROCEDURE `chart_first_map_daily`(IN timeline INT)
BEGIN

SET SESSION group_concat_max_len = 1000000;

DROP TABLE IF EXISTS `megaforce_stats`.`chart_first_map_daily`;
CREATE TABLE `megaforce_stats`.`chart_first_map_daily` (
        `absolute_order` INT(11) UNSIGNED NOT NULL,
        `date` DATE NOT NULL,
        `task_id` INT(11) UNSIGNED NOT NULL,
        `number_completed` INT(11) UNSIGNED NOT NULL DEFAULT 0,
        `new_user_completion_percentage` FLOAT(23) NOT NULL DEFAULT 0,
        `segment` VARCHAR(32) DEFAULT "Unknown",
        PRIMARY KEY (`date`, `task_id`, `segment`)
) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;

SET @last_date = date_sub(curdate(), INTERVAL 1 DAY);
SET @timeline = timeline;
SET @first_date = date_sub(@last_date, INTERVAL @timeline DAY);

SET @first_campaign_id = (SELECT `id` FROM `megaforce_game`.`campaigns` WHERE NOT EXISTS (SELECT * FROM `megaforce_game`.`campaign_dependencies` WHERE `unlocked_campaign_id` = `megaforce_game`.`campaigns`.`id`) AND `active` = 1 AND `type_id` NOT IN (2,3,4));

-- Create a helper table for ordering
DROP TABLE IF EXISTS `megaforce_stats`.`absolute_task_ordering`;
CREATE TABLE `megaforce_stats`.`absolute_task_ordering` (
        `task_id` INT(11) UNSIGNED NOT NULL,
        `absolute_order` INT(11) UNSIGNED NOT NULL AUTO_INCREMENT,
        PRIMARY KEY (`absolute_order`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;

SET @current_mission_id = -1;
SET @sort_order = 2;

SELECT
        IF(COUNT(`id`) > 0, `id`, -1) INTO @current_mission_id
FROM
        `megaforce_game`.`missions`
WHERE
        NOT EXISTS (
                SELECT * FROM `megaforce_game`.`mission_dependencies` WHERE `unlocked_mission_id` = `megaforce_game`.`missions`.`id`
        ) AND active = 1 AND campaign_id = @first_campaign_id AND type_id = 1;

WHILE @current_mission_id > 0 DO
        INSERT INTO
                `megaforce_stats`.`absolute_task_ordering` (`task_id`)
        SELECT
                `id`
        FROM
                `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY
                `order`;

        INSERT INTO
                `megaforce_stats`.`chart_first_map_daily` (
                        `absolute_order`,`date`,`task_id`, `number_completed`,`new_user_completion_percentage`, `segment`
                )
        SELECT
                `task_info`.`absolute_order`,
                `sessions`.`date`,
                `task_info`.`task_id`,
                `task_info`.`number_completed`,
                `task_info`.`number_completed` / `sessions`.`new_users`,
                -1
        FROM (
                        SELECT
                                `date`, SUM(`new_users`) AS `new_users`
                        FROM `megaforce_stats`.`sessions_daily`
                        WHERE DATE(`date`) > @first_date
                        AND DATE(`date`) <= @last_date
                        GROUP BY `date`
                ) AS `sessions`
        LEFT JOIN (
                SELECT
                        `absolute_order`, DATE(`date_completed`) AS `date`, COUNT(DISTINCT(`user_name`)) AS `number_completed`, `megaforce_game`.`tasks`.`id` AS `task_id`
                FROM `megaforce_game`.`track_completed_tasks`
                JOIN `megaforce_stats`.`accounts_real`
                ON `user_name` = `userName`
                JOIN `megaforce_game`.`tasks`
                ON `megaforce_game`.`tasks`.`id` = `megaforce_game`.`track_completed_tasks`.`task_id`
                JOIN `megaforce_stats`.`absolute_task_ordering`
                ON `megaforce_stats`.`absolute_task_ordering`.`task_id` = `megaforce_game`.`tasks`.`id`
                WHERE DATE(`date_completed`) = DATE(`date_created`) AND `mission_id` = @current_mission_id AND `active` = 1
                GROUP BY DATE(`date_completed`), `megaforce_game`.`tasks`.`id`
                ORDER BY `order`
        ) AS `task_info` ON `task_info`.`date` = `sessions`.`date`;

        -- Create our CREATE TABLE statement
        SET @mission_chart_table_name = CONCAT("chart_first_map_daily_", @current_mission_id);
        SELECT
                GROUP_CONCAT(`id` SEPARATOR "_completion` INT(11) UNSIGNED NOT NULL, `task_") INTO @mission_chart_task_columns
        FROM
                `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY
                `order`;

        SET @drop_mission_chart = CONCAT("DROP TABLE IF EXISTS `megaforce_stats`.`", @mission_chart_table_name, "`");

        PREPARE stmt FROM @drop_mission_chart;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        SET @create_mission_chart = CONCAT("
                CREATE TABLE `megaforce_stats`.`", @mission_chart_table_name, "` (
                        `date` DATE NOT NULL,
                        `task_", @mission_chart_task_columns, "_completion` INT(11) UNSIGNED NOT NULL,
                        `segment` VARCHAR(32) DEFAULT 'Unknown',
                        PRIMARY KEY (`date`,`segment`)
                ) ENGINE = InnoDB DEFAULT CHARSET=utf8
        ");

        PREPARE stmt FROM @create_mission_chart;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        SELECT
                GROUP_CONCAT(`id` SEPARATOR "_completion`.`number_completed` / `sessions`.`new_users` * 100, `task_") INTO @task_list
        FROM
                `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY
                `order`;

        SELECT
                GROUP_CONCAT(
                                CONCAT(`id`, " GROUP BY DATE(`date_completed`), `segment`) AS `task_", `id`, "_completion` ON `task_", `id`, "_completion`.`segment` = `sessions`.`segment` AND `task_", `id`)
                        SEPARATOR
                                "_completion`.`date` = `sessions`.`date`
                                LEFT JOIN (
                                        SELECT
                                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`, `segment`
                                        FROM `megaforce_game`.`track_completed_tasks`
                                        JOIN `megaforce_stats`.`accounts_real`
                                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = "
                ) INTO @task_join_tables
        FROM
                `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY
                `order`;

        SET @insert_mission_chart = CONCAT("
                INSERT INTO
                        `megaforce_stats`.`", @mission_chart_table_name, "`
                SELECT
                        `sessions`.`date`,`task_", @task_list, "_completion`.`number_completed` / `sessions`.`new_users` * 100, `sessions`.`segment`
                FROM (
                        SELECT
                                `date`, `new_users`, `segment`
                        FROM `megaforce_stats`.`sessions_daily`
                        WHERE DATE(`date`) > @first_date
                        AND DATE(`date`) <= @last_date
                        GROUP BY `date`, `segment`
                ) AS `sessions`
                LEFT JOIN (
                        SELECT
                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`, `segment`
                        FROM `megaforce_game`.`track_completed_tasks`
                        JOIN `megaforce_stats`.`accounts_real`
                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = ", @task_join_tables, "_completion`.`date` = `sessions`.`date`
        ");

        PREPARE stmt FROM @insert_mission_chart;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        SELECT
                GROUP_CONCAT(
                                CONCAT(`id`, " GROUP BY DATE(`date_completed`)) AS `task_", `id`, "_completion` ON `task_", `id`)
                        SEPARATOR
                                "_completion`.`date` = `sessions`.`date`
                                LEFT JOIN (
                                        SELECT
                                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`
                                        FROM `megaforce_game`.`track_completed_tasks`
                                        JOIN `megaforce_stats`.`accounts_real`
                                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = "
                ) INTO @task_join_tables
        FROM
                `megaforce_game`.`tasks`
        WHERE
                `mission_id` = @current_mission_id AND `active` = 1
        ORDER BY
                `order`;

        SET @insert_mission_chart = CONCAT("
                INSERT INTO
                        `megaforce_stats`.`", @mission_chart_table_name, "`
                SELECT
                        `sessions`.`date`,`task_", @task_list, "_completion`.`number_completed` / `sessions`.`new_users` * 100, -1
                FROM (
                        SELECT
                                `date`, SUM(`new_users`) AS `new_users`
                        FROM `megaforce_stats`.`sessions_daily`
                        WHERE DATE(`date`) > @first_date
                        AND DATE(`date`) <= @last_date
                        GROUP BY `date`
                ) AS `sessions`
                LEFT JOIN (
                        SELECT
                                DATE(`date_completed`) AS `date`, COUNT(*) AS `number_completed`
                        FROM `megaforce_game`.`track_completed_tasks`
                        JOIN `megaforce_stats`.`accounts_real`
                        ON `track_completed_tasks`.`user_name` = `accounts_real`.`userName`
                        WHERE DATE(`date_created`) = DATE(`date_completed`) AND `task_id` = ", @task_join_tables, "_completion`.`date` = `sessions`.`date`
        ");

        PREPARE stmt FROM @insert_mission_chart;
        EXECUTE stmt;
        DEALLOCATE PREPARE stmt;

        -- Dynamically create our charts (multiple data by mission)
        DELETE FROM `megaforce_stats`.`gecko_chart_sql` WHERE `sql_key` = CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id);
        DELETE FROM `megaforce_stats`.`gecko_chart_info` WHERE `sql_key` = CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id);

        INSERT INTO
                `megaforce_stats`.`gecko_chart_sql` (`sql_key`,`sql_query`,`data_field`,`segment_field`)
        VALUES
                (CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id), CONCAT("SELECT * FROM `megaforce_stats`.`", @mission_chart_table_name, "`"), "date", "segment");

        INSERT INTO
                `megaforce_stats`.`gecko_chart_info` (`sql_key`,`data_field`,`title`,`category`,`sort_order`,`type`,`data_name`,`chart_type`)
        VALUES
                (CONCAT("CHART_FIRST_MAP_DAILY_", @current_mission_id), "", CONCAT("Mission ", @current_mission_id, " Task Completion"), 10, @sort_order, "spline", "", "hc_line_multiple_segments_date");

        SET @sort_order = @sort_order + 1;

        SELECT
                IF(COUNT(`unlocked_mission_id`) > 0, `unlocked_mission_id`, -1) INTO @current_mission_id
        FROM
                `megaforce_game`.`mission_dependencies`
        WHERE
                `required_mission_id` = @current_mission_id;
END WHILE;

END;;
DELIMITER ;
[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

Planet DebianUwe Kleine-König: IPv6 in my home network

I am lucky and get both IPv4 (without CGNAT) and IPv6 from my provider. Recently, after upgrading my desk router (a Netgear WNDR3800 that serves the network on my desk) from OpenWRT to the latest LEDE, I looked into what could be improved in the IPv6 setup for both my home network (served by a FRITZ!Box) and my desk network.

Unfortunately I was unable to improve the situation compared to what I already had before.

Things that work

Making IPv6 work in general was easy, just a few clicks in the configuration of the FRITZ!Box and it mostly worked. After that I have:

  • IPv6 connectivity in the home net
  • IPv6 connectivity in the desk net

Things that don't work

There are a few things, however, that I'd like to have that are not that easy, it seems:

ULA for both nets

I let the two routers announce a ULA prefix each. Unfortunately I was unable to make the LEDE box announce its net on the wan interface for clients in the home net. So the hosts in the desk net know how to reach the hosts in the home net, but not the other way round, which makes it quite pointless. (It works fine as long as the FRITZ!Box announces a global net, but I'd like to have local communication work independently of the global connectivity.)

To fix this I'd need something like radvd on my LEDE router, but that isn't provided by LEDE (or OpenWRT) any more, as odhcpd is supposed to be used instead, and AFAICT odhcpd is unable to send RAs on the wan interface. OK, I could probably install bird, but that seems a bit oversized. I created an entry in the LEDE forum but haven't received any reply so far.
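For reference, if radvd were available, announcing a route to the desk net's ULA prefix on the upstream interface would only take a few lines; this is a sketch with a made-up interface name and prefix, not my actual configuration:

# radvd.conf sketch: advertise a route to the desk net's ULA prefix
interface eth0.2 {
    AdvSendAdvert on;
    route fd12:3456:789a:1::/64 {
    };
};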

Alternatively (but less pretty) I could set up an IPv6 route in the FRITZ!Box, but that only works with a newer firmware and as this router is owned by my provider I cannot update it.

Firewalling

The FRITZ!Box has a firewall that is not very configurable. I can punch a hole in it for hosts with a given interface-ID, but that only works for hosts in the home net, not the machines in the delegated subnet behind the LEDE router. In fact I think the FRITZ!Box should delegate firewalling for a delegated net also to the router of that subnet.

So having a global address on the machines on my desk doesn't allow me to reach them from the internet.

Update: according to the German changelog, firmware 6.83 seems to include that feature. Cheers AVM. Now waiting for my provider to update ...

Planet DebianJunichi Uekawa: Recently I was writing log analysis tools in javascript.

Recently I was writing log analysis tools in javascript. The javascript part is challenging.

Planet DebianJames McCoy: Monthly FLOSS activity - 2017/09 edition

Debian

devscripts

Before deciding to take an indefinite hiatus from devscripts, I prepared one more upload merging various contributed patches and a bit of last minute cleanup.

  • build-rdeps

    • Updated build-rdeps to work with compressed apt indices. (Debian bug #698240)
    • Added support for Build-Arch-{Conflicts,Depends} to build-rdeps. (adc87981)
    • Merged Andreas Henriksson's patch for setting remote.<name>.push-url when using debcheckout to clone a git repository. (Debian bug #753838)
  • debsign

    • Updated bash completion for gpg keys to use gpg --with-colons (sketched below), instead of manually parsing gpg -K output. Aside from being the Right Way™ to get machine-parseable information out of gpg, it fixed completion when gpg is a 2.x version. (Debian bug #837380)
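
    For the curious, the colon-delimited output is easy to dissect with standard tools; a rough illustration (not the actual completion code) that pulls secret key IDs out of it:

    gpg --with-colons -K 2>/dev/null | awk -F: '$1 == "sec" { print $5 }'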

I also set up integration with Travis CI to hopefully catch issues sooner than "while preparing an upload", as was typically the case before. Anyone with push access to the Debian/devscripts GitHub repo can take advantage of this to test out changes, or keep the development branches up to date. In the process, I was able to make some improvements to travis.debian.net, namely support for DEB_BUILD_PROFILES ¹² and using a separate, minimal docker image for running autopkgtests.

unibilium

  • Packaged the new upstream release (1.2.1)

  • Basic package maintenance (-dbgsym package, policy update, enabled hardening flags).

  • Uploaded 1.2.1-1

neovim

  • Attempted to nudge lua-nvim's builds along on a couple architectures where they were waiting for neovim to be installable

    • x32: Temporarily removed lua-nvim Build-Depends to break the BD-Uninstallable cycle between lua-nvim and neovim. ✓
    • powerpcspe: Temporarily removed luajit Build-Depends, reducing test scope, to fix the build. ❌
      • If memory serves, the test failures are fixed upstream for the next release.
  • Uploaded 0.2.0-4

Oddly, the mips64el builds were in BD-Uninstallable state, even though luajit's buildd status showed it was built. Looking further, I noticed the libluajit-5.1{,-dev} binary packages didn't have the mips64el architecture enabled, so I asked for it to be enabled.

msgpack-c

There were a few packages left which would FTBFS if I uploaded msgpack-c 2.x to unstable.

All of the bug reports had either trivial work arounds (i.e., forcing use of the v1 C++ API) or trivial patches. However, I didn't want to continue waiting for the packages to get fixed since I knew other people had expressed interest in the new msgpack-c.

Trying to avoid making other packages insta-buggy, I NMUed autobahn-cpp with the v1 workaround. That didn't go over well, partly because I didn't send a finalized "Hey, I'd like to get this done and here's my plan to NMU" email.

Based on that feedback, I decided to bump the remaining bugs to "serious" instead of NMUing and upload msgpack-c. Thanks to Jonas Smedegaard for quickly integrating my proposed fix for libdata-messagepack-perl. Hopefully, upstream has some time to review the PR soon.

vim

  • Used the powerpc porterbox to debug and fix a 32-bit integer overflow that was causing test failures.

  • Asked the vim-perl folks about getting updated runtime files to Bram, after Jakub Wilk filed Debian bug #873755. This had been fixed 4+ years earlier, but not yet merged back into Vim. Thanks to Rob Hoelz for pulling things together and sending the updates to Bram.

  • I've continued to receive feedback from Debian users about their frustration with Vim's new "defaults.vim", both in regards to the actual default settings and its interaction with the system-wide vimrc file. While I still don't intend to deviate from upstream's behavior, I did push back some more on the existing behavior. I appreciate Christian Brabandt's effort, as always, to understand the issue at hand and have constructive discussions. His final suggestion seems like it will resolve the system vimrc interaction, so hopefully Bram is receptive to it.

  • Uploaded 2:8.0.1144-1

  • Thanks to a nudge from Salvatore Bonaccorso and Moritz Mühlenhoff, I uploaded 2:8.0.0197-4+deb9u1 which fixes CVE-2017-11109. I had intended to do this much sooner, but it fell through the cracks. Due to Adam Barratt's quick responses, this should make it into the upcoming Stretch 9.2 release.

subversion

  • Started work on updating the packaging
    • Converted to 3.0 (quilt) source format
    • Updated to debhelper 10 compat
    • Initial attempts at converting to a dh rules file
      • Running into various problems here and still trying to figure out whether they're in the upstream build system, Debian's patches, or both.

neovim

  • Worked with Niko Dittmann to fix build failures Niko was experiencing on OpenBSD 6.1 #7298

  • Merged upstream Vim patches into neovim from various contributors

  • Discussed focus detection behavior after a recent change in the implementation (#7221)

    • While testing focus detection in various terminal emulators, I noticed pangoterm didn't support this. I submitted a merge request on libvterm to provide an API for reporting focus changes. If that's merged, it will be trivial for pangoterm to notify applications when the terminal has focus.
  • Fixed a bug in our tooling around merging Vim patches, which was causing it to incorrectly drop certain files from the patches. #7328

Planet Linux AustraliaJames Morris: Linux Security Summit 2017 Roundup

The 2017 Linux Security Summit (LSS) was held last month in Los Angeles over the 14th and 15th of September.  It was co-located with Open Source Summit North America (OSSNA) and the Linux Plumbers Conference (LPC).

LSS 2017 sign at conference

LSS 2017

Once again we were fortunate to have general logistics managed by the Linux Foundation, allowing the program committee to focus on organizing technical content.  We had a record number of submissions this year and accepted approximately one third of them.  Attendance was very strong, with ~160 attendees — another record for the event.

LSS 2017 Attendees

On the day prior to LSS, attendees were able to access a day of LPC, which featured two tracks with a security focus:

Many thanks to the LPC organizers for arranging the schedule this way and allowing LSS folk to attend the day!

Realtime notes were made of these microconfs via etherpad:

I was particularly interested in the topic of better integrating LSM with containers, as there is an increasingly common requirement for nesting of security policies, where each container may run its own apparently independent security policy, and also a potentially independent security model.  I proposed the approach of introducing a security namespace, where all security interfaces within the kernel are namespaced, including LSM.  It would potentially solve the container use-cases, and also the full LSM stacking case championed by Casey Schaufler (which would allow entirely arbitrary stacking of security modules).

This would be a very challenging project, to say the least, and one which is further complicated by containers not being a first class citizen of the kernel.   This leads to security policy boundaries clashing with semantic functional boundaries e.g. what does it mean from a security policy POV when you have namespaced filesystems but not networking?

Discussion turned to the idea that it is up to the vendor/user to configure containers in a way which makes sense for them, and similarly, they would also need to ensure that they configure security policy in a manner appropriate to that configuration.  I would say this means that semantic responsibility is pushed to the user with the kernel largely remaining a set of composable mechanisms, in relation to containers and security policy.  This provides a great deal of flexibility, but requires those building systems to take a great deal of care in their design.

There are still many issues to resolve, both upstream and at the distro/user level, and I expect this to be an active area of Linux security development for some time.  There were some excellent followup discussions in this area, including an approach which constrains the problem space. (Stay tuned)!

A highlight of the TPMs session was an update on the TPM 2.0 software stack, by Philip Tricca and Jarkko Sakkinen.  The slides may be downloaded here.  We should see a vastly improved experience over TPM 1.x with v2.0 hardware capabilities, and the new software stack.  I suppose the next challenge will be TPMs in the post-quantum era?

There were further technical discussions on TPMs and container security during subsequent days at LSS.  Bringing the two conference groups together here made for a very productive event overall.

TPMs microconf at LPC with Philip Tricca presenting on the 2.0 software stack.

This year, due to the overlap with LPC, we unfortunately did not have any LWN coverage.  There are, however, excellent writeups available from attendees:

There were many awesome talks.

The CII Best Practices Badge presentation by David Wheeler was an unexpected highlight for me.  CII refers to the Linux Foundation’s Core Infrastructure Initiative , a preemptive security effort for Open Source.  The Best Practices Badge Program is a secure development maturity model designed to allow open source projects to improve their security in an evolving and measurable manner.  There’s been very impressive engagement with the project from across open source, and I believe this is a critically important effort for security.

CII Badge Project adoption (from David Wheeler’s slides).

During Dan Cashman’s talk on SELinux policy modularization in Android O,  an interesting data point came up:

We of course expect to see application vulnerability mitigations arising from Mandatory Access Control (MAC) policies (SELinux, Smack, and AppArmor), but if you look closely this refers to kernel vulnerabilities.   So what is happening here?  It turns out that a side effect of MAC policies, particularly those implemented in tightly-defined environments such as Android, is a reduction in kernel attack surface.  It is generally more difficult to reach such kernel vulnerabilities when you have MAC security policies.  This is a side-effect of MAC, not a primary design goal, but nevertheless appears to be very effective in practice!

Another highlight for me was the update on the Kernel Self Protection Project lead by Kees, which is now approaching its 2nd anniversary, and continues the important work of hardening the mainline Linux kernel itself against attack.  I would like to also acknowledge the essential and original research performed in this area by grsecurity/PaX, from which this mainline work draws.

From a new development point of view, I’m thrilled to see the progress being made by Mickaël Salaün, on Landlock LSM, which provides unprivileged sandboxing via seccomp and LSM.  This is a novel approach which will allow applications to define and propagate their own sandbox policies.  Similar concepts are available in other OSs such as OSX (seatbelt) and BSD (pledge).  The great thing about Landlock is its consolidation of two existing Linux kernel security interfaces: LSM and Seccomp.  This ensures re-use of existing mechanisms, and aids usability by utilizing already familiar concepts for Linux users.

Overall I found it to be an incredibly productive event, with many new and interesting ideas arising and lots of great collaboration in the hallway, lunch, and dinner tracks.

Slides from LSS may be found linked to the schedule abstracts.

We did not have a video sponsor for the event this year, and we’ll work on that again for next year’s summit.  We have discussed holding LSS again next year in conjunction with OSSNA, which is expected to be in Vancouver in August.

We are also investigating a European LSS in addition to the main summit for 2018 and beyond, as a way to help engage more widely with Linux security folk.  Stay tuned for official announcements on these!

Thanks once again to the awesome event staff at LF, especially Jillian Hall, who ensured everything ran smoothly.  Thanks also to the program committee who review, discuss, and vote on every proposal, ensuring that we have the best content for the event, and who work on technical planning for many months prior to the event.  And of course thanks to the presenters and attendees, without whom there would literally and figuratively be no event :)

See you in 2018!

 

Planet Linux AustraliaOpenSTEM: Stone Axes and Aboriginal Stories from Victoria

In the most recent edition of Australian Archaeology, the journal of the Australian Archaeological Association, there is a paper examining the exchange of stone axes in Victoria and correlating these patterns of exchange with Aboriginal stories in the 19th century. This paper is particularly timely with the passing of legislation in the Victorian Parliament on […]

,

Planet DebianIain R. Learmonth: Free Software Efforts (2017W39)

Here’s my weekly report for week 39 of 2017. This week I travelled to Berlin and caught up on some podcasts while doing so. I’ve also had some trouble with the RSS feeds on my blog, but hopefully this is all fixed now.

Thanks to Martin Milbret I now have a replacement for my dead workstation, an HP Z600, and there will be a blog post about this new setup next week. Thanks also to Sýlvan and a number of others who made donations towards getting me up and running again. A breakdown of the donations and expenses can be found at the end of this post.

Debian

Two of my packages measurement-kit from OONI and python-azure-devtools used to build the Azure Python SDK (packaged as python-azure) have been accepted by ftp-master into Debian’s unstable suite.

I have also sponsored uploads for comptext, comptty, fllog, flnet and gnustep-make.

I had previously encouraged Eric Heintzmann to become a DM and I have given him DM upload privileges for the gnustep-make package as he has shown to care for the GNUstep packages well.

Bugs closed (fixed/wontfix): #875125¹, #875126¹, #861753, #873083

Tor Project

My Tor Project contributions this week were primarily attending the Tor Metrics meeting which I have reported on in a separate blog post.

Sustainability

I believe it is important to be clear not only about the work I have already completed but also about the sustainability of this work into the future. I plan to include a short report on the current sustainability of my work in each weekly report.

The replacement workstation arrived on Friday and is now up and running. In total I received £308.73 in donations and spent £36.89 on video adapters and £141.94 on replacement hard drives for my NAS (which includes my local Debian mirror and backups).

For the Tor Metrics meeting in Berlin, Tor Project paid my flights and accommodation and I paid only for ground transport and food myself. The total cost for ground transport during the trip was £45.92 (taxi to airport, 1 Tageskarte) and total cost for food was £23.46.

The current funds I have available for equipment, travel and other free software expenses are now £60.52. I do not believe that any hardware I rely on is looking at imminent failure.


  1. Fixed by a sponsored upload, not by my changes [return]

Planet DebianThorsten Alteholz: My Debian Activities in September 2017

FTP assistant

This month almost the same numbers as last month appeared in the statistics. I accepted 213 packages and rejected 15 uploads. The overall number of packages that got accepted this month was 425.

Debian LTS

This was my thirty-ninth month doing work for the Debian LTS initiative, started by Raphael Hertzog at Freexian.

This month my overall workload was 15.75h. During that time I did LTS uploads of:

  • [DLA 1109-1] libraw security update for one CVE
  • [DLA 1117-1] opencv security update for 13 CVEs

I also took care of libstruts1.2-java, marking all CVEs as not-affected, and marked all CVEs for jasper as no-dsa. I also started to work on sam2p.

Just as I wanted to upload a new version of libofx, a new CVE was discovered that was not closed in time. I tried to find a patch on my own but had difficulties in reproducing this issue.

Other stuff

This month I made myself familiar with glewlwyd and according to upstream, the Debian packages work out-of-the box. However upstream does not stop working on that software, so I uploaded new versions of hoel, ulfius and glewlwyd.

As libjwt needs libb64, which was orphaned, I used it as DOPOM and adopted it.

Does anybody still know the Mayhem-bugs? I could close one by uploading an updated version of siggen.

I also went through my packages and looked for patches that had piled up in the BTS. As a result I uploaded updated versions of radlib, te923con, node-starttls, harminv and uucp.

New upstream versions of openoverlayrouter and fasttree also made it into the archive.

Last but not least I moved several packages to the debian-mobcom group.

Don MartiThe capital dynamics are all wrong.

Ben Werdmuller, in Why open source software isn’t as ethical as you think it is:

When you release open source software, you have this egalitarian idea that you’re making it available to people who can really use it, who can then built on it to make amazing things....While this is a fine position to take, consider who has the most resources to build on top of a project that requires development. With most licenses, you’re issuing a free pass to corporations and other wealthy organizations, while providing no resources to those needy users. OpenSSL, which every major internet company depends on, was until recently receiving just $2,000 a year in donations, with the principal author in financial difficulty.

This is a good example of one of the really interesting problems of working in an immature industry. We don't have our incentives hooked up right yet.

  • Why does open source have some bugs that stay open longer than careers do?

  • Why do people have the I've been coding to create lots of value for big companies for years and I'm still broke problem?

  • How do millions of dollars of shared vigilance even make the news, when the value extracted is in the billions?

  • Why is the meritocracy of open source even more biased than other technical and collaborative fields? (Are we at the bottom of the standings?) Why are we walking away from that many potential contributors?

Quinn Norton: Software is a Long Con:

It is to the benefit of software companies and programmers to claim that software as we know it is the state of nature. They can do stupid things, things we know will result in software vulnerabilities, and they suffer no consequences because people don’t know that software could be well-written. Often this ignorance includes developers themselves. We’ve also been conditioned to believe that software rots as fast as fruit. That if we waited for something, and paid more, it would still stop working in six months and we’d have to buy something new. The cruel irony of this is that despite being pushed to run out and buy the latest piece of software and the latest hardware to run it, our infrastructure is often running on horribly configured systems with crap code that can’t or won’t ever be updated or made secure.

We have two possible futures.

  • People finally get tired of software's lethal irresponsibility, and impose a regulatory regime. Rent-seekers rejoice. Software innovation as we know it ceases, and we get something like the pre-breakup Bell System—you have to be an insider to build and deploy anything that reaches real people.

  • The software scene outgrows the "disclaimer of implied warranty" level of quality, on its own.

How do we get there? One approach is to use market mechanisms to help quantify software risk, then enable users with a preference for high quality and developers with a preference for high quality to interact directly, not through the filter of software companies that win by releasing early at a low quality level.

There is an opportunity here for the kinds of companies that are now doing open source license analysis. Right now they're analyzing relatively few files in a project—the licenses and copyrights. A tool will go through your software stack, and hooray, you don't have anything that depends on something with an inconsistent license, or on a license that would look bad to the people you want to sell your company to.

What if that same tool would give you a better quality number for your stack, based on walking your dependency tree and looking for weak points based on market activity?

Why blockchain?

One important reason is that black or gray hat security researchers are likely to have extreme confidentiality requirements, especially when trading on knowledge from a co-conspirator who may not be aware of the trade. (A possible positive externality win from bug futures markets is the potential to reduce the trustworthiness of underground vulnerability markets, driving marginal vuln transactions to the legit market.)

Bug futures series so far

Planet DebianPaul Wise: FLOSS Activities September 2017

Changes

Issues

Review

Administration

  • icns: merged patches
  • Debian: help guest user with access, investigate/escalate broken network, restart broken stunnels, investigate static.d.o storage, investigate weird RAID mails, ask hoster to investigate power issue,
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: merged & deployed patch, redirect DDTSS translator, redirect user support requests, whitelist email addresses, update email for accounts with bouncing email,
  • Debian derivatives census: merged/deployed patches
  • Debian PTS: debugged cron mails, deployed changes, reran scripts, fixed configuration file
  • Openmoko: debug reboot issue, debug load issues

Communication

Sponsors

The samba bug was sponsored by my employer. All other work was done on a volunteer basis.

,

Planet DebianChris Lamb: Free software activities in September 2017

Here is my monthly update covering what I have been doing in the free software world in September 2017 (previous month):

  • Submitted a pull request to Quadrapassel (the Gnome version of Tetris) to start a new game when the pause button is pressed outside of a game. This means you would no longer have to use the mouse to start a new game. [...]
  • Made a large number of improvements to AptFS — my FUSE-based filesystem that provides a view on unpacked Debian source packages as regular folders — including moving away from manual parsing of package lists [...] and numerous code tidying/refactoring changes.
  • Sent a small patch to django-sitetree, a Django library for menu and breadcrumb navigation elements, so that it does not mask test exit codes from the surrounding shell. [...]
  • Updated travis.debian.net, my hosted service for projects that host their Debian packaging on GitHub to use the Travis CI continuous integration platform to test builds:
    • Add support for "sloppy" backports. Thanks to Bernd Zeimetz for the idea and ongoing testing. [...]
    • Merged a pull request from James McCoy to pass DEB_BUILD_PROFILES through to the build. [...]
    • Workaround Travis CI's HTTP proxy which does not appear to support SRV records. [...]
    • Run debc from devscripts if the build was successful [...] and output the .buildinfo file if it exists [...].
  • Fixed a few issues in local-debian-mirror, my package to easily maintain and customise a local Debian mirror via the DebConf configuration tool:
    • Fix an issue where file permissions from the remote could result in a local archive that was impossible to access. [...]
    • Clear out empty directories on the local repository. [...]
  • Updated django-staticfiles-dotd, my Django staticfiles adaptor to concatenate static media in .d-style directories, to support Python 3.x by using bytes objects (commit) and move away from monkeypatch as it does not have a Python 3.x port yet (commit).
  • I also posted a short essay to my blog entitled "Ask the Dumb Questions" as well as provided an update on the latest Lintian release.

Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, most software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to allow verification that no flaws have been introduced — either maliciously or accidentally — during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

I have generously been awarded a grant from the Core Infrastructure Initiative to fund my work in this area.

This month I:

  • Published a short blog post about how to determine which packages on your system are reproducible. [...]
  • Submitted a pull request for Numpy to make the generated config.py files reproducible. [...]
  • Provided a patch to GTK upstream to ensure the immodules.cache files are reproducible. [...]
  • Within Debian:
    • Updated isdebianreproducibleyet.com, moving it to HTTPS, adding cachebusting as well as keeping the number up-to-date.
    • Submitted the following patches to fix reproducibility-related toolchain issues:
      • gdk-pixbuf: Make the output of gdk-pixbuf-query-loaders reproducible. (#875704)
      • texlive-bin: Make PDF IDs reproducible. (#874102)
    • Submitted a patch to fix a reproducibility issue in doit.
  • Categorised a large number of packages and issues in the Reproducible Builds "notes" repository.
  • Chaired our monthly IRC meeting. [...]
  • Worked on publishing our weekly reports. (#123, #124, #125, #126 & #127)


I also made the following changes to our tooling:

reproducible-check

reproducible-check is our script to determine which packages actually installed on your system are reproducible or not.

  • Handle multi-architecture systems correctly. (#875887)
  • Use the "restricted" data file to mask transient issues. (#875861)
  • Expire the cache file after one day and base the local cache filename on the remote name. [...] [...]

I also blogged about this utility. [...]

diffoscope

diffoscope is our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues.

  • Filed an issue attempting to identify the causes behind an increased number of timeouts visible in our CI infrastructure, including running a number of benchmarks of recent versions. (#875324)
  • New features:
    • Add "binwalking" support to analyse concatenated CPIO archives such as initramfs images. (#820631).
    • Print a message if we are reading data from standard input. [...]
  • Bug fixes:
    • Loosen matching of file(1)'s output to ensure we correctly also match TTF files under file version 5.32. [...]
    • Correct references to path_apparent_size in comparators.utils.file and self.buf in diffoscope.diff. [...] [...]
  • Testing:
    • Make failing some critical flake8 tests result in a failed build. [...]
    • Check we identify all CPIO fixtures. [...]
  • Misc:
    • No need for try-assert-except block in setup.py. [...]
    • Compare types with identity not equality. [...] [...]
    • Use logging.py's lazy argument interpolation. [...]
    • Remove unused imports. [...]
    • Numerous PEP8, flake8, whitespace, other cosmetic tidy-ups.

strip-nondeterminism

strip-nondeterminism is our tool to remove specific non-deterministic results from a completed build.

  • Log which handler processed a file. (#876140). [...]

disorderfs

disorderfs is our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out reproducibility issues.



Debian

My activities as the current Debian Project Leader are covered in my monthly "Bits from the DPL" email to the debian-devel-announce mailing list.

Lintian

I made a large number of changes to Lintian, the static analysis tool for Debian packages. It reports on various errors, omissions and general quality-assurance issues to maintainers:

I also blogged specifically about the Lintian 2.5.54 release.


Patches contributed

  • debconf: Please add a context manager to debconf.py. (#877096)
  • nm.debian.org: Add pronouns to ALL_STATUS_DESC. (#875128)
  • user-setup: Please drop set_special_users hack added for "the convenience of heavy testers". (#875909)
  • postgresql-common: Please update README.Debian for PostgreSQL 10. (#876438)
  • django-sitetree: Should not mask test failures. (#877321)
  • charmtimetracker:
    • Missing binary dependency on libqt5sql5-sqlite. (#873918)
    • Please drop "Cross-Platform" from package description. (#873917)

I also submitted 5 patches for packages with incorrect calls to find(1) in debian/rules against hamster-applet, libkml, pyferret, python-gssapi & roundcube.


Debian LTS


This month I have been paid to work 15¾ hours on Debian Long Term Support (LTS). In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, etc.
  • Documented an example usage of autopkgtests to test security changes.
  • Issued DLA 1084-1 and DLA 1085-1 for libidn and libidn2-0 to fix integer overflow vulnerabilities in Punycode handling.
  • Issued DLA 1091-1 for unrar-free to prevent a directory traversal vulnerability from a specially-crafted .rar archive. This update introduces a regression test.
  • Issued DLA 1092-1 for libarchive to prevent malicious .xar archives causing a denial of service via a heap-based buffer over-read.
  • Issued DLA 1096-1 for wordpress-shibboleth, correcting a cross-site scripting vulnerability in the Shibboleth identity provider module.

Uploads

  • python-django:
    • 1.11.5-1 — New upstream security release. (#874415)
    • 1.11.5-2 — Apply upstream patch to fix QuerySet.defer() with "super" and "subclass" fields. (#876816)
    • 2.0~alpha1-2 — New upstream alpha release of Django 2.0, dropping support for Python 2.x.
  • redis:
    • 4.0.2-1 — New upstream release.
    • 4.0.2-2 — Update 0004-redis-check-rdb autopkgtest test to ensure that the redis.rdb file exists before testing against it.
    • 4.0.2-2~bpo9+1 — Upload to stretch-backports.
  • aptfs (0.11.0-1) — New upstream release, moving away from using /var/lib/apt/lists internals. Thanks to Julian Andres Klode for a helpful bug report. (#874765)
  • lintian (2.5.53, 2.5.54) — New upstream releases. (Documented in more detail above.)
  • bfs (1.1.2-1) — New upstream release.
  • docbook-to-man (1:2.0.0-39) — Tighten autopkgtests and enable testing via travis.debian.net.
  • python-daiquiri (1.3.0-1) — New upstream release.

I also made the following non-maintainer uploads (NMUs):

  • vimoutliner (0.3.4+pristine-9.3):
    • Make the build reproducible. (#776369)
    • Expand placeholders in Debian.README. (#575142, #725634)
    • Recommend that the ftplugin is enabled. (#603115)
    • Correct "is not enable" typo.
  • bittornado (0.3.18-10.3):
    • Make the build reproducible. (#796212).
    • Add missing Build-Depends on dh-python.
  • dtc-xen (0.5.17-1.1):
    • Make the build reproducible. (#777322)
    • Add missing Build-Depends on dh-python.
  • dict-gazetteer2k (1.0.0-5.4):
    • Make the build reproducible. (#776376).
    • Override the empty-binary-package Lintian warning to avoid a dak autoreject.
  • cgilib (0.6-1.1) — Make the build reproducible. (#776935)
  • dhcping (1.2-4.2) — Make the build reproducible. (#777320)
  • dict-moby-thesaurus (1.0-6.4) — Make the build reproducible. (#776375)
  • dtaus (0.9-1.1) — Make the build reproducible. (#777321)
  • fastforward (1:0.51-3.2) — Make the build reproducible. (#776972)
  • wily (0.13.41-7.3) — Make the build reproducible. (#777360)

Debian bugs filed

  • clipit: Please choose a sensible startup default in "live" mode. (#875903)
  • git-buildpackage: Please add a --reset option to gbp pull. (#875852)
  • bluez: Please default Device "friendly name" to hostname without domain. (#874094)
  • bugs.debian.org: Please explicitly link to {packages,tracker}.debian.org. (#876746)
  • Requests for packaging:
    • selfspy — log everything you do on the computer. (#873955)
    • shoogle — use the Google API from the shell. (#873916)

FTP Team


As a Debian FTP assistant I ACCEPTed 86 packages: bgw-replstatus, build-essential, caja-admin, caja-rename, calamares, cdiff, cockpit, colorized-logs, comptext, comptty, copyq, django-allauth, django-paintstore, django-q, django-test-without-migrations, docker-runc, emacs-db, emacs-uuid, esxml, fast5, flake8-docstrings, gcc-6-doc, gcc-7-doc, gcc-8, golang-github-go-logfmt-logfmt, golang-github-google-go-cmp, golang-github-nightlyone-lockfile, golang-github-oklog-ulid, golang-pault-go-macchanger, h2o, inhomog, ip4r, ldc, libayatana-appindicator, libbson-perl, libencoding-fixlatin-perl, libfile-monitor-lite-perl, libhtml-restrict-perl, libmojo-rabbitmq-client-perl, libmoosex-types-laxnum-perl, libparse-mime-perl, libplack-test-agent-perl, libpod-projectdocs-perl, libregexp-pattern-license-perl, libstring-trim-perl, libtext-simpletable-autowidth-perl, libvirt, linux, mac-fdisk, myspell-sq, node-coveralls, node-module-deps, nov-el, owncloud-client, pantomime-clojure, pg-dirtyread, pgfincore, pgpool2, pgsql-asn1oid, phpliteadmin, powerlevel9k, pyjokes, python-evdev, python-oslo.db, python-pygal, python-wsaccel, python3.7, r-cran-bindrcpp, r-cran-dotcall64, r-cran-glue, r-cran-gtable, r-cran-pkgconfig, r-cran-rlang, r-cran-spatstat.utils, resolvconf-admin, retro-gtk, ring-ssl-clojure, robot-detection, rpy2-2.8, ruby-hocon, sass-stylesheets-compass, selinux-dbus, selinux-python, statsmodels, webkit2-sharp & weston.

I additionally filed 4 RC bugs against packages that had incomplete debian/copyright files: comptext, comptty, ldc & python-oslo.concurrency.

Rondam RamblingsThe Bitcoin apocalypse is coming in mid-November to a block chain near you

[UPDATE: This post originally said that the SegWit2X fork would happen on November 1.  In fact it is scheduled to occur on block 494,764.  It is impossible to predict exactly when this will happen, but at current hash rates it will probably be some time in mid-to-late November.  The post has been edited to reflect this.] Back in 2004 someone launched a web site called FuckedGoogle.com

Planet DebianIain R. Learmonth: Breaking RSS Change in Hugo

My website and blog are managed by the static site generator Hugo. I’ve found this to be a stable and flexible system, but the last upgrade introduced a breaking change that broke the syndication of my blog on various planets.

At first I thought that perhaps, with my increased posting rate, the planets were truncating my posts, but this was not the case. The problem was Hugo pull request #3129, where for some reason the RSS feed was changed to contain only a “lead” instead of the full article.

I’ve seen other content management systems offer a similar option but at least they point out that it’s truncated and offer a “read more” link. Here it just looks like I’m publishing truncated unfinished really short posts.

If you take a look at the post above, you’ll see that the change is in an embedded template and it took a little reading of the docs to work out how to revert the change. The steps are actually not that difficult, but it’s still annoying that the change occurred.

In a Hugo site, you will have a layouts directory that will contain your overrides from your theme. Create a new file in the path layouts/_default/rss.xml (you may need to create the _default directory) with the following content:

<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>{{ if eq  .Title  .Site.Title }}{{ .Site.Title }}{{ else }}{{ with .Title }}{{.}} on {{ end }}{{ .Site.Title }}{{ end }}</title>
    <link>{{ .Permalink }}</link>
    <description>Recent content {{ if ne  .Title  .Site.Title }}{{ with .Title }}in {{.}} {{ end }}{{ end }}on {{ .Site.Title }}</description>
    <generator>Hugo -- gohugo.io</generator>{{ with .Site.LanguageCode }}
    <language>{{.}}</language>{{end}}{{ with .Site.Author.email }}
    <managingEditor>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</managingEditor>{{end}}{{ with .Site.Author.email }}
    <webMaster>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</webMaster>{{end}}{{ with .Site.Copyright }}
    <copyright>{{.}}</copyright>{{end}}{{ if not .Date.IsZero }}
    <lastBuildDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</lastBuildDate>{{ end }}
    {{ with .OutputFormats.Get "RSS" }}
        {{ printf "<atom:link href=%q rel=\"self\" type=%q />" .Permalink .MediaType | safeHTML }}
    {{ end }}
    {{ range .Data.Pages }}
    <item>
      <title>{{ .Title }}</title>
      <link>{{ .Permalink }}</link>
      <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" | safeHTML }}</pubDate>
      {{ with .Site.Author.email }}<author>{{.}}{{ with $.Site.Author.name }} ({{.}}){{end}}</author>{{end}}
      <guid>{{ .Permalink }}</guid>
      <description>{{ .Content | html }}</description>
    </item>
    {{ end }}
  </channel>
</rss>
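
To apply the override, drop the file into place and rebuild the site. A minimal sketch, assuming it is run from the root of the Hugo site:

mkdir -p layouts/_default
$EDITOR layouts/_default/rss.xml   # paste the template above
hugo                               # rebuild; the generated feed now carries full post content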

If you like my new Hugo theme, please let me know and I’ll bump tidying it up and publishing it further up my todo list.

Planet DebianHideki Yamane: MIRROR DISK USAGE: growing

One year later: mirror disk usage is growing.
I'll prepare to replace the whole system at the end of this year.

Planet DebianArturo Borrero González: Installing spotify-client in Debian testing (Buster)

debian-spotify logo

Similar to the problem described in the post Google Hangouts in Debian testing (Buster), the Spotify application for Debian (a package called spotify-client) is not ready to run in Debian testing (Buster) as is.

In this particular case, it seems there is only one problem, and it is related to openssl/libssl. The spotify-client package requires libssl1.0.0, while in Debian testing (Buster) we have an updated libssl1.1.

Fortunately, this is rather easy to solve, given the few additional dependencies of both spotify-client and libssl1.0.0.

What we will do is install libssl1.0.0 from jessie-backports so that it coexists with libssl1.1 (a combined sketch of the commands follows the steps below).

Simple steps:

  • 1) add jessie-backports repository to your /etc/apt/sources.list file:
    deb http://httpredir.debian.org/debian/ jessie-backports main

  • 2) update your repo database:
    % user@debian:~ $ sudo aptitude update
    
  • 3) verify we have both libssl1.1 and libssl1.0.0 ready to install:
    % user@debian:~ $ aptitude search libssl
    [...]
    p   libssl1.0.0       - Secure Sockets Layer toolkit - shared libraries                                       
    i   libssl1.1         - Secure Sockets Layer toolkit - shared libraries
    [...]
    
  • 4) Follow steps by Spotify to install the spotify-client package:
    https://www.spotify.com/uk/download/linux/

  • 5) Run it and enjoy your music!

  • 6) You can cleanup the jessie-backports line from /etc/apt/sources.list.
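
For reference, here is a minimal sketch of the procedure above as plain shell commands. It assumes Spotify's repository is added exactly as described in their instructions (step 4); installing libssl1.0.0 by hand is not strictly necessary, since aptitude should pull it in as a dependency, but it makes the step explicit:

echo 'deb http://httpredir.debian.org/debian/ jessie-backports main' | sudo tee -a /etc/apt/sources.list
sudo aptitude update
sudo aptitude install -t jessie-backports libssl1.0.0
# add Spotify's repository as per https://www.spotify.com/uk/download/linux/ , then:
sudo aptitude install spotify-client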


Bonus point: Why jessie-backports?? Well, according to the openssl package tracker, jessie-backports contains the most recent version of the libssl1.0.0 package.

BTW, thanks to the openssl Debian maintainers, their work is really appreciated :-) And thanks to Spotify for providing a Debian package :-)

,

Rondam RamblingsA brief history of political discourse in the United States

1776 When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel them to

Planet DebianEnrico Zini: Systemd socket units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.socket units

Socket units tell systemd to listen on a given IPC socket, network socket, or file system FIFO, and use another unit to service requests to it.

For example, this creates a network service that listens on port 55555:

# /etc/systemd/system/ddate.socket
[Unit]
Description=ddate service on port 55555

[Socket]
ListenStream=55555
Accept=true

[Install]
WantedBy=sockets.target
# /etc/systemd/system/ddate@.service
[Unit]
Description=Run ddate as a network service

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/ddate; sleep 1m; done'
StandardOutput=socket
StandardError=journal

Note that the .service file is called ddate@ instead of ddate: units whose name ends in '@' are template units which can be activated multiple times, by adding any string after the '@' in the unit name.

If I run nc localhost 55555 a couple of times, and then check the list of running units, I see ddate@… instantiated twice, adding the local and remote socket endpoints to the unit name:

$ systemctl list-units 'ddate@*'
  UNIT                                             LOAD   ACTIVE SUB     DESCRIPTION
  ddate@15-127.0.0.1:55555-127.0.0.1:36936.service loaded active running Run ddate as a network service (127.0.0.1:36936)
  ddate@16-127.0.0.1:55555-127.0.0.1:37002.service loaded active running Run ddate as a network service (127.0.0.1:37002)

This allows me to monitor each running service individually.

systemd also automatically creates a slice unit called system-ddate.slice grouping all services together:

$ systemctl status system-ddate.slice
 system-ddate.slice
   Loaded: loaded
   Active: active since Thu 2017-09-21 14:25:02 CEST; 9min ago
    Tasks: 4
   CGroup: /system.slice/system-ddate.slice
           ├─ddate@15-127.0.0.1:55555-127.0.0.1:36936.service
           │ ├─18214 /bin/sh -ec while true; do /usr/bin/ddate; sleep 1m; done
           │ └─18661 sleep 1m
           └─ddate@16-127.0.0.1:55555-127.0.0.1:37002.service
             ├─18228 /bin/sh -ec while true; do /usr/bin/ddate; sleep 1m; done
             └─18670 sleep 1m

This also makes it possible to work with all the running services of this template unit as a whole, sending a signal to all their processes or setting up resource control features for the service as a whole.
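
For instance (a sketch that is not part of the original notes; the quota value is just an example), the whole slice can be throttled or signalled as a single object:

systemctl set-property system-ddate.slice CPUQuota=20%
systemctl kill --signal=SIGTERM system-ddate.slice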

See man systemd.socket and man systemd.service.

CryptogramFriday Squid Blogging: Squid Empire Is a New Book

Regularly I receive mail from people wanting to advertise on, write for, or sponsor posts on my blog. My rule is that I say no to everyone. There is no amount of money or free stuff that will get me to write about your security product or service.

With regard to squid, however, I have no such compunctions. Send me any sort of squid anything, and I am happy to write about it. Earlier this week, for example, I received two -- not one -- copies of the new book Squid Empire: The Rise and Fall of Cephalopods. I haven't read it yet, but it looks good. It's the story of prehistoric squid.

Here's a review by someone who has read it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDHealing hearts at the intersection of modern medicine and indigenous culture

Worldwide, nearly one out of every hundred children is born with a congenital heart disease, which can vary from defective vessels and leaky valves to holes in the heart. Dr. Franz Freudenthal (TED Talk: A new way to heal hearts without surgery) deals in the latter as a pediatric cardiologist who has developed a better alternative, free of invasive surgery, for closing these life-threatening cavities.

So, when a baby is born with a hole in its heart, what happens and how do you fix it?

For a hole in the heart to develop, prematurity and genetic conditions tend to be the leading causes. A baby in the womb does not breathe and relies on the mother until it takes its first breaths at birth, which signals major changes in the body — especially within the cardiovascular and respiratory systems. Breathing, a new experience for the baby, stimulates some vessels in the heart to close. However, this is not always the case, and abnormal communication between the atria can leave passages underdeveloped and gaping.

“When you look at patients with this condition, they seem desperate to breathe,” says Freudenthal. “To close the hole, major surgery used to be the only solution.”

Decades of research reveals that lack of oxygen can also be to blame. In high-altitude locations where air is thin, such as the mountainous regions of Freudenthal’s native Bolivia, the frequency of this kind of heart defect increases dramatically. For high-altitude patients, the holes tend to be more severe due to a larger gap between arteries.

The first of many breakthroughs for a non-invasive mechanism to solve these kinds of heart defects came to Freudenthal during his time in medical school, brainstorming with a classmate while they camped in the Amazon. As they were building their fire, adding kindling to feed the flames, he noticed something that piqued his scientific curiosity.

“The only thing that would not burn in the fire was a green avocado branch,” he says. “Then came a moment of inspiration. So, we used the branch as a mold for our first invention.”

Filling hearts, one hole at a time

Observing the properties of the green avocado branch as it reacted to the flames was a great place to start. The fact that the branch withstood the heat of the fire allowed Freudenthal to look for a metal that could replicate its properties under similar conditions. He eventually landed on a smart material called Nitinol. Made of a nickel-titanium alloy, Nitinol has two unique properties that are incredibly useful in biomedical applications: It can be worked into unique shapes and retain them; and it’s superelastic, meaning that when it’s stretched or flattened, it needs no heating in order to regain its original form.

“I knew this material was ideal since it keeps its shape,” he says. “This is why the device can be transported into the body inside a tube [implantation catheter]. It can be deployed in the right spot inside the heart, recovering its ‘memorized’ shape.”

From that discovery came thousands of hours of lab work, numerous in-vitro and in-vivo studies, and a persistent enthusiasm to unravel such a complex issue. It was a lengthy, demanding process on the road to creating a prototype, a specialized piece of wire coiled into the shape of a plug that could be transferred through a catheter to wherever in the heart it is needed, neatly plugging the hole.

However, an issue arose when Freudenthal and Dr. Alexandra Heath, his wife and partner, realized the device could only service patients below a certain altitude level. Many of their patients lived at 12,000 feet above sea level and had extra-wide gaps in their arteries — larger than the plug of coiled-up wire could cover.

“The first coil could successfully treat only half of the patients in Bolivia,” Freudenthal says. “The search started again. We went back to the drawing board.”

The next generation of device, influenced by past generations

After many trials and several iterations, a key development came from an unlikely source — the loom-weaving technique of the native Andes peoples. Freudenthal’s grandmother, Dr. Ruth Tichauer, a Jewish refugee who resettled in the heart of the Andes mountains, had worked closely — and Freudenthal alongside her, growing up — with remote indigenous communities, and that connection proved ever more fruitful.

For centuries, the women of these communities told stories by weaving complex patterns using looms. With Freudenthal’s vision, instead of fabric yarn, the women carefully weave Nitinol.

“We take this traditional method of weaving and make a design,” Freudenthal says. “The weaving allows us to create a seamless device that doesn’t rust because it’s made of only one piece. It can change by itself into very complex structures.”

From this insight evolved the Nit-Occlud ASD-R system and a way to fix a baby’s heart without major invasive surgery.

As seen above, the device enters the heart through the body’s natural channels via the implantation catheter and expands, placing itself before closing the hole. From start to finish, the entire procedure takes 30 minutes to complete.

After a few days, heart tissue begins to grow over the device — a process called epithelialization — eventually covering it entirely. If the hole is not too large to warrant further surgery, the implant stays as part of the child’s heart for the rest of their life.

“We are so proud that some of our former patients are part of our team,” Freudenthal shares. “We receive strength from our patients — their resilience and courage inspire our creativity.”

Right now, Freudenthal’s company PFM SRL has the Nit-Occlud ASD-R system registered in around 60 countries and estimates it has saved the lives of some 2,500 children.


Krebs on SecurityHere’s What to Ask the Former Equifax CEO

Richard Smith — who resigned as chief executive of big-three credit bureau Equifax this week in the wake of a data breach that exposed 143 million Social Security numbers — is slated to testify in front of no fewer than four committees on Capitol Hill next week. If I were a lawmaker, here are some of the questions I’d ask when Mr. Smith goes to Washington.

capitol

Before we delve into the questions, a bit of background is probably in order. The new interim CEO of Equifax — Paulino do Rego Barros Jr. — took to The Wall Street Journal and other media outlets this week to publish a mea culpa on all the ways Equifax failed in responding to this breach (the title of the op-ed in The Journal was literally “I’m sorry”).

“We were hacked,” Barros wrote. “That’s the simple fact. But we compounded the problem with insufficient support for consumers. Our website did not function as it should have, and our call center couldn’t manage the volume of calls we received. Answers to key consumer questions were too often delayed, incomplete or both.”

Barros stated that Equifax was working to roll out a new system by Jan. 31, 2018 that would let consumers “easily lock and unlock access to their Equifax credit files.”

“You will be able to do this at will,” he continued. “It will be reliable, safe, and simple. Most significantly, the service will be offered free, for life.”

I have argued for years that all of the data points needed for identity thieves to open new lines of credit in your name and otherwise ruin your credit score are available for sale in the cybercrime underground. To be certain, the Equifax breach holds the prospect that ID thieves could update all that stolen data with newer records. I’ve argued that the only sane response to this sorry state of affairs is for consumers to freeze their files at the bureaus, which blocks potential creditors — and ID thieves — from trashing your credit file and credit score.

Equifax is not the only bureau promoting one of these lock services. Since Equifax announced its breach on Sept. 7, big-three credit bureaus Trans Union and Experian have worked feverishly to steer consumers seeking freezes toward these locks instead, arguing that they are easier to use and allow consumers to lock and unlock their credit files with little more than the press of a button on a mobile phone app. Oh, and the locks are free, whereas the bureaus can (and do) charge consumers for placing and/or thawing a freeze (the freeze fee laws differ from state to state).

CREDIT FREEZE VS. CREDIT LOCK

My first group of questions would center around security freezes or credit freezes, and the difference between those and these credit lock services being pushed hard by the bureaus.

Currently, even consumer watchdog groups say they are uncertain about the difference between a freeze and a lock. See this press release from Thursday by U.S. PIRG, the federation of state Public Interest Research Groups, for one such example.

Also, I’m curious to know what percentage of Americans had a freeze prior to the breach, and how many froze their credit files (or attempted to do so) after Equifax announced the breach. The answers to these questions may help explain why the bureaus are now massively pushing their new credit lock offerings (i.e., perhaps they’re worried about the revenue hit they’ll take should a significant percentage of Americans decide to freeze their credit files).

I suspect the pre-breach number is less than one percent. I base this guess loosely on some data I received from the head of security at Dropbox, who told KrebsOnSecurity last year that less than one percent of its user base of 500 million registered users had chosen to turn on 2-factor authentication for their accounts. This extra security step can block thieves from accessing your account even if they steal your password, but many consumers simply don’t take advantage of such offerings because either they don’t know about them or they find them inconvenient.

Bear in mind that while most two-factor offerings are free, most freezes involve fees, so I’d expect the number of pre-breach freezers to be a fraction of one percent. However, if only one half of one percent of Americans chose to freeze their credit files before Equifax announced its breach — and if the total number of Americans requesting a freeze post-breach rose to, say, one percent — that would still be a huge jump (and potentially a painful financial hit to Equifax and the other bureaus).

creditfreeze

So without further ado, here are some questions I’d ask on the topic of credit locks and freezes:

-Approximately how many credit files on Americans does Equifax currently maintain?

-Prior to the Equifax breach, approximately how many Americans had chosen to freeze their credit files at Equifax?

-Approximately how many total Americans today have requested a freeze from Equifax? This should include the company’s best estimate on the number of people who have requested a freeze but — because of the many failings of Equifax’s public response cited by Barros — were unable to do so via phone or the Internet.

-Approximately how much does Equifax charge each time the company sells a credit check (i.e., a bank or other potential creditor performs a “pull” on a consumer credit file)?

-On average, how many times per year does Equifax sell access to consumer’s credit file to a potential creditor?

-Mr. Barros said Equifax will extend its offer of free credit freezes until the end of January 2018. Why not make them free indefinitely, just as the company says it plans to do with its credit lock service?

-In what way does a consumer placing a freeze on their credit file limit Equifax’s ability to do business?

-In what way does a consumer placing a lock on their credit file limit Equifax’s ability to do business?

-If a lock accomplishes the same as a freeze, why create more terminology that only confuses consumers?

-By agreeing to use Equifax’s lock service, will consumers also be opting in to any additional marketing arrangements, either via Equifax or any of its partners?

BREACH RESPONSE

Equifax could hardly have bungled their breach response more if they tried. It is said that one should never attribute to malice what can more easily be explained by incompetence, but Equifax surely should have known that how they handled their public response would be paramount to their ability to quickly put this incident behind them and get back to business as usual.

dumpsterfire

Equifax has come under heavy criticism for waiting too long to disclose this breach. It has said that the company became aware of the intrusion on July 29, and yet it did not publicly disclose the breach until Sept. 7. However, when Equifax did disclose, it seemed like everything about the response was rushed and ill-conceived.

One theory that I simply cannot get out of my head is that perhaps Equifax rushed preparations for its breach disclosure and response because it was given a deadline by extortionists who were threatening to disclose the breach on their own if the company did not comply with some kind of demand.

-I’d ask a question of mine that Equifax refused to answer shortly after the breach: Whether the company was the target of extortionists over this data breach *before* the breach was officially announced on Sept. 7.

-Equifax said the attackers abused a vulnerability in Apache Struts to break in to the company’s Web applications. That Struts flaw was patched by the Apache Foundation on March 8, 2017, but Equifax waited until after July 30, 2017 — after it learned of the breach — to patch the vulnerability. Why did Equifax decide to wait four and a half months to apply this critical update?

-How did Equifax become aware of this breach? Was it from an external source, such as law enforcement?

-Assuming Equifax learned about this breach from law enforcement agencies, what did those agencies say regarding how they learned about the breach?

FRAUD AND ABUSE

Multiple news organizations have reported that companies which track crimes related to identity theft — such as account takeovers, new account fraud, and e-commerce fraud — saw huge upticks in all of these areas corresponding to two periods that are central to Equifax’s breach timeline; the first in mid-May, when Equifax said the intruders began abusing their access to the company, and the second late July/early August, when Equifax said it learned about the breach.

This chart shows spikes in various forms of identity abuse — including account takeovers and new account fraud — as tracked by ThreatMetrix, a San Jose, Calif. firm that helps businesses prevent fraud.

-Has Equifax performed any analysis on consumer credit reports to determine if there has been any pattern of consumer harm as a result of this breach?

-Assuming the answer to the previous question is yes, did the company see any spikes in applications for new lines of consumer credit corresponding to these two time periods in 2017?

Many fraud experts report that a fast-growing area of identity theft involves so-called “synthetic ID theft,” in which fraudsters take data points from multiple established consumer identities and merge them together to form a new identity. This type of fraud often takes years to result in negative consequences for consumers, and very often the debt collection agencies will go after whoever legitimately owns the Social Security number used by that identity, regardless of who owns the other data points.

-Is Equifax aware of a noticeable increase in synthetic identity theft in recent months or years?

-What steps, if any, does Equifax take to ensure that multiple credit files are not using the same Social Security number?

-Prior to its breach disclosure, Equifax spent more than a half million dollars in the first half of 2017 lobbying Congress to pass legislation that would limit the legal liability of credit bureaus in connection with data security lapses. Do you still believe such legislation is necessary? Why or why not?

What questions did I leave out, Dear Readers? Or is there a way to make a question above more succinct? Sound off in the comments below, and I may just add yours to the list!

In the meantime, here are the committees at which Former Equifax CEO Richard Smith will be testifying next week on Capitol Hill. Some of these committees will no doubt be live-streaming the hearings. Check back at the links below on the morning-of for more information on that. Also, C-SPAN almost certainly will be streaming some of these as well:

-Tuesday, Oct. 3, 10:00 a.m., House Energy and Commerce Committee. Rayburn House Office Bldg. Room 2123.

-Wednesday, Oct. 4, 10:00 a.m., Senate Committee on Banking, Housing, & Urban Affairs. Dirksen Senate Office Bldg., Room 538.

-Wednesday, Oct. 4, 2:30 p.m., Senate Judiciary Subcommittee on Privacy, Technology and the Law. Dirksen Senate Office Bldg., Room 226.

-Thursday, Oct. 5, 9:15 a.m., House Financial Services Committee. Rayburn House Office Bldg., Room 2128.

Planet DebianIain R. Learmonth: Tor Metrics Team Meeting in Berlin

We had a meeting of the Metrics Team in Berlin yesterday to organise a roadmap for the next 12 months. This roadmap isn’t yet finalised as it will now be taken to the main Tor developers meeting in Montreal where perhaps there are things we thought were needed but aren’t, or things that we had forgotten. Still we have a pretty good draft and we were all quite happy with it.

We have updated tickets in the Metrics component on the Tor trac to include either “metrics-2017“ or “metrics-2018“ in the keywords field to identify tickets that we expect to be able to resolve either by the end of this year or by the end of next year (again, not yet finalised but should give a good idea). In some cases this may mean closing the ticket without fixing it, but only if we believe that either the ticket is out of scope for the metrics team or that it’s an old ticket and no one else has had the same issue since.

Having an in-person meeting has allowed us to have easy discussion around some of the more complex tickets that have been sitting around. In many cases these are tickets where we need input from other teams, or perhaps even just reassigning the ticket to another team, but without a clear plan we couldn’t do this.

My work for the remainder of the year will be primarily on Atlas where we have a clear plan for integrating with the Tor Metrics website, and may include some other small things relating to the website.

I will also be triaging the current Compass tickets as we look to shut down Compass and integrate the functionality into Atlas. Compass-specific tickets will be closed, but some tickets relating to desirable functionality may be moved to Atlas with the fix implemented there instead.

CryptogramDeloitte Hacked

The large accountancy firm Deloitte was hacked, losing client e-mails and files. The hackers had access inside the company's networks for months. Deloitte is doing its best to downplay the severity of this hack, but Brian Krebs reports that the hack "involves the compromise of all administrator accounts at the company as well as Deloitte's entire internal email system."

So far, the hackers haven't published all the data they stole.

Planet DebianSven Hoexter: Last rites to the lyx and elyxer packaging

After having been a heavy LyX user from 2005 to 2010 I've continued to maintain LyX more or less till now. Finally I'm starting to leave that stage and removed myself from the Uploaders list. The upload with some other last packaging changes is currently sitting in the git repo, mainly because lintian on ftp-master currently rejects 'packagename@packages.d.o' maintainer addresses (the alternative to the lists.alioth.d.o maintainer mailing lists). For elyxer I filed a request for removal. It hasn't seen any upstream activity for a while and the LyX built-in HTML export support has improved.

My hope is that if I step away far enough someone else might actually pick it up. I had this strange moment when I recently realized that xchat got reintroduced to Debian after mapreri and I spent some time last year getting it removed before the stretch release.

Worse Than FailureError'd: Please Leave a Message

"So is this the email equivalent of one man's trash is another man's treasure?" writes Allan.

 

David C. wrote, "I received this automated bill notification from Canada Post's online inbox service saying that, possibly, nobody wants me to pay them."

 

"Well, to be fair, the did say that using Mail Chimp makes it easy to send email," Jacob R. wrote.

 

"Here at M*******t we take your privacy seriously!" James writes.

 

Kurt W. writes, "It's funny because email clients usually crap out before filtering 9 quintillion messages."

 

"I'm a little bit suspicious about these files I found in our logging directory," wrote Michael G., "Sadly, I am not working for the National Lottery..."

 

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Planet DebianPetter Reinholdtsen: Visualizing GSM radio chatter using gr-gsm and Hopglass

Every mobile phone announces its existence over radio to the nearby mobile cell towers. And this radio chatter is available for anyone with a radio receiver capable of receiving it. Details about the mobile phones are of course collected with very good accuracy by the phone companies, but this is not the topic of this blog post. The mobile phone radio chatter makes it possible to figure out when a cell phone is nearby, as it includes the SIM card ID (IMSI). By paying attention over time, one can see when a phone arrives and when it leaves an area. I believe it would be nice to make this information more available to the general public, to make more people aware of how their phones are announcing their whereabouts to anyone that cares to listen.

I am very happy to report that we managed to get something visualizing this information up and running for Oslo Skaperfestival 2017 (Oslo Makers Festival) taking place today and tomorrow at Deichmanske library. The solution is based on the simple recipe for listening to GSM chatter I posted a few days ago, and will show up at the stand of Åpen Sone from the Computer Science department of the University of Oslo. The presentation will show the nearby mobile phones (aka IMSIs) as dots in a web browser graph, with lines to the dots representing the mobile base stations they are talking to. It was working in the lab yesterday, and was moved into place this morning.

We set up a fairly powerful desktop machine using Debian Buster/Testing with several (five, I believe) RTL2838 DVB-T receivers connected and visualize the visible cell phone towers using an English version of Hopglass. A fairly powerful machine is needed as the grgsm_livemon_headless processes from gr-gsm converting the radio signal to data packets are quite CPU intensive.

The frequencies to listen to are identified using a slightly patched scan-and-livemon (to set the --args values for each receiver), and the Hopglass data is generated using the patches in my meshviewer-output branch. For some reason we could not get more than four SDRs working. There is also a geographical map trying to show the location of the base stations, but I believe their coordinates are hardcoded to some random location in Germany. The code should be replaced with code to look up the location in a text file, a sqlite database or one of the online databases mentioned in the github issue for the topic.
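
As a very rough sketch of how the receiver side is wired together, one grgsm_livemon_headless process runs per RTL-SDR dongle, each tuned to a frequency reported by scan-and-livemon. Only the --args option is confirmed by the text above; the frequency flag, device strings and frequencies below are assumptions for illustration:

grgsm_livemon_headless --args="rtl=0" -f 938.4M &
grgsm_livemon_headless --args="rtl=1" -f 942.2M &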

If this sounds interesting, visit the stand at the festival!

Planet DebianDirk Eddelbuettel: Rcpp 0.12.13: Updated vignettes, and more

The thirteenth release in the 0.12.* series of Rcpp landed on CRAN this morning, following a little delay because Uwe Ligges was traveling and whatnot. We had announced its availability to the mailing list late last week. As usual, a rather substantial amount of testing effort went into this release so you should not expect any surprise.

This release follows the 0.12.0 release from July 2015, the 0.12.1 release in September 2015, the 0.12.2 release in November 2015, the 0.12.3 release in January 2016, the 0.12.4 release in March 2016, the 0.12.5 release in May 2016, the 0.12.6 release in July 2016, the 0.12.7 release in September 2016, the 0.12.8 release in November 2016, the 0.12.9 release in January 2017, the 0.12.10 release in March 2017, the 0.12.11 release in May 2017, and the 0.12.12 release in July 2017, making it the seventeenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1069 packages (and hence 73 more since the last release) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a large-ish update to the documentation as all vignettes (apart from the unit test one, which is a one-off) now use Markdown and the (still pretty new) pinp package by James and myself. There is also a new vignette corresponding to the PeerJ preprint James and I produced as an updated and current Introduction to Rcpp replacing the older JSS piece (which is still included as a vignette too).

A few other things got fixed: Dan is working on the const iterators you would expect with modern C++, Lei Yu spotted an error in Modules, and more. See below for details.

Changes in Rcpp version 0.12.13 (2017-09-24)

  • Changes in Rcpp API:

    • New const iterator functions cbegin() and cend() have been added to several vector and matrix classes (Dan Dillon and James Balamuta in #748, starting to address #741).
  • Changes in Rcpp Modules:

    • Misplacement of one parenthesis in macro LOAD_RCPP_MODULE was corrected (Lei Yu in #737)
  • Changes in Rcpp Documentation:

    • Rewrote the macOS sections to depend on official documentation due to large changes in the macOS toolchain. (James Balamuta in #742 addressing issue #682).

    • Added a new vignette ‘Rcpp-introduction’ based on new PeerJ preprint, renamed existing introduction to ‘Rcpp-jss-2011’.

    • Transitioned all vignettes to the 'pinp' RMarkdown template (James Balamuta and Dirk Eddelbuettel in #755 addressing issue #604).

    • Added an entry on running 'compileAttributes()' twice to the Rcpp-FAQ (#745).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianEnrico Zini: Systemd path units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.path units

This kind of unit can be used to monitor a file or directory for changes using inotify, and activate other units when an event happens.

For example, this activates a unit that manages a spool directory, which activates another unit whenever a .pdf file is added to /tmp/spool/:

[Unit]
Description=Monitor /tmp/spool/ for new .pdf files

[Path]
Unit=beeponce.service
PathExistsGlob=/tmp/spool/*.pdf
MakeDirectory=true

This instead activates another unit whenever /tmp/ready is changed, for example by someone running touch /tmp/ready:

[Unit]
Description=Monitor /tmp/ready

[Path]
Unit=beeponce.service
PathChanged=/tmp/ready

And beeponce.service:

[Unit]
Description=Beeps once

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav

See man systemd.path
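
A minimal sketch of putting the second example into use; the file name beeponce.path is an assumption, chosen to match the Unit= reference above:

# after saving the unit as /etc/systemd/system/beeponce.path
systemctl daemon-reload
systemctl enable --now beeponce.path   # start watching /tmp/ready
touch /tmp/ready                       # triggers beeponce.service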

Planet DebianSean Whitton: Debian Policy 4.1.1.0 released

I just released Debian Policy version 4.1.1.0.

There are only two normative changes, and neither is very important. The main thing is that this upload fixes a lot of packaging bugs that were found since we converted to build with Sphinx.

There are still some issues remaining; I hope to submit some patches to the www-team’s scripts to fix those.

Planet DebianRicardo Mones: Long time no post

Seems the breakage of my desktop computer more than 3 months ago also caused a hiatus in my online publishing activities... it was not really intended, it just happened that I was busy with other things ಠ_ಠ.

With a broken desktop computer, being able to build software on the laptop became a priority. Around September 2016 or so the good'n'old black MacBook decided to stop working. I didn't really need a replacement at that time, but I never liked having just a single working system, and in October I found an offer which I could not resist and bought a ThinkPad X260. It helped to build my final project (it was faster than the desktop), but lacking time for FOSS I hadn't used it for much more.

Setting up the laptop for software (Debian packages and Claws Mail, mainly) was somewhat easy. Finding a replacement for the broken desktop was a bit more difficult. I considered a lot of configurations and prices, from those new Ryzen to just buying the same components (pretty difficult now because they're discontinued). In the end, I decided to spend the minimum and make good use of everything else still working (memory, discs and wireless card), so I finally got an AMD A10-7860K on top of an Asus A88M-PLUS. This board has more SATA ports, so I added an unused SSD, remains of a broken laptop, to install the new system —Debian Stretch, of course ʘ‿ʘ— while keeping the existing software RAID partitions of the spinning drives.


The last thing distracting me from the usual routine was replacing the car. Our child is growing as expected and the Fiesta was starting to appear small and uncomfortable, especially for long-distance travel. We went for a hybrid model with a high-capacity boot. Given our budget, we only found 3 models below the limit: Kia Niro, Hyundai Ioniq and Toyota Auris TS. The color was decided by the kid (after forbidding black), and this was the winner...

In the middle of all of this we also took some vacation to travel to the south of Galicia, mostly around Vigo area, but also visiting Oporto and other nice places.

CryptogramNew Internet Explorer Bug

There's a newly discovered bug in Internet Explorer that allows any currently visited website to learn the contents of the address bar when the user hits enter. This feels important; the site I am at now has no business knowing where I go next.

Sociological ImagesThe different media spheres of the right and the left — and how they’re throwing elections to the Republicans

A new study tackles the media landscape building up to the election. The lead investigator, Rob Faris, runs a center at Harvard that specializes in the internet and society. He and his co-authors asked what role partisanship and disinformation might have played in the 2016 U.S. election. The study looked at links between internet news sites and also the behavior of Twitter and Facebook users, so it paints a picture of how news and opinion is being produced by media conglomerates and also how individuals are using and sharing this information.

They found severe ideological polarization, something we’ve known for some time, but also asymmetry in how media production and consumption works on either side. That is, journalists and readers on the left are behaving differently from those on the right.

The right is more insular and more partisan than the left: conservatives consume less neutral and “other side” news than liberals do and their outlets are more aggressively partisan. Breitbart News now sits neatly at the center. Measured by inlinks, it’s as influential as FOX News and, on social media, substantially more. Here’s the  network map for Twitter:

Breitbart’s centrality on the right is a symptom of how extreme the Republican base has become. Breitbart’s Executive Chairman, Steve Bannon — former White House Chief Strategist — calls it “home of the alt-right,” a group that shows “extreme” bias against racial minorities and other out-groups. 

The insularity and lack of interest in balanced reporting made right-leaning readers susceptible to fake stories. Faris and his colleagues write:

The more insulated right-wing media ecosystem was susceptible to sustained network propaganda and disinformation, particularly misleading negative claims about Hillary Clinton. Traditional media accountability mechanisms — for example, fact-checking sites, media watchdog groups, and cross-media criticism — appear to have wielded little influence on the insular conservative media sphere.

There is insularity and partisanship on the left as well, but it is mediated by commitments to traditional journalistic norms — e.g., covering “both sides” — and so, on the whole, the left got more balance in their media diet and less “fake news” because they were more friendly to fact checkers.

The interest in balance, however, perhaps wasn’t entirely good. Faris and his co-authors found that the right exploited the left’s journalistic principles, pushing left-leaning and neutral media outlets to cover negative stories about Clinton by claiming that not doing so was biased. Centrist media outlets responded with coverage, but didn’t ask the same of the right (it is possible this shaming tactic wouldn’t have worked the other way).

The take home message is: During the 2016 election season, right-leaning media consumers got rabid, un-fact checked, and sometimes false anti-Clinton and pro-Trump material and little else, while left-leaning media consumers got relatively balanced coverage of Clinton: both good stories and bad ones, but more bad ones than they would have gotten (for better or worse) if the right hadn’t been yanking their chain about being “fair.”

We should be worried about how polarization, “fake news,” horse-race journalism, and infotainment are influencing the ability of voters to gather meaningful information with which to make voting decisions, but the asymmetry between the left and the right media sphere — particularly how it makes the right vulnerable to propagandists and the left vulnerable to ideological bullying by the right — should leave us even more worried. These are powerful forces, held up both by the institutions and the individuals, that are dramatically skewing election coverage, undermining democracy, and throwing elections, and governance itself, to the right.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

TEDSymbolic logic: How African alphabets got to the TEDGlobal stage

All around the theater space, characters from African alphabets were projected on walls and floors in vivid color. The characters came from many languages, and were chosen by designer Saki Mafundikwa to match the theme of the conference: Builders. Truth-Tellers. Catalysts. Photo: Bret Hartman / TED

TEDGlobal 2017 was an important homecoming to the African continent, and a ton of work went into creating an authentic experience, from the curation of talks to the music to the graphics and stage design. Saki Mafundikwa, a graphic designer, filmmaker, design teacher and founder of the Zimbabwe Institute of Vigital Arts (and a TED speaker himself) was commissioned to create an aesthetic for the theatre stage that was as elegant as it was culturally and thematically relevant.

These 3 preliminary designs were part of the process of developing the design system for the theater. While they ended up not being used as is, their gorgeous colors and shapes showcased the potential of using alphabets as a design feature throughout the space. Images courtesy Saki Mafundikwa.

The elegant final designs for the stage backdrop highlight subtle color combinations, to complement the lively design elements projected around the theater space. Images courtesy Saki Mafundikwa

Most people who watch the talks online will see Mafundikwa’s abstract designs on the fabric drapes over the stage. But it might be that only those who were actually there in the theater will be able to truly appreciate the real stars of the show: giant symbols, beamed down on the floor and sides of the 600-seater with gobos. “TED loved the idea of gobos,” Mafundikwa says. “It’s one of those rare but beautiful moments when, as a designer, you have an idea and the client loves it!”

The symbols are not Klingon (obviously). They are alphabets from ancient African writing systems, on which Mafundikwa is a globally recognized expert.

“Some of the symbols are proverbs, like the Adinkra of the Akan people of Ghana. Those were easier to find in keeping with the theme. But others, like Ethiopic, which are syllabaries — each character stands for a syllable — were not so easy.”

These characters come from the Adinkra of the Akan people of Ghana. Saki chose symbols that matched with the conference’s theme.

Not all parts of Africa produced writing systems, Mafundikwa says, so finding a gamut of symbols that were truly representative proved to be a challenge. Nonetheless, he was ultimately able to present symbols that spanned all four hemispheres of the continent.

“In the end, there were two sets of designs: the symbols projected on the auditorium walls and floor and the stage backdrop. Initially, I just went crazy and produced a bunch of ideas and there was quite some back-and-forth until we settled on what you saw in Arusha.”

Characters from the Bantu language, from South Africa, create poetic matches to the conference themes — where “goddess of creation” represents truth-tellers, and the character for “bee” represents builders. Image courtesy Saki Mafundikwa

Keep an eye out for Mafundikwa’s designs onstage and in camera angles during the TEDGlobal 2017 talks, which have already begun to go live. To learn more about Mafundikwa’s work, watch his own TED Talk from 2013 about the beauty and ingenuity of ancient African alphabets.

Characters from Angola’s Jokwe language and Nigeria’s Nsibidi, at top, and examples of Ethiopic, Wolof (from Senegal) and Somali.


Planet DebianMatthias Klumpp: Adding fonts to software centers

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes had done some work on it years ago. We weren’t happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year, I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification. This blogpost was sitting on my todo list as a draft for a long time now, and I only just managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already 🙂 .

Both Richard and I first tried to extract all the metadata needed to display fonts to users in a proper way directly from the font files. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have hundreds of fonts directly available in software centers), I also came to the same conclusion as Richard: The best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: What is a font? Anyone who knows about fonts will understand one font as one font face, e.g. “Lato Regular Italic” or “Lato Bold”. A user, however, will see the font family as a font, e.g. just “Lato” instead of all the font faces separated out. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream “font” component really describes a font family or collection of fonts, instead of individual font faces. We do also want AppStream data to be useful for system components looking for a specific font, which is why font components will advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a good-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/com.latofonts.Lato.metainfo.xml for the AppStream metadata generator to pick up:

<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>

  <name>Lato</name>
  <summary>A sanserif type­face fam­ily</summary>
  <description>
    <p>
      Lato is a sanserif type­face fam­ily designed in the Sum­mer 2010 by Warsaw-based designer
      Łukasz Dziedzic (“Lato” means “Sum­mer” in Pol­ish). In Decem­ber 2010 the Lato fam­ily
      was pub­lished under the open-source Open Font License by his foundry tyPoland, with
      sup­port from Google.
    </p>
  </description>

  <url type="homepage">http://www.latofonts.com/</url>

  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>

When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an “icon” for the respective font component using its regular typeface. Of course that is not ideal – what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display?

This behavior can be influenced by adding <font/> tags to a <provides/> tag in the metainfo file. The font-provides tags should contain the fullnames of the font faces you want to associate with this font component. If the font file does not define a fullname, the family and style are used instead. That way, someone writing the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first mentioned font name in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag.

The example lines of text are written in a language matching the font using Pango.

But what about symbolic fonts? Or fonts where any heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we cannot render good example images for fonts. AppStream-Generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText will override the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative of a font.

For example, a font with mathematical symbols might want to add the following to its metainfo file:

<custom>
  <value key="FontIconText">∑√</value>
  <value key="FontSampleText">∑ ∮ √ ‖...‖ ⊕ 𝔼 ℕ ⋉</value>
</custom>

Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts.

So, in summary:

  • Fonts are hard
  • I need to blog faster
  • Please add metainfo files to your fonts and submit them upstream if you can!
  • Fonts must have a metainfo file in order to show up in GNOME Software, KDE Discover, AppCenter, etc.
  • The “new” font specification is backwards compatible with Richard’s pioneering work from 2014
  • The appstream-generator supports a few non-standard values to influence how font images are rendered that you might be interested in (maybe we can do something like that for appstream-builder as well)
  • The appstream-generator does not (yet?) support the <extends/> logic Richard outlined in his blog post, mainly because it wasn’t necessary in Debian/Ubuntu/Arch yet (which is asgen’s primary audience), and upstream projects would rarely want to write multiple metainfo files.
  • The metainfo files are not supposed to replace the existing fontconfig files, and we cannot generate them from existing metadata, sadly
  • If you want a more detailed look at writing font metainfo files, take a look at the AppStream specification.
  • Please write more font metadata 😉

 

Planet DebianRussell Coker: Process Monitoring

Since forking the Mon project to etbemon [1] I’ve been spending a lot of time working on the monitor scripts. Actually monitoring something is usually quite easy, deciding what to monitor tends to be the hard part. The process monitoring script ps.monitor is the one I’m about to redesign.

Here are some of my ideas for monitoring processes. Please comment if you have any suggestions for how to do things better.

For people who don’t use mon, the monitor scripts return 0 if everything is OK and 1 if there’s a problem, along with using stdout to display an error message. While I’m not aware of anyone hooking mon scripts into a different monitoring system, that would be easy to do. One thing I plan to work on in the future is interoperability between mon and other systems such as Nagios.

Basic Monitoring

ps.monitor tor:1-1 master:1-2 auditd:1-1 cron:1-5 rsyslogd:1-1 dbus-daemon:1- sshd:1- watchdog:1-2

I’m currently planning some sort of rewrite of the process monitoring script. The current functionality is to have a list of process names on the command line with minimum and maximum numbers for the instances of the process in question. The above is a sample of the configuration of the monitor. There are some limitations to this: the “master” process in this instance refers to the main process of Postfix, but other daemons use the same process name (it’s one of those names that’s wrong because it’s so obvious). One obvious solution to this is to give the option of specifying the full path so that /usr/lib/postfix/sbin/master can be differentiated from all the other programs named master.

The next issue is processes that may run on behalf of multiple users. With sshd there is a single process to accept new connections running as root and a process running under the UID of each logged in user. So the number of sshd processes running as root will be one greater than the number of root login sessions. This means that if a sysadmin logs in directly as root via ssh (which is controversial and not the topic of this post – merely something that people do which I have to support) and the master process then crashes (or the sysadmin stops it either accidentally or deliberately) there won’t be an alert about the missing process. Of course the correct thing to do is to have a monitor talk to port 22 and look for the string “SSH-2.0-OpenSSH_”. Sometimes there are multiple instances of a daemon running under different UIDs that need to be monitored separately. So obviously we need the ability to monitor processes by UID.
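To make that concrete, here is a minimal sketch of what such a check could look like (this is just my own illustration, not the actual ps.monitor code): it counts processes by command name and optional UID, and follows the mon convention of exiting 0 when everything is OK and exiting 1 with a message on stdout otherwise.

    #!/usr/bin/python3
    # Sketch only: count processes by command name and (optionally) UID.
    # Usage: proc_count.py NAME MINIMUM [UID]
    import os
    import sys

    def count_procs(name, uid=None):
        count = 0
        for pid in os.listdir('/proc'):
            if not pid.isdigit():
                continue
            try:
                with open('/proc/%s/comm' % pid) as f:
                    if f.read().strip() != name:
                        continue
                # /proc/PID is usually owned by the UID the process runs as
                if uid is not None and os.stat('/proc/%s' % pid).st_uid != uid:
                    continue
                count += 1
            except OSError:
                continue  # the process exited while we were looking at it
        return count

    if __name__ == '__main__':
        name, minimum = sys.argv[1], int(sys.argv[2])
        uid = int(sys.argv[3]) if len(sys.argv) > 3 else None
        running = count_procs(name, uid)
        if running < minimum:
            print('%s: %d running, expected at least %d' % (name, running, minimum))
            sys.exit(1)
        sys.exit(0)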

In many cases process monitoring can be replaced by monitoring of service ports. So if something is listening on port 25 then it probably means that the Postfix “master” process is running regardless of what other “master” processes there are. But for my use I find it handy to have multiple monitors: if I get a Jabber message about being unable to send mail to a server immediately followed by a Jabber message from that server saying that “master” isn’t running, I don’t need to fully wake up to know where the problem is.

SE Linux

One feature that I want is monitoring SE Linux contexts of processes in the same way as monitoring UIDs. While I’m not interested in writing tests for other security systems I would be happy to include code that other people write. So whatever I do I want to make it flexible enough to work with multiple security systems.

Transient Processes

Most daemons have a second process of the same name running during the startup process. This means if you monitor for exactly 1 instance of a process you may get an alert about 2 processes running when “logrotate” or something similar restarts the daemon. Also you may get an alert about 0 instances if the check happens to run at exactly the wrong time during the restart. My current way of dealing with this on my servers is to not alert until the second failure event with the “alertafter 2” directive. The “failure_interval” directive allows specifying the time between checks when the monitor is in a failed state, setting that to a low value means that waiting for a second failure result doesn’t delay the notification much.

To deal with this I’ve been thinking of making the ps.monitor script automatically check again after a specified delay. I think that solving the problem with a single parameter to the monitor script is better than using 2 configuration directives to mon to work around it.
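A rough sketch of that idea (my own illustration, not existing etbemon code), with a single optional retry delay handled inside the monitor script itself:

    import time

    def check_with_retry(check, retry_delay=None):
        # check() returns (ok, message).  retry_delay is the proposed new
        # parameter: if set, a failure is checked once more after that many
        # seconds before alerting, to ride out daemon restarts.
        ok, message = check()
        if not ok and retry_delay is not None:
            time.sleep(retry_delay)
            ok, message = check()
        if not ok:
            print(message)
            return 1
        return 0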

CPU Use

Mon currently has a loadavg.monitor script to check the load average. But that won’t catch the case of a single process using too much CPU time but not enough to raise the system load average. Also it won’t catch the case of a CPU hungry process going quiet (EG when the SETI at Home server goes down) while another process goes into an infinite loop. One way of addressing this would be to have the ps.monitor script have yet another configuration option to monitor CPU use, but this might get confusing. Another option would be to have a separate script that alerts on any process that uses more than a specified percentage of CPU time over its lifetime or over the last few seconds unless it’s in a whitelist of processes and users who are exempt from such checks. Probably every regular user would be exempt from such checks because you never know when they will run a file compression program. Also there is a short list of daemons that are excluded (like BOINC) and system processes (like gzip which is run from several cron jobs).
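A rough sketch of the lifetime variant of that check (again just my own illustration, with made-up threshold and whitelist values) could use the %CPU column from ps, which is cumulative CPU time divided by elapsed running time:

    #!/usr/bin/python3
    # Sketch only: flag processes whose lifetime CPU usage exceeds a threshold,
    # unless their user or command name is in a whitelist.
    import subprocess
    import sys

    THRESHOLD = 80.0                    # percent; arbitrary example value
    EXEMPT_COMMS = {'boinc', 'gzip'}    # hypothetical daemon/system whitelist
    EXEMPT_USERS = {'someuser'}         # regular users we never alert on

    def cpu_hogs():
        out = subprocess.check_output(['ps', '-eo', 'pid=,user=,pcpu=,comm='],
                                      universal_newlines=True)
        hogs = []
        for line in out.splitlines():
            pid, user, pcpu, comm = line.split(None, 3)
            if user in EXEMPT_USERS or comm in EXEMPT_COMMS:
                continue
            if float(pcpu) > THRESHOLD:
                hogs.append('%s (pid %s, user %s) at %s%% CPU' % (comm, pid, user, pcpu))
        return hogs

    if __name__ == '__main__':
        problems = cpu_hogs()
        for problem in problems:
            print(problem)
        sys.exit(1 if problems else 0)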

Monitoring for Exclusion

A common programming mistake is to call setuid() before setgid() which means that the program doesn’t have permission to call setgid(). If return codes aren’t checked (and people who make such rookie mistakes tend not to check return codes) then the process keeps elevated permissions. Checking for processes running as GID 0 but not UID 0 would be handy. As an aside a quick examination of a Debian/Testing workstation didn’t show any obvious way that a process with GID 0 could gain elevated privileges, but that could change with one chmod 770 command.
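A quick sketch of that check (my own illustration) could walk /proc and compare the effective UID and GID from /proc/PID/status:

    #!/usr/bin/python3
    # Sketch only: report processes running with effective GID 0 but a non-root UID.
    import os
    import sys

    def gid0_not_uid0():
        found = []
        for pid in os.listdir('/proc'):
            if not pid.isdigit():
                continue
            try:
                with open('/proc/%s/status' % pid) as f:
                    fields = dict(line.split(':', 1) for line in f if ':' in line)
                # the Uid:/Gid: lines list real, effective, saved and fs IDs
                euid = int(fields['Uid'].split()[1])
                egid = int(fields['Gid'].split()[1])
            except (OSError, KeyError, ValueError, IndexError):
                continue
            if egid == 0 and euid != 0:
                found.append((pid, euid))
        return found

    if __name__ == '__main__':
        bad = gid0_not_uid0()
        for pid, euid in bad:
            print('pid %s (UID %d) is running with GID 0' % (pid, euid))
        sys.exit(1 if bad else 0)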

On a SE Linux system there should be only one process running with the domain init_t. Currently that doesn’t happen in Stretch systems running daemons such as mysqld and tor due to policy not matching the recent functionality of systemd as requested by daemon service files. Such issues will keep occurring so we need automated tests for them.

Automated tests for configuration errors that might impact system security is a bigger issue, I’ll probably write a separate blog post about it.

Planet DebianLior Kaplan: LibreOffice community celebrates 7th anniversary

The Document Foundation blog has a post about LibreOffice’s 7th anniversary:

Berlin, September 28, 2017 – Today, the LibreOffice community celebrates the 7th anniversary of the leading free office suite, adopted by millions of users in every continent. Since 2010, there have been 14 major releases and dozens of minor ones, fulfilling the personal productivity needs of both individuals and enterprises, on Linux, macOS and Windows.

I wanted to take a moment to remind people that 7 years ago the community decided to make the de facto fork of OpenOffice.org official, after life under Sun (and then Oracle) had become problematic. From the very first hours the project showed its effectiveness. See my post about LibreOffice first steps. Not to mention what it has achieved in the past 7 years.

This is still one of my favourite open source contributions, not because it was sophisticated or hard, but because it was about using the freedom part of free software:
Replace hardcoded “product by Oracle” with “product by %OOOVENDOR”.

On a personal note, for me, after years of trying to help with OOo l10n for Hebrew and RTL support, things started to move forward at a reasonable pace: getting patches in after years of trying, having upstream fix some of the issues, and actually being able to do the translation. We made it to 100% with LibreOffice 3.5.0 in February 2012 (something we should redo soon…).


Filed under: i18n & l10n, Israeli Community, LibreOffice

CryptogramDepartment of Homeland Security to Collect Social Media of Immigrants and Citizens

New rules give the DHS permission to collect "social media handles, aliases, associated identifiable information, and search results" as part of people's immigration file. The Federal Register has the details, which seem to also include US citizens who communicate with immigrants.

This is part of the general trend to scrutinize people coming into the US more, but it's hard to get too worked up about the DHS accessing publicly available information. More disturbing is the trend of occasionally asking for social media passwords at the border.

TEDFuture visions: The talks of TEDGlobalNYC

A night of TED Talks at The Town Hall theater in Manhattan covered topics ranging from climate change and fake news to the threat AI poses for democracy. Photo: Ryan Lash / TED

The advance toward a more connected, united, compassionate world is in peril. Some voices are demanding a retreat, back to a world where insular nations battle for their own interests. But most of the big problems we face are collective in nature and global in scope. What can we do, together, about it?

In a night of talks curated and hosted by TED International Curator Bruno Giussani and TED Curator Chris Anderson at The Town Hall in Manhattan, eight speakers covered topics ranging from climate change and fake news to the threat AI poses for democracy and the future of markets, imagining what a globally connected world could and should look like.

What stake do we have in common? Naoko Ishii is all about building bridges between people and the environment (her organization is one of the main partners in a Herculean effort to restore the Amazon). As the CEO and chair of the Global Environment Facility, it’s her job to get everyone on board with protecting and respecting the global commons (water, air, forests, biodiversity, the oceans), if only for the simple fact that the world’s economy is intimately linked to the wellness of Earth. Ishii opened TEDGlobal>NYC with a necessary reminder: that despite their size, these global commons have been neglected for too long, and the price is too high not to make fundamental changes in our collective behavior to save them from collapse. This current generation, she says, is the last generation that can preserve what’s left of our natural resources. If we change how we eat, reduce our waste and make determined strides toward sustainable cities, there’s a chance that all hope is not lost.

Climate psychologist Per Espen Stoknes explains a new way of talking about climate change at TEDGlobal>NYC, September 20, 2017, The Town Hall, NY. Photo: Ryan Lash / TED

What we think about when we try not to think about global warming. From “scientese” to visions of the apocalypse, climate-change advocates have struggled with communicating the realities of our warming planet in a way that actually gets people to do something. “Climate psychologist” Per Espen Stoknes wondered why so many climate-change messages leave us feeling helpless and in denial instead of inspired to seek solutions. He shares with us his findings for “a more brain-friendly climate communication” — one that feels personal, doable and empowering. By scaling actions and examples down to local and more relatable levels, we can begin to feel more in control, and start to feel like our actions will have impact, Stoknes suggests. Stepping away from the doomsday narratives and instead reframing green behavior in terms of its positive additions to our lives, such as job growth and better health, can also limit our fear and increase our desire to engage in these important conversations. Our planet may be in trouble, but telling new stories could just save us.

Building the resilient cities. With fantastic new maps that provide interactive and visual representations of large data sets, Robert Muggah articulates an ancient but resurging idea: that cities should be not only the center of economic life but also the foundation of our political lives. Cities bear a significant burden of the world’s problems and have been catalysts for catastrophe, Muggah says — as an example, he shows how, in the run-up to the civil war in Syria, fragile cities like Homs and Aleppo could not bear the weight of internally displaced refugees running away from drought and famine. While this should alarm us, Muggah also sees opportunity and a chance to ride the chaotic waves of the 21st century. Looking around the world, he puts down six principles for building the resilient city. For instance, he highlights integrated and multi-use solutions like Seoul’s expanding public transportation system, where cars once dominated how people move. The current model of the nation-state that emerged in the 17th century is no longer what it once was; nation-states cannot face global crises decisively and efficiently. But the work of urban leaders and coalitions of cities like the C-40 can guide us to a healthier, more peaceful planet.

Christiane Amanpour speaks about the era of fake news at TEDGlobal>NYC, September 20, 2017, The Town Hall, NY, NY. Photo: Ryan Lash / TED

Seeking the truth. Known worldwide for her courage and clarity, Christiane Amanpour has spent the past three decades interviewing business, cultural and political leaders who have shaped history. This time she’s the one being interviewed, by TED curator Chris Anderson, in a comprehensive conversation covering fake news, objectivity in journalism, the leadership vacuum in global politics and much more. Amanpour opens with her experience reporting the Srebrenica genocide in the 1990s, and connects it to the state of journalism today, making a strong case for refusing to be an accomplice to fake news. “We’ve never faced such a massive amount of information which is not curated by those whose profession leads them to abide by the truth,” she says. “Objectivity means giving all sides an equal hearing but not creating a forced moral equivalence.” Facebook and other outlets need to step up and combat fake news, she continues, calling for a moral code of conduct and algorithms to “filter out the crap” that populates our news feeds. Amanpour — fresh from her interview with French president Emmanuel Macron, his first with an international journalist — leaves us with some wisdom: “Be careful where you get information from. Unless we are all engaged as global citizens who appreciate the truth, who understand science, empirical evidence and facts, then we are going to be wandering around — to a potential catastrophe.”

Though he had a cold and could not sing for us, Yusuf Islam (Cat Stevens) takes a moment onstage to discuss faith and music with TED’s own Chris Anderson, at TEDGlobal NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

A cat’s attic. Yusuf Islam (Cat Stevens)‘s music has been embraced by generations of fans as anthems of peace and unity. In conversation with TED curator Chris Anderson, Yusuf discusses the influence of his music, the arc of his career and his Muslim faith. “I discovered something beyond the facade of what we are taught to believe about others,” Yusuf says of his embrace of Islam in the late ’70s. “There are ways of looking at this world other than the material … Islam brought together all the strands of religion I could ever wish for.” Connecting his return to music after 9/11 to his current work and new album, The Laughing Apple, Yusuf sees his mission as spreading messages of peace and hope. “Be careful about exclusion,” he says. “In the [education] curriculum, we’ve got to start looking towards a globalized curriculum … We should know a bit more about the other to avoid the build up of antagonization.”

“Wherever I look, I see nuances withering away.” In a personal talk, author and political commentator Elif Shafak cautions against the dangers of a dualist worldview. A native of Turkey, she has experienced the devastation that a loss of diversity can bring firsthand, and she knows the revolutionary power of plurality in response to authoritarianism. She reminds us that there are no binaries, whether between developed and developing nations, politics and emotions, or even our own identities. By embracing our countries and societies as mosaics, we push back against tribalism and reach across borders. “One should never ever remain silent for fear of complexity,” Shafak says.

We know what we are saying “no” to, but what are we saying “yes” to? In her classic book The Shock Doctrine — and her new book No Is Not Enough — writer and activist Naomi Klein examines how governments use large-scale shocks like natural disasters, financial crises and terrorist attacks to exploit the public and push through radical pro-corporate measures. At TEDGlobal>NYC, Klein explains that resistance to policies that attack the public is not enough; we also must have a concrete plan for how we want to reorganize society. A few years ago, Klein and a consortium of indigenous leaders, urban hipsters, climate change activists, oil and gas workers, faith leaders, anarchists, migrant rights organizers and leading feminists decided to lock themselves in a room to discuss their utopian vision for the future. They emerged two days later with a manifesto known as The Leap Manifesto, which is all about caring for the earth and one another. Klein shares a few propositions from the platform, including a call for a 100 percent renewable economy, new investment in the low-carbon workforce, comprehensive programs to retrain workers who are losing their jobs in extractive and industrial sectors, and a demand that those who profit from pollution pay for it. “We live in a time where every alarm in our house is going off,” she concludes. “It’s time to listen. It’s time — together — to leap.”

Could a Facebook algorithm tell us how to vote? Zeynep Tufekci asks why algorithms are controlling more and more of our behavior, like it or not. She speaks at TEDGlobal>NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

There’s nothing left to fear from AI but the humans behind it. Technosociologist Zeynep Tufekci isn’t worried about AI — it’s the intention behind the technology that’s truly concerning. Data about you is being collected and sold daily, says Tufekci, and the prodigious potential of machine learning comes with potentially catastrophic risks. Companies like Facebook and Google haven’t thoroughly factored in the ethical dilemmas that come with automated systems that are programmed to exploit human weakness in order to place ads in front of exactly the people most likely to buy. If not checked, the ads and recommendations that follow you around well after you’ve stopped searching can snowball from well-meaning to insidious. It’s not to say that social media and the internet are all bad — in fact, Tufekci has written at length about the benefits and power it has bestowed upon many — but her talk is a strong reminder to be aware of the negative potential of AI as well as the positives, and to fight for our collective digital future.

Competition is only fair, says the EU’s Commissioner for Competition, Margrethe Vestager at TEDGlobal NYC, September 20, 2017, The Town Hall, New York. Photo: Ryan Lash / TED

The fight for fairness. This June, the EU levied a record $2.7 billion fine against Google for breaching antitrust rules by unfairly favoring its comparison shopping service in search. More than double the previous largest penalty in this type of antitrust case, the penalty confirmed Margrethe Vestager, European Commissioner for Competition, as one of the world’s most powerful trustbusters. In the closing talk of TEDGlobal>NYC, Vestager makes the connection between how fairness in the markets — and corrective action to ensure it exists — can establish trust in society and each other. Competition in markets gives us the power to demand a fair deal, Vestager says; when it’s removed, either by colluding businesses or biased governments, trust disappears too. “Lack of trust in the market can rub off on society, so we lose trust in society as well,” she says. “Without trust, everything becomes harder.” But competition rules — and those that enforce them — can reestablish the balance between individuals and powerful, seemingly invulnerable multinational corporations. “Trust cannot be imposed, it has to be earned,” Vestager says. “Competition makes the market work for everyone. And that’s why I’m convinced that real and fair competition has a vital role to play in building the trust we need to get the best out of society. And that starts with enforcing our rules.”

Shine as bright as you can. Electro-soul duo Ibeyi closed out TEDGlobal>NYC with a minimalistic, deeply transportive lyrical set. A harmony of voices, piano and cajon drum filled the venue as the pair sang in a mixture of Yoruba, English and French. “Look at the star,” they sing. “I know she’s proud of who you’ve been and who you are.”

TEDGlobal>NYC was made possible by support from the Ford Foundation, The Skoll Foundation, United Nations Foundation and Global Citizen.


Worse Than FailureNews Roundup: EquiTF

We generally don’t do news roundups when yet another major company gets hacked and leaks personally compromising data about the public. We know that “big company hacked” isn’t news, it’s a Tuesday. So the Equifax hack didn’t seem like something worth spending any time to write an article about.

But then new things kept coming out. It got worse. And worse. And worse. It’s like if a dumpster caught on fire, but then the fire itself also caught on fire.

If you have been living under a rock, Equifax, a company that spies on the financial behavior of Americans and sells that intelligence to banks, credit card companies, and anyone else who’s paying, was hacked, and the culprits have everything they need to steal the identities of 143 million people.

The Equifax logo being flushed in a toilet, complete with some artsy motion blur

That’s bad, but everything else about it is worse. First, the executives kept the breach secret for months, and then sold stock just before the news went public. That is a move so utterly brazen that they might as well be a drunk guy with no shirt shouting, “Come at me bro! Come at me!” They’re daring the Securities and Exchange Commission to do something about it, and are confident that they won’t be punished.

Speaking of punishment, the CEO retired, and he’ll be crying about this over the $90M he’s collecting this year. The CIO and CSO went first, of course. They probably won’t be getting huge compensation packages, but I’m sure they’ll land cushy gigs somewhere.

Said CSO, by the way, had no real qualifications to be a Chief Security Officer. Her background is in music composition.

Now, I want to be really clear here: I don’t think her college degree is actually relevant. What you did in college isn’t nearly as important as your work experience, which is the real problem- she doesn’t really have that, either. She’s spent her entire career in “executive” roles, and while she was a CSO before going to Equifax, that was at First Data. Funny thing about First Data: up until 2013 (about when she left), it was in a death spiral that was fixed after some serious house-cleaning and restructuring- like clearing out dead-weight in their C-level.

Don't worry about the poor shareholders, though. Remember Wells Fargo, the bank that fraudulently signed up lots of people for accounts? They list Equifax as an investment opportunity that's ready to "outperform".

That’s the Peter Principle and corporate douchebaggery in action, and it certainly starts getting me angry, but this site isn’t about class struggle- it’s about IT. And it’s on the IT side where the real WTFs come into play.

Equifax spies on you and sells the results. The US government put a mild restriction on this behavior: they can spy on you, but you have the right to demand that they stop selling the results. This is a “credit freeze”, and every credit reporting agency- every business like Equifax- has to do this. They get to charge you money for the privilege, but they have to do it.

To “secure” this transaction, when you freeze your credit, the credit reporting companies give you a “password” which you can use in the future to unfreeze it (because if you want a new credit card, you have to let Equifax share your data again). Some agencies give you a random string. Some let you choose your own password. Equifax used the timestamp on your request.

The hack itself was due to an unpatched Struts installation. The flaw itself is a pretty fascinating one, where a maliciously crafted XML file gets deserialized into a ProcessBuilder object. The flaw was discovered in March, and a patch was available shortly thereafter. Apache rightfully called it “Critical”, and encouraged all Struts users to apply the fix.

Even if they didn’t apply the fix, Apache provided workarounds- some of which were as simple as, “Turn off the REST plugin if you’re not using it,” or “if you ARE using it, turn off the XML part”. It’s certainly not the easiest fix, especially if you’re on a much older version of Struts, but you could even patch just the REST plugin, cutting down on the total work.

Now, if you’re paying attention, you might be saying to yourself, “Hey, Remy, didn’t you say that they were breached (initially) in March? The month the bug was discovered? Isn’t it kinda reasonable that they wouldn’t have rolled out the fix in time?” Yes, that would be reasonable: if a flaw exposed in March was exploited within a few days or even weeks of the flaw being discovered, I could understand that. But remember, the breach that actually got announced was in July- they were breached in March, and they still didn’t apply the patch. This honestly makes it worse.

Even then, I’d argue that we’re giving them too much of the benefit of the doubt. I’m going to posit that they simply don’t care. Not only did they not apply the patch, they likely had no intention of applying the patch, because they assumed they’d get away with it. Remember: you are the product, not the customer. If they accidentally cut the sheep while shearing, it doesn’t matter: they’ve still got the wool.

As an example of “they clearly don’t care”, let’s turn our attention to their Argentinian Branch, where their employee database was protected by the password admin/admin. Yes, with that super-secure password, you could log in from anywhere in the world and see the users’ usernames, employee IDs, and personal details. Of course, their passwords were obscured as “******”… in the rendered DOM. A simple “View Source” would reveal the plaintext of their passwords, in true “hunter2” fashion.

Don’t worry, it gets dumber. Along with the breach announcement, Equifax took to social media to direct users to a site where, upon entering their SSN, it would tell them whether or not they were compromised. That was the promise, but the reality was that it was little better than flipping a coin. Worse, the site was a thinly veiled ad for their "identity protection" service, and the agreement contained an arbitration clause which kept you from suing them.

That is, at least if you went to the right site. Setting aside the wisdom of encouraging users to put confidential information into random websites, for weeks Equifax’s social media team was directing people to the wrong site! In fact, it was directing them to a site which warns about the dangers of putting confidential information into random websites.

And all of that, all of that, isn’t the biggest WTF. The biggest WTF is the Social Security Number, which was never meant to be used as a private identifier, but as it’s the closest thing to unique data about every American, it substitutes for a national identification system even when it’s clearly ill-suited to the task.

I’ll leave you with the CGP Grey video on the subject:

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet Linux AustraliaMichael Still: I think I found a bug in python's unittest.mock library

Mocking is a pretty common thing to do in unit tests covering OpenStack Nova code. Over the years we've used various mock libraries to do that, with the flavor du jour being unittest.mock. I must say that I strongly prefer unittest.mock to the old mox code we used to write, but I think I just accidentally found a fairly big bug.

The problem is that python mocks are magical. It's an object where you can call any method name, and the mock will happily pretend it has that method, and return None. You can then later ask what "methods" were called on the mock.

However, you use the same mock object later to make assertions about what was called. Herein is the problem -- the mock object doesn't know if you're the code under test, or the code that's making assertions. So, if you fat-finger the assertion in your test code, the assertion will just quietly map to a non-existent method which returns None, and your code will pass.

Here's an example:

    #!/usr/bin/python3
    
    from unittest import mock
    
    
    class foo(object):
        def dummy(a, b):
            return a + b
    
    
    @mock.patch.object(foo, 'dummy')
    def call_dummy(mock_dummy):
        f = foo()
        f.dummy(1, 2)
    
        print('Asserting a call should work if the call was made')
        mock_dummy.assert_has_calls([mock.call(1, 2)])
        print('Assertion for expected call passed')
    
        print()
        print('Asserting a call should raise an exception if the call wasn\'t made')
        mock_worked = False
        try:
            mock_dummy.assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
    
        if not mock_worked:
            print('*** Assertion should have failed ***')
    
        print()
        print('Asserting a call where the assertion has a typo should fail, but '
              'doesn\'t')
        mock_worked = False
        try:
            mock_dummy.typo_assert_has_calls([mock.call(3, 4)])
        except AssertionError as e:
            mock_worked = True
            print('Expected failure, %s' % e)
            print()
    
        if not mock_worked:
            print('*** Assertion should have failed ***')
            print(mock_dummy.mock_calls)
            print()
    
    
    if __name__ == '__main__':
        call_dummy()
    


If I run that code, I get this:

    $ python3 mock_assert_errors.py 
    Asserting a call should work if the call was made
    Assertion for expected call passed
    
    Asserting a call should raise an exception if the call wasn't made
    Expected failure, Calls not found.
    Expected: [call(3, 4)]
    Actual: [call(1, 2)]
    
    Asserting a call where the assertion has a typo should fail, but doesn't
    *** Assertion should have failed ***
    [call(1, 2), call.typo_assert_has_calls([call(3, 4)])]
    


So, we should have been told that typo_assert_has_calls isn't a thing, but we didn't notice because it silently failed. I discovered this when I noticed an assertion with a (smaller than this) typo in its call in a code review yesterday.

I don't really have a solution to this right now (I'm home sick and not thinking straight), but it would be interesting to see what other people think.
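For what it's worth, one defensive pattern that comes to mind (just my own suggestion, not a real fix) is to assert against the mock_calls list directly instead of going through an assert_* helper method, so that a typo fails on a plain list comparison rather than being swallowed by the mock. Dropped into call_dummy() above, it would look something like this:

        # my own suggestion, not part of the original example: compare against
        # mock_calls directly.  If you typo "mock_calls" itself, the attribute
        # lookup auto-creates a MagicMock and the membership test is False, so
        # the assertion still fails loudly instead of silently passing.
        assert mock.call(1, 2) in mock_dummy.mock_calls, mock_dummy.mock_calls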

Tags for this post: python unittest.mock mock testing
Related posts: Implementing SCP with paramiko; Packet capture in python; A pythonic example of recording metrics about ephemeral scripts with prometheus; mbot: new hotness in Google Talk bots; Starfish Prime; Calculating a SSH host key with paramiko


Planet DebianRuss Allbery: Review: The Seventh Bride

Review: The Seventh Bride, by T. Kingfisher

Publisher: 47North
Copyright: 2015
ISBN: 1-5039-4975-3
Format: Kindle
Pages: 225

There are two editions of this book, although only one currently for sale. This review is of the second edition, released in November of 2015. T. Kingfisher is a pen name for Ursula Vernon when she's writing for adults.

Rhea is a miller's daughter. She's fifteen, obedient, wary of swans, respectful to her parents, and engaged to Lord Crevan. The last was a recent and entirely unexpected development. It's not that she didn't expect to get married eventually, since of course that's what one does. And it's not that Lord Crevan was a stranger, since that's often how it went with marriage for people like her. But she wasn't expecting to get married now, and it was not at all clear why Lord Crevan would want to marry her in particular.

Also, something felt not right about the entire thing. And it didn't start feeling any better when she finally met Lord Crevan for the first time, some days after the proposal to her parents. The decidedly non-romantic hand kissing didn't help, nor did the smug smile. But it's not like she had any choice. The miller's daughter doesn't say no to a lord and a friend of the viscount. The miller's family certainly doesn't say no when they're having trouble paying the bills, the viscount owns the mill, and they could be turned out of their livelihood at a whim.

They still can't say no when Lord Crevan orders Rhea to come to his house in the middle of the night down a road that quite certainly doesn't exist during the day, even though that's very much not the sort of thing that is normally done. Particularly before the marriage. Friends of the viscount who are also sorcerers can get away with quite a lot. But Lord Crevan will discover that there's still a limit to how far he can order Rhea around, and practical-minded miller's daughters can make a lot of unexpected friends even in dire circumstances.

The Seventh Bride is another entry in T. Kingfisher's series of retold fairy tales, although the fairy tale in question is less clear than with The Raven and the Reindeer. Kirkus says it's a retelling of Bluebeard, but I still don't quite see that in the story. I think one could argue equally easily that it's an original story. Nonetheless, it is a fairy tale: it has that fairy tale mix of magical danger and practical morality, and it's about courage and friendships and their consequences.

It also has a hedgehog.

This is a T. Kingfisher story, so it's packed full of bits of marvelous phrasing that I want to read over and over again. It has wonderful characters, the hedgehog among them, and it has, at its heart, a sort of foundational decency and stubborn goodness that's deeply satisfying for the reader.

The Seventh Bride is a lot closer to horror than the other T. Kingfisher books I've read, but it never fell into my dislike of the horror genre, despite a few gruesome bits. I think that's because neither Rhea nor the narrator treat the horrific aspects as representative of the true shape of the world. Rhea instead confronts them with a stubborn determination and an attempt to make the best of each moment, and with a practical self-awareness that I loved reading about.

The problem with crying in the woods, by the side of a white road that leads somewhere terrible, is that the reason for crying isn't inside your head. You have a perfectly legitimate and pressing reason for crying, and it will still be there in five minutes, except that your throat will be raw and your eyes will itch and absolutely nothing else will have changed.

Lord Crevan, when Rhea finally reaches him, toys with her by giving her progressively more horrible puzzle tasks, threatening her with the promised marriage if she fails at any of them. The way this part of the book finally resolves is one of the best moments I've read in any book. Kingfisher captures an aspect of moral decisions, and a way in which evil doesn't work the way that evil people expect it to work, that I can't remember seeing an author capture this well.

There are a lot of things here for Rhea to untangle: the nature of Crevan's power, her unexpected allies in his manor, why he proposed marriage to her, and of course how to escape his power. The plot works, but I don't think it was the best part of the book, and it tends to happen to Rhea rather than being driven by her. But I have rarely read a book quite this confident of its moral center, or quite as justified in that confidence.

I am definitely reading everything Vernon has published under the T. Kingfisher name, and quite possibly most of her children's books as well. Recommended, particularly if you liked the excerpt above. There's an entire book full of paragraphs like that waiting for you.

Rating: 8 out of 10

Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.4

ziggurats

A maintenance release of RcppZiggurat is now on the CRAN network for R. It switched the vignette to our new pinp package and its two-column pdf default.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl---all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

The NEWS file entry below lists all changes.

Changes in version 0.1.4 (2017-07-27)

  • The vignette now uses the pinp package in two-column mode.

  • Dynamic symbol registration is now enabled.

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianEnrico Zini: Systemd device units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.device units

Several devices are automatically represented inside systemd by .device units, which can be used to activate services when a given device exists in the file system.

See systemctl --all --full -t device to see a list of all devices for which systemd has a unit on your system.

For example, this .service unit plays a sound as long as a specific USB key is plugged into my system:

[Unit]
Description=Beeps while a USB key is plugged
DefaultDependencies=false
StopWhenUnneeded=true

[Install]
WantedBy=dev-disk-by\x2dlabel-ERLUG.device

[Service]
Type=simple
ExecStart=/bin/sh -ec 'while true; do /usr/bin/aplay -q /tmp/beep.wav; sleep 2; done'

If you need to work with a device not seen by default by systemd, you can add a udev rule that makes it available, by adding the systemd tag to the device with TAG+="systemd".

It is also possible to give the device an extra alias using ENV{SYSTEMD_ALIAS}="/dev/my-alias-name".
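For example, a rule doing both for a hypothetical USB serial adapter (the vendor/product IDs and the rules file name are placeholders) could look like this:

# /etc/udev/rules.d/99-my-device.rules (hypothetical)
SUBSYSTEM=="tty", ATTRS{idVendor}=="1234", ATTRS{idProduct}=="abcd", TAG+="systemd", ENV{SYSTEMD_ALIAS}="/dev/my-alias-name"

After reloading the udev rules and re-plugging the device, the corresponding .device unit (and the alias) should show up in systemctl -t device.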

To figure out all you can use for matching a device:

  1. Run udevadm monitor --environment and plug the device
  2. Look at the DEVNAME= values and pick one that addresses your device the way you prefer
  3. udevadm info --attribute-walk --name=*the value of devname* will give you all you can use for matching in the udev rule.

See:

Planet DebianEnrico Zini: Qt cross-architecture development in Debian

Use case: use Debian Stable as an environment to run amd64 development machines to develop Qt applications for Raspberry Pi or other smallish armhf devices.

Qt Creator is used as Integrated Development Environment, and it supports cross-compiling, running the built source on the target system, and remote debugging.

Debian Stable (vanilla or Raspbian) runs on both the host and the target systems, so libraries can be kept in sync, and both systems have access to a vast amount of libraries, with security support.

On top of that, armhf libraries can be installed with multiarch also in the host machine, so cross-builders have access to the exact same libraries as the target system.

This sounds like a dream system. But. We're not quite there yet.

cross-compile attempts

I tried cross compiling a few packages:

$ sudo debootstrap stretch cross
$ echo "strech_cross" | sudo tee cross/etc/debian_chroot
$ sudo systemd-nspawn -D cross
# dpkg --add-architecture armhf
# echo "deb-src http://deb.debian.org/debian stretch main" >> /etc/apt/sources.list
# apt update
# apt install --no-install-recommends build-essential crossbuild-essential-armhf

Some packages work:

# apt source bc
# cd bc-1.06.95/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
dh_auto_configure -- --prefix=/usr --with-readline
        ./configure --build=x86_64-linux-gnu --prefix=/usr --includedir=\${prefix}/include --mandir=\${prefix}/share/man --infodir=\${prefix}/share/info --sysconfdir=/etc --localstatedir=/var --disable-silent-rules --libdir=\${prefix}/lib/arm-linux-gnueabihf --libexecdir=\${prefix}/lib/arm-linux-gnueabihf --disable-maintainer-mode --disable-dependency-tracking --host=arm-linux-gnueabihf --prefix=/usr --with-readline
…
dpkg-deb: building package 'dc-dbgsym' in '../dc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc-dbgsym' in '../bc-dbgsym_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'dc' in '../dc_1.06.95-9_armhf.deb'.
dpkg-deb: building package 'bc' in '../bc_1.06.95-9_armhf.deb'.
 dpkg-genbuildinfo --build=binary
 dpkg-genchanges --build=binary >../bc_1.06.95-9_armhf.changes
dpkg-genchanges: info: binary-only upload (no source code included)
 dpkg-source --after-build bc-1.06.95
dpkg-buildpackage: info: binary-only upload (no source included)

With qmake based Qt packages, qmake is not configured for cross-building, probably because it is not currently supported:

# apt source pumpa
# cd pumpa-0.9.3/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
        qmake -makefile -nocache "QMAKE_CFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=.
          -fstack-protector-strong -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_RELEASE=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_CXXFLAGS_DEBUG=-g -O2 -fdebug-prefix-map=/root/pumpa-0.9.3=. -fstack-protector-strong
          -Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2"
          "QMAKE_LFLAGS_RELEASE=-Wl,-z,relro -Wl,-z,now"
          "QMAKE_LFLAGS_DEBUG=-Wl,-z,relro -Wl,-z,now" QMAKE_STRIP=: PREFIX=/usr
qmake: could not exec '/usr/lib/x86_64-linux-gnu/qt5/bin/qmake': No such file or directory
…
debian/rules:19: recipe for target 'build' failed
make: *** [build] Error 2
dpkg-buildpackage: error: debian/rules build gave error exit status 2

With cmake based Qt packages it goes a little better in that it finds the cross compiler, pkg-config and some multiarch paths, but then it tries to run armhf moc, which fails:

# apt source caneda
# cd caneda-0.3.0/
# apt-get build-dep -a armhf .
# dpkg-buildpackage -aarmhf -j2 -b
…
        cmake .. -DCMAKE_INSTALL_PREFIX=/usr -DCMAKE_VERBOSE_MAKEFILE=ON -DCMAKE_BUILD_TYPE=None
          -DCMAKE_INSTALL_SYSCONFDIR=/etc -DCMAKE_INSTALL_LOCALSTATEDIR=/var -DCMAKE_SYSTEM_NAME=Linux
          -DCMAKE_SYSTEM_PROCESSOR=arm -DCMAKE_C_COMPILER=arm-linux-gnueabihf-gcc
          -DCMAKE_CXX_COMPILER=arm-linux-gnueabihf-g\+\+
          -DPKG_CONFIG_EXECUTABLE=/usr/bin/arm-linux-gnueabihf-pkg-config
          -DCMAKE_INSTALL_LIBDIR=lib/arm-linux-gnueabihf
…
CMake Error at /usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfig.cmake:27 (message):
  The imported target "Qt5::Core" references the file

     "/usr/lib/arm-linux-gnueabihf/qt5/bin/moc"

  but this file does not exist.  Possible reasons include:

  * The file was deleted, renamed, or moved to another location.

  * An install or uninstall procedure did not complete successfully.

  * The installation package was faulty and contained

     "/usr/lib/arm-linux-gnueabihf/cmake/Qt5Core/Qt5CoreConfigExtras.cmake"

  but not all the files it references.

Note: Although I improvised a chroot to be able to fool around with it, I would use pbuilder or sbuild to do the actual builds.

Helmut suggests pbuilder --host-arch or sbuild --host.
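
A rough, untested sketch of what those invocations might look like (the source package name is only illustrative):

# sbuild: build for an armhf host architecture from an amd64 build machine
sbuild --host=armhf --dist=stretch foo_1.0-1.dsc

# pbuilder: the equivalent switch is --host-arch
sudo pbuilder --build --host-arch armhf foo_1.0-1.dsc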

Doing it the non-Debian way

This guide in the meantime explains how to set up a cross-compiling Qt toolchain in a rather dirty way, by recompiling Qt pointing it at pieces of the Qt deployed on the Raspberry Pi.

Following that guide, replacing the CROSS_COMPILE value with /usr/bin/arm-linux-gnueabihf- gave me a working qtbase, for which it is easy to create a Kit for Qt Creator that works, and supports linking applications with Debian development packages that do not use Qt.

However, at that point I need to recompile all dependencies that use Qt myself, and I quickly got stuck at that monster of QtWebEngine, whose sources embed the whole of Chromium.

Having a Qt based development environment in which I need to become the maintainer for the whole Qt toolchain is not a product I can offer to a customer. Cross compiling qmake based packages on stretch is not currently supported, so at the moment I had to suggest postponing all plans for total world domination for at least two years.

Cross-building Debian

In the meantime, Helmut Grohne has been putting a lot of effort into making Debian packages cross-buildable:

helmut> enrico: yes, cross building is painful. we have ~26000 source packages. of those, ~13000 build arch-dep packages. of those, ~6000 have cross-satisfiable build-depends. of those, I tried cross building ~2300. of those 1300 cross built. so we are at about 10% working.

helmut> enrico: plus there are some 607 source packages affected by some 326 bugs with patches.

helmut> enrico: gogo nmu them

helmut> enrico: I've filed some 1000 bugs (most of them with patches) now. around 600 are fixed :)

He is doing it mostly alone, and I would like people not to be alone when they do a lot of work in Debian, so…

Join Helmut in the effort of making Debian cross-buildable!

Build any Debian package for any device right from the comfort of your own work computer!

Have a single development environment seamlessly spanning architecture boundaries, with the power of all that there is in Debian!

Join Helmut in the effort of making Debian cross-buildable!

Apply here, or join #debian-bootstrap on OFTC!

Cross-building Qt in Debian

mitya57 summarised the situation on the KDE team side:

mitya57> we have cross-building stuff on our TODO list, but it will likely require a lot of time and neither Lisandro nor I have it currently.

mitya57> see https://gobby.debian.org/export/Teams/KDE/qt-cross for a summary of what needs to be done.

mitya57> Any help or patches are always welcome :))

qemu-user-static

Helmut also suggested using qemu-user-static to make the host system able to run binaries compiled for the target system, so that even if a non-cross-compiling Qt build tries to run moc and friends in their target architecture version, they would transparently succeed.

At that point, it would just be a matter of replacing compiler paths to point to the native cross-compiling gcc, and the build would not be slowed down by much.
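
A rough sketch of the idea (assuming the armhf tools actually end up installed in their multiarch paths, which is part of what still needs fixing):

# install the user-mode emulator; binfmt_misc registers handlers for ARM ELF binaries
sudo apt install qemu-user-static binfmt-support

# an armhf binary such as moc can then be run transparently on the amd64 host
/usr/lib/arm-linux-gnueabihf/qt5/bin/moc --version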

Fixing bug #781226 would help in making it possible to configure a multiarch version of qmake as the qmake used for cross compiling.

I have not had a chance of trying to cross-build in this way yet.

In the meantime...

Having qtcreator able to work on an amd64 devel machine and deploy/test/debug remotely on an arm target machine, where both machines run Debian stable and have libraries in sync, would be a great thing to have even though packages do not cross-build yet.

Helmut summarised the situation on IRC:

svuorela and others repeat that Qt upstream is not compatible with Debian's multiarch thinking, in that Qt upstream insists on having one toolchain for each pair of architectures, whereas the Debian way tends to be to make packages generic and split stuff such that it can be mixed and matched.

An example being that you need to run qmake (thus you need qmake for the build architecture), but qmake also embeds the relevant paths and you need to query it for them (so you need qmake for the host architecture)
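
For instance, the embedded paths are exposed through qmake's property mechanism:

# the paths a build needs are baked into the qmake binary and queried at build time
qmake -query QT_INSTALL_LIBS
qmake -query QT_INSTALL_HEADERS
qmake -query QT_HOST_BINS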

Either you run it through qemu, or you have a particular cross qmake for your build/host pair, or you fix qt upstream to stop this madness

Building qmake in Debian for each host-target pair, even just limited to released architectures, would mean building Qt 100 times, and that's not going to scale.

I wonder:

  • to have a qmake-$ARCH binary that can build a source tree using locally installed multiarch Qt libraries, do I need to recompile and ship the whole of Qt, or just qmake?
  • is there a recipe for building a cross-building Qt environment that would be able to use Debian development libraries installed the normal multiarch way?
  • we can't do perfect yet, but can we do better than this?

Worse Than FailureCodeSOD: An Exception to the Rule

“Throw typed exceptions,” is generically good advice in a strongly typed language, like Java. It shouldn’t be followed thoughtlessly, but it’s a good rule of thumb. Some people may need a little more on the point, though.

Alexander L sends us this code:

  public boolean isCheckStarted (final String nr) throws CommonException {
    final BigDecimal sqlCheckStarted = executeDBBigDecimalQueryFirstResult (
      Query.CHECKSTARTED_BY_NR,
      nr);

    CommonException commonException = new CommonException ("DB Query fail to get 'CheckStarted'");
    int checkStarted = -1;
    checkStarted = Integer.parseInt (Utility.bigDecimalToString (sqlCheckStarted));
    if (checkStarted == 1 || checkStarted == 0) {
      return checkStarted == 1 ? true : false;
    } else {
      throw commonException;
    }
  }

At a glance, it looks ugly, but the scope of its badness doesn’t really set in until Alexander fills some of the surrounding blanks:

  • CommonException is a generic class for failures in talking to the database
  • It is almost never caught directly anywhere in the code, and the rare places that do wrap it in a RuntimeException
  • executeDBBigDecimalQueryFirstResult throws a CommonException if the query failed.

It’s also important to note that Java captures the stack trace when an exception is created, not when it’s thrown, and this method is called from pretty deep in the stack, so that’s expensive.

And all of that isn’t even the worst. The “CheckStarted” field is apparently stored in the database as a Decimal type, or at least is fetched from the database that way. Its only legal values are “0” and “1”, making this a good bit of overkill. To round out the type madness, we convert it to a string only to parse it back into an int.

And that’s still not the worst.

This line: return checkStarted == 1 ? true : false; That’s the kind of line that just sets my skin crawling. It bugs me even more than using an if statement, because the author apparently knew enough to know about ternaries, but not enough to know about boolean expressions.
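
For contrast, a sketch of the same method without the eagerly built exception or the detour through a string (an illustration, not the original fix):

  public boolean isCheckStarted (final String nr) throws CommonException {
    final BigDecimal sqlCheckStarted = executeDBBigDecimalQueryFirstResult (
      Query.CHECKSTARTED_BY_NR,
      nr);

    // Read the decimal directly; no string round-trip needed.
    final int checkStarted = sqlCheckStarted.intValueExact ();
    if (checkStarted != 0 && checkStarted != 1) {
      // Build the exception only on the failure path, so the stack trace
      // is not captured on every call.
      throw new CommonException ("DB Query fail to get 'CheckStarted'");
    }
    return checkStarted == 1;
  }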

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianDirk Eddelbuettel: RcppAnnoy 0.0.10

A few short weeks after the more substantial 0.0.9 release of RcppAnnoy, we have a quick bug-fix update.

RcppAnnoy is our Rcpp-based R integration of the nifty Annoy library by Erik. Annoy is a small and lightweight C++ template header library for very fast approximate nearest neighbours.

Michaël Benesty noticed that our getItemsVector() function didn't, ahem, do much besides crashing. Simple bug, they happen--now fixed, and a unit test added.

Changes in this version are summarized here:

Changes in version 0.0.10 (2017-09-25)

  • The getItemsVector() function no longer crashes (#24)

Courtesy of CRANberries, there is also a diffstat report for this release.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianEnrico Zini: Systemd timer units

These are the notes of a training course on systemd I gave as part of my work with Truelite.

.timer units

Configure activation of other units (usually a .service unit) at some given time.

The functionality is similar to cron, with more features and a finer time granularity. For example, in Debian Stretch apt has a timer for running apt update which runs at a random time to distribute load on servers:

# /lib/systemd/system/apt-daily.timer
[Unit]
Description=Daily apt download activities
After=network-online.target
Wants=network-online.target

[Timer]
OnCalendar=*-*-* 6,18:00
RandomizedDelaySec=12h
Persistent=true

[Install]
WantedBy=timers.target

The corresponding apt-daily.service file then only runs when the system is on mains power, to avoid unexpected battery drains for systems like laptops:

# /lib/systemd/system/apt-daily.service
[Unit]
Description=Daily apt download activities
Documentation=man:apt(8)
ConditionACPower=true

[Service]
Type=oneshot
ExecStart=/usr/lib/apt/apt.systemd.daily update

Note that if you want to schedule tasks with an accuracy under a minute (for example to play a beep every 5 seconds when running on battery), you need to also configure AccuracySec= for the timer to a delay shorter than the default 1 minute.

This is how to make your computer beep when on battery:

# /etc/systemd/system/beep-on-battery.timer
[Unit]
Description=Beeps every 10 seconds

[Install]
WantedBy=timers.target

[Timer]
AccuracySec=1s
OnUnitActiveSec=10s
# /etc/systemd/system/beep-on-battery.service
[Unit]
Description=Beeps when on battery
ConditionACPower=false

[Service]
Type=oneshot
ExecStart=/usr/bin/aplay /tmp/beep.wav
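
To activate the pair once both unit files are in place:

systemctl daemon-reload
systemctl enable --now beep-on-battery.timer
systemctl list-timers beep-on-battery.timer

If the timer never seems to fire on a fresh system, keep in mind that OnUnitActiveSec= counts from the last activation of the matching service; adding something like OnActiveSec=10s gives it a first trigger to count from.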

Krebs on SecurityBreach at Sonic Drive-In May Have Impacted Millions of Credit, Debit Cards

Sonic Drive-In, a fast-food chain with nearly 3,600 locations across 45 U.S. states, has acknowledged a breach affecting an unknown number of store payment systems. The ongoing breach may have led to a fire sale on millions of stolen credit and debit card accounts that are now being peddled in shadowy underground cybercrime stores, KrebsOnSecurity has learned.

The first hints of a breach at Oklahoma City-based Sonic came last week when I began hearing from sources at multiple financial institutions who noticed a recent pattern of fraudulent transactions on cards that had all previously been used at Sonic.

I directed several of these banking industry sources to have a look at a brand new batch of some five million credit and debit card accounts that were first put up for sale on Sept. 18 in a credit card theft bazaar previously featured here called Joker’s Stash:

This batch of some five million cards put up for sale today (Sept. 26, 2017) on the popular carding site Joker’s Stash has been tied to a breach at Sonic Drive-In. The first batch of these cards appear to have been uploaded for sale on Sept. 15.

Sure enough, two sources who agreed to purchase a handful of cards from that batch of accounts on sale at Joker’s discovered they all had been recently used at Sonic locations.

Armed with this information, I phoned Sonic, which responded within an hour that it was indeed investigating “a potential incident” at some Sonic locations.

“Our credit card processor informed us last week of unusual activity regarding credit cards used at SONIC,” reads a statement the company issued to KrebsOnSecurity. “The security of our guests’ information is very important to SONIC. We are working to understand the nature and scope of this issue, as we know how important this is to our guests. We immediately engaged third-party forensic experts and law enforcement when we heard from our processor. While law enforcement limits the information we can share, we will communicate additional information as we are able.”

Christi Woodworth, vice president of public relations at Sonic, said the investigation is still in its early stages, and the company does not yet know how many or which of its stores may be impacted.

The accounts apparently stolen from Sonic are part of a batch of cards that Joker’s Stash is calling “Firetigerrr,” and they are indexed by city, state and ZIP code. This geographic specificity allows potential buyers to purchase only cards that were stolen from Sonic customers who live near them, thus avoiding a common anti-fraud defense in which a financial institution might block out-of-state transactions from a known compromised card.

Malicious hackers typically steal credit card data from organizations that accept cards by hacking into point-of-sale systems remotely and seeding those systems with malicious software that can copy account data stored on a card’s magnetic stripe. Thieves can use that data to clone the cards and then use the counterfeits to buy high-priced merchandise from electronics stores and big box retailers.

Prices for the cards advertised in the Firetigerr batch are somewhat higher than for cards stolen in other breaches, likely because this batch is extremely fresh and unlikely to have been canceled by card-issuing banks yet.

Dumps available for sale on Joker’s Stash from the “FireTigerrr” base, which has been linked to a breach at Sonic Drive-In.

Most of the cards range in price from $25 to $50, and the price is influenced by a number of factors, including: the type of card issued (Amex, Visa, MasterCard, etc); the card’s level (classic, standard, signature, platinum, etc.); whether the card is debit or credit; and the issuing bank.

I should note that it remains unclear whether Sonic is the only company whose customers’ cards are being sold in this particular batch of five million cards at Joker’s Stash. There are some (as yet unconfirmed) indications that perhaps Sonic customer cards are being mixed in with those stolen from other eatery brands that may be compromised by the same attackers.

The last known major card breach involving a large nationwide fast-food chain impacted more than a thousand Wendy’s locations and persisted for almost nine months after it was first disclosed here. The Wendy’s breach was extremely costly for card-issuing banks and credit unions, which were forced to continuously re-issue customer cards that kept getting re-compromised every time their customers went back to eat at another Wendy’s.

Part of the reason Wendy’s corporate offices had trouble getting a handle on the situation was that most of the breached locations were not corporate-owned but instead independently-owned franchises whose payment card systems were managed by third-party point-of-sale vendors.

According to Sonic’s Wikipedia page, roughly 90 percent of Sonic locations across America are franchised.

Dan Berger, president and CEO of the National Association of Federally Insured Credit Unions, said he’s not looking forward to the prospect of another Wendy’s-like fiasco.

“It’s going to be the financial institution that makes them whole, that pays off the charges or replaces money in the customer’s checking account, or reissues the cards, and all those costs fall back on the financial institutions,” Berger said. “These big card breaches are going to continue until there’s a national standard that holds retailers and merchants accountable.”

Financial institutions also bear some of the blame for the current state of affairs. The United States is embarrassingly the last of the G20 nations to make the shift to more secure chip-based cards, which are far more expensive and difficult for criminals to counterfeit. But many financial institutions still haven’t gotten around to replacing traditional magnetic stripe cards with chip-based cards. According to Visa, 58 percent of the more than 421 million Visa cards issued by U.S. financial institutions were chip-based as of March 2017.

Likewise, retailers that accept chip cards may present a less attractive target to hackers than those that don’t. In March 2017, Visa said the number of chip-enabled merchant locations in the country reached two million, representing 44 percent of stores that accept Visa.

LongNowIs the Bristlecone Pine in Peril? An Interview with Great Basin Scientist Scotty Strachan

Earlier this month, the bristlecone pine, one of the oldest and most isolated organisms on Earth, found itself in unfamiliar territory: in the headlines. News outlets such as the Chicago Tribune and the Washington Post reported that the bristlecone pine was “in peril” and threatened by extinction due to a warming climate. The news came from a study published in Global Change Biology that suggested that the limber pine was “leapfrogging” the bristlecone as they “raced” up the mountains, with climate change acting as the “starting gun.”

Scotty Strachan, an environmental scientist at the University of Nevada, Reno, is skeptical of the statements that this finding “imperils” bristlecone. Strachan has a background in dendrochronology with a specific focus on the Great Basin, where the bristlecone pine grows. He has previously spoken at The Interval and has been collaborating with Long Now on bristlecone pine research on its property on Mt. Washington. We had a chance to sit down with Strachan and get his take on the study and the ideal relationship between what Strachan calls “short-term science” and “long-term science.”

The following has been edited for length and clarity.

Great Basin scientist Scotty Strachan

LONG NOW: When this study first came out, you commented that the press release was speculative. Could you elaborate on what you took issue with?

The bristlecone pine as a species do not exist inside one particular seasonal climatic envelope. But this paper makes this assumption [of uniform seasonality and stand dynamics]. This doesn’t represent the regional climate variability well, especially where bristlecone biogeography and potentially centennial-scale regeneration is concerned.

The study in question is based on data that actually continue recent work in Great Basin done by Constance I. Millar, who’s been working at treeline for decades. She came up with the idea that perhaps the lower [elevation] species of limber pine in the subalpine woodlands was “leapfrogging” over the bristlecone tree line in terms of its fifty-year recruitment pattern. [Recruitment refers to the addition of new individuals into a population or community]. The difference here is that she didn’t immediately rush to “species in peril” judgement like this press release emphasizes.

Photo by Scotty Strachan

The researcher [UC Davis PhD student Brian Smithers] went to a few sites where bristlecones have been studied previously in the Great Basin. But the bristlecone occur in more than twenty-five mountain ranges in the Great Basin, and in several cases are not co-located with limber pine.

The paper spends a good deal of time, as it should, talking about what is known and what has been studied about bristlecone regeneration—which in terms of long term multi-decadal work, is actually very little. Bristlecone live in a region where you have high [climate] variability both interannually and interdecadally.

The bristlecone pine are distributed in space from the White Mountains in the western Great Basin, where they have irregular summertime input of rain, to Mt. Washington, Great Basin National Park, and ranges in central Utah, where more often than not you have a significant summertime component of moisture that actually can alleviate drought conditions. The structure of healthy bristlecone stands across these ranges can be very different, and you can bet that regeneration processes vary accordingly.

Photo by Scotty Strachan

LONG NOW: Why do you think there’s such an appetite for stories like these that sound an alarmist tone?

We see this as a recurring theme in science, and not just in the environmental fields: “We’re not telling anything new, but we’re going to make an alarmist story about it.” Sometimes these alarmist press runs can generate certain momentum inside agency mechanisms that lead policy and science down the wrong road with detrimental effects, particularly if the details of the system in question are not well-understood.

For instance, if you are the Bureau of Land Management, you control large swaths of the interior west, and you’re responsible for maintaining the viability of the land in some way. You have mixed mandates where you balance the current resource users of the land, cattle ranchers maybe, or solar farm industrialists, with sometimes competing conservation issues that range from sagebrush to horses. You’re being pulled in all these different directions, so which science is right?

Photo by Scotty Strachan

We’ve seen over the last thirty years an uptick in the amount of agency time spent in conservation efforts rather than resource use, just broadly. So the question is, how are those funds being directed and then, is it always a good idea for people to actually mess with the landscape in a sort of conservation management approach? So conservation of the forests in California for the last many decades has resulted in the catastrophe that’s waiting to happen in any given watershed in terms of forest densities and the fires that come from that.

The same thing can happen when you have niche science that says we’re going to manage the shrublands for say, a single species, like the sage grouse. So if you poke your nose in the sage grouse issue you’ll find that hundreds of millions of dollars have been spent via combinations of special interest groups and researchers to help conserve sage grouse habitat only. Well, that includes lots of cutting down woodlands that are naturally growing, amid similar “alarmist” claims that the woodlands are “invasive,” when there is plenty of science out there that says in many locations that’s simply not true. Effects to soils, other bird populations, indigenous tradition, recurring management costs, and so forth are sidelined, and that’s a problem.

Photo by Scotty Strachan

I’ll go back to a quote from Sierra bighorn sheep scientist John Wehausen:

“Ecology is quite messy statistically, unlikely to yield simple, clean answers. Be prepared to devote a long time if you want an adequate understanding at a system level; e.g. decades, not years…be open to the possibility that variables you never considered may be very important, relegating a lot of previous research to little more than preliminary study.”

And he’s talking about sheep, not bristlecone.

So you have this repeated approach of niche management as a rallying cry, to the detriment of many other considerations on the landscape system. If knee-jerk landscape-scale human interference get extended to bristlecone, then yeah, I’d say risk to the species increases.

Photo by Scotty Strachan

LONG NOW: What’s a better approach?

I look at it in terms of long-term science and short-term science. Short science operates like this: we go out there, we take a look at what’s happening, maybe we like what we see maybe we don’t like what we see, we draw some conclusions based on what we can observe at the time, we go forward and we say: “This is what we think happened, and we need policy X.” That’s great, except that’s effectively snapshot science. The same is true even if you include, say, some modeling—this is done often, like ecological modeling based on climate models—and say, “here’s what we think has been going on for the last one hundred years and therefore our look is not necessarily a snapshot.” Very often that short science is stated as fact, absolute fact, and that is the problem.

So short science is good because you need to go out there, you need to do an intensive look at something and get a snapshot, so that somebody can follow that snapshot one year, ten years, fifty years from now. That’s what’s really critical. Drawing those conclusions and stating it as absolute fact without having any long science to back it up, especially when we’re talking about landscapes or ecosystems where you have multi-century cyclic behavior in the ecology, let alone any climate changes, now all of a sudden you’ve got a bit of a conundrum. To me, one without the other is not good science from the management or landscape interference point of view.

Photo by Scotty Strachan

LONG NOW: So it’s not that you’re dismissing short term science. Rather, you’re saying the short term science should be informed by long term science.

Yes, and here’s the other thing. Very often the short science takes the easiest path, which means you aren’t studying the mechanisms so much as you are simply observing the current status of things. Obviously you have to start somewhere. So you can still do short science—by short in this context I mean less than decades—you can still do that and try to observe some mechanisms rather than rely on perhaps other mechanistic studies that came in some cases very long before you and may have been very rudimentary in nature. Scale is a critical issue, geographically and temporally. And something that I don’t usually see in papers that shoot for sweeping conclusions is a section that takes on “Sources of Uncertainty” and then lists them, explaining how each of those sources has either been controlled for, or if not controlled for then a reasoned, fact-filled explanation as to why the author believes the influence is negligible.

We’ve been working with the Long Now Foundation out on Mount Washington to study more of the mechanistic processes to do with Great Basin woodlands and bristlecone pine for a number of years now. We’ve got the first multi-year, continuous sub-daily record of bristlecone growth response to climate and interactions with seasonal resources and surrounding species, including limber pine, and the data are becoming more fascinating every year. We hope to run this study for decades. Yes, more of those papers are coming! That’s the kind of investigative approach that needs to be developed more around the west, and not just for bristlecone. You can’t manage what you don’t monitor. Investing in this kind of longer science and maintaining it is a huge challenge—because long science doesn’t write headlines, or at least, not until much much later! I think that the Long Now Foundation has a part to play in helping re-orient the dialogue around how short and long science differ, and also how each informs our views and interactions with the geography around us.  

Planet DebianColin Watson: A mysterious bug with Twisted plugins

I fixed a bug in Launchpad recently that led me deeper than I expected.

Launchpad uses Buildout as its build system for Python packages, and it’s served us well for many years. However, we’re using 1.7.1, which doesn’t support ensuring that packages required using setuptools’ setup_requires keyword only ever come from the local index URL when one is specified; that’s an essential constraint we need to be able to impose so that our build system isn’t immediately sensitive to downtime or changes in PyPI. There are various issues/PRs about this in Buildout (e.g. #238), but even if those are fixed it’ll almost certainly only be in Buildout v2, and upgrading to that is its own kettle of fish for other reasons. All this is a serious problem for us because newer versions of many of our vital dependencies (Twisted and testtools, to name but two) use setup_requires to pull in pbr, and so we’ve been stuck on old versions for some time; this is part of why Launchpad doesn’t yet support newer SSH key types, for instance. This situation obviously isn’t sustainable.

To deal with this, I’ve been working for some time on switching to virtualenv and pip. This is harder than you might think: Launchpad is a long-lived and complicated project, and it had quite a number of explicit and implicit dependencies on Buildout’s configuration and behaviour. Upgrading our infrastructure from Ubuntu 12.04 to 16.04 has helped a lot (12.04’s baseline virtualenv and pip have some deficiencies that would have required a more complicated bootstrapping procedure). I’ve dealt with most of these: for example, I had to reorganise a lot of our helper scripts (1, 2, 3), but there are still a few more things to go.

One remaining problem was that our Buildout configuration relied on building several different environments with different Python paths for various things. While this would technically be possible by way of building multiple virtualenvs, this would inflate our build time even further (we’re already going to have to cope with some slowdown as a result of using virtualenv, because the build system now has to do a lot more than constructing a glorified link farm to a bunch of cached eggs), and it seems like unnecessary complexity. The obvious thing to do seemed to be to collapse these into a single environment, since there was no obvious reason why it should actually matter if txpkgupload and txlongpoll were carefully kept off the path when running most of Launchpad: so I did that.

Then our build system got very sad.

Hmm, I thought. To keep our test times somewhat manageable, we run them in parallel across 20 containers, and we randomise the order in which they run to try to shake out test isolation bugs. It’s not completely unknown for there to be some oddities resulting from that. So I ran it again. Nope, but slightly differently sad this time. Furthermore, I couldn’t reproduce these failures locally no matter how hard I tried. Oh dear. This was obviously not going to be a good day.

In fact I spent a while on various different guesswork-based approaches. I found bug 571334 in Ampoule, an AMP-based process pool implementation that we use for some job runners, and proposed a fix for that, but cherry-picking that fix into Launchpad didn’t help matters. I tried backing out subsets of my changes and determined that if both txlongpoll and txpkgupload were absent from the Python module path in the context of the tests in question then everything was fine. I tried running strace locally and staring at the output for some time in the hope of enlightenment: that reminded me that the two packages in question install modules under twisted.plugins, which did at least establish a reason they might affect the environment that was more plausible than magic, but nothing much more specific than that.

On Friday I was fiddling about with this again and trying to insert some more debugging when I noticed some interesting behaviour around plugin caching. If I caused the txpkgupload plugin to raise an exception when loaded, the Twisted plugin system would remove its dropin.cache (because it was stale) and not create a new one (because there was now no content to put in it). After that, running the relevant tests would fail as I’d seen in our buildbot. Aha! This meant that I could also reproduce it by doing an even cleaner build than I’d previously tried to do, by removing the cached txpkgupload and txlongpoll eggs and allowing the build system to recreate them. When they were recreated, they didn’t contain dropin.cache, instead allowing that to be created on first use.

Based on this clue I was able to get to the answer relatively quickly. Ampoule has a specialised bootstrapping sequence for its worker processes that starts by doing this:

from twisted.application import reactors
reactors.installReactor(reactor)

Now, twisted.application.reactors.installReactor calls twisted.plugin.getPlugins, so the very start of this bootstrapping sequence is going to involve loading all plugins found on the module path (I assume it’s possible to write a plugin that adds an alternative reactor implementation). If dropin.cache is up to date, then it will just get the information it needs from that; but if it isn’t, it will go ahead and import the plugin. If the plugin happens (as Twisted code often does) to run from twisted.internet import reactor at some point while being imported, then that will install the platform’s default reactor, and then twisted.application.reactors.installReactor will raise ReactorAlreadyInstalledError. Since Ampoule turns this into an info-level log message for some reason, and the tests in question only passed through error-level messages or higher, this meant that all we could see was that a worker process had exited non-zero but not why.

The Twisted documentation recommends generating the plugin cache at build time for other reasons, but we weren’t doing that. Fixing that makes everything work again.
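
Regenerating the cache at build time amounts to importing every plugin once; something along these lines is enough (a sketch, not Launchpad's actual change):

# calling getPlugins() walks twisted.plugins and rewrites dropin.cache as a side effect
python -c "from twisted.plugin import IPlugin, getPlugins; list(getPlugins(IPlugin))"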

There are still a few more things needed to get us onto pip, but we’re now pretty close. After that we can finally start bringing our dependencies up to date.

Planet DebianNorbert Preining: Debian/TeX Live 2017.20170926-1

A full month or more has passed since the last upload of TeX Live, so it was high time to prepare a new package. Nothing spectacular here I have to say, two small bugs fixed and the usual long list of updates and new packages.

From the new packages I found fontloader-luaotfload an interesting project. Loading fonts via Lua code in LuaTeX is by now standard, and this package allows for experiments with newer/alternative font loaders. Another very interesting newcomer is pdfreview, which lets you set pages of another PDF on a lined background and add notes to them, good for reviewing.

Enjoy.

New packages

abnt, algobox, beilstein, bib2gls, cheatsheet, coelacanth, dijkstra, dynkin-diagrams, endofproofwd, fetchcls, fixjfm, fontloader-luaotfload, forms16be, hithesis, ifxptex, komacv-rg, ku-template, latex-refsheet, limecv, mensa-tex, multilang, na-box, notes-tex, octave, pdfreview, pst-poker, theatre, upzhkinsoku, witharrows.

Updated packages

2up, acmart, acro, amsmath, animate, babel, babel-french, babel-hungarian, bangorcsthesis, beamer, beebe, biblatex-gost, biblatex-philosophy, biblatex-source-division, bibletext, bidi, bpchem, bxjaprnind, bxjscls, bytefield, checkcites, chemmacros, chet, chickenize, complexity, curves, cweb, datetime2-german, e-french, epstopdf, eqparbox, esami, etoc, fbb, fithesis, fmtcount, fnspe, fontspec, genealogytree, glossaries, glossaries-extra, hvfloat, ifptex, invoice2, jfmutil, jlreq, jsclasses, koma-script, l3build, l3experimental, l3kernel, l3packages, latexindent, libertinust1math, luatexja, lwarp, markdown, mcf2graph, media9, nddiss, newpx, newtx, novel, numspell, ocgx2, philokalia, phfqit, placeat, platex, poemscol, powerdot, pst-barcode, pst-cie, pst-exa, pst-fit, pst-func, pst-geometrictools, pst-ode, pst-plot, pst-pulley, pst-solarsystem, pst-solides3d, pst-tools, pst-vehicle, pst2pdf, pstricks, pstricks-add, ptex-base, ptex-fonts, pxchfon, quran, randomlist, reledmac, robustindex, scratch, skrapport, spectralsequences, tcolorbox, tetex, tex4ht, texcount, texdef, texinfo, texlive-docindex, texlive-scripts, tikzducks, tikzsymbols, tocloft, translations, updmap-map, uplatex, widetable, xepersian, xetexref, xint, xsim, zhlipsum.

Planet DebianIain R. Learmonth: SMS Verification

I’ve received an email today from Barclaycard with the following:

“From time to time, to make sure it’s you who’s using your Barclaycard online, we’ll send you a text with a verification code for you to use on the Verified by Visa screen that’ll pop up on your payment page.”

The proprietary nature of mobile phones, with hardware specifications and software closed off from inspection or audit and treated as trade secrets, makes my phone and my tablet the least trusted devices I own and use.

Due to this lack of trust, I’ve often held back from using my phone or tablet for certain tasks where I can still get away with not doing so. I have experimented with having read-only access to my calendars and contacts to ensure that if my phone is compromised they can’t just be wiped out, though in the end I had to give in as my calendar was becoming too difficult to manage using a paper system as part of entry for new events.

I wanted to try to reduce the attractiveness of compromising my phone. Anyone that really wants to have a go at my phone could probably get in. It’s an older Samsung Android phone on a UK network and software updates rarely come through in a timely manner. Anything that I give my phone access to is at risk and that risk needs to be balanced by some real world benefits.

These are just the problems with the phone itself. When you’re using SMS authentication, even with the most secure phone ever, you’re still going to be using the phone network. SMS authentication is about equivalent, in terms of the security it really offers, to your mobile phone number being your password when it comes to an even mildly motivated attacker. You probably don’t treat your mobile phone number as a password, nor does the provider or anyone you’ve given it to, so you can assume that it’s compromised.

Why are mobile phones so popular for two factor (or, in increasing numbers of cases, single factor) authentication? Not because they improve security but because they're convenient and everyone has one. This seems like a bad plan.

CryptogramThe Data Tinder Collects, Saves, and Uses

Under European law, service providers like Tinder are required to show users what information they have on them when requested. This author requested, and this is what she received:

Some 800 pages came back containing information such as my Facebook "likes," my photos from Instagram (even after I deleted the associated account), my education, the age-rank of men I was interested in, how many times I connected, when and where every online conversation with every single one of my matches happened...the list goes on.

"I am horrified but absolutely not surprised by this amount of data," said Olivier Keyes, a data scientist at the University of Washington. "Every app you use regularly on your phone owns the same [kinds of information]. Facebook has thousands of pages about you!"

As I flicked through page after page of my data I felt guilty. I was amazed by how much information I was voluntarily disclosing: from locations, interests and jobs, to pictures, music tastes and what I liked to eat. But I quickly realised I wasn't the only one. A July 2017 study revealed Tinder users are excessively willing to disclose information without realising it.

"You are lured into giving away all this information," says Luke Stark, a digital technology sociologist at Dartmouth University. "Apps such as Tinder are taking advantage of a simple emotional phenomenon; we can't feel data. This is why seeing everything printed strikes you. We are physical creatures. We need materiality."

Reading through the 1,700 Tinder messages I've sent since 2013, I took a trip into my hopes, fears, sexual preferences and deepest secrets. Tinder knows me so well. It knows the real, inglorious version of me who copy-pasted the same joke to match 567, 568, and 569; who exchanged compulsively with 16 different people simultaneously one New Year's Day, and then ghosted 16 of them.

"What you are describing is called secondary implicit disclosed information," explains Alessandro Acquisti, professor of information technology at Carnegie Mellon University. "Tinder knows much more about you when studying your behaviour on the app. It knows how often you connect and at which times; the percentage of white men, black men, Asian men you have matched; which kinds of people are interested in you; which words you use the most; how much time people spend on your picture before swiping you, and so on. Personal data is the fuel of the economy. Consumers' data is being traded and transacted for the purpose of advertising."

Tinder's privacy policy clearly states your data may be used to deliver "targeted advertising."

It's not Tinder. Surveillance is the business model of the Internet. Everyone does this.

Worse Than FailureAn Emphasized Color

One of the major goals of many software development teams is to take tedious, boring, simplistic manual tasks and automate them. An entire data entry team can be replaced by a single well-written application, saving the company money, greatly improving processing time, and potentially reducing errors.

That is, if it’s done correctly.

Peter G. worked for a state government. One of his department’s tasks involved processing carbon copies of forms for most of the state’s residents. To save costs, improve processing time, and reduce the amount of manual data entry they had to perform, the department decided to automate the process and use optical character recognition (OCR) to scan in the carbon copies and convert the handwritten data into text which was eventually entered into a database.

A pile of paperwork on a desk, with an old style phone and a stream of light artistically highlighting the paper. By Aaron Logan [CC BY 2.5], via Wikimedia Commons

The software was written and the department received boxes and boxes and boxes worth of the carbon copy paper forms. The printer had a very long lead time, so they ordered their entire supply of forms for the state for the next year. There were so many boxes that Peter joked about building a castle with them.

Then the system went live. And it didn’t work, at all. Something was wrong with the OCR software and Peter was pulled into the project to help find a fix.

While researching the project history, he found that much of the data on the paper forms wasn’t required, and the decision was made to print those boxes in a different, very specific color. During processing, their custom OCR software would ignore that color, blanking out the box and removing the extraneous information before it was unnecessarily entered into the system. Since it still needed to be visible, but wasn’t important, they chose, with the help of their printer, Pantone 5507.

So he filled out a sample form for one “Homer J. Simpson” and scanned it to see what was meant by “The system doesn’t work.” The system briefly churned and created a record in the test database for his form, but when he inspected the record, it was missing the mandatory unique ID. This ID came from the paper form and was comparable to a license number or Social Security Number, and was absolutely required for the data to be usable.

He filled out a couple more forms in case the system was having trouble understanding his handwriting, but they came out the same way. No unique ID.

He scratched his head and examined the paper forms some more. Eventually, he realized the issue. The box for the unique ID was considered “important” but not “something for users to interact with”, and thus was de-emphasized and printed in that different, very specific color that the OCR software ignored: Pantone 5507. So the ID was blanked out and ignored during scanning.

Being a competent developer, Peter quickly came up with a plan to add a step to the task. After scanning, but before handing off to the OCR task, a new task would do a simple color-based find-and-replace within a region of the scan to correct the color of the ID field so it wouldn’t be blanked out.
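
A color-based find-and-replace like that is only a few lines in any imaging library; here is a rough sketch using Pillow (the box coordinates and color values are invented for illustration):

from PIL import Image

ID_BOX = (120, 840, 620, 900)    # left, upper, right, lower: hypothetical position of the ID field
IGNORED = (190, 214, 212)        # stand-in RGB value for the color the OCR software drops
REPLACEMENT = (64, 64, 64)       # a color the OCR software treats as a normal printed box

def fix_id_field(path_in, path_out, tolerance=30):
    """Recolor the printed box around the unique ID so OCR no longer blanks it."""
    image = Image.open(path_in).convert("RGB")
    pixels = image.load()
    left, upper, right, lower = ID_BOX
    for x in range(left, right):
        for y in range(upper, lower):
            r, g, b = pixels[x, y]
            if (abs(r - IGNORED[0]) < tolerance and
                    abs(g - IGNORED[1]) < tolerance and
                    abs(b - IGNORED[2]) < tolerance):
                pixels[x, y] = REPLACEMENT
    image.save(path_out)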

“No, we don’t have time or money for that,” his manager explained to him. “I’ll have the offshore guys fix it for next year. For now, just cobble something together so the original scan stays with the record.”

The department hired a team of interns to perform manual data entry for the year, whose sole task was to sift through the database records, pull up the corresponding scan, and read and type in the single unique ID field that the OCR software ignored. Meanwhile, the department promised that something bigger, better, and fancier was on the way for next year…

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #126

Here's what happened in the Reproducible Builds effort between Sunday September 17th and Saturday September 23rd 2017:

Media coverage

  • Christos Zoulas gave a talk entitled Reproducible builds on NetBSD at EuroBSDCon 2017

Reproducible work in other packages

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

1 package review was added, 49 have been updated and 54 have been removed this week, adding to our knowledge about identified issues.

One issue type was updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (56)
  • Bas Couwenberg (1)
  • Helmut Grohne (1)
  • Nobuhiro Iwamatsu (2)

diffoscope development

Version 87 was uploaded to unstable by Mattia Rizzolo. It included contributions from:

strip-nondeterminism development

reprotest development

Version 0.7 was uploaded to unstable by Ximin Luo:

tests.reproducible-builds.org

Vagrant Cascadian and Holger Levsen:

  • Re-add an armhf build node that had been disabled due to performance issues, but works with Linux 4.14-rc1 now! #876212

Holger Levsen:

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianRuss Allbery: Review: Artemis Fowl

Review: Artemis Fowl, by Eoin Colfer

Series: Artemis Fowl #1
Publisher: Disney-Hyperion
Copyright: 2001
ISBN: 1-4231-2452-9
Format: Kindle
Pages: 281

Artemis Fowl is the heir to the Fowl criminal empire and a child prodigy. He's also one of the few humans to know of the existence of fairies, who are still present in the world, hiding from humans and living by their own rules. As the book opens, he's in search of those rules: a copy of the book that governs the lives of fairies. With that knowledge, he should be able to pull off a heist worthy of his family's legacy.

Captain Holly Short is a leprechaun... or, more correctly, a LEPrecon. She's one of the fairy police officers that investigate threats to the fairies who are hiding in a vast underground civilization. The fairies have magic, but they also have advanced (and miniaturized) technology, maintained in large part by a grumpy and egotistical centaur (named Foaly, because it's that sort of book). She's also the fairy unlucky enough to be captured by Artemis's formidable personal bodyguard on their first attempt to kidnap a hostage for their ransom demands.

This is the first book of a long series of young adult novels that has also spawned graphic novels and a movie currently in production. It has that lean and clear feeling of the younger side of young adult writing: larger-than-life characters who are distinctive and easy to remember, a short introductory setup that dives directly into the main plot, and a story that neatly pulls together every element raised in the story. The world-building is its strongest point, particularly the mix of tongue-in-cheek technology — ships that ride magma plumes, mechanical wings, and helmet-mounted lights to blind trolls — and science-tinged magic that the fairies build their police and army on. Fairies are far beyond humans in capability, and can be deadly and ruthless, but they have to follow a tightly constrained set of rules that are often not convenient.

Sadly, the characters don't live up to the world-building. I did enjoy a few of them, particularly Artemis's loyal bodyguards and the dwarf Mulch Diggums. But Holly, despite being likable, is a bit of a blank slate: the empathetic, overworked trooper who is mostly indistinguishable from other characters in similar stories. The gruff captain, the sarcastic technician Foaly, and the various other LEP agents all felt like they were taken straight from central casting. And then there's Artemis himself.

Artemis is the protagonist of the story, in that he's the one who initiates all of the action and the one who has the most interesting motivations. The story is about him, as the third-person narrator in the introduction makes clear. He's trying very hard to be a criminal genius with the deductive abilities of Sherlock Holmes and the speaking style of a Bond villain, but he's also twelve, his father has disappeared, and his mother is going slowly insane. I picked this book up on the recommendation of another reader who found that contrast compelling.

Unfortunately, I thought Artemis was just an abusive jerk. Yes, yes, family tragedy, yes, he's trapped in his conception of himself, but he's arrogant, utterly uncaring about how his actions affect other people, and dismissive and cruel even to his bodyguards (who are much better friends than he deserves). I think liking this book requires liking Artemis at least well enough to consider him an anti-hero, and I can squint and see that appeal if you have that reaction. But I just wanted him to lose. Not in the "you will be slowly redeemed over the course of a long series" way, but in the "you are a horrible person and I hope you get what's coming to you" way. The humor of the fairy parts of the book was undermined too much by the fact that many of them would like to kill Artemis for real, and I mostly wanted them to succeed.

This may or may not have to do with my low tolerance for egotistical smart-asses who order other people to do things that they refuse to explain.

Without some appreciation for Artemis, this is a story with some neat world-building, a fairly generic protagonist in Holly, and a plot in which the bad guys win. To make matters worse, I thought the supposedly bright note at the end of the story was just creepy, as was everything else involving Artemis's mother. The review I read was of the first three books, so it's entirely possible that this series gets better as it goes along, but there wasn't enough I enjoyed in the first book for me to keep reading.

Followed by Artemis Fowl: The Arctic Incident.

Rating: 5 out of 10

,

TEDMeet the Fall 2017 class of TED Residents

The goal of the TED Residency is to incubate breakthrough projects of all kinds. Our Residents come from many areas of expertise, backgrounds and regions — and when they meet each other, new ideas spark. Here, two new Residents, “chief reading inspirer” Alvin Irby and filmmaker Karen Palmer, meet at the TED office on September 11, 2017, in New York. Photo: Dian Lofton / TED

On September 11, TED welcomed its latest class to the TED Residency program, an in-house incubator for breakthrough ideas. Residents spend four months in TED’s New York headquarters with other exceptional people from all over the map—including the Netherlands, the UK, Tennessee and Georgia.

The new Residents include:

  • A filmmaker creating a movie experience that progresses using your reaction
  • An entrepreneur bringing reading spaces to unlikely places
  • A journalist advocating for better support for women after they’ve given birth
  • An artist looking to bring more humanity to citizens of North Korea

Tobacco Brown is an artist whose medium is plants and gardens. In her public art installations, she comments on sociopolitical realities by bringing nature to underinvested urban environments. During her Residency, she is turning her lifetime of experiences into a book.

A former foreign-aid worker and White House staffer, Stan Byers is an expert on emerging markets, geopolitical stability and security. His current project is applying AI to the Fragile States Index to identify more innovative and effective responses to state instability. He is working to incorporate more real-time data sources and, long-term, to help design more equitable, creative and resilient social and market structures.

William Frey is a qualitative researcher and digital ethnographer at Columbia University who is using machine learning to detect patterns in social media posts and police reports to map the genesis of violence. His goal is to spot imminent violence before it erupts and then alert communities to intervene.

Inside the TED office theater, TED Residency program manager Katrina Conanan and director Cyndi Stivers welcome the new class of Residents and alumni on September 11, 2017, in New York. Photo: Dian Lofton / TED

Alvin Irby is the founder and “chief reading inspirer” at Barbershop Books, which creates child-friendly reading spaces in barbershops across America to encourage young Black boys to read for fun. He is developing an education podcast to share insights about helping children of color realize their full potential.

London-based filmmaker Karen Palmer uses AI interactive stories to inspire and enlighten her audience. Her current project, RIOT, is a live-action film with 3D sound that helps viewers navigate through a dangerous riot. She uses facial recognition and machine-learning technology to give viewers real-time feedback about their own visceral reactions.

Web designer Derrius Quarles is the cofounder and CTO of BREAUX Capital, a financial wellness startup devoted to Black millennials. Using a combination of technology, education, and behavioral economics, he hopes to break down the systemic barriers to financial health that people of color have long faced.

From left, TED Residency alum Liz Jackson, a fashion designer and activist from our very first class, chats with new Residents Anouk Wipprecht, a fashion designer and technologist, and animator Eiji Han Shimizu, during our meetup on September 11, 2017, in New York. Photo: Dian Lofton / TED

Michael Rain is the creator of Enodi, a digital gallery that highlights the stories of first-generation Black immigrants of African, Caribbean and Latinx descent. He is also cofounder of ZNews Africa, which makes mobile, web and email products for the global Pan-African community.

Kifah Shah is cofounder of SuKi Se, an ethical fashion brand produced by artisans in Pakistan. Her company strives to offer access to technologies that ensure high production standards and inclusive supply chains. Kifah is also a digital campaign strategist for MPower Change.

How do organizations hire better employees? That is a question Jason Shen has been thinking about through his company Headlight, a platform for tech employers to manage assignments, and The Talent Playbook, an open-source repository of best practices for hiring.

Eiji Han Shimizu is a creative activist from Japan who uses animation and graphic novels to galvanize his audiences. His current project is an animated film depicting the stories of North Korean political prisoners and ordinary people whose lives are hidden behind the headlines.

Bob Stein has long been in the vanguard: Immersed in radical politics as a young man, he grew into one of the founding fathers of new media (Criterion, Voyager, Institute for Future of the Book). He’s wondering what sorts of new rituals and traditions might emerge as society expands to include increasing numbers of people in their eighties and nineties.

Kifah Shah, cofounder of SuKi Se, chats during our residents meetup on September 11, 2017, in New York. Photo: Dian Lofton / TED

Malika Whitley is the Atlanta-based CEO and founder of ChopArt, an organization for homeless teens focused on mentorship, dignity and opportunity through the arts. ChopArt partners with local shelters and homeless organizations to provide multidisciplinary arts programming in Atlanta, New Orleans, Hyderabad and Accra.

Anouk Wipprecht is a Dutch designer and engineer whose work combines fashion and technology in what she calls “technical couture.” Her garments augment everyday interactions, using sensors, machine learning and animatronics; her designs move, breathe and react to the environment around them.

Allison Yarrow is a journalist and documentary producer examining how women recover from childbirth during what’s known as the Fourth Trimester. Particularly in the US, Allison argues, society and healthcare tend to focus on the health of babies, while the well-being of mothers is overlooked.

If you would like to be a part of the Spring 2018 TED Residency (which runs March 12 to June 15, 2018), applications open on November 1, 2017. For more information on requirements, and an advance peek at the application form, please see ted.com/residency.


Krebs on SecuritySource: Deloitte Breach Affected All Company Email, Admin Accounts

Deloitte, one of the world’s “big four” accounting firms, has acknowledged a breach of its internal email systems, British news outlet The Guardian revealed today. Deloitte has sought to downplay the incident, saying it impacted “very few” clients. But according to a source close to the investigation, the breach dates back to at least the fall of 2016, and involves the compromise of all administrator accounts at the company as well as Deloitte’s entire internal email system.

In a story published Monday morning, The Guardian said a breach at Deloitte involved usernames, passwords and personal data on the accountancy’s top blue-chip clients.

“The Guardian understands Deloitte clients across all of these sectors had material in the company email system that was breached,” The Guardian’s Nick Hopkins wrote. “The companies include household names as well as US government departments. So far, six of Deloitte’s clients have been told their information was ‘impacted’ by the hack.”

In a statement sent to KrebsOnSecurity, Deloitte acknowledged a “cyber incident” involving unauthorized access to its email platform.

“The review of that platform is complete,” the statement reads. “Importantly, the review enabled us to understand precisely what information was at risk and what the hacker actually did and to determine that only very few clients were impacted [and] no disruption has occurred to client businesses, to Deloitte’s ability to continue to serve clients, or to consumers.”

However, information shared by a person with direct knowledge of the incident said the company in fact does not yet know precisely when the intrusion occurred, or for how long the hackers were inside of its systems.

This source, speaking on condition of anonymity, said the team investigating the breach focused their attention on a company office in Nashville known as the “Hermitage,” where the breach is thought to have begun.

The source confirmed The Guardian reporting that current estimates put the intrusion sometime in the fall of 2016, and added that investigators still are not certain that they have completely evicted the intruders from the network.

Indeed, it appears that Deloitte has known something was not right for some time. According to this source, the company sent out a “mandatory password reset” email on Oct. 13, 2016 to all Deloitte employees in the United States. The notice stated that employee passwords and personal identification numbers (PINs) needed to be changed by Oct. 17, 2016, and that employees who failed to do so would be unable to access email or other Deloitte applications. The message also included advice on how to pick complex passwords:

A screen shot of the mandatory password reset message Deloitte sent to all U.S. employees in Oct. 2016, around the time sources say the breach was first discovered.

The source told KrebsOnSecurity they were coming forward with information about the breach because, “I think it’s unfortunate how we have handled this and swept it under the rug. It wasn’t a small amount of emails like reported. They accessed the entire email database and all admin accounts. But we never notified our advisory clients or our cyber intel clients.”

“Cyber intel” refers to Deloitte’s Cyber Intelligence Centre, which provides 24/7 “business-focused operational security” to a number of big companies, including CSAA Insurance, FedEx, Invesco, and St. Joseph’s Healthcare System, among others.

This same source said forensic investigators identified several gigabytes of data being exfiltrated to a server in the United Kingdom. The source further said the hackers had free rein in the network for “a long time” and that the company still does not know exactly how much total data was taken.

In its statement about the incident, Deloitte said it responded by “implementing its comprehensive security protocol and initiating an intensive and thorough review which included mobilizing a team of cyber-security and confidentiality experts inside and outside of Deloitte.” Additionally, the company said it contacted governmental authorities immediately after it became aware of the incident, and that it contacted each of the “very few clients impacted.”

“Deloitte remains deeply committed to ensuring that its cyber-security defenses are best in class, to investing heavily in protecting confidential information and to continually reviewing and enhancing cyber security,” the statement concludes.

Deloitte has not yet responded to follow-up requests for comment. The Guardian reported that Deloitte notified six affected clients, but the company has not yet said publicly when it notified those customers.

Deloitte has a significant cybersecurity consulting practice globally, wherein it advises many of its clients on how best to secure their systems and sensitive data from hackers. In 2012, Deloitte was ranked #1 globally in security consulting based on revenue.

Deloitte refers to one or more entities of Deloitte Touche Tohmatsu Limited, a private company based in the United Kingdom. According to the company’s Web site, Deloitte has more than 263,000 employees at member firms delivering services in audit and assurance, tax, consulting, financial advisory, risk advisory, and related services in more than 150 countries and territories. Revenues for the fiscal year 2017 were $38.8 billion.

The breach at the big-four accountancy comes on the heels of a massive breach at big-three consumer credit bureau Equifax. That incident involved several months of unauthorized access in which intruders stole Social Security numbers, birth dates, and addresses on 143 million Americans.

This is a developing story. Any updates will be posted as available, and noted with update timestamps.

Google AdsenseBoost your multi-screen strategy with AdSense Responsive ads

We know one of the biggest challenges publishers currently face is designing websites that adapt to different screen sizes, resolutions and user needs. Responsive ad units help you deliver the best possible user experience on your pages: you can dynamically control the presentation of your website according to the properties of the screen or device it’s being viewed on. Responsive ads automatically adapt to the size of your user's screen, meaning publishers can spend more time creating great content, and less time thinking about the size of their ads.


Today we’re happy to share some product updates to complement and strengthen your strategy, with new features for our responsive ad units and a multi-screen optimization score now available.


The new full width ads on mobile devices
Our experiments show that full-width responsive ads perform better on mobile devices in portrait mode. Previously, responsive ads were fitted to standard sizes; the new launch will automatically expand ads to the full width of the user's screen when the device is oriented vertically.


To implement the new full-width responsive ads, you can simply create a responsive ad unit in your AdSense account.
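
For reference, a responsive ad unit tag generated in your account typically looks something like the snippet below; the publisher and slot IDs shown here are placeholders, and it is the data-ad-format="auto" setting that lets the unit size itself to the width available on the page:

    <script async src="//pagead2.googlesyndication.com/pagead/js/adsbygoogle.js"></script>
    <!-- Responsive ad unit: swap the placeholder client and slot IDs below for the
         values from your own AdSense account. data-ad-format="auto" is what allows
         the unit to adapt to the width of the screen or container it renders in. -->
    <ins class="adsbygoogle"
         style="display:block"
         data-ad-client="ca-pub-1234567890123456"
         data-ad-slot="1234567890"
         data-ad-format="auto"></ins>
    <script>
         // Ask the adsbygoogle library to fill the unit declared above.
         (adsbygoogle = window.adsbygoogle || []).push({});
    </script>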


Best practices to help improve your mobile performance


We are also happy to share with you other best practices to help improve your mobile performance. Check out this video to get tips on how to create an excellent mobile experience for your users and potentially increase your mobile revenue. Let's get started!


More information on Responsive ad units can also be found in our Help Center.

We look forward to hearing your thoughts on these new features!


Posted by: The AdSense Team

Sociological ImagesUnpacking How House of Cards Represents Sex Workers

Mild Spoiler Alert for Season 3 of House of Cards

Where is Rachel Posner?

Representations of sex workers on popular shows such as Game of Thrones, The Good Wife, and, of course, any version of CSI, are often stereotypical, completely incorrect, and infuriatingly dehumanizing. Like so many of these shows, House of Cards offers more of the same, but it uses a somewhat different narrative for a former sex worker and central character, Rachel Posner. Rachel experiences many moments of sudden empowerment that are just as quickly taken away. She is not entirely disempowered, often physically and emotionally resisting other characters and situations, but her humanization only lasts so long.  

The show follows Rachel for three full seasons, offering some hope to the viewer that her story would not end in her death, dehumanization, or any other number of sensational and tumultuous storylines. So, when she is murdered in the final episode of Season 3, viewers sensitive to her character’s role as a sex worker and invested in a new narrative for current and former sex worker characters on popular TV shows probably felt deeply let down. Her death inspired us to go back and analyze how her role in the series was both intensely invisible and visible.  

Early in the show, we learn that Rachel has information that could reveal murder and corrupt political strategizing orchestrated by the protagonist Frank Underwood.  She is the thread that weaves the entire series together. Despite this, most characters on the show do not value Rachel beyond worrying about how she could harm them. Other characters talk about her when she’s not present at all, often referring to her as “the prostitute” or “some hooker,” rather than by her name or anything else that describes who she is.

The show, too, devalues her. At the beginning of an episode, we watch Rachel making coffee one morning in her small apartment.  Yet, instead of watching her, we watch her body parts; the camera pans over her torso, her breasts in a lace bra, and then her legs before we finally see her entire body and face.  There is not one single scene even remotely like this for any other character on the show. Even the promotional material for Season 1 (pictured above) fails to include a photo of Rachel while including images of a number of other characters who were less central to the storyline and appeared in fewer episodes. Yet, whoever arranged the photoshoot didn’t think she was important enough to include.

Another major way that Rachel is marginalized in the context of the show is that she is not given many scenes or storylines that are about her—her private life, time spent with friends, or what’s important to her. This is in contrast to other characters with a similar status. For instance, the audience is made to feel sympathy for Gavin, a hacker, when an FBI agent threatens the life of his beloved guinea pig. In contrast, it is Rachel’s ninth episode before the audience sees her interact with a friend, and we never really learn what motivates her beyond fear and survival. In this sense, Rachel is almost entirely invisible in her own storyline. She only exists when people want something from her.

Rachel is also made invisible by the way she is represented or discussed in many scenes.  For instance, although she’s present, she has zero lines in her first couple scenes. After appearing (without lines) in Episodes 1 and 2, Rachel reappears in Episode 7, although she’s not really present; she re-emerges in the form of a handwritten note to Doug Stamper (Underwood’s indispensable assistant).  She writes: “I need more money.  And not in my mouth.” These are Rachel’s first two lines in the entire series; however, she’s not actually saying them, she’s asking for something and one of the lines draws attention to a sexualized body part and sexual act that she engaged in with Doug. Without judging the fact that she engaged in a sexual act with a client, what’s notable here is the fact that she isn’t given a voice or her own resources. She is constantly positioned in relation to other characters and often without the resources and ability to survive on her own.

This can clearly be seen in the way Rachel is easily pushed around by other characters in the show, who are able to force their will upon her. When viewers do finally see her in a friendship, one that blossoms into a romance, the meaning that Rachel gives the relationship is overshadowed by the reaction Doug Stamper has to it. Doug has more contact with Rachel than any other character on the show; in the beginning of the series, he acts as a sort of “protector” to Rachel, by finding her a safe place to stay, ensuring that she can work free from sexual harassment in her new job, and getting her an apartment of her own. However, all these actions highlight the fact that she does not have her own resources or connections to be able to function on her own, and they are used to manipulate her. Over Rachel’s growing objections, Doug is able to impose his wishes upon her fairly easily. The moment she is able to overpower him and escape, she disappears from the show for almost a whole season, only to reappear in the episode where she dies. In this episode, we finally see Rachel standing on her own two feet. It seems like a hard life, working lots of double shifts and living in a rundown boardinghouse, but we also see her enjoying herself with friends and building something new for herself. And yet, it is also in this episode where she has leveraged her competence into a new life that she also meets her demise. Unfortunately, after seeing this vision of Rachel on the road to empowerment, more than half of her scenes relate to her death, and in most of them she is begging Doug for her life, once again reduced to powerlessness. 

Every time we begin to see a new narrative for Rachel, one that allows her to begin a life that isn’t entirely tethered to Doug Stamper and her past, she is almost immediately drawn back into his web.  Ultimately, in this final episode, she can no longer grasp her new narrative and immediately loses hold of it.  In her final scenes, after kidnapping her, Doug temporarily lets her go.  She begins to walk in the opposite direction of his van before, only moments later, he flips the van around and heads back in her direction.  The next scene cuts suddenly to her lifeless body in a shallow grave.  The sudden shock of this scene is jarring, yet oddly expected, given how the show has treated Rachel’s character throughout the series.  It’s almost as if the show does not have any use for a sex worker character who can competently manage their own affairs.  Perhaps that idea didn’t even occur to the writers because of the place in our society in which sex workers are currently situated, perhaps it disrupts the fallen woman narrative, or perhaps for some reason, a death seems more “interesting” than a storyline where a sex worker has agency and takes an active role in shaping her own life and affecting those around her.  Whatever the reason, House of Cards ultimately fails Rachel and sex workers, in general.

Paige Connell is an undergraduate sociology student at Chico State University. Her areas of interest include intimate relationships, gender, and pop culture. 

Dr. Danielle Antoinette Hidalgo is an Assistant Professor in Sociology at California State University, Chico, specializing in theory, gender and sexuality, and embodiment studies.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityCanadian Man Gets 9 Months Detention for Serial Swattings, Bomb Threats

A 19-year-old Canadian man was found guilty of making almost three dozen fraudulent calls to emergency services across North America in 2013 and 2014. The false alarms, two of which targeted this author, involved phoning in phony bomb threats and multiple attempts at “swatting” — a dangerous hoax in which the perpetrator spoofs a call about a hostage situation or other violent crime in progress in the hopes of tricking police into responding at a particular address with deadly force.

Curtis Gervais of Ottawa was 16 when he began his swatting spree, which prompted police departments across the United States and Canada to respond to fake bomb threats and active shooter reports at a number of schools and residences.

Gervais, who taunted swatting targets using the Twitter accounts “ProbablyOnion” and “ProbablyOnion2,” got such a high off of his escapades that he hung out a for-hire shingle on Twitter, offering to swat anyone with the following tweet:

Several Twitter users apparently took him up on that offer. On March 9, 2014, @ProbablyOnion started sending me rude and annoying messages on Twitter. A month later (and several weeks after blocking him on Twitter), I received a phone call from the local police department. It was early in the morning on Apr. 10, and the cops wanted to know if everything was okay at our address.

Since this was not the first time someone had called in a fake hostage situation at my home, the call I received came from the police department’s non-emergency number, and they were unsurprised when I told them that the Krebs manor and all of its inhabitants were just fine.

Minutes after my local police department received that fake notification, @ProbablyOnion was bragging on Twitter about swatting me, including me on his public messages: “You have 5 hostages? And you will kill 1 hostage every 6 times and the police have 25 minutes to get you $100k in clear plastic.” Another message read: “Good morning! Just dispatched a swat team to your house, they didn’t even call you this time, hahaha.”

I told this user privately that targeting an investigative reporter maybe wasn’t the brightest idea, and that he was likely to wind up in jail soon.  On May 7, @ProbablyOnion tried to get the swat team to visit my home again, and once again without success. “How’s your door?” he tweeted. I replied: “Door’s fine, Curtis. But I’m guessing yours won’t be soon. Nice opsec!”

I was referring to a document that had just been leaked on Pastebin, which identified @ProbablyOnion as a 19-year-old Curtis Gervais from Ontario. @ProbablyOnion laughed it off but didn’t deny the accuracy of the information, except to tweet that the document got his age wrong.

A day later, @ProbablyOnion would post his final tweet before being arrested: “Still awaiting for the horsies to bash down my door,” a taunting reference to the Royal Canadian Mounted Police (RCMP).

A Sept. 14, 2017 article in the Ottawa Citizen doesn’t name Gervais because it is against the law in Canada to name individuals charged with or convicted of crimes committed while they are a minor. But the story quite clearly refers to Gervais, who reportedly is now married and expecting a child.

The Citizen says the teenager was arrested by Ottawa police after the U.S. FBI traced his Internet address to his parents’ home. The story notes that “the hacker” and his family have maintained his innocence throughout the trial, and that they plan to appeal the verdict. Gervais’ attorneys reportedly claimed the youth was framed by the hacker collective Anonymous, but the judge in the case was unconvinced.

Apparently, Ontario Court Justice Mitch Hoffman handed down a lenient sentence in part because of more than 900 hours of volunteer service the accused had performed in recent years. From the story:

Hoffman said that troublesome 16-year-old was hard to reconcile with the 19-year-old, recently married and soon-to-be father who stood in court before him, accompanied in court Thursday by his wife, father and mother.

“He has a bright future ahead of him if he uses his high level of computer skills and high intellect in a pro-social way,” Hoffman said. “If he does not, he has a penitentiary cell waiting for him if he uses his skills to criminal ends.”

According to the article, the teen will serve six months of his nine-month sentence at a youth group home and three months at home “under strict restrictions, including the forfeiture of a home computer used to carry out the cyber pranks.” He also is barred from using Twitter or Skype during his 18-month probation period.

Most people involved in swatting and making bomb threats are young males under the age of 18 — the age when kids seem to have little appreciation for or care about the seriousness of their actions. According to the FBI, each swatting incident costs emergency responders approximately $10,000. Each hoax also unnecessarily endangers the lives of the responders and the public.

In February 2017, another 19-year-old — a man from Long Beach, Calif. named Eric “Cosmo the God” Taylor — was sentenced to three years’ probation for his role in swatting my home in Northern Virginia in 2013. Taylor was among several men involved in making a false report to my local police department at the time about a supposed hostage situation at our house. In response, a heavily-armed police force surrounded my home and put me in handcuffs at gunpoint before the police realized it was all a dangerous hoax.

CryptogramGPS Spoofing Attacks

Wired has a story about a possible GPS spoofing attack by Russia:

After trawling through AIS data from recent years, evidence of spoofing becomes clear. Goward says GPS data has placed ships at three different airports and there have been other interesting anomalies. "We would find very large oil tankers who could travel at the maximum speed at 15 knots," says Goward, who was formerly director for Marine Transportation Systems at the US Coast Guard. "Their AIS, which is powered by GPS, would be saying they had sped up to 60 to 65 knots for an hour and then suddenly stopped. They had done that several times."

All of the evidence from the Black Sea points towards a co-ordinated attempt to disrupt GPS. A recently published report from NRK found that 24 vessels appeared at Gelendzhik airport around the same time as the Atria. When contacted, a US Coast Guard representative refused to comment on the incident, saying any GPS disruption that warranted further investigation would be passed onto the Department of Defence.

"It looks like a sophisticated attack, by somebody who knew what they were doing and were just testing the system," Bonenberg says. Humphreys told NRK it "strongly" looks like a spoofing incident. Fire Eye's Brubaker, agreed, saying the activity looked intentional. Goward is also confident that GPS were purposely disrupted. "What this case shows us is there are entities out there that are willing and eager to disrupt satellite navigation systems for whatever reason and they can do it over a fairly large area and in a sophisticated way," he says. "They're not just broadcasting a stronger signal and denying service this is worse they're providing hazardously misleading information."

Worse Than FailureCodeSOD: The Strangelet Solution

Chris M works for a “solutions provider”. Mostly, this means taking an off-the-shelf product from Microsoft or Oracle or SAP and customizing it to fit a client’s specific needs. Since many of these clients have in-house developers, the handover usually involves training those developers up on the care and maintenance of the system.

Then, a year or two later, the client comes back, complaining about the system. “It’s broken,” or “performance is terrible,” or “we need a new feature”. Chris then goes back out to their office, and starts taking a look at what has happened to the code in his absence.

It’s things like this:

    var getAdjustType = Xbp.Page.getAttribute("cw_adjustmenttype").getText;

    var reasonCodeControl = Xbp.Page.getControl("cw_reasoncode");
    if (getAdjustType === "Short-pay/Applying Credit" || getAdjustType === "Refund/Return (Credit)") {
        var i;
        var options = (Xbp.Page.getAttribute("cw_reasoncode").getOptions());
        reasonCodeControl

        for (i = 0; i < options.length; i++) {
            if (i <= 4) {
                reasonCodeControl.removeOption(options[i].value);

            }
            if (i >= 5) {
                reasonCodeControl.clearOptions();

            }
            if (i >= 5) {
                reasonCodeControl.addOption(options[5]);
                reasonCodeControl.addOption(options[6]);
                reasonCodeControl.addOption(options[7]);
                reasonCodeControl.addOption(options[8]);
                reasonCodeControl.addOption(options[9]);
                reasonCodeControl.addOption(options[10]);
                reasonCodeControl.addOption(options[11]);
                reasonCodeControl.addOption(options[12]);
                reasonCodeControl.addOption(options[13]);
                reasonCodeControl.addOption(options[14]);
                reasonCodeControl.addOption(options[15]);
                reasonCodeControl.addOption(options[16]);
                reasonCodeControl.addOption(options[17]);
                reasonCodeControl.addOption(options[18]);
                reasonCodeControl.addOption(options[19]);
                reasonCodeControl.addOption(options[20]);
                reasonCodeControl.addOption(options[21]);


            }
        }
    }
    else {
        var options = (Xbp.Page.getAttribute("cw_reasoncode").getOptions());
        for (var i = 0; i < options.length; i++) {
            if (i >= 4) {
                reasonCodeControl.removeOption(options[i].value);

            }
            if (i <= 4) {
                reasonCodeControl.clearOptions();

            }
            if (i <= 4) {
                reasonCodeControl.addOption(options[0]);
                reasonCodeControl.addOption(options[1]);
                reasonCodeControl.addOption(options[2]);
                reasonCodeControl.addOption(options[3]);
                reasonCodeControl.addOption(options[4]);

            }
        }
    }

There are patterns and there are anti-patterns, like there is matter and anti-matter. An anti-pattern would be the “switch loop”, where you have different conditional branches that execute depending on how many times the loop has run. And then there’s this, which is superficially similar to the “switch loop” anti-pattern, but confused. Twisted, with conditional branches that execute on the same condition. It may have once been an anti-pattern, but now it’s turned into a strange pattern, and like strange matter threatens to turn everything it touches into more of itself.
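
For contrast, the apparent intent fits in a few lines: show reason codes 5 through 21 for the two credit-style adjustment types, and codes 0 through 4 for everything else. The sketch below reuses the Xbp calls from the snippet above and only illustrates that inferred intent; it is not a tested fix. Note, too, that the original grabs getText without parentheses, so if getText is a method the adjustment-type comparison can never be true and the else branch always runs.

    // Sketch only: the option ranges (5-21 for credit-type adjustments, 0-4 for
    // everything else) are inferred from the branches above, not from any spec.
    var adjustType = Xbp.Page.getAttribute("cw_adjustmenttype").getText();
    var reasonCodeControl = Xbp.Page.getControl("cw_reasoncode");
    var options = Xbp.Page.getAttribute("cw_reasoncode").getOptions();
    var isCredit = adjustType === "Short-pay/Applying Credit" ||
                   adjustType === "Refund/Return (Credit)";

    // Start from an empty control and add back only the range that applies.
    reasonCodeControl.clearOptions();
    for (var i = 0; i < options.length; i++) {
        if (isCredit ? (i >= 5 && i <= 21) : i <= 4) {
            reasonCodeControl.addOption(options[i]);
        }
    }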

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Valerie AuroraRepealing Obamacare will repeal my small business

I emailed this to the U.S. Senate Finance Committee today in response to the weekly Wall-of-Us email call-to-action, and thought it would fit on my blog as well.

Hello,

I am a small business owner with a pre-existing condition who can’t go without health insurance for even one month. The Affordable Care Act made my small business possible. If ACA is repealed or replaced, I will be forced to go out of business.

Two years ago, I started my own business, Frame Shift Consulting, teaching technology companies how to improve diversity and inclusion. I also have a genetic disease called Ehlers-Danlos Syndrome. If I take about ten prescription drugs every day, see several medical professionals regularly, and exercise carefully, I can live a semi-normal life and even work full-time if I don’t have to go to an office every day. Without access to prescription drugs and medical care, I would be unable to work full-time or even care for myself, and would have to go on disability, SSDI.

Before the Affordable Care Act, no health insurance company would sell me a policy on the individual market. My only option was to get a salaried job at a company large enough to offer health insurance to their employees. If I lost my job, I could buy one or two coverage options under COBRA or HIPAA, but I was always just one missed payment away from losing my access to health insurance at any price. (I once tried to apply for health insurance on the open market; after two questions about my medical history they told me I’d never get approved.) The ACA let me quit my job and start my own small business free from fear of losing my health insurance and becoming unable to work.

At my new small business, I am doing far more innovative and valuable work than I ever did for a big company. I love being my own boss, and the flexibility I have makes it far easier to cope with the bad days of Ehlers-Danlos Syndrome. I love how high impact my work is, and that I am training other people to do the same work. I could never have done work that changed so many people’s lives for the better while working at any other company.

Every time I hear about a new bill to repeal or replace the ACA, I study it to see whether I would still be able to afford health insurance under the new system. So far, the answer has been a resounding no. Without the individual mandate, coverage for pre-existing conditions, price controls, and minimum coverage requirements that states can’t waive, no health insurance company would offer me an individual policy at a price I can afford.

I’m one of the luckier ones; if the ACA is repealed or replaced and I lose my health insurance, I can probably get a salaried job at a big company with health insurance benefits. I don’t expect anyone to care about my personal satisfaction in doing work I love, or having the flexibility to stay home when my Ehlers-Danlos is acting up. But I do expect my elected representatives to care that a cutting edge, high-impact small business would go out of business if they passed Graham-Cassidy or any other repeal or replace bill. The ACA is good for business, good for innovation, and good for people. Instead of replacing it with an inferior system that would cover fewer people for more money, let’s work on improving the ACA and filling in the many gaps in its coverage.

Thank you for your time,

Valerie Aurora
Proud small business owner


Tagged: politics

,

Planet Linux AustraliaOpenSTEM: What Makes Humans Different From Most Other Mammals?

Well, there are several things that make us different from other mammals – although perhaps fewer than one might think. We are not unique in using tools; in fact, we discover more animals that use tools all the time – even fish! We pride ourselves on being a “moral animal”; however, fairness, reciprocity, empathy and […]

Krebs on SecurityEquifax or Equiphish?

More than a week after it said most people would be eligible to enroll in a free year of its TrustedID identity theft monitoring service, big three consumer credit bureau Equifax has begun sending out email notifications to people who were able to take the company up on its offer. But in yet another security stumble, the company appears to be training recipients to fall for phishing scams.

Some people who signed up for the service after Equifax announced Sept. 7 that it had lost control over Social Security numbers, dates of birth and other sensitive data on 143 million Americans are still waiting for the promised notice from Equifax. But as I recently noted on Twitter, other folks have received emails from Equifax over the past few days, and the messages do not exactly come across as having emanated from a company that cares much about trying to regain the public’s trust.

Here’s a redacted example of an email Equifax sent out to one recipient recently:

As we can see, the email purports to have been sent from trustedid.com, a domain that Equifax has owned for almost four years. However, Equifax apparently decided it was time for a new — and perhaps snazzier — name: trustedidpremier.com.

The above-pictured message says it was sent from one domain, and then asks the recipient to respond by clicking on a link to a completely different (but confusingly similar) domain.

My guess is the reason Equifax registered trustedidpremier.com was to help people concerned about the breach to see whether they were one of the 143 million people affected (for more on how that worked out for them, see Equifax Breach Response Turns Dumpster Fire). I’d further surmise that Equifax was expecting (and received) so much interest in the service as a result of the breach that all the traffic from the wannabe customers might swamp the trustedid.com site and ruin things for the people who were already signed up for the service before Equifax announced the breach on Sept. 7.

The problem with this dual-domain approach is that the domain trustedidpremier.com is only a few weeks old, so it had very little time to establish itself as a legitimate domain. As a result, in the first few hours after Equifax disclosed the breach the domain was actually flagged as a phishing site by multiple browsers because it was brand new and looked about as professionally designed as a phishing site.

What’s more, there is nothing tying the domain registration records for trustedidpremier.com to Equifax: The domain is registered to a WHOIS privacy service, which masks information about who really owns the domain (again, not exactly something you might expect from an identity monitoring site). Anyone looking for assurances that the site perhaps was hosted on Internet address space controlled by and assigned to Equifax would also be disappointed: The site is hosted at Amazon.

While there’s nothing wrong with that exactly, one might reasonably ask: Why didn’t Equifax just send the email from Equifax.com and host the ID theft monitoring service there as well? Wouldn’t that have considerably lessened any suspicion that this missive might be a phishing attempt?

Perhaps, but you see while TrustedID is technically owned by Equifax Inc., its services are separate from Equifax and its terms of service are different from those provided by Equifax (almost certainly to separate Equifax from any consumer liability associated with its monitoring service).

THE BACKSTORY

What’s super-interesting about trustedid.com is that it didn’t always belong to Equifax. According to the site’s Wikipedia page, TrustedID Inc. was purchased by Equifax in 2013, but it was founded in 2004 as an identity protection company which offered a service that let consumers automatically “freeze” their credit file at the major bureaus. A freeze prevents Equifax and the other major credit bureaus from selling an individual’s credit data without first getting consumer consent.

By 2006, some 17 states offered consumers the ability to freeze their credit files, and the credit bureaus were starting to see the freeze as an existential threat to their businesses (in which they make slightly more than a dollar each time a potential creditor — or ID thief — asks to peek at your credit file).

Other identity monitoring firms — such as LifeLock — were by then offering services that automated the placement of identity fraud controls — such as the “fraud alert,” a free service that consumers can request to block creditors from viewing their credit files.

[Author’s note: Fraud alerts only last for 90 days, although you can renew them as often as you like. More importantly, while lenders and service providers are supposed to seek and obtain your approval before granting credit in your name if you have a fraud alert on your file, they are not legally required to do this — and very often don’t.]

Anyway, the era of identity monitoring services automating things like fraud alerts and freezes on behalf of consumers effectively died after a landmark lawsuit filed by big-three bureau Experian (which has its own storied history of data breaches). In 2008, Experian sued LifeLock, arguing its practice of automating fraud alerts violated the Fair Credit Reporting Act.

In 2009, a court found in favor of Experian, and that decision effectively killed such services — mainly because none of the banks wanted to distribute them and sell them as a service anymore.

WHAT SHOULD YOU DO

These days, consumers in all states have a right to freeze their credit files, and I would strongly encourage all readers to do this. Yes, it can be a pain, and the bureaus certainly seem to be doing everything they can at the moment to make this process extremely difficult and frustrating for consumers. As detailed in the analysis section of last week’s story — Equifax Breach: Setting the Record Straight — many of the freeze sites are timing out, crashing or telling consumers just to mail in copies of identity documents and printed-out forms.

Other bureaus, like TransUnion and Experian, are trying mightily to steer consumers away from a freeze and toward their confusingly named “credit lock” services — which claim to be the same thing as freezes only better. The truth is these lock services do not prevent the bureaus from selling your credit reports to anyone who comes asking for them (including ID thieves); and consumers who opt for them over freezes must agree to receive a flood of marketing offers from a myriad of credit bureau industry partners.

While it won’t stop all forms of identity theft (such as tax refund fraud or education loan fraud), a freeze is the option that puts you the consumer in the strongest position to control who gets to monkey with your credit file. In contrast, while credit monitoring services might alert you when someone steals your identity, they’re not designed to prevent crooks from doing so.

That’s not to say credit monitoring services aren’t useful: They can be helpful in recovering from identity theft, which often involves a tedious, lengthy and expensive process for straightening out the phony activity with the bureaus.

The thing is, it’s almost impossible to sign up for credit monitoring services while a freeze is active on your credit file, so if you’re interested in signing up for them it’s best to do so before freezing your credit. But there’s no need to pay for these services: Hundreds of companies — many of which you have probably transacted with at some point in the last year — have disclosed data breaches and are offering free monitoring. California maintains one of the most comprehensive lists of companies that disclosed a breach, and most of those are offering free monitoring.

There’s a small catch with the freezes: Depending on the state in which you live, the bureaus may each be able to charge you for freezing your file (the fee ranges from $5 to $20); they may also be able to charge you for lifting or temporarily thawing your file in the event you need access to credit. Consumers Union has a decent rundown of the freeze fees by state.

In short, sign up for whatever free monitoring is available if that’s of interest, and then freeze your file at the four major bureaus. You can do this online, by phone, or through the mail. Given how unreliable the credit bureau Web sites have been for placing freezes these past few weeks, it may be easiest to do this over the phone. Here are the freeze Web sites and freeze phone numbers for each bureau (note the phone procedures can and likely will change as the bureaus get wise to more consumers learning how to quickly step through their automated voice response systems):

Equifax: 866-349-5191; choose option 3 for a “Security Freeze”

Experian: 888-397-3742;
–Press 2 "To learn about fraud or ADD A SECURITY FREEZE"
–Press 2 "for security freeze options"
–Press 1 "to place a security freeze"
–Press 2 "…for all others"
–enter your info when prompted

Innovis: 800-540-2505;
–Press 1 for English
–Press 3 "to place or manage an active duty alert or a SECURITY FREEZE"
–Press 2 "to place or manage a SECURITY FREEZE"
–enter your info when prompted

Transunion: 888-909-8872, choose option 3

If you still have questions about freezes, fraud alerts, credit monitoring or anything else related to any of the above, check out the lengthy primer/Q&A I published here on Sept. 11, The Equifax Breach: What You Should Know.

Planet Linux AustraliaDave Hall: Drupal Puppies

Over the years Drupal distributions, or distros as they're more affectionately known, have evolved a lot. We started off passing around database dumps. Eventually we moved onto using installations profiles and features to share par-baked sites.

There are some signs that distros aren't working for people using them. Agencies often hack a distro to meet client requirements. This happens because it is often difficult to cleanly extend a distro. A content type might need extra fields or the logic in an alter hook may not be desired. This makes it difficult to maintain sites built on distros. Other times maintainers abandon their distributions. This leaves site owners with an unexpected maintenance burden.

We should recognise how people are using distros and try to cater to them better. My observations suggest there are 2 types of Drupal distributions: starter kits and targeted products.

Targeted products are easier to deal with. Increasingly, monetising targeted distro products is done through a SaaS offering. The revenue can fund the ongoing development of the product. This can help ensure the project remains sustainable. There are signs that this is a viable way of building Drupal 8 based products. We should be encouraging companies to embrace a strategy built around open SaaS. Open Social is a great example of this approach. Releasing the distros demonstrates a commitment to the business model. Often the secret sauce isn't in the code, it is the team and services built around the product.

Many Drupal 7 based distros struggled to articulate their use case. It was difficult to know if they were a product, a demo or a community project that you extend. Open Atrium and Commerce Kickstart are examples of distros with an identity crisis. We need to reconceptualise most distros as "starter kits" or as I like to call them "puppies".

Why puppies? Once you take a puppy home it becomes your responsibility. Starter kits should be the same. You should never assume that a starter kit will offer an upgrade path from one release to the next. When you install a starter kit you are responsible for updating the modules yourself. You need to keep track of security releases. If your puppy leaves a mess on the carpet, no one else will clean it up.

Sites built on top of a starter kit should diverge from the original version. This shouldn't only be an expectation, it should be encouraged. Installing a starter kit is the starting point of building a unique fork.

Project pages should clearly state that users are buying a puppy. Prospective puppy owners should know if they're about to take home a little lap dog or one that will grow to the size of a pony that needs daily exercise. Puppy breeders (developers) should not feel compelled to do anything once releasing the puppy. That said, most users would like some documentation.

I know of several agencies and large organisations that are making use of starter kits. Let's support people who are adopting this approach. As a community we should acknowledge that distros aren't working. We should start working out how best to manage the transition to puppies.

,

Planet Linux AustraliaTim Serong: On Equal Rights

This is probably old news now, but I only saw it this morning, so here we go:

In case that embedded tweet doesn’t show up properly, that’s an editorial in the NT News which says:

Voting papers have started to drop through Territory mailboxes for the marriage equality postal vote and I wanted to share with you a list of why I’ll be voting yes.

1. I’m not an arsehole.

This resulted in predictable comments along the lines of “oh, so if I don’t share your views, I’m an arsehole?”

I suppose it’s unlikely that anyone who actually needs to read and understand what I’m about to say will do so, but just in case, I’ll lay this out as simply as I can:

  • A personal belief that marriage is a thing that can only happen between a man and a woman does not make you an arsehole (it might make you on the wrong side of history, or a lot of other things, but it does not necessarily make you an arsehole).
  • Voting “no” to marriage equality is what makes you an arsehole.

The survey says “Should the law be changed to allow same-sex couples to marry?” What this actually means is, “Should same-sex couples have the same rights under law as everyone else?”

If you believe everyone should have the same rights under law, you need to vote yes regardless of what you, personally, believe the word “marriage” actually means – this is to make sure things like “next of kin” work the way the people involved in a relationship want them to.

If you believe that there are minorities that should not have the same rights under law as everyone else, then I’m sorry, but you’re an arsehole.

(Personally I think the Marriage Act should be ditched entirely in favour of a Civil Unions Act – that way the word “marriage” could go back to simply meaning whatever it means to the individuals being married, and to their god(s) if they have any – but this should in no way detract from the above. Also, this vote shouldn’t have happened in the first place; our elected representatives should have done their bloody jobs and fixed the legislation already.)

TEDHow the ‘Battle of the Sexes’ influenced a generation of men: Billie Jean King’s TEDWomen update

Billie Jean King: “Bobby Riggs — he was the former number one player, he wasn’t just some hacker. He was one of my heroes and I admired him. And that’s the reason I beat him, actually, because I respected him.” She spoke with Pat Mitchell at TEDWomen2015. Photo: Marla Aufmuth/TED

Forty-four years ago this week, the number one tennis star in the world, 29-year-old Billie Jean King, agreed to take on 55-year-old Bobby Riggs, in a match dubbed the “Battle of the Sexes.” The prize was $100,000 — which compared with today’s million-dollar-winning pots wasn’t much — but it was the first time that women and men were offered the same amount of prize money for victory.

The exhibition match, which admittedly was more notable at the time for its spectacle and outrageousness — Billie Jean King entered the Houston Astrodome on a feathery litter carried by shirtless men, for instance — was the most watched tennis match ever, with an estimated worldwide television audience of 90 million people. If you are old enough to remember it, you probably watched it.

Billie Jean King won in straight sets: 6-4, 6-3, 6-3.

This weekend, a new movie based on the true story starring Emma Stone as Billie Jean King and Steve Carell as Bobby Riggs hits theaters. With the election of Donald Trump — and all the sexism and misogyny that the 2016 election entailed just behind us — the story is sadly relevant today. As Lynn Sherr wrote in her review of the movie today at BillMoyers.com, “It’s all frustratingly familiar, but this time, the over-the-hill clown won.”

I interviewed Billie Jean King at TEDWomen in 2015 about her tennis career and lifelong fight for gender parity in sports and in the workplace. She talked about the match with Riggs and the intense pressure she felt on every stroke to win for women. She recalled, “I thought, ‘If I lose, it’s going to put women back 50 years, at least.’”

After she won, many women told her that her victory empowered them to finally get up the nerve to ask for a raise at work. “Some women had waited 10, 15 years to ask. I said, ‘More importantly, did you get it?’” (They did.)

As for men, the reaction was delayed. Many years later, she came to realize that the match had made an impact on the generation of men who were children at the time – an impact that they themselves didn’t realize until they were older. She told me, “Most times, the men are the ones who have tears in their eyes, it’s very interesting. They say, ‘Billie, I was very young when I saw that match, and now I have a daughter. And I am so happy I saw that as a young man.’”

One of those young men was President Obama.

He said: “You don’t realize it, but I saw that match at 12. And now I have two daughters, and it has made a difference in how I raise them.”

Watch my interview with Billie Jean King if you haven’t seen it:

A common refrain of those working to improve diversity and representation in media is that if you can’t see it, you can’t be it. And that’s true in sports, government and in the workplace as well. If leaders don’t represent the diversity of our globalizing world, fresh ideas, diverse talent and an inclusive society can’t flourish. Through the Billie Jean King Leadership Initiative, King works to level the playing field for all people of all backgrounds so that everyone can “achieve their maximum potential and contribute to building a better society for all.” (Full disclosure: I am a member of the BJKLI advisory council.)

Emma Stone told USA Today earlier this month that she’s proud to play a part in showing some of King’s story to a younger audience. “The nice thing about doing a film like this,” she said, “is that there’s a whole generation of people who weren’t born before the Battle of the Sexes who are going to learn about this incredible period in history and all the things that have come since, so I’m grateful for that.”

“It wasn’t about tennis,” says King. “It was about history and social change.”

TEDWomen 2017 happens November 1–3 in New Orleans, and you’re invited. Learn more!

Billie Jean King: “I started thinking about my sport and how everybody who played wore white shoes, white clothes, played with white balls — everybody who played was white. And I said to myself, at 12 years old, “Where is everyone else?” And that just kept sticking in my brain. And that moment, I promised myself I’d fight for equal rights and opportunities for boys and girls, men and women, the rest of my life.” Photo: Marla Aufmuth/TED


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV October 2017 Workshop

Oct 21 2017 12:30
Oct 21 2017 16:30
Location: Infoxchange, 33 Elizabeth St. Richmond

There will also be the usual casual hands-on workshop, Linux installation, configuration and assistance and advice. Bring your laptop if you need help with a particular issue. This will now occur BEFORE the talks from 12:30 to 14:00. The talks will commence at 14:00 (2pm) so there is time for people to have lunch nearby.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121 (enter via the garage on Jonas St.) Late arrivals, please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

October 21, 2017 - 12:30

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV Main October 2017 Meeting: The Tor software and network

Oct 3 2017 18:30
Oct 3 2017 20:30
Location: Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

PLEASE NOTE NEW LOCATION

Tuesday, October 3, 2017
6:30 PM to 8:30 PM
Mail Exchange Hotel
688 Bourke St, Melbourne VIC 3000

Speakers:

  • Russell Coker, Tor

Tor is free software and an open network that helps you defend against traffic analysis, a form of network surveillance that threatens personal freedom and privacy, confidential business activities and relationships, and state security.

Russell Coker has done lots of Linux development over the years, mostly involved with Debian.

Mail Exchange Hotel, 688 Bourke St, Melbourne VIC 3000

Food and drinks will be available on premises.

Linux Users of Victoria is a subcommittee of Linux Australia.

October 3, 2017 - 18:30

,

TEDCassini’s final dive, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

Farewell to Cassini — and here’s to the continuing search for life beyond Earth. In mid-August, PBS released a digital short featuring Carolyn Porco, a planetary scientist and the leader of the imaging team for the Cassini mission to Saturn. In the short, Porco discusses what is required for life to exist on a planet, and how Saturn’s moon Enceladus seems a promising place to look for life outside Earth. This coincides with Cassini’s final dive on September 15, 2017. After 20 years in space, the Cassini spacecraft ended its seven-year observation of Saturn by diving into its atmosphere, where it burned and disintegrated. (Watch Porco’s TED Talk)

How old is zero really? The Bakshali manuscript is a 70-page birch bark manuscript thought to have been used by merchants in India to practice arithmetic. Notably, it contains the number zero, represented by a small dot. After carbon-dating the manuscript, scientists from the University of Oxford, including mathematics professor Marcus du Sautoy, determined that the manuscript likely dates from 200–400 A.D., much earlier than previously thought. If the carbon dating is correct, Bakshali may be the first known usage of zero as a symbol for nothing. (Watch du Sautoy’s TED Talk)

The power of taking time off. In 2009, Stefan Sagmeister took the TED stage by storm as he shared his vision of time off. In his talk, he explains that every seven years, he embarks on a sabbatical year to recharge, be creative, and feel inspired. Fast forward to 2017, and Neil Pasricha teamed up with the CEO of SimpliFlying, a global aviation strategy firm, to test Sagmeister’s approach within the company. Instead of every seven years, employees took vacation every seven weeks. Despite a few pain points, workers’ creativity, productivity and happiness increased, and the firm’s economic performance improved, Pasricha reports in the Harvard Business Review. It seems as though it pays to relax. (Watch Sagmeister’s TED Talk and Neil Pasricha’s TED Talk)

What’s wrong with US democracy — and how to fix it. In this time of divisive politics, Michael Porter and colleague Katherine Gehl released new research describing the causes of the U.S political system’s failure to serve the public interest. Their detailed report explains how the system changed over the years to benefit political parties and industry allies, and offers strategies for how we can reinvigorate our democracy. (Watch Michael Porter’s TED Talk)

The worst flag in North America gets a reboot. In Roman Mars’ TED Talk on awful city flag designs, he calls Pocatello, Idaho’s flag the worst in North America. The city’s residents didn’t stand for that; they called on local officials to create a new flag. In 2016, a flag design committee was formed, discussions were open to the public, and 709 submissions poured in. Mars even traveled to Pocatello to consult on the design process. Now, Pocatello’s flag has been transformed from what the North American Vexillological Association rated as the worst flag in North America into a flag that attempts to capture the beauty and history of Pocatello. (Watch Roman Mars’ TED Talk)  

Community Health Academy: Phase one. The news may be regularly alarming, but around the world, things are on an upward trajectory. At Goalkeepers, held September 19 and 20 in New York City, the Bill & Melinda Gates Foundation set out to celebrate the “quiet progress” being made toward the UN’s Sustainable Development Goals. Amid a speaker lineup that included Malala Yousafzai, Justin Trudeau and Barack Obama, 2017 TED Prize winner Raj Panjabi stepped up to share his vision for bringing health care to the billion people who lack it by empowering community health workers. He shared the latest on his TED Prize wish: the Community Health Academy. The project now has 15 partners and phase one, launching next year, will be a free, open-education platform for policy makers and nonprofit leaders interested in community health models. “We cannot achieve the Global Goals without investing in hiring, training and equipping community health workers,” said Panjabi. “We’re working to make sure community health workers are no longer an informal, unrecognized group but become a renowned, empowered profession like nurses and doctors.” (Watch Panjabi’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.

Featured Image Credit: NASA.

 

 


CryptogramFriday Squid Blogging: Using Squid Ink to Detect Gum Disease

A new dental imagery method, using squid ink, light, and ultrasound.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Linux Australiasthbrx - a POWER technical blog: Stupid Solutions to Stupid Problems: Hardcoding Your SSH Key in the Kernel

The "problem"

I'm currently working on firmware and kernel support for OpenCAPI on POWER9.

I've recently been allocated a machine in the lab for development purposes. We use an internal IBM tool running on a secondary machine that triggers hardware initialisation procedures, then loads a specified skiboot firmware image, a kernel image, and a root file system directly into RAM. This allows us to get skiboot and Linux running without requiring the usual hostboot initialisation and gives us a lot of options for easier tinkering, so it's super-useful for our developers working on bringup.

When I got access to my machine, I figured out the necessary scripts, developed a workflow, and started fixing my code... so far, so good.

One day, I was trying to debug something and get logs off the machine using ssh and scp, when I got frustrated with having to repeatedly type in our ultra-secret, ultra-secure root password, abc123. So, I ran ssh-copy-id to copy over my public key, and all was good.

Until I rebooted the machine, when strangely, my key stopped working. It took me longer than it should have to realise that this is an obvious consequence of running entirely from an initrd that's reloaded every boot...

The "solution"

I mentioned something about this to Jono, my housemate/partner-in-stupid-ideas, one evening a few weeks ago. We decided that clearly, the best way to solve this problem was to hardcode my SSH public key in the kernel.

This would definitely be the easiest and most sensible way to solve the problem, as opposed to, say, just keeping my own copy of the root filesystem image. Or asking Mikey, whose desk is three metres away from mine, whether he could use his write access to add my key to the image. Or just writing a wrapper around sshpass...

One Tuesday afternoon, I was feeling bored...

The approach

The SSH daemon looks for authorised public keys in ~/.ssh/authorized_keys, so we need a read of /root/.ssh/authorized_keys to return a specified hard-coded string.

I did a bit of investigation. My first thought was to put some kind of hook inside whatever filesystem driver was being used for the root. After some digging, I found out that the filesystem type rootfs, as seen in mount, is actually backed by the tmpfs filesystem. I took a look around the tmpfs code for a while, but didn't see any way to hook in a fake file without a lot of effort - the tmpfs code wasn't exactly designed with this in mind.

I thought about it some more - what would be the easiest way to create a file such that it just returns a string?

Then I remembered sysfs, the filesystem normally mounted at /sys, which is used by various kernel subsystems to expose configuration and debugging information to userspace in the form of files. The sysfs API allows you to define a file and specify callbacks to handle reads and writes to the file.

That got me thinking - could I create a file in /sys, and then use a bind mount to have that file appear where I need it in /root/.ssh/authorized_keys? This approach seemed fairly straightforward, so I decided to give it a try.

First up, creating a pseudo-file. It had been a while since the last time I'd used the sysfs API...

sysfs

The sysfs pseudo file system was first introduced in Linux 2.6, and is generally used for exposing system and device information.

Per the sysfs documentation, sysfs is tied in very closely with the kobject infrastructure. sysfs exposes kobjects as directories, containing "attributes" represented as files. The kobject infrastructure provides a way to define kobjects representing entities (e.g. devices) and ksets which define collections of kobjects (e.g. devices of a particular type).

Using kobjects you can do lots of fancy things such as sending events to userspace when devices are hotplugged - but that's all outside the scope of this post. It turns out there are some fairly straightforward wrapper functions if all you want to do is create a kobject just to have a simple directory in sysfs.

#include <linux/kobject.h>

static int __init ssh_key_init(void)
{
        struct kobject *ssh_kobj;
        ssh_kobj = kobject_create_and_add("ssh", NULL);
        if (!ssh_kobj) {
                pr_err("SSH: kobject creation failed!\n");
                return -ENOMEM;
        }
        /* more setup will be added to this function as we go */
        return 0;
}
late_initcall(ssh_key_init);

This creates and adds a kobject called ssh. And just like that, we've got a directory in /sys/ssh/!

The next thing we have to do is define a sysfs attribute for our authorized_keys file. sysfs provides a framework for subsystems to define their own custom types of attributes with their own metadata - but for our purposes, we'll use the generic bin_attribute attribute type.

#include <linux/sysfs.h>

const char key[] = "PUBLIC KEY HERE...";

static ssize_t show_key(struct file *file, struct kobject *kobj,
                        struct bin_attribute *bin_attr, char *to,
                        loff_t pos, size_t count)
{
        return memory_read_from_buffer(to, count, &pos, key, bin_attr->size);
}

static const struct bin_attribute authorized_keys_attr = {
        .attr = { .name = "authorized_keys", .mode = 0444 },
        .read = show_key,
        .size = sizeof(key)
};

We provide a simple callback, show_key(), that copies the key string into the file's buffer, and we put it in a bin_attribute with the appropriate name, size and permissions.

To actually add the attribute, we put the following in ssh_key_init():

int rc;
rc = sysfs_create_bin_file(ssh_kobj, &authorized_keys_attr);
if (rc) {
        pr_err("SSH: sysfs creation failed, rc %d\n", rc);
        return rc;
}

Woo, we've now got /sys/ssh/authorized_keys! Time to move on to the bind mount.

Mounting

Now that we've got a directory with the key file in it, it's time to figure out the bind mount.

Because I had no idea how any of the file system code works, I started off by running strace on mount --bind ~/tmp1 ~/tmp2 just to see how the userspace mount tool uses the mount syscall to request the bind mount.

execve("/bin/mount", ["mount", "--bind", "/home/ajd/tmp1", "/home/ajd/tmp2"], [/* 18 vars */]) = 0

...

mount("/home/ajd/tmp1", "/home/ajd/tmp2", 0x18b78bf00, MS_MGC_VAL|MS_BIND, NULL) = 0

The first and second arguments are the source and target paths respectively. The third argument, looking at the signature of the mount syscall, is a pointer to a string with the file system type. Because this is a bind mount, the type is irrelevant (upon further digging, it turns out that this particular pointer is to the string "none").

The fourth argument is where we specify the flags bitfield. MS_MGC_VAL is a magic value that was required before Linux 2.4 and can now be safely ignored. MS_BIND, as you can probably guess, signals that we want a bind mount.

(The final argument is used to pass file system specific data - as you can see it's ignored here.)

Now, how is the syscall actually handled on the kernel side? The answer is found in fs/namespace.c.

SYSCALL_DEFINE5(mount, char __user *, dev_name, char __user *, dir_name,
                char __user *, type, unsigned long, flags, void __user *, data)
{
        int ret;

        /* ... copy parameters from userspace memory ... */

        ret = do_mount(kernel_dev, dir_name, kernel_type, flags, options);

        /* ... cleanup ... */
}

So in order to achieve the same thing from within the kernel, we just call do_mount() with exactly the same parameters as the syscall uses:

rc = do_mount("/sys/ssh", "/root/.ssh", "sysfs", MS_BIND, NULL);
if (rc) {
        pr_err("SSH: bind mount failed, rc %d\n", rc);
        return rc;
}

...and we're done, right? Not so fast:

SSH: bind mount failed, rc -2

-2 is ENOENT - no such file or directory. For some reason, we can't find /sys/ssh... of course, that would be because even though we've created the sysfs entry, we haven't actually mounted sysfs on /sys.

rc = do_mount("sysfs", "/sys", "sysfs",
              MS_NOSUID | MS_NOEXEC | MS_NODEV, NULL);

At this point, my key worked!

Note that this requires that your root file system has an empty directory created at /sys to be the mount point. Additionally, in a typical Linux distribution environment (as opposed to my hardware bringup environment), your initial root file system will contain an init script that mounts your real root file system somewhere and calls pivot_root() to switch to the new root file system. At that point, the bind mount won't be visible from children processes using the new root - I think this could be worked around but would require some effort.

Kconfig

The final piece of the puzzle is building our new code into the kernel image.

To allow us to switch this important functionality on and off, I added a config option to fs/Kconfig:

config SSH_KEY
        bool "Andrew's dumb SSH key hack"
        default y
        help
          Hardcode an SSH key for /root/.ssh/authorized_keys.

          This is a stupid idea. If unsure, say N.

This will show up in make menuconfig under the File systems menu.

And in fs/Makefile:

obj-$(CONFIG_SSH_KEY)           += ssh_key.o

If CONFIG_SSH_KEY is set to y, obj-$(CONFIG_SSH_KEY) evaluates to obj-y and thus ssh_key.o gets compiled. Conversely, obj-n is completely ignored by the build system.

I thought I was all done... then Andrew suggested I make the contents of the key configurable, and I had to oblige. Conveniently, Kconfig options can also be strings:

config SSH_KEY_VALUE
        string "Value for SSH key"
        depends on SSH_KEY
        help
          Enter in the content for /root/.ssh/authorized_keys.

Including the string in the C file is as simple as:

const char key[] = CONFIG_SSH_KEY_VALUE;
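
For what it's worth, once both options are set in menuconfig, the relevant lines in the generated .config look something like this (the key value here is just a placeholder, not a real key):

CONFIG_SSH_KEY=y
CONFIG_SSH_KEY_VALUE="ssh-rsa AAAAB3NzaC1yc2E...== ajd@devbox"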

And there we have it, a nicely configurable albeit highly limited kernel SSH backdoor!

Conclusion

I've put the full code up on GitHub for perusal. Please don't use it, I will be extremely disappointed in you if you do.

Thanks to Jono for giving me stupid ideas, and the rest of OzLabs for being very angry when they saw the disgusting things I was doing.

Comments and further stupid suggestions welcome!

Sociological ImagesPunk Rock Resisting Islamophobia

Originally posted at Discoveries

Punk rock has a long history of anti-racism, and now a new wave of punk bands are turning it up to eleven to combat Islamophobia. For a recent research article, sociologist Amy D. McDowell  immersed herself into the “Taqwacore” scene — a genre of punk rock that derives its name from the Arabic word “Taqwa.” While inspired by the Muslim faith, this genre of punk is not strictly religious — Taqwacore captures the experience of the “brown kids,” Muslims and non-Muslims alike who experience racism and prejudice in the post-9/11 era. This music calls out racism and challenges stereotypes.

Through a combination of interviews and many hours of participant observation at Taqwacore events, McDowell brings together testimony from musicians and fans, describes the scene, and analyzes materials from Taqwacore forums and websites. Many participants, Muslim and non-Muslim alike, describe processes of discrimination where anti-Muslim sentiments and stereotypes have affected them. Her research shows how Taqwacore is a multicultural musical form for a collective, panethnic “brown” identity that spans multiple nationalities and backgrounds. Pushing back against the idea that Islam and punk music are incompatible, Taqwacore artists draw on the essence of punk to create music that empowers marginalized youth.

Neeraj Rajasekar is a Ph.D. student in sociology at the University of Minnesota.

(View original at https://thesocietypages.org/socimages)

CryptogramBoston Red Sox Caught Using Technology to Steal Signs

The Boston Red Sox admitted to eavesdropping on the communications channel between catcher and pitcher.

Stealing signs is believed to be particularly effective when there is a runner on second base who can both watch what hand signals the catcher is using to communicate with the pitcher and can easily relay to the batter any clues about what type of pitch may be coming. Such tactics are allowed as long as teams do not use any methods beyond their eyes. Binoculars and electronic devices are both prohibited.

In recent years, as cameras have proliferated in major league ballparks, teams have begun using the abundance of video to help them discern opponents' signs, including the catcher's signals to the pitcher. Some clubs have had clubhouse attendants quickly relay information to the dugout from the personnel monitoring video feeds.

But such information has to be rushed to the dugout on foot so it can be relayed to players on the field -- a runner on second, the batter at the plate -- while the information is still relevant. The Red Sox admitted to league investigators that they were able to significantly shorten this communications chain by using electronics. In what mimicked the rhythm of a double play, the information would rapidly go from video personnel to a trainer to the players.

This is ridiculous. The rules about what sorts of sign stealing are allowed and what sorts are not are arbitrary and unenforceable. My guess is that the only reason there aren't more complaints is because everyone does it.

The Red Sox responded in kind on Tuesday, filing a complaint against the Yankees claiming that the team uses a camera from its YES television network exclusively to steal signs during games, an assertion the Yankees denied.

Boston's mistake here was using a very conspicuous Apple Watch as a communications device. They need to learn to be more subtle, like everyone else.

Worse Than FailureError'd: Choose Wisely

"I'm not sure how I can give feedback on this course, unless, figuring out this matrix is actually a final exam," wrote Mads.

 

Brian W. writes, "Sorry that you're not happy with our spam, but before you go...just one more."

 

"I was looking forward to getting this Gerber Dime, but I guess I'll have to wait till they port it to OS X," wrote Peter G.

 

"Deleting 7 MB frees up 6.66 GB? I smell a possible unholy alliance," Mike W. writes.

 

Bill W. wrote, "I wonder if they're wanting to know to what degree I'm 'not at all likely' to recommend Best Buy to friends and family?"

 

"So, is this a new way for the folks at WebEx to make sure that you don't get bad answers?" writes Andy B.

 

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet Linux Australiasthbrx - a POWER technical blog: NCSI - Nice Network You've Got There

A neat piece of kernel code dropped into my lap recently, and as a way of processing having to inject an entire network stack into my brain in less-than-ideal time I thought we'd have a look at it here: NCSI!

NCSI - Not the TV Show

NCSI stands for Network Controller Sideband Interface, and put most simply it is a way for a management controller (eg. a BMC like those found on our OpenPOWER machines) to share a single physical network interface with a host machine. Instead of two distinct network interfaces you plug in a single cable and both the host and the BMC have network connectivity.

NCSI-capable network controllers achieve this by filtering network traffic as it arrives and determining whether it is host- or BMC-bound. For the controller to know how to do that, the BMC needs to tell it what to look out for, and from a Linux driver perspective this is the focus of the NCSI protocol.

NCSI Overview

Hi My Name Is 70:e2:84:14:24:a1

The major components of what NCSI helps facilitate are:

  • Network Controllers, known as 'Packages' in this context. There may be multiple separate packages which contain one or more Channels.
  • Channels, most easily thought of as the individual physical network interfaces. If a package is the network card, channels are the individual network jacks. (Somewhere a pedant's head is spinning in circles).
  • Management Controllers, or our BMC, with their own network interfaces. Hypothetically there can be multiple management controllers in a single NCSI system, but I've not come across such a setup yet.

NCSI is the medium and protocol via which these components communicate.

NCSI Packages

The interface between the Management Controller and one or more Packages carries both general network traffic to/from the Management Controller and NCSI traffic between the Management Controller and the Packages & Channels. Management traffic is differentiated from regular traffic via the inclusion of a special NCSI tag inserted in the Ethernet frame header. These management commands are used to discover and configure the state of the NCSI packages and channels.
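
To make that a little more concrete, here is a rough sketch of the control packet header that follows the Ethernet header on the wire (NCSI traffic uses its own EtherType, 0x88F8). The field names are paraphrased from the spec and the Linux driver rather than copied verbatim, so treat this as illustrative only:

#include <linux/types.h>

/* Illustrative NCSI control packet header - not the driver's actual struct. */
struct example_ncsi_ctl_hdr {
        u8     mc_id;        /* management controller ID              */
        u8     revision;     /* header revision (0x01)                */
        u8     reserved;
        u8     id;           /* instance ID, pairs commands/responses */
        u8     type;         /* command, response or AEN type         */
        u8     channel;      /* package ID plus internal channel ID   */
        __be16 length;       /* payload length                        */
        __be32 reserved1[2];
} __packed;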

If a BMC's network interface is configured to use NCSI, as soon as the interface is brought up NCSI gets to work finding and configuring a usable channel. The NCSI driver at first glance is an intimidating combination of state machines and packet handlers, but with enough coffee it can be represented like this:

NCSI State Diagram

Without getting into the nitty-gritty details, the overall process for configuring a channel enough to get packets flowing is fairly straightforward:

  • Find available packages.
  • Find each package's available channels.
  • (At least in the Linux driver) select a channel with link.
  • Put this channel into the Initial Config State. The Initial Config State is where all the useful configuration occurs. Here we find out what the selected channel is capable of and its current configuration, and set it up to recognise the traffic we're interested in. The first and most basic way of doing this is configuring the channel to filter traffic based on our MAC address.
  • Enable the channel and let the packets flow.

At this point NCSI takes a back seat to normal network traffic, transmitting a "Get Link Status" packet at regular intervals to monitor the channel.

AEN Packets

Changes can occur from the package side too; the NCSI package communicates these back to the BMC with Asynchronous Event Notification (AEN) packets. As the name suggests these can occur at any time and the driver needs to catch and handle these. There are different types but they essentially boil down to changes in link state, telling the BMC the channel needs to be reconfigured, or to select a different channel. These are only transmitted once and no effort is made to recover lost AEN packets - another good reason for the NCSI driver to periodically monitor the channel.

Filtering

Each channel can be configured to filter traffic based on MAC address, broadcast traffic, multicast traffic, and VLAN tagging. Associated with each of these filters is a filter table which can hold a finite number of entries. In the case of the VLAN filter, for example, each channel could match against 15 different VLAN IDs, but in practice the physical device will likely support fewer. Indeed the popular BCM5718 controller supports only two!

This is where I dived into NCSI. The driver had a lot of the pieces for configuring VLAN filters, but none of it was actually hooked up in the configure state, and the driver had no way of knowing which VLAN IDs were meant to be configured on the interface. The bulk of that work appears in this commit, where we take advantage of some useful network stack callbacks to get the VLAN configuration and set it during the configuration state. Getting to the configuration state at some arbitrary time and then managing to assign multiple IDs was the trickiest bit, and is something I'll be looking at simplifying in the future.
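
For readers who haven't poked at this corner of the network stack before, the callbacks in question are most likely the VLAN hooks in struct net_device_ops. Here's a minimal sketch of what hooking them looks like - purely illustrative, not the actual ftgmac100/NCSI code:

#include <linux/netdevice.h>

/*
 * Sketch only: the stack calls these hooks whenever a VLAN ID is added
 * to or removed from the interface, which is exactly the information
 * the NCSI configure state needs to program the channel's (small) VLAN
 * filter table.
 */
static int example_vlan_rx_add_vid(struct net_device *dev,
                                   __be16 proto, u16 vid)
{
        netdev_info(dev, "userspace added VLAN %u\n", vid);
        /* record vid somewhere the NCSI configuration code can see it */
        return 0;
}

static int example_vlan_rx_kill_vid(struct net_device *dev,
                                    __be16 proto, u16 vid)
{
        netdev_info(dev, "userspace removed VLAN %u\n", vid);
        return 0;
}

static const struct net_device_ops example_netdev_ops = {
        .ndo_vlan_rx_add_vid  = example_vlan_rx_add_vid,
        .ndo_vlan_rx_kill_vid = example_vlan_rx_kill_vid,
        /* ... the driver's other ndo_* callbacks ... */
};

(These hooks only fire if the device advertises the NETIF_F_HW_VLAN_CTAG_FILTER feature, which is roughly how the driver ends up being asked about VLAN IDs at all.)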


NCSI! A neat way to give physically separate users access to a single network controller, and if it works right you won't notice it at all. I'll surely be spending more time here (fleshing out the driver's features, better error handling, and making the state machine a touch more readable to start, and I haven't even mentioned HWA), so watch this space!

,

LongNowCassini Ends, but the Search for Life in the Solar System Continues

On September 15 02017, the Cassini-Huygens probe, which spent the last 13 years of a 20-year space mission studying Saturn, plummeted as planned into the ringed planet’s atmosphere, catching fire and becoming a meteor.

Cassini’s final moments, dubbed “The Grand Finale” by NASA, elicited reactions of wonder around the world. The stunning photographs Cassini captured of Saturn over the course of its mission were shared widely on social media. While the images understandably received most of the attention, the discoveries the probe made in its search for life in the solar system, especially on the Saturnian moons of Enceladus and Titan, will perhaps be its enduring legacy.

The atmosphere of Titan, a moon of Saturn. NASA/JPL-Caltech/Space Science Institute

Planetary scientist Carolyn Porco, who led the imaging team for the Cassini mission, spoke at Long Now in July 02017. In the Q&A, Stewart Brand asked Porco about what the impact of finding life in the solar system would be:

As the Cassini mission came to an end, Porco shared her reflections on the mission in a final captain’s log:

Captain’s Log

September 15, 2017

The end is now upon us. Within hours of the posting of this entry, Cassini will have burned up in the atmosphere of Saturn … a kiloton explosion, spread out against the sky in a meteoric display of light and fire, a dazzling flash to signal the dying essence of a lone emissary from another world. As if the myths of old had foretold the future, the great patriarch will consume his child. At that point, that golden machine, so dutiful and strong, will enter the realm of history, and the toils and triumphs of this long march will be done.

For those of us appointed long ago to embark on this journey, it has been a taxing 3 decades, requiring a level of dedication that I could not have predicted, and breathless times when we sprinted for the duration of a marathon. But in return, we were blessed to spend our lives working and playing in that promised land beyond the Sun.

My imaging team members and I were especially blessed to serve as the documentarians of this historic epoch and return a stirring visual record of our travels around Saturn and the glories we found there. This is our gift to the citizens of planet Earth.

So, it is with both wistful, sentimental reflection and a boundless sense of pride, in a commitment met and a job well done, that I now turn to face this looming, abrupt finality.

It is doubtful we will soon see a mission as richly suited as Cassini return to this ringed world and shoulder a task as colossal as we have borne over the last 27 years.

To have served on this mission has been to live the rewarding life of an explorer of our time, a surveyor of distant worlds. We wrote our names across the sky. We could not have asked for more.

I sign off now, grateful in knowing that Cassini’s legacy, and ours, will include our mutual roles as authors of a tale that humanity will tell for a very long time to come.

Carolyn Porco
Cassini Imaging Team Leader
Director, CICLOPS
Boulder, CO
cpcomments@ciclops.org

A few hours before its mission came to an end, Cassini took a final photograph of the planet it spent the last thirteen years exploring.

NASA/JPL-Caltech/Space Science Institute


The topic of space invites long-term thinking. Some recent Long Now talks:

Cory DoctorowBoring, complex and important: the deadly mix that blew up the open web

On Monday, the World Wide Web Consortium published EME, a standard for locking up video on the web with DRM, allowing large corporate members to proceed without taking any steps to protect accessibility work, security research, archiving or innovation.


I spent years working to get people to pay attention to the ramifications of the effort, but was stymied by the deadly combination of an issue that was super-technical and complicated, as well as kind of boring (standards-making is a slow-moving, legalistic process).

This is really the worst kind of problem, an issue that matters but that requires a lot of technical knowledge and sustained attention to engage with. I wrote up a postmortem on the effort for Wired.


The W3C is a multistakeholder body based on consensus, and that means that members are expected to compromise to find common ground. So we returned with a much milder proposal: we’d stand down on objecting to EME, provided that the consortium promised only to invoke laws such as the DMCA in tandem with some other complaint, like copyright infringement. That meant studios and their technology partners could always sue when someone infringed copyright, or stole trade secrets, or interfered with contractual arrangements, but they would not be able to abuse the W3C process to claim the right to sue over otherwise legal activities, such as automatically analysing videos to prevent strobe effects from triggering seizures in people with photosensitive epilepsy.

This proposal was a way to get at the leadership’s objection: if the law was making the mischief, then let us take the law off the table (EFF is also suing the US government to get the law overturned, but that could take years, far too long in web-time). More importantly, if EME’s advocates refused to negotiate on this point, it would suggest that they planned on using the law to enforce “rights” that they really shouldn’t have, such as the right to decide who could adapt video for people with disabilities, or whether national archives could exercise their statutory rights to make deposit copies of copyrighted works.

But EME’s proponents – a collection of browser vendors, entertainment industry trade bodies, and companies selling products based on EME – refused to negotiate. After 90 days of desultory participation, the W3C leaders allowed the process to die. Despite this intransigence, the W3C executive renewed the EME working group’s charter and allowed it to continue its work, even as the cracks among the W3C’s membership on the standard’s fate deepened.

By the time EME was ready to publish, those cracks had deepened further. The poll results on EME showed the W3C was more divided on this matter than on any in its history. Again, the W3C leadership put its thumbs on the scales for the entertainment industry’s wish-lists over the open web’s core requirements, and overrode every single objection raised by the members.

Boring, complex and important: a recipe for the web’s dire future
[Cory Doctorow/Wired]

Krebs on SecurityExperian Site Can Give Anyone Your Credit Freeze PIN

An alert reader recently pointed my attention to a free online service offered by big-three credit bureau Experian that allows anyone to request the personal identification number (PIN) needed to unlock a consumer credit file that was previously frozen at Experian.

Experian’s page for retrieving someone’s credit freeze PIN requires little more information than has already been leaked by big-three bureau Equifax and a myriad other breaches.

The first hurdle for instantly revealing anyone’s freeze PIN is to provide the person’s name, address, date of birth and Social Security number (all data that has been jeopardized in breaches 100 times over — including in the recent Equifax breach — and that is broadly for sale in the cybercrime underground).

After that, one just needs to input an email address to receive the PIN and swear that the information is true and belongs to the submitter. I’m certain this warning would deter all but the bravest of identity thieves!

The final authorization check is that Experian asks you to answer four so-called “knowledge-based authentication” or KBA questions. As I have noted in countless stories published here previously, the problem with relying on KBA questions to authenticate consumers online is that so much of the information needed to successfully guess the answers to those multiple-choice questions is now indexed or exposed by search engines, social networks and third-party services online — both criminal and commercial.

What’s more, many of the companies that provide and resell these types of KBA challenge/response questions have been hacked in the past by criminals that run their own identity theft services.

“Whenever I’m faced with KBA-type questions I find that database tools like Spokeo, Zillow, etc are my friend because they are more likely to know the answers for me than I am,” said Nicholas Weaver, a senior researcher in networking and security for the International Computer Science Institute (ICSI).

The above quote from Mr. Weaver came in a story from May 2017 which looked at how identity thieves were able to steal financial and personal data for over a year from TALX, an Equifax subsidiary that provides online payroll, HR and tax services. Equifax says crooks were able to reset the 4-digit PIN given to customer employees as a password and then steal W-2 tax data after successfully answering KBA questions about those employees.

In short: Crooks and identity thieves broadly have access to the data needed to reliably answer KBA questions on most consumers. That is why this offering from Experian completely undermines the entire point of placing a freeze. 

After discovering this portal at Experian, I tried to get my PIN, but the system failed and told me to submit the request via mail. That’s fine and as far as I’m concerned the way it should be. However, I also asked my followers on Twitter who have freezes in place at Experian to test it themselves. More than a dozen readers responded in just a few minutes, and most of them reported success at retrieving their PINs on the site and via email after answering the KBA questions.

Here’s a sample of the KBA questions the site asked one reader:

1. Please select the city that you have previously resided in.

2. According to our records, you previously lived on (XXTH). Please choose the city from the following list where this street is located.

3. Which of the following people live or previously lived with you at the address you provided?

4. Please select the model year of the vehicle you purchased or leased prior to July 2017.

Experian will display the freeze PIN on its site, and offer to send it to an email address of your choice. Image: Rob Jacques.

I understand if people who place freezes on their credit files are prone to misplacing the PIN provided by the bureaus that is needed to unlock or thaw a freeze. This is human nature, and the bureaus should absolutely have a reliable process to recover this PIN. However, the information should be sent via snail mail to the address on the credit record, not via email to any old email address.

This is yet another example of how someone or some entity other than the credit bureaus needs to be put in charge of rethinking and rebuilding the process by which consumers apply for and manage credit freezes. I addressed some of these issues — as well as other abuses by the credit reporting bureaus — in the second half of a long story published Wednesday evening.

Experian has not yet responded to requests for comment.

While this service is disappointing, I stand by my recommendation that everyone should place a freeze on their credit files. I published a detailed Q&A a few days ago about why this is so important and how you can do it. For those wondering about whether it’s possible and advisable to do this for their kids or dependents, check out The Lowdown on Freezing Your Kid’s Credit.

CryptogramISO Rejects NSA Encryption Algorithms

The ISO has decided not to approve two NSA-designed block encryption algorithms: Speck and Simon. It's because the NSA is not trusted to put security ahead of surveillance:

A number of them voiced their distrust in emails to one another, seen by Reuters, and in written comments that are part of the process. The suspicions stem largely from internal NSA documents disclosed by Snowden that showed the agency had previously plotted to manipulate standards and promote technology it could penetrate. Budget documents, for example, sought funding to "insert vulnerabilities into commercial encryption systems."

More than a dozen of the experts involved in the approval process for Simon and Speck feared that if the NSA was able to crack the encryption techniques, it would gain a "back door" into coded transmissions, according to the interviews and emails and other documents seen by Reuters.

"I don't trust the designers," Israeli delegate Orr Dunkelman, a computer science professor at the University of Haifa, told Reuters, citing Snowden's papers. "There are quite a lot of people in NSA who think their job is to subvert standards. My job is to secure standards."

I don't trust the NSA, either.

Worse Than FailureTales from the Interview: The In-House Developer

James was getting anxious to land a job that would put his newly-minted Computer Science degree to use. Six months had come to pass since he graduated and being a barista barely paid the bills. Living in a small town didn't afford him many local opportunities, so when he saw a developer job posting for an upstart telecom company, he decided to give it a shot.

Lincoln Log Cabin 2

"We do everything in-house!" the posting for CallCom emphasized, piquing James' interest. He hoped that meant there would be a small in-house development team that built their systems from the ground up. Surely he could learn the ropes from them before becoming a key contributor. He filled out the online application and happily clicked Submit.

Not 15 minutes later, his phone rang with a number he didn't recognize. Usually he just ignored those calls but he decided to answer. "Hi, is James available?" a nasally female voice asked, almost sounding disinterested. "This is Janine with CallCom, you applied for the developer position."

Caught off guard by the suddenness of their response, James wasn't quite ready for a phone screening. "Oh, yeah, of course I did! Just now. I am very interested."

"Great. Louis, the owner, would like to meet with you," Janine informed him.

"Ok, sure. I'm pretty open, I usually work in the evenings so I can make most days work," he replied, checking his calendar.

"Can you be here in an hour?" she asked. James managed to hide the fact he was freaking out about how to make it in time while assuring her he could be.

He arrived at the address Janine provided after a dangerous mid-drive shave. He felt unprepared but eager to rock the interview. The front door of their suite gave way to a lobby that seemed more like a walk-in closet. Janine was sitting behind a small desk reading a trashy tabloid and barely looked up to greet him. "Louis will see you now," she motioned toward a door behind the desk and went back to reading barely plausible celebrity rumors.

James stepped through the door into what could have been a walk-in closet for the first walk-in closet. A portly, sweaty man presumed to be Louis jumped up to greet him. "John! Glad you could make it on short notice. Have a seat!"

"Actually, it's James..." he corrected Louis, while also forgiving the mixup. "Nice to meet you. I was eager to get here to learn about this opportunity."

"Well James, you were right to apply! We are a fast growing company here at CallCom and I need eager young talent like you to really drive it home!" Louis was clearly excited about his company, growing sweatier by the minute.

"That sounds good to me! I may not have any real-world experience yet, but I assure you that I am eager to learn from your more senior members," James replied, trying to sell his potential.

Louis let out a hefty chuckle at James' mention of senior members. "Oh you mean stubborn old developers who are set in their ways? You won't be finding those around here! I believe in fresh young minds like yours, unmolded and ready to take the world by storm."

"I see..." James said, growing uneasy. "I suppose then I could at least learn how your code is structured from your junior developers? The ones who do your in-house development?"

Louis wiped his glistening brow with his suit coat before making the big revelation. "There are no other developers, James. It would just be you, building our fantastic new computer system from scratch! I have all the confidence in the world that you are the man for the job!"

James sat for a moment and pondered what he had just heard. "I'm sorry but I don't feel comfortable with that arrangement, Louis. I thought that by saying you do everything in-house, that implied there was already a development team."

"What? Oh, heavens no! In-house development means we let you work from home. Surely you can tell we don't have much office space here. So that's what it means. In. House. Got it?"

James quickly thanked Louis for his time and left the interconnected series of closets. In a way, James was glad for the experience. It motivated him to move out of his one horse town to a bigger city where he eventually found employment with a real in-house dev team.

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Krebs on SecurityEquifax Breach: Setting the Record Straight

Bloomberg published a story this week citing three unnamed sources who told the publication that Equifax experienced a breach earlier this year which predated the intrusion that the big-three credit bureau announced on Sept. 7. To be clear, this earlier breach at Equifax is not a new finding and has been a matter of public record for months. Furthermore, it was first reported on this Web site in May 2017.

In my initial Sept. 7 story about the Equifax breach affecting more than 140 million Americans, I noted that this was hardly the first time Equifax or another major credit bureau has experienced a breach impacting a significant number of Americans.

On May 17, KrebsOnSecurity reported that fraudsters exploited lax security at Equifax’s TALX payroll division, which provides online payroll, HR and tax services.

That story was about how Equifax’s TALX division let customers who use the firm’s payroll management services authenticate to the service with little more than a 4-digit personal identification number (PIN).

Identity thieves who specialize in perpetrating tax refund fraud figured out that they could reset the PINs of payroll managers at various companies just by answering some multiple-guess questions — known as “knowledge-based authentication” or KBA questions — such as previous addresses and dates that past home or car loans were granted.

On Tuesday, Sept. 18, Bloomberg ran a piece with reporting from no fewer than five journalists there who relied on information provided by three anonymous sources. Those sources reportedly spoke in broad terms about an earlier breach at Equifax, and told the publication that these two incidents were thought to have been perpetrated by the same group of hackers.

The Bloomberg story did not name TALX. Only post-publication did Bloomberg reporters update the piece to include a statement from Equifax saying the breach was unrelated to the hack announced on Sept. 7, and that it had to do with a security incident involving a payroll-related service during the 2016 tax year.

I have thus far seen zero evidence that these two incidents are related. Equifax has said the unauthorized access to customers’ employee tax records (we’ll call this “the March breach” from here on) happened between April 17, 2016 and March 29, 2017.

The criminals responsible for unauthorized activity in the March breach were participating in an insidious but common form of cybercrime known as tax refund fraud, which involves filing phony tax refund requests with the IRS and state tax authorities using the personal information from identity theft victims.

My original report on the March breach was based on public breach disclosures that Equifax was required by law to file with several state attorneys general.

Because the TALX incident exposed the tax and payroll records of its customers’ employees, the victim customers were in turn required to notify their employees as well. That story referenced public breach disclosures from five companies that used TALX, including defense contractor giant Northrop Grumman; staffing firm Allegis Group; Saint-Gobain Corp.; Erickson Living; and the University of Louisville.

When asked Tuesday about previous media coverage of the March breach, Equifax pointed National Public Radio (NPR) to coverage in KrebsOnSecurity.

One more thing before I move on to the analysis. For more information on why KBA is a woefully ineffective method of stopping fraudsters, see this story from 2013 about how some of the biggest vendors of these KBA questions were all hacked by criminals running an identity theft service online.

Or, check out these stories about how tax refund fraudsters used weak KBA questions to steal personal data on hundreds of thousands of taxpayers directly from the Internal Revenue Service‘s own Web site. It’s probably worth mentioning that Equifax provided those KBA questions as well.

ANALYSIS

Over the past two weeks, KrebsOnSecurity has received an unusually large number of inquiries from reporters at major publications who were seeking background interviews so that they could get up to speed on Equifax’s spotty security history (sadly, Bloomberg was not among them).

These informational interviews — in which I agree to provide context and am asked to speak mainly on background — are not unusual; I sometimes field two or three of these requests a month, and very often more when time permits. And for the most part I am always happy to help fellow journalists make sure they get the facts straight before publishing them.

But I do find it slightly disturbing that there appear to be so many reporters on the tech and security beats who apparently lack basic knowledge about what these companies do and their roles in perpetuating — not fighting — identity theft.

It seems to me that some of the world’s most influential publications have for too long given Equifax and the rest of the credit reporting industry a free pass — perhaps because of the complexities involved in succinctly explaining the issues to consumers. Indeed, I would argue the mainstream media has largely failed to hold these companies’ feet to the fire over a pattern of lax security and a complete disregard for securing the very sensitive consumer data that drives their core businesses.

To be sure, Equifax has dug themselves into a giant public relations hole, and they just keep right on digging. On Sept. 8, I published a story equating Equifax’s breach response to a dumpster fire, noting that it could hardly have been more haphazard and ill-conceived.

But I couldn’t have been more wrong. Since then, Equifax’s response to this incident has been even more astonishingly poor.

EQUIPHISH

On Tuesday, the official Equifax account on Twitter replied to a tweet requesting the Web address of the site that the company set up to give away its free one-year of credit monitoring service. That site is https://www.equifaxsecurity2017.com, but the company’s Twitter account told users to instead visit securityequifax2017[dot]com, which is currently blocked by multiple browsers as a phishing site.

equiphish

FREEZING UP

Under intense public pressure from federal lawmakers and regulators, Equifax said that for 30 days it would waive the fee it charges for placing a security freeze on one’s credit file (for more on what a security freeze entails and why you and your family should be freezing their files, please see The Equifax Breach: What You Should Know).

Unfortunately, the free freeze offer from Equifax doesn’t mean much if consumers can’t actually request one via the company’s freeze page; I have lost count of how many comments have been left here by readers over the past week complaining of being unable to load the site, let alone successfully obtain a freeze. Instead, consumers have been told to submit the requests and freeze fees in writing and to include copies of identity documents to validate the requests.

Sen. Elizabeth Warren (D-Mass) recently introduced a measure that would force the bureaus to eliminate the freeze fees and to streamline the entire process. To my mind, that bill could not get passed soon enough.

Understand that each credit bureau has a legal right to charge up to $20 in some states to freeze a credit file, and in many states they are allowed to charge additional fees if consumers later wish to lift or temporarily thaw a freeze. This is especially rich given that credit bureaus earn roughly $1 every time a potential creditor (or identity thief) inquires about your creditworthiness, according to Avivah Litan, a fraud analyst with Gartner Inc.

In light of this, it’s difficult to view these freeze fees as anything other than a bid to discourage consumers from filing them.

The Web sites where consumers can go to file freezes at the other major bureaus — including TransUnion and Experian — have hardly fared any better since Equifax announced the breach on Sept. 7. Currently, if you attempt to freeze your credit file at TransUnion, the company’s site is relentless in trying to steer you away from a freeze and toward the company’s free “credit lock” service.

That service, called TrueIdentity, claims to allow consumers to lock or unlock their credit files for free as often as they like with the touch of a button. But readers who take the bait probably won’t notice or read the terms of service for TrueIdentity, which has the consumer agree to a class action waiver, a mandatory arbitration clause, and something called ‘targeted marketing’ from TransUnion and their myriad partners.

The agreement also states TransUnion may share the data with other companies:

“If you indicated to us when you registered, placed an order or updated your account that you were interested in receiving information about products and services provided by TransUnion Interactive and its marketing partners, or if you opted for the free membership option, your name and email address may be shared with a third party in order to present these offers to you. These entities are only allowed to use shared information for the intended purpose only and will be monitored in accordance with our security and confidentiality policies. In the event you indicate that you want to receive offers from TransUnion Interactive and its marketing partners, your information may be used to serve relevant ads to you when you visit the site and to send you targeted offers. For the avoidance of doubt, you understand that in order to receive the free membership, you must agree to receive targeted offers.”

TransUnion then encourages consumers who are persuaded to use the “free” service to subscribe to “premium” services for a monthly fee with a perpetual auto-renewal.

In short, TransUnion’s credit lock service (and a similarly named service from Experian) doesn’t prevent potential creditors from accessing your files, and these dubious services allow the credit bureaus to keep selling your credit history to lenders (or identity thieves) as they see fit.

As I wrote in a Sept. 11 Q&A about the Equifax breach, I take strong exception to the credit bureaus’ increasing use of the term “credit lock” to divert people away from freezes. Their motives for saddling consumers with even more confusing terminology are suspect, and I would not count on a credit lock to take the place of a credit freeze, regardless of what these companies claim (consider the source).

Experian’s freeze Web site has performed little better since Sept. 7. Several readers pinged KrebsOnSecurity via email and Twitter to complain that while Experian’s freeze site repeatedly returned error messages stating that the freeze did not go through, these readers’ credit cards were nonetheless charged $15 freeze fees multiple times.

If the above facts are not enough to make your blood boil, consider that Equifax and other bureaus have been lobbying lawmakers in Congress to pass legislation that would dramatically limit the ability of consumers to sue credit bureaus for sloppy security, and cap damages in related class action lawsuits to $500,000.

If ever there was an industry that deserved obsolescence or at least more regulation, it is the credit bureaus. If either of those outcomes are to become reality, it is going to take much more attentive and relentless coverage on the part of the world’s top news publications. That’s because there’s a lot at stake here for an industry that lobbies heavily (and successfully) against any new laws that may restrict their businesses.

Here’s hoping the media can get up to speed quickly on this vitally important topic, and help lead the debate over legal and regulatory changes that are sorely needed.

,

TEDHurricanes, monsoons and the human rights of climate change: TEDWomen chats with Mary Robinson

Mary Robinson speaks at TEDWomen 2015 at the Monterey Conference Center. Photo: Marla Aufmuth/TED

Two years ago, former president of Ireland Mary Robinson graced the TEDWomen stage with a moving talk about why climate change is not only a threat to our environment, but also a threat to the human rights of many poor and marginalized people around the world.

Mary is an incredible person who inspires me greatly. Besides being the first woman president of Ireland, she also served as the UN High Commissioner for Human Rights from 1997 to 2002. She now leads a foundation devoted to climate justice. She received the Presidential Medal of Freedom from President Obama, is a member of the Elders, a former Chair of the Council of Women World Leaders and a member of the Club of Madrid.

“I came to [be concerned about] climate change not as a scientist or an environmental lawyer,” she told the TEDWomen crowd in California. “It was because of the impact on people, and the impact on their rights — their rights to food and safe water, health, education and shelter.”

She told stories of the people she met in her work with the United Nations and later on in her foundation work. When explaining the challenges they faced, she said they kept repeating the same pervasive sentence: “Oh, but things are so much worse now, things are so much worse.” She came to realize that they were talking about the same phenomenon — climate shocks and changes in the weather that were threatening their crops, their livelihood and their survival.

In the wake of Hurricanes Harvey and Irma in the United States, and extreme monsoons in South Asia, I reached out to Mary to get an update on her work and where things stand now in terms of climate justice and the global fight to curb climate change. Despite a busy week attending this week’s United Nations General Assembly and other events, she took the time to answer my questions via email.

Horrific hurricanes like Harvey, Irma and now Maria are bringing the issue of climate change to the doorsteps of a country that recently dropped out of the Paris Climate agreement. What would you say to Americans about climate change and the actions of their government in 2017?

Mary Robinson: In the past few weeks alone, we have seen the physical, social and economic devastation wrought on some American cities and vulnerable communities across the Caribbean by Hurricanes Harvey and Irma, and the death and destruction caused by monsoons across South Asia. The American people know from previous experience, such as Hurricane Katrina in 2005, that some people affected will be displaced from their homes forever. Many of these displaced people are drawn to cities, but the capacity to integrate these new arrivals in a manner consistent with their human rights and dignity is often woefully inadequate — reflecting an equally inadequate response from political leaders.

The profound injustice of climate change is that those who are most vulnerable in society, no matter the level of development of the country in question, will suffer most. People who are marginalised or poor, women, and indigenous communities are being disproportionately affected by climate impacts.

And yet, in the US the debate as to whether climate change is real or not continues in mainstream discourse. Throughout the world, baseless climate denial has largely disappeared into the fringes of public debate as the focus has shifted to how countries should act to avoid the potentially disastrous consequences of unchecked climate change. For many years, the US has positioned itself as a global leader in science and technology and yet in seeking to leave or renegotiate the Paris Agreement, the current administration is taking a giant leap backwards, both in terms of science-based policy making and in terms of international solidarity and cooperation.

However, while the national government is going backwards, we are seeing citizens and leaders across the country picking up the slack. I see many American people who remain determined to ensure the US plays its role in the fight against climate change. For Americans who are rightly concerned about the administration’s direction on climate change, I would say that there are still many reasons to be optimistic. The “We’re Still In” initiative offers a tangible demonstration of that desire on the part of concerned citizens to ensure that the US emerges as a leader on climate action, regardless of the approach of the current administration. States, cities, universities and businesses are committing to ambitious action to tackle climate change, to ensure clean and efficient energy services and uphold US commitments under the Paris Agreement.

As you pointed out in your TED Talk, the people who are suffering the most from climate change are those who don’t have the means to escape catastrophic events or rebuild after they have occurred. Can you talk a bit about efforts your organization and others are involved in to help those who are the most affected by climate change, but often are the least responsible for the human actions that have caused it?

As with many of the most severe storms to impact communities in recent years – including in the US with Katrina, Sandy and Ike – it is the poorest people who have suffered the worst impacts from Harvey and Irma. The people who the climate justice movement is for are the people who have the least capacity to protect themselves, their families, their homes and their incomes from the impacts of climate change, and indeed climate action policies that are not grounded in human rights. These are also the people who have the hardest time rebuilding their lives in the wake of these more frequent and intense disasters as they do not have adequate access to insurance, savings or other livelihood options necessary to provide resilience. In many cases, families lose everything.

If we then consider the devastation wrought by Irma in the Caribbean, where poverty rates are much higher than the US, we begin to understand the great injustice of climate change. People living around the world, in communities which have never seen the benefits of industrialization or even electrification, face the harshest impacts of climate change and have the most limited capacity to recover.

In seeking to advance climate justice, my foundation and other organizations which share our concerns, seek to ensure that the voices of these communities are heard and understood by those crafting the global and national response to the climate crisis to ensure that decisions are participatory, transparent and respond to the needs of the most vulnerable people in our communities. We must enable all people to realize their right to development and to benefit from the global transition to a sustainable, cleaner and more equitable future. Solutions to the climate crisis that are good for the planet but cause further suffering for people living in poverty must not be implemented.

What is the number one issue involving climate change that we should all be focused on right now as regards human rights and climate justice in the world?

There are many pressing issues which must be addressed to advance climate justice. For instance, over one billion people today live in energy poverty. The global community must ensure that appropriate financing and renewable technologies are available to allow all people to enjoy the benefits of electrification sustainably. Similarly, a compendium of evidence-based climate solutions published this summer highlighted that the most effective approach to reducing greenhouse gas emissions is through educating girls and providing for family planning*. Climate change impacts women differently to men and exacerbates existing inequalities. Empowering women and girls in the global response to climate change will result in a fairer world and better climate outcomes. This must begin by ensuring women are enabled to meaningfully participate in decision-making processes related to climate action throughout the world.

Given the recent storms and resulting devastation, one of the most pressing issues to be addressed regarding the rights of those most vulnerable to climate change is the need to ensure the necessary protections are in place for people displaced by worsening climate impacts. There can be no doubt that climate change is a driver of migration and migration owing to climate impacts will increase in the coming years. Increasingly severe and frequent catastrophic storms or slow onset events like recurrent drought, sea level rise or ocean acidification, will result in people’s livelihoods collapsing, forcing them to seek better futures elsewhere. The scale of potential future migration as a result of climate change must not be underestimated. In order to ensure that the global community is prepared to protect the wellbeing and dignity of people displaced by climate change, concrete steps must be taken now. It would be very important that the Global Compact on Migration and Refugees, currently being negotiated at the UN, recognizes the challenge of addressing displacement resulting from climate change.

In a speech earlier this month, you talked about some of the innovative ideas that are being broached around the world to address climate change and you said, “The existential threat of climate change confronts us with our global interdependence. It cannot be seen as anything other than a global problem, and each nation must play an appropriate part to tackle it.” What do you think is the most important thing the US must do to address the problem?

The US must continue to support international action on climate change. No country alone can protect its citizens from the impacts of climate change – it will only be through unprecedented international solidarity, backed up by financial and technological support, that some of the most vulnerable countries will be able to chart a sustainable development pathway. It is in the interests of the US to provide this support.

Without it, developing countries are faced with a choice between prohibitively expensive sustainable development and readily accessible fossil fuel based development. They will choose the latter, and who would blame them? They need to lift large numbers of their people out of poverty and provide essential services like health care, education and fresh water; without international support, they will have no choice but to use fossil fuels. This would result in even more intense Atlantic hurricanes, longer and more severe drought across the western US and the inundation of coastal cities from sea level rise. In order to protect American citizens, the US must play its role as a global citizen. Solidarity and interdependence are not new ideas, but in the current climate of rising nationalism, they are innovative and potentially transformative.

What are some of the innovative solutions that you are seeing around the world that we should know about? 

When we think about innovation, we usually focus on technology. However, most of the technologies we need to avert the climate crisis are already available to us. What is lacking is the political will to enact the necessary global transition to a safer and fairer future for all. Perhaps we should be more focused on innovation in terms of global governance.

For instance, in some countries like Wales and Hungary there is an office that represents the interests of future generations in national decision making. When viewed through an intergenerational lens, the urgent need to ensure sustainable development for all people and stabilize the climate becomes clear. Decisions taken today that undermine the wellbeing of future generations become inexcusable. Intergenerational equity can help to inform decision making at the international level as well, and provide a unifying focus for international negotiations. It is a universal principle that informs constitutions, international treaties, economies, religious beliefs, traditions and customs. Putting this principle into action and allowing it to inform how we negotiate and govern would be a very innovative change.

What can regular people do to fight climate change and work for environmental justice?

I believe the most important thing a person can do is to appreciate their role as a global citizen. Ultimately, the fight against climate change will not be won by a technological silver bullet or a mass recycling campaign, but rather by an appreciation among all people that we have to live sustainably with the Earth and with each other. We need empathy for those communities on the front lines of climate change, and for those seeking to realise their right to development in the midst of a changing climate, and this empathy must help to guide how we act, how we consume and how we vote.

Watch Mary’s TED Talk and visit her website to find out more about her work and how you can get involved.

I also want to mention that registration for TEDWomen 2017 is open, so if you haven’t registered yet, please click this link and apply today — space is limited and I don’t want you to miss out. This year, TEDWomen will be held November 1–3 in New Orleans. The theme is Bridges: We build them, we cross them, and sometimes we even burn them. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!

– Pat

* Hawken, P. (2017) Drawdown: The most comprehensive plan ever proposed to reverse global warming


TEDStanding for art and truth: A chat with Sethembile Msezane

Standing for four hours on a platform in the scorching sun, Sethembile Msezane embodied the bird spirit of Chapungu, raising and lowering her wings, as a statue of Cecil Rhodes was lifted by crane off its own platform behind her. The work is based in her research and scholarship, while the imagery of Chapungu first came to her in a dream. “By the time I came down, I was shaking and experiencing sunstroke. But I also felt a burst of life inside.”

Sethembile Msezane’s sculptures are not made of clay, granite or marble. She is the sculpture, as you will see in her talk — which you can watch right now before you read this Q&A. We’ll wait.

The fragility of the medium combined with the power of her messages make for performances that literally stop people in their tracks and elicit strong reactions. I ask Msezane about what goes into her productions and the practical realities of physically embodying her artwork that is a powerful and often uncomfortable commentary on the reality of being a black woman in post-apartheid South Africa.

That was a great and moving talk — congratulations! How do you feel?

Thank you! It's been a positively overwhelming experience. To have an idea, allow it to manifest through various experiments, and then have other people identify with it, even years after its inception, is encouraging.

The crowd at TED conferences is a fairly progressive one, but how would you describe the broader reception of your art, both on the site of your performance and off it?

Well, there are always different responses to my work. Sometimes people only scrape the surface of my practice by focusing on the female body, choosing to exoticise, sexualise or even moralise it. But then something interesting begins to happen when they start to 'see' the person inside the body in relation to symbols in the landscape and in dress. At times, their own insecurities become revealed to them. They start to comment on the society we live in and the effects of symbols such as statues living among us.

Putting your body out there as vessel for your messages is incredibly brave. Have you ever felt like you were in physical danger during any of your performances?

Yes, there's always an anxiety just being a regular woman walking down the street. So when my body is standing on a plinth in public spaces, this is not a foreign feeling. Sometimes I'm surrounded by crowds, and there's movement that could cause me to fall off. At times people touch my body, which of course is not welcome. This speaks to how people, particularly men, have been socialised to think they are entitled to women's bodies.

I remember one time, however, when I was more scared for a colleague and friend of mine who was filming my performance The Charter. A man was passing by and noticed the performance. He started spewing out all kinds of hatred in relation to my body and the symbolic gestures being performed in that space. His hatred grew and he started displaying his prejudice and homophobia by insulting my friend. He didn’t physically harm us, but he used his words as a weapon, and that cut deep.

An image from The Charter (2016).

Could you describe what goes into each performance? Conceptualisation? Writing? Research? Staking out the location? Help with pictures and video?

My process is never constant; various circumstances come into play in formulating the performance.

I guess in the beginning I’d get fixated on an idea and start doing more research about it…online, books, films, magazines, music etc. Concurrently, I begin to source materials and costumes to construct wearable sculptures in my studio. In between sourcing materials, I make site visits, interview people and write my observations to formulate a solid concept.

I think now I realise not all of it was based solely on research — some of it was intuitive or came about in my dreams. I'd try to connect with the figure I'd be embodying on the day of the performance. This happened at home in front of the mirror. This process would be carried out from the beginning of thinking about 'her' towards the very end on the day of the performance.

Which was the most difficult performance to enact?

I’d have to say it’s between Untitled (Youth Day) 2014 and Chapungu: The Day Rhodes Fell (2015). Untitled (Youth Day) 2014 was just over an hour, but the books stacked on my head were compressing my vertebrae, which really hurt, and I couldn’t take breaks in between.

Chapungu: The Day Rhodes Fell (2015), on the other hand, was longer, nearly 4 hours. Standing on 6-inch stilettos that long can't be healthy. My toes were blue; they didn't feel like my own. The plinth I was standing on was placed on a set of stairs, and people were standing around the plinth. The positioning was quite precarious.

It was scorching that day (I think it was 32 degrees Celsius), and a lot of my body was exposed. I kept my arms outstretched for about 10 minutes at a time and rested for about 5 minutes. I went between many states of consciousness as Chapungu; but as myself, Sethembile, I was deeply in pain, fatigued, dehydrated and more. Meditating, remembering why I was there and allowing the spirit of Chapungu to be present kept me going. By the time I came down, I was shaking and experiencing sunstroke. But I also felt a burst of life inside.

For Untitled (Heritage Day) in 2014, Sethembile created a character based on her own Zulu traditions, and posed silently in front of a statue of Louis Botha, creating a rich dialogue between South Africa’s colonial, apartheid-era history and her own. For Untitled (Youth Day) 2014, at right, she stood for just over an hour with books stacked on her head, her face masked.

Which performance has affected you the most?

That’s like asking which one of my children is my favourite haha. I can’t really say, they all have contributed to my thinking and where I am in my outlook and career right now. I’ve learned valuable lessons in each performance, because in essence they comment on the societies I’ve found myself in; these spaces and people can be complex. Ultimately, I learned more about being a woman in physical space (both public and private) but also within the spiritual realm, which is very present in my daily life.

What more can we look forward to from Sethembile?

I'm looking forward to the opening of Zeitz Museum of Contemporary Art Africa (MOCAA) this September, where select pieces of my work that are part of their collection will be showing. One of my favorite pieces, Signal Her Return I (2015–2016), a living sound installation with a sea of lit candles, an 18th-century bell and a long braid of hair, will also be featured. After that I'm headed to Finland for the ANTI Festival International Prize for Live Art award ceremony, where I'm one of four nominees.

That’s as much as I’m willing to reveal for now. Keep following, you won’t be disappointed …


Sociological ImagesWhat’s Trending? The Crime Drop

Over at Family Inequality, Phil Cohen has a list of demographic facts you should know cold. They include basic figures like the US population (326 million), and how many Americans have a BA or higher (30%). These got me thinking—if we want to have smarter conversations and fight fake news, it is also helpful to know which way things are moving. “What’s Trending?” is a post series at Sociological Images with quick looks at what’s up, what’s down, and what sociologists have to say about it.

The Crime Drop

You may have heard about a recent spike in the murder rate across major U.S. cities last year. It was a key talking point for the Trump campaign on policing policy, but it may also be leveling off. Social scientists can help put this bounce into context, because violent and property crimes in the U.S. have been going down for the past twenty years.

You can read more on the social sources of this drop in a feature post at The Society Pages. Neighborhood safety is a serious issue, but the data on crime rates doesn’t always support the drama.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramWhat the NSA Collects via 702

New York Times reporter Charlie Savage writes about some bad statistics we're all using:

Among surveillance legal policy specialists, it is common to cite a set of statistics from an October 2011 opinion by Judge John Bates, then of the FISA Court, about the volume of internet communications the National Security Agency was collecting under the FISA Amendments Act ("Section 702") warrantless surveillance program. In his opinion, declassified in August 2013, Judge Bates wrote that the NSA was collecting more than 250 million internet communications a year, of which 91 percent came from its Prism system (which collects stored e-mails from providers like Gmail) and 9 percent came from its upstream system (which collects transmitted messages from network operators like AT&T).

These numbers are wrong. This blog post will address, first, the widespread nature of this misunderstanding; second, how I came to FOIA certain documents trying to figure out whether the numbers really added up; third, what those documents show; and fourth, what I further learned in talking to an intelligence official. This is far too dense and weedy for a New York Times article, but should hopefully be of some interest to specialists.

Worth reading for the details.

Worse Than FailureCodeSOD: A Dumbain Specific Language

I’ve had to write a few domain-specific-languages in the past. As per Remy’s Law of Requirements Gathering, it’s been mostly because the users needed an Excel-like formula language. The danger of DSLs, of course, is that they’re often YAGNI in the extreme, or at least a sign that you don’t really understand your problem.

XML, coupled with schemas, is a tool for building data-focused DSLs. If you have some complex structure, you can convert each of its features into an XML attribute. For example, if you had a grammar that looked something like this:

The Source specification obeys the following syntax

source = ( Feature1+Feature2+... ":" ) ? steps

Feature1 = "local" | "global"

Feature2 ="real" | "virtual" | "ComponentType.all"

Feature3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

Feature4 = "first" | "last" | "DayAllocation.all"

If features are specified, the order of features as given above has strictly to be followed.

steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

oneOrMoreNameSteps = nameStep ( "." nameStep ) *

zeroOrMoreNameSteps = ( nameStep "." ) *

nameStep = "#" name

name is a string of characters from "A"-"Z", "a"-"z", "0"-"9", "-" and "_". No umlauts allowed, one character is minimum.

componentSteps is a list of valid values, see below.

Valid 'componentSteps' are:

- GlobalValue
- Product
- Product.Brand
- Product.Accommodation
- Product.Accommodation.SellingAccom
- Product.Accommodation.SellingAccom.Board
- Product.Accommodation.SellingAccom.Unit
- Product.Accommodation.SellingAccom.Unit.SellingUnit
- Product.OnewayFlight
- Product.OnewayFlight.BookingClass
- Product.ReturnFlight
- Product.ReturnFlight.BookingClass
- Product.ReturnFlight.Inbound
- Product.ReturnFlight.Outbound
- Product.Addon
- Product.Addon.Service
- Product.Addon.ServiceFeature

In addition to that all subsequent steps from the paths above are permitted, that is 'Board', 
'Accommodation.SellingAccom' or 'SellingAccom.Unit.SellingUnit'.
'Accommodation.Unit' in the contrary is not permitted, as here some intermediate steps are missing.

You could turn that grammar into an XML document by converting syntax elements to attributes and elements. You could do that, but Stella's predecessor did not do that. That, of course, would have been work, and they may have had to put some thought into how to relate their homebrew grammar to XSD rules, so instead they created an XML schema rule for SourceAttributeType that verifies that the data in the field is valid according to the grammar… using regular expressions. 1,310 characters of regular expressions.

<xs:simpleType>
    <xs:restriction base="xs:string">
            <xs:pattern value="(((Scope.)?(global|local|current)\+?)?((((ComponentType.)?
(real|virtual))|ComponentType.all)\+?)?((((Hierarchy.)?(self|ancestors|descendants))|Hierarchy.all)\+?)?
((((DayAllocation.)?(first|last))|DayAllocation.all)\+?)?:)?(#[A-Za-z0-9\-_]+(\.(#[A-Za-z0-9\-_]+))*|(#[A-Za-z0-
9\-_]+\.)*
(ThisComponent|GlobalValue|Product|Product\.Brand|Product\.Accommodation|Product\.Accommodation\.SellingAccom|Prod
uct\.Accommodation\.SellingAccom\.Board|Product\.Accommodation\.SellingAccom\.Unit|Product\.Accommodation\.Selling
Accom\.Unit\.SellingUnit|Product\.OnewayFlight|Product\.OnewayFlight\.BookingClass|Product\.ReturnFlight|Product\.
ReturnFlight\.BookingClass|Product\.ReturnFlight\.Inbound|Product\.ReturnFlight\.Outbound|Product\.Addon|Product\.
Addon\.Service|Product\.Addon\.ServiceFeature|Brand|Accommodation|Accommodation\.SellingAccom|Accommodation\.Selli
ngAccom\.Board|Accommodation\.SellingAccom\.Unit|Accommodation\.SellingAccom\.Unit\.SellingUnit|OnewayFlight|Onewa
yFlight\.BookingClass|ReturnFlight|ReturnFlight\.BookingClass|ReturnFlight\.Inbound|ReturnFlight\.Outbound|Addon|A
ddon\.Service|Addon\.ServiceFeature|SellingAccom|SellingAccom\.Board|SellingAccom\.Unit|SellingAccom\.Unit\.Sellin
gUnit|BookingClass|Inbound|Outbound|Service|ServiceFeature|Board|Unit|Unit\.SellingUnit|SellingUnit))"/>
    </xs:restriction>
</xs:simpleType>

There’s a bug in that regex that Stella needed to fix. As she put it: “Every time you evaluate it a few little kitties die because you shouldn’t use kitties to polish your car. I’m so, so sorry, little kitties…”
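If you ever have to modify a pattern like this, one sanity-preserving move is to exercise it outside the schema first. Here is a minimal Node.js sketch along those lines; the shortened pattern and the sample values are hypothetical stand-ins, and since XSD patterns are implicitly anchored and use a slightly different regex dialect than JavaScript, wrapping the value in ^(?:...)$ only approximates the schema's behaviour.

// smoke-test.js — exercise an XSD pattern value outside the schema (run with Node.js)
// Drastically shortened stand-in; paste the real 1,310-character pattern value here.
const xsdPattern = "(((Scope.)?(global|local|current)\\+?)?:)?#[A-Za-z0-9\\-_]+(\\.#[A-Za-z0-9\\-_]+)*";

// XSD patterns must match the whole value, so anchor the JavaScript regex explicitly.
const re = new RegExp("^(?:" + xsdPattern + ")$");

// Hypothetical sample values; replace them with strings from real documents.
const samples = ["global:#Brand", "#Brand.#SellingUnit", "Accommodation..Unit"];
for (const value of samples) {
  console.log(value, "=>", re.test(value) ? "matches" : "does not match");
}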

The full, unexcerpted code is below, so… at least it has documentation. In two languages!

<xs:simpleType name="SourceAttributeType">
                <xs:annotation>
                        <xs:documentation xml:lang="de">
                Die Source Angabe folgt folgender Syntax

                        source = ( Eigenschaft1+Eigenschaft2+... ":" ) ? steps

                        Eigenschaft1 = "local" | "global"

                        Eigenschaft2 ="real" | "virtual" | "ComponentType.all"

                        Eigenschaft3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

                        Eigenschaft4 = "first" | "last" | "DayAllocation.all"

                        Falls Eigenschaften angegeben werden muss zwingend die oben angegebene Reihenfolge der Eigenschaften eingehalten werden.

                        steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

                        oneOrMoreNameSteps = nameStep ( "." nameStep ) *

                        zeroOrMoreNameSteps = ( nameStep "." ) *

                        nameStep = "#" name

                        name ist eine Folge von Zeichen aus der Menge "A"-"Z", "a"-"z", "0"-"9", "-" und "_". Keine Umlaute. Mindestens ein Zeichen

                        componentSteps ist eine Liste gültiger Werte, siehe im folgenden

                Gültige 'componentSteps' sind zunächst:

                        - GlobalValue
                        - Product
                        - Product.Brand
                        - Product.Accommodation
                        - Product.Accommodation.SellingAccom
                        - Product.Accommodation.SellingAccom.Board
                        - Product.Accommodation.SellingAccom.Unit
                        - Product.Accommodation.SellingAccom.Unit.SellingUnit
                        - Product.OnewayFlight
                        - Product.OnewayFlight.BookingClass
                        - Product.ReturnFlight
                        - Product.ReturnFlight.BookingClass
                        - Product.ReturnFlight.Inbound
                        - Product.ReturnFlight.Outbound
                        - Product.Addon
                        - Product.Addon.Service
                        - Product.Addon.ServiceFeature

                Desweiteren sind alle Unterschrittfolgen aus obigen Pfaden erlaubt, also 'Board', 'Accommodation.SellingAccom' oder 'SellingAccom.Unit.SellingUnit'.
                'Accommodation.Unit' hingegen ist nicht erlaubt, da in diesem Fall einige Zwischenschritte fehlen.

                                </xs:documentation>
                        <xs:documentation xml:lang="en">
                                The Source specification obeys the following syntax

                                source = ( Feature1+Feature2+... ":" ) ? steps

                                Feature1 = "local" | "global"

                                Feature2 ="real" | "virtual" | "ComponentType.all"

                                Feature3 ="self" | "ancestors" | "descendants" | "Hierarchy.all"

                                Feature4 = "first" | "last" | "DayAllocation.all"

                                If features are specified, the order of features as given above has strictly to be followed.

                                steps = oneOrMoreNameSteps | zeroOrMoreNameSteps | componentSteps

                                oneOrMoreNameSteps = nameStep ( "." nameStep ) *

                                zeroOrMoreNameSteps = ( nameStep "." ) *

                                nameStep = "#" name

                                name is a string of characters from "A"-"Z", "a"-"z", "0"-"9", "-" and "_". No umlauts allowed, one character is minimum.

                                componentSteps is a list of valid values, see below.

                                Valid 'componentSteps' are:

                                - GlobalValue
                                - Product
                                - Product.Brand
                                - Product.Accommodation
                                - Product.Accommodation.SellingAccom
                                - Product.Accommodation.SellingAccom.Board
                                - Product.Accommodation.SellingAccom.Unit
                                - Product.Accommodation.SellingAccom.Unit.SellingUnit
                                - Product.OnewayFlight
                                - Product.OnewayFlight.BookingClass
                                - Product.ReturnFlight
                                - Product.ReturnFlight.BookingClass
                                - Product.ReturnFlight.Inbound
                                - Product.ReturnFlight.Outbound
                                - Product.Addon
                                - Product.Addon.Service
                                - Product.Addon.ServiceFeature

                                In addition to that all subsequent steps from the paths above are permitted, that is 'Board', 'Accommodation.SellingAccom' or 'SellingAccom.Unit.SellingUnit'.
                                'Accommodation.Unit' in the contrary is not permitted, as here some intermediate steps are missing.

                        </xs:documentation>
                </xs:annotation>
                <xs:union>
                        <xs:simpleType>
                                <xs:restriction base="xs:string">
                                        <xs:pattern value="(((Scope.)?(global|local|current)\+?)?((((ComponentType.)?(real|virtual))|ComponentType.all)\+?)?((((Hierarchy.)?(self|ancestors|descendants))|Hierarchy.all)\+?)?((((DayAllocation.)?(first|last))|DayAllocation.all)\+?)?:)?(#[A-Za-z0-9\-_]+(\.(#[A-Za-z0-9\-_]+))*|(#[A-Za-z0-9\-_]+\.)*(ThisComponent|GlobalValue|Product|Product\.Brand|Product\.Accommodation|Product\.Accommodation\.SellingAccom|Product\.Accommodation\.SellingAccom\.Board|Product\.Accommodation\.SellingAccom\.Unit|Product\.Accommodation\.SellingAccom\.Unit\.SellingUnit|Product\.OnewayFlight|Product\.OnewayFlight\.BookingClass|Product\.ReturnFlight|Product\.ReturnFlight\.BookingClass|Product\.ReturnFlight\.Inbound|Product\.ReturnFlight\.Outbound|Product\.Addon|Product\.Addon\.Service|Product\.Addon\.ServiceFeature|Brand|Accommodation|Accommodation\.SellingAccom|Accommodation\.SellingAccom\.Board|Accommodation\.SellingAccom\.Unit|Accommodation\.SellingAccom\.Unit\.SellingUnit|OnewayFlight|OnewayFlight\.BookingClass|ReturnFlight|ReturnFlight\.BookingClass|ReturnFlight\.Inbound|ReturnFlight\.Outbound|Addon|Addon\.Service|Addon\.ServiceFeature|SellingAccom|SellingAccom\.Board|SellingAccom\.Unit|SellingAccom\.Unit\.SellingUnit|BookingClass|Inbound|Outbound|Service|ServiceFeature|Board|Unit|Unit\.SellingUnit|SellingUnit))"/>
                                </xs:restriction>
                        </xs:simpleType>
                </xs:union>
</xs:simpleType>
[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

,

CryptogramApple's FaceID

This is a good interview with Apple's SVP of Software Engineering about FaceID.

Honestly, I don't know what to think. I am confident that Apple is not collecting a photo database, but not optimistic that it can't be hacked with fake faces. I dislike the fact that the police can point the phone at someone and have it automatically unlock. So this is important:

I also quizzed Federighi about the exact way you "quick disabled" Face ID in tricky scenarios -- like being stopped by police, or being asked by a thief to hand over your device.

"On older phones the sequence was to click 5 times [on the power button], but on newer phones like iPhone 8 and iPhone X, if you grip the side buttons on either side and hold them a little while -- we'll take you to the power down [screen]. But that also has the effect of disabling Face ID," says Federighi. "So, if you were in a case where the thief was asking to hand over your phone -- you can just reach into your pocket, squeeze it, and it will disable Face ID. It will do the same thing on iPhone 8 to disable Touch ID."

That squeeze can be of either volume button plus the power button. This, in my opinion, is an even better solution than the "5 clicks" because it's less obtrusive. When you do this, it defaults back to your passcode.

More:

It's worth noting a few additional details here:

  • If you haven't used Face ID in 48 hours, or if you've just rebooted, it will ask for a passcode.

  • If there are 5 failed attempts to Face ID, it will default back to passcode. (Federighi has confirmed that this is what happened in the demo onstage when he was asked for a passcode -- it tried to read the people setting the phones up on the podium.)

  • Developers do not have access to raw sensor data from the Face ID array. Instead, they're given a depth map they can use for applications like the Snap face filters shown onstage. This can also be used in ARKit applications.

  • You'll also get a passcode request if you haven't unlocked the phone using a passcode or at all in 6.5 days and if Face ID hasn't unlocked it in 4 hours.

Also be prepared for your phone to immediately lock every time your sleep/wake button is pressed or it goes to sleep on its own. This is just like Touch ID.

Federighi also noted on our call that Apple would be releasing a security white paper on Face ID closer to the release of the iPhone X. So if you're a researcher or security wonk looking for more, he says it will have "extreme levels of detail" about the security of the system.

Here's more about fooling it with fake faces:

Facial recognition has long been notoriously easy to defeat. In 2009, for instance, security researchers showed that they could fool face-based login systems for a variety of laptops with nothing more than a printed photo of the laptop's owner held in front of its camera. In 2015, Popular Science writer Dan Moren beat an Alibaba facial recognition system just by using a video that included himself blinking.

Hacking FaceID, though, won't be nearly that simple. The new iPhone uses an infrared system Apple calls TrueDepth to project a grid of 30,000 invisible light dots onto the user's face. An infrared camera then captures the distortion of that grid as the user rotates his or her head to map the face's 3-D shape­ -- a trick similar to the kind now used to capture actors' faces to morph them into animated and digitally enhanced characters.

It'll be harder, but I have no doubt that it will be done.

More speculation.

I am not planning on enabling it just yet.

Worse Than FailurePoor Shoe


"So there's this developer who is the end-all, be-all try-hard of the year. We call him Shoe. He's the kind of over-engineering idiot that should never be allowed near code. And, to boot, he's super controlling."

Sometimes, you'll be talking to a friend, or reading a submission, and they'll launch into a story of some crappy thing that happened to them. You expect to sympathize. You expect to agree, to tell them how much the other guy sucks. But as the tale unfolds, something starts to feel amiss.

They start telling you about the guy's stand-up desk, how it makes him such a loser, such a nerd. And you laugh nervously, recalling the article you read just the other day about the health benefits of stand-up desks. But sure, they're pretty nerdy. Why not?

"But then, get this. So we gave Shoe the task to minify a bunch of JavaScript files, right?"

You start to feel relieved. Surely this is more fertile ground. There's a ton of bad ways to minify and concatenate files on the server-side, to save bandwidth on the way out. Is this a premature optimization story? A story of an idiot writing code that just doesn't work? An over-engineered monstrosity?

"So he fires up gulp.js and gets to work."

Probably over-engineered. Gulp.js lets you write arbitrary JavaScript to do your processing. It has the advantage of being the same language as the code being minified, so you don't have to switch contexts when reading it, but the disadvantage of being JavaScript and thus impossible to read.
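For what it's worth, the conventional gulp answer to "concatenate and minify some files" is only a few lines. A minimal gulpfile sketch, assuming the commonly used gulp-concat and gulp-uglify plugins are installed, might look like this; keep it in mind as the story continues.

// gulpfile.js — concatenate and minify everything under javascripts/
// Assumes gulp, gulp-concat and gulp-uglify are installed as devDependencies.
const gulp = require('gulp');
const concat = require('gulp-concat');
const uglify = require('gulp-uglify');

gulp.task('scripts', function () {
  return gulp.src('javascripts/**/*.js') // every .js file under javascripts/
    .pipe(concat('main.js'))             // glue them into one file
    .pipe(uglify())                      // minify the result
    .pipe(gulp.dest('dist'));            // write dist/main.js
});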

"He asks how to concat JavaScript, and the room tells him the right answer: find javascripts/ -name '*.js' -exec cat {} \; > main.js"

Wait, what? You blink. Surely that's not how Gulp.js is meant to work. Just piping out to shell commands? But you've never used it. Maybe that's the right answer; you don't know. So you nod along, making a sympathetic noise.

"Of course, this moron can't just take the advice. Shoe has to understand how it works. So he starts googling on the Internet, and when he doesn't find a better answer, he starts writing a shell script he can commit to the repo for his 'jay es minifications.'"

That nagging feeling is growing stronger. But maybe the punchline is good. There's gotta be a payoff here, right?

"This guy, right? Get this: he discovers that most people install gulp via npm.js. So he starts shrieking, 'This is a dependency of mah script!' and adds node.js and npm installation to the shell script!"

Stronger and stronger the feeling grows, refusing to be shut out. You swallow nervously, looking for an excuse to flee the conversation.

"We told him, just put it in the damn readme and move on! Don't install anything on anyone else's machines! But he doesn't like this solution, either, so he finally just echoes out in the shell script, requires npm. Can you believe it? What a n00b!"

That's it? That's the punchline? That's why your friend has worked himself into a lather, foaming and frothing at the mouth? Try as you might to justify it, the facts are inescapable: your friend is TRWTF.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

,

Sociological ImagesWhen Bros Hug

In February, CBS Sunday Morning aired a short news segment on the bro hug phenomenon: a supposedly new way heterosexual (white) men (i.e., bros) greet each other. According to this news piece, the advent of the bro hug can be attributed to decreased homophobia and is a sign of social progress.

I’m not so sure.

To begin, bro-ness isn't really about any given individual, but invokes a set of cultural norms, statuses, and meanings. A stereotypical bro is a white, middle-class, heterosexual male, especially one who frequents strongly masculinized places like fraternities, business schools, and sporting events. (The first part of the video, in fact, focused on fraternities and professional sports.) The bro, then, is a particular kind of guy, one who frequents traditionally male spaces with a history of homophobia and misogyny and is invested in maleness and masculinity.

The bro hug reflects this investment in masculinity and, in particular, the masculine performance of heterosexuality. To successfully complete a bro hug, the two men clasp their right hands and firmly pull their bodies towards each other until they are or appear to be touching, whilst their left hands swing around to forcefully pat each other on the back. Men's hips and chests never make full contact. Instead, the clasped hands pull in, but also act as a buffer between the men's upper bodies, while the legs remain firmly rooted in place, maintaining the hips at a safe distance. A bro hug, in effect, isn't about physical closeness between men, but about limiting bodily contact.

Bro hugging, moreover, is specifically a way of performing solidarity with heterosexual men. In the CBS program, the bros explain that a man would not bro hug a woman since a bro hug is, by its forcefulness, designed to be masculinity-affirming. Similarly, a bro hug is not intended for gay men, lesbians, or queer people. The bro hug performs and reinforces bro identity within an exclusively bro domain. For bros, by bros. As such, the bro hug does little to signal a decrease in homophobia. Instead, it affirms men's identities as "real" men and their difference from both women and non-heterosexual men.

In this way, the bro hug functions similarly to the co-masturbation and same-sex sexual practices of heterosexually identified white men, documented by the sociologist Jane Ward in her book, Not Gay. Ward argues that when straight white men have sex with other straight white men they are not necessarily blurring the boundaries between homo- and heterosexuality. Instead, they are shifting the line separating what is considered normal from what is considered queer. Touching another man's anus during a fraternity hazing ritual is normal (i.e., straight), while touching another man's anus in gay porn is queer. In other words, straight white men can have sex with each other because it is not "real" gay sex.

Similarly, within the context of a bro hug, straight white men can now bro hug each other because they are heterosexual. Bro hugging will not diminish either man's heterosexual capital. In fact, it might increase it. When two bros hug, they signal to others the unshakable strength of, and comfort in, their heterosexuality. Even though they are touching other men in public, albeit minimally, the act itself reinforces their heterosexuality and places it beyond reproach.

Hubert Izienicki, PhD, is a professor of sociology at Purdue University Northwest. 

(View original at https://thesocietypages.org/socimages)

CryptogramBluetooth Vulnerabilities

A bunch of Bluetooth vulnerabilities are being reported, some pretty nasty.

BlueBorne concerns us because of the medium by which it operates. Unlike the majority of attacks today, which rely on the internet, a BlueBorne attack spreads through the air. This works similarly to the two less extensive vulnerabilities discovered recently in a Broadcom Wi-Fi chip by Project Zero and Exodus. The vulnerabilities found in Wi-Fi chips affect only the peripherals of the device, and require another step to take control of the device. With BlueBorne, attackers can gain full control right from the start. Moreover, Bluetooth offers a wider attacker surface than WiFi, almost entirely unexplored by the research community and hence contains far more vulnerabilities.

Airborne attacks, unfortunately, provide a number of opportunities for the attacker. First, spreading through the air renders the attack much more contagious, and allows it to spread with minimum effort. Second, it allows the attack to bypass current security measures and remain undetected, as traditional methods do not protect from airborne threats. Airborne attacks can also allow hackers to penetrate secure internal networks which are "air gapped," meaning they are disconnected from any other network for protection. This can endanger industrial systems, government agencies, and critical infrastructure.

Finally, unlike traditional malware or attacks, the user does not have to click on a link or download a questionable file. No action by the user is necessary to enable the attack.

Fully patched Windows and iOS systems are protected; Linux coming soon.

Worse Than FailureCodeSOD: Mutex.js

Just last week, I was teaching a group of back-end developers how to use Angular to develop front ends. One question that came up, which did surprise me a bit, was how to deal with race conditions and concurrency in JavaScript.

I’m glad they asked, because it’s a good question that never occurred to me. The JavaScript runtime, of course, is single-threaded. You might use Web Workers to get multiple threads, but they use an Actor model, so there’s no shared state, and thus no need for any sort of locking.
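To illustrate that message-passing model, here is a minimal Web Worker sketch (the file names are hypothetical); the worker only ever receives a structured-clone copy of the data, never a shared reference, which is why no locking is needed:

// main.js — spin up a worker and coordinate purely through messages
const worker = new Worker('worker.js');
worker.onmessage = (event) => console.log('sum from worker:', event.data);
worker.postMessage({ numbers: [1, 2, 3, 4] }); // the object is copied, not shared

// worker.js — has no access to the page's variables or DOM; its state stays local
self.onmessage = (event) => {
  const sum = event.data.numbers.reduce((total, n) => total + n, 0);
  self.postMessage(sum);
};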

Chris R’s team did have a need for locking. Specifically, their .NET backend needed to run a long-ish bulk operation against their SqlServer. It would be triggered by an HTTP request from the client-side, AJAX-style, but only one user should be able to run it at a time.

Someone, for some reason, decided that they would implement this lock in front-end JavaScript, since that's where the AJAX calls were coming from.

var myMutex = true; //global (as in page wide, global) variable
function onClickHandler(element) {
    if (myMutex == true) {
        myMutex = false;
        // snip...
        if ($(element).hasClass("addButton") == true) {
            $(element).removeClass("addButton").addClass("removeButton");
            // snip...
            $.get(url).done(function (r) {
                // snip... this code is almost identical to the branch below
                setTimeout("myMutex = true;", 100);
            });
        } else {
            if ($(element).hasClass("removeButton") == true) {
                $(element).removeClass("removeButton").addClass("addButton");
                // snip...
                $.get(url).done(function (r) {
                    // snip... this code is almost identical to the branch above
                    setTimeout("myMutex = true;", 100);
                });
            }
        }
    }
}

You may be shocked to learn that this solution didn't work, and the developer responsible never actually tested it with multiple users. Obviously, a client-side variable isn't going to work as a back-end lock. Honestly, I'm not certain that's the worst thing about this code.

First, they reinvented the mutex badly. They seem to be using CSS classes to hold application state. They have (in the snipped code) duplicate branches of code that vary only by a handful of flags. They aren't handling errors on the request, which, when this code started failing, made it that much harder to figure out why.

But it’s the setTimeout("myMutex = true;", 100); that really gets me. Why? Why the 100ms lag? What purpose does that serve?

Chris threw this code away and put a mutex in the backend service.
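For comparison, even the client-side half of this never needed a homemade mutex. A minimal jQuery sketch (the endpoint URL is hypothetical) that simply disables the button while the request is in flight, and actually handles failures, might look like this:

// Guards against double-clicks in this tab only; the real lock belongs in the
// backend service, which is where Chris ultimately put it.
function onClickHandler(element) {
  var $button = $(element);
  if ($button.prop('disabled')) { return; }

  $button.prop('disabled', true);
  $.get('/api/bulk-operation')            // hypothetical endpoint
    .done(function () { $button.toggleClass('addButton removeButton'); })
    .fail(function () { alert('The operation failed; please try again.'); })
    .always(function () { $button.prop('disabled', false); });
}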

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

,

Planet Linux AustraliaOpenSTEM: Those Dirty Peasants!

It is fairly well known that many Europeans in the 17th, 18th and early 19th centuries did not follow the same routines of hygiene as we do today. There are anecdotal and historical accounts of people being dirty, smelly and generally unhealthy. This was particularly true of the poorer sections of society. The epithet “those […]

,

TED5 reasons to convince your boss to send you to TEDWomen this year

Inspiration, challenge, community — when we listen to great ideas together, great things can happen. Photo: Stacie McChesney / TED

Every year at TEDWomen, we gather to talk about issues that matter, to learn and bond and get energized. This year, we will be reconvening on November 1–3 in New Orleans — and we would love for you, and your amazing perspective and ideas, to join us and become part of this diverse, welcoming group that’s growing every year.

Join us at TEDWomen 2017 >>

However, there’s a challenge we’re hearing from some of you — especially those who’d like to attend in a professional capacity. And it’s this: It’s hard to explain to your boss how this conference can contribute to your professional success and development.

What we know from past attendees is, TEDWomen is an extraordinary professional development event — sending people back to work refreshed, connected and full of ideas. We’d love to encourage more people to attend with professional growth in mind. So, if you’re interested in attending TEDWomen, here are some talking points to support you when you ask for your share of the staff-development budget:

1. At TEDWomen, you’ll learn tools to craft better messages, to listen and connect more deeply, to problem-solve and spark new ideas. What you hear onstage — and from fellow attendees — will spark new thinking that you can bring back to your team. (Many TEDsters, in fact, schedule a team meeting for the week after TED to download what they learned.) As one attendee wrote: “Amazing and inspiring overall. I’m leaving a better person because of it.”

Join an audience of curious and enthusiastic lifelong learners and doers. Photo: Marla Aufmuth / TED

2. TEDWomen is where some of the boldest conversations are happening — which can help you kickstart the conversations your organization needs to have. You’ll hear about new markets and new power structures, learn how people are engaging with diversity internally and externally, and get new ideas for leveraging technology. Because you never know where your company’s next great idea may come from. As one attendee told us: “I am a VP at a Fortune 500 company and this conference was life-changing for me. There are so many execs who have the experience, money and resources to help drive the causes that were discussed.”

3. The TEDWomen community is a powerful network, offering connections across many fields and in many countries. VC Chris Fralic once described the benefit of attending TED in four words: “permission to follow up.” TEDWomen is not a place for high-pitched networking — it’s designed to be a place to connect over conversations that matter, to plant seeds for collaborations and real relationships. As one attendee said: “I connected with so many people with whom I am able to help grow their work and they are going to work with me to grow mine. I think it is terrific that TED provides such meaningful resources for attendees to connect and converse.”

Well, we make no promises that you too will get a selfie with Sandi Toksvig, left, host of the Great British Bake Off, but yes, connections like this happen at TEDWomen all the time. The audience and speakers are all part of the same amazing community. Photo: Stacie McChesney / TED

4. It's just a great conference — offering TED's legendary high quality, brilliant content and attention to detail at every turn, at a more approachable price. Attendees tell us things like: "Single best and most diverse event that I've been to" and "It was a truly immersive, brilliant experience that left me feeling mentally refreshed and inspired. This was my first TED, and I can see why people get addicted to coming back year upon year."

5. You don’t have to wait to be invited. In fact, consider this blog post your invitation to TEDWomen. We truly want to diversify and grow the audience for this conference, to increase the network effect that happens when great people get together. Come join us for what one attendee calls “a truly transformative conference and experience. TED has become a very important part of my CEO/executive life in feeding my soul!”

Apply to attend TEDWomen 2017 — we can’t wait to meet you!

We hope to see you at TEDWomen, where our awesome audience is as vital to the magic as any speaker on stage. Photo: Marla Aufmuth / TED


Planet Linux AustraliaDave Hall: Trying Drupal

While preparing for my DrupalCamp Belgium keynote presentation I looked at how easy it is to get started with various CMS platforms. For my talk I used Contentful, a hosted content-as-a-service CMS platform, and contrasted that with the "Try Drupal" experience. Below is the walkthrough of both.

Let's start with Contentful. I start off by visiting their website.

Contentful homepage

In the top right corner is a blue button encouraging me to "try for free". I hit the link and I'm presented with a sign up form. I can even use Google or GitHub for authentication if I want.

Contentful signup form

While my example site is being installed I am presented with an overview of what I can do once it is finished. It takes around 30 seconds for the site to be installed.

Contentful installer wait

My site is installed and I'm given some guidance about what to do next. There is even an onboarding tour in the bottom right corner that is waving at me.

Contentful dashboard

Overall this took around a minute and required very little thought. I never once found myself thinking, "come on, hurry up."

Now let's see what it is like to try Drupal. I land on d.o. I see a big prominent "Try Drupal" button, so I click that.

Drupal homepage

I am presented with 3 options. I am not sure why I'm being presented options to "Build on Drupal 8 for Free" or to "Get Started Risk-Free", I just want to try Drupal, so I go with Pantheon.

Try Drupal providers

Like with Contentful, I'm asked to create an account. Again I have the option of using Google for the sign up or completing a form. This form has more fields than Contentful's.

Pantheon signup page

I've created my account and I am expecting to be dropped into a demo Drupal site. Instead I am presented with a dashboard. The most prominent call to action is importing a site. I decide to create a new site.

Pantheon dashboard

I have to now think of a name for my site. This is already feeling like a lot of work just to try Drupal. If I was a busy manager I would have probably given up by this point.

Pantheon create site form

When I submit the form I must surely be going to see a Drupal site. No, sorry. I am given the choice of installing WordPress, yes WordPress, Drupal 8 or Drupal 7. Despite being very confused I go with Drupal 8.

Pantheon choose application page

Now my site is deploying. While this happens there are a bunch of items that update above the progress bar. They're all a bit nerdy, but at least I know something is happening. Why is my only option to visit my dashboard again? I want to try Drupal.

Pantheon site installer page

I land on the dashboard. Now I'm really confused. This all looks pretty geeky. I want to try Drupal not deal with code, connection modes and the like. If I stick around I might eventually click "Visit Development site", which doesn't really feel like trying Drupal.

Pantheon site dashboard

Now I'm asked to select a language. OK so Drupal supports multiple languages, that nice. Let's select English so I can finally get to try Drupal.

Drupal installer, language selection

Next I need to choose an installation profile. What is an installation profile? Which one is best for me?

Drupal installer, choose installation profile

Now I need to create an account. About 10 minutes ago I already created an account. Why do I need to create another one? I also named my site earlier in the process.

Drupal installer, configuration form part 1
Drupal installer, configuration form part 2

Finally I am dropped into a Drupal 8 site. There is nothing to guide me on what to do next.

Drupal site homepage

I am left with a sense that setting up Contentful is super easy and Drupal is a lot of work. Most people wanting to try Drupal would have abandoned somewhere along the way. I would love to see the conversion stats for the Try Drupal service. It must be minuscule.

It is worth noting that Pantheon has the best user experience of the 3 companies. The process with 1&1 just dumps me at a hosting sign-up page. How does that let me try Drupal?

Acquia drops you onto a page where you select your role; then you're presented with some marketing stuff and a form to request a demo. That is, unless you're running an ad blocker, in which case selecting your role gets you an Ajax error.

The Try Drupal program generates revenue for the Drupal Association. This money helps fund development of the project. I'm well aware that the DA needs money. At the same time I wonder if it is worth it. For many people this is the first experience they have using Drupal.

The previous attempt to have simplytest.me added to the try Drupal page ultimately failed due to the financial implications. While this is disappointing I don't think simplytest.me is necessarily the answer either.

There need to be some minimum standards for the Try Drupal page. One of the key items is the number of clicks to get from d.o to a working demo site. Without this, the "Try Drupal" page will drive people away from the project, which isn't the intention.

If you're at DrupalCon Vienna and want to discuss this and other ways to improve the marketing of Drupal, please attend the marketing sprints.


CryptogramFriday Squid Blogging: Another Giant Squid Caught off the Coast of Kerry

The Flannery family have caught four giant squid, two this year.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

,

Sociological ImagesResearch Finds Obesity is in the Eye of the Beholder

In an era of body positivity, more people are noting the way American culture stigmatizes obesity and discriminates by weight. One challenge for studying this inequality is that a common measure for obesity—Body Mass Index (BMI), a ratio of weight to height squared—has been criticized for ignoring important variation in healthy bodies. Plus, the basis for weight discrimination is what other people see as "too fat," and that's a standard with a lot of variation.

Recent research in Sociological Science from Vida Maralani and Douglas McKee gives us a picture of how the relationship between obesity and inequality changes with social context. Using data from the National Longitudinal Surveys of Youth (NLSY), Maralani and McKee measure BMI in two cohorts, one in 1981 and one in 2003. They then look at social outcomes seven years later, including wages, the probability of a person being married, and total family income.

The figure below shows their findings for BMI and 2010 wages for each group in the study. The dotted lines show the same relationships from 1988 for comparison.

For White and Black men, wages actually go up as their BMI increases from the "Underweight" to "Normal" ranges, then level off and slowly decline as they cross into the "Obese" range. This pattern is fairly similar to 1988, but check out the "White Women" graph in the lower left quadrant. In 1988, the authors find a sharp "obesity penalty" in which women over a BMI of 30 reported a steady decline in wages. By 2010, this had largely leveled off, but wage inequality didn't go away. Instead, that spike near the beginning of the graph suggests people perceived as skinny started earning more. The authors write:

The results suggest that perceptions of body size may have changed across cohorts differently by race and gender in ways that are consistent with a normalizing of corpulence for black men and women, a reinforcement of thin beauty ideals for white women, and a status quo of a midrange body size that is neither too thin nor too large for white men (pgs. 305-306).

This research brings back an important lesson about what sociologists mean when they say something is “socially constructed”—patterns in inequality can change and adapt over time as people change the way they interpret the world around them.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramAnother iPhone Change to Frustrate the Police

I recently wrote about the new ability to disable the Touch ID login on iPhones. This is important because of a weirdness in current US law that protects people's passcodes from forced disclosure in ways it does not protect actions: being forced to place a thumb on a fingerprint reader.

There's another, more significant, change: iOS now requires a passcode before the phone will establish trust with another device.

In the current system, when you connect your phone to a computer, you're prompted with the question "Trust this computer?" and you can click yes or no. Now you also have to enter your passcode. That means that if the police have an unlocked phone, they can scroll through it looking for things, but they can't download all of its contents onto another computer without also knowing the passcode.

More details:

This might be particularly consequential during border searches. The "border search" exception, which allows Customs and Border Protection to search anything going into the country, is a contentious issue when applied to electronics. It is somewhat (but not completely) settled law, but the fact that the U.S. government can, without any cause at all (not even "reasonable articulable suspicion", let alone "probable cause"), copy all the contents of my devices when I reenter the country sows deep discomfort in me and many others. The only legal limitation appears to be a promise not to use this information to connect to remote services. The new iOS feature means that a Customs officer can browse through a device -- a time-limited exercise -- but not download its full contents.

Worse Than FailureError'd: Have it Your Way!

"You can have any graphics you want, as long as it's Intel HD Graphics 515," Mark R. writes.

 

"You know, I'm pretty sure that I've been living there for a while now," writes Derreck.

 

Sven P. wrote, "Usually, I blame production outages on developers who, I swear, have trouble counting to five. After seeing this, I may want to blame the compiler too."

 

"Whenever I hear someone complaining about their device battery life, I show them this picture," wrote Renan.

 

"Prepaying for gas, my credit card was declined," Rand H. writes, "I was worried some thief must've maxed it out, but then I saw how much I was paying in taxes."

 

Brett A. wrote, "Yo Dawg I heard you like zips, so you should zip your zips to send your zips."

 

[Advertisement] BuildMaster integrates with an ever-growing list of tools to automate and facilitate everything from continuous integration to database change scripts to production deployments. Interested? Learn more about BuildMaster!

,

CryptogramSecuring a Raspberry Pi

A Raspberry Pi is a tiny computer designed for makers and all sorts of Internet-of-Things types of projects. Make magazine has an article about securing it. Reading it, I am struck by how much work it is to secure. I fear that this is beyond the capabilities of most tinkerers, and the result will be even more insecure IoT devices.

Krebs on SecurityEquifax Hackers Stole 200k Credit Card Accounts in One Fell Swoop

Visa and MasterCard are sending confidential alerts to financial institutions across the United States this week, warning them about more than 200,000 credit cards that were stolen in the epic data breach announced last week at big-three credit bureau Equifax. At first glance, the private notices obtained by KrebsOnSecurity appear to suggest that hackers initially breached Equifax starting in November 2016. But Equifax says the accounts were all stolen at the same time — when hackers accessed the company’s systems in mid-May 2017.


Both Visa and MasterCard frequently send alerts to card-issuing financial institutions with information about specific credit and debit cards that may have been compromised in a recent breach. But it is unusual for these alerts to state from which company the accounts were thought to have been pilfered.

In this case, however, Visa and MasterCard were unambiguous, referring to Equifax specifically as the source of an e-commerce card breach.

In a non-public alert sent this week to sources at multiple banks, Visa said the “window of exposure” for the cards stolen in the Equifax breach was between Nov. 10, 2016 and July 6, 2017. A similar alert from MasterCard included the same date range.

“The investigation is ongoing and this information may be amended as new details arise,” Visa said in its confidential alert, linking to the press release Equifax initially posted about the breach on Sept. 7, 2017.

The card giant said the data elements stolen included card account number, expiration date, and the cardholder’s name. Fraudsters can use this information to conduct e-commerce fraud at online merchants.

It would be tempting to conclude from these alerts that the card breach at Equifax dates back to November 2016, and that perhaps the intruders then managed to install software capable of capturing customer credit card data in real-time as it was entered on one of Equifax’s Web sites.

Indeed, that was my initial hunch in deciding to report out this story. But according to a statement from Equifax, the hacker(s) downloaded the data in one fell swoop in mid-May 2017.

“The attacker accessed a storage table that contained historical credit card transaction related information,” the company said. “The dates that you provided in your e-mail appear to be the transaction dates. We have found no evidence during our investigation to indicate the presence of card harvesting malware, or access to the table before mid-May 2017.”

Equifax did not respond to questions about how it was storing credit card data, or why only card data collected from customers after November 2016 was stolen.

In its initial breach disclosure on Sept. 7, Equifax said it discovered the intrusion on July 29, 2017. The company said the hackers broke in through a vulnerability in the software that powers some of its Web-facing applications.

In an update to its breach disclosure published Wednesday evening, Equifax confirmed reports that the application flaw in question was a weakness disclosed in March 2017 in a popular open-source software package called Apache Struts (CVE-2017-5638).

“Equifax has been intensely investigating the scope of the intrusion with the assistance of a leading, independent cybersecurity firm to determine what information was accessed and who has been impacted,” the company wrote. “We know that criminals exploited a U.S. website application vulnerability. The vulnerability was Apache Struts CVE-2017-5638. We continue to work with law enforcement as part of our criminal investigation, and have shared indicators of compromise with law enforcement.”

The Apache flaw was first spotted around March 7, 2017, when security firms began warning that attackers were actively exploiting a “zero-day” vulnerability in Apache Struts. Zero-days refer to software or hardware flaws that hackers find and figure out how to use for commercial or personal gain before the vendor even knows about the bugs.

By March 8, Apache had released new versions of the software to mitigate the vulnerability. But by that time exploit code that would allow anyone to take advantage of the flaw was already published online — making it a race between companies needing to patch their Web servers and hackers trying to exploit the hole before it was closed.

Screen shots apparently taken on March 10, 2017 and later posted to the vulnerability tracking site xss[dot]cx indicate that the Apache Struts vulnerability was present at the time on annualcreditreport.com — the only web site mandated by Congress where all Americans can go to obtain a free copy of their credit reports from each of the three major bureaus annually.

In another screen shot apparently made that same day and uploaded to xss[dot]cx, we can see evidence that the Apache Struts flaw also was present in Experian’s Web properties.

Equifax has said the unauthorized access occurred from mid-May through July 2017, suggesting either that the company’s Web applications were still unpatched in mid-May or that the attackers broke in earlier but did not immediately abuse their access.

It remains unclear when exactly Equifax managed to fully eliminate the Apache Struts flaw from their various Web server applications. But one thing we do know for sure: The hacker(s) got in before Equifax closed the hole, and their presence wasn’t discovered until July 29, 2017.

Update, Sept. 15, 12:31 p.m. ET: Visa has updated their advisory about these 200,000+ credit cards stolen in the Equifax breach. Visa now says it believes the records also included the cardholder’s Social Security number and address, suggesting that (ironically enough) the accounts were stolen from people who were signing up for credit monitoring services through Equifax.

Equifax also clarified the breach timeline to note that it patched the Apache Struts flaw in its Web applications only after taking the hacked system(s) offline on July 30, 2017. Which means Equifax left its systems unpatched for more than four months after a patch (and exploit code to attack the flaw) was publicly available.

CryptogramHacking Robots

Researchers have demonstrated hacks against robots, taking over and controlling their camera, speakers, and movements.

News article.

Worse Than FailureCodeSOD: string isValidArticle(string article)

Anonymous sends us this little blob of code, which is mildly embarrassing on its own:

    static StringBuilder vsb = new StringBuilder();
    internal static string IsValidUrl(string value)
    {
        if (value == null)
        {
            return "\"\"";
        }

        vsb.Length= 0;
        vsb.Append("@\"");

        for (int i=0; i<value.Length; i++)
        {
            if (value[i] == '\"')
                vsb.Append("\"\"");
            else
                vsb.Append(value[i]);
        }

        vsb.Append("\"");
        return vsb.ToString();
    }

I’m willing to grant that re-using the same static StringBuilder object is a performance tuning thing, but everything else about this is… just plain puzzling.

The method is named IsValidUrl, but it returns a string. It doesn’t do any validation! All it appears to do is take any arbitrary string and return that string wrapped as if it were a valid C# string literal. At best, this method is horridly misnamed, but if its purpose is to truly generate valid C# strings, it has a potential bug: it doesn’t handle new-lines. Now, I’m sure that won’t be a problem that comes back up before the end of this article.

The code, taken on its own, is just bad. But when placed into context, it gets worse. This isn’t just code. It’s part of .NET’s System.Runtime.Remoting package. Still, I know, you’re saying to yourself, ‘In all the millions of lines in .NET, this is really the worst you’ve come up with?’

Well, it comes up because remember that bug with new-lines? Well, guess what. That exact flaw was a zero-day that allowed code execution… in RTF files.

Now, skim through some of the other code in wsdlparser.cs, and you'll see the real horror. This entire file has one key job: generating a class capable of parsing data according to an input WSDL file… by using string concatenation.
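
To illustrate why that pattern is dangerous, here is a hypothetical sketch of the general hazard (emphatically not the actual wsdlparser.cs logic): a generator that splices untrusted input into the source it emits.

    // Hypothetical sketch: building C# source by concatenating untrusted input.
    static string EmitProxySource(string serviceUrl)
    {
        // The untrusted value lands in a single-line comment, so anything
        // after a newline in the value escapes the comment and is compiled
        // as code inside the generated class.
        return
            "class GeneratedProxy\n" +
            "{\n" +
            "    // endpoint: " + serviceUrl + "\n" +
            "    public string Url = " + IsValidUrl(serviceUrl) + ";\n" +
            "}\n";
    }

    // A serviceUrl such as
    //   "http://example.com\nstatic GeneratedProxy() { System.Diagnostics.Process.Start(\"calc\"); }"
    // adds a static constructor to the emitted class: attacker code that runs
    // as soon as the generated type is touched. IsValidUrl dutifully wraps the
    // same payload in a verbatim string on the next line, but by then it is
    // already sitting in the generated source as executable code.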

The real WTF is the fact that you can embed SOAP links in RTF files and Word will attempt to use them, thus running the WSDL parser against the input data. This is code that’s a little bad, used badly, creating an exploited zero-day.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Don Martianother 2x2 chart

What to do about different kinds of user data interchange:

Good data / collected without permission: Build tools and norms to reduce the amount of reliable data that is available without permission.

Good data / collected with permission: Develop and test new tools and norms that enable people to share data that they choose to share.

Bad data / collected without permission: Report on and show errors in low-quality data that was collected without permission.

Bad data / collected with permission: Offer users incentives and tools that help them choose to share accurate data and correct errors in voluntarily shared data.

Most people who want data about other people still prefer data that's collected without permission, and collaboration is something that they'll settle for. So most voluntary user data sharing efforts will need a defense side as well. Freedom-loving technologists have to help people reduce the amount of data that they allow to be taken from them without permission, so that the people who want data have a reason to listen to people about sharing it.

Planet Linux AustraliaOpenSTEM: New Dates for Human Relative + ‘Explorer Classroom’ Resources

During September, National Geographic is featuring the excavations of Homo naledi at Rising Star Cave in South Africa in their Explorer Classroom, in tune with new discoveries and the publishing of dates for this enigmatic little hominid. A Teacher’s Guide and Resources are available and classes can log in to see live updates from the […]

,

TED“World peace will come from sitting around the table”: Chef Pierre Thiam chats with food blogger Ozoz Sokoh

Chef and cookbook author Pierre Thiam, left, sits down with food blogger Ozoz Sokoh to talk about the West African rice dish jollof — beloved in Nigeria, Senegal, Ghana and around the world. But who makes it best? They spoke during TEDGlobal 2017 in Arusha, Tanzania. Photo: Callie Giovanna / TED

Two African cooks walk into a bar; 30 seconds later they are arguing over whose country’s jollof rice is better. Or so the corny joke would go. The truth is, I really had no idea what would happen if we got Senegal-born chef Pierre Thiam (TED Talk: A Forgotten Ancient Grain That Could Help Africa Prosper) and Nigerian jollof promoter Ozoz Sokoh to sit down together for a friendly chat.

Based in New York, Pierre is a world-renowned chef who grew up in Senegal and is known for his exquisite dishes and his passion for spreading African cuisine across the world. He informed me that my interview request was the third jollof-related one he had granted in a week, the previous ones coming from the BBC and Wall Street Journal. It totally makes sense that in the heat of the jollof wars that now erupt every few weeks, mostly on Twitter, usually between Nigerians and Ghanaians, pundits are turning to a Senegalese chef for their take on the dispute. Jollof, after all, is named for the Wolof people, the largest ethnic group in Senegal; the country does have some claim.

Ozoz for her own part is an accomplished cook (she declined to be called a chef because it’s like a professional certification, apparently), food blogger and photographer, and probably one of the biggest promoters of jollof rice in Africa right now, an obsession that has since burst out of her Twitter timeline into a dedicated blog and the well-attended World Jollof Day festival. Was she down to interview Pierre about the jollof controversy? Of course. In fact, Ozoz had come from Lagos armed with homemade Nigerian spices, snacks and a jollof T-shirt for Pierre.

I apologize in advance to everyone who was spoiling for some sort of fiery showdown; this isn’t it. And I will admit to influencing their conversation slightly, by suggesting to them that the jollof question was merely an interesting pretext for a broader and infinitely more useful conversation about African cuisine that both of them were incredibly suited to have. What you are about to read is what happened next.

Ozoz: I think that it’s amazing that we’ve had all these ingredients for centuries but our preference is to default to what isn’t homegrown. You were talking about fonio yesterday, and I think there is an appreciation that we need to develop for homegrown products. Apart from fonio, what other things do you think we should be going crazy about? Things that are locally grown and could have transformative effects on food security.

Pierre: There are countless, you see. Millet is one of them. Sorghum is another one. The leaves too, especially in Nigeria where there are so many interesting leaf vegetables that are highly recommended for diets, and many cultures don’t know them as much as Nigeria does. So there is an opportunity there to share this knowledge. People talk about moringa, but moringa is just one of them.

Ozoz: One of my concerns is how do we get people in remote, non-urban areas to realise the value of what they have around them.

Pierre: Actually I don’t think it’s people in rural areas who have this problem. It’s people in urban areas who like to mimic the westerners’ way of eating and look down on the rural way of eating. Take fonio, for instance — you find it in Northern Nigeria and the Southern part of Senegal a lot, but in Lagos, Abuja, Dakar, you have to look for it. So the rural areas, they have it because there is a tradition. That’s what they have. And they can’t even afford the food that comes from the west. But us, we prefer to import from the west, and this is terrible for our economy. It’s terrible for our sense of pride, which is affected every day.

“I think there are many rituals that we’ve lost,” Ozoz says, “but sitting around the table with family and friends is one that we need to reintroduce into our way of life.” She’s speaking with Pierre Thiam at TEDGlobal 2017. Photo: Callie Giovanna / TED

Ozoz: I feel like the attitude to homegrown is changing. Nok by Alara, for instance, has an amazing menu that is a tribute to homegrown, just an amazing mixture of local flavours and textures. But what other things do you think we can do to grow a whole new Nigerian or West African-style cuisine — in addition to cooking, what other ways beyond the kitchen?

Pierre: It’s a very good question, because it goes beyond the kitchen. It’s not only chefs who can wage that battle. It takes many, many levels. The media is important because information is key. Many people don’t know: We have wonderful ingredients. We have superfoods. If you look at our DNA, our background, our ancestors were strong people and they were eating that food, and because of that they were taken, because of their strength. We today want to say that that food is not good enough, and we import diseases. Many of the diseases that you see today in Nigeria or Dakar are imported. Diabetes, high cholesterol, high blood pressure, hypertension … all of which are directly connected with your diet. We use a lot of cubes now in our diet, and that is directly linked to why there is a lot of hypertension, because there is a lot of sodium in them. It’s a mind shift, we have to get back to what we have.

Ozoz: You are right, the media plays a really important role. So jollof rice. Obviously, everyone says Nigerian Jollof is the best :) what do you think?

Pierre: I hear you. When I’m in Nigeria, I eat Nigerian jollof, that’s for sure. And I enjoy it. When I’m in Ghana, I love Ghanaian jollof too. This is the great thing about jollof, jollof is a dish that’s like all these different cultures and countries just owning it. Jollof means Senegal [ed: the name derives from “Wolof“], but that doesn’t mean we own it. That is the way Africa is, food transcends borders, you know, and jollof has obviously transcended borders in a way that is powerful. This war is beautiful.

Ozoz: So you think Jollof can promote world peace?

Pierre: Absolutely. I think world peace will come from sitting around the table.

Pierre Thiam says: “When I’m in Nigeria, I eat Nigerian jollof, that’s for sure. And I enjoy it. When I’m in Ghana, I love Ghanaian jollof too. That is the way Africa is: food transcends borders.” Photo: Callie Giovanna / TED

Ozoz: I think there are many rituals that we’ve lost, but sitting around the table with family and friends is one that we need to reintroduce into our way of life.

Pierre: It is key. Simple moments like this on a daily basis can make a huge difference. And it's great that jollof rice is a symbolic dish that everyone claims.

Ozoz: It’s so refreshing to hear you say that — it’s a testament to your open and giving nature.

Pierre: That’s what food is about: sharing. In Africa you go to a household and people offer you food. Food is something we don’t keep to ourselves, we have to share it. If you go to a household in Lagos, you will be offered something to drink, zobo, it’s a symbolic thing.

Ozoz: I was really, really fascinated to read Senegal: Modern Senegalese Recipes from the Source to the Bowl. I was really intrigued by the palm oil recipes, particularly the palm oil ice cream. Really, really intrigued; it looks really amazing and it’s on my list of things to make once I get back and settle down. I’m gonna get organic palm oil, the best quality that I can find.

Pierre: That’s the best ice cream I’ve ever had.

Ozoz: It looks the part.

Pierre: I want to hear what you have to say when you make it.

Ozoz: Tell me about how you developed this recipe. Were you sleeping? Was it midnight? How did it come to you?

Pierre: At first I wanted to have something vegan, something without dairy — as you can see, there is no dairy in that recipe. But when you eat it, you don’t taste that there is no dairy, it’s got the richness of the palm oil. There’s coconut milk, there is palm oil, and there is lime zest, which really brings the acidity. So you have a perfect balance, which is what you are really looking for. Creating new recipes is like chemistry. Your kitchen is your lab, and you just get creative and have fun with it.

Ozoz: I find myself thinking a lot about my memory bank…my taste bank. There are certain things I eat that transport me to a time, a place…what are some of the things that are in your memory bank, and can you share a bit about why they are there?

Pierre: Well, it usually goes back to childhood. The memories of food are powerful, and it can come from anything. Like a whiff that takes you back to your grandmother’s, the dishes that she would cook for you when you were a kid. So for me, I’m gonna come back to palm oil and okro, those are the ingredients that are very powerful to me and take me back to those moments of innocence. It’s very emotional when I get into that zone. A lot of my creations come from there, and those traditions. And that is why traditions are important. I think that any African chef before looking to the future has to go back into the past and remember what was served to them in their childhood — or do some research into the traditions and get a better grasp of the future.

Ozoz: If you were a spice, what would you be?

Pierre: Probably ginger, because I like the heat of it. Especially Nigerian ginger. I like it because it can bring the sensation of heat without being too overpowering like pepper.

Ozoz: If you were a fruit, what would you be?

Pierre: A fruit, huh? I love papaya, because I can use it as a dessert, or as a tenderiser when I’m cooking meat. I love green papaya that I can put in a salad, with red onions and chili and lime juice, that becomes a snack. It’s very versatile.

Ozoz: I think the future of food in Africa has a lot to do with collaboration. How do we grow this collective of voices around it: writers, food photographers, chefs… In the US, for instance, there are associations, foundations, but I’m not sure if those constructs would suit African needs. What should we be thinking about if we are to take the appreciation of our food history and practice of the culture to the next level?

Pierre: I think that this conversation is important to have… like chefs’ meetings. It could be around events. For instance, this November I’m inviting chefs to Saint-Louis, in Senegal. And they are coming from across Africa, from Cameroon, Morocco, Cote d’Ivoire, South Africa, and they are coming to this event as part of the Saint-Louis Forum. Each of us will come with our own traditions and approach to food.

Ozoz: You are absolutely right, that coming together, exchange of ideas, discussions …

Bankole in the background: blogging, food festivals…

Ozoz: Yes. We talked about the role of media earlier. Writing, podcasts, videos, how-tos, documentaries, it’s a whole range.

Pierre: And it’s the right time, right now, we have a lot of tools at our disposal. We don’t need big networks to broadcast this, we can do it ourselves and reach millions of people. As Africans, we have a unique opportunity to tell our story. African cuisine is ready to be explored, we’ve got so much to offer from each country and so many different cultures with different flavors.

Surrounded by mounds of fresh ingredients, Pierre Thiam preps fonio sushi rolls to share onstage at TEDGlobal 2017. Photo: Ryan Lash / TED

 

Ozoz: Quick fire round. Zobo or tamarind?

Pierre: Zobo.

Ozoz: What do you always have in your fridge?

Pierre: Oh boy…I don’t have much in my fridge…

Ozoz: What food can’t you live without?

Pierre: Uh? This is going to sound clichéd but I really love my fonio on a regular basis.

Ozoz: I don’t mind that. Foraging or fishing?

Pierre: Fishing.

Ozoz: Cumin or coriander seeds?

Pierre: Cumin.

Ozoz: Rain or sun?

Pierre: Sun.

Ozoz: Pancakes or French toast?

Pierre: French toast.

Ozoz: Food writing or photography?

Pierre: Both. Actually photography is very important, but good food writing can transport you to places in your imagination, which is more difficult to capture with photography.

Ozoz: Cilantro or parsley?

Pierre: Cilantro.

Ozoz: Last one. Nigerian jollof or Ghanaian jollof?

Pierre: Senegalese …

To share with Pierre, Ozoz brought a package of homemade spice mixes from Nigeria, including yaji spice, a peanut-based mixture of smoky and spicy aromatics that’s traditionally used to make suya, a popular street food. Photo: Callie Giovanna / TED


CryptogramOn the Equifax Data Breach

Last Thursday, Equifax reported a data breach that affects 143 million US customers, about 44% of the population. It's an extremely serious breach; hackers got access to full names, Social Security numbers, birth dates, addresses, driver's license numbers -- exactly the sort of information criminals can use to impersonate victims to banks, credit card companies, insurance companies, and other businesses vulnerable to fraud.

Many sites posted guides to protecting yourself now that it's happened. But if you want to prevent this kind of thing from happening again, your only solution is government regulation (as unlikely as that may be at the moment).

The market can't fix this. Markets work because buyers choose between sellers, and sellers compete for buyers. In case you didn't notice, you're not Equifax's customer. You're its product.

This happened because your personal information is valuable, and Equifax is in the business of selling it. The company is much more than a credit reporting agency. It's a data broker. It collects information about all of us, analyzes it all, and then sells those insights.

Its customers are people and organizations who want to buy information: banks looking to lend you money, landlords deciding whether to rent you an apartment, employers deciding whether to hire you, companies trying to figure out whether you'd be a profitable customer -- everyone who wants to sell you something, even governments.

It's not just Equifax. It might be one of the biggest, but there are 2,500 to 4,000 other data brokers that are collecting, storing, and selling information about you -- almost all of them companies you've never heard of and have no business relationship with.

Surveillance capitalism fuels the Internet, and sometimes it seems that everyone is spying on you. You're secretly tracked on pretty much every commercial website you visit. Facebook is the largest surveillance organization mankind has created; collecting data on you is its business model. I don't have a Facebook account, but Facebook still keeps a surprisingly complete dossier on me and my associations -- just in case I ever decide to join.

I also don't have a Gmail account, because I don't want Google storing my e-mail. But my guess is that it has about half of my e-mail anyway, because so many people I correspond with have accounts. I can't even avoid it by choosing not to write to gmail.com addresses, because I have no way of knowing if newperson@company.com is hosted at Gmail.

And again, many companies that track us do so in secret, without our knowledge and consent. And most of the time we can't opt out. Sometimes it's a company like Equifax that doesn't answer to us in any way. Sometimes it's a company like Facebook, which is effectively a monopoly because of its sheer size. And sometimes it's our cell phone provider. All of them have decided to track us and not compete by offering consumers privacy. Sure, you can tell people not to have an e-mail account or cell phone, but that's not a realistic option for most people living in 21st-century America.

The companies that collect and sell our data don't need to keep it secure in order to maintain their market share. They don't have to answer to us, their products. They know it's more profitable to save money on security and weather the occasional bout of bad press after a data loss. Yes, we are the ones who suffer when criminals get our data, or when our private information is exposed to the public, but ultimately why should Equifax care?

Yes, it's a huge black eye for the company -- this week. Soon, another company will have suffered a massive data breach and few will remember Equifax's problem. Does anyone remember last year when Yahoo admitted that it exposed personal information of a billion users in 2013 and another half billion in 2014?

This market failure isn't unique to data security. There is little improvement in safety and security in any industry until government steps in. Think of food, pharmaceuticals, cars, airplanes, restaurants, workplace conditions, and flame-retardant pajamas.

Market failures like this can only be solved through government intervention. By regulating the security practices of companies that store our data, and fining companies that fail to comply, governments can raise the cost of insecurity high enough that security becomes a cheaper alternative. They can do the same thing by giving individuals affected by these breaches the ability to sue successfully, citing the exposure of personal data itself as a harm.

By all means, take the recommended steps to protect yourself from identity theft in the wake of Equifax's data breach, but recognize that these steps are only effective on the margins, and that most data security is out of your hands. Perhaps the Federal Trade Commission will get involved, but without evidence of "unfair and deceptive trade practices," there's nothing it can do. Perhaps there will be a class-action lawsuit, but because it's hard to draw a line between any of the many data breaches you're subjected to and a specific harm, courts are not likely to side with you.

If you don't like how careless Equifax was with your data, don't waste your breath complaining to Equifax. Complain to your government.

This essay previously appeared on CNN.com.

EDITED TO ADD: In the early hours of this breach, I did a radio interview where I minimized the ramifications of this. I didn't know the full extent of the breach, and thought it was just another in an endless string of breaches. I wondered why the press was covering this one and not many of the others. I don't remember which radio show interviewed me. I kind of hope it didn't air.