Planet Russell


Worse Than Failure: Flobble

The Inner Platform Effect, third only after booleans and dates, is one of the most complicated blunders that so-called developers (who think that they know what they're doing) commit to Make Things Better.™ Combine that with multiple inheritance run amok and a smartass junior developer who thinks documentation and method naming are good places to be cute, and you get today's submission.

[Image: A cat attacking an impossible object illusion to get some tuna from their human]

Chops, an experienced C++ developer somewhere in Europe, was working on their flagship product. It had been built slowly over 15 years by a core of 2-3 main developers, and an accompanying rotating cast of enthusiastic but inexperienced C++ developers. The principal developer had been one of those juniors himself at the start of development. When he finally left, an awful lot of knowledge walked out the door with him.

Enormous amounts of what should have been standard tools were homegrown. Homegrown reference counting was a particular bugbear, being thread dangerous as it was - memory leaks abounded. The whole thing ran across a network, and there were a half-dozen ways any one part could communicate with another. One such way was a "system event". A new message object was created and then just launched into the underlying messaging framework, in the hopes that it would magically get to whoever was interested, so long as that other party had registered an interest (not always the case).

A new system event was needed, and a trawl was made for anyone who knew anything about them. <Crickets> Nobody had any idea how they worked, or how to make a new one. The documentation was raked over, but it was found to mostly be people complaining that there was no documentation. The code suffered from inheritance fever. In a sensible system, there would be only one message type, and one would simply tag it appropriately with an identifier before inserting the data of interest.

In this system, there was an abstract base message type, and every specific message type had to inherit from it, implement some of the functions and override some others. Unfortunately, each time it seemed to be a different set of functions being implemented and a different set being overridden. Some were clearly cut-and-paste jobs, copying others and carrying their mistakes forward. Some were made out of several pieces of others; cut, pasted and compiled until the warnings were disabled and the compiler stopped complaining.

Sometimes, when developing abstract base types that were intended to be inherited from to create a concrete class for a new purpose, those early developers had created a simple, barebones concrete example implementation. A reference implementation, with "Example" in the name, that could be used as a starting point, with comments, making it clear what was necessary and what was optional. No such example class could be found for this.
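For illustration (the class and method names below are invented, not taken from the actual codebase), such a base type and the missing reference implementation might have looked something like this:

#include <string>
#include <vector>

// Hypothetical abstract base: every concrete message must implement the
// pure-virtual functions and may override a few others.
class SystemMessage {
public:
    virtual ~SystemMessage() {}
    virtual std::string typeId() const = 0;                  // must implement
    virtual std::vector<char> serialize() const = 0;         // must implement
    virtual void deserialize(const std::vector<char>&) = 0;  // must implement
    virtual int priority() const { return 0; }               // may override
};

// A barebones reference implementation: it does nothing useful, but it shows
// at a glance which functions are required and which are optional.
class ExampleMessage : public SystemMessage {
public:
    std::string typeId() const override { return "Example"; }
    std::vector<char> serialize() const override { return std::vector<char>(); }
    void deserialize(const std::vector<char>&) override {}
};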

Weeks of effort went into reverse-engineering the required messaging functionality, based on a few semi-related examples. Slowly, the shape of the mutant inside became apparent. Simple, do-nothing message objects were created and tested. Each time they failed, the logs were pored over, breakpoints were added, networks were watched, tracing the point of failure and learning something new.

Finally, the new message object was finished. It worked. There was still some voodoo coding in it; magic incantations that were not understood (the inheritance chain was more than five levels deep, with multiple diamonds, and one class being inherited from six times), but it worked, although nobody was certain why.

During the post-development documentation phase, Mister Chops was hunting down every existing message object. Each would need reviewing and examination at some point, with the benefit of the very expensive reverse engineering. He came across one with an odd name; it wasn't used anywhere, so it hadn't been touched since it was first committed. Nobody had ever had a reason to look at it. The prefix of the name was as expected, but the suffix - the part that told you at a glance what kind of message it was - was "Flobble". Chops opened it up.

It was a barebones example of a concrete implementation of the abstract base class, with useful explanatory comments on how to use/extend it, and how it worked. Back at the start, some developer, instead of naming the example class "Example" as was customary, or naming it anything at all that would have made it clear what it was, had named it "Flobble". It sat there for a decade, while people struggled to understand these objects over and over, and finally reverse engineered it at *significant* expense. Because some whimsical developer a decade previously had decided to be funny.


Planet Debian: Julien Danjou: How I stopped merging broken code


It's been a while since I moved all my projects to GitHub. It's convenient to host Git projects, and the collaboration workflow is smooth.

I love pull requests to merge code. I review them, I send them, I merge them. The fact that you can plug them into a continuous integration system is great and makes sure that you don't merge code that will break your software. I usually have Travis-CI setup and running my unit tests and code style check.

The problem with the GitHub workflow is that it allows merging untested code.

What?

Yes, it does. If you think that your pull requests, all decorated in green, are ready to be merged, you're wrong.

[Image: This might not be as good as you think]

You see, pull requests on GitHub are marked as valid as soon as the continuous integration system passes and indicates that everything is fine. However, if the target branch (let's say, master) is updated while the pull request is open, nothing forces that pull request to be retested against the new master branch. You think that the code in this pull request still works, while that might no longer be true.

[Image: Master moved; the pull request is not up to date, though it's still marked as passing integration]

So it might be that what went into your master branch now breaks this not-yet-merged pull request. You have no clue. You'll trust GitHub, press that green merge button, and break your software, because the combination of the two changes was never actually tested.
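A minimal, hypothetical illustration of that failure mode (the function names are invented): each branch passed CI on its own, but the merged result was never built until after the merge.

// What ends up in the merged tree: master's new function signature...
int price(int amount, int discount) { return amount * 2 - discount; }

// ...plus the pull request's caller, written against the old signature
// (int price(int amount)) that existed when the branch was created.
// Each branch's CI was green, but this exact combination was never built:
int total() { return price(3); }   // error: too few arguments to price()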

[Image: If the pull request has not been updated with the latest version of its target branch, it might break your integration]

The good news is that this is solvable with the strict workflow that Mergify provides. There's a nice explanation and example in Mergify's blog post You are merging untested code that I advise you to read. What Mergify provides here is a way to serialize the merging of pull requests while making sure that they are always updated with the latest version of their target branch. It makes sure that there's no way to merge broken code.

That's a workflow I've now adopted and automated on all my repositories, and we've been using such a workflow for Gnocchi for more than a year, with great success. Once you start using it, it becomes impossible to go back!

Cryptogram: Traffic Analysis of the LTE Mobile Standard

Interesting research in using traffic analysis to learn things about encrypted traffic. It's hard to know how critical these vulnerabilities are. They're very hard to close without wasting a huge amount of bandwidth.

The active attacks are more interesting.

EDITED TO ADD (7/3): More information.

I have been thinking about this, and now believe the attacks are more serious than I previously wrote.

Planet Debian: Athos Ribeiro: Towards Debian Unstable builds on Debian Stable OBS

This is the sixth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian Unstable builds on OBS

Lately, I have been working towards triggering Debian Unstable builds with Debian OBS packages. As reported before, we can already build packages for both Debian 8 and 9, based on the example project configurations shipped with the package in Debian Stable and on the project configuration files publicly available on the OBS SUSE instance.

While trying to build packages against Debian Unstable, I have been hitting the following issue:

The OBS scheduler reads the project configuration and starts downloading dependencies. The dependencies get downloaded but the build is never dispatched (the package stays in a “blocked” state). The downloaded dependencies get cleaned up and the scheduler starts the downloads again. OBS enters an infinite loop there.

This only happens for builds on sid (unstable) and buster (testing).

We realized that the OBS version packaged in Debian 9 (the one we are currently using) does not support Debian source packages built with dpkg >= 1.19. At first I started applying this patch to the OBS Debian package, but after reporting the issue to the Debian OBS maintainers, they pointed me to the obs-build package in the Debian stable backports repositories, which included the mentioned patch.

While the backports package included the patch needed to support source packages built with newer versions of dpkg, we still get the same issue with unstable and testing builds: the scheduler downloads the dependencies, hangs for a while, but the build is never dispatched (the package stays in a “blocked” state). After a while, the dependencies get cleaned up and the scheduler starts the downloads again.

The bug has been quite hard to debug, since OBS logs do not provide feedback on the problem we have been facing. To debug the problem, we tried to trigger local builds with osc. First, I (successfully) triggered a few local builds against Debian 8 and 9 to make sure the command would work. Then we proceeded to trigger builds against Debian Unstable.

The first issue we faced was that the osc package in Debian stable cannot handle builds against source packages built with new dpkg versions. We fixed that by patching osc/util/debquery.py (we just substituted the file with the latest one from the osc development version). After applying the patch, we got the same results we’d get when trying to build the package remotely, but with debug flags on we could get a better understanding of the problem:

BuildService API error: CPIO archive is incomplete (see .errors file)

The .errors file would just contain a list of dependencies which were missing in the CPIO archive.

If we kept retrying, OBS would keep caching more and more dependencies, until the build succeeded at some point.

We now know that the issue lies with the Download on Demand feature.

We then tried a local build in a fresh OBS instance (no cached packages) using the --disable-cpio-bulk-download osc build option, which makes OBS download each dependency individually instead of doing so in bulk. To our surprise, the builds succeeded on our first attempt.

Finally, we traced the issue all the way down to the OBS API call which is triggered when OBS needs to download missing dependencies. For some reason, the number of parameters (the number of dependencies to be downloaded) affects the final response of the API call. When trying to download too many packages, the CPIO archive is not built correctly and OBS builds fail.

At the moment, we are still investigating why such calls fail with too many params and why it only fails for Debian Testing and Unstable repositories.

Next steps (A TODO list to keep on the radar)

  • Fix OBS builds on Debian Testing and Unstable
  • Write patch for Debian osc’s debquery.py so it can build Debian packages with control.tar.xz
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)


Cory Doctorow: Mark Zuckerberg and his empire of oily rags

Surveillance capitalism sucks: it improves the scattershot, low-performance success-rate of untargeted advertising (well below 1 percent) and doubles or triples it (to well below 1 percent!).


But surveillance capitalism is still dangerous: all those dossiers on the personal lives of whole populations can be used for blackmail, identity theft and political manipulation. As I explain in my new Locus column, Cory Doctorow: Zuck’s Empire of Oily Rags, Facebook’s secret is that they’ve found a way to turn a profit on an incredibly low-yield resource — like figuring out how to make low-grade crude out of the oil left over from oily rags.

But because the margins on surveillance data are so poor, the business is only sustainable if it fails to take the kinds of prudent precautions that would make it safe to warehouse these unimaginably gigantic piles of oily rags.

It’s as though Mark Zuckerberg woke up one morning and realized that the oily rags he’d been accumulating in his garage could be refined for an extremely low-grade, low-value crude oil. No one would pay very much for this oil, but there were a lot of oily rags, and provided no one asked him to pay for the inevitable horrific fires that would result from filling the world’s garages with oily rags, he could turn a tidy profit.

A decade later, everything is on fire and we’re trying to tell Zuck and his friends that they’re going to need to pay for the damage and install the kinds of fire-suppression gear that anyone storing oily rags should have invested in from the beginning, and the commercial surveillance industry is absolutely unwilling to contemplate anything of the sort.

That’s because dossiers on billions of people hold the power to wreak almost unimaginable harm, and yet, each dossier brings in just a few dollars a year. For commercial surveillance to be cost effective, it has to socialize all the risks associated with mass surveillance and privatize all the gains.

There’s an old-fashioned word for this: corruption. In corrupt systems, a few bad actors cost everyone else billions in order to bring in millions – the savings a factory can realize from dumping pollution in the water supply are much smaller than the costs we all bear from being poisoned by effluent. But the costs are widely diffused while the gains are tightly concentrated, so the beneficiaries of corruption can always outspend their victims to stay clear.

Facebook doesn’t have a mind-control problem, it has a corruption problem. Cambridge Analytica didn’t convince decent people to become racists; they convinced racists to become voters.

Cory Doctorow: Zuck’s Empire of Oily Rags [Cory Doctorow/Locus]

Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #166

Here’s what happened in the Reproducible Builds effort between Sunday June 24 and Saturday June 30 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

diffoscope versions 97 and 98 were uploaded to Debian unstable by Chris Lamb. They included contributions already covered in previous weeks as well as new ones from:

Chris Lamb also updated the SSL certificate for try.diffoscope.org.

Authors

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian: Sune Vuorela: 80bit x87 FPU

Once again, I got surprised by the 80 bit x87 FPU stuff.

First time was around a decade ago. Back then, it was something along the lines of a sort function like:

struct ValueSorter
{
    bool operator() (const Value& first, const Value& second) const
    {
         double valueFirst = first.amount() * first.value();
         double valueSecond = second.amount() * second.value();
         return valueFirst < valueSecond;
    }
};

With some values, first would be smaller than second, and second smaller than first, all depending on which one got truncated to 64 bits and which one came directly from the 80-bit FPU.

This time, the 80-bit version, when cast to an integer, was 1 smaller than the 64-bit version.
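In both cases, one common workaround (a sketch, reusing the hypothetical Value class from the snippet above) is to force the intermediate results through 64-bit memory, so that both sides of the comparison lose the extra x87 precision in the same way and the comparator stays consistent:

struct ValueSorter
{
    bool operator() (const Value& first, const Value& second) const
    {
        // Writing to volatile doubles forces both products out of the 80-bit
        // x87 registers and into 64-bit memory before they are compared.
        volatile double valueFirst = first.amount() * first.value();
        volatile double valueSecond = second.amount() * second.value();
        return valueFirst < valueSecond;
    }
};

Building with GCC's -ffloat-store, or using SSE math via -mfpmath=sse, has a similar effect without touching the code.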

Oh. The joys of x86 CPUs.

TED: Curing cancer one nanoparticle at a time, and more news from TED speakers

As usual, the TED community is hard at work — here are some highlights:

A new drug-delivering nanoparticle. Paula Hammond, the head of the Department of Chemical Engineering at MIT, is part of a research team that has developed a new nanoparticle designed to treat a kind of brain tumor called glioblastoma multiforme. The nanoparticles deliver drugs to the brain that work in two ways — to destroy the DNA of tumor cells, and to impede the repair of those cells. The researchers were able to shrink tumors and stop them from growing back in mice — and there’s hope this technology can be used for human applications in the future. (Watch Hammond’s TED Talk.)

Reflections on grief, loss and love. Amy Krouse Rosenthal penned a poignant, humorous and heart-rending love letter to her husband — published in The New York Times ten days before her death — that resonated deeply with readers across the world. In the year since, Jason Rosenthal established a foundation in her name to fund ovarian cancer research and childhood literacy initiatives. Following the anniversary of Amy’s death, Rosenthal responded to her letter in a moving reflection on mourning and the gifts of generosity she left in her wake. “We did our best to live in the moment until we had no more moments left,” he wrote for The New York Times. “Amy continues to open doors for me, to affect my choices, to send me off into the world to make the most of it. Recently I gave a TED Talk on the end of life and my grieving process that I hope will help others.” (Watch Rosenthal’s TED Talk.)

Why we need to change our perceptions of teenagers. Cognitive neuroscientist Sarah-Jayne Blakemore urges us to reconsider the way we understand and treat teenagers, especially in school settings. (She wrote a book about the secret life of the teenage brain in March.) According to the latest research, teenagers shed 17% of their grey matter in the prefrontal cortex between childhood and adulthood, which, as Blakemore says, explains why traditional “bad” behaviors like sleeping in late and moodiness are a result of cognitive changes, not laziness or abrasiveness. (Watch Blakemore’s TED Talk.)

Half empty or half full? Research by Dan Gilbert indicates that our decisions may be more faulty than we think — and that we may be predisposed to seeing problems even when they aren’t there. In a recent paper Gilbert co-authored, researchers found that our judgment doesn’t follow fixed rules, but rather, our decisions are more relative. In one experiment, participants were asked to look at dots along a color spectrum from blue to purple, and note which dots were blue; at first, the dots were shown in equal measure, but when blue dots were shown less frequently, participants began marking dots they previously considered purple as blue (this video does a good job explaining). In another experiment, participants were more likely to mark ethical papers as unethical, and nonthreatening faces as threatening, when the previously-set negative stimulus was shown less frequently. This behavior — dubbed “prevalence-induced concept change” — has broad implications; the paper suggests it may explain why social problems never seem to go away, regardless of how much work we do to fix them. (Watch Gilbert’s TED Talk.)

Terrifying insights from the world of parasites. Ed Yong likes to write about the creepy and uncanny of the natural world. In his latest piece for The Atlantic, Yong offered a deeper view into the bizarre habits and powers of parasitic worms. Based on research by Nicolle Demandt and Benedikt Saus from the University of Münster, Yong described how some tapeworms capitalize on the way fish shoals guide and react to each other’s behaviors and movements. Studying stickleback fish, Demandt and Saus realized that the parasite-informed decisions of infected sticklebacks can influence the behavior of uninfected fish, too. This means that if enough infected fish are led to dangerous situations by the controlling powers of the tapeworms, uninfected fish will be impacted by those decisions — without ever being infected themselves. (Read more of Yong’s work and watch his TED Talk.)

A new documentary on corruption within West African football. Ghanaian investigative journalist Anas Aremeyaw Anas joined forces with BBC Africa to produce an illuminating and hard-hitting documentary exposing fraud and corruption in West Africa’s football industry. In an investigation spanning two years, almost 100 officials were recorded accepting cash “gifts” from a slew of undercover reporters from Anas’ team posing as business people and investors. The documentary has already sent shock-waves throughout Ghana — including FIFA bans and resignations from football officials across the country. (Watch the full documentary and Anas’ TED Talk.)

 

TED: Ideas from the intersections: A night of talks from TED and Brightline

Onstage to host the event, Corey Hajim, TED’s business curator, and Cloe Shasha, TED’s speaker development director, kick off TEDNYC Intersections, a night of talks presented by TED and the Brightline Initiative. (Photo: Ryan Lash / TED)

At the intersections where we meet and collaborate, we can pool our collective wisdom to seek solutions to the world’s greatest problems. But true change begs for more than incremental steps and passive reactions — we need to galvanize transformation to create our collective future.

To celebrate the effort of bold thinkers building a better world, TED has partnered with the Brightline Initiative, a noncommercial coalition of organizations dedicated to helping leaders turn ideas into reality. In a night of talks at TED HQ in New York City — hosted by TED’s speaker development director Cloe Shasha and co-curated by business curator Corey Hajim and technology curator Alex Moura — six speakers and two performers showed us how we can effect real change. After opening remarks from Brightline’s Ricardo Vargas, the session kicked off with Stanford professor Tina Seelig.

Creativity expert Tina Seelig shares three ways we can all make our own luck. (Photo: Ryan Lash / TED)

How to cultivate more luck in your life. “Are you ready to get lucky?” asks Tina Seelig, a professor at Stanford University who focuses on creativity, entrepreneurship and innovation. While luck may seem to be brought on by chance alone, it turns out that there are ways you can enhance it — no matter how lucky or unlucky you think you are. Seelig shares three simple ways you can help luck to bend a little more in your direction: Take small risks that bring you outside your comfort zone; find every opportunity to show appreciation when others help you; and find ways to look at bad or crazy ideas with a new perspective. “The winds of luck are always there,” Seelig says, and by using these three tactics, you can build a bigger and bigger sail to catch them.

A new mantra: let’s fail mindfully. We celebrate bold entrepreneurs whose ingenuity led them to success — but how do we treat those who have failed? Leticia Gasca, founder and director of the Failure Institute, thinks we need to change the way we talk about business failure. After the devastating closing of her own startup, Gasca wiped the experience from her résumé and her mind. But she later realized that by hiding her failure, she was missing out on a valuable opportunity to connect. In an effort to embrace failure as an experience to learn from, Gasca co-created the Failure Institute, which includes international Fuck-Up Nights — spaces for vulnerability and connection over shared experiences of failure. Now, she advocates for a more holistic culture around failure. The goal of failing mindfully, Gasca says, is to “be aware of the consequences of the failed business,” and “to be aware of the lessons learned and the responsibility to share those learnings with the world.” This shift in the way we address failure can help make us better entrepreneurs, better people, and yes — better failures.

A police officer for 25 years, Tracie Keesee imagines a future where communities and police co-produce public safety in local communities. Photo: Ryan Lash / TED

Preserving dignity, guaranteeing justice. We all want to be safe, and our safety is intertwined, says Tracie Keesee, cofounder of the Center for Policing Equity. Sharing lessons she’s learned from 25 years as a police officer, Keesee reflects on the challenges — and opportunities — we all have for creating safer communities together. Policies like “Stop, Question and Frisk” set police and neighborhoods as adversaries, creating alienation, specifically among African Americans; instead, Keesee shares a vision for how the police and the neighborhoods they serve can come together to co-produce public safety. One example: the New York City Police Department’s “Build the Block Program,” which helps community members interact with police officers to share their experiences. The co-production of justice also includes implicit bias training for officers — so they can better understand how the biases we all carry impact their decision-making. By ending the “us vs. them” narrative, Keesee says, we can move forward together.

We can all be influencers. ​Success was once defined by power, but today it’s tied to influence, or “the ability to have an effect on a person or outcome,” says behavioral scientist Jon Levy. It rests on two building blocks: who you’re connected to and how much they trust you. In 2010, Levy created “Influencers” dinners, gathering a dozen high-profile people (who are strangers to each other) at his apartment. But how to get them to trust him and the rest of the group? He asks his guests to cook the meal and clean up. “I had a hunch this was working,” Levy recalls, “when one day I walked into my home and 12-time NBA All-Star Isiah Thomas was washing my dishes, while singer Regina Spektor was making guac with the Science Guy himself, Bill Nye.” From the dinners have emerged friendships, professional relationships and support for social causes. He believes we can cultivate our own spheres of influence at a scale that works for us. “If I can encourage you to do anything, it’s to bring together people you admire,” says Levy. “There’s almost no greater joy in life.”

Yelle and GrandMarnier rock the TED stage with electro-pop and a pair of bright yellow jumpsuits. (Photo: Ryan Lash / TED)

The intersection of music and dance. All the way from France, Yelle and GrandMarnier grace the TEDNYC stage with two electro-pop hits, “Interpassion” and “Ba$$in.” Both songs groove with robotic beats, Yelle’s hypnotic voice, kaleidoscopic rhythms and hypersonic sounds that rouse the audience to stand up, let loose and dance in the aisles.

How to be a great ally. We’re taught to believe that working hard leads directly to getting what you deserve — but sadly, this isn’t the case for many people. Gender, race, ethnicity, religion, disability, sexual orientation, class and geography — all of these can affect our opportunities for success, says writer and advocate Melinda Epler, and it’s up to all of us to do better as allies. She shares three simple ways to start uplifting others in the workplace: do no harm (listen, apologize for mistakes and never stop learning); advocate for underrepresented people in small ways (intervene if you see them being interrupted); and change the trajectory of a life by mentoring or sponsoring someone through their career. “There is no magic wand that corrects diversity and inclusion,” Epler says. “Change happens one person at a time, one act at a time, one word at a time.”

AJ Jacobs explains the powerful benefits of gratitude — and takes us on his quest to thank everyone who made his morning cup of coffee. (Photo: Ryan Lash / TED)

Lessons from the Trail of Gratitude. Author AJ Jacobs embarked on a quest with a deceptively simple idea at its heart: to personally thank every person who helped make his morning cup of coffee. “This quest took me around the world,” Jacobs says. “I discovered that my coffee would not be possible without hundreds of people I take for granted.” His project was inspired by a desire to overcome the brain’s innate “negative bias” — the psychological tendency to focus on the bad over the good — which is most effectively combated with gratitude. Jacobs ended up thanking everyone from his barista and the inventor of his coffee cup lid to the Colombian farmers who grew the coffee beans and the steelworkers in Indiana who made their pickup truck — and more than a thousand others in between. Along the way, he learned a series of perspective-altering lessons about globalization, the importance of human connection and more, which are detailed in his new TED Book, Thanks a Thousand: A Gratitude Journey. “It allowed me to focus on the hundreds of things that go right every day, as opposed to the three or four that go wrong,” Jacobs says of his project. “And it reminded me of the astounding interconnectedness of our world.”

Worse Than Failure: CodeSOD: An Eventful Career Continues

You may remember Sandra from her rather inglorious start at Initrovent. She didn't intend to continue working for Karl for very long, but she also didn't run out the door screaming. Perhaps she should have, but if she had, we wouldn't have this code.

Initrovent was an event-planning company, and thus needed to manage events, shows, and spaces. They wrote their own exotic suite of software to manage that task.

This code predates their current source control system, and thus it lacks any way to blame the person responsible. Karl, however, was happy to point out that he used to do Sandra's job, and he knew a thing or two about programming. "My fingerprints are on pretty much every line of code," he was proud to say.

if($showType == 'unassigned' || $showType == 'unassigned' || $showType == 'new') { ... }

For a taster, here's one that just leaves me puzzling. Were it a long list, I could more easily see how the same value might appear multiple times. A thirty line conditional would be more of a WTF, but I can at least understand it. There are only three options, two of them are duplicates, and they're right next to each other.

What if you wanted to conditionally enable debugging messages? Try this approach on for size.

foreach ($current_open as $key => $value) {
    if ($value['HostOrganization']['ticket_reference'] == '400220') {
        //debug($value);
    }
}

What a lovely use of magic numbers. I also like the mix of PascalCase and snake_case keys. But see, if there's any unfilled reservation for a ticket reference number of 400220, we'll print out a debugging message… if the debug statement isn't commented out, anyway.

With that in mind, let's think about a real-world problem. For a certain set of events, you don't want to send emails to the client. The planner wants to send those emails manually. Who knows why? It doesn't matter. This would be a trivial task, yes? Simply chuck a flag on the database table, manual_emails, and add a code branch. You could do that, yes, but remember how we controlled the printing of debugging messages before? Here's how they actually did this:

$hackSkipEventIds = array('55084514-0864-46b6-95aa-6748525ee4db');
if (in_array($eventId, $hackSkipEventIds)) {
    // Before we implement #<redacted>, we prefer to skip all roommate
    // notifications in certain events, and just let the planner send
    // manual emails.
    return;
}

Look how extensible this solution is- if you ever need to disable emails for more events, you can just extend this array. There's no need to add a UI or anything!


Planet Debian: Steve Kemp: Another golang port, this time a toy virtual machine.

I don't understand why my toy virtual machine has as much interest as it does. It is a simple project that compiles "assembly language" into a series of bytecodes, and then allows them to be executed.

Since I recently started messing around with interpreters more generally I figured I should revisit it. Initially I rewrote the part that interprets the bytecodes in golang, which was simple, but now I've rewritten the compiler too.

Much like the previous work with interpreters this uses a lexer and an evaluator to handle things properly - in the original implementation the compiler used a series of regular expressions to parse the input files. Oops.

Anyway the end result is that I can compile a source file to bytecodes, execute bytecodes, or do both at once:

I made a couple of minor tweaks in the port, because I wanted extra facilities. Rather than implement an opcode "STRING_LENGTH" I copied the idea of traps - so a program can call-back to the interpreter to run some operations:

int 0x00  -> Set register 0 with the length of the string in register 0.

int 0x01  -> Set register 0 with the contents of reading a string from the user

etc.

This notion of traps should allow complex operations to be implemented easily, in golang. I don't think I have the patience to do too much more, but it stands as a simple example of a "compiler" or an interpreter.
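A minimal sketch of the trap idea (hypothetical, and in C++ rather than the project's Go): the interpreter keeps a table of host-side functions keyed by trap number, and the "int" opcode simply dispatches into it.

#include <functional>
#include <map>
#include <string>
#include <vector>

struct VM {
    std::vector<std::string> registers;              // registers simplified to strings
    std::map<int, std::function<void(VM&)>> traps;   // host-side call-backs
};

int main() {
    VM vm;
    vm.registers.resize(16);
    vm.registers[0] = "hello";

    // Trap 0x00: replace register 0 with the length of the string it holds.
    vm.traps[0x00] = [](VM& v) {
        v.registers[0] = std::to_string(v.registers[0].size());
    };

    // Executing the bytecode instruction "int 0x00" would end up here:
    vm.traps[0x00](vm);   // register 0 now contains "5"
    return 0;
}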

I think this program is the longest I've written. Remember how verbose assembly language is?

Otherwise: Helsinki Pride happened, Oiva learned to say his name (maybe?), and I finished reading all the James Bond novels (which were very different to the films, and have aged badly on the whole).

Planet Debian: Steinar H. Gunderson: Modern OpenGL

New project, new version of OpenGL—4.5 will be my hard minimum this time. Sorry, macOS, you brought this on yourself.

First impressions: Direct state access makes things a bit less soul-sucking. Immutable textures are not really a problem when you design for them to begin with, as opposed to retrofitting them. But you still need ~150 lines of code to compile a shader and render a fullscreen quad to another texture. :-/ VAOs, you are not my friend.

Next time, maybe Vulkan? Except the amount of stuff to get that first quad on screen seems even worse there.

Don Marti: Worse is better, again?

Are there parallels between the rise of Worse Is Better in software and the success of the "uncreative counterrevolution" in advertising? (for more on that second one: John Hegarty: Creativity is receding from marketing and data is to blame) The winning strategy in software is to sacrifice consistency and correctness for simplicity. (probably because of network effects, principal-agent problems, and market failures.) And it seems like advertising has similar trade-offs between

  • Signal

  • Measurability (How well can we measure this project's effect on sales?)

  • Message (Is it persuasive and on brand?)

Just as it's rational for software decision-makers to choose simplicity, it can be rational for marketing decision-makers to choose measurability over signal and message. (This is probably why there is a brand crisis going on—short-term CMOs are better off when they choose brand-unsafe tactics, sacrificing Message.)

As we're now figuring out how to use market-based tools to fix market failures in software, where can we use better market design to fix market failures in advertising? Maybe this is where it actually makes sense to use #blockchain: give people whose decisions can affect #brandEquity some kind of #skinInTheGame?

Against privacy defeatism: why browsers can still stop fingerprinting

How to get away with financial fraud

Google invests $22M in feature phone operating system KaiOS

Inside the investor revolt that’s trying to take down Mark Zuckerberg

Ryan Wallman: Marketers must loosen their grip on the creative process

Open source sustainability

K2’s Media Transparency Report Still Rocks The Ad Industry Two Years After Its Release

Mark Ritson: How ‘influencers’ made my arse a work of art

Ad fraud one of the most profitable criminal enterprises in the world, researcher says

Cover story: Adtech won’t fix ad fraud because it is too lucrative, say specialists

https://hackernoon.com/why-funding-open-source-is-hard-652b7055569d

Sir John Hegarty: Great advertising elevates brands to a part of culture

https://www.canvas8.com/blog/2018/ju/behavioural-science-insights-nudgestock-2018.html …


Planet Debian: Dirk Eddelbuettel: nanotime 0.2.1

A new minor version of the nanotime package for working with nanosecond timestamps just arrived on CRAN.

nanotime uses the RcppCCTZ package for (efficient) high(er) resolution time parsing and formatting up to nanosecond resolution, and the bit64 package for the actual integer64 arithmetic. Initially implemented using the S3 system, it now uses a more rigorous S4-based approach thanks to a rewrite by Leonardo Silvestri.

This release brings three different enhancements / fixes that robustify usage. No new features were added.

Changes in version 0.2.1 (2018-07-01)

  • Added attribute-preserving comparison (Leonardo in #33).

  • Added two integer64 casts in constructors (Dirk in #36).

  • Added two checks for empty arguments (Dirk in #37).

We also have a diff to the previous version thanks to CRANberries. More details and examples are at the nanotime page; code, issue tickets etc at the GitHub repository.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Valerie Aurora: Bryan Cantrill has been accused of verbal abuse by at least seven people

It sounds like Bryan Cantrill is thinking about organizing another computer conference. When he did that in 2016, I wrote a blog post about why I wouldn’t attend, because, based on my experience as Bryan’s former co-worker, I believed that Bryan Cantrill would probably say cruel and humiliating things to people who attended.

I understand that some people still supported Bryan and his conference after they read that post. After all, Bryan is so intelligent and funny and accomplished, and it’s a “he said, she said” situation, and if you can’t take the heat get out of the kitchen, etc. etc.

What’s changed since then? Well, at least six other people spoke up publicly about their own experiences with Bryan, many of which seem worse than mine. Then #metoo happened and we learned how many people a powerful person can abuse before any of their victims speak up, and why they stay quiet: worry about their careers being destroyed, being bankrupted by a lawsuit, or being called a liar and worse. If you’re still supporting Bryan, I invite you to read this story about Jeffrey Tambor verbally abusing Jessica Walter on the set of Arrested Development, and re-examine why you are supporting someone who has been verbally abusive to so many people.

Here are six short quotes from other people speaking about their experiences with Bryan Cantrill:

“Having been a Joyent ‘customer’ and working to porting an application to run on SmartOS was like being a personal punching bag for Bryan.”

“I worked at Joyent from 2010 through 2013. Valerie’s experience comports with my own. This warning is brave and wise.”

“All that you say is true, and if anything, toned down from reality. Bryan is a truly horrible human being.”

“I know for sure Bryan’s behavior prevented or at the very least delayed other developers from reaching their potential in the kernel group. Unfortunately the lack of moral and ethical leadership in Solaris allowed this to go on for far too long.”

“Sun was such a toxic environment for so many people and it is very brave of you to share your experience. After six years in this oppressive environment, my confidence was all but destroyed.”

“Having known Bryan from the days of being a junior engineer…he has always been a narcissistic f_ck that proudly leaves a wake of destruction rising up on the carcasses of his perceived foes (real and imagined). His brilliance comes at too high of a cost.”

This is what six people are willing to say publicly about how Bryan treated them. If you think that isn’t a lot, please take the time to read more about #metoo and consider how Bryan’s position of power would discourage people from coming forward with their stories of verbal abuse. If you do believe that Bryan has abused these people, consider what message you are sending to others by continuing to follow him on social media or otherwise validating his behavior.


If you have been abused by Bryan, I have a request: please do not contact me to tell me your story privately, unless you want help making your story public in some way. I’m exhausted and it doesn’t do any good to tell me—I’m already convinced he’s awful. Here’s what I can say: There are dozens of you, and you have remarkably similar stories.

I’ll be heavily moderating comments on this post and in particular won’t approve anything criticizing victims of abuse for speaking up. If your comment gets stuck in the spam filter, please email me at valerie.aurora@gmail.com and I’ll post it for you.

Planet Debian: Junichi Uekawa: It's been 10 years since I changed Company and Job.

It's been 10 years since I changed company and job. If you ask me now, I think it was a successful move, but not without issues. It's a high-risk move to change company, job and location at the same time; you should change only one of them at a time. I changed job, company and marital status at the same time; that was too high risk.

Planet Debian: Paul Wise: FLOSS Activities June 2018

Changes

Issues

Review

Administration

  • fossjobs: merge pull requests
  • Debian: LDAP support request
  • Debian mentors: fix disk space issue
  • Debian wiki: clean up temp files, whitelist domains, whitelist email addresses, unblacklist IP addresses, disable accounts with bouncing email

Communication

Sponsors

The apt-cacher-ng bugs, leptonlib backport and git-repair feature request were sponsored by my employer. All other work was done on a volunteer basis.


Planet Debian: Elana Hashman: Report on the Debian Bug Squashing Party

Last weekend, six folks (one new contributor and five existing contributors) attended the bug squashing party I hosted in Brooklyn. We got a lot done, and attendees demanded that we hold another one soon!

So when's the next one?

We agreed that we'd like to see the NYC Debian User Group hold two more BSPs in the next year: one in October or November of 2018, and another after the Buster freeze in early 2019, to squash RC bugs. Stay tuned for more details; you may want to join the NYC Debian User Group mailing list.

If you know of an organization that would be willing to sponsor the next NYC BSP (with space, food, and/or dollars), or you're willing to host the next BSP, please reach out.

What did folks work on?

We had a list of bugs we collected on an etherpad, which I have now mirrored to gobby (gobby.debian.org/BSP/2018/06-Brooklyn). Attendees updated the etherpad with their comments and progress. Here's what each of the participants reported.

Elana (me)

  • I filed bugs against two upstream repos (pomegranate, leiningen) to update dependencies, which are breaking the Debian builds due to the libwagon upgrade.
  • I uploaded new versions of clojure and clojure1.8 to fix a bug in how alternatives were managed: despite /usr/share/java/clojure.jar being owned by the libclojure${ver}-java binary package, the alternatives group was being managed by the postinst script in the clojure${ver} package, a holdover from when the library was not split from the CLI. Unfortunately, the upgrade is still not passing piuparts in testing, so while I thoroughly tested local upgrades and they seemed to work okay I'm hoping the failures didn't break anyone on upgrade. I'll be taking a look at further refining the preinst script to address the piuparts failure this week.
  • I fixed the Vcs error on libjava-jdbc-clojure and uploaded new version 0.7.0-2. I also added autopkgtests to the package.
  • I fixed the Vcs warning on clojure-maven-plugin and uploaded new version 1.7.1-2.
  • I helped dkg with setting up and running autopkgtests.

Clint

  • was hateful in #901327 (tigervnc)
  • uploaded a fix for #902279 (youtube-dl) to DELAYED/3-day
  • sent a patch for #902318 (monkeysphere)
  • was less hateful in #899060 (monkeysphere)

Lincoln

Editor's note: Lincoln was a new Debian contributor and made lots of progress! Thanks for joining us—we're thrilled with your contributions 🎉 Here's his report.

  • Got my environment setup to work on packages again \o/
  • Played with 901318 for a while but didn't really make progress because it seems to be a long discussion, so it's bad for a newcomer
  • Was indeed more successful on the python lands. Opened the following PRs
  • Now working on uploading the patches to python-pyscss & python-pytest-benchmark packages removing the dependency on python-pathlib.

dkg

  • worked on debugging/diagnosing enigmail in preparation for making it DFSG-free again (see #901556)
  • got pointers from Lincoln about understanding flow control in asynchronous javascript for debugging the failing autopkgtest suites
  • got pointers from Elana on emulating ci.debian.net's autopkgtest infrastructure so i have a better chance of replicating the failures seen on that platform

Simon

  • Moved python-requests-oauthlib to salsa
  • Updated it to 1.0 (new release), pending a couple final checks.

Geoffrey

  • Worked with Lincoln on both bugs
  • Opened #902323 about removing python-pathlib
  • Working on new pymssql upstream release / restoring it to unstable

By the numbers

All in all, we completed 6 uploads, worked on 8 bugs, filed 3 bugs, submitted 3 patches or pull requests, and closed 2 bugs. Go us! Thanks to everyone for contributing to a very productive effort.

See you all next time.

Planet Debian: Chris Lamb: Free software activities in June 2018

Here is my monthly update covering what I have been doing in the free software world during June 2018 (previous month):


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by promising identical results are always generated from a given source, thus allowing multiple third-parties to come to a consensus on whether a build was compromised.

This month:



Debian

Patches contributed


Debian LTS


This month I worked 18 hours on Debian Long Term Support (LTS) and 7 hours on its sister Extended LTS project. In that time I did the following:

  • "Frontdesk" duties, triaging CVEs, responding to user questions, etc.
  • A fair amount of initial setup and administration to accommodate the introduction of the new "Extended LTS" initiative, as well as the transition of LTS from supporting Debian wheezy to jessie:
    • Fixing various shared scripts, including adding pushing to the remote repository for ELAs [...] and updating hard-coded wheezy references [...]. I also added instructions on exactly how to use the kernel offered by Extended LTS [...].
    • Updating, expanding and testing my personal scripts and workflow to also work for the new "Extended" initiative.
  • Provided some help on updating the Mercurial packages. [...]
  • Began work on updating/syncing the ca-certificates packages in both LTS and Extended LTS.
  • Issued DLA 1395-1 to fix two remote code execution vulnerabilities in php-horde-image, the image processing library for the Horde <https://www.horde.org/> groupware tool. The original fix applied upstream has a regression in that it ignores the "force aspect ratio" option, which I have fixed upstream.
  • Issued ELA 9-1 to correct an arbitrary file write vulnerability in the archiver plugin for the Plexus compiler system — a specially-crafted .zip file could overwrite any file on disk, leading to a privilege escalation.
  • During the overlap time between the support of wheezy and jessie I took the opportunity to address a number of vulnerabilities in all suites for the Redis key-value database, including CVE-2018-12326, CVE-2018-11218 & CVE-2018-11219 (via #902410 & #901495).

Uploads

  • redis:
    • 4.0.9-3 — Make /var/log/redis, etc. owned by the adm group. (#900496)
    • 4.0.10-1 — New upstream security release (#901495). I also uploaded this to stretch-backports and backported the packages to stretch.
    • Proposed 3.2.6-3+deb9u2 for inclusion in the next Debian stable release to address an issue in the systemd .service file. (#901811, #850534 & #880474)
  • lastpass-cli (1.3.1-1) — New upstream release, taking over maintainership and completely overhauling the packaging. (#898940, #858991 & #842875)
  • python-django:
    • 1.11.13-2 — Fix compatibility with Python 3.7. (#902761)
    • 2.1~beta1-1 — New upstream release (to experimental).
  • installation-birthday (11) — Fix an issue in calculating the age of the system by always preferring the oldest mtime we can find. (#901005)
  • bfs (1.2.2-1) — New upstream release.
  • libfiu (0.96-4) — Apply upstream patch to make the build more robust with --as-needed. (#902363)
  • I also sponsored an upload of yaml-mode (0.0.13-1) for Nicholas Steeves.

Debian bugs filed

  • cryptsetup-initramfs: "ERROR: Couldn't find sysfs hierarchy". (#902183)
  • git-buildpackage: Assumes capable UTF-8 locale. (#901586)
  • kitty: Render and ship HTML versions of asciidoc. (#902621)
  • redis: Use the system Lua to avoid an embedded code copy. (#901669)

Planet Debian: Petter Reinholdtsen: The world's only stone power plant?

So far, at least hydro-electric power, coal power, wind power, solar power, and wood power are well known. Until a few days ago, I had never heard of stone power. Then I learned about a quarry in a mountain in Bremanger in Norway, where the Bremanger Quarry company is extracting stone and dumping the stone into a shaft leading to its shipping harbour. The downward movement in this shaft is used to produce electricity. In short, it is using falling rocks instead of falling water to produce electricity, and according to its own statements it is producing more power than it is using, and selling the surplus electricity to the Norwegian power grid. I find the concept truly amazing. Is this the world's only stone power plant?
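For a rough sense of scale (the mass flow and drop height below are assumptions for illustration, not figures from the quarry), the mechanical power available from falling rock follows the same formula as for falling water:

P = \dot{m} \, g \, h \approx 500\ \mathrm{kg/s} \times 9.81\ \mathrm{m/s^2} \times 300\ \mathrm{m} \approx 1.5\ \mathrm{MW}

So a quarry dropping a few hundred kilograms of stone per second down a deep shaft could plausibly generate on the order of a megawatt before conversion losses.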

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet Debian: Dirk Eddelbuettel: RcppArmadillo 0.8.600.0.0


A new RcppArmadillo release 0.8.600.0.0, based on the new Armadillo release 8.600.0 from this week, just arrived on CRAN.

It follows our (and Conrad’s) bi-monthly release schedule. We have made interim and release candidate versions available via the GitHub repo (and as usual thoroughly tested them) but this is the real release cycle. A matching Debian release will be prepared in due course.

Armadillo is a powerful and expressive C++ template library for linear algebra, aiming towards a good balance between speed and ease of use, with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language–and is widely used by (currently) 479 other packages on CRAN.

A high-level summary of changes follows (which omits the two rc releases leading up to 8.600.0). Conrad did his usual impressive load of upstream changes, but we are also grateful for the RcppArmadillo fixes added by Keith O’Hara and Santiago Olivella.

Changes in RcppArmadillo version 0.8.600.0.0 (2018-06-28)

  • Upgraded to Armadillo release 8.600.0 (Sabretooth Rugrat)

    • added hess() for Hessenberg decomposition

    • added .row(), .rows(), .col(), .cols() to subcube views

    • expanded .shed_rows() and .shed_cols() to handle cubes

    • expanded .insert_rows() and .insert_cols() to handle cubes

    • expanded subcube views to allow non-contiguous access to slices

    • improved tuning of sparse matrix element access operators

    • faster handling of tridiagonal matrices by solve()

    • faster multiplication of matrices with differing element types when using OpenMP

Changes in RcppArmadillo version 0.8.500.1.1 (2018-05-17) [GH only]

  • Upgraded to Armadillo release 8.500.1 (Caffeine Raider)

    • bug fix for banded matrices
  • Added slam to Suggests: as it is used in two unit test functions [CRAN requests]

  • The RcppArmadillo.package.skeleton() function now works with example_code=FALSE when pkgKitten is present (Santiago Olivella in #231 fixing #229)

  • The LAPACK tests now cover band matrix solvers (Keith O'Hara in #230).

Courtesy of CRANberries, there is a diffstat report relative to previous release. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Planet Debian: Gunnar Wolf: Want to set up a Tor node in Mexico? Hardware available

Hi friends,

Thanks to the work I have been carrying out with the "Derechos Digitales" NGO, I have received ten Raspberry Pi 3B computers, to help the growth of Tor nodes in Latin America.

The nodes can be intermediate (relays) or exit nodes. Most of us will only be able to connect relays, but if you have the possibility to set up an exit node, that's better than good!

Both can be set up in any non-filtered Internet connection that gives a publicly reachable IP address. I have to note that, although we haven't done a full ISP survey in Mexico (and it would be a very important thing to do — If you are interested in helping with that, please contact me!), I can tell you that connections via Telmex (be it via their home service, Infinitum, or their corporate brand, Uninet) are not good because the ISP filters most of the Tor Directory Authorities.

What do you need to do? Basically, mail me (gwolf@gwolf.org) sending a copy to Ignacio (ignacio@derechosdigitales.org), the person working at this NGO who managed to send me said computers. Oh, of course - And you have to be (physically) in Mexico.

I have ten computers ready to give out to whoever wants some. I am willing and even interested in giving you the needed tech support to do this. Who says "me"?

Cryptogram: Friday Squid Blogging: Fried Squid with Turmeric

Good-looking recipe.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cryptogram: Conservation of Threat

Here's some interesting research about how we perceive threats. Basically, as the environment becomes safer, we manufacture new threats. From an essay about the research:

To study how concepts change when they become less common, we brought volunteers into our laboratory and gave them a simple task -- to look at a series of computer-generated faces and decide which ones seem "threatening." The faces had been carefully designed by researchers to range from very intimidating to very harmless.

As we showed people fewer and fewer threatening faces over time, we found that they expanded their definition of "threatening" to include a wider range of faces. In other words, when they ran out of threatening faces to find, they started calling faces threatening that they used to call harmless. Rather than being a consistent category, what people considered "threats" depended on how many threats they had seen lately.

This has a lot of implications in security systems where humans have to make judgments about threat and risk: TSA agents, police noticing "suspicious" activities, "see something say something" campaigns, and so on.

The academic paper.

Planet Debian: Neil Williams: Automation & Risk

First of two posts reproducing some existing content for a wider audience due to delays in removing viewing restrictions on the originals. The first is a bit long... Those familiar with LAVA may choose to skip forward to Core elements of automation support.

A summary of this document was presented by Steve McIntyre at Linaro Connect 2018 in Hong Kong. A video of that presentation and the slides created from this document are available online: http://connect.linaro.org/resource/hkg18/hkg18-tr10/

Although the content is based on several years of experience with LAVA, the core elements are likely to be transferable to many other validation, CI and QA tasks.

I recognise that this document may be useful to others, so this blog post is under CC BY-SA 3.0: https://creativecommons.org/licenses/by-sa/3.0/legalcode See also https://creativecommons.org/licenses/by-sa/3.0/deed.en

Automation & Risk

Background

Linaro created the LAVA (Linaro Automated Validation Architecture) project in 2010 to automate testing of software using real hardware. Over the seven years of automation in Linaro so far, LAVA has also spread into other labs across the world. Millions of test jobs have been run, across over one hundred different types of devices, ARM, x86 and emulated. Varied primary boot methods have been used alone or in combination, including U-Boot, UEFI, Fastboot, IoT, PXE. The Linaro lab itself has supported over 150 devices, covering more than 40 different device types. Major developments within LAVA include MultiNode and VLAN support. As a result of this data, the LAVA team have identified a series of automated testing failures which can be traced to decisions made during hardware design or firmware development. The hardest part of the development of LAVA has always been integrating new device types, arising from issues with hardware design and firmware implementations. There are a range of issues with automating new hardware and the experience of the LAVA lab and software teams has highlighted areas where decisions at the hardware design stage have delayed deployment of automation or made the task of triage of automation failures much harder than necessary.

This document is a summary of our experience with full background and examples. The aim is to provide background information about why common failures occur, and recommendations on how to design hardware and firmware to reduce problems in the future. We describe some device design features as hard requirements to enable successful automation, and some which are guaranteed to block automation. Specific examples are used, naming particular devices and companies and linking to specific stories. For a generic summary of the data, see Automation and hardware design.

What is LAVA?

LAVA is a continuous integration system for deploying operating systems onto physical and virtual hardware for running tests. Tests can be simple boot testing, bootloader testing and system level testing, although extra hardware may be required for some system tests. Results are tracked over time and data can be exported for further analysis.

LAVA is a collection of participating components in an evolving architecture. LAVA aims to make systematic, automatic and manual quality control more approachable for projects of all sizes.

LAVA is designed for validation during development - testing whether the code that engineers are producing “works”, in whatever sense that means. Depending on context, this could be many things, for example:

  • testing whether changes in the Linux kernel compile and boot
  • testing whether the code produced by gcc is smaller or faster
  • testing whether a kernel scheduler change reduces power consumption for a certain workload etc.

LAVA is good for automated validation. LAVA tests the Linux kernel on a range of supported boards every day. LAVA tests proposed Android changes in Gerrit before they are landed, and does the same for other projects like gcc. Linaro runs a central validation lab in Cambridge, containing racks full of computers supplied by Linaro members and the necessary infrastructure to control them (servers, serial console servers, network switches etc.)

LAVA is good for providing developers with the ability to run customised tests on a variety of different types of hardware, some of which may be difficult to obtain or integrate. Although LAVA has support for emulation (based on QEMU), LAVA is best at providing test support for real hardware devices.

LAVA is principally aimed at testing changes made by developers across multiple hardware platforms to aid portability and encourage multi-platform development. Systems which are already platform independent or which have been optimised for production may not necessarily be able to be tested in LAVA or may provide no overall gain.

What is LAVA not?

LAVA is designed for Continuous Integration not management of a board farm.

LAVA is not a set of tests - it is infrastructure to enable users to run their own tests. LAVA concentrates on providing a range of deployment methods and a range of boot methods. Once the login is complete, the test consists of whatever scripts the test writer chooses to execute in that environment.

LAVA is not a test lab - it is the software that can be used in a test lab to control test devices.

LAVA is not a complete CI system - it is software that can form part of a CI loop. LAVA supports data extraction to make it easier to produce a frontend which is directly relevant to particular groups of developers.

LAVA is not a build farm - other tools need to be used to prepare binaries which can be passed to the device using LAVA.

LAVA is not a production test environment for hardware - LAVA is focused on developers and may require changes to the device or the software to enable automation. These changes are often unsuitable for production units. LAVA also expects that most devices will remain available for repeated testing rather than testing the software with a changing set of hardware.

The history of automated bootloader testing

Many attempts have been made to automate bootloader testing and the rest of this document covers the issues in detail. However, it is useful to cover some of the history in this introduction, particularly as that relates to ideas like SDMux - the SD card multiplexer which should allow automated testing of bootloaders like U-Boot on devices where the bootloader is deployed to an SD card. The problem of SDMux details the requirements to provide access to SD card filesystems to and from the dispatcher and the device. Requirements include: ethernet, no reliance on USB, removable media, cable connections, unique serial numbers, introspection and interrogation, avoiding feature creep, scalable design, power control, maintained software and mounting holes. Despite many offers of hardware, no suitable hardware has been found and testing of U-Boot on SD cards is not currently possible in automation. The identification of the requirements for a supportable SDMux unit is closely related to these device requirements.

Core elements of automation support

Reproducibility

The ability to deploy exactly the same software to the same board(s) and run exactly the same tests many times in a row, getting exactly the same results each time.

For automation to work, all device functions which need to be used in automation must always produce the same results on each device of a specific device type, irrespective of any previous operations on that device, given the same starting hardware configuration.

There is no way to automate a device which behaves unpredictably.

Reliability

The ability to run a wide range of test jobs, stressing different parts of the overall deployment, with a variety of tests and always getting a Complete test job. There must be no infrastructure failures and there should be limited variability in the time taken to run the test jobs to avoid the need for excessive Timeouts.

The same hardware configuration and infrastructure must always behave in precisely the same way. The same commands and operations to the device must always generate the same behaviour.

Scriptability

The device must support deployment of files and booting of the device without any need for a human to monitor or interact with the process. The need to press buttons is undesirable but can be managed in some cases by using relays. However, every extra layer of complexity reduces the overall reliability of the automation process and the need for buttons should be limited or eliminated wherever possible. If a device uses LEDs to indicate the success or failure of operations, such LEDs must only be indicative. The device must support full control of that process using only commands and operations which do not rely on observation.

Scalability

All methods used to automate a device must have minimal footprint in terms of load on the workers, complexity of scripting support and infrastructure requirements. This is a complex area and can trivially impact on both reliability and reproducibility as well as making it much more difficult to debug problems which do arise. Admins must also consider the complexity of combining multiple different devices which each require multiple layers of support.

Remote power control

Devices MUST support automated resets either by the removal of all power supplied to the DUT or a full reboot or other reset which clears all previous state of the DUT.

Every boot must reliably start, without interaction, directly from the first application of power without the limitation of needing to press buttons or requiring other interaction. Relays and other arrangements can be used at the cost of increasing the overall complexity of the solution, so should be avoided wherever possible.

Networking support

Ethernet - all devices using ethernet interfaces in LAVA must have a unique MAC address on each interface. The MAC address must be persistent across reboots. No assumptions should be made about fixed IP addresses, address ranges or pre-defined routes. If more than one interface is available, the boot process must be configurable to always use the same interface every time the device is booted. WiFi is not currently supported as a method of deploying files to devices.
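
To make the persistence and uniqueness requirements concrete, here is a minimal health-check sketch in Python. It assumes it runs on the DUT (for example inside a test shell); the state file path is purely illustrative and is not part of LAVA.

    #!/usr/bin/env python3
    # Sketch of a device health check: verify that every network interface
    # reports the same MAC address as on the previous boot and that no two
    # interfaces share a MAC. The state file location is illustrative only.
    import json
    import os
    import sys

    STATE_FILE = "/var/lib/health/mac-addresses.json"  # hypothetical path

    def current_macs():
        macs = {}
        for iface in sorted(os.listdir("/sys/class/net")):
            if iface == "lo":
                continue
            with open("/sys/class/net/%s/address" % iface) as f:
                macs[iface] = f.read().strip()
        return macs

    def main():
        macs = current_macs()
        # Uniqueness: no two interfaces may share a MAC address.
        if len(set(macs.values())) != len(macs):
            sys.exit("FAIL: duplicate MAC addresses: %s" % macs)
        # Persistence: compare against the values recorded on a previous boot.
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                previous = json.load(f)
            if previous != macs:
                sys.exit("FAIL: MAC addresses changed across reboot: "
                         "%s -> %s" % (previous, macs))
        else:
            os.makedirs(os.path.dirname(STATE_FILE), exist_ok=True)
            with open(STATE_FILE, "w") as f:
                json.dump(macs, f)
        print("OK:", macs)

    if __name__ == "__main__":
        main()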

Serial console support

LAVA expects to automate devices by interacting with the serial port immediately after power is applied to the device. The bootloader must interact with the serial port. If a serial port is not available on the device, suitable additional hardware must be provided before integration can begin. All messages about the boot process must be visible using the serial port and the serial port should remain usable for the duration of all test jobs on the device.
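
As an illustration of what "interacting with the serial port immediately after power is applied" looks like in practice, here is a minimal sketch using the pyserial library. The device path, baud rate and prompt string are assumptions for the example, not LAVA configuration.

    #!/usr/bin/env python3
    # Minimal sketch of driving a device over serial: open the port before
    # power is applied, capture all boot messages, and interrupt the
    # bootloader when its prompt appears. Port, baud rate and prompt string
    # are assumptions for this example.
    import time
    import serial  # pyserial

    PORT = "/dev/ttyUSB0"   # hypothetical serial console device
    BAUD = 115200
    PROMPT = b"Hit any key to stop autoboot"

    def wait_for_prompt(timeout=60):
        log = b""
        with serial.Serial(PORT, BAUD, timeout=1) as console:
            deadline = time.monotonic() + timeout
            while time.monotonic() < deadline:
                log += console.read(256)       # keep everything for triage
                if PROMPT in log:
                    console.write(b"\n")       # interrupt autoboot
                    return log
        raise TimeoutError("bootloader prompt not seen on %s" % PORT)

    if __name__ == "__main__":
        print(wait_for_prompt().decode(errors="replace"))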

Persistence

Devices supporting primary SSH connections have persistent deployments and this has implications, some positive, some negative - depending on your use case.

  • Fixed OS - the operating system (OS) you get is the OS of the device and this must not be changed or upgraded.
  • Package interference - if another user installs a conflicting package, your test can fail.
  • Process interference - another process could restart (or crash) a daemon upon which your test relies, so your test will fail.
  • Contention - another job could obtain a lock on a constrained resource, e.g. dpkg or apt, causing your test to fail.
  • Reusable scripts - scripts and utilities your test leaves behind can be reused by (or can interfere with) subsequent tests.
  • Lack of reproducibility - an artifact from a previous test can make it impossible to rely on the results of a subsequent test, leading to wasted effort with false positives and false negatives.
  • Maintenance - using persistent filesystems in a test action results in the overlay files being left in that filesystem. Depending on the size of the test definition repositories, this could result in an inevitable increase in used storage becoming a problem on the machine hosting the persistent location. Changes made by the test action can also require intermittent maintenance of the persistent location.

Only use persistent deployments when essential and always take great care to avoid interfering with other tests. Users who deliberately or frequently interfere with other tests can have their submit privilege revoked.

The dangers of simplistic testing

Connect and test

Seems simple enough - it doesn’t seem as if you need to deploy a new kernel or rootfs every time, no need to power off or reboot between tests. Just connect and run stuff. After all, you already have a way to manually deploy stuff to the board. The biggest problem with this method is Persistence as above - LAVA keeps the LAVA components separated from each other but tests frequently need to install support which will persist after the test, write files which can interfere with other tests or break the manual deployment in unexpected ways when things go wrong. The second problem within this fallacy is simply the power drain of leaving the devices constantly powered on. In manual testing, you would apply power at the start of your day and power off at the end. In automated testing, these devices would be on all day, every day, because test jobs could be submitted at any time.

ssh instead of serial

This is an over-simplification which will lead to new and unusual bugs and is only a short step on from connect & test with many of the same problems. A core strength of LAVA is demonstrating differences between types of devices by controlling the boot process. By the time the system has booted to the point where sshd is running, many of those differences have been swallowed up in the boot process.

Test everything at the same time

Issues here include:

Breaking the basic scientific method of testing one thing at a time

The single system contains multiple components, like the kernel and the rootfs and the bootloader. Each one of those components can fail in ways which can only be picked up when some later component produces a completely misleading and unexpected error message.

Timing

Simply deploying the entire system for every single test job wastes inordinate amounts of time when you do finally identify that the problem is a configuration setting in the bootloader or a missing module for the kernel.

Reproducibility

The larger the deployment, the more complex the boot and the tests become. Many LAVA devices are prototypes and development boards, not production servers. These devices will fail in unpredictable places from time to time. Testing a kernel build multiple times is much more likely to give you consistent averages for duration, performance and other measurements than if the kernel is only tested as part of a complete system.

Automated recovery - deploying an entire system can go wrong; whether through an interrupted copy or a broken build, the consequences can mean that the device simply does not boot any longer.

Every component involved in your test must allow for automated recovery

This means that the boot process must support being interrupted before that component starts to load. With a suitably configured bootloader, it is straightforward to test kernel builds with fully automated recovery on most devices. Deploying a new build of the bootloader itself is much more problematic. Few devices have the necessary management interfaces with support for secondary console access or additional network interfaces which respond very early in boot. It is possible to chainload some bootloaders, allowing the known working bootloader to be preserved.

I already have builds

This may be true, however, automation puts extra demands on what those builds are capable of supporting. When testing manually, there are any number of times when a human will decide that something needs to be entered, tweaked, modified, removed or ignored which the automated system needs to be able to understand. Examples include /etc/resolv.conf and customised tools.

Automation can do everything

It is not possible to automate every test method. Some kinds of tests and some kinds of devices involve critical elements that do not work well with automation. These are not problems in LAVA, these are design limitations of the kind of test and the device itself. Your preferred test plan may be infeasible to automate and some level of compromise will be required.

Users are all admins too

This will come back to bite! However, there are other ways in which this can occur even after administrators have restricted users to limited access. Test jobs (including hacking sessions) have full access to the device as root. Users, therefore, can modify the device during a test job and it depends on the device hardware support and device configuration as to what may happen next. Some devices store bootloader configuration in files which are accessible from userspace after boot. Some devices lack a management interface that can intervene when a device fails to boot. Put these two together and admins can face a situation where a test job has corrupted, overridden or modified the bootloader configuration such that the device no longer boots without intervention. Some operating systems require a debug setting to be enabled before the device will be visible to the automation (e.g. the Android Debug Bridge). It is trivial for a user to mistakenly deploy a default or production system which does not have this modification.

LAVA and CI

LAVA is aimed at kernel and system development and testing across a wide variety of hardware platforms. By the time the test has got to the level of automating a GUI, there have been multiple layers of abstraction between the hardware, the kernel, the core system and the components being tested. Following the core principle of testing one element at a time, this means that such tests quickly become platform-independent. This reduces the usefulness of the LAVA systems, moving the test into scope for other CI systems which consider all devices as equivalent slaves. The overhead of LAVA can become an unnecessary burden.

CI needs a timely response - it takes time for a LAVA device to be re-deployed with a system which has already been tested. In order to test a component of the system which is independent of the hardware, kernel or core system, a lot of time has been consumed before the “test” itself actually begins. LAVA can support testing pre-deployed systems but this severely restricts the usefulness of such devices for actual kernel or hardware testing.

Automation may need to rely on insecure access. Production builds (hardware and software) take steps to prevent systems being released with known login identities or keys, backdoors and other security holes. Automation relies on at least one of these access methods being exposed, typically a way to access the device as the root or admin user. User identities for login must be declared in the submission and be the same across multiple devices of the same type. These access methods must also be exposed consistently and without requiring any manual intervention or confirmation. For example, mobile devices must be deployed with systems which enable debug access which all production builds will need to block.

Automation relies on remote power control - battery powered devices can be a significant problem in this area. On the one hand, testing can be expected to involve tests of battery performance, low power conditions and recharge support. However, testing will also involve broken builds and failed deployments where the only recourse is to hard reset the device by killing power. With a battery in the loop, this becomes very complex, sometimes involving complex electrical bodges to the hardware to allow the battery to be switched out of the circuit. These changes can themselves change the performance of the battery control circuitry. For example, some devices fail to maintain charge in the battery when held in particular states artificially, so the battery gradually discharges despite being connected to mains power. Devices which have no battery can still be a challenge as some are able to draw power over the serial circuitry or USB attachments, again interfering with the ability of the automation to recover the device from being “bricked”, i.e. unresponsive to the control methods used by the automation and requiring manual admin intervention.

Automation relies on unique identification - all devices in an automation lab must be uniquely identifiable at all times, in all modes and all active power states. Too many components and devices within labs fail to allow for the problems of scale. Details like serial numbers, MAC addresses, IP addresses and bootloader timeouts must be configurable and persistent once configured.

LAVA is not a complete CI solution - even including the hardware support available from some LAVA instances, there are a lot more tools required outside of LAVA before a CI loop will actually work. The triggers from your development workflow to the build farm (which is not LAVA), the submission to LAVA from that build farm are completely separate and outside the scope of this documentation. LAVA can help with the extraction of the results into information for the developers but LAVA output is generic and most teams will benefit from some “frontend” which extracts the data from LAVA and generates relevant output for particular development teams.

Features of CI

Frequency

How often is the loop to be triggered?

Set up some test builds and test jobs and run through a variety of use cases to get an idea of how long it takes to get from the commit hook to the results being available to what will become your frontend.

Investigate where the hardware involved in each stage can be improved and analyse what kind of hardware upgrades may be useful.

Reassess the entire loop design and look at splitting the testing if the loop cannot be optimised to the time limits required by the team. The loop exists to serve the team but the expectations of the team may need to be managed compared to the cost of hardware upgrades or finite time limits.

Scale

How many branches, variants, configurations and tests are actually needed?

Scale has a direct impact on the affordability and feasibility of the final loop and frontend. Ensure that the build infrastructure can handle the total number of variants, not just at build time but for storage. Developers will need access to the files which demonstrate a particular bug or regression.

Scale also provides the benefit of being able to ignore anomalies.

Identify how many test devices, LAVA instances and Jenkins slaves are needed. (As a hint, start small and design the frontend so that more can be added later.)

Interface

The development of a custom interface is not a small task

Capturing the requirements for the interface may involve lengthy discussions across the development team. Where there are irreconcilable differences, a second frontend may become necessary, potentially pulling the same data and presenting it in a radically different manner.

Include discussions on how or whether to push notifications to the development team. Take time to consider the frequency of notification messages and how to limit the content to only the essential data.

Bisect support can flow naturally from the design of the loop if the loop is carefully designed. Bisect requires that a simple boolean test can be generated, built and executed across a set of commits. If the frontend implements only a single test (for example, does the kernel boot?) then it can be easy to identify how to provide bisect support. Tests which produce hundreds of results need to be slimmed down to a single pass/fail criterion for the bisect to work.
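
As a sketch of that slimming-down step, the following Python wrapper (with an invented results-file format) reduces a set of results to the single exit code that git bisect run expects:

    #!/usr/bin/env python3
    # Sketch of the "slim it down to one boolean" step for git bisect run:
    # read a set of test results (the file format is invented for this
    # example) and reduce them to a single exit code. git bisect run treats
    # exit 0 as "good", 1-124 as "bad" and 125 as "skip this revision".
    import json
    import sys

    def main(path):
        try:
            with open(path) as f:
                results = json.load(f)  # e.g. {"boot": "pass", "kselftest": "fail"}
        except (OSError, ValueError):
            sys.exit(125)   # infrastructure problem: skip, do not blame the commit
        # The bisect criterion must be a single question, e.g. "did it boot?"
        sys.exit(0 if results.get("boot") == "pass" else 1)

    if __name__ == "__main__":
        main(sys.argv[1] if len(sys.argv) > 1 else "results.json")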

Results

This may take the longest of all elements of the final loop

Just what results do the developers actually want and can those results be delivered? There may be requirements to aggregate results across many LAVA instances, with comparisons based on metadata from the original build as well as the LAVA test.

What level of detail is relevant?

Different results for different members of the team or different teams?

Is the data to be summarised and if so, how?

Resourcing

A frontend has the potential to become complex and need long term maintenance and development

Device requirements

At the hardware design stage, there are considerations for the final software relating to how the final hardware is to be tested.

Uniqueness

All units of all devices must uniquely identify to the host machine as distinct from all other devices which may be connected at the same time. This particularly covers serial connections but also any storage devices which are exported, network devices and any other method of connectivity.

Example - the WaRP7 integration has been delayed because the USB mass storage does not export a filesystem with a unique identifier, so when two devices are connected, there is no way to distinguish which filesystem relates to which device.

All unique identifiers must be isolated from the software to be deployed onto the device. The automation framework will rely on these identifiers to distinguish one device from up to a dozen identical devices on the same machine. There must be no method of updating or modifying these identifiers using normal deployment / flashing tools. It must not be possible for test software to corrupt the identifiers which are fundamental to how the device is identified amongst the others on the same machine.

All unique identifiers must be stable across multiple reboots and test jobs. Randomly generated identifiers are never suitable.

If the device uses a single FTDI chip which offers a single UART device, then the unique serial number of that UART will typically be a permanent part of the chip. However, a similar FTDI chip which provides two or more UARTs over the same cable would not have serial numbers programmed into the chip but would require a separate piece of flash or other storage into which those serial numbers can be programmed. If that storage is not designed into the hardware, the device will not be capable of providing the required uniqueness.

Example - the WaRP7 exports two UARTs over a single cable but fails to give unique identifiers to either connection, so connecting a second device disconnects the first device when the new tty device replaces the existing one.
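
A quick way to see this class of problem on a worker is to list the serial numbers that connected USB devices actually expose. The following sketch reads sysfs directly; it is illustrative only and not part of any LAVA tooling.

    #!/usr/bin/env python3
    # Sketch: list the serial numbers that connected USB devices expose and
    # flag any that are missing or duplicated - the situation that breaks
    # automation when several identical devices share one worker.
    import glob
    import os
    from collections import Counter

    def usb_devices():
        found = []
        for dev in glob.glob("/sys/bus/usb/devices/*"):
            product_path = os.path.join(dev, "product")
            serial_path = os.path.join(dev, "serial")
            if not os.path.exists(product_path):
                continue                      # skip interfaces, keep devices
            with open(product_path) as f:
                product = f.read().strip()
            serial = ""
            if os.path.exists(serial_path):
                with open(serial_path) as f:
                    serial = f.read().strip()
            found.append((product, serial))
        return found

    if __name__ == "__main__":
        devices = usb_devices()
        counts = Counter(serial for _, serial in devices if serial)
        for product, serial in devices:
            if not serial:
                print("NO SERIAL : %s" % product)
            elif counts[serial] > 1:
                print("DUPLICATE : %s (%s)" % (product, serial))
            else:
                print("ok        : %s (%s)" % (product, serial))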

If the device uses one or more physical ethernet connector(s) then the MAC address for each interface must not be generated randomly at boot. Each MAC address needs to be:

  • persistent - each reboot must always use the same MAC address for each interface.
  • unique - every device of this type must use a unique MAC address for each interface.

If the device uses fastboot, then the fastboot serial number must be unique so that the device can be uniquely identified and added to the correct container. Additionally, the fastboot serial number must not be modifiable except by the admins.

Example - the initial HiKey 960 integration was delayed because the firmware changed the fastboot serial number to a random value every time the device was rebooted.

Scale

Automation requires more than one device to be deployed - the current minimum is five devices. One device is permanently assigned to the staging environment to ensure that future code changes retain the correct support. In the early stages, this device will be assigned to one of the developers to integrate the device into LAVA. The devices will be deployed onto machines which have many other devices already running test jobs. The new device must not interfere with those devices and this makes some of the device requirements stricter than may be expected.

  • The aim of automation is to create a homogeneous test platform using heterogeneous devices and scalable infrastructure.

  • Do not complicate things.

  • Avoid extra customised hardware

    Relays, hardware modifications and mezzanine boards all increase complexity

    Examples - X15 needed two relay connections, the 96boards initially needed a mezzanine board where the design was rushed, causing months of serial disconnection issues.

  • More complexity raises failure risk nonlinearly

    Example - The lack of onboard serial meant that the 96boards devices could not be tested in isolation from the problematic mezzanine board. Numerous 96boards devices were deemed to be broken when the real fault lay with intermittent failures in the mezzanine. Removing and reconnecting a mezzanine had a high risk of damaging the mezzanine or the device. Once 96boards devices moved to direct connection of FTDI cables into the connector formerly used by the mezzanine, serial disconnection problems disappeared. The more custom hardware has to be designed / connected to a device to support automation, the more difficult it is to debug issues within that infrastructure.

  • Avoid unreliable protocols and connections

    Example - WiFi is not a reliable deployment method, especially inside a large lab with lots of competing signals and devices.

  • This document is not demanding enterprise or server grade support in devices.

    However, automation cannot scale with unreliable components.

    Example - HiKey 6220 and the serial mezzanine board caused massively complex problems when scaled up in LKFT.

  • Server support typically includes automation requirements as a subset:

    RAS, performance, efficiency, scalability, reliability, connectivity and uniqueness

  • Automation racks have similar requirements to data centres.

  • Things need to work reliably at scale

Scale issues also affect the infrastructure which supports the devices as well as the required reliability of the instance as a whole. It can be difficult to scale up from initial development to automation at scale. Numerous tools and utilities prove to be uncooperative, unreliable or poorly isolated from other processes. One result can be that the requirements of automation look more like the expectations of server-type hardware than of mobile hardware. The reality at scale is that server-type hardware has already had fixes implemented for scalability issues whereas many mobile devices only get tested as standalone units.

Connectivity and deployment methods

  • All test software is presumed broken until proven otherwise
  • All infrastructure and device integration support must be proven to be stable before tests can be reliable
  • All devices must provide at least one method of replacing the current software with the test software, at a level lower than you're testing.

The simplest method to automate is TFTP over physical ethernet, e.g. U-Boot or UEFI PXE. This also puts the least load on the device and automation hardware when delivering large images.

Manually writing software to SD is not suitable for automation. This tends to rule out many proposed methods for testing modified builds or configurations of firmware in automation.

See https://linux.codehelp.co.uk/the-problem-of-sd-mux.html for more information on how the requirements of automation affect the hardware design requirements to provide access to SD card filesystems to and from the dispatcher and the device.

Some deployment methods require tools which must be constrained within an LXC. These include but are not limited to:

  • fastboot - due to a common need to have different versions installed for different hardware devices

    Example - Every fastboot device suffers from this problem - any running fastboot process will inspect the entire list of USB devices and attempt to connect to each one, locking out any other fastboot process which may be running at the time, so that the other process sees no devices at all.

  • IoT deployment - some deployment tools require patches for specific devices or use tools which are too complex for use on the dispatcher.

    Example - the TI CC3220 IoT device needs a patched build of OpenOCD, the WaRP7 needs a custom flashing tool compiled from a github repository.

Wherever possible, existing deployment methods and common tools are strongly encouraged. New tools are not likely to be as reliable as the existing tools.

Deployments must not make permanent changes to the boot sequence or configuration.

Testing of OS installers may require modifying the installer to not install an updated bootloader or modify bootloader configuration. The automation needs to control whether the next reboot boots the newly deployed system or starts the next test job; for example, when a test job has been cancelled, the device needs to be immediately ready to run a different test job.

Interfaces

Automation requires driving the device over serial instead of via a touchscreen or other human interface device. This changes the way that the test is executed and can require the use of specialised software on the device to translate text based commands into graphical inputs.

It is possible to test video output in automation but it is not currently possible to drive automation through video input. This includes BIOS-type firmware interaction. UEFI can be used to automatically execute a bootloader like Grub which does support automation over serial. UEFI implementations which use graphical menus cannot be supported interactively.

Reliability

The objective is to have automation support which runs test jobs reliably. Reproducible failures are easy to fix but intermittent faults easily consume months of engineering time and need to be designed out wherever possible. Reliable testing means only 3 or 4 test job failures per week due to hardware or infrastructure bugs across an entire test lab (or instance). This can involve thousands of test jobs across multiple devices. Some instances may have dozens of identical devices but they still must not exceed the same failure rate.
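
As a rough illustration of what that target implies (the weekly job count below is an assumed figure, not a LAVA statistic):

    # Back-of-envelope only; the weekly job count is assumed, not measured.
    failures_per_week = 4
    jobs_per_week = 5000
    rate = failures_per_week / jobs_per_week
    print("tolerated infrastructure failure rate: %.2f%%" % (rate * 100))        # 0.08%
    print("required job reliability:              %.2f%%" % ((1 - rate) * 100))  # 99.92%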

All devices need to reach the minimum standard of reliability, or they are not fit for automation. Some of these criteria might seem rigid, but they are not exclusive to servers or enterprise devices. To be useful, mobile and IoT devices need to meet the same standards, even though the software involved and the deployment methods might be different. The reason is that the Continuous Integration strategy remains the same for all devices. The problem is the same, regardless of underlying considerations.

A developer makes a change; that change triggers a build; that build triggers a test; that test reports back to the developer whether that change worked or had unexpected side effects.

  • False positives and false negatives are expensive in terms of wasted engineering time.
  • False positives can arise when not enough of the software is fully tested, or if the testing is not rigorous enough to spot all problems.
  • False negatives arise when the test itself is unreliable, either because of the test software or the test hardware.

This becomes more noticeable when considering automated bisections which are very powerful in tracking the causes of potential bugs before the product gets released. Every test job must give a reliable result or the bisection will not reliably identify the correct change.

Automation and Risk

Linaro kernel functional test framework (LKFT) https://lkft.validation.linaro.org/

We have seen with LKFT that complexity has a non-linear relationship with the reliability of any automation process. This section aims to set out some guidelines and recommendations on just what is acceptable in the tools needed to automate testing on a device. These guidelines are based on our joint lab and software team experiences with a wide variety of hardware and software.

Adding or modifying any tool has a risk of automation failure

Risk increases non-linearly with complexity. Some of this risk can be mitigated by testing the modified code and the complete system.

Dependencies installed count as code in terms of the risks of automation failure

This is a key lesson learnt from our experiences with LAVA V1. We added a remote worker method, which was necessary at the time to improve scalability. But it massively increased the risk of automation failure simply due to the extra complexity that came with the chosen design. These failures did not just show up in the test jobs which actively used the extra features and tools; they caused problems for all jobs running on the system.

The ability in LAVA V2 to use containers for isolation is a key feature

For the majority of use cases, the small extension of the runtime of the test to set up and use a container is negligible. The extra reliability is more than worth the extra cost.

Persistent containers are themselves a risk to automation

Just as with any persistent change to the system.

Pre-installing dependencies in a persistent container does not necessarily lower the overall risk of failure. It merely substitutes one element of risk for another.

All code changes need to be tested

In unit tests and in functional tests. There is a dividing line where if something is installed as a dependency of LAVA, then when that something goes wrong, LAVA engineers will be pressured into fixing the code of that dependency whether or not we have any particular experience of that language, codebase or use case. Moving that code into a container moves that burden but also makes triage of that problem much easier by allowing debug builds / options to be substituted easily.

Complexity also increases the difficulty of debugging, again in a nonlinear fashion

A LAVA dependency needs a higher bar in terms of ease of triage.

Complexity cannot be easily measured

Although there are factors which contribute.

Monoliths

Large programs which appear as a single monolith are harder to debug than the UNIX model of one utility joined with other utilities to perform a wider task. (This applies to LAVA itself as much as any one dependency - again, a lesson from V1.)

Feature creep

Continually adding features beyond the original scope makes complex programs worse. A smaller codebase will tend to be simpler to triage than a large codebase, even if that codebase is not monolithic.

Targeted utilities are less risky than large environments

A program which supports protocol after protocol after protocol will be more difficult to maintain than 3 separate programs for each protocol. This only gets worse when the use case for that program only requires the use of one of the many protocols supported by the program. The fact that the other protocols are supported increases the complexity of the program beyond what the use case actually merits.

Metrics in this area are impossible

The risks are nonlinear, the failures are typically intermittent. Even obtaining or applying metrics takes up huge amounts of engineering time.

Mismatches in expectations

The use case of automation rarely matches up with the more widely tested use case of the upstream developers. We aren't testing the code flows typically tested by the upstream developers, so we find different bugs, raising the level of risk. Generally, the simpler it is to deploy a device in automation, the closer the test flow will be to the developer flow.

Most programs are written for the single developer model

Some very widely used programs are written to scale but this is difficult to determine without experience of trying to run it at scale.

Some programs do require special consideration

QEMU would fail most of these guidelines above, so there are mitigating factors:

  • Programs which can be easily restricted to well understood use cases lower the risk of failure. Not all use cases of the same program need to be covered.
  • Programs which have excellent community and especially in-house support also lower the risk of failure. (Having QEMU experts in Linaro is a massive boost for having QEMU as a dispatcher dependency.)

Unfamiliar languages increase the difficulty of triage

This may affect dependencies in unexpected ways. A program which has lots of bindings into a range of other languages becomes entangled in transitions and bugs in those other languages. This commonly delays the availability of the latest version which may have a critical fix for one use case but which fails to function at all in what may seem to be an unrelated manner.

The dependency chain of the program itself increases the risk of failure in precisely the same manner as the program

In terms of maintenance, this can include the build dependencies of the program as those affect delivery / availability of LAVA in distributions like Debian.

Adding code to only one dispatcher amongst many increases the risk of failure on the instance as a whole

By having an untested element which is at variance to the rest of the system.

Conditional dependencies increase the risk

Optional components can be supported but only increase the testing burden by extending the matrix of installations.

Presence of the code in Debian main can reduce the risk of failure

This does not outweigh other considerations - there are plenty of packages in Debian (some complex, some not) which would be an unacceptable risk as a dependency of the dispatcher, fastboot for one. A small python utility from github can be a substantially lower risk than a larger program from Debian which has unused functionality.

Sometimes, "complex" simply means "buggy" or "badly designed"

fastboot is not actually a complex piece of code but we have learnt that it does not currently scale. This is a result of the disparity between the development model and the automation use case. Disparities like that actually equate to complexity, in terms of triage and maintenance. If fastboot were more complex at the codebase level, it might actually be a lower risk than it is currently.

Linaro as a whole does have a clear objective of harmonising the ecosystem

Adding yet another variant of existing support is at odds with the overall objective of the company. Many of the tools required in automation have no direct effect on the distinguishing factors for consumers. Adding another one "just because" is not a good reason to increase the risk of automation failure. Just as with standards.

Having the code on the dispatcher impedes development of that code

Bug fixes will take longer to be applied because the fix needs to go through a distribution or other packaging process managed by the lab admins. Applying a targeted fix inside an LXC is useful for proving that the fix works.

Not all programs can work in an LXC

LAVA also provides ways to test using those programs by deploying the code onto a test device. e.g. the V2 support for fastmodels involves only deploying the fastmodel inside a LAVA Test Shell on a test device, e.g. x86 or mustang or Juno.

Speed of running a test job in LAVA is important for CI

The goal of speed must give way to the requirement for reliability of automation

Resubmitting a test job due to a reliability failure is more harmful to the CI process than letting tests take longer to execute without such failures. Test jobs which run quickly are easier to parallelize by adding more test hardware.

Modifying software on the device

Not all parts of the software stack can be replaced automatically, typically the firmware and/or bootloader will need to be considered carefully. The boot sequence will have important effects on what kind of testing can be done automatically. Automation relies on being able to predict the behaviour of the device, interrupt that default behaviour and then execute the test. For most devices, everything which executes on the device prior to the first point at which the boot sequence can be interrupted can be considered as part of the primary boot software. None of these elements can be safely replaced or modified in automation.

The objective is to deploy the device such that as much of the software stack can be replaced as possible whilst preserving the predictable behaviour of all devices of this type so that the next test job always gets a working, clean device in a known state.

Primary boot software

For many devices, this is the bootloader, e.g. U-Boot, UEFI or fastboot.

Some devices include support for a Baseboard management controller or BMC which allows the bootloader and other firmware to be updated even if the device is bricked. The BMC software itself must then be considered as the primary boot software; it cannot be safely replaced.

All testing of the primary boot software will need to be done by developers using local devices. SDMux was an idea which only fitted one specific set of hardware; the problem of testing the primary boot software is a hydra. Adding customised hardware to try to sidestep the primary boot software always increases the complexity and failure rates of the devices.

It is possible to divide the pool of devices into some which only ever use known versions of the primary boot software controlled by admins and other devices which support modifying the primary boot software. However, this causes extra work when processing the results, submitting the test jobs and administering the devices.

A secondary problem here is that it is increasingly common for the methods of updating this software to be esoteric, hacky, restricted and even proprietary.

  • Click-through licences to obtain the tools

  • Greedy tools which hog everything in /dev/bus/usb

  • NIH tools which are almost the same as existing tools but add vendor-specific "functionality"

  • GUI tools

  • Changing jumpers or DIP switches,

    Often in inaccessible locations which require removal of other ancillary hardware

  • Random, untrusted, compiled vendor software running as root

  • The need to press and hold buttons and watch for changes in LED status.

We've seen all of these - in various combinations - just in 2017, as methods of getting devices into a mode where the primary boot software can be updated.

Copyright 2018 Neil Williams linux@codehelp.co.uk

Available under CC BY-SA 3.0: https://creativecommons.org/licenses/by-sa/3.0/legalcode

Worse Than FailureError'd: Testing English in Production

Philip G. writes, "I found this gem when I was on the 'Windows USB/DVD Download Tool' page (yes, I know Rufus is better) and I decided to increment the number in the URL."

 

"Using a snowman emoji as a delimiter...yeah, I guess you could do that," writes George.

 

Seb wrote, "These signup incentives are just a little too variable for my tastes..."

 

"Wow. Vodafone UK really isn't selling the battery life of the Samsung Galaxy J3...or maybe they're just being honest?" Steve M. writes.

 

"Nice to see the Acer website here in South Africa being up front about their attempts at upselling," wrote Gabriel S.

 

"Thank you $wargaming_company_title$ for your friendly notice, I'll spend my $wot_gold_amount$ in $wot_gold_suggestion$.", Tassu writes.

 


Google AdsenseAdSense now understands Telugu

Today, we’re excited to announce the addition of Telugu, a language spoken by over 70 million people in India and many other countries around the world, to the family of AdSense supported languages. With this launch, publishers can now monetize their Telugu content and advertisers can connect to a Telugu speaking audience with relevant ads.

To start monetizing your Telugu content website with Google AdSense:

  • Check the AdSense program policies and make sure your website is compliant.
  • Sign up for an AdSense account.
  • Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign up now.


Posted by:
The AdSense Internationalization Team

Planet Debianbisco: Fourth GSoC Report

As announced in the last report, I started looking into SSO solutions and evaluated and tested them. At the beginning my focus was on SAML integration, but I soon realized that OAuth2 would be more important.

I started with installing Lemonldap-NG. LL-NG is a WebSSO solution written in Perl that uses ModPerl or FastCGI for delivering web content. There is a Debian package in stable, so the installation was no problem at all. The configuration was a bit harder, as LL-NG has a complex architecture with different vhosts. But after some fiddling I managed to connect the installation to our test LDAP instance and was able to authenticate against the LL-NG portal. Then I started to research how to integrate an OAuth2 client. For the tests I had, on the one hand, a GitLab installation that I tried to connect to the OAuth2 providers using the omniauth-oauth2-generic strategy. To have a bit more fine grained control over the OAuth2 client configuration I also used the Python requests-oauthlib module and modified the web app example from their documentation to my needs. After some fiddling and a bit of back and forth on the lemonldap-ng mailing list I managed to get both test clients to authenticate against LL-NG.
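
For anyone who wants to reproduce that second test client, the sketch below follows the standard requests-oauthlib web application flow; the client credentials and endpoint URLs are placeholders and depend entirely on how the OAuth2/OpenID Connect provider (here, LL-NG) is configured.

    # Sketch of the OAuth2 "web application flow" with requests-oauthlib.
    # All client credentials and endpoint URLs below are placeholders.
    from requests_oauthlib import OAuth2Session

    client_id = "test-client"
    client_secret = "secret"
    redirect_uri = "http://localhost:8080/callback"
    authorization_base_url = "https://auth.example.org/oauth2/authorize"
    token_url = "https://auth.example.org/oauth2/token"

    # Step 1: send the user to the provider's authorization page.
    oauth = OAuth2Session(client_id, redirect_uri=redirect_uri, scope=["openid"])
    authorization_url, state = oauth.authorization_url(authorization_base_url)
    print("Visit this URL and log in:", authorization_url)

    # Step 2: the provider redirects back to redirect_uri with a code;
    # paste that full callback URL here to exchange the code for a token.
    redirect_response = input("Paste the full callback URL: ")
    token = oauth.fetch_token(token_url, client_secret=client_secret,
                              authorization_response=redirect_response)
    print("Got an access token of length", len(token.get("access_token", "")))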

Lemonldap-NG Screenshot

The second solution I tested was Keycloak, an identity and access management solution written in Java by Red Hat. There is no Debian package, but nonetheless it was very easy to get it running. It is enough to install jre-default from the package repositories and then run the standalone script from the extracted Keycloak folder. Because Keycloak only listens on localhost and I didn’t want to get into configuring the Java webserver stuff, I installed nginx and configured it as a proxy. In Keycloak too, the first step was to configure the LDAP backend. When I was able to successfully log in using my LDAP credentials, I looked into configuring an OAuth2 client, which wasn’t that hard either.

Keycloak Screenshot

The third solution I looked into was Glewlwyd, written by babelouest. There is a Debian package in buster, so I added the buster sources, set up apt pinning and installed the needed packages. Glewlwyd is a system service that listens on localhost:4593, so I also used nginx in this case. The configuration for the LDAP backend is done in the configuration file, which on Debian is /etc/glewlwyd/glewlwyd-debian.conf. Glewlwyd provides a webinterface for managing users and clients and it is possible to store all the values in LDAP.

Glewlwyd Screenshot

The next steps will be to test the last candidate, which is ipsilon, and also to test all the solutions for some important features, like multiple backends and exporting of configurable attributes. Last but not least I want to create a table to have an overview of all the features and drawbacks of the solutions. All the evaluations are public in a salsa repository.

I also carried on doing some work on nacho, though most of the issues that have to be fixed are rather small. I regularly stumble upon texts about Python or Django, like for example the Django NewbieMistakes, and try to read all of them and use that to improve my work.

Rondam RamblingsI have no words

So I'll let Lili Loofbourow speak for me.

Planet DebianLaura Arjona Reina: Debian and free software personal misc news

Many of them probably are worth a blog post each, but it seems I cannot find the time or motivation to craft nice blog posts for now, so here’s a quick update of some of the things that happened and happen in my digital life:

  • Debian Jessie became LTS and I still haven’t upgraded my home server to stable. Well, I could tell myself that now I have 2 more years to try to find the time (thanks LTS team!) and that the machine just works (and that’s probably the reason for not finding the motivation to upgrade it or to put time into it (thanks Debian and the software projects of the services I run there!)), but I have to find a way to give some love to my home server during this summer, otherwise I probably won’t be able to do it until next summer.

 

  • Quitter.se has been down for several weeks, and I’m afraid it probably won’t come back. This means my personal account larjona@quitter.se in GNU Social is not working, and the Debian one (debian@quitter.se) is not working either. I would like to find another good instance on which to create both accounts (I would like to selfhost but it’s not realistic, e.g. see the above point). I am considering both the GNU Social and Mastodon networks, but I still need to do some research on uptimes, number of users, the workforce behind the instances, who’s there, etc. Meanwhile, my few social network updates are posted on larjona@identi.ca as always, and for Debian news you can follow https://micronews.debian.org (it provides an RSS feed), or debian@identi.ca. When I resurrect @debian in the fediverse I’ll publicise it and I hope followers find us again.

 

  • We recently migrated the Debian website from CVS to git: https://salsa.debian.org/webmaster-team/webwml/ I am very happy and thankful to all the people who helped to make it possible. I think that most of the contributors adapted well to the changes (also because keeping the existing workflows was a priority), but if you feel lost or want to comment on anything, just tell us. We don’t want to lose anybody, and we’re happy to welcome and help anybody who wants to get involved.

 

  • Alioth’s shutdown and the Debian website migration triggered a lot of reviews of the website content (updating links and paragraphs, updating translations…) and scripts. Please be patient and help if you can (e.g. contact your language team, or have a look at the list of bugs: tagged or the bulk list). I will try to do remote “DebCamp18” work and attack some of them, but I’m also considering organising or attending a BSP in September/October. We’ll see.

 

  • In the Spanish translation team, I am very happy that we have several regular contributors, translating and/or reviewing. In the last months I did less translation work than I would like, but I try not to lose pace and I hope to put more time into translations and reviews during this summer, at least on the website and in package descriptions.

 

  • One more year, I’m not coming to DebConf. This year my schedule/situation was clear long ago, so it’s been easier to just accept that I cannot go and continue being involved somehow. It’s sad not being able to celebrate the migration with web team mates, but I hope they celebrate anyway! I am a bit behind with DebConf publicity work but I will try to catch up soon, and for DebConf itself I will try to do the microblogging coverage as in former years, and also participate in the IRC and watch the streaming, thanks to timezones and siesta, I guess 😉

 

  • Since January I have been enjoying my new phone (the Galaxy S III broke, and I bought a BQ Aquaris U+) with Lineage OS 14.x and F-Droid. I keep having a look every few days at the F-Droid tab that shows the news and updated apps, and the activity and life of the projects is amazing. A non-exhaustive list of the free software apps that I use: AdAway, Number Guesser (I play this with my son), Conversations, Daily Dozen, DavDroid, F-Droid, Fennec F-Droid, Hacker’s Keyboard, K-9 Mail, KDE Connect, Kontalk, LabCoat, Document Reader, LibreOffice Viewer (old but it works), Memetastic, NewPipe, OSMAnd~, PassAndroid, Periodical, Puma, Quasseldroid, QuickDic, RadioDroid, Reader for Pepper and Carrot, Red Moon, RedReader, Ring, Slight Backup, Termux. Some other apps that I don’t use all the time but find nice to have are AFWall+, Atomic (for when my Quassel server is down), Call Recorder, Pain Diary, Yalp Store. My son decided not to play games on phones/tablets so we removed Anuto TD, Apple Finger, Seafood Berserker, Shattered Pixel Dungeon and Turo (I appreciate the games but I only play sometimes, if another person plays too, just to share the game). My only non-free apps: the one that gives me the time I need to wait at the bus stop, Wallapop (a second-hand, person-to-person buy/sell app), and WhatsApp. I have no Google services on the phone and no location services available for those apps, but I enter the bus stop number or the postal code by hand, and they work.

 

  • I am very very happy with my Lenovo X230 laptop, its keyboard and everything. It runs Debian stable for now, and Plasma Desktop. I only have 2 issues with it: (1) hibernation, and (2) smart card reader. About the hibernation: sometimes, when on battery, I close the lid and it seems it does not hibernate well because when I open the lid again it does not come back, the power button blinks slowly, and pressing it, typing something or moving the touchpad, have no effect. The only ‘solution’ is to long-press the power button so it abruptly shuts down (or take the battery off, with the same effect). After that, I turn on again and the filesystem complains about the unexpected shut down but it boots correctly. About the smart card reader: I have a C3PO LTC31 smart card reader and when I connect it via USB to use my GPG smart card, I need to restart pcsc service manually to be able to use it. If I don’t do that, the smart card is not recognised (Thunderbird or whatever program asks me repeatedly to insert the card). I’m not sure why is that, and if it’s related to my setup, or to this particular reader. I have another reader (other model) at work, but always forget to switch them to make tests. Anyway I can live with it until I find time to research more.

There are probably more things that I’m forgetting, but this post has become too long already. Bye!

 

,

Planet DebianJonathan McDowell: Thoughts on the acquisition of GitHub by Microsoft

Back at the start of 2010, I attended linux.conf.au in Wellington. One of the events I attended was sponsored by GitHub, who bought me beer in a fine Wellington bar (that was very proud of having an almost complete collection of BrewDog beers, including some Tactical Nuclear Penguin). I proceeded to tell them that I really didn’t understand their business model and that one of the great things about git was the very fact it was decentralised and we didn’t need to host things in one place any more. I don’t think they were offended, and the announcement Microsoft are acquiring GitHub for $7.5 billion proves that they had a much better idea about this stuff than me.

The acquisition announcement seems to have caused an exodus. GitLab reported over 13,000 projects being migrated in a single hour. IRC and Twitter were full of people throwing up their hands and saying it was terrible. Why is this? The fear factor seemed to come from who was doing the acquiring: Microsoft. The big, bad “Linux is a cancer” folk. I saw a similar, though more muted, reaction when LinkedIn were acquired.

This extremely negative reaction to Microsoft seems bizarre to me these days. I’m well aware of their past, and their anti-competitive practices (dating back to MS-DOS vs DR-DOS). I’ve no doubt their current embrace of Free Software is ultimately driven by business decisions rather than a sudden fit of altruism. But I do think their current behaviour is something we could never have foreseen 15+ years ago. Did you ever think Microsoft would be a contributor to the Linux kernel? Is it fair to maintain such animosity? Not for me to say, I guess, but I think that some of it is that both GitHub and LinkedIn were services that people were already uneasy about using, and the acquisition was the straw that broke the camel’s back.

What are the issues with GitHub? I previously wrote about the GitHub TOS changes, stating I didn’t think it was necessary to fear the TOS changes, but that the centralised nature of the service was potentially something to be wary of. joeyh talked about this as long ago as 2011, discussing the aspects of the service other than the source code hosting that were only API accessible, or in some other way more restricted than a git clone away. It’s fair criticism; the extra features offered by GitHub are very much tied to their service. And yet I don’t recall the same complaints about SourceForge, long the home of choice for Free Software projects. Its problems seem to be more around a dated interface, being slow to enable distributed VCSes and the addition of advertising. People left because there were much better options, not because of ideological differences.

Let’s look at the advantages GitHub had (and still has) to offer. I held off on setting up a GitHub account for a long time. I didn’t see the need; I self-hosted my Git repositories. I had the ability to setup mailing lists if I needed them (and my projects generally aren’t popular enough that they did). But I succumbed in 2015. Why? I think it was probably as part of helping to run an OpenHatch workshop, trying to get people involved in Free software. That may sound ironic, but helping out with those workshops helped show me the benefit of the workflow GitHub offers. The whole fork / branch / work / submit a pull request approach really helps lower the barrier to entry for people getting started out. Suddenly fixing an annoying spelling mistake isn’t a huge thing; it’s easy to work in your own private playground and then make that work available to upstream and to anyone else who might be interested.

For small projects without active mailing lists that’s huge. Even for big projects that can be a huge win. And it’s not just useful to new contributors. It lowers the barrier for me to be a patch ‘n run contributor. Now that’s not necessarily appealing to some projects, because they’d rather get community involvement. And I get that, but I just don’t have the time to be active in all the projects I feel I can offer something to. Part of that ease is the power of git, the fact that a clone is a first class repo, capable of standing alone or being merged back into the parent. But another part is the interface GitHub created, and they should get some credit for that. It’s one of those things that once you’re presented with it it makes sense, but no one had done it quite as slickly up to that point. Submissions via mailing lists are much more likely to get lost in the archives compared to being able to see a list of all outstanding pull requests on GitHub, and the associated discussion. And subscribe only to that discussion rather than everything.

GitHub also seemed to appear at the right time. It, like SourceForge, enabled easy discovery of projects. Crucially it did this at a point when web frameworks were taking off and a whole range of developers who had not previously pulled large chunks of code from other projects were suddenly doing so. And writing frameworks or plugins themselves and feeling in the mood to share them. GitHub has somehow managed to hit critical mass such that lots of code that I’m sure would otherwise never have seen the light of day is available to all. Perhaps the key was that repos were lightweight setups under usernames, unlike the heavier SourceForge approach of needing a complete project setup per codebase you wanted to push. Although it’s not my primary platform, I engage with GitHub for my own code because the barrier is low; it’s a couple of clicks on the website and then I just push to it like my other remote repos.

I seem to be coming across as a bit of a GitHub apologist here, which isn’t my intention. I just think the knee-jerk anti-GitHub reaction has been fascinating to observe. I signed up to GitLab around the same time as GitHub, but I’m not under any illusions that their hosted service is significantly different from GitHub in terms of having my data hosted by a third party. Nothing that’s up on either site is only up there, and everything that is there is publicly available anyway. I understand that as third parties they can change my access at any point in time, and so I haven’t built any infrastructure that assumes their continued existence. That said, why would I not take advantage of their facilities when they happen to be of use to me?

I don’t expect my use of GitHub to significantly change now they’ve been acquired.

Planet DebianDaniel Stender: Dynamic inventories for Ansible using Python

Ansible not only accepts static machine inventories represented in an inventory file, it is also capable of leveraging dynamic inventories. To use that mechanism, the only thing needed is a program or script which creates the particular machines that are needed for a certain project and returns their addresses as a JSON object, representing an inventory just like an inventory file does. This makes it possible to create specially crafted tools to set up the number of cloud machines which are needed for an Ansible project, and the mechanism is in principle open to any programming language. Instead of selecting an inventory file with the option -i of e.g. ansible-playbook, just give the name of the program you’ve set up, and Ansible executes it and evaluates the inventory which is given back.
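
To make the contract concrete, here is a minimal sketch of a dynamic inventory (illustrative only, with placeholder group names and addresses; the real DigitalOcean example follows below):

#!/usr/bin/env python
# Minimal dynamic inventory sketch: print a JSON object that maps group
# names to {"hosts": [...]} lists. The addresses are placeholders.
import json

inventory = {
    "webservers": {"hosts": ["192.0.2.10", "192.0.2.11"]},
    "databases": {"hosts": ["192.0.2.20"]},
}

print(json.dumps(inventory))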

Here’s a little example of a dynamic inventory for Ansible written in Python. The script uses the python-digitalocean library in Debian (https://github.com/koalalorenzo/python-digitalocean) to launch a couple of DigitalOcean droplets for a particular Ansible project:

#!/usr/bin/env python
import os
import sys
import json
import digitalocean
import ConfigParser

config = ConfigParser.ConfigParser()
config.read(os.path.dirname(os.path.realpath(__file__)) + '/inventory.cfg')
nom = config.get('digitalocean', 'number_of_machines')
keyid = config.get('digitalocean', 'key-id')
try:
    token = os.environ['DO_ACCESS_TOKEN']
except KeyError:
    token = config.get('digitalocean', 'access-token')

manager = digitalocean.Manager(token=token)

def get_droplets():
    # Return the existing 'ansible-demo' droplets if the complete set is
    # there, False if none exist yet, and abort if only part of the set exists.
    droplets = manager.get_all_droplets(tag_name='ansible-demo')
    if not droplets:
        return False
    elif len(droplets) != 0 and len(droplets) != int(nom):
        print "The number of already set up 'ansible-demo' droplets differs"
        sys.exit(1)
    elif len(droplets) == int(nom):
        return droplets

key = manager.get_ssh_key(keyid)
tag = digitalocean.Tag(token=token, name='ansible-demo')
tag.create()

def create_droplet(name):
    droplet = digitalocean.Droplet(token=token,
                                   name=name,
                                   region='fra1',
                                   image='debian-8-x64',
                                   size_slug='512mb',
                                   ssh_keys=[key])
    droplet.create()
    tag.add_droplets(droplet.id)
    return True

if get_droplets() is False:
    for node in range(int(nom))[1:]:
        create_droplet(name='wordpress-node'+str(node))
    create_droplet('load-balancer')

# Build the inventory: group the droplets into 'load-balancer' and
# 'wordpress-nodes', with their IP addresses as hosts.
droplets = get_droplets()
inventory = {}
hosts = {}
machines = []
for droplet in droplets:
    if 'load-balancer' in droplet.name:
        machines.append(droplet.ip_address)
        hosts['hosts']=machines
        inventory['load-balancer']=hosts
hosts = {}
machines = []
for droplet in droplets:
    if 'wordpress' in droplet.name:
        machines.append(droplet.ip_address)
        hosts['hosts']=machines
        inventory['wordpress-nodes']=hosts

print json.dumps(inventory)

It’s a simple, basic script to demonstrate how you can craft something for your own needs to leverage dynamic inventories for Ansible. The parameters of the droplets, like the size (512mb), the image (debian-8-x64) and the region (fra1), are hard-coded and can be changed easily if wanted. The other things needed, like the total number of wanted machines, the access token for the DigitalOcean API and the ID of the public SSH key which is going to be applied to the virtual machines, are read from a simple configuration file (inventory.cfg):

[digitalocean]
access-token = 09c43afcbdf4788c611d5a02b5397e5b37bc54c04371851
number_of_machines = 4
key-id = 21699531

The script can of course be executed independently of Ansible. The first time you execute it, it creates the wanted number of machines (consisting always of one load-balancer node and, given that the total number of machines is four, three wordpress-nodes), and returns the IP addresses of the newly created machines, put into groups:

$ ./inventory.py 
{"wordpress-nodes": {"hosts": ["159.89.111.78", "159.89.111.84", "159.89.104.60"]}, "load-balancer": {"hosts": ["159.89.98.64"]}}

[Screenshot: the newly created ‘ansible-demo’ droplets in the DigitalOcean control panel]

Any consecutive execution of the script recognizes that the wanted machines have already been created, and just returns the same inventory again:

$ ./inventory.py 
{"wordpress-nodes": {"hosts": ["159.89.111.78", "159.89.111.84", "159.89.104.60"]}, "load-balancer": {"hosts": ["159.89.98.64"]}}

If you then delete the droplets and run the script again, a new set of machines gets created:

$ for i in $(doctl compute droplet list | awk '/ansible-demo/{print $(1)}'); do doctl compute droplet delete $i; done
$ ./inventory.py 
{"wordpress-nodes": {"hosts": ["46.101.115.214", "165.227.138.66", "165.227.153.207"]}, "load-balancer": {"hosts": ["138.68.85.93"]}}

As you can see, the JSON object[1] which is given back represents an Ansible inventory; the same inventory represented in a file would have this form:

[load-balancer]
138.68.85.93

[wordpress-nodes]
46.101.115.214
165.227.138.66
165.227.153.207

As said, you can use this “one-trick pony” Python script instead of an inventory file: just give its name to the option -i, and the Ansible CLI tool runs it and works on the inventory which is given back:

$ ansible wordpress-nodes -i ./inventory.py -m ping -u root --private-key=~/.ssh/id_digitalocean
165.227.153.207 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
46.101.115.214 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}
165.227.138.66 | SUCCESS => {
    "changed": false, 
    "ping": "pong"
}

Note: the script doesn’t yet support a waiter mechanism, but completes as soon as there are IP addresses available. It can always take a little while until the newly created machines are completely provisioned, booted, and accessible via SSH, so there could be errors about hosts not being accessible. In that case, just wait a few seconds and run the Ansible command again.
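
A simple waiter could be bolted on top of the script; the following is only an illustrative sketch (not part of the script above), assuming that python-digitalocean’s Droplet.load() refreshes the status and ip_address attributes as documented:

import time

def wait_for_droplets(droplets, timeout=300, interval=5):
    # Poll the API until every droplet reports status 'active' and has an
    # IP address assigned, or give up after the timeout.
    deadline = time.time() + timeout
    while time.time() < deadline:
        for droplet in droplets:
            droplet.load()
        if all(d.status == 'active' and d.ip_address for d in droplets):
            return droplets
        time.sleep(interval)
    raise RuntimeError("droplets did not become active in time")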


  1. For the exact structure of the JSON object I’m drawing from: https://gist.github.com/jtyr/5213fabf2bcb943efc82f00959b91163

Planet DebianShirish Agarwal: Abuse of childhood

The blog post is in homage to any abuse victims and more directly to parents and children being separated by policies formed by a Government whose chief is supposed to be ‘The leader of the free world’. I sat on the blog post for almost a week even though I got it proof-read by two women, Miss S and Miss K to see if there is or was anything wrongful about the post. Both the women gave me their blessings as it’s something to be shared.

I am writing this blog post from my house, in a safe environment, having chai (tea), listening to some of my favorite songs, far from the trauma some children are going through.

I have been disturbed by the news of families and especially young children being separated from their own families because of state policy. I was pretty hesitant to write this post as we are told to only share our strengths and not our weaknesses or traumas of the past. I partly want to share so people who might be on the fence of whether separating families is a good idea or not might have something more to ponder over. The blog post is not limited to the ongoing and proposed U.S. Policy called Separations but all and any situations involving young children and abuse.

The first experience was when my cousin sister and her family came to visit me and mum. We often do not get relatives or entertain them due to water shortage issues. It’s such a common issue all over India that nobody bats an eye over it; we will probably talk about it in some other blog post if need be.

The sister who came has two daughters. The older one knew me and mum, and knew that both of us have a penchant for pulling legs but at the same time like to spoil Didi and her. All of us are foodies, so we have a grand time. The younger one, though, didn’t know us and we were unknown to her. In playfulness, we said we would keep the bigger sister with us, and she was afraid. She clung to her sister like anything. Even though we tried to pacify her, she wasn’t at ease with us till the time she was safely tucked in with her sister in the family car along with her mum and dad.

While this is a small incident, it triggered a memory kept hidden for over 35+ years. I was perhaps 4-5 years old. I was being brought up by a separated working mum who had a typical government 9-5 job. My grandparents (mother’s side) used to try and run the household in her absence, my grandmother doing all household chores, my grandfather helping here and there, while all outside responsibilities were his.

In this, there was the task of putting me in school. Mum probably talked to some of her colleagues, or somebody or the other suggested St. Francis, a Catholic missionary school named after one of the many saints named Saint Francis. It is and was a school nearby. There was a young man who used to do odd jobs around the house and was trusted by all, who was a fan of Amitabh Bachchan and who is/was responsible for my love for first-day first shows of his movies. A genuinely nice elderly-brother kind of person with whom I have had a lot of beautiful memories of childhood.

Anyways, his job was to transport me back and forth to the school, which he did without fail. The trouble started for me in school. I do not know the reason till date, maybe I was a bawler or whatever, but I was kept in a dark, dank toilet for a year (minus the holidays). The first time I went to the dark, foreboding place, I probably shat and vomited, for which I was beaten quite a bit. I learnt that if I were sent to the dark room, I had to put my knickers somewhere up top where they wouldn’t get dirty so I would not get beaten. Sometimes I was also made to clean my vomit or shit, which made the whole thing even worse. I would be sent to the room regularly and sometimes beaten. The only nice remembrance I had was the last hour before school was over, when I was taken out of the toilet, made presentable and made to sit near the window-sill from where I could see trains running by. I dunno whether it was just that the smell of free, fresh air plus seeing trains and freedom got somehow mixed, and a train-lover was born.

I don’t know why I didn’t ever tell my mum or anybody else about the abuse happening to me. Most probably because the teacher may have threatened me with something or the other. Somehow the year ended and I was failed. The only thing my mother and grandparents probably saw and felt was that I had grown a bit thinner.

Either due to mother’s intuition or because I had failed, I was made to change schools. While I was terrified of the change, because I thought there was something wrong with me and things would be worse, it was actually the opposite. While corporal punishment was still the norm, there wasn’t any abuse, unlike in the school before. In the eleven years I spent in the school, there was only one time that I was given toilet duty, and that too because I had done something naughty like pulling a girl’s hair or something like that, and it was with one or two other students. Rather than clean the toilets we ended up playing with water.

I told part of my experience to mum about a year, year and a half after I was in the new school, half-expecting something untoward to happen as the teacher had said. The only thing I remember from that conversation was the shock registering on her face. I didn’t tell her about the vomit and shit part as I was embarrassed about it. I had nightmares about it till I was in my teens, when with treks and everything I understood that even darkness can be a friend, just like light is.

For the next 13-odd years, till I asked her to stop checking on me, she used to come to school every few months, talk to teachers and talk with class-mates. The same happened in college, till I asked her to stop checking as I used to feel embarrassed when other class-mates gossiped.

It was only years later, when I began working, that I understood what she was doing all along. She was just making sure I was ok.

The fact that it took me 30+ years to share this story/experience with the world at large also shows that somewhere I still feel a bit scarred, on the soul.

If you are feeling any sympathy or empathy towards me, I’m thankful for it, but it would be much better directed towards those who are in a precarious, vulnerable situation like I was. It doesn’t matter what politics you believe in or peddle: separating children from their parents is immoral for any being, forget even a human being. Even in the animal world, we see how predators only attack those young whose fathers and mothers are not around to protect them.

As in any story/experience/tale there are lessons or takeaways that I hope most parents teach their young ones, especially Indian or Asiatic parents at large –

1. Change the rule of ‘Respect all elders and obey them no matter what’ to ‘Respect everybody including yourself’; this should be taught by parents to their children. It will boost their self-confidence a bit and also make them more likely to share any issues that happen to them.

2. If somebody threatens you or threatens the family, immediately inform us (i.e. the parents).

3. The third one is perhaps the most difficult: ‘telling the truth without worrying about consequences’. In Indian families we learn about ‘secrets’ and ‘modifying the truth’ from our parents and elders. That needs to somehow change.

4. A few years ago, Aamir Khan (a film actor), together with people specializing in working with children, talked and shared about ‘good touch, bad touch’ as a prevention method; maybe somebody could also do something similar for these kinds of violence.

At the end I recently came across an article and also Terminal.

Planet DebianAntoine Beaupré: My free software activities, June 2018

It's been a while since I've done a report here! Since I need to do one for LTS, I figured I would also catch you up with the work I've done in the last three months. Maybe I'll make that my new process: quarterly reports would reduce the overhead on my side with little loss for you, my precious (few? many?) readers.

Debian Long Term Support (LTS)

This is my monthly Debian LTS report.

I omitted doing a report in May because I didn't spend a significant number of hours, so this also covers a handful of hours of work in May.

May and June were strange months to work on LTS, as we made the transition between wheezy and jessie. I have worked on all three LTS releases now, and I must have been absent during the last transition, because I felt this one was a little confusing to go through. Maybe it's because I was on frontdesk duty during that time...

For a week or two it was unclear if we should have worked on wheezy, jessie, or both, or even how to work on either. I documented which packages needed an update from wheezy to jessie and proposed a process for the transition period. This generated a good discussion, but I am not sure we resolved the problems we had this time around in the long term. I also sent patches to the security team in the hope they would land in jessie before it turns into LTS, but most of those ended up being postponed to LTS.

Most of my work this month was spent actually porting the Mercurial fixes from wheezy to jessie. Technically, the patches were ported from upstream 4.3 and led to some pretty interesting results in the test suite, which fails to build from source non-reproducibly. Because I couldn't figure out how to fix this in the allotted time, I uploaded the package to my usual test location in the hope someone else picks it up. The test package fixes 6 issues (CVE-2018-1000132, CVE-2017-9462, CVE-2017-17458 and three issues without a CVE).

I also worked on cups in a similar way, sending a test package to the security team for 2 issues (CVE-2017-18190, CVE-2017-18248). Same for Dokuwiki, where I sent a patch for a single issue (CVE-2017-18123). Those have yet to be published, however, and I will hopefully wrap that up in July.

Because I was looking for work, I ended up doing meta-work as well. I made a prototype that would use the embedded-code-copies file to populate data/CVE/list with related packages, as a way to address a problem we have in LTS triage, where packages that were renamed between suites do not get correctly added to the tracker. It ended up being rejected because the changes were too invasive, but it led to Brian May suggesting another approach; we'll see where that goes.

I've also looked at splitting up that dreaded data/CVE/list but my results were negative: it looks like git is very efficient at splitting things up. While a split up list might be easier on editors, it would be a massive change and was eventually refused by the security team.

Other free software work

With my last report dating back to February, this will naturally be a little imprecise, as three months have passed. But let's see...

LWN

I wrote eight articles in the last three months, for an average of nearly three articles a month. I was aiming at an average of one or two a week, so I didn't reach my goal. My last article about Kubecon generated a lot of feedback, probably the best I have ever received. It seems I struck a chord for a lot of people, so that certainly feels nice.

Linkchecker

Usual maintenance work, but we at long last got access to the Linkchecker organization on GitHub, which meant a bit of reorganizing. The only bit missing now is the PyPI namespace, but that should also come soon. The code of conduct and contribution guides were finally merged after we clarified project membership. This gives us issue templates which should help us deal with the constant flow of issues that come in every day.

The biggest concern I have with the project now is the C parser and the outdated Windows executable. The latter has been removed from the website so hopefully Windows users won't report old bugs (although that means we won't gain new Windows users at all) and the former might be fixed by a port to BeautifulSoup.

Email over SSH

I did a lot of work to switch away from SMTP and IMAP to synchronise my workstation and laptops with my mailserver. Having the privilege of running my own server has its perks: I have SSH access to my mail spool, which brings the opportunity for interesting optimizations.

The first one I have done is called rsendmail. Inspired by work from Don Armstrong and David Bremner, rsendmail is a Python program I wrote from scratch to deliver email over a pipe, securely. I do not trust the sendmail command: its behavior can vary a lot between platforms (e.g. allow flushing the mailqueue or printing it) and I wanted to reduce the attack surface. It works with another program I wrote called sshsendmail which connects to it over a pipe. It integrates well into "dumb" MTAs like nullmailer, but I also use it with the popular Postfix, without problems.
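
The underlying idea is simple; here is a rough sketch, for illustration only (this is not the actual rsendmail/sshsendmail code, and the remote helper name is made up): read the message on standard input and hand it to a delivery helper on the mail server through an SSH pipe, so no SMTP credentials are involved.

import subprocess
import sys

def send_over_ssh(recipients, host="mail.example.org"):
    # Read the full message from stdin and pipe it to a (hypothetical)
    # delivery helper on the remote mail server over SSH.
    message = sys.stdin.buffer.read()
    cmd = ["ssh", host, "deliver-local"] + list(recipients)
    subprocess.run(cmd, input=message, check=True)

if __name__ == "__main__":
    send_over_ssh(sys.argv[1:])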

The second is to switch from OfflineIMAP to Syncmaildir (SMD). The latter allows synchronization over SSH only. The migration was a little difficult but I very much like the results: SMD is faster than OfflineIMAP and works transparently in the background.

I really like to use SSH for email. I used to have my email password stored all over the place: in my Postfix config, in my email clients' memory, it was a mess. With the new configuration, things just work unattended and email feels like a solved problem, at least the synchronization aspects of it.

Emacs

As often happens, I've done some work on my Emacs configuration. I switched to a new Solarized theme, the bbatsov version which has support for a light and dark mode and generally better colors. I had problems with the cursor which are unfortunately unfixed.

I learned about and used the Emacs iPython Notebook project (EIN) and filed a feature request to replicate the "restart and run" behavior of the web interface. Otherwise it's real nice to have a decent editor to work on Python notebooks, and I have used this to work on the terminal emulators series and the related source code.

I have also tried to complete my conversion to Magit, a pretty nice wrapper around git for Emacs. Some of my usual git shortcuts have good replacements, but not all. For example, those are equivalent:

  • vc-annotate (C-x C-v g): magit-blame
  • vc-diff (C-x C-v =): magit-diff-buffer-file

Those do not have a direct equivalent:

  • vc-next-action (C-x C-q, or F6): anarcat/magit-commit-buffer, see below
  • vc-git-grep (F8): no replacement

I wrote my own replacement for "diff and commit this file" as the following function:

(defun anarcat/magit-commit-buffer ()
  "commit the changes in the current buffer on the fly

This is different than `magit-commit' because it calls `git
commit' without going through the staging area AKA index
first. This is a replacement for `vc-next-action'.

Tip: setting the git configuration parameter `commit.verbose' to
2 will show the diff in the changelog buffer for review. See
`git-config(1)' for more information.

An alternative implementation was attempted with `magit-commit':

  (let ((magit-commit-ask-to-stage nil))
    (magit-commit (list \"commit\" \"--\"
                        (file-relative-name buffer-file-name)))))

But it seems `magit-commit' asserts that we want to stage content
and will fail with: `(user-error \"Nothing staged\")'. This is
why this function calls `magit-run-git-with-editor' directly
instead."
  (interactive)
  (magit-run-git-with-editor (list "commit" "--" (file-relative-name buffer-file-name))))

It's not very pretty, but it works... Mostly. Sometimes the magit-diff buffer becomes out of sync, but the --verbose output in the commitlog buffer still works.

I've also looked at git-annex integration. The magit-annex package did not work well for me: the file listing is really too slow. So I found the git-annex.el package, but did not try it out yet.

While working on all of this, I fell into a different rabbit hole: I found it inconvenient to "pastebin" stuff from Emacs, as it would involve selecting a region, piping to pastebinit and copy-pasting the URL found in the *Messages* buffer. So I wrote this first prototype:

(defun pastebinit (begin end)
  "pass the region to pastebinit and add output to killring

TODO: prompt for possible pastebins (pastebinit -l) with prefix arg

Note that there's a `nopaste.el' project which already does this,
which we should use instead.
"
  (interactive "r")
  (message "use nopaste.el instead")
  (let ((proc (make-process :filter #'pastebinit--handle
                            :command '("pastebinit")
                            :connection-type 'pipe
                            :buffer nil
                            :name "pastebinit")))
    (process-send-region proc begin end)
    (process-send-eof proc)))

(defun pastebinit--handle (proc string)
  "handle output from pastebinit asynchronously"
  (let ((url (car (split-string string))))
    (kill-new url)
    (message "paste uploaded and URL added to kill ring: %s" url)))

It was my first foray into asynchronous process operations in Emacs: difficult and confusing, but it mostly worked. Those who know me know what's coming next, however: I found not only one, but two libraries for pastebins in Emacs: nopaste and (after patching nopaste to add asynchronous support and customize support, of course) debpaste.el. I'm not sure where that will go: there is a proposal to add nopaste in Debian that was discussed a while back, and I made a detailed report there.

Monkeysign

I made a minor release of Monkeysign to cover for CVE-2018-12020 and its GPG sigspoof vulnerability. I am not sure where to take this project anymore, and I opened a discussion to possibly retire the project completely. Feedback welcome.

ikiwiki

I wrote a new ikiwiki plugin called bootstrap to fix table markup to match what the Bootstrap theme expects. This was particularly important for the previous blog post which uses tables a lot. This was surprisingly easy and might be useful to tweak other stuff in the theme.

Random stuff

  • I wrote up a review of security of APT packages when compared with the TUF project, in TufDerivedImprovements
  • contributed to about 20 different repositories on GitHub, too numerous to list here

Krebs on SecurityPlant Your Flag, Mark Your Territory

Many people, particularly older folks, proudly declare they avoid using the Web to manage various accounts tied to their personal and financial data — including everything from utilities and mobile phones to retirement benefits and online banking services. The reasoning behind this strategy is as simple as it is alluring: What’s not put online can’t be hacked. But increasingly, adherents to this mantra are finding out the hard way that if you don’t plant your flag online, fraudsters and identity thieves may do it for you.

The crux of the problem is that while most types of customer accounts these days can be managed online, the process of tying one’s account number to a specific email address and/or mobile device typically involves supplying personal data that can easily be found or purchased online — such as Social Security numbers, birthdays and addresses.

Some examples of how being a modern-day Luddite can backfire are well-documented, such as when scammers create online accounts in someone’s name at the Internal Revenue Service, the U.S. Postal Service or the Social Security Administration.

Other examples may be far less obvious. Consider the case of a consumer who receives their home telephone service as part of a bundle through their broadband Internet service provider (ISP). Failing to set up a corresponding online account to manage one’s telecommunications services can provide a powerful gateway for fraudsters.

Carrie Kerskie is president of Griffon Force LLC, a company in Naples, Fla. that helps identity theft victims recover from fraud incidents. Kerskie recalled a recent case in which thieves purchased pricey items from a local jewelry store in the name of an elderly client who’d previously bought items at that location as gifts for his late wife.

In that incident, the perpetrator presented a MasterCard Black Card in the victim’s name along with a fake ID created in the victim’s name (but with the thief’s photo). When the jewelry store called the number on file to verify the transactions, the call came through to the impostor’s cell phone right there in the store.

Kerskie said a follow-up investigation revealed that the client had never set up an account at his ISP (Comcast) to manage it online. Multiple calls with the ISP’s customer support people revealed that someone had recently called Comcast pretending to be the 86-year-old client and established an online account.

“The victim never set up his account online, and the bad guy called Comcast and gave the victim’s name, address and Social Security number along with an email address,” Kerskie said. “Once that was set up, the bad guy logged in to the account and forwarded the victim’s calls to another number.”

Incredibly, Kerskie said, the fraudster immediately called Comcast to ask about the reason for the sudden account changes.

“While I was on the phone with Comcast, the customer rep told me to hold on a minute, that she’d just received a communication from the victim,” Kerskie recalled. “I told the rep that the client was sitting right beside me at the time, and that the call wasn’t from him. The minute we changed the call forwarding options, the fraudster called customer service to ask why the account had been changed.”

Two to three days after Kerskie helped the client clean up fraud with the Comcast account, she got a frantic call from the client’s daughter, who said she’d been trying her dad’s mobile phone but that he hadn’t answered in days. They soon discovered that dear old dad was just fine, but that he’d also neglected to set up an online account at his mobile phone provider.

“The bad guy had called in to the mobile carrier, provided his personal details, and established an online account,” Kerskie said. “Once they did that, they were able to transfer his phone service to a new device.”

OFFLINE BANKING

Many people naively believe that if they never set up their bank or retirement accounts for online access then cyber thieves can’t get access either. But Kerskie said she recently had a client who had almost a quarter of a million dollars taken from his bank account precisely because he declined to link his bank account to an online identity.

“What we found is that the attacker linked the client’s bank account to an American Express Gift card, but in order to do that the bad guy had to know the exact amount of the microdeposit that AMEX placed in his account,” Kerskie said. “So the bad guy called the 800 number for the victim’s bank, provided the client’s name, date of birth, and Social Security number, and then gave them an email address he controlled. In this case, had the client established an online account previously, he would have received a message asking to confirm the fraudulent transaction.”

After tying the victim’s bank account to a prepaid card, the fraudster began slowly withdrawing funds in $5,000 increments. All told, thieves managed to siphon almost $170,000 over a six month period. The victim’s accounts were being managed by a trusted acquaintance, but the withdrawals didn’t raise alarms because they were roughly in line with withdrawal amounts the victim had made previously.

“But because the victim didn’t notify the bank within 60 days of the fraudulent transactions as required by law, the bank only had to refund the last 60 days worth of fraudulent transactions,” Kerskie said. “We were ultimately able to help him recover most of it, but that was a whole other ordeal.”

Kerskie said many companies try to fight fraud on accounts belonging to customers who haven’t set up a corresponding online account by sending a letter via snail mail to those customers when account changes are made.

“But not everyone does that and if the thief who’s taking advantage of the situation is smart, he’ll simply set up an online account and change the billing address, so the customer never gets that notice,” Kerskie said.

MARK YOUR TERRITORY

Kerskie said it’s a good idea for people with older relatives to help those individuals ensure they have set up and manage online identities for their various accounts — even if those relatives never intend to access any of the accounts online. Helping those relatives place a security freeze on their credit files with the four major credit bureaus (and with another, little known bureau that many mobile providers rely upon for credit checks) can go a long way toward preventing new account fraud.

Adding two-factor authentication (whenever it is available) and/or establishing a customer-specific personal identification number (PIN) also can help secure online access. For those who can’t be convinced to use a password manager, even writing down all of the account details and passwords on a slip of paper can be helpful, provided the document is secured in a safe place.

This process is doubly important, Kerskie said, for parents and relatives who have just lost a spouse.

“When someone passes away, there’s often an obituary in the paper that offers a great deal of information about the deceased and any surviving family members,” she said. “And the bad guys absolutely love obits.”

Eschewing accounts on popular social media platforms also can have consequences, mainly because most people have enough information about themselves online that anyone can create an account in their name and start messaging friends and family members with various fraud schemes.

“I always tell people if you don’t want to set up an online account for social media that’s fine, but make sure you tell your friends and family, ‘If you ever get a social media request from me, just ignore it because I’ll never do that,'” Kerskie advised.

In summary, plant your flag online or — as Kerskie puts it — “mark your territory” — before fraudsters do it for you. And consider helping less Internet-savvy friends and family members to do the same.

“It can save a lot of headache,” she said. “The sad reality is that criminals very often only need to answer two or three questions to commit fraud in your name, whereas victims typically need to spend hours of their time and answer dozens of questions to undo the resulting fraud.”

Planet DebianDaniel Kahn Gillmor: Protecting Software Updates

In my work at the ACLU, we fight for civil rights and civil liberties. This includes the ability to communicate privately, free from surveillance or censorship, and to control your own information. These are principles that I think most free software developers would agree with. In that vein, we just released a guide to securing software update channels in collaboration with students from NYU Law School.

The guide focuses specifically on what people and organizations that distribute software can do to ensure that their software update processes and mechanisms are actually things that their users can reliably trust. The goal is to make these channels trustworthy, even in the face of attempts by government agencies to force software vendors to ship malware to their users.

Why software updates specifically? Every well-engineered system on today's Internet will have a software update mechanism, since there are inevitably bugs that need fixing, or new features added to improve the system for the users. But update channels also represent a risk: they are an unclosable hole that enables installation of arbitrary software, often at the deepest, most-privileged level of the machine. This makes them a tempting target for anyone who wants to force the user to run malware, whether that's a criminal organization, a corporate or political rival, or a government surveillance agency.

I'm pleased to say that Debian has already implemented many of the technical recommendations we describe, including leading the way on reproducible builds. But as individual developers we might also be targeted, as lamby points out, and it's worth thinking about how you'd defend your users from such a situation.

As an organization, it would be great to see Debian continue to expand its protections for its users by holding ourselves even more accountable in our software update mechanisms than we already do. In particular, I'd love to see work on binary transparency, similar to what Mozilla has been doing, but that ensures that the archive signing keys (which our users trust) can't be abused/misused/compromised without public exposure, and that allows for easy monitoring and investigation of what binaries we are actually publishing.

In addition to technical measures, if you think you might ever get a government request to compromise your users, please make sure you are in touch with a lawyer who has your back, who knows how to challenge requests in court, and who understands why software update channels should not be used for deliberately shipping malware. If you're facing such a situation, and you're in the USA and you don't have a lawyer yet yourself, you can reach out to the lawyers at my workplace, the ACLU's Speech, Privacy, and Technology Project, for help.

Protecting software update channels is the right thing for our users, and for free software -- Debian's priorities. So please take a look at the guidance, think about how it might affect you or the people that you work with, and start a conversation about what you can do to defend these systems that everyone is obliged to trust for today's communications.

TEDAn ambitious plan to explore our oceans, and more news from TED speakers

 

The past few weeks have brimmed over with TED-related news. Below, some highlights.

Exploring the ocean like never before. A school of ocean-loving TED speakers have teamed up to launch OceanX, an international initiative dedicated to discovering more of our oceans in an effort to “inspire a human connection to the sea.” The coalition is supported by Bridgewater Capital’s Ray Dalio, along with luminaries like ocean explorer Sylvia Earle and filmmaker James Cameron, and partners such as BBC Studios, the American Museum of Natural History and the National Geographic Society. The coalition is now looking for ideas for scientific research missions in 2019, exploring the Norwegian Sea and the Indian Ocean. Dalio’s son Mark leads the media arm of the venture; from virtual reality demonstrations in classrooms to film and TV releases like the BBC show Blue Planet II and its follow-up film Oceans: Our Blue Planet, OceanX plans to build an engaged global community that seeks to “enjoy, understand and protect our oceans.” (Watch Dalio’s TED Talk, Earle’s TED Talk and Cameron’s TED Talk.)

The Ebola vaccine that’s saving lives. In response to the recent Ebola outbreak in the Democratic Republic of the Congo, GAVI — the Vaccine Alliance, led by Seth Berkeley — has deployed thousands of experimental vaccines in an outbreak control strategy. The vaccines were produced as part of a partnership between GAVI and Merck, a pharmaceutical company, committed to proactively developing and producing vaccines in case of a future Ebola epidemic. In his TED Talk, Berkeley spoke of the drastic dangers of global disease and the preventative measures necessary to ensure we are prepared for future outbreaks. (Watch his TED Talk and read our in-depth interview with Berkeley.)

A fascinating new study on the halo effect. Does knowing someone’s political leanings change how you gauge their skills? Cognitive neurologist Tali Sharot and lawyer Cass R. Sunstein shared insights from their latest research answering the question in The New York Times. Alongside a team from University College London and Harvard Law School, Sharot conducted an experiment testing whether knowing someone’s political leanings affected how we would engage and trust in other non-political aspects of their lives. The study found that people were more willing to trust someone who had the same political beliefs as them — even in completely unrelated fields, like dentistry or architecture. These findings have wide-reaching implications and can further our understanding of the social and political landscape. (Watch Sharot’s TED Talk on optimism bias).

A new essay anthology on rape culture. Roxane Gay’s newest book, Not That Bad: Dispatches from Rape Culture, was released in May to critical and commercial acclaim. The essay collection, edited and introduced by Gay, features first-person narratives on the realities and effects of harassment, assault and rape. With essays from 29 contributors, including actors Gabrielle Union and Amy Jo Burns, and writers Claire Schwartz and Lynn Melnick, Not That Bad offers feminist insights into the national and global dialogue on sexual violence. (Watch Gay’s TED Talk.)

One million pairs of 3D-printed sneakers. At TED2015, Carbon founder and CEO Joseph DeSimone displayed the latest 3D printing technology, explaining its seemingly endless applications for reshaping the future of manufacturing. Now, Carbon has partnered with Adidas for a bold new vision to 3D-print 100,000 pairs of sneakers by the end of 2018, with plans to ramp up production to millions. The company’s “Digital Light Synthesis” technique, which uses light and oxygen to fabricate materials from pools of resin, significantly streamlines manufacturing from traditional 3D-printing processes — a technology Adidas considers “revolutionary.” (Watch DeSimone’s TED Talk.)

CryptogramManipulative Social Media Practices

The Norwegian Consumer Council just published an excellent report on the deceptive practices tech companies use to trick people into giving up their privacy.

From the executive summary:

Facebook and Google have privacy intrusive defaults, where users who want the privacy friendly option have to go through a significantly longer process. They even obscure some of these settings so that the user cannot know that the more privacy intrusive option was preselected.

The popups from Facebook, Google and Windows 10 have design, symbols and wording that nudge users away from the privacy friendly choices. Choices are worded to compel users to make certain choices, while key information is omitted or downplayed. None of them lets the user freely postpone decisions. Also, Facebook and Google threaten users with loss of functionality or deletion of the user account if the user does not choose the privacy intrusive option.

[...]

The combination of privacy intrusive defaults and the use of dark patterns, nudge users of Facebook and Google, and to a lesser degree Windows 10, toward the least privacy friendly options to a degree that we consider unethical. We question whether this is in accordance with the principles of data protection by default and data protection by design, and if consent given under these circumstances can be said to be explicit, informed and freely given.

I am a big fan of the Norwegian Consumer Council. They've published some excellent research.

Worse Than FailureCodeSOD: Foggy about Security

Maverick StClare’s company recently adopted a new, SaaS solution for resource planning. Like most such solutions, it was pushed from above without regard to how people actually worked, and thus required the users to enter highly structured data into free-form, validation-free, text fields. That was dumb, so someone asked Maverick: “Hey, could you maybe write a program to enter the data for us?”

Well, you’ll be shocked to learn that there was no API, but the web pages themselves all looked pretty simple and the design implied they hadn’t changed since IE4, so Maverick decided to take a crack at writing a scraper. Step one: log in. Easy, right? Maverick fired up a trace on the HTTPS traffic and sniffed the requests. He was happy to see that his password wasn’t sent in plain text. He was less happy to see that it wasn’t sent using any of the standard HTTP authentication mechanisms, and it certainly wasn’t hashed using any algorithm he recognized. He dug into the code, and found this:

function Foggy(svInput)
{
  // Any changes must be duplicated in the server-side version of this function.
  var svOutput = "";
  var ivRnd;
  var i;
  var ivLength = svInput.length;

  if (ivLength == 0 || ivLength > 158)
  {
        svInput = svInput.replace(/"/g,"&qt;");
        return svInput;
  }

  for (i = 0; i < ivLength; i++)
  {
        ivRnd = Math.floor(Math.random() * 3);
        if (svInput.charCodeAt(i) == 32 || svInput.charCodeAt(i) == 34 || svInput.charCodeAt(i) == 62)
        {
          ivRnd = 1;
        }
        if (svInput.charCodeAt(i) == 33 || svInput.charCodeAt(i) == 58 || svInput.charCodeAt(i) == 59 || svInput.charCodeAt(i) + ivRnd > 255)
        {
          ivRnd = 0;
        }
        svOutput += String.fromCharCode(ivRnd+97);
        svOutput += String.fromCharCode(svInput.charCodeAt(i)+ivRnd);
  }

  for (i = 0; i < Math.floor(Math.random() * 8) + 8; i++)
  {
        ivRnd = Math.floor(Math.random() * 26);
        svOutput += String.fromCharCode(ivRnd+97);
  }

  svOutput += String.fromCharCode(svInput.length + 96);
  return svOutput;
}

I… have so many questions. Why do they only replace quotes if the string is empty or greater than 158 characters? Why are there random numbers involved in their “hashing” algorithm? I’m foggy about this whole thing, indeed. And ah, protip: security through obscurity works better when nobody can see how you obfuscated things. All I can say is: “aWcjaacvc0b!cVahcgc0b!cHaubdcmb/gmzyrcoqhp”.
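
For the curious, the scheme is trivially reversible: each character is stored as a pair of an offset letter ('a' to 'c') and the character shifted up by that offset, followed by random padding and a length marker. A quick Python sketch of a decoder (my own, not part of the original application):

def defog(encoded):
    # The last character encodes the original length (chr(len + 96)); each
    # plaintext character is a pair: an offset letter 'a'..'c' (0..2) and the
    # character shifted up by that offset. Everything between the pairs and
    # the length marker is random padding. (Empty or >158-character inputs
    # bypass the scheme entirely and are not handled here.)
    length = ord(encoded[-1]) - 96
    chars = []
    for i in range(length):
        offset = ord(encoded[2 * i]) - 97
        chars.append(chr(ord(encoded[2 * i + 1]) - offset))
    return ''.join(chars)

Running it on the string above reveals the author's three-word verdict on the whole scheme.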


Mark ShuttleworthFraud alert – scams using my name and picture

I have recently become aware of a fraudulent investment scam which falsely states that I have launched new software known as a QProfit System promoted by Jerry Douglas. I’ve seen some phishing sites like http://www.bbc-tech.news and http://pipeline-stats.club, and pop up ads on Facebook like this one:

I can’t comment on whether or not Jerry Douglas promotes a QProfit system and whether or not it’s fraud. But I can tell you categorically that there are many scams like this, and that this investment has absolutely nothing to do with me. I haven’t developed this software and I have no desire to defraud the South African government or anyone else. I’m doing what I can to get the fraudulent sites taken down. But please take heed and don’t fall for these scams.

,

Planet DebianBits from Debian: Debian Perl Sprint 2018

Three members of the Debian Perl team met in Hamburg between May 16 and May 20 2018 as part of the Mini-DebConf Hamburg to continue perl development work for Buster and to work on QA tasks across our 3500+ packages.

The participants had a good time and met other Debian friends. The sprint was productive:

  • 21 bugs were filed or worked on, many uploads were accepted.
  • The transition to Perl 5.28 was prepared, and versioned provides were again worked on.
  • Several cleanup tasks were performed, especially around the move from Alioth to Salsa in documentation, website, and wiki.
  • For src:perl, autopkgtests were enabled, and work on Versioned Provides has been resumed.

The full report was posted to the relevant Debian mailing lists.

The participants would like to thank the Mini-DebConf Hamburg organizers for providing the framework for our sprint, and all donors to the Debian project who helped to cover a large part of our expenses.

Planet DebianJonas Meurer: debian cryptsetup sprint report

Cryptsetup sprint report

The Cryptsetup team – consisting of Guilhem and Jonas – met from June 15 to 17 in order to work on the Debian cryptsetup packages. We ended up working three days (and nights) on the packages, refactored the whole initramfs integration, the SysVinit init scripts and the package build process, and discussed numerous potential improvements as well as new features. The whole sprint was great fun and we very much enjoyed sitting next to each other, being able to discuss design questions and implementation details in person instead of using clunky internet communication means. Besides, we had very nice and interesting chats, contacted other Debian folks from the Frankfurt area and met with jfs on Friday evening.

Splitting cryptsetup into cryptsetup-run and cryptsetup-initramfs

First we split the cryptsetup initramfs integration into a separate package, cryptsetup-initramfs. The package that contains the other Debian-specific features like SysVinit scripts, keyscripts, etc. is now called cryptsetup-run, and cryptsetup itself is a mere metapackage depending on both split-off packages. So from now on, people can install cryptsetup-run if they don't need the cryptsetup initramfs integration. Once Buster is released we intend to rename cryptsetup-run to cryptsetup, which then will no longer have a strict dependency on cryptsetup-initramfs. This transition over two releases is necessary to avoid unexpected breakage on (dist-)upgrades. Meanwhile cryptsetup-initramfs ships a hook that, upon generation of a new initramfs image, detects which devices need to be unlocked early in the boot process and, in case it doesn't find any, suggests that the user remove the package.

The package split allows us to define more fine-grained dependencies: since there are valid use cases for wanting the cryptsetup binaries and scripts but not the initramfs integration (in particular, on systems without an encrypted root device), cryptsetup ≤2:2.0.2-1 was merely recommending initramfs-tools and busybox, while cryptsetup-initramfs now has hard dependencies on these packages.

We also updated the packages to latest upstream release and uploaded 2:2.0.3-1 on Friday shortly before 15:00 UTC. Due to the cryptsetup → cryptsetup-{run,initramfs} package split we hit the NEW queue, and it was manually approved by an ftpmaster… a mere 2h later. Kudos to them! That allowed us to continue with subsequent uploads during the following days, which was beyond our expectations for this sprint :-)

Extensive refactoring work

Afterwards we started working on and merging some heavy refactoring commits that touched almost all parts of the packages. First was a refactoring of the whole cryptsetup initramfs implementation, which downsized both the cryptroot hook and script dramatically (to less than half the size they were before). The logic to detect crypto disks was changed from parsing /etc/fstab to parsing /proc/mounts, and the sysfs(5) block hierarchy is now used to detect dm-crypt device dependencies. A lot of code duplication between the initramfs script and the SysVinit init script was removed by moving common functions into a shared shell include file that is sourced by both the initramfs and SysVinit scripts. To complete the package refactoring, we also overhauled the build process by migrating it to the latest Debhelper 11 style. debian/rules was likewise downsized to less than half its size, and as an extra benefit we now run the upstream build-time test suite during the package build process.
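
For illustration, detecting what a dm-crypt device sits on via sysfs essentially boils down to walking the slaves/ directories; a rough Python sketch of the idea (the actual hook is written in shell) could look like this:

import os

def block_dependencies(dev):
    # Return the underlying block devices of e.g. 'dm-0' by recursing
    # through /sys/class/block/<dev>/slaves.
    deps = []
    slaves_dir = os.path.join("/sys/class/block", dev, "slaves")
    if os.path.isdir(slaves_dir):
        for slave in os.listdir(slaves_dir):
            deps.append(slave)
            deps.extend(block_dependencies(slave))
    return deps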

Some git statistics speak more than a thousand words:

$ git --no-pager diff --ignore-space-change --shortstat debian/2%2.0.2-1..debian/2%2.0.3-2 -- ./debian/
 92 files changed, 2247 insertions(+), 3180 deletions(-)
$ find ./debian -type f \! -path ./debian/changelog -print0 | xargs -r0 cat | wc -l
7342
$ find ./debian -type f \! -path ./debian/changelog -printf x | wc -c
106

On CVE-2016-4484

Since 2:1.7.3-2, our initramfs boot script went to sleep for a full minute when the number of failed unlocking attempts exceeded the configured value (the tries crypttab(5) option, which defaults to 3). This was added in order to defeat local brute force attacks, and mitigate one aspect of CVE-2016-4484; back then Jonas wrote a blog post to cover that story. Starting with 2:2.0.3-2 we changed this behavior and the script now sleeps for one second after each unsuccessful unlocking attempt. The new value should provide a better user experience while still offering protection against local brute force attacks for very fast password hashing functions. The other aspect mentioned in the security advisory — namely the fact that the initramfs boot process drops to a root (rescue/debug) shell after the user fails to unlock the root device too many times — was not addressed at the time, and still isn't. initramfs-tools has a boot parameter panic=<sec> to disable the debug shell, and while setting this is beyond the scope of cryptsetup, we're planning to ask the initramfs-tools maintainers to change the default. (Of course setting panic=<sec> alone doesn't gain much, and one would need to lock down the full boot chain, including BIOS and boot loader.)

New features (work started)

Apart from the refactoring work we started/continued work on several new features:

  • We started to integrate luksSuspend support into system suspend. The idea is to luksSuspend all dm-crypt devices before suspending the machine, in order to protect the storage while it sleeps. In theory, this seemed as simple as creating a minimal chroot in ramfs with the tools required to unlock (luksResume) the disks after the machine resumes, running luksSuspend from that chroot, putting the machine into suspend mode and running luksResume after it got resumed (see the sketch after this list). Unfortunately it turned out to be far more complicated due to unpredictable race conditions between luksSuspend and machine suspend, so we ended up spending quite some time on debugging (and understanding) the issue. In the end it seems that the final sync() before machine suspend ( https://lwn.net/Articles/582648/ ) causes races in some cases, as the dm-crypt device being synced to is already luksSuspended. We sent a request for help to the dm-crypt mailing list but unfortunately haven't received a helpful response yet.
  • In order to get internationalization support for the messages and password prompts in the initramfs scripts, we patched gettext and locale support into initramfs-tools.
  • We started some preliminary work on adding beep support to the cryptsetup initramfs and SysVinit scripts for better accessibility.
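As referenced in the first item above, here is the naive ordering as a Python sketch. The cryptsetup luksSuspend/luksResume commands and /sys/power/state are real interfaces, but the device list is hypothetical, the ramfs chroot preparation is omitted entirely, and — as explained above — this simple sequence is exactly what runs into the sync() race:

import subprocess

# hypothetical list of dm-crypt mappings to protect while suspended
DEVICES = ["cr_root", "cr_home"]

def cryptsetup(action, device):
    subprocess.run(["cryptsetup", action, device], check=True)

for dev in DEVICES:
    # freeze I/O on the mapping and wipe its volume key from kernel memory
    cryptsetup("luksSuspend", dev)

# enter suspend-to-RAM; the write blocks until the machine is woken up again
# (needs root, and in the real setup would happen from the ramfs chroot)
with open("/sys/power/state", "w") as state:
    state.write("mem")

for dev in DEVICES:
    # prompts for the passphrase and restores the volume key
    cryptsetup("luksResume", dev)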

The above features are not available in the current Debian package yet, but we hope they will be included in a future release.

Bugs and Documentation

We also squashed quite a few longstanding bugs and improved the crypttab(5) documentation. In total, we squashed 18 bugs during the sprint, the oldest one dating from June 2013.

On the need for better QA

In addition to the many crypttab(5) options, we also support a huge variety of block device stacks, such as LUKS-LVM2-MD combined in every way one can possibly imagine. And that's a Debian addition, hence something we, the cryptsetup package maintainers, have to develop and maintain ourselves. The many possibilities imply corner cases (it's no surprise that complex or unusual setups can break in subtle ways), which motivated us to completely refactor the Debian-specific code so that it becomes easier to maintain.

While our final upload squashed 18 bugs, it also introduced new ones — in particular two rather serious regressions that slipped through our tests. We have thorough tests for the most usual setups, as well as for some complex stacks we hand-crafted in order to detect corner cases, but this approach doesn't scale to covering the full spectrum of user setups: even with minimal sid installations, the disk images would simply take far too much space! Ideally we would have an automated test suite, with each test deploying a new transient sid VM with a particular setup. As the current and past regressions show, that's a behind-the-scenes area we should work on. (In fact that's an effort we already started, but didn't touch during the sprint due to lack of time.)

More to come

There are some more things on our list that we didn't find time to work on. Apart from the unfinished new features mentioned above, these are mainly the LUKS nuke feature that Kali Linux ships and the lack of keyscripts support for crypttab(5) in systemd.

Conclusion

In our eyes, the sprint was both a great success and great fun. We definitely want to repeat it sometime soon in order to further work on the open tasks and further improve the Debian cryptsetup packages. There's still plenty of work to be done. We thank the Debian project and its generous donors for funding Guilhem's travel expenses.

Guilhem and Jonas, June 25th 2018

CryptogramIEEE Statement on Strong Encryption vs. Backdoors

The IEEE came out in favor of strong encryption:

IEEE supports the use of unfettered strong encryption to protect confidentiality and integrity of data and communications. We oppose efforts by governments to restrict the use of strong encryption and/or to mandate exceptional access mechanisms such as "backdoors" or "key escrow schemes" in order to facilitate government access to encrypted data. Governments have legitimate law enforcement and national security interests. IEEE believes that mandating the intentional creation of backdoors or escrow schemes -- no matter how well intentioned -- does not serve those interests well and will lead to the creation of vulnerabilities that would result in unforeseen effects as well as some predictable negative consequences

The full statement is here.

Worse Than FailureRepresentative Line: Got Your Number

You have a string. It contains numbers. You want to turn those numbers into all “0”s, presumably to anonymize them. You’re also an utter incompetent. What do you do?

You already know what they do. Jane’s co-worker encountered this solution, and she tells us that the language was “Visual BASIC, Profanity”.

Private Function ReplaceNumbersWithZeros(ByVal strText As String) As String
     ReplaceNumbersWithZeros = Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(Replace(strText, "1", "0"), "2", "0"), "3", "0"), "4", "0"), "5", "0"), "6", "0"), "7", "0"), "8", "0"), "9", "0")
End Function

Jane adds:

My co-worker found this function while researching some legacy code. Shortly after this discovery, it took us 15 minutes to talk him down off the ledge…and we’re on the ground floor.
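For contrast, the sane version is a single regular-expression replacement. Sketched here in Python (the same one-liner exists in most languages' regex libraries), and obviously not code from Jane's codebase:

import re

def replace_numbers_with_zeros(text):
    # replace every decimal digit with "0" in one pass
    return re.sub(r"[0-9]", "0", text)

# replace_numbers_with_zeros("Card 4111-1111") -> "Card 0000-0000"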


Planet DebianLouis-Philippe Véronneau: Montreal's Debian & Stuff June Edition

Hello world!

This is me inviting you to the next Montreal Debian & Stuff. This one will take place at Koumbit's offices in Montreal on June 30th from 10:00 to 17:00 EST.

The idea behind 'Debian & Stuff' is to have informal gatherings of the local Debian community to work on Debian-related stuff - or not. Everyone is welcome to drop by and chat with us, hack on a nice project or just hang out!

We've been trying to have monthly meetings of the Debian community in Montreal since April, so this will be the third event in a row.

Chances are we'll take a break in July because of DebConf, but I hope this will become a regular thing!


Planet DebianPetter Reinholdtsen: Add-on to control the projector from within Kodi

My movie-playing setup involves Kodi, OpenELEC (probably soon to be replaced with LibreELEC) and an InFocus IN76 video projector. The projector can be controlled via both an infrared remote control and an RS-232 serial line. The vendor, InFocus, was sensible enough to document the serial protocol in its user manual, so it is easily available, and I used it some years ago to write a small script to control the projector. For a while now, I have longed for a setup where the projector is controlled by Kodi, for example in such a way that when the screen saver goes on, the projector is turned off, and when the screen saver exits, the projector is turned on again.

A few days ago, with very good help from parts of my family, I managed to find a Kodi add-on for controlling an Epson projector, and got in touch with its author to see if we could join forces and make an add-on with support for several projectors. To my pleasure, he was positive about the idea, and we set out to add InFocus support to his add-on and make it suitable for the official Kodi add-on repository.

The add-on is now working (for me, at least), with a few minor adjustments. The most important change I made relative to the master branch in the GitHub repository is embedding the pyserial module in the add-on. The long-term solution is to make a "script"-type pyserial module for Kodi that can be pulled in as a dependency. But until that is in place, I embed it.

The add-on can be configured to turn the projector on when Kodi starts and off when Kodi stops, as well as to turn the projector off when the screensaver starts and back on when the screensaver stops. It can also be told to set the projector source when turning on the projector.
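To make the mechanism a bit more concrete, here is a stripped-down sketch of how such a service add-on can react to Kodi's screensaver events and talk to a projector over the serial line. xbmc.Monitor and pyserial's serial.Serial are the real APIs involved, but the port, baud rate and command strings below are placeholders — the actual InFocus commands come from the IN76 manual and the add-on's source, not from this sketch:

import xbmc
import serial

PORT = "/dev/ttyUSB0"            # assumed serial adapter device
BAUD = 19200                     # assumed line speed
POWER_ON = b"<power-on-cmd>"     # placeholder, not the real IN76 command
POWER_OFF = b"<power-off-cmd>"   # placeholder, not the real IN76 command

def send(command):
    # open, write, close for every command so the port stays free otherwise
    with serial.Serial(PORT, BAUD, timeout=1) as port:
        port.write(command)

class ProjectorMonitor(xbmc.Monitor):
    def onScreensaverActivated(self):
        send(POWER_OFF)

    def onScreensaverDeactivated(self):
        send(POWER_ON)

if __name__ == "__main__":
    monitor = ProjectorMonitor()
    send(POWER_ON)                  # optionally turn on when Kodi starts
    while not monitor.abortRequested():
        monitor.waitForAbort(10)    # idle until Kodi shuts down
    send(POWER_OFF)                 # optionally turn off when Kodi stops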

If this sounds interesting to you, check out the project's GitHub repository. Perhaps you can send patches to support your projector too? As soon as we find time to wrap up the latest changes, it should be available for easy installation using any Kodi instance.

For future improvements, I would like to add projector model detection and the ability to adjust the projector's brightness level from within Kodi. We also need to figure out how to handle the projector's cooling period: mine refuses to turn on for 60 seconds after it has been turned off, and this is not handled well by the add-on at the moment.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianAntoine Beaupré: Historical inventory of collaborative editors

A quick inventory of major collaborative editor efforts, in chronological order.

As with any such list, it must start with an honorable mention of the mother of all demos, during which Doug Engelbart presented what is basically an exhaustive list of all possible software written since 1968. This includes not only a collaborative editor, but also graphics, programming and math editors.

Everything else after that demo is just a slower implementation to compensate for the acceleration of hardware.

Software gets slower faster than hardware gets faster. - Wirth's law

So without further ado, here is the list of notable collaborative editors that I could find. By "notable" I mean that they introduce a notable feature or implementation detail.

Project Date Platform Notes
SubEthaEdit 2003-2015? Mac-only First collaborative, real-time, multi-cursor editor I could find. A reverse-engineering attempt in Emacs failed to produce anything.
DocSynch 2004-2007 ? built on top of IRC!
Gobby 2005-now C, multi-platform First open, solid and reliable implementation, and still around! The protocol ("libinfinoted") is notoriously hard to port to other editors (e.g. Rudel failed to implement this in Emacs). The 0.7 release in Jan 2017 adds possible Python bindings that might improve this. Interesting plugins: autosave to disk.
Ethercalc 2005-now Web, Javascript First spreadsheet, along with Google docs
moonedit 2005-2008? ? Original website died. Other users' cursors were visible and keystroke noises were emulated. Included a calculator and music sequencer!
synchroedit 2006-2007 ? First web app.
Inkscape 2007-2011 C++ First graphics editor with collaborative features backed by the "whiteboard" plugin built on top of Jabber, now defunct.
Abiword 2008-now C++ First word processor
Etherpad 2008-now Web First solid web app. Originally developed as a heavy Java app in 2008, acquired and open-sourced by Google in 2009, then rewritten in Node.js in 2011. Widely used.
Wave 2009-2010 Web, Java Failed attempt at a grand protocol unification
CRDT 2011 Specification Standard for replicating a document's datastructure among different computers reliably.
Operational transform 2013 Specification Similar to CRDT, yet, well, different.
Floobits 2013-now ? Commercial, but opensource plugins for different editors
LibreOffice Online 2015-now Web free Google docs equivalent, now integrated in Nextcloud
HackMD 2015-now ? Commercial but opensource. Inspired by hackpad, which was bought up by Dropbox.
Cryptpad 2016-now web? spin-off of xwiki. encrypted, "zero-knowledge" on server
Prosemirror 2016-now Web, Node.JS "Tries to bridge the gap between Markdown text editing and classical WYSIWYG editors." Not really an editor, but something that can be used to build one.
Quill 2013-now Web, Node.JS Rich text editor, also JavaScript. Not sure it is really collaborative.
Teletype 2017-now WebRTC, Node.JS For the GitHub's Atom editor, introduces "portal" idea that makes guests follow what the host is doing across multiple docs. p2p with webRTC after visit to introduction server, CRDT based.
Tandem 2018-now Node.JS? Plugins for Atom, Vim, Neovim, Sublime... uses a relay to set up p2p connections; CRDT-based. Dubious license issues were resolved thanks to the involvement of Debian developers, which makes it a promising standard to follow in the future.

Other lists

Planet DebianJoey Hess: two security holes and a new library

For the past week and a half, I've been working on embargoed security holes. The embargo is over, and git-annex 6.20180626 has been released, fixing those holes. I'm also announcing a new Haskell library, http-client-restricted, which could be used to avoid similar problems in other programs.

Working in secret under a security embargo is mostly new to me, and I mostly don't like it, but it seems to have been the right call in this case. The first security hole I found in git-annex turned out to have a wider impact, affecting code in git-annex plugins (aka external special remotes) that uses HTTP. And quite likely beyond git-annex to unrelated programs, but I'll let their developers talk about that. So quite a lot of people were involved in this behind the scenes.

See also: The RESTLESS Vulnerability: Non-Browser Based Cross-Domain HTTP Request Attacks

And then there was the second security hole in git-annex, which took several days to notice, in collaboration with Daniel Dent. That one's potentially very nasty, allowing decryption of arbitrary gpg-encrypted files, although exploiting it would be hard. It logically followed from the first security hole, so it's good that the first security hole was under embargo long enough for us to think it all through.

These security holes involved HTTP servers doing things to exploit clients that connect to them. For example, an HTTP server, when asked by a client for the content of a file stored on it, can redirect to a file:// URL on the client's disk, or to http://localhost/ or a private web server on the client's internal network. Once the client is tricked into downloading such private data, the confusion can result in that data being exposed. See the advisory for details.

Fixing this kind of security hole is not necessarily easy, because we use HTTP libraries, often via an API library, which may not give much control over following redirects. DNS rebinding attacks can be used to defeat security checks, if the HTTP library doesn't expose the IP address it's connecting to.

I faced this problem in git-annex's use of the Haskell http-client library. So I had to write a new library, http-client-restricted. Thanks to the good design of the http-client library, particularly its Manager abstraction, my library extends it rather than needing to replace it, and can be used with any API library built on top of http-client.

I get the impression that a lot of other languages' HTTP libraries need to have similar things developed. Much like web browsers need to enforce same-origin policies, HTTP clients need to be able to reject certain redirects according to the security needs of the program using them.
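The policy itself is easy to sketch in any language: refuse non-HTTP schemes such as file://, resolve the target host yourself, and refuse loopback, link-local and private addresses. The snippet below illustrates that idea in Python — it is not the http-client-restricted API (that's a Haskell library), and note that to really defeat DNS rebinding a client must also connect to the exact address it just vetted rather than resolving the name a second time, which is what hooking in at the Manager level makes possible:

import ipaddress
import socket
from urllib.parse import urlparse

def check_url(url):
    """Raise ValueError if a server-supplied redirect should not be followed."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https"):
        raise ValueError("refusing non-HTTP scheme: %r" % parsed.scheme)
    if parsed.hostname is None:
        raise ValueError("URL has no host")
    # resolve the name ourselves so every candidate address can be inspected
    for family, _, _, _, sockaddr in socket.getaddrinfo(parsed.hostname,
                                                        parsed.port or 80):
        addr = ipaddress.ip_address(sockaddr[0])
        if addr.is_loopback or addr.is_private or addr.is_link_local:
            raise ValueError("refusing non-public address: %s" % addr)
    # Caveat: a rebinding attacker may answer the *next* lookup differently,
    # so a real client must connect to the address vetted here.

check_url("https://example.com/data")      # passes
# check_url("http://localhost/secret")     # raises ValueError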

I kept a private journal while working on these security holes, and am publishing it now:

Krebs on SecurityHow to Avoid Card Skimmers at the Pump

Previous stories here on the proliferation of card-skimming devices hidden inside fuel pumps have offered a multitude of security tips for readers looking to minimize their chances of becoming the next victim, such as favoring filling stations that use security cameras and tamper-evident tape on their pumps. But according to police in San Antonio, Texas, there are far more reliable ways to avoid getting skimmed at a fuel station.

San Antonio, like most major U.S. cities, is grappling with a surge in pump skimming scams. So far in 2018, the San Antonio Police Department (SAPD) has found more than 100 skimming devices in area fuel pumps, and that figure already eclipses the total number of skimmers found in the area in 2017. The skimmers are hidden inside of the pumps, and there are often few if any outward signs that a pump has been compromised.

In virtually all cases investigated by the SAPD, the incidents occurred at filling stations using older-model pumps that have not yet been upgraded with physical and digital security features which make it far more difficult for skimmer thieves to tamper with fuel pumps and siphon customer card data (and PINs from debit card users).

Lt. Marcus Booth is the financial crimes unit director for the SAPD. Booth said most filling stations in San Antonio and elsewhere use legacy pumps that have a vertical card reader and a flat, membrane-based keypad. In addition, access to the insides of these older pumps frequently is secured via a master key that opens not only all pumps at a given station, but in many cases all pumps of a given model made by the same manufacturer.

Older model fuel pumps like this one feature a flat, membrane-based keypad and vertical card reader. Image: SAPD.

In contrast, Booth said, newer and more secure pumps typically feature a horizontal card acceptance slot along with a raised metallic keypad — much like a traditional payphone keypad and referred to in the fuel industry as a “full travel” keypad:

Newer, more tamper-resistant fuel pumps include raised metallic keypads (known in the industry as “full travel” keypads), horizontal card readers and custom locks for each pump.

Booth said the SAPD has yet to see a skimming incident involving newer pump models like the one pictured directly above.

“Here in San Antonio, many of these stations with these older keypads and card slots were getting hit all the time, sometimes weekly,” he said. “But as soon as those went over to newer gear, we’ve seen zero problems.”

According to Booth, the newer pumps include not only custom keys for each pump, but also tamper protections that physically shut down a pump if the machine is improperly accessed. What’s more, these more advanced pumps do a better job of compartmentalizing individual components, very often enclosing the electronics that serve the card reader and keypad in separately secured metal cages.

“Pretty much all these full travel metallic keypads are encrypted, and if you disconnect them they disable themselves and can only be re-enabled by technician,” Booth told KrebsOnSecurity. “Also, if the pump is opened improperly, it disables itself. These two specific items: The card reader or the pad, if you pull power to them they’re dead, and then they can only be re-enabled by an authorized technician.”

Newer pumps may also include more modern mobile payment options — such as Apple Pay — which allow customers to pay for fuel without ever sharing their credit or debit card account details with the fuel station, although many stations with pumps that advertise this capability have not yet enabled it.

One reason that pump skimmers seem to be more pervasive is that authorities across the country are doing a better job of working with banks and federal investigators to identify fuel stations that appear to be compromised. The flip side is that thieves are generally opportunistic, and tend to target systems that offer the least resistance and the lowest-hanging fruit.

Unfortunately, there is still a ton of low-hanging fruit, and these newer and more secure pump systems remain the exception rather than the rule, Booth said. In December 2016, Visa delayed by three years a deadline for fuel station owners to install payment terminals at the pump that are capable of handling more secure chip-based cards. The chip card technology standard, also known as EMV (short for Europay, MasterCard and Visa) makes credit and debit cards far more expensive and difficult for thieves to clone.

Under previous credit card association rules, station owners that didn’t have chip-ready readers in place by Oct. 2017 would have been on the hook to absorb 100 percent of the costs of fraud associated with transactions in which the customer presented a chip-based card yet was not asked or able to dip the chip (currently, card-issuing banks eat most of the fraud costs from fuel skimming). Currently, fuel stations have until Oct. 1, 2020 to meet the liability shift deadline.

Some pump skimming devices are capable of stealing debit card PINs as well, so it's a good idea to avoid paying with a debit card at the pump. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

This advice often runs counter to the messaging pushed by fuel station owners themselves, many of whom offer lower prices for cash or debit card transactions. That’s because credit card transactions typically are more expensive to process.

In summary, if you have the choice, look for fuel pumps with raised keypads and horizontal card slots. And keep in mind that it may not be the best idea to frequent a particular filling station simply because it offers the lowest prices: Doing so could leave you with hidden costs down the road.

If you enjoyed this story, check out my series on all things skimmer-related: All About Skimmers. Looking for more information on fuel pump skimming? Have a look at some of these stories.

Sociological ImagesThe Half-Dozen Headline

Want to help fight fake news and manage political panics? We have to learn to talk about numbers.

While teaching basic statistics to sociology undergraduates, one of the biggest trends I noticed was students who thought they hated math experiencing a brain shutdown when it was time to interpret their results. I felt the same way when I started in this field, and so I am a big advocate for working hard to bridge the gap between numeracy and literacy. You don’t have to be a statistical wizard to make your reporting clear to readers.

Sociology is a great field to do this, because we are used to going out into the world and finding all kinds of cultural tropes (like pointlessly gendered products!). My new favorite trope is the Half-Dozen Headline. You can spot them in the wild, or through Google News with a search for “half dozen.” Every time I read one of these headlines, my brain echoes with “half of a dozen is six.”

Sometimes, six is a lot:

Sometimes, six is not:

(at least, not relative to past administrations)

Sometimes, well, we just don’t know:

Is this five deaths (nearly six)? Is a rate of about two deaths a year in a Walmart parking lot high? If people already struggle to interpret raw numbers, wrapping your findings in fuzzy language only makes the problem worse.

Spotting Half-Dozen Headlines is a great introductory exercise for classes in social statistics, public policy, journalism, or other fields that use applied data analysis. If you find a favorite Half-Dozen Headline, be sure to send it our way!

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianSteve Kemp: Hosted monitoring

I don't run hosted monitoring as a service, I just happen to do some monitoring for a few (local) people, in exchange for money.

Setting up some new tests today I realised my monitoring software had an embarrassingly bad bug:

  • The IMAP probe would connect to an IMAP/IMAPS server.
  • Optionally it would log in with a username & password.
    • Thus it could test that the service was functional

Unfortunately the IMAP probe would never logout after determining success/failure, which would lead to errors from the remote host after a few consecutive runs:

 dovecot: imap-login: Maximum number of connections from user+IP exceeded
          (mail_max_userip_connections=10)

Oops. Anyway, that bug was fixed promptly once it manifested itself, and the probe also gained the ability to validate SMTP authentication as the result of a customer request.
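For the curious, the shape of a well-behaved probe is roughly the following — a generic illustration using Python's standard imaplib, not the author's monitoring code:

import imaplib

def probe_imap(host, username=None, password=None, port=993):
    """Return True if the IMAPS service answers (and accepts the credentials,
    if any were given)."""
    conn = imaplib.IMAP4_SSL(host, port)
    try:
        if username is not None:
            conn.login(username, password)
            conn.select("INBOX", readonly=True)
        return True
    except imaplib.IMAP4.error:
        return False
    finally:
        # the step the buggy probe skipped; without it the server eventually
        # hits its per-user connection limit and starts rejecting logins
        try:
            conn.logout()
        except Exception:
            pass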

Otherwise I think things have been mixed recently:

  • I updated the webserver of Charlie Stross
  • Did more geekery with hardware.
  • Had a fun time in a sauna, on a boat.
  • Reported yet another security issue in an online PDF generator/converter
    • If you read a remote URL and convert the contents to PDF then be damn sure you don't let people submit file:///etc/passwd.
    • I've talked about this previously.
  • Made plaited bread for the first time.
    • It didn't suck.

(Hosted monitoring is interesting; many people will give you ping/HTTP-fetch monitoring. If you want to remotely test your email service? Far far far fewer options. I guess firewalls get involved if you're testing self-hosted services, rather than cloud-based stuff. But still an interesting niche. Feel free to tell me your budget ;)

CryptogramBypassing Passcodes in iOS

Last week, a story was going around explaining how to brute-force an iOS passcode. Basically, the trick was to plug the phone into an external keyboard and try every PIN at once:

We reported Friday on Hickey's findings, which claimed to be able to send all combinations of a user's possible passcode in one go, by enumerating each code from 0000 to 9999, and concatenating the results in one string with no spaces. He explained that because this doesn't give the software any breaks, the keyboard input routine takes priority over the device's data-erasing feature.

I didn't write about it, because it seemed too good to be true. A few days later, Apple pushed back on the findings -- and it seems that it doesn't work.

This isn't to say that no one can break into an iPhone. We know that companies like Cellebrite and Grayshift are renting/selling iPhone unlock tools to law enforcement -- which means governments and criminals can do the same thing -- and that Apple is releasing a new feature called "restricted mode" that may make those hacks obsolete.

Grayshift is claiming that its technology will still work.

Former Apple security engineer Braden Thomas, who now works for a company called Grayshift, warned customers who had bought his GrayKey iPhone unlocking tool that iOS 11.3 would make it a bit harder for cops to get evidence and data out of seized iPhones. A change in the beta didn't break GrayKey, but would require cops to use GrayKey on phones within a week of them being last unlocked.

"Starting with iOS 11.3, iOS saves the last time a device has been unlocked (either with biometrics or passcode) or was connected to an accessory or computer. If a full seven days (168 hours) elapse [sic] since the last time iOS saved one of these events, the Lightning port is entirely disabled," Thomas wrote in a blog post published in a customer-only portal, which Motherboard obtained. "You cannot use it to sync or to connect to accessories. It is basically just a charging port at this point. This is termed USB Restricted Mode and it affects all devices that support iOS 11.3."

Whether that's real or marketing, we don't know.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #165

Here’s what happened in the Reproducible Builds effort between Sunday June 17 and Saturday June 23 2018:

Packages reviewed and fixed, and bugs filed

  • Bernhard M. Wiedemann:

    • gcc (sort, second attempt)
    • pip (sort hash)
    • librep (version update to fix embedded hostname)
  • Chris Lamb:

tests.reproducible-builds.org development

There were a large number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

TEDApply now to be a TED2019 Fellow

The TED Fellows program is turning ten years old next year, and we are looking for our most ambitious class yet. We select people from every discipline and every country to be Fellows, and we give them support to scale their dreams and scale their impact.

Apply to be a TED Fellow by August 26.

Who are TED Fellows? Fellows are individuals with original work, a record of achievement in their field and exceptional potential. They are also courageous, collaborative people dedicated to improving life where they work.

How do we help you dream bigger? The Fellows program is robust, long-term and, we think, unlike any other Fellowship out there. From our open application process to our rigorous support systems, we have designed a program that maximizes innovation and collaboration.

Fellows get career coaching and speaker training as well as mentorship and public relations guidance. Fellows also give a talk at a TED Conference, a huge opportunity to share their work with a wide, new audience. And perhaps most important, Fellows join the community of 450+ other Fellows who inspire one another and collaborate on new projects.

What have Fellows done after joining the program? In our nearly 10-year history, the Fellows program has sparked remarkable cultural change and reached millions of people. With the support of TED, Fellows have conserved large swaths of our planet, protecting many species in the process. They’ve made headway in understanding complex diseases like Parkinson’s, cancer and malaria. They’ve created art that shines a light on injustice and made music that celebrates our history. They’ve made huge strides in robotics and 4-D printing and launched new startups. They’ve passed laws and have gone on to win Oscars, Grammys and MacArthur “genius” grants. And in the process, Fellows have improved conditions on our planet for countless communities and inspired others to pursue their own unconventional projects.

Our application is straightforward. It’s open to everyone (no one is appointed a Fellow; everyone has to apply), and we encourage you to apply even if you’re not sure you’re qualified. We have a way of picking winners before they know it.

The online application can take as little as 20 minutes. It asks for general biographical information, short essays on your work and three references. We don’t have an upper age limit, but you must be 18 or older to apply. If you’re selected, you will be part of our 10-year anniversary class, and you will need to reserve April 13 through April 20, 2019, for TED2019 and our own very special pre-conference.

So dream bigger. Apply to be a TED Fellow today.

For more information on the TED Fellows:

Visit: ted.com/fellows

Follow: @TEDFellow

Like: facebook.com/TEDFellow

Read: fellowsblog.ted.com

TED12 books from favorite TEDWomen speakers, for your summer reading list

We all have a story to tell. And in my work as curator of the TEDWomen conference, I’ve had the pleasure of providing a platform to some of the best stories and storytellers out there. Beyond their TED Talk, of course, many TEDWomen speakers are also accomplished authors — and if you liked them on the TED stage, odds are you will enjoy spending more time with them in the pages of their books.

All of the women and men listed here have given talks at TEDWomen, though some talks are related to their books and some aren’t. See what connects with you and enjoy your summer!


Luvvie Ajayi‘s 2017 TEDWomen talk has already amassed over 2.2 million views online! In it, she talks about how she wants to leave this world better than she found it and in order to do that, she says we all have to get more comfortable saying the sometimes uncomfortable things that need to be said. What’s great about Luvvie is that she delivers her commentary with a sly side eye that pokes fun at everyone, including herself.

In her book, I’m Judging You: The Do-Better Manual — written in the form of an Emily Post-type guidebook for modern manners — Luvvie doles out criticism and advice with equal amounts of wit, charm and humor that’s often laugh-out-loud funny. As Shonda Rhimes noted in her review, “This truth-riot of a book gives us everything from hilarious lectures on the bad behavior all around us to razor sharp essays on media and culture. With I’m Judging You, Luvvie brilliantly puts the world on notice that she is not here for your foolishness — or mine.”


At the first TEDWomen in 2010, Madeleine Albright talked to me about what it was like to be a woman and a diplomat. In her new book, entitled Fascism: A Warning, the former secretary of state writes about the history of fascism and the clash that took place between two ideologies of governing: fascism and democracy. She argues that “fascism not only endured the 20th century, but now presents a more virulent threat to peace and justice than at any time since the end of World War II.”

“At a moment when the question ‘Is this how it begins?’ haunts Western democracies,” the Economist notes in its review, “[Albright] writes with rare authority.”


Sometimes a talk perfectly captures the zeitgeist, and that was the case with Gretchen Carlson last November at TEDWomen. At the time, the #MeToo movement founded in 2007 by Tarana Burke was seeing a huge surge online, thanks to signal-boosting from Alyssa Milano and more women with stories to share.

Carlson took to the stage to talk about her personal experience with sexual harassment at Fox News, her historic lawsuit and the lessons she’d learned and related in her just-released book, Be Fierce. In her talk, she identifies three specific things we can all do to create safer places to work. “We will no longer be underestimated, intimidated or set back,” Carlson says. “We will stand up and speak up and have our voices heard. We will be the women we were meant to be.” In her book, she writes in detail about how we can stop harassment and take our power back.


John Cary is an architect who thinks deeply about diversity in design — and how the field’s lack of diversity leads to thoughtless, compassionless spaces in the modern world. As he said in his 2017 TEDWomen talk, “well-designed spaces are not just a matter of taste or a questions of aesthetics. They literally shape our ideas about who we are in the world and what we deserve.”

For years, as the executive director of Public Architecture, John has advocated for the term “public interest design” to become part of the architect’s lexicon, in much the same way as it is in fields like law and health care. In his new book, Design for Good, John presents 20 building projects from around the world that exemplify how good design can improve communities, the environment, and the lives of the people who live with it.


In her thought-provoking 2016 TEDWomen talk, professor Brittney Cooper examined racism through the lens of time — showing how moments of joy, connection and well-being had been lost to people of color because of delays in social progress.

Last summer, I recommended Brittney’s book on the lives and thoughts of intellectual Black women in history who had been left out of textbooks. And this year, Brittney is back with another book, one that is more personal and also very timely in this election year in which women are figuring out what a truly intersectional feminist movement looks like.

As my friend Jane Fonda wrote in a recent blog post, in order to build truly multi-racial coalitions, white people need to do the work to truly understand race and racism. For white feminists in particular, the work starts by listening to the perspectives of women of color. Brittney’s book, Eloquent Rage: A Black Feminist Discovers Her Superpower, offers just that opportunity. Brittney’s sharp observations from high school (at a predominantly white school), college (at Howard University) and as a 30-something professional make the political personal. As she told the Washington Post, “When we figure out politics at a personal level, then perhaps it wouldn’t be so hard to figure it out at the more structural level.”


Susan David is a Harvard Medical School psychologist who studies how we process our emotions. In a deeply moving talk at TEDWomen 2017, Susan suggested that the way we deal with our emotions shapes everything that matters: our actions, careers, relationships, health and happiness. “I’m not anti-happiness. I like being happy. I’m a pretty happy person,” she says. “But when we push aside normal emotions to embrace false positivity, we lose our capacity to develop skills to deal with the world as it is, not as we wish it to be.”

In her book, Emotional Agility, Susan shares strategies for the radical acceptance of all of our emotions. How do we not let our self-doubts, failings, shame, fear, or anger hold us back?

“We own our emotions,” she says. “They don’t own us.”


Dr. Musimbi Kanyoro is president and CEO of Global Fund for Women, one of the world’s leading publicly supported foundations for gender equality. In her TEDWomen talk last year, she introduced us to the Maragoli concept of “isirika” — a pragmatic way of life that embraces the mutual responsibility to care for one another — something she sees women practicing all over the world.

In All the Women in My Family Sing, Musimbi is one of 69 women of color who have contributed prose and poetry to this “moving anthology” that “illuminates the struggles, traditions, and life views of women at the dawn of the 21st century. The authors grapple with identity, belonging, self-esteem, and sexuality, among other topics.” Contributors range in age from 16 to 77 and represent African-American, Native American, Asian-American, Muslim, Cameroonian, Kenyan, Liberian, Mexican-American, Korean, Chinese-American and LGBTQI experiences.


In her 2017 TEDWomen talk, author Anjali Kumar shared some of what she learned in researching her new book, Stalking God: My Unorthodox Search for Something to Believe In. A few years ago, Anjali — a pragmatic lawyer for Google who, like more than 56 million of her fellow Americans, describes herself as not religious — set off on a mission to find God.

Spoiler alert: She failed. But along the way, she learned a lot about spirituality, humanity and what binds us all together as human beings.

In her humorous and thoughtful book, Anjali writes about her search for answers to life’s most fundamental questions and finding a path to spirituality in our fragmented world. The good news is that we have a lot more in common than we might think.


New York Times best-selling author Peggy Orenstein is out with a new collection of essays titled Don’t Call Me Princess: Girls, Women, Sex and Life. Peggy combines a unique blend of investigative reporting, personal revelation and unexpected humor in her many books, including Schoolgirls and the book that was the subject of her 2016 TEDWomen talk, Girls & Sex.

Don’t Call Me Princess “offers a crucial evaluation of where we stand today as women — in our work lives, sex lives, as mothers, as partners — illuminating both how far we’ve come and how far we still have to go.” Don’t miss it.


Caroline Paul began her remarkable career as the first female firefighter in San Francisco. She wrote about that in her first book, Fighting Fires. In the 20 years since, she’s written many more books, including her most recent, You Are Mighty: A Guide to Changing the World.

This well-timed book offers advice and inspiration to young activists. She writes about the experiences of young people — from famous kids like Malala Yousafzai and Claudette Colvin to everyday kids — who stood up for what they thought was right and made a difference in their communities. Paul offers loads of tactics for young people to use in their own activism — and proves you’re never too young to change the world.


I first encountered Cleo Wade‘s delightful, heartfelt words of wisdom like most people, on Instagram. Cleo has over 350,000 followers on her popular feed that features short poems, bits of wisdom and pics. Cleo has been called the poet of her generation, everybody’s BFF and the millennial Oprah. In her new poetry collection, Heart Talk: Poetic Wisdom for a Better Life, the poet, artist and activist shares some of the Instagram notes she wrote “while sitting in her apartment, poems about loving, being and healing” and “the type of good ol’-fashioned heartfelt advice I would share with you if we were sitting in my home at my kitchen table.”


In 1994, the Rwandan Civil War forced six-year-old Clemantine Wamariya and her fifteen-year-old sister from their home in Kigali, leaving their parents and everything they knew behind. In her 2017 TEDWomen talk, Clemantine shared some of her experiences over the next six years growing up while living in refugee camps and migrating through seven African countries.

In her new memoir, The Girl Who Smiled Beads: A Story of War and What Comes After, Clemantine recounts her harrowing story of hunger, imprisonment, and not knowing whether her parents were alive or dead. At the age of 12, she moved to Chicago and was raised in part by an American family. It’s an incredible, poignant story and one that is so important during this time when many are denying the humanity of people who are victims of war and civil unrest. For her part, Clemantine remains hopeful. “There are a lot of great people everywhere,” she told the Washington Post. “And there are also a lot of not-so-great people. It’s all over the world. But when we stepped out of the airplane, we had people waiting for us — smiling, saying, ‘Welcome to America.’ People were happy. Many countries were not happy to have us. Right now there are people at the airport still holding those banners.”

TEDWOMEN 2018

I also want to mention that registration for TEDWomen 2018 is open now! Space is limited and I don’t want you to miss out. This year, TEDWomen will be held Nov. 28–30 in Palm Springs, California. The theme is Showing Up.

The time for silent acceptance of the status quo is over. Women around the world are taking matters into their own hands, showing up for each other and themselves to shape the future we all want to see. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!

— Pat

Worse Than FailureCodeSOD: External SQL

"Externalize your strings" is generally good advice. Maybe you pull them up into constants, maybe you move them into a resource file, but putting a barrier between your code and the strings you output makes everything more flexible.

But what about strings that aren't output? Things like, oh… database queries? We want to be cautious about embedding SQL directly into our application code, but our SQL code often is our business logic, so it makes sense to inline it. Most data access layers end up trying to abstract the details of SQL behind method calls, whether it's just a simple repository or an advanced ORM approach.

Sean found a… unique approach to resolving this tension in some Java code he inherited. He saw lots of references to keys in a hash-map, keys like user or pw or insert_account_table or select_all_transaction_table. But where did these keys get defined?

Like all good strings, they were externalized into a file called sql.txt. A simple regex-based parser loaded the data and created the dictionary. Now, any module which wanted to query the database had a map of any query they could possibly want to run. Just chuck 'em into a PreparedStatement object and you're ready to go.
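The loader itself is easy to picture: take each non-blank line, split on the first " = ", and stuff the pieces into the map. The original parser was Java; the sketch below renders the same idea in Python and takes nothing from Sean's actual code beyond the file name:

def load_queries(path="sql.txt"):
    queries = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            # split only on the first " = ", since the SQL itself contains "="
            key, sep, value = line.partition(" = ")
            if sep:
                queries[key.strip()] = value.strip()
    return queries

# queries["select_user"] -> "select * from account_table where username = (?)"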

Here, in its entirety, is the sql.txt file.

user = root
pw = password
db_name = lrc_mydb

create_account_table = create table if not exists account_table(username varchar(45) not null, password text not null, last_name text, first_name text, mid_name text, suffix_name text, primary key (username))
create_course_table = create table if not exists course_table (course_abbr char(45) not null unique, course_name text, primary key(course_abbr))
create_student_table = create table if not exists student_table (username varchar(45) not null, registration_date date, year_lvl char(45), photolink longblob, freetime time, course_abbr char(45) not null, status char(45) not null, balance double not null, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, foreign key fk_course_abbr(course_abbr) references course_table(course_abbr) on update cascade on delete cascade, primary key(username))
create_admin_table = create table if not exists admin_table (username varchar(45) not null, delete_priv boolean, settle_priv boolean, db_access boolean, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, primary key(username))
create_reservation_table = create table if not exists reservation_table (username varchar(45) not null, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, primary key(username))
create_service_table = create table if not exists service_table (service_id int not null auto_increment, service_name text, amount double, page_requirement boolean, primary key (service_id))
create_pc_table = create table if not exists pc_table (pc_id char(45) not null, ip_address varchar(45), primary key (pc_id))
create_transaction_table = create table if not exists transaction_table (transaction_id int not null auto_increment, date_rendered date, amount_paid double unsigned not null,cost_payable double, username varchar(45) not null, service_id int not null, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade, foreign key fk_service_id(service_id) references service_table(service_id) on update cascade on delete cascade, primary key (transaction_id))
create_pc_usage_table = create table if not exists pc_usage_table (transaction_id int not null, pc_id char(45) not null, login_time time, logout_time time, foreign key fk_pc_id(pc_id) references pc_table(pc_id) on update cascade on delete cascade, foreign key fk_transaction_id(transaction_id) references transaction_table(transaction_id) on update cascade on delete cascade, primary key(transaction_id))
create_pasa_hour_table = create table if not exists pasa_hour_table (transaction_id int not null auto_increment, date_rendered date, sender varchar(45) not null, amount_time time, current_free_sender time, deducted_free_sender time, receiver varchar(45) not null, current_free_receiver time, added_free_receiver time, primary key(transaction_id))
create_receipt_table = create table if not exists receipt_table (dates date, receipt_id varchar(45) not null, transaction_id int not null, username varchar(45) not null, amount_paid double, amount_change double, foreign key fk_transaction_id(transaction_id) references transaction_table(transaction_id) on update cascade on delete cascade, foreign key fk_username(username) references account_table(username) on update cascade on delete cascade)
create_cash_flow_table = create table if not exists cash_flow_table (dates date, cash_in double, cash_close double, cash_out double, primary key(dates))
create_free_pc_usage_table = create table if not exists free_pc_usage_table (transaction_id int not null, foreign key fk_transaction_id(transaction_id) references transaction_table(transaction_id) on update cascade on delete cascade, primary key(transaction_id))
create_diagnostic_table = create table if not exists diagnostic_table (sem_id int not null auto_increment , date_start date, date_end date, sem_num enum('first', 'second', 'mid year'), freetime time, time_penalty double, balance_penalty double, primary key(sem_id))
create_pasa_balance_table = create table if not exists pasa_balance_table (transaction_id int not null auto_increment, date_rendered date, sender varchar(45) not null, amount double, current_balance_sender double, deducted_balance_sender double, receiver varchar(45) not null, current_balance_receiver double, added_balance_receiver double, primary key(transaction_id))

insert_account_table = insert into account_table values (?, password(?), ?, ?, ?, ?)
insert_course_table = insert into course_table values (?, ?)
insert_student_table = insert into student_table values (?, now(), ?, ?, ?, ?, ?, ?)
insert_admin_table = insert into admin_table values (?, ?, ?, ?)
insert_reservation_table = insert into reservation_table values (?)
insert_service_table = insert into service_table (service_name, amount, page_requirement) values (?, ?, ?)
insert_pc_table = insert into pc_table values (?, ?)
insert_transaction_table = insert into transaction_table (date_rendered, amount_paid, cost_payable, username, service_id) values (now(), ?, ?, ?, ?)
insert_pc_usage_table = insert into pc_usage_table values (?, ?, ?, ?)
insert_pasa_hour_table = insert into pasa_hour_table (date_rendered, sender, amount_time, current_free_sender, deducted_free_sender, receiver, current_free_receiver, added_free_receiver) values (curdate(), ?, ?, ?, ?, ?, ?, ?)
insert_free_pc_usage_table = insert into free_pc_usage_table values (?)
insert_cash_flow_table = insert into cash_flow_table values (curdate(), ?, ?, ?)
insert_receipt_table = insert into receipt_table values (curdate(), ?, ?, ?, ?, ?)
insert_diagnostic_table = insert into diagnostic_table (date_start, date_end, sem_num, freetime, time_penalty, balance_penalty) values (?, ?, ?, ?, ?, ?)
insert_pasa_balance_table = insert into pasa_balance_table (date_rendered, sender, amount, current_balance_sender, deducted_balance_sender, receiver, current_balance_receiver, added_balance_receiver) values (curdate(), ?, ?, ?, ?, ?, ?, ?)

delete_reservation_table = delete from reservation_table where username = ?
delete_course_table = delete from course_table where course_abbr = ?
delete_user_assoc_to_course = delete account_table, student_table from student_table inner join account_table on account_table.username = student_table.username where student_table.course_abbr = ?
delete_service_table = delete from service_table where service_name = ?
delete_user_student = delete account_table, student_table from student_table inner join account_table on account_table.username = student_table.username where student_table.username = ?
delete_user_staff = delete account_table, admin_table from admin_table inner join account_table on account_table.username = admin_table.username where admin_table.username = ?

select_total_cost = select sum(cost_payable - amount_paid) from transaction_table where username = ? and cost_payable > amount_paid
select_time_penalty = select time_penalty from diagnostic_table where sem_id = ?
select_balance_penalty = select balance_penalty from diagnostic_table where sem_id = ?
select_balance = select balance from student_table where username = ?
select_accountabilities = select sum(cost_payable - amount_paid) from transaction_table where username = ? and cost_payable > amount_paid
select_count_service_table = select count(*) from service_table
select_count_course_table = select count(*) from course_table
select_course_count = select count(course_abbr) from student_table where course_abbr = ?
select_course_abbr = select course_abbr from course_table where course_name = ?
select_degree_name_abbr = select * from course_table
select_service_name = select * from service_table
select_service_name1 = select service_name from service_table where service_id = ?
select_services_amount = select * from service_table
select_username = select * from account_table where username = (?) and password = password(?)
select_user = select * from account_table where username = (?)
select_reserved_user = select * from reservation_table where username = (?)
select_existing_course = select * from course_table where course_abbr = (?)
select_existing_service = select * from service_table where service_name = (?)
select_existing_transaction_id = select transaction_id from transaction_table where transaction_id = ?
select_user_is_active = select status from student_table where username = ?
select_page_requirement = select page_requirement from service_table where service_name = ?
select_user_details = select account_table.username as 'Username', concat(account_table.last_name, ', ', account_table.first_name, ' ', account_table.suffix_name, ' ', account_table.mid_name) as 'Name',  student_table.course_abbr as 'Degree Program', student_table.year_lvl as 'Year Level', student_table.freetime as 'Free Time' from account_table inner join student_table on account_table.username = student_table.username where student_table.username = ?
select_amount_service = select amount from service_table where service_name = ?
select_id_service = select * from service_table where service_name = ?
select_freetime = select student_table.freetime from student_table inner join transaction_table on student_table.username = transaction_table.username where transaction_table.transaction_id = ?
select_timediff = select timediff(time(?), timediff(time(logout_time), time(login_time))) as 'timedifference' from pc_usage_table where transaction_id = ?
select_trans_user = select username from transaction_table where transaction_id = ?
select_pc_id1 = select pc_id from pc_table where ip_address = ?
select_timedifference = select timediff(time(?), timediff(curtime(), time(?))) as 'timedifference' from pc_usage_table where transaction_id = ?
select_logout_time = select logout_time from pc_usage_table where transaction_id = ?
select_login_time = select login_time from pc_usage_table where transaction_id = ?
select_now = select curtime()
select_time_consumed = select timediff(time(logout_time), time(login_time)) as 'timedifference' from pc_usage_table where time_to_sec(timediff(time(logout_time), time(login_time))) < time_to_sec(time(?)) and transaction_id = ?
select_freetime_user = select freetime from student_table where username = ?
select_cost_transaction = select cost_payable from transaction_table where transaction_id = ?
select_amount_transaction = select amount_paid from transaction_table where transaction_id = ?
select_pc_id_from_trans = select pc_id from pc_usage_table where transaction_id = ?
select_pc_id2 = select pc_table.pc_id from pc_table
select_transactions_with_accountabilities = select transaction_id from transaction_table where username = ? and amount_paid < cost_payable
select_picture = select photolink from student_table where username = ?
select_diagnostic_table2 = select * from diagnostic_table where sem_id = ?
select_diagnostic_table = select * from diagnostic_table order by diagnostic_table.date_end desc limit 1

select_filtered_username = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table inner join student_table on account_table.username = student_table.username where account_table.username like (?) and student_table.username like (?) group by username
select_filtered_lastname = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where account_table.last_name like ? group by username
select_filtered_firstname = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where account_table.first_name like ? group by username
select_filtered_yearlvl = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.year_lvl like ? group by username
select_filtered_degprog = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.course_abbr like ? group by username

select_filtered_username2 = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.username like ? group by transaction_id
select_filtered_servicename = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where service_table.service_name like ? group by transaction_id
select_filtered_date = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.date_rendered like ? group by Transaction_id

select_all = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.freetime as 'Free Time', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username
select_filtered_active = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.freetime as 'Free Time', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.status = 'active'
select_filtered_inactive = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', student_table.year_lvl as 'Year Level', student_table.course_abbr as 'Degree Program', student_table.status as 'Status', student_table.freetime as 'Free Time', student_table.balance as 'Balance', ifnull((select sum(transaction_table.cost_payable - transaction_table.amount_paid) from transaction_table where transaction_table.cost_payable > transaction_table.amount_paid and transaction_table.username = account_table.username),0) as 'Accountabilities' from account_table join student_table on account_table.username = student_table.username where student_table.status = 'inactive'

select_online_pc =
select_reserved_pc = select reservation_table.username as 'Username' from reservation_table
select_staff_table = select account_table.username as 'Username', account_table.last_name as 'Last Name', concat(account_table.first_name, ', ', account_table.suffix_name) as 'First Name', account_table.mid_name as 'Middle Name', admin_table.delete_priv as 'Delete Privilege', admin_table.settle_priv as 'Settle Privilege', admin_table.db_access as 'Database Access' from account_table inner join admin_table on account_table.username = admin_table.username
select_degree_table = select course_table.course_name as 'Degree Program', course_table.course_abbr as 'Abbreviation' from course_table
select_service_table = select service_name as 'Service Name', amount as 'Amount' from service_table
select_pasa_hour = select pasa_hour_table.date_rendered as 'Date', pasa_hour_table.amount_time as 'Amount Time', concat(pasa_hour_table.sender, '     ( ', pasa_hour_table.current_free_sender, '  -  ', pasa_hour_table.deducted_free_sender, ' )') as 'Sender (Current - Deducted)', concat(pasa_hour_table.receiver, '     ( ', pasa_hour_table.current_free_receiver, '  -  ', pasa_hour_table.added_free_receiver, ' )') as 'Receiver (Current - Added)' from pasa_hour_table
select_pasa_bal = select date_rendered as 'Date', amount as 'Amount Time', concat(sender, '     ( ', current_balance_sender, '  -  ', deducted_balance_sender, ' )') as 'Sender (Current - Deducted)', concat(receiver, '     ( ', current_balance_receiver, '  -  ', added_balance_receiver, ' )') as 'Receiver (Current - Added)' from pasa_balance_table

select_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.date_rendered = curdate()
select_all_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id
select_paid_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.cost_payable <= transaction_table.amount_paid
select_unpaid_transaction_table = select transaction_table.date_rendered as 'Date', transaction_table.transaction_id as 'Transaction ID', transaction_table.username 'Username', service_table.service_name 'Service Name', substring(transaction_table.cost_payable, 1, 5) as 'Cost', substring(transaction_table.amount_paid, 1, 5) as 'Amount Rendered' from transaction_table inner join service_table on transaction_table.service_id = service_table.service_id where transaction_table.cost_payable > transaction_table.amount_paid

select_usage_daily = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where date_rendered = ?)) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where date_rendered = ?)) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where date_rendered = ?
select_usage_monthly = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ? and monthname(date_rendered) = ?)) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ? and monthname(date_rendered) = ?)) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where year(e.date_rendered) = ? and monthname(date_rendered) = ?
select_usage_annual = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ?)) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where year(date_rendered) = ?)) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where year(e.date_rendered) = ?
select_usage_semestral = select distinct a.pc_id as 'PC Number', (select count(b.pc_id) from pc_usage_table b where b.pc_id = a.pc_id && b.transaction_id in (select transaction_id from transaction_table where date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)))) as 'Total # of Transactions', (select count(distinct c.username) from transaction_table c where c.transaction_id in (select d.transaction_id from pc_usage_table d where d.pc_id = a.pc_id) && c.transaction_id in (select transaction_id from transaction_table where transaction_table.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)))) as 'Total # of Users' from pc_usage_table a join transaction_table e on a.transaction_id = e.transaction_id where e.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?))

select_student_daily = select account_table.username, concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name), student_table.course_abbr from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where year(transaction_table.date_rendered) = ? and transaction_table.date_rendered = ?)
select_student_monthly = select account_table.username as 'Student Number', concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name) as 'Name', student_table.course_abbr as 'Degree Program' from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where year(transaction_table.date_rendered) = ? and monthname(transaction_table.date_rendered) = ?)
select_student_annual = select account_table.username as 'Student Number', concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name) as 'Name', student_table.course_abbr as 'Degree Program' from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where year(transaction_table.date_rendered) = ?)
select_student_semestral = select account_table.username as 'Student Number', concat(account_table.last_name, ', ', account_table.first_name, ', ', account_table.suffix_name, ', ', account_table.mid_name) as 'Name', student_table.course_abbr as 'Degree Program' from account_table inner join student_table on student_table.username = account_table.username where account_table.username in (select transaction_table.username from transaction_table inner join pc_usage_table on transaction_table.transaction_id = pc_usage_table.transaction_id where transaction_table.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)))

select_transaction_daily = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where transaction_table.date_rendered = ? group by transaction_table.service_id
select_transaction_monthly = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where year(transaction_table.date_rendered) = ? and monthname(transaction_table.date_rendered) = ? group by transaction_table.service_id
select_transaction_annual = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where year(transaction_table.date_rendered) = ? group by transaction_table.service_id
select_transaction_semestral = select service_table.service_name as 'Service Name', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where transaction_table.date_rendered between (select date_start from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) and (select date_end from diagnostic_table where sem_num = ? and (year(date_start) = ? or year(date_end) = ?)) group by transaction_table.service_id

select_latest_trans = select transaction_table.date_rendered as 'Date', service_table.service_name as 'Service Name', substring(transaction_table.amount_paid,1,5) as 'Cash Rendered', substring(transaction_table.cost_payable,1,5) as "Cost Payable" from transaction_table inner join service_table on service_table.service_id = transaction_table.service_id where transaction_table.username = ? order by transaction_table.transaction_id desc limit 5
select_trans_by_user = select service_table.service_name as 'Service Name', sum(transaction_table.amount_paid) as 'Amount Paid', sum(transaction_table.cost_payable) as 'Cost Payable' from service_table join transaction_table on service_table.service_id = transaction_table.service_id where transaction_table.username = ? and transaction_table.amount_paid < transaction_table.cost_payable group by transaction_table.service_id

update_activate_student = update student_table set status = 'active' where username = ?
update_deactivate_student = update student_table set status = 'inactive' where username = ?
update_profile_pic = update student_table set photolink = ? where username = ?
update_amount = update transaction_table set amount_paid = ? where transaction_id = ?
update_cash_close = update cash_flow_table set cash_close = cash_close + ? where dates = curdate()
update_balance = update student_table set balance = ? where username = ?
update_logout_expand = update pc_usage_table set logout_time = ? where transaction_id = ?
update_cost_transaction = update transaction_table set cost_payable = (select cost_payable + ? where transaction_id = ?) where transaction_id = ?
update_cost_transaction_plain = update transaction_table set cost_payable = ? where transaction_id = ?
update_amount_transaction = update transaction_table set amount_paid = (select amount_paid + ? where transaction_id = ?) where transaction_id = ?
update_pasa_hour_table = update pasa_hour_table set deducted_free_sender = ?, added_free_receiver = ? where transaction_id = ?
update_pasa_balance_table = update pasa_balance_table set deducted_balance_sender = ?, added_balance_receiver = ? where transaction_id = ?
update_receiver_time = update student_table set freetime = (select addtime(freetime,time(?)) where username = ?) where username = ?
update_sender_time = update student_table set freetime = (select timediff(freetime,time(?)) where username = ?) where username = ?
update_logout_pending = update pc_usage_table set logout_time = (select addtime(time(login_time), time(?))) where transaction_id = ?
update_logout_time = update pc_usage_table set logout_time = curtime() where transaction_id = ?
update_logout_time_with_reference = update pc_usage_table set logout_time = ? where transaction_id = ?
update_user_time = update student_table set freetime = ? where username = ?
update_reset_pw = update account_table set password = password(?) where username = ?
update_all_status = update student_table set status = 'inactive'
update_course_table = update course_table set course_abbr = ?, course_name = ? where course_abbr = ?
update_user_password = update account_table set password = password(?) where username = ? and password = password(?)
update_account_table = update account_table set username = ?, last_name = ?, first_name = ?, mid_name = ?, suffix_name = ? where username = ?
update_admin_table = update admin_table set username = ?, delete_priv = ?, settle_priv = ?, db_access = ? where username = ?
update_student_table = update student_table set username = ?, year_lvl = ?, course_abbr = ?, status = ? where username = ?
update_service_table = update service_table set service_name = ?, amount = ?, page_requirement = ? where service_name = ?
[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Debian: Benjamin Mako Hill: Forming, storming, norming, performing, and …chloroforming?

In 1965, Bruce Tuckman proposed a “developmental sequence in small groups.” According to his influential theory, most successful groups go through four stages with rhyming names:

  1. Forming: Group members get to know each other and define their task.
  2. Storming: Through argument and disagreement, power dynamics emerge and are negotiated.
  3. Norming: After conflict, groups seek to avoid conflict and focus on cooperation and setting norms for acceptable behavior.
  4. Performing: There is both cooperation and productive dissent as the team performs the task at a high level.

Fortunately for organizational science, 1965 was hardly the last stage of development for Tuckman’s theory!

Twelve years later, Tuckman suggested adjourning or mourning as a potential fifth stage (Tuckman and Jensen 1977). Since then, other organizational researchers have suggested other stages including transforming and reforming (White 2009), re-norming (Biggs), and outperforming (Rickards and Moger 2002).

What does the future hold for this line of research?

To help answer this question, we wrote a regular expression to identify candidate words and placed the full list at this page in the Community Data Science Collective wiki.
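
A minimal sketch of such a search (not the actual script behind the wiki list; the dictionary path and the matching pattern here are illustrative assumptions) might look like this in Lua:

#!/usr/bin/env lua5.1
-- Minimal sketch: print dictionary words that could serve as future "-orming"
-- stages. The word list path below is an assumption for illustration.
local f = assert(io.open("/usr/share/dict/words", "r"))
for word in f:lines() do
  -- Lua pattern: keep words ending in "orming" (storming, performing, ...)
  if word:match("orming$") then
    print(word)
  end
end
f:close()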

The good news is that despite the active stream of research producing new stages that end or rhyme with -orming, there are tons of great words left!

For example, stages in a group’s development might include:

  • Scorning: In this stage, group members begin mocking each other!
  • Misinforming: Groups that reach this stage start producing fake news.
  • Shoehorning: These groups try to make their products fit into ridiculous constraints.
  • Chloroforming: Groups become languid and fatigued?

One benefit of keeping our list in the wiki is that the organizational research community can use it to coordinate! If you are planning to use one of these terms—or if you know of a paper that has—feel free to edit the page in our wiki to “claim” it!


Also posted on the Community Data Science Collective blog. Although credit for this post goes primarily to Jeremy Foote and Benjamin Mako Hill, the other Community Data Science Collective members can’t really be called blameless in the matter either.

,

Cory Doctorow: Podcast: Let’s get better at demanding better from tech


Here’s my reading (MP3) of Let’s get better at demanding better from tech, a Locus Magazine column about the need to enlist moral, ethical technologists in the fight for a better technological future. It was written before the death of EFF co-founder John Perry Barlow, whose life’s work was devoted to this proposition, and before the Google uprising over Project Maven, in which technologists killed millions of dollars in military contracts by refusing to build AI systems for the Pentagon’s drones.

MP3

Worse Than Failure: Sponsor Post: Error Logging vs. Crash Reporting

A lot of developers confuse error and crash reporting tools with traditional logging. And it’s easy to conflate the two without understanding them in more detail.

Dedicated logging tools give you a running history of events that have happened in your application. Dedicated error and crash reporting tools focus on the issues users face that occur when your app is in production, and record the diagnostic details surrounding the problem that happened to the user, so you can fix it with greater speed and accuracy.

Most error logging activities within software teams remain just that: a log of errors that are never actioned or fixed.

Traditionally speaking, when a user reports an issue, you might find yourself hunting around in log files searching for what happened so you can debug it successfully.

Having an error reporting tool running silently in production means not only do users not need to report issues, as they are identified automatically, but each one is displayed in a dashboard, ranked by severity. Teams are able to get down to the root cause of an issue in seconds, not hours.

Full diagnostic details about the issue are presented to the developer immediately. Information such as OS, browser, machine, a detailed stack trace, a history of events leading up to the issue and even which individual users have encountered the specific issue are all made available.

In short, when trying to solve issues in your applications, you immediately see the needle, without bothering with the haystack.

Error monitoring tools are designed to give you answers quickly. Once you experience how they fit into the software development workflow and work alongside your logging, you won’t want to manage your application errors in any other way.

So next time you’re struggling to resolve problems in your apps - Think Raygun.

Your life as a developer will be made so much easier.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Worse Than Failure: A Hard SQL Error

Prim Maze

Padma was the new guy on the team, and that sucked. When you're the new guy, but you're not new to the field, there's this maddening combination of factors that can make onboarding rough: not knowing the product well enough to be efficient, but knowing your craft well enough to expect efficiency. After all, if you're a new intern, you can fall back on general-purpose tutorials and at least feel like you're learning new things. When you're a senior trying to make sense of your new company's dizzying array of under-documented products? The only way to get that knowledge is by dragging people who are already efficient away from what they're doing to ask.

By the start of week 2, however, Padma knew enough to get his hands dirty with some smaller bug-fixes. By the end of it, he'd begun browsing the company bug tracker looking for more work on his own. That's when he came across this bug report that seemed rather urgent:

Error: Can't connect to local MySQL server

It had been in the tracker for a month. That could mean a lot of things, all of them opaque when you're new enough not to know anyone. Was it impossible to reproduce? Was it one of those reports thrown in by someone who liked to tamper with their test environment and blame things breaking on the coders? Was their survey product just low priority enough that they hadn't gotten around to fixing it? Which client was this for?

It took Padma a few hours to dig into it enough to get to the root of the problem. The repository for their survey product was stored in their private github, one of dozens of repositories with opaque names. He found the codename of the product, "Santiago," by reading older tickets filed against the same product, before someone had renamed the tag to "Survey Deluxe." There was a branch for every client, an empty Master branch, and a Development branch as the default; he reached back out to the reporter for the name of the client so he could pull up their branch. Of course they had a "clientname" branch, a "clientname-new," and a "clientname3.0," but after comparing merge histories, he eventually discovered the production code: in a totally different branch, after they had merged two clients' environments together for a joint venture. Of course.

But finally, he had the problem reproduced in his local dev environment. After an hour of digging through folders, he found the responsible code:


<h2 id="survey">Surveys</h2>
        <div style="margin-left:10px;">
        <ul class="submenu">
                <li><a href="survey1.php">Survey #1</a></li>
                <li><a href="survey2.php">Survey #2</a><span style="color:red">Can't connect to local MySQL server through socket '/tmp/mysql.sock' (2)</span></li>
        </ul>
</div>

"But ... why?!" Padma growled at the screen.

"Oh, is that Santiago?" asked his neighbor, leaning over to see his screen. "Yeah, they requested a one-for-one conversion from their previous product. Warts and all. Seems they thought that was the name of the survey, and it was important that it be in red so they could find it easily enough."

Padma stared at the code in disbelief. After a long moment, he closed the editor and the browser, deleted the code from his hard drive, and closed the ticket "won't fix."

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Cryptogram: Secure Speculative Execution

We're starting to see research into designing speculative execution systems that avoid Spectre- and Meltdown-like security problems. Here's one.

I don't know if this particular design is secure. My guess is that we're going to see several iterations of design and attack before we settle on something that works. But it's good to see the research results emerge.

News article.

,

Planet Debian: Gunnar Wolf: Yes! I am going to...

Having followed through some paperwork I was still missing...

I can finally say...

Dates

I’m going to DebCamp18! I should arrive at NCTU in the afternoon/evening of Tuesday, 2018-07-24.

I will spend a day prior to that in Tokyo, visiting a friend and probably doing some micro-tourism.

My Agenda

Of course, DebCamp is not a vacation, so we expect people who take part in DebCamp to have at least a rough sketch of activities. There are many, many things I want to tackle, and experience shows there's only time for a fraction of what's planned. But let's try:

  • keyring-maint training: We want to add one more member to the keyring-maint group. There is a lot to prepare before any announcements, but I expect a good chunk of DebCamp to be spent explaining the details to a new team member.
  • DebConf organizing: While I'm no longer a core orga-team member, I am still quite attached to helping out during the conference. This year, I took the Content Team lead, and we will surely be ironing out details such as fixing schedule bugs.
  • Raspberry Pi images: I replied to Michael Stapelberg's call for adoption of the unofficial-but-blessed Raspberry Pi 3 disk images. I will surely be spending some time on that.
  • Key Signing Party Coordination: I just sent out the Call for keys for keysigning in Hsinchu, Taiwan. At that point, I expect very little work to be needed, but it will surely be on my radar.

Of course... I *do* want to spend some minutes outside NCTU and get to know a bit of Taiwan. This is my first time in East Asia, and I don't know when, if ever, I will have the opportunity to be there again. So, I will try to have at least the time to enjoy a little bit of Taiwan!

Planet Debian: Dirk Eddelbuettel: #19: Intel MKL in Debian / Ubuntu follow-up

Welcome to the (very brief) nineteenth post in the ruefully recalcitrant R reflections series of posts, or R4 for short.

About two months ago, in the most recent post in the series, #18, we provided a short tutorial about how to add the Intel Math Kernel Library to a Debian or Ubuntu system thanks to the wonderful apt tool -- and the prepackaged binaries by Intel. This made for a simple, reproducible, scriptable, and even reversible (!!) solution---which a few people seem to have appreciated. Good.

In the meantime, more good things happened. Debian maintainer Mo Zhou had posted this 'intent-to-package' bug report leading to this git repo on salsa and this set of packages currently in the 'NEW' package queue.

So stay tuned: "soon" (for various definitions of "soon") we should be able to directly get the MKL onto Debian systems via apt without needing Intel's repo. And in a release or two, Ubuntu should catch up. The fastest multithreaded BLAS and LAPACK for everybody, well-integrated and packaged. That said, it is still a monstrously large package so I mostly stick with the (truly open source rather than just 'gratis') OpenBLAS but hey, choice is good. And yes, technically these packages are 'outside' of Debian in the non-free section but they will be visible to almost all default configurations.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Debian: Russ Allbery: Review: The Trouble with Physics

Review: The Trouble with Physics, by Lee Smolin

Publisher: Mariner
Copyright: 2006
Printing: 2007
ISBN: 0-618-91868-X
Format: Trade paperback
Pages: 355

A brief recap of the state of theoretical physics: Quantum mechanics and particle physics have settled on the standard model, which provides an apparently complete inventory of fundamental particles and explains three of the four fundamental forces. This has been very experimentally successful up to and including the recent tentative observation of the Higgs boson, one of the few predictions of the standard model that had yet to be confirmed by experiment. Meanwhile, Einstein's theory of general relativity continues as the accepted explanation of gravity, experimentally verified once again by LIGO and Virgo detection of gravitational waves.

However, there are problems. Perhaps the largest is the independence of these two branches of theoretical physics: quantum mechanics does not include or explain gravity, and general relativity does not sit easily alongside current quantum theory. This causes theoretical understanding to break down in situations where both theories need to be in play simultaneously, such as the very early universe or event horizons of black holes.

There are other problems within both theories as well. Astronomy shows that objects in the universe behave as if there is considerably more mass in galaxies than we've been able to observe (the dark matter problem), but we don't have a satisfying theory of what would make up that mass. Worse, the universe is expanding more rapidly than it should, requiring introduction of a "dark energy" concept with no good theoretical basis. And, on the particle physics side, the standard model requires a large number (around 20, depending on how you measure them) of apparently arbitrary free constants: numbers whose values don't appear to be predicted by any basic laws and therefore could theoretically be set to any value. Worse, if those values are set even very slightly differently than we observe in our universe, the nature of the universe would change beyond recognition. This is an extremely unsatisfying property for an apparently fundamental theory of nature.

Enter string theory, which is the dominant candidate for a deeper, unifying theory behind the standard model and general relativity that tries to account for at least some of these problems. And enter this book, which is a critique of string theory as both a scientific theory and a sociological force within the theoretical physics community.

I should admit up-front that Smolin's goal in writing this book is not the same as my goal in reading it. His primary concern is the hold that string theory has on theoretical physics and the possibility that it is stifling other productive avenues, instead spinning off more and more untestable theories that can be tweaked to explain any experimental result. It may even be leading people to argue against the principles of experimental science itself (more on that in a moment). But to mount his critique for the lay reader, he has to explain the foundations of both accepted theoretical physics and string theory (and a few of the competing alternative theories). That's what I was here for.

About a third of this book is a solid explanation of the history and current problems of theoretical physics for the lay person who is already familiar with basic quantum mechanics and general relativity. Smolin is a faculty member at the Perimeter Institute for Theoretical Physics and has done significant work in string theory, loop quantum gravity (one of the competing attempts to unify quantum mechanics and general relativity), and the (looking dubious) theory of doubly special relativity, so this is an engaged and opinionated overview from an active practitioner. He lays out the gaps in existing theories quite clearly, conveys some of the excitement and disappointment of recent (well, as of 2005) discoveries and unsolved problems, provides a solid if succinct summary of string theory, and manages all of that without relying on too much complex math. This is exactly the sort of thing I was looking for after Brian Greene's The Elegant Universe.

Another third of this book is a detailed critique of string theory, and specifically the assumption that string theory is correct despite its lack of testable predictions and its introduction of new problems. I noted in my review of Greene's book that I was baffled by his embrace of a theory that appears to add even more free variables than the standard model, an objection that he skipped over entirely. Smolin tackles this head-on, along with other troublesome aspects of a theory that is actually an almost infinitely flexible family of theories and whose theorized unification (M-theory) is still just an outline of a hoped-for idea.

The core of Smolin's technical objection to string theory is that it is background-dependent. Like quantum mechanics, it assumes a static space-time backdrop against which particle or string interactions happen. However, general relativity is background-independent; indeed, that's at the core of its theoretical beauty. It states that the shape of space-time itself changes, and is a participant in the physical effects we observe (such as gravity). Smolin argues passionately that background independence is a core requirement for any theory that aims to unify general relativity and quantum mechanics. As long as a theory remains background-dependent, it is, in his view, missing Einstein's key insight.

The core of his sociological objection is that he believes string theory has lost its grounding in experimental verification and has acquired far too much aura of certainty than it deserves given its current state, and has done so partly because of the mundane but pernicious effects of academic and research politics. On this topic, I don't know nearly enough to referee the debate, but his firm dismissal of attempts to justify string theory's weaknesses via the anthropic principle rings true to me. (The anthropic principle, briefly, is the idea that the large number of finely-tuned free constants in theories of physics need not indicate a shortcoming in the theory, but may be that way simply because, if they weren't, we wouldn't be here to observe them.) Smolin's argument is that no other great breakthroughs of physics have had to rely on that type of hand-waving, elegance of a theory isn't sufficient justification to reach for this sort of defense, and that to embrace the anthropic principle and its inherent non-refutability is to turn one's back on the practice of science. I suspect this ruffled some feathers, but Smolin put his finger squarely on the discomfort I feel whenever the anthropic principle comes up in scientific discussions.

The rest of the book lays out some alternatives to string theory and some interesting lines of investigation that, as Smolin puts it, may not pan out but at least are doing real science with falsifiable predictions. This is the place where the book shows its age, and where I frequently needed to do some fast Wikipedia searching. Most of the experiments Smolin points out have proven to be dead ends: we haven't found Lorentz violations, the Pioneer anomaly had an interesting but mundane explanation, and the predictions of modified Newtonian dynamics do not appear to be panning out. But I doubt this would trouble Smolin; as he says in the book, the key to physics for him is to make bold predictions that will often be proven wrong, but that can be experimentally tested one way or another. Most of them will lead to nothing but one can reach a definitive result, unlike theories with so many tunable parameters that all of their observable effects can be hidden.

Despite not having quite the focus I was looking for, I thoroughly enjoyed this book and only wish it were more recent. The physics was pitched at almost exactly the level I wanted. The sociology of theoretical physics was unexpected but fascinating in a different way, although I'm taking it with a grain of salt until I read some opposing views. It's an odd mix of topics, so I'm not sure if it's what any other reader would be looking for, but hopefully I've given enough of an outline above for you to know if you'd be interested.

I'm still looking for the modern sequel to One Two Three... Infinity, and I suspect I may be for my entire life. It's hard to find good popularizations of theoretical physics that aren't just more examples of watching people bounce balls on trains or stand on trampolines with bowling balls. This isn't exactly that, but it's a piece of it, and I'm glad I read it. And I wish Smolin the best of luck in his quest for falsifiable theories and doable experiments.

Rating: 8 out of 10

Planet Debian: Hideki Yamane: OSSummit Japan 2018


I participated in OSSummit Japan 2018 as volunteer staff for three days.

Some Debian developers (Jose from Microsoft and Michael from credativ) gave a talk during this event.

,

Planet Debian: Sven Hoexter: nginx, lua, uuid and a nchan bug

At work we're running nginx in several instances. Sometimes running on Debian/stretch (Woooh) and sometimes on Debian/jessie (Boooo). To improve our request tracking abilities we set out to add a header with a version 4 UUID if it does not exist yet. We expected this to be a story we could implement in a few hours at most ...

/proc/sys/kernel/random/uuid vs lua uuid module

If you start to look around for how to implement it you might find out that there is a lua module to generate a UUID. Since this module is not packaged in Debian we started to think about packaging it, but on second thought we wondered if simply reading from the Linux /proc interface wouldn't be faster after all. So we built a very unscientific test case that we deemed good enough:

$ cat uuid_by_kernel.lua
#!/usr/bin/env lua5.1
local i = 0
repeat
  local f = assert(io.open("/proc/sys/kernel/random/uuid", "rb"))
  local content = f:read("*all")
  f:close()
  i = i + 1
until i == 1000


$ cat uuid_by_lua.lua
#!/usr/bin/env lua5.1
package.path = package.path .. ";/home/sven/uuid.lua"
local i = 0
repeat
  local uuid = require("uuid")
  local content = uuid()
  i = i + 1
until i == 1000

The result is in favour of using the Linux /proc interface:

$ time ./uuid_by_kernel.lua
real    0m0.013s
user    0m0.012s
sys 0m0.000s

$ time ./uuid_by_lua.lua
real    0m0.021s
user    0m0.016s
sys 0m0.004s

nginx in Debian/stretch vs nginx in Debian/jessie

Now that we had settled on the lua code

-- Generate a correlation id only when the request did not already carry one.
if (ngx.var.http_correlation_id == nil or ngx.var.http_correlation_id == "") then
  local f = assert(io.open("/proc/sys/kernel/random/uuid", "rb"))
  local content = f:read("*all")
  f:close()
  -- The kernel interface returns the UUID with a trailing newline; strip it.
  return content:sub(1, -2)
else
  return ngx.var.http_correlation_id
end

and the nginx configuration

set_by_lua_file $ngx.var.http_correlation_id /etc/nginx/lua-scripts/lua_uuid.lua;

we started to roll this one out to our mixed setup of Debian/stretch and Debian/jessie hosts. We had tested it on Debian/stretch, where it all worked fine, but we never gave it a try on Debian/jessie. Within seconds of the rollout all our nginx instances on Debian/jessie started to segfault.

Half an hour later it was clear that the nginx release shipped in Debian/jessie does not yet allow you to write directly into the internal variable $ngx.var.http_correlation_id. To work around this issue we configured nginx to use the add_header configuration option to create the header:

set_by_lua_file $header_correlation_id /etc/nginx/lua-scripts/lua_uuid.lua;
add_header correlation_id $header_correlation_id;

This configuration works on Debian/stretch and Debian/jessie.

Another possibility we considered was using the backported version of nginx. But this one depends on a newer openssl release. I didn't want to walk down the road of manually tracking potential openssl bugs against a release not supported by the official security team. So we rejected this option. Next item on the todo list is for sure the migration to Debian/stretch, which is overdue now anyway.

and it just stopped

A few hours later we found that the nginx running on Debian/stretch was still running, but no longer responding. Attaching strace revealed that all processes (worker and master) were waiting on a futex() call. Logs showed an assert pointing in the direction of the nchan module. I think the bug we're seeing is #446, I've added the few bits of additional information I could gather. We just moved on and disabled the module on our systems. Now it's running fine in all cases for a few weeks.

Kudos to Martin for walking down this muddy road together on a Friday.

Planet Debian: Steinar H. Gunderson: Nageru deployments

As we're preparing our Nageru video chains for another Solskogen, I thought it worthwhile to make some short posts about deployments in the wild (neither of which I had much involvement with myself):

  • The Norwegian municipality of Frøya is live streaming all of their council meetings using Nageru (Norwegian only). This is a fairly complex setup with a custom frontend controlling PTZ cameras, so that someone non-technical can just choose from a few select scenes and everything else just clicks into place.
  • Breizhcamp, a French technology conference, used Nageru in 2018, transitioning from OBS. If you speak French, you can watch their keynote about it (itself produced with Nageru) and all their other video online. Breizhcamp ran their own patched version of Nageru (available on Github); I've taken in most of their patches into the main repository, but not all of them yet.

Also, someone thought it was a good idea to take an old version of Nageru, strip all the version history and put it on Github with (apparently) no further changes. Like, what. :-)

Valerie Aurora: Yesterday’s joke protest sign just became today’s reality

Tomorrow I’m going to a protest against the forcible separation of immigrant children from their families. When I started thinking about what sign to make, I remembered my sign for the first Women’s March protest, the day after Trump took office in January 2017. It said: “Trump hates kids and puppies… for real!!!”

My protest sign for the 2017 Women’s March

While I expected a lot of terrifying things to happen over the next few years, I never, never thought that Trump would deliberately tear thousands of children away from their families and put them in concentration camps. I knew he hated children; I didn’t know he hated children (specifically, brown children) so much that he’d hold them hostage to force Congress to pass his racist legislation. I did not expect him and his party to try to sell cages full of weeping little boys as future gang members. I did not expect 55% of Republican voters to support splitting up families and putting them in camps. I’m smiling at the cute dog in that photo; now the entire concept of that sign seems impossibly naive and inappropriate, much less my expression in that photo. I apologize for this sign and my joking attitude.

I remember being terrified during the months between Trump’s election and his inauguration. I couldn’t sleep; I put together a go-bag; I bought three weeks worth of food and water and stored them in the closet. I read a dozen books on fascism and failed democracies. I even built a spreadsheet tracking signs of fascism so I’d know when to leave the country.

I came up with the concept of that sign as a way to increase people’s disgust for Trump; what kind of pathetic low-life creep hates kids AND puppies? But I still didn’t get how bad things truly were; I thought Trump hated kids in the sense that he didn’t want any of them around him and wouldn’t lift a finger to help them. I didn’t understand that he—and many people in his administration—took actual pleasure in knowing they were building camps full of crying, desperate, terrified kids who may never be reunited with their parents. In January 2017, I thought I understood the evil of this administration and of a significant percentage of the people in this country; actually, I way underestimated it.

At that protest, several people asked me if Trump really hated puppies, but not one person asked me if Trump really hated kids. In retrospect, this seems ominous, not funny.

I’m going to think very carefully before creating any more “joke” protest signs. Today’s “joke” could easily be tomorrow’s reality.

,

Planet Debian: Benjamin Mako Hill: I’m a maker, baby

 

What does the “maker movement” think of the song “Maker” by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

Cryptogram: Friday Squid Blogging: Capturing the Giant Squid on Video

In this 2013 TED talk, oceanographer Edith Widder explains how her team captured the giant squid on video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: Supreme Court: Police Need Warrant for Mobile Location Data

The U.S. Supreme Court today ruled that the government needs to obtain a court-ordered warrant to gather location data on mobile device users. The decision is a major development for privacy rights, but experts say it may have limited bearing on the selling of real-time customer location data by the wireless carriers to third-party companies.

Image: Wikipedia.

At issue is Carpenter v. United States, which challenged a legal theory the Supreme Court outlined more than 40 years ago known as the “third-party doctrine.” The doctrine holds that people who voluntarily give information to third parties — such as banks, phone companies, email providers or Internet service providers (ISPs) — have “no reasonable expectation of privacy.”

That framework in recent years has been interpreted to allow police and federal investigators to obtain information — such as mobile location data — from third parties without a warrant. But in a 5-4 ruling issued today that flies in the face of the third-party doctrine, the Supreme Court cited “seismic shifts in digital technology” allowing wireless carriers to collect “deeply revealing” information about mobile users that should be protected by the 4th Amendment to the U.S. Constitution, which is intended to shield Americans against unreasonable searches and seizures by the government.

Amy Howe, a reporter for SCOTUSblog.com, writes that the decision means police will generally need to get a warrant to obtain cell-site location information, a record of the cell towers (or other sites) with which a cellphone connected.

The ruling is no doubt a big win for privacy advocates, but many readers have been asking whether this case has any bearing on the sharing or selling of real-time customer location data by the mobile providers to third party companies. Last month, The New York Times revealed that a company called Securus Technologies had been selling this highly sensitive real-time location information to local police forces across the United States, thanks to agreements the company had in place with the major mobile providers.

It soon emerged that Securus was getting its location data second-hand through a company called 3Cinteractive, which in turn was reselling data from California-based “location aggregator” LocationSmart. Roughly two weeks after The Times’ scoop, KrebsOnSecurity broke the news that anyone could look up the real time location data for virtually any phone number assigned by the major carriers, using a buggy try-before-you-buy demo page that LocationSmart had made available online for years to showcase its technology.

Since those scandals broke, LocationSmart disabled its promiscuous demo page. More importantly, AT&T, Sprint, T-Mobile and Verizon all have said they are now in the process of terminating agreements with third-parties to share this real-time location data.

Still, there is no law preventing the mobile providers from hashing out new deals to sell this data going forward, and many readers here have expressed concerns that the carriers can and eventually will do exactly that.

So the question is: Does today’s Supreme Court ruling have any bearing whatsoever on mobile providers sharing location data with private companies?

According to SCOTUSblog’s Howe, the answer is probably “no.”

“[Justice] Roberts emphasized that today’s ruling ‘is a narrow one’ that applies only to cell-site location records,” Howe writes. “He took pains to point out that the ruling did not ‘express a view on matters not before us’ – such as obtaining cell-site location records in real time, or getting information about all of the phones that connected to a particular tower at a particular time. He acknowledged that law-enforcement officials might still be able to obtain cell-site location records without a warrant in emergencies, to deal with ‘bomb threats, active shootings, and child abductions.'”

However, today’s decision by the high court may have implications for companies like Securus which have marketed the ability to provide real-time mobile location data to law enforcement officials, according to Jennifer Lynch, a senior staff attorney with the Electronic Frontier Foundation, a nonprofit digital rights advocacy group.

“The court clearly recognizes the ‘deeply revealing nature’ of location data and recognizes we have a privacy interest in this kind of information, even when it’s collected by a third party (the phone companies),” Lynch wrote in an email to KrebsOnSecurity. “I think Carpenter would have implications for the Securus context where the phone companies were sharing location data with non-government third parties that were then, themselves, making that data available to the government.”

Lynch said that in those circumstances, there is a strong argument the government would need to get a warrant to access the data (even if the information didn’t come directly from the phone company).

“However, Carpenter’s impact in other contexts — specifically in contexts where the government is not involved — is much less clear,” she added. “Currently, there aren’t any federal laws that would prevent phone companies from sharing data with non-government third parties, and the Fourth Amendment would not apply in that context.”

And there’s the rub: There is nothing in the current law that prevents mobile companies from sharing real-time location data with other commercial entities. For that reality to change, Congress would need to act. For more on the prospects of that happening and how we wound up here, check out my May 26 story, Why is Your Location Data No Longer Private?

The full Supreme Court opinion in Carpenter v. United States is available here (PDF).

Cryptogram: The Effects of Iran's Telegram Ban

The Center for Human Rights in Iran has released a report outlining the effects of that country's ban on Telegram, a secure messaging app used by about half of the country.

The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran's elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.

From a Wired article:

Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.

It's interesting that the analysis doesn't really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.

CryptogramDomain Name Stealing at Gunpoint

I missed this story when it came around last year: someone tried to steal a domain name at gunpoint. He was just sentenced to 20 years in jail.

Worse Than FailureError'd: Be Patient!...OK?

"I used to feel nervous when making payments online, but now I feel ...um...'Close' about it," writes Jeff K.

 

"Looks like me and Microsoft have different ideas of what 75% means," Gary S. wrote.

 

George writes, "Try this one at home! Head to tdbank.com, search for 'documents for opening account' and enjoy 8 solid pages of ...this."

 

"I'm confused if the developers knew the difference between Javascript and Java. This has to be a troll...right?" wrote JM.

 

Tom S. writes, "Saw this in the Friendo app, but what I didn't spot was an Ok button. "

 

"I look at this and wonder if someone could deny a vacation requests because of a conflict of 0.000014 days with another member of staff," writes Rob.

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Sam VargheseRecycling Trump: Old news passed off as investigative reporting

Over the last three weeks, viewers of the Australian Broadcasting Corporation’s Four Corners program have been treated to what is the ultimate waste of time: a recapping of all that has gone on in the United States during the investigation into alleged Russian collusion with the Trump campaign in the 2016 presidential election.

There was nothing new in the nearly three hours of programming on what is the ABC’s prime investigative program. It only served as a vanity outlet for Sarah Ferguson, rated as one of the network’s better reporters, but who, after this and her unnecessary Hillary Clinton interview, appears more as someone interested in big-noting herself.

Exactly why Ferguson and a crew spent what must have been four to six weeks in the US, London and Moscow to put to air material that has been beaten to death by the US and other Western media is a mystery. Had Ferguson managed to unearth one nugget of information that has gone unnoticed so far, one would not be inclined to complain.

But this same ABC has been crying itself hoarse for the last few months over cuts to its budget and trumpeting its news credentials – and then it produces garbage like the three episodes of the Russia-Trump series or whatever it was called.

As an aside, the investigation has been going on for more than a year now, with special counsel Robert Mueller, a former FBI director, having been appointed on May 17, 2017. The American media have had a field day and every time there is a fresh development, there are shrieks all around that this is the straw that breaks the camel’s back. But it all turns out to be an illusion in the end.

Every little detail of the process of electing Donald Trump has been covered and dissected over and over and over again. And yet Ferguson thought it a good idea to run three hours of this garbage.

Apart from the fact that this is something akin to the behaviour of a dog that revisits its own vomit, Ferguson also paraded some very dodgy individuals to bolster her program.

One was James Clapper, the director of national intelligence during the Obama presidency. Clapper is a man who has committed perjury by lying to the US Congress under oath. Clapper also leaked information about the infamous anti-Trump dossier to CNN’s Jake Tapper and then was rewarded with a contract at CNN.

Clapper does not have the best of reputations when it comes to integrity. To call him a shady character would not be a stretch. Now Ferguson may have needed to speak to him once, because he was the DNI under Obama. But she did not need to have him appear every now and then, remarking on this and that. He added no weight to an already weak program.

Another person Ferguson gave plenty of air time to was Luke Harding, a reporter with the Guardian. Harding is known for a few things: plagiarising others’ reports while he was stationed in Moscow and writing a book about Edward Snowden without having met any of the principal players in the matter. Once again, a person of dubious character.

One would also have to ask: why does the camera focus on the reporter so much? Is she the story? Or is it a way to puff herself up and appear so important that she cannot be out of sight of the lens lest the story break down? It is a curse of modern journalism, this narcissism, and Ferguson suffers from it badly.

This is the second worthless program Ferguson has produced in recent times; the first was her puff interview with Hillary Clinton.

Maybe she is gearing up to take on some kind of job in the US. Wouldn’t surprise me if public money was being used to paint the meretricious as the magnificent.

CryptogramAlgeria Shut Down the Internet to Prevent Students from Cheating on Exams

Algeria shut the Internet down nationwide to prevent high-school students from cheating on their exams.

The solution in New South Wales, Australia was to ban smartphones.

EDITED TO ADD (6/22): Slashdot thread.

,

Planet DebianLars Wirzenius: Ick ALPHA-6 released: CI/CD engine

It gives me no small amount of satisfaction to announce the ALPHA-6 version of ick, my fledgling continuous integration and deployment engine. Ick has now been deployed and used by people other than myself.

Ick can, right now:

  • Build system trees for containers.
  • Use system trees to run builds in containers.
  • Build Debian packages.
  • Publish Debian packages via its own APT repository.
  • Deploy to a production server.

There are still many missing features. Ick is by no means ready to replace your existing CI/CD system, but if you'd like to have a look at ick, and help us make it the CI/CD system of your dreams, now is a good time to give it a whirl.

(Big missing features: web UI, building for multiple CPU architectures, dependencies between projects, good documentation, a development community. I intend to make all of these happen in due time. Help would be welcome.)

Worse Than FailureWait Low Down

As mentioned previously I’ve been doing a bit of coding for microcontrollers lately. Coming from the world of desktop and web programming, it’s downright revelatory. With no other code running, and no operating system, I can use every cycle on a 16MHz chip, which suddenly seems blazing fast. You might have to worry about hardware interrupts- in fact I had to swap serial connection libraries out because the one we were using misused interrupts and threw off the timing of my process.

And boy, timing is amazing when you’re the only thing running on the CPU. I was controlling some LEDs and if I just went in a smooth ramp from one brightness level to the other, the output would be ugly steps instead of a smooth fade. I had to use a technique called temporal dithering, which is a fancy way of saying “flicker really quickly” and in this case depended on accurate, sub-microsecond timing. This is all new to me.
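For the curious, the idea behind temporal dithering looks roughly like this minimal sketch. It is not the article's actual code; it assumes Arduino-style analogWrite()/delayMicroseconds() helpers and an 8-bit PWM output, purely for illustration.

/* Minimal sketch of temporal dithering; illustration only, not the article's code.
 * Assumes Arduino-style analogWrite()/delayMicroseconds() and an 8-bit PWM pin.
 * target_x256 is the desired brightness times 256: the whole part picks the PWM
 * step, and the fractional part decides how often we flick up to the next step. */
#include <stdint.h>

void dithered_write(uint8_t pin, uint16_t target_x256)
{
    uint8_t  base = target_x256 >> 8;    /* nearest hardware step below the target */
    uint8_t  frac = target_x256 & 0xFF;  /* how far we are toward base + 1         */
    uint16_t acc  = 0;                   /* running error accumulator              */

    for (int slot = 0; slot < 256; slot++) {
        acc += frac;
        if (acc >= 256) {                /* spend this slot one step brighter      */
            acc -= 256;
            analogWrite(pin, base + 1);
        } else {                         /* otherwise stay on the lower step       */
            analogWrite(pin, base);
        }
        delayMicroseconds(50);           /* slots short enough to blur together    */
    }
}

Averaged over the 256 slots, the LED spends frac/256 of its time one step brighter, which the eye reads as an in-between brightness instead of a visible jump.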

Speaking of sub-microsecond timing, or "subus", let's check out Jindra S’s submission. This code also runs on a microcontroller and, for… “performance” or “clock accuracy”, is assembly inlined into C.

/*********************** FUNCTION v_Angie_WaitSubus *******************************//**
@brief Busy waits for a defined number of cycles.
The number of needed sys clk cycles depends on the number of flash wait states,
but due to the caching, the flash wait states are not relevant for STM32F4.
4 cycles per u32_Cnt
*******************************************************************************/
__asm void  v_Angie_WaitSubus( uint32_t u32_Cnt )
{
loop
    subs r0, #1
    cbz  r0, loop_exit
    b loop
loop_exit
    bx lr
}

Now, this assembly isn’t the most readable thing, but the equivalent C code is pretty easy to follow: while(--u32_Cnt); In other words, this is your typical busy-loop. Since this code is the only code running on the chip, no problem right? Well, check out this one:

/*********************** FUNCTION v_Angie_IRQWaitSubus *******************************//**
@brief Busy waits for a defined number of cycles.
The number of needed sys clk cycles depends on the number of flash wait states,
but due to the caching, the flash wait states are not relevant for STM32F4.
4 cycles per u32_Cnt
*******************************************************************************/
__asm void  v_Angie_IRQWaitSubus( uint32_t u32_Cnt )
{
IRQloop
    subs r0, #1
    cbz  r0, IRQloop_exit
    b IRQloop
IRQloop_exit
    bx lr
}

What do you know, it’s the same exact code, but called IRQWaitSubus, implying it’s meant to be called inside of an interrupt handler. The details can get fiendishly complicated, but for those who aren’t looking at low-level code on the regular, interrupts are the low-level cousin of event handlers. It allows a piece of hardware (or software, in multiprocessing systems) to notify the CPU that something interesting has happened, and the CPU can then execute some of your code to react to it. Like any other event handler, interrupt handlers should be fast, so they can update the program state and then allow normal execution to continue.

What you emphatically do not do is wait inside of an interrupt handler. That’s bad. Not a full-on WTF, but… bad.
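For contrast, the usual pattern keeps the handler down to a few instructions and defers the slow work to the main loop. Here is a minimal sketch of that pattern; the handler name, flag, and placeholder value are hypothetical and not taken from Jindra's code.

#include <stdbool.h>
#include <stdint.h>

/* Illustration only: the interrupt handler just records that something happened
 * and returns immediately; the main loop does the slow work later. */
static volatile bool     sample_ready = false;
static volatile uint16_t sample_value = 0;

void EXTI0_IRQHandler(void)            /* hypothetical handler name */
{
    sample_value = 42;                 /* in real code, read the hardware register */
    sample_ready = true;               /* set a flag and get out; never busy-wait here */
}

int main(void)
{
    for (;;) {
        if (sample_ready) {
            sample_ready = false;
            /* slow processing happens here, outside the interrupt handler */
        }
    }
}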

There’s at least three more variations of this function, with slightly different names, scattered across different modules, all of which represent a simple busy loop.

Ugly, sure, but where’s the WTF? Well, among other things, this board needed to output precisely timed signals, like say, a 500Hz square wave with a 20% duty cycle. The on-board CPU clock was a simple oscillator which would drift- over time, with changes in temperature, etc. Also, interrupts could claim CPU cycles, throwing off the waits. So Jindra’s company had placed this code onto some STM32F4 ARM microcontrollers, shipped it into the field, and discovered that outside of their climate controlled offices, stuff started to fail.

The code fix was simple- the STM32-series of processors had a hardware timer which could provide precise timing. Switching to that approach not only made the system more accurate- it also meant that Jindra could throw away hundreds of lines of code which was complicated, buggy, and littered with inline assembly for no particular reason. There was just one problem: the devices with the bad software were already in the field. Angry customers were already upset over how unreliable the system was. And short of going on site to reflash the microcontrollers or shipping fresh replacements, the company was left with only one recourse:

They announced Rev 2 of their product, which offered higher rates of reliability and better performance, and only cost 2% more!
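For reference, the timer-based approach Jindra switched to might look something like this minimal sketch using the STM32 HAL. The timer instance, handle name, channel, and 84 MHz timer clock are assumptions for illustration, not details from the submission, and GPIO/clock setup is omitted.

#include "stm32f4xx_hal.h"

/* Minimal sketch, not the actual fix: let a hardware timer generate the
 * 500Hz / 20% duty cycle signal so the CPU never busy-waits for it.
 * Assumes an 84 MHz timer clock; GPIO and clock initialization omitted. */
static TIM_HandleTypeDef htim3;

void start_500hz_20pct(void)
{
    htim3.Instance = TIM3;
    htim3.Init.Prescaler = 8399;                 /* 84 MHz / 8400 = 10 kHz tick  */
    htim3.Init.CounterMode = TIM_COUNTERMODE_UP;
    htim3.Init.Period = 19;                      /* 20 ticks per cycle -> 500 Hz */
    htim3.Init.ClockDivision = TIM_CLOCKDIVISION_DIV1;
    HAL_TIM_PWM_Init(&htim3);

    TIM_OC_InitTypeDef oc = {0};
    oc.OCMode = TIM_OCMODE_PWM1;
    oc.Pulse = 4;                                /* high for 4 of 20 ticks = 20% */
    oc.OCPolarity = TIM_OCPOLARITY_HIGH;
    HAL_TIM_PWM_ConfigChannel(&htim3, &oc, TIM_CHANNEL_1);

    HAL_TIM_PWM_Start(&htim3, TIM_CHANNEL_1);    /* hardware holds the timing from here */
}

Once the peripheral is running, interrupts and other code can no longer throw off the waveform, since the counting happens in hardware rather than in a busy loop.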

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

CryptogramAre Free Societies at a Disadvantage in National Cybersecurity

Jack Goldsmith and Stuart Russell just published an interesting paper, making the case that free and democratic nations are at a structural disadvantage in nation-on-nation cyberattack and defense. From a blog post:

It seeks to explain why the United States is struggling to deal with the "soft" cyber operations that have been so prevalent in recent years: cyberespionage and cybertheft, often followed by strategic publication; information operations and propaganda; and relatively low-level cyber disruptions such as denial-of-service and ransomware attacks. The main explanation is that constituent elements of U.S. society -- a commitment to free speech, privacy and the rule of law; innovative technology firms; relatively unregulated markets; and deep digital sophistication -- create asymmetric vulnerabilities that foreign adversaries, especially authoritarian ones, can exploit. These asymmetrical vulnerabilities might explain why the United States so often appears to be on the losing end of recent cyber operations and why U.S. attempts to develop and implement policies to enhance defense, resiliency, response or deterrence in the cyber realm have been ineffective.

I have long thought this to be true. There are defensive cybersecurity measures that a totalitarian country can take that a free, open, democratic country cannot. And there are attacks against a free, open, democratic country that just don't matter to a totalitarian country. That makes us more vulnerable. (I don't mean to imply -- and neither do Russell and Goldsmith -- that this disadvantage implies that free societies are overall worse, but it is an asymmetry that we should be aware of.)

I do worry that these disadvantages will someday become intolerable. Dan Geer often said that "the price of freedom is the probability of crime." We are willing to pay this price because it isn't that high. As technology makes individual and small-group actors more powerful, this price will get higher. Will there be a point in the future where free and open societies will no longer be able to survive? I honestly don't know.

EDITED TO ADD (6/21): Jack Goldsmith also wrote this.

,

Planet DebianJohn Goerzen: Making a difference

Every day, ask yourself this question: What one thing can I do today that will make this democracy stronger and honor and support its institutions? It doesn’t have to be a big thing. And it probably won’t shake the Earth. The aggregation of them will shake the Earth.

– Benjamin Wittes

I have written some over the past year or two about the dangers facing the country. I have become increasingly alarmed about the state of it. And that Benjamin Wittes quote, along with the terrible tragedy, spurred me to action. Among other things, I did two things I never have done before:

I registered to protest on June 30.

I volunteered to do phone banking with SwingLeft.

And I changed my voter registration from independent to Republican.

No, I have not gone insane. The reason for the latter is that here in Kansas, the Democrats rarely field candidates for most offices. The real action happens in the Republican primary. So if I can vote in that primary, I can have a voice in keeping the crazy out of office. It’s not much, but it’s something.

Today we witnessed, hopefully, the first victory in our battle against the abusive practices happening to children at the southern border. Donald Trump caved, and in so doing, implicitly admitted the lies he and his administration have been telling about the situation. This only happened because enough people thought like Wittes: “I am small, but I can do SOMETHING.” When I called the three Washington offices of my senators and representatives — far-right Republicans all — it was apparent that I was by no means the first to give them an earful about this, and that they were changing their tone because of what they heard. Mind you, they hadn’t taken any ACTION yet, but the calls mattered. The reporting mattered. The attention mattered.

I am going to keep doing what little bit I can. I hope everyone else will too. Let us shake the Earth.

Planet DebianJulien Danjou: Stop merging your pull requests manually

Stop merging your pull requests manually

If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt it.

Nevertheless, every day, there are thousands of developers using GitHub who do the same thing over and over again: they click on this button:

Stop merging your pull requests manually

This does not make any sense.

Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of prerequisites before they merge a pull request. Those requirements are almost always the same, and it's something along these lines:

  • Is the test suite passing?
  • Is the documentation up to date?
  • Does this follow our code style guideline?
  • Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button while not enough developers had reviewed the patch." Ring a bell?

In my team, we're like every team out there. We know what our criteria are for merging code into our repository. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.

When those conditions are all set, I want the code to be merged.

Without clicking a single button.

That's exactly how Mergify started.

Stop merging your pull requests manually

Mergify is a service that pushes that merge button for you. You define rules in the .mergify.yml file of your repository, and when the rules are satisfied, Mergify merges the pull request.

No need to press any button.

Take a random pull request, like this one:

Stop merging your pull requests manually

This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should already be merged: but it's there, hanging, chilling, waiting for someone to push that merge button. Someday.

With Mergify enabled, you'd just have to put this .mergify.yml at the root of the repository:

rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1

With such a configuration, Mergify enables the desired restrictions, i.e., Travis passes and at least one project member has reviewed the code. As soon as those conditions are met, the pull request is automatically merged.

We built Mergify as a free service for open-source projects. The engine powering the service is also open-source.

Now go check it out and stop letting those pull requests hang out one second more. Merge them!

If you have any question, feel free to ask us or write a comment below! And stay tuned — as Mergify offers a few other features that I can't wait to talk about!

TEDTEDx talk under review

Updated June 20, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer titled: “Why our perception of pedophilia has to change.”

In the TEDx talk, a speaker described pedophilia as a condition some people are born with, and suggested that if we recognize it as such, we can do more to prevent those people from acting on their instincts.

TEDx events are organized independently from the main annual TED conference, with some 3,500 events held every year in more than 100 countries. Our nonprofit TED organization does not control TEDx events’ content.

This talk and its removal was recently brought to our attention. After reviewing the talk, we believe it cites research in ways that are open to serious misinterpretation. This led some viewers to interpret the talk as an argument in favor of an illegal and harmful practice.

Furthermore, after contacting the organizer to understand why it had been taken down, we learned that the speaker herself requested it be removed from the internet because she had serious concerns about her own safety in its wake.

Our policy is and always has been to remove speakers’ talks when they request we do so. That is why we support this TEDx organizer’s decision to respect this speaker’s wishes and keep the talk offline.

We will continue to take down any illegal copies of the talk posted on the Internet.

Original, posted June 19, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer had titled: “Why our perception of pedophilia has to change.”
We were not aware of this organizer’s actions, but understand now that their decision to remove the talk was at the speaker’s request for her safety.
In our review of the talk in question, we at TED believe it cites research open for serious misinterpretation.
TED does not support or advocate for pedophilia.
We are now reviewing the talk to determine how to move forward.
Until we can review this talk for potential harm to viewers, we are taking down any illegal copies of the talk posted on the Internet.  

CryptogramPerverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill

Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.

Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:

Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.

Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here -- which is among the most common methods currently used -- the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.

This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user's phone accesses the bank's legitimate online banking service.

This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.

Planet DebianCraig Small: Odd dependency on Google Chrome

For weeks I have had problems with Google Chrome. It would work a few times and then, for reasons I didn’t understand, would stop working. On the command line you would get several screens of text, but the Chrome window would never appear.

So I tried the Beta, and it worked… once.

Deleted all the cache and configuration and it worked… once.

Every time, the process would end up in an infinite loop listening on a Unix socket (fd 7), but no window would appear for the second and subsequent starts of Chrome.

By sheer luck, in the screenfuls of spam I noticed this:

Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files

Hmm, so I noticed that every time I started a fresh new Chrome, I logged into my Google account. So, once again clearing things, I started Chrome, didn’t log in, and closed and reopened it. I had Chrome running the second time! Alas, not with all the stuff synchronised.

An issue for Mailspring put me onto the right path: installing gnome-keyring (or the dependencies p11-kit and gnome-keyring-pkcs11) fixed Chrome.

So if Chrome starts but you get no window, especially if you use Cinnamon, try that trick.

 

 

Worse Than FailureThe Wizard Algorithm

Password requirements can be complicated. Some minimum and maximum number of characters, alpha and numeric characters, special characters, upper and lower case, change frequency, uniqueness over the last n passwords and different rules for different systems. It's enough to make you revert to a PostIt in your desk drawer to keep track of it all. Some companies have brillant employees who feel that they can do better, and so they create a way to figure out the password for any given computer - so you need to neither remember nor even know it.

Kendall Mfg. Co. (estab. 1827) (3092720143)

History does not show who created the wizard algorithm, or when, or what they were smoking at the time.

Barry W. has the misfortune of being a Windows administrator at a company that believes in coming up with their own unique way of doing things, because they can make it better than the way that everyone else is doing it. It's a small organization, in a sleepy part of a small country. And yet, the IT department prides itself on its highly secure practices.

Take the password of the local administrator account, for instance. It's the Windows equivalent of root, so you'd better use a long and complex password. The IT team won't use software to automate and keep track of passwords, so to make things extremely secure, there's a different password for every server.

Here's where the wizard algorithm comes in.

To determine the password, all you need is the server's hostname and its IP address.

For example, take the server PRD-APP2-SERV4 which has the IP address 178.8.1.44.

Convert the hostname to upper case and discard any hyphens, yielding PRDAPP2SERV4.

Take the middle two octets of the IP address. If either is a single digit, pad it out to double digits. So 178.8.1.44 becomes 178.80.10.44 which yields 8010. Now take the last character of the host name; if that's a digit, discard it and take the last letter, otherwise just take the last letter, which gives us V. Now take the second and third letters of the hostname and concatenate them to the 8010 and then stick that V on the end. This gives us 8010RDV. Now take the fourth and fifth letters, and add them to the end, which makes 8010RDVAP. And there's your password! Easy.

It had been that way for as long as anyone could remember, until the day someone decided to enable password complexity on the domain. From then on, you had to do all of the above, and then add @!#%&$?@! to the end of the password. How would you know whether a server has a password using the old method or the new one? Why by a spreadsheet available on the firm-wide-accessible file system, of course! Oh, by the way, there is no server management software.

Critics might say the wizard algorithm has certain disadvantages. The fact that two people, given the same hostname and IP address, often come up with different results for the algorithm. Apparently, writing a script to figure it out for you never dawned on anyone.

Or the fact that when a server has lost contact with the domain and you're trying to log on locally and the phone's ringing and everyone's pressuring you to get it resolved, the last thing you want to be doing is math puzzles.

But at least it's better than the standard way people normally do it!
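For what it's worth, the script that apparently never dawned on anyone is only a few dozen lines. Here is a hypothetical sketch in C, resolving the algorithm's ambiguities the same way the example above does (for the pre-complexity-rule passwords); with PRD-APP2-SERV4 and 178.8.1.44 it should print 8010RDVAP.

#include <ctype.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    if (argc != 3) {
        fprintf(stderr, "usage: %s hostname ip\n", argv[0]);
        return 1;
    }

    /* Step 1: upper-case the hostname and drop the hyphens. */
    char host[64] = {0};
    size_t n = 0;
    for (const char *p = argv[1]; *p != '\0' && n < sizeof host - 1; p++)
        if (*p != '-')
            host[n++] = (char)toupper((unsigned char)*p);
    if (n < 5)
        return 1;

    /* Step 2: middle two octets, single digits padded with a trailing zero
       (8 becomes 80 and 1 becomes 10, as in the example above). */
    int o1, o2, o3, o4;
    if (sscanf(argv[2], "%d.%d.%d.%d", &o1, &o2, &o3, &o4) != 4)
        return 1;
    if (o2 < 10) o2 *= 10;
    if (o3 < 10) o3 *= 10;

    /* Step 3: the last letter of the hostname (skipping any trailing digits). */
    char last = 0;
    for (size_t i = n; i > 0; i--) {
        if (!isdigit((unsigned char)host[i - 1])) {
            last = host[i - 1];
            break;
        }
    }

    /* Step 4: octets, then letters two and three, the last letter, then four and five. */
    printf("%d%d%c%c%c%c%c\n", o2, o3, host[1], host[2], last, host[3], host[4]);
    return 0;
}

Which, of course, is exactly the kind of helper nobody should have needed in the first place.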

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet DebianJonathan Carter: Plans for DebCamp18

Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda

  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. Live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out and this is great to do in person (might lead to a DebConf BoF as well).
  • Debian Live: Current live weekly images have Calamares installed, although it's just a test and there's no indication yet whether it will be available on the beta or final release images; we'll have to do a good assessment of all the consequences and weigh up what will work out best. I want to put together an initial report with live team members who are around.
  • AIMS Desktop: Get core AIMS meta-packages into Debian… no blockers on this, but I just haven't had enough quiet time to do it (And thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – A linux clone of notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!

Planet DebianAthos Ribeiro: Triggering Debian Builds on OBS

This is my fifth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian builds on OBS

OBS supports building Debian packages. To do so, one must properly configure a project so OBS knows it is building a .deb package, and must have the packages needed to handle and build Debian packages installed.

openSUSE’s OBS instance has repositories for Debian 8, Debian 9, and Debian testing.

We will use base Debian projects in our OBS instance as Download on Demand projects and use subprojects to achieve our final goal (building packages against Clang). By using the same configurations as the ones in the openSUSE public projects, we could perform builds for Debian 8 and Debian 9 in our local OBS deploys. However, builds for Debian Testing and Unstable were failing.

With further investigation, we realized the OBS version packaged in Debian cannot decompress control.tar.xz files in .deb packages, which is the default compression format for the control tarball since dpkg-1.19 (it used to be control.tar.gz before that). This issue was reported on the OBS repositories and was fixed on a Pull Request that is not included in the current Debian OBS version yet. For now, we apply this patch in our OBS instance on our salt states.

After applying the patch, builds on Debian 8 and 9 still finish successfully, but builds against Debian Testing and Unstable get stuck in a blocked state: dependencies are downloaded, the OBS scheduler stalls for a while, the downloaded packages get cleaned up, and then the dependencies are downloaded again. The OBS backend enters a loop running through the described procedure and never assigns a build to a worker. No logs hint at a possible cause, leaving us with no clue about the problem.

Although I am inclined to believe we have a problem with our dependencies list, I am still debugging this issue this week and will bring more news in my next post.

Refactoring project configuration files

Reshabh opened a Pull Request in our salt repository with the OBS configuration files for Ubuntu, also based on openSUSE’s public OBS configurations. Following Sylvestre’s comments, I have been refactoring the Debian configuration files based on the OBS documentation. One of the proposed improvements is to use debootstrap to mount the builder chroot. This will allow us to reduce the number of dependencies listed in the projects’ configuration files. The issue which led to debootstrap support in OBS is available at https://github.com/openSUSE/obs-build/issues/111 and may lead to more interesting resources on the matter.

Next steps (A TODO list to keep on the radar)

  • Fix OBS builds on Debian Testing and Unstable
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)

,

Planet DebianShashank Kumar: Google Summer of Code 2018 with Debian - Week 5

During week 5, there were 3 merge requests undergoing the review process simultaneously. I learned a lot about how code should be written in order to assist the reader, since code is read far more often than it is written.

Services and Utility

After the user has entered their information on the signin or signup screen, the job of querying the database was given to a module named updatedb. The job of updatedb was to clean user input, hash the password, query the database and respond with the appropriate result after the database query is executed. In a discussion with Sanyam, he said updatedb doesn't conform to its name given the functions it incorporates, and explained the virtue of service and utility modules/functions and why this is the best place to restructure the code along those lines.

Utility functions can be described roughly as functions which perform some operation on the data without caring much about the relationship of the data to the application. So, generating a uuid, cleaning the email address, cleaning the full name and hashing the password become our utility functions, as can be seen in utils.py for signup and similarly for signin.

Service functions can be described roughly as functions which, while performing operations on the data, take its relationship to the application into account. Hence, these functions are not generic but application specific. sign_up_user is one such service function: it receives user information, calls utility functions to modify that information, and queries the database with respect to the signup operation, i.e. adds the new user's details to the database or raises SignUpError if the details are already present. This can be seen in the services module for signup and signin as well.

Persisting database connection

This is how the connection to the database used to work before the review: the settings module used to create the connection to the database, create the table schema if not present, and close the connection. A few constants are saved in the module to be used by signup and signin in order to connect to the database. The problem is that a database connection then has to be established every time a query is executed by the services of signup or signin. Since the sqlite3 database is saved in a file alongside the application, I thought it wouldn't be a problem to make a connection whenever needed. But it adds overhead on the OS, which can slow down the application when scaled. To resolve this, settings now returns the connection object, which can be reused in any other module.

Integrating SignUp with Dashboard

While the SignUp feature was being reviewed, the Dashboard was merged and I had to refactor the SignUp merge request accordingly. The natural flow should be SignUp as the default screen on the UI, with the Dashboard displayed after a successful signup operation. To achieve such a flow, I used a screen manager, which handles different screens and the transitions between them with predefined animations. This is defined in the main module and the entire flow can be seen in action below.

Designing Tutorials and Tools menu

Once the user is on the Dashboard, they have the option of picking from the different modules and going through the tutorials and tools available in each. The idea is to display a difficulty tip as well, so it becomes easier for the user to begin. Below is what I've designed to incorporate this.

New Contributor Wizard - Tutorials and Tools Menu

Implementing Tutorials and Tools menu

Now comes the fun part: thinking about the architecture of the modules just designed, so they can take the shape of code in the application. The idea here is to define them in a json file to be picked up by the respective module afterwards. This way it'll be easier to add new tutorials and tools, and hence we have this resultant json. The development of this feature can be followed on this merge request.

Now remains the quest to design and implement the structure of tutorials, generalized in such a way that it can be populated using a json file. This will provide flexibility to the developer of tutorials, and a UI module can also be implemented to modify this json and add new tutorials without even knowing how to code. Sounds amazing, right? We'll see how it works out soon. If you have any suggestions, make sure to comment down below, comment on the merge request, or reach out to me.

The Conclusion

Since the SignUp feature has also been merged, I'll have to refactor SignIn now to integrate all of it into one happy application and complete the natural flow of things. Also, the design and development of tools/tutorials is underway, and by the time the next blog is out you might be able to test the application with at least one tool or tutorial from one of the modules on the dashboard.

Krebs on SecurityAT&T, Sprint, Verizon to Stop Sharing Customer Location Data With Third Parties

In the wake of a scandal involving third-party companies leaking or selling precise, real-time location data on virtually all Americans who own a mobile phone, AT&T, Sprint and Verizon now say they are terminating location data sharing agreements with third parties.

At issue are companies known in the wireless industry as “location aggregators,” entities that manage requests for real-time customer location data for a variety of purposes, such as roadside assistance and emergency response. These aggregators are supposed to obtain customer consent before divulging such information, but several recent incidents show that this third-party trust model is fundamentally broken.

On May 10, 2018, The New York Times broke the story that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks.

Then it emerged that Securus had been hacked, its database of hundreds of law enforcement officer usernames and passwords plundered. We also learned that Securus’ data was ultimately obtained from a company called 3Cinteractive, which in turn obtained its data through a California-based location tracking firm called LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

LocationSmart disabled its demo page shortly after that story. By that time, Sen. Ron Wyden (D-Ore.) had already sent letters to AT&T, Sprint, T-Mobile and Verizon, asking them to detail any agreements to share real-time customer location data with third-party data aggregation firms.

AT&T, T-Mobile and Verizon all said they had terminated data-sharing agreements with Securus. In a written response (PDF) to Sen. Wyden, Sprint declined to share any information about third-parties with which it may share customer location data, and it was the only one of the four carriers that didn’t say it was terminating any data-sharing agreements.

T-Mobile and Verizon each said they both share real-time customer data with two companies — LocationSmart and another firm called Zumigo, noting that these companies in turn provide services to a total of approximately 75 other customers.

Verizon emphasized that Zumigo — unlike LocationSmart — has never offered any kind of mobile location information demo service via its site. Nevertheless, Verizon said it had decided to terminate its current location aggregation arrangements with both LocationSmart and Zumigo.

“Verizon has notified these location aggregators that it intends to terminate their ability to access and use our customers’ location data as soon as possible,” wrote Karen Zacharia, Verizon’s chief privacy officer. “We recognize that location information can provide many pro-consumer benefits. But our review of our location aggregator program has led to a number of internal questions about how best to protect our customers’ data. We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices.”

In its response (PDF), AT&T made no mention of any other company besides Securus. AT&T indicated it had no intention to stop sharing real-time location data with third-parties, stating that “without an aggregator, there would be no practical and efficient method to facilitate requests across different carriers.”

Sen. Wyden issued a statement today calling on all wireless companies to follow Verizon’s lead.

“Verizon deserves credit for taking quick action to protect its customers’ privacy and security,” Wyden said. “After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.”

Update, 5:20 p.m. ET: Shortly after Verizon’s letter became public, AT&T and Sprint have now said they, too, will start terminating agreements to share customer location data with third parties.

“Based on our current internal review, Sprint is beginning the process of terminating its current contracts with data aggregators to whom we provide location data,” the company said in an emailed statement. “This will take some time in order to unwind services to consumers, such as roadside assistance and fraud prevention services. Sprint previously suspended all data sharing with LocationSmart on May 25, 2018. We are taking this further step to ensure that any instances of unauthorized location data sharing for purposes not approved by Sprint can be identified and prevented if location data is shared inappropriately by a participating company.”

AT&T today also issued a statement: “Our top priority is to protect our customers’ information, and, to that end, we will be ending our work with aggregators for these services as soon as practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.”

KrebsOnSecurity asked T-Mobile if the company planned to follow suit, and was referred to a tweet today from T-Mobile CEO John Legere, who wrote: “I’ve personally evaluated this issue & have pledged that T-Mobile will not sell customer location data to shady middlemen.” In a follow-up statement shared by T-Mobile, the company said, “We ended all transmission of customer data to Securus and we are terminating our location aggregator agreements.”

Wyden’s letter asked the carriers to detail any arrangements they may have to validate that location aggregators are in fact gaining customer consent before divulging the information. Both Sprint and T-Mobile said location aggregators were contractually obligated to obtain customer consent before sharing the data, but they provided few details about any programs in place to review claims and evidence that an aggregator has obtained consent.

AT&T and Verizon each said they have processes for periodically auditing consent practices by the location aggregators, but that Securus’ unauthorized use of the data somehow flew under the radar.

AT&T noted that it began its relationship with LocationSmart in October 2012 (back when it was known by another name, “Locaid”).  Under that agreement, LocationSmart’s customer 3Cinteractive would share location information with prison officials through prison telecommunications provider Securus, which operates a prison inmate calling service.

But AT&T said after Locaid was granted that access, Securus began abusing it to sell an unauthorized “on-demand service” that allowed police departments to learn the real-time location data of any customer of the four major providers.

“We now understand that, despite AT&T’s requirements to obtain customer consent, Securus did not in fact obtain customer consent before collecting customers’ location information for its on-demand service,” wrote Timothy P. McKone, executive vice president of federal relations at AT&T. “Instead, Securus evidently relied upon law enforcement’s representation that it had appropriate legal authority to obtain customer location data, such as a warrant, court order, or other authorizing document as a proxy for customer consent.”

McKone’s letter downplays the severity of the Securus incident, saying that the on-demand location requests “comprised a tiny fraction — less than two tenths of one percent — of the total requests Securus submitted for the approved inmate calling service. AT&T has no reason to believe that there are other instances of unauthorized access to AT&T customer location data.”

Blake Reid, an associate clinical professor at the University of Colorado School of Law, said the entire mobile location-sharing debacle shows the futility of transitive trust.

“The carriers basically have arrangements with these location aggregators that contractually say, ‘You agree not to use this access we provide you without getting customer consent’,” Reid said. “Then that aggregator has a relationship with another aggregator, and so on. So what we then have is this long chain of trust where no one has ever consented to the provision of the location information, and yet it ends up getting disclosed anyhow.”

Curious how we got here and what Congress or federal regulators might do about the current situation? Check out last month’s story, Why Is Your Location Data No Longer Private?

Update, 5:20 p.m. ET: Updated headline and story to reflect statements from AT&T and Sprint that they are winding down customer location data-sharing agreements with third party companies.

Update, June 20, 2:23 p.m. ET: Added clarification from T-Mobile.

Planet DebianBenjamin Mako Hill: How markets coopted free software’s most powerful weapon (LibrePlanet 2018 Keynote)

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is online on Youtube and available as WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk:

App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago,  the kind of mass collaboration that made Wikipedia, GNU/Linux, or Couchsurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true, new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining Couchsurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks who are committed to working in commons. I talk a little bit about what the free culture and free software communities should do now that mass collaboration, these communities’ most powerful weapon, is being used against them.

I’m very much interested in feedback provided any way you want to reach me including in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.


Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

Sociological Images“Uncomfortable with Cages”: When Framing Fails

By now, you’ve probably heard about the family separation and detention policies at the U.S. border. The facts are horrifying.

Recent media coverage has led to a flurry of outrage and debate about the origins of this policy. It is a lot to take in, but this case also got me thinking about an important lesson from sociology for following politics in 2018: we’re not powerless in the face of “fake news.”

Photo Credit: Fibonacci Blue, Flickr CC

Political sociologists talk a lot about framing—the way movements and leaders select different interpretations of an issue to define and promote their position. Frames are powerful interpretive tools, and sociologists have shown how framing matters for everything from welfare reform and nuclear power advocacy to pro-life and labor movements.

One of the big assumptions in framing theory is that leaders coordinate. There might be competition to establish a message at first, but actors on the same side have to get together fairly quickly to present a clean, easy to understand “package” of ideas to people in order to make political change.

The trick is that it is easy to get cynical about framing, to think that only powerful people get to define the terms of debate. We assume that a slick, well-funded media campaign will win out, and any counter-frames will get pushed to the side. But the recent uproar over border separation policies shows how framing can be a very messy process. Over just a few days, these are a few of the frames coming from administration officials and border authorities:

We don’t know how this issue is going to turn out, but many of these frames have been met with skepticism, more outrage, and plenty of counter-evidence. Calling out these frames alone is not enough; it will take mobilization, activism, lobbying, and legislation to change these policies. Nevertheless, this is an important reminder that framing is a social process, and, especially in an age of social media, it is easier than ever to disrupt a political narrative before it has the chance to get organized.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianSean Whitton: I'm going to DebCamp18, Hsinchu, Taiwan

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items.

DebCamp work

Throughout DebCamp and DebConf

  • Debian Policy: sticky bugs; process; participation; translations

  • Helping people use dgit and git-debrebase

    • Writing up or following up on feature requests and bugs

    • Design work with Ian and others

Worse Than FailureCodeSOD: A Unique Specification

One of the skills I think programmers should develop is not directly programming related: you should be comfortable reading RFCs. If, for example, you want to know what actually constitutes an email address, you may want to brush up on your BNF grammars. Reading and understanding an RFC is its own skill, and while I wouldn’t suggest getting in the habit of reading RFCs for fun, it’s something you should do from time to time.

To build the skill, I recommend picking a simple one, like UUIDs. There’s a lot of information encoded in a UUID, and five different ways to define UUIDs- though usually we use type 1 (timestamp-based) and type 4 (random). Even if you haven’t gone through and read the spec, you already know the most important fact about UUIDs: they’re unique. They’re universally unique in fact, and you can use them as identifiers. You shouldn’t have a collision happen within the lifetime of the universe, unless someone does something incredibly wrong.

Dexen encountered a database full of collisions on UUIDs. Duplicates were scattered all over the place. Since we're nowhere near the heat-death of the universe, the obvious answer is that someone did something entirely wrong.

use Ramsey\Uuid\Uuid;
 
$model->uuid = Uuid::uuid5(Uuid::NAMESPACE_DNS, sprintf('%s.%s.%s.%s', 
    rand(0, time()), time(), 
    static::class, config('modelutils.namespace')))->toString();

This block of PHP code uses the type–5 UUID, which allows you to generate the UUID based on a name. Given a namespace, usually a domain name, it runs the name through SHA–1 to generate the required bytes, allowing you to create specific UUIDs as needed. In this case, Dexen's predecessor was generating a “domain name”-ish string by combining: a random number from 0 to the number of seconds after the epoch, the number of seconds after the epoch, the name of the class, and a config key. So this developer wasn't creating UUIDs with a specific, predictable input (the point of UUID–5), but was mixing a little of the UUID–1 time-based generation with the UUID–4 random-based generation, without the cryptographically secure source of randomness.

Thus, collisions. Since these UUIDs didn’t need to be sortable (no need for UUID–1), Dexen changed the generation to UUID–4.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours have been dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianErich Schubert: Predatory publishers: SciencePG

I got spammed again by SciencePG (“Science Publishing Group”).

One of many (usually Chinese or Indian) fake publishers that will publish anything as long as you pay their fees. But, unfortunately, once you have published a few papers, you inevitably land on their spam lists: they scrape the websites of good journals for email addresses, and you do want your contact email address on your papers.

However, this one is particularly hilarious: They have a spelling error right at the top of their home page!

SciencePG spelling

Fail.

Speaking of fake publishers. Here is another fun example:

Kim Kardashian, Satoshi Nakamoto, Tomas Pluskal
Wanion: Refinement of RPCs.
Drug Des Int Prop Int J 1(3)- 2018. DDIPIJ.MS.ID.000112.

Yes, that is a paper in the “Drug Designing & Intellectual Properties” International (Fake) Journal. And the content is a typical SciGen-generated paper that throws around random computer buzzwords and makes absolutely no sense. Not even the abstract. The references are also just made up. And so are the first two authors, VIP Kim Kardashian and missing Bitcoin inventor Satoshi Nakamoto…

In the PDF version, the first headline is “Introductiom”, with “m”…

So Lupine Publishers is another predatory publisher that does not peer review, nor check whether the article is on topic for the journal.

Via Retraction Watch

Conclusion: just because it was published somewhere does not mean this is real, or correct, or peer reviewed…

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #164

Here’s what happened in the Reproducible Builds effort between Sunday June 10 and Saturday June 16 2018:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 96 was uploaded to Debian unstable by Chris Lamb. It includes contributions already covered by posts in previous weeks as well as new ones from:

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Packages reviewed and fixed, and bugs filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Don Martiblood donation: no good deed goes unpunished

I have been infected with the Ebola virus.

I have had sex with another man in the past year.

I am taking Coumadin®.

Actually, none of those three statements is true. And Facebook knows it.

The American Red Cross has given Facebook this highly personal information about me, by adding my contact info to an "American Red Cross Blood Donors" Facebook Custom Audience. If any of that stuff were true, I wouldn't have been allowed to give blood.

When I heard back from the American Red Cross about this personal data problem, they told me that they don't share my health information with Facebook.

That's not how it works. I'm listed in the Custom Audience as a blood donor. Anyway, too late. Facebook has the info now.

So, which of its promises about how it uses people's personal information is Facebook going to break next?

And is some creepy tech bro right now making a killer pitch to Paul Graham about a business plan to "disrupt" the health insurance market using blood donor information?

I should not have to care about this, and I don't have time to. I don't even have time to attempt a funny remark about the whole Facebook board member Peter Thiel craving blood thing.

,

Rondam RamblingsDamn straight there's a moral equivalence here

Germany, 1945: The United States of America, 2018: It's true, the kid in the second picture is not being sent to the gas chambers (yet).  But here's the thing: she doesn't know that!  This kid is two years old.  All she knows is that her mother is being taken away, and she may or may not ever see her again. The government of the United States of America has run completely off the

Krebs on SecurityGoogle to Fix Location Data Leak in Google Home, Chromecast

Google in the coming weeks is expected to fix a location privacy leak in two of its most popular consumer products. New research shows that Web sites can run a simple script in the background that collects precise location data on people who have a Google Home or Chromecast device installed anywhere on their local network.

Craig Young, a researcher with security firm Tripwire, said he discovered an authentication weakness that leaks incredibly accurate location information about users of both the smart speaker and home assistant Google Home, and Chromecast, a small electronic device that makes it simple to stream TV shows, movies and games to a digital television or monitor.

Young said the attack works by asking the Google device for a list of nearby wireless networks and then sending that list to Google’s geolocation lookup services.

“An attacker can be completely remote as long as they can get the victim to open a link while connected to the same Wi-Fi or wired network as a Google Chromecast or Home device,” Young told KrebsOnSecurity. “The only real limitation is that the link needs to remain open for about a minute before the attacker has a location. The attack content could be contained within malicious advertisements or even a tweet.”

It is common for Web sites to keep a record of the numeric Internet Protocol (IP) address of all visitors, and those addresses can be used in combination with online geolocation tools to glean information about each visitor’s hometown or region. But this type of location information is often quite imprecise. In many cases, IP geolocation offers only a general idea of where the IP address may be based geographically.

This is typically not the case with Google’s geolocation data, which includes comprehensive maps of wireless network names around the world, linking each individual Wi-Fi network to a corresponding physical location. Armed with this data, Google can very often determine a user’s location to within a few feet (particularly in densely populated areas), by triangulating the user between several nearby mapped Wi-Fi access points. [Side note: Anyone who’d like to see this in action need only to turn off location data and remove the SIM card from a smart phone and see how well navigation apps like Google’s Waze can still figure out where you are].
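
To make the lookup half of this concrete, here is a rough sketch of the kind of request involved, using Google’s documented Geolocation API. The endpoint and JSON shape come from Google’s public documentation; the MAC addresses and the API key are placeholders, and this deliberately omits the other half of the attack, namely extracting the scan list from a Chromecast or Home device:

# Resolve a list of observed Wi-Fi access points to coordinates.
# Requires a Google API key with the Geolocation API enabled.
curl -s -X POST \
  "https://www.googleapis.com/geolocation/v1/geolocate?key=YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "considerIp": false,
        "wifiAccessPoints": [
          {"macAddress": "00:11:22:33:44:55", "signalStrength": -43},
          {"macAddress": "66:77:88:99:aa:bb", "signalStrength": -67}
        ]
      }'
# The response is JSON containing a "location" (lat/lng) and an
# "accuracy" radius in metres.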

“The difference between this and a basic IP geolocation is the level of precision,” Young said. “For example, if I geolocate my IP address right now, I get a location that is roughly 2 miles from my current location at work. For my home Internet connection, the IP geolocation is only accurate to about 3 miles. With my attack demo however, I’ve been consistently getting locations within about 10 meters of the device.”

Young said a demo he created (a video of which is below) is accurate enough that he can tell roughly how far apart his device in the kitchen is from another device in the basement.

“I’ve only tested this in three environments so far, but in each case the location corresponds to the right street address,” Young said. “The Wi-Fi based geolocation works by triangulating a position based on signal strengths to Wi-Fi access points with known locations based on reporting from people’s phones.”

Beyond leaking a Chromecast or Google Home user’s precise geographic location, this bug could help scammers make phishing and extortion attacks appear more realistic. Common scams like fake FBI or IRS warnings or threats to release compromising photos or expose some secret to friends and family could abuse Google’s location data to lend credibility to the fake warnings, Young notes.

“The implications of this are quite broad including the possibility for more effective blackmail or extortion campaigns,” he said. “Threats to release compromising photos or expose some secret to friends and family could use this to lend credibility to the warnings and increase their odds of success.”

When Young first reached out to Google in May about his findings, the company replied by closing his bug report with a “Status: Won’t Fix (Intended Behavior)” message. But after being contacted by KrebsOnSecurity, Google changed its tune, saying it planned to ship an update to address the privacy leak in both devices. Currently, that update is slated to be released in mid-July 2018.

According to Tripwire, the location data leak stems from poor authentication by Google Home and Chromecast devices, which rarely require authentication for connections received on a local network.

“We must assume that any data accessible on the local network without credentials is also accessible to hostile adversaries,” Young wrote in a blog post about his findings. “This means that all requests must be authenticated and all unauthenticated responses should be as generic as possible. Until we reach that point, consumers should separate their devices as best as is possible and be mindful of what web sites or apps are loaded while on the same network as their connected gadgets.”

Earlier this year, KrebsOnSecurity posted some basic rules for securing your various “Internet of Things” (IoT) devices. That primer lacked one piece of advice that is a bit more technical but which can help mitigate security or privacy issues that come with using IoT systems: Creating your own “Intranet of Things,” by segregating IoT devices from the rest of your local network so that they reside on a completely different network from the devices you use to browse the Internet and store files.

“A much easier solution is to add another router on the network specifically for connected devices,” Young wrote. “By connecting the WAN port of the new router to an open LAN port on the existing router, attacker code running on the main network will not have a path to abuse those connected devices. Although this does not by default prevent attacks from the IoT devices to the main network, it is likely that most naïve attacks would fail to even recognize that there is another network to attack.”

For more on setting up a multi-router solution to mitigating threats from IoT devices, check out this in-depth post on the subject from security researcher and blogger Steve Gibson.

Update, June 19, 6:24 p.m. ET: The authentication problems that Tripwire found are hardly unique to Google’s products, according to extensive research released today by artist and programmer Brannon Dorsey. Check out Wired.com‘s story on Dorsey’s research here.

CryptogramRidiculously Insecure Smart Lock

Tapplock sells an "unbreakable" Internet-connected lock that you can open with your fingerprint. It turns out that:

  1. The lock broadcasts its Bluetooth MAC address in the clear, and you can calculate the unlock key from it.

  2. Any Tapplock account can unlock every lock.

  3. You can open the lock with a screwdriver.

Regarding the third flaw, the manufacturer has responded that "...the lock is invincible to the people who do not have a screwdriver."

You can't make this stuff up.

EDITED TO ADD: The quote at the end is from a different smart lock manufacturer. Apologies for that.

Worse Than FailureCodeSOD: The Sanity Check

I've been automating deployments at work, and for Reasons™, this is happening entirely in BASH. Those Reasons™ are that the client wants to use Salt, but doesn't want to give us access to their Salt environment. Some of our deployment targets are microcontrollers, so Salt isn't even an option.

While I know the shell well enough, I'm getting comfortable with more complicated scripts than I usually write, along with tools like xargs which may be the second best shell command ever invented. yes is the best, obviously.

The key point is that the shell, coupled with the so-called "Unix Philosophy" is an incredibly powerful tool. Even if you already know that it's powerful, it's even more powerful than you think it is.

How powerful? Well, how about ripping apart the fundamental rules of mathematics? An anonymous submitter found this prelude at the start of every shell script in their organization.

#/usr/bin/env bash
declare -r ZERO=$(true; echo ${?})
declare -r DIGITZERO=0
function sanity_check() {
    function err_msg() {
        echo -e "\033[31m[ERR]:\033[0m ${@}"
    }
    if [ ${ZERO} -ne ${DIGITZERO} ]; then
        err_msg "The laws of physics doesn't apply to this server."
        err_msg "Real value ${ZERO} is not equal to ${DIGITZERO}."
        exit 1
    fi
}
sanity_check

true, like yes, is one of those absurdly simple tools: it's a program that completes successfully (returning a 0 exit status back to the shell). The ${?} expression contains the last exit status. Thus, the variable $ZERO will contain… 0. Which should then be equal to 0.
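
If you want to see for yourself how little this check guards against, the whole dance reduces to a couple of lines at any bash prompt:

true
echo ${?}                                         # prints 0; true succeeds, as it always does

ZERO=$(true; echo ${?})
[ "${ZERO}" -ne 0 ] && echo "physics is broken"   # never prints anything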

Now, maybe BASH isn't BASH anymore. Maybe true has been patched to fail. Maybe, maybe, maybe, but honestly, I'm wondering whose sanity is actually being checked in the sanity_check?


Planet Linux AustraliaJames Morris: Linux Security BoF at Open Source Summit Japan

This is a reminder for folks attending OSS Japan this week that I’ll be leading a Linux Security BoF session on Wednesday at 6pm.

If you’ve been working on a Linux security project, feel welcome to discuss it with the group. We will have a whiteboard and projector. This is also a good opportunity to raise topics for discussion, and to ask questions about Linux security.

See you then!

Valerie AuroraIn praise of the 30-hour work week

I’ve been working about 30 hours a week for the last two and a half years. I’m happier, healthier, and wealthier than when I was working 40, 50, or 60 hours a week as a full-time salaried software engineer (that means I was only paid for 40 hours a week). If you are a salaried professional in the U.S. who works 40 hours a week or more, there’s a pretty good chance you could also be working fewer hours, possibly even for more money. In this post, I’ll explain some of the myths and the realities that promote overwork. If you’re already convinced that you’d like to work fewer hours, you can skip straight to how you can start taking steps to work less.

A little about me: After college, I worked for about 8 years as a full-time salaried software engineer. Like many software engineers, I often worked 50 or 60 hour weeks while being paid for 40 hours a week. I hit the glass ceiling at age 29 and started working part-time hourly as a software consultant. I loved the hours but hated the instability and was about to lose my health insurance benefits (this was before the ACA passed). Then a colleague offered me a job at his storage startup, working 20 hours a week, salaried, with benefits. I thought, “You can do that???” and negotiated a 30 hour salaried job with benefits with my dream employer. I worked full-time again for about 5 years after that, and put in more 60 hour weeks while co-founding a non-profit. After shutting the non-profit down, I took 3 months off to recover. For the last two and a half years, I’ve worked for myself as a diversity and inclusion in tech consultant. I rarely work more than 30 hours a week and last year I made more money than any other year of my life.

Now, if I told my 25-year-old self this, she’d probably refuse to believe me. When I was 25, I believed my extra hours and hard work would be rewarded, that I’d be able to work 50 or 60 hours a week forever, and that I’d never enjoy anything as much as working. Needless to say, I no longer believe any of those things.

Myths about working overtime

Here are a few of the myths I used to believe about working overtime:

Myth: I can be productive for more than 8 hours a day on a sustained basis

How many hours a day can I productively write code? This will vary for everyone, but the number I hear most often is 4 hours a day 5 days a week, which is my max. I slowly learned that if I wrote code longer than that, my productivity steeply declined. After 8 hours, I was just adding bugs that I’d have to fix the next day. For the other 4 hours, I was better off dealing with email, writing papers, submitting expenses, reading books, or taking a walk (during which I’d usually figure out what I needed to do next in my program). After 8 hours, my brain is useless for anything requiring focus or discipline. I can do more work for short bursts occasionally when I’m motivated, but it takes a toll on my health and I need extra time off to recover.

I know other people can do focused productive work for more than 8 hours a day; congrats! However, keep in mind that I know plenty of people who thought they could work more than 8 hours a day, and then discovered they’d given themselves major stress-related health problems—repetitive stress injury, ulcers, heart trouble—or ignored existing health problems until they got so bad they started interfering with their work. This includes several extremely successful people who only need to sleep 5 hours a night and were using the extra time that gave them to do more work. The human body can only take so much stress.

Myth: My employer will reward me for working extra hours

Turns out, software engineering isn’t graded on effort, like kindergarten class. I remember the first year of my career when I worked my usual overtime and did not get a promotion or a raise; the company was slowly going out of business and it didn’t matter how many hours I worked—I wasn’t getting a raise. Given that my code quality fell off after 4 hours and went negative after 8 hours, it was a waste of time to work overtime anyway. At the same time, I always felt a lot of pressure to appear to be working for more than 40 hours a week, such that 40 hours became the unofficial minimum. The end result was a lot of programmers in the office late at night doing things other than coding: playing games, reading the internet, talking with each other. Which is great when you have no friends outside work, no family nearby, and no hobbies; less great when you do.

Overall, my general impression of the reward structure for software engineers is that people who fit people’s preconceptions of what a programmer looks like and who aggressively self-promote are more likely to get raises and promotions than people who produce more value. (Note that aggressive self-promotion is often punished in women of all races, people of color, disabled folks, immigrants, etc.)

Myth: People who work 40 hours or less are lazy

I was raised with fairly typical American middle-class beliefs about work: work is virtuous, if people don’t have jobs it’s because of some personal failing of theirs, etc. I started to change my mind when I read about Venezuelan medical doctors who were unable to buy shoes during an economic recession. Medical school is hard; I couldn’t believe all of those doctors were lazy! In my first full-time job, I had a co-worker who spent 40 hours a week in the office, but never did any real work. Then I realized that many of the hardest working people I knew were mothers who worked in the home for no pay at all. Nowadays I understand that I can’t judge someone’s moral character by the number of hours of labor they do (or are paid for) each week.

The kind of laziness that does concern me comes from abuse: people using coercion to extract an unfair amount of value from other people’s labor. This includes many abusive spouses, most billionaires, and many politicians. I’m not worried about people who want to work 40 hours a week or fewer so they can spend more time with their kids or crocheting or traveling; they aren’t the problem.

Myth: I work more than 40 hours because I’d be unhappy otherwise

When I was 25, I couldn’t imagine wanting to do other things with the time I was spending on work. With hindsight, I can see that’s because I was socially isolated and didn’t know how to deal with my anxiety other than by working. If I tried to stop working, I would very quickly run out of things to do that I enjoyed, and would end up writing some more code or answering some more work email just to have some positive feelings. It took years and years of therapy, building up my social circle, and developing hobbies before I had enough enjoyable things to do other than work.

Working for pay gives a lot of people joy and that is perfectly fine! It’s when you have few other ways to feel happy that overwork begins to be a problem.

Myth: The way to fix my anxiety is to work more hours

The worse the social safety net is in your country, the more anxious you probably are about your future: Will you have a place to live? Food to eat? Medical care? Clothes for your kids? We often respond to anxiety by shutting down any higher thought and focusing on what is in front of us. For many of us in this situation, the obvious answer seems to be “work more hours.” Now, if you are being paid for working more hours, this makes some sense: money contributes to security. But if you’re not, those extra hours bring no concrete reward. You are just hoping that your employer will take the extra work into consideration when deciding whether to give you a raise or end your employment. Unfortunately, in my experience, the best way to get a raise or keep your job is to be as similar to your management as possible.

If you can take the time to work with your anxiety and pull back and look at the larger picture, you’ll often find better ways to use those extra hours to improve your personal safety net. Just a few off the top of my head: building your professional network, improving your resume, learning new skills, helping friends, caring for your family, meditating, taking care of your health, and talking to a therapist about your anxiety. The future is uncertain and only partially under your control; nothing can change that fundamental truth. Consider carefully whether working unpaid hours is the best way to increase your safety.

Myth: The extra hours are helping me learn skills that will pay off later

Maybe it’s just me, but I can only learn new stuff for a few hours a day. Judging by the recommended course loads at universities, most people can’t actively learn new stuff more than 40 hours a week. If I’ve been working for more than 8 hours, all I can do is repeat things I’ve already learned (like stepping through a program in a debugger). Creative thought and breakthroughs are pretty thin on the ground after 8 hours of hard work. The only skills I’m sure I learned from working more than 40 hours a week are: how to keep going through hunger, how to ignore pain in my body, how to keep going through boredom, how to stay awake, and how to sublimate my healthy normal human desires. Oh, and which office snack foods are least nauseating at 2am.

Myth: Companies won’t hire salaried professionals part-time

Some won’t, some will. Very few companies will spontaneously offer part-time salaried work for a position that usually requires full-time, but if you have negotiating power and you’re persistent, you will be surprised how often you can get part-time work. Negotiating power usually increases as you become a more desirable employee; if you can’t swing part-time now, keep working on your career and you may be able to get it in the future.

Myth: I can only get benefits if I work full-time

Whether a company can offer the benefits available to full-time employees to part-time employees is up to their internal policies combined with local law. Human beings create policies and laws and they can be changed. Small companies are generally more flexible about policies than large companies. Some companies offer part-time positions as a competitive advantage in hiring. Again, having more negotiating power will help here. Companies are more likely to change their policies or make exceptions if they really really want your services.

Myth: My career will inevitably suffer if I work part-time

There are absolutely some career goals that can only be achieved by working full-time. But working part-time can also help your career. You can use your extra time to learn new skills, or improve your education. You can work on unpaid projects that improve your portfolio. You can extend your professional network. You can get career coaching. You can start your own business. You can write books. You can speak at conferences. Many things are possible.

Real barriers to working fewer hours

Under capitalism, in the absence of enforced laws against working more than a certain number of hours a week, the number of hours a week employees work will grow until the employer is no longer getting a marginal benefit out of each additional hour. That means if the employer will get any additional value out of an hour above and beyond the costs of working that hour, they’ll require the employee to work that hour. This happens without regard for the cost to the employee or their dependents, in terms of health, happiness, or quality of life.

In the U.S. and many other countries, we often act like the 40-hour working week is some kind of natural law, when the laws surrounding it were actually the result of a long, desperately fought battle between labor and capital extending over many decades. Even so, what laws we do have limiting the amount of labor an employer can demand from an employee have many loopholes, and often go unenforced. Wage theft—employers stealing wages from employees through a variety of means, including unpaid overtime—accounts for more money stolen in the U.S. than all robberies.

Due to loopholes and lax enforcement, many salaried professionals end up in a situation where all the people they are competing with for jobs or promotions are all working far more than 40 hours a week. They don’t have to be working efficiently for more than 40 hours a week for this to be of benefit to their employers, they just have to be creating more value than they are costing during those hours of work. Some notorious areas of high competition and high hours include professors on the tenure track, lawyers on the partner track, and software engineers working in competitive fields.

In particular, software engineers working for venture capital-funded startups in fields with lots of competitors are under a lot of pressure to produce more work more quickly, since timing is such an important element of success in the fields that venture capital invests in. The result is a lot of software engineers who burn themselves out working too many hours for startups for less total compensation than they’d make working at Microsoft or IBM, despite whatever stock options they were offered to make up for lower salaries and benefits. This is because (a) most startups fail, (b) most software engineers either don’t vest their stock options before they quit, or quit before the company goes public and can’t afford to buy the options during the short (usually 90-day) exercise window after they quit.

No individual actions or decisions by a single worker can change these kinds of competitive pressures, and if your goal is to succeed in one of these highly competitive, poorly governed areas, you’ll probably have to work more than 40 hours a week. Overall, unchecked capitalism leads to a Red Queen’s race, in which individual workers have to work as hard as they can just to keep up with their competition (and those who can’t, die). I don’t want to live in this world, which is why I support laws limiting working hours and requiring pay, government-paid parental and family leave, a universal basic income, and the unions and political parties that fight for and win these protections.

Tips for working fewer hours

These tips for working fewer hours are aimed primarily at software engineers in the U.S. who have some job mobility, and more generally for salaried professionals in the U.S. Some of these tips may be useful for other folks as well.

See a career counselor or career coach. Most of us are woefully unprepared to guide and shape our career paths. A career counselor can help you figure out what you value, what your goals should be, and how to achieve them, while taking into account your whole self (including family, friends, and hobbies). A career counselor will help you with the mechanics of actually working fewer hours: negotiating down your current job, finding a new job, starting your own business, etc. To find a career counselor, ask your friends for recommendations or search online review sites.

Go to therapy. If you’re voluntarily overworking, you’ve internalized a lot of ideas about what a good person is or how to be happy that are actually about how to make employers wealthier. Even if you are your own employer, you’ll still need to work these out. You’re also likely to be dealing with anxiety or unresolved problems in your life by escaping to work. You’ll need to learn new values, new ideas, and new coping mechanisms before you can work fewer hours. I’ve written about how to find therapy here. You might also want to read up on workaholics. The short version is: there is some reason you are currently overworking, and you’ll need to address that before you can stop overworking.

Find other things to do with your time. Spend more time with your kids, develop new hobbies or pick up old ones, learn a sport, watch movies, volunteer, write a novel – the options are endless. Learn to identify the voice in your head that says you shouldn’t be wasting your time on that and tell it to mind its own business.

Search for more efficient ways to make money. In general, hourly wage labor is going to have a very hard limit on how much money you can make per hour, even in highly paid positions. Work with your career counselor to figure out how to make more money per hour of labor. Often this looks like teaching, reviewing, or selling a product or service with low marginal cost.

Talk to a financial advisor. Reducing hours often means at least some period of lower income, even if your income ends up higher after that. If like many people you are living paycheck-to-paycheck, you’ll need help. A professional financial advisor can help you figure out how to get through this period and make better financial decisions in general. [Added 19-June-2018]

Finally, we can help normalize working fewer hours a week just by talking about it and, if it is safe for us, actually asking for fewer hours of work. We can also support unions, elect politicians who promise to pass legislation protecting workers, promote universal basic income, support improvements in the social safety net, and raise awareness of what working conditions are like without these protections.

,

Planet Linux AustraliaMichael Still: Rejected talk proposal: Design at scale: OpenStack versus Kubernetes


This proposal was submitted for pyconau 2018. It wasn’t accepted, but given I’d put the effort into writing up the proposal I’ll post it here in case it’s useful some other time. The oblique references to OpenStack are because pycon had an “anonymous” review system in 2018, and I was avoiding saying things which directly identified me as the author.


OpenStack and Kubernetes solve very similar problems. Yet they approach those problems in very different ways. What can we learn from the different approaches taken? The differences aren’t just technical though, there are some interesting social differences too.

OpenStack and Kubernetes solve very similar problems – at their most basic level they both want to place workloads on large clusters of machines, and ensure that those placement decisions are as close to optimal as possible. The two projects even have similar approaches to the fundamentals – they are both orchestration systems at their core, seeking to help existing technologies run at scale instead of inventing their own hypervisors or container run times.

Yet they have very different approaches to how to perform these tasks. OpenStack takes a heavily centralised and monolithic approach to orchestration, whilst Kubernetes has a less stateful and more laissez faire approach. Some of that is about early technical choices and the heritage of the projects, but some of it is also about hubris and a desire to tightly control. To be honest I lived the OpenStack experience so I feel I should be solidly in that camp, but the Kubernetes approach is clever and elegant. There’s a lot to like on the Kubernetes side of the fence.

It’s increasingly common that at some point you’ll encounter one of these systems, as neither seems likely to go away in the next few years. Understanding some of the basics of their operation is therefore useful, as well as being interesting at a purely hypothetical level.


The post Rejected talk proposal: Design at scale: OpenStack versus Kubernetes appeared first on Made by Mikal.

Planet Linux AustraliaMichael Still: Accepted talk proposal: Learning from the mistakes that even big projects make


This proposal was submitted for pyconau 2018. It was accepted, but hasn’t been presented yet. The oblique references to OpenStack are because pycon had an “anonymous” review system in 2018, and I was avoiding saying things which directly identified me as the author.


Since 2011, I’ve worked on a large Open Source project in python. It kind of got out of hand – 1000s of developers and millions of lines of code. Yet despite being well resourced, we made the same mistakes that those tiny scripts you whip up to solve a small problem make. Come learn from our fail.

This talk will use the privilege separation daemon that the project wrote to tell the story of decisions that were expedient at the time, and how we regretted them later. In a universe in which you can only run commands as root via sudo, dd’ing from one file on the filesystem to another seems almost reasonable. Especially if you ignore that the filenames are defined by the user. Heck, we shell out to “mv” to move files around, even when we don’t need escalated permissions to move the file in question.

While we’ll focus mainly on the security apparatus because it is the gift that keeps on giving, we’ll bump into other examples along the way as well. For example how we had pluggable drivers, but you have to turn them on by passing in python module paths. So what happens when we change the interface the driver is required to implement and you have a third party driver? The answer isn’t good. Or how we refused to use existing Open Source code from other projects through a mixture of hubris and licensing religion.

On a strictly technical front, this is a talk about how to do user space privilege separation sensibly. Although we should probably discuss why we also chose in the last six months to not do it as safely as we could.

For a softer technical take, the talk will cover how doing things right was less well documented than doing things the wrong way. Code reviewers didn’t know the anti-patterns, which were common in the code base, so made weird assumptions about what was ok or not.

On a human front, this is about herding cats: developers with external pressures from their various employers, skipping steps because it was expedient, and throwing automation in front of developers because having a conversation as adults is hard. Ultimately we ended up being close to stalled before we were “saved” from an unexpected direction.

In the end I think we’re in a reasonable place now, so I certainly don’t intend to give a lecture about doom and gloom. Think of us more as a light hearted object lesson.


The post Accepted talk proposal: Learning from the mistakes that even big projects make appeared first on Made by Mikal.

Don MartiHelping people move ad budgets away from evil stuff

Hugo-award-winning author Charles Stross said that a corporation is some kind of sociopathic hive organism, but as far as I can tell a corporation is really more like a monkey troop cosplaying a sociopathic hive organism.

This is important to remember because, among other reasons, it turns out that the money that a corporation spends to support democracy and creative work comes from the same advertising budget as the money it spends on random white power trolls and actual no-shit Nazis. The challenge for customers is to help people at corporations who want to do the right thing with the advertising budget, but need to be able to justify it in terms that won't break character (since they have agreed to pretend to be part of a sociopathic hive organism that only cares about its stock price).

So here is a quick follow-up to my earlier post about denying permission for some kinds of ad targeting.

Techcrunch reports that "Facebook Custom Audiences," the system where advertisers upload contact lists to Facebook in order to target the people on those lists with ads, will soon require permission from the people on the list. Check it out: Introducing New Requirements for Custom Audience Targeting | Facebook Business. On July 2, Facebook's own rules will extend a subset of Europe-like protection to everyone with a Facebook account. Beaujolais!

So this is a great opportunity to help people who work for corporations and want to do the right thing. Denying permission to share your info with Facebook can move the advertising money that they spend to reach you away from evil stuff and towards sites that make something good. Here's a permission withdrawal letter to cut and paste. Pull requests welcome.

,

Rondam RamblingsSuffer the little children

Nothing illustrates the complete moral and intellectual bankruptcy of Donald Trump's supporters, apologists, and enablers better than Jeff Sessions's Biblical justification for separating children from their families: “I would cite you to the Apostle Paul and his clear and wise command in Romans 13, to obey the laws of the government because God has ordained the government for his purposes,”

Planet Linux AustraliaDonna Benjamin: The Five Whys

The Five Whys - Need to go to the hardware store?

Imagine you work in a hardware store. You notice a customer puzzling over the vast array of electric drills.

She turns to you and says, “I need a drill, but I don’t know which one to pick.”

You ask, “So, why do you want a drill?”

“To make a hole,” she replies, somewhat exasperated. “Isn’t that obvious?”

“Sure,” you might say, “but why do you want to drill a hole? It might help us decide which drill you need!”

“Oh, okay,” and she goes on to describe the need to thread cable from one room to another.

From there, we might want to know more about the walls, about the type and thickness of the cable, and perhaps about what the cable is for. But what if we keep asking why? What if the next question was something like this?

“Why do you want to pull the cable from one room to the other?”

Our customer then explains she wants to connect directly to the internet router in the other room. "Our wifi reception is terrible! This seemed the fastest, easiest way to fix that."

At this point, there may be other solutions to the bad wifi problem that don’t require a hole at all, let alone a drill.

Someone who needs a drill rarely wants a drill; they don’t even really want a hole.

It’s the utility of that hole that we’re trying to uncover with the 5 Whys.

Acknowledgement

I can't remember who first told me about this technique. I wish I could; it's been profoundly useful, and I evangelise its simple power at every opportunity. Thank you, whoever you are. I honour your generous wisdom by paying it forward today.

More about the Five whys

Image credits

Creative Commons Icons all from the Noun Project

  • Drill by Andrejs Kirma
  • Mouse Hole by Sergey Demushkin
  • Cable by Amy Schwartz
  • Internet by Vectors Market
  • Wifi by Baboon designs
  • Not allowed by Adnen Kadri

,

Planet Linux AustraliaLev Lafayette: Being An Acrobat: Linux and PDFs

The PDF file format can be manipulated efficiently on Linux with free software, in ways that may not be easy in proprietary operating systems or applications. This includes a review of various PDF readers for Linux, creation of PDFs from office documents using LibreOffice, editing PDF documents, converting PDF documents to images, extracting text from non-OCR PDF documents, converting to PostScript, converting reStructuredText, Markdown, and other formats, searching PDFs with regular expressions, converting to text, extracting images, separating and combining PDF documents, creating PDF presentations from text, creating fillable PDF forms, encrypting and decrypting PDF documents, and parsing PDF documents.
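
By way of illustration, a few of those operations map onto one-liners like the following (a sketch assuming LibreOffice, poppler-utils and qpdf are installed; the filenames are placeholders):

# Create a PDF from an office document
libreoffice --headless --convert-to pdf report.odt

# Extract the text and the embedded images from an existing PDF
pdftotext paper.pdf paper.txt
pdfimages -all paper.pdf img

# Split a PDF into single pages, then recombine a selection
pdfseparate paper.pdf page-%d.pdf
pdfunite page-1.pdf page-3.pdf excerpt.pdf

# Encrypt the result with a user password and an owner password (256-bit AES)
qpdf --encrypt userpass ownerpass 256 -- excerpt.pdf excerpt-locked.pdf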

A presentation to Linux Users of Victoria, Saturday June 16, 2018

CryptogramFriday Squid Blogging: Cephalopod Week on Science Friday

It's Cephalopod Week! "Three hearts, eight arms, can't lose."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Just Handle It

Clint writes, "On Facebook, I tried to report a post as spam. I think I might just have to accept it."

 

"Jira seems to have strange ideas about my keyboard layout... Or is there a key that I don't know about?" writes Rob H.

 

George wrote, "There was deep wisdom bestowed upon weary travelers by the New York subway system at the Jamaica Center station this morning."

 

"Every single number field on the checkout page, including phone and credit card, was an integer. Just in case, you know, you felt like clicking a lot," Jeremiah C. writes.

 

"I don't know which is more ridiculous: that a Linux recovery image is a Windows 10, or that there's a difference between Pro and Professional," wrote Dima R.

 

"I got my weekly workout summary and, well, it looks I might have been hitting the gym a little too hard," Colin writes.

 


Planet Linux AustraliaOpenSTEM: Assessment Time

For many of us, the colder weather has started to arrive and mid-year assessment is in full swing. Teachers are under the pump to produce mid-year reports and grades. The OpenSTEM® Understanding Our World® program aims to take the pressure off teachers by providing for continuous assessment throughout the term. Not only are teachers continually […]

Planet Linux AustraliaDonna Benjamin: Makarrata

The time has come
To say fair's fair...

Dear members of the committee,

Please listen to the Uluru statement from the heart. Please hear those words. Please accept them, please act to adopt them.

Enshrine a voice for Australia’s first nation peoples in the Australian constitution.

Create a commission for Makarrata.

Invest in uncovering and telling the truth of our history.

We will be a stronger, wiser nation when we truly acknowledge the frontier wars and not only a stolen generation but stolen land, and stolen hope.

We have nothing to lose, and everything to gain through real heartfelt recognition and reconciliation.

Makarrata. Treaty. Sovereignty.

Please. I am Australian. I want this.

I felt sick shame when the prime minister rejected the Uluru statement. He did not, does not, speak for me.

Donna Benjamin
Melbourne, VIC.

Planet Linux AustraliaDonna Benjamin: Leadership, and teamwork.

Photo by Mohamed Abd El Ghany - Women protestors in Tahrir Square, Egypt 2013.

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us, that Hitler was elected. I too, was elected to the board of the Drupal Association, I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, right now, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.

Planet Linux AustraliaDonna Benjamin: DrupalCon Nashville

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

https://events.drupal.org/nashville2018

,

CryptogramE-Mail Vulnerabilities and Disclosure

Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.

But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they've fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it's fixed. Or, if you know how to do it, turn off your e-mail client's ability to process HTML e-mail or -- even better -- stop decrypting e-mails from within the client. There's even more complex advice for more sophisticated users, but if you're one of those, you don't need me to explain this to you.

Consider your encrypted e-mail insecure until this is fixed.

All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It's a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.

Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.

Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names -- the e-mail vulnerability is called "Efail" -- websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.

This simultaneous announcement is best for security. While it's always possible that some organization -- either government or criminal -- has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.

Things get much more complicated when multiple vendors are involved. In this case, Efail isn't a vulnerability in a particular product; it's a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that's close to impossible.

Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn't leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser -- honestly, a really bad idea -- which resulted in details leaking.

After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published -- and lots of press followed.

All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or -- even more problematic -- communities without clear ownership. And that's what we have with OpenPGP. It's even worse when the bug involves the interaction between different parts of a system. In this case, there's nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn't its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.

Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use -- our phones and computers -- to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won't be. Sometimes it'll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don't even know who to blame for that one.

It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn't true for many of the embedded and low-cost Internet of things software packages. They're designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn't anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren't patchable at all. Right now, if you own a digital video recorder that's vulnerable to being recruited for a botnet -- remember Mirai from 2016? -- the only way to patch it is to throw it away and buy a new one.

Patching is starting to fail, which means that we're losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it's hard to see any other viable alternative.

Getting back to e-mail, the truth is that it's incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated and have been difficult to properly use the whole time. If we could start again, we would design something better and more user friendly -- but the huge number of legacy applications that use the existing standards mean that we can't. I tell people that if they want to communicate securely with someone, to use one of the secure messaging systems: Signal, Off-the-Record, or -- if having one of those two on your system is itself suspicious -- WhatsApp. Of course they're not perfect, as last week's announcement of a vulnerability (patched within hours) in Signal illustrates. And they're not as flexible as e-mail, but that makes them easier to secure.

This essay previously appeared on Lawfare.com.

CryptogramRouter Vulnerability and the VPNFilter Botnet

On May 25, the FBI asked us all to reboot our routers. The story behind this request is one of sophisticated malware and unsophisticated home-network security, and it's a harbinger of the sorts of pervasive threats -- from nation-states, criminals and hackers -- that we should expect in coming years.

VPNFilter is a sophisticated piece of malware that infects mostly older home and small-office routers made by Linksys, MikroTik, Netgear, QNAP and TP-Link. (For a list of specific models, click here.) It's an impressive piece of work. It can eavesdrop on traffic passing through the router -- specifically, log-in credentials and SCADA traffic, which is a networking protocol that controls power plants, chemical plants and industrial systems -- attack other targets on the Internet and destructively "kill" its infected device. It is one of a very few pieces of malware that can survive a reboot, even though that's what the FBI has requested. It has a number of other capabilities, and it can be remotely updated to provide still others. More than 500,000 routers in at least 54 countries have been infected since 2016.

Because of the malware's sophistication, VPNFilter is believed to be the work of a government. The FBI suggested the Russian government was involved for two circumstantial reasons. One, a piece of the code is identical to one found in another piece of malware, called BlackEnergy, that was used in the December 2015 attack against Ukraine's power grid. Russia is believed to be behind that attack. And two, the majority of those 500,000 infections are in Ukraine and controlled by a separate command-and-control server. There might also be classified evidence, as an FBI affidavit in this matter identifies the group behind VPNFilter as Sofacy, also known as APT28 and Fancy Bear. That's the group behind a long list of attacks, including the 2016 hack of the Democratic National Committee.

Two companies, Cisco and Symantec, seem to have been working with the FBI during the past two years to track this malware as it infected ever more routers. The infection mechanism isn't known, but we believe it targets known vulnerabilities in these older routers. Pretty much no one patches their routers, so the vulnerabilities have remained, even if they were fixed in new models from the same manufacturers.

On May 30, the FBI seized control of toknowall.com, a critical VPNFilter command-and-control server. This is called "sinkholing," and serves to disrupt a critical part of this system. When infected routers contact toknowall.com, they will no longer be contacting a server owned by the malware's creators; instead, they'll be contacting a server owned by the FBI. This doesn't entirely neutralize the malware, though. It will stay on the infected routers through reboot, and the underlying vulnerabilities remain, making the routers susceptible to reinfection with a variant controlled by a different server.

If you want to make sure your router is no longer infected, you need to do more than reboot it, the FBI's warning notwithstanding. You need to reset the router to its factory settings. That means you need to reconfigure it for your network, which can be a pain if you're not sophisticated in these matters. If you want to make sure your router cannot be reinfected, you need to update the firmware with any security patches from the manufacturer. This is harder to do and may strain your technical capabilities, though it's ridiculous that routers don't automatically download and install firmware updates on their own. Some of these models probably do not even have security patches available. Honestly, the best thing to do if you have one of the vulnerable models is to throw it away and get a new one. (Your ISP will probably send you a new one free if you claim that it's not working properly. And you should have a new one, because if your current one is on the list, it's at least 10 years old.)

So if it won't clear out the malware, why is the FBI asking us to reboot our routers? It's mostly just to get a sense of how bad the problem is. The FBI now controls toknowall.com. When an infected router gets rebooted, it connects to that server to get fully reinfected, and when it does, the FBI will know. Rebooting will give it a better idea of how many devices out there are infected.

Should you do it? It can't hurt.

Internet of Things malware isn't new. The 2016 Mirai botnet, for example, created by a lone hacker and not a government, targeted vulnerabilities in Internet-connected digital video recorders and webcams. Other malware has targeted Internet-connected thermostats. Lots of malware targets home routers. These devices are particularly vulnerable because they are often designed by ad hoc teams without a lot of security expertise, stay around in networks far longer than our computers and phones, and have no easy way to patch them.

It wouldn't be surprising if the Russians targeted routers to build a network of infected computers for follow-on cyber operations. I'm sure many governments are doing the same. As long as we allow these insecure devices on the Internet -- and short of security regulations, there's no way to stop them -- we're going to be vulnerable to this kind of malware.

And next time, the command-and-control server won't be so easy to disrupt.

This essay previously appeared in the Washington Post.

EDITED TO ADD: The malware is more capable than we previously thought.

CryptogramThomas Dullien on Complexity and Security

For many years, I have said that complexity is the worst enemy of security. At CyCon earlier this month, Thomas Dullien gave an excellent talk on the subject with far more detail than I've ever provided. Video. Slides.

Planet Linux AustraliaDonna Benjamin: Makarrata

The time has come
To say fair's fair...

Dear members of the committee,

Please listen to the Uluru statement from the heart. Please hear those words. Please accept them, please act to adopt them.

Enshrine a voice for Australia’s first nation peoples in the Australian constitution.

Create a commission for Makarrata.

Invest in uncovering and telling the truth of our history.

We will be a stronger, wiser nation when we truly acknowledge the frontier wars and not only a stolen generation but stolen land, and stolen hope.

We have nothing to lose, and everything to gain through real heartfelt recognition and reconciliation.

Makarrata. Treaty. Sovereignty.

Please. I am Australian. I want this.

I felt sick shame when the prime minister rejected the Uluru statement. He did not, does not, speak for me.

Donna Benjamin
Melbourne, VIC.

Worse Than FailureThe New Guy (Part II): Database Boogaloo

When we last left our hero Jesse, he was wading through a quagmire of undocumented bad systems while trying to solve an FTP issue. Several months later, Jesse had things figured out a little better and was starting to feel comfortable in his "System Admin" role. He helped the company join the rest of the world by dumping Windows NT 4.0 and XP. The users whose DNS settings he bungled were now happily utilizing Windows 10 workstations. His web servers were running Windows Server 2016, and the SQL boxes were up to SQL 2016. Plus his nemesis Ralph had since retired. Or died. Nobody knew for sure. But things were good.

Despite all these efforts, there were still several systems that relied on Access 97 haunting him every day. Jesse spent tens of dollars of his own money on well-worn Access 97 programming books to help plug holes in the leaky dike. The A97 Finance system in particular was a complete mess to deal with. There were no clear naming guidelines and table locations were haphazard at best. Stored procedures and functions were scattered between the A97 VBS and the SQL DB. Many views/functions were nested with some going as far as eight layers while others would form temporary tables in A97 then continue to nest.

One of Jesse's small wins involved improving performance of some financial reporting queries that took minutes to run before but now took seconds. A few of these sped-up reports happened to be ones that Shane, the owner of the company, used frequently. The sudden time-savings got his attention to the point of calling Jesse in to his office to meet.

"Jesse! Good to see you!" Shane said in an overly cheerful manner. "I'm glad to talk to the guy who has saved me a few hours a week with his programmering fixes." Jesse downplayed the praise before Shane got to the point. "I'd like to find out from you how we can make further improvements to our Finance program. You seem to have a real knack for this."

Jesse, without thinking about it, blurted, "This here system is a pile of shit." Shane stared at him blankly, so he continued, "It should be rebuilt from the ground up by experienced software development professionals. That's how we make further improvements."

"Great idea! Out with the old, in with the new! You seem pretty well-versed in this stuff, when can you start on it?" Shane said with growing excitement. Jesse soon realized his response had backfired and he was now on the hook to the owner for a complete system rewrite. He took a couple classes on C# and ASP.NET during his time at Totally Legit Technical Institute so it was time to put that valuable knowledge to use.

Shane didn't just let Jesse loose on redoing the Finance program though. He insisted Jesse work closely with Linda, their CFO who used it the most. Linda proved to be very resistant to any kind of change Jesse proposed. She had mastered the painstaking nuances of A97 and didn't seem to mind fixing large amounts of bad data by hand. "It makes me feel in control, you know," Linda told him once after Jesse tried to explain the benefits of the rewrite.

While Jesse pecked away at his prototype, Linda would relentlessly nitpick any UI ideas he came up with. If she had it her way, the new system would only be usable by someone as braindead as her. "I don't need all these fancy menus and buttons! Just make it look and work like it does in the current system," she would say at least once a week. "And don't you dare take my manual controls away! I don't trust your automated robotics to get these numbers right!" In the times it wasn't possible to make something work like Access 97, she would run to Shane, who would have to talk her down off the ledge.

Even though Linda opposed Jesse at every turn, the new system was faster and very expandable. Using C# .NET 4.7.1 with WPF, it was much less of an eyesore. The database was also clearly defined with full documentation, both on the tables and in the stored procedures. The database size managed to go from 8 GB to .8 GB with no loss in data.

The time came at last for go-live of Finance 2.0. The thing Jesse was most excited about was shutting down the A97 system and feeling Linda die a little bit inside. He sent out an email to the Finance department with instructions for how to use it. The system was well-received by everyone except Linda. But that still led to more headaches for Jesse.

With Finance 2.0 in their hands, the rest of the users noticed the capabilities modern technology brought. The feature requests began pouring in with no way to funnel them. Linda refused to participate in feature reviews because she still hated the new system, so they all went to Shane, who greenlighted everything. Jesse soon found himself buried in the throes of the monster he created with no end in sight. To this day, he toils at his computer cranking out features while Linda sits and reminisces about the good old days of Access 97.

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

,

Krebs on SecurityLibrarian Sues Equifax Over 2017 Data Breach, Wins $600

In the days following revelations last September that big-three consumer credit bureau Equifax had been hacked and relieved of personal data on nearly 150 million people, many Americans no doubt felt resigned and powerless to control their information. But not Jessamyn West. The 49-year-old librarian from a tiny town in Vermont took Equifax to court. And now she’s celebrating a small but symbolic victory after a small claims court awarded her $600 in damages stemming from the 2017 breach.

Vermont librarian Jessamyn West sued Equifax over its 2017 data breach and won $600 in small claims court. Others are following suit.

Just days after Equifax disclosed the breach, West filed a claim with the local Orange County, Vt. courthouse asking a judge to award her almost $5,000. She told the court that her mother had just died in July, and that it added to the work of sorting out her mom’s finances while trying to respond to having the entire family’s credit files potentially exposed to hackers and identity thieves.

The judge ultimately agreed, but awarded West just $690 ($90 to cover court fees and the rest intended to cover the cost of up to two years of payments to online identity theft protection services).

In an interview with KrebsOnSecurity, West said she’s feeling victorious even though the amount awarded is a drop in the bucket for Equifax, which reported more than $3.4 billion in revenue last year.

“The small claims case was a lot more about raising awareness,” said West, a librarian at the Randolph Technical Career Center who specializes in technology training and frequently conducts talks on privacy and security.

“I just wanted to change the conversation I was having with all my neighbors who were like, ‘Ugh, computers are hard, what can you do?’ to ‘Hey, here are some things you can do’,” she said. “A lot of people don’t feel they have agency around privacy and technology in general. This case was about having your own agency when companies don’t behave how they’re supposed to with our private information.”

West said she’s surprised more people aren’t following her example. After all, if just a tiny fraction of the 147 million Americans who had their Social Security number, date of birth, address and other personal data stolen in last year’s breach filed a claim and prevailed as West did, it could easily cost Equifax tens of millions of dollars in damages and legal fees.

“The paperwork to file the claim was a little irritating, but it only cost $90,” she said. “Then again, I could see how many people probably would see this as a lark, where there’s a pretty good chance you’re not going to see that money again, and for a lot of people that probably doesn’t really make things better.”

Equifax is currently the target of several class action lawsuits related to the 2017 breach disclosure, but there have been a few other minor victories in state small claims courts.

In January, data privacy enthusiast Christian Haigh wrote about winning an $8,000 judgment in small claims court against Equifax for its 2017 breach (the amount was reduced to $5,500 after Equifax appealed).

Haigh is co-founder of litigation finance startup Legalist. According to Inc.com, Haigh’s company has started funding other people’s small claims suits against Equifax, too. (Legalist pays lawyers in plaintiff’s suits on an hourly basis, and takes a contingency fee if the case is successful.)

Days after the Equifax breach news broke, a 20-year-old Stanford University student published a free online bot that helps users sue the company in small claims court.

It’s not clear if the Web site tool is still functioning, but West said it was media coverage of this very same lawsuit bot that prompted her to file.

“I thought if some stupid online bot can do this, I could probably figure it out,” she recalled.

If you’re a DIY type person, by all means file a claim in your local small claims court. And then write and publish about your experience, just like West did in a post at Medium.com.

West said she plans to donate the money from her small claims win to the Vermont chapter of the American Civil Liberties Union (ACLU), and that she hopes her case inspires others.

“Even if all this does is get people to use better passwords, or go to the library, or to tell a company, ‘No, that’s not good enough, you need to do better,’ that would be a good thing,” West said. “I wanted to show that there are constructive ways to seek redress of grievances about lots of different things, which makes me happy. I was willing to do the work and go to court. I look at this like an opportunity to educate and inform yourself, and realize there is a step you can take beyond just rending of garments and gnashing of teeth.”

Rondam RamblingsTrump makes it look easy

One has to wonder, after Donald Trump's tidy wrapping-up of the North Korea situation (he did everything short of come right out and say "peace for our time!"), what all the fuss was ever about.  It took only a few months (or forty minutes, depending on how you count) to go from the brink of nuclear war to BFFs.  Today the U.S. seems to be getting along better with North Korea than with

CryptogramRussian Censorship of Telegram

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today's Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors' demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender's phone and decrypted on the receiver's phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn't a fixed website, it doesn't need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to download a critical software update to iPhone users (after what the app's founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don't recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That's why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn't like about Telegram is its anonymous broadcast feature -- channel capability and chats -- which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram's role in facilitating uncontrolled journalism is the real issue.

Iran's attempts to block Telegram have been more successful than Russia's, less because Iran's censorship technology is more sophisticated than because Telegram is not willing to go as far to defend Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product, and its designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram's founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China's, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram's voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned -- and technically blocked -- the practice of "domain fronting," a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website -- in this case Google.com -- to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world's largest Internet companies. And while freedom may have its advocates -- the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban -- actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that "The Internet interprets censorship as damage and routes around it." That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on Lawfare.com.

CryptogramNew Data Privacy Regulations

When Mark Zuckerberg testified before both the House and the Senate last month, it became immediately obvious that few US lawmakers had any appetite to regulate the pervasive surveillance taking place on the Internet.

Right now, the only way we can force these companies to take our privacy more seriously is through the market. But the market is broken. First, none of us do business directly with these data brokers. Equifax might have lost my personal data in 2017, but I can't fire them because I'm not their customer or even their user. I could complain to the companies I do business with who sell my data to Equifax, but I don't know who they are. Markets require voluntary exchange to work properly. If consumers don't even know where these data brokers are getting their data from and what they're doing with it, they can't make intelligent buying choices.

This is starting to change, thanks to a new law in Vermont and another in Europe. And more legislation is coming.

Vermont first. At the moment, we don't know how many data brokers collect data on Americans. Credible estimates range from 2,500 to 4,000 different companies. Last week, Vermont passed a law that will change that.

The law does several things to improve the security of Vermonters' data, but several provisions matter to all of us. First, the law requires data brokers that trade in Vermonters' data to register annually. And while there are many small local data brokers, the larger companies collect data nationally and even internationally. This will help us get a more accurate look at who's in this business. The companies also have to disclose what opt-out options they offer, and how people can request to opt out. Again, this information is useful to all of us, regardless of the state we live in. And finally, the companies have to disclose the number of security breaches they've suffered each year, and how many individuals were affected.

Admittedly, the regulations imposed by the Vermont law are modest. Earlier drafts of the law included a provision requiring data brokers to disclose how many individuals' data they have in their databases, what sorts of data they collect and where the data came from, but those were removed as the bill negotiated its way into law. A more comprehensive law would allow individuals to demand to know exactly what information data brokers have about them -- and maybe allow individuals to correct and even delete data. But it's a start, and the first statewide law of its kind to be passed in the face of strong industry opposition.

Vermont isn't the first to attempt this, though. On the other side of the country, Representative Norma Smith of Washington introduced a similar bill in both 2017 and 2018. It goes further, requiring disclosure of what kinds of data the broker collects. So far, the bill has stalled in the state's legislature, but she believes it will have a much better chance of passing when she introduces it again in 2019. I am optimistic that this is a trend, and that many states will start passing bills forcing data brokers to be increasingly transparent in their activities. And while their laws will be tailored to residents of those states, all of us will benefit from the information.

A 2018 California ballot initiative could help. Among its provisions, it gives consumers the right to demand to know exactly what information a data broker has about them. If it passes in November, once it takes effect, lots of Californians will take the list of data brokers from Vermont's registration law and demand this information based on their own law. And again, all of us -- regardless of the state we live in -- will benefit from the information.

We will also benefit from another, much more comprehensive, data privacy and security law from the European Union. The General Data Protection Regulation (GDPR) was passed in 2016 and took effect on 25 May. The details of the law are far too complex to explain here, but among other things, it mandates that personal data can only be collected and saved for specific purposes and only with the explicit consent of the user. We'll learn who is collecting what and why, because companies that collect data are going to have to ask European users and customers for permission. And while this law only applies to EU citizens and people living in EU countries, the disclosure requirements will show all of us how these companies profit off our personal data.

It has already reaped benefits. Over the past couple of weeks, you've received many e-mails from companies that have you on their mailing lists. In the coming weeks and months, you're going to see other companies disclose what they're doing with your data. One early example is PayPal: in preparation for GDPR, it published a list of the over 600 companies it shares your personal data with. Expect a lot more like this.

Surveillance is the business model of the Internet. It's not just the big companies like Facebook and Google watching everything we do online and selling advertising based on our behaviors; there's also a large and largely unregulated industry of data brokers that collect, correlate and then sell intimate personal data about our behaviors. If we make the reasonable assumption that Congress is not going to regulate these companies, then we're left with the market and consumer choice. The first step in that process is transparency. These new laws, and the ones that will follow, are slowly shining a light on this secretive industry.

This essay originally appeared in the Guardian.

Worse Than FailureThe Manager Who Knew Everything

Have you ever worked for/with a manager that knows everything about everything? You know the sort; no matter what the issue, they stubbornly have an answer. It might be wrong, but they have an answer, and no amount of reason, intelligent thought, common sense or hand puppets will make them understand. For those occasions, you need to resort to a metaphorical clue-bat.

A few decades ago, I worked for a place that had a chief security officer who knew everything there was to know about securing their systems. Nothing could get past the policies she had put in place. Nobody could ever come up with any mechanism that could bypass her concrete walls, blockades and insurmountable defenses.

One day, she held an interdepartmental meeting to announce her brand spanking shiny new policies regarding this new-fangled email that everyone seemed to want to use. It would prevent unauthorized access, so only official emails sent by official individuals could be sent through her now-secured email servers.

I pointed out that email servers could only be secured to a point, because they had to have an open port to which email clients running on any internal computer could connect. As long as the port was open, anyone with internal access and nefarious intent could spoof a legitimate authorized email address and send a spoofed email.

She was incensed and informed me (and the group) that she knew more than all of us (together) about security, and that there was absolutely no way that could ever happen. I told her that I had some background in military security, and that I might know something that she didn't.

At this point, if she was smart, she would have asked me to explain. If she already handled the case, then I'd have to shut up. If she didn't handle the case, then she'd learn something, AND the system could be made more secure. She was not smart; she publicly called my bluff.

I announced that I accepted the challenge, and that I was going to use my work PC to send an email - from her - to the entire firm (using the restricted blast-to-all email address, which I would not normally be able to access as myself). In the email, I would explain that it was a spoof, and if they were seeing it, then the so-called impenetrable security might be somewhat less secure than she proselytized. In fact, I would do it in such a way that there would be absolutely no way to prove that I did it (other than my admission in the email).

She said that if I did that, that I'd be fired. I responded that 1) if the system was as secure as she thought, that there'd be nothing to fire me for, and 2) if they could prove that it was me, and tell me how I did it (aside from my admission that I had done it), that I would resign. But if not, then she had to stop the holier-than-thou act.

Fifteen minutes later, I went back to my desk, logged into my work PC using the guest account, wrote a 20 line Cold Fusion script to attach to the email server on port 25, and filled out the fields as though it was coming from her email client. Since she had legitimate access to the firm-wide email blast address, the email server allowed it. Then I sent it. Then I secure-erased the local system event and assorted other logs, as well as editor/browser/Cold Fusion/server caches, etc. that would show what I did. Finally, I did a cold boot to ensure that even the RAM was wiped out.
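The Cold Fusion script itself is long gone, of course, but the trick it exploited is easy to illustrate. Below is a minimal sketch of the same idea, written in C++ with POSIX sockets rather than Cold Fusion; the host names and addresses are made up for illustration. The point is simply that an SMTP server which accepts unauthenticated connections on port 25 will believe whatever envelope sender and From: header the client claims.

// Sketch only: speaks just enough SMTP to hand the server a spoofed sender.
// "mail.internal.example" and the addresses below are invented placeholders.
#include <netdb.h>
#include <sys/socket.h>
#include <unistd.h>
#include <iostream>
#include <string>

// Send one SMTP command (or the message body) and print the server's reply.
static void send_line(int fd, const std::string& line) {
    std::string data = line + "\r\n";
    send(fd, data.c_str(), data.size(), 0);
    char buf[512];
    ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
    if (n > 0) { buf[n] = '\0'; std::cout << buf; }
}

int main() {
    addrinfo hints{}, *res = nullptr;
    hints.ai_family = AF_INET;
    hints.ai_socktype = SOCK_STREAM;
    if (getaddrinfo("mail.internal.example", "25", &hints, &res) != 0) return 1;

    int fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
    if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) != 0) return 1;

    char banner[512];
    ssize_t n = recv(fd, banner, sizeof(banner) - 1, 0); // 220 greeting
    if (n > 0) { banner[n] = '\0'; std::cout << banner; }

    // Nothing here is authenticated: the sender is whatever the client claims.
    send_line(fd, "HELO workstation.internal.example");
    send_line(fd, "MAIL FROM:<cso@internal.example>");
    send_line(fd, "RCPT TO:<all-staff@internal.example>");
    send_line(fd, "DATA");
    send_line(fd, "From: \"Chief Security Officer\" <cso@internal.example>\r\n"
                  "To: all-staff@internal.example\r\n"
                  "Subject: This message is a spoof\r\n"
                  "\r\n"
                  "If you can read this, port 25 accepts unauthenticated senders.\r\n"
                  ".");
    send_line(fd, "QUIT");

    freeaddrinfo(res);
    close(fd);
    return 0;
}

A server configured to require authenticated submission, or to restrict who may send to a blast address, would reject the MAIL FROM or RCPT TO step with a 5xx response; the point of the stunt was that this one did not.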

Not long after that, her minions, the SAs, showed up at my desk joking that they couldn't believe that I had actually done it. I told them that I had wiped out all the logs where they'd look, the actual script that did it, and the disk space that all of the above had occupied. Although they knew the IP address of the PC from which the request came, they agreed that without those files, there was no way they could prove that it was me. Then they checked everything and verified what I told them.

This info made its way back up the chain until the SAs, me and my boss got called into her office, along with a C-level manager. Everything was explained to the C-manager. She was expecting him to fire me.

He simply looked at me and raised an eyebrow. I responded that I spent all of ten minutes doing it in direct response to her assertion that it was un-doable, and that I had announced my intentions to expose the vulnerability - to her - in front of everyone - in advance.

He chose to tell her that maybe she needed to accept that she doesn't know quite as much about everything as she thinks, and that she might want to listen to people a little more. She then pointed out that I had proven that email was totally insecure and that it should be banned completely (this was at the point where the business had mostly moved to email). I pointed out that I had worked there for many years, had no destructive tendencies, that I was only exposing a potential gap in security, and would not do it again. The SAs also pointed out that the stunt, though it proved the point, was harmless. They also mentioned that nobody else at the firm had access to Cold Fusion. I didn't think it helpful to mention that not just Cold Fusion, but any programming language could be used to connect to port 25 and do the same thing, and so didn't. She huffed and puffed, but had no credibility at that point.

After that, my boss and I bought the SAs burgers and beer.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Krebs on SecurityMicrosoft Patch Tuesday, June 2018 Edition

Microsoft today pushed out a bevy of software updates to fix more than four dozen security holes in Windows and related software. Almost a quarter of the vulnerabilities addressed in this month’s patch batch earned Microsoft’s “critical” rating, meaning malware or miscreants can exploit the flaws to break into vulnerable systems without any help from users.

Most of the critical fixes are in Microsoft browsers or browser components. One of the flaws, CVE-2018-8267, was publicly disclosed prior to today’s patch release, meaning attackers may have had a head start figuring out how to exploit the bug to attack Internet Explorer users.

According to Recorded Future, the most important patched vulnerability is a remote code execution vulnerability in the Windows Domain Name System (DNS), which is present in all supported versions of Windows from Windows 7 to Windows 10 as well as all versions of Windows Server from 2008 to 2016.

“The vulnerability allows an attacker to send a maliciously crafted DNS packet to the victim machine from a DNS server, or even send spoofed DNS responses from attack box,” wrote Allan Liska, a threat intelligence analyst at Recorded Future. “Successful exploitation of this vulnerability could allow an attacker to take control of the target machine.”

Security vendor Qualys says mobile workstations that may connect to untrusted Wi-Fi networks are at high risk and this DNS patch should be a priority for them. Qualys also notes that Microsoft this month is shipping updates to mitigate another variant of the Spectre vulnerability in Intel machines.

And of course there are updates available to address the Adobe Flash Player vulnerability that is already being exploited in active attacks. Read more on that here.

It’s a good idea to get in the habit of backing up your computer before applying monthly updates from Microsoft. Windows has some built-in tools that can help recover from bad patches, but restoring the system to a backup image taken just before installing the updates is often much less hassle and offers added peace of mind when you’re sitting there praying for the machine to reboot after patching.

This assumes you can get around to backing up before Microsoft decides to patch Windows on your behalf. Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible.

For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

As always, if you experience any problems installing any of these updates, please leave a note about your issues in the comments below.

Additional reading:

Cisco Talos Intelligence blog take

The Zero Day Initiative’s Security Update Review

SANS Internet Storm Center

Microsoft Security Update Guide

Planet Linux AustraliaJulien Goodwin: Custom uBlox GPSDO board

For the next part of my ongoing project I needed to test the GPS receiver I'm using, a uBlox LEA-M8F (M8 series chip, LEA form factor, and with frequency outputs). Since the native 30.72MHz oscillator is useless for me, I'm using an external TCVCXO (temperature compensated, voltage controlled oscillator) for now, with the DAC & reference needed to discipline the oscillator based on GPS. If uBlox would sell me the frequency version of the chip on its own that would be ideal, but they don't sell to small customers.

Here's a (rather modified) board sitting on top of an Efratom FRK rubidium standard that I'm going to mount to make a (temporary) home standard (that deserves a post of its own). To give a sense of scale the silver connector at the top of the board is a micro-USB socket.



Although a very simple board I had a mess of problems once again, both in construction and in component selection.

Unlike the PoE board from the previous post I didn't have this board manufactured. This was for two main reasons, first, the uBlox module isn't available from Digikey, so I'd still need to mount it by hand. The second, to fit all the components this board has a much greater area, and since the assembly house I use charges by board area (regardless of the number or density of components) this would have cost several hundred dollars. In the end, this might actually have been the sensible way to go.

By chance I'd picked up a new soldering iron at the same time these boards arrived, a Hakko FX-951 knock-off and gave it a try. Whilst probably an improvement over my old Hakko FX-888 it's not a great iron, especially with the knife tip it came with, and certainly nowhere near as nice to use as the JBC CD-B (I think that's the model) we have in the office lab. It is good enough that I'm probably going to buy a genuine Hakko FM-203 with an FM-2032 precision tool for the second port.

The big problem I had hand-soldering the boards was bridges on several of the components. Not just the tiny (0.65mm pitch, actually the *second largest* of eight packages for that chip) SC70 footprint of the PPS buffer, but also the much more generous 1.1mm pitch of the uBlox module. Luckily solder wick fixed most cases, plus one where I pulled the buffer and soldered a new one more carefully.

With components, once again I made several errors:
  • I ended up buying the wrong USB connectors for the footprint I chose (the same thing happened with the first run of USB-C modules I did in 2016), and while I could bodge them into use easily enough there wasn't enough mechanical retention so I ended up ripping one connector off the board. I ordered some correct ones, but because I wasn't able to wick all solder off the pads they don't attach as strongly as they should, and whilst less fragile, are hardly what I'd call solid.
  • The surface mount GPS antenna (Taoglas AP.10H.01 visible in this tweet) I used was 11dB higher gain than the antenna I'd tested with the devkit, I never managed to get it to lock while connected to the board, although once on a cable it did work ok. To allow easier testing, in the end I removed the antenna and bodged on an SMA connector for easy testing.
  • When selecting the buffer I accidentally chose one with an open-drain output, I'd meant to use one with a push-pull output. This took quite a silly long time for me to realise what mistake I'd made. Compounding this, the buffer is on the 1PPS line, which only strobes while locked to GPS, however my apartment is a concrete box, with what GPS signal I can get inside only available in my bedroom, and my oscilloscope is in my lab, so I couldn't demonstrate the issue live, and had to inject test signals. Luckily a push-pull is available in the same footprint, and a quick hot-air aided swap later (once parts arrived from Digikey) it was fixed.

Lessons learnt:
  • Yes I can solder down to ~0.5mm pitch, but not reliably.
  • More test points on dev boards, particularly all voltage rails, and notable signals not otherwise exposed.
  • Flux is magic, you probably aren't using enough.

Although I've confirmed all basic functions of the board work, including GPS locking, PPS (quick video of the PPS signal LED), and frequency output, I've still not yet tested the native serial ports and frequency stability from the oscillator. Living in an urban canyon makes such testing a pain.

Eventually I might also test moving the oscillator, DAC & reference into a mini oven to see if a custom OCXO would be any better, if small & well insulated enough the power cost of an oven shouldn't be a problem.

Also as you'll see if you look at the tweets, I really should have posted this almost a month ago, however I finished fixing the board just before heading off to California for a work trip, and whilst I meant to write this post during the trip, it's not until I've been back for more than a week that I've gotten to it. I find it extremely easy to let myself be distracted from side projects, particularly since I'm in a busy period at $ORK at the moment.

CryptogramNew iPhone OS May Include Device-Unlocking Security

iOS 12, the next release of Apple's iPhone operating system, may include features to prevent someone from unlocking your phone without your permission:

The feature essentially forces users to unlock the iPhone with the passcode when connecting it to a USB accessory every time the phone has not been unlocked for one hour. That includes the iPhone unlocking devices that companies such as Cellebrite or GrayShift make, which police departments all over the world use to hack into seized iPhones.

"That pretty much kills [GrayShift's product] GrayKey and Cellebrite," Ryan Duff, a security researcher who has studied iPhone and is Director of Cyber Solutions at Point3 Security, told Motherboard in an online chat. "If it actually does what it says and doesn't let ANY type of data connection happen until it's unlocked, then yes. You can't exploit the device if you can't communicate with it."

This is part of a bunch of security enhancements in iOS 12:

Other enhancements include tools for generating strong passwords, storing them in the iCloud keychain, and automatically entering them into Safari and iOS apps across all of a user's devices. Previously, standalone apps such as 1Password have done much the same thing. Now, Apple is integrating the functions directly into macOS and iOS. Apple also debuted new programming interfaces that allow users to more easily access passwords stored in third-party password managers directly from the QuickType bar. The company also announced a new feature that will flag reused passwords, an interface that autofills one-time passwords provided by authentication apps, and a mechanism for sharing passwords among nearby iOS devices, Macs, and Apple TVs.

A separate privacy enhancement is designed to prevent websites from tracking people when using Safari. It's specifically designed to prevent share buttons and comment code on webpages from tracking people's movements across the Web without permission or from collecting a device's unique settings such as fonts, in an attempt to fingerprint the device.

The last additions of note are new permission dialogues macOS Mojave will display before allowing apps to access a user's camera or microphone. The permissions are designed to thwart malicious software that surreptitiously turns on these devices in an attempt to spy on users. The new protections will largely mimic those previously available only through standalone apps such as one called Oversight, developed by security researcher Patrick Wardle. Apple said similar dialog permissions will protect the file system, mail database, message history, and backups.

Worse Than FailureCodeSOD: Maximum Performance

There is some code that, at first glance, doesn’t seem great, but doesn’t leap out as a WTF. Stephe sends one such block.

double SomeClass::getMaxKeyValue(std::vector<double> list)
{
    double max = 0;
    for (int i = 0; i < list.size(); i++) {
        if (list[i] > max) {
            max = list[i];
        }
    }
    return max;
}

This isn’t great code. Naming a vector-type variable list is itself pretty confusing, the parameter should be passed by const reference to cut down on copy operations, and there’s an obvious potential bug: what happens if the input is nothing but negative values? You’ll incorrectly return 0, every time.

Still, this code, taken on its own, isn’t a WTF. We need more background.

First off, what this code doesn’t tell you is that we’re looking at a case of the parallel arrays anti-pattern. The list parameter might be something different depending on which key is being searched. As you can imagine, this creates spaghettified, difficult to maintain code. Code that performed terribly. Really terribly. Like “it must have crashed, no wait, no, the screen updated, no wait it crashed again, wait, it’s…” terrible.

Why was it so terrible? Well, for starters, the inputs to getMaxKeyValue were often arrays containing millions of elements. This method was called hundreds of times throughout the code, mostly inside of window redrawing code. All of that adds up to a craptacular application, but there’s one last, very important detail which brings this up to full WTF:

The inputs were already sorted in ascending order.

With a few minor changes, like taking advantage of the sorted vectors, Stephe brought the 0.03333 frames-per-second performance up to something acceptable.
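To make the shape of that fix concrete, here is a minimal sketch along the lines described above; it is not Stephe’s actual patch, and the free-function form and names are illustrative. It takes the vector by const reference to avoid the copy and, relying on the inputs arriving sorted in ascending order, returns the last element instead of scanning millions of values.

#include <vector>

// Sketch of the improved lookup, assuming the input really is sorted ascending.
double getMaxKeyValue(const std::vector<double>& values)
{
    if (values.empty()) {
        return 0.0; // preserve the original function's behaviour for empty input
    }
    return values.back(); // last element of an ascending sort is the maximum
}

Turning a linear scan over millions of elements into a constant-time lookup, in a function called hundreds of times inside window redrawing code, is where the bulk of the speedup comes from.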

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Krebs on SecurityBad .Men at .Work. Please Don’t .Click

Web site names ending in new top-level domains (TLDs) like .men, .work and .click are some of the riskiest and spammy-est on the Internet, according to experts who track such concentrations of badness online. Not that there still aren’t a whole mess of nasty .com, .net and .biz domains out there, but relative to their size (i.e. overall number of domains) these newer TLDs are far dicier to visit than most online destinations.

There are many sources for measuring domain reputation online, but one of the newest is The 10 Most Abused Top Level Domains list, run by Spamhaus.org. Currently at the #1 spot on the list (the worst) is .men: Spamhaus says of the 65,570 domains it has seen registered in the .men TLD, more than half (55 percent) were “bad.”

According to Spamhaus, a TLD may be “bad” because it is tied to spam or malware dissemination (or both). More specifically, the “badness” of a given TLD may be assigned in two ways:

“The ratio of bad to good domains may be higher than average, indicating that the registry could do a better job of enforcing policies and shunning abusers. Or, some TLDs with a high fraction of bad domains may be quite small, and their total number of bad domains could be relatively limited with respect to other, bigger TLDs. Their total “badness” to the Internet is limited by their small total size.”

More than 1,500 TLDs exist today, but hundreds of them were introduced in just the past few years. The nonprofit organization that runs the domain name space — the Internet Corporation for Assigned Names and Numbers (ICANN) — enabled the new TLDs in response to requests from advertisers and domain speculators — even though security experts warned that an onslaught of new, far cheaper TLDs would be a boon mainly to spammers and scammers.

And what a boon it has been. The newer TLDs are popular among spammers and scammers alike because domains in many of these TLDs can be had for pennies apiece. But not all of the TLDs on Spamhaus’ list are prized for being cheaper than generic TLDs (like .com, .net, etc.). The cheapest domains at half of Spamhaus’ top ten “baddest” TLDs go for prices between $6 and $14.50 per domain.

Still, domains in the remaining five Top Bad TLDs can be had for between 48 cents and a dollar each.

Security firm Symantec in March 2018 published its own Top 20 list of Shady TLDs:

Symantec’s “Top 20 Shady TLDs,” published in March 2018.

Spamhaus says TLD registries that allow registrars to sell high volumes of domains to professional spammers and malware operators in essence aid and abet the plague of abuse on the Internet.

“Some registrars and resellers knowingly sell high volumes of domains to these actors for profit, and many registries do not do enough to stop or limit this endless supply of domains,” Spamhaus’ World’s Most Abused TLDs page explains.

Namecheap, a Phoenix, Ariz. based domain name registrar that in Oct. 2017 was the fourth-largest registrar, currently offers by a wide margin the lowest registration prices for three out of 10 of Spamhaus’ baddest TLDs, selling most for less than 50 cents each.

Namecheap also is by far the cheapest registrar for 11 of Symantec’s Top 20 Shady New TLDs: Namecheap is easily the least expensive registrar to secure a domain in 11 of the Top 20, including .date, .trade, .review, .party, .loan, .kim, .bid, .win, .racing, .download and .stream.

I should preface the following analysis by saying the prices that domain registrars charge for various TLD name registrations vary frequently, as do the rankings in these Top Bad TLD lists. But I was curious if there was any useful data about new TLD abuse at tld-list.com — a comparison shopping page for domain registrars.

What I found is that although domains in almost all of the above-mentioned TLDs are sold by dozens of registrars, most of these registrars have priced themselves out of the market for the TLDs that are currently so-favored by spammers and scammers.

Not so with Namecheap. True to its name, when it is the cheapest, Namecheap consistently undercuts the average price that other registrars charge per domain for the same TLD by approximately 98 percent. The company appears to have specifically targeted these TLDs with price promotions that far undercut competitors.

Namecheap is by far the lowest-priced registrar for more than half of the 20 Top Bad TLDs tracked by Symantec earlier this year.

Here’s a look at the per-domain prices charged by the registrars for the TLDs named in Spamhaus’s top 10:

The lowest, highest, and average prices charged by registrars for the domains in Spamhaus’ Top 10 “Bad” TLDs. Click to enlarge.

This a price comparison for Symantec’s Top 20 list:

The lowest, highest, and average prices charged by registrars for the domains in Symantec’s Top 20 “Shady” TLDs. Click to enlarge.

I asked Namecheap’s CEO why the company’s name comes up so frequently in these lists, and if there was any strategy behind cornering the market for so many of the “bad” and “shady” TLDs.

“Our business model, as our name implies is to offer choice and value to everyone in the same way companies like Amazon or Walmart do,” Namecheap CEO Richard Kirkendall told KrebsOnSecurity. “Saying that because we offer low prices to all customers we somehow condone nefarious activity is an irresponsible assumption on your part. Our commitment to our millions of customers across the world is to continue to bring them the best value and choice whenever and wherever we can.”

Kirkendall said expecting retail registrars that compete on pricing to stop doing that is not realistic and would be the last place he would go to for change.

“On the other hand, if you do manage to secure higher pricing you will also in effect tax everyone for the bad actions of a few,” Kirkendall said. “Is this really the way to solve the problem? While a few dollars may not matter to you, there are plenty of less fortunate people out there where it does matter. They say the internet is the great equalizer, by making things cost more simply for the sake of creating barriers truly and indiscriminately creates barriers for everyone, not just for those you target.”

Incidentally, should you ever wish to block all domains from any given TLD, there are a number of tools available to do that. One of the easiest to use is Cisco‘s OpenDNS, which includes up to 30 filters for managing traffic, content and Web sites on your computer and home network — including the ability to block entire TLDs if that’s something you want to do.

I’m often asked if blocking sites from loading when they’re served from specific TLDs or countries (like .ru) would be an effective way to block malware and phishing attacks. It’s important to note here that it’s not practical to assume you can block all traffic from given countries (that somehow blacklisting .ru is going to block all traffic from Russia). It also seems likely that the .com TLD space and US-based ISPs are bigger sources of the problem overall.

But that’s not to say blocking entire TLDs is a horrible idea for individual users and home network owners. I’d wager there are a whole host of TLDs (including all of the above “bad” and “shady” TLDs) that most users could block across the board without forgoing anything they might otherwise want to have seen or visited. I mean seriously: When was the last time you intentionally visited a site registered in the TLD for Gabon (.ga)?

And while many people might never click on a .party or .men domain in a malicious or spammy email, these domains are often loaded only after the user clicks on a malicious or booby-trapped link that may not look so phishy — such as a .com or .org link.

Update: 11:46 a.m. ET: An earlier version of this story incorrectly stated the name of the company that owns OpenDNS.

Sociological ImagesAnthony Bourdain, Gastrodiplomacy, and the Sociology of Food

“There is a real danger of taking food too seriously. Food needs to be part of a bigger picture”
-Anthony Bourdain

As someone who writes about food, about its ability to offer a window into the daily lives and circumstances of people around the globe, Anthony Bourdain’s passing hit me particularly hard. If you haven’t seen them, his widely-acclaimed shows such as No Reservations and Parts Unknown were a kind of personal narrative meets travelogue meets food TV. They trailed the chef as he immersed himself in the culture of a place, sometimes one heavily touristed, sometimes more removed from the lives of most food media consumers, and showed us what people ate, at home, in the streets and in local restaurants. While much of food TV focuses on high end cuisine, Bourdain’s art was to show the craftsmanship behind the everyday foods of a place. He lovingly described the food’s preparation, the labor involved, and the joy people felt in coming together to consume it in a way that was palpable, even (or especially) when the foods themselves were unusual.

At their best, these shows taught us about the history and culture of particular places, and of the ways places have suffered through the ills of global capitalism and imperialism. His visit to the Congo was particularly memorable; while eating tiger fish wrapped in banana leaves, spear-caught and prepared by local fishermen, he delved into the colonial history and present-day violence that continue to devastate this natural-resource-rich country. After visiting Cambodia, he railed against Henry Kissinger and the American bombing campaign that killed over 250,000 people and gave rise, in part, to the murderous regime of the Khmer Rouge. In Jerusalem, he showed his lighter side, exploring the Israeli-Palestinian conflict through debates over who invented falafel. But in the same episode, he shared maqluba, “upside down” chicken and rice, with a family of Palestinian farmers in Gaza, and showed the basic humanity and dignity of a people living under occupation.

Bourdain’s shows embodied the basic premise of the sociology of food. Food is deeply personal and cultural. Over twenty-five years ago Anthony Winson called it the “intimate commodity” because it provides a link between our bodies, our cultures and the global political economies and ecologies that shape how and by whom food is cultivated, distributed and consumed. Bourdain’s shows focused on what food studies scholars call gastrodiplomacy, the potential for food to bring people together, helping us to understand and sympathize with one another’s circumstances. As a theory, it embodies the old saying that “the best way to our hearts is through our stomachs.” This theory has been embraced by nations like Thailand, which has an official policy promoting the creation of Thai restaurants in order to drive tourism and boost the country’s prestige. And the foods of Mexico have been declared World Heritage Cuisines by UNESCO, the same arm of the United Nations that marks world heritage sites. Less officially, we’ve seen a wave of efforts to promote the cuisines of refugees and migrants through restaurants, supper clubs and incubators like San Francisco’s La Cocina that help immigrant chefs launch food businesses.

But food has often been and continues to be a site of violence as well. Since 1981, 750,000 farms have gone out of business, resulting in widespread rural poverty and epidemic levels of suicide. Food system workers, from farms to processing plants to restaurants, are among the most poorly paid members of our society, and often rely on food assistance. The food industry is highly centralized. The few major players in each segment—think Wal-Mart for groceries or Tyson for chicken—exert tremendous power on suppliers, creating dire conditions for producers. Allegations of sexual assault pervade the food industry; there are numerous complaints against well-known chefs, and a study from Human Rights Watch revealed that more than 80% of women farmworkers have experienced harassment or assault on the job, a situation so dire that these women refer to it as the “field of panties” because rape is so common. Racism is equally rampant, with people of color often confined to poorly-paid “back of the house” positions while whites make up the majority of high-end servers, sommeliers, and celebrity chefs.

More than any other celebrity chef, Bourdain understood that food is political, and used his platform to address current social issues. His outspoken support for immigrant workers throughout the food system, and for immigrants more generally, colored many of his recent columns. And as the former partner of Italian actress Asia Argento, one of the first women to publicly accuse Harvey Weinstein, Bourdain used his celebrity status to amplify the voice of the #metoo movement, a form of support that was beautifully incongruous with his hyper-masculine image. Here Bourdain embodied another of the fundamental ideas of the sociology of food, that understanding the food system is intricately interwoven with efforts to improve it.

Bourdain’s shows explored food in its social and political contexts, offering viewers a window into worlds that often seemed far removed. He encouraged us to eat one another’s cultural foods, and to understand the lives of those who prepared them. Through food, he urged us to develop our sociological imaginations, putting individual biographies in their social and historical contexts. And while he was never preachy, his legacy urges us to get involved in the confluence of food movements, ensuring that those who feed us are treated with dignity and fairness, and are protected from sexual harassment and assault.

The Black feminist poet Audre Lorde once wrote that “it is not our differences that divide us. It is our inability to recognize, accept, and celebrate those differences.” Bourdain showed us that by learning the stories of one another’s foods, we can learn the histories and develop the empathy necessary to work for a better world.

Rest in Peace.

Alison Hope Alkon is associate professor of sociology and food studies at University of the Pacific. Check out her Ted talk, Food as Radical Empathy

(View original at https://thesocietypages.org/socimages)

Sociological ImagesAnthony Bourdain, Honorary Sociologist

I was absolutely devastated to hear about Anthony Bourdain’s passing.

I always saw Bourdain as more than just a celebrity chef or TV host. I saw him as one of us, a sociologist of sorts, someone deeply invested in understanding and teaching about culture and community. He had a gift for teaching us about social worlds beyond our own, and making these worlds accessible. In many ways, his work accomplished what so often we as sociologists strive to do.

Photo Credit: Adam Kuban, Flickr CC

I first read Bourdain’s memoir, Kitchen Confidential, at the age of twenty. The gritty memoir is its own ethnography of sorts, detailing the stories, experiences, and personalities working behind the sweltering heat of the kitchen line. At the time I was struggling as a first-generation, blue-collar student suddenly immersed in one of the wealthiest college campuses in the United States. Between August and May of each academic year, I attended classes with the children of CEOs and world leaders, yet come June I returned to the kitchens of a country club in western New York, quite literally serving alumni of my college. I remember reading the book thinking – though I knew it wasn’t academic sociology – “wait, you can write about these things?” These social worlds? These stories we otherwise overlook and ignore? I walked into my advisor’s office soon after, convinced I too would write such in-depth narratives about food-related subcultures. “Well,” he agreed, “you could research something like food culture or alternative food movements.” Within six months of that conversation, I had successfully secured my first research fellowship and taken on my first sociology project.

Like his writing, Bourdain’s television shows taught his audience something new about our relationships to food. Each episode of A Cook’s Tour, No Reservations, and Parts Unknown, went beyond the scope of a typical celebrity chef show. He never featured the World’s Biggest Hamburger, nor did he ever critique foods as “bizarre” or “strange.” Instead, he focused on what food meant to people across the globe. Food, he taught us, and the pride attached to it, are universal.

Rather than projecting narratives or misappropriating words, he let people speak for themselves. He strived to show the way things really are and to treat people with the utmost dignity, yet was careful never to glamorize or romanticize poverty, struggle, or difference.  In one of my favorite episodes of No Reservations, Bourdain takes us through Peru, openly critiquing celebrities who have glorified the nation as a place to find peace and spiritual enlightenment:

Sting and all his buddies come down here, they’re going on and on and on and on about preserving traditional culture, right? Because that’s what we’re talking about here. But what we’re also talking about here is poverty. [It’s] backbreaking work. Isn’t it kind of patronizing to say ‘oh they’re happier, they live a simpler life closer to the soil.’ Maybe so, but it’s also a pretty hard, scrabbling, unglamorous life when you get down to it.

My parents and I met Anthony Bourdain in 2009 at a bar in Buffalo where he was filming an episode of No Reservations. My father was thrilled to tell Bourdain how much he loved the episode featuring his homeland of Colombia. It was perhaps one of the first times in my father’s 38 years in the United States that he felt like American television portrayed Colombia in a positive light, showing the beauty, resilience, and complex history of the nation rather than the images of drug wars and violence present elsewhere in depictions of the country. That night in that dive bar, Bourdain graciously spoke with my dad about how beautiful he found the country and its people. Both the episode and their conversation filled my father with immense pride, ultimately restoring some of the dignity that had been repeatedly stripped from him through years of demeaning stereotypes about his home.

In the end, isn’t that what many of us sociologists are trying to do? Honor people’s stories without misusing, mistreating, or misrepresenting them?

In retrospect, maybe Bourdain influenced my path towards sociology. At the very least, he created a bridge between what I knew – food service – and what I wanted to know – the rest of the world. In our classrooms we strive to teach our students how to make these connections. Bourdain made them for us with ease, dignity, and humility.

Caty Taborda-Whitt is a Ford fellow and sociology PhD candidate at the University of Minnesota. Her research interests include embodiment, health, culture, and inequalities.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowPodcast: Petard, Part 04 — CONCLUSION


Here’s the fourth and final part of my reading (MP3) of Petard (part one, part two, part three), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Worse Than FailureCodeSOD: The Enabler

Shaneka works on software for an embedded device for a very demanding client. In previous iterations of the software, the client had made their own modifications to the device's code, and demanded they be incorporated. Over the years, more and more of the code came from the client, until the day when the client decided it was too much effort to maintain the ball of mud and just started demanding features.

One specific feature was a new requirement for turning the display on and off. Shaneka attempted to implement the feature, and it didn't work. No matter what she did, once they turned the display off, they simply couldn't turn it back on without restarting the whole system.

She dug into the code, and found the method to enable the display was implemented like this:

/***************************************************************************//**
* @brief  Method, which enables display
*
* @param  true = turn on / false = turn off
* @return None
*******************************************************************************/
void InformationDisplay::Enable(bool state)
{
  displayEnabled = state;
  if (!displayEnabled) {
    enableDisplay(false);
  }
}

The Enable method does a great job at turning off the display, but not so great a job turning it back on, no matter what the comments say. The simple fix would be to just pass the state parameter to enableDisplay directly, but huge swathes of the code depended on this method having the incorrect behavior. Shaneka instead updated the documentation for this method and wrote a new method which behaved correctly.
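
For illustration, a correctly behaving replacement might look something like this; the method name here is invented, since we don't know what Shaneka actually called her new method:

/***************************************************************************//**
* @brief  Hypothetical replacement that actually applies the requested state
*
* @param  state  true = turn on / false = turn off
* @return None
*******************************************************************************/
void InformationDisplay::SetDisplayState(bool state)
{
  displayEnabled = state;
  // Pass the requested state straight through instead of only ever passing false
  enableDisplay(state);
}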

As you can guess, this is one of the pieces of code which came from the client.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Don Martisimulating a market with honest and deceptive advertisers

At Nudgestock 2018 I mentioned the signaling literature that provides background for understanding the targeted advertising problem. Besides being behind paywalls, a lot of this material is written in math that takes a while to figure out. For example, it's worth working through this Gardete and Bart paper to understand a situation in which the audience is making the right move to ignore a targeted message, but it can take a while.

Are people rational to ignore or block targeted advertising in some media, because those media are set up to give an incentive to deceptive sellers? Here's a simulation of an ad market in which that might be the case. Of course, this does not show that in all advertising markets, better targeting leads to an advantage for deceptive sellers. But it is a demonstration that it is possible to design a set of rules for an advertising market that gives an advantage to deceptive sellers.

What are we looking at? Think of it as a culture medium where we can grow and evolve a population of single-celled advertisers.

The x and y coordinates are some arbitrary characteristic of offers made to customers. Customers, invisible, are scattered randomly all over the map. If a customer gets an offer for a product that is close enough to their preferences, it will buy.

Advertisers (yellow to orange squares) get to place ads that reach customers within a certain radius. The advertiser has a price that it will bid for an ad impression, and a maximum distance at which it will bid for an impression. These are assigned randomly when we populate the initial set of advertisers.

High-bidding advertisers are more orange, and lower-bidding advertisers are more pale yellow.

An advertiser is either deceptive, in which case it makes a slightly higher profit per sale, or honest. When an honest advertiser makes a sale, we draw a green line from the advertiser to the customer. When a deceptive advertiser makes a sale, we draw a red line. The lines appear to fade out because we draw a black line every time there is an ad impression that does not result in a sale.

So why don't the honest advertisers die out? One more factor: the norms enforcers. You can think of these as product reviewers or regulators. If a deceptive advertiser wins an ad impression to a norms enforcer, then the deceptive advertiser pays a cost, greater than the profit from a sale. Think of it as having to register a new domain and get a new logo. Honest advertisers can make normal sales to the norms enforcers, which are shown as blue squares. An ad impression that results in an "enforcement penalty" is shown as a blue line.
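
For readers who like to see rules as code, here is a rough C++ sketch of one auction round under the rules above. It is not the actual simulation code, and every constant in it (profits, penalty, thresholds) is invented purely for illustration:

// Rough sketch of the rules described above -- illustrative constants only.
#include <cmath>
#include <vector>

struct Advertiser {
  double x, y;        // characteristics of the offer
  double bid;         // price bid per impression
  double radius;      // maximum targeting distance
  bool   deceptive;   // deceptive sellers earn slightly more per sale
  double profit = 0;
};

struct Customer {
  double x, y;        // the customer's preferences
  bool   enforcer;    // norms enforcers punish deceptive sellers
};

static double dist(double ax, double ay, double bx, double by) {
  return std::hypot(ax - bx, ay - by);
}

// One round: each customer sees one impression from the highest bidder in range.
void runRound(std::vector<Advertiser>& ads, const std::vector<Customer>& customers) {
  const double honestProfit = 1.0, deceptiveProfit = 1.2;
  const double enforcementPenalty = 5.0;   // greater than the profit from a sale
  const double buyThreshold = 0.1;         // "close enough to their preferences"

  for (const Customer& c : customers) {
    Advertiser* winner = nullptr;
    for (Advertiser& a : ads) {
      if (dist(a.x, a.y, c.x, c.y) <= a.radius && (!winner || a.bid > winner->bid))
        winner = &a;
    }
    if (!winner) continue;

    winner->profit -= winner->bid;                       // pay for the impression
    if (c.enforcer && winner->deceptive) {
      winner->profit -= enforcementPenalty;              // caught by a norms enforcer
    } else if (dist(winner->x, winner->y, c.x, c.y) <= buyThreshold) {
      winner->profit += winner->deceptive ? deceptiveProfit : honestProfit;
    }                                                    // otherwise: impression, no sale
  }
  // After each round the lowest-profit advertisers would be culled and replaced
  // with mutated copies of the survivors, which is where the evolution comes in.
}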

So, out of those relatively simple rules—two kinds of advertisers and two kinds of customers—we can see several main strategies arise. Your run of the simulation is unique, and you can also visit the big version.

What I'm seeing on mine is some clusters of finely targeted deceptive advertisers, in areas with relatively few norms enforcers, and some low-bidding honest advertisers with a relatively broad targeting radius. Again, I don't think that this necessarily corresponds to any real-world advertising market, but it is interesting to figure out when and how an advertising market can give an advantage to deceptive sellers, and what kinds of protections on the customer side can change the game.

How The California Consumer Privacy Act Stacks Up Against GDPR

The biggest lies that the martech and adtech worlds tell themselves

‘Personalization diminished’: In the GDPR era, contextual targeting is making a comeback

How media companies lost the advertising business

Ben Miroglio, David Zeber, Jofish Kaye, and Rebecca Weiss. 2018. The Effect of Ad Blocking on User Engagement with the Web. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3178876.3186162

When can deceptive sellers outbid honest sellers for ad impressions?

Google Will Enjoy Major GDPR Data Advantages, Even After Joining IAB Europe’s Industry Framework

https://www.canvas8.com/content/2018/06/07/don-marti-nudgestock.html …

Data protection laws are shining a needed light on a secretive industry | Bruce Schneier

How startups die from their addiction to paid marketing

Opinion: Europe's Strict New Privacy Rules Are Scary but Right

Announcing a new journalism entrepreneurship boot camp: Let’s “reboot the media” together

Intelligent Tracking Prevention 2.0

The alt-right has discovered an oasis for white-supremacy messages in Disqus, the online commenting system.

Teens Are Abandoning Facebook. For Real This Time.

Salesforce CEO Marc Benioff Calls for a National Privacy Law

Planet Linux AustraliaFrancois Marier: Mysterious 'everybody is busy/congested at this time' error in Asterisk

I was trying to figure out why I was getting a BUSY signal from Asterisk while trying to ring a SIP phone even though that phone was not in use.

My asterisk setup looks like this:

phone 1 <--SIP--> asterisk 1 <==IAX2==> asterisk 2 <--SIP--> phone 2

While I couldn't call SIP phone #2 from SIP phone #1, the reverse was working fine (ringing #1 from #2). So it's not a network/firewall problem. The two SIP phones can talk to one another through their respective Asterisk servers.

This is the error message I could see on the second asterisk server:

$ asterisk -r
...
  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5
    -- Called SIP/12345
    -- SIP/12345-00000002 redirecting info has changed, passing it to IAX2/iaxuser-6347
    -- SIP/12345-00000002 is busy
  == Everyone is busy/congested at this time (1:1/0/0)
    -- Executing [12345@local:2] Goto("IAX2/iaxuser-6347", "in12345-BUSY,1") in new stack
    -- Goto (local,in12345-BUSY,1)
    -- Executing [in12345-BUSY@local:1] Hangup("IAX2/iaxuser-6347", "17") in new stack
  == Spawn extension (local, in12345-BUSY, 1) exited non-zero on 'IAX2/iaxuser-6347'
    -- Hungup 'IAX2/iaxuser-6347'

where:

  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • iaxuser is the user account on server #2 that server #1 uses
  • local is the context for incoming IAX calls on server #1

This Everyone is busy/congested at this time (1:1/0/0) was surprising since looking at each SIP channel on that server showed nobody as busy:

asterisk2*CLI> sip show inuse
* Peer name               In use          Limit           
12345                     0/0/0           2               

So I enabled the raw SIP debug output and got the following (edited for clarity):

asterisk2*CLI> sip set debug on
SIP Debugging enabled

  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5

INVITE sip:12345@192.168.0.4:2048;line=m2vlbuoc SIP/2.0
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: Asterisk PBX
Contact: <sip:67890@192.168.0.2:5060>
Content-Length: 274

    -- Called SIP/12345

<--- SIP read from UDP:192.168.0.4:2048 --->
SIP/2.0 100 Trying
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
Content-Length: 0

<------------->
--- (9 headers 0 lines) ---

<--- SIP read from UDP:192.168.0.4:2048 --->
SIP/2.0 480 Do Not Disturb
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
Content-Length: 0

where:

  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • 67890 is the extension of SIP phone #1 on Asterisk server #2
  • 192.168.0.4 is the IP address of SIP phone #2
  • 192.168.0.2 is the IP address of Asterisk server #2

From there, I can see that SIP phone #2 is returning a status of 480 Do Not Disturb. That's what the problem was: the phone itself was in DnD mode and set to reject all incoming calls.

,

Rondam RamblingsIf the shoe fits

Fox-and-Friends host Abby Huntsman, in a rare moment of lucidity, today referred to the upcoming summit between Donald Trump and Kim Jong Un as "a meeting between two dictators". The best part is that nobody on the show seemed to notice, perhaps because there is such a thick pile of lies and self-deceptions that Trump apologists have to keep track of that sometimes the truth can slip through the

Planet Linux AustraliaChris Samuel: Submission to Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples

Tonight I took some time to send a submission in to the Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples in support of the Uluru Statement from the Heart from the 2017 First Nations National Constitutional Convention held at Uluru. Submissions close June 11th so I wanted to get this in as I feel very strongly about this issue.

Here’s what I wrote:

To the Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples,

The first peoples of Australia have lived as part of this continent for many times longer than the ancestors of James Cook lived in the UK(*), let alone this brief period of European colonisation called Australia.

They have farmed, shaped and cared for this land over the millennia, they have seen the climate change, the shorelines move and species evolve.

Yet after all this deep time as custodians of this land they were dispossessed via the convenient lie of Terra Nullius and through killing, forced relocation and introduced sickness had their links to this land severely beaten, though not fatally broken.

Yet we still have the chance to try and make a bridge and a new relationship with these first peoples; they have offered us the opportunity for a Makarrata and I ask you to grasp this opportunity with both hands, for the sake of all Australians.

Several of the component states and territories of this recent nation of Australia are starting to investigate treaties with their first peoples, but this must also happen at the federal level as well.

Please take the Uluru Statement from the Heart to your own hearts, accept the offering of Makarrata & a commission and let us all move forward together.

Thank you for your attention.

Yours sincerely,
Christopher Samuel

(*) Australia has been continuously occupied for at least 50,000 years, almost certainly for at least 60,000 years and likely longer. The UK has only been continuously occupied for around the last 10,000 years after the last Ice Age drove its previous population out into warmer parts of what is now Europe.

Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This item originally posted here:

Submission to Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples

Planet Linux AustraliaBen Martin: A new libferris is coming! 2.0.x

A while back I ported most of the libferris suite over to using boost for smart pointers and for signals. The latter was not such a problem, but there were always some fringe cases with the former, and this led to a delay in releasing it because there were some known issues.

I have moved that code into a branch locally and reverted back to using the Modern C++ Loki library for intrusive reference counting and sigc++. I imported my old test suite into the main libferris repo and will flesh that out over time.

I might do a 2.0.0 or 1.9.9 release soonish so that the entire stack is out there. As this has the main memory management stuff that has been working fine for the last 10 years this shouldn't be more unstable than it was before.

I was tempted to use travis ci for testing but will likely move to using a local vm. Virtualization has gotten much more convenient and I'm happy to setup a local test VM for this task which also breaks a dependency on companies which really doesn't need to be there. Yes, I will host releases and a copy of git in some place like github or gitlab or whatever to make that distribution more convenient. On the other hand, anyone could run the test suite which will be in the main libferris distro if they feel the desire.

So after this next release I will slowly at leisure work to flesh out the testsuite and fix issues that I find by running it over time. This gives a much more incremental development which will hopefully be more friendly to the limited time patches that I throw at the project.

One upside of being fully at the mercy of my time is that the project is less likely to die or be taken over by a company and lead in an unnatural direction. The downside is that it relies on my free time which is split over robotics, cnc, and other things as well as libferris.

As some have mentioned, a flatpak or docker image for libferris would be nice. Ironically this makes the whole thing a bit more like plan9, with a filesystem microkernel-like subsystem (container), than just running it natively through rpm or deb, but whatever makes it easier.

,

Don MartiNudgestock 2018 notes and links

Thanks for coming to my Nudgestock 2018 talk. First, as promised, some links to the signaling literature. I don't know of a full bibliography for this material, and a lot of it appears to be paywalled. A good way to get into it is to start with this widely cited paper by Phillip Nelson: Advertising as Information | Journal of Political Economy: Vol 82, No 4 and work forward.

Gardete and Bart "We find that when the sender’s motives are transparent to the receiver, communication can only be influential if the sender is not well informed about the receiver’s preferences. The sender prefers an interior level of information quality, while the receiver prefers complete privacy unless disclosure is necessary to induce communication." Tailored Cheap Talk | Stanford Graduate School of Business The Gardete and Bart paper makes sense if you ever read Computer Shopper for the ads. You want to get an idea of each manufacturer's support for each hardware standard, so that you can buy parts today that will keep their value in the parts market of the near future. You don't want an ad that targets you based on what you already have.

Kihlstrom and Riordan "A great deal of advertising appears to convey no direct credible information about product qualities. Nevertheless such advertising may indirectly signal quality if there exist market mechanisms that produce a positive relationship between product quality and advertising expenditures." Advertising as a Signal

Ambler and Hollier "High perceived advertising expense enhances an advertisement's persuasiveness significantly, but largely indirectly, by strengthening perceptions of brand quality." The Waste in Advertising Is the Part That Works | the Journal of Advertising Research

Davis, Kay, and Star "It is not so much the claims made by advertisers that are helpful but the fact that they are willing to spend extravagant amounts of money." Is advertising rational- Business Strategy Review - Wiley Online Library

New research on the effect of ad blocking on user engagement. No paywall. Ben Miroglio, David Zeber, Jofish Kaye, and Rebecca Weiss. 2018. The Effect of Ad Blocking on User Engagement with the Web. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3178876.3186162 (PDF)

Here's that simulation of unicellular advertisers that I showed on screen, and more on the norms enforcer situation, which IMHO is different from pure signaling.

For those of you who are verified on Twitter, so haven't seen what I'm talking about with the deceptive ads there, I have started collecting some: dmarti/deceptive-ads

I mentioned the alignment of interest between high-reputation brands and high-reputation publishers. More on the publisher side is in a series of guest posts for Digital Content Next, which represents large media companies that stand to benefit from reputation-based advertising: Don Marti, Author at Digital Content Next Also more from the publisher point of view in Notes and links from my talk at the Reynolds Journalism Institute.

If you're interested in the post-creepy advertising movement, here are some people to follow on Twitter.

What's next? The web advertising mess isn't a snarled-up mess of collective action problems. It's a complex set of problems that interact in a way that creates some big opportunities for the right projects. Work together to fix web ads? Let's not.

,

Harald WelteRe-launching openmoko USB Product ID and Ethernet OUI registry

Some time after Openmoko went out of business, they made their USB Vendor IDs and IEEE OUI (Ethernet MAC address prefix) available to Open Source Hardware / FOSS projects.

After maintaining that for some years myself, I was unable to find time to continue the work and handed it over some time ago to two volunteers. However, as things go, those volunteers also stopped responding to PID / OUI requests, and we're now launching the third attempt at continuing this service.

As the openmoko.org wiki will soon be moved into an archive of static web pages only, we're also moving the list of allocated PID and OUIs into a git repository.

Since git.openmoko.org is also about to be decommissioned, the repository is now at https://github.com/openmoko/openmoko-usb-oui, next to all the archived openmoko.org repository mirrors.

This also means that in addition to sending an e-mail application for getting an allocation in those ranges, you can now send a pull-request via github.

Thanks to cuvoodoo for volunteering to maintain the Openmoko USB PID and IEEE OUI allocations from now on!

CryptogramFriday Squid Blogging: Extinct Relatives of Squid

Interesting fossils. Note that a poster is available.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDResources for suicide prevention, post-attempt survivors and their families

Inspired by JD Schramm’s powerful TEDTalk on surviving a suicide attempt, this list of resources has been updated to help you widen your understanding of mental health, depression, suicide and suicide prevention. Whether you’re an attempt survivor, a concerned family member or friend, or struggling with suicidal thoughts yourself, this list offers helpful resources and hotlines from across the world. This list is not exhaustive so we’d love to hear from you— add suggestions to the comments or email us.

To start off, here is a TED playlist on breaking the silence around suicide.

In the US:

National Suicide Prevention Lifeline
1-800-273-TALK
http://www.suicidepreventionlifeline.org/
A free, 24-hour hotline available to anyone in suicidal crisis or emotional distress. Your call will be routed to the nearest crisis center to you.

The Trevor Project
http://www.thetrevorproject.org/localresources
866 4-U-TREVOR
The Trevor Project is determined to end suicide among LGBTQ youth by providing life-saving and life-affirming resources including a nationwide, 24/7 crisis intervention lifeline, digital community and advocacy/educational programs that create a safe, supportive and positive environment for everyone.

Samaritans USA
http://www.samaritansusa.org/
Samaritans centers provide volunteer-staffed hotlines and professional and volunteer-run public education programs, “suicide survivor” support groups and many other crisis response, outreach and advocacy activities.

Attempt Survivors
http://attemptsurvivors.com/
A two-year project that collected blog posts and stories for and by attempt survivors, set up by the American Association of Suicidology. While the active collection has stopped, the archive is a good place to explore, to hear open, honest voices exploring life after a suicide attempt.

ULifeline
http://ulifeline.org/page/main/StudentLogin.html
An anonymous online resource where you can learn about suicide  prevention and campus-specific resources.

American Foundation for Suicide Prevention:
http://www.afsp.org/
A national nonprofit organization dedicated to understanding and preventing suicide through research, education and advocacy, and to reaching out to people impacted by suicide.

Mental Health First Aid USA
http://www.mentalhealthfirstaid.org/
A public education program that  helps the public identify, understand and respond to signs of mental illnesses and substance use disorders.

Suicide Awareness Voices of Education
SAVE.org
A national nonprofit dedicated to preventing suicide through public awareness and education.

Live Through This
http://livethroughthis.org/
An organization documenting the stories and portraits of suicide attempt survivors to encourage more open dialogue around suicide and depression.

International:

International Association for Suicide Prevention
http://www.iasp.info/
IASP now includes professionals and volunteers from more than fifty different countries. IASP is a Non-Governmental Organization in official relationship with the World Health Organization (WHO) concerned with suicide prevention.

Befrienders 
A suicide prevention resource with phone helplines across the world.
https://www.befrienders.org/

Canadian Association for Suicide Prevention
A resource for survivors as well as anyone in suicidal distress.
To find the nearest crisis center: https://suicideprevention.ca/need-help/
To find the nearest support group: https://suicideprevention.ca/coping-with-suicide-loss/survivor-support-centres/

SAPTA (Mexico)
http://www.saptel.org.mx/index.html

Centro de Valorização da Vida (Brazil)
http://www.cvv.org.br/
Tel: 188 or 141

Sociedade Portuguesa de Suicidologia (Portugal)
http://www.spsuicidologia.pt/

Hulpmix (Netherlands)
https://www.113.nl/english

Samaritans Onlus (Italy)
http://www.samaritansonlus.org/

The South African Depression and Anxiety Group (South Africa)
http://www.sadag.org/index.php?option=com_content&view=article&id=11&Itemid=114

Suicide Ecoute (France)
http://www.suicide-ecoute.fr/

PHARE (France)
http://www.phare.org/

한국자살예방협회 (Korean Association for Suicide Prevention)
http://www.suicideprevention.or.kr/

한국자살협회 사이버 상담실 (Korean Suicide Prevention Cyber Counseling)
http://www.counselling.or.kr/

Hjälplinjen (Sweden)
http://www.hjalplinjen.se/

If you know of good resources available where you live, please add them to the comments section of this post.

Worse Than FailureError'd: Try Again (but with More Errors)

"Sorry, Walgreens, in the future, I'll try to make an error next time," Greg L. writes.

 

"Hmm, I'm either going to shave with my new razors that I ordered... or I won't," wrote Paul.

 

Charlie L. writes, "IFNAME would be my name, IF it were my name that is."

 

"Yep, Dell, I like to brag about my kids File, Edit, View, Tools, and Help," wrote Carl C.

 

Renato L. writes, "Low-cost airlines have come a long way. Forget the Gregorian calendar, they've created their own."

 

"So is this becuase, somehow, passwords longer than 9 characters are less secure?" wrote Keith H.

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet Linux AustraliaOpenSTEM: This Week in Australian History

Today we introduce a new category for OpenSTEM® Blog articles: “This Week in Australian History”. So many important dates in Australian history seem to become forgotten over time that there seems to be a need to highlight some of these from time to time. For teachers of students from Foundation/Prep/Kindy to Year 6 looking for […]

Planet Linux AustraliaMatthew Oliver: Keystone Federated Swift – Separate Clusters + Container Sync

This is the third post in the series of Keystone Federated Swift. To bounce back to the start you can visit the first post.

Separate Clusters + Container Sync

The idea with this topology is to deploy each of your federated OpenStack clusters with its own unique Swift cluster, and then use another Swift feature, container sync, to push objects you create in one federated environment to another.

In this case the keystone servers are federated. A very similar topology could be a global Swift cluster where each proxy only talks to a single region’s keystone. That would mean a user visiting a different region would authenticate via federation and be able to use the Swift cluster, but under a different account name. In both cases container sync could be used to synchronise the objects, say from the federated account back to the original account, because container sync can synchronise containers both in separate clusters and within the same cluster.

 

Setting up container sync

Setting up container sync is pretty straightforward, and it is also well documented. At a high level it goes like this. Firstly you need to set up a trust between the different clusters. This is achieved by creating a container-sync-realms.conf file; the online example is:

[realm1]
key = realm1key
key2 = realm1key2
cluster_clustername1 = https://host1/v1/
cluster_clustername2 = https://host2/v1/

[realm2]
key = realm2key
key2 = realm2key2
cluster_clustername3 = https://host3/v1/
cluster_clustername4 = https://host4/v1/

 

Each realm is a separate set of trusts, and you can have as many clusters in a realm as you want, so as you can see you can build up different realms. In our example we’d only need 1 realm, and let’s use some better names.

[MyRealm]
key = someawesomekey
key2 = anotherkey
cluster_blue = https://blueproxyvip/v1
cluster_green = https://greenproxyvip/v1

NOTE: there is nothing stopping you from having only 1 cluster defined, as you can use container sync within a cluster, or from adding more clusters to a single realm.

 

Now in our example both the green and blue clusters need to have the MyRealm realm defined in their /etc/swift/container-sync-realms.conf file. The 2 keys are there so you can do key rotation. These keys should be kept secret, as they are used to establish trust between the clusters.

 

The next step is to make sure you have the container_sync middleware in your proxy pipeline. There are 2 parts to container sync, the backend daemon that periodically checks containers for new objects and sends changes to the other cluster, and the middleware that is used to authenticate requests sent by container sync daemons from other clusters. We tend to place the container_sync middleware before (to the left of) any authentication middleware.
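
For example, the relevant bits of a proxy-server.conf might look something like the following sketch; the rest of the pipeline here is illustrative only, and your deployment's middleware list will differ:

[pipeline:main]
pipeline = catch_errors proxy-logging cache container_sync authtoken keystoneauth proxy-logging proxy-server

[filter:container_sync]
use = egg:swift#container_sync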

 

The last step is to tell container sync what containers to keep in sync. This is all done via container metadata, which is controlled by the user. Let’s assume we have 2 accounts, AUTH_matt on the blue and AUTH_federatedmatt on the green, and we wanted to sync a container called mycontainer. Note, the containers don’t have to be called the same. We’d start by making sure the 2 containers have the same container sync key, which is defined by the owner of the container; this isn’t one of the realm keys, but it works in a similar way. Then we tell 1 container to sync with the other.
NOTE: you can make the relationship go both ways.

 

Let’s use curl first:

$ curl -i -X POST -H 'X-Auth-Token: <token>' \
-H 'X-Container-Sync-Key: secret' \
'http://blueproxyvip/v1/AUTH_matt/mycontainer'

$ curl -i -X POST -H 'X-Auth-Token: <token>' \
-H 'X-Container-Sync-Key: secret' \
-H 'X-Container-Sync-To: //MyRealm/blue/AUTH_matt/mycontainer' \
'http://greenproxyvip/v1/AUTH_federatedmatt/mycontainer'

Or via the swift client, noting that you need to change identities to set each account.

# To the blue cluster for AUTH_matt
$ swift  post -k 'secret' mycontainer

 

# To the green cluster for AUTH_federatedmatt
$ swift  post \
-t '//MyRealm/blue/AUTH_matt/mycontainer' \
-k 'secret' mycontainer

In a federated environment, you’d just need to set some key for each of the containers you want to work on while you’re away (or all of them I guess). Then when you visit you can just add the sync-to metadata when you create containers on the other side. Likewise, if you knew the name of your account on the other side you could set a sync-to if you needed to work on something over there.

 

To authenticate, container sync generates and compares an HMAC on both sides, where the HMAC covers both the realm and container keys, the verb, the object name and so on.

 

The obvious next question is: great, but then do I need to know the name of each cluster? Well, yes, but you can simply find them by asking Swift via the info call. This is done by hitting the /info endpoint with whatever tool you want. If you’re using the swift client, then it’s:

$ swift info

Pros and cons

Pros

The biggest pro of this approach is that you don’t have to do anything special. Whether you have 1 Swift cluster or a bunch throughout your federated environments, all you need to do is set up a container sync trust between them and the users can sync between themselves.

 

Cons

There are a few I can think off the top of my head:

  1. You need to manually set the metadata on each container. Which might be fine if it’s just you, but if you have an app or something it’s something else you need to think about.
  2. Container sync will move the data periodically, so you may not see it in the other container straight away.
  3. More storage is used. If it’s 1 cluster or many, the objects will exist in both accounts.

Conclusion

This is an interesting approach, but I think it would be much better to have access to the same set of objects everywhere I go and it just worked. I’ll talk about how to go about that in the next post as well as talk about 1 specific way I got working as a POC.

 

Container sync is pretty cool. SwiftStack have recently open sourced another tool, 1space, that can do something similar. 1space looks awesome, but I haven’t had a chance to play with it yet, so I’ll add it to the list of Swift things I want to play with whenever I get a chance.

,

Krebs on SecurityAdobe Patches Zero-Day Flash Flaw

Adobe has released an emergency update to address a critical security hole in its Flash Player browser plugin that is being actively exploited to deploy malicious software. If you’ve got Flash installed — and if you’re using Google Chrome or a recent version of Microsoft Windows you do — it’s time once again to make sure your copy of Flash is either patched, hobbled or removed.

In an advisory published today, Adobe said it is aware of a report that an exploit for the previously unknown Flash flaw — CVE-2018-5002 — exists in the wild, and “is being used in limited, targeted attacks against Windows users. These attacks leverage Microsoft Office documents with embedded malicious Flash Player content distributed via email.”

The vulnerable versions of Flash include v. 29.0.0.171 and earlier. The version of Flash released today brings the program to v. 30.0.0.113 for Windows, Mac, Linux and Chrome OS. Check out this link to detect the presence of Flash in your browser and the version number installed.

Both Internet Explorer/Edge on Windows 10 and Chrome should automatically prompt users to update Flash when newer versions are available. At the moment, however, I can’t see any signs yet that either Microsoft or Google has pushed out new updates to address the Flash flaw. I’ll update this post if that changes. (Update: June 8, 11:01 a.m. ET: Looks like the browser makers are starting to push this out. You may still need to restart your browser for the update to take effect.)

Adobe credits Chinese security firm Qihoo 360 with reporting the zero-day Flash flaw. Qihoo said in a blog post that the exploit was seen being used to target individuals and companies in Doha, Qatar, and is believed to be related to a nation-state backed cyber-espionage campaign that uses booby-trapped Office documents to deploy malware.

In February 2018, Adobe patched another zero-day Flash flaw that was tied to cyber espionage attacks launched by North Korean hackers.

Hopefully, most readers here have taken my longstanding advice to disable or at least hobble Flash, a buggy and insecure component that nonetheless ships by default with Google Chrome and Internet Explorer. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale kicking Flash to the curb is to keep it installed in a browser that you don’t normally use, and then only use that browser on sites that require Flash.

Administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

CryptogramAn Example of Deterrence in Cyberspace

In 2016, the US was successfully deterred from attacking Russia in cyberspace because of fears of Russian capabilities against the US.

I have two citations for this. The first is from the book Russian Roulette: The Inside Story of Putin's War on America and the Election of Donald Trump, by Michael Isikoff and David Corn. Here's the quote:

The principals did discuss cyber responses. The prospect of hitting back with cyber caused trepidation within the deputies and principals meetings. The United States was telling Russia this sort of meddling was unacceptable. If Washington engaged in the same type of covert combat, some of the principals believed, Washington's demand would mean nothing, and there could be an escalation in cyber warfare. There were concerns that the United States would have more to lose in all-out cyberwar.

"If we got into a tit-for-tat on cyber with the Russians, it would not be to our advantage," a participant later remarked. "They could do more to damage us in a cyber war or have a greater impact." In one of the meetings, Clapper said he was worried that Russia might respond with cyberattacks against America's critical infrastructure­ -- and possibly shut down the electrical grid.

The second is from the book The World as It Is, by President Obama's deputy national security advisor Ben Rhodes. Here's the New York Times writing about the book.

Mr. Rhodes writes he did not learn about the F.B.I. investigation until after leaving office, and then from the news media. Mr. Obama did not impose sanctions on Russia in retaliation for the meddling before the election because he believed it might prompt Moscow into hacking into Election Day vote tabulations. Mr. Obama did impose sanctions after the election but Mr. Rhodes's suggestion that the targets include President Vladimir V. Putin was rebuffed on the theory that such a move would go too far.

When people try to claim that there's no such thing as deterrence in cyberspace, this serves as a counterexample.

EDITED TO ADD: Remember the blog rules. Comments that are not about the narrow topic of deterrence in cyberspace will be deleted. Please take broader discussions of the 2016 US election elsewhere.

Worse Than FailureImprov for Programmers: The Internet of Really Bad Things

Things might get a little dark in the season (series?) finale of Improv for Programmers, brought to you by Raygun. Remy, Erin, Ciarán and Josh are back, and not only is everything you're about to hear entirely made up on the spot: everything you hear will be a plot point in the next season of Mr. Robot.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaGary Pendergast: Podcasting: Tavern Style

Earlier today, I joined JJJ and Jeff on episode 319 of the WP Tavern’s WordPress Weekly podcast!

We chatted about GitHub being acquired by Microsoft (and what that might mean for the future of WordPress using Trac), the state of Gutenberg, WordCamp Europe, as well as getting into a bit of the philosophy that drives WordPress’ auto-update system.

Finally, Jeff was kind enough to name me a Friend of the Show, despite my previous appearance technically not being a WordPress Weekly episode. 🎉

WPWeekly Episode 319 – The Gutenberg Plugin Turns 30