Planet Russell


Planet Debian: Steinar H. Gunderson: Nageru deployments

As we're preparing our Nageru video chains for another Solskogen, I thought it worthwhile to make some short posts about deployments in the wild (neither of which I had much involvement with myself):

  • The Norwegian municipality of Frøya is live streaming all of their council meetings using Nageru (Norwegian only). This is a fairly complex setup with a custom frontend controlling PTZ cameras, so that someone non-technical can just choose from a few select scenes and everything else just clicks into place.
  • Breizhcamp, a French technology conference, used Nageru in 2018, transitioning from OBS. If you speak French, you can watch their keynote about it (itself produced with Nageru) and all their other videos online. Breizhcamp ran their own patched version of Nageru (available on Github); I've taken most of their patches into the main repository, but not all of them yet.

Also, someone thought it was a good idea to take an old version of Nageru, strip all the version history and put it on Github with (apparently) no further changes. Like, what. :-)

Valerie Aurora: Yesterday’s joke protest sign just became today’s reality

Tomorrow I’m going to a protest against the forcible separation of immigrant children from their families. When I started thinking about what sign to make, I remembered my sign for the first Women’s March protest, the day after Trump took office in January 2017. It said: “Trump hates kids and puppies… for real!!!”

My protest sign for the 2017 Women’s March

While I expected a lot of terrifying things to happen over the next few years, I never, never thought that Trump would deliberately tear thousands of children away from their families and put them in concentration camps. I knew he hated children; I didn’t know he hated children (specifically, brown children) so much that he’d hold them hostage to force Congress to pass his racist legislation. I did not expect him and his party to try to sell cages full of weeping little boys as future gang members. I did not expect 55% of Republican voters to support splitting up families and putting them in camps. I’m smiling at the cute dog in that photo; now the entire concept of that sign seems impossibly naive and inappropriate, much less my expression in that photo. I apologize for this sign and my joking attitude.

I remember being terrified during the months between Trump’s election and his inauguration. I couldn’t sleep; I put together a go-bag; I bought three weeks worth of food and water and stored them in the closet. I read a dozen books on fascism and failed democracies. I even built a spreadsheet tracking signs of fascism so I’d know when to leave the country.

I came up with the concept of that sign as a way to increase people’s disgust for Trump; what kind of pathetic low-life creep hates kids AND puppies? But I still didn’t get how bad things truly were; I thought Trump hated kids in the sense that he didn’t want any of them around him and wouldn’t lift a finger to help them. I didn’t understand that he—and many people in his administration—took actual pleasure in knowing they were building camps full of crying, desperate, terrified kids who may never be reunited with their parents. In January 2017, I thought I understood the evil of this administration and of a significant percentage of the people in this country; actually, I way underestimated it.

At that protest, several people asked me if Trump really hated puppies, but not one person asked me if Trump really hated kids. In retrospect, this seems ominous, not funny.

I’m going to think very carefully before creating any more “joke” protest signs. Today’s “joke” could easily be tomorrow’s reality.


Planet Debian: Benjamin Mako Hill: I’m a maker, baby

 

What does the “maker movement” think of the song “Maker” by Fink?

Is it an accidental anthem or just unfortunate evidence of the semantic ambiguity around an overloaded term?

Cryptogram: Friday Squid Blogging: Capturing the Giant Squid on Video

In this 2013 TED talk, oceanographer Edith Widder explains how her team captured the giant squid on video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on Security: Supreme Court: Police Need Warrant for Mobile Location Data

The U.S. Supreme Court today ruled that the government needs to obtain a court-ordered warrant to gather location data on mobile device users. The decision is a major development for privacy rights, but experts say it may have limited bearing on the selling of real-time customer location data by the wireless carriers to third-party companies.

Image: Wikipedia.

At issue is Carpenter v. United States, which challenged a legal theory the Supreme Court outlined more than 40 years ago known as the “third-party doctrine.” The doctrine holds that people who voluntarily give information to third parties — such as banks, phone companies, email providers or Internet service providers (ISPs) — have “no reasonable expectation of privacy.”

That framework in recent years has been interpreted to allow police and federal investigators to obtain information — such as mobile location data — from third parties without a warrant. But in a 5-4 ruling issued today that flies in the face of the third-party doctrine, the Supreme Court cited “seismic shifts in digital technology” allowing wireless carriers to collect “deeply revealing” information about mobile users that should be protected by the 4th Amendment to the U.S. Constitution, which is intended to shield Americans against unreasonable searches and seizures by the government.

Amy Howe, a reporter for SCOTUSblog.com, writes that the decision means police will generally need to get a warrant to obtain cell-site location information, a record of the cell towers (or other sites) with which a cellphone connected.

The ruling is no doubt a big win for privacy advocates, but many readers have been asking whether this case has any bearing on the sharing or selling of real-time customer location data by the mobile providers to third-party companies. Last month, The New York Times revealed that a company called Securus Technologies had been selling this highly sensitive real-time location information to local police forces across the United States, thanks to agreements the company had in place with the major mobile providers.

It soon emerged that Securus was getting its location data second-hand through a company called 3Cinteractive, which in turn was reselling data from California-based “location aggregator” LocationSmart. Roughly two weeks after The Times’ scoop, KrebsOnSecurity broke the news that anyone could look up the real time location data for virtually any phone number assigned by the major carriers, using a buggy try-before-you-buy demo page that LocationSmart had made available online for years to showcase its technology.

Since those scandals broke, LocationSmart disabled its promiscuous demo page. More importantly, AT&T, Sprint, T-Mobile and Verizon all have said they are now in the process of terminating agreements with third parties to share this real-time location data.

Still, there is no law preventing the mobile providers from hashing out new deals to sell this data going forward, and many readers here have expressed concerns that the carriers can and eventually will do exactly that.

So the question is: Does today’s Supreme Court ruling have any bearing whatsoever on mobile providers sharing location data with private companies?

According to SCOTUSblog’s Howe, the answer is probably “no.”

“[Justice] Roberts emphasized that today’s ruling ‘is a narrow one’ that applies only to cell-site location records,” Howe writes. “He took pains to point out that the ruling did not ‘express a view on matters not before us’ – such as obtaining cell-site location records in real time, or getting information about all of the phones that connected to a particular tower at a particular time. He acknowledged that law-enforcement officials might still be able to obtain cell-site location records without a warrant in emergencies, to deal with ‘bomb threats, active shootings, and child abductions.'”

However, today’s decision by the high court may have implications for companies like Securus which have marketed the ability to provide real-time mobile location data to law enforcement officials, according to Jennifer Lynch, a senior staff attorney with the Electronic Frontier Foundation, a nonprofit digital rights advocacy group.

“The court clearly recognizes the ‘deeply revealing nature’ of location data and recognizes we have a privacy interest in this kind of information, even when it’s collected by a third party (the phone companies),” Lynch wrote in an email to KrebsOnSecurity. “I think Carpenter would have implications for the Securus context where the phone companies were sharing location data with non-government third parties that were then, themselves, making that data available to the government.”

Lynch said that in those circumstances, there is a strong argument the government would need to get a warrant to access the data (even if the information didn’t come directly from the phone company).

“However, Carpenter’s impact in other contexts — specifically in contexts where the government is not involved — is much less clear,” she added. “Currently, there aren’t any federal laws that would prevent phone companies from sharing data with non-government third parties, and the Fourth Amendment would not apply in that context.”

And there’s the rub: There is nothing in the current law that prevents mobile companies from sharing real-time location data with other commercial entities. For that reality to change, Congress would need to act. For more on the prospects of that happening and how we wound up here, check out my May 26 story, Why is Your Location Data No Longer Private?

The full Supreme Court opinion in Carpenter v. United States is available here (PDF).

Cryptogram: The Effects of Iran's Telegram Ban

The Center for Human Rights in Iran has released a report outlining the effects of that country's ban on Telegram, a secure messaging app used by about half of the country.

The ban will disrupt the most important, uncensored platform for information and communication in Iran, one that is used extensively by activists, independent and citizen journalists, dissidents and international media. It will also impact electoral politics in Iran, as centrist, reformist and other relatively moderate political groups that are allowed to participate in Iran's elections have been heavily and successfully using Telegram to promote their candidates and electoral lists during elections. State-controlled domestic apps and media will not provide these groups with such a platform, even as they continue to do so for conservative and hardline political forces in the country, significantly aiding the latter.

From a Wired article:

Researchers found that the ban has had broad effects, hindering and chilling individual speech, forcing political campaigns to turn to state-sponsored media tools, limiting journalists and activists, curtailing international interactions, and eroding businesses that grew their infrastructure and reach off of Telegram.

It's interesting that the analysis doesn't really center around the security properties of Telegram, but more around its ubiquity as a messaging platform in the country.

Cryptogram: Domain Name Stealing at Gunpoint

I missed this story when it came around last year: someone tried to steal a domain name at gunpoint. He was just sentenced to 20 years in jail.

Worse Than Failure: Error'd: Be Patient!...OK?

"I used to feel nervous when making payments online, but now I feel ...um...'Close' about it," writes Jeff K.

 

"Looks like me and Microsoft have different ideas of what 75% means," Gary S. wrote.

 

George writes, "Try this one at home! Head to tdbank.com, search for 'documents for opening account' and enjoy 8 solid pages of ...this."

 

"I'm confused if the developers knew the difference between Javascript and Java. This has to be a troll...right?" wrote JM.

 

Tom S. writes, "Saw this in the Friendo app, but what I didn't spot was an Ok button."

 

"I look at this and wonder if someone could deny a vacation requests because of a conflict of 0.000014 days with another member of staff," writes Rob.

 


Sam Varghese: Recycling Trump: Old news passed off as investigative reporting

Over the last three weeks, viewers of the Australian Broadcasting Corporation’s Four Corners program have been treated to what is the ultimate waste of time: a recapping of all that has gone on in the United States during the investigation into alleged Russian collusion with the Trump campaign in the 2016 presidential election.

There was nothing new in the nearly three hours of programming on what is the ABC’s prime investigative program. It served only as a vanity outlet for Sarah Ferguson, rated as one of the network’s better reporters, but who, after this and her unnecessary Hillary Clinton interview, appears more as someone interested in big-noting herself.

Exactly why Ferguson and a crew spent what must have been between four and six weeks in the US, London and Moscow to put to air material that has been beaten to death by the US and other Western media is a mystery. Had Ferguson managed to unearth one nugget of information that has gone unnoticed so far, one would not be inclined to complain.

But this same ABC has been crying itself hoarse for the last few months over cuts to its budget and trumpeting its news credentials – and then it produces garbage like the three episodes of the Russia-Trump series or whatever it was called.

As an aside, the investigation has been going on for more than a year now, with special counsel Robert Mueller, a former FBI director, having been appointed on May 17, 2017. The American media have had a field day and every time there is a fresh development, there are shrieks all around that this is the straw that breaks the camel’s back. But it all turns out to be an illusion in the end.

Every little detail of the process of electing Donald Trump has been covered and dissected over and over and over again. And yet Ferguson thought it a good idea to run three hours of this garbage.

Apart from the fact that this is something akin to the behaviour of a dog that revisits its own vomit, Ferguson also paraded some very dodgy individuals to bolster her program.

One was James Clapper, the director of national intelligence during the Obama presidency. Clapper is a man who has committed perjury by lying to the US Congress under oath. Clapper also leaked information about the infamous anti-Trump dossier to CNN’s Jake Tapper and then was rewarded with a contract at CNN.

Clapper does not have the best of reputations when it comes to integrity. To call him a shady character would not be a stretch. Now Ferguson may have needed to speak to him once, because he was the DNI under Obama. But she did not need to have him appear every now and then, remarking on this and that. He added no weight to an already weak program.

Another person Ferguson gave plenty of air time to was Luke Harding, a reporter with the Guardian. Harding is known for a few things: plagiarising others’ reports while he was stationed in Moscow and writing a book about Edward Snowden without having met any of the principal players in the matter. Once again, a person of dubious character.

One would also have to ask: why does the camera focus on the reporter so much? Is she the story? Or is it a way to puff herself up and appear so important that she cannot be out of sight of the lens lest the story break down? It is a curse of modern journalism, this narcissism, and Ferguson suffers from it badly.

This is the second worthless program Ferguson has produced in recent times; the first was her puff interview with Hillary Clinton.

Maybe she is gearing up to take on some kind of job in the US. Wouldn’t surprise me if public money was being used to paint the meretricious as the magnificent.

Cryptogram: Algeria Shut Down the Internet to Prevent Students from Cheating on Exams

Algeria shut the Internet down nationwide to prevent high-school students from cheating on their exams.

The solution in New South Wales, Australia was to ban smartphones.

EDITED TO ADD (6/22): Slashdot thread.


Planet Debian: Lars Wirzenius: Ick ALPHA-6 released: CI/CD engine

It gives me no small amount of satisfaction to announce the ALPHA-6 version of ick, my fledgling continuous integration and deployment engine. Ick has now been deployed and used by people other than myself.

Ick can, right now:

  • Build system trees for containers.
  • Use system trees to run builds in containers.
  • Build Debian packages.
  • Publish Debian packages via its own APT repository.
  • Deploy to a production server.

There are still many missing features. Ick is by no means ready to replace your existing CI/CD system, but if you'd like to have a look at ick, and help us make it the CI/CD system of your dreams, now is a good time to give it a whirl.

(Big missing features: web UI, building for multiple CPU architectures, dependencies between projects, good documentation, a development community. I intend to make all of these happen in due time. Help would be welcome.)

Worse Than Failure: Wait Low Down

As mentioned previously I’ve been doing a bit of coding for microcontrollers lately. Coming from the world of desktop and web programming, it’s downright revelatory. With no other code running, and no operating system, I can use every cycle on a 16MHz chip, which suddenly seems blazing fast. You might have to worry about hardware interrupts; in fact, I had to swap serial connection libraries out because the one we were using misused interrupts and threw off the timing of my process.

And boy, timing is amazing when you’re the only thing running on the CPU. I was controlling some LEDs and if I just went in a smooth ramp from one brightness level to the other, the output would be ugly steps instead of a smooth fade. I had to use a technique called temporal dithering, which is a fancy way of saying “flicker really quickly” and in this case depended on accurate, sub-microsecond timing. This is all new to me.
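Here's a minimal sketch of the idea in Python (purely illustrative; the real code is C on a microcontroller and the numbers are made up): to show a brightness that falls between two hardware levels, you alternate rapidly between the neighbouring levels, carrying the rounding error forward so the time-averaged output lands on the target.

def dither_sequence(target, frames):
    """Temporal dithering: approximate a fractional brightness level by
    alternating between the two nearest integer levels so that the
    time-averaged output converges on the target."""
    output = []
    error = 0.0
    for _ in range(frames):
        value = round(target + error)   # pick the nearest representable level
        error += target - value         # carry the leftover error into the next frame
        output.append(value)
    return output

# A target of 100.25 comes out as mostly 100s with an occasional 101,
# so the average over many frames is ~100.25.
print(dither_sequence(100.25, 8))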

Speaking of sub-microsecond timing, or "subus", let's check out Jindra S’s submission. This code also runs on a microcontroller, and for… “performance” or “clock accuracy” is assembly inlined into C.

/*********************** FUNCTION v_Angie_WaitSubus *******************************//**
@brief Busy waits for a defined number of cycles.
The number of needed sys clk cycles depends on the number of flash wait states,
but due to the caching, the flash wait states are not relevant for STM32F4.
4 cycles per u32_Cnt
*******************************************************************************/
__asm void  v_Angie_WaitSubus( uint32_t u32_Cnt )
{
loop
    subs r0, #1
    cbz  r0, loop_exit
    b loop
loop_exit
    bx lr
}

Now, this assembly isn’t the most readable thing, but the equivalent C code is pretty easy to follow: while(--u32_Cnt); In other words, this is your typical busy-loop. Since this code is the only code running on the chip, no problem right? Well, check out this one:

/*********************** FUNCTION v_Angie_IRQWaitSubus *******************************//**
@brief Busy waits for a defined number of cycles.
The number of needed sys clk cycles depends on the number of flash wait states,
but due to the caching, the flash wait states are not relevant for STM32F4.
4 cycles per u32_Cnt
*******************************************************************************/
__asm void  v_Angie_IRQWaitSubus( uint32_t u32_Cnt )
{
IRQloop
    subs r0, #1
    cbz  r0, IRQloop_exit
    b IRQloop
IRQloop_exit
    bx lr
}

What do you know, it’s the same exact code, but called IRQWaitSubus, implying it’s meant to be called inside of an interrupt handler. The details can get fiendishly complicated, but for those who aren’t looking at low-level code on the regular, interrupts are the low-level cousin of event handlers. It allows a piece of hardware (or software, in multiprocessing systems) to notify the CPU that something interesting has happened, and the CPU can then execute some of your code to react to it. Like any other event handler, interrupt handlers should be fast, so they can update the program state and then allow normal execution to continue.

What you emphatically do not do is wait inside of an interrupt handler. That’s bad. Not a full-on WTF, but… bad.

There’s at least three more variations of this function, with slightly different names, scattered across different modules, all of which represent a simple busy loop.

Ugly, sure, but where’s the WTF? Well, among other things, this board needed to output precisely timed signals, like, say, a 500Hz square wave with a 20% duty cycle. The on-board CPU clock was a simple oscillator which would drift over time, with changes in temperature, etc. Also, interrupts could claim CPU cycles, throwing off the waits. So Jindra’s company had placed this code onto some STM32F4 ARM microcontrollers, shipped it into the field, and discovered that outside of their climate-controlled offices, stuff started to fail.
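To put rough numbers on that (a hypothetical illustration; the post doesn't give the actual clock or drift figures), a cycle-counted delay inherits any error in the oscillator directly:

nominal_clock = 16_000_000            # Hz; assumed figure, not from the original post
period_ticks = nominal_clock // 500   # 32,000 ticks per 2 ms period
high_ticks = period_ticks // 5        # 6,400 ticks high for a 20% duty cycle

# If heat pushes the oscillator 1% fast, every busy-wait finishes early
# and the "500 Hz" output drifts right along with it.
actual_clock = nominal_clock * 1.01
print(actual_clock / period_ticks)    # ~505 Hz instead of 500 Hz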

The code fix was simple: the STM32 series of processors had a hardware timer which could provide precise timing. Switching to that approach not only made the system more accurate, it also meant that Jindra could throw away hundreds of lines of code that were complicated, buggy, and littered with inline assembly for no particular reason. There was just one problem: the devices with the bad software were already in the field. Angry customers were already upset over how unreliable the system was. And short of going on site to reflash the microcontrollers or shipping fresh replacements, the company was left with only one recourse:

They announced Rev 2 of their product, which offered higher rates of reliability and better performance, and only cost 2% more!


Cryptogram: Are Free Societies at a Disadvantage in National Cybersecurity?

Jack Goldsmith and Stuart Russell just published an interesting paper, making the case that free and democratic nations are at a structural disadvantage in nation-on-nation cyberattack and defense. From a blog post:

It seeks to explain why the United States is struggling to deal with the "soft" cyber operations that have been so prevalent in recent years: cyberespionage and cybertheft, often followed by strategic publication; information operations and propaganda; and relatively low-level cyber disruptions such as denial-of-service and ransomware attacks. The main explanation is that constituent elements of U.S. society -- a commitment to free speech, privacy and the rule of law; innovative technology firms; relatively unregulated markets; and deep digital sophistication -- create asymmetric vulnerabilities that foreign adversaries, especially authoritarian ones, can exploit. These asymmetrical vulnerabilities might explain why the United States so often appears to be on the losing end of recent cyber operations and why U.S. attempts to develop and implement policies to enhance defense, resiliency, response or deterrence in the cyber realm have been ineffective.

I have long thought this to be true. There are defensive cybersecurity measures that a totalitarian country can take that a free, open, democratic country cannot. And there are attacks against a free, open, democratic country that just don't matter to a totalitarian country. That makes us more vulnerable. (I don't mean to imply -- and neither do Russell and Goldsmith -- that this disadvantage implies that free societies are overall worse, but it is an asymmetry that we should be aware of.)

I do worry that these disadvantages will someday become intolerable. Dan Geer often said that "the price of freedom is the probability of crime." We are willing to pay this price because it isn't that high. As technology makes individual and small-group actors more powerful, this price will get higher. Will there be a point in the future where free and open societies will no longer be able to survive? I honestly don't know.

EDITED TO ADD (6/21): Jack Goldsmith also wrote this.


Planet Debian: John Goerzen: Making a difference

Every day, ask yourself this question: What one thing can I do today that will make this democracy stronger and honor and support its institutions? It doesn’t have to be a big thing. And it probably won’t shake the Earth. The aggregation of them will shake the Earth.

– Benjamin Wittes

I have written some over the past year or two about the dangers facing the country. I have become increasingly alarmed about the state of it. And that Benjamin Wittes quote, along with the terrible tragedy, spurred me to action. Among other things, I did two things I never have done before:

I registered to protest on June 30.

I volunteered to do phone banking with SwingLeft.

And I changed my voter registration from independent to Republican.

No, I have not gone insane. The reason for the latter is that here in Kansas, the Democrats rarely field candidates for most offices. The real action happens in the Republican primary. So if I can vote in that primary, I can have a voice in keeping the crazy out of office. It’s not much, but it’s something.

Today we witnessed, hopefully, the first victory in our battle against the abusive practices happening to children at the southern border. Donald Trump caved, and in so doing, implicitly admitted the lies he and his administration have been telling about the situation. This only happened because enough people thought like Wittes: “I am small, but I can do SOMETHING.” When I called the three Washington offices of my senators and representatives — far-right Republicans all — it was apparent that I was by no means the first to give them an earful about this, and that they were changing their tone because of what they heard. Mind you, they hadn’t taken any ACTION yet, but the calls mattered. The reporting mattered. The attention mattered.

I am going to keep doing what little bit I can. I hope everyone else will too. Let us shake the Earth.

Planet Debian: Julien Danjou: Stop merging your pull requests manually


If there's something that I hate, it's doing things manually when I know I could automate them. Am I alone in this situation? I doubt it.

Nevertheless, every day, there are thousands of developers using GitHub who are doing the same thing over and over again: they click on this button:

[Screenshot: GitHub's "Merge pull request" button]

This does not make any sense.

Don't get me wrong. It makes sense to merge pull requests. It just does not make sense that someone has to push this damn button every time.

It does not make any sense because every development team in the world has a known list of pre-requisite before they merge a pull request. Those requirements are almost always the same, and it's something along those lines:

  • Is the test suite passing?
  • Is the documentation up to date?
  • Does this follow our code style guideline?
  • Have N developers reviewed this?

As this list gets longer, the merging process becomes more error-prone. "Oops, John just clicked on the merge button before enough developers had reviewed the patch." Ring a bell?

In my team, we're like every team out there. We know what our criteria to merge some code into our repository are. That's why we set up a continuous integration system that runs our test suite each time somebody creates a pull request. We also require the code to be reviewed by 2 members of the team before it's approved.

When those conditions are all set, I want the code to be merged.

Without clicking a single button.

That's exactly how Mergify started.


Mergify is a service that pushes that merge button for you. You define rules in the .mergify.yml file of your repository, and when the rules are satisfied, Mergify merges the pull request.

No need to press any button.

Take a random pull request, like this one:

[Screenshot: a pull request with a passing Travis build and an approving review, still waiting to be merged]

This comes from a small project that does not have a lot of continuous integration services set up, just Travis. In this pull request, everything's green: one of the owners reviewed the code, and the tests are passing. Therefore, the code should already have been merged: but it's there, hanging, chilling, waiting for someone to push that merge button. Someday.

With Mergify enabled, you'd just have to put this .mergify.yml at the root of the repository:

rules:
  default:
    protection:
      required_status_checks:
        contexts:
          - continuous-integration/travis-ci
      required_pull_request_reviews:
        required_approving_review_count: 1

With such a configuration, Mergify enforces the desired restrictions, i.e., Travis must pass and at least one project member must have reviewed the code. As soon as those conditions are met, the pull request is automatically merged.

We built Mergify as a free service for open-source projects. The engine powering the service is also open-source.

Now go check it out and stop letting those pull requests hang out one second more. Merge them!

If you have any question, feel free to ask us or write a comment below! And stay tuned — as Mergify offers a few other features that I can't wait to talk about!

TED: TEDx talk under review

Updated June 20, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer titled: “Why our perception of pedophilia has to change.”

In the TEDx talk, a speaker described pedophilia as a condition some people are born with, and suggested that if we recognize it as such, we can do more to prevent those people from acting on their instincts.

TEDx events are organized independently from the main annual TED conference, with some 3,500 events held every year in more than 100 countries. Our nonprofit TED organization does not control TEDx events’ content.

This talk and its removal was recently brought to our attention. After reviewing the talk, we believe it cites research in ways that are open to serious misinterpretation. This led some viewers to interpret the talk as an argument in favor of an illegal and harmful practice.

Furthermore, after contacting the organizer to understand why it had been taken down, we learned that the speaker herself requested it be removed from the internet because she had serious concerns about her own safety in its wake.

Our policy is and always has been to remove speakers’ talks when they request we do so. That is why we support this TEDx organizer’s decision to respect this speaker’s wishes and keep the talk offline.

We will continue to take down any illegal copies of the talk posted on the Internet.

Original, posted June 19, 2018: An independently organized TEDx event recently posted, and subsequently removed, a talk from the TEDx YouTube channel that the event organizer had titled: “Why our perception of pedophilia has to change.”
We were not aware of this organizer’s actions, but understand now that their decision to remove the talk was at the speaker’s request for her safety.
In our review of the talk in question, we at TED believe it cites research open for serious misinterpretation.
TED does not support or advocate for pedophilia.
We are now reviewing the talk to determine how to move forward.
Until we can review this talk for potential harm to viewers, we are taking down any illegal copies of the talk posted on the Internet.  

Cryptogram: Perverse Vulnerability from Interaction between 2-Factor Authentication and iOS AutoFill

Apple is rolling out an iOS security usability feature called Security code AutoFill. The basic idea is that the OS scans incoming SMS messages for security codes and suggests them in AutoFill, so that people can use them without having to memorize or type them.

Sounds like a really good idea, but Andreas Gutmann points out an application where this could become a vulnerability: when authenticating transactions:

Transaction authentication, as opposed to user authentication, is used to attest the correctness of the intention of an action rather than just the identity of a user. It is most widely known from online banking, where it is an essential tool to defend against sophisticated attacks. For example, an adversary can try to trick a victim into transferring money to a different account than the one intended. To achieve this the adversary might use social engineering techniques such as phishing and vishing and/or tools such as Man-in-the-Browser malware.

Transaction authentication is used to defend against these adversaries. Different methods exist but in the one of relevance here -- which is among the most common methods currently used -- the bank will summarise the salient information of any transaction request, augment this summary with a TAN tailored to that information, and send this data to the registered phone number via SMS. The user, or bank customer in this case, should verify the summary and, if this summary matches with his or her intentions, copy the TAN from the SMS message into the webpage.

This new iOS feature creates problems for the use of SMS in transaction authentication. Applied to 2FA, the user would no longer need to open and read the SMS from which the code has already been conveniently extracted and presented. Unless this feature can reliably distinguish between OTPs in 2FA and TANs in transaction authentication, we can expect that users will also have their TANs extracted and presented without context of the salient information, e.g. amount and destination of the transaction. Yet, precisely the verification of this salient information is essential for security. Examples of where this scenario could apply include a Man-in-the-Middle attack on the user accessing online banking from their mobile browser, or where a malicious website or app on the user's phone accesses the bank's legitimate online banking service.

This is an interesting interaction between two security systems. Security code AutoFill eliminates the need for the user to view the SMS or memorize the one-time code. Transaction authentication assumes the user read and approved the additional information in the SMS message before using the one-time code.

Planet Debian: Craig Small: Odd dependency on Google Chrome

For weeks I have had problems with Google Chrome. It would work a few times and then, for reasons I didn't understand, would stop working. On the command line you would get several screens of text, but the Chrome window would never appear.

So I tried the Beta, and it worked… once.

Deleted all the cache and configuration and it worked… once.

Every time, on the second and subsequent starts of Chrome, the process would sit in an infinite loop listening on a Unix socket (fd 7), but no window would appear.

By sheer luck, in the screenfuls of spam I noticed this:

Gkr-Message: 21:07:10.883: secret service operation failed: The name org.freedesktop.secrets was not provided by any .service files

Hmm, so I noticed that every time I started a fresh new Chrome, I logged into my Google account. So, once again clearing things, I started Chrome, didn't log in, and closed and reopened it. I had Chrome running the second time! Alas, not with all the stuff synchronised.

An issue for Mailspring put me onto the right path: installing gnome-keyring (or the dependencies p11-kit and gnome-keyring-pkcs11) fixed Chrome.

So if Chrome starts but you get no window, especially if you use Cinnamon, try that trick.

 

 

Worse Than Failure: The Wizard Algorithm

Password requirements can be complicated. Some minimum and maximum number of characters, alpha and numeric characters, special characters, upper and lower case, change frequency, uniqueness over the last n passwords and different rules for different systems. It's enough to make you revert to a PostIt in your desk drawer to keep track of it all. Some companies have brillant employees who feel that they can do better, and so they create a way to figure out the password for any given computer - so you need to neither remember nor even know it.


History does not show who created the wizard algorithm, or when, or what they were smoking at the time.

Barry W. has the misfortune of being a Windows administrator at a company that believes in coming up with their own unique way of doing things, because they can make it better than the way that everyone else is doing it. It's a small organization, in a sleepy part of a small country. And yet, the IT department prides itself on its highly secure practices.

Take the password of the local administrator account, for instance. It's the Windows equivalent of root, so you'd better use a long and complex password. The IT team won't use software to automate and keep track of passwords, so to make things extremely secure, there's a different password for every server.

Here's where the wizard algorithm comes in.

To determine the password, all you need is the server's hostname and its IP address.

For example, take the server PRD-APP2-SERV4 which has the IP address 178.8.1.44.

Convert the hostname to upper case and discard any hyphens, yielding PRDAPP2SERV4.

Take the middle two octets of the IP address. If either is a single digit, pad it out to double digits. So 178.8.1.44 becomes 178.80.10.44 which yields 8010. Now take the last character of the host name; if that's a digit, discard it and take the last letter, otherwise just take the last letter, which gives us V. Now take the second and third letters of the hostname and concatenate them to the 8010 and then stick that V on the end. This gives us 8010RDV. Now take the fourth and fifth letters, and add them to the end, which makes 8010RDVAP. And there's your password! Easy.

It had been that way for as long as anyone could remember, until the day someone decided to enable password complexity on the domain. From then on, you had to do all of the above, and then add @!#%&$?@! to the end of the password. How would you know whether a server has a password using the old method or the new one? Why by a spreadsheet available on the firm-wide-accessible file system, of course! Oh, by the way, there is no server management software.
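For what it's worth, the whole ritual fits in a few lines of code. Here is a hypothetical sketch of the algorithm exactly as described above (the function name and the complexity flag are mine, not the firm's):

def wizard_password(hostname, ip, complexity=False):
    """Derive the local admin password from a hostname and IP address,
    following the 'wizard algorithm' described above."""
    # Upper-case the hostname and drop hyphens: PRD-APP2-SERV4 -> PRDAPP2SERV4
    name = hostname.upper().replace("-", "")

    # Middle two octets, each padded out to two digits: 178.8.1.44 -> "80" + "10"
    octets = ip.split(".")
    middle = "".join(octet.ljust(2, "0") for octet in octets[1:3])

    # Last character of the hostname; if it's a digit, fall back to the last letter.
    last = name[-1]
    if last.isdigit():
        last = next(c for c in reversed(name) if c.isalpha())

    # "8010" + second/third characters + last letter + fourth/fifth characters
    password = middle + name[1:3] + last + name[3:5]

    # Servers covered by the newer domain policy get the extra suffix;
    # which servers those are lives in the spreadsheet, of course.
    if complexity:
        password += "@!#%&$?@!"
    return password

print(wizard_password("PRD-APP2-SERV4", "178.8.1.44"))  # 8010RDVAP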

Critics might say the wizard algorithm has certain disadvantages. The fact that two people, given the same hostname and IP address, often come up with different results for the algorithm. Apparently, writing a script to figure it out for you never dawned on anyone.

Or the fact that when a server has lost contact with the domain and you're trying to log on locally and the phone's ringing and everyone's pressuring you to get it resolved, the last thing you want to be doing is math puzzles.

But at least it's better than the standard way people normally do it!


Planet Debian: Jonathan Carter: Plans for DebCamp18

Dates

I’m going to DebCamp18! I should arrive at NCTU around noon on Saturday, 2018-07-21.

My Agenda

  • DebConf Video: Research if/how MediaDrop can be used with existing Debian video archive backends (basically, just a bunch of files on http).
  • DebConf Video: Take a better look at PeerTube and prepare a summary/report for the video team so that we better know if/how we can use it for publishing videos.
  • Debian Live: I have a bunch of loose ideas that I’d like to formalize before then. At the very least I’d like to file a bunch of paper cut bugs for the live images that I just haven’t been getting to. Live team may also need some revitalization, and better co-ordination with packagers of the various desktop environments in terms of testing and release sign-offs. There’s a lot to figure out and this is great to do in person (might lead to a DebConf BoF as well).
  • Debian Live: Current live weekly images have Calamares installed, although it’s just a test and there’s no indication yet on whether it will be available on the beta or final release images, we’ll have to do a good assessment on all the consequences and weigh up what will work out the best. I want to put together an initial report with live team members who are around.
  • AIMS Desktop: Get core AIMS meta-packages into Debian… no blockers on this, but I just haven’t had enough quiet time to do it (and thanks to AIMS for covering my travel to Hsinchu!)
  • Get some help on ITPs that have been a little bit more tricky than expected:
    • gamemode – Adjust power saving and cpu governor settings when launching games
    • notepadqq – A linux clone of notepad++, a popular text editor on Windows
    • Possibly finish up zram-tools which I just don’t get the time for. It aims to be a set of utilities to manage compressed RAM disks that can be used for temporary space, compressed in-memory swap, etc.
  • Debian Package of the Day series: If there’s time and interest, make some in-person videos with maintainers about their packages.
  • Get to know more Debian people, relax and socialize!

Planet Debian: Athos Ribeiro: Triggering Debian Builds on OBS

This is my fifth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

My GSoC contributions can be seen at the following links

Debian builds on OBS

OBS supports building Debian packages. To do so, one must properly configure a project so OBS knows it is building a .deb package, and must have the packages needed to handle and build Debian packages installed.

openSUSE’s OBS instance has repositories for Debian 8, Debian 9, and Debian testing.

We will use base Debian projects in our OBS instance as Download on Demand projects and use subprojects to achieve our final goal (building packages against Clang). By using the same configurations as the ones in the openSUSE public projects, we could perform builds in Debian 8 and Debian 9 in our local OBS deploys. However, builds for Debian Testing and Unstable were failing.

With further investigation, we realized the OBS version packaged in Debian cannot decompress control.tar.xz files in .deb packages, which is the default compression format for the control tarball since dpkg-1.19 (it used to be control.tar.gz before that). This issue was reported on the OBS repositories and was fixed on a Pull Request that is not included in the current Debian OBS version yet. For now, we apply this patch in our OBS instance on our salt states.

After applying the patch, the builds on Debian 8 and 9 are still finishing successfully, but builds against Debian Testing and Unstable are getting stuck in a blocked state: dependencies are downloaded, the OBS scheduler stalls for a while, the downloaded packages get cleaned up, and then the dependencies are downloaded again. The OBS backend enters a loop repeating this procedure and never assigns the build to a worker. No logs with hints pointing to a possible cause are issued, giving us no clue about the problem.

Although I am inclined to believe we have a problem with our dependencies list, I am still debugging this issue during this week and will bring more news on my next post.

Refactoring project configuration files

Reshabh opened a Pull Request in our salt repository with the OBS configuration files for Ubuntu, also based on openSUSE's public OBS configurations. Following Sylvestre's comments, I have been refactoring the Debian configuration files based on the OBS documentation. One of the proposed improvements is to use debootstrap to mount the builder chroot. This will allow us to reduce the number of dependencies listed in the projects' configuration files. The issue which led to debootstrap support in OBS is available at https://github.com/openSUSE/obs-build/issues/111 and may lead to more interesting resources on the matter.

Next steps (A TODO list to keep on the radar)

  • Fix OBS builds on Debian Testing and Unstable
  • Write patches for the OBS worker issue described in post 3
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)


TED: 12 books from favorite TEDWomen speakers, for your summer reading list

We all have a story to tell. And in my work as curator of the TEDWomen conference, I’ve had the pleasure of providing a platform to some of the best stories and storytellers out there. Beyond their TED Talk, of course, many TEDWomen speakers are also accomplished authors — and if you liked them on the TED stage, odds are you will enjoy spending more time with them in the pages of their books.

All of the women and men listed here have given talks at TEDWomen, though some talks are related to their books and some aren’t. See what connects with you and enjoy your summer!

luvvie-ajayi-im-judging-you-cover.jpg

Luvvie Ajayi‘s 2017 TEDWomen talk has already amassed over 2.2 million views online! In it, she talks about how she wants to leave this world better than she found it and in order to do that, she says we all have to get more comfortable saying the sometimes uncomfortable things that need to be said. What’s great about Luvvie is that she delivers her commentary with a sly side eye that pokes fun at everyone, including herself.

In her book, I’m Judging You: The Do-Better Manual — written in the form of an Emily Post-type guidebook for modern manners — Luvvie doles out criticism and advice with equal amounts of wit, charm and humor that’s often laugh-out-loud funny. As Shonda Rhimes noted in her review, “This truth-riot of a book gives us everything from hilarious lectures on the bad behavior all around us to razor sharp essays on media and culture. With I’m Judging You, Luvvie brilliantly puts the world on notice that she is not here for your foolishness — or mine.”

madeleine-albright-fascism.jpg

At the first TEDWomen in 2010, Madeleine Albright talked to me about what it was like to be a woman and a diplomat. In her new book, entitled Fascism: A Warning, the former secretary of state writes about the history of fascism and the clash that took place between two ideologies of governing: fascism and democracy. She argues that “fascism not only endured the 20th century, but now presents a more virulent threat to peace and justice than at any time since the end of World War II.”

“At a moment when the question ‘Is this how it begins?’ haunts Western democracies,” the Economist notes in its review, “[Albright] writes with rare authority.”

gretchen-carlson-be-fierce-cover.jpg

Sometimes a talk perfectly captures the zeitgeist, and that was the case with Gretchen Carlson last November at TEDWomen. At the time, the #MeToo movement founded in 2007 by Tarana Burke was seeing a huge surge online, thanks to signal-boosting from Alyssa Milano and more women with stories to share.

Carlson took to the stage to talk about her personal experience with sexual harassment at Fox News, her historic lawsuit and the lessons she’d learned and related in her just-released book, Be Fierce. In her talk, she identifies three specific things we can all do to create safer places to work. “We will no longer be underestimated, intimidated or set back,” Carlson says. “We will stand up and speak up and have our voices heard. We will be the women we were meant to be.” In her book, she writes in detail about how we can stop harassment and take our power back.

john-cary-design-for-good-cover.jpg

John Cary is an architect who thinks deeply about diversity in design — and how the field’s lack of diversity leads to thoughtless, compassionless spaces in the modern world. As he said in his 2017 TEDWomen talk, “well-designed spaces are not just a matter of taste or a question of aesthetics. They literally shape our ideas about who we are in the world and what we deserve.”

For years, as the executive director of Public Architecture, John has advocated for the term “public interest design” to become part of the architect’s lexicon, in much the same way as it is in fields like law and health care. In his new book, Design for Good, John presents 20 building projects from around the world that exemplify how good design can improve communities, the environment, and the lives of the people who live with it.

brittney-cooper-eloquent-rage-cover.jpg

In her thought-provoking 2016 TEDWomen talk, professor Brittney Cooper examined racism through the lens of time — showing how moments of joy, connection and well-being had been lost to people of color because of delays in social progress.

Last summer, I recommended Brittney’s book on the lives and thoughts of intellectual Black women in history who had been left out of textbooks. And this year, Brittney is back with another book, one that is more personal and also very timely in this election year in which women are figuring out what a truly intersectional feminist movement looks like.

As my friend Jane Fonda wrote in a recent blog post, in order to build truly multi-racial coalitions, white people need to do the work to truly understand race and racism. For white feminists in particular, the work starts by listening to the perspectives of women of color. Brittney’s book, Eloquent Rage: A Black Feminist Discovers Her Superpower, offers just that opportunity. Brittney’s sharp observations from high school (at a predominantly white school), college (at Howard University) and as a 30-something professional make the political personal. As she told the Washington Post, “When we figure out politics at a personal level, then perhaps it wouldn’t be so hard to figure it out at the more structural level.”

susan-david-emotional-agility-cover.jpeg

Susan David is a Harvard Medical School psychologist who studies how we process our emotions. In a deeply moving talk at TEDWomen 2017, Susan suggested that the way we deal with our emotions shapes everything that matters: our actions, careers, relationships, health and happiness. “I’m not anti-happiness. I like being happy. I’m a pretty happy person,” she says. “But when we push aside normal emotions to embrace false positivity, we lose our capacity to develop skills to deal with the world as it is, not as we wish it to be.”

In her book, Emotional Agility, Susan shares strategies for the radical acceptance of all of our emotions. How do we not let our self-doubts, failings, shame, fear, or anger hold us back?

“We own our emotions,” she says. “They don’t own us.”

all-the-women-in-my-family-sing-cover.jpg

Dr. Musimbi Kanyoro is president and CEO of Global Fund for Women, one of the world’s leading publicly supported foundations for gender equality. In her TEDWomen talk last year, she introduced us to the Maragoli concept of “isirika” — a pragmatic way of life that embraces the mutual responsibility to care for one another — something she sees women practicing all over the world.

In All the Women in My Family Sing, Musimbi is one of 69 women of color who have contributed prose and poetry to this “moving anthology” that “illuminates the struggles, traditions, and life views of women at the dawn of the 21st century. The authors grapple with identity, belonging, self-esteem, and sexuality, among other topics.” Contributors range in age from 16 to 77 and represent African-American, Native American, Asian-American, Muslim, Cameroonian, Kenyan, Liberian, Mexican-American, Korean, Chinese-American and LGBTQI experiences.

anjali-kumar-book-cover.jpg

In her 2017 TEDWomen talk, author Anjali Kumar shared some of what she learned in researching her new book, Stalking God: My Unorthodox Search for Something to Believe In. A few years ago, Anjali — a pragmatic lawyer for Google who, like more than 56 million of her fellow Americans, describes herself as not religious — set off on a mission to find God.

Spoiler alert: She failed. But along the way, she learned a lot about spirituality, humanity and what binds us all together as human beings.

In her humorous and thoughtful book, Anjali writes about her search for answers to life’s most fundamental questions and finding a path to spirituality in our fragmented world. The good news is that we have a lot more in common than we might think.

peggy-orenstein-dont-call-me-princess-cover.jpg

New York Times best-selling author Peggy Orenstein is out with a new collection of essays titled Don’t Call Me Princess: Girls, Women, Sex and Life. Peggy combines a unique blend of investigative reporting, personal revelation and unexpected humor in her many books, including Schoolgirls and the book that was the subject of her 2016 TEDWomen talk, Girls & Sex.

Don’t Call Me Princess “offers a crucial evaluation of where we stand today as women — in our work lives, sex lives, as mothers, as partners — illuminating both how far we’ve come and how far we still have to go.” Don’t miss it.

caroline-paul-you-are-mighty-cover.jpg

Caroline Paul began her remarkable career as the first female firefighter in San Francisco. She wrote about that in her first book, Fighting Fires. In the 20 years since, she’s written many more books, including her most recent, You Are Mighty: A Guide to Changing the World.

This well-timed book offers advice and inspiration to young activists. She writes about the experiences of young people — from famous kids like Malala Yousafzai and Claudette Colvin to everyday kids — who stood up for what they thought was right and made a difference in their communities. Paul offers loads of tactics for young people to use in their own activism — and proves you’re never too young to change the world.

cleo-wade-heart-talk-cover.png

I first encountered Cleo Wade‘s delightful, heartfelt words of wisdom like most people, on Instagram. Cleo has over 350,000 followers on her popular feed that features short poems, bits of wisdom and pics. Cleo has been called the poet of her generation, everybody’s BFF and the millennial Oprah. In her new poetry collection, Heart Talk: Poetic Wisdom for a Better Life, the poet, artist and activist shares some of the Instagram notes she wrote “while sitting in her apartment, poems about loving, being and healing” and “the type of good ol’-fashioned heartfelt advice I would share with you if we were sitting in my home at my kitchen table.”

girl-who-smiled-beads-clementine.jpg

In 1994, the Rwandan Civil War forced six-year-old Clemantine Wamariya and her fifteen-year-old sister from their home in Kigali, leaving their parents and everything they knew behind. In her 2017 TEDWomen talk, Clemantine shared some of her experiences over the next six years growing up while living in refugee camps and migrating through seven African countries.

In her new memoir, The Girl Who Smiled Beads: A Story of War and What Comes After, Clemantine recounts her harrowing story of hunger, imprisonment, and not knowing whether her parents were alive or dead. At the age of 12, she moved to Chicago and was raised in part by an American family. It’s an incredible, poignant story and one that is so important during this time when many are denying the humanity of people who are victims of war and civil unrest. For her part, Clemantine remains hopeful. “There are a lot of great people everywhere,” she told the Washington Post. “And there are also a lot of not-so-great people. It’s all over the world. But when we stepped out of the airplane, we had people waiting for us — smiling, saying, ‘Welcome to America.’ People were happy. Many countries were not happy to have us. Right now there are people at the airport still holding those banners.”

TEDWOMEN 2018

I also want to mention that registration for TEDWomen 2018 is open now! Space is limited and I don’t want you to miss out. This year, TEDWomen will be held Nov. 28–30 in Palm Springs, California. The theme is Showing Up.

The time for silent acceptance of the status quo is over. Women around the world are taking matters into their own hands, showing up for each other and themselves to shape the future we all want to see. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!

— Pat

Planet DebianShashank Kumar: Google Summer of Code 2018 with Debian - Week 5

During week 5, there were 3 merge requests undergoing the review process simultaneously. I learned a lot about how code should be written in order to assist the reader, since code is read many more times than it is written.

Services and Utility

After the user has entered their information on the signin or signup screen, the job of querying the database was handled by a module named updatedb. The job of updatedb was to clean user input, hash the password, query the database and respond with an appropriate result after the database query is executed. In a discussion, Sanyam pointed out that updatedb doesn't conform to its name, given the functions it incorporates. He explained the virtue of service and utility modules/functions, and that this was the best place to restructure the code along those lines.

Utility functions can be described roughly as functions which perform some operations on the data without caring much about the relationship of the data to the application. So, generating a UUID, cleaning the email address, cleaning the full name and hashing the password become our utility functions; they can be seen in utils.py for signup and similarly for signin.

Service functions can be described roughly as functions which, while performing operations on the data, take its relationship with the application into account. Hence, these functions are not generic but application specific. sign_up_user is one such service function: it receives user information, calls utility functions to transform that information, and queries the database for the signup operation, i.e. adding the new user's details to the database or raising SignUpError if the details are already present. This can be seen in the services module for signup and for signin as well.
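To illustrate the split, here is a minimal sketch under my own assumptions: the function names, exception name, table layout and SHA-256 hashing below are stand-ins for whatever the real utils.py and services modules contain, not the project's actual code.

# utils.py -- utility functions: operate on the data, no application knowledge
import hashlib
import uuid


def clean_email(email):
    """Normalize an email address before storing or querying it."""
    return email.strip().lower()


def hash_password(password):
    """Hash a password (illustration only; a real application should use a
    dedicated password-hashing scheme such as bcrypt)."""
    return hashlib.sha256(password.encode('utf-8')).hexdigest()


def generate_uuid():
    """Return a random identifier for a new user."""
    return str(uuid.uuid4())


# services.py -- service functions: tie the data to the application
class SignUpError(Exception):
    """Raised when the submitted details already belong to a user."""


def sign_up_user(connection, email, full_name, password):
    """Add a new user, or raise SignUpError if the email is already taken."""
    email = clean_email(email)
    cursor = connection.execute(
        'SELECT 1 FROM users WHERE email = ?', (email,))
    if cursor.fetchone():
        raise SignUpError('A user with this email already exists')
    connection.execute(
        'INSERT INTO users (id, email, full_name, password) '
        'VALUES (?, ?, ?, ?)',
        (generate_uuid(), email, full_name.strip(), hash_password(password)))
    connection.commit()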

Persisting database connection

This is how the connection to the database used to work before the review: the settings module created the connection to the database, created the table schema if not present, and closed the connection. A few constants were saved in the module to be used by signup and signin in order to connect to the database. The problem is that a database connection then has to be established every time a query is executed by the signup or signin services. Since the SQLite database is saved in a file alongside the application, I thought it wouldn't be a problem to make a connection whenever needed, but it adds overhead on the OS which can slow down the application when scaled. To resolve this, settings now returns the connection object, which can be reused in any other module.
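A minimal sketch of that idea with a plain sqlite3 connection might look like the following; the module layout, database filename and table schema are my assumptions rather than the project's actual code.

# settings.py -- create the connection once and hand it out on demand
import sqlite3

DATABASE_FILE = 'new_contributor_wizard.db'  # assumed filename

_connection = None


def get_connection():
    """Return a single shared sqlite3 connection, creating it on first use."""
    global _connection
    if _connection is None:
        _connection = sqlite3.connect(DATABASE_FILE)
        _connection.execute(
            'CREATE TABLE IF NOT EXISTS users ('
            'id TEXT PRIMARY KEY, email TEXT UNIQUE, '
            'full_name TEXT, password TEXT)')
    return _connection

One thing to keep in mind with a single shared sqlite3 connection is threading: Python's sqlite3 connections refuse to be used from a thread other than the one that created them unless check_same_thread=False is passed, so the services need to stay on one thread or handle that explicitly.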

Integrating SignUp with Dashboard

While the SignUp feature was being reviewed, the Dashboard was merged and I had to refactor the SignUp merge request accordingly. The natural flow is for SignUp to be the default screen of the UI and, after a successful signup operation, for the Dashboard to be displayed. To achieve this flow, I used a screen manager, which handles different screens and the transitions between them with predefined animations. This is defined in the main module, and the entire flow can be seen in action below.
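Assuming the UI is built with Kivy, whose ScreenManager provides exactly this kind of screen switching with predefined transition animations, the flow could be sketched roughly as below; the class names and the stand-in button are placeholders, not the project's actual modules.

# main.py -- a rough sketch of switching from SignUp to Dashboard
from kivy.app import App
from kivy.uix.button import Button
from kivy.uix.screenmanager import Screen, ScreenManager, SlideTransition


class SignUpScreen(Screen):
    def __init__(self, **kwargs):
        super().__init__(**kwargs)
        # A real screen would hold the signup form; a button stands in here.
        button = Button(text='Sign Up')
        button.bind(on_release=self.sign_up)
        self.add_widget(button)

    def sign_up(self, *args):
        # After a successful signup, slide over to the dashboard.
        self.manager.transition = SlideTransition(direction='left')
        self.manager.current = 'dashboard'


class DashboardScreen(Screen):
    pass


class WizardApp(App):
    def build(self):
        manager = ScreenManager()
        manager.add_widget(SignUpScreen(name='signup'))   # default screen
        manager.add_widget(DashboardScreen(name='dashboard'))
        return manager


if __name__ == '__main__':
    WizardApp().run()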

Designing Tutorials and Tools menu

Once the user is on the Dashboard, they have the option of picking from the different modules and going through the tutorials and tools available in each. The idea is to display a difficulty tip as well, so it becomes easier for the user to begin. Below is what I've designed to incorporate this.

New Contributor Wizard - Tutorials and Tools Menu

Implementing Tutorials and Tools menu

Now comes the fun part: thinking about the architecture of the modules just designed so that they can take shape as code in the application. The idea here is to define them in a JSON file to be picked up by the respective module afterwards. This way it will be easier to add new tutorials and tools, and hence we have this resultant JSON. The development of this feature can be followed on this merge request.
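Since the actual schema lives in the merge request, the snippet below is only a guess at what such a menu definition could look like; the keys, values and file layout are assumptions rather than the project's real format.

# menu.py -- a guessed structure for a module's tutorials and tools menu
import json

ENCRYPTION_MENU = {
    "tutorials": [
        {"title": "Encryption 101", "difficulty": "beginner"},
        {"title": "Key Management", "difficulty": "intermediate"},
    ],
    "tools": [
        {"title": "Create GPG Key", "difficulty": "beginner"},
    ],
}


def load_menu(path):
    """Read a module's menu definition from a JSON file."""
    with open(path) as menu_file:
        return json.load(menu_file)


if __name__ == '__main__':
    # Dumping the dict as JSON shows the file format a module would ship.
    print(json.dumps(ENCRYPTION_MENU, indent=2))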

Now remains the quest to design and implement a structure for tutorials that is generalized enough to be populated from a JSON file. This will give flexibility to tutorial authors, and a UI module could even be implemented to modify this JSON and add new tutorials without writing any code. Sounds amazing, right? We'll see how it works out soon. If you have any suggestions, make sure to comment below, on the merge request, or reach out to me.

The Conclusion

Since SignUp has also been merged, I'll have to refactor SignIn now to integrate everything into one happy application and complete the natural flow of things. Also, the design and development of tools/tutorials is underway, and by the time the next blog post is out you might be able to test the application with at least one tool or tutorial from one of the modules on the dashboard.

Krebs on SecurityAT&T, Sprint, Verizon to Stop Sharing Customer Location Data With Third Parties

In the wake of a scandal involving third-party companies leaking or selling precise, real-time location data on virtually all Americans who own a mobile phone, AT&T, Sprint and Verizon now say they are terminating location data sharing agreements with third parties.

At issue are companies known in the wireless industry as “location aggregators,” entities that manage requests for real-time customer location data for a variety of purposes, such as roadside assistance and emergency response. These aggregators are supposed to obtain customer consent before divulging such information, but several recent incidents show that this third-party trust model is fundamentally broken.

On May 10, 2018, The New York Times broke the story that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks.

Then it emerged that Securus had been hacked, its database of hundreds of law enforcement officer usernames and passwords plundered. We also learned that Securus’ data was ultimately obtained from a company called 3Cinteractive, which in turn obtained its data through a California-based location tracking firm called LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

LocationSmart disabled its demo page shortly after that story. By that time, Sen. Ron Wyden (D-Ore.) had already sent letters to AT&T, Sprint, T-Mobile and Verizon, asking them to detail any agreements to share real-time customer location data with third-party data aggregation firms.

AT&T, T-Mobile and Verizon all said they had terminated data-sharing agreements with Securus. In a written response (PDF) to Sen. Wyden, Sprint declined to share any information about third-parties with which it may share customer location data, and it was the only one of the four carriers that didn’t say it was terminating any data-sharing agreements.

T-Mobile and Verizon each said they both share real-time customer data with two companies — LocationSmart and another firm called Zumigo, noting that these companies in turn provide services to a total of approximately 75 other customers.

Verizon emphasized that Zumigo — unlike LocationSmart — has never offered any kind of mobile location information demo service via its site. Nevertheless, Verizon said it had decided to terminate its current location aggregation arrangements with both LocationSmart and Zumigo.

“Verizon has notified these location aggregators that it intends to terminate their ability to access and use our customers’ location data as soon as possible,” wrote Karen Zacharia, Verizon’s chief privacy officer. “We recognize that location information can provide many pro-consumer benefits. But our review of our location aggregator program has led to a number of internal questions about how best to protect our customers’ data. We will not enter into new location aggregation arrangements unless and until we are comfortable that we can adequately protect our customers’ location data through technological advancements and/or other practices.”

In its response (PDF), AT&T made no mention of any other company besides Securus. AT&T indicated it had no intention to stop sharing real-time location data with third-parties, stating that “without an aggregator, there would be no practical and efficient method to facilitate requests across different carriers.”

Sen. Wyden issued a statement today calling on all wireless companies to follow Verizon’s lead.

“Verizon deserves credit for taking quick action to protect its customers’ privacy and security,” Wyden said. “After my investigation and follow-up reports revealed that middlemen are selling Americans’ location to the highest bidder without their consent, or making it available on insecure web portals, Verizon did the responsible thing and promptly announced it was cutting these companies off. In contrast, AT&T, T-Mobile, and Sprint seem content to continuing to sell their customers’ private information to these shady middle men, Americans’ privacy be damned.”

Update, 5:20 p.m. ET: Shortly after Verizon’s letter became public, AT&T and Sprint have now said they, too, will start terminating agreements to share customer location data with third parties.

“Based on our current internal review, Sprint is beginning the process of terminating its current contracts with data aggregators to whom we provide location data,” the company said in an emailed statement. “This will take some time in order to unwind services to consumers, such as roadside assistance and fraud prevention services. Sprint previously suspended all data sharing with LocationSmart on May 25, 2018. We are taking this further step to ensure that any instances of unauthorized location data sharing for purposes not approved by Sprint can be identified and prevented if location data is shared inappropriately by a participating company.”

AT&T today also issued a statement: “Our top priority is to protect our customers’ information, and, to that end, we will be ending our work with aggregators for these services as soon as practical in a way that preserves important, potential lifesaving services like emergency roadside assistance.”

KrebsOnSecurity asked T-Mobile if the company planned to follow suit, and was referred to a tweet today from T-Mobile CEO John Legere, who wrote: “I’ve personally evaluated this issue & have pledged that T-Mobile will not sell customer location data to shady middlemen.” In a follow-up statement shared by T-Mobile, the company said, “We ended all transmission of customer data to Securus and we are terminating our location aggregator agreements.”

Wyden’s letter asked the carriers to detail any arrangements they may have to validate that location aggregators are in fact gaining customer consent before divulging the information. Both Sprint and T-Mobile said location aggregators were contractually obligated to obtain customer consent before sharing the data, but they provided few details about any programs in place to review claims and evidence that an aggregator has obtained consent.

AT&T and Verizon each said they have processes for periodically auditing consent practices by the location aggregators, but that Securus’ unauthorized use of the data somehow flew under the radar.

AT&T noted that it began its relationship with LocationSmart in October 2012 (back when it was known by another name, “Locaid”).  Under that agreement, LocationSmart’s customer 3Cinteractive would share location information with prison officials through prison telecommunications provider Securus, which operates a prison inmate calling service.

But AT&T said after Locaid was granted that access, Securus began abusing it to sell an unauthorized “on-demand service” that allowed police departments to learn the real-time location data of any customer of the four major providers.

“We now understand that, despite AT&T’s requirements to obtain customer consent, Securus did not in fact obtain customer consent before collecting customers’ location information for its on-demand service,” wrote Timothy P. McKone, executive vice president of federal relations at AT&T. “Instead, Securus evidently relied upon law enforcement’s representation that it had appropriate legal authority to obtain customer location data, such as a warrant, court order, or other authorizing document as a proxy for customer consent.”

McKone’s letter downplays the severity of the Securus incident, saying that the on-demand location requests “comprised a tiny fraction — less than two tenths of one percent — of the total requests Securus submitted for the approved inmate calling service. AT&T has no reason to believe that there are other instances of unauthorized access to AT&T customer location data.”

Blake Reid, an associate clinical professor at the University of Colorado School of Law, said the entire mobile location-sharing debacle shows the futility of transitive trust.

“The carriers basically have arrangements with these location aggregators that contractually say, ‘You agree not to use this access we provide you without getting customer consent’,” Reid said. “Then that aggregator has a relationship with another aggregator, and so on. So what we then have is this long chain of trust where no one has ever consented to the provision of the location information, and yet it ends up getting disclosed anyhow.”

Curious how we got here and what Congress or federal regulators might do about the current situation? Check out last month’s story, Why Is Your Location Data No Longer Private.

Update, 5:20 p.m. ET: Updated headline and story to reflect statements from AT&T and Sprint that they are winding down customer location data-sharing agreements with third party companies.

Update, June 20, 2:23 p.m. ET: Added clarification from T-Mobile.

Planet DebianBenjamin Mako Hill: How markets coopted free software’s most powerful weapon (LibrePlanet 2018 Keynote)

Several months ago, I gave the closing keynote address at LibrePlanet 2018. The talk was about the thing that scares me most about the future of free culture, free software, and peer production.

A video of the talk is online on Youtube and available as WebM video file (both links should skip the first 3m 19s of thanks and introductions).

Here’s a summary of the talk:

App stores and the so-called “sharing economy” are two examples of business models that rely on techniques for the mass aggregation of distributed participation over the Internet and that simply didn’t exist a decade ago. In my talk, I argue that the firms pioneering these new models have learned and adapted processes from commons-based peer production projects like free software, Wikipedia, and CouchSurfing.

The result is an important shift: A decade ago,  the kind of mass collaboration that made Wikipedia, GNU/Linux, or Couchsurfing possible was the exclusive domain of people producing freely and openly in commons. Not only is this no longer true, new proprietary, firm-controlled, and money-based models are increasingly replacing, displacing, outcompeting, and potentially reducing what’s available in the commons. For example, the number of people joining Couchsurfing to host others seems to have been in decline since Airbnb began its own meteoric growth.

In the talk, I discuss how this happened and what I think it means for folks that are committed to working in commons. I also talk a little bit about what free culture and free software should do now that mass collaboration, these communities’ most powerful weapon, is being used against them.

I’m very much interested in feedback provided any way you want to reach me including in person, over email, in comments on my blog, on Mastodon, on Twitter, etc.


Work on the research that is reflected and described in this talk was supported by the National Science Foundation (awards IIS-1617129 and IIS-1617468). Some of the initial ideas behind this talk were developed while working on this paper (official link) which was led by Maximilian Klein and contributed to by Jinhao Zhao, Jiajun Ni, Isaac Johnson, and Haiyi Zhu.

Sociological Images“Uncomfortable with Cages”: When Framing Fails

By now, you’ve probably heard about the family separation and detention policies at the U.S. border. The facts are horrifying.

Recent media coverage has led to a flurry of outrage and debate about the origins of this policy. It is a lot to take in, but this case also got me thinking about an important lesson from sociology for following politics in 2018: we’re not powerless in the face of “fake news.”

Photo Credit: Fibonacci Blue, Flickr CC

Political sociologists talk a lot about framing—the way movements and leaders select different interpretations of an issue to define and promote their position. Frames are powerful interpretive tools, and sociologists have shown how framing matters for everything from welfare reform and nuclear power advocacy to pro-life and labor movements.

One of the big assumptions in framing theory is that leaders coordinate. There might be competition to establish a message at first, but actors on the same side have to get together fairly quickly to present a clean, easy to understand “package” of ideas to people in order to make political change.

The trick is that it is easy to get cynical about framing, to think that only powerful people get to define the terms of debate. We assume that a slick, well-funded media campaign will win out, and any counter-frames will get pushed to the side. But the recent uproar over border separation policies shows how framing can be a very messy process. Over just a few days, these are a few of the frames coming from administration officials and border authorities:

We don’t know how this issue is going to turn out, but many of these frames have been met with skepticism, more outrage, and plenty of counter-evidence. Calling out these frames alone is not enough; it will take mobilization, activism, lobbying, and legislation to change these policies. Nevertheless, this is an important reminder that framing is a social process, and, especially in an age of social media, it is easier than ever to disrupt a political narrative before it has the chance to get organized.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianSean Whitton: I'm going to DebCamp18, Hsinchu, Taiwan

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items.

DebCamp work

Throughout DebCamp and DebConf

  • Debian Policy: sticky bugs; process; participation; translations

  • Helping people use dgit and git-debrebase

    • Writing up or following up on feature requests and bugs

    • Design work with Ian and others

Worse Than FailureCodeSOD: A Unique Specification

One of the skills I think programmers should develop is not directly programming related: you should be comfortable reading RFCs. If, for example, you want to know what actually constitutes an email address, you may want to brush up on your BNF grammars. Reading and understanding an RFC is its own skill, and while I wouldn’t suggest getting in the habit of reading RFCs for fun, it’s something you should do from time to time.

To build the skill, I recommend picking a simple one, like UUIDs. There’s a lot of information encoded in a UUID, and five different ways to define UUIDs, though usually we use type 1 (timestamp-based) and type 4 (random). Even if you haven’t gone through and read the spec, you already know the most important fact about UUIDs: they’re unique. They’re universally unique in fact, and you can use them as identifiers. You shouldn’t have a collision happen within the lifetime of the universe, unless someone does something incredibly wrong.

Dexen encountered a database full of collisions on UUIDs. Duplicates were scattered all over the place. Since we’re not well past the heat-death of the universe, the obvious answer is that someone did something entirely wrong.

use Ramsey\Uuid\Uuid;
 
$model->uuid = Uuid::uuid5(Uuid::NAMESPACE_DNS, sprintf('%s.%s.%s.%s', 
    rand(0, time()), time(), 
    static::class, config('modelutils.namespace')))->toString();

This block of PHP code uses the type–5 UUID, which allows you to generate the UUID based on a name. Given a namespace, usually a domain name, it runs it through SHA–1 to generate the required bytes, allowing you to create specific UUIDs as needed. In this case, Dexen’s predecessor was generating a “domain name”-ish string by combining: a random number from 0 to seconds after the epoch, the number of seconds after the epoch, the name of the class, and a config key. So this developer wasn’t creating UUIDs with a specific, predictable input (the point of UUID–5), but was mixing a little from the UUID–1 time-based generation, and the UUID–4 random-based generation, but without the cryptographically secure source of randomness.
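For contrast, here is how the two relevant UUID versions behave, illustrated with Python's standard uuid module rather than the PHP library used above.

import uuid

# Version 5: name-based and deterministic -- the same namespace and name
# always produce the same UUID, which is the whole point.
a = uuid.uuid5(uuid.NAMESPACE_DNS, 'example.org')
b = uuid.uuid5(uuid.NAMESPACE_DNS, 'example.org')
assert a == b

# Version 4: random -- two calls should essentially never collide.
c = uuid.uuid4()
d = uuid.uuid4()
assert c != d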

Thus, collisions. Since these UUIDs didn’t need to be sortable (no need for UUID–1), Dexen changed the generation to UUID–4.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, May 2018

A Debian LTS logoLike each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In May, about 202 work hours were dispatched among 12 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased to 190 hours per month thanks to a few new sponsors who joined to benefit from Wheezy’s Extended LTS support.

We are currently in a transition phase. Wheezy is no longer supported by the LTS team and the LTS team will soon take over security support of Debian 8 Jessie from Debian’s regular security team.

Thanks to our sponsors

New sponsors are in bold.


Planet DebianErich Schubert: Predatory publishers: SciencePG

I got spammed again by SciencePG (“Science Publishing Group”).

One of many (usually Chinese or Indian) fake publishers that will publish anything as long as you pay their fees. Unfortunately, once you have published a few papers, you inevitably land on their spam lists: they scrape the websites of good journals for email addresses, and you do want your contact email address on your papers.

However, this one is particularly hilarious: They have a spelling error right at the top of their home page!

SciencePG spelling

Fail.

Speaking of fake publishers. Here is another fun example:

Kim Kardashian, Satoshi Nakamoto, Tomas Pluskal
Wanion: Refinement of RPCs.
Drug Des Int Prop Int J 1(3)- 2018. DDIPIJ.MS.ID.000112.

Yes, that is a paper in the “Drug Designing & Intellectual Properties” International (Fake) Journal. And the content is a typical SciGen-generated paper that throws around random computer buzzwords and makes absolutely no sense. Not even the abstract. The references are also just made up. And so are the first two authors, VIP Kim Kardashian and missing Bitcoin inventor Satoshi Nakamoto…

In the PDF version, the first headline is “Introductiom”, with “m”…

So Lupine Publishers is another predatory publisher, that does not peer review, nor check if the article is on topic for the journal.

Via Retraction Watch

Conclusion: just because it was published somewhere does not mean this is real, or correct, or peer reviewed…

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #164

Here’s what happened in the Reproducible Builds effort between Sunday June 10 and Saturday June 16 2018:

diffoscope development

diffoscope is our in-depth “diff-on-steroids” utility which helps us diagnose reproducibility issues in packages. This week, version 96 was uploaded to Debian unstable by Chris Lamb. It includes contributions already covered by posts in previous weeks as well as new ones from:

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

Packages reviewed and fixed, and bugs filed

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Don Martiblood donation: no good deed goes unpunished

I have been infected with the Ebola virus.

I have had sex with another man in the past year.

I am taking Coumadin®.

Actually, none of those three statements is true. And Facebook knows it.

The American Red Cross has given Facebook this highly personal information about me, by adding my contact info to an "American Red Cross Blood Donors" Facebook Custom Audience. If any of that stuff were true, I wouldn't have been allowed to give blood.

When I heard back from the American Red Cross about this personal data problem, they told me that they don't share my health information with Facebook.

That's not how it works. I'm listed in the Custom Audience as a blood donor. Anyway, too late. Facebook has the info now.

So, which of its promises about how it uses people's personal information is Facebook going to break next?

And is some creepy tech bro right now making a killer pitch to Paul Graham about a business plan to "disrupt" the health insurance market using blood donor information?

I should not have to care about this, and I don't have time to. I don't even have time to attempt a funny remark about the whole Facebook board member Peter Thiel craving blood thing.

Planet DebianArthur Del Esposte: GSoC Status Update - First Month

In the past month I have been working on my GSoC project in Debian’s Distro Tracker. This project aims to design and implement new features in Distro Tracker to better support Debian teams in tracking the health of their packages and prioritizing their work. In this post, I will describe the current status of my contributions, highlight the main challenges, and point out the next steps.

Work Management and Communication

I communicate with Lucas Kanashiro (my mentor) constantly via IRC and in person at least once a week, as we live in the same city. We have a weekly meeting with Raphael Hertzog on the #debian-qa IRC channel to report progress, collect feedback, resolve technical doubts, and plan the next steps.

I created a new repository on Salsa to save the logs of our IRC meetings and to track my tasks through the repository’s issue tracker. Besides that, once a month I’ll post a new status update on my blog, such as this one, with more details regarding my contributions.

Advances

When GSoC officially started, Distro Tracker already had some team-related features. Briefly, a team is an entity composed of one or more users that are interested in the same set of packages. Teams are created manually by users and anyone may join public teams. The team page aggregates some basic information about the team and the list of packages of interest.

Distro Tracker offers a page to enable users to browse public teams, which shows a paginated, sorted list of names. It used to be hard to find a team in this list, since Distro Tracker has more than 110 teams distributed over 6 pages. So I added a search field with auto-complete at the top of the teams page to let users find a team’s page faster, as shown in the following figure:

Search Field for Teams Page

Also, I have been working on improving the current teams infrastructure to enable Debian’s teams to better track the health of their packages. Initially, we decided to use the current data available in Distro Tracker to create the first version of a new team’s page based on PET.

Presenting a team’s package data in a table on the team’s page would be a relatively trivial task. However, Distro Tracker’s architecture aims to provide a generic core which can be extended through distro-specific applications, such as Kali Linux’s. The core source code provides generic infrastructure to import data related to deb packages and to present it in HTML pages. Therefore, we had to take this requirement into account and provide an extensible infrastructure for showing package data through tables, so that it would be easy to add new table fields and to change the default behavior of the columns provided by the core source code.

So, based on the previously existing panels feature and on Hertzog’s suggestions, I designed and developed a framework to create customizable package tables for teams. This framework is composed of two main classes:

  • BaseTableField - A base class representing fields to be displayed on package tables. Among other things, it must define the column name and a template to render the cell content for a package.
  • BasePackageTable - A base class representing the package tables displayed on a team page. It may have several BaseTableFields to display package information. Different tables may show different lists of packages based on their scope.

We have been discussing my implementation in an open Merge Request, although we are very close to the version that should be incorporated. The following figures show the comparison between the earlier PET’s table and our current implementation.

PET Packages Table
Current Teams Page (Distro Tracker Packages Table)

Currently, the team’s page has only one table, which displays all packages related to that team. We already present a set of data very similar to PET’s table. More specifically, the following columns are shown:

  • Package - displays the package name in the cell. It is implemented by the core’s GeneralInformationTableField class.
  • VCS - by default, it displays the type of the package’s repository (i.e. Git, SVN) or Unknown. It is implemented by the core’s VcsTableField class. However, the Debian app extends this behavior by adding the changelog version of the latest repository tag and displaying issues identified by Debian’s VCS Watch.
  • Archive - displays the package version in the distro archive. It is implemented by the core’s ArchiveTableField class.
  • Bugs - displays the total number of bugs of a package. It is implemented by the core’s BugsTableField class. Ideally, each third-party app should extend this table field to add links to its own bug tracking system.
  • Upstream - displays the latest upstream version available. This is a table field implemented by the Debian app, since this data is imported through Debian-specific tasks; it is therefore not available for other distros. (A rough sketch of how a vendor-specific field could be added follows this list.)
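To make that extension point concrete, here is a rough sketch of what a vendor-specific table field could look like; the module path, attribute names and the popcon example are my assumptions based on the description above, not the exact Distro Tracker API.

# A rough sketch only -- module path and attribute names are assumptions.
from django.utils.functional import cached_property

from distro_tracker.core.package_tables import BaseTableField


class PopconTableField(BaseTableField):
    """Hypothetical extra column showing popularity-contest data."""
    column_name = 'Popcon'
    template_name = 'myvendor/package-table-fields/popcon.html'

    @cached_property
    def context(self):
        # Whatever the template needs to render this package's cell.
        return {'rank': getattr(self.package, 'popcon_rank', None)}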

Since the table’s cells are too small to present detailed information, we added Popper.js, a JavaScript library for displaying popovers. Some columns show a popover with more details about their content, displayed on mouse hover. The following figure shows the popover for the Package column:

Package's Popover

In addition to designing the table framework, the main challenge was to avoid the N+1 query problem, which introduces performance issues: for a set of N packages displayed in a table, each field would otherwise perform one or more lookups for additional data per package. To solve this, each subclass of BaseTableField must define a set of Django Prefetch objects so that BasePackageTable objects can load all required data in batch through prefetch_related, as listed below.

class BasePackageTable(metaclass=PluginRegistry):
    @property
    def packages_with_prefetch_related(self):
        """
        Returns the list of packages with prefetched relationships defined by
        table fields
        """
        package_query_set = self.packages
        for field in self.table_fields:
            for l in field.prefetch_related_lookups:
                package_query_set = package_query_set.prefetch_related(l)

        additional_data, implemented = vendor.call(
            'additional_prefetch_related_lookups'
        )
        if implemented and additional_data:
            for l in additional_data:
                package_query_set = package_query_set.prefetch_related(l)
        return package_query_set

    @property
    def packages(self):
        """
        Returns the list of packages shown in the table. One may define this
        based on the scope
        """
        return PackageName.objects.all().order_by('name')


class ArchiveTableField(BaseTableField):
    prefetch_related_lookups = [
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='general'),
            to_attr='general_archive_data'
        ),
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='versions'),
            to_attr='versions'
        )
    ]

    @cached_property
    def context(self):
        try:
            info = self.package.general_archive_data[0]
        except IndexError:
            # There is no general info for the package
            return

        general = info.value

        try:
            info = self.package.versions[0].value
            general['default_pool_url'] = info['default_pool_url']
        except IndexError:
            # There is no versions info for the package
            general['default_pool_url'] = '#'

        return general

Finally, it is worth noting that we also improved the team’s management page by moving all team management features to a single page and improving its visual structure:

Teams Management

Next Steps

Now, we are moving towards adding other tables with different scopes, such as the tables presented by PET:

PET tables

To this end, we will introduce the Tag model class to categorize the packages based on their characteristics. Thus, we will create an additional task responsible for tagging packages based on their available data. The relationship between packages and tags should be ManyToMany. In the end, we want to perform a simple query to define the scope of a new table, such as the following example to query all packages with Release Critical (RC) bugs:

class RCPackageTable(BasePackageTable):
    @property
    def packages(self):
        tag = Tag.objects.get(name='rc-bugs')
        return tag.packages.all()
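For the Tag model itself, a minimal Django sketch of the ManyToMany relationship could look like the following; the field names and the import path are my assumptions rather than settled design.

# A minimal sketch of the planned Tag model -- field names and the import
# path are assumptions, not settled design.
from django.db import models

from distro_tracker.core.models import PackageName  # assumed import path


class Tag(models.Model):
    name = models.CharField(max_length=100, unique=True)
    packages = models.ManyToManyField(PackageName, related_name='tags')

    def __str__(self):
        return self.name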

We will probably need to work on Debian’s VCSWatch to enable it to receive updates through Salsa’s webhooks, especially for real-time monitoring of repositories.


Let’s get moving on! \m/

Planet DebianGunnar Wolf: Demoting multi-factor authentication

I started teaching at Facultad de Ingeniería, UNAM in January 2013. Back then, I was somewhat surprised (for good!) that the university required me to create a digital certificate for registering student grades at the end of the semester. The setup had some not-so-minor flaws (i.e. the private key was not generated at my computer but centrally, so there could be copies of it outside my control — Not only could, but I noted for a fact a copy was kept at the relevant office at my faculty, arguably to be able to timely help poor teachers if they lost their credentials or patience), but was decent...
Authentication was done via a Java applet, as there needs to be a verifiably(?)-secure way to ensure the certificate was properly checked at the client without transferring it over the network. Good thing!
But... Java applets fell out of favor. I don't think I have ever been able to register my grading from a Linux desktop (of course, I don't have a typical Linux desktop, so luck might smile on other people). But last semester and this semester I struggled even to get the grades registered from Windows — it seems that every browser has deprecated the extensions for the Java runtime, and applets are no longer a thing. I mean, I could get the Oracle site to congratulate me for having Java 8 installed, but it just would not run the university's applet!
So, after losing the better part of an already-busy evening... I got a mail. It says (partial translation mine):

Subject: Problems to electronically sign at UNAM

We are from the Advance Electronic Signature at UNAM. We are sending you this mail as we have detected you have problems to sign the grades, probably due to the usage of Java.

Currently, we have a new Electronic Signature system that does not use Java, we can migrate you to this system.
(...)

The certificate will thus be stored in the cloud, we will deposit it at signing time, you just have to enter the password you will have assigned.
(...)

Of course, I answered asking which kind of "cloud" was it, as we all know that the cloud does not exist, it's just other people's computers... And they decided to skip this question.

You can go see what is required for this implementation at https://www.fea.unam.mx/, “Prueba de la firma” (Test your signature): It asks me for my CURP (publicly known number that identifies every Mexican resident). Then, it asks me for a password. And that's it. Yay :-Þ

Anyway I accepted, as losing so much time to grade is just too much. And... Yes, many people will be happy. Partly, I'm relieved by this (I have managed to hate Java for over 20 years). I am just saddened by the fact we have lost an almost-decent-enough electronic signature implementation and fallen back to just a user-password scheme. There are many ways to do crypto verification on the client side nowadays; I know JavaScript is sandboxed and cannot escape to touch my filesystem, but... It is amazing we are losing this simple and proven use case.

And it's amazing they are pulling it off as if it were a good thing.

,

Rondam RamblingsDamn straight there's a moral equivalence here

Germany, 1945: The United States of America, 2018: It's true, the kid in the second picture is not being sent to the gas chambers (yet).  But here's the thing: she doesn't know that!  This kid is two years old.  All she knows is that her mother is being taken away, and she may or may not ever see her again. The government of the United States of America has run completely off the

Planet DebianBenjamin Mako Hill: Honey Buckets

When I was growing up in Washington state, a company called Honey Bucket held a dominant position in the local portable toilet market. Their toilets are still a common sight in the American West.

Honey Bucket brand portable toilet. Photo by donielle. (CC BY-SA)

They were so widespread when I was a child that I didn’t know that “Honey Bucket” was the name of a company at all until I moved to Massachusetts for college. I thought “honey bucket” was just the generic term for toilets that could be moved from place-to-place!

So for the first five years that I lived in Massachusetts, I continued to call all portable toilets “honey buckets.”

Until somebody asked me why I called them that—five years after moving!—all my friends in Massachusetts thought that “honey bucket” was just a personal, idiosyncratic, and somewhat gross, euphemism.

Krebs on SecurityGoogle to Fix Location Data Leak in Google Home, Chromecast

Google in the coming weeks is expected to fix a location privacy leak in two of its most popular consumer products. New research shows that Web sites can run a simple script in the background that collects precise location data on people who have a Google Home or Chromecast device installed anywhere on their local network.

Craig Young, a researcher with security firm Tripwire, said he discovered an authentication weakness that leaks incredibly accurate location information about users of both the smart speaker and home assistant Google Home, and Chromecast, a small electronic device that makes it simple to stream TV shows, movies and games to a digital television or monitor.

Young said the attack works by asking the Google device for a list of nearby wireless networks and then sending that list to Google’s geolocation lookup services.

“An attacker can be completely remote as long as they can get the victim to open a link while connected to the same Wi-Fi or wired network as a Google Chromecast or Home device,” Young told KrebsOnSecurity. “The only real limitation is that the link needs to remain open for about a minute before the attacker has a location. The attack content could be contained within malicious advertisements or even a tweet.”

It is common for Web sites to keep a record of the numeric Internet Protocol (IP) address of all visitors, and those addresses can be used in combination with online geolocation tools to glean information about each visitor’s hometown or region. But this type of location information is often quite imprecise. In many cases, IP geolocation offers only a general idea of where the IP address may be based geographically.

This is typically not the case with Google’s geolocation data, which includes comprehensive maps of wireless network names around the world, linking each individual Wi-Fi network to a corresponding physical location. Armed with this data, Google can very often determine a user’s location to within a few feet (particularly in densely populated areas), by triangulating the user between several nearby mapped Wi-Fi access points. [Side note: Anyone who’d like to see this in action need only to turn off location data and remove the SIM card from a smart phone and see how well navigation apps like Google’s Waze can still figure out where you are].

“The difference between this and a basic IP geolocation is the level of precision,” Young said. “For example, if I geolocate my IP address right now, I get a location that is roughly 2 miles from my current location at work. For my home Internet connection, the IP geolocation is only accurate to about 3 miles. With my attack demo however, I’ve been consistently getting locations within about 10 meters of the device.”

Young said a demo he created (a video of which is below) is accurate enough that he can tell roughly how far apart his device in the kitchen is from another device in the basement.

“I’ve only tested this in three environments so far, but in each case the location corresponds to the right street address,” Young said. “The Wi-Fi based geolocation works by triangulating a position based on signal strengths to Wi-Fi access points with known locations based on reporting from people’s phones.”

Beyond leaking a Chromecast or Google Home user’s precise geographic location, this bug could help scammers make phishing and extortion attacks appear more realistic. Common scams like fake FBI or IRS warnings or threats to release compromising photos or expose some secret to friends and family could abuse Google’s location data to lend credibility to the fake warnings, Young notes.

“The implications of this are quite broad including the possibility for more effective blackmail or extortion campaigns,” he said. “Threats to release compromising photos or expose some secret to friends and family could use this to lend credibility to the warnings and increase their odds of success.”

When Young first reached out to Google in May about his findings, the company replied by closing his bug report with a “Status: Won’t Fix (Intended Behavior)” message. But after being contacted by KrebsOnSecurity, Google changed its tune, saying it planned to ship an update to address the privacy leak in both devices. Currently, that update is slated to be released in mid-July 2018.

According to Tripwire, the location data leak stems from poor authentication by Google Home and Chromecast devices, which rarely require authentication for connections received on a local network.

“We must assume that any data accessible on the local network without credentials is also accessible to hostile adversaries,” Young wrote in a blog post about his findings. “This means that all requests must be authenticated and all unauthenticated responses should be as generic as possible. Until we reach that point, consumers should separate their devices as best as is possible and be mindful of what web sites or apps are loaded while on the same network as their connected gadgets.”

Earlier this year, KrebsOnSecurity posted some basic rules for securing your various “Internet of Things” (IoT) devices. That primer lacked one piece of advice that is a bit more technical but which can help mitigate security or privacy issues that come with using IoT systems: Creating your own “Intranet of Things,” by segregating IoT devices from the rest of your local network so that they reside on a completely different network from the devices you use to browse the Internet and store files.

“A much easier solution is to add another router on the network specifically for connected devices,” Young wrote. “By connecting the WAN port of the new router to an open LAN port on the existing router, attacker code running on the main network will not have a path to abuse those connected devices. Although this does not by default prevent attacks from the IoT devices to the main network, it is likely that most naïve attacks would fail to even recognize that there is another network to attack.”

For more on setting up a multi-router solution to mitigating threats from IoT devices, check out this in-depth post on the subject from security researcher and blogger Steve Gibson.

Update, June 19, 6:24 p.m. ET: The authentication problems that Tripwire found are hardly unique to Google’s products, according to extensive research released today by artist and programmer Brannon Dorsey. Check out Wired.com‘s story on Dorsey’s research here.

Planet DebianRussell Coker: Cooperative Learning

This post is about my latest idea for learning about computers. I posted it to my local LUG mailing list and received no responses. But I still think it’s a great idea and that I just need to find the right way to launch it.

I think it would be good to try cooperative learning about Computer Science online. The idea is that everyone would join an IRC channel at a suitable time with virtual machine software configured and try out new FOSS software at the same time and exchange ideas about it via IRC. It would be fairly informal and people could come and go as they wish, the session would probably go for about 4 hours but if people want to go on longer then no-one would stop them.

I’ve got some under-utilised KVM servers that I could use to provide test VMs for network software; my original idea was to use those for members of my local LUG. But that doesn’t scale well. If a larger group of people are to be involved, they would have to run their own virtual machines, use physical hardware, or use trial accounts from VM companies.

The general idea would be for two broad categories of sessions, ones where an expert provides a training session (assigning tasks to students and providing suggestions when they get stuck) and ones where the coordinator has no particular expertise and everyone just learns together (like “let’s all download a random BSD Unix and see how it compares to Linux”).

As this would be IRC based, there would be no impediment to people from other regions being involved, apart from the fact that it might start at 1AM their time (i.e. 6PM on the east coast of Australia is 1AM on the west coast of the US). For most people the best times for such education would be evenings on week nights, which greatly limits the geographic spread.

While the aims of this would mostly be things that relate to Linux, I would be happy to coordinate a session on ReactOS as well. I’m thinking of running training sessions on etbemon, DNS, Postfix, BTRFS, ZFS, and SE Linux.

I’m thinking of coordinating learning sessions about DragonflyBSD (particularly HAMMER2), ReactOS, Haiku, and Ceph. If people are interested in DragonflyBSD then we should do that one first as in a week or so I’ll probably have learned what I want to learn and moved on (but not become enough of an expert to run a training session).

One of the benefits of this idea is to help in motivation. If you are on your own playing with something new like a different Unix OS in a VM you will be tempted to take a break and watch YouTube or something when you get stuck. If there are a dozen other people also working on it then you will have help in solving problems and an incentive to keep at it while help is available.

So the issues to be discussed are:

  1. What communication method to use? IRC? What server?
  2. What time/date for the first session?
  3. What topic for the first session? DragonflyBSD?
  4. How do we announce recurring meetings? A mailing list?
  5. What else should we setup to facilitate training? A wiki for notes?

Finally, while I list things I’m interested in learning and teaching, this isn’t just about me. If this becomes successful then I expect that there will be some topics that don’t interest me and some sessions at times when I have other things to do (like work). I’m sure people can have fun without me. If anyone has already established something like this then I’d be happy to join that instead of starting my own; my aim is not to run another hobbyist/professional group but to learn things and teach things.

There is a Wikipedia page about Cooperative Learning. While that’s interesting I don’t think it has much relevance on what I’m trying to do. The Wikipedia article has some good information on the benefits of cooperative education and situations where it doesn’t work well. My idea is to have a self-selecting people who choose it because of their own personal goals in terms of fun and learning. So it doesn’t have to work for everyone, just for enough people to have a good group.

CryptogramRidiculously Insecure Smart Lock

Tapplock sells an "unbreakable" Internet-connected lock that you can open with your fingerprint. It turns out that:

  1. The lock broadcasts its Bluetooth MAC address in the clear, and you can calculate the unlock key from it.

  2. Any Tapplock account can unlock every lock.

  3. You can open the lock with a screwdriver.

Regarding the third flaw, the manufacturer has responded that "...the lock is invincible to the people who do not have a screwdriver."

You can't make this stuff up.

EDITED TO ADD: The quote at the end is from a different smart lock manufacturer. Apologies for that.

Worse Than FailureCodeSOD: The Sanity Check

I've been automating deployments at work, and for Reasons™, this is happening entirely in BASH. Those Reasons™ are that the client wants to use Salt, but doesn't want to give us access to their Salt environment. Some of our deployment targets are microcontrollers, so Salt isn't even an option.

While I know the shell well enough, I'm getting comfortable with more complicated scripts than I usually write, along with tools like xargs which may be the second best shell command ever invented. yes is the best, obviously.

The key point is that the shell, coupled with the so-called "Unix Philosophy" is an incredibly powerful tool. Even if you already know that it's powerful, it's even more powerful than you think it is.

How powerful? Well, how about ripping apart the fundamental rules of mathematics? An anonymous submitter found this prelude at the start of every shell script in their organization.

#/usr/bin/env bash
declare -r ZERO=$(true; echo ${?})
declare -r DIGITZERO=0

function sanity_check() {
    function err_msg() {
        echo -e "\033[31m[ERR]:\033[0m ${@}"
    }

    if [ ${ZERO} -ne ${DIGITZERO} ]; then
        err_msg "The laws of physics doesn't apply to this server."
        err_msg "Real value ${ZERO} is not equal to ${DIGITZERO}."
        exit 1
    fi
}

sanity_check

true, like yes, is one of those absurdly simple tools: it's a program that completes successfully (returning a 0 exit status back to the shell). The ${?} expression contains the last exit status. Thus, the variable $ZERO will contain… 0. Which should then be equal to 0.

Now, maybe BASH isn't BASH anymore. Maybe true has been patched to fail. Maybe, maybe, maybe, but honestly, I'm wondering whose sanity is actually being checked in the sanity_check?

[Advertisement] Utilize BuildMaster to release your software with confidence, at the pace your business demands. Download today!

Planet Linux AustraliaJames Morris: Linux Security BoF at Open Source Summit Japan

This is a reminder for folks attending OSS Japan this week that I’ll be leading a Linux Security BoF session on Wednesday at 6pm.

If you’ve been working on a Linux security project, feel welcome to discuss it with the group.  We will have a whiteboard and projector.   This is also a good opportunity to raise topics for discussion, and to ask questions about Linux security.

See you then!

Planet DebianJohn Goerzen: Memories, Father’s Day, and an 89-year-old plane

“Oh! I have slipped the surly bonds of Earth
And danced the skies on laughter-silvered wings;
Sunward I’ve climbed, and joined the tumbling mirth
of sun-split clouds, — and done a hundred things”

– John Gillespie Magee, Jr.

I clicked on the radio transmitter in my plane.

O’Neill Traffic, Bonanza xx departing to the south. And Trimotor, thanks for flight #1. We really enjoyed it.

And we had. Off to my left, a 1929 Ford Trimotor airliner was heading off into the distance, looking as if it were just hanging in the air, glinting in the morning sun, 1000 feet above the ground. Earlier that morning, my boys and I had been passengers in that very plane. But now we had taken off right after them, as they were taking another load of passengers up for a flight and we were flying back home. To my right was my 8-year-old, and my 11-year-old was in back, both watching out the windows. The radio clicked on, and the three of us heard the other pilot’s response:

Oh thank you. We’re glad you came!

A few seconds later, they were gone out of sight.

The experience of flying in an 89-year-old airliner is quite something. As with the time we rode on the Durango & Silverton railroad, it felt like stepping back into a time machine — into the early heyday of aviation.

Jacob and Oliver had been excited about this day for a long time. We had tried to get a ride when it was on tour in Oklahoma, much closer, but one of them got sick on the drive that day and it didn’t work out. So Saturday morning, we took the 1.5-hour flight up to northern Nebraska. We’d heard they’d have a pancake breakfast fundraiser, and the boys were even more excited. They asked to set the alarm early, so we’d have no risk of missing out on airport pancakes.

Jacob took this photo of the sunrise at the airport while I was doing my preflight checks:

Sunrise at the airport

Here’s one of the beautiful views we got as we flew north to meet the Trimotor.

The Nebraska countryside from the air

It was quite something to share a ramp with that historic machine. Here’s a photo of our plane not far from the Trimotor.

Our plane parked near the Trimotor

After we got there, we checked in for the flight, had a great pancake and sausage breakfast, and then into the Trimotor. The engines fired up with a most satisfying low rumble, and soon we were aloft — cruising along at 1000 feet, in that (by modern standards) noisy, slow, and beautiful machine. We explored the Nebraska countryside from the air before returning 20 minutes later. I asked the boys what they thought.

“AWESOME!” was the reply. And I agreed.


Jacob and Oliver have long enjoyed pretending to be flight attendants when we fly somewhere. They want me to make airline-sounding announcements, so I’ll say something like, “This is your captain speaking. In a few moments, we’ll begin our descent into O’Neill. Flight attendants, prepare the cabin for arrival.” Then Jacob will say, “Please return your tray tables that you don’t have to their full upright and locked position, make sure your seat belt is tightly fastened, and your luggage is stowed. This is your last chance to visit the lavatory that we don’t have. We’ll be on the ground shortly.”

Awhile back, I loaded up some zip-lock bags with peanuts and found some particularly small bottles of pop. Since then, it’s become tradition on our longer flights for them to hand out bags of peanuts and small quantities of pop as we cruise along — “just like the airlines.” A little while back, I finally put a small fridge in the hangar so they get to choose a cold beverage right before we leave. (We don’t typically have such things around, so it’s a special treat.)

Last week, as I was thinking about Father’s Day, I told them how I remembered visiting my dad at work, and how he’d let me get a bottle of Squirt from the pop machine there (now somewhat rare). So when we were at the airport on Saturday, it brought me a smile to hear, “DAD! This pop machine has Squirt! Can we get a can? It’s only 75 cents!” “Sure – after our Trimotor flight.” “Great! Oh, thank you dad!”

I realized then I was passing a small but special memory on to another generation. I’ve written before of my childhood memories of my dad, and wondering what my children will remember of me. Martha isn’t old enough yet to remember her cackles of delight as we play peek-a-boo or the books we read at bedtime. Maybe Jacob and Oliver will remember our flights, or playing with mud, or researching dusty maps in a library, playing with radios, or any of the other things we do. Maybe all three of them will remember the cans of Squirt I’m about to stock that hangar fridge with.

But if they remember that I love them and enjoy doing things with them, they will have remembered the most important thing. And that is another special thing I got from my parents, and can pass on to another generation.

Valerie AuroraIn praise of the 30-hour work week

I’ve been working about 30 hours a week for the last two and a half years. I’m happier, healthier, and wealthier than when I was working 40, 50, or 60 hours a week as a full-time salaried software engineer (that means I was only paid for 40 hours a week). If you are a salaried professional in the U.S. who works 40 hours a week or more, there’s a pretty good chance you could also be working fewer hours, possibly even for more money. In this post, I’ll explain some of the myths and the realities that promote overwork. If you’re already convinced that you’d like to work fewer hours, you can skip straight to how you can start taking steps to work less.

A little about me: After college, I worked for about 8 years as a full-time salaried software engineer. Like many software engineers, I often worked 50 or 60 hour weeks while being paid for 40 hours a week. I hit the glass ceiling at age 29 and started working part-time hourly as a software consultant. I loved the hours but hated the instability and was about to lose my health insurance benefits (this was before the ACA passed). Then a colleague offered me a job at his storage startup, working 20 hours a week, salaried, with benefits. I thought, “You can do that???” and negotiated a 30 hour salaried job with benefits with my dream employer. I worked full-time again for about 5 years after that, and put in more 60 hour weeks while co-founding a non-profit. After shutting the non-profit down, I took 3 months off to recover. For the last two and a half years, I’ve worked for myself as a diversity and inclusion in tech consultant. I rarely work more than 30 hours a week and last year I made more money than any other year of my life.

Now, if I told my 25-year-old self this, she’d probably refuse to believe me. When I was 25, I believed my extra hours and hard work would be rewarded, that I’d be able to work 50 or 60 hours a week forever, and that I’d never enjoy anything as much as working. Needless to say, I no longer believe any of those things.

Myths about working overtime

Here are a few of the myths I used to believe about working overtime:

Myth: I can be productive for more than 8 hours a day on a sustained basis

How many hours a day can I productively write code? This will vary for everyone, but the number I hear most often is 4 hours a day 5 days a week, which is my max. I slowly learned that if I wrote code longer than that, my productivity steeply declined. After 8 hours, I was just adding bugs that I’d have to fix the next day. For the other 4 hours, I was better off dealing with email, writing papers, submitting expenses, reading books, or taking a walk (during which I’d usually figure out what I needed to do next in my program). After 8 hours, my brain is useless for anything requiring focus or discipline. I can do more work for short bursts occasionally when I’m motivated, but it takes a toll on my health and I need extra time off to recover.

I know other people can do focused productive work for more than 8 hours a day; congrats! However, keep in mind that I know plenty of people who thought they could work more than 8 hours a day, and then discovered they’d given themselves major stress-related health problems—repetitive stress injury, ulcers, heart trouble—or ignored existing health problems until they got so bad they started interfering with their work. This includes several extremely successful people who only need to sleep 5 hours a night and were using the extra time that gave them to do more work. The human body can only take so much stress.

Myth: My employer will reward me for working extra hours

Turns out, software engineering isn’t graded on effort, like kindergarten class. I remember the first year of my career when I worked my usual overtime and did not get a promotion or a raise; the company was slowly going out of business and it didn’t matter how many hours I worked—I wasn’t getting a raise. Given that my code quality fell off after 4 hours and went negative after 8 hours, it was a waste of time to work overtime anyway. At the same time, I always felt a lot of pressure to appear to be working for more than 40 hours a week, such that 40 hours became the unofficial minimum. The end result was a lot of programmers in the office late at night doing things other than coding: playing games, reading the internet, talking with each other. Which is great when you have no friends outside work, no family nearby, and no hobbies; less great when you do.

Overall, my general impression of the reward structure for software engineers is that people who fit people’s preconceptions of what a programmer looks like and who aggressively self-promote are more likely to get raises and promotions than people who produce more value. (Note that aggressive self-promotion is often punished in women of all races, people of color, disabled folks, immigrants, etc.)

Myth: People who work 40 hours or less are lazy

I was raised with fairly typical American middle-class beliefs about work: work is virtuous, if people don’t have jobs it’s because of some personal failing of theirs, etc. I started to change my mind when I read about Venezuelan medical doctors who were unable to buy shoes during an economic recession. Medical school is hard; I couldn’t believe all of those doctors were lazy! In my first full-time job, I had a co-worker who spent 40 hours a week in the office, but never did any real work. Then I realized that many of the hardest working people I knew were mothers who worked in the home for no pay at all. Nowadays I understand that I can’t judge someone’s moral character by the number of hours of labor they do (or are paid for) each week.

The kind of laziness that does concern me comes from abuse: people using coercion to extract an unfair amount of value from other people’s labor. This includes many abusive spouses, most billionaires, and many politicians. I’m not worried about people who want to work 40 hours a week or fewer so they can spend more time with their kids or crocheting or traveling; they aren’t the problem.

Myth: I work more than 40 hours because I’d be unhappy otherwise

When I was 25, I couldn’t imagine wanting to do other things with the time I was spending on work. With hindsight, I can see that’s because I was socially isolated and didn’t know how to deal with my anxiety other than by working. If I tried to stop working, I would very quickly run out of things to do that I enjoyed, and would end up writing some more code or answering some more work email just to have some positive feelings. It took years and years of therapy, building up my social circle, and developing hobbies before I had enough enjoyable things to do other than work.

Working for pay gives a lot of people joy and that is perfectly fine! It’s when you have few other ways to feel happy that overwork begins to be a problem.

Myth: The way to fix my anxiety is to work more hours

The worse the social safety net is in your country, the more anxious you probably are about your future: Will you have a place to live? Food to eat? Medical care? Clothes for your kids? We often respond to anxiety by shutting down any higher thought and focusing on what is in front of us. For many of us in this situation, the obvious answer seems to be “work more hours.” Now, if you are being paid for working more hours, this makes some sense: money contributes to security. But if you’re not, those extra hours bring no concrete reward. You are just hoping that your employer will take the extra work into consideration when deciding whether to give you a raise or end your employment. Unfortunately, in my experience, the best way to get a raise or keep your job is to be as similar to your management as possible.

If you can take the time to work with your anxiety and pull back and look at the larger picture, you’ll often find better ways to use those extra hours to improve your personal safety net. Just a few off the top of my head: building your professional network, improving your resume, learning new skills, helping friends, caring for your family, meditating, taking care of your health, and talking to a therapist about your anxiety. The future is uncertain and only partially under your control; nothing can change that fundamental truth. Consider carefully whether working unpaid hours is the best way to increase your safety.

Myth: The extra hours are helping me learn skills that will pay off later

Maybe it’s just me, but I can only learn new stuff for a few hours a day. Judging by the recommended course loads at universities, most people can’t actively learn new stuff more than 40 hours a week. If I’ve been working for more than 8 hours, all I can do is repeat things I’ve already learned (like stepping through a program in a debugger). Creative thought and breakthroughs are pretty thin on the ground after 8 hours of hard work. The only skills I’m sure I learned from working more than 40 hours a week are: how to keep going through hunger, how to ignore pain in my body, how to keep going through boredom, how to stay awake, and how to sublimate my healthy normal human desires. Oh, and which office snack foods are least nauseating at 2am.

Myth: Companies won’t hire salaried professionals part-time

Some won’t, some will. Very few companies will spontaneously offer part-time salaried work for a position that usually requires full-time, but if you have negotiating power and you’re persistent, you will be surprised how often you can get part-time work. Negotiating power usually increases as you become a more desirable employee; if you can’t swing part-time now, keep working on your career and you may be able to get it in the future.

Myth: I can only get benefits if I work full-time

Whether a company can offer the benefits available to full-time employees to part-time employees is up to their internal policies combined with local law. Human beings create policies and laws and they can be changed. Small companies are generally more flexible about policies than large companies. Some companies offer part-time positions as a competitive advantage in hiring. Again, having more negotiating power will help here. Companies are more likely to change their policies or make exceptions if they really really want your services.

Myth: My career will inevitably suffer if I work part-time

There are absolutely some career goals that can only be achieved by working full-time. But working part-time can also help your career. You can use your extra time to learn new skills, or improve your education. You can work on unpaid projects that improve your portfolio. You can extend your professional network. You can get career coaching. You can start your own business. You can write books. You can speak at conferences. Many things are possible.

Real barriers to working fewer hours

Under capitalism, in the absence of enforced laws against working more than a certain number of hours a week, the number of hours a week employees work will grow until the employer is no longer getting a marginal benefit out of each additional hour. That means if the employer will get any additional value out of an hour above and beyond the costs of working that hour, they’ll require the employee to work that hour. This happens without regard for the cost to the employee or their dependents, in terms of health, happiness, or quality of life.

In the U.S. and many other countries, we often act like the 40-hour working week is some kind of natural law, when the laws surrounding it were actually the result of a long, desperately fought battle between labor and capital extending over many decades. Even so, what laws we do have limiting the amount of labor an employer can demand from an employee have many loopholes, and often go unenforced. Wage theft—employers stealing wages from employees through a variety of means, including unpaid overtime—accounts for more money stolen in the U.S. than all robberies.

Due to loopholes and lax enforcement, many salaried professionals end up in a situation where all the people they are competing with for jobs or promotions are working far more than 40 hours a week. They don’t have to be working efficiently for more than 40 hours a week for this to benefit their employers; they just have to be creating more value than they are costing during those hours of work. Some notorious areas of high competition and high hours include professors on the tenure track, lawyers on the partner track, and software engineers working in competitive fields.

In particular, software engineers working for venture capital-funded startups in fields with lots of competitors are under a lot of pressure to produce more work more quickly, since timing is such an important element of success in the fields that venture capital invests in. The result is a lot of software engineers who burn themselves out working too many hours for startups for less total compensation than they’d make working at Microsoft or IBM, despite whatever stock options they were offered to make up for lower salaries and benefits. This is because (a) most startups fail, (b) most software engineers either don’t vest their stock options before they quit, or quit before the company goes public and can’t afford to buy the options during the short (usually 90-day) exercise window after they quit.

No individual actions or decisions by a single worker can change these kinds of competitive pressures, and if your goal is to succeed in one of these highly competitive, poorly governed areas, you’ll probably have to work more than 40 hours a week. Overall, unchecked capitalism leads to a Red Queen’s race, in which individual workers have to work as hard as they can just to keep up with their competition (and those who can’t, die). I don’t want to live in this world, which is why I support laws limiting working hours and requiring pay, government-paid parental and family leave, a universal basic income, and the unions and political parties that fight for and win these protections.

Tips for working fewer hours

These tips for working fewer hours are aimed primarily at software engineers in the U.S. who have some job mobility, and more generally for salaried professionals in the U.S. Some of these tips may be useful for other folks as well.

See a career counselor or career coach. Most of us are woefully unprepared to guide and shape our career paths. A career counselor can help you figure out what you value, what your goals should be, and how to achieve them, while taking into account your whole self (including family, friends, and hobbies). A career counselor will help you with the mechanics of actually working fewer hours: negotiating down your current job, finding a new job, starting your own business, etc. To find a career counselor, ask your friends for recommendations or search online review sites.

Go to therapy. If you’re voluntarily overworking, you’ve internalized a lot of ideas about what a good person is or how to be happy that are actually about how to make employers wealthier. Even if you are your own employer, you’ll still need to work these out. You’re also likely to be dealing with anxiety or unresolved problems in your life by escaping to work. You’ll need to learn new values, new ideas, and new coping mechanisms before you can work fewer hours. I’ve written about how to find therapy here. You might also want to read up on workaholics. The short version is: there is some reason you are currently overworking, and you’ll need to address that before you can stop overworking.

Find other things to do with your time. Spend more time with your kids, develop new hobbies or pick up old ones, learn a sport, watch movies, volunteer, write a novel – the options are endless. Learn to identify the voice in your head that says you shouldn’t be wasting your time on that and tell it to mind its own business.

Search for more efficient ways to make money. In general, hourly wage labor is going to have a very hard limit on how much money you can make per hour, even in highly paid positions. Work with your career counselor to figure out how to make more money per hour of labor. Often this looks like teaching, reviewing, or selling a product or service with low marginal cost.

Talk to a financial advisor. Reducing hours often means at least some period of lower income, even if your income ends up higher after that. If like many people you are living paycheck-to-paycheck, you’ll need help. A professional financial advisor can help you figure out how to get through this period and make better financial decisions in general. [Added 19-June-2018]

Finally, we can help normalize working fewer hours a week just by talking about it and, if it is safe for us, actually asking for fewer hours of work. We can also support unions, elect politicians who promise to pass legislation protecting workers, promote universal basic income, support improvements in the social safety net, and raise awareness of what working conditions are like without these protections.

Planet DebianSteve Kemp: Monkeying around with interpreters - Result

So I challenged myself to write a BASIC interpreter over the weekend; unfortunately, I did not succeed.

What I did was take an existing monkey-repl and extend it with a series of changes to make sure that I understood all the various parts of the interpreter design.

Initially I was just making basic changes:

  • Added support for single-line comments.
    • For example "// This is a comment".
  • Added support for multi-line comments.
    • For example "/* This is a multi-line comment */".
  • Expand \n and \t in strings.
  • Allow the index operation to be applied to strings.
    • For example "Steve Kemp"[0] would result in S.
  • Added a type function.
    • For example "type(3.13)" would return "float".
    • For example "type(3)" would return "integer".
    • For example "type("Moi")" would return "string".

Once I did that, I overhauled the built-in functions, allowing callers to register golang functions to make them available to their monkey-scripts. Using this I wrote a small "standard library" with some simple math, string, and file I/O functions.

The end result was that I could read files, line-by-line, or even just return an array of the lines in a file:

 // "wc -l /etc/passwd" - sorta
 let lines = file.lines( "/etc/passwd" );
 if ( lines ) {
    puts( "Read ", len(lines), " lines\n" )
 }

Adding file I/O was pretty neat, although I only did reading. Looping over a file's contents is a little verbose:

 // wc -c /etc/passwd, sorta.
 let handle = file.open("/etc/passwd");
 if ( handle < 0 ) {
   puts( "Failed to open file" )
 }

 let c = 0;       // count of characters
 let run = true;  // still reading?

 for( run == true ) {

    let r = read(handle);
    let l = len(r);
    if ( l > 0 ) {
        let c = c + l;
    }
    else {
        let run = false;
    }
 };

 puts( "Read " , c, " characters from file.\n" );
 file.close(handle);

This morning I added some code to interpolate hash-values into a string:

 // Hash we'll interpolate from
 let data = { "Name":"Steve", "Contact":"+358449...", "Age": 41 };

 // Expand the string using that hash
 let out = string.interpolate( "My name is ${Name}, I am ${Age}", data );

 // Show it worked
 puts(out + "\n");

Finally I added some type-conversions, allowing strings/floats to be converted to integers, and allowing other values to be converted to strings. With the addition of a math.random function we then got:

 // math.random() returns a float between 0 and 1.
 let rand = math.random();

 // modify to make it from 1-10 & show it
 let val = int( rand * 10 ) + 1 ;
 puts( "math.random() -> ", val , "\n");

The only other significant change was the addition of a new form of function definition. Rather than defining functions like this:

 let hello = fn() { puts( "Hello, world\n" ) };

I updated things so that you could also define a function like this:

 function hello() { puts( "Hello, world\n" ) };

(The old form still works, but this is "clearer" in my eyes.)

Maybe next weekend I'll try some more BASIC work, though for the moment I think my monkeying around is done. The world doesn't need another scripting language, and as I mentioned there are a bunch of implementations of this around.

The new structure I made makes adding a real set of standard-libraries simple, and you could embed the project, but I'm struggling to think of why you would want to. (Though I guess you could pretend you're embedding something more stable than anko and not everybody loves javascript as a golang extension language.)

Planet DebianArthur Del Esposte: GSoC Status Update - First Month

In the past month I have been working on my GSoC project in Debian’s Distro Tracker. This project aims at designing and implementing new features in Distro Tracker to better support Debian teams in tracking the health of their packages and prioritizing their work. In this post, I will describe the current status of my contributions, highlight the main challenges, and point out the next steps.

Work Management and Communication

I communicate with Lucas Kanashiro (my mentor) constantly via IRC and in person at least once a week, as we live in the same city. We have a weekly meeting with Raphael Hertzog on the #debian-qa IRC channel to report progress, collect feedback, resolve technical doubts, and plan the next steps.

I created a new repository in Salsa to save the log of our IRC meetings and to track my tasks through the repository’s issue tracker. Besides that, once a month I’ll post a new status update on my blog, such as this one, with more details regarding my contributions.

Advances

When GSoC officially started, Distro Tracker already had some team-related features. Briefly, a team is an entity composed of one or more users who are interested in the same set of packages. Teams are created manually by users, and anyone may join public teams. The team page aggregates some basic information about the team and the list of packages of interest.

Distro Tracker offers a page that enables users to browse public teams, showing a paginated, sorted list of names. It used to be hard to find a team based on this list, since Distro Tracker has more than 110 teams distributed over 6 pages. So I created a new search field with auto-complete at the top of the teams page to enable users to find a team’s page faster, as shown in the following figure:

Search Field for Teams Page

Also, I have been working on improving the current teams infrastructure to enable Debian’s teams to better track the health of their packages. Initially, we decided to use the current data available in Distro Tracker to create the first version of a new team’s page based on PET.

Presenting a team’s package data in a table on the team’s page would be a relatively trivial task. However, the Distro Tracker architecture aims to provide a generic core which can be extended through distro-specific applications, such as Kali Linux. The core source code provides generic infrastructure to import data related to deb packages and to present it in HTML pages. Therefore, we had to take this requirement into account and provide an extensible infrastructure for showing package data in tables, so that it would be easy to add new table fields and to change the default behavior of existing columns provided by the core source code.

So, based on the previously existing panels feature and on Hertzog’s suggestions, I designed and developed a framework to create customizable package tables for teams. This framework is composed of two main classes:

  • BaseTableField - A base class representing fields to be displayed on package tables. Among other things, it must define the column name and a template to render the cell content for a package.
  • BasePackageTable - A base class representing package tables which are displayed on a team page. It may have several BaseTableFields to display package information. Different tables may show different lists of packages based on their scope. (A purely hypothetical sketch of how these classes can be extended follows this list.)
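
To make the division of responsibilities more concrete, here is a purely hypothetical sketch of how the two base classes might be extended. The attribute names column_name and template_name, the PopconTableField class, and the scope attribute are assumptions made for illustration only, not the actual Distro Tracker API; the real field implementations are shown further below.

class PopconTableField(BaseTableField):
    """Imaginary extra column; the attribute names are assumptions."""
    column_name = 'Popcon'
    template_name = 'debian/package-table-fields/popcon.html'


class TeamPackagesTable(BasePackageTable):
    """Imaginary table combining a core field with the custom one."""
    table_fields = (GeneralInformationTableField, PopconTableField)

    @property
    def packages(self):
        # Restrict the table to one team's packages; `scope` is assumed
        # to hold the team this table was instantiated for.
        return self.scope.packages.all().order_by('name')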

We have been discussing my implementation in an open Merge Request, and we are very close to the version that should be incorporated. The following figures show a comparison between the earlier PET table and our current implementation.

PET packages table (left) compared with the packages table on the current Distro Tracker teams page (right)

Currently, the team’s page has only one table, which displays all packages related to that team. We are already presenting a set of data very similar to PET’s table. More specifically, the following columns are shown:

  • Package - displays the package name in the cell. It is implemented by the core’s GeneralInformationTableField class
  • VCS - by default, it displays the type of the package’s repository (e.g. Git, SVN) or Unknown. It is implemented by the core’s VcsTableField class. However, the Debian app extends this behavior by adding the changelog version of the latest repository tag and displaying issues identified by Debian’s VCSWatch.
  • Archive - displays the package version on distro archive. It is implemented by the core’s ArchiveTableField class.
  • Bugs - displays the total number of bugs of a package. It is implemented by the core’s BugsTableField class. Ideally, each third-party app should extend this table field to add links to its own bug tracking system.
  • Upstream - displays the latest upstream version available. This is a specific table field implemented by the Debian app, since this data is imported through Debian-specific tasks; as such, it is not available for other distros.

As the table’s cells are too small to present detailed information, we have added Popper.js, a JavaScript library to display popovers. Some columns show a popover, displayed on mouse hover, with more details regarding their content. The following figure shows the popover for the Package column:

Package's Popover

In addition to designing the table framework, the main challenge was to avoid the N+1 query problem, which introduces performance issues since, for a set of N packages displayed in a table, each field element must perform one or more lookups for additional data for a given package. To solve this problem, each subclass of BaseTableField must define a set of Django Prefetch objects so that BasePackageTable objects can load all required data in batch in advance through prefetch_related, as listed below.

class BasePackageTable(metaclass=PluginRegistry):
    @property
    def packages_with_prefetch_related(self):
        """
        Returns the list of packages with prefetched relationships defined by
        table fields
        """
        package_query_set = self.packages
        for field in self.table_fields:
            for l in field.prefetch_related_lookups:
                package_query_set = package_query_set.prefetch_related(l)

        additional_data, implemented = vendor.call(
            'additional_prefetch_related_lookups'
        )
        if implemented and additional_data:
            for l in additional_data:
                package_query_set = package_query_set.prefetch_related(l)
        return package_query_set

    @property
    def packages(self):
        """
        Returns the list of packages shown in the table. One may define this
        based on the scope
        """
        return PackageName.objects.all().order_by('name')


class ArchiveTableField(BaseTableField):
    prefetch_related_lookups = [
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='general'),
            to_attr='general_archive_data'
        ),
        Prefetch(
            'data',
            queryset=PackageData.objects.filter(key='versions'),
            to_attr='versions'
        )
    ]

    @cached_property
    def context(self):
        try:
            info = self.package.general_archive_data[0]
        except IndexError:
            # There is no general info for the package
            return

        general = info.value

        try:
            info = self.package.versions[0].value
            general['default_pool_url'] = info['default_pool_url']
        except IndexError:
            # There is no versions info for the package
            general['default_pool_url'] = '#'

        return general

Finally, it is worth noting that we also improved the team’s management page by moving all team management features to a single page and improving its visual structure:

Teams Management

Next Steps

Now, we are moving towards adding other tables with different scopes, such as the tables presented by PET:

PET tables

To this end, we will introduce the Tag model class to categorize the packages based on their characteristics. Thus, we will create an additional task responsible for tagging packages based on their available data. The relationship between packages and tags should be ManyToMany. In the end, we want to perform a simple query to define the scope of a new table, such as the following example to query all packages with Release Critical (RC) bugs:

class RCPackageTable(BasePackageTable):
    @property
    def packages(self):
        # filter() returns a queryset, so look up the single tag with get()
        tag = Tag.objects.get(name='rc-bugs')
        return tag.packages.all()
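
For reference, here is a minimal sketch of what such a Tag model might look like; the field definitions, the related_name, and the import path are assumptions for illustration, not the final Distro Tracker design:

from django.db import models

from distro_tracker.core.models import PackageName  # import path assumed


class Tag(models.Model):
    # Field names and related_name are assumptions, not the final design.
    name = models.SlugField(max_length=100, unique=True)
    packages = models.ManyToManyField(PackageName, related_name='tags')

    def __str__(self):
        return self.name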

We will probably also need to work on Debian’s VCSWatch to enable it to receive updates through Salsa’s webhooks, especially for real-time monitoring of repositories.


Let’s get moving on! \m/

,

Planet DebianClint Adams: Before the combination with all the asterisks

We assembled at the rally point on the wrong side of the tracks. When consensus was achieved, we began our march to the Candy Kingdom. Before we had made it even a single kilometer, a man began yelling at us.

“It’s not here,” he exclaimed. “It’s that way.”

This seemed incredible. It became apparent that, despite his fedora, he was probably the King of Ooo.

Nevertheless, we followed him in the direction he indicated. He did not offer us space in his vehicle, but we managed to catch up eventually.

“It’s to the right of the cafe. Look for сиська,” he announced.

It occurred to me that the only sign I had seen that said сиська was right by where he had intercepted us. It also occurred to me that the cafe had three sides, and “right” was rather ambiguous.

There was much confusion until the Banana Man showed up.

Posted on 2018-06-17
Tags: mintings

Planet DebianBits from Debian: Debian Artwork: Call for Proposals for Debian 10 (Buster)

This is the official call for artwork proposals for the Buster cycle.

For the most up to date details, please refer to the wiki.

We would also like to take this opportunity to thank Juliette Taka Belin for doing the Softwaves theme for stretch.

The deadline for submissions is: 2018-09-05

The artwork is usually picked based on which themes look the most:

  • ''Debian'': admittedly not the most defined concept, since everyone has their own take on what Debian means to them.
  • ''plausible to integrate without patching core software'': as much as we love some of the insanely hot looking themes, some would require heavy GTK+ theming and patching GDM/GNOME.
  • ''clean / well designed'': without becoming something that gets annoying to look at a year down the road. Examples of good themes include Joy, Lines and Softwaves.

If you'd like more information, please use the Debian Desktop mailing list.

Planet Linux AustraliaMichael Still: Rejected talk proposal: Design at scale: OpenStack versus Kubernetes


This proposal was submitted for pyconau 2018. It wasn’t accepted, but given I’d put the effort into writing up the proposal I’ll post it here in case it’s useful some other time. The oblique references to OpenStack are because pycon had an “anonymous” review system in 2018, and I was avoiding saying things which directly identified me as the author.


OpenStack and Kubernetes solve very similar problems. Yet they approach those problems in very different ways. What can we learn from the different approaches taken? The differences aren’t just technical though, there are some interesting social differences too.

OpenStack and Kubernetes solve very similar problems – at their most basic level they both want to place workloads on large clusters of machines, and ensure that those placement decisions are as close to optimal as possible. The two projects even have similar approaches to the fundamentals – they are both orchestration systems at their core, seeking to help existing technologies run at scale instead of inventing their own hypervisors or container run times.

Yet they have very different approaches to how to perform these tasks. OpenStack takes a heavily centralised and monolithic approach to orchestration, whilst Kubernetes has a less stateful and more laissez faire approach. Some of that is about early technical choices and the heritage of the projects, but some of it is also about hubris and a desire to tightly control. To be honest I lived the OpenStack experience so I feel I should be solidly in that camp, but the Kubernetes approach is clever and elegant. There’s a lot to like on the Kubernetes side of the fence.

It’s increasingly common that at some point you’ll encounter one of these systems, as neither seems likely to go away in the next few years. Understanding some of the basics of their operation is therefore useful, as well as being interesting at a purely hypothetical level.



Planet Linux AustraliaMichael Still: Accepted talk proposal: Learning from the mistakes that even big projects make


This proposal was submitted for pyconau 2018. It was accepted, but hasn’t been presented yet. The oblique references to OpenStack are because pycon had an “anonymous” review system in 2018, and I was avoiding saying things which directly identified me as the author.


Since 2011, I’ve worked on a large Open Source project in python. It kind of got out of hand – 1000s of developers and millions of lines of code. Yet despite being well resourced, we made the same mistakes that those tiny scripts you whip up to solve a small problem make. Come learn from our fail.

This talk will use the privilege separation daemon that the project wrote to tell the story of decisions that were expedient at the time, and how we regretted them later. In a universe in which you can only run commands as root via sudo, dd’ing from one file on the filesystem to another seems almost reasonable. Especially if you ignore that the filenames are defined by the user. Heck, we shell out to “mv” to move files around, even when we don’t need escalated permissions to move the file in question.

While we’ll focus mainly on the security apparatus because it is the gift that keeps on giving, we’ll bump into other examples along the way as well. For example how we had pluggable drivers, but you have to turn them on by passing in python module paths. So what happens when we change the interface the driver is required to implement and you have a third party driver? The answer isn’t good. Or how we refused to use existing Open Source code from other projects through a mixture of hubris and licensing religion.

On a strictly technical front, this is a talk about how to do user space privilege separation sensibly. Although we should probably discuss why we also chose in the last six months to not do it as safely as we could.

For a softer technical take, the talk will cover how doing things right was less well documented than doing things the wrong way. Code reviewers didn’t know the anti-patterns, which were common in the code base, so made weird assumptions about what was ok or not.

On a human front, this is about herding cats. Developers with external pressures from their various employers, skipping steps because it was expedient, and how we threw automation in front of developers because having a conversation as adults is hard. Ultimately we ended up being close to stalled before we were “saved” from an unexpected direction.

In the end I think we’re in a reasonable place now, so I certainly don’t intend to give a lecture about doom and gloom. Think of us more as a light hearted object lesson.



Don MartiHelping people move ad budgets away from evil stuff

Hugo-award-winning author Charles Stross said that a corporation is some kind of sociopathic hive organism, but as far as I can tell a corporation is really more like a monkey troop cosplaying a sociopathic hive organism.

This is important to remember because, among other reasons, it turns out that the money that a corporation spends to support democracy and creative work comes from the same advertising budget as the money it spends on random white power trolls and actual no-shit Nazis. The challenge for customers is to help people at corporations who want to do the right thing with the advertising budget, but need to be able to justify it in terms that won't break character (since they have agreed to pretend to be part of a sociopathic hive organism that only cares about its stock price).

So here is a quick follow-up to my earlier post about denying permission for some kinds of ad targeting.

Techcrunch reports that "Facebook Custom Audiences," the system where advertisers upload contact lists to Facebook in order to target the people on those lists with ads, will soon require permission from the people on the list. Check it out: Introducing New Requirements for Custom Audience Targeting | Facebook Business. On July 2, Facebook's own rules will extend a subset of Europe-like protection to everyone with a Facebook account. Beaujolais!

So this is a great opportunity to help people who work for corporations and want to do the right thing. Denying permission to share your info with Facebook can move the advertising money that they spend to reach you away from evil stuff and towards sites that make something good. Here's a permission withdrawal letter to cut and paste. Pull requests welcome.

,

Rondam RamblingsSuffer the little children

Nothing illustrates the complete moral and intellectual bankruptcy of Donald Trump's supporters, apologists, and enablers better than Jeff Sessions's Biblical justification for separating children from their families: “I would cite you to the Apostle Paul and his clear and wise command in Romans 13, to obey the laws of the government because God has ordained the government for his purposes,”

Planet DebianArturo Borrero González: Netfilter Workshop 2018 Berlin summary

Netfilter logo

This weekend we had Netfilter Workshop 2018 in Berlin, Germany.

Lots of interesting talks happened, mostly surrounding nftables and how to move forward from the iptables legacy world to the new, modern nft framework.

In a nutshell, the Netfilter project, the FLOSS community-driven project, has agreed to consider iptables a legacy tool. This confidence comes from the maturity of the nftables framework, which is fairly fully compliant with the old iptables API, including extensions (matches and targets).

Starting now, the next iptables upstream releases will include the old iptables binary as /sbin/iptables-legacy, and the same goes for the other related tools.

To summarize:

  • /sbin/iptables-legacy
  • /sbin/iptables-legacy-save
  • /sbin/iptables-legacy-restore
  • /sbin/ip6tables-legacy
  • /sbin/ip6tables-legacy-save
  • /sbin/ip6tables-legacy-restore
  • /sbin/arptables-legacy
  • /sbin/ebtables-legacy

The new binary will use the nf_tables kernel backend instead, which is what was formerly known as ‘iptables-compat’. Should you find some rough edges with the new binary, you can always use the old -legacy tools. These are for people who want to keep using the old iptables semantics, but the recommendation is to migrate to nftables as soon as possible.

Moving to nftables will add the benefits of improved performance, new features, new semantics, and, in general, a modern framework. All major distributions will implement these changes soon, including RedHat, Fedora, CentOS, Suse, Debian and derivatives. We also had some talks regarding firewalld, the firewalling service used by some rpm-based distros. It gained support for nftables starting with v0.6.0. This is great news, since firewalld is the main top-level firewalling mechanism in these distributions. Another piece of good news is that the libnftables high-level API is in great shape. It recently gained a new high-level JSON API thanks to Phil Sutter. The firewalld tool will use this new JSON API soon.

I gave a talk about the status of Netfilter software packages at Debian, and shared my plans to implement these iptables -> nftables changes in the near future.

We also had an interesting talk by a CloudFlare engineer about how they use the TPROXY Netfilter infrastructure to serve thousands of customers. Some discussion happened about caveats and improvements, and how nftables could be a better fit if it gains TPROXY-like features. In the field of networking at scale, some vmware engineers also joined the conversation on nft connlimit and nf_conncount, a new approach in nftables for rate-limiting/policing based on conntrack data. This was followed up by a presentation by Pablo Neira about the new flow offload infrastructure for nftables, which can act as a complete kernel bypass in the case of packet forwarding.

The venue

Jozsef Kadlecsik shared a deep and detailed investigation of ipset vs nftables and how we could match both frameworks. He gave an overview of what’s missing, what’s already there, and what would benefit users migrating from ipset to nftables.

We had some space for load-balancing as well. Laura García shared the latest news regarding the nftlb project, the nftables-based load balancer. She shared some interesting numbers about how retpoline affects Netfilter performance. She mentioned that the impact of retpoline is about 17% for nftables and 40% for iptables in her use cases.

Florian Westphal gave a talk regarding br_netfilter and how we could improve the Linux kernel networking stack from the Netfilter point of view for bridge use cases. Right now all sorts of nasty things are done to store required information and context for packets traversing bridges (which may need to be evaluated by Netfilter). We have a lot of margin for improvement, and Florian’s plan is to invest time in this.

We had a very interesting legal talk by Dr. Till Jaeger regarding GPL enforcement in Germany, related to the Patrick McHardy situation. Some good work is being done in this field to defend the community against activities which hurt the interests of all Linux users and developers.

Harsha Sharma, 18 years old, from India, gave a talk explaining her work on nftables to the rest of the Netfilter contributors. This was possible thanks to internship programs like Outreachy and Google Summer of Code. Varsha and Harsha were both brave to travel so far from home to join a mostly european-white-men-only meeting. We were joined by 3 women at this workshop, and I would like to believe this is a symbol of our inclusiveness, of being a healthy community.

The group

The workshop was sponsored by vmware, zevenet, redhat, intra2net, oisf, stamus networks, and suricata.

Planet DebianSteve Kemp: Monkeying around with interpreters

Recently I've had an overwhelming desire to write a BASIC interpreter. I can't think why, but the idea popped into my mind, and wouldn't go away.

So I challenged myself to spend the weekend looking at it.

Writing an interpreter is a pretty well-understood problem (a toy sketch of the lexing stage follows the list below):

  • Parse the input into tokens, such as "LET", "GOTO", "INT:3"
    • This is called lexical analysis / lexing.
  • Taking those tokens and building an abstract syntax tree.
    • The AST
  • Walking the tree, evaluating as you go.
    • Hey ho.
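
As a toy illustration of the first stage only (in Python, not the author's Go code, with every name invented for the example), a tokenizer for a line of BASIC might look like this:

 # Toy tokenizer sketch: the token kinds and TOKEN_SPEC table are
 # invented for illustration; they are not part of any real project.
 import re

 TOKEN_SPEC = [
     ("INT",    r"\d+"),
     ("IDENT",  r"[A-Za-z]+"),
     ("STRING", r'"[^"]*"'),
     ("SKIP",   r"\s+"),
 ]

 def tokenize(line):
     """Turn '10 PRINT "HI"' into a list of (kind, text) tokens."""
     tokens = []
     pos = 0
     while pos < len(line):
         for kind, pattern in TOKEN_SPEC:
             match = re.match(pattern, line[pos:])
             if match:
                 if kind != "SKIP":
                     tokens.append((kind, match.group(0)))
                 pos += match.end()
                 break
         else:
             raise SyntaxError("unexpected character: " + line[pos])
     return tokens

 print(tokenize('10 PRINT "HELLO, WORLD"'))
 # [('INT', '10'), ('IDENT', 'PRINT'), ('STRING', '"HELLO, WORLD"')]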

Of course BASIC is annoying because program lines are prefixed by line-numbers, for example:

 10 PRINT "HELLO, WORLD"
 20 GOTO 10

The naive way of approaching this is to repeat the whole process for each line, so a program would consist of an array of input-strings, each line being treated independently.
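
As a rough, purely illustrative sketch of that naive approach (again in Python, with all names invented for the example), you can key the parsed statements by line number and let GOTO move a program counter around:

 # Naive BASIC-style dispatch: statements keyed by line number, with
 # GOTO changing the program counter. Illustrative only.
 program = {
     10: ("PRINT", "HELLO, WORLD"),
     20: ("GOTO", 10),
 }

 def run(program, max_steps=5):
     lines = sorted(program)   # ordered line numbers
     pc = 0                    # index into `lines`
     steps = 0
     while pc < len(lines) and steps < max_steps:
         op, arg = program[lines[pc]]
         if op == "PRINT":
             print(arg)
             pc += 1
         elif op == "GOTO":
             pc = lines.index(arg)   # jump to the target line number
         steps += 1

 run(program)   # prints HELLO, WORLD a few times, then the step limit stops it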

Anyway reminding myself of all this fun took a few hours, and during the course of that time I came across Writing An Interpreter In Go, which seems to be well-regarded. The book walks you through creating an interpreter for a language called "Monkey".

I found a bunch of implementations, which were nice and clean. So to give myself something to do I started by adding a new built-in function rnd(). Then I tested this:

let r = 0;
let c = 0;

for( r != 50 ) {
   let r = rnd();
   let c = c + 1;
}

puts "It took ";
puts c;
puts " attempts to find a random-number equalling 50!";

Unfortunately this crashed. It crashed inside the body of the loop, and it seemed that the projects I looked at each handled the let statement in a slightly odd way - the statement wouldn't return a value, and would instead fall through a case statement, hitting the next implementation.

For example in monkey-intepreter we see that happen in this section. (Notice how there's no return after the env.Set call?)

So I reported this as a meta-bug to the book author. It might be that the master source is wrong, or it might be that the unrelated individuals all made the same error - meaning the text is unclear.

Anyway the end result is I have a language, in go, that I think I understand and have been able to modify. Now I'll have to find some time to go back to BASIC-work.

I found a bunch of BASIC interpreters, including ubasic, but unfortunately almost all of them were missing many, many features - such as operations like RND(), ABS(), and COS().

Perhaps room for another interpreter after all!

Planet Linux AustraliaDonna Benjamin: The Five Whys

The Five Whys - Need to go to the hardware store?

Imagine you work in a hardware store. You notice a customer puzzling over the vast array of electric drills.

She turns to you and says, I need a drill, but I don’t know which one to pick.

You ask, “So, why do you want a drill?”

“To make a hole,” she replies, somewhat exasperated. “Isn’t that obvious?”

“Sure,” you might say, “But why do you want to drill a hole? It might help us decide which drill you need!”

“Oh, okay,” and she goes on to describe the need to thread cable from one room to another.

From there, we might want to know more about the walls, about the type and thickness of the cable, and perhaps about what the cable is for. But what if we keep asking why? What if the next question was something like this?

“Why do you want to pull the cable from one room to the other?”

Our customer then explains she wants to connect directly to the internet router in the other room. "Our wifi reception is terrible! This seemed the fastest, easiest way to fix that."

At this point, there may be other solutions to the bad wifi problem that don’t require a hole at all, let alone a drill.

Someone who needs a drill rarely wants a drill, nor do they really want a hole.

It’s the utility of that hole that we’re trying to uncover with the 5 Whys.

Acknowledgement

I can't remember who first told me about this technique. I wish I could; it's been profoundly useful, and I evangelise its simple power at every opportunity. Thank you, whoever you are; I honour your generous wisdom by paying it forward today.

More about the Five whys

Image credits

Creative Commons Icons all from the Noun Project

  • Drill by Andrejs Kirma
  • Mouse Hole by Sergey Demushkin
  • Cable by Amy Schwartz
  • Internet by Vectors Market
  • Wifi by Baboon designs
  • Not allowed by Adnen Kadri

,

Planet Linux AustraliaLev Lafayette: Being An Acrobat: Linux and PDFs

The PDF file format can be manipulated efficiently with Linux and other free software in ways that may not be easy in proprietary operating systems or applications. This includes a review of various PDF readers for Linux, creation of PDFs from office documents using LibreOffice, editing PDF documents, converting PDF documents to images, extracting text from non-OCR PDF documents, converting to PostScript, converting from reStructuredText, Markdown, and other formats, searching PDFs according to regular expressions, converting to text, extracting images, separating and combining PDF documents, creating PDF presentations from text, creating fillable PDF forms, encrypting and decrypting PDF documents, and parsing PDF documents.

A presentation to Linux Users of Victoria, Saturday June 16, 2018

CryptogramFriday Squid Blogging: Cephalopod Week on Science Friday

It's Cephalopod Week! "Three hearts, eight arms, can't lose."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianSven Hoexter: imagine you no longer own your infrastructure

Sounds crazy and nobody would ever do that, but just for a moment imagine you no longer own your infrastructure.

Imagine you just run your container on something like GKE with Kubernetes.

Imagine you build your software with something like Jenkins running in a container, using the GKE provided docker interface to build stuff in another container.

And for a $reason imagine you're not using the Google provided container registry, but your own one hosted somewhere else on the internet.

Of course you access your registry via HTTPS, so your connection is secured at the transport level.

Now imagine your certificate is at the end of its validity period. Like ending the next day.

Imagine you just do what you do every time that happens, and you just order a new certificate from one of the left over CAs like DigiCert.

You receive your certificate within 15 minutes.

You deploy it to your registry.

You validate that your certificate chain validates against different certificate stores.

The one shipped in ca-certificates on various Debian releases you run.

The one in your browser.

Maybe you even test it with Google Chrome.

Everything is cool and validates. I mean, of course it does. DigiCert is a known CA player and the root CA certificate was created five years ago. A lot of time for a CA to be included and shipped in many places.

But still there is one issue. The docker commands you run in your build jobs fail to pull images from your registry because the certificate cannot be validated.

You take a look at the underlying OS, and indeed it's not shipping the 5-year-old root CA certificate that issued your intermediate CA, which just issued your new server certificate.

If it were your own infrastructure you would now just ship the missing certificate.

Maybe by including it in your internal ca-certificates build.

Or by just deploying it with ansible to /usr/share/ca-certificates/myfoo/ and adding that to the configuration in /etc/ca-certificates.conf so update-ca-certificates can create the relevant hash links for you in /etc/ssl/certs/.

But this time it's not your infrastructure and you cannot modify the operating system context your docker containers are running in.

Sounds insane, right? Luckily we're just making up a crazy story and something like that would never happen in the real world, because we all insist on owning our infrastructure.

Planet DebianSune Vuorela: Partially initialized objects

I found this construct some time ago. It took some reading to understand why it worked. I’m still not sure if it is actually legal, or if it only works because m_derivedData is not accessed in Base::Base.

struct Base {
    std::string& m_derivedData;
    Base(std::string& data) : m_derivedData(data) {
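        // Note: `data` refers to Derived::m_data, which has not been
        // constructed yet at this point; it is only bound, never read here.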
    }
};

struct Derived : public Base {
    std::string m_data;
    Derived() : Base(m_data), m_data("foo") {
    }
};

TEDAn ambitious plan to explore our oceans, and more news from TED speakers

 

The past few weeks have brimmed over with TED-related news. Below, some highlights.

Exploring the ocean like never before. A school of ocean-loving TED speakers have teamed up to launch OceanX, an international initiative dedicated to discovering more of our oceans in an effort to “inspire a human connection to the sea.” The coalition is supported by Bridgewater Capital’s Ray Dalio, along with luminaries like ocean explorer Sylvia Earle and filmmaker James Cameron, and partners such as BBC Studios, the American Museum of Natural History and the National Geographic Society. The coalition is now looking for ideas for scientific research missions in 2019, exploring the Norwegian Sea and the Indian Ocean. Dalio’s son Mark leads the media arm of the venture; from virtual reality demonstrations in classrooms to film and TV releases like the BBC show Blue Planet II and its follow-up film Oceans: Our Blue Planet, OceanX plans to build an engaged global community that seeks to “enjoy, understand and protect our oceans.” (Watch Dalio’s TED Talk, Earle’s TED Talk and Cameron’s TED Talk.)

The Ebola vaccine that’s saving lives. In response to the recent Ebola outbreak in the Democratic Republic of the Congo, GAVI — the Vaccine Alliance, led by Seth Berkeley — has deployed thousands of experimental vaccines in an outbreak control strategy. The vaccines were produced as part of a partnership between GAVI and Merck, a pharmaceutical company, committed to proactively developing and producing vaccines in case of a future Ebola epidemic. In his TED Talk, Berkeley spoke of the drastic dangers of global disease and the preventative measures necessary to ensure we are prepared for future outbreaks. (Watch his TED Talk and read our in-depth interview with Berkeley.)

A fascinating new study on the halo effect. Does knowing someone’s political leanings change how you gauge their skills? Cognitive neurologist Tali Sharot and lawyer Cass R. Sunstein shared insights from their latest research answering the question in The New York Times. Alongside a team from University College London and Harvard Law School, Sharot conducted an experiment testing whether knowing someone’s political leanings affected how we would engage and trust in other non-political aspects of their lives. The study found that people were more willing to trust someone who had the same political beliefs as them — even in completely unrelated fields, like dentistry or architecture. These findings have wide-reaching implications and can further our understanding of the social and political landscape. (Watch Sharot’s TED Talk on optimism bias).

A new essay anthology on rape culture. Roxane Gay’s newest book, Not That Bad: Dispatches from Rape Culture, was released in May to critical and commercial acclaim. The essay collection, edited and introduced by Gay, features first-person narratives on the realities and effects of harassment, assault and rape. With essays from 29 contributors, including actors Gabrielle Union and Amy Jo Burns, and writers Claire Schwartz and Lynn Melnick, Not That Bad offers feminist insights into the national and global dialogue on sexual violence. (Watch Gay’s TED Talk.)

One million pairs of 3D-printed sneakers. At TED2015, Carbon founder and CEO Joseph DeSimone displayed the latest 3D printing technology, explaining its seemingly endless applications for reshaping the future of manufacturing. Now, Carbon has partnered with Adidas for a bold new vision to 3D-print 100,000 pairs of sneakers by the end of 2018, with plans to ramp up production to millions. The company’s “Digital Light Synthesis” technique, which uses light and oxygen to fabricate materials from pools of resin, significantly streamlines manufacturing from traditional 3D-printing processes — a technology Adidas considers “revolutionary.” (Watch DeSimone’s TED Talk.)

Planet DebianAndrej Shadura: Working in open source: part 1

Three years ago on this day I joined Collabora to work on free software full-time. It still feels a bit like yesterday, despite so much time passing since then. In this post, I’m going to reconstruct the events of that year.

Back in 2015, I worked for Alcatel-Lucent, which had a branch in Bratislava. I can’t say I didn’t like my job — quite the contrary, I found it exciting: I worked with mobile technologies such as 3G and LTE, I had really knowledgeable and smart colleagues, and it was the first ‘real’ job (not counting the small business my father and I ran) where using Linux for development was not only not frowned upon but a mandatory part of the standard workflow, and running it on your workstation was common too, even if not officially supported.

However, after working for Alcatel-Lucent for a year, I found I didn’t like some things about the job. We developed proprietary software for the routers and gateways the company produced, and despite the fact that we used quite a lot of open source libraries and free software tools, we very rarely contributed anything back, and when it happened at all, it was usually done unofficially and not on the company’s time. Each time I tried to suggest that we upstream our local changes so that we wouldn’t have to maintain three different patchsets for different upstream versions ourselves, I was told that I knew nothing about how the business worked, and that doing so would mean giving up control of the code, which we couldn’t do. At the same time, we had no issue incorporating permissively-licensed free software code. The more I worked at Alcatel-Lucent, the more I felt I was just accumulating useless knowledge of a proprietary product that I would never be able to reuse if and when I left the company. At some point, in a discussion at work, someone said that doing software development (including my free software work) even in my free time might constitute a conflict of interest, and that the company might be unhappy about it. Add to that the fact that, despite relatively flexible hours, working from home was almost never allowed, nor was working from the company’s other offices.

These were the major reasons I quit my job at Alcatel-Lucent; my last day was 10 April 2015. Luckily, we reached an agreement that I would still receive my normal pay during the notice period despite not actually going to the office or doing any work, which allowed me to enjoy two months of working on my hobby projects without having to worry about money.

To be honest, I don’t want it to seem like I quit my job just because it was all proprietary software and planned to live off donations or something; it wasn’t quite like that. While still working for Alcatel-Lucent, I was offered a job developing real-time software running inside the Linux kernel. While I declined that offer, mostly because it was a small company with fewer than a dozen employees and I would have had to take over responsibility for a huge piece of code (which was, in fact, also proprietary), it taught me something: there were jobs out there where my knowledge of Linux was of actual use, even in the city I lived in. The other thing I learnt was that there were remote Linux jobs too, but I needed to become self-employed to be able to take them, since my immigration status at the time didn’t allow me to be employed abroad.

Picture of the business license. Text in Slovak: ‘Osvedčenie o živnostenskom opravnení. Andrei Shadura’ (‘Certificate of trade license. Andrei Shadura’).

The business license I received within a few days of quitting my job

Feeling free as a bird, with the business registered, I spent two months hacking, relaxing, travelling around Slovakia and Ukraine, and thinking about how I was going to earn money when my two-month vacation ended.

A street in Trenčín; the castle can be seen above the building’s roof.

In Trenčín

The obvious idea was to consult, but that wouldn’t guarantee me constant income. I could consult on Debian or Linux in general, or on version control systems — in 2015 I was an active member of the Kallithea project and I believed I could help companies migrate from CVS and Subversion to Mercurial and Git hosted internally on Kallithea. (I actually also got a job offer from Unity Technologies to hack on Kallithea and related tools, but I had to decline it since it would have required moving to Copenhagen, which I wasn’t ready for, despite liking the place when I visited them in May 2015.)

Another obvious idea was working for Red Hat, but knowing how slow their HR department was, I didn’t put too much hope in it. Besides, when I contacted them, they said they would need approval for me to work remotely as a self-employed contractor, lowering my chances of getting a job there without relocating to Brno or elsewhere.

At some point, reading Planet Debian, I found a blog post by Simon McVittie on polkit in which he mentioned Collabora. Soon after, I applied, had my interviews and received a job offer.

To be continued later today…

Worse Than Failure: Error'd: Just Handle It

Clint writes, "On Facebook, I tried to report a post as spam. I think I might just have to accept it."

 

"Jira seems to have strange ideas about my keyboard layout... Or is there a key that I don't know about?" writes Rob H.

 

George wrote, "There was deep wisdom bestowed upon weary travelers by the New York subway system at the Jamaica Center station this morning."

 

"Every single number field on the checkout page, including phone and credit card, was an integer. Just in case, you know, you felt like clicking a lot," Jeremiah C. writes.

 

"I don't know which is more ridiculous: that a Linux recovery image is a Windows 10, or that there's a difference between Pro and Professional," wrote Dima R.

 

"I got my weekly workout summary and, well, it looks I might have been hitting the gym a little too hard," Colin writes.

 


Planet Debian: Steinar H. Gunderson: Qt flag types

typeid(Qt::AlignRight) = Qt::AlignmentFlag (implicitly convertible to QVariant)
typeid(Qt::AlignRight | Qt::AlignVCenter) = QFlags<Qt::AlignmentFlag> (not implicitly convertible to QVariant)
typeid(Qt::AlignRight + Qt::AlignVCenter) = int (implicitly convertible to QVariant)

Qt, what is wrong with you?

Planet Debian: Daniel Pocock: The questions you really want FSFE to answer

As the last man standing as a fellowship representative in FSFE, I propose to give a report at the community meeting at RMLL.

I'm keen to get feedback from the wider community as well, including former fellows, volunteers and anybody else who has come into contact with FSFE.

It is important for me to understand the topics you want me to cover as so many things have happened in free software and in FSFE in recent times.

last man standing

Some of the things people already asked me about:

  • the status of the fellowship and the membership status of fellows
  • use of non-free software and cloud services in FSFE, deviating from the philosophy that people associate with the FSF / FSFE family
  • measuring both the impact and cost of campaigns, to see if we get value for money (a high level view of expenditure is here)

What are the issues you would like me to address? Please feel free to email me privately or publicly. If I don't have answers immediately I would seek to get them for you as I prepare my report. Without your support and feedback, I don't have a mandate to pursue these issues on your behalf so if you have any concerns, please reply.

Your fellowship representative

Planet Debian: bisco: Third GSoC Report

The last two weeks went by pretty fast, probably also because the last courses of the semester have started and I have a lot of additional work to do.

I closed the last report by writing about the implementation of the test suite. I’ve added a lot more tests since then and there are now around 80 tests that are run with every commit. Using unit tests that do some basic testing really makes life a lot easier; next time I start a software project I’ll definitely start writing tests early on. I’ve also read a bit about the difference between integration and unit tests. A unit test should only test one specific piece of functionality, so I refactored some of the old tests and made them more granular.

I then also looked into coding style checkers and decided to go with flake8. There was a huge pile of coding style violations in my code, most of them lines longer than 79 characters. I’ve integrated flake8 into the test suite and removed all the violations. One more thing about Python: I’ve read Python 3 with Pleasure, which gives a great overview of some of the new features of Python 3, and I’ve made some notes about things I want to adopt (e.g. pathlib).
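
As an illustration of the idea (this is just a sketch, not nacho's actual test code; the file layout and the line-length limit are assumptions), a style checker can be wrapped in an ordinary unit test so it runs together with the rest of the suite:

import subprocess
import unittest


class CodingStyleTest(unittest.TestCase):
    """Run flake8 over the source tree as part of the normal test run."""

    def test_flake8_clean(self):
        # --max-line-length=79 matches the default limit mentioned above
        result = subprocess.run(
            ["flake8", "--max-line-length=79", "."],
            capture_output=True, text=True)
        # flake8 exits non-zero when it finds violations
        self.assertEqual(result.returncode, 0, msg=result.stdout)


if __name__ == "__main__":
    unittest.main()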

Regarding the functionality of nacho, I’ve added the possibility to delete an account. SSH keys are now validated on upload and it is possible to configure the key types that are allowed. I initially just checked whether the key string consisted of valid base64-encoded data, but that was not really a good solution, so I decided to use sshpubkeys to check the validity of the keys. Nacho now also checks the profile image before storing it in the LDAP database; it is possible to configure the image size and list allowed image types, which is verified using python-magic. I also made a big change concerning the configuration: all the relevant configuration options are now moved to a separate configuration file in JSON format, which is parsed when nacho is started. This also makes it a lot easier to have default values and to let users override them in their local config. I also updated the documentation and the Debian package.
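
To give an idea of what such checks can look like, here is a minimal sketch (not nacho's actual code; the allowed key types, allowed image types and the size limit are made-up policy values) using sshpubkeys and python-magic:

import magic
from sshpubkeys import SSHKey

ALLOWED_KEY_TYPES = {"ssh-rsa", "ssh-ed25519"}      # hypothetical policy
ALLOWED_IMAGE_TYPES = {"image/jpeg", "image/png"}   # hypothetical policy
MAX_IMAGE_BYTES = 512 * 1024                        # hypothetical limit


def validate_ssh_key(key_string):
    """Return an error message for an uploaded OpenSSH public key, or None if it is fine."""
    key = SSHKey(key_string, strict=True)
    try:
        key.parse()
    except Exception as err:  # sshpubkeys raises InvalidKeyError and friends
        return "invalid key: %s" % err
    key_type = key.key_type
    if isinstance(key_type, bytes):
        key_type = key_type.decode()
    if key_type not in ALLOWED_KEY_TYPES:
        return "key type %s is not allowed" % key_type
    return None


def validate_image(data):
    """Check an uploaded profile picture before storing it in LDAP."""
    if len(data) > MAX_IMAGE_BYTES:
        return "image is too large"
    mime = magic.from_buffer(data, mime=True)
    if mime not in ALLOWED_IMAGE_TYPES:
        return "image type %s is not allowed" % mime
    return None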

Now that the remaining issues with nacho are getting smaller, I’ll start to look into existing SSO solutions that can then be used with the LDAP backend. There are four solutions on my list at the moment: keycloak, ipsilon, lemonldap-ng and glewlwyd.

Planet Linux Australia: OpenSTEM: Assessment Time

For many of us, the colder weather has started to arrive and mid-year assessment is in full swing. Teachers are under the pump to produce mid-year reports and grades. The OpenSTEM® Understanding Our World® program aims to take the pressure off teachers by providing for continuous assessment throughout the term. Not only are teachers continually […]

Planet Linux Australia: Donna Benjamin: Makarrata

The time has come
To say fair's fair...

Dear members of the committee,

Please listen to the Uluru statement from the heart. Please hear those words. Please accept them, please act to adopt them.

Enshrine a voice for Australia’s first nation peoples in the Australian constitution.

Create a commission for Makarrata.

Invest in uncovering and telling the truth of our history.

We will be a stronger, wiser nation when we truly acknowledge the frontier wars and not only a stolen generation but stolen land, and stolen hope.

We have nothing to lose, and everything to gain through real heartfelt recognition and reconciliation.

Makarrata. Treaty. Sovereignty.

Please. I am Australian. I want this.

I felt sick shame when the prime minister rejected the Uluru statement. He did not, does not, speak for me.

Donna Benjamin
Melbourne, VIC.

Planet Linux Australia: Donna Benjamin: Leadership, and teamwork.

Photo by Mohamed Abd El Ghany - Women protestors in Tahrir Square, Egypt 2013.

I'm angry and defensive. I don't know why. So I'm trying hard to figure that out right now.

Here's some words.

I'm writing these words for myself to try and figure this out.
I'm hoping these words might help make it clear.
I'm fearful these words will make it worse.

But I don't want to be silent about this.

Content Warning: This post refers to genocide.

This is about a discussion at the teamwork and leadership workshop at DrupalCon. For perhaps 5 mins within a 90 minute session we talked about Hitler. It was an intensely thought provoking, and uncomfortable 5 minute conversation. It was nuanced. It wasn't really tweetable.

On Holocaust memorial day, it seems timely to explore whether or not we should talk about Hitler when exploring the nature of leadership. Not all leaders are good. Call them dictators, call them tyrants, call them fascists, call them evil. Leadership is defined differently by different cultures, at different times, and in different contexts.

Some people in the room were upset and disgusted that we had that conversation. I'm really very deeply sorry about that.

Some of them then talked about it with others afterwards, which is great. It was a confronting conversation, and one, frankly, we should all be having as genocide and fascism exist in very real ways in the very real world.

But some of those they spoke with, who weren't there, seem to have extrapolated from that conversation that it was something different to what I experienced in the room. I feel they formed opinions that I can only call, well, what words can I call those opinions? Uninformed? Misinformed? Out of context? Wrong? That's probably unfair, it's just my perspective. But from those opinions, they also made assumptions, and turned those assumptions into accusations.

One person said they were glad they weren't there, but clearly happy to criticise us from afar on twitter. I responded that I thought it was a shame they didn't come to the workshop, but did choose to publicly criticise our work. Others responded to that saying this was disgusting, offensive, unacceptable and inappropriate that we would even consider having this conversation. One accused me of trying to shut down the conversation.

So, I think perhaps the reason I'm feeling angry and defensive, is I'm being accused of something I don't think I did.

And I want to defend myself.

I've studied World War Two and the Genocide that took place under Hitler's direction.

My grandmother was arrested in the early 1930's and held in a concentration camp. She was, thankfully, released and fled Germany to Australia as a refugee before the war was declared. Her mother was murdered by Hitler. My grandfather's parents and sister were also murdered by Hitler.

So, I guess I feel like I've got a pretty strong understanding of who Hitler was, and what he did.

So when I have people telling me, that it's completely disgusting to even consider discussing Hitler in the context of examining what leadership is, and what it means? Fuck that. I will not desist. Hitler was a monster, and we must never forget what he was, or what he did.

During silent reflection on a number of images, I wrote this note.

"Hitler was a powerful leader. No question. So powerful, he destroyed the world."

When asked if they thought Hitler was a leader or not, most people in the room, including me, put up their hand. We were wrong.

The four people who put their hand up to say he was NOT a leader were right.

We had not collectively defined leadership at that point. We were in the middle of a process doing exactly that.

The definition we were eventually offered is that leaders must care for their followers, and must care for people generally.

At no point, did anyone in that room, consider the possibility that Hitler was a "Good Leader" which is the misinformed accusation I most categorically reject.

Our facilitator, Adam Goodman, told us we were all wrong, except the four who rejected Hitler as an example of a Leader, by saying, that no, he was not a leader, but yes, he was a dictator, yes he was a tyrant. But he was not a leader.

Whilst I agree, and was relieved by that reframing, I would also counter argue that it is English semantics.

Someone else also reminded us, that Hitler was elected. I too, was elected to the board of the Drupal Association, I was then appointed to one of the class Director seats. My final term ends later this year, and frankly, right now, I'm kind of wondering if I should leave right now.

Other people shown in the slide deck were Oprah Winfrey, Angela Merkel, Rosa Parks, Serena Williams, Marin Alsop, Sonia Sotomayor, a woman in military uniform, and a large group of women protesting in Tahrir Square in Egypt.

It also included Gandhi, and Mandela.

I observed that I felt sad I could think of no woman that I would list in the same breath as those two men.

So... for those of you who judged us, and this workshop, from what you saw on twitter, before having all the facts?
Let me tell you what I think this was about.

This wasn't about Hitler.

This was about leadership, and learning how we can be better leaders. I felt we were also exploring how we might better support the leaders we have, and nurture the ones to come. And I now also wonder how we might respectfully acknowledge the work and effort of those who've come and gone, and learn to better pass on what's important to those doing the work now.

We need teamwork. We need leadership. It takes collective effort, and most of all, it takes collective empathy and compassion.

Dries Buytaert was the final image in the deck.

Dries shared these 5 values and their underlying principles with us to further explore, discuss and develop together.

Prioritize impact
Impact gives us purpose. We build software that is easy, accessible and safe for everyone to use.

Better together
We foster a learning environment, prefer collaborative decision-making, encourage others to get involved and to help lead our community.

Strive for excellence
We constantly re-evaluate and assume that change is constant.

Treat each other with dignity and respect
We do not tolerate intolerance toward others. We seek first to understand, then to be understood. We give each other constructive criticism, and are relentlessly optimistic.

Enjoy what you do
Be sure to have fun.

I'm sorry to say this, but I'm really not having fun right now. But I am much clearer about why I'm feeling angry.

Photo Credit "Protesters against Egyptian President Mohamed Morsi celebrate in Tahrir Square in Cairo on July 3, 2013. Egypt's armed forces overthrew elected Islamist President Morsi on Wednesday and announced a political transition with the support of a wide range of political and religious leaders." Mohamed Abd El Ghany Reuters.

Planet Linux Australia: Donna Benjamin: DrupalCon Nashville

I'm going to Nashville!!

That is all. Carry on. Or... better yet - you should come too!

https://events.drupal.org/nashville2018

Planet Debian: Gunnar Wolf: «Understanding the Digital World» — By Brian Kernighan

I came across Kernighan's 2017 book, Understanding the Digital World — What You Need to Know about Computers, the Internet, Privacy, and Security. I picked it up thanks to a random recommendation I read somewhere I don't recall. And it's really a great read.
Of course, basically every reader who usually comes across this blog will be familiar with Kernighan. Be it because of his classic books from the 1970s and 1980s, The Unix Programming Environment or The C Programming Language, or because of the much more recent The Practice of Programming or The Go Programming Language, Kernighan is a world-renowned authority on technical content, written for highly technical professionals at the time of writing — and his books tend to define the playing field later on.
But this book I read is... For the general public. And it is superb at that.
Kernighan states in his Preface that he teaches a very introductory course at Princeton (with a title he admits is too vague, Computers in our World) to people in the social sciences and humanities. And this book shows how he explains all sorts of scary stuff to newcomers.
As it's easier than writing a full commentary on it, I'll just copy the table of contents (only to the section level; it would get too long if I also listed the subsections). The list of contents is very thorough (and the book is only 238 pages long!), but take a look at basically every chapter... and picture explaining those topics to computing laymen. An admirable feat!

  • Part I: Hardware
    • 1. What's in a computer?
      • Logical construction
      • Physical construction
      • Moore's Law
      • Summary
    • 2. Bits, Bytes, and Representation of Information
      • Analog versus Digital
      • Analog-Digital Conversion
      • Bits, Bytes and Binary
      • Summary
    • 3. Inside the CPU
      • The Toy Computer
      • Real CPUs
      • Caching
      • Other Kinds of Computers
      • Summary

    Wrapup on Hardware

  • Part II: Software
    • 4. Algorithms
      • Linear Algorithms
      • Binary Search
      • Sorting
      • Hard Problems and Complexity
      • Summary
    • 5. Programming and Programming Languages
      • Assembly Language
      • High Level Languages
      • Software Development
      • Intellectual Property
      • Standards
      • Open Source
      • Summary
    • 6. Software Systems
      • Operating Systems
      • How an Operating System works
      • Other Operating Systems
      • File Systems
      • Applications
      • Layers of Software
      • Summary
    • 7. Learning to Program
      • Programming Language Concepts
      • A First JavaScript Example
      • A Second JavaScript Example
      • Loops
      • Conditionals
      • Libraries and Interfaces
      • How JavaScript Works
      • Summary

    Wrapup on Software

  • Part III: Communications
    • 8. Networks
      • Telephones and Modems
      • Cable and DSL
      • Local Area Networks and Ethernet
      • Wireless
      • Cell Phones
      • Bandwidth
      • Compression
      • Error Detection and Correction
      • Summary
    • 9. The Internet
      • An Internet Overview
      • Domain Names and Addresses
      • Routing
      • TCP/IP protocols
      • Higher-Level Protocols
      • Copyright on the Internet
      • The Internet of Things
      • Summary
    • 10. The World Wide Web
      • How the Web works
      • HTML
      • Cookies
      • Active Content in Web Pages
      • Active Content Elsewhere
      • Viruses, Worms and Trojan Horses
      • Web Security
      • Defending Yourself
      • Summary
    • 11. Data and Information
      • Search
      • Tracking
      • Social Networks
      • Data Mining and Aggregation
      • Cloud Computing
      • Summary
    • 12. Privacy and Security
      • Cryptography
      • Anonymity
      • Summary
    • 13. Wrapping up

I must say, I also very much enjoyed learning of my overall ideological alignment with Brian Kernighan. I am very opinionated, but I don't believe he made me scoff even mildly — and he touches on many issues I have strong feelings about (free software, anonymity, the way the world works...).
So, maybe I enjoyed this book so much because I enjoy teaching, and it conveys great ways to teach the topics I'm most passionate about. But, anyway, I have felt for several days the urge to share this book with the group of people that come across my blog ☺


Planet Debian: Kees Cook: security things in Linux v4.17

Previously: v4.16.

Linux kernel v4.17 was released last week, and here are some of the security things I think are interesting:

Jailhouse hypervisor

Jan Kiszka landed Jailhouse hypervisor support, which uses static partitioning (i.e. no resource over-committing), where the root “cell” spawns new jails by shrinking its own CPU/memory/etc resources and hands them over to the new jail. There’s a nice write-up of the hypervisor on LWN from 2014.

Sparc ADI

Khalid Aziz landed the userspace support for Sparc Application Data Integrity (ADI or SSM: Silicon Secured Memory), which is the hardware memory coloring (tagging) feature in Sparc M7. I’d love to see this extended into the kernel itself, as it would kill linear overflows between allocations, since the base pointer being used is tagged to belong to only a certain allocation (sized to a multiple of cache lines). Any attempt to increment beyond, into memory with a different tag, raises an exception. Enrico Perla has some great write-ups on using ADI in allocators and a comparison of ADI to Intel’s MPX.

new kernel stacks cleared on fork

It was possible that old memory contents would live in a new process’s kernel stack. While normally not visible, “uninitialized” memory read flaws or read overflows could expose these contents (especially stuff “deeper” in the stack that may never get overwritten for the life of the process). To avoid this, I made sure that new stacks were always zeroed. Oddly, this “priming” of the cache appeared to actually improve performance, though it was mostly in the noise.

MAP_FIXED_NOREPLACE

As part of further defense in depth against attacks like Stack Clash, Michal Hocko created MAP_FIXED_NOREPLACE. The regular MAP_FIXED has a subtle behavior not normally noticed (but used by some, so it couldn’t just be fixed): it will replace any overlapping portion of a pre-existing mapping. This means the kernel would silently overlap the stack into mmap or text regions, since MAP_FIXED was being used to build a new process’s memory layout. Instead, MAP_FIXED_NOREPLACE has all the features of MAP_FIXED without the replacement behavior: it will fail if a pre-existing mapping overlaps with the newly requested one. The ELF loader has been switched to use MAP_FIXED_NOREPLACE, and it’s available to userspace too, for similar use-cases.

pin stack limit during exec

I used a big hammer and pinned the RLIMIT_STACK values during exec. There were multiple methods to change the limit (through at least setrlimit() and prlimit()), and there were multiple places the limit got used to make decisions, so it seemed best to just pin the values for the life of the exec so no games could get played with them. Too much assumed the value wasn’t changing, so better to make that assumption actually true. Hopefully this is the last of the fixes for these bad interactions between stack limits and memory layouts during exec (which have all been defensive measures against flaws like Stack Clash).

Variable Length Array removals start

Following some discussion over Alexander Popov’s ongoing port of the stackleak GCC plugin, Linus declared that Variable Length Arrays (VLAs) should be eliminated from the kernel entirely. This is great because it kills several stack exhaustion attacks, including weird stuff like stepping over guard pages with giant stack allocations. However, with several hundred uses in the kernel, this wasn’t going to be an easy job. Thankfully, a whole bunch of people stepped up to help out: Gustavo A. R. Silva, Himanshu Jha, Joern Engel, Kyle Spiers, Laura Abbott, Lorenzo Bianconi, Nikolay Borisov, Salvatore Mesoraca, Stephen Kitt, Takashi Iwai, Tobin C. Harding, and Tycho Andersen. With Linus Torvalds and Martin Uecker, I also helped rewrite the max() macro to eliminate false positives seen by the -Wvla compiler option. Overall, about 1/3rd of the VLA instances were solved for v4.17, with many more coming for v4.18. I’m hoping we’ll have entirely eliminated VLAs by the time v4.19 ships.

That’s it for now! Please let me know if you think I missed anything. Stay tuned for v4.18; the merge window is open. :)

© 2018, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.

Cryptogram: E-Mail Vulnerabilities and Disclosure

Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.

But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they've fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it's fixed. Or, if you know how to do it, turn off your e-mail client's ability to process HTML e-mail or -- even better -- stop decrypting e-mails from within the client. There's even more complex advice for more sophisticated users, but if you're one of those, you don't need me to explain this to you.

Consider your encrypted e-mail insecure until this is fixed.

All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It's a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.

Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.

Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names -- the e-mail vulnerability is called "Efail" -- websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.

This simultaneous announcement is best for security. While it's always possible that some organization -- either government or criminal -- has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.

Things get much more complicated when multiple vendors are involved. In this case, Efail isn't a vulnerability in a particular product; it's a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that's close to impossible.

Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn't leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser -- honestly, a really bad idea -- which resulted in details leaking.

After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published -- and lots of press followed.

All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or -- even more problematic -- communities without clear ownership. And that's what we have with OpenPGP. It's even worse when the bug involves the interaction between different parts of a system. In this case, there's nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn't its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.

Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use -- our phones and computers -- to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won't be. Sometimes it'll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don't even know who to blame for that one.

It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn't true for many of the embedded and low-cost Internet of things software packages. They're designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn't anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren't patchable at all. Right now, if you own a digital video recorder that's vulnerable to being recruited for a botnet -- remember Mirai from 2016? -- the only way to patch it is to throw it away and buy a new one.

Patching is starting to fail, which means that we're losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it's hard to see any other viable alternative.

Getting back to e-mail, the truth is that it's incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated and have been difficult to properly use the whole time. If we could start again, we would design something better and more user-friendly, but the huge number of legacy applications that use the existing standards mean that we can't. I tell people that if they want to communicate securely with someone, to use one of the secure messaging systems: Signal, Off-the-Record, or -- if having one of those two on your system is itself suspicious -- WhatsApp. Of course they're not perfect, as last week's announcement of a vulnerability (patched within hours) in Signal illustrates. And they're not as flexible as e-mail, but that makes them easier to secure.

This essay previously appeared on Lawfare.com.

Cryptogram: Router Vulnerability and the VPNFilter Botnet

On May 25, the FBI asked us all to reboot our routers. The story behind this request is one of sophisticated malware and unsophisticated home-network security, and it's a harbinger of the sorts of pervasive threats ­ from nation-states, criminals and hackers ­ that we should expect in coming years.

VPNFilter is a sophisticated piece of malware that infects mostly older home and small-office routers made by Linksys, MikroTik, Netgear, QNAP and TP-Link. (For a list of specific models, click here.) It's an impressive piece of work. It can eavesdrop on traffic passing through the router ­ specifically, log-in credentials and SCADA traffic, which is a networking protocol that controls power plants, chemical plants and industrial systems ­ attack other targets on the Internet and destructively "kill" its infected device. It is one of a very few pieces of malware that can survive a reboot, even though that's what the FBI has requested. It has a number of other capabilities, and it can be remotely updated to provide still others. More than 500,000 routers in at least 54 countries have been infected since 2016.

Because of the malware's sophistication, VPNFilter is believed to be the work of a government. The FBI suggested the Russian government was involved for two circumstantial reasons. One, a piece of the code is identical to one found in another piece of malware, called BlackEnergy, that was used in the December 2015 attack against Ukraine's power grid. Russia is believed to be behind that attack. And two, the majority of those 500,000 infections are in Ukraine and controlled by a separate command-and-control server. There might also be classified evidence, as an FBI affidavit in this matter identifies the group behind VPNFilter as Sofacy, also known as APT28 and Fancy Bear. That's the group behind a long list of attacks, including the 2016 hack of the Democratic National Committee.

Two companies, Cisco and Symantec, seem to have been working with the FBI during the past two years to track this malware as it infected ever more routers. The infection mechanism isn't known, but we believe it targets known vulnerabilities in these older routers. Pretty much no one patches their routers, so the vulnerabilities have remained, even if they were fixed in new models from the same manufacturers.

On May 30, the FBI seized control of toknowall.com, a critical VPNFilter command-and-control server. This is called "sinkholing," and serves to disrupt a critical part of this system. When infected routers contact toknowall.com, they will no longer be contacting a server owned by the malware's creators; instead, they'll be contacting a server owned by the FBI. This doesn't entirely neutralize the malware, though. It will stay on the infected routers through reboot, and the underlying vulnerabilities remain, making the routers susceptible to reinfection with a variant controlled by a different server.

If you want to make sure your router is no longer infected, you need to do more than reboot it, the FBI's warning notwithstanding. You need to reset the router to its factory settings. That means you need to reconfigure it for your network, which can be a pain if you're not sophisticated in these matters. If you want to make sure your router cannot be reinfected, you need to update the firmware with any security patches from the manufacturer. This is harder to do and may strain your technical capabilities, though it's ridiculous that routers don't automatically download and install firmware updates on their own. Some of these models probably do not even have security patches available. Honestly, the best thing to do if you have one of the vulnerable models is to throw it away and get a new one. (Your ISP will probably send you a new one free if you claim that it's not working properly. And you should have a new one, because if your current one is on the list, it's at least 10 years old.)

So if it won't clear out the malware, why is the FBI asking us to reboot our routers? It's mostly just to get a sense of how bad the problem is. The FBI now controls toknowall.com. When an infected router gets rebooted, it connects to that server to get fully reinfected, and when it does, the FBI will know. Rebooting will give it a better idea of how many devices out there are infected.

Should you do it? It can't hurt.

Internet of Things malware isn't new. The 2016 Mirai botnet, for example, created by a lone hacker and not a government, targeted vulnerabilities in Internet-connected digital video recorders and webcams. Other malware has targeted Internet-connected thermostats. Lots of malware targets home routers. These devices are particularly vulnerable because they are often designed by ad hoc teams without a lot of security expertise, stay around in networks far longer than our computers and phones, and have no easy way to patch them.

It wouldn't be surprising if the Russians targeted routers to build a network of infected computers for follow-on cyber operations. I'm sure many governments are doing the same. As long as we allow these insecure devices on the Internet ­ and short of security regulations, there's no way to stop them ­ we're going to be vulnerable to this kind of malware.

And next time, the command-and-control server won't be so easy to disrupt.

This essay previously appeared in the Washington Post.

EDITED TO ADD: The malware is more capable than we previously thought.

Cryptogram: Thomas Dullien on Complexity and Security

For many years, I have said that complexity is the worst enemy of security. At CyCon earlier this month, Thomas Dullien gave an excellent talk on the subject with far more detail than I've ever provided. Video. Slides.

Worse Than Failure: The New Guy (Part II): Database Boogaloo

When we last left our hero Jesse, he was wading through a quagmire of undocumented bad systems while trying to solve an FTP issue. Several months later, Jesse had things figured out a little better and was starting to feel comfortable in his "System Admin" role. He helped the company join the rest of the world by dumping Windows NT 4.0 and XP. The users whose DNS settings he bungled were now happily utilizing Windows 10 workstations. His web servers were running Windows Server 2016, and the SQL boxes were up to SQL 2016. Plus his nemesis Ralph had since retired. Or died. Nobody knew for sure. But things were good.

Despite all these efforts, there were still several systems that relied on Access 97 haunting him every day. Jesse spent tens of dollars of his own money on well-worn Access 97 programming books to help plug holes in the leaky dike. The A97 Finance system in particular was a complete mess to deal with. There were no clear naming guidelines and table locations were haphazard at best. Stored procedures and functions were scattered between the A97 VBS and the SQL DB. Many views/functions were nested with some going as far as eight layers while others would form temporary tables in A97 then continue to nest.

One of Jesse's small wins involved improving performance of some financial reporting queries that took minutes to run before but now took seconds. A few of these sped-up reports happened to be ones that Shane, the owner of the company, used frequently. The sudden time-savings got his attention to the point of calling Jesse in to his office to meet.

"Jesse! Good to see you!" Shane said in an overly cheerful manner. "I'm glad to talk to the guy who has saved me a few hours a week with his programmering fixes." Jesse downplayed the praise before Shane got to the point. "I'd like to find out from you how we can make further improvements to our Finance program. You seem to have a real knack for this."

Jesse, without thinking about it, blurted, "This here system is a pile of shit." Shane stared at him blankly, so he continued, "It should be rebuilt from the ground up by experienced software development professionals. That's how we make further improvements."

"Great idea! Out with the old, in with the new! You seem pretty well-versed in this stuff, when can you start on it?" Shane said with growing excitement. Jesse soon realized his response had backfired and he was now on the hook to the owner for a complete system rewrite. He took a couple classes on C# and ASP.NET during his time at Totally Legit Technical Institute so it was time to put that valuable knowledge to use.

Shane didn't just let Jesse loose on redoing the Finance program though. He insisted Jesse work closely with Linda, their CFO who used it the most. Linda proved to be very resistant to any kind of change Jesse proposed. She had mastered the painstaking nuances of A97 and didn't seem to mind fixing large amounts of bad data by hand. "It makes me feel in control, you know," Linda told him once after Jesse tried to explain the benefits of the rewrite.

While Jesse pecked away at his prototype, Linda would relentlessly nitpick any UI ideas he came up with. If she had it her way, the new system would only be usable by someone as braindead as her. "I don't need all these fancy menus and buttons! Just make it look and work like it does in the current system," she would say at least once a week. "And don't you dare take my manual controls away! I don't trust your automated robotics to get these numbers right!" In the times it wasn't possible to make something work like Access 97, she would run to Shane, who would have to talk her down off the ledge.

Even though Linda opposed Jesse at every turn, the new system was faster and very expandable. Using C# .NET 4.7.1 with WPF, it was much less of an eyesore. The database was also clearly defined with full documentation, both on the tables and in the stored procedures. The database size managed to go from 8 GB to .8 GB with no loss in data.

The time came at last for go-live of Finance 2.0. The thing Jesse was most excited about was shutting down the A97 system and feeling Linda die a little bit inside. He sent out an email to the Finance department with instructions for how to use it. The system was well-received by everyone except Linda. But that still led to more headaches for Jesse.

With Finance 2.0 in their hands, the rest of the users noticed the capabilities modern technology brought. The feature requests began pouring in with no way to funnel them. Linda refused to participate in feature reviews because she still hated the new system, so they all went to Shane, who greenlighted everything. Jesse soon found himself buried in the throes of the monster he created with no end in sight. To this day, he toils at his computer cranking out features while Linda sits and reminisces about the good old days of Access 97.


Planet Debian: Athos Ribeiro: Some notes on the OBS Documentation

This is my fourth post of my Google Summer of Code 2018 series. Links for the previous posts can be found below:

Open Build Service Manuals

OBS provides several manuals on their web site, including an admin and a user guide. Since I needed to travel to an academic conference last week (too many hours in airplanes), I took some time to read the full OBS documentation to have a better understanding of the tool we have been deploying. While reading the documentation, I took some notes on relevant points for our GSoC project (and sent a patch to fix a few typos in the OBS documentation), which I discuss below.

Hardware requirements

There is no need to distribute the different OBS services across separate machines, since our instance will not process heavy build loads. We do want to separate the server services from the OBS workers (package builders) so that expensive builds will not compromise the server's performance.

According to OBS documentation, we need

  • 1 core for each scheduler architecture
  • 4GB ram for each scheduler architecture
  • 50GB disk per architecture for each build distribution supported

We are working with a single build distribution (Debian unstable). Therefore, we need 50GB disk for our OBS instance for each supported architecture (unless we want to mirror the whole distribution instead of using the Download on Demand OBS feature).

We would like to work with 3 different architectures: i686, x86_64 and arm. Hence, according to the OBS admin guide, we need 150GB of disk, 12GB of RAM and 3 cores; a small sizing sketch follows the summary below.

Summary:

  • 12GB RAM
  • 150GB disk
  • 3 cores
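
As a quick sanity check of the sizing rule above (one core and 4GB of RAM per scheduler architecture, plus 50GB of disk per architecture and build distribution), here is a tiny sketch that recomputes the summary for our setup; the constants come straight from the admin guide figures quoted above:

ARCHITECTURES = ["i686", "x86_64", "arm"]
DISTRIBUTIONS = ["Debian unstable"]

cores = len(ARCHITECTURES)                               # 1 core per scheduler architecture
ram_gb = 4 * len(ARCHITECTURES)                          # 4GB RAM per scheduler architecture
disk_gb = 50 * len(ARCHITECTURES) * len(DISTRIBUTIONS)   # 50GB per architecture and distribution

print("cores: %d, RAM: %dGB, disk: %dGB" % (cores, ram_gb, disk_gb))
# cores: 3, RAM: 12GB, disk: 150GB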

OBS Instance Configuration

We want to change some instance configurations like

  • Change OBS instance description
  • Set administrator email
  • Disable new user sign-ups: since all builds in this OBS instance will be fired automatically and no new projects will be configured for now, we will not allow people to create accounts in our OBS instance.

It is important to note that the proper way of changing a project’s configuration is through API calls; therefore, we will need to make such calls in our salt scripts.

To list OBS configurations:

osc -A https://irill8.siege.inria.fr api /configuration

To redefine OBS configurations:

osc -A https://irill8.siege.inria.fr api /configuration -T new_config_file.xml

Workers configuration

OBS workers need to be allowed to connect to the server; this is configured in /etc/obs/BSConfig.pm. The server accepts connections from any node in the network by default, but we can (and should) force OBS to accept connections only from our own nodes.

Source Services

OBS provides a way to run scripts that change sources before builds. This may be useful for building against Clang.

To create a source service, we must create a script in the /usr/lib/obs/service/ directory and create a new _service file either in the package or in the project repository level.

_service is an XML file pointing to our script under /usr/lib/obs/service/ and providing possible parameters to the script:

<services>
 <service name="foobar.sh" mode="MODE">
 <param name="PARAMETER1">PARAMETER1_VALUE</param>
 </service>
</services>
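
To make the mechanism a bit more concrete, here is a minimal sketch of what such a service script could look like (this is not an existing OBS service; the script name, the parameter and the file it writes are all made up for illustration). The script receives the parameters declared in _service plus an --outdir argument telling it where to write the modified sources:

#!/usr/bin/env python3
# Hypothetical source service, e.g. installed as /usr/lib/obs/service/set-clang
import argparse
import pathlib


def main():
    parser = argparse.ArgumentParser(description="toy OBS source service")
    parser.add_argument("--compiler", default="clang")  # declared as a <param> in _service
    parser.add_argument("--outdir", required=True)      # passed by OBS itself
    args = parser.parse_args()

    # Drop a file into the prepared sources; a real service would rewrite
    # build files, apply a patch, or similar.
    out = pathlib.Path(args.outdir) / "compiler.conf"
    out.write_text("CC=%s\n" % args.compiler)


if __name__ == "__main__":
    main()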

Self signed certificates

For testing purposes, there is no need to generate proper SSL certificates; we can generate and self-sign our own:

mkdir /srv/obs/certs
openssl genrsa -out /srv/obs/certs/server.key 1024
openssl req -new -key /srv/obs/certs/server.key -out /srv/obs/certs/server.csr
openssl x509 -req -days 365 -in /srv/obs/certs/server.csr -signkey /srv/obs/certs/server.key -out /srv/obs/certs/server.crt
cat /srv/obs/certs/server.key /srv/obs/certs/server.crt > /srv/obs/certs/server.pem

Finally, we must trust our certificate:

cp /srv/obs/certs/server.pem /etc/ssl/certs/
c_rehash /etc/ssl/certs/

Message bus

OBS supports publishing events such as build results and package updates to RabbitMQ. In the future, we could also set up a RabbitMQ instance so that other services can listen to a queue with our Clang build results.
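
As a rough idea of what a consumer of those events could look like, here is a sketch using the pika library (the host, exchange name and routing key are assumptions for illustration; the actual values would depend on how we configure the RabbitMQ instance and OBS's publishing settings):

import pika

# Placeholder connection details for our hypothetical message bus
connection = pika.BlockingConnection(pika.ConnectionParameters(host="localhost"))
channel = connection.channel()

channel.exchange_declare(exchange="pubsub", exchange_type="topic", durable=True)
result = channel.queue_declare(queue="", exclusive=True)
queue_name = result.method.queue
# Listen for build failure events (the routing key pattern is an assumption)
channel.queue_bind(exchange="pubsub", queue=queue_name,
                   routing_key="#.package.build_fail")


def on_message(ch, method, properties, body):
    print("build event %s: %s" % (method.routing_key, body.decode()))


channel.basic_consume(queue=queue_name, on_message_callback=on_message,
                      auto_ack=True)
channel.start_consuming()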

Next steps (A TODO list to keep on the radar)

  • Write patches for the OBS worker issue described in post 3
  • Configure Debian projects on OBS with salt, not manually
  • Change the default builder to perform builds with clang
  • Trigger new builds by using the dak/mailing lists messages
  • Verify the rake-tasks.sh script idempotency and propose patch to opencollab repository
  • Separate salt recipes for workers and server (locally)
  • Properly set hostnames (locally)

Planet Debian: Louis-Philippe Véronneau: IMAP Spam Begone (ISBG) version 2.1.0 is out!

When I first started at the non-profit where I work, one of the problems people had was rampant spam in their email boxes. The email addresses we use are pretty old (10+ years) and over time they have been added to all the possible spam lists there are.

That would not be a real problem if our email hosting company did not have very bad spam filters. They are a workers' co-op and charge us next to nothing for hosting our emails, but sadly they lack the resources to run a real Bayesian-based spam filtering solution like SpamAssassin. "Luckily" for us, it seems that a lot of ISPs and email hosting companies also tend to have pretty bad spam filtering on the email boxes they provide, and there are a few programs out there to fix this.

One of the solutions I found to alleviate this problem was to use IMAP Spam Begone (ISBG), a script that makes it easy to scan an IMAP inbox for spam using your own SpamAssassin server and get your spam moved around via IMAP. Since then, I've been maintaining the upstream project.
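
For those curious about the general idea, here is a very rough sketch of the approach (this is not ISBG's actual code; the host name, credentials and folder names are placeholders): fetch unseen mail over IMAP, ask SpamAssassin via spamc whether each message is spam, and move the spam out of the inbox.

import imaplib
import subprocess

imap = imaplib.IMAP4_SSL("imap.example.org")       # placeholder server
imap.login("user@example.org", "secret")            # placeholder credentials
imap.select("INBOX")

_, data = imap.search(None, "UNSEEN")
for num in data[0].split():
    _, msg_data = imap.fetch(num, "(RFC822)")
    raw_message = msg_data[0][1]

    # 'spamc -c' asks spamd to check the message and exits non-zero for spam
    result = subprocess.run(["spamc", "-c"], input=raw_message,
                            stdout=subprocess.DEVNULL)
    if result.returncode != 0:
        imap.copy(num, "Junk")                      # placeholder spam folder
        imap.store(num, "+FLAGS", r"\Deleted")

imap.expunge()
imap.logout()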

At the time, ISBG was somewhat abandoned and was mostly a script made of old python2 code. No classes, no functions, just a long script that ran from top to bottom.

Well, I'm happy to say that ISBG now has a new major release! Version 2.1.0 is out and replaces the last main release, 1.0.0. From a script, ISBG has now evolved into a full-fledged python module using classes and functions. Although the code still works with python2, everything is now python3 compliant as well. We even started using CI tests recently!

That, and you know, tons of bugs were fixed. I'd like to thank all the folks who submitted patches, as very little of the actual code was written by me.

If you want to give ISBG a try, you can find the documentation here. Here's also a nice terminal capture I made of ISBG working in verbose mode:


Planet Debian: Elana Hashman: Looking back on "Teaching Python: The Hard Parts"

One of my goals when writing talks is to produce content with a long shelf life. Because I'm one of those weird people that prefers to write new talks for new events, I feel like it'd be a waste of effort if my talks didn't at least age well. So how do things measure up if I look back on one of my oldest?

"Teaching Python: The Hard Parts" remains one of my most popular talks, despite presenting it just one time at PyCon US 2016. For most of the past two years, it held steady as a top 10 talk from PyCon 2016 by popularity on YouTube (although it was recently overtaken by a few hundred views 😳), even when counting it against the keynotes (!), and most of the YouTube comments are shockingly nice (!!).

Well, actually

Not everyone was a fan. Obviously I should have known better than to tell instructors they didn't have to use Python 3:

Matt Williams: Obviously Python 3 should be taught over Python 2. In a few years time 2 will be completely unsupported http://pythonclock.org/

Did I give bad advice? Was mentioning the usability advantage of better library support and documentation SEO with Python 2 worth the irreparable damage I might have done to the community?

Matt's not the only one with a chip on his shoulder: the Python 2 → 3 transition has been contentious, and much ink has been spilled on the topic. A popular author infamously wrote a long screed claiming "PYTHON 3 IS SUCH A FAILURE IT WILL KILL PYTHON". Spoiler alert: Python is still alive, and the author updated his book for Python 3.

I've now spent a few years writing 2/3 compatible code, and am on the cusp of dropping Python 2 entirely. I've felt bad for not weighing in on the topic publicly, because people might have looked to this talk for guidance and wouldn't know my advice has changed over the past two years.

A little history

I wrote this talk based on my experiences teaching Python in the winter and fall of 2014, and presented it in early 2016. Back then, it wasn't clear if Python 3 adoption was going to pick up: Hynek wrote an article about Python 3 adoption a few months before PyCon that contained the ominous subheading "GLOOM". Python 3 only reached a majority mindshare of Python developers in May 2017!

Why? That's a topic long enough to fill a series of blog posts, but briefly: the number of breaking changes introduced in the first few releases of Python 3, coupled with the lack of compelling features to incentivize migration, led to slow adoption. Personally, I don't think Python 3 had that balance right to really take off until the 3.3 release. Version 3.3 was released in fall of 2012. Python 3.4 was only released in early 2014, just before I mentored at my first set of workshops!

This is a long-winded way to say, "when I gave this talk, it wasn't clear that telling workshop organizers to teach Python 3 would be good advice, because the ecosystem wasn't there yet."

The brave new world

But this is no longer the case! Python 3 adoption is overtaking Python 2 use, even in the enterprise space. The Python 2 clock keeps on ticking. Latest releases of Python 3 have compelling features to lure users, including strong, native concurrency support, formatted strings, better cross-system path support, and type hints.
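
To make that list concrete, here is a tiny illustrative example (not from the talk itself) that leans on several of those features at once: type hints, formatted string literals, pathlib for cross-platform paths, and native async concurrency. It needs Python 3.7 or later for asyncio.run:

import asyncio
from pathlib import Path


def greet(name: str) -> str:           # type hints (3.5+)
    return f"Hello, {name}!"           # formatted string literals (3.6+)


async def main() -> None:              # native async/await concurrency (3.5+)
    print(greet("PyCon"))
    print(Path.home() / "workshops")   # cross-platform path handling via pathlib


asyncio.run(main())                    # asyncio.run was added in 3.7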

This is to say, if I had to pick just one change to make to this talk if I gave it today, I would tell folks

USE PYTHON 3! ✨

Other updates

  • The documentation for packaging Python is a lot better now. There have been many good talks presented on the subject.
  • Distributing Python is still hard. There isn't a widely adopted practice for cross-platform management of compiled dependencies yet, although wheels are picking up steam. I'm currently working on the manylinux2010 update to address this problem on Linux systems.

Endorsements

Not one to let a single YouTube commenter rain on my parade, I am thrilled to say that some people in the community have written some awfully nice things about my talk. Thanks to all for doing so—pulling this together really brightened my day!

Blog Posts

Roxanne Johnson writes, "Elana Hashman’s PyCon talk on Teaching Python: The Hard Parts had me nodding so hard I thought I might actually be headbanging." 😄

Georgia Reh writes, "I am just in love with this talk. Any one who has seen me speak about teaching git knows I try really hard to not overload students with information, and Elana has a very clear idea of what a beginner needs to know when learning python versus what they can learn later." 💖

Tweets

When I presented this talk, I was too shy to attach my twitter handle to my slides, so all these folks tweeted at me by name. Wow!

Other

My talk was included in the "Awesome Python in Education" list. How cool 😎

Declaring a small victory

Writing this post has convinced me that "Teaching Python: The Hard Parts" meets some arbitrary criteria for "sufficiently forward-thinking." Much of the content still strikes me as fresh: as an occasional mentor for various technical workshops, I still keep running into trouble with platform diversity, the command line, and packaging; the "general advice" section is evergreen for Python and non-Python workshops alike. So with all that said, here's hoping that looking back on this talk will keep it alive. Give it a watch if you haven't seen it before!

If you like what you see and you're interested in checking out my speaking portfolio or would like to invite me to speak at your conference or event, do check out my talks page.

Krebs on SecurityLibrarian Sues Equifax Over 2017 Data Breach, Wins $600

In the days following revelations last September that big-three consumer credit bureau Equifax had been hacked and relieved of personal data on nearly 150 million people, many Americans no doubt felt resigned and powerless to control their information. But not Jessamyn West. The 49-year-old librarian from a tiny town in Vermont took Equifax to court. And now she’s celebrating a small but symbolic victory after a small claims court awarded her $600 in damages stemming from the 2017 breach.

Vermont librarian Jessamyn West sued Equifax over its 2017 data breach and won $600 in small claims court. Others are following suit.

Just days after Equifax disclosed the breach, West filed a claim with the local Orange County, Vt. courthouse asking a judge to award her almost $5,000. She told the court that her mother had just died in July, and that it added to the work of sorting out her mom’s finances while trying to respond to having the entire family’s credit files potentially exposed to hackers and identity thieves.

The judge ultimately agreed, but awarded West just $690 ($90 to cover court fees and the rest intended to cover the cost of up to two years of payments to online identity theft protection services).

In an interview with KrebsOnSecurity, West said she’s feeling victorious even though the amount awarded is a drop in the bucket for Equifax, which reported more than $3.4 billion in revenue last year.

“The small claims case was a lot more about raising awareness,” said West, a librarian at the Randolph Technical Career Center who specializes in technology training and frequently conducts talks on privacy and security.

“I just wanted to change the conversation I was having with all my neighbors who were like, ‘Ugh, computers are hard, what can you do?’ to ‘Hey, here are some things you can do’,” she said. “A lot of people don’t feel they have agency around privacy and technology in general. This case was about having your own agency when companies don’t behave how they’re supposed to with our private information.”

West said she’s surprised more people aren’t following her example. After all, if just a tiny fraction of the 147 million Americans who had their Social Security number, date of birth, address and other personal data stolen in last year’s breach filed a claim and prevailed as West did, it could easily cost Equifax tens of millions of dollars in damages and legal fees.

“The paperwork to file the claim was a little irritating, but it only cost $90,” she said. “Then again, I could see how many people probably would see this as a lark, where there’s a pretty good chance you’re not going to see that money again, and for a lot of people that probably doesn’t really make things better.”

Equifax is currently the target of several class action lawsuits related to the 2017 breach disclosure, but there have been a few other minor victories in state small claims courts.

In January, data privacy enthusiast Christian Haigh wrote about winning an $8,000 judgment in small claims court against Equifax for its 2017 breach (the amount was reduced to $5,500 after Equifax appealed).

Haigh is co-founder of litigation finance startup Legalist. According to Inc.com, Haigh’s company has started funding other people’s small claims suits against Equifax, too. (Legalist pays lawyers in plaintiff’s suits on an hourly basis, and takes a contingency fee if the case is successful.)

Days after the Equifax breach news broke, a 20-year-old Stanford University student published a free online bot that helps users sue the company in small claims court.

It’s not clear if the Web site tool is still functioning, but West said it was media coverage of this very same lawsuit bot that prompted her to file.

“I thought if some stupid online bot can do this, I could probably figure it out,” she recalled.

If you’re a DIY type of person, by all means file a claim in your local small claims court. And then write and publish about your experience, just like West did in a post at Medium.com.

West said she plans to donate the money from her small claims win to the Vermont chapter of the American Civil Liberties Union (ACLU), and that she hopes her case inspires others.

“Even if all this does is get people to use better passwords, or go to the library, or to tell a company, ‘No, that’s not good enough, you need to do better,’ that would be a good thing,” West said. “I wanted to show that there are constructive ways to seek redress of grievances about lots of different things, which makes me happy. I was willing to do the work and go to court. I look at this like an opportunity to educate and inform yourself, and realize there is a step you can take beyond just rending of garments and gnashing of teeth.”

Planet DebianShirish Agarwal: students, suicides, pressures and solutions.

A couple of days back, I heard of a student whose body was found hanging from the ceiling in a nearby college. I felt a bit shocked, as I had visited that college just some time back. It is also possible that I may have run into him and even had a conversation with him. No names were shared, and even if they had been, it’s doubtful I would remember him, as at events you meet so many people that it’s difficult to parse and remember names 😦 . I do feel sort of stretched at events, but that’s life I guess.

As no suicide note was found, the police are investigating the nature of the death from all angles. While it’s too early to conclude whether the student decided to take his own life or someone else decided to end it for some reason or other, nobody I talked to seemed perturbed even a tiny bit, probably because this has become the new normal. The major reasons, apart from those shared in a blog post, are that the cost of education is too high for today’s students.

There are also perceived career biases: people believe that Computer Science is a better path than law, even though IT layoffs have become the new normal. In this specific case, it was reported that the student who killed himself apparently wanted to be a lawyer, while the family wanted him to do CS (Computer Science).

The whole reskilling and STEM culture may also be harder to build, as government syllabuses, at least, are 10-15 years out of date. The same goes for the teachers, who would have to change a lot; sadly, it is all too common for teachers to be paid a pittance, even college professors.

I know of quite a few colleges in the city, across different domains, where suicides have taken place. The authorities have tried setting up wellness rooms where students who feel depressed can share their feelings, but probably due to feelings of shame or weakness, the ones most at risk do not allow their true feelings to surface. The eastern philosophy of ‘saving face’ is killing our young ones. There is one non-profit I know of, Connecting NGO (18002094353 toll-free, 9922001122 mobile), that students, or anyone in depression, can call. The listeners don’t give any advice, as they are not mental health experts, but they do give a patient hearing. Sometimes sharing or describing whatever you are facing may give you enough hope, or a mini-solution that you can walk towards.

I hope people would use the resources listed above.

Update – 15/06/2018 – A friend/acquaintance recently passed along a link which helped her and her near and dear ones better support her while she was facing depression. It pretty much seems like a yo-yo, but that’s how people might feel in a given situation.

I am sharing an email exchange in which I had asked the non-profit concerned whether anything more needed to be added to the blog post; this is what I heard from them –

Hello Shirish,

Warm Greetings from Connecting NGO

I read the blog link sent by you and the article looks good. I dont think anything needs to be added to that. Someday if you can come to the office, we can sit and talk about articles regarding emotional distress and suicides and how they need to be written. You have done a good job and thanks for sharing the link.
We will surely try to get in touch with X college sometime this month and talk with the teaching staff and authorities there along with the students. Thanks for the lead. Hoping to see you soon.

Regards,
Vikramsinh Pawar
Senior Programme Coordinator,

I was simply being cautious and sparing with words, as words used carelessly can be a trigger as well.

On one of the groups I am a member of, I came to know of another institute where there have been quite a few suicides. A few of us have decided to visit the institute with a trained mental health professional and see if we can be of any assistance in any way, in part by sharing our own tales of loss in the hope that others are able to grieve their losses or at least come to terms with them.

We have also asked the non-profit, so maybe they will also do an intervention of their own.

Rondam RamblingsTrump makes it look easy

One has to wonder, after Donald Trump's tidy wrapping-up of the North Korea situation (he did everything short of come right out and say "peace for our time!"), what all the fuss was ever about.  It took only a few months (or forty minutes, depending on how you count) to go from the brink of nuclear war to BFFs.  Today the U.S. seems to be getting along better with North Korea than with

Planet DebianSean Whitton: Debian Policy call for participation -- June 2018

I’d like to push a substantive release of Policy but I’m waiting for DDs to review and second patches in the following bugs. I’d be grateful for your involvement!

If a bug already has two seconds, or three seconds if the proposer of the patch is not a DD, please consider reviewing one of the others, instead, unless you have a particular interest in the topic of the bug.

If you’re not a DD, you are welcome to review, but it might be a more meaningful contribution to spend your time writing patches for bugs that lack them instead.

#786470 [copyright-format] Add an optional “License-Grant” field

#846970 Proposal for a Build-Indep-Architecture: control file field

#864615 please update version of posix standard for scripts (section 10.4)

#880920 Document Rules-Requires-Root field

#891216 Requre d-devel consultation for epoch bump

#897217 Vcs-Hg should support -b too

CryptogramRussian Censorship of Telegram

Internet censors have a new strategy in their bid to block applications and websites: pressuring the large cloud providers that host them. These providers have concerns that are much broader than the targets of censorship efforts, so they have the choice of either standing up to the censors or capitulating in order to maximize their business. Today's Internet largely reflects the dominance of a handful of companies behind the cloud services, search engines and mobile platforms that underpin the technology landscape. This new centralization radically tips the balance between those who want to censor parts of the Internet and those trying to evade censorship. When the profitable answer is for a software giant to acquiesce to censors' demands, how long can Internet freedom last?

The recent battle between the Russian government and the Telegram messaging app illustrates one way this might play out. Russia has been trying to block Telegram since April, when a Moscow court banned it after the company refused to give Russian authorities access to user messages. Telegram, which is widely used in Russia, works on both iPhone and Android, and there are Windows and Mac desktop versions available. The app offers optional end-to-end encryption, meaning that all messages are encrypted on the sender's phone and decrypted on the receiver's phone; no part of the network can eavesdrop on the messages.

Since then, Telegram has been playing cat-and-mouse with the Russian telecom regulator Roskomnadzor by varying the IP address the app uses to communicate. Because Telegram isn't a fixed website, it doesn't need a fixed IP address. Telegram bought tens of thousands of IP addresses and has been quickly rotating through them, staying a step ahead of censors. Cleverly, this tactic is invisible to users. The app never sees the change, or the entire list of IP addresses, and the censor has no clear way to block them all.

A week after the court ban, Roskomnadzor countered with an unprecedented move of its own: blocking 19 million IP addresses, many on Amazon Web Services and Google Cloud. The collateral damage was widespread: The action inadvertently broke many other web services that use those platforms, and Roskomnadzor scaled back after it became clear that its action had affected services critical for Russian business. Even so, the censor is still blocking millions of IP addresses.

More recently, Russia has been pressuring Apple not to offer the Telegram app in its iPhone App Store. As of this writing, Apple has not complied, and the company has allowed Telegram to download a critical software update to iPhone users (after what the app's founder called a delay last month). Roskomnadzor could further pressure Apple, though, including by threatening to turn off its entire iPhone app business in Russia.

Telegram might seem a weird app for Russia to focus on. Those of us who work in security don't recommend the program, primarily because of the nature of its cryptographic protocols. In general, proprietary cryptography has numerous fatal security flaws. We generally recommend Signal for secure SMS messaging, or, if having that program on your computer is somehow incriminating, WhatsApp. (More than 1.5 billion people worldwide use WhatsApp.) What Telegram has going for it is that it works really well on lousy networks. That's why it is so popular in places like Iran and Afghanistan. (Iran is also trying to ban the app.)

What the Russian government doesn't like about Telegram is its anonymous broadcast feature -- channel capability and chats -- which makes it an effective platform for political debate and citizen journalism. The Russians might not like that Telegram is encrypted, but odds are good that they can simply break the encryption. Telegram's role in facilitating uncontrolled journalism is the real issue.

Iran's attempts to block Telegram have been more successful than Russia's, less because Iran's censorship technology is more sophisticated and more because Telegram is not willing to go as far to defend its Iranian users. The reasons are not rooted in business decisions. Simply put, Telegram is a Russian product and the designers are more motivated to poke Russia in the eye. Pavel Durov, Telegram's founder, has pledged millions of dollars to help fight Russian censorship.

For the moment, Russia has lost. But this battle is far from over. Russia could easily come back with more targeted pressure on Google, Amazon and Apple. A year earlier, Zello used the same trick Telegram is using to evade Russian censors. Then, Roskomnadzor threatened to block all of Amazon Web Services and Google Cloud; and in that instance, both companies forced Zello to stop its IP-hopping censorship-evasion tactic.

Russia could also further develop its censorship infrastructure. If its capabilities were as finely honed as China's, it would be able to more effectively block Telegram from operating. Right now, Russia can block only specific IP addresses, which is too coarse a tool for this issue. Telegram's voice capabilities in Russia are significantly degraded, however, probably because high-capacity IP addresses are easier to block.

Whatever its current frustrations, Russia might well win in the long term. By demonstrating its willingness to suffer the temporary collateral damage of blocking major cloud providers, it prompted cloud providers to block another and more effective anti-censorship tactic, or at least accelerated the process. In April, Google and Amazon banned -- and technically blocked -- the practice of "domain fronting," a trick anti-censorship tools use to get around Internet censors by pretending to be other kinds of traffic. Developers would use popular websites as a proxy, routing traffic to their own servers through another website -- in this case Google.com -- to fool censors into believing the traffic was intended for Google.com. The anonymous web-browsing tool Tor has used domain fronting since 2014. Signal, since 2016. Eliminating the capability is a boon to censors worldwide.

Tech giants have gotten embroiled in censorship battles for years. Sometimes they fight and sometimes they fold, but until now there have always been options. What this particular fight highlights is that Internet freedom is increasingly in the hands of the world's largest Internet companies. And while freedom may have its advocates -- the American Civil Liberties Union has tweeted its support for those companies, and some 12,000 people in Moscow protested against the Telegram ban -- actions such as disallowing domain fronting illustrate that getting the big tech companies to sacrifice their near-term commercial interests will be an uphill battle. Apple has already removed anti-censorship apps from its Chinese app store.

In 1993, John Gilmore famously said that "The Internet interprets censorship as damage and routes around it." That was technically true when he said it but only because the routing structure of the Internet was so distributed. As centralization increases, the Internet loses that robustness, and censorship by governments and companies becomes easier.

This essay previously appeared on Lawfare.com.

Planet DebianEnrico Zini: Progress bar for file descriptors

I ran gzip on an 80GB file; it's processing, but who knows how much it has done yet, or when it will end? I wish gzip had a progressbar. Or MySQL. Or…

Ok. Now every program that reads a file sequentially can have a progressbar:

https://gitlab.com/spanezz/fdprogress

fdprogress

Print progress indicators for programs that read files sequentially.

fdprogress monitors file descriptor offsets and prints progressbars comparing them to file sizes.

Pattern can be any glob expression.

usage: fdprogress [-h] [--verbose] [--debug] [--pid PID] [pattern]

show progress from file descriptor offsets

positional arguments:
  pattern            file name to monitor

optional arguments:
  -h, --help         show this help message and exit
  --verbose, -v      verbose output
  --debug            debug output
  --pid PID, -p PID  PID of process to monitor
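Under the hood the idea is straightforward: on Linux, /proc/<pid>/fdinfo/<fd> exposes the current offset of every open file descriptor, and /proc/<pid>/fd/<fd> is a symlink to the open file itself, so its size can be stat'ed. Here is a minimal, hypothetical Python sketch of that mechanism (not the actual fdprogress code):

import os

def fd_progress(pid: int, fd: int) -> float:
    """Return progress (0.0-1.0) of file descriptor `fd` in process `pid`."""
    # /proc/<pid>/fdinfo/<fd> contains a line like "pos:   123456"
    with open(f"/proc/{pid}/fdinfo/{fd}") as info:
        pos = next(int(line.split()[1]) for line in info if line.startswith("pos:"))
    # /proc/<pid>/fd/<fd> is a symlink to the open file; os.stat() follows it
    size = os.stat(f"/proc/{pid}/fd/{fd}").st_size
    return pos / size if size else 0.0

# e.g. print(f"{fd_progress(12345, 3):.1%}") to see how far fd 3 of PID 12345 has read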

pv

pv has a --watchfd option that does most of what fdprogress is trying to do: use that instead.

fivi

fivi also exists, with specific features to show progressbars for filter commands.

CryptogramNew Data Privacy Regulations

When Mark Zuckerberg testified before both the House and the Senate last month, it became immediately obvious that few US lawmakers had any appetite to regulate the pervasive surveillance taking place on the Internet.

Right now, the only way we can force these companies to take our privacy more seriously is through the market. But the market is broken. First, none of us do business directly with these data brokers. Equifax might have lost my personal data in 2017, but I can't fire them because I'm not their customer or even their user. I could complain to the companies I do business with who sell my data to Equifax, but I don't know who they are. Markets require voluntary exchange to work properly. If consumers don't even know where these data brokers are getting their data from and what they're doing with it, they can't make intelligent buying choices.

This is starting to change, thanks to a new law in Vermont and another in Europe. And more legislation is coming.

Vermont first. At the moment, we don't know how many data brokers collect data on Americans. Credible estimates range from 2,500 to 4,000 different companies. Last week, Vermont passed a law that will change that.

The law does several things to improve the security of Vermonters' data, but several provisions matter to all of us. First, the law requires data brokers that trade in Vermonters' data to register annually. And while there are many small local data brokers, the larger companies collect data nationally and even internationally. This will help us get a more accurate look at who's in this business. The companies also have to disclose what opt-out options they offer, and how people can request to opt out. Again, this information is useful to all of us, regardless of the state we live in. And finally, the companies have to disclose the number of security breaches they've suffered each year, and how many individuals were affected.

Admittedly, the regulations imposed by the Vermont law are modest. Earlier drafts of the law included a provision requiring data brokers to disclose how many individuals' data they hold in their databases, what sorts of data they collect and where the data came from, but those were removed as the bill negotiated its way into law. A more comprehensive law would allow individuals to demand exactly what information these companies have about them -- and maybe allow individuals to correct and even delete that data. But it's a start, and the first statewide law of its kind to be passed in the face of strong industry opposition.

Vermont isn't the first to attempt this, though. On the other side of the country, Representative Norma Smith of Washington introduced a similar bill in both 2017 and 2018. It goes further, requiring disclosure of what kinds of data the broker collects. So far, the bill has stalled in the state's legislature, but she believes it will have a much better chance of passing when she introduces it again in 2019. I am optimistic that this is a trend, and that many states will start passing bills forcing data brokers to be increasingly more transparent in their activities. And while their laws will be tailored to residents of those states, all of us will benefit from the information.

A 2018 California ballot initiative could help. Among its provisions, it gives consumers the right to demand exactly what information a data broker has about them. If it passes in November, once it takes effect, lots of Californians will take the list of data brokers from Vermont's registration law and demand this information based on their own law. And again, all of us -- regardless of the state we live in -- will benefit from the information.

We will also benefit from another, much more comprehensive, data privacy and security law from the European Union. The General Data Protection Regulation (GDPR) was passed in 2016 and took effect on 25 May. The details of the law are far too complex to explain here, but among other things, it mandates that personal data can only be collected and saved for specific purposes and only with the explicit consent of the user. We'll learn who is collecting what and why, because companies that collect data are going to have to ask European users and customers for permission. And while this law only applies to EU citizens and people living in EU countries, the disclosure requirements will show all of us how these companies profit off our personal data.

It has already reaped benefits. Over the past couple of weeks, you've received many e-mails from companies that have you on their mailing lists. In the coming weeks and months, you're going to see other companies disclose what they're doing with your data. One early example is PayPal: in preparation for GDPR, it published a list of the over 600 companies it shares your personal data with. Expect a lot more like this.

Surveillance is the business model of the Internet. It's not just the big companies like Facebook and Google watching everything we do online and selling advertising based on our behaviors; there's also a large and largely unregulated industry of data brokers that collect, correlate and then sell intimate personal data about our behaviors. If we make the reasonable assumption that Congress is not going to regulate these companies, then we're left with the market and consumer choice. The first step in that process is transparency. These new laws, and the ones that will follow, are slowly shining a light on this secretive industry.

This essay originally appeared in the Guardian.

Worse Than FailureThe Manager Who Knew Everything

Have you ever worked for/with a manager that knows everything about everything? You know the sort; no matter what the issue, they stubbornly have an answer. It might be wrong, but they have an answer, and no amount of reason, intelligent thought, common sense or hand puppets will make them understand. For those occasions, you need to resort to a metaphorical clue-bat.

A few decades ago, I worked for a place that had a chief security officer who knew everything there was to know about securing their systems. Nothing could get past the policies she had put in place. Nobody could ever come up with any mechanism that could bypass her concrete walls, blockades and insurmountable defenses.

One day, she held an interdepartmental meeting to announce her brand spanking shiny new policies regarding this new-fangled email that everyone seemed to want to use. It would prevent unauthorized access, so only official emails sent by official individuals could be sent through her now-secured email servers.

I pointed out that email servers could only be secured to a point, because they had to have an open port to which email clients running on any internal computer could connect. As long as the port was open, anyone with internal access and nefarious intent could spoof a legitimate authorized email address and send a spoofed email.

She was incensed and informed me (and the group) that she knew more than all of us (together) about security, and that there was absolutely no way that could ever happen. I told her that I had some background in military security, and that I might know something that she didn't.

At this point, if she were smart, she would have asked me to explain. If she had already handled the case, then I'd have to shut up. If she hadn't, then she'd learn something, AND the system could be made more secure. She was not smart; she publicly called my bluff.

I announced that I accepted the challenge, and that I was going to use my work PC to send an email - from her - to the entire firm (using the restricted blast-to-all email address, which I would not normally be able to access as myself). In the email, I would explain that it was a spoof, and if they were seeing it, then the so-called impenetrable security might be somewhat less secure than she proselytized. In fact, I would do it in such a way that there would be absolutely no way to prove that I did it (other than my admission in the email).

She said that if I did that, I'd be fired. I responded that 1) if the system was as secure as she thought, there'd be nothing to fire me for, and 2) if they could prove that it was me, and tell me how I did it (aside from my admission that I had done it), I would resign. But if not, then she had to stop the holier-than-thou act.

Fifteen minutes later, I went back to my desk, logged into my work PC using the guest account, wrote a 20 line Cold Fusion script to attach to the email server on port 25, and filled out the fields as though it was coming from her email client. Since she had legitimate access to the firm-wide email blast address, the email server allowed it. Then I sent it. Then I secure-erased the local system event and assorted other logs, as well as editor/browser/Cold Fusion/server caches, etc. that would show what I did. Finally, I did a cold boot to ensure that even the RAM was wiped out.
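For anyone wondering what such a spoof looks like, here's a hypothetical sketch in Python rather than Cold Fusion (all the addresses and hostnames are made up); the point of the story is that any language able to open a connection to port 25 can hand an unauthenticated internal relay whatever sender it likes:

# Hypothetical illustration of the trick, not the original Cold Fusion script.
# An internal SMTP relay that does no sender authentication will accept a
# forged From address from anyone who can reach port 25.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "chief.security.officer@example.com"   # forged sender
msg["To"] = "all-staff@example.com"                  # restricted blast address
msg["Subject"] = "If you are reading this, the mail server trusted me"
msg.set_content("This message is a spoof, sent to show that anyone with "
                "internal access to port 25 can impersonate any sender.")

with smtplib.SMTP("mail.example.com", 25) as smtp:   # plain, unauthenticated SMTP
    smtp.send_message(msg)

Modern setups mitigate this with authenticated submission, SPF/DKIM checks and the like, but the underlying lesson about trusting anything that arrives on an open port still stands.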

Not long after that, her minions, the SAs, showed up at my desk, joking that they couldn't believe I had actually done it. I told them that I had wiped out all the logs where they'd look, the actual script that did it, and the disk space that all of the above had occupied. Although they knew the IP address of the PC from which the request came, they agreed that without those files there was no way they could prove it was me. Then they checked everything and verified what I told them.

This info made its way back up the chain until the SAs, my boss and I got called into her office, along with a C-level manager. Everything was explained to the C-manager. She was expecting him to fire me.

He simply looked at me and raised an eyebrow. I responded that I spent all of ten minutes doing it in direct response to her assertion that it was un-doable, and that I had announced my intentions to expose the vulnerability - to her - in front of everyone - in advance.

He chose to tell her that maybe she needed to accept that she doesn't know quite as much about everything as she thinks, and that she might want to listen to people a little more. She then pointed out that I had proven that email was totally insecure and that it should be banned completely (this was at the point where the business had mostly moved to email). I pointed out that I had worked there for many years, had no destructive tendencies, was only exposing a potential gap in security, and would not do it again. The SAs also pointed out that the stunt, though it proved the point, was harmless. They also mentioned that nobody else at the firm had access to Cold Fusion. I didn't think it helpful to mention that any programming language, not just Cold Fusion, could be used to connect to port 25 and do the same thing, so I kept that to myself. She huffed and puffed, but had no credibility at that point.

After that, my boss and I bought the SAs burgers and beer.


Planet DebianNorbert Preining: Microsoft fixed the Open R Debian package

I just got notice that Microsoft has updated the Debian packaging of Open R to properly use dpkg-divert. I checked the Debian packaging scripts and they now properly divert R and Rscript, and revert back to the Debian provided (r-base) version after removal of the packages.

Version 3.5.0 has been re-released; if you downloaded it from MRAN, you will need to re-download the file and be careful to use the new one, as the file name of the downloaded file is the same.

Thanks Microsoft for the quick fix, it is good news that those playing with Open R will not be left with a hosed system.

PS: I guess this post will by far not get the incredible attention the first one got 😉


Krebs on SecurityMicrosoft Patch Tuesday, June 2018 Edition

Microsoft today pushed out a bevy of software updates to fix more than four dozen security holes in Windows and related software. Almost a quarter of the vulnerabilities addressed in this month’s patch batch earned Microsoft’s “critical” rating, meaning malware or miscreants can exploit the flaws to break into vulnerable systems without any help from users.

Most of the critical fixes are in Microsoft browsers or browser components. One of the flaws, CVE-2018-8267, was publicly disclosed prior to today’s patch release, meaning attackers may have had a head start figuring out how to exploit the bug to attack Internet Explorer users.

According to Recorded Future, the most important patched vulnerability is a remote code execution vulnerability in the Windows Domain Name System (DNS), which is present in all supported versions of Windows from Windows 7 to Windows 10, as well as all versions of Windows Server from 2008 to 2016.

“The vulnerability allows an attacker to send a maliciously crafted DNS packet to the victim machine from a DNS server, or even send spoofed DNS responses from attack box,” wrote Allan Liska, a threat intelligence analyst at Recorded Future. “Successful exploitation of this vulnerability could allow an attacker to take control of the target machine.”

Security vendor Qualys says mobile workstations that may connect to untrusted Wi-Fi networks are at high risk and this DNS patch should be a priority for them. Qualys also notes that Microsoft this month is shipping updates to mitigate another variant of the Spectre vulnerability in Intel machines.

And of course there are updates available to address the Adobe Flash Player vulnerability that is already being exploited in active attacks. Read more on that here.

It’s a good idea to get in the habit of backing up your computer before applying monthly updates from Microsoft. Windows has some built-in tools that can help recover from bad patches, but restoring the system to a backup image taken just before installing the updates is often much less hassle and an added peace of mind when you’re sitting there praying for the machine to reboot after patching.

This assumes you can get around to backing up before Microsoft decides to patch Windows on your behalf. Microsoft says by default, Windows 10 receives updates automatically, “and for customers running previous versions, we recommend they turn on automatic updates as a best practice.” Microsoft doesn’t make it easy for Windows 10 users to change this setting, but it is possible.

For all other Windows OS users, if you’d rather be alerted to new updates when they’re available so you can choose when to install them, there’s a setting for that in Windows Update.

As always, if you experience any problems installing any of these updates, please leave a note about your issues in the comments below.

Additional reading:

Cisco Talos Intelligence blog take

The Zero Day Initiative’s Security Update Review

SANS Internet Storm Center

Microsoft Security Update Guide

Planet DebianJonathan McDowell: Hooking up Home Assistant to Alexa + Google Assistant

I have an Echo Dot. Actually I have two; one in my study and one in the dining room. Mostly we yell at Alexa to play us music; occasionally I ask her to set a timer, tell me what time it is or tell me the news. Having setup Home Assistant it seemed reasonable to try and enable control of the light in the dining room via Alexa.

Perversely I started with Google Assistant, even though I only have access to it via my phone. Why? Because the setup process was a lot easier. There are a bunch of hoops to jump through that are documented on the Google Assistant component page, but essentially you create a new home automation component in the Actions on Google interface, connect it with the Google OAuth stuff for account linking, and open up your Home Assistant instance to the big bad internet so Google can connect.

This final step is where I differed from the provided setup. My instance is accessible internally at home, but I haven’t wanted to expose it externally yet (and I suspect I never will, but will instead have the ability to VPN back in to access it, or similar). The default instructions need you to open up API access publicly, and configure up Google with your API password, which allows access to everything. I’d rather not.

So, firstly I configured up my external host with an Apache instance and a Let’s Encrypt cert (luckily I have a static IP, so this was actually the base host that the Home Assistant container runs on). Rather than using this to proxy the entire Home Assistant setup I created a unique /external/google/randomstring proxy just for the Google Assistant API endpoint. It looks a bit like this:

<VirtualHost *:443>
  ServerName my.external.host

  ProxyPreserveHost On
  ProxyRequests off

  RewriteEngine on

  # External access for Google Assistant
  ProxyPassReverse /external/google/randomstring http://hass-host:8123/api/google_assistant
  RewriteRule ^/external/google/randomstring$ http://hass-host:8123/api/google_assistant?api_password=myapipassword [P]
  RewriteRule ^/external/google/randomstring/auth$ http://hass-host:8123/api/google_assistant/auth?%{QUERY_STRING}&&api_password=myapipassword [P]

  SSLEngine on
  SSLCertificateFile /etc/ssl/my.external.host.crt
  SSLCertificateKeyFile /etc/ssl/private/my.external.host.key
  SSLCertificateChainFile /etc/ssl/lets-encrypt-x3-cross-signed.crt
</VirtualHost>

This locks down the external access to just being the Google Assistant end point, and means that Google have a specific shared secret rather than the full API password. I needed to configure up Home Assistant as well, so configuration.yaml gained:

google_assistant:
  project_id: homeautomation-8fdab
  client_id: oFqHKdawWAOkeiy13rtr5BBstIzN1B7DLhCPok1a6Jtp7rOI2KQwRLZUxSg00rIEib2NG8rWZpH1cW6N
  access_token: l2FrtQyyiJGo8uxPio0hE5KE9ZElAw7JGcWRiWUZYwBhLUpH3VH8cJBk4Ct3OzLwN1Fnw39SR9YArfKq
  agent_user_id: noodles@earth.li
  api_key: nyAxuFoLcqNIFNXexwe7nfjTu2jmeBbAP8mWvNea
  exposed_domains:
    - light

Setting up Alexa access is more complicated. Amazon Smart Home skills must call an AWS Lambda - the code that services the request is essentially a small service run within Lambda. Home Assistant supports all the appropriate requests, so the Lambda code is a very simple proxy these days. I used Haaska, which has a complete setup guide. You must do all 3 steps - the OAuth provider, the AWS Lambda and the Alexa Skill. Again, I wanted to avoid exposing the full API or the API password, so I forked Haaska to remove the use of a password and instead use a custom URL. I then added the following additional lines to the Apache config above:

# External access for Amazon Alexa
ProxyPassReverse /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home
RewriteRule /external/amazon/stringrandom http://hass-host:8123/api/alexa/smart_home?api_password=myapipassword [P]

In the config.json I left the password field blank and set url to https://my.external.host/external/amazon/stringrandom. configuration.yaml required less configuration than the Google equivalent:

alexa:
  smart_home:
    filter:
      include_entities:
        - light.dining_room_lights
        - light.living_room_lights
        - light.kitchen
        - light.snug

(I’ve added a few more lights, but more on the exact hardware details of those at another point.)

To enable in Alexa I went to the app on my phone, selected the “Smart Home” menu option, enabled my Home Assistant skill and was able to search for the available devices. I can then yell “Alexa, turn on the snug” and magically the light turns on.

Aside from being more useful (due to the use of the Dot rather than pulling out a phone) the Alexa interface is a bit smoother - the command detection is more reliable (possibly due to the more limited range of options it has to work out?) and adding new devices is a simple rescan. Adding new devices with Google Assistant seems to require unlinking and relinking the whole setup.

The only problem with this setup so far is that it’s only really useful for the room with the Alexa in it. Shouting from the living room in the hope the Dot will hear is a bit hit and miss, and I haven’t yet figured out a good alternative method for controlling the lights there that doesn’t mean using a phone or a tablet device.

TEDIdeas from the intersections: A night of talks from TED and Brightline

Onstage to host the event, Corey Hajim, TED’s business curator, and Cloe Shasha, TED’s speaker development director, kick off TEDNYC Intersections, a night of talks presented by TED and the Brightline Initiative. (Photo: Ryan Lash / TED)

At the intersections where we meet and collaborate, we can pool our collective wisdom to seek solutions to the world’s greatest problems. But true change begs for more than incremental steps and passive reactions — we need to galvanize transformation to create our collective future.

To celebrate the effort of bold thinkers building a better world, TED has partnered with the Brightline Initiative, a noncommercial coalition of organizations dedicated to helping leaders turn ideas into reality. In a night of talks at TED HQ in New York City — hosted by TED’s speaker development director Cloe Shasha and co-curated by business curator Corey Hajim and technology curator Alex Moura — six speakers and two performers showed us how we can effect real change. After opening remarks from Brightline’s Ricardo Vargas, the session kicked off with Stanford professor Tina Seelig.

Creativity expert Tina Seelig shares three ways we can all make our own luck. (Photo: Ryan Lash / TED)

How to cultivate more luck in your life. “Are you ready to get lucky?” asks Tina Seelig, a professor at Stanford University who focuses on creativity, entrepreneurship and innovation. While luck may seem to be brought on by chance alone, it turns out that there are ways you can enhance it — no matter how lucky or unlucky you think you are. Seelig shares three simple ways you can help luck to bend a little more in your direction: Take small risks that bring you outside your comfort zone; find every opportunity to show appreciation when others help you; and find ways to look at bad or crazy ideas with a new perspective. “The winds of luck are always there,” Seelig says, and by using these three tactics, you can build a bigger and bigger sail to catch them.

A new mantra: let’s fail mindfully. We celebrate bold entrepreneurs whose ingenuity led them to success — but how do we treat those who have failed? Leticia Gasca, founder and director of the Failure Institute, thinks we need to change the way we talk about business failure. After the devastating closing of her own startup, Gasca wiped the experience from her résumé and her mind. But she later realized that by hiding her failure, she was missing out on a valuable opportunity to connect. In an effort to embrace failure as an experience to learn from, Gasca co-created the Failure Institute, which includes international Fuck-Up Nights — spaces for vulnerability and connection over shared experiences of failure. Now, she advocates for a more holistic culture around failure. The goal of failing mindfully, Gasca says, is to “be aware of the consequences of the failed business,” and “to be aware of the lessons learned and the responsibility to share those learnings with the world.” This shift in the way we address failure can help make us better entrepreneurs, better people, and yes — better failures.

A police officer for 25 years, Tracie Keesee imagines a future where communities and police co-produce public safety in local communities. (Photo: Ryan Lash / TED)

Preserving dignity, guaranteeing justice. We all want to be safe, and our safety is intertwined, says Tracie Keesee, cofounder of the Center for Policing Equity. Sharing lessons she’s learned from 25 years as a police officer, Keesee reflects on the challenges — and opportunities — we all have for creating safer communities together. Policies like “Stop, Question and Frisk” set police and neighborhoods as adversaries, creating alienation, specifically among African Americans; instead, Keesee shares a vision for how the police and the neighborhoods they serve can come together to co-produce public safety. One example: the New York City Police Department’s “Build the Block Program,” which helps community members interact with police officers to share their experiences. The co-production of justice also includes implicit bias training for officers — so they can better understand how the biases we all carry impact their decision-making. By ending the “us vs. them” narrative, Keesee says, we can move forward together.

We can all be influencers. Success was once defined by power, but today it’s tied to influence, or “the ability to have an effect on a person or outcome,” says behavioral scientist Jon Levy. It rests on two building blocks: who you’re connected to and how much they trust you. In 2010, Levy created “Influencers” dinners, gathering a dozen high-profile people (who are strangers to each other) at his apartment. But how to get them to trust him and the rest of the group? He asks his guests to cook the meal and clean up. “I had a hunch this was working,” Levy recalls, “when one day I walked into my home and 12-time NBA All-Star Isiah Thomas was washing my dishes, while singer Regina Spektor was making guac with the Science Guy himself, Bill Nye.” From the dinners have emerged friendships, professional relationships and support for social causes. He believes we can cultivate our own spheres of influence at a scale that works for us. “If I can encourage you to do anything, it’s to bring together people you admire,” says Levy. “There’s almost no greater joy in life.”

Yelle and GrandMarnier rock the TED stage with electro-pop and a pair of bright yellow jumpsuits. (Photo: Ryan Lash / TED)

The intersection of music and dance. All the way from France, Yelle and GrandMarnier grace the TEDNYC stage with two electro-pop hits, “Interpassion” and “Ba$$in.” Both songs groove with robotic beats, Yelle’s hypnotic voice, kaleidoscopic rhythms and hypersonic sounds that rouse the audience to stand up, let loose and dance in the aisles.

How to be a great ally. We’re taught to believe that working hard leads directly to getting what you deserve — but sadly, this isn’t the case for many people. Gender, race, ethnicity, religion, disability, sexual orientation, class and geography — all of these can affect our opportunities for success, says writer and advocate Melinda Epler, and it’s up to all of us to do better as allies. She shares three simple ways to start uplifting others in the workplace: do no harm (listen, apologize for mistakes and never stop learning); advocate for underrepresented people in small ways (intervene if you see them being interrupted); and change the trajectory of a life by mentoring or sponsoring someone through their career. “There is no magic wand that corrects diversity and inclusion,” Epler says. “Change happens one person at a time, one act at a time, one word at a time.”

AJ Jacobs explains the powerful benefits of gratitude — and takes us on his quest to thank everyone who made his morning cup of coffee. (Photo: Ryan Lash / TED)

Lessons from the Trail of Gratitude. Author AJ Jacobs embarked on a quest with a deceptively simple idea at its heart: to personally thank every person who helped make his morning cup of coffee. “This quest took me around the world,” Jacobs says. “I discovered that my coffee would not be possible without hundreds of people I take for granted.” His project was inspired by a desire to overcome the brain’s innate “negative bias” — the psychological tendency to focus on the bad over the good — which is most effectively combated with gratitude. Jacobs ended up thanking everyone from his barista and the inventor of his coffee cup lid to the Colombian farmers who grew the coffee beans and the steelworkers in Indiana who made their pickup truck — and more than a thousand others in between. Along the way, he learned a series of perspective-altering lessons about globalization, the importance of human connection and more, which are detailed in his new TED Book, Thanks a Thousand: A Gratitude Journey. “It allowed me to focus on the hundreds of things that go right every day, as opposed to the three or four that go wrong,” Jacobs says of his project. “And it reminded me of the astounding interconnectedness of our world.”

Planet Linux AustraliaJulien Goodwin: Custom uBlox GPSDO board

For the next part of my ongoing project I needed to test the GPS receiver I'm using, a uBlox LEA-M8F (M8 series chip, LEA form factor, and with frequency outputs). Since the native 30.72MHz oscillator is useless for me I'm using an external TCVCXO (temperature compensated, voltage controlled oscillator) for now, with the DAC & reference needed to discipline the oscillator based on GPS. If uBlox would sell me the frequency version of the chip on its own that would be ideal, but they don't sell to small customers.

Here's a (rather modified) board sitting on top of an Efratom FRK rubidium standard that I'm going to mount to make a (temporary) home standard (that deserves a post of its own). To give a sense of scale the silver connector at the top of the board is a micro-USB socket.



Although a very simple board I had a mess of problems once again, both in construction and in component selection.

Unlike the PoE board from the previous post I didn't have this board manufactured. This was for two main reasons, first, the uBlox module isn't available from Digikey, so I'd still need to mount it by hand. The second, to fit all the components this board has a much greater area, and since the assembly house I use charges by board area (regardless of the number or density of components) this would have cost several hundred dollars. In the end, this might actually have been the sensible way to go.

By chance I'd picked up a new soldering iron at the same time these boards arrived, a Hakko FX-951 knock-off and gave it a try. Whilst probably an improvement over my old Hakko FX-888 it's not a great iron, especially with the knife tip it came with, and certainly nowhere near as nice to use as the JBC CD-B (I think that's the model) we have in the office lab. It is good enough that I'm probably going to buy a genuine Hakko FM-203 with an FM-2032 precision tool for the second port.

The big problem I had hand-soldering the boards was bridges on several of the components. Not just the tiny (0.65mm pitch, actually the *second largest* of eight packages for that chip) SC70 footprint of the PPS buffer, but also the much more generous 1.1mm pitch of the uBlox module. Luckily solder wick fixed most cases, plus one where I pulled the buffer and soldered a new one more carefully.

With components, once again I made several errors:
  • I ended up buying the wrong USB connectors for the footprint I chose (the same thing happened with the first run of USB-C modules I did in 2016), and while I could bodge them into use easily enough there wasn't enough mechanical retention so I ended up ripping one connector off the board. I ordered some correct ones, but because I wasn't able to wick all solder off the pads they don't attach as strongly as they should, and whilst less fragile, are hardly what I'd call solid.
  • The surface mount GPS antenna (Taoglas AP.10H.01 visible in this tweet) I used was 11dB higher gain than the antenna I'd tested with the devkit, I never managed to get it to lock while connected to the board, although once on a cable it did work ok. To allow easier testing, in the end I removed the antenna and bodged on an SMA connector for easy testing.
  • When selecting the buffer I accidentally chose one with an open-drain output, I'd meant to use one with a push-pull output. This took quite a silly long time for me to realise what mistake I'd made. Compounding this, the buffer is on the 1PPS line, which only strobes while locked to GPS, however my apartment is a concrete box, with what GPS signal I can get inside only available in my bedroom, and my oscilloscope is in my lab, so I couldn't demonstrate the issue live, and had to inject test signals. Luckily a push-pull is available in the same footprint, and a quick hot-air aided swap later (once parts arrived from Digikey) it was fixed.

Lessons learnt:
  • Yes I can solder down to ~0.5mm pitch, but not reliably.
  • More test points on dev boards, particularly all voltage rails, and notable signals not otherwise exposed.
  • Flux is magic, you probably aren't using enough.

Although I've confirmed all basic functions of the board work, including GPS locking, PPS (quick video of the PPS signal LED), and frequency output, I've still not yet tested the native serial ports and frequency stability from the oscillator. Living in an urban canyon makes such testing a pain.

Eventually I might also test moving the oscillator, DAC & reference into a mini oven to see if a custom OCXO would be any better, if small & well insulated enough the power cost of an oven shouldn't be a problem.

Also as you'll see if you look at the tweets, I really should have posted this almost a month ago, however I finished fixing the board just before heading off to California for a work trip, and whilst I meant to write this post during the trip, it's not until I've been back for more than a week that I've gotten to it. I find it extremely easy to let myself be distracted from side projects, particularly since I'm in a busy period at $ORK at the moment.

Planet DebianJohn Goerzen: Syncing with a memory: a unique use of tar –listed-incremental

I have a Nextcloud instance that various things automatically upload photos to. These automatic folders sync to a directory on my desktop. I wanted to pull things out of that directory without deleting them, and only once. (My wife might move them out of the directory on her computer, and I might arrange them into targets on my end.)

In other words, I wanted to copy a file from a source to a destination, but remember what had been copied before so it only ever copies once.

rsync doesn’t quite do this. But it turns out that tar’s listed-incremental feature can do exactly that. Ordinarily, it would delete files that were deleted on the source. But if we make the tar file with the incremental option, but extract it without, it doesn’t try to delete anything at extract time.

Here’s my synconce script:

#!/bin/bash

set -e

if [ -z "$3" ]; then
    echo "Syntax: $0 snapshotfile sourcedir destdir"
    exit 5
fi

SNAPFILE="$(realpath "$1")"
SRCDIR="$2"
DESTDIR="$(realpath "$3")"

cd "$SRCDIR"
if [ -e "$SNAPFILE" ]; then
    cp "$SNAPFILE" "${SNAPFILE}.new"
fi
tar "--listed-incremental=${SNAPFILE}.new" -cpf - . | \
    tar -xf - -C "$DESTDIR"
mv "${SNAPFILE}.new" "${SNAPFILE}"

Just have the snapshotfile be outside both the sourcedir and destdir and you’re good to go!

CryptogramNew iPhone OS May Include Device-Unlocking Security

iOS 12, the next release of Apple's iPhone operating system, may include features to prevent someone from unlocking your phone without your permission:

The feature essentially forces users to unlock the iPhone with the passcode when connecting it to a USB accessory everytime the phone has not been unlocked for one hour. That includes the iPhone unlocking devices that companies such as Cellebrite or GrayShift make, which police departments all over the world use to hack into seized iPhones.

"That pretty much kills [GrayShift's product] GrayKey and Cellebrite," Ryan Duff, a security researcher who has studied iPhone and is Director of Cyber Solutions at Point3 Security, told Motherboard in an online chat. "If it actually does what it says and doesn't let ANY type of data connection happen until it's unlocked, then yes. You can't exploit the device if you can't communicate with it."

This is part of a bunch of security enhancements in iOS 12:

Other enhancements include tools for generating strong passwords, storing them in the iCloud keychain, and automatically entering them into Safari and iOS apps across all of a user's devices. Previously, standalone apps such as 1Password have done much the same thing. Now, Apple is integrating the functions directly into macOS and iOS. Apple also debuted new programming interfaces that allow users to more easily access passwords stored in third-party password managers directly from the QuickType bar. The company also announced a new feature that will flag reused passwords, an interface that autofills one-time passwords provided by authentication apps, and a mechanism for sharing passwords among nearby iOS devices, Macs, and Apple TVs.

A separate privacy enhancement is designed to prevent websites from tracking people when using Safari. It's specifically designed to prevent share buttons and comment code on webpages from tracking people's movements across the Web without permission or from collecting a device's unique settings such as fonts, in an attempt to fingerprint the device.

The last additions of note are new permission dialogues macOS Mojave will display before allowing apps to access a user's camera or microphone. The permissions are designed to thwart malicious software that surreptitiously turns on these devices in an attempt to spy on users. The new protections will largely mimic those previously available only through standalone apps such as one called Oversight, developed by security researcher Patrick Wardle. Apple said similar dialog permissions will protect the file system, mail database, message history, and backups.

Worse Than FailureCodeSOD: Maximum Performance

There is some code that, at first glance, doesn't seem great, but doesn't leap out as a WTF. Stephe sends one such block.

double SomeClass::getMaxKeyValue(std::vector<double> list)
{
    double max = 0;
    for (int i = 0; i < list.size(); i++) {
        if (list[i] > max) {
            max = list[i];
        }
    }
    return max;
}

This isn’t great code. Naming a vector-type variable list is itself pretty confusing, the parameter should be marked as const to cut down on copy operations, and there’s an obvious potential bug: what happens if the input is nothing but negative values? You’ll incorrectly return 0, every time.

Still, this code, taken on its own, isn’t a WTF. We need more background.

First off, what this code doesn’t tell you is that we’re looking at a case of the parallel arrays anti-pattern. The list parameter might be something different depending on which key is being searched. As you can imagine, this creates spaghettified, difficult to maintain code. Code that performed terribly. Really terribly. Like “it must have crashed, no wait, no, the screen updated, no wait it crashed again, wait, it’s…” terrible.

Why was it so terrible? Well, for starters, the inputs to getMaxKeyValue were often arrays containing millions of elements. This method was called hundreds of times throughout the code, mostly inside of window redrawing code. All of that adds up to a craptacular application, but there’s one last, very important detail which brings this up to full WTF:

The inputs were already sorted in ascending order.

With a few minor changes, like taking advantage of the already-sorted vectors, Stephe took the 0.03333 frames-per-second performance up to something acceptable.
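A minimal sketch of that kind of fix (not Stephe's actual patch) might look like:

double SomeClass::getMaxKeyValue(const std::vector<double>& list)
{
    if (list.empty()) {
        return 0;         // keep the original behavior for empty input
    }
    return list.back();   // sorted ascending, so the last element is the maximum
}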

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianDirk Eddelbuettel: R 3.5.0 on Debian and Ubuntu: An Update

Overview

R 3.5.0 was released a few weeks ago. As it changes some (important) internals, packages installed with a previous version of R have to be rebuilt. This was known and expected, and we took several measured steps to get R binaries to everybody without breakage.

The question of but how do I upgrade without breaking my system was asked a few times, e.g., on the r-sig-debian list as well as in this StackOverflow question.

Debian

Core Distribution As usual, we packaged R 3.5.0 as soon as it was released – but only for the experimental distribution, awaiting a green light from the release masters to start the transition. A one-off repository drr35 (https://github.com/eddelbuettel/drr35) was created to provide R 3.5.0 binaries more immediately; this was used, e.g., by the r-base Rocker Project container / the official R Docker container, which we also update after each release.

The actual transition was started last Friday, June 1, and concluded this Friday, June 8. Well over 600 packages have been rebuilt under R 3.5.0, and are now ready in the unstable distribution from which they should migrate to testing soon. The Rocker container r-base was also updated.

So if you use Debian unstable or testing, these are ready now (or will be soon once migrated to testing). This should include most Rocker containers built from Debian images.

Contributed CRAN Binaries Johannes also provided backports with a -cran35 suffix in his CRAN-mirrored Debian backport repositories, see the README.

Ubuntu

Core (Upcoming) Distribution Ubuntu, for the upcoming 18.10, has undertaken a similar transition. Few users access this release yet, so the next section may be more important.

Contributed CRAN and PPA Binaries Two new Launchpad PPA repositories were created as well. Given the rather large scope of thousands of packages, multiplied by several Ubuntu releases, this too took a moment but is now fully usable and should get mirrored to CRAN ‘soon’. It covers the most recent and still supported LTS releases as well as the current release 18.04.

One PPA contains base R and the recommended packages, RRutter3.5. This is the source of the packages that will soon be available on CRAN. The second PPA (c2d4u3.5) contains over 3,500 packages mainly derived from CRAN Task Views. Details on updates can be found at Michael’s R Ubuntu Blog.

This can be used for, e.g., Travis if you manage your own sources as Dirk’s r-travis does. We expect to use this relatively soon, possibly as an opt-in via a variable upon which run.sh selects the appropriate repository set. It will also be used for Rocker releases built based off Ubuntu.

In both cases, you may need to adjust the sources list for apt accordingly.
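For example, on Ubuntu 18.04 ("bionic"), adding the two PPAs could look roughly like this (assuming they live under Michael's marutter account on Launchpad, as his earlier rrutter and c2d4u PPAs did; adjust if the actual location differs):

sudo add-apt-repository ppa:marutter/rrutter3.5
sudo add-apt-repository ppa:marutter/c2d4u3.5
sudo apt-get update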

Others

There may also be ongoing efforts within Arch and other Debian-derived distributions, but we are not really aware of what is happening there. If you use those, and coordination is needed, please feel free to reach out via the r-sig-debian list.

Closing

In case of questions or concerns, please consider posting to the r-sig-debian list.

Dirk, Michael and Johannes, June 2018

,

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #163

Here’s what happened in the Reproducible Builds effort between Sunday June 3 and Saturday June 9 2018:

Development work

Upcoming events

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

In addition, Mattia Rizzolo has been working on a large refactor of the Python part of the setup.

Documentation updates

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Mattia Rizzolo, Santiago Torres, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianSune Vuorela: Kirigaming – Kolorfill

Last time, I was doing a recipe manager. This time I’ve been doing a game with javascript and QtQuick, and for the first time dipping my feet into the Kirigami framework.

I’ve named the game Kolorfill, because it is about filling colors. It looks like this:

Kolorfill

The end goal is to make the board into one color in as few steps as possible. The way to do it is to use a "paint bucket" tool from the top left corner with various colors.

But enough talk. Let’s see some code:
https://cgit.kde.org/scratch/sune/kolorfill.git/

And of course, there are some QML tests for the curious.
A major todo item is saving the high score and getting that to work. Patches welcome. Or pointers to which QML components could help me with that.

Planet DebianShashank Kumar: Google Summer of Code 2018 with Debian - Week 4

After working on designs and getting my hands dirty with KIVY for the first 3 weeks, I became comfortable with my development environment and was able to deliver features within a couple of days with UI, tests, and documentation. In this blog, I explain how I converted all my Designs into Code and what I've learned along the way.

The Sign Up

New Contributor Wizard - SignUp

In order to implement the above design in KIVY, the best way is to use kv-lang. It involves writing a kv file which contains the widget tree of the layout and a lot more. One can learn more about kv-lang from the documentation. To begin with, let us look at the simplest kv file.

BoxLayout:
    Label:
        text: 'Hello'
    Label:
        text: 'World'
KV Language

In KIVY, widgets are used to build the UI. The Widget base class is derived to create all other UI elements like layouts, buttons, labels and so on. Indentation is used in kv just like in Python to define children. In our kv file above, we're using BoxLayout, which arranges all its children in either horizontal (the default) or vertical orientation. So, both Labels will be placed horizontally one after another.

Just like child widgets, one can also set values for properties, like Hello for the text of the first Label in the code above. More information about which properties can be defined for BoxLayout and Label can be found in their API documentation. All that remains is loading this .kv file (say sample.kv) from the module which runs the KIVY app. You might notice that for now Language and Timezone are kept static. The reason is that the Language support architecture is yet to be finalized, and both options would require a Drop Down list, the design and implementation of which will be handled separately.
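For readers new to KIVY, a minimal sketch of the Python side that loads such a file, assuming it is saved as sample.kv, could look like this (illustrative only, not code from the project):

from kivy.app import App


class SampleApp(App):
    # With no build() method returning a widget, KIVY automatically loads
    # "sample.kv" (the class name minus the "App" suffix, lowercased) and
    # uses its root rule (the BoxLayout above) as the root widget.
    pass


if __name__ == '__main__':
    SampleApp().run()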

In order for me to build the UI following the design, I had to experiment with widgets. When all was done, the signup.kv file contained the resulting UI.

Validations

Now, the good part is that we have a UI and the user can input data. The bad part is that the user can input any data! So, it's very important to validate whether the user is submitting data in the correct format or not. Specifically for the Sign Up module, I had to validate the Email, Passwords and Full Name submitted by the user. The validation module can be found here; it contains classes and methods for what I intended to do.

It's important that the user gets feedback after validation if something is wrong with the input. This is done by swapping the Label's text for an error message and its color for a bleeding red, by calling prompt_error_message on unsuccessful validation.
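As an illustration of the kind of checks involved (a simplified sketch; the rules here, like the minimum length, are my assumptions and not the project's actual validation module):

import re

EMAIL_RE = re.compile(r'^[^@\s]+@[^@\s]+\.[^@\s]+$')

def is_valid_email(email):
    """Rough format check: something@domain.tld with no whitespace."""
    return bool(EMAIL_RE.match(email.strip()))

def is_valid_password(password, confirmation):
    """Both password fields must match and be reasonably long."""
    return password == confirmation and len(password) >= 8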

Updating The Database

After successful validation, the Sign Up module steps forward to update the database through the sqlite3 module. But before that, the Email and Full Name are cleaned of any unnecessary whitespace, tabs and newline characters. A universally unique identifier, or uuid, is generated for the user_id. The plain-text Password is changed to a sha256 hash string for security. Finally, sqlite3 is integrated into updatedb.py to update the database. The SQLite database is stored in a single file named new_contributor_wizard.db. For user information, a table named USERS is created if not present during initialization of the UpdateDB instance. Finally, the information is stored, or an error is returned if the Email already exists. This is what the USERS schema looks like:

id VARCHAR(36) PRIMARY KEY,
email UNIQUE,
pass VARCHAR(64),
fullname TEXT,
language TEXT,
timezone TEXT
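A minimal sketch of the kind of record-creation step described above (illustrative only; the function name is invented and the project's actual updatedb.py differs):

import hashlib
import sqlite3
import uuid

def create_user(db_path, email, password, fullname, language, timezone):
    """Insert a new user row following the USERS schema above."""
    user_id = str(uuid.uuid4())                                 # 36-char id
    pass_hash = hashlib.sha256(password.encode()).hexdigest()   # 64 hex chars
    with sqlite3.connect(db_path) as conn:
        # sqlite3.IntegrityError is raised here if the Email already exists,
        # thanks to the UNIQUE constraint on the email column.
        conn.execute(
            "INSERT INTO USERS (id, email, pass, fullname, language, timezone) "
            "VALUES (?, ?, ?, ?, ?, ?)",
            (user_id, email.strip(), pass_hash, fullname.strip(),
             language, timezone),
        )
    return user_id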

After the Database is updated, i.e. after the user's account is successfully created, the natural flow is to take the user to the Dashboard screen. In order to keep this feature atomic, integration with the Dashboard will be done once all 3 features (SignUp, SignIn, and Dashboard) are merged. So, in order to showcase a successful sign-up, I've used a text confirmation. Below is the screencast of how the feature looks and what changes it makes in the database.

The Sign In

New Contributor Wizard - SignIn

If you look at the differences between the UI of the SignIn module and that of SignUp, you might notice a few changes.

  • The New Contributor Wizard is now right-aligned
  • Instead of 2 columns taking user information, here we have just one with Email and Password

Hence, the UI needs only a little change, and the result can be seen in signin.py.

Validations

Just like in the Sign Up module, we do not trust the user's input to be sane. Hence, we validate whether the user is giving us a correctly formatted Email and Password. The resulting validations for the Sign In module can be seen in validations.py.

Updating The Database

After successful validation, the next step is cleaning the Email and hashing the Password entered by the user. Here we have two possibilities for an unsuccessful sign-in:

  • Either the Email entered by the user doesn't exist in the database
  • Or the Password entered by the user is not correct

Otherwise, the user is signed in successfully. For an unsuccessful sign-in, I have created an exceptions.py module to prompt the error correctly. updatedb.py contains the database operations for the Sign In module.

The Exceptions

Exceptions.py of Sign In contains Exception classes and they are defined as

  • UserError - this class is used to throw an exception when Email doesn't exist
  • PasswordError - this class is used to throw an exception when Password doesn't match the one saved in the database with the corresponding email.
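For illustration, a minimal sketch of what such classes could look like (the project's actual exceptions.py may differ in detail):

class UserError(Exception):
    """Raised when the given Email does not exist in the database."""


class PasswordError(Exception):
    """Raised when the Password's hash does not match the one stored for that Email."""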

All these modules are integrated with signin.py and the resultant feature can be seen in action in the screencast below. Also, here's the merge request for the same.

The Dashboard

New Contributor Wizard - Dashboard

The Dashboard is completely different from the above two modules. If New Contributor Wizard is the culmination of different user stories and interactive screens, then the Dashboard is the protagonist of all the other features. A successful SignIn or SignUp will direct the user to the Dashboard. All the tutorials and tools will be available to the user from there on.

The UI

There are 2 segments of the Dashboard screen: one for all the menu options on the left, and another for the tutorials and tools of the selected menu option on the right. So, the screen on the right has to change every time a menu option is selected. KIVY provides a widget named ScreenManager to manage such an issue gracefully. But in order to control the transition of just a part of the screen rather than the entire screen, one has to dig deep into the API and work it out. That's when I remembered a sentence from the Zen of Python, "Simple is better than complex", and I chose the simple way of changing the screen, i.e. by adding and removing widgets with the add_widget/remove_widget functions.

In dashboard.py, I'm overriding the on_touch_down function to check which menu option the user clicks on and calling enable_menu accordingly.

The menu options on the left are not Button widgets. I had the option of using Button directly, but it would have needed customization to make them look pretty. Instead, I used BoxLayout and Label to build a button-like feature. In enable_menu I only check which option the user is clicking on, using the touch API. Then all I have to do is highlight the selected option and unfocus all the other options. The final UI can be seen in dashboard.kv.
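A rough sketch of that approach (purely illustrative: the widget ids and the courseware mapping below are invented, and the project's dashboard.py differs):

from kivy.uix.boxlayout import BoxLayout


class Dashboard(BoxLayout):
    # Illustrative only: "menu", "content" and self.courseware are assumed
    # names, not the project's actual widget ids or attributes.

    def on_touch_down(self, touch):
        # Work out which menu option (a Label inside a BoxLayout) was pressed.
        for option in self.ids.menu.children:
            if option.collide_point(*touch.pos):
                self.enable_menu(option)
                break
        return super(Dashboard, self).on_touch_down(touch)

    def enable_menu(self, selected):
        for option in self.ids.menu.children:
            # Highlight the selected option, un-focus the others.
            option.color = (1, 1, 1, 1) if option is selected else (1, 1, 1, 0.5)
        # Swap the right-hand segment by hand instead of using ScreenManager.
        self.ids.content.clear_widgets()
        self.ids.content.add_widget(self.courseware[selected.text])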

Courseware

Along with highlighting the selected option, the Dashboard also switches the right-hand side to the courseware, i.e. the tools and tutorials for the selected option. To give the application a modular structure, all these options are built as separate modules and then integrated into the Dashboard. Here are all the courseware modules built for the Dashboard:

  • blog - Users will be given tools to create and deploy their blogs and also learn the best practices.
  • cli - Understanding Command Line Interface will be the goal with all the tutorials provided in this module.
  • communication - Communication module will have tutorials for IRC and mailing lists and showcase best communication practices. The tools in this module will help user subscribe to the mailing lists of different open source communities.
  • encryption - Encrypting communication and data will be taught using this module.
  • how_to_use - This would be an introductory module for the user, to help them understand how to use this application.
  • vcs - Version Control Systems like git are important while working on a project, whether personal, with a team, or anything in between.
  • way_ahead - This module will help users reach out to different open source communities and organizations. It will also showcase open source projects to the user according to their preferences, along with information about programs like Google Summer of Code and Outreachy.
Settings

Below the menu are the options for settings. These settings also have separate modules just like courseware. Specifically, they are described as

  • application_settings - Would help the user manage settings which are specific to the KIVY application, like the resolution.
  • theme_settings - The user can manage theme-related settings, like the color scheme, using this option.
  • profile_settings - Would help the user manage information about themselves

The merge request which incorporates the Dashboard feature in the project can be seen in action in the screencast below.

The Conclusion

Week 4 was satisfying for me, as I felt I was adding value to the project with these merge requests. As soon as the merge requests are reviewed and merged into the repository, I'll work on integrating all these features together to create the seamless experience it should be for the user. There are a few necessary modifications to be made to the features, like supporting multiple languages and adding the gradient to the background as seen in the design. I'll create issues on redmine for these and will work on them as soon as the integration is done. My next task will be designing how tutorials and tasks will look in the right segment of the Dashboard.

Krebs on SecurityBad .Men at .Work. Please Don’t .Click

Web site names ending in new top-level domains (TLDs) like .men, .work and .click are some of the riskiest and spammy-est on the Internet, according to experts who track such concentrations of badness online. Not that there still aren’t a whole mess of nasty .com, .net and .biz domains out there, but relative to their size (i.e. overall number of domains) these newer TLDs are far dicier to visit than most online destinations.

There are many sources for measuring domain reputation online, but one of the newest is The 10 Most Abused Top Level Domains list, run by Spamhaus.org. Currently at the #1 spot on the list (the worst) is .men: Spamhaus says of the 65,570 domains it has seen registered in the .men TLD, more than half (55 percent) were “bad.”

According to Spamhaus, a TLD may be “bad” because it is tied to spam or malware dissemination (or both). More specifically, the “badness” of a given TLD may be assigned in two ways:

“The ratio of bad to good domains may be higher than average, indicating that the registry could do a better job of enforcing policies and shunning abusers. Or, some TLDs with a high fraction of bad domains may be quite small, and their total number of bad domains could be relatively limited with respect to other, bigger TLDs. Their total “badness” to the Internet is limited by their small total size.”

More than 1,500 TLDs exist today, but hundreds of them were introduced in just the past few years. The nonprofit organization that runs the domain name space — the Internet Corporation for Assigned Names and Numbers (ICANN) — enabled the new TLDs in response to requests from advertisers and domain speculators — even though security experts warned that an onslaught of new, far cheaper TLDs would be a boon mainly to spammers and scammers.

And what a boon it has been. The newer TLDs are popular among spammers and scammers alike because domains in many of these TLDs can be had for pennies apiece. But not all of the TLDs on Spamhaus’ list are prized for being cheaper than generic TLDs (like .com, .net, etc.). The cheapest domains at half of Spamhaus’ top ten “baddest” TLDs go for prices between $6 and $14.50 per domain.

Still, domains in the remaining five Top Bad TLDs can be had for between 48 cents and a dollar each.

Security firm Symantec in March 2018 published its own Top 20 list of Shady TLDs:

Symantec’s “Top 20 Shady TLDs,” published in March 2018.

Spamhaus says TLD registries that allow registrars to sell high volumes of domains to professional spammers and malware operators in essence aid and abet the plague of abuse on the Internet.

“Some registrars and resellers knowingly sell high volumes of domains to these actors for profit, and many registries do not do enough to stop or limit this endless supply of domains,” Spamhaus’ World’s Most Abused TLDs page explains.

Namecheap, a Phoenix, Ariz. based domain name registrar that in Oct. 2017 was the fourth-largest registrar, currently offers by a wide margin the lowest registration prices for three out of 10 of Spamhaus’ baddest TLDs, selling most for less than 50 cents each.

Namecheap also is by far the cheapest registrar for 11 of Symantec’s Top 20 Shady New TLDs, including .date, .trade, .review, .party, .loan, .kim, .bid, .win, .racing, .download and .stream.

I should preface the following analysis by saying the prices that domain registrars charge for various TLD name registrations vary frequently, as do the rankings in these Top Bad TLD lists. But I was curious if there was any useful data about new TLD abuse at tld-list.com — a comparison shopping page for domain registrars.

What I found is that although domains in almost all of the above-mentioned TLDs are sold by dozens of registrars, most of these registrars have priced themselves out of the market for the TLDs that are currently so-favored by spammers and scammers.

Not so with Namecheap. True to its name, when it is the cheapest, Namecheap consistently undercuts the average per-domain price that other registrars selling the same TLD charge by approximately 98 percent. The company appears to have specifically targeted these TLDs with price promotions that far undercut competitors.

Namecheap is by far the lowest-priced registrar for more than half of the 20 Top Bad TLDs tracked by Symantec earlier this year.

Here’s a look at the per-domain prices charged by the registrars for the TLDs named in Spamhaus’s top 10:

The lowest, highest, and average prices charged by registrars for the domains in Spamhaus’ Top 10 “Bad” TLDs. Click to enlarge.

This a price comparison for Symantec’s Top 20 list:

The lowest, highest, and average prices charged by registrars for the domains in Symantec’s Top 20 “Shady” TLDs. Click to enlarge.

I asked Namecheap’s CEO why the company’s name comes up so frequently in these lists, and if there was any strategy behind cornering the market for so many of the “bad” and “shady” TLDs.

“Our business model, as our name implies is to offer choice and value to everyone in the same way companies like Amazon or Walmart do,” Namecheap CEO Richard Kirkendall told KrebsOnSecurity. “Saying that because we offer low prices to all customers we somehow condone nefarious activity is an irresponsible assumption on your part. Our commitment to our millions of customers across the world is to continue to bring them the best value and choice whenever and wherever we can.”

Kirkendall said expecting retail registrars that compete on pricing to stop doing that is not realistic and would be the last place he would go to for change.

“On the other hand, if you do manage to secure higher pricing you will also in effect tax everyone for the bad actions of a few,” Kirkendall said. “Is this really the way to solve the problem? While a few dollars may not matter to you, there are plenty of less fortunate people out there where it does matter. They say the internet is the great equalizer, by making things cost more simply for the sake of creating barriers truly and indiscriminately creates barriers for everyone, not just for those you target.”

Incidentally, should you ever wish to block all domains from any given TLD, there are a number of tools available to do that. One of the easiest to use is Cisco‘s OpenDNS, which includes up to 30 filters for managing traffic, content and Web sites on your computer and home network — including the ability to block entire TLDs if that’s something you want to do.

I’m often asked if blocking sites from loading when they’re served from specific TLDs or countries (like .ru) would be an effective way to block malware and phishing attacks. It’s important to note here that it’s not practical to assume you can block all traffic from given countries (that somehow blacklisting .ru is going to block all traffic from Russia). It also seems likely that the .com TLD space and US-based ISPs are bigger sources of the problem overall.

But that’s not to say blocking entire TLDs is a horrible idea for individual users and home network owners. I’d wager there are a whole host of TLDs (including all of the above “bad” and “shady” TLDs) that most users could block across the board without forgoing anything they might otherwise want to have seen or visited. I mean seriously: When was the last time you intentionally visited a site registered in the TLD for Gabon (.ga)?

And while many people might never click on a .party or .men domain in a malicious or spammy email, these domains are often loaded only after the user clicks on a malicious or booby-trapped link that may not look so phishy — such as a .com or .org link.

Update: 11:46 a.m. ET: An earlier version of this story incorrectly stated the name of the company that owns OpenDNS.

Sociological ImagesAnthony Bourdain, Gastrodiplomacy, and the Sociology of Food

“There is a real danger of taking food too seriously. Food needs to be part of a bigger picture”
-Anthony Bourdain

As someone who writes about food, about its ability to offer a window into the daily lives and circumstances of people around the globe, Anthony Bourdain’s passing hit me particularly hard. If you haven’t seen them, his widely-acclaimed shows such as No Reservations and Parts Unknown were a kind of personal narrative meets travelogue meets food TV. They trailed the chef as he immersed himself in the culture of a place, sometimes one heavily touristed, sometimes more removed from the lives of most food media consumers, and showed us what people ate, at home, in the streets and in local restaurants. While much of food TV focuses on high end cuisine, Bourdain’s art was to show the craftsmanship behind the everyday foods of a place. He lovingly described the food’s preparation, the labor involved, and the joy people felt in coming together to consume it in a way that was palpable, even (or especially) when the foods themselves were unusual.

At their best, these shows taught us about the history and culture of particular places, and of the ways places have suffered through the ills of global capitalism and imperialism. His visit to the Congo was particularly memorable; While eating tiger fish wrapped in banana leaves, spear-caught and prepared by local fishermen, he delved into the colonial history and present-day violence that continue to devastate this natural-resource rich country. After visiting Cambodia he railed against Henry Kissinger and the American bombing campaign that killed over 250,000 people and gave rise, in part, to the murderous regime of the Khmer Rouge. In Jerusalem, he showed his lighter side, exploring the Israeli-Palestinian conflict through debates over who invented falafel. But in the same episode, he shared maqluba, “upside down” chicken and rice, with a family of Palestinian farmers in Gaza, and showed the basic humanity and dignity of a people living under occupation.

Bourdain’s shows embodies the basic premise of the sociology of food. Food is deeply personal and cultural. Over twenty-five years ago Anthony Winson called it the “intimate commodity” because it provides a link between our bodies, our cultures and the global political economies and ecologies that shape how and by whom food is cultivated, distributed and consumed. Bourdain’s show focuses on what food studies scholars call gastrodiplomacy, the potential for food to bring people together, helping us to understand and sympathize with one another’s circumstances. As a theory, it embodies the old saying that “the best way to our hearts is through our stomachs.” This theory has been embraced by nations like Thailand, which has an official policy promoting the creation of Thai restaurants in order to drive tourism and boost the country’s prestige. And the foods of Mexico have been declared World Heritage Cuisines by UNESCO, the same arm of the United Nations that marks world heritage sites. Less officially, we’ve seen a wave of efforts to promote the cuisines of refugees and migrants through restaurants, supper clubs and incubators like San Francisco’s La Cocina that help immigrant chefs launch food businesses.

But food has often been and continues to be a site of violence as well. Since 1981 750,000 farms have gone out of business, resulting in widespread rural poverty and epidemic levels of suicide. Food system workers, from farms to processing plants to restaurants, are among the most poorly paid members of our society, and often rely on food assistance. The food industry is highly centralized. The few major players in each segment—think Wal-Mart for groceries or Tyson for chicken—exert tremendous power on suppliers, creating dire conditions for producers. Allegations of sexual assault pervade the food industry; there are numerous complaints against well-known chefs and a study from Human Rights Watch revealed that more than 80% of women farmworkers have experienced harassment or assault on the job, a situation so dire that these women refer to it as the “field of panties” because rape is so common. Racism is equally rampant, with people of color often confined to poorly-paid “back of the house” positions while whites make up the majority of high-end servers, sommeliers, and celebrity chefs.

More than any other celebrity chef, Bourdain understood that food is political, and used his platform to address current social issues. His outspoken support for immigrant workers throughout the food system, and for immigrants more generally, colored many of his recent columns. And as the former partner of Italian actress Asia Argento, one of the first women to publicly accuse Harvey Weinstein, Bourdain used his celebrity status to amplify the voice of the #metoo movement, a form of support that was beautifully incongruous with his hyper-masculine image. Here Bourdain embodied another of the fundamental ideas of the sociology of food, that understanding the food system is intricately interwoven with efforts to improve it.

Bourdain’s shows explored food in its social and political contexts, offering viewers a window into worlds that often seemed far removed. He encouraged us to eat one another’s cultural foods, and to understand the lives of those who prepared them. Through food, he urged us to develop our sociological imaginations, putting individual biographies in their social and historical contexts. And while he was never preachy, his legacy urges us to get involved in the confluence of food movements, ensuring that those who feed us are treated with dignity and fairness, and are protected from sexual harassment and assault.

The Black feminist poet Audre Lorde once wrote that “it is not our differences that divide us. It is our inability to recognize, accept, and celebrate those differences.” Bourdain showed us that by learning the stories of one another’s foods, we can learn the histories and develop the empathy necessary to work for a better world.

Rest in Peace.

Alison Hope Alkon is associate professor of sociology and food studies at University of the Pacific. Check out her Ted talk, Food as Radical Empathy

(View original at https://thesocietypages.org/socimages)

Sociological ImagesAnthony Bourdain, Honorary Sociologist

I was absolutely devastated to hear about Anthony Bourdain’s passing.

I always saw Bourdain as more than just a celebrity chef or TV host. I saw him as one of us, a sociologist of sorts, someone deeply invested in understanding and teaching about culture and community. He had a gift for teaching us about social worlds beyond our own, and making these worlds accessible. In many ways, his work accomplished what so often we as sociologists strive to do.

Photo Credit: Adam Kuban, Flickr CC

I first read Bourdain’s memoir, Kitchen Confidential, at the age of twenty. The gritty memoir is its own ethnography of sorts, detailing the stories, experiences, and personalities working behind the sweltering heat of the kitchen line. At the time I was struggling as a first-generation, blue-collar student suddenly immersed in one of the wealthiest college campuses in the United States. Between August and May of each academic year, I attended classes with the children of CEOs and world leaders, yet come June I returned to the kitchens of a country club in western New York, quite literally serving alumni of my college. I remember reading the book thinking – though I knew it wasn’t academic sociology – “wait, you can write about these things?” These social worlds? These stories we otherwise overlook and ignore? I walked into my advisor’s office soon after, convinced I too would write such in-depth narratives about food-related subcultures. “Well,” he agreed, “you could research something like food culture or alternative food movements.” Within six months of that conversation, I had successfully secured my first research fellowship and taken on my first sociology project.

Like his writing, Bourdain’s television shows taught his audience something new about our relationships to food. Each episode of A Cook’s Tour, No Reservations, and Parts Unknown, went beyond the scope of a typical celebrity chef show. He never featured the World’s Biggest Hamburger, nor did he ever critique foods as “bizarre” or “strange.” Instead, he focused on what food meant to people across the globe. Food, he taught us, and the pride attached to it, are universal.

Rather than projecting narratives or misappropriating words, he let people speak for themselves. He strived to show the way things really are and to treat people with the utmost dignity, yet was careful never to glamorize or romanticize poverty, struggle, or difference.  In one of my favorite episodes of No Reservations, Bourdain takes us through Peru, openly critiquing celebrities who have glorified the nation as a place to find peace and spiritual enlightenment:

Sting and all his buddies come down here, they’re going on and on and on and on about preserving traditional culture, right? Because that’s what we’re talking about here. But what we’re also talking about here is poverty. [It’s] backbreaking work. Isn’t it kind of patronizing to say ‘oh they’re happier, they live a simpler life closer to the soil.’ Maybe so, but it’s also a pretty hard, scrabbling, unglamorous life when you get down to it.

My parents and I met Anthony Bourdain in 2009 at a bar in Buffalo where he was filming an episode of No Reservations. My father was thrilled to tell Bourdain how much he loved the episode featuring his homeland of Colombia. It was perhaps one of the first times in my father’s 38 years in the United States that he felt like American television portrayed Colombia in a positive light, showing the beauty, resilience, and complex history of the nation rather than the images of drug wars and violence present elsewhere in depictions of the country. That night in that dive bar, Bourdain graciously spoke with my dad about how beautiful he found the country and its people. Both the episode and their conversation filled my father with immense pride, ultimately restoring some of the dignity that had been repeatedly stripped from him through years of indignant stereotypes about his home.

In the end, isn’t that what many of us sociologists are trying to do? Honor people’s stories without misusing, mistreating, or misrepresenting them?

In retrospect, maybe Bourdain influenced my path towards sociology. At the very least, he created a bridge between what I knew – food service – and what I wanted to know – the rest of the world. In our classrooms we strive to teach our students how to make these connections. Bourdain made them for us with ease, dignity, and humility.

Caty Taborda-Whitt is a Ford fellow and sociology PhD candidate at the University of Minnesota. Her research interests include embodiment, health, culture, and inequalities.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowPodcast: Petard, Part 04 — CONCLUSION


Here’s the fourth and final part of my reading (MP3) of Petard (part one, part two, part three), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Worse Than FailureCodeSOD: The Enabler

Shaneka works on software for an embedded device for a very demanding client. In previous iterations of the software, the client had made their own modifications to the device's code, and demanded they be incorporated. Over the years, more and more of the code came from the client, until the day when the client decided it was too much effort to maintain the ball of mud and just started demanding features.

One specific feature was a new requirement for turning the display on and off. Shaneka attempted to implement the feature, and it didn't work. No matter what she did, once they turned the display off, they simply couldn't turn it back on without restarting the whole system.

She dug into the code, and found the method to enable the display was implemented like this:

/***************************************************************************//**
* @brief  Method, which enables display
*
* @param  true = turn on / false = turn off
* @return None
*******************************************************************************/
void InformationDisplay::Enable(bool state)
{
  displayEnabled = state;
  if (!displayEnabled) {
    enableDisplay(false);
  }
}

The Enable method does a great job at turning off the display, but not so great a job turning it back on, no matter what the comments say. The simple fix would be to just pass the state parameter to enableDisplay directly, but huge swathes of the code depended on this method having the incorrect behavior. Shaneka instead updated the documentation for this method and wrote a new method which behaved correctly.
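For illustration, a sketch of the kind of correctly behaving replacement described above (the method name is invented; the article does not show Shaneka's actual code):

void InformationDisplay::SetDisplayEnabled(bool state)
{
  displayEnabled = state;
  enableDisplay(state);   // pass the requested state through in both directions
}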

As you can guess, this is one of the pieces of code which came from the client.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet DebianNorbert Preining: Microsoft’s failed attempt on Debian packaging

Just recently Microsoft Open R 3.5 was announced, as an open source implementation of R with some improvements. Binaries are available for Windows, Mac, and Linux. I dared to download and play around with the files, only to be shocked at how incompetent Microsoft is at packaging.

From the microsoft-r-open-mro-3.5.0 postinstall script:

#!/bin/bash

#TODO: Avoid hard code VERSION number in all scripts
VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk  -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}"

echo $VERSION

ln -s "${INSTALL_PREFIX}/lib64/R/bin/R" /usr/bin/R
ln -s "${INSTALL_PREFIX}/lib64/R/bin/Rscript" /usr/bin/Rscript

rm /bin/sh
ln -s /bin/bash /bin/sh

First of all, the ln -s calls will fail if the standard R package is installed, but much worse, forcibly relinking /bin/sh to bash is something I didn't expect to see.

Then, looking at the prerm script, it gets even funnier:

#!/bin/bash

VERSION=`echo $DPKG_MAINTSCRIPT_PACKAGE | sed 's/[[:alpha:]|(|[:space:]]//g' | sed 's/\-*//' | awk  -F. '{print $1 "." $2 "." $3}'`
INSTALL_PREFIX="/opt/microsoft/ropen/${VERSION}/"

rm /usr/bin/R
rm /usr/bin/Rscript
rm -rf "${INSTALL_PREFIX}/lib64/R/backup"

Stop, wait, you are removing /usr/bin/R without even checking that it points to the R you have installed???

I guess Microsoft should read up a bit, in particular about dpkg-divert and proper packaging. What we see here is such an exhibition of incompetence that I can only assume they are doing it on purpose.

PostScriptum: A short look into the man page of dpkg-divert gives a nice example of how it should be done.
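For illustration, a hedged sketch of what the maintainer scripts could have done instead, following the dpkg-divert pattern from that man page (this is a sketch, not an actual fixed package):

# postinst (sketch): move the distribution's R aside, then install our link
dpkg-divert --package microsoft-r-open-mro-3.5 --add --rename \
    --divert /usr/bin/R.distrib /usr/bin/R
ln -s /opt/microsoft/ropen/3.5.0/lib64/R/bin/R /usr/bin/R

# prerm (sketch): remove our link and restore the diverted binary
rm -f /usr/bin/R
dpkg-divert --package microsoft-r-open-mro-3.5 --remove --rename \
    --divert /usr/bin/R.distrib /usr/bin/R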

PPS: I first reported these problems in the R Open Forums and later got an answer that they look into it.

Planet DebianJohn Goerzen: Running Digikam inside Docker

After my recent complaint about AppImage, I thought I’d describe how I solved my problem. I needed a small patch to Digikam, which was already in Debian’s 5.9.0 package, and the thought of rebuilding the AppImage was… unpleasant.

I thought – why not just run it inside Buster in Docker? There are various sources on the Internet for X11 apps in Docker. It took a little twiddling to make it work, but I did.

My Dockerfile was pretty simple:

FROM debian:buster
MAINTAINER John Goerzen 

RUN apt-get update && \
    apt-get -yu dist-upgrade && \
    apt-get --install-recommends -y install firefox-esr digikam digikam-doc \
         ffmpegthumbs imagemagick minidlna hugin enblend enfuse minidlna pulseaudio \
         strace xterm less breeze && \
    apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
RUN adduser --disabled-password --uid 1000 --gecos "John Goerzen" jgoerzen && \
    rm -r /home/jgoerzen/.[a-z]*
RUN rm /etc/machine-id
CMD /usr/bin/docker

RUN mkdir -p /nfs/personalmedia /run/user/1000 && chown -R jgoerzen:jgoerzen /nfs /run/user/1000

I basically create the container image and my account in it.

Then this script starts up Digikam:

#!/bin/bash

set -e

# This will be unnecessary with docker 18.04 theoretically....  --privileged see
# https://stackoverflow.com/questions/48995826/which-capabilities-are-needed-for-statx-to-stop-giving-eperm
# and https://bugs.launchpad.net/ubuntu/+source/docker.io/+bug/1755250

docker run -ti \
       -v /tmp/.X11-unix:/tmp/.X11-unix -v "/run/user/1000/pulse:/run/user/1000/pulse" -v /etc/machine-id:/etc/machine-id \
       -v /etc/localtime:/etc/localtime \
       -v /dev/shm:/dev/shm -v /var/lib/dbus:/var/lib/dbus -v /var/run/dbus:/var/run/dbus -v /run/user/1000/bus:/run/user/1000/bus  \
       -v "$HOME:$HOME" -v "/nfs/personalmedia/Pictures:/nfs/personalmedia/Pictures" \
     -e DISPLAY="$DISPLAY" \
     -e XDG_RUNTIME_DIR="$XDG_RUNTIME_DIR" \
     -e DBUS_SESSION_BUS_ADDRESS="$DBUS_SESSION_BUS_ADDRESS" \
     -e LANG="$LANG" \
     --user "$USER" \
     --hostname=digikam \
     --name=digikam \
     --privileged \
     --rm \
     jgoerzen/digikam "$@"  /usr/bin/digikam

The goal here was not total security isolation; if it had been, then all the dbus mounting and $HOME mounting would have been a poor idea. But as an alternative to AppImage — well, it worked perfectly. I could even get security updates if I wanted.
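For completeness, a hedged sketch of how one might build and use this setup (the image tag matches the run script above; the script filename is my invention):

# Build the image used by the run script
docker build -t jgoerzen/digikam .

# Launch Digikam in the container
./run-digikam.sh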

Don Martisimulating a market with honest and deceptive advertisers

At Nudgestock 2018 I mentioned the signaling literature that provides background for understanding the targeted advertising problem. Besides being behind paywalls, a lot of this material is written in math that takes a while to figure out. For example, it's worth working through this Gardete and Bart paper to understand a situation in which the audience is making the right move to ignore a targeted message, but it can take a while.

Are people rational to ignore or block targeted advertising in some media, because those media are set up to give an incentive to deceptive sellers? Here's a simulation of an ad market in which that might be the case. Of course, this does not show that in all advertising markets, better targeting leads to an advantage for deceptive sellers. But it is a demonstration that it is possible to design a set of rules for an advertising market that gives an advantage to deceptive sellers.

What are we looking at? Think of it as a culture medium where we can grow and evolve a population of single-celled advertisers.

The x and y coordinates are some arbitrary characteristic of offers made to customers. Customers, invisible, are scattered randomly all over the map. If a customer gets an offer for a product that is close enough to their preferences, it will buy.

Advertisers (yellow to orange squares) get to place ads that reach customers within a certain radius. The advertiser has a price that it will bid for an ad impression, and a maximum distance at which it will bid for an impression. These are assigned randomly when we populate the initial set of advertisers.

High-bidding advertisers are more orange, and lower-bidding advertisers are more pale yellow.

An advertiser is either deceptive, in which case it makes a slightly higher profit per sale, or honest. When an honest advertiser makes a sale, we draw a green line from the advertiser to the customer. When a deceptive advertiser makes a sale, we draw a red line. The lines appear to fade out because we draw a black line every time there is an ad impression that does not result in a sale.

So why don't the honest advertisers die out? One more factor: the norms enforcers. You can think of these as product reviewers or regulators. If a deceptive advertiser wins an ad impression to a norms enforcer, then the deceptive advertiser pays a cost, greater than the profit from a sale. Think of it as having to register a new domain and get a new logo. Honest advertisers can make normal sales to the norms enforcers, which are shown as blue squares. An ad impression that results in an "enforcement penalty" is shown as a blue line.
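As a rough sketch of the payoff rules just described (the numbers here are made up for illustration and are not the values used in the actual simulation):

# Illustrative constants; the real simulation's values differ.
SALE_PROFIT = 1.0
DECEPTION_PREMIUM = 0.2    # deceptive sellers make slightly more per sale
ENFORCEMENT_COST = 2.0     # greater than the profit from a sale

def impression_payoff(deceptive, makes_sale, customer_is_enforcer):
    """Payoff to an advertiser for one won impression, before ad costs."""
    if deceptive and customer_is_enforcer:
        return -ENFORCEMENT_COST      # caught by a norms enforcer
    if not makes_sale:
        return 0.0                    # impression, but no sale
    return SALE_PROFIT + (DECEPTION_PREMIUM if deceptive else 0.0)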

So, out of those relatively simple rules—two kinds of advertisers and two kinds of customers—we can see several main strategies arise. Your run of the simulation is unique, and you can also visit the big version.

What I'm seeing on mine is some clusters of finely targeted deceptive advertisers, in areas with relatively few norms enforcers, and some low-bidding honest advertisers with a relatively broad targeting radius. Again, I don't think that this necessarily corresponds to any real-world advertising market, but it is interesting to figure out when and how an advertising market can give an advantage to deceptive sellers, and what kinds of protections on the customer side can change the game.

How The California Consumer Privacy Act Stacks Up Against GDPR

The biggest lies that the martech and adtech worlds tell themselves

‘Personalization diminished’: In the GDPR era, contextual targeting is making a comeback

How media companies lost the advertising business

Ben Miroglio, David Zeber, Jofish Kaye, and Rebecca Weiss. 2018. The Effect of Ad Blocking on User Engagement with the Web. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3178876.3186162

When can deceptive sellers outbid honest sellers for ad impressions?

Google Will Enjoy Major GDPR Data Advantages, Even After Joining IAB Europe’s Industry Framework

https://www.canvas8.com/content/2018/06/07/don-marti-nudgestock.html …

Data protection laws are shining a needed light on a secretive industry | Bruce Schneier

How startups die from their addiction to paid marketing

Opinion: Europe's Strict New Privacy Rules Are Scary but Right

Announcing a new journalism entrepreneurship boot camp: Let’s “reboot the media” together

Intelligent Tracking Prevention 2.0

The alt-right has discovered an oasis for white-supremacy messages in Disqus, the online commenting system.

Teens Are Abandoning Facebook. For Real This Time.

Salesforce CEO Marc Benioff Calls for a National Privacy Law

Planet Linux AustraliaFrancois Marier: Mysterious 'everybody is busy/congested at this time' error in Asterisk

I was trying to figure out why I was getting a BUSY signal from Asterisk while trying to ring a SIP phone even though that phone was not in use.

My asterisk setup looks like this:

phone 1 <--SIP--> asterisk 1 <==IAX2==> asterisk 2 <--SIP--> phone 2

While I couldn't call SIP phone #2 from SIP phone #1, the reverse was working fine (ringing #1 from #2). So it's not a network/firewall problem. The two SIP phones can talk to one another through their respective Asterisk servers.

This is the error message I could see on the second asterisk server:

$ asterisk -r
...
  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5
    -- Called SIP/12345
    -- SIP/12345-00000002 redirecting info has changed, passing it to IAX2/iaxuser-6347
    -- SIP/12345-00000002 is busy
  == Everyone is busy/congested at this time (1:1/0/0)
    -- Executing [12345@local:2] Goto("IAX2/iaxuser-6347", "in12345-BUSY,1") in new stack
    -- Goto (local,in12345-BUSY,1)
    -- Executing [in12345-BUSY@local:1] Hangup("IAX2/iaxuser-6347", "17") in new stack
  == Spawn extension (local, in12345-BUSY, 1) exited non-zero on 'IAX2/iaxuser-6347'
    -- Hungup 'IAX2/iaxuser-6347'

where:

  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • iaxuser is the user account on server #2 that server #1 uses
  • local is the context for incoming IAX calls from server #1

This Everyone is busy/congested at this time (1:1/0/0) was surprising since looking at each SIP channel on that server showed nobody as busy:

asterisk2*CLI> sip show inuse
* Peer name               In use          Limit           
12345                     0/0/0           2               

So I enabled the raw SIP debug output and got the following (edited for clarity):

asterisk2*CLI> sip set debug on
SIP Debugging enabled

  == Using SIP RTP TOS bits 184
  == Using SIP RTP CoS mark 5

INVITE sip:12345@192.168.0.4:2048;line=m2vlbuoc SIP/2.0
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: Asterisk PBX
Contact: <sip:67890@192.168.0.2:5060>
Content-Length: 274

    -- Called SIP/12345

<--- SIP read from UDP:192.168.0.4:2048 --->
SIP/2.0 100 Trying
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
Content-Length: 0

<------------->
--- (9 headers 0 lines) ---

<--- SIP read from UDP:192.168.0.4:2048 --->
SIP/2.0 480 Do Not Disturb
Via: SIP/2.0/UDP 192.168.0.2:5060
From: "Francois Marier" <sip:67890@192.168.0.2>
To: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
CSeq: 102 INVITE
User-Agent: snom300
Contact: <sip:12345@192.168.0.4:2048;line=m2vlbuoc>
Content-Length: 0

where:

  • 12345 is the extension of SIP phone #2 on Asterisk server #2
  • 67890 is the extension of SIP phone #1 on Asterisk server #2
  • 192.168.0.4 is the IP address of SIP phone #2
  • 192.168.0.2 is the IP address of Asterisk server #2

From there, I can see that SIP phone #2 is returning a status of 480 Do Not Disturb. That's what the problem was: the phone itself was in DnD mode and set to reject all incoming calls.

,

Rondam RamblingsIf the shoe fits

Fox-and-Friends host Abby Huntsman, in a rare moment of lucidity, today referred to the upcoming summit between Donald Trump and Kim Jong Un as "a meeting between two dictators". The best part is that nobody on the show seemed to notice, perhaps because there is such a thick pile of lies and self-deceptions that Trump apologists have to keep track of that sometimes the truth can slip through the

Planet DebianMichal Čihař: Weblate 3.0.1

Weblate 3.0.1 has been released today. It contains several bug fixes, most importantly a fix for a possible migration issue affecting users migrating from 2.20. There was no data corruption, just some of the foreign keys were possibly not properly migrated. Upgrading from 3.0 to 3.0.1 will fix this, as will going directly from 2.20 to 3.0.1.

Full list of changes:

  • Fixed possible migration issue from 2.20.
  • Localization updates.
  • Removed obsolete hook examples.
  • Improved caching documentation.
  • Fixed displaying of admin documentation.
  • Improved handling of long language names.

If you are upgrading from older version, please follow our upgrading instructions, the upgrade is more complex this time.

You can find more information about Weblate on https://weblate.org, and the code is hosted on Github. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used on https://hosted.weblate.org/ as the official translation service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations, thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

Planet DebianDirk Eddelbuettel: RcppZiggurat 0.1.5

ziggurats

A maintenance release 0.1.5 of RcppZiggurat is now on the CRAN network for R.

The RcppZiggurat package updates the code for the Ziggurat generator which provides very fast draws from a Normal distribution. The package provides a simple C++ wrapper class for the generator improving on the very basic macros, and permits comparison among several existing Ziggurat implementations. This can be seen in the figure where Ziggurat from this package dominates accessing the implementations from the GSL, QuantLib and Gretl—all of which are still way faster than the default Normal generator in R (which is of course of higher code complexity).

Per a request from CRAN, we changed the vignette to accommodate pandoc 2.*, just as we did with the most recent pinp release two days ago. No other changes were made. Other changes that had been pending are a minor rewrite of DOIs in DESCRIPTION, a corrected state setter thanks to a PR by Ralf Stubner, and a tweak to function registration to keep user_norm_rand() visible.

The NEWS file entry below lists all changes.

Changes in version 0.1.5 (2018-06-10)

  • Description rewritten using doi for references.

  • Re-setting the Ziggurat generator seed now correctly re-sets state (Ralf Stubner in #7 fixing #3)

  • Dynamic registration reverts to manual mode so that user_norm_rand() is visible as well (#7).

  • The vignette was updated to accomodate pandoc 2* [CRAN request].

Courtesy of CRANberries, there is also a diffstat report for the most recent release. More information is on the RcppZiggurat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet Linux AustraliaChris Samuel: Submission to Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples

Tonight I took some time to send a submission in to the Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples in support of the Uluru Statement from the Heart from the 2017 First Nations National Constitutional Convention held at Uluru. Submissions close June 11th so I wanted to get this in as I feel very strongly about this issue.

Here’s what I wrote:

To the Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples,

The first peoples of Australia have lived as part of this continent for many times longer than the ancestors of James Cook lived in the UK(*), let alone this brief period of European colonisation called Australia.

They have farmed, shaped and cared for this land over the millennia, they have seen the climate change, the shorelines move and species evolve.

Yet after all this deep time as custodians of this land they were dispossessed via the convenient lie of Terra Nullius and through killing, forced relocation and introduced sickness had their links to this land severely beaten, though not fatally broken.

Yet we still have the chance to try and make a bridge and a new relationship with these first peoples; they have offered us the opportunity for a Makarrata and I ask you to grasp this opportunity with both hands, for the sake of all Australians.

Several of the component states and territories of this recent nation of Australia are starting to investigate treaties with their first peoples, but this must also happen at the federal level as well.

Please take the Uluru Statement from the Heart to your own hearts, accept the offering of Makarrata & a commission and let us all move forward together.

Thank you for your attention.

Yours sincerely,
Christopher Samuel

(*) Australia has been continuously occupied for at least 50,000 years, almost certainly for at least 60,000 years and likely longer. The UK has only been continuously occupied for around the last 10,000 years after the last Ice Age drove its previous population out into warmer parts of what is now Europe.

Chris Samuel : http://www.csamuel.org/ : Melbourne, VIC

This item originally posted here:

Submission to Joint Select Committee on Constitutional Recognition Relating to Aboriginal and Torres Strait Islander Peoples

Planet Linux AustraliaBen Martin: A new libferris is coming! 2.0.x

A while back I ported most of the libferris suite over to using boost for smart pointers and for signals. The latter was not such a problem, but there were always some fringe cases with the former, and this led to a delay in releasing it because there were some known issues.

I have moved that code into a branch locally and reverted back to using the Modern C++ Loki library for intrusive reference counting and sigc++. I imported my old test suite into the main libferris repo and will flesh that out over time.

I might do a 2.0.0 or 1.9.9 release soonish so that the entire stack is out there. As this has the main memory management stuff that has been working fine for the last 10 years this shouldn't be more unstable than it was before.

I was tempted to use travis ci for testing but will likely move to using a local vm. Virtualization has gotten much more convenient and I'm happy to set up a local test VM for this task, which also breaks a dependency on companies that doesn't really need to be there. Yes, I will host releases and a copy of git in some place like github or gitlab or whatever to make that distribution more convenient. On the other hand, anyone could run the test suite, which will be in the main libferris distro, if they feel the desire.

So after this next release I will slowly, at my leisure, work to flesh out the test suite and fix issues that I find by running it over time. This gives a much more incremental development process, which will hopefully be more friendly to the limited-time patches that I throw at the project.

One upside of being fully at the mercy of my time is that the project is less likely to die or be taken over by a company and lead in an unnatural direction. The downside is that it relies on my free time which is split over robotics, cnc, and other things as well as libferris.

As some have mentioned, a flatpak or docker image for libferris would be nice. Ironically this makes the whole thing a bit more like plan9, with a filesystem microkernel-like subsystem (container), than just running it natively through rpm or deb, but whatever makes it easier.

,

Don MartiNudgestock 2018 notes and links

Thanks for coming to my Nudgestock 2018 talk. First, as promised, some links to the signaling literature. I don't know of a full bibliography for this material, and a lot of it appears to be paywalled. A good way to get into it is to start with this widely cited paper by Phillip Nelson: Advertising as Information | Journal of Political Economy: Vol 82, No 4 and work forward.

Gardete and Bart "We find that when the sender’s motives are transparent to the receiver, communication can only be influential if the sender is not well informed about the receiver’s preferences. The sender prefers an interior level of information quality, while the receiver prefers complete privacy unless disclosure is necessary to induce communication." Tailored Cheap Talk | Stanford Graduate School of Business

The Gardete and Bart paper makes sense if you ever read Computer Shopper for the ads. You want to get an idea of each manufacturer's support for each hardware standard, so that you can buy parts today that will keep their value in the parts market of the near future. You don't want an ad that targets you based on what you already have.

Kihlstrom and Riordan "A great deal of advertising appears to convey no direct credible information about product qualities. Nevertheless such advertising may indirectly signal quality if there exist market mechanisms that produce a positive relationship between product quality and advertising expenditures." Advertising as a Signal

Ambler and Hollier "High perceived advertising expense enhances an advertisement's persuasiveness significantly, but largely indirectly, by strengthening perceptions of brand quality." The Waste in Advertising Is the Part That Works | the Journal of Advertising Research

Davis, Kay, and Star "It is not so much the claims made by advertisers that are helpful but the fact that they are willing to spend extravagant amounts of money." Is advertising rational? - Business Strategy Review - Wiley Online Library

New research on the effect of ad blocking on user engagement. No paywall. Ben Miroglio, David Zeber, Jofish Kaye, and Rebecca Weiss. 2018. The Effect of Ad Blocking on User Engagement with the Web. In WWW 2018: The 2018 Web Conference, April 23–27, 2018, Lyon, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3178876.3186162 (PDF)

Here's that simulation of unicellular advertisers that I showed on screen, and more on the norms enforcer situation, which IMHO is different from pure signaling.

For those of you who are verified on Twitter, and so haven't seen what I'm talking about with the deceptive ads there, I have started collecting some: dmarti/deceptive-ads

I mentioned the alignment of interest between high-reputation brands and high-reputation publishers. More on the publisher side is in a series of guest posts for Digital Content Next, which represents large media companies that stand to benefit from reputation-based advertising: Don Marti, Author at Digital Content Next Also more from the publisher point of view in Notes and links from my talk at the Reynolds Journalism Institute.

If you're interested in the post-creepy advertising movement, here are some people to follow on Twitter.

What's next? The web advertising mess isn't a snarled-up mess of collective action problems. It's a complex set of problems that interact in a way that creates some big opportunities for the right projects. Work together to fix web ads? Let's not.

,

Harald WelteRe-launching openmoko USB Product ID and Ethernet OUI registry

Some time after Openmoko went out of business, they made their USB Vendor IDs and IEEE OUI (Ethernet MAC address prefix) available to Open Source Hardware / FOSS projects.

After maintaining that for some years myself, I was unable to find time to continue the work and I had handed it over some time ago to two volunteers. However, as things go, those volunteers also stopped responding to PID / OUI requests, and we're now launching the third attempt at continuing this service.

As the openmoko.org wiki will soon be moved into an archive of static web pages only, we're also moving the list of allocated PID and OUIs into a git repository.

Since git.openmoko.org is also about to be decommissioned, the repository is now at https://github.com/openmoko/openmoko-usb-oui, next to all the archived openmoko.org repository mirrors.

This also means that in addition to applying by e-mail for an allocation in those ranges, you can now send a pull request via github.

Thanks to cuvoodoo for volunteering to maintain the Openmoko USB PID and IEEE OUI allocations from now on!

CryptogramFriday Squid Blogging: Extinct Relatives of Squid

Interesting fossils. Note that a poster is available.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDResources for suicide prevention, post-attempt survivors and their families

Inspired by JD Schramm’s powerful TEDTalk on surviving a suicide attempt, this list of resources has been updated to help you widen your understanding of mental health, depression, suicide and suicide prevention. Whether you’re an attempt survivor, a concerned family member or friend, or struggling with suicidal thoughts yourself, this list offers helpful resources and hotlines from across the world. This list is not exhaustive so we’d love to hear from you— add suggestions to the comments or email us.

To start off, here is a TED playlist on breaking the silence around suicide.

In the US:

National Suicide Prevention Lifeline
1-800-273-TALK
http://www.suicidepreventionlifeline.org/
A free, 24-hour hotline available to anyone in suicidal crisis or emotional distress. Your call will be routed to the nearest crisis center to you.

The Trevor Project
http://www.thetrevorproject.org/localresources
866 4-U-TREVOR
The Trevor Project is determined to end suicide among LGBTQ youth by providing life-saving and life-affirming resources including a nationwide, 24/7 crisis intervention lifeline, digital community and advocacy/educational programs that create a safe, supportive and positive environment for everyone.

Samaritans USA
http://www.samaritansusa.org/
Samaritans centers provide volunteer-staffed hotlines and professional and volunteer-run public education programs, “suicide survivor” support groups and many other crisis response, outreach and advocacy activities.

Attempt Survivors
http://attemptsurvivors.com/
A two-year project that collected blog posts and stories for and by attempt survivors, set up by the American Association of Suicidology. While the active collection has stopped, the archive is a good place to explore, to hear open, honest voices exploring life after a suicide attempt.

ULifeline
http://ulifeline.org/page/main/StudentLogin.html
An anonymous online resource where you can learn about suicide  prevention and campus-specific resources.

American Foundation for Suicide Prevention:
http://www.afsp.org/
A national nonprofit organization dedicated to understanding and preventing suicide through research, education and advocacy, and to reaching out to people impacted by suicide.

Mental Health First Aid USA
http://www.mentalhealthfirstaid.org/
A public education program that  helps the public identify, understand and respond to signs of mental illnesses and substance use disorders.

Suicide Awareness Voices of Education
SAVE.org
A national nonprofit dedicated to preventing suicide through public awareness and education.

Live Through This
http://livethroughthis.org/
An organization documenting the stories and portraits of suicide attempt survivors to encourage more open dialogue around suicide and depression.

International:

International Association for Suicide Prevention
http://www.iasp.info/
IASP now includes professionals and volunteers from more than fifty different countries. IASP is a Non-Governmental Organization in official relationship with the World Health Organization (WHO) concerned with suicide prevention.

Befrienders 
A suicide prevention resource with phone helplines across the world.
https://www.befrienders.org/

Canadian Association for Suicide Prevention
A resource for survivors as well as anyone in suicidal distress.
To find the nearest crisis center: https://suicideprevention.ca/need-help/
To find the nearest support group: https://suicideprevention.ca/coping-with-suicide-loss/survivor-support-centres/

SAPTA (Mexico)
http://www.saptel.org.mx/index.html

Centro de Valorização da Vida (Brazil)
http://www.cvv.org.br/
Tel: 188 or 141

Sociedade Portuguesa de Suicidologia (Portugal)
http://www.spsuicidologia.pt/

Hulpmix (Netherlands)
https://www.113.nl/english

Samaritans Onlus (Italy)
http://www.samaritansonlus.org/

The South African Depression and Anxiety Group (South Africa)
http://www.sadag.org/index.php?option=com_content&view=article&id=11&Itemid=114

Suicide Ecoute (France)
http://www.suicide-ecoute.fr/

PHARE (France)
http://www.phare.org/

한국자살예방협회 (Korean Association for Suicide Prevention)
http://www.suicideprevention.or.kr/

한국자살협회 사이버 상담실 (Korean Suicide Prevention Cyber Counseling)
http://www.counselling.or.kr/

Hjälplinjen (Sweden)
http://www.hjalplinjen.se/

If you know of good resources available where you live, please add them to the comments section of this post.

Worse Than FailureError'd: Try Again (but with More Errors)

"Sorry, Walgreens, in the future, I'll try to make an error next time," Greg L. writes.

 

"Hmm, I'm either going to shave with my new razors that I ordered... or I won't," wrote Paul.

 

Charlie L. writes, "IFNAME would be my name, IF it were my name that is."

 

"Yep, Dell, I like to brag about my kids File, Edit, View, Tools, and Help," wrote Carl C.

 

Renato L. writes, "Low-cost airlines have come a long way. Forget the Gregorian calendar, created their own one."

 

"So is this becuase, somehow, passwords longer than 9 characters are less secure?" wrote Keith H.

 

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

Planet Linux AustraliaOpenSTEM: This Week in Australian History

Today we introduce a new category for OpenSTEM® Blog articles: “This Week in Australian History”. So many important dates in Australian history seem to become forgotten over time that there seems to be a need to highlight some of these from time to time. For teachers of students from Foundation/Prep/Kindy to Year 6 looking for […]

Planet Linux AustraliaMatthew Oliver: Keystone Federated Swift – Separate Clusters + Container Sync

This is the third post in the series of Keystone Federated Swift. To bounce back to the start you can visit the first post.

Separate Clusters + Container Sync

The idea with this topology is to deploy each of your OpenStack federated clusters with its own unique swift cluster, and then use another swift feature, container sync, to push objects you create in one federated environment to another.

In this case the keystone servers are federated. A very similar topology could be a global Swift cluster where each proxy only talks to a single region’s keystone. That would mean a user visiting a different region would authenticate via federation and be able to use the swift cluster, but would use a different account name. In both cases container sync could be used to synchronise the objects, say from the federated account to that of the original account. This is because container sync can synchronise containers both between separate clusters and within the same cluster.

 

Setting up container sync

Setting up container sync is pretty straightforward, and it is also well documented. At a high level it goes like this. Firstly you need to set up a trust between the different clusters. This is achieved by creating a container-sync-realms.conf file; the online example is:

[realm1]
key = realm1key
key2 = realm1key2
cluster_clustername1 = https://host1/v1/
cluster_clustername2 = https://host2/v1/

[realm2]
key = realm2key
key2 = realm2key2
cluster_clustername3 = https://host3/v1/
cluster_clustername4 = https://host4/v1/

 

Each realm is a separate set of trusts, and you can have as many clusters in a realm as you want, so as you can see you can build up different realms. In our example we’d only need 1 realm, and let’s use some better names.

[MyRealm]
key = someawesomekey
key2 = anotherkey
cluster_blue = https://blueproxyvip/v1
cluster_green = https://greenproxyvip/v1

NOTE: there is nothing stopping you from defining only 1 cluster, as you can use container sync within a cluster, or from adding more clusters to a single realm.

 

Now in our example both the green and blue clusters need to have the MyRealm realm defined in their /etc/swift/container-sync-realms.conf file. The 2 keys are there so you can do key rotation. These keys should be kept secret, as they are what defines the trust between the clusters.

 

The next step is to make sure you have the container_sync middleware in your proxy pipeline. There are 2 parts to container sync: the backend daemon that periodically checks containers for new objects and sends changes to the other cluster, and the middleware that is used to authenticate requests sent by container sync daemons from other clusters. We tend to place the container_sync middleware before (to the left of) any authentication middleware.

 

The last step is to tell container sync which containers to keep in sync. This is all done via container metadata, which is controlled by the user. Let’s assume we have 2 accounts, AUTH_matt on the blue and AUTH_federatedmatt on the green, and we want to sync a container called mycontainer. Note, the containers don’t have to have the same name. We’d start by making sure the 2 containers have the same container sync key, which is defined by the owner of the container; this isn’t one of the realm keys, but it works in a similar way. Then we tell 1 container to sync with the other.
NOTE: you can make the relationship go both ways.

 

Let’s use curl first:

$ curl -i -X POST -H 'X-Auth-Token: <token>' \
-H 'X-Container-Sync-Key: secret' \
'http://blueproxyvip/v1/AUTH_matt/mycontainer'

$ curl -i -X POST -H 'X-Auth-Token: <token>' \
-H 'X-Container-Sync-Key: secret' \
-H 'X-Container-Sync-To: //MyRealm/blue/AUTH_matt/mycontainer' \
'http://greenproxyvip/v1/AUTH_federatedmatt/mycontainer'

Or via the swift client, noting that you need to change identities to set each account.

# To the blue cluster for AUTH_matt
$ swift  post -k 'secret' mycontainer

 

# To the green cluster for AUTH_federatedmatt
$ swift  post \
-t '//MyRealm/blue/AUTH_matt/mycontainer' \
-k 'secret' mycontainer

In a federated environment, you’d just need to set some key for each of the containers you want to work on while you’re away (or all of them, I guess). Then when you visit you can just add the sync-to metadata when you create containers on the other side. Likewise, if you knew the name of your account on the other side you could set up a sync-to if you needed to work on something over there.
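
If you're doing this from an app rather than the command line, the same calls are just HTTP POSTs with a couple of headers. Here's a minimal Python sketch using the third-party requests library; the tokens and proxy addresses are placeholders matching the curl example above:

import requests

def set_sync_key(container_url, token, sync_key):
    # Equivalent of the first curl call above: set the shared sync key.
    resp = requests.post(container_url,
                         headers={"X-Auth-Token": token,
                                  "X-Container-Sync-Key": sync_key})
    resp.raise_for_status()

def set_sync_to(container_url, token, sync_key, sync_to):
    # Equivalent of the second curl call: point this container at the other.
    resp = requests.post(container_url,
                         headers={"X-Auth-Token": token,
                                  "X-Container-Sync-Key": sync_key,
                                  "X-Container-Sync-To": sync_to})
    resp.raise_for_status()

# Tokens and proxy addresses are placeholders.
set_sync_key("http://blueproxyvip/v1/AUTH_matt/mycontainer",
             "<blue token>", "secret")
set_sync_to("http://greenproxyvip/v1/AUTH_federatedmatt/mycontainer",
            "<green token>", "secret",
            "//MyRealm/blue/AUTH_matt/mycontainer")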

 

To authenticate, container sync generates and compares an HMAC on both sides, where the HMAC covers the realm and container keys, the verb, the object name, etc.
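
To make that concrete, here's a tiny Python illustration of the idea. Note that the exact fields and ordering Swift's container sync actually signs differ from this, so treat it as a sketch of the concept rather than the real wire format:

import hmac
from hashlib import sha256

def sign(realm_key, container_key, verb, path, timestamp):
    # Both clusters hold the realm and container keys, so both can compute
    # the same MAC over the request details.
    msg = "\n".join([verb, path, timestamp, container_key])
    return hmac.new(realm_key.encode(), msg.encode(), sha256).hexdigest()

sender_sig = sign("someawesomekey", "secret", "PUT",
                  "/v1/AUTH_federatedmatt/mycontainer/someobject",
                  "1528243200.00000")

# The receiving cluster recomputes the signature from its own copy of the
# keys and only accepts the request if the two values match.
receiver_sig = sign("someawesomekey", "secret", "PUT",
                    "/v1/AUTH_federatedmatt/mycontainer/someobject",
                    "1528243200.00000")
assert hmac.compare_digest(sender_sig, receiver_sig)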

 

The obvious next question is: great, but do I need to know the name of each cluster? Well, yes, but you can simply find them by asking swift via the info call. This is done by hitting the /info swift endpoint with whatever tool you want. If you're using the swift client, then it's:

$ swift info
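
Or, programmatically, a quick sketch with Python and requests; the proxy address is a placeholder, and exactly which sections appear in the response depends on the middleware and realms the operator exposes:

import requests

# The unauthenticated /info endpoint returns a JSON map of the cluster's
# capabilities.
info = requests.get("http://blueproxyvip/info").json()
print(info.get("swift", {}))
print(info.get("container_sync", {}))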

Pros and cons

Pros

The biggest pro for this approach is that you don’t have to do anything special. Whether you have 1 swift cluster or a bunch throughout your federated environments, all you need to do is set up a container sync trust between them and the users can sync between themselves.

 

Cons

There are a few I can think off the top of my head:

  1. You need to manually set the metadata on each container, which might be fine if it’s just you, but if you have an app or something it’s one more thing you need to think about.
  2. Container sync will move the data periodically, so you may not see it in the other container straight away.
  3. More storage is used. Whether it’s 1 cluster or many, the objects will exist in both accounts.

Conclusion

This is an interesting approach, but I think it would be much better to have access to the same set of objects everywhere I go and have it all just work. I’ll talk about how to go about that in the next post, as well as about 1 specific way I got working as a POC.

 

Container sync is pretty cool. Swiftstack have recently open sourced another tool, 1space, that can do something similar. 1space looks awesome but I haven’t had a chance to play with it yet, so I will add it to the list of Swift things I want to play with whenever I get a chance.

,

Krebs on SecurityAdobe Patches Zero-Day Flash Flaw

Adobe has released an emergency update to address a critical security hole in its Flash Player browser plugin that is being actively exploited to deploy malicious software. If you’ve got Flash installed — and if you’re using Google Chrome or a recent version of Microsoft Windows you do — it’s time once again to make sure your copy of Flash is either patched, hobbled or removed.

In an advisory published today, Adobe said it is aware of a report that an exploit for the previously unknown Flash flaw — CVE-2018-5002 — exists in the wild, and “is being used in limited, targeted attacks against Windows users. These attacks leverage Microsoft Office documents with embedded malicious Flash Player content distributed via email.”

The vulnerable versions of Flash include v. 29.0.0.171 and earlier. The version of Flash released today brings the program to v. 30.0.0.113 for Windows, Mac, Linux and Chrome OS. Check out this link to detect the presence of Flash in your browser and the version number installed.

Both Internet Explorer/Edge on Windows 10 and Chrome should automatically prompt users to update Flash when newer versions are available. At the moment, however, I can’t see any signs yet that either Microsoft or Google has pushed out new updates to address the Flash flaw. I’ll update this post if that changes. (Update: June 8, 11:01 a.m. ET: Looks like the browser makers are starting to push this out. You may still need to restart your browser for the update to take effect.)

Adobe credits Chinese security firm Qihoo 360 with reporting the zero-day Flash flaw. Qihoo said in a blog post that the exploit was seen being used to target individuals and companies in Doha, Qatar, and is believed to be related to a nation-state backed cyber-espionage campaign that uses booby-trapped Office documents to deploy malware.

In February 2018, Adobe patched another zero-day Flash flaw that was tied to cyber espionage attacks launched by North Korean hackers.

Hopefully, most readers here have taken my longstanding advice to disable or at least hobble Flash, a buggy and insecure component that nonetheless ships by default with Google Chrome and Internet Explorer. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist/blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale kicking Flash to the curb is to keep it installed in a browser that you don’t normally use, and then only use that browser on sites that require Flash.

Administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

CryptogramAn Example of Deterrence in Cyberspace

In 2016, the US was successfully deterred from attacking Russia in cyberspace because of fears of Russian capabilities against the US.

I have two citations for this. The first is from the book Russian Roulette: The Inside Story of Putin's War on America and the Election of Donald Trump, by Michael Isikoff and David Corn. Here's the quote:

The principals did discuss cyber responses. The prospect of hitting back with cyber caused trepidation within the deputies and principals meetings. The United States was telling Russia this sort of meddling was unacceptable. If Washington engaged in the same type of covert combat, some of the principals believed, Washington's demand would mean nothing, and there could be an escalation in cyber warfare. There were concerns that the United States would have more to lose in all-out cyberwar.

"If we got into a tit-for-tat on cyber with the Russians, it would not be to our advantage," a participant later remarked. "They could do more to damage us in a cyber war or have a greater impact." In one of the meetings, Clapper said he was worried that Russia might respond with cyberattacks against America's critical infrastructure­ -- and possibly shut down the electrical grid.

The second is from the book The World as It Is, by President Obama's deputy national security advisor Ben Rhodes. Here's the New York Times writing about the book.

Mr. Rhodes writes he did not learn about the F.B.I. investigation until after leaving office, and then from the news media. Mr. Obama did not impose sanctions on Russia in retaliation for the meddling before the election because he believed it might prompt Moscow into hacking into Election Day vote tabulations. Mr. Obama did impose sanctions after the election but Mr. Rhodes's suggestion that the targets include President Vladimir V. Putin was rebuffed on the theory that such a move would go too far.

When people try to claim that there's no such thing as deterrence in cyberspace, this serves as a counterexample.

EDITED TO ADD: Remember the blog rules. Comments that are not about the narrow topic of deterrence in cyberspace will be deleted. Please take broader discussions of the 2016 US election elsewhere.

Worse Than FailureImprov for Programmers: The Internet of Really Bad Things

Things might get a little dark in the season (series?) finale of Improv for Programmers, brought to you by Raygun. Remy, Erin, Ciarán and Josh are back, and not only is everything you're about to hear entirely made up on the spot: everything you hear will be a plot point in the next season of Mr. Robot.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Linux AustraliaGary Pendergast: Podcasting: Tavern Style

Earlier today, I joined JJJ and Jeff on episode 319 of the WP Tavern’s WordPress Weekly podcast!

We chatted about GitHub being acquired by Microsoft (and what that might mean for the future of WordPress using Trac), the state of Gutenberg, WordCamp Europe, as well as getting into a bit of the philosophy that drives WordPress’ auto-update system.

Finally, Jeff was kind enough to name me a Friend of the Show, despite my previous appearance technically not being a WordPress Weekly episode. 🎉

WPWeekly Episode 319 – The Gutenberg Plugin Turns 30

,

Rondam RamblingsPSA: Blogger comment notifications appear to be kerfliggered

I normally get an email notification whenever anyone posts a comment here, but I just noticed that this feature doesn't seem to be working any more.  I hope this is temporary, but I wouldn't bet my life savings on it.  I don't think the Blogger platform is a top priority for Google.  So until I can figure out what to do about it just be aware that I might not be as responsive to comments as I

Krebs on SecurityFurther Down the Trello Rabbit Hole

Last month’s story about organizations exposing passwords and other sensitive data via collaborative online spaces at Trello.com only scratched the surface of the problem. A deeper dive suggests a large number of government agencies, marketing firms, healthcare organizations and IT support companies are publishing credentials via public Trello boards that quickly get indexed by the major search engines.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But individual users may be able to manually share personal boards that include personal or proprietary employer data, information that gets cataloged by Internet search engines and becomes available to anyone with a Web browser.

David Shear is an analyst at Flashpoint, a New York City based threat intelligence company. Shear spent several weeks last month exploring the depths of sensitive data exposed on Trello. Amid his digging, Shear documented hundreds of public Trello boards that were exposing passwords and other sensitive information. KrebsOnSecurity worked with Shear to document and report these boards to Trello.

Shear said he’s amazed at the number of companies selling IT support services that are using Trello not only to store their own passwords, but even credentials to manage customer assets online.

“There’s a bunch of different IT shops using it to troubleshoot client requests, and to do updates to infrastructure,” Shear said. “We also found a Web development team that’s done a lot of work for various dental offices. You could see who all their clients were and see credentials for clients to log into their own sites. These are IT companies doing this. And they tracked it all via [public] Trello pages.”

One particularly jarring misstep came from someone working for Seceon, a Westford, Mass. cybersecurity firm that touts the ability to detect and stop data breaches in real time. But until a few weeks ago the Trello page for Seceon featured multiple usernames and passwords, including credentials to log in to the company’s WordPress blog and iPage domain hosting.

Credentials shared on Trello by an employee of Seceon, a cybersecurity firm.

Shear also found that a senior software engineer working for Red Hat Linux in October 2017 posted administrative credentials to two different servers apparently used to test new builds.

Credentials posted by a senior software engineer at Red Hat.

The Maricopa County Department of Public Health (MCDPH) in Arizona used public Trello boards to document a host of internal resources that are typically found behind corporate intranets, such as this board that aggregated information for new hires (including information about how to navigate the MCDPH’s payroll system):

The (now defunct) Trello page for the Maricopa County Department of Public Health.

Even federal health regulators have made privacy missteps with Trello. Shear’s sleuthing uncovered a public Trello page maintained by HealthIT.gov — the official Web site of the National Coordinator for Health Information Technology, a component of the U.S. Department of Health and Human Services (HHS) — that was leaking credentials.

There appear to be a great many marketers and realtors who are using public Trello boards as their personal password notepads. One of my favorites is a Trello page maintained by a “virtual assistant” who specializes in helping realtors find new clients and sales leads. Apparently, this person re-used her Trello account password somewhere else (and/or perhaps re-used it from a list of passwords available on her Trello page), and as a result someone added a “You hacked” card to the assistant’s Trello board, urging her to change the password.

One realtor from Austin, Texas who posted numerous passwords to her public Trello board apparently had her Twitter profile hijacked and defaced with a photo featuring a giant Nazi flag and assorted Nazi memorabilia. It’s not clear how the hijacker obtained her password, but it appears to have been on Trello for some time.

Other entities that inadvertently shared passwords for private resources via public Trello boards included a Chinese aviation authority; the International AIDS Society; and the global technology consulting and research firm Analysis Mason, which also exposed its Twitter account credentials on Trello until very recently.

Trello responded to this report by making private many of the boards referenced above; other reported boards appear to remain public, minus the sensitive information. Trello said it was working with Google and other search engine providers to have any cached copies of the exposed boards removed.

“We have put many safeguards in place to make sure that public boards are being created intentionally and have clear language around each privacy setting, as well as persistent visibility settings at the top of each board,” a Trello spokesperson told KrebsOnSecurity in response to this research. “With regard to the search-engine indexing, we are currently sending the correct HTTP response code to Google after a board is made private. This is an automated, immediate action that happens upon users making the change. But we are trying to see if we can speed up the time it takes Google to realize that some of the URLs are no longer available.”

If a Trello board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Flashpoint’s Shear said Trello should be making a more concerted effort to proactively find sensitive data exposed by its users. For example, Shear said Trello’s platform could perform some type of automated analysis that looks for specific keywords (like “password”) and if the page is public display a reminder to the board’s author about how to make the page private.

“They could easily do input validation on things like passwords if they’re not going to proactively search their own network for this stuff,” Shear said.
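
For a sense of how little code such a keyword screen would take, here is a rough Python sketch that flags likely credentials in an exported board. The export filename and JSON field names used here are assumptions for illustration, not Trello's documented behavior:

import json
import re

SUSPICIOUS = re.compile(r"\b(password|passwd|api[_ ]?key|secret|credentials)\b",
                        re.IGNORECASE)

def flag_cards(board_export_path):
    """Yield (card name, matched keyword) for cards that look like they hold secrets."""
    with open(board_export_path) as f:
        board = json.load(f)
    for card in board.get("cards", []):
        text = " ".join([card.get("name", ""), card.get("desc", "")])
        match = SUSPICIOUS.search(text)
        if match:
            yield card.get("name", "(untitled)"), match.group(0)

for name, hit in flag_cards("board-export.json"):
    print("possible credential on card %r: matched %r" % (name, hit))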

Trello co-founder Michael Pryor said the company was grateful for the suggestion and would consider it.

“We are looking at other cloud apps of our size and how they balance the vast majority of useful sharing of public info with helping people not make a mistake,” Pryor said. “We’ll continue to explore the topic and potential solutions, and appreciate the work you put into the list you shared with us.”

Shear said he doubts his finds even come close to revealing the true extent of the sensitive data organizations are exposing via misconfigured Trello boards. He added that even in cases where public Trello boards don’t expose passwords or financial data, the information that countless organizations publish to these boards can provide plenty of ammunition for phishers and cybercriminals looking to target specific entities.

“I don’t think we’ve even uncovered the real depth of what’s probably there,” he said. “I’d be surprised if someone isn’t at least trying to collect a bunch of user passwords and configuration files off lots of Trello accounts for bad guy operations.”

Update, 11:56 p.m. ET: Corrected location of MCDPH.

Worse Than FailureSponsor Post: Six Months of Free Monitoring at Panopta for TDWTF Readers

You may not have noticed, but in the footer of the site, there is a little banner that says:

Monitored by Panopta

Actually, The Daily WTF has been monitored with Panopta for nearly ten years. I've also been using it to monitor Inedo's important public and on-prem servers, and email and text us when there are issues.

I started using Panopta because it's easy to use and allows you to monitor using a number of different methods (public probes, private probes and server agents). I may install agents for more detailed monitoring going forward, but having Panopta probe HTTP, HTTPS, VPN, and SMTP is sufficient for our needs at the moment. We send custom HTTP payloads to mimic our actual use cases, especially with our registration APIs.

If you're not using a monitoring / alerting platform or want to try something new, now's the time to start!

Panopta is offering The Daily WTF readers six months of free monitoring!

Give it a shot. You may find yourself coming to dread those server outage emails and SMS messages. PROTIP: configure the alerting workflow to send outage notices to someone else to worry about.

Disclaimer: while Panopta is not a paid sponsor, they have been generously providing free monitoring for The Daily WTF (and Inedo) because they're fans of the site; I thought it was high time to tell you about them!

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

CryptogramThe Habituation of Security Warnings

We all know that it happens: when we see a security warning too often -- and without effect -- we start tuning it out. A new paper uses fMRI, eye tracking, and field studies to prove it.

EDITED TO ADD (6/6): This blog post summarizes the findings.

MEBTRFS and SE Linux

I’ve had problems with systems running SE Linux on BTRFS losing the XATTRs used for storing the SE Linux file labels after a power outage.

Here is the link to the patch that fixes this [1]. Thanks to Hans van Kranenburg and Holger Hoffstätte for the information about this patch which was already included in kernel 4.16.11. That was uploaded to Debian on the 27th of May and got into testing about the time that my message about this issue got to the SE Linux list (which was a couple of days before I sent it to the BTRFS developers).

The kernel from Debian/Stable still has the issue. So using a testing kernel might be a good option to deal with this problem at the moment.

Below is the information on reproducing this problem. It may be useful for people who want to reproduce similar problems. Also all sysadmins should know about “reboot -nffd”, if something really goes wrong with your kernel you may need to do that immediately to prevent corrupted data being written to your disks.

The command “reboot -nffd” (kernel reboot without flushing kernel buffers or writing status) when run on a BTRFS system with SE Linux will often result in /var/log/audit/audit.log being unlabeled. It also results in some systemd-journald files like /var/log/journal/c195779d29154ed8bcb4e8444c4a1728/system.journal being unlabeled but that is rarer. I think that the same problem afflicts both systemd-journald and auditd but it’s a race condition that on my systems (both production and test) is more likely to affect auditd.

root@stretch:/# xattr -l /var/log/audit/audit.log 
security.selinux: 
0000   73 79 73 74 65 6D 5F 75 3A 6F 62 6A 65 63 74 5F    system_u:object_ 
0010   72 3A 61 75 64 69 74 64 5F 6C 6F 67 5F 74 3A 73    r:auditd_log_t:s 
0020   30 00                                              0.

SE Linux uses the xattr “security.selinux”; you can see what it’s doing with xattr(1), but generally using “ls -Z” is easiest.
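
If you want to check labels from a script instead, the same xattr can be read directly from Python. A minimal sketch (Linux-only, needs Python 3.3 or later for os.getxattr); the default path is just an example:

import os
import sys

def selinux_label(path):
    # The label is stored as a (possibly NUL-terminated) byte string.
    try:
        return os.getxattr(path, "security.selinux").rstrip(b"\0").decode()
    except OSError:
        return None

for path in sys.argv[1:] or ["/var/log/audit/audit.log"]:
    print(path, selinux_label(path))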

If this issue just affected “reboot -nffd” then a solution might be to just not run that command. However this affects systems after a power outage.

I have reproduced this bug with kernel 4.9.0-6-amd64 (the latest security update for Debian/Stretch which is the latest supported release of Debian). I have also reproduced it in an identical manner with kernel 4.16.0-1-amd64 (the latest from Debian/Unstable). For testing I reproduced this with a 4G filesystem in a VM, but in production it has happened on BTRFS RAID-1 arrays, both SSD and HDD.

#!/bin/bash 
set -e 
COUNT=$(ps aux|grep [s]bin/auditd|wc -l) 
date 
if [ "$COUNT" = "1" ]; then 
 echo "all good" 
else 
 echo "failed" 
 exit 1 
fi

Firstly, the above is the script /usr/local/sbin/testit. I test for auditd running because it aborts if the context on its log file is wrong. When SE Linux is in enforcing mode an incorrect/missing label on the audit.log file causes auditd to abort.

root@stretch:~# ls -liZ /var/log/audit/audit.log 
37952 -rw-------. 1 root root system_u:object_r:auditd_log_t:s0 4385230 Jun  1 
12:23 /var/log/audit/audit.log

Above is before I do the tests.

while ssh stretch /usr/local/sbin/testit ; do 
 ssh stretch "reboot -nffd" > /dev/null 2>&1 & 
 sleep 20 
done

Above is the shell code I run to do the tests. Note that the VM in question runs on SSD storage which is why it can consistently boot in less than 20 seconds.

Fri  1 Jun 12:26:13 UTC 2018 
all good 
Fri  1 Jun 12:26:33 UTC 2018 
failed

Above is the output from the shell code in question. After the first reboot it fails. The probability of failure on my test system is greater than 50%.

root@stretch:~# ls -liZ /var/log/audit/audit.log  
37952 -rw-------. 1 root root system_u:object_r:unlabeled_t:s0 4396803 Jun  1 12:26 /var/log/audit/audit.log

Now the result. Note that the Inode has not changed. I could understand a newly created file missing an xattr, but this is an existing file which shouldn’t have had its xattr changed. But somehow it gets corrupted.

The first possibility I considered was that SE Linux code might be at fault. I asked on the SE Linux mailing list (I haven’t been involved in SE Linux kernel code for about 15 years) and was informed that this isn’t likely at all. There have been no problems like this reported with other filesystems.

Worse Than FailureCodeSOD: Many Happy Returns

We've all encountered a situation where changing requirements caused some function that had a single native return type to need to return a second value. One possible solution is to put the two return values in some wrapper class as follows:

  class ReturnValues {
    private int    numDays;
    private String lastName;

    public ReturnValues(int i, String s) {
      numDays  = i;
      lastName = s;
    }

    public int    getNumDays()  { return numDays;  }
    public String getLastname() { return lastName; }
  }

It is trivial to add additional return values to this mechanism. If this is used as the return value to an interface function and you don't have access to change the ReturnValues object itself, you can simply subclass the ReturnValues wrapper to include additional fields as needed and return the base class reference.

Then you see something like this spread out over a codebase and wonder if maybe they should have been just a little less agile and that perhaps a tad more planning was required:

  class AlsoReturnTransactionDate extends ReturnValues {
    private Date txnDate;
    public AlsoReturnTransactionDate(int i, String s, Date td) {
      super(i,s);
      txnDate = td;
    }
    public Date getTransactionDate() { return txnDate; }
  }
  
  class AddPriceToReturn extends AlsoReturnTransactionDate {
    private BigDecimal price;
    public AddPriceToReturn(int i, String s, Date td, BigDecimal px) {
      super(i,s,td);
      price = px;
    }
    public BigDecimal getPrice() { return price; }
  }

  class IncludeTransactionData extends AddPriceToReturn {
    private Transaction txn;
    public IncludeTransactionData(int i, String s, Date td, BigDecimal px, Transaction t) {
      super(i,s,td,px);
      txn = t;
    }
    public Transaction getTransaction() { return txn; }
  }

  class IncludeParentTransactionId extends IncludeTransactionData {
    private long id;
    public IncludeParentTransactionId(int i, String s, Date td, BigDecimal px, Transaction t, long id) {
      super(i,s,td,px,t);
      this.id = id;
    }
    public long getParentTransactionId() { return id; }
  }

  class ReturnWithRelatedData extends IncludeParentTransactionId {
    private RelatedData rd;
    public ReturnWithRelatedData(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd) {
      super(i,s,td,px,t,id);
      this.rd = rd;
    }
    public RelatedData getRelatedData() { return rd; }
  }

  class ReturnWithCalculatedFees extends ReturnWithRelatedData {
    private BigDecimal calcedFees;
    public ReturnWithCalculatedFees(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf) {
      super(i,s,td,px,t,id,rd);
      calcedFees = cf;
    }
    public BigDecimal getCalculatedFees() { return calcedFees; }
  }

  class ReturnWithExpiresDate extends ReturnWithCalculatedFees {
    private Date expiresDate;
    public ReturnWithExpiresDate(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed) {
      super(i,s,td,px,t,id,rd,cf);
      expiresDate = ed;
    }
    public Date getExpiresDate() { return expiresDate; }
  }

  class ReturnWithRegulatoryQuantities extends ReturnWithExpiresDate {
    private RegulatoryQuantities regQty;
    public ReturnWithRegulatoryQuantities(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed, RegulatoryQuantities rq) {
      super(i,s,td,px,t,id,rd,cf,ed);
      regQty = rq;
    }
    public RegulatoryQuantities getRegulatoryQuantities() { return regQty; }
  }

  class ReturnWithPriorities extends ReturnWithRegulatoryQuantities {
    private Map<String,Double> priorities;
    public ReturnWithPriorities(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed, RegulatoryQuantities rq, Map<String,Double> p) {
      super(i,s,td,px,t,id,rd,cf,ed,rq);
      priorities = p;
    }
    public Map<String,Double> getPriorities() { return priorities; }
  }

  class ReturnWithRegulatoryValues extends ReturnWithPriorities {
    private Map<String,Double> regVals;
    public ReturnWithRegulatoryValues(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed, RegulatoryQuantities rq, Map<String,Double> p, Map<String,Double> rv) {
        super(i,s,td,px,t,id,rd,cf,ed,rq,p);
        regVals = rv;
    }
    public Map<String,Double> getRegulatoryValues() { return regVals; }
  }

The icing on the cake is that everywhere the added values are used, the base return type has to be cast to at least the level that contains the needed field.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

,

Rondam RamblingsBlame where it's due

I can't say I'm even a little bit surprised that the summit with North Korea has fallen through.  I wouldn't even bother blogging about this except that back in April I expressed some cautious optimism that maybe, just maybe, Trump's bull-in-the-china-shop tactics could be working.  Nothing makes me happier than having my pessimistic prophecies be proven wrong, but alas, Donald Trump seems to be

Planet Linux AustraliaMichael Still: Mirroring all your repos from github


So let me be clear here: I don’t think it’s a bad thing that Microsoft bought github. No one is forcing you to use their services; in fact they make it trivial to stop using them. So what’s the big deal?

I’ve posted about a few git mirror scripts I run at home recently: one to mirror gerrit repos; and one to mirror arbitrary github users.

It was therefore trivial to whip up a slightly nicer script intended to help you forklift your repos out of github if you’re truly concerned. It’s posted on github now (irony intended).

Now you can just do something like:

$ pip install -U -r requirements.txt
$ python download.py --github_token=foo --username=mikalstill

I intend to add support for auto-creating and importing gitlab repos into the script, but haven’t gotten around to that yet. Pull requests welcome.


The post Mirroring all your repos from github appeared first on Made by Mikal.

Rondam RamblingsSCOTUS got the Masterpiece Cake Shop decision badly wrong

The Supreme Court issued its much-anticipated decision in the gay wedding cake case yesterday.  It hasn't made as much of a splash as expected because the justices tried to split the baby and sidestep making what might otherwise have been a contentious decision.  But I think they failed and got it wrong anyway. The gist of the ruling was that Jack Phillips, the cake shop owner, wins the case

Krebs on SecurityResearcher Finds Credentials for 92 Million Users of DNA Testing Firm MyHeritage

MyHeritage, an Israeli-based genealogy and DNA testing company, disclosed today that a security researcher found on the Internet a file containing the email addresses and hashed passwords of more than 92 million of its users.

MyHeritage says it has no reason to believe other user data was compromised, and it is urging all users to change their passwords. It says sensitive customer DNA data is stored on IT systems that are separate from its user database, and that user passwords were “hashed” — or churned through a mathematical model designed to turn them into unique pieces of gibberish text that is (in theory, at least) difficult to reverse.

MyHeritage did not say in its blog post which method it used to obfuscate user passwords, but suggested that it had added some uniqueness to each password (beyond the hashing) to make them all much harder to crack.

“MyHeritage does not store user passwords, but rather a one-way hash of each password, in which the hash key differs for each customer,” wrote Omer Deutsch, MyHeritage’s chief information security officer. “This means that anyone gaining access to the hashed passwords does not have the actual passwords.”

The company said the security researcher who found the user database reported it on Monday, June 4. The file contained the email addresses and hashed passwords of 92,283,889 users who created accounts at MyHeritage up to and including Oct. 26, 2017, which MyHeritage says was “the date of the breach.”

MyHeritage added that it is expediting work on an upcoming two-factor authentication option that the company plans to make available to all MyHeritage users soon.

“This will allow users interested in taking advantage of it, to authenticate themselves using a mobile device in addition to a password, which will further harden their MyHeritage accounts against illegitimate access,” the blog post concludes.

MyHeritage has not yet responded to requests for comment and clarification on several points. I will update this post if that changes.

ANALYSIS

MyHeritage’s repeated assurances that nothing related to user DNA ancestry tests and genealogy data was impacted by this incident are not reassuring. Much depends on the strength of the hashing routine used to obfuscate user passwords.

Thieves can use open-source tools to crack large numbers of passwords that are scrambled by weaker hashing algorithms (MD5 and SHA-1, e.g.) with very little effort. Passwords jumbled by more advanced hashing methods — such as Bcrypt — are typically far more difficult to crack, but I would expect any breach victim who was using Bcrypt to disclose this and point to it as a mitigating factor in a cybersecurity incident.

In its blog post, MyHeritage says it enabled a unique “hash key” for each user password. It seems likely the company is talking about adding random “salt” to each password, which can be a very effective method for blunting large-scale password cracking attacks (if implemented properly).
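
For illustration, here's what per-user salting looks like in practice: a minimal Python sketch using PBKDF2 from the standard library. MyHeritage has not said which algorithm or "hash key" scheme it actually uses, so this is only a sketch of the general technique:

import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=200000):
    salt = salt if salt is not None else os.urandom(16)   # unique per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify(password, salt, expected, iterations=200000):
    _, digest = hash_password(password, salt, iterations)
    return hmac.compare_digest(digest, expected)

salt, stored = hash_password("correct horse battery staple")
assert verify("correct horse battery staple", salt, stored)
assert not verify("wrong guess", salt, stored)

Because every user gets a different random salt, an attacker cannot crack the whole database with one precomputed table; each hash has to be attacked separately, which is where a slow, well-chosen algorithm matters most.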

If indeed the MyHeritage user database was taken and stored by a malicious hacker (as opposed to inadvertently exposed by an employee), there is a good chance that the attackers will be trying to crack all user passwords. And if any of those passwords are crackable, the attackers will then of course get access to the more personal data on those users.

In light of this and the sensitivity of the data involved, it would seem prudent for MyHeritage to simply expire all existing passwords and force a password reset for all users, instead of relying on them to do it themselves at some point (hopefully, before any attackers might figure out how to crack the user password hashes).

Finally, it’s astounding that 92 million+ users thought it was okay to protect such sensitive data with just a username and password, and that MyHeritage is only now getting around to developing two-factor solutions.

It’s now 2018, and two-factor authentication is not a new security technology by any stretch. A word of advice: If a Web site you trust with sensitive personal or financial information doesn’t offer some form of multi-factor authentication, it’s time to shop around.

Check out twofactorauth.org, and compare how your bank, email, Web/cloud hosting or domain name provider stacks up against the competition. If you find a competitor with better security, consider moving your data and business there.

Every company (including MyHeritage) likes to say that “your privacy and the security of your data are our highest priority.” Maybe it’s time we stopped patronizing companies that don’t outwardly demonstrate that priority.

For more on MyHeritage, check out this March 2018 story in The Atlantic about how the company recently mapped out a 13-million person family tree.

Update, June 6, 3:12 p.m. ET: MyHeritage just updated their statement to say that they are now forcing a password reset for all users. From the new section:

“To maximize the security of our users, we have started the process of expiring ALL user passwords on MyHeritage. This process will take place over the next few days. It will include all 92.3 million affected user accounts plus all 4 million additional accounts that have signed up to MyHeritage after the breach date of October 26, 2017.”

“As of now, we’ve already expired the passwords of more than half of the user accounts on MyHeritage. Users whose passwords were expired are forced to set a new password and will not be able to access their account and data on MyHeritage until they complete this. This procedure can only be done through an email sent to their account’s email address at MyHeritage. This will make it more difficult for any unauthorized person, even someone who knows the user’s password, to access the account.”

“We plan to complete the process of expiring all the passwords in the next few days, at which point all the affected passwords will no longer be usable to access accounts and data on MyHeritage. Note that other websites and services owned and operated by MyHeritage, such as Geni.com and Legacy Family Tree, have not been affected by the incident.”

Sociological ImagesStaying Cool as Social Policy

This week I came across a fascinating working paper on air conditioning in schools by Joshua Goodman, Michael Hurwitz, Jisung Park, and Jonathan Smith. Using data from ten million students, the authors find a relationship between hotter school instruction days and lower PSAT scores. They also find that air conditioning offsets this problem, but students of color in lower income school districts are less likely to attend schools with adequate air conditioning, making them more vulnerable to the effects of hot weather.

Climate change is a massive global problem, and the heat is a deeply sociological problem, highlighting who has the means or the social ties to survive dangerous heat waves. For much of our history, however, air conditioning has been understood as a luxury good, from wealthy citizens in ancient Rome to cinemas in the first half of the twentieth century. Classic air conditioning ads make the point:

This is a key problem for making social policy in a changing world. If global temperatures are rising, at what point does adequate air conditioning become essential for a school to serve students? At what point is it mandatory to provide AC for the safety of residents, just like landlords have to provide heat? If a school has to undergo budget cuts today, I would bet that most politicians or administrators wouldn’t think to fix the air conditioning first. The estimates from Goodman and coauthors suggest that doing so could offset the cost, though, boosting learning to the tune of thousands of dollars in future earnings for students, all without a curriculum overhaul.

Making such improvements requires cultural changes as well as policy changes. We would need to shift our understanding of what air conditioning means and what it provides: security, rather than luxury. It also means we can’t always focus social policy as something that provides just the bare minimum, we also have to think about what it means to provide for a thriving society, rather than one that just squeaks by. In an era of climate change, it might be time to rethink the old cliché, “if you can’t stand the heat, get out of the kitchen.”

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramRegulating Bitcoin

Ross Anderson has a new paper on cryptocurrency exchanges. From his blog:

Bitcoin Redux explains what's going wrong in the world of cryptocurrencies. The bitcoin exchanges are developing into a shadow banking system, which do not give their customers actual bitcoin but rather display a "balance" and allow them to transact with others. However if Alice sends Bob a bitcoin, and they're both customers of the same exchange, it just adjusts their balances rather than doing anything on the blockchain. This is an e-money service, according to European law, but is the law enforced? Not where it matters. We've been looking at the details.

The paper.

Worse Than FailureRepresentative Line: A Test Configuration

Tyler Zale's organization is an automation success story of configuration-as-code. Any infrastructure change is scripted, those scripts are tested, and deployments happen at the push of a button.

They'd been running so smoothly that Tyler was shocked when his latest automated pull request for changes to their HAProxy load balancer config triggered a stack of errors long enough to circle the moon and back.

The offending line in the test:

assert File(check_lbconfig).exists and File(check_lbconfig).size == 2884

check_lbconfig points to their load balancer config. Their test asserts that the file exists… and that it's exactly 2884 bytes long. Which, of course, raises its own question: if this worked for years, how on Earth was the file size never changing? I'll let Tyler explain:

To make matters worse, the file being checked is one of the test files, not the actual haproxy config being changed.

As it turns out, at least when it comes to the load balancer, they've never actually tested the live config script. In fact, Tyler is the one who broke their tests by actually changing the test config file and making his own assertions about what it should do.

It took a lot more changes before the tests actually became useful.
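
For contrast, a more useful test would exercise the config that actually ships and assert on things that matter. Here is a minimal sketch in a pytest style, assuming the haproxy binary is available on the test host; the config path and backend name are placeholders rather than Tyler's real values:

    import subprocess

    HAPROXY_CONFIG = "/etc/haproxy/haproxy.cfg"  # placeholder path

    def test_config_parses():
        # Let haproxy itself validate the config in check-only mode.
        result = subprocess.run(
            ["haproxy", "-c", "-f", HAPROXY_CONFIG],
            capture_output=True, text=True,
        )
        assert result.returncode == 0, result.stderr

    def test_config_defines_expected_backend():
        # Assert on content that matters, not on a byte count.
        with open(HAPROXY_CONFIG) as f:
            config = f.read()
        assert "backend app_servers" in config  # hypothetical backend name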


Don MartiEvil stuff on the Internet and following the money

Rule number one of dealing with the big Internet companies is: never complain to them about all the evil stuff they support. It's a waste of time and carpal tunnels. All of the major Internet companies have software, processes, and, most important, contract moderators, to attenuate complaints. After all, if Big Company employees came in to work and saw real user screenshots of the beheading videos, or the child abuse channel, or the ethnic cleansing memes, then that would harsh their mellow and severely interfere with their ability to, as they say in California, bro down and crush code.

Fortunately, we have better options than engaging with a process that's designed to mute a complaint. Follow the money.

Your average Internet ad does not come from some ominous all-seeing data-driven Panopticon. It's probably placed by some marketing person looking at an ad dashboard screen that's just as confusing to them as the ad placement is confusing to you.

So I'm borrowing the technique that "Spocko" started for talk radio, and Sleeping Giants scaled up for ads on extremist sites.

  • Contact a brand's marketing decision makers directly.

  • Briefly make a specific request.

  • Put your request in terms that make not granting it riskier and more time-consuming.

This should be pretty well known by now. What's new is a change in European privacy regulations. The famous European GDPR applies not just to Europeans, but to natural persons. So I'm going to test the idea that if I ask for something specific and easy to do, it will be easier for people to just do it, instead of having to figure out that (1) they have a different policy for people who they won't honor GDPR requests from and (2) they can safely assign me to the non-GDPR group and ignore me.

My simple request is not to include me in a Facebook Custom Audience. I can find the brands that are doing this by downloading ad data from Facebook, and here's a letter-making web thingy that I can use. Try it if you like. I'll follow up with how it's going.

Planet Linux AustraliaSimon Lyall: Audiobooks – May 2018

Ramble On by Sinclair McKay

The history of walking in Britain and some of the author’s experiences. A pleasant listen. 7/10

Inherit the Stars by James P. Hogan

Very hard-core Sci Fi (all tech, no character) about a 50,000-year-old astronaut’s body being found on the moon. Dated in places (everybody smokes) but I liked it. 7/10

Sapiens: A Brief History of Humankind by Yuval Noah Harari

A good overview of pre-history of human species plus an overview of central features of cultures (government, religion, money, etc). Interesting throughout. 9/10

The Adventures of Sherlock Holmes II by Sir Arthur Conan Doyle, read by David Timson

Another four Holmes stories. I’m pretty happy with Timson’s version. Each is only about an hour long. 7/10

The Happy Traveler: Unpacking the Secrets of Better Vacations by Jaime Kurtz

Written by a “happiness researcher” rather than a travel expert. A bit different from what I expected. Lots about structuring your trips to maximize your memories. 7/10

Mrs. Kennedy and Me: An Intimate Memoir by Clint Hill with Lisa McCubbin

I’ve read several of Hill’s books about his time in the US Secret Service; this one overlaps a lot with those but adds some extra Jackie-orientated material. I’d recommend reading the others first. 7/10

The Lost Continent: Travels in Small Town America by Bill Bryson

The author drives through small-town America making funny observations. Just 3 hours long so good bang for buck. Almost 30 years old so feels a little dated. 7/10

A Splendid Exchange: How Trade Shaped the World by William J. Bernstein

A pretty good overview of the growth of trade. Concentrates on the evolution of routes between Asia and Europe. Only brief coverage post-1945. 7/10

The Adventures of Sherlock Holmes III by Sir Arthur Conan Doyle

The Adventure of the Cardboard Box; The Musgrave Ritual; The Man with the Twisted Lip; The Adventure of the Blue Carbuncle. All well done. 7/10

The Gentle Giants of Ganymede (Giants Series, Book 2) by James P. Hogan

Almost as hard-core as the previous book but with less of a central mystery. Worth reading if you like the 1st in the series. 7/10

An Army at Dawn: The War in North Africa, 1942-1943 – The Liberation Trilogy, Book 1 by Rick Atkinson

I didn’t like this as much as I expected or as much as similar books. Can’t quite place the problem though. Perhaps works better when written. 7/10

The Adventures of Sherlock Holmes IV by Sir Arthur Conan Doyle

A Case of Identity; The Crooked Man; The Naval Treaty; The Greek Interpreter. I’m happy with Timson’s version. 7/10


Planet Linux AustraliaMichael Still: Quick note: pre-pulling docker images for ONAP OOM installs


Writing this down here because it took me a while to figure out for myself…

ONAP OOM deploys ONAP using Kubernetes, which effectively means Docker images at the moment. It needs to fetch a lot of Docker images, so there is a convenient script provided to pre-pull those images to make install faster and more reliable.
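
The idea behind such a pre-pull script is straightforward. Purely as an illustration (this is not the OOM script, nor the one from the review discussed below), something along these lines would do the job, assuming a plain text file listing one image per line:

    import subprocess
    import sys

    def pre_pull(manifest_path):
        # Pull every image named in the manifest up front so the actual
        # install doesn't have to fetch them over the network.
        with open(manifest_path) as f:
            images = [line.strip() for line in f if line.strip()]
        for image in images:
            print(f"Pulling {image} ...")
            subprocess.run(["docker", "pull", image], check=True)

    if __name__ == "__main__":
        pre_pull(sys.argv[1])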

The script in the OOM codebase isn’t very flexible, so Jira issue OOM-655 was filed for a better script. That script was covered in code review 30169. Disappointingly, the code reviewer there doesn’t seem to have actually read the Jira issue or the code before abandoning the patch — which isn’t very impressive.

So how do you get the nicer pre-pull script?

It’s actually not too hard once you know the review ID. Just do this inside your OOM git clone:

$ git review -d 30169

You might be prompted for your gerrit details because the ONAP gerrit requires login. Once git review has run, you’ll be left sitting in a branch from when the review was uploaded that includes the script:

$ git branch
  master
* review/james_forsyth/30169

Now just rebase that to bring it in line with master and get on with your life:

$ git rebase -i origin
Successfully rebased and updated refs/heads/review/james_forsyth/30169.

You’re welcome. I’d like to see the ONAP community take code reviews a bit more seriously, but ONAP seems super corporate (even compared to OpenStack), so I’m not surprised that they haven’t done a very good job here.



,

Worse Than FailureCodeSOD: A/F Testing

A/B testing is a strange beast, to me. I understand the motivations, but to me, it smacks of "I don't know what the requirements should be, so I'll just randomly show users different versions of my software until something 'sticks'". Still, it's a standard practice in modern UI design.

What isn't standard is this little blob of code sent to us anonymously. It was found in a bit of code responsible for A/B testing.

    var getModalGreen = function() {
      d = Math.random() * 100;
      if ((d -= 99.5) < 0) return 1;
      return 2;
    };

You might suspect that this code controls the color of a modal dialog on the page. You'd be wrong. It controls which state of the A/B test this run should use, which has nothing to do with the color green or modal dialogs. Perhaps it started that way, but it isn't used that way. Documentation in code can quickly become outdated as the code changes, and this apparently extends to self-documenting code.

The key logic here is that 0.5% of the time, we want to go down the 2 path. You or I might write a check like Math.random() < 0.005. Perhaps, for "clarity", we might multiply the values by 100. What we wouldn't do is subtract 99.5. What we definitely wouldn't do is subtract using the assignment operator.

You'll note that d isn't declared with a var or let keyword. JavaScript doesn't particularly care, but it does mean that if the containing scope declared a d variable, this would be touching that variable.

In fact, there did just so happen to be a global variable d, and many functions dropped values there, for no reason.

This A/B test gets a solid D-.
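
For comparison, the intended logic is a one-liner. Sketched here in Python rather than the original JavaScript (the names are mine, the structure is the same):

    import random

    def get_ab_variant():
        # Send roughly 0.5% of runs down variant 2, everyone else down variant 1.
        return 2 if random.random() < 0.005 else 1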


Planet Linux AustraliaDavid Rowe: Rowetel Blog Post Archive

I’ve written so many blog posts in the last 12 years I can’t find them when I need them again. So here is an Archive page…

,

Planet Linux AustraliaDavid Rowe: Bench Testing HF Radios with a HackRF

This post describes how we implemented a HF channel simulator to bench test a digital HF radio using modern SDRs.

Yesterday Mark and I bench tested a HF radio with calibrated SNR over simulated AWGN and HF channels. We recorded the radio’s transmit signal with an AirSpy HF and GQRX, added calibrated noise and “CCIR Poor” fading, and replayed the signal using a HackRF.

For the FreeDV 700C and 700D work I have developed a utility called cohpsk_ch, which takes a real modem signal, adds channel impairments like noise and fading, and outputs another real signal. It has a built-in Hilbert Transformer, so it can do complex math cleverness like small frequency shifts and ITU-T/CCIR HF fading channel models.

Set Up

The basic idea is to upconvert an 8 kHz real sample file to HF in real time. I have some utilities to help with this in codec2-dev:

$ svn co https://svn.code.sf.net/p/freetel/code/codec2-dev codec2-dev
$ cd codec2-dev/octave
$ octave --no-gui
octave:1> cohpsk_ch_fading("../raw/fast_fading_samples.float", 8000, 1.0, 8000*60)
octave:2> cohpsk_ch_fading("../raw/slow_fading_samples.float", 8000, 0.1, 8000*60)
octave:3> exit
$ cd ..
$ mkdir build_linux && cd build_linux
$ cmake -DCMAKE_BUILD_TYPE=Debug ..
$ make
$ cd unittest 

You also need GNU Octave to generate the HF fading files for cohpsk_ch, and you need to install the very useful CSDR tools.

Connect the HackRF to your SSB receiver; we put a 30dB attenuator in line. Tune the radio to 7.177 MHz LSB. First generate a carrier with your HackRF, offset so we get a 500Hz tone in the SSB radio in LSB mode:

$ hackrf_transfer -f 7176500 -s 8000000 -c 127

Now let’s try some DSB audio:

$ cat ../../wav/ve9qrp.wav | csdr mono2stereo_s16 | ./tsrc - - 10 -c |
./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

Don’t change the frequency, but try switching the mode between USB and LSB. Should sound about the same, with a slight frequency offset due to the HackRF. Note that HackRF is tuned to Fs/4 = 2MHz beneath 7.177MHz. “tlininterp” has a simple Fs/4 mixer that we use to shift the signal away from the HackRF DC spike. We up-sample from 8 kHz to 8 MHz in two steps to save MIPs.
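
To spell out the arithmetic behind those numbers (my bookkeeping, based on the commands above):

8\,\text{kHz} \times 10 \times 100 = 8\,\text{MHz}, \qquad F_s/4 = 8\,\text{MHz}/4 = 2\,\text{MHz}, \qquad 7.177\,\text{MHz} - 2\,\text{MHz} = 5.177\,\text{MHz}

which is why hackrf_transfer is given -f 5177000 while the receiver stays tuned to 7.177 MHz.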

The “csdr mono2stereo_s16” just repeats the real output samples, so we get a DSB signal at HF. A bit lazy, I know; a better approach would be to modify cohpsk_ch to have a complex output option. Let me know if you want to modify cohpsk_ch – I can tell you how.

Checking Calibration

Now I’m pretty confident that cohpsk_ch works well at baseband on digital signals, as I have used it extensively in my HF DV work. However, I wanted to make sure the off-air signal had the correct SNR.

To check the calibration, we generated a Signal + Noise signal from a 1000 Hz sine wave:

$ ./mksine - 1000 30  | ./../src/cohpsk_ch - - -30 --Fs 8000 --ssbfilt 0 | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 12177000 -s 8000000  -t - 2>/dev/null 

Then just a noise signal:

cat /dev/zero | ./../src/cohpsk_ch - - -30 --Fs 8000 --ssbfilt 0 | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

With moderate SNRs (say 10dB), Signal + Noise power is roughly Signal power. So I measured the off air power of the above signals using my FT817 connected to a USB sound card, and an Octave script:

$ rec -t raw -r 8000 -s -2 -c 1 - -q | octave --no-gui -qf power_from_stdio.m

I used alsamixer and the plots from the script to make sure I wasn’t overloading the ADC. You need to turn your receiver AGC OFF, and adjust RF/AF gain to get the levels right.

However from the FT817 I was getting results a few dB off due to the crystal filter bandwidth and non-rectangular shape factor. Mark hooked up his AirSpy HF and GQRX, and we piped the received audio over the LAN to the script:

nc -ul 7355 | octave --no-gui -qf power_from_stdio.m

GQRX had a nice flat response from a few hundred Hz to 3 kHz, the same bandwidth cohpsk_ch uses for SNR measurement. OK, so now we had sensible numbers, within 0.2dB of the SNR reported by cohpsk_ch. We moved the levels up and down 3dB, and made sure everything was repeatable and linear. We went down to 0dB, where signal and noise power is the same, and Signal+Noise power should be 3dB more than Noise alone. Check.
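
For reference, the arithmetic behind those checks (my working, not from the original post): with independent signal and noise, the measured Signal + Noise power sits above the noise floor by

10 \log_{10}\left(1 + 10^{\mathrm{SNR}/10}\right)\ \text{dB}

which works out to about 10.4 dB at 10 dB SNR (so Signal + Noise is within half a dB of Signal alone) and exactly 3.01 dB at 0 dB SNR, matching the 3 dB check above.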

Tests

Then we could play the HF tx signal at a variety of SNRs by tweaking the third (No) argument. In this case we set No to -100dB, so no noise:

cat tx_file_from_radio.wav | ./../src/cohpsk_ch - - -100 --Fs 8000 --ssbfilt 0 | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

At the end of the run, cohpsk_ch will print the SNR it has measured. So you read that and tweak No as required to get the SNR you need; in our case around -30 gave 8dB SNR. You can also add fast (--fast) or slow (--slow) fading; here is a fast fading run at about 2dB SNR:

cat tx_file_from_radio.wav | ./../src/cohpsk_ch - - -24 --Fs 8000 --ssbfilt 0 --fast | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

The “--ssbfilt 0” option switches off the 300-2600 Hz filter inside cohpsk_ch that is used to simulate an SSB radio crystal filter. For our tests, the modem waveform was too wide for that filter.

Thoughts

I guess we could also have used the HackRF to sample the signal. The nice thing about SDRs is that the frequency response is “flat”, with no crystal filters messing things up.

The only thing we weren’t sure about was the sample rate and frequency offset accuracy of the HackRF; for example, if the sample clock was a bit off, that might upset some modems.

The radio we tested delivered performance pretty much on its data sheet at the SNRs tested, giving us extra confidence in the bench testing system described here.

Reading Further

Measuring SDR Noise Figure in Real Time
High Speed Balloon Data Link, where we bench test UHF FSK data radios
README_ofdm.txt, Lots of examples of using cohpsk_ch to test the latest and greatest OFDM modem.
PathSim is a very nice Windows GUI HF path simulator, that runs well on Linux using Wine.

Planet Linux AustraliaLev Lafayette: Installation of MrTrix 3.0_RC2 on HPC Systems

MRtrix is "a set of tools to perform various types of diffusion MRI analyses, from various forms of tractography through to next-generation group-level analyses". It is mostly designed with post-processing visualisation in mind, but for intensive computational tasks it can make use of high-performance computing systems. It is not designed with message-passing in mind, but it can be useful for job arrays.

Download the tarball from GitHub and extract. Curiously, MRtrix has had version inflation, moving from 0.x versions to 3.x versions. One is nevertheless thankful that they use conventional version numbers at all, given how many software projects don't bother these days (every commit is a version, right?).

MRtrix has a number of dependencies, and in this particular example Eigen/3.2.9, Python/3.5.2, and GMP/6.1.1 are included in the environment. The build system block is a "ConfigureMakePythonPackage", to use the EasyBuild vernacular; this means building a Python package and module with a python-driven configure/make/make install. The configuration option configure -nogui is recommended; if not, be prepared to add the appropriate GUI dependencies to the installation.

Now one of the annoying things about this use of the Python ConfigureMake build block is that the prefix options typical of standard autotools are absent, so one must take care of installing into the prefix after the build. As one of their developers has said, "our configure script, while written in python, is completely specific to MRtrix3 – there’s no way you could possibly have come across anything like it before."

As usual, HPC systems (and development environments) find it very useful to have multiple versions of the same software available. Thus, create an appropriate directory for the software to live in (e.g., mkdir -p /usr/local/MRtrix/3-3.0_RC3).

Following this, the MRtrix software will be built in the source directory, which is again less than ideal; separation between source, build, and install directories would be a useful improvement. However, the results can be copied over to the preferred directories.

cp -r bin/ lib/ share/ docs/ /usr/local/MRtrix/3-3.0_RC3

Copying over the docs directory is particularly important, as it provides RST files of core concepts and examples. It is essential that these are provided in a manner that is readable by users on the system they are using, without context-switching, and in their immediate environment (external sources may not be available). Others have expressed disagreement, but it is fairly obvious that they are not speaking from a position of familiarity with such environments.

The following is a sample EasyBuild script for MRtrix (MRtrix-3.0_RC2-GCC-6.2.0-Python-3.5.2.eb).

easyblock = 'ConfigureMakePythonPackage'
name = 'MRtrix'
version = '3.0_RC2'
homepage = 'http://www.brain.org.au/software/index.html#mrtrix'
description = """MRtrix provides a set of tools to perform diffusion-weighted MR white-matter tractography in a manner robust to crossing fibres, using constrained spherical deconvolution (CSD) and probabilistic streamlines."""
toolchain = {'name': 'GCC', 'version': '6.2.0'}
toolchainopts = {'cstd': 'c++11'}
configopts = ['configure -nogui']
buildcmd = ['build']
source_urls = ['https://github.com/MRtrix3/mrtrix3/archive/']
sources = ['%(version)s.tar.gz']
checksums = ['88187f3498f4ee215b2a51d16acb7f2e6c33217e72403a7d48c2ec5da6e2218b']
dependencies = [
    ('Eigen', '3.2.9'),
    ('Python', '3.5.2'),
    ('GMP', '6.1.1'),
]
moduleclass = 'bio'

,

Don MartiRon Estes, US Congress

If Ron Estes, running for US Congress, were a candidate with the same name as a well-known Democratic Party politician, clearly the right-wing pranksters of the USA would give him a bunch of inbound links just for lulz, and to force the better-known politician to spend money on SEO of his own.

But he's not, so people will probably just tweet about the election and stuff.

Don MartiOpting into European mode

Trans Europa Express was covered on ghacks.net. This is an experimental Firefox extension that tries to get web sites to give you European-level privacy rights, even if the site classifies you as non-European.

Since the version they mentioned, I have updated it with a few new features.

Anyway, check it out. Seems to have actual users now, so I've got that going for me. But lots of secret European mode switches still remain unactivated. If you see one, please make a new issue.

,

CryptogramFriday Squid Blogging: Do Cephalopods Contain Alien DNA?

Maybe not DNA, but biological somethings.

"Cause of Cambrian explosion -- Terrestrial or Cosmic?":

Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion -- life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.

Two commentaries.

This is almost certainly not true.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramDamaging Hard Drives with an Ultrasonic Attack

Playing a sound over the speakers can cause computers to crash and possibly even physically damage the hard drive.

Academic paper.

Krebs on SecurityAre Your Google Groups Leaking Data?

Google is reminding organizations to review how much of their Google Groups mailing lists should be public and indexed by Google.com. The notice was prompted in part by a review that KrebsOnSecurity undertook with several researchers who’ve been busy cataloging thousands of companies that are using public Google Groups lists to manage customer support and in some cases sensitive internal communications.

Google Groups is a service from Google that provides discussion groups for people sharing common interests. Because of the organic way Google Groups tend to grow as more people are added to projects — and perhaps given the ability to create public accounts on otherwise private groups — a number of organizations with household names are leaking sensitive data in their message lists.

Many Google Groups leak emails that should probably not be public but are nevertheless searchable on Google, including personal information such as passwords and financial data, and in many cases comprehensive lists of company employee names, addresses and emails.

By default, Google Groups are set to private. But Google acknowledges that there have been “a small number of instances where customers have accidentally shared sensitive information as a result of misconfigured Google Groups privacy settings.”

In early May, KrebsOnSecurity heard from two researchers at Kenna Security who started combing through Google Groups for sensitive data. They found thousands of organizations that seem to be inadvertently leaking internal or customer information.

The researchers say they discovered more than 9,600 organizations with public Google Groups settings, and estimate that about one-third of those organizations are currently leaking some form of sensitive email. Those affected include Fortune 500 companies, hospitals, universities and colleges, newspapers and television stations and U.S. government agencies.

In most cases, to find sensitive messages it’s enough to load the company’s public Google Groups page and start typing in key search terms, such as “password,” “account,” “hr,” “accounting,” “username” and “http:”.

Many organizations seem to have used Google Groups to index customer support emails, which can contain all kinds of personal information — particularly in cases where one employee is emailing another.

Here are just a few of their more eyebrow-raising finds:

• Re: Document(s) for Review for Customer [REDACTED]. Group: Accounts Payable
• Re: URGENT: Past Due Invoice. Group: Accounts Payable
• Fw: Password Recovery. Group: Support
• GitHub credentials. Group: [REDACTED]
• Sandbox: Finish resetting your Salesforce password. Group: [REDACTED]
• RE: [REDACTED] Suspension Documents. Group: Risk and Fraud Management

Apart from exposing personal and financial data, misconfigured Google Groups accounts sometimes publicly index a tremendous amount of information about the organization itself, including links to employee manuals, staffing schedules, reports about outages and application bugs, as well as other internal resources.

This information could be a potential gold mine for hackers seeking to conduct so-called “spearphishing” attacks that single out specific employees at a targeted organization. Such information also would be useful for criminals who specialize in “business email compromise” (BEC) or “CEO fraud” schemes, in which thieves spoof emails from top executives to folks in finance asking for large sums of money to be wired to a third-party account in another country.

“The possible implications include spearphishing, account takeover, and a wide variety of case-specific fraud and abuse,” the Kenna Security team wrote.

In its own blog post on the topic, Google said organizations using Google Groups should carefully consider whether to change the access to groups from “private” to “public” on the Internet. The company stresses that public groups have the marker “shared publicly” right at the top, next to the group name.

“If you give your users the ability to create public groups, you can always change the domain-level setting back to private,” Google said. “This will prevent anyone outside of your company from accessing any of your groups, including any groups previously set to public by your users.”

If your organization is using Google Groups mailing lists, please take a moment to read Google’s blog post about how to check for oversharing.

Also, unless you require some groups to be available to external users, it might be a good idea to turn your domain-level Google Group settings to default “private,” Kenna Security advises.

“This will prevent new groups from being shared to anonymous users,” the researchers wrote. “Secondly, check the settings of individual groups to ensure that they’re configured as expected. To determine if external parties have accessed information, Google Groups provides a feature that counts the number of ‘views’ for a specific thread. In almost all sampled cases, this count is currently at zero for affected organizations, indicating that neither malicious nor regular users are utilizing the interface.”

Worse Than FailureError'd: I Beg Your Entschuldigung?

"Delta does not seem to be so sure of what language to address me in," writes Pat.

 

"I'm wondering if the person writing the release notes made that typo when their mind was...ahem...somewhere else?" writes Pieter V.

 

Brad W. wrote, "For having "Caterpillar," "Revolver," and "Steel Toe" in the description the shoe seems a bit wimpy...maybe the wearer is expected to have an actual steel toe?"

 

"Tomato...tomahto...potato...potahto...GDPR...GPRD...all the same thing. Right?" writes Paul K.

 

"Apparently installing Ubuntu 18.04 on your laptop comes with free increase of battery capacity by almost 40x! Now that's what I call FREE software!" Jordan D. wrote.

 

Ian O. writes, "I don't know why Putin cares about the NE-2 Democratic primary, but I'm sure he added those eight extra precincts for a good reason."

 


,

TEDEbola and the future of vaccines: In conversation with Seth Berkley

At TED2015, Seth Berkley showed two Ebola vaccines under review at the time. One of these vaccines is now being deployed in the current Ebola outbreak in the DRC. Photo: Bret Hartman/TED

Dr. Seth Berkley is an epidemiologist and the CEO of Gavi, the Vaccine Alliance, a global health organization dedicated to improving access to vaccines in developing countries. When he last spoke at TED, in 2015, Seth showed the audience two experimental vaccines for Ebola — both of them in active testing at the time, as the world grappled with the deadly 2014–2016 outbreak. Just last week, one of these vaccines, the Merck rVSV-ZEBOV, was deployed in the Democratic Republic of the Congo to help slow the spread of a new Ebola outbreak in and around the city of Mbandaka. With more than 30 confirmed cases and a contact list of more than 600 people who may be at risk, the situation in the DRC is “on a knife edge,” according to the World Health Organization. Seth flew to the DRC to help launch the vaccine; now back in Geneva, he spoke to TED on the challenges of vaccine development and the stunning risks we are overlooking around global health epidemics.

This interview has been edited and condensed.

You were on the scene in Mbandaka; what were you working on there?

My role was to launch the vaccine — to make sure that this technology which wasn’t going to get made was made, and was made available in case there was another big emergency. And lo and behold, there it is. Obviously, given the emergency nature, a lot of the activity recently has been about how to accelerate the work and prepare the critical pieces that are going to be necessary to get this under control, and not have it spin out of control.

Health workers in the DRC prepare the first dose of the Ebola vaccine. Photo: Pascal Emmanuel Barollier/Gavi

This is the ninth outbreak in the DRC. They are more experienced [with Ebola] than any other country in the world, but the DRC is a massive country, and the people in Mbandaka, Bikoro and Iboko are in very isolated communities. The challenge right now is to set up the basic pillars of Ebola care — basic infection control procedures, making sure that you identify every case, that you create a line-list of cases, and that you identify the context that those cases have had. All of that is the prerequisite to vaccination.

The other thing you have to do is educate the population. They know vaccines — we vaccinate for all diseases in DRC, as we do across most countries in Africa — but the challenge is, people know we do vaccine campaigns where everybody goes to a clinic and get vaccinations, so the idea that somebody comes to your community, goes to a sick person’s house, and vaccinates just people in that house and surrounding family and friends is a concept that won’t make sense. The other important thing is, although the vaccine was 100% effective in the clinical trial … well, it’s 100% effective after 10 days, so people who were already incubating Ebola will go ahead and get diseased. If people don’t understand that, then they’re going to say the vaccine didn’t work and that the vaccine gave them Ebola.

The good news is, logistics is set up. There is an air-bridge from Kinshasa, there’s helicopters to go out to Bikoro, a cold chain of the vaccine is set up in Mbandaka and Bikoro, and there are these cool carriers that keep the vaccine cold so you can transport it out to vaccination campaigns in isolated areas. We have 16,000 doses there, with 300,000 doses total, and we can release more doses as it makes sense.

You mentioned the local communities — how do you navigate that intersection of medical necessity and the lack of education or misinformation? I read that some people are refusing medical treatment and are turning to local healers or churches, instead of getting vaccinated.

There is no treatment right now available in DRC; the hope is that some experimental treatments will come in. We don’t have the equivalent for the vaccines on the treatment side. It’s going to be very important to get those treatments because, without them, what you’re saying to people is: Leave your loved ones, go to an Ebola care facility and get isolated until you most likely die, and if you don’t die, you’ll be sick for a long time. Compare that to the normal process when you get hospitalized in the DRC, which is that your family will take care of you, feed you and provide nursing care. These are tough issues for people to understand even in the best of circumstances. In an ideal world, [health workers will] work with anthropologists and social scientists, but of course, it all has to be done in the local language by people who are trusted. It’s a matter of working to bring in workers from the DRC, religious leaders and elders to educate the community so that they understand what is happening, and can cooperate with the rather chaotic but rapid effort that needs to occur to get this under control.

We know now it’s in three different health zones; we don’t yet know whether cases are connected to other cases or if these are the correct numbers of cases. It could be twice or three or ten times as many. You don’t know until you begin to do the detective work of line-listing. In an ideal world, you know you’re getting where you need to get when 100% of new cases are from the contact list of previous cases, but if 50% or 30% or 80% of the cases are not connected to previous cases, then there are rings of transmission occurring that you haven’t yet identified. This is painstaking, careful detective work.

The EPI manager Dr. Guillaume Ngoie Mwamba is vaccinated in the DRC in response to the 2018 Ebola outbreak. Photo: Pascal Emmanuel Barollier/Gavi

What is different about this outbreak from the 2014 crisis? What will be the impact of this particular vaccine?

It’s the same strain, the Ebola Zaire, just like in West Africa. The difference in West Africa is that they hadn’t seen Ebola before; they initially thought it was Lassa fever or cholera, so it took a long time for them to realize this was Ebola. As I said, the DRC has had nine outbreaks, so the government and health workers are familiar with the situation and were able to say, “Okay, we know this is Ebola, let’s call for help and bring people in.” For the vaccine campaign, they brought in a lot of the vaccinators that worked in Guinea and other countries to help do the vaccination work, because it’s an experimental vaccine under clinical trial protocols, so informed consent is required.

The impact of the vaccine is that once the line-listings are there — it was highly effective in Guinea — if this is an accelerating epidemic and you get good listing of cases, you can stop the epidemic with intervention. The other thing is that you don’t want health workers or others to say “Oh, I got the vaccine now, I don’t have to worry about it!” They still need to use full precautions, because although the vaccine was 100% effective in previous trials, the confidence interval given the size was between 78% and 100%.

In your TED Talk, you mentioned the inevitability of deadly viruses; that they will incubate, that they are an evolutionary reality. On a global level, what more can be done to anticipate epidemics, and how can we be more proactive?

I talked about the concept of prevention: How do you build vaccines for these diseases before they become real problems, and try to treat them like they’re at global health emergency before they become one? There was the creation of the new initiative at last year’s Davos called CEPI (Coalition for Epidemic Preparedness and Innovation) that is working to develop new vaccines against agents that haven’t yet caused major epidemics but have caused small outbreaks, with an understanding that they could. The idea would be to make a risk assessment and leave the vaccines frozen like they were with Ebola; you can’t do a human trial until you have an outbreak.

In 2015, at the TED Conference, Seth Berkley showed this outbreak map. During our conversation last week, he told us: “The last outbreak in 2014 was the first major outbreak. There had been 24 previous outbreaks, a handful of cases to a few hundred cases, but that was the first case that had gone in the tens of thousands. This vaccine was tried in the waning days of that outbreak, so we know what it looks like in an emergency situation.” Photo: Bret Hartman/TED

Now, the biggest threat of all — and I did a different TED talk on this — is global flu. We’re not prepared in case of a flu pandemic. A hundred years ago, the Spanish flu killed between 50 and 100 million people, and today in an interconnected world, it could be many, many times more than that. A billion people travel outside of their countries these days, and there are 66 million displaced people. I often have dinner in Nairobi, breakfast in London, and lunch in New York, and that’s within the incubation period of any of these infections. It’s a very different world now, and we really have to take that seriously. Flu is the worst one; the good thing about Ebola is that it’s not so easy to transmit, whereas the flu is really easy to transmit, as are many other infectious diseases.

It’s interesting to go back to the panic that existed with Ebola — there were only a few cases in the US but this was the “ISIS of diseases,” “the news story of the decade”. The challenge is, people get so worked up and there’s such fear, and then as soon as the epidemic goes away, they forget about it. I tried to raise money after that TED Talk, and people in general weren’t interested: “Oh, that’s yesterday’s disease.” We persevered and made sure in our agreement with Merck that they would produce those doses, even though these are not licensed doses — as soon as they get licensed, they’ll have to get rid of those doses and make more. This was a big commitment, but we said, “Can you imagine what would happen if they had a 100% efficacious vaccine and then an outbreak occurred and we didn’t have any doses of the vaccine?” It was a risky thing to do, but it was the right thing to do from a global risk perspective, and here we are in an outbreak. Maybe it’ll stay small, but right now in the DRC, we’re seeing new cases occurring every day. It’s a scary thing.

The idea that we can make a difference is exciting — we announced the Advance Purchase Commitment in January 2017, and it’s now about a year later and here we have it being used. And it’s amazing that Merck has put this much effort in. They’ve done great work and they deserve credit for this, because it’s not like they’re going to make any money out of this. If they break even, it’ll be lucky. They’re doing this because it’s important and because they can help. We need to bring together all of the groups who can help in these circumstances — it’s the dedication of all the people on the ground from the DRC, as well as international volunteers and agencies, that will provide the systems to get this epidemic under control. There’s a lot of heroes here.

The Wangata Hospital in Mbandaka. Photo: Pascal Emmanuel Barollier/Gavi

The financial aspect is interesting — with the scale and scope of a potential global health crisis like Ebola or the flu, once it’s too late, you wouldn’t even be thinking about the relatively small financial risk of creating a vaccine that could have kept us prepared. Even if there is an immediate financial risk, in the long term, it seems incomparable.

The costs of the last Ebola outbreak were huge. In those three countries, their GDP went from positive to negative, health workers died, and it affected health work going forward, travel on the continent, selling of commodities, etc. Even in the US, the cost of vanquishing the few cases that were there was huge. Even if you’re a cynic and say, “I don’t care about the people, I’m only interested in a capitalistic view of the world”, these outbreaks are really expensive. The problem is there isn’t necessarily a direct link between that and getting products developed and having them stockpiled and ready to go.

The challenge is investing years ahead of time not knowing when a virus will occur or what the strain is going to be. That’s the same thing here with Ebola — we agreed to invest up to $390 million to create a stockpile, at a time when we didn’t have the money and when others weren’t interested. But if we didn’t have those doses, we’d be sitting here saying, “Well gee, shouldn’t we make some doses now?” — it takes a long time to produce the doses, to quality assure and check them, to fill and finish them, and to get them to the site. [It’s important to have] that be done by the world even when the financial incentives aren’t there.

In an interview with NPR’s TED Radio Hour, you mention the “paradox of prevention”, the idea that we seem to view health care with a treatment-centered approach, rather than prevention. With diseases that kill quickly and spread rapidly, we can’t have a solely treatment-focused mindset; we have to be thinking about preventing them from becoming epidemics.

That is right, but we can’t ignore the treatment too [and the context in which you give it]. Personalize it: If your mother gets sick, and you’re dedicated — you would give your life for your mother in that culture, family takes care of family — do you now ship your mother to a center that you’ve heard through the grapevine will lock her up and isolate her, where she will die alone, or do you hide her and pretend she has malaria or something else? But if a doctor can say, “There might be treatment that can save your mother’s life,” well, then you want to do that for her. It [helps create] the right mindset in the population, to know that people are trying to give the best treatment, that this isn’t hopeless.

How do you think that the current Ebola situation will affect the way that we approach vaccine development? The Advance Purchase Commitment was an instance of an industry innovation. How can we continue to create incentives for pharmaceutical companies to invest in long-term development of vaccines that don’t have an immediate or guaranteed market demand?

Every time we support industry with this type of public-private partnership, it increases confidence that vaccines will be bought and supported, and increases the likelihood of industry engagement for future projects. However, it is important to state that this will not be a highly profitable vaccine. There are opportunity costs associated with it, and risks. The commitment helps but doesn’t fully solve the problem. Using push mechanisms like the funding from BARDA, Wellcome Trust and others, or a mechanism like CEPI, also helps with the risk. In an ideal world, there would be more generous mechanisms to actively incentivize industry engagement. Also, by [offering] priority review vouchers, fast track designations and others, governments can put in really good incentives for these types of programs.

Outside of closely monitoring the DRC, what are the next steps in your work?

We just opened a window for typhoid vaccines. And this is perfect timing as we have just seen the first cluster of extreme antibiotic-resistant typhoid in Pakistan, with a case exported to the UK. Pakistan has already submitted an application for support, and the Gates Foundation has provided some doses in the interim. This is an example where prevention is way, way better than cure.