Planet Russell


Cryptogram: Facebook Will Verify the Physical Location of Ad Buyers with Paper Postcards

Krebs on Security: Money Laundering Via Author Impersonation on Amazon?

Patrick Reames had no idea why Amazon.com sent him a 1099 form saying he’d made almost $24,000 selling books via Createspace, the company’s on-demand publishing arm. That is, until he searched the site for his name and discovered someone had been using it to peddle a $555 book that’s full of nothing but gibberish.

The phony $555 book sold more than 60 times on Amazon using Patrick Reames’ name and Social Security number.

Reames is a credited author on Amazon by way of several commodity industry books, although none of them made anywhere near the amount Amazon is reporting to the Internal Revenue Service. Nor does he have a personal account with Createspace.

But that didn’t stop someone from publishing a “novel” under his name. That word is in quotations because the publication appears to be little more than computer-generated text, almost like the gibberish one might find in a spam email.

“Based on what I could see from the ‘sneak peak’ function, the book was nothing more than a computer generated ‘story’ with no structure, chapters or paragraphs — only lines of text with a carriage return after each sentence,” Reames said in an interview with KrebsOnSecurity.

The impersonator priced the book at $555 and it was posted to multiple Amazon sites in different countries. The book — which has been removed from most Amazon country pages as of a few days ago — is titled “Lower Days Ahead,” and was published on Oct 7, 2017.

Reames said he suspects someone has been buying the book using stolen credit and/or debit cards, and pocketing the 60 percent that Amazon gives to authors. At $555 a pop, it would only take approximately 70 sales over three months to rack up the earnings that Amazon said he made.
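The arithmetic checks out: 60 percent of $555 is about $333 per sale, and $24,000 divided by $333 works out to roughly 72 sales, in line with the more than 60 copies the listing shows sold.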

“This book is very unlikely to ever sell on its own, much less sell enough copies in 12 weeks to generate that level of revenue,” Reames said. “As such, I assume it was used for money laundering, in addition to tax fraud/evasion by using my Social Security number. Amazon refuses to issue a corrected 1099 or provide me with any information I can use to determine where or how they were remitting the royalties.”

Reames said the books he has sold on Amazon under his name were done through his publisher, not directly via a personal account (the royalties for those books accrue to his former employer) so he’d never given Amazon his Social Security number. But the fraudster evidently had, and that was apparently enough to convince Amazon that the imposter was him.

Reames said after learning of the impersonation, he got curious enough to start looking for other examples of author oddities on Amazon’s Createspace platform.

“I have reviewed numerous Createspace titles and it’s clear to me that there may be hundreds if not thousands of similar fraudulent books on their site,” Reames said. “These books contain no real content, only dozens of pages of gibberish or computer generated text.”

For example, searching Amazon for the name Vyacheslav Grzhibovskiy turns up dozens of Kindle “books” that appear to be similar gibberish works — most of which have the words “quadrillion,” “trillion” or a similar word in their titles. Some retail for just one or two dollars, while others are inexplicably priced between $220 and $320.

Some of the “books” for sale on Amazon attributed to a Vyacheslav Grzhibovskiy.

“It’s not hard to imagine how these books could be used to launder money using stolen credit cards or facilitating transactions for illicit materials or funding of illegal activities,” Reames said. “I can not believe Amazon is unaware of this and is unwilling to intercede to stop it. I also believe they are not properly vetting their new accounts to limit tax fraud via stolen identities.”

Reames said Amazon refuses to send him a corrected 1099, or to discuss anything about the identity thief.

“They say all they can do at this point is send me a letter acknowledging that I’m disputing ever having received the funds, because they said they couldn’t prove I didn’t receive the funds. So I told them, ‘If you’re saying you can’t say whether I did receive the funds, tell me where they went?’ And they said, ‘Oh, no, we can’t do that.’ So I can’t clear myself and they won’t clear me.”

Amazon said in a statement that the security of customer accounts is one of its highest priorities.

“We have policies and security measures in place to help protect them. Whenever we become aware of actions like the ones you describe, we take steps to stop them. If you’re concerned about your account, please contact Amazon customer service immediately using the help section on our website.”

Beware, however, if you plan to contact Amazon customer support via phone. Performing a simple online search for Amazon customer support phone numbers can turn up some dubious and outright fraudulent results.

Earlier this month, KrebsOnSecurity heard from a fraud investigator for a mid-sized bank who’d recently had several customers who got suckered into scams after searching for the customer support line for Amazon. She said most of these customers were seeking to cancel an Amazon Prime membership after the trial period ended and they were charged a $99 fee.

The fraud investigator said her customers ended up calling fake Amazon support numbers, which were answered by people with a foreign accent who proceeded to request all manner of personal data, including bank account and credit card information. In short order, the customers’ accounts were used to set up new Amazon accounts as well as accounts at a service that facilitates the purchase of virtual currencies like Bitcoin.

This Web site does a good job documenting the dozens of phony Amazon customer support numbers that are hoodwinking unsuspecting customers. Amazingly, many of these numbers seem to be heavily promoted using Amazon’s own online customer support discussion forums, in addition to third-party sites.

Interestingly, clicking on the Customer Help Forum link from the Amazon Support Options and Contact Us page currently sends visitors to the page pictured below, which displays a “Sorry, We Couldn’t Find That Page” error. Perhaps the company is simply cleaning things up after being notified last week by KrebsOnSecurity about the bogus phone numbers being promoted on the forum.

In any case, it appears some of these fake Amazon support numbers are being pimped by a number of dubious-looking e-books for sale on Amazon that are all about — you guessed it — how to contact Amazon customer support.

If you wish to contact Amazon by phone, the only numbers you should use are 888-280-3321 and 888-280-4331. Amazon’s main customer help page is here.

Planet Debian: Daniel Pocock: Hacking at EPFL Toastmasters, Lausanne, tonight

As mentioned in my earlier blog, I'm giving a talk about Hacking at the Toastmasters club at EPFL tonight. Please feel free to join us, and remember to turn off your mobile device or leave it at home; you never know when it might ring or become part of a demonstration.

Worse Than Failure: CodeSOD: The Telltale Snippet

True! nervous, very, very dreadfully nervous I had been and am; but why will you say that I am mad? The disease had sharpened my senses, not destroyed, not dulled them. Above all was the sense of hearing acute. I heard all things in the heaven and in the earth. I heard many things in hell. How then am I mad? Hearken! and observe how healthily, how calmly I can tell you the whole story. - “The Tell-Tale Heart” by Edgar Allan Poe

Today’s submitter is too afraid to say who they are, so we’ll credit them as Too Afraid To Say (TATS). Why? Because like a steady “thump thump” from beneath the floorboards, they are haunted by their crimes. The haunting continues to this very day.

It is impossible to say how the idea entered TATS’s brain, but as a fresh-faced junior developer, they set out to write a flexible web-control in JavaScript. What they wanted was to dynamically add items to the control. Each item was a set of fields- an ID, a tool tip, a description, etc.

Think about how you might pass a list of objects to a method.

    ObjectLookupField.prototype._AddItems = function _AddItems(objItems) {
        if (objItems && objItems.length > 0) {
            var objItemIDs = [];
            var objTooltips = [];
            var objImages = [];
            var objTypes = [];
            var objDeleted = [];
            var objDescriptions = [];
            var objParentTreeCodes = [];
            var objHasChilderen = [];
            var objPath = [];
            var objMarked = [];
            var objLocked = [];

            var blnSkip;

            for (var intI = 0; intI < objItems.length; intI++) {
                objImages.push(objItems[intI].TypeIconURL ? objItems[intI].TypeIconURL : objItems[intI].IconURL);
                objTooltips.push(objItems[intI].Tooltip ? objItems[intI].Tooltip : '');
                objMarked.push(objItems[intI].Marked ? 'Marked' : '');

                // SNIP, not really related
            }

            //TATS also implemented `addItems` which requires all these arrays
            window[this._strControlID].addItems([objItemIDs, objImages, objPath, objTooltips, objLocked, objMarked, objParentTreeCodes, objHasChilderen]);
        }
    };

TATS used the infamous “Arrject” pattern. Instead of having a list of objects, where each object has all of the fields it needs, the Arrject pattern has one array per field, and then we’ll hope that each index holds all the related data for a given item. For example:

    var arrNames = ["Joebob", "Sallybob", "Suebob"];
    var arrAddresses = ["123 Street St", "234 Road Rd", "345 Lane Ln"];
    var arrPhones = ["555-1234", "555-2345", "555-3456"];

The 0th index of every array contains everything you want to know about Joebob.

Most uses of the Arrject pattern end up in code that doesn’t use objects at all, but TATS adds their own little twist. They explode an object into a set of arrays, and then pass those arrays to their own method which creates the necessary DOM elements.
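For contrast, a conventional version of the same method would keep each item’s fields together and hand the whole list over in one go. The sketch below is purely illustrative: it borrows the names visible in the snippet above, assumes a version of addItems that accepts a list of objects, and invents an ItemID field where the original is snipped.

    // Illustrative rewrite, not the original code: one array of complete
    // objects replaces the eleven parallel arrays. `ItemID` is a guessed
    // field name; `addItems` is assumed here to accept objects.
    ObjectLookupField.prototype._AddItems = function _AddItems(objItems) {
        if (!objItems || objItems.length === 0) {
            return;
        }
        var items = objItems.map(function (item) {
            return {
                id: item.ItemID,
                image: item.TypeIconURL ? item.TypeIconURL : item.IconURL,
                tooltip: item.Tooltip ? item.Tooltip : '',
                marked: item.Marked ? 'Marked' : ''
                // ...remaining fields stay on the same object
            };
        });
        window[this._strControlID].addItems(items);
    };

With one object per item, a field can’t silently drift out of alignment the way the parallel arrays can.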

TATS smiled, for what did they have to fear? They bade the senior developers welcome: use my code. And they did.

Before long, this little bit of code propagated throughout their entire codebase; copied, pasted, dropped in, loaded as a JS dependency, hosted on a private CDN. It was everywhere. Time passed, and careers changed. TATS got promoted up to senior. Other seniors left and handed their code off to TATS. And that’s when the thumping beneath the floorboards became intolerable. That is why they are “Too Afraid to Say”. This little ghost, this reminder of their mistakes as a junior dev is always there, waiting beneath their feet, and it keeps. getting. louder.

“Villains!” I shrieked, “dissemble no more! I admit the deed!—tear up the planks!—here, here!—it is the beating of his hideous heart!”



Cryptogram: On the Security of Walls

Interesting history of the security of walls:

Dún Aonghasa presents early evidence of the same principles of redundant security measures at work in 13th century castles, 17th century star-shaped artillery fortifications, and even "defense in depth" security architecture promoted today by the National Institute of Standards and Technology, the Nuclear Regulatory Commission, and countless other security organizations world-wide.

Security advances throughout the centuries have been mostly technical adjustments in response to evolving weaponry. Fortification -- the art and science of protecting a place by imposing a barrier between you and an enemy -- is as ancient as humanity. From the standpoint of theory, however, there is very little about modern network or airport security that could not be learned from a 17th century artillery manual. That should trouble us more than it does.

Fortification depends on walls as a demarcation between attacker and defender. The very first priority action listed in the 2017 National Security Strategy states: "We will secure our borders through the construction of a border wall, the use of multilayered defenses and advanced technology, the employment of additional personnel, and other measures." The National Security Strategy, as well as the executive order just preceding it, are just formal language to describe the recurrent and popular idea of a grand border wall as a central tool of strategic security. There's been a lot said about the costs of the wall. But, as the American finger hovers over the Hadrian's Wall 2.0 button, whether or not a wall will actually improve national security depends a lot on how walls work, but moreso, how they fail.

Lots more at the link.

Planet Debian: Raphaël Hertzog: Freexian’s report about Debian Long Term Support, January 2018

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In January, about 160 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly to 187 hours per month. It would be nice if the slow growth could continue, as the amount of work seems to be slowly growing too.

The security tracker currently lists 23 packages with a known CVE, and the dla-needed.txt file lists 23 as well. The number of open issues seems to be stable compared to last month, which is a good sign.

Thanks to our sponsors

New sponsors are in bold.


Krebs on Security: IRS Scam Leverages Hacked Tax Preparers, Client Bank Accounts

Identity thieves who specialize in tax refund fraud have been busy of late hacking online accounts at multiple tax preparation firms, using them to file phony refund requests. Once the Internal Revenue Service processes the return and deposits money into bank accounts of the hacked firms’ clients, the crooks contact those clients posing as a collection agency and demand that the money be “returned.”

In one version of the scam, criminals are pretending to be debt collection agency officials acting on behalf of the IRS. They’ll call taxpayers who’ve had fraudulent tax refunds deposited into their bank accounts, claim the refund was deposited in error, and threaten recipients with criminal charges if they fail to forward the money to the collection agency.

This is exactly what happened to a number of customers at a half dozen banks in Oklahoma earlier this month. Elaine Dodd, executive vice president of the fraud division at the Oklahoma Bankers Association, said many financial institutions in the Oklahoma City area had “a good number of customers” who had large sums deposited into their bank accounts at the same time.

Dodd said the bank customers received hefty deposits into their accounts from the U.S. Treasury, and shortly thereafter were contacted by phone by someone claiming to be a collections agent for a firm calling itself DebtCredit and using the Web site name debtcredit[dot]us.

“We’re having customers getting refunds they have not applied for,” Dodd said, noting that the transfers were traced back to a local tax preparer who’d apparently gotten phished or hacked. Those banks are now working with affected customers to close the accounts and open new ones, Dodd said. “If the crooks have breached a tax preparer and can send money to the client, they can sure enough pull money out of those accounts, too.”

Several of the Oklahoma bank’s clients received customized notices from a phony company claiming to be a collections agency hired by the IRS.

The domain debtcredit[dot]us hasn’t been active for some time, but an exact copy of the site to which the bank’s clients were referred by the phony collection agency can be found at jcdebt[dot]com — a domain that was registered less than a month ago. The site purports to be associated with a company in New Jersey called Debt & Credit Consulting Services, but according to a record (PDF) retrieved from the New Jersey Secretary of State’s office, that company’s business license was revoked in 2010.

“You may be puzzled by an erroneous payment from the Internal Revenue Service but in fact it is quite an ordinary situation,” reads the HTML page shared with people who received the fraudulent IRS refunds. It includes a video explaining the matter, and references a case number, the amount and date of the transaction, and provides a list of personal “data reported by the IRS,” including the recipient’s name, Social Security Number (SSN), address, bank name, bank routing number and account number.

All of these details no doubt are included to make the scheme look official; most recipients will never suspect that they received the bank transfer because their accounting firm got hacked.

The scammers even supposedly assign the recipients an individual “appointed debt collector,” complete with a picture of the employee, her name, telephone number and email address. However, emails sent to the domain used in the email address from the screenshot above (debtcredit[dot]com) bounced, and no one answers at the provided telephone number.

Along with the Web page listing the recipient’s personal and bank account information, each recipient is given a “transaction error correction letter” with IRS letterhead (see image below) that includes many of the same personal and financial details on the HTML page. It also gives the recipient instructions on the account number, ACH routing and wire number to which the wayward funds are to be wired.

A phony letter from the IRS instructing recipients on how and where to wire the money that was deposited into their bank account as a result of a fraudulent tax refund request filed in their name.

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

On Feb. 2, 2018, the IRS issued a warning to tax preparers, urging them to step up their security in light of increased attacks. On Feb. 13, the IRS warned that phony refunds through hacked tax preparation accounts are a “quickly growing scam.”

“Thieves know it is more difficult to identify and halt fraudulent tax returns when they are using real client data such as income, dependents, credits and deductions,” the agency noted in the Feb. 2 alert. “Generally, criminals find alternative ways to get the fraudulent refunds delivered to themselves rather than the real taxpayers.”

The IRS says taxpayers who receive fraudulent transfers from the IRS should contact their financial institution, as the account may need to be closed (because the account details are clearly in the hands of cybercriminals). Taxpayers receiving erroneous refunds also should consider contacting their tax preparers immediately.

If you go to file your taxes electronically this year and the return is rejected, it may mean fraudsters have beat you to it. The IRS advises taxpayers in this situation to follow the steps outlined in the Taxpayer Guide to Identity Theft. Those unable to file electronically should mail a paper tax return along with Form 14039 (PDF) — the Identity Theft Affidavit — stating they were victims of a tax preparer data breach.

Worse Than Failure: Cousin of ITAPPMONROBOT

Logitech Quickcam Pro 4000

Every year, Initrode Global was faced with further and further budget shortages in their IT department. This wasn't because the company was doing poorly—on the contrary, the company overall was doing quite well, hitting record sales every quarter. The only way to spin that into a smaller budget was to dream bigger. Thus, every quarter, the budget demanded greater and greater increases in sales, and the exceptional growth was measured against the desired phenomenal growth and found wanting.

IT, being a cost center, was always hit by budget cuts the hardest. What did they need money for? The lights were still on, the mainframes still churning; any additional funds would only encourage them to take wild risks and break things.

One of the things people were worried about breaking were the thin clients. These had been purchased some years ago from Smyrt, who had been acquired the previous year by Hell Computers. There would be no tech support or patching, not from Hell. The IT department was on their own to ensure the clients kept running.

Unfortunately, the things seemed to have a will of their own—and that will did not include remaining up for weeks on end. Every once in a while, when booting Linux on the thin clients, the Thin Film Transistor screen would turn dark as soon as the X server started. They would remain dark after that; however, when the helpdesk SSH'd into the system, the screen would of course render perfectly on their end. So there was nothing to do to troubleshoot except lug a thin client to their work area and test workarounds from there.

The worst part of this kind of troubleshooting is when the problem is an intermittent one. The only way they could think to reproduce the problem was to spend hours in front of the client, turning it off and back on again. In the face of budget cuts, the already understaffed desk had no manpower to do something so trivial and dull.

Tedium is the mother of invention. Many of the most ingenious pieces of automation were put in place when an enterprising programmer was faced with performing a mind-numbing task over and over for the foreseeable future. Such is the case in this instance. Lacking the support staff to power cycle the machine over and over, the staff instead built a robot.

A webcam was found in the back room, dusty and abandoned, the last vestige of a proposed work-from-home solution that never quite came to fruition years before. A sticker of transparent rubber someone found in their desk was placed over the metal rim of the camera so it wouldn't leave any scratches on the glass of the TFT screen. The webcam was placed up close against one strategically chosen corner of the screen, and attached to a Raspberry Pi someone brought from home.

The Pi was programmed to run a bash script, which in turn called a CLI image-grabbing tool and then applied some ImageMagick filters to determine the brightness value of the patch of screen it could see. This brightness value was compared against a known list of brightnesses to determine which state the machine was in: the boot menu, the Linux kernel messages scrolling past, the colorful login screen, or the solid black screen representing the problem. When the Pi detected a login screen, it would run a scripted reboot on the thin client using SSH and a keypair. If, instead, the screen remained dark for a long period of time, it would send an IM through the company messaging solution to alert the staff that they could begin their testing, then exit.
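The post describes the logic as a bash script gluing CLI tools together; for illustration only, here is a rough sketch of that loop in Node.js. fswebcam and ImageMagick's convert are real tools of the kind described, but the thresholds, host name, key path and the send-im.sh helper are all invented for this sketch.

    // Hypothetical reconstruction of the watcher (the original was bash).
    // Assumes fswebcam and ImageMagick are installed on the Pi; all
    // thresholds, hosts and paths below are made up.
    var execSync = require('child_process').execSync;

    function screenBrightness() {
        execSync('fswebcam --no-banner /tmp/corner.jpg'); // grab one frame
        // Ask ImageMagick for the mean pixel value, scaled 0..1.
        return parseFloat(execSync(
            'convert /tmp/corner.jpg -colorspace Gray -format "%[fx:mean]" info:'
        ).toString());
    }

    function classify(brightness) {
        if (brightness < 0.05) return 'dark';  // the bug: X started, screen black
        if (brightness < 0.30) return 'boot';  // boot menu or kernel messages
        return 'login';                        // the colorful login screen
    }

    var darkSince = null;
    setInterval(function () {
        var state = classify(screenBrightness());
        if (state === 'login') {
            darkSince = null;
            // Booted cleanly this time, so power cycle and try again.
            execSync('ssh -i /home/pi/.ssh/id_rsa root@thinclient reboot');
        } else if (state === 'dark') {
            if (darkSince === null) darkSince = Date.now();
            if (Date.now() - darkSince > 5 * 60 * 1000) {
                // Bug reproduced: alert the helpdesk over IM and stop.
                execSync('./send-im.sh "Thin client screen stuck dark"');
                process.exit(0);
            }
        }
    }, 60 * 1000);

Distinguishing the boot, login and failure states by mean brightness is the same trick the article describes; only the cutoff values would need tuning against the real screen.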

We've seen machines with the ability to manipulate physical servers. Now, we have machines seeing and evaluating the world in front of them. How long before we reach peak Skynet potential here at TDWTF? And what would the robot revolution look like, with founding members such as these?


Planet Debian: Steve Kemp: How we care for our child

This post is a departure from the regular content, which is supposed to be "Debian and Free Software", but has accidentally turned into a hardware blog recently!

Anyway, we have a child who is now about 14 months old. The way that my wife and I care for him seems logical to us, but often amuses local people. So in the spirit of sharing this is what we do:

  • We divide the day into chunks of time.
  • At any given time one of us is solely responsible for him.
    • The other parent might be nearby, and might help a little.
    • But there is always a designated person who will be changing nappies, feeding, and playing at any given point in the day.
  • The end.

So our weekend routine, covering Saturday and Sunday, looks like this:

  • 07:00-08:00: Husband
  • 08:01-13:00: Wife
  • 13:01-17:00: Husband
  • 17:01-18:00: Wife
  • 18:01-19:30: Husband

Our child, Oiva, seems happy enough with this and he sometimes starts walking from one parent to the other at the appropriate time. But the real benefit is that each of us gets some time off - in my case I get "the morning" off, and my wife gets the afternoon off. We can hide in our bedroom, go shopping, eat cake, or do anything we like.

Week-days are similar, but with the caveat that we both have jobs. I take the morning, and the evenings, and in exchange if he wakes up overnight my wife helps him sleep and settle between 8PM-5AM, and if he wakes up later than 5AM I deal with him.

Most of the time our child sleeps through the night, but if he does wake up it tends to be in the 4:30AM/5AM timeframe. I'm "happy" to wake up at 5AM and stay up until I go to work because I'm a morning person and I tend to go to bed early these days.

Day-care is currently a complex process. There are three families with small children, and ourselves. Each day of the week one family hosts all the children, and the baby-sitter arrives there too (all the families live within a few blocks of each other).

All of the parents go to work, leaving one carer in charge of 4 babies for the day, from 08:15-16:15. On the days when we're hosting the children I greet the carer then go to work - on the days the children are at a different family's house I take him there in the morning, on my way to work, and then my wife collects him in the evening.

At the moment things are a bit terrible because most of the children have been a bit sick, and the carer too. When a single child is sick it's mostly OK, unless it is the child whose home is supposed to be the host-venue. If that child is sick we have to panic and pick another house for that day.

Unfortunately if the child-carer is sick then everybody is screwed, and one parent has to stay home from each family. I guess this is the downside compared to sending the children to public-daycare.

This is private day-care, Finnish-style. The social-services (kela) will reimburse each family €700/month if you're in such a scheme, and carers are limited to a maximum of 4 children. The net result is that prices are stable, averaging €900-€1000 per-child, per month.

(The €700 is refunded after a month or two, so in real terms people like us pay €200-€300/month for Monday-Friday day-care, plus a bit of bureaucracy over deciding which family is hosting and which parents are providing food. With the size being capped, and the fees being pretty standard, the carers earn €3600-€4000/month, which is a good amount. To be a school-teacher you need to be very qualified, but to do this caring is much simpler. It turns out that being an English-speaker can be a bonus too, for some families ;)

Currently our carer has a sick-note for three days, so I'm staying home today, and will likely stay tomorrow too. Then my wife will skip work on Wednesday. (We usually take it in turns but sometimes that can't happen easily.)

But all of this is due to change in the near future, because we've had too many sick days, and both of us have missed too much work.

More news on that in the future, unless I forget.


Planet Debian: Daniel Pocock: SwissPost putting another nail in the coffin of Swiss sovereignty

A few people have recently asked me about the SwissID, as SwissPost has just been sending spam emails out to people telling them "Link your Swiss Post user account to SwissID".

This coercive new application of technology demands users email addresses and mobile phone numbers "for security". A web site coercing people to use text messages "for security" has quickly become a red flag for most people and many blogs have already covered why it is only an illusion of security, putting your phone account at risk so companies can profit from another vector for snooping on you.

SwissID is not the only digital identity solution in Switzerland but as it is run by SwissPost and has a name similar to another service it is becoming very well known.

In 2010 they began offering a solution which they call SuisseID (notice the difference? They are pronounced the same way.) based on digital certificates and compliant with Swiss legislation. Public discussion focussed on the obscene cost with little comment about the privacy consequences and what this means for Switzerland as a nation.

Digital certificates often embed an email address in the certificate.

With SwissID, however, they have a web site that looks like little more than vaporware, giving no details at all about whether certificates are used. It appears they are basically promoting an app that is designed to harvest the email addresses and phone numbers of any Swiss people who install it, lulling them into that folly by using a name that looks like their original SuisseID. If it looks like phishing, if it feels like phishing and if it smells like phishing when any expert takes a brief sniff of their FAQ, then what else is it?

The thing is, the original SuisseID runs on a standalone smartcard so it doesn't need to have your mobile phone number, have permissions to all the data in your phone and be limited to working in areas with mobile phone signal.

The emails currently being sent by SwissPost tell people they must "Please use a private e-mail address for this purpose" but they don't give any information about the privacy consequences of creating such an account or what their app will do when it has access to read all the messages and contacts in your phone.

The actions you can take that they didn't tell you about

  • You can post a registered letter to SwissPost and tell them that for privacy reasons, you are immediately retracting the email addresses and mobile phone numbers they currently hold on file and that you are exercising your right not to give an email address or mobile phone number to them in future.
  • If you do decide you want a SwissID, create a unique email address for it and only use that email address with SwissPost so that it can't be cross-referenced with other companies. This email address is also like a canary in a coal mine: if you start receiving spam on that email address then you know SwissPost/SwissID may have been hacked or the data has been leaked or sold.
  • Don't install their app and if you did, remove it and you may want to change your mobile phone number.

Oddly enough, none of these privacy-protecting ideas were suggested in the email from SwissPost. Whose side are they on?

Why should people be concerned?

SwissPost, like every postal agency, has seen traditional revenues drop and so they seek to generate more revenue from direct marketing and they are constantly looking for ways to extract and profit from data about the public. They are also a huge company with many employees: when dealing with vast amounts of data in any computer system, it only takes one employee to compromise everything: just think of how Edward Snowden was able to act alone to extract many of the NSA's most valuable secrets.

SwissPost is going to great lengths to get accurate data on every citizen and resident in Switzerland, including deploying an app to get your mobile phone number and demanding an email address when you use their web site. That also allows them to cross-reference with your IP addresses.

  • Any person or organization who has your email address or mobile number may find it easier to get your home address.
  • Any person or organization who has your home address may be able to get your email address or mobile phone number.
  • When you call a company from your mobile phone and their system recognizes your phone number, it becomes easier for them to match it to your home address.
  • If SwissPost and the SBB successfully convince a lot of people to use a SwissID, some other large web sites may refuse to allow access without getting you to link them to your SwissID and all the data behind it too. Think of how many websites already try to coerce you to give them your mobile phone number and birthday to "secure" your account, but worse.

The Google factor

The creepiest thing is that over seventy percent of people are apparently using Gmail addresses in Switzerland and these will be a dependency of their registration for SwissID.

Given that SwissID is being promoted as a solution compliant with ZertES legislation that can act as an interface between citizens and the state, the intersection with such a powerful foreign actor as Gmail is extraordinary. For example, if people are registering to vote in Switzerland's renowned referendums and their communication is under the surveillance of a foreign power like the US, that is a mockery of democracy and it makes the allegations of Russian election hacking look like child's play.

Switzerland's referendums, decentralized system of Government, part-time army and privacy regime are all features that maintain a balance between citizen and state: by centralizing power in the hands of SwissID and foreign IT companies, doesn't it appear that the very name SwissID is a mockery of the Swiss identity?

Yellow in motion

No canaries were harmed in the production of this blog.

Planet Debian: Petter Reinholdtsen: The SysVinit upstream project just migrated to git

Surprising as it might sound, there are still computers using the traditional Sys V init system, and there probably will be until systemd starts working on Hurd and FreeBSD. The upstream project still exists, though, and up until today, the upstream source was available from Savannah via subversion. I am happy to report that this just changed.

The upstream source is now in Git, and consists of three repositories:

I do not really spend much time on the project these days, and I have mostly retired from it, but found it best to migrate the source to a good version control system to help those willing to move it forward.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Don Marti: The tracker will always get through?

(I work for Mozilla. None of this is secret. None of this is Mozilla policy. Not speaking for Mozilla here.)

A big objection to tracking protection is the idea that the tracker will always get through. Some people suggest that as browsers give users more ability to control how their personal information gets leaked across sites, things won't get better for users, because third-party tracking will just keep up. On this view, today's easy-to-block third-party cookies will be replaced by techniques such as passive fingerprinting where it's hard to tell if the browser is succeeding at protecting the user or not, and users will be stuck in the same place they are now, or worse.

I doubt this is the case because we're playing a more complex game than just trackers vs. users. The game has at least five sides, and some of the fastest-moving players with the best understanding of the game are the adfraud hackers. Right now adfraud is losing in some areas where they had been winning, and the resulting shift in adfraud is likely to shift the risks and rewards of tracking techniques.

Data center adfraud

Fraudbots, running in data centers, visit legit sites (with third-party ads and trackers) to pick up a realistic set of third-party cookies to make them look like high-value users. Then the bots visit dedicated fraudulent "cash out" sites (whose operators have the same third-party ads and trackers) to generate valuable ad impressions for those sites. If you wonder why so many sites made a big deal out of "pivot to video" but can't remember watching a video ad, this is why. Fraudbots are patient enough to get profiled as, say, a car buyer, and watch those big-money ads. And the money is good enough to motivate fraud hackers to make good bots, usually based on real browser code. When a fraudbot network gets caught and blocked from high-value ads, it gets recycled for lower and lower value forms of advertising. By the time you see traffic for sale on fraud boards, those bots are probably only getting past just enough third-party anti-fraud services to be worth running.

This version of adfraud has minimal impact on real users. Real users don't go to fraud sites, and fraudbots do their thing in data centers (doesn't everyone do their Christmas shopping while chilling out in the cold aisle at an Amazon AWS data center? Seems legit to me) and don't touch users' systems. The companies that pay for it are legit publishers, who not only have to serve pages to fraudbots—remember, a bot needs to visit enough legit sites to look like a real user—but also end up competing with adfraud for ad revenue. Adfraud has only really been a problem for legit publishers. The adtech business is fine with it, since they make more money from fraud than the fraud hackers do, and the advertisers are fine with it because fraud is priced in, so they pay the fraud-adjusted price even for real impressions.

What's new for adfraud

So what's changing? More fraudbots in data centers are getting caught, just because the adtech firms have mostly been shamed into filtering out the embarrassingly obvious traffic from IP addresses that everyone can tell probably don't have a human user on them. So where is fraud going now? More fraud is likely to move to a place where a bot can look more realistic but probably not stay up as long—your computer or mobile device. Expect adfraud concealed within web pages, as a payload for malware, and of course in lots and lots of cheesy native mobile apps. (The Google Play Store has an ongoing problem with adfraud, which is content marketing gold for Check Point Software, if you like "shitty app did WHAT?" stories.) Adfraud makes way more money than cryptocurrency mining, using less CPU and battery.

So the bad news is that you're going to have to reformat your uncle's computer a lot this year, because more client-side fraud is coming. Data center IPs don't get by the ad networks as well as they once did, so adfraud is getting personal. The good news, is, hey, you know all that big, scary passive fingerprinting that's supposed to become the harder-to-beat replacement for the third-party cookie? Client-side fraud has to beat it in order to get paid, so they'll beat it. As a bonus, client-side bots are way better at attribution fraud (where a fraudulent ad gets credit for a real sale) than data center bots.

Users don't have to get protected from every possible tracking technique in order to shift the web advertising game from a hacking contest to a reputation contest. It often helps simply to shift the advertiser's ROI from negative-externality advertising below the ROI of positive-externality advertising.
Advertisers have two possible responses to adfraud: either try to out-hack it, or join the "flight to quality" and cut back on trying to follow big-money users to low-reputation sites in the first place. Hard-to-detect client-side bots, by making creepy fingerprinting techniques less trustworthy, tend to increase the uncertainty of the hacking option and make flight to quality relatively more attractive.


Planet Debian: Joey Hess: futures of distributions

Seems Debian is talking about why they are unable to package whole categories of modern software, such as anything using npm. It's good they're having a conversation about that, and I want to give a broader perspective.

Lars Wirzenius's blog post about it explains the problem well from the Debian perspective. In short: The granularity at which software is built has fundamentally changed. It's now typical for hundreds of small libraries to be used by any application, often pegged to specific versions. Language-specific tools manage all the resulting complexity automatically, but distributions can't muster the manpower to package a fraction of this stuff.

Lars lists some ideas for incremental improvements, but the space within which a Linux distribution exists has changed, and that calls not for incremental changes, but for a fundamental rethink from the ground up. Whether Debian is capable of making such fundamental changes at this point in its lifecycle is up to its developers to decide.

Perhaps other distributions are dealing with the problem better? One way to evaluate this is to look at how a given programming language community feels about a distribution's handling of their libraries. Do they generally see the distribution as a road block that must be worked around, or is the distribution a useful part of their workflow? Do they want their stuff included in the distribution, or does that seem like a lot of pointless bother?

I can only speak about the Haskell community. While there are some exceptions, it generally is not interested in Debian containing Haskell packages, and indeed system-wide installations of Haskell packages can be an active problem for development. This is despite Debian having done a much better job at packaging a lot of Haskell libraries than it has at say, npm libraries. Debian still only packages one version of anything, and there is lag and complex process involved, and so friction with the Haskell community.

On the other hand, there is a distribution that the Haskell community broadly does like, and that's Nix. A subset of the Haskell community uses Nix to manage and deploy Haskell software, and there's generally a good impression of it. Nix seems to be doing something right, that Debian is not doing.

It seems that Nix also has pretty good support for working with npm packages, including ingesting a whole dependency chain into the package manager with a single command, and thousands of npm libraries included in the distribution. I don't know how the npm community feels about Nix, but my guess is they like it better than Debian.

Nix is a radical rethink of the distribution model. And it's jettisoned a lot of things that Debian does, like manually packaging software, or extreme license vetting. It's interesting that Guix, which uses the same technologies as Nix, but seems in many ways more Debian-like with its care about licensing etc, has also been unable to manage npm packaging. This suggests to me that at least some of the things that Nix has jettisoned need to be jettisoned in order to succeed in the new distribution space.

But. Nix is not really exploding in popularity from what I can see. It seems to have settled into a niche of its own, and is perhaps expanding here and there, but not rapidly. It's insignificant compared with things like Docker, that also radically rethink the distribution model.

We could easily end up with some nightmare of lithification, as described by Robert "r0ml" Lefkowitz in his talk. Endlessly copied and compacted layers of code, contained or in the cloud. Programmer-archeologists right out of a Vinge SF novel.

r0ml suggests that we assume that's where things are going (or indeed where they already are outside little hermetic worlds like Debian), and focus on solving technical problems, like deployment of modifications of cloud apps, that prevent users from exercising software freedoms.

In a way, r0ml's ideas are what led me to thinking about extending Scuttlebutt with Annah, and indeed if you squint at that right, it's an idea for a radically different kind of distribution.

Well, that's all I have. No answers of course.

Planet Debian: John Goerzen: The downfall of… Trump or Democracy?

The future of the United States as a democracy is at risk. That’s plenty scary. More scary is that many Americans know this, but don’t care. And even more astonishing is that this same thing happened 45 years ago.

I remember it clearly. January 30, just a couple weeks ago. On that day, we had the news that FBI deputy director McCabe — a frequent target of apparently-baseless Trump criticism — had been pushed out. The Trump administration refused to enforce the bipartisan set of additional sanctions on Russia. And the House Intelligence Committee voted on party lines to release what we all knew then, and since have seen confirmed, was a memo filled with errors designed to smear people investigating the president, but which nonetheless contained enough classified material to cause an almighty kerfuffle in Washington.

I told my wife that evening, “I think today will be remembered as a turning point. Either to the downfall of Trump, or the downfall of our democracy, but I don’t know which.”

I have not written much about this scandal, because so many quality words have already been written. But it is time to add something.

I was interested in Watergate years ago. Back in middle school, I read All the President’s Men. I wondered what it must have been like to live through those events — corruption at the highest level of government, dirty tricks, not knowing how it would play out. I wished I could have experienced it.

A couple of decades later, I have got my wish and I am not amused. After all:

“If these allegations prove to be true, what they were seeking to steal was not the jewels, money or other property of American citizens, but something much more valuable — their most precious heritage, the right to vote in a free election…

If the allegations… are substantiated, there has been a very serious subversion of the integrity of the electoral process, and the committee will be obliged to consider the manner in which such a subversion affects the continued existence of this nation as a representative democracy, and how, if we are to survive, such subversions may be prevented in the future.”

Sen. Sam Ervin Jr, May 17, 1973

That statement from 45 years ago captures accurately my contemporary fears. If foreign interference in our elections is not only tolerated but embraced, where does that leave us? Are we really a republic anymore?

I have been diving back into Watergate. In One Man Against The World: The Tragedy of Richard Nixon, written by Tim Weiner in 2015, he dives into the Nixon story in unprecedented detail, thanks to the release of many more files from that time. In his very first page, he writes:

[Nixon] made war in pursuit of peace. He committed crimes in the name of the law. He tore the country apart while trying to unite it. He sabotaged his presidency by violating the Constitution. He destroyed himself and damaged the nation through deliberate acts of folly…

He practiced geopolitics without subtlety; he preferred subterfuge and brutality. He dropped bombs and napalm without remorse; he believed they delivered a political message beyond flood and fire. He charted the course of the war without a strategy; he delivered victory to his adversaries.

His gravest decisions undermined his allies abroad. His grandest delusions armed his enemies at home…

The truth was not in him; secrecy and deception were his touchstones.

That these words describe another American president, one that I’m sure Weiner had not foreseen, is jarring. The parallels between Nixon and Trump in the pages of Weiner’s book are so strong that one sometimes wonders if Weiner has a more accurate story of Trump than Wolff got – and also if the pages of his book let us see what’s in store for us this year.

Today I started listening to the excellent podcast Slow Burn. If you have time for nothing else, listen to episode 5: True Believers. It discusses the politicization of the Senate Watergate committee, and more ominously, the efforts of reporters to understand the people who still supported Nixon — despite all the damning testimony already out there.

Gail Sheehy went to a bar where Nixon supporters gathered, wanting to get their reaction to the Watergate hearings. The supporters didn’t want to watch. They thought the hearings were just an attempt by liberals to take down Nixon. Sheehy found the president’s people to be “angry, demoralized, and disconcertingly comfortable with the idea of a police state run by Richard Nixon.”

These guys felt they were nobodies… except Richard Nixon gave them an identity. He was a tough guy who was “going to get rid of all those anti-war people, anarchists, terrorists… the people that were tearing down our country!”

Art Buchwald’s tongue-in-cheek handy excuses for Nixon backers seems to be copied almost verbatim by Fox News (substitute Hillary’s emails for Chappaquiddick).

And what happened to the scum of Richard Nixon’s era? Yes, some went to jail, but not all.

  • Steve King, one of Nixon’s henchmen that kidnapped Martha Mitchell (wife of Attorney General and Nixon henchman John Mitchell) for a week to keep her from spilling the beans on Watergate, beat her up, and had her drugged — well he was appointed by Trump to be ambassador to the Czech Republic and confirmed by the Senate.
  • The man that said that the Watergate burglars were “not criminal at heart” because “their only aim was to re-elect the president” later got elected president himself, and pardoned one of the burglars. (Ronald Reagan)
  • The man that said “just let the president do his job!” was also elected president (George H. W. Bush)
  • The man that finally carried out Nixon’s order to fire special prosecutor Archibald Cox was nominated to the Supreme Court, but his nomination was blocked in the Senate. (Robert Bork) He was, however, on the United States Court of Appeals for 6 years.
  • And in an odd conspiracy-laden introduction to a reprint of a youth’s history book on Watergate, none other than Roger Stone, wrapped up in Trump’s shenanigans, was trying to defend Nixon. Oh, and he was a business partner with Paul Manafort and lobbyist for Ferdinand Marcos.

One comfort from all of this is the knowledge that we had been there before. We had lived through an era of great progress in civil rights, and right after that elected a dictatorial crook president. We survived the president’s fervent supporters refusing to believe overwhelming evidence of his crookedness. We survived.

And yet, that is no guarantee. After all, as John Dean put it, Nixon “might have survived if there’d been a Fox News.”

Planet Linux Australia: Pia Waugh: An optimistic future

This is my personal vision for an event called “Optimistic Futures” to explore what we could be aiming for and figure out the possible roles for government in future.

Technology is both an enabler and a disruptor in our lives. It has ushered in an age of surplus, with decentralised systems enabled by highly empowered global citizens, all creating increasing complexity. It is imperative that we transition into a more open, collaborative, resilient and digitally enabled society that can respond exponentially to exponential change whilst empowering all our people to thrive. We have the means now by which to overcome our greatest challenges including poverty, hunger, inequity and shifting job markets but we must be bold in collectively designing a better future, otherwise we may unintentionally reinvent past paradigms and inequities with shiny new things.

Technology is only as useful as it affects actual people, so my vision starts, perhaps surprisingly for some, with people. After all, if people suffer, the system suffers, so the well being of people is the first and foremost priority for any sustainable vision. But we also need to look at what all sectors and communities across society need and what part they can play:

  • People: I dream of a future where the uniqueness of local communities, cultures and individuals is amplified, where diversity is embraced as a strength, and where all people are empowered with the skills, capacity and confidence to thrive locally and internationally. A future where everyone shares in the benefits and opportunities of a modern, digital and surplus society/economy with resilience, and where everyone can meaningfully contribute to the future of work, local communities and the national/global good.
  • Public sectors: I dream of strong, independent, bold and highly accountable public sectors that lead, inform, collaborate, engage meaningfully and are effective enablers for society and the economy. A future where we invest as much time and effort on transformational digital public infrastructure and skills as we do on other public infrastructure like roads, health and traditional education, so that we can all build on top of government as a platform. Where everyone can have confidence in government as a stabilising force of integrity that provides a minimum quality of life upon which everyone can thrive.
  • The media: I dream of a highly effective fourth estate which is motivated systemically with resilient business models that incentivise behaviours to both serve the public and hold power to account, especially as “news” is also arguably becoming exponential. Actionable accountability that doesn’t rely on the linearity and personal incentives of individuals to respond will be critical with the changing pace of news and with more decisions being made by machines.
  • Private, academic and non-profit sectors: I dream of a future where all sectors can more freely innovate, share, adapt and succeed whilst contributing meaningfully to the public good and being accountable to the communities affected by decisions and actions. I also see a role for academic institutions in particular, given their systemic motivation for high veracity outcomes without being attached to one side, as playing a role in how national/government actions are measured, planned, tested and monitored over time.
  • Finally, I dream of a world where countries are not celebrated for being just “digital nations” but rather are engaged in a race to the top in using technology to improve the lives of all people and to establish truly collaborative democracies where people can meaningfully participate in the shaping the optimistic and inclusive futures.

Technology is a means, not an ends, so we need to use technology to both proactively invent the future we need (thank you Alan Kay) and to be resilient to change including emerging tech and trends.

Let me share a few specific optimistic predictions for 2070:

  • Automation will help us redesign our work expectations. We will have a 10-20 hour work week supported by machines, freeing up time for family, education, civic duties and innovation. People will have less pressure to simply survive and will have more capacity to thrive (this is a common theme, but something I see as critical).
  • 3D printing of synthetic foods and nanotechnology to deconstruct and reconstruct molecular materials will address hunger, access to medicine, clothes and goods, and community hubs (like libraries) will become even more important as distribution, education and social hubs, with drones and other aerial travel employed for those who can’t travel. Exoskeletons will replace scooters :)
  • With rocket travel normalised, and only an hour to get anywhere on the planet, nations will see competitive citizenships where countries focus on the best quality of life to attract and retain people, rather than largely just trying to attract and retain companies as we do today. We will also likely see the emergence of more powerful transnational communities that have nationhood status to represent the aspects of people’s lives that are not geopolitically bound.
  • The public service has highly professional, empathetic and accountable multi-disciplinary experts on responsive collaborative policy, digital legislation, societal modeling, identifying necessary public digital infrastructure for investment, and well controlled but openly available data, rules and transactional functions of government to enable dynamic and third party services across myriad channels, provided to people based on their needs but under their control. We will also have a large number of citizens working 1 or 2 days a week in paid civic duties on areas where they have passion, skills or experience to contribute.
  • The paralympics will become the main game, as it were, with no limits on human augmentation. We will do the 100m sprint with rockets, judo with cyborgs, rock climbing with tentacles. We have access to medical capabilities to address any form of disease or discomfort but we don’t use the technologies to just comply to a normative view of a human. People are free to choose their form and we culturally value diversity and experimentation as critical attributes of a modern adaptable community.

I’ve only been living in New Zealand a short time but I’ve been delighted and inspired by what I’ve learned from kiwi and Māori cultures, so I’d like to share a locally inspired analogy.

Technology is on one hand, just a waka (canoe), a vehicle for change. We all have a part to play in the journey and in deciding where we want to go. On the other hand, technology is also the winds, the storms, the thunder, and we have to continually work to understand and respond to emerging technologies and trends so we stay safely on course. It will take collaboration and working towards common goals if we are to chart a better future for all.

Planet DebianLars Wirzenius: What is Debian all about, really? Or: friction, packaging complex applications

Another weekend, another big mailing list thread

This weekend, those interested in Debian development have been having a discussion on the debian-devel mailing list about "What can Debian do to provide complex applications to its users?". I'm commenting on that in my blog rather than the mailing list, since this got a bit too long to be usefully done in an email.

directhex's recent blog post "Packaging is hard. Packager-friendly is harder." is also relevant.

The problem

To start with, I don't think the email that started this discussion poses the right question. The problem is not really about complex applications; we already have those in Debian. See, for example, LibreOffice. The discussion is really about how Debian should deal with the way some types of applications are developed upstream these days. They're not all complex, and they're not all big, but as usual, things only get interesting when n is big.

A particularly clear example is the whole nodejs ecosystem, but it's not limited to that and it's not limited to web applications. This is also not the first time this topic arises, but we've never come to any good conclusion.

My understanding of the problem is as follows:

A current trend in software development is to use high-level, often interpreted languages together with heavy use of third-party libraries, installed via a language-specific package manager by the developer, and sometimes also by the sysadmin deploying the software in production. This bypasses the Linux distributions entirely. The benefit is that it has created ecosystems for specific programming languages in which there is very little friction for developers using libraries written in that language, speeding up development cycles a lot.

When I was young(er) the world was horrible

In comparison, in the old days, which for me means the 1990s, and before Debian took over my computing life, the cycle was something like this:

I would be writing an application, and would need to use a library to make some part of my application easier to write. To use that library, I would download the source code archive of the latest release, and laboriously decipher and follow the build and installation instructions, fix any problems, rinse, repeat. After getting the library installed, I would get back to developing my application. Often the installation of the dependency would take hours, so not a thing to be undertaken lightly.

Debian made some things better

With Debian, and apt, and having access to hundreds upon hundreds of libraries packaged for Debian, this became a much easier process. But only for the things packaged for Debian.

For those developing and publishing libraries, Debian didn't make the process any easier. They would still have to publish a source code archive, but also hope that it would eventually be included in Debian. And updates to libraries in the Debian stable release would not get into the hands of users until the next Debian stable release. This is a lot of friction. For C libraries, that friction has traditionally been tolerable. The effort of making the library in the first place is considerable, so any friction added by Debian is small by comparison.

The world has changed around Debian

In the modern world, developing a new library is much easier, and so also the friction caused by Debian is much more of a hindrance. My understanding is that things now happen more like this:

I'm developing an application. I realise I could use a library. I run the language-specific package manager (pip, cpan, gem, npm, cargo, etc), it downloads the library, installs it in my home directory or my application source tree, and in less than the time it takes to have a sip of tea, I can get back to developing my application.
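
To make that concrete, the whole loop is typically a single command per ecosystem (the package names here are only examples):

    npm install express          # nodejs, lands in ./node_modules
    pip install --user requests  # python, lands in ~/.local
    gem install nokogiri         # ruby, lands in the gem home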

This has a lot less friction than the Debian route. The attraction to application programmers is clear. For library authors, the process is also much streamlined. Writing a library, especially in a high-level language, is fairly easy, and publishing it for others to use is quick and simple. This can lead to a virtuous cycle where I write a useful little library, you use and tell me about a bug or a missing feature, I add it, publish the new version, you use it, and we're both happy as can be. Where this might have taken weeks or months in the old days, it can now happen in minutes.

The big question: why Debian?

In this brave new world, why would anyone bother with Debian anymore? Or any traditional Linux distribution, since this isn't particularly specific to Debian. (But I mention Debian specifically, since it's what I know best.)

A number of things have been mentioned or alluded to in the discussion mentioned above, but I think it's good for the discussion to be explicit about them. As a computer user, software developer, system administrator, and software freedom enthusiast, I see the following reasons to continue to use Debian:

  • The freeness of software included in Debian has been vetted. I have a strong guarantee that software included in Debian is free software. This goes beyond the licence of that particular piece of software; it includes practical considerations, like whether the software can actually be built using free tooling, and whether I have access to that tooling, because the tooling, too, is included in Debian.

    • There was a time when Debian debated (with itself) whether it was OK to include a binary that needed to be built using a proprietary C compiler. We decided that it isn't, or not in the main package archive.

    • These days we have the question of whether "minimised Javascript" is OK to be included in Debian, if it can't be produced using tools packaged in Debian. My understanding is that we have already decided that it's not, but the discussion continues. To me, this seems equivalent to the above case.

  • I have a strong guarantee that software in a stable Debian release won't change underneath me in incompatible ways, except in special circumstances. This means that if I'm writing my application and targeting Debian stable, the library API won't change, at least not until the next Debian stable release. Likewise for every other bit of software I use. Having things continue to work without my having to worry is a good thing.

    • Note that a side-effect of the low friction of library development in current ecosystems is that library APIs sometimes change. That would mean my application needs to change to adapt to the API change. That's friction for my work.
  • I have a strong guarantee that a dependency won't just disappear. Debian has a large mirror network of its package archive, and there are easy tools to run my own mirror, if I want to. While running my own mirror is possible for other package management systems, each one adds to the friction.

    • The nodejs NPM ecosystem seems to be especially vulnerable to this. More than once, packages have gone missing, causing other projects that depend on them to start failing.

    • The way the Debian project is organised, it is almost impossible for this to happen in Debian. Not only are package removals carefully co-ordinated, packages that are depended on by other packages aren't removed.

  • I have a strong guarantee that a Debian package I get from a Debian mirror is the official package from Debian: either the actual package uploaded by a Debian developer or a binary package built by a trusted Debian build server. This is because Debian uses cryptographic signatures of the package lists and I have a trust path to the Debian signing key.

    • At least some of the language-specific package managers fail to have such a trust path. This means that I have no guarantee that the library package I download today is the same code uploaded by the library author.

    • Note that https does not help here. It protects the transfer from the package manager's web server to me, but makes absolutely no guarantees about the validity of the package. There have been enough cases of package repositories being attacked that this matters to me. Debian's signatures protect against malicious changes on mirror hosts. (A minimal sketch of what that verification looks like follows this list.)

  • I have a reasonably strong guarantee that any problem I find can be fixed, by me or someone else. This is not a strong guarantee, because Debian can't do anything about insanely complicated code, for example, but at least I can rely on being able to rebuild the software. That's a basic requirement for fixing a bug.

  • I have a reasonably strong guarantee that, after upgrading to the next Debian stable release, my stuff continues to work. Upgrades may always break, but at least Debian tests them and treats it as a bug if an upgrade doesn't work, or loses user data.
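
As a minimal sketch of the signature check mentioned in the https point above (apt performs this automatically; the mirror URL and suite are just examples):

    # fetch a suite's Release file and its detached signature from a mirror
    wget -q http://deb.debian.org/debian/dists/stable/Release
    wget -q http://deb.debian.org/debian/dists/stable/Release.gpg
    # verify the signature against the Debian archive keyring
    gpgv --keyring /usr/share/keyrings/debian-archive-keyring.gpg Release.gpg Release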

These are the reasons why I think Debian and the way it packages and distributes software is still important and relevant. (You may disagree. I'm OK with that.)

What about non-Linux free operating systems

I don't have much personal experience with non-Linux systems, so I've only talked about Linux here. I don't think the BSD systems, for example, are actually all that different from Linux distributions. Feel free to substitute "free operating system" for "Linux" throughout.

What is it Debian tries to do, anyway?

The previous section is one level of abstraction too low. It's important, but it's beneficial to take a further step back and consider what it is Debian actually tries to achieve. Why does Debian exist?

The primary goal of Debian is to enable its users to use their computers using only free software. The freedom aspect is fundamentally important and a principle that Debian is not willing to compromise on.

The primary approach to achieve this goal is to produce a "distribution" of free software, to make installing a free software operating system and applications, and maintaining such a computer, feasible for our users.

This leads to secondary goals, such as:

  • Making it easy to install Debian on a computer. (For values of easy that should be compared to toggling boot sector bytes manually.)

    We've achieved this, though of course things can always be improved.

  • Making it easy to install applications on a computer with Debian. (Again, compared to the olden days, when that meant configuring and compiling everything from scratch, with no guidance.)

    We've achieved this, too.

  • A system with Debian installed is reasonably secure, and easy to keep reasonably secure.

    This means Debian will provide security support for software it distributes, and has ways in which to install security fixes. We've achieved this, though this, too, can always be improved.

  • A system with Debian installed should keep working for extended periods of time. This is important to make using Debian feasible. If it takes too much effort to keep a computer running Debian, it's not feasible for many people to do that, and then Debian fails its primary goal.

    This is why Debian has stable releases with years of security support. We've achieved this.

The disconnect

On the one hand, we have Debian, which has pretty much achieved what I declare to be its primary goal. On the other hand, a lot of developers now expect much less friction than what Debian offers. This disconnect is, I believe, the cause of the debian-devel discussion, and of variants of that discussion all over the open source landscape.

These discussions often go one of two ways, depending on which community is talking.

  • In the distribution and more old-school communities, the low-friction approach of language-specific package managers is often considered to be a horror, and an abandonment of all the good things that the Linux world has achieved. "Young saplings, who do they think they are, all agile and bendy and with no principles at all, get off our carefully cultivated lawn."

  • In the low-friction communities, Linux distributions are something only old, stodgy, boring people care about. "Distributions are dead, they only get in the way, nobody bothers with them anymore."

This disconnect will require effort by both sides to close the gap.

On the one hand, so much new software is being written by people using the low-friction approach, that Linux distributions may fail to attract new users and especially new developers, and this will hurt them and their users.

On the other hand, the low-friction people may be sawing off the tree branch they're sitting on. If distributions suffer, the base on which low-friction development relies will wither away, and we'll be left with running low-friction free software on proprietary platforms.

Things for low-friction proponents to improve

Here are a few things I've noticed that go wrong in the various communities oriented towards the low-friction approach.

  • Not enough care is given to copyright licences. This is a boring topic, but it's the legal basis on which all of free software and open source rests. If copyright licences are violated, or copyrights are not respected, or copyrights or licences are not expressed well enough, or incompatible licences are mixed, the result can easily fail to be either free software or open source.

    It's boring, but be sufficiently pedantic here. It's not even all that difficult.

  • Do provide actual source. It seems quite a number of Javascript projects only distribute "minimised" versions of code. That's not actually source code, any more than, say, Java byte code is, even if a de-compiler can make it kind of editable. If source isn't available, it's not free software or open source.

  • Please try to be careful with API changes. What used to work should still work with a new version of a library. If you need to make an API change that breaks compatibility, find a way to still support those who rely on the old API, using whatever mechanisms available to you. Ideally, support the old API for a long time, years. Two weeks is really not enough.

  • Do be careful with your dependencies. Locking dependencies to a specific version makes things difficult for distributions, because they often can only provide one or a very small number of versions of any one package; a small illustration follows this list. Likewise, avoid embedding dependencies in your own source tree, because that explodes the amount of work distributions have to do to patch security holes. (No, distributions can't rely on tens of thousands of upstreams to each do the patching correctly and promptly.)
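
As a tiny illustration of the version-pinning point above (the library name and versions are invented for the example):

    # a loose constraint lets a distribution satisfy the dependency with
    # whatever patched version it ships
    pip install 'somelib>=2.1,<3'

    # a hard pin demands exactly one upstream release, multiplying the
    # versions a distribution would have to carry
    pip install 'somelib==2.1.4'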

Things for Debian to improve

There are many sources of friction that come from Debian itself. Some of them are unavoidable: if upstream projects don't take care of copyright licence hygiene, for example, then Debian will impose that on them and that can't be helped. Other things are more avoidable, however. Here's a list off the top of my head:

  • A lot of stuff in Debian happens over email, which might happen using a web application, if it were not for historical reasons. For example, the Debian bug tracking system requires using email, and given spam filtering, replies can be delayed by more than fifteen minutes. This is a source of friction that could be avoided.

  • Likewise, Debian voting happens over email, which can cause friction from delays.

  • Debian lets its package maintainers use any version control system, any packaging helper tooling, and any packaging workflow they want. This means that every package is, to some extent, new territory for someone other than its primary maintainers. Even when the same tools are used, they can be used in a variety of different ways. Consistency should reduce friction.

  • There's too little infrastructure to do things like collecting copyright information into debian/copyright. This really shouldn't be a manual task.

  • Debian packaging uses arcane file formats, loosely based on email headers (an example stanza is shown after this list). More standard formats might make things easier, and reduce friction.

  • There's not enough automated testing, or it's too hard to use, making it difficult to know whether a new package will work, or whether a modified package breaks anything that used to work.

  • Overall, making a Debian package tends to require too much manual work. Packaging helpers like dh certainly help, but not enough. I don't have a concrete suggestion for how to reduce it, but it seems like an area Debian should work on.

  • Maybe consider supporting installing multiple versions of a package, even if only for, say, Javascript libraries. Possibly with a caveat that only specific versions will be security supported, and a way to alert the sysadmin if vulnerable packages are installed. Dunno, this is a difficult one.

  • Maybe consider providing something where the source package gets automatically updated to every new upstream release (or commit), with binary packages built from that, and those automatically tested. This might be a separate section of the archive, and packages would be included into the normal part of the archive only by manual decision.

  • There's more, but mostly not relevant to this discussion, I think. For example, Debian is a big project, and the mere size is a cause of friction.
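
For readers who haven't seen the "arcane" email-header style mentioned above, here is an abridged, made-up debian/control stanza:

    Source: examplepkg
    Maintainer: Jane Doe <jane@example.org>
    Build-Depends: debhelper (>= 10)

    Package: examplepkg
    Architecture: any
    Depends: ${shlibs:Depends}, ${misc:Depends}
    Description: one-line summary of the package
     Longer description, with each continuation
     line indented by a single space.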


I don't allow comments on my blog, and I don't want to debate this in private. If you have comments on anything I've said above, please post to the debian-devel mailing list. Thanks.


To ensure I get some responses, I will leave this bait here:

Anyone who's been programming less than 12332 days is a young whipper-snapper and shouldn't be taken seriously.

Depending on the latest commit of a library is too slow. The proper thing to do for really fast development is to rely on the version in the unsaved editor buffer of the library developer.

You shouldn't have read any of this. I'm clearly a troll.

Planet DebianMartín Ferrari: OSM in IkiWiki

For about 15 years, I have been thinking of creating a geo-referenced wiki of pubs, with loads of structured data to help searching. I don't know if it would be useful for anybody else, but I know I would use it!

Sadly, the many times I started coding something towards that goal, I ended up blocked by something, and I kept postponing my dream project.

Independently of that, for the past two years I have been driving a regular social meeting in Dublin for CouchSurfers, called the Dublin Mingle. The idea is pretty simple: to go every week to a different pub, and make friends.

I wanted to make a map marking all the places visited. Completely useless, but pretty! So, I went back to looking into IkiWiki internals, as the current osm plugin would not fulfill all my needs, and has a few annoying bugs.

After a few days of work, I made it: a refurbished osm plugin that uses the modern and pretty Leaflet library. If the javascript is not lost on the way (because you are reading from an aggregator, for example), below you should see the result. Otherwise, you can see it in action on its own page: Mingle.


The code is still not ready for merging into Ikiwiki, as I need to write tests and documentation. But you can find the changes in my GitHub repo.

There is still a long way to go before I can create my pubs wiki, but this is the first building block! Now I need a way to easily import and sync data from OSM, and then to create a structured search function.


Don MartiThis is why we can't have nice brands.

What if I told you that there was an Internet ad technology that...

  • can reach the same user on mobile and desktop

  • uses open-standard persistent identifiers for users

  • can connect users to their purchase history

  • reaches the users that the advertiser chooses, at the time the advertiser chooses

  • and doesn't depend on the Google/Facebook duopoly?

Don't go looking for it on the Lumascape.

I'm describing email spam.

Every feature that adtech is bragging on, or working toward? Email spam had it in the 1990s.

So why didn't brand advertisers jump all over spam? Why did they mostly leave it to low-reputation brands and scammers?

To be honest, it probably wasn't a decision decision in most cases, just corporate sloth. But staying away from spam was the right answer. In the email inbox, spam from a high-reputation brand doesn't look any different from spam that any fly-by-night operation can send. All spammers can do the same stuff:

They can sell to people...for a fraction of what marketing used to cost. And they can collect data on these consumers, track what they buy, what they love and hate about the experience, and market to them directly much more effectively.

Oh, wait. That's not about spam in the 1990s. That's about targeted advertising on social media sites today. The CEO of digital advertising's biggest trade group says most big marketers are screwed unless they completely change their business models.

It's the direct consumer relationships, and the use of consumer data, that is completely game-changing for the marketing world. And most big marketers, such as Procter & Gamble and Unilever, are not ready for this new reality, the IAB says.

But of course they're ready. The difference is that those established brand advertisers aren't any more ready than some guy who watched a YouTube video series on growth hacking and is ready to start buying targeted ads and drop-shipping.

The "new reality," the targeted advertising business that the IAB wants brands to join them in, is a place where you win based not on how much the audience trusts you, but on how well you can out-hack the competition. And like any information space organized by hacking skill, it's a hellscape of deceptive crap. Read The Strange Brands in Your Instagram Feed by Alexis C. Madrigal.

Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

Of course, not every brand that buys a social media ad or other targeted ad is crap.

But a social media ad is useless for telling crap brands from non-crap ones. It doesn't carry economic signal. There's no such thing as a free watch. (PDF)

Rory Sutherland writes, in Reducing activities to their core misses the point,

Many billions of pounds of advertising expenditure have been shifted from conventional media, most notably newspapers, and moved into digital media in a quest for targeted efficiency. If advertising simply works by the conveyance of messages, this would be a sensible thing to do. However, it is beginning to become apparent that not all, perhaps not even most, advertising works this way. It seems that a large part of advertising creates trust and conviction in its audience precisely because it is perceived to be costly.

If anyone knows that any seller can watch a few YouTube videos and do a certain activity, does that activity really help the audience distinguish a high-reputation seller from a low-reputation one?

And how does it affect a legit brand when its ads show up on the same medium with all the crappy ones? Twitter has a solution for this: just don't show any ads to the important people. I'm surprised they can get away with this, but given the mix of rip-off and real brand ads I keep seeing there, it seems to be working.

Extremists and state-sponsored misinformation campaigns aren't "abusing" targeted advertising. They're just taking advantage of a system optimized for deception and using it normally.

Now, I don't want to blame targeted advertising for all of the problems of brand equity. When you put high-fructose corn syrup in your product, brand equity suffers. When you outsource or de-skill the customer support function, brand equity suffers. All the half-ass "looks good this quarter" stuff that established brands are doing is bad for brand equity. It just turns out that the only kinds of advertising that you can do on the Internet today are all half-ass "looks good this quarter" stuff. If you want to send a credible economic signal, buy TV time or put a flagship store on some expensive real estate. The Internet's got nothing for you.

Failure to create signal-carrying ad units should be more of a concern for people who want to earn ad money on the Internet than it is. All that work that went into building the most complicated ad medium ever? It went into building an ad medium optimized for low-reputation advertisers. And that kind of ad medium tends to see rates go down over time. It doesn't hold value.

What's the answer? The medium can't gain value until the users trust it, which means they have to trust the client. In-browser tracking protection is going to have to enable the legit web advertising industry the same way that spam filters enable the legit email newsletter industry.

Here’s why the epidemic of malicious ads grew so much worse last year

Facebook and Google could lose $2B in ad revenue over ‘toxic content’

How I Cracked Facebook’s New Algorithm And Tortured My Friends

Wanted: Console Text Editor for Windows

Where Did All the Advertising Jobs Go?

Facebook patents tech to determine social class

The Mozilla Blog: A Perspective: Firefox Quantum’s Tracking Protection Gives Users The Right To Be Curious

Breaking up with Facebook: users confess they're spending less time

Survey: Facebook is the big tech company that people trust least

The Perils of Paid Content


Unilever pledges to cut ties with ‘platforms that create division’

Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

The House That Spied on Me

Why Facebook's Disclosure to the City of Seattle Doesn't Add Up

Debunking common blockchain-saving-advertising myths

SF tourist industry struggles to explain street misery to horrified visitors

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

How Facebook Helped Ruin Cambodia's Democracy

Planet DebianLouis-Philippe Véronneau: Downloading all the Critical Role podcasts in one batch

I've been watching Critical Role1 for a while now and since I've started my master's degree I haven't had much time to sit down and watch the show on YouTube as I used to do.

I thus started listening to the podcasts instead; that way, I can listen to the show while I'm doing other productive tasks. Pretty quickly, I grew tired of manually downloading every episode each time I finished the last one. To make things worse, the podcast is hosted on PodBean and they won't let you download episodes on a mobile device without their app. Grrr.

After the 10th time opening the terminal on my phone to download the podcast using some wget magic I decided enough was enough: I was going to write a dumb script to download them all in one batch.

I'm a little ashamed to say it took me more time than I had intended... The PodBean website uses semi-randomized URLs, so I could not figure out a way to guess the paths to the hosted audio files. I considered using youtube-dl to get the DASH version of the show on YouTube, but Google has been heavily throttling DASH streams recently. Not cool Google.

I then had the idea to use iTunes' RSS feed to get the audio files. Surely they would somehow be included there? Of course Apple doesn't give you a simple RSS feed link on the iTunes podcast page, so I had to rummage around and eventually found out this is the link you have to use:

Surprise surprise: from the json file this link points to, I found out the main Critical Role podcast page has a proper RSS feed. In my defense, the RSS button on the main podcast page brings you to some PodBean crap page.

Anyway, once you have the RSS feed, it's only a matter of using grep and sed until you get what you want.

Around 20 minutes later, I had downloaded all the episodes, for a total of 22 GB! Victory dance!


Here's the bash script I wrote. You will need recode to run it, as the RSS feed includes some HTML entities.

# Get the whole RSS feed
# (FEED_URL is a placeholder: the actual feed URL was stripped from this
#  aggregated copy, so set it to the podcast's RSS address first)
wget -qO /tmp/criticalrole.rss "$FEED_URL"

# Extract the URLS and the episode titles
mp3s=( $(grep -o "http.\+mp3" /tmp/criticalrole.rss) )
titles=( $(tail -n +45 /tmp/criticalrole.rss | grep -o "<title>.\+</title>" \
           | sed -r 's@</?title>@@g; s@ @\\@g' | recode html..utf8) )

# Download all the episodes under their titles
for i in ${!titles[*]}; do
  wget -qO "$(sed -e "s@\\\@\\ @g" <<< "${titles[$i]}").mp3" "${mp3s[$i]}"
done

1 - For those of you not familiar with Critical Role, it's a web series where a group of voice actresses and actors from LA play Dungeons & Dragons. It's so good even people like me who never played D&D can enjoy it.

Planet DebianSergio Durigan Junior: Hello, Planet Debian

Hey, there. This is long overdue: my entry in Planet Debian! I’m creating this post because, until now, I didn’t have a debian tag in my blog! Well, not anymore.

Stay tuned!

Planet Linux AustraliaDonna Benjamin: Site building with Drupal

What even is "Site Building"?

At DrupalDownunder some years back, the wonderful Erica Bramham named her talk "All node, no code". Nodes were the fundamental building blocks in Drupal; they were like single drops of content. These days though, it's all about entities.

But hang on a minute, I'm using lots of buzz words, and worse, I'm using words that mean different things in different contexts. Jargon is one of the first hurdles you need to jump to understand the diverse worlds of the web. People who grow up multi-lingual learn that the meanings of words are somewhat arbitrary. They learn the same thing has different names. This is true for the web too. So the first thing to know about Site Building is that it means different things to different people.

To me, it means being able to build a website without knowing how to code. I also believe it means I can build a website without having to set up my own development environment. I know people who vehemently disagree with me about this. But that's ok. This is my blog, and these are my rules.

So - this is a post about site building, using SimplyTest.Me and Drupal 8 out of the box.

1. Go to simplytest.me

2. Type Drupal Core in the search field, and select "Drupal core" from the list

3. Choose the latest development branch, right at the bottom of the list.


For me, right now, that's 8.6.x, and here's a screenshot of what that looks like.

SimplyTest Me Screenshot, showing drop down fields described in the text.


4. Click "Launch sandbox".

Now wait.

In a few moments, you should see a fresh shiny Drupal 8 site, ready for you to explore.

For me today, it looks like this.  

Drupal 8.6.x front page screenshot


In the top right of the window, you should see a "Log in" link.

Click that, and enter admin/admin to login. 

You're now ready to practice some site building!

First, you'll need to create some content to play with.  Here's a short screencast that shows you how to login, add an article, and change the title using Quick Edit.

A guide to what's next

Follow the Drupal User guide to start building your site!

If you want to start at the beginning, you'll get a great overview of Drupal, and some important info on how to plan your site. But if you want to roll up your sleeves and get building, you can skip the chapter on site installation and jump straight to chapter 4, and dive into basic site configuration.



You have 24 hours to experiment with the sandbox - after that it disappears.


Get in touch

If you want something more permanent, you might want to "try drupal" or contact us to discuss our Drupal services.

Planet DebianBenjamin Mako Hill: My Kuro5hin Diary Entries

Kuro5hin logo

Kuro5hin (pronounced “corrosion” and abbreviated K5) was a website created in 1999 that was popular in the early 2000s. K5 users could post stories to be voted upon as well as entries to their personal diaries.

I posted a couple dozen diary entries between 2002 and 2003 during my final year of college and the months immediately after.

K5 was taken off-line in 2016 and the Internet Archive doesn’t seem to have snagged comments or full texts of most diary entries. Luckily, someone managed to scrape most of them before they went offline.

Thanks to this archive, you can now once again hear from 21-year-old-me in the form of my old K5 diary entries which I’ve imported to my blog Copyrighteous. I fixed the obvious spelling errors but otherwise restrained myself and left them intact.

If you’re interested in preserving your own K5 diaries, I wrote some Python code to parse the K5 HTML files for diary pages and import them into WordPress using its XML-RPC API. You’ll need to tweak the code to use it, but it’s pretty straightforward.
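
The tool itself is Python, but the underlying protocol is simple enough to sketch from a shell. A minimal illustration of one wp.newPost call (the blog URL and credentials below are placeholders):

    # post one entry to WordPress over XML-RPC; wp.newPost takes a blog id,
    # username, password and a content struct
    curl -s https://blog.example.org/xmlrpc.php --data @- <<'EOF'
    <?xml version="1.0"?>
    <methodCall>
      <methodName>wp.newPost</methodName>
      <params>
        <param><value><int>1</int></value></param>
        <param><value><string>username</string></value></param>
        <param><value><string>password</string></value></param>
        <param><value><struct>
          <member><name>post_title</name><value><string>Old K5 diary entry</string></value></member>
          <member><name>post_content</name><value><string>Entry text goes here.</string></value></member>
          <member><name>post_status</name><value><string>publish</string></value></member>
        </struct></value></param>
      </params>
    </methodCall>
    EOF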

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV February 2018 Workshop: Installing an Open Source OS on your tablet or phone

Feb 24 2018 12:30
Feb 24 2018 16:30
Infoxchange, 33 Elizabeth St. Richmond

Installing an Open Source OS on your tablet or phone

Andrew Pam will demonstrate how to install LineageOS, previously known as CyanogenMod and based on the Android Open Source Project, on tablets and phones.  Feel free to bring your own tablets and phones and have a go, but please ensure you back them up if there is anything you still need stored on them!

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.



CryptogramFriday Squid Blogging: Squid Pin

There's a squid pin on Kickstarter.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianSteve Kemp: Updated my package-repository

Yesterday I overhauled my Debian package-hosting repository, in response to user-complaints.

I started down the rabbit hole due to:

  W: No Hash entry in Release file /.._._Release which is considered strong enough for security purposes

I fixed that by changing my hashes from SHA1 to SHA256 + SHA512, but that was only a little progress; the more serious problem was that my repository-signing key was DSA-based and "small". I replaced it with a modern key, then changed how I generate my packages, and all is well.

In the past I was generating the Release files manually, via a silly shell-script. Anyway, here is my trivial Makefile for making the per-project and per-distribution archive; no doubt it could be improved:

   all: repo

   clean:
       @rm -f InRelease Packages Sources Packages.gz Sources.gz Release Release.gpg

   Packages: $(wildcard *.deb)
       @apt-ftparchive packages . > Packages 2>/dev/null
       @gzip -c Packages > Packages.gz

   Sources: $(wildcard *.tar.gz)
       @apt-ftparchive sources . > Sources 2>/dev/null
       @gzip -c Sources > Sources.gz

   repo: Packages Sources
       @apt-ftparchive release . > Release
       @gpg --yes --clearsign -o InRelease Release
       @gpg --yes -abs -o Release.gpg Release

In conclusion, in the unlikely event you're using my packages please see GPG-instructions. I've also hidden any packages which were solely for Squeeze and Wheezy, but they continue to exist to avoid breaking links.
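
For anyone adding a repository like this to apt, an entry pinned to a specific signing key looks something like the following (the URL, suite and keyring path are placeholders, not the real ones):

    # /etc/apt/sources.list.d/example.list -- illustrative only
    deb [signed-by=/usr/share/keyrings/example-archive-keyring.gpg] https://packages.example.com/ stable main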

Rondam RamblingsYes, code is data, but that's not what makes Lisp cool

There has been some debate on Hacker News lately about what makes Lisp cool, in particular about whether the secret sauce is homo-iconicity, or the idea that "code is data", or something else.  I've read through a fair amount of the discussion, and there is a lot of misinformation and bad pedagogy floating around.  Because this is a topic that is near and dear to my heart, I thought I'd take a

CryptogramNew National Academies Report on Crypto Policy

The National Academies has just published "Decrypting the Encryption Debate: A Framework for Decision Makers." It looks really good, although I have not read it yet.

Not much news or analysis yet. Please post any links you find in the comments, and I will summarize them here.

Planet Linux AustraliaOpenSTEM: Australia at the Olympics

The modern Olympic games were started by Frenchman Pierre de Coubertin to promote international understanding. The first games of the modern era were held in 1896 in Athens, Greece. Australia has competed in all the Olympic games of the modern era, although our participation in the first one was almost by chance. Of course, the […]

Worse Than FailureError'd: Preparing for the Future

George B. wrote, "Wait, so is it done...or not done?"


George B. (a different George, but in good company) is seeing nearly the same thing with Crash Plan Pro, where the backup is done ...maybe.


"I swear, that's the last time that I'm flying with Icarus Airlines" Allison V. writes.


"The best I can figure, someone wanted to see what the simulation app would do if executed in some far flung future where months don't matter and nothing makes any sense," writes M.C.


Joel C. wrote "I can't help it - Next time my train is late, I'm going to immediately think that it's because someone didn't click to dismiss a popup."


"I'm not sure what this means, but I guess it's to point out that there are website buttons, and then there are buttons on the website," Brian R. wrote.




Planet DebianErich Schubert: Disable Web Notification Prompts

Recently, tons of websites ask you for permission to display browser notifications. 99% of the time, you will not want these. In fact, all these notifications increase stress, so you should try to get rid of them for your own productivity. Eliminate distractions.

I find even the prompt for these notifications very annoying. With Chrome/Chromium it is even worse than with Firefox.

In Chrome, you can disable the functionality by going to the location chrome://settings/content/notifications and toggling the switch (the label will turn to “blocked”, from “ask”).
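
On managed machines, the same can be enforced through Chrome's policy mechanism; a minimal sketch (the path is where Chromium on Linux reads managed policies, and the file name is arbitrary):

    # DefaultNotificationsSetting: 2 means "deny notifications by default"
    cat > /etc/chromium/policies/managed/notifications.json <<'EOF'
    { "DefaultNotificationsSetting": 2 }
    EOF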

In Firefox, going to about:config and toggling dom.webnotifications.enabled is supposed to help, but it does not disable the prompts here. You need to disable dom.push.enabled completely. That may break some services that you want, but I have not yet noticed anything.
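
If you want these tweaks to be reproducible, the same two prefs can be appended to user.js in the Firefox profile, which Firefox reads at startup (the profile directory name below is a placeholder):

    # adjust the path to your actual ~/.mozilla/firefox/<profile> directory
    cat >> ~/.mozilla/firefox/XXXXXXXX.default/user.js <<'EOF'
    user_pref("dom.webnotifications.enabled", false);
    user_pref("dom.push.enabled", false);
    EOF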

Cory DoctorowDo We Need a New Internet?

I was one of the interview subjects on an episode of BBC’s Tomorrow’s World called Do We Need a New Internet? (MP3); it’s a fascinating documentary, including some very thoughtful commentary from Edward Snowden.

Planet DebianJoachim Breitner: Interleaving normalizing reduction strategies

A little, not very significant, observation about lambda calculus and reduction strategies.

A reduction strategy determines, for every lambda term with redexes left, which redex to reduce next. A reduction strategy is normalizing if this procedure terminates for every lambda term that has a normal form.

A fun fact is: If you have two normalizing reduction strategies s1 and s2, consulting them alternately may not yield a normalizing strategy.

Here is an example. Consider the lambda-term o = (λx.xxx), and note that oo → ooo → oooo → …. Let Mi = (λx.(λx.x))(o…o) (with i occurrences of o). Mi has two redexes, and reduces to either (λx.x) or Mi + 1. In particular, Mi has a normal form.

The two reduction strategies are:

  • s1, which picks the second redex if given Mi for an even i, and the first (left-most) redex otherwise.
  • s2, which picks the second redex if given Mi for an odd i, and the first (left-most) redex otherwise.

Both strategies are normalizing: If during a reduction we come across Mi, then the reduction terminates in one or two steps; otherwise we are just doing left-most reduction, which is known to be normalizing.

But if we alternately consult s1 and s2 while trying to reduce M2, each strategy in turn sees an i of exactly the parity that makes it pick the second redex, so we get the sequence

M2 → M3 → M4 → …

which shows that this strategy is not normalizing.

Afterthought: The interleaved strategy is not actually a reduction strategy in the usual definition, as it is not a pure (stateless) function from lambda term to redex.

Cory DoctorowThe 2018 Locus Poll is open: choose your favorite science fiction of 2017!

Following the publication of its editorial board’s long-list of the best science fiction of 2017, science fiction publishing trade-journal Locus now invites its readers to vote for their favorites in the annual Locus Award. I’m honored to have won this award in the past, and doubly honored to see my novel Walkaway on the short list, and in very excellent company indeed.

While you’re thinking about your Locus List picks, you might also use the list as an aide-memoire in picking your nominees for the Hugo Awards.

Krebs on SecurityNew EU Privacy Law May Weaken Security

Companies around the globe are scrambling to comply with new European privacy regulations that take effect a little more than three months from now. But many security experts are worried that the changes being ushered in by the rush to adhere to the law may make it more difficult to track down cybercriminals and less likely that organizations will be willing to share data about new online threats.

On May 25, 2018, the General Data Protection Regulation (GDPR) takes effect. The law, enacted by the European Parliament, requires technology companies to get affirmative consent for any information they collect on people within the European Union. Organizations that violate the GDPR could face fines of up to four percent of global annual revenues.

In response, the Internet Corporation for Assigned Names and Numbers (ICANN) — the nonprofit entity that manages the global domain name system — is poised to propose changes to the rules governing how much personal information Web site name registrars can collect and who should have access to the data.

Specifically, ICANN has been seeking feedback on a range of proposals to redact information provided in WHOIS, the system for querying databases that store the registered users of domain names and blocks of Internet address ranges (IP addresses).

Under current ICANN rules, domain name registrars should collect and display a variety of data points when someone performs a WHOIS lookup on a given domain, such as the registrant’s name, address, email address and phone number. (Most registrars offer a privacy protection service that shields this information from public WHOIS lookups; some registrars charge a nominal fee for this service, while others offer it for free).
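
A plain command-line lookup surfaces exactly those fields (the domain is an example, and field names vary by registrar and TLD):

    whois example.com | grep -iE '^ *(Registrant|Admin|Tech)'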

In a bid to help domain registrars comply with the GDPR regulations, ICANN has floated several proposals, all of which would redact some of the registrant data from WHOIS records. Its mildest proposal would remove the registrant’s name, email, and phone number, while allowing self-certified 3rd parties to request access to said data at the approval of a higher authority — such as the registrar used to register the domain name.

The most restrictive proposal would remove all registrant data from public WHOIS records, and would require legal due process (such as a subpoena or court order) to reveal any information supplied by the domain registrant.

ICANN’s various proposed models for redacting information in WHOIS domain name records.

The full text of ICANN’s latest proposed models (from which the screenshot above was taken) can be found here (PDF). A diverse ICANN working group made up of privacy activists, technologists, lawyers, trademark holders and security experts has been arguing about these details since 2016. For the curious and/or intrepid, the entire archive of those debates up to the current day is available at this link.


To drastically simplify the discussions into two sides, those in the privacy camp say WHOIS records are being routinely plundered and abused by all manner of ne’er-do-wells, including spammers, scammers, phishers and stalkers. In short, their view seems to be that the availability of registrant data in the WHOIS records causes more problems than it is designed to solve.

Meanwhile, security experts are arguing that the data in WHOIS records has been indispensable in tracking down and bringing to justice those who seek to perpetrate said scams, spams, phishes and….er….stalks.

Many privacy advocates seem to take a dim view of any ICANN system by which third parties (and not just law enforcement officials) might be vetted or accredited to look at a domain registrant’s name, address, phone number, email address, etc. This sentiment is captured in public comments made by the Electronic Frontier Foundation‘s Jeremy Malcolm, who argued that — even if such information were only limited to anti-abuse professionals — this also wouldn’t work.

“There would be nothing to stop malicious actors from identifying as anti-abuse professionals – neither would want to have a system to ‘vet’ anti-abuse professionals, because that would be even more problematic,” Malcolm wrote in October 2017. “There is no added value in collecting personal information – after all, criminals are not going to provide correct information anyway, and if a domain has been compromised then the personal information of the original registrant isn’t going to help much, and its availability in the wild could cause significant harm to the registrant.”

Anti-abuse and security experts counter that there are endless examples of people involved in spam, phishing, malware attacks and other forms of cybercrime who include details in WHOIS records that are extremely useful for tracking down the perpetrators, disrupting their operations, or building reputation-based systems (such as anti-spam and anti-malware services) that seek to filter or block such activity.

Moreover, they point out that the overwhelming majority of phishing is performed with the help of compromised domains, and that the primary method for cleaning up those compromises is using WHOIS data to contact the victim and/or their hosting provider.

Many commentators observed that, in the end, ICANN is likely to proceed in a way that covers its own backside, and that of its primary constituency — domain registrars. Registrars pay a fee to ICANN for each domain a customer registers, although revenue from those fees has been falling of late, forcing ICANN to make significant budget cuts.

Some critics of the WHOIS privacy effort have voiced the opinion that the registrars generally view public WHOIS data as a nuisance issue for their domain registrant customers and an unwelcome cost-center (from being short-staffed to field a constant stream of abuse complaints from security experts, researchers and others in the anti-abuse community).

“Much of the registrar market is a race to the bottom, and the ability of ICANN to police the contractual relationships in that market effectively has not been well-demonstrated over time,” commenter Andrew Sullivan observed.

In any case, sources close to the debate tell KrebsOnSecurity that ICANN is poised to recommend a WHOIS model loosely based on Model 1 in the chart above.

Specifically, the system that ICANN is planning to recommend, according to sources, would ask registrars and registries to display just the domain name, city, state/province and country of the registrant in each record; the public email addresses would be replaced by a form or message relay link that allows users to contact the registrant. The source also said ICANN plans to leave it up to the registries/registrars to apply these changes globally or only to natural persons living in the European Economic Area (EEA).

In addition, sources say non-public WHOIS data would be accessible via a credentialing system to identify law enforcement agencies and intellectual property rights holders. However, it’s unlikely that such a system would be built and approved before the May 25, 2018 effectiveness date for the GDPR, so the rumor is that ICANN intends to propose a self-certification model in the meantime.

ICANN spokesman Brad White declined to confirm or deny any of the above, referring me instead to a blog post published Tuesday evening by ICANN CEO Göran Marby. That post does not, however, clarify which way ICANN may be leaning on the matter.

“Our conversations and work are on-going and not yet final,” White wrote in a statement shared with KrebsOnSecurity. “We are converging on a final interim model as we continue to engage, review and assess the input we receive from our stakeholders and Data Protection Authorities (DPAs).”

But with the GDPR compliance deadline looming, some registrars are moving forward with their own plans on WHOIS privacy. GoDaddy, one of the world’s largest domain registrars, recently began redacting most registrant data from WHOIS records for domains that are queried via third-party tools. And it seems likely that other registrars will follow GoDaddy’s lead.


For my part, I can say without hesitation that few resources are as critical to what I do here at KrebsOnSecurity as the data available in the public WHOIS records. WHOIS records are incredibly useful signposts for tracking cybercrime, and they frequently allow KrebsOnSecurity to break important stories about the connections between and identities behind various cybercriminal operations and the individuals/networks actively supporting or enabling those activities. I also very often rely on WHOIS records to locate contact information for potential sources or cybercrime victims who may not yet be aware of their victimization.

In a great many cases, I have found that clues about the identities of those who perpetrate cybercrime can be found by following a trail of information in WHOIS records that predates their cybercriminal careers. Also, even in cases where online abusers provide intentionally misleading or false information in WHOIS records, that information is still extremely useful in mapping the extent of their malware, phishing and scamming operations.

Anyone looking for copious examples of both need only search this Web site for the term “WHOIS,” which yields dozens of stories and investigations that simply would not have been possible without the data currently available in the global WHOIS records.

Many privacy activists involved in the WHOIS debate have argued that other data related to domain and Internet address registrations — such as name servers, Internet (IP) addresses and registration dates — should also be considered private information. My chief concern if this belief becomes more widely held is that security companies might stop sharing such information for fear of violating the GDPR, thus hampering the important work of anti-abuse and security professionals.

This is hardly a theoretical concern. Last month I heard from a security firm based in the European Union regarding a new Internet of Things (IoT) botnet they’d discovered that was unusually complex and advanced. Their outreach piqued my curiosity because I had already been working with a researcher here in the United States who was investigating a similar-sounding IoT botnet, and I wanted to know if my source and the security company were looking at the same thing.

But when I asked the security firm to share a list of Internet addresses related to their discovery, they told me they could not do so because IP addresses could be considered private data — even after I assured them I did not intend to publish the data.

“According to many forums, IPs should be considered personal data as it enters the scope of ‘online identifiers’,” the researcher wrote in an email to KrebsOnSecurity, declining to answer questions about whether their concern was related to provisions in the GDPR specifically.  “Either way, it’s IP addresses belonging to people with vulnerable/infected devices and sharing them may be perceived as bad practice on our end. We consider the list of IPs with infected victims to be private information at this point.”

Certainly as the Internet matures and big companies develop ever more intrusive ways to hoover up data on consumers, we also need to rein in the most egregious practices while giving Internet users more robust tools to protect and preserve their privacy. In the context of Internet security and the privacy principles envisioned in the GDPR, however, I’m worried that cybercriminals may end up being the biggest beneficiaries of this new law.

Planet DebianHolger Levsen: 20180215-mini-debconf-hamburg

Everything about the Mini-DebConf in Hamburg in May 2018


With great joy we are finally officially announcing the Debian MiniDebConf which will take place in Hamburg (Germany) from May 16 to 20, with three days of DebCamp-style hacking, followed by two days of talks, workshops and more hacking. And then, Monday the 21st is also a holiday in Germany, so you might choose to extend your stay by a day! (Though there will not be an official schedule for the 21st.)

tl;dr: We're having a MiniDebConf in Hamburg on May 16-20. It's going to be awesome. You should all come! Register now!

the longer version:


Please register now, registration is free and now open until May 1st.

In order to register, add your name and details to the registration page in the Debian wiki.

There's room for approximately 150 people due to limited space in the main auditorium.

Please register ASAP, as we need this information for planning food and hacking space size calculations.

Talks wanted (CfP)

We have assembled a content team (consisting of Margarita Manterola, Michael Banck and Lee Garrett), who will soon publish a separate post with the CfP. Though you don't need to wait for that: you can already send your proposals to the content team.

We will have talks on Saturday and Sunday, the exact slots are yet to be determined by the content team.

We expect submissions and talks to be held in English, as this is the working language in Debian and at this event.

Debian Sprints

The miniDebcamp from Wednesday to Friday is a perfect opportunity to host Debian sprints. We would welcome it if teams assembled and worked together on their projects.


The event will be hosted in the former Victoria Kaserne, now called Fux (or Frappant), which is a collective art space located in a historical monument. It is located between S-Altona and S-Holstenstraße, so there is a direct subway connection to/from the Hamburg Airport (HAM) and Altona is also a long distance train station.

There's a Gigabit-Fiber uplink connection and wireless coverage (almost) everywhere in the venue and in the outside areas. (And then, we can also fix locations without wireless coverage.)

Within the venue, there are three main areas we will use, plus the garden and corridors:

dock europe

dock europe is an international educational centre with a meeting space within the venue, offering three rooms that can be combined into one big one. During the Mini-DebCamp from Wednesday to Friday we will probably use the rooms in the split configuration, while on Saturday and Sunday it will be one big room hosting presentations and such stuff. There are also two small rooms we can use as small hacklabs for 4-6 people.

dock europe also provides accommodation for some of us; see further below.

CCCHH hackerspace

Just down two corridors in the same floor and building as dock europe there is the CCC Hamburg Hackerspace, which will be open for us on all five days and can be used for "regular Debian hacking" or, if you find some nice CCCHH members to help you, you might also be able to use the lasercutter, 3d printer, regular printer and many other tools and devices. It's definitely also suitable for smaller ad-hoc workshops, but beware: it will also be somewhat the noisy hacklab, as it will also be open to regular CCC folks when we are there.

fux und ganz

The Fux also has a cantina called "fux und ganz" which will serve us (and other visitors of the venue) lunch and dinner. Please register by May 1st to ease their planning as well!


The Mini-DebConf will take place in the center of Hamburg, so there are many accommodation options available. Some suggestions for housing options are given in the wiki, and you might want to share your findings there too.

There is also limited on-site accommodation available: dock europe provides 36 beds in double rooms in the venue. The rooms are nice, small and clean, have a locker and wireless, and are just one floor away from our main spaces. There are also sufficient showers and toilets, and breakfast is available (for those 36 people) as well.

Thankfully nattie has agreed to be in charge of distributing these 36 beds, so please mail her if you want a bed. The beds will be distributed from two buckets on a first come, first served basis:

  • 24 beds for anyone, first come, first served; these cost 27€/night.
  • 12 beds for the video team, frontdesk, talkmeisters, etc., also first come, first served, with nattie deciding whether you indeed qualify. These also cost 27€/night.

Sponsors wanted

Making a Mini DebConf happen costs money: we need to rent the venue and video gear, hopefully pay hard-working volunteers' lunch and dinner, and maybe also sponsor some travel. So we really appreciate companies willing to support this meeting!

We have three sponsor categories:

  • 1000€ = sponsor, listed as such in all material.

  • 2500€ = gold sponsor, listed as such in all material, logo featured in the videos.

  • 5000€ = platinum sponsor, listed as such prominently in all material, logo featured prominently in the videos.

Plus, there is also a corporate registration option, for which we will charge 250€ per registration. Please contact us if you are interested in that!

More volunteers wanted

Some things still need more helping hands:

So far we thankfully have Nattie volunteering for frontdesk duties. In turn, she'd be very thankful if some people joined her in staffing the frontdesk, because shared work is more joyful!

The same goes for the video team. So far, we know the gear will arrive, and probably a person who knows how to operate it, but that's it. Please consider helping to make sure we'll have videos released! ;) (And hopefully streams too.)

Also, please consider submitting a talk or holding a workshop! The content team is waiting for you!

Finally, we would also very much welcome a nice logo, and t-shirts printed with it. Can you come up with a logo? Print shirts?


If you want to help, need help, have comments or want to contact us for other reasons, there are several ways:

Looking forward to seeing you in Hamburg!

Holger, for the 2018 Mini DebConf Hamburg team

Planet DebianMichal Čihař: Weblate 2.19

Weblate 2.19 has been released today. The biggest improvement is probably the addons to customize translation workflows, but there are some other enhancements as well.

Full list of changes:

  • Fixed imports across some file formats.
  • Display human friendly browser information in audit log.
  • Added TMX exporter for files.
  • Various performance improvements for loading translation files.
  • Added option to disable access management in Weblate in favor of Django one.
  • Improved glossary lookup speed for large strings.
  • Compatibility with django_auth_ldap 1.3.0.
  • Configuration errors are now stored and reported persistently.
  • Honor ignore flags in whitespace autofixer.
  • Improved compatibility with some Subversion setups.
  • Improved built-in machine translation service.
  • Added support for SAP Translation Hub service.
  • Added support for Microsoft Terminology service.
  • Removed support for advertisement in notification mails.
  • Improved translation progress reporting at language level.
  • Improved support for different plural formulas.
  • Added support for Subversion repositories not using stdlayout.
  • Added addons to customize translation workflows.

If you are upgrading from an older version, please follow our upgrading instructions.

You can find more information about Weblate on its website; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. Weblate is also being used as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

CryptogramElection Security

Good Washington Post op-ed on the need to use voter-verifiable paper ballots to secure elections, as well as risk-limiting audits.

Worse Than FailureIt's Called Abstraction, and It's a Good Thing

Steven worked for a company that sold “big iron” to big companies, for big bucks. These companies didn’t just want the machines, though, they wanted support. They wanted lots of support. With so many systems, processing so many transactions, installed at so many customer sites, Steven’s company needed a better way to analyze when things went squirrelly.

Thus was born a suite of applications called "DICS": the Diagnostic Investigation Console System. It was, at its core, a processing pipeline. On one end, it would reach out to a customer's site and download log files. The log files would pass through a series of analytic steps, and eventually reports would come out the other end. Steven mostly worked on the reporting side of things.

While working on reports, he’d sometimes hear about hiccups in the downloader portion of the pipeline, but as it was “not his circus, not his monkeys”, he didn’t pry too deeply. At least, he didn’t until one day, when his boss knocked on his cubicle divider.

“Hey, Steven. You know Perl, right?”

“Uh… sure.”

“And you’ve worked with XML files, right?”

“I… yes?”

“Great. Bob’s leaving. You’re going to need to take over the downloader portion of DICS. Talk to him ASAP. Great, thanks!”

Perl gets a reputation for being a “write only language”, which is at least partially undeserved. Bob was quite sensitive about that reputation, so he stressed, “I’ve worked really, really hard to keep the code as clean and clear as possible. Everything in the design is object oriented.”

Bob wasn’t kidding. Everything was wrapped up as a class. Everything. It was so class-happy it made the Spring framework jealous. JEE consultants would look at it and say, “Whoa, maybe slow down with the classes there.” A UML diagram of the architecture would drain ten printers worth of toner. The config file was stored in XML, and just for parsing out that file and storing the results, Bob had written 25 different classes, some as small as three lines. All in all, the whole downloader weighed in at about 5,000 lines of Perl code.

In the whirlwind tour, Steven asked Bob about the complexity. “It’s not complex. Each class is extremely simple. Well, aside from the config file wrapper, but it needs to have lots of methods because it has lots of data! There are so many fields in the XML file, and I needed to create getters and setters for them all! That way we can have Data Abstraction! That’s important! Data Abstraction is how we keep this project maintainable. What if the XML file format changes? It’s happened, you know. This will make it easy to keep our code in sync!”

Steven marveled at Bob’s ability to pronounce “data abstraction” as if it were in bold face, and resolved to touch the downloader script as little as possible. That resolution failed pretty much a week after Bob left, when the script fell down in production, leaving the DICS pipeline empty. Steven had to roll up his sleeves and get hands on with the code.

Now, one of Perl's selling points is its rich library. While CPAN may have its own issues as a package manager, if you want to do something like parse an XML file, there's a library that does it. There are a dozen libraries that'll do it. And they all follow a vaguely Perlish idiom: instead of classes, they favor associative arrays. That way, when you want to get something like the contents of the ip_addr tag from the config file, you could write code like this:

$ip_addr = $config->{hosts}[$n]{ip_addr}
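
As a minimal, hypothetical sketch of that idiom (using XML::Simple, one of the many CPAN options; the config layout here is invented to match the access pattern above, not taken from Bob's code):

    use strict;
    use warnings;
    use XML::Simple qw(XMLin);

    # Assume config.xml holds repeated <hosts> elements, each with
    # <ip_addr> and <port> children.  KeyAttr => [] stops XML::Simple
    # from folding elements by their "name"/"id" attributes; ForceArray
    # keeps <hosts> a list even when there is only one entry.
    my $config = XMLin('config.xml', ForceArray => ['hosts'], KeyAttr => []);

    my $n = 0;
    my $ip_addr = $config->{hosts}[$n]{ip_addr};  # mirrors the XML structure
    print "$ip_addr\n";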

This makes it easy to understand how the structure of the XML file relates to the Perl data structure, but that kind of mapping means that there isn’t any Data Abstraction, and thus was utterly the wrong approach. Instead, everything was done as a getter/setter method.

$ip_addr = $Config_object->host($n)->get_addr();

That doesn’t look too different, perhaps, but the devil is in the details. First, 90% of the getters were “thin”, so get_addr might look something like this:

sub get_addr { return $self->{Addr}; }

That raises questions about the value of these getters/setters for fetching config values, but the bigger problem was this: there was nothing in the config file called "Addr". Does this method return the IP address? Or a string in the form "$ip_addr:$port"? Or maybe even an array, like [$ip_addr, $port]?

Throughout the whole API, it was a bit of a crapshoot as to what any given method might return. And as for checking the documentation: they'd created a system that provided Data Abstraction, so they didn't need documentation, did they?

To track any given getter back to the actual field in the XML file it was getting, Steven had to trace through half a dozen different classes. It was frustrating and tedious, and Steven had half a mind to just throw the whole thing out and start over, consequences be damned. When he saw the “Translation” subsystem, he decided that it really did need to be thrown out, entirely.

You see, Bob's goal with Data Abstraction was to make it so that, if the XML file changed, it would be easy to adapt the code. But the code was a mess. So when the XML file did change a few years back, Bob couldn't update the config handling classes in any way that worked. So he did the next best thing: he wrote a "translation" module that would, using regular expressions, convert the new-style XML files back into the old-style XML files. Then his config-file classes could load and parse the old-style files.

Steven sums it up perfectly:

Bob’s classes weren’t data abstraction. It was just… data abstracturbation.

When Steven was done reimplementing Bob's work, he had about 500 lines of code, and the downloader stopped failing every few days.


Planet DebianArturo Borrero González: New round of GSoC: 2018

GSoC goodies

The other day Google published the list of accepted projects for this year's round of Google Summer of Code. Many organizations were accepted, and there are 3 that are especially interesting to me: Netfilter, Wikimedia Foundation and Debian.

The GSoC initiative is a great opportunity to enter the professional FLOSS world, getting to know your favorite project better, having a mentor and earning a stipend along the way.

The Netfilter project (check the dashboard) has published a list of ideas for students to work on. I will likely be mentoring here. Be aware that students who submit patches as part of the warmup period are more likely to be selected.

The Debian project (check the dashboard) also has a great list of proposals in a variety of different technologies, from packaging to Android, along with some web and backend project ideas. It's great to see Debian participating in GSoC again; last year we weren't present.

The Wikimedia Foundation (check the dashboard) has some 8 projects for students to work on, also with different scopes, including an interesting project for improving Toolforge.

So, students, don’t be afraid to participate! There are a lot of projects, different technologies and people to work with, so there should be one waiting for you.

Planet DebianLouis-Philippe Véronneau: Xerox printers on Debian - an update

This blog post is 90% rant and 10% update on how I made our new Xerox Altalink printer work on Debian. Skip the rant by clicking here.

I think the lamest part of my current job is that we heavily rely on multifunction printers. We need to print a high volume of complicated documents on demand. You know, 1500 copies of a color booklet printed on 11x17 paper folded in 3 stapled in the middle kind of stuff.

Pardon my French, but printers suck big time. The printer market is an oligopoly clusterfuck and it seems it keeps getting worse (looking at you, Fuji-Xerox merger). None of the drivers support Linux properly, all the printers are big piles of proprietary code and somehow the corporations selling them keep adding features no one needs.

My reaction when I learnt we needed to replace our current printer

Good job Fuji Xerox: the new shiny printer you forced us to rent[1] comes with an app that lets you print directly from Dropbox. I guess they expect people:

  • not to use a password manager
  • not to use long randomly generated passwords
  • not to use 2FA
  • to use proprietary services like Dropbox

Oh wait, I guess that's what people actually do. My bad.

As for fixing their stupid bugs, try again.

Xerox Altalink C8045

Rant aside, here's a short follow-up on the blog post I wrote two years ago on how to install Xerox printers on Debian.

As far as I can see, the Xerox Altalink C8045 seems to be working properly with the x86_64-5.20.606.3946 version of the Xerox driver for Debian. Make sure that you use the bi-directional setup or else you might have trouble. Sadly, all the gimmicks I wrote about two years ago still stand.

If you find that interesting, I also rewrote our Puppet module that manages all of this for you to be compatible with Puppet 4. Yay?

1 - Long story short, we used to have a Xerox ColorQube printer that used wax instead of toner to print, but Xerox doesn't support them anymore and bought back our support contract. E-waste FTW.


Planet DebianDaniel Silverstone: Epic Journey in my Ioniq

This weekend just-gone was my father's 90th birthday, so since we don't go to Wales very often, we figured we should head down to visit. As this would be our first major journey in the Ioniq (I've done Manchester to Cambridge a few times now, but this is almost 3 times further), we took an additional day off (Friday) so that we could easily get from our home in southern Manchester to my parents' house in St Davids, Pembrokeshire.

I am not someone to enter into these experiences lightly. I spent several hours consulting with zap-map and also Google Maps, looking at chargers en route. In the UK there's a significant number of chargers on the motorway system provided by Ecotricity, but this infrastructure is not pervasive and doesn't really extend beyond the motorway service stations (and some IKEAs). I made my plan for the journey to Wales, ensuring that each planned stop was simply the first in a line of possible stops, so that if something went wrong, I'd have enough charge to move forwards from there.

The first leg took us from our home to the Ecotricity charger at Hilton Park Southbound services. My good and dear friend Tim very kindly offered to charge us for free, and he used one of his fifty-two free charges to top us up. This went flawlessly and set us in a very good mood for the journey to come. Since we would then have a very long jump from the M5 to the M4, we decided that our second charge would be a top-up at Chateau Impney, which has a Polar charger. Unfortunately by this point the wind and rain were up, and the charger failed to work properly, eventually telling us that its input voltages were unbalanced and then powering itself off entirely. We decided to head to the other Polar charger at Webbs of Wychbold. That charger started up fine, so we headed in, had a loo visit, grabbed some lunch, watched the terrapins swimming around, and when a sufficient time had passed for the car to charge, headed back only to discover that it had emergency-stopped mere moments after we'd left the car, so we had no charge for the entire time we were there. No matter, we thought: we'd sit in the car while it charged, and eat our lunch. Sadly we were defeated; the charger repeatedly e-stopped, so we gave up.

Our fallback position was to charge at the Strensham services at the M5/M50 junction. Sadly the southbound services have no chargers at all (they're under a lot of building work right now, so perhaps that's part of it), so we had to get to the northbound services and charge there. That charge went fine, and with a £2.85 bill from Ecotricity automatically paid, we snuck our way along back-roads and secret junctions to the southbound services, and headed off down the M50. Sadly we were now a lot later than we should have been, having lost about ninety minutes in total to the wasted time at the two Polar chargers, which meant that we hit a lot of congestion at Monmouth and around Newport on the M4.

We made it to Cardiff Gate where we plugged in, set it charging, and then headed into the service area where we happened to meet my younger brother who was heading home too. He went off, and I looked at the Ecotricity app on my phone which had decided at that point that I wasn't charging at all. I went out to check, the charger was still delivering current, so, chalking it up to a bit of a de-sync, we went in, had a coffee and a relax, and then headed out to the car to wait for it to finish charging. It finished, we unplugged, and headed out. But to this day I've not been charged by Ecotricity for that so "yay".

Our final stop along the M4 was Swansea West. Unfortunately the Pont Abraham services don't have a rapid charger compatible with my car so we have to stop earlier. Fortunately there are three chargers at Swansea West. Unfortunately the CCS was plugged into an i3 which wasn't charging but was set to keep the connector locked in so I couldn't snarf it. I plugged into a slower (AC) charger to get a bit of juice while we went in to wait for the i3 owner to leave. I nipped out after 10 minutes and conveniently they'd gone, so I swapped the car over to the CCS charger and set it going. 37 minutes later and that charger had actually worked, charged me up, and charged me a princely £5.52 for the privilege.

From here we nipped along the A48/A40, dropped in on my sister-in-law to collect a gift for my father, and then got to St Davids at around nine pm. A mere eleven hours after we left Manchester. By comparison, when I drove a Passat, I would leave Manchester at 3pm, drive 100 fewer miles, and arrive at around 9pm, having had a few nice stops for loo breaks and dinner.

Saturday: it had rained quite hard overnight. St Davids has one (count it, ONE) charger compatible with my car (Type 2 in this instance), but fortunately it's free to use (please make a donation in the tourist information office). Unfortunately after the rain, the parking space next to the charger was under a non-trivial amount of water, so poor Rob had to mountaineer next to the charger to plug in without drowning. We set the car charging and went to have a nice breakfast in St Davids. A few hours later, I wandered back up to the car park with Rob and we unplugged and retrieved the car. Top marks for the charger, but a pity the space was a swimming pool.

Sunday morning dawned bright and early, and we headed out to Llandewi Velfrey to visit my brother, who runs Silverstone Green Energy. We topped up there and then headed to Sarn Parc at his suggestion. It's a nice service area; unfortunately the AC/Chademo charger was giving a 'Remote Start Error', so the Leaf there was on the Chademo/CCS charger. However, as luck would have it, that charger was on free-vend, so once we got on the charger (30m later or so) we got to charge for free. Thanks Ecotricity.

From Sarn Parc, we decided that since we'd had such a good experience at Strensham North, we'd go directly there. We arrived with 18 miles to spare in the "tank", but unfortunately the CCS/Chademo charger was broken (with an error along the lines of "PWB1 is 0x0008") and there was an eGolf there which also wanted to use CCS but had to charge slowly in order to get enough range to reach another charger. As a result we had to sit there for an hour waiting until he had enough in his 'tank' that he was prepared to let us charge. We then got a "full" 45-minute charge (£1.56, 5.2kWh) which gave us enough to get north again to Chateau Impney (which had been marked working again on Zap-map).

The charge there worked fine (yay) so we drove on north to Keele services. We arrived in the snow/hail/rain (yay northern weather) found the charger, plugged in, tried to set it going using the app, and we were told "Unable to contact charger". So I went through the process again and we were told "Charger in use". It bloody well wasn't in use, because I was plugged into it and it definitely wasn't charging my car. We waited for the rain to die down again and looked at the charger, which at that moment said "Connect vehicle" and then it started up charging the car (yay). We headed in for a loo and dinner break. Unfortunately the app couldn't report on progress but it had started charging so we were confident we'd be fine. More fool us. It had stopped charging moments after we'd left the car and once again we wasted time because it wasn't charging when we thought it was. We returned, discovered the car hadn't charged, but then discovered the charger had switched to free-vend so we charged up again for free, but that was another 40 minute wait.

Finally we got home (via a short stop at the pub) and on Monday I popped along to a GMEV rapid charger, and it worked perfectly as it has every single time I've used it.

So, in conclusion, the journey was reasonably cheap, which is nice, but we had two failed charge attempts on Polar, and several Ecotricity cockups (though they did mostly end up in our favour in terms of money) which cost us around 90 to 120 minutes in each direction. The driving itself (in the Ioniq) was fine and actually meant I wasn't frazzled and unhappy the whole time, but the charging infrastructure is simply not good enough. It's unreliable, Ecotricity don't have support lines at the weekend (or evenings/early mornings), and the network is far too sparse to be useful when one wishes to travel somewhere not on the motorway network. If I'd tried to drive my usual route, I'd have had to spend four hours in Aberystwyth using my granny charger to put about 40 miles in the tank from a public 3-pin socket.

Planet DebianErich Schubert: Online Dating Cannot Work Well

Daniel Pocock points out what tracking services online dating sites expose you to. This certainly is an issue, and of course to be expected with a free service (you are the product; advertisers are the customer). Oh, and in case you forgot already: some sites employ fake profiles to retain you as long as possible on their site… But I'd like to point out how deeply flawed online dating is. It is surprising that some people meet successfully there, and I am not surprised that so many dates turn out not to work: the sites earn money if you remain single and waste time on their site, not if you are successful.

I am clearly not an expert on online dating, because I am happily married. I met my wife in a very classic setting: offline, in my extended social circle. The motivation for this post is that I am concerned about seeing people waste their time. If you want to improve your life, eliminate apps and websites that are just distractions! And these days, we see more online/app distraction than ever. Smartphone zombie apocalypse.

There are some obvious issues with online dating:

  • you treat people as if they were objects in an online shop. If you want to find a significant other, don't treat him/her like a shoe.
  • you get too many choices. So if one turns out to be just 99% okay, you will pass them over in favor of another, potentially 100% match.
  • you get to choose exactly what you want. No need to tolerate anything. And of course you know exactly what fits you, don't you? No: actually we are pretty bad at that, and a good relationship will require you to be tolerant.
  • inflated expectations: in reality, the supposed 100% matches turn out to be more like 55% matches, because the image was photoshopped, they are too nervous, and their profile was written by a ghostwriter. Oh, and some of them will simply be chatbots, or employees, or already married, or… so they don't even exist.
  • because you are also just a 99% match, everybody seems to prefer someone else, and you are only the second choice, if chosen at all. You don't get picked.
  • you will never be comfortable on the actual first date. Because of inflated expectations, it will be disappointing, and you will just want to get away.
  • the companies earn money if you are online at their site, not if you are successful.

And yes, there is scientific research backing up these things. For example:

Online Dating: A Critical Analysis From the Perspective of Psychological Science

Eli J. Finkel, Paul W. Eastwick, Benjamin R. Karney, Harry T. Reis, Susan Sprecher, Psychological Science in the Public Interest, 13(1), 3-66.

“the ready access to a large pool of potential partners can elicit an evaluative, assessment-oriented mindset that leads online daters to objectify potential partners and might even undermine their willingness to commit to one of them”


Dating preferences and meeting opportunities in mate choice decisions

Belot, Michèle, and Marco Francesconi, Journal of Human Resources 48.2 (2013): 474-508.

“[in speed dating] suggesting that a highly popular individual is almost 5 times more likely to get a date with another highly popular mate than with a less popular individual”

which means that if your account is not among the most attractive ones, you will probably just get swiped away.

If you want to maximize your chances of meeting someone, you probably have to use this approach.

And you can find many more reports on "Generation Tinder" and its hard time finding partners because of inflated expectations. It is also because these apps and online services make you unhappy, and that makes you unattractive.

Instead, I suggest you extend your offline social circle.

For example, I used to go dancing a lot. Not the "drunken, and music too loud to talk" kind, but ballroom. Not only can this drastically improve your social and communication skills (in particular non-verbal communication, but also just being natural rather than nervous); it also provides great opportunities to meet new people with a shared interest. And quite a lot of my friends in dancing got married to a partner they first met at a dance.

For others, another social sport does this job (although many find chit-chat at the gym or yoga annoying). Walk your dog in a new area: you may meet some new faces there. But it is best if you get to talk. Apparently, some people love meeting strangers for cooking (where you cook and eat antipasti, main dishes, and dessert in different places). Go to some board game nights, etc. I think anything will do that lets you meet new people with at least some shared interest or social connection, and where you are not going just because of dating (because then you'll be stressed out), but where you can relax. If you are authentically relaxed and happy, this will make you attractive. And hey, maybe someone will want to meet you a second time.

Spending all that time online chatting or swiping certainly will not improve your social skills for when you actually have to face someone in person… it is the worst thing to do unless you are already a very open person who easily chats up strangers (and then you won't need it anyway).

Forget all that online crap you get pulled into all the time. Don't let technology hijack your social life and make you addicted to scrolling through online profiles of people you are never going to meet. Don't be the product, and don't let your significant other be one either.

They earn money if you spend time on their website, not if you meet your significant other.

So don’t expect them to work. They don’t need to, and they don’t intend to. Dating is something you need to do offline.

Planet DebianSean Whitton: A better defence against the evil maid attack on a laptop

The attack

Laptops need full disc encryption. Indeed, my university has explicitly banned us from keeping any information on our students' grades on our laptops unless we use FDE. Not even comments on essays, apparently, as those count as grade information.

There must, though, exist unencrypted code that tells the computer how to decrypt everything else. Otherwise you can’t turn your laptop on. If you’re only trying to protect your data from your laptop being permanently stolen, it’s fine for this to be in an unencrypted partition on the laptop’s HDD: when your laptop is stolen, the data you are trying to protect remains encrypted.

An evil maid attack involves the replacement of this unencrypted code with something malicious – perhaps it e-mails data from the encrypted partition to someone who wants it. Of course, if someone has access to your laptop without your knowledge, they can always work around any security scheme you might develop. They might add a hardware keylogger, for example. So why might we want to try to protect against the evil maid attack – haven’t we already lost if someone is able to modify the contents of the unencrypted partition of our hard drive?

Well, the thing about the evil maid attack is that it is very quick and easy to modify the contents of a laptop’s hard drive, as compared to other security-bypassing hardware modifications, which take much longer to perform without detection. Users expect to be able to easily replace their hard drives, so they are usually accessible with the removal of just a single screw. It could take less than five minutes to deploy the evil maid payload.

Laptops are often left unattended for the two or three minutes it would take to deliver an evil maid payload; they are less often left for long enough that deeper hardware modifications could be made. So it is worth taking steps to prevent evil maid attacks.

The best solution

UEFI Secure Boot. But:

  • Debian does not support this yet; and
  • my laptop does not have the hardware support anyway.

My current solution

The standard solution is to put the unencrypted hard drive partition on a USB key, and keep that on one’s keychain. Then there is no unencrypted code on the laptop at all; you boot from the USB, which decrypts the root partition, and then you unmount the USB key.

Problem with this solution

The big problem with this is kernel and bootloader upgrades. You have to ensure your USB key is mounted before your package manager upgrades the kernel. This effectively rules out using unattended-upgrades to get security upgrades for the kernel. They must be applied manually. Further, you probably want a backup USB key with the kernel and bootloader on it. Now you have to upgrade both, using commands like apt-get --reinstall.

This is a real maintenance burden and is likely to delay your security upgrades. And the whole point of putting /boot on a USB key was to improve security!

Something better

Recent GRUB is able to decrypt partitions itself. So /boot can reside within your encrypted root partition. GRUB’s setup scripts are smart enough that you can switch over to this in just a few steps:

  1. Move contents of /boot from USB drive into root partition.
  2. Remove/comment /boot from /etc/fstab.
  3. Set GRUB_ENABLE_CRYPTODISK=y in /etc/default/grub.
  4. grub-install /dev/sda
  5. update-grub
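
For concreteness, here is a sketch of those five steps as shell commands (run as root; it assumes the USB key's partition is mounted at /boot and the laptop's HDD is /dev/sda, so adjust names to your setup):

    cp -a /boot /boot.copy                          # 1. copy the files off the mounted USB key,
    umount /boot                                    #    unmount the key...
    cp -a /boot.copy/. /boot/ && rm -r /boot.copy   #    ...and drop the copy into the root partition
    # 2. now comment out the /boot line in /etc/fstab with an editor
    echo 'GRUB_ENABLE_CRYPTODISK=y' >> /etc/default/grub   # 3.
    grub-install /dev/sda                           # 4.
    update-grub                                     # 5.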

It’s still true that there must be unencrypted code that knows how to decrypt the root partition. Where does that go? grub-install is the command that installs that code; where does it put it? The ArchLinux wiki has the answer. If you’re using EFI, it will go in the EFI system partition (ESP). Under BIOS, if your drive is formatted with an MBR, it goes in the “post-MBR gap” between the MBR and the first partition (on drive partitioned with very old tools, this post-MBR gap might be too small to accommodate the larger GRUB image that contains the decryption code; however, drives partitioned with recent tools that “support 1 MiB partition alignment” (including the Debian stretch installer) will be fine – to check fdisk -l and look at where your first partition starts). Under BIOS, if your drive is formatted with a GPT, you have to add a 1MiB BIOS boot partition, and the code goes there.

We’ve resolved the issue of package updates modifying /boot, which now resides in the encrypted root partition. However, this is not all of the picture. If we are using EFI, now we have unencrypted code in the EFI system partition which is subject to the evil maid attack. And if we respond by moving the EFI system partition onto a USB drive, the package update problem reoccurs: the EFI system partition will not always be mounted. If we are using BIOS, the evil maid reoccurs since it is not that much harder to modify the code in the post-MBR gap or the BIOS boot partition.

My proposed solution, pending UEFI Secure Boot, is to use BIOS boot with an MBR partition table, keep /boot in the encrypted root partition, and grub-install to the USB drive. Run dpkg-reconfigure grub-pc and tell it never to grub-install to anything. Then set the laptop's boot order to never try to boot from the HDD, only from USB. (There's no real advantage to GPT with my simple partitioning setup, but I think that would also work fine.)
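
A sketch of that arrangement, assuming the USB key appears as /dev/sdb (device names will vary):

    grub-install /dev/sdb      # put GRUB's boot code on the USB key, not the HDD
    dpkg-reconfigure grub-pc   # when prompted for install devices, select none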

How does this solve the various issues I've raised? Well, the amount of code on the USB drive is very small (less than 1MiB), so it is much less likely to require manual updates. Kernel updates will modify /boot, which now lives in the encrypted root partition; only bootloader updates could require me to manually run grub-install to modify the contents of the post-MBR gap, and those are very infrequent.

Of course, the BIOS could be cracked such that the laptop will boot from the HDD no matter what USB I have plugged in, or even only when some USB is plugged in, but that’s a hardware modification beyond the evil maid, against which we are not trying to protect.

As a nice bonus, the USB drive’s single FAT32 partition is now usable for sneakernet.

Planet DebianRenata D'Avila: Debugging MoinMoin and using an IDE


When I was creating the cal_action, I didn't quite know how to debug MoinMoin. Could I use pudb with the wiki? I wasn't sure how. To figure out if the code I was writing worked, I ended up consulting the error logs from Apache. It sort of worked, but of course that was very far from ideal. What if I wanted to check something that wasn't an error?

Well, MoinMoin supposedly has a logging module, which lives in moin-1.V.V/wiki/config/logging/, but I simply couldn't get it to work the way I wanted.

I searched some more and found a guide on setting up Winpdb Source Level Debugger, but I don't use Windows (really, where is the GNU/Linux guide to debug?), so that didn't help. 😭

But... MoinMoin does offer a guide on setting up a development environment with Eclipse, which I ended up following.

Using an IDE

Up until this point, most of the code I had created in Python consisted of simple scripts that could be run and debugged in the terminal. I had used IDLE while taking the Python para Zumbis (Python for Zombies) course, but other than that, I just used a code editor (Sublime, then Vim and, finally, Atom) when programming in Python.

When I was taking a tech vocational course, I used Eclipse, an integrated development environment (IDE), to code in Java, but that was it. After I passed the course and didn't need to code in Java again, I simply let go of the IDE.

As it turns out, going back to Eclipse, along with the PyDev plugin - both free software - was what actually helped me in debugging and figuring my way around the MoinMoin macro.

The steps I had to take:

  1. Install eclipse-pydev and its dependencies using Synaptic (Debian package manager)
  2. Define Python 2.7 as the interpreter in preferences
  3. Create a new workspace
  4. Create a new project
  5. Import the installed MoinMoin into the new project
  6. Configure the new wiki
  7. Run

To develop the plugins (macro and actions):

  1. Create a new workdir for the plugins, which goes alongside Moin
  2. Copy the contents from the plugin directory of the wiki to the new directory

On step 2, though, instead of copying I just created a symbolic link to the files I had been working on, which were in another directory. It would make no sense to have two copies of the same file in different places on the computer; besides, it would just complicate tracking what changes had been made and where. To create a symbolic link:
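
(The exact paths from the original post are not preserved here; this is the generic form, with ln taking the target first and the link name second:)

    ln -s /path/to/original/file /path/to/link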


More on symbolic links can be found using the command man ln on Debian's terminal.

With the Eclipse console, I could use print help(request) to figure out what methods would be available to me through the request provided to the macro. With this, I finally began to figure out how to create the response we want (without returning the whole wiki page with it, just the event information in the icalendar format).

Eclipse IDE running MoinMoin and showing the debug output in a terminal

If you don't know what I mean by request/response: in simple terms, when you click something on a webpage (for instance, my ical link at the bottom of the calendar) in your internet browser, you are requesting a resource (the icalendar file). It's up to the server to respond with the appropriate resource (the file) or with a status code explaining why it can't fulfill your request (for instance, you get a 404 error when the page - the resource - you're trying to access - requesting - can't be found).

A simplified diagram of a static web server showing the request-response cycle.

Here you can find more information on client-Server overview, by Mozilla web docs.

So now I'm working on constructing that response. Thanks to the Eclipse console, I now know that just calling the response.write() method with the return value of my method gives a TypeError: Expected bytes. I will probably have to transform the result of the method that generates the icalendar into bytes instead of InstanceClass. Well, at least I can say that the choices that were made when writing the ExportPDF macro come to me more clearly now.
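
For what it's worth, here is a minimal sketch of the fix I have in mind (to_ical() is the icalendar library's serializer; whether this is exactly what MoinMoin's request object expects is still an assumption on my part):

    # Serialize the Calendar object to bytes before writing the response;
    # passing the object itself is what raises "TypeError: Expected bytes".
    ical_bytes = calendar.to_ical()
    request.write(ical_bytes)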


Sociological ImagesWhat’s Trending? Feeling the Love

Valentine’s Day is upon us, but in a world of hookups and breakups many people are concerned about the state of romance. Where do Americans actually stand on sex and relationships? We took a look at some trends from the General Social Survey. They highlight an important point: while Americans are more accepting of things like divorce and premarital sex, that doesn’t necessarily mean that both are running rampant in society.

For example, since the mid 1970s, Americans have become much more accepting of sex before marriage. Today more than half of respondents say it isn’t wrong at all.

However, these attitudes don't necessarily mean people are having more sex. Younger Americans today actually report having no sexual partners more frequently than people of the same age did in earlier surveys.

And what about marriage? Americans are more accepting of divorce now, with more saying a divorce should be easier to obtain.

But again, this doesn’t necessarily mean everyone is flying the coop. While self-reported divorce rates had been on the rise since the mid 1970s, they have largely leveled off in recent years.

It is important to remember that for core social practices like love and marriage, we are extra susceptible to moral panics when faced with social change. These trends show how changes in attitudes don’t always line up with changes in behavior, and they remind us that sometimes we can save the drama for the rom-coms.

Inspired by demographic facts you should know cold, “What’s Trending?” is a post series at Sociological Images featuring quick looks at what’s up, what’s down, and what sociologists have to say about it.

Ryan Larson is a graduate student in the Department of Sociology, University of Minnesota – Twin Cities. He studies crime, punishment, and quantitative methodology. He is a member of the Graduate Editorial Board of The Society Pages, and his work has appeared in Poetics, Contexts, and Sociological Perspectives.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Planet DebianDaniel Pocock: What is the best online dating site and the best way to use it?

Somebody recently shared this with me: this is what happens when you attempt to access Parship, an online dating site, from the anonymous Tor Browser.

Experian is basically a private spy agency. Their website boasts about how they can:

  • Know who your customers are regardless of channel or device
  • Know where and how to reach your customers with optimal messages
  • Create and deliver exceptional experiences every time

Is that third objective, an "exceptional experience", what you were hoping for with their dating site honey trap? You are out of luck: you are not the customer, you are the product.

When the Berlin wall came down, people were horrified at what they found in the archives of the Stasi. Don't companies like Experian and Facebook gather far more data than this?

So can you succeed with online dating?

There are only three strategies that are worth mentioning:

  • Access sites you can't trust (which includes all dating sites, whether free or paid for) using anonymous services like Tor Browser and anonymous email addresses. Use fake photos and fake all other data. Don't send your real phone number through the messaging or chat facility in any of these sites because they can use that to match your anonymous account to a real identity: instead, get an extra SIM card that you pay for and top-up with cash. One person told me they tried this for a month as an experiment, expediently cutting and pasting a message to each contact to arrange a meeting for coffee. At each date they would give the other person a card that apologized for their completely fake profile photos and offering to start over now they could communicate beyond the prying eyes of the corporation.
  • Join online communities that are not primarily about dating and if a relationship comes naturally, it is a bonus.
  • If you really care about your future partner and don't want your photo to be a piece of bait used to exploit and oppress them, why not expand your real-world activities?

CryptogramCan Consumers' Online Data Be Protected?

Everything online is hackable. This is true for Equifax's data and the federal Office of Personnel Management's data, which was hacked in 2015. If information is on a computer connected to the Internet, it is vulnerable.

But just because everything is hackable doesn't mean everything will be hacked. The difference between the two is complex, and filled with defensive technologies, security best practices, consumer awareness, the motivation and skill of the hacker and the desirability of the data. The risks will be different if an attacker is a criminal who just wants credit card details -- and doesn't care where he gets them from -- or the Chinese military looking for specific data from a specific place.

The proper question isn't whether it's possible to protect consumer data, but whether a particular site protects our data well enough for the benefits provided by that site. And here, again, there are complications.

In most cases, it's impossible for consumers to make informed decisions about whether their data is protected. We have no idea what sorts of security measures Google uses to protect our highly intimate Web search data or our personal e-mails. We have no idea what sorts of security measures Facebook uses to protect our posts and conversations.

We have a feeling that these big companies do better than smaller ones. But we're also surprised when a lone individual publishes personal data hacked from the Ashley Madison infidelity site, or when the North Korean government does the same with personal information in Sony's network.

Think about all the companies collecting personal data about you -- the websites you visit, your smartphone and its apps, your Internet-connected car -- and how little you know about their security practices. Even worse, credit bureaus and data brokers like Equifax collect your personal information without your knowledge or consent.

So while it might be possible for companies to do a better job of protecting our data, you as a consumer are in no position to demand such protection.

Government policy is the missing ingredient. We need standards and a method for enforcement. We need liabilities and the ability to sue companies that poorly secure our data. The biggest reason companies don't protect our data online is that it's cheaper not to. Government policy is how we change that.

This essay appeared as half of a point/counterpoint with Priscilla Regan, in a CQ Researcher report titled "Privacy and the Internet."

Worse Than FailureCodeSOD: All the Rest Have Thirty One…

Aleksei received a bunch of notifications from their CI system, announcing a build failure. This was interesting, because no code had changed recently, so what was triggering the failure?

        private BillingRun CreateTestBillingRun(int billingRunGroupId, DateTime? billingDate, int? statusId)
        {
            return new BillingRun
            {
                BillingRunGroupId = billingRunGroupId,
                PeriodStart = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 1),
                BillingDate = billingDate ?? new DateTime(DateTime.Today.Year, DateTime.Today.Month, 15),
                CreatedDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 30),
                ItemsPreparedDate = new DateTime(2017, 4, 7),
                CompletedDate = new DateTime(2017, 4, 8),
                DueDate = new DateTime(DateTime.Today.Year, DateTime.Today.Month, 13),
                StatusId = statusId ?? BillingRunStatusConsts.Completed,
                ErrorCode = "ERR_CODE",
                Error = "Full error description",
                ModifiedOn = new DateTime(2017, 1, 1)
            };
        }
Take a look at the instantiation of CreatedDate. I imagine the developer’s internal monologue went something like this:

Okay, the Period Start is the beginning of the month, the Billing Date is the middle of the month, and Created Date is the end of the month. Um… okay, well, beginning is easy. That’s the 1st. Phew. Okay, but the middle of the month. That’s hard. Oh, wait, wait a second! It’s billing, so I bet the billing department has a day they always send out the bills. Let me send an email to Steve in billing… oh, look at that. It’s always the 15th. Great. Boy. This programming stuff is easy. Whew. Okay, so now the end of the month. This one’s tricky, because months have different lengths, sometimes 30 days, and sometimes 31. Let me ask Steve again, if they have any specific requirements there… oh, look at that. They don’t really care so long as it’s the last day or two of the month. Great. I’ll just use 30, then. Good thing there aren’t any months with a shorter length.
Y’know, I vaguely remember reading a thing that said tests should always use the same values, so that every run tests exactly the same combination of inputs. I think I saved a bookmark to read it later. Should I read it now? No! I should commit this code, let the CI build run, and then mark the requirement as complete.
Boy, this programming stuff is easy.
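
For the record, the framework can compute the real end of the month; a minimal sketch of what the test could have used instead of a hard-coded 30 (variable names here are illustrative, not the original code):

        var today = DateTime.Today;
        // DateTime.DaysInMonth handles 28-, 29-, 30- and 31-day months,
        // so February and leap years come for free.
        var endOfMonth = new DateTime(today.Year, today.Month,
                                      DateTime.DaysInMonth(today.Year, today.Month));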


Planet DebianJo Shields: Packaging is hard. Packager-friendly is harder.

Releasing software is no small feat, especially in 2018. You could just upload your source code somewhere (a Git, Subversion, CVS, etc, repo – or tarballs on Sourceforge, or whatever), but it matters what that source looks like and how easy it is to consume. What does the required build environment look like? Are there any dependencies on other software, and if so, which versions? What if the versions don’t match exactly?

Most languages feature solutions to the build environment dependency – Ruby has Gems, Perl has CPAN, Java has Maven. You distribute a manifest with your source, detailing the versions of the dependencies which work, and users who download your source can just use those.

Then, however, we have distributions. If openSUSE or Debian wants to include your software, then it’s not just a case of calling into CPAN during the packaging process – distribution builds need to be repeatable, and work offline. And it’s not feasible for packagers to look after 30 versions of every library – generally a distribution will contain 1-3 versions of a given library, and all software in the distribution will be altered one way or another to build against their version of things. It’s a long, slow, arduous process.

Life is easier for distribution packagers the more the released software adheres to their ideal model: no non-source files in the distribution, minimal or well-formed dependencies on third parties, swathes of #ifdefs to handle changes in dependency APIs between versions, etc.

Problem is, this can actively work against upstream development.

Developers love npm or NuGet because it’s so easy to consume – asking them to abandon those tools is a significant impediment to developer flow. And it doesn’t scale – maybe a friendly upstream can drop one or two dependencies. But 10? 100? If you’re consuming a LOT of packages via the language package manager, as a developer, being told “stop doing that” isn’t just going to slow you down – it’s going to require a monumental engineering effort. And there’s the other side effect – moving from Yarn or Pip to a series of separate download/build/install steps will slow down CI significantly – and if your project takes hours to build as-is, slowing it down is not going to improve the project.

Therein lies the rub. When a project has limited developer time allocated to it, spending that time on an effort which will literally make development harder and worse, for the benefit of distribution maintainers, is a hard sell.

So, a concrete example: MonoDevelop. MD in Debian is pretty old. Why isn’t it newer? Well, because the build system moved away from a packager ideal so far it’s basically impossible at current community & company staffing levels to claw it back. Build-time dependency downloads went from a half dozen in the 5.x era (somewhat easily patched away in distributions) to over 110 today. The underlying build system changed from XBuild (Mono’s reimplementation of Microsoft MSBuild, a build system for Visual Studio projects) to real MSbuild (now FOSS, but an enormous shipping container of worms of its own when it comes to distribution-shippable releases, for all the same reasons & worse). It’s significant work for the MonoDevelop team to spend time on ensuring all their project files work on XBuild with Mono’s compiler, in addition to MSBuild with Microsoft’s compiler (and any mix thereof). It’s significant work to strip out the use of NuGet and Paket packages – especially when their primary OS, macOS, doesn’t have “distribution packages” to depend on.

And then there’s the integration testing problem. When a distribution starts messing with your dependencies, all your QA goes out the window – users are getting a combination of literally hundreds of pieces of software which might carry your app’s label, but you have no idea what the end result of that combination is. My usual anecdote here is when Ubuntu shipped Banshee built against a new, not-regression-tested version of SQLite, which caused a huge performance regression in random playback. When a distribution ships a broken version of an app with your name on it – broken by their actions, because you invested significant engineering resources in enabling them to do so – users won’t blame the distribution, they’ll blame you.

Releasing software is hard.

Planet DebianPetter Reinholdtsen: Using VLC to stream bittorrent sources

A few days ago, a new major version of VLC was announced, and I decided to check out if it now supported streaming over bittorrent and webtorrent. Bittorrent is one of the most efficient ways to distribute large files on the Internet, and Webtorrent is a variant of Bittorrent using WebRTC as its transport channel, allowing web pages to stream and share files using the same technique. The network protocols are similar but not identical, so a client supporting one of them can not talk to a client supporting the other. I was a bit surprised by what I discovered when I started to look. The release notes did not answer this question, so I started searching the web. I found several news articles from 2013, most of them tracing the news from Torrentfreak ("Open Source Giant VLC Mulls BitTorrent Streaming Support"), about an initiative to pay someone to create a VLC patch for bittorrent support. To figure out what happened with this initiative, I headed over to the #videolan IRC channel and asked if there were any bug or feature request tickets tracking such a feature. I got an answer from lead developer Jean-Baptiste Kempf, telling me that there was a patch but neither he nor anyone else knew where it was. So I searched a bit more, and came across an independent VLC plugin to add bittorrent support, created by Johan Gunnarsson in 2016/2017. Again according to Jean-Baptiste, this is not the patch he was talking about.

Anyway, to test the plugin, I made a working Debian package from the git repository, with some modifications. After installing this package, I could stream videos from The Internet Archive using VLC commands like this:
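The exact item does not matter; any torrent file published by the archive should do, so the URL below is only a placeholder, not a real item:

vlc https://archive.org/download/some-item/some-item_archive.torrent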


The plugin is supposed to handle magnet links too, but since The Internet Archive does not provide magnet links and I did not want to spend time tracking down another source, I have not tested them. It can take quite a while before the video starts playing, without any indication from VLC of what is going on. It took 10-20 seconds when I measured it. Sometimes the plugin seems unable to find the correct video file to play, and shows the metadata XML file name in the VLC status line instead. I have no idea why.

I have created a request for a new package in Debian (RFP) and asked if the upstream author is willing to help make this happen. Now we wait to see what comes out of this. I do not want to maintain a package that is not maintained upstream, nor do I really have time to maintain more packages myself, so I might leave it at this. But I really hope someone steps up to do the packaging, and I hope upstream is still maintaining the source. If you want to help, please update the RFP request or the upstream issue.

I have not found any traces of webtorrent support for VLC.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Sky CroeserMotherhood, hope, and 76% less snark

Oh, hi.

I had that baby I was growing in my last post. She’s an amazing little person. She’s learned to clap her hands in the last week, and I am full of wonder and delight. She’s been sick, and I fretted for hours about her rash. (Should I call the doctor? Should I not? Is it a purple rash? Is it getting worse?)

I’m back at work, sitting in my office, relieved to have time to read and write and teach, and missing her fiercely. I feel this all at once: the relief of time and space away, and the missing. I think about her all the time, but also get bored by the way motherhood enfolds me.

At home, we walk in endless circles around the house as she holds out a hand for mine, demands the other hand, then drags me off to open cupboards or visit each room in turn. (At the same time, I love to see her do this: so clearly show me what she wants, so clearly refuse if I put my right hand in her left, or give her only one hand.)


Motherhood has changed me, and I don’t know how I feel about that. (I don’t have much time to work out how I feel about anything.) It is almost physically painful to think of parents losing children to war or violence. Of wanting to feed a hungry child and not being able to. I have the luxury of being able to look away, to take a break from imagining these scenes.

For the last few months the change to my work has been in the time and energy available. Everything needs to be broken up into smaller, more digestible chunks, to manage in nap times and evenings and while so very tired most of the time.

As I finished my undergraduate, I decided to focus on researching movements that gave me hope. Imperfect, complex movements with many flaws, but nevertheless full of people trying to change things for the better. I wanted, and want, to believe that we have the potential to change this. That hungry children can be fed, that we can look after our neighbours, that we can resist and fight back against tides of hatred and fear.

Last year, I found myself writing a presentation and a book chapter that shifted to focusing on the flaws in these movements. I was tired, and I got snarky and impatient with the imperfection of activists (particularly white men) who didn’t listen and try to define what counts as ‘radical’ and what doesn’t. I still feel that impatience, but that work was depressing. The snark of it was satisfying, but I’m not sure of the use of it and frankly I am subject to many of the same critiques.

As I try to find my way back into research and writing, I’m trying to recommit to finding threads of hope. Critique is important, especially the critiques I need to listen to from the margins of academia and activism: of white women’s role in feminism(s), of settler societies, of academic power structures. In my own writing I want to be finding materials to stitch into alternatives. I want to be finding spaces where my voice can be useful, rather than just adding more noise.

And it’s a terrible cliche, but the urgency of it comes through when I look at this tiny person and imagine other parents doing the same, hoping for safety and flourishing and care for these wonders we are trying to nourish.


Planet DebianDirk Eddelbuettel: BH 1.66.0-1

A new release of the BH package arrived on CRAN a little earlier: now at release 1.66.0-1. BH provides a sizeable portion of the Boost C++ libraries as a set of template headers for use by R, possibly with Rcpp as well as other packages.

This release upgrades the version of Boost to the recently released Boost 1.66.0, and also adds one exciting new library: Boost compute, which provides a C++ interface to multi-core CPU and GPGPU computing platforms based on OpenCL.

Besides the usual small patches we need to make (i.e., cannot call abort() etc pp to satisfy CRAN Policy) we made one significant new change in response to a relatively recent CRAN Policy change: compiler diagnostics are no longer suppressed for clang and g++. This may make builds somewhat noisy, so we all may want to keep our ~/.R/Makevars finely tuned to suppress a bunch of warnings...
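For instance, something along these lines in ~/.R/Makevars would quiet two common offenders (illustrative only; which warnings to silence, and whether to append or replace the flags, is a matter of taste):

# illustrative ~/.R/Makevars snippet; adjust to taste
CXXFLAGS += -Wno-unused-variable -Wno-unused-function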

Changes in version 1.66.0-1 (2018-02-12)

  • Upgraded to Boost 1.66.0 (plus the few local tweaks)

  • Added Boost compute (as requested in #16)

Via CRANberries, there is a diffstat report relative to the previous release.

Comments and suggestions are welcome via the mailing list or the issue tracker at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.


Krebs on SecurityMicrosoft Patch Tuesday, February 2018 Edition

Microsoft today released a bevy of security updates to tackle more than 50 serious weaknesses in Windows, Internet Explorer/Edge, Microsoft Office and Adobe Flash Player, among other products. A good number of the patches issued today ship with Microsoft’s “critical” rating, meaning the problems they fix could be exploited remotely by miscreants or malware to seize complete control over vulnerable systems — with little or no help from users.

February’s Patch Tuesday batch includes fixes for at least 55 security holes. Some of the scarier bugs include vulnerabilities in Microsoft Outlook, Edge and Office that could let bad guys or bad code into your Windows system just by getting you to click on a booby trapped link, document or visit a compromised/hacked Web page.

As per usual, the SANS Internet Storm Center has a handy rundown on the individual flaws, neatly indexing them by severity rating, exploitability and whether the problems have been publicly disclosed or exploited.

One of the updates addresses a pair of serious vulnerabilities in Adobe Flash Player (which ships with the latest version of Internet Explorer/Edge). As KrebsOnSecurity warned last week, there are active attacks ongoing against these Flash vulnerabilities.

Adobe is phasing out Flash entirely by 2020, but most of the major browsers already take steps to hobble Flash. And with good reason: It’s a major security liability. Chrome also bundles Flash, but blocks it from running on all but a handful of popular sites, and then only after user approval.

For Windows users with Mozilla Firefox installed, the browser prompts users to enable Flash on a per-site basis. Through the end of 2017 and into 2018, Microsoft Edge will continue to ask users for permission to run Flash on most sites the first time the site is visited, and will remember the user’s preference on subsequent visits.

The latest standalone version of Flash that addresses these bugs is for Windows, Mac, Linux and Chrome OS. But most users probably would be better off manually hobbling or removing Flash altogether, since so few sites actually require it still. Disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

People running Adobe Reader or Acrobat also need to update, as Adobe has shipped new versions of these products that fix at least 39 security holes. Adobe Reader users should know there are alternative PDF readers that aren’t so bloated or full of security issues. Sumatra PDF is a good, lightweight alternative.

Experience any issues, glitches or problems installing these updates? Sound off about it in the comments below.

Planet DebianGunnar Wolf: Is it an upgrade, or a sidegrade?

I first bought a netbook shortly after the term was coined, in 2008. I got one of the original 8.9" Acer Aspire One. Around 2010, my Dell laptop was stolen, so the AAO ended up being my main computer at home — And my favorite computer for convenience, not just for when I needed to travel light. Back then, Regina used to work in a national park and had to cross her province (~6hr by a combination of buses) twice a week, so she had one as well. When she came to Mexico, she surely brought it along. Over the years, we bought new batteries and chargers, as they died over time...

Five years later, it started feeling too slow, and I remember to start having keyboard issues. Time to change.

Sadly, 9" computers were no longer to be found. Even though I am a touch typist, and a big person, I miss several things about the Acer's tiny keyboard (such as being able to cover the diagonal with a single hand, something useful when you are typing while standing). But, anyway, I got the closest I could to it — in July 2013, I bought the successor to the Acer Aspire One: a 10.5" Acer Aspire One. Nowadays, the name that used to identify just the smallest of the Acer family brethren covers at least up to 15.6" (which is not exactly helpful IMO).

Anyway, for close to five years I was also very happy with it. A light laptop that was never a burden to carry. Also, very important: a computer I could take with me without ever thinking twice. I often tell people I use a computer I got at a supermarket, and that, bought as new, it cost me under US$300. That way, were I to lose it (say, if it falls from my bike, if somebody steals it, if it gets in any way damaged, whatever), it's not a big blow. Quite a difference from my two former laptops, both over US$1000.

I enjoyed this computer a lot. So much, I ended up buying four of them (mine, Regina's, and two for her family members).

Over the last few months, I have started being nagged by unresponsiveness, mainly in the browser (blame me, as I typically keep ~40 tabs open). Some keyboard issues... I had started thinking about changing my trusty laptop. Would I want a newfangled laptop-and-tablet-in-one? Just thinking about fiddling with the OS to recognize stuff was a sort-of-turnoff...

This weekend we had an incident with spilled water. After opening and carefully ensuring the computer was dry, it would not turn on. Waited an hour or two, and no changes. Clear sign, a new computer is needed ☹

I went to a nearby store, looked at the offers... and, in part due to the attitude of the salesguy, I decided not to buy anything (installing Linux will void any warranty, WTF‽ In 2018‽). Came back home, and... my Acer works again!

But, I know five years are enough. I decided to keep looking for a replacement. After some hesitation, I decided to join what seems to be the elite group in Debian, and go for a refurbished Thinkpad X230.

And that's why I feel this is some sort of "sidegrade" — I am replacing a five-year-old computer with another five-year-old computer. Of course, a much sturdier one, built to last, originally sold as an "Ultrabook" (that is, meant for a higher user segment), and much more expandable... I'm paying ~US$250, which I'm comfortable with. Looking at several online forums, it is a model quite popular with "knowledgeable" people even now, AFAICT. I was hoping, just for the sake of it, to find an X230t (foldable and usable as a tablet)... but I won't put too much time into looking for it.

The Thinkpad is 12", which I expect will still fit in the smallish satchel I take to my classes. The machine looks as tweakable as I can expect. Spare parts for replacement are readily available. I have the 4GB I bought for the Acer, which I will probably be able to carry over to this machine, so I'm ready with 8GB. I'm eager to feel the keyboard, as it's often repeated that it's the best in the laptop world (although it's not the classic one anymore). I'm also considering popping ~US$100 more for an SSD drive, and... well, let's see how much this new sidegrade makes me smile!

Planet DebianRiku Voipio: Making sense of /proc/cpuinfo on ARM

Ever stared at output of /proc/cpuinfo and wondered what the CPU is?

processor : 7
BogoMIPS : 2.40
Features : fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid
CPU implementer : 0x41
CPU architecture: 8
CPU variant : 0x0
CPU part : 0xd03
CPU revision : 3
Or maybe like:

$ cat /proc/cpuinfo
processor : 0
model name : ARMv7 Processor rev 2 (v7l)
BogoMIPS : 50.00
Features : half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
CPU implementer : 0x56
CPU architecture: 7
CPU variant : 0x2
CPU part : 0x584
CPU revision : 2
The bits "CPU implementer" and "CPU part" could be mapped to human-readable strings. But the kernel developers are heavily against the idea. Therefore, on to the next idea: parse them in userspace. It turns out there is a common tool, installed almost everywhere, that already does similar stuff: lscpu(1) from util-linux. So I proposed a patch to do ID mapping on arm/arm64 in util-linux, and it was accepted! Using lscpu from util-linux 2.32 (hopefully to be released soon), the above two systems look like this:

Architecture: aarch64
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-7
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 2
NUMA node(s): 1
Vendor ID: ARM
Model: 3
Model name: Cortex-A53
Stepping: r0p3
CPU max MHz: 1200.0000
CPU min MHz: 208.0000
BogoMIPS: 2.40
L1d cache: unknown size
L1i cache: unknown size
L2 cache: unknown size
NUMA node0 CPU(s): 0-7
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 cpuid

$ lscpu
Architecture: armv7l
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 1
Core(s) per socket: 4
Socket(s): 1
Vendor ID: Marvell
Model: 2
Model name: PJ4B-MP
Stepping: 0x2
CPU max MHz: 1333.0000
CPU min MHz: 666.5000
BogoMIPS: 50.00
Flags: half thumb fastmult vfp edsp thumbee vfpv3 tls idiva idivt vfpd32 lpae
As we can see, lscpu is quite versatile and can show more information than just what is available in cpuinfo.
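If you can't get a new enough util-linux, the same ID mapping is easy to approximate in a few lines of Python. This is only a sketch: the tables below are a tiny hand-made excerpt covering just the two systems above, not the full list util-linux carries.

IMPLEMENTERS = {0x41: "ARM", 0x56: "Marvell"}
PARTS = {(0x41, 0xd03): "Cortex-A53", (0x56, 0x584): "PJ4B-MP"}

def decode(path="/proc/cpuinfo"):
    # Collect the "key : value" fields; on multi-core systems later
    # entries simply overwrite earlier identical ones.
    fields = {}
    with open(path) as f:
        for line in f:
            key, sep, value = line.partition(":")
            if sep:
                fields[key.strip()] = value.strip()
    implementer = int(fields["CPU implementer"], 16)
    part = int(fields["CPU part"], 16)
    return (IMPLEMENTERS.get(implementer, hex(implementer)),
            PARTS.get((implementer, part), hex(part)))

print(decode())  # e.g. ('ARM', 'Cortex-A53')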

TEDYou are here for a reason: 4 questions with Halla Tómasdóttir

Cartier and TED believe in the power of bold ideas to empower local initiatives to have global impact. To celebrate Cartier’s dedication to launching the ideas of female entrepreneurs into concrete change, TED has curated a special session of talks around the theme “Bold Alchemy” for the Cartier Women’s Initiative Awards, featuring a selection of favorite TED speakers.

Leading up to the session, TED talked with financier, entrepreneur and onetime candidate for president of Iceland, Halla Tómasdóttir, about what influences, inspires and drives her to be bold.

TED: Tell us who you are.
Halla Tómasdóttir: I think of myself first and foremost as a change catalyst who is passionate about good leadership and a gender-balanced world. My leadership career started in corporate America with Mars and Pepsi Cola, but since then I have served as an entrepreneur, educator, investor, board director, business leader and presidential candidate. I am married, a proud mother of two teenagers and a dog and am perhaps best described by the title given to me by the New Yorker: “A Living Emoji of Sincerity.”

TED: What’s a bold move you’ve made in your career?
HT: I left a high-profile position as the first female CEO of the Iceland Chamber of Commerce to become an entrepreneur with the vision to incorporate feminine values into finance. I felt the urge to show a different way in a sector that felt unsustainable to me, and I longed to work in line with my own values.

TED: Tell us about a woman who inspires you.
HT: The women of Iceland inspired me at an early age, when they showed incredible courage, solidarity and sisterhood, “took the day off” (went on strike) and literally brought the country to its knees — as nothing worked when women didn’t do any work. Five years later, Iceland was the first country in the world to democratically elect a woman as president. I was 11 years old at the time, and her leadership has inspired me ever since. Her clarity on what she cares about and her humble way of serving those causes is truly remarkable.

TED: If you could go back in time, what would you tell your 18-year-old self?
HT: I would say: Halla, just be you and know that you are enough. People will frequently tell you things like: “This is the way we do things around here.” Don’t ever take that as a valid answer if it doesn’t feel right to you. We are not here to continue to do more of the same if it doesn’t work or feel right anymore. We are here to grow, ourselves and our society. You are here for a reason: make your life and leadership matter.

The private TED session at Cartier takes place April 26 in Singapore. It will feature talks from a diverse range of global leaders, entrepreneurs and change-makers, exploring topics ranging from the changing global workforce to maternal health to data literacy, and it will include a performance from the only female double violinist in the world.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #146

Here's what happened in the Reproducible Builds effort between Sunday February 4 and Saturday February 10 2018:

Media coverage

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

63 package reviews have been added, 26 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

A new issue type has been added:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (34)
  • Antonio Terceiro (1)
  • James Cowgill (1)
  • Matthias Klose (1)

diffoscope development

In addition, Juliana—our Outreachy intern—continues her work on parallel processing.

disorderfs development


This week's edition was written by Chris Lamb, Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

TEDNew podcast alert: WorkLife with Adam Grant, a TED original, premieres Feb. 28

Adam Grant to Explore the Psychology of Unconventional Workplaces as Host of Upcoming New TED Original Podcast “WorkLife”

Organizational psychologist, professor, bestselling author and TED speaker Adam Grant is set to host a new TED original podcast series titled WorkLife with Adam Grant, which will explore unorthodox work cultures in search of surprising and actionable lessons for improving listeners’ work lives.

Beginning Wednesday, February 28, each weekly episode of WorkLife will center around one extraordinary workplace—from an award-winning TV writing team racing against the clock, to a sports team whose culture of humility propelled it to unexpected heights. In immersive interviews that take place in both the field and the studio, Adam brings his observations to vivid life – and distills useful insights in his friendly, accessible style.

“We spend a quarter of our lives in our jobs. This show is about making all that time worth your time,” says Adam, the bestselling author of Originals, Give and Take, and Option B with Sheryl Sandberg. “In WorkLife, we’ll take listeners inside the minds of some fascinating people in some truly unusual workplaces, and mix in fresh social science to reveal how we can lead more creative, meaningful, and generous lives at work.”

Produced by TED in partnership with Pineapple Street Media and Transmitter Media, WorkLife is TED’s first original podcast created in partnership with a TED speaker. Its immersive, narrative format is designed to offer audiences a new way to explore TED speaker ideas in depth. Adam’s talks “Are you a giver or a taker?” and “The surprising habits of original thinkers” have together been viewed more than 11 million times in the past two years.

The show marks TED’s latest effort to test new content formats beyond the nonprofit’s signature first-person TED talk. Other recent TED original content experiments include Sincerely, X, an audio series featuring talks delivered anonymously;  Small Thing Big Idea, a Facebook Watch video series about everyday designs that changed the world; and the Indian prime-time live-audience television series TED Talks India: Nayi Soch, hosted by Bollywood star and TED speaker Shah Rukh Khan.

“We’re aggressively developing and testing a number of new audio and video programs that support TED’s mission of ‘Ideas Worth Spreading,’” said TED head of media and WorkLife co-executive producer Colin Helms. “In every case, our speakers and their ideas remain the focus, but with fresh formats, styles and lengths, we can reach and appeal to even more curious audiences, wherever they are.”

WorkLife debuts Wednesday, February 28 on Apple Podcasts, the TED Android app, or wherever you like to listen to podcasts. Season 1 features eight episodes, roughly 30 minutes each, plus two bonus episodes. It’s sponsored by Accenture, Bonobos, JPMorgan Chase & Co., and Warby Parker. New episodes will be made available every Wednesday.

CryptogramJumping Air Gaps

Nice profile of Mordechai Guri, who researches a variety of clever ways to steal data over air-gapped computers.

Guri and his fellow Ben-Gurion researchers have shown, for instance, that it's possible to trick a fully offline computer into leaking data to another nearby device via the noise its internal fan generates, by changing air temperatures in patterns that the receiving computer can detect with thermal sensors, or even by blinking out a stream of information from a computer hard drive LED to the camera on a quadcopter drone hovering outside a nearby window. In new research published today, the Ben-Gurion team has even shown that they can pull data off a computer protected by not only an air gap, but also a Faraday cage designed to block all radio signals.

Here's a page with all the research results.

BoingBoing post.

Worse Than FailureBudget Cuts

Xavier was the head of a 100+ person development team. Like many enterprise teams, they had to support a variety of vendor-specific platforms, each with their own vendor-specific development environment and its own licensing costs. All the licensing costs were budgeted for at year’s end, when Xavier would submit the costs to the CTO. The approval was a mere formality, ensuring his team would have everything they needed for another year.

Unfortunately, that CTO left to pursue another opportunity. Enter Greg, a new CTO who joined the company from the financial sector. Greg was a penny-pincher on a level that would make the novelty coin-smasher you find at zoos and highway rest-stops jealous. Greg started cutting costs left and right immediately. When the time came for budgeting development tool licensing, Greg threw down the gauntlet on Xavier’s “wild” spending.

Alan Rickman in Galaxy Quest: "By Grabthar's Hammer, what a savings."

“Have a seat, X-man,” Greg offered, in a faux-friendly voice. “Let’s get to the point. I looked at your proposal for all of these tools your team supposedly ‘needs’. $40,000 is absurd! Do you think we print money? If your team were any good, they should be able to do everything they need without these expensive, gold-plated programs!”

Xavier was taken aback by Greg’s brashness, but he was prepared for a fight. “Greg, these tools are vital to our development efforts. There are maybe a few products we could do without, but most of them are absolutely required. Even the more ‘optional’ ones, like our refactoring and static analysis tools, they save us money and time and improve code quality. Not having them would be more expensive than the license.”

Greg scowled and tented his fingers. “There is no chance I’m approving this as it stands. Go back and figure out what you can do without. If you don’t cut this cost down, I’ll find an easier way to reduce expenses… like by cutting bonuses… or staff.”

Xavier spent the next few days having an extensive tool review with his lead developers. Many of the vendor-specific tools had no alternative, but there were a few third party tools they could do without, or use an open-source equivalent. Across the team of 100+ developers, the net cost savings would be $4,000, or 10%.

Xavier didn’t expect that to make Greg happy, but it was the best they could do. The following morning, Xavier presented his findings in Greg’s office, and it went smoother than expected. “Listen, X. I want this cost down even more, but we’re running out of time to approve this year’s budget. Since I did so much work cutting costs in other ways, I’ll submit this to finance. But enjoy your last year of all these fancy tools! Next year, things will be different!”

Xavier was relieved he didn’t have to fight further. Perhaps, over the next year, he could further demonstrate the necessity of their tooling. With the budget resolved, Xavier had some much-overdue vacation time. He had saved up enough PTO to spend a month in the Australian Outback. Development tools and budgets would be the furthest thing from his mind.

Three great weeks in the land down under were enhanced by being mostly cut off from communications from anyone in the company. During a trip through a town with cell phone reception, Xavier decided to check his voicemail, to make sure the sky wasn’t falling. Dave, his #2 in command, had left an urgent message two days prior.

“Xavier!” Dave shouted on the other end. “You need to get back here soon. Greg never paid the invoices for anything in our stack. We’re sitting here with a huge pile of unlicensed stuff. We’ve been racking up unlicensed usage and support costs, and Greg is going to flip when he sees our monthly statements.” With deep horror, Dave added, “One of the licenses he didn’t pay was for Oracle!”

Xavier reluctantly left the land of dingoes and wallabies to head back home. He arrived just about the same time the first vendor calls demanding payment did. The costs from just three weeks of unlicensed usage of enterprise software were astronomical. Certainly more than just buying the licenses would have been in the first place. Xavier scheduled a meeting with Greg to decide what to do next.

The following Monday, the dreaded meeting was on. “Sit,” Greg said. “I have some good news, and some bad news. The good news is that I’ve found a way to pay these ridiculous charges your team racked up.” Xavier leaned forward in his chair, eager to learn how Greg had pulled it off. “The bad news is that I’ve identified a redundant position: yours.”

Xavier slumped into his chair.

Greg continued. “While you were gone, I realized we were in quite capable hands with Dave, and his salary is quite a bit lower than yours. Coincidentally, the original costs and these ridiculous penalties add up to an amount just a little less than your annual salary. I guess you’re getting your wish: the development team can keep the tools you insist they need to do their jobs. It seems you were right about saving money in the long run, too.”

Xavier left Greg’s office, stunned. On his way out for the last time, he stopped by Dave to congratulate him on the new promotion.

“Oh,” Dave said, sourly, “it’s not a promotion. They’re just eliminating your position. What, you think Greg would give me a raise?”


Don MartiTwo visions of GDPR

As far as I can tell, there are two sets of ambitious predictions about GDPR.

One is the VRM vision. Doc Searls writes, on ProjectVRM:

I am sure Google, Facebook and lesser purveyors of advertising online will find less icky ways to stay in business; but it is becoming clear that next May 25, when the GDPR goes into full effect, will be an extinction-level event for tracking-based advertising (aka adtech) as a business model.

Big impact? Not so fast. There's also a "business as usual" story, and that one, you'll find at Digital Advertising Consent.

Our complex ecosystem of companies must cooperate more closely than ever before to meet the transparency and consent requirements of European data protection law.

According to the adtech firms, well, maybe there will be more Bürokratie, more pointless dialogs that users have to click through, and one more line item, "GDPR compliance", to come out of the publisher's share, of course, but the second vision of GDPR is essentially just adtech/adfraud as usual. Upgrade to the new version of OpenRTB, and move along, nothing to see here.

Personally, I'm not buying either one of these GDPR visions. Because, just for fun and also because reasons, I run my own mail server.

And every little decision I have to make about how to configure the damn thing is based on playing a game with email spammers. Regulation is a part of my complete breakfast, but it's not the whole story.

The government doesn't give you freedom from spam. You have to take it for yourself, one filtering rule at a time. Or, do what most people do, and find a company that does it for you, but it has to be a company that you trust with your information.

A mail sender's decision to comply, or not comply, with some regulation is a bit of information. That feeds into the software that makes the final decision: inbox, spam folder, or reject. When a spam message complies with the regulations of some country, my mail server doesn't say, "Oh, wow, compliant! I can skip all the other checks and send this one straight to the inbox!" It uses the regulation compliance along with other information to make that decision.
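Here is a toy version of that decision, just to make the point. Every signal, weight, and threshold below is invented for illustration; a real filter scores dozens of signals:

# Toy mail router: regulatory compliance is one signal among many,
# never a bypass. All signals, weights and thresholds are invented.
def route(msg):
    score = 0.0
    if msg.get("fails_dkim"):
        score += 2.0
    if msg.get("sender_on_blocklist"):
        score += 3.5
    if msg.get("regulation_compliant"):
        score -= 0.5  # counts for a little, not everything
    if score >= 4.0:
        return "reject"
    if score >= 2.0:
        return "spam folder"
    return "inbox"

print(route({"fails_dkim": True, "regulation_compliant": True}))  # "inbox"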

So whatever extra consent forms that surveillance marketers are required to send by GDPR? They're not the final decision on What The User Must See. They're just data, coming over the network.

Some of that data will be interpreted to mean that this request is an obvious mismatch with how the user chooses to share their info. The user might not even see those consent forms, or the browser might pop up a notification:

4 requests to do creepy shit, that's obviously against your preferences, already denied. Isn't this the best browser ever?

(No, I don't write copy for browser notifications. But you get the idea.)

Browsers that implement tracking protection might end up with a feature where they detect requests for permission to do things that the user has already said no to—by turning on tracking protection in the first place—and auto-deny them.

Legit email senders had to learn "deliverability," the art and science of making legit mail look legit so that it can get past email spam filters. Legit advertisers will have to learn that users aren't identical and spherical, users choose tools to implement their data sharing preferences, and that regulatory compliance is only part of the job.

Should web browsers adopt Google’s new selective ad blocking tech?


Content recommendation services Outbrain and Taboola are no longer a guaranteed source of revenue for digital publishers

CryptogramCabinet of Secret Documents from Australia

This story of leaked Australian government secrets is unlike any other I've heard:

It begins at a second-hand shop in Canberra, where ex-government furniture is sold off cheaply.

The deals can be even cheaper when the items in question are two heavy filing cabinets to which no-one can find the keys.

They were purchased for small change and sat unopened for some months until the locks were attacked with a drill.

Inside was the trove of documents now known as The Cabinet Files.

The thousands of pages reveal the inner workings of five separate governments and span nearly a decade.

Nearly all the files are classified, some as "top secret" or "AUSTEO", which means they are to be seen by Australian eyes only.

Yes, that really happened. The person who bought and opened the file cabinets contacted the Australian Broadcasting Corp, who is now publishing a bunch of it.

There's lots of interesting (and embarrassing) stuff in the documents, although most of it is local politics. I am more interested in the government's reaction to the incident: they're pushing for a law making it illegal for the press to publish government secrets it received through unofficial channels.

"The one thing I would point out about the legislation that does concern me particularly is that classified information is an element of the offence," he said.

"That is to say, if you've got a filing cabinet that is full of classified information ... that means all the Crown has to prove if they're prosecuting you is that it is classified ­ nothing else.

"They don't have to prove that you knew it was classified, so knowledge is beside the point."


Many groups have raised concerns, including media organisations, who say such laws unfairly target journalists trying to do their job.

But really anyone could be prosecuted just for possessing classified information, regardless of whether they know about it.

That might include, for instance, if you stumbled across a folder of secret files in a regular skip bin while walking home and handed it over to a journalist.

This illustrates a fundamental misunderstanding of the threat. The Australian Broadcasting Corp gets their funding from the government, and was very restrained in what they published. They waited months before publishing as they coordinated with the Australian government. They allowed the government to secure the files, and then returned them. From the government's perspective, they were the best possible media outlet to receive this information. If the government makes it illegal for the Australian press to publish this sort of material, the next time it will be sent to the BBC, the Guardian, the New York Times, or Wikileaks. And since people no longer read their news from newspapers sold in stores but on the Internet, the result will be just as many people reading the stories with far fewer redactions.

The proposed law is older than this leak, but the leak is giving it new life. The Australian opposition party is being cagey on whether they will support the law. They don't want to appear weak on national security, so I'm not optimistic.

EDITED TO ADD (2/8): The Australian government backed down on that new security law.

EDITED TO ADD (2/13): Excellent political cartoon.

CryptogramPoor Security at the UK National Health Service

The Guardian is reporting that "every NHS trust assessed for cyber security vulnerabilities has failed to meet the standard required."

This is the same NHS that was debilitated by WannaCry.

EDITED TO ADD (2/13): More news.

And don't think that US hospitals are much better.

Planet DebianPetter Reinholdtsen: Version 3.1 of Cura, the 3D print slicer, is now in Debian

A new version of the 3D printer slicer software Cura, version 3.1.0, is now available in Debian Testing (aka Buster) and Debian Unstable (aka Sid). I hope you find it useful. It was uploaded over the last few days, and the last update will enter testing tomorrow. See the release notes for the list of bug fixes and new features. Version 3.2 was announced 6 days ago. We will try to get it into Debian as well.

More information related to 3D printing is available on the 3D printing and 3D printer wiki pages in Debian.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Cory DoctorowThe Man Who Sold the Moon, Part 04 [FIXED]

Here’s part four of my reading (MP3) (part three, part two, part one) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.



Sociological ImagesWhat’s That Fact? A Tricky Graph on Terror

The Star Tribune recently ran an article about a new study from George Washington University tracking cases of Americans who traveled to join jihadist groups in Syria and Iraq since 2011. The print version of the article was accompanied by a graph showing that Minnesota has the highest rate of cases in the study. TSP editor Chris Uggen tweeted the graph, noting that this rate represented a whopping seven cases in the last six years.

Here is the original data from the study next to the graph that the paper published:


Social scientists often focus on rates when reporting events, because it makes cases easier to compare. If one county has 300 cases of the flu, and another has 30,000, you wouldn't panic about an epidemic in the second county if it had a city with many more people. But relying on rates to describe extremely rare cases can be misleading.

For example, the study's own numbers show what this graph misses. California and Texas had more individual cases than Minnesota, but their large populations hide this difference in the rates. Sorting by rates here makes Minnesota look a lot worse than other states, even though its number of cases is not dramatically different.
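A quick back-of-the-envelope calculation shows how this happens. Only Minnesota's seven cases come from the study; the California count and both populations below are round numbers for illustration:

# Rates amplify rare events in small populations.
# Only Minnesota's 7 cases is from the study; the rest is illustrative.
cases = {"Minnesota": 7, "California": 11}
population_millions = {"Minnesota": 5.5, "California": 39.5}
for state, n in cases.items():
    print("%s: %d cases, %.2f per million" % (state, n, n / population_millions[state]))
# Minnesota: 7 cases, 1.27 per million
# California: 11 cases, 0.28 per million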

As far as I can tell, this chart only appeared in the print newspaper photographed above and not on the online story. If so, this chart only went to print audiences. Today we hear a lot of concern about the impact of “filter bubbles,” especially online, and the spread of misleading information. What concerns me most about this graph is how it shows the potential impact of offline filter bubbles in local communities, too.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.


Planet DebianJeremy Bicha: GNOME Tweaks 3.28 Progress Report 2

GNOME 3.28 has reached its 3.27.90 milestone. This milestone is important because it means that GNOME is now at API Freeze, Feature Freeze, and UI Freeze. From this point on, GNOME shouldn’t change much, but that’s good because it allows distros, translators, and documentation writers to prepare for the 3.28 release. It also gives time to ensure that new features are working correctly and that as many important bugs as possible are fixed. GNOME 3.28 will be released in approximately one month.

If you haven’t read my last 3.28 post, please read it now. So what else has changed in Tweaks this release cycle?


As has been widely discussed, Nautilus itself will no longer manage desktop icons in GNOME 3.28. The intention is for this to be handled in a GNOME Shell extension. Therefore, I had to drop the desktop-related tweaks from GNOME Tweaks since the old methods don’t work.

If your Linux distro will be keeping Nautilus 3.26 a bit longer (like Ubuntu), it’s pretty easy for distro maintainers to re-enable the desktop panel so you’ll still get all the other 3.28 features without losing the convenient desktop tweaks.

As part of this change, the Background tweaks have been moved from the Desktop panel to the Appearance panel.


Historically, laptop touchpads had two or three physical hardware buttons just like mice. Nowadays, it’s common for touchpads to have no buttons. At least on Windows, the historical convention was a click in the bottom left would be treated as a left mouse button click, and a click in the bottom right would be treated as a right mouse button click.

Macs are a bit different in handling right-click (or secondary click, as it’s also called). To get a right-click on a Mac, just click with two fingers simultaneously. You don’t have to worry about whether you are clicking in the bottom right of the touchpad, so things should work a bit better once you get used to it. This approach is now even used on some Windows computers.

My understanding is that GNOME used Windows-style “area” mouse-click emulation on most computers, but there was a manually updated list of computers where the Mac style “fingers” mouse-click emulation was used.

In GNOME 3.28, the default is now the Mac style for everyone. For the past few years, you could change the default behavior in the GNOME Tweaks app, but I’ve redesigned the section now to make it easier to use and understand. I assume there will be some people who prefer the old behavior so we want to make it easy for them!

GNOME Tweaks 3.27.90 Mouse Click Emulation
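Under the hood, this tweak just flips a GSettings key, so a rough command-line equivalent (assuming the stock org.gnome.desktop.peripherals schema) is:

gsettings set org.gnome.desktop.peripherals.touchpad click-method fingers

Setting it to "areas" restores the Windows-style behavior, and "default" lets GNOME pick.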

For more screenshots (before and after), see the GitLab issue.


There is one more feature pending for Tweaks 3.28, but it’s incomplete so I’m not going to discuss it here yet. I’ll be sure to link to a blog post about it when it’s ready though.

For more details about what’s changed, see the NEWS file or the commit log.

Planet DebianJulien Danjou: Scaling a polling Python application with asyncio

This article is a follow-up of my previous blog post about scaling a large number of connections. If you don't remember, I was trying to solve one of my followers' problem:

It so happened that I'm currently working on scaling some Python app. Specifically, now I'm trying to figure out the best way to scale SSH connections - when one server has to connect to thousands (or even tens of thousands) of remote machines in a short period of time (say, several minutes).

How would you write an application that does that in a scalable way?

In the first article, we wrote a program that could handle this problem at a large scale by using multiple threads. While this worked pretty well, it had some severe limitations. This time, we're going to take a different approach.

The job

The job has not changed and is still about connecting to a remote server via ssh. This time, rather than faking it by using ping instead, we are going to connect for real to an ssh server. Once connected to the remote server, the mission will be to run a single command. For the sake of this example, the command that will be run here is just a simple "echo hello world".

Using an event loop

This time, rather than leveraging threads, we are using asyncio. Asyncio is the leading Python event loop implementation. It allows executing multiple functions (named coroutines) concurrently. The idea is that each time a coroutine performs an I/O operation, it yields control back to the event loop. As the input or output might be blocking (e.g., the socket might have no data to read yet), the event loop will reschedule the coroutine as soon as there is work to do. In the meantime, the loop can schedule another coroutine that has something to do – or wait for that to happen.

Not all libraries are compatible with the asyncio framework. In our case, we need an ssh library that has support for asyncio. It happens that AsyncSSH is a Python library that provides ssh connection handling support for asyncio. It is particularly easy to use, and the documentation has plenty of examples.

Here's the function that we're going to use to execute our command on a remote host:

import asyncssh

async def run_command(host, command):
    async with asyncssh.connect(host) as conn:
        result = await conn.run(command)
        return result.stdout

The function run_command runs a command on a remote host once connected to it via ssh. It then returns the standard output of the command. The function uses the keywords async and await, introduced in Python 3.5 for use with asyncio. They indicate that the called functions are coroutines that might block, and that control is yielded back to the event loop.

As I don't own hundreds of servers that I can connect to, I will be using a single remote server as the target – but the program will connect to it multiple times. The server is at a latency of about 6 ms, which will magnify the results a bit.

The first version of this program is simple and stupid. It runs the run_command function N times serially, providing the tasks one at a time to the asyncio event loop:

loop = asyncio.get_event_loop()
outputs = [
    loop.run_until_complete(
        run_command("myserver", "echo hello world %d" % i))
    for i in range(200)
]
print(outputs)

Once executed, the program prints the following:

$ time python3
['hello world 0\n', 'hello world 1\n', 'hello world 2\n', … 'hello world 199\n']
python3 6.11s user 0.35s system 15% cpu 41.249 total

It took 41 seconds to connect 200 times to the remote server and execute a simple printing command.

To make this faster, we're going to schedule all the coroutines at the same time. We just need to feed the event loop with the 200 coroutines at once. That will give it the ability to schedule them efficiently.

outputs = loop.run_until_complete(asyncio.gather(
    *[run_command("myserver", "echo hello world %d" % i)
      for i in range(200)]))
print(outputs)

By using asyncio.gather, it is possible to pass a list of coroutines and wait for all of them to be finished. Once run, this program prints the following:

$ time python3
['hello world 0\n', 'hello world 1\n', 'hello world 2\n', … 'hello world 199\n']
python3 4.90s user 0.34s system 35% cpu 14.761 total

This version only took ⅓ of the original execution time to finish! As a fun note, the main limitation here is that my remote server has trouble handling more than 150 connections in parallel, so this program is a bit tough on it alone.
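If I wanted to be gentler with the target, one way to cap the parallelism is a semaphore. This is only a sketch on top of the code above, not part of the measured program:

# Cap concurrency: at most 100 ssh connections are open at any moment.
sem = asyncio.Semaphore(100)

async def run_command_limited(host, command):
    async with sem:
        return await run_command(host, command)

outputs = loop.run_until_complete(asyncio.gather(
    *[run_command_limited("myserver", "echo hello world %d" % i)
      for i in range(200)]))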


To show how great this method is, I've built a chart below that shows the difference in execution time between the two approaches, depending on the number of hosts the application has to connect to.

The trend lines highlight the difference in execution time and how important concurrency is here. For 10,000 nodes, a serial execution would need around 40 minutes, whereas the cooperative approach would need only 7 minutes – quite a difference. The concurrent approach allows running the whole batch 205 times a day rather than only 36 times!

That was the second step

Using an event loop for tasks that can run concurrently due to their I/O-intensive nature is really a great way to maximize the throughput of a program. This simple change made the program about 3× faster in the measured run, and closer to 6× at larger scales.

Anyhow, this is not the only way to scale Python program. There are a few other options available on top of this mechanism – I've covered those in my book Scaling Python, if you're interested in learning more!

Until then, stay tuned for the next article of this series!

Planet DebianJo Shields: Long-term distribution support?

A question: how long is reasonable for an ISV to keep releasing software for an older distribution? When is it fair for them to say “look, we can’t feasibly support this old thing any more”.

For example, Debian 7 is still considered supported, via the Debian LTS project. Should ISV app vendors keep producing builds for Debian 7, with its ancient versions of GCC and CMake, rudimentary C++11 support, ARM64 bugs, etc.? How long is it fair to expect an ISV to keep spitting out builds on top of obsolete toolchains?

Let’s take Mono as an example, since, well, that’s what I’m paid to care about. Right now, we do builds for:

  • Debian 7 (oldoldstable, supported until May 2018)
  • Debian 8 (oldstable, supported until April 2020)
  • Debian 9 (stable, supported until June 2022)
  • Raspbian 8 (oldstable, supported until June 2018)
  • Raspbian 9 (stable, supported until June 2020)
  • Ubuntu 12.04 (EOL unless you pay big bucks to Canonical – but was used by TravisCI long after it was EOL)
  • Ubuntu 14.04 (LTS, supported until April 2019)
  • Ubuntu 16.04 (LTS, supported until April 2021)
  • CentOS 6 (LTS, supported until November 2020)
  • CentOS 7 (LTS, supported until June 2024)

Supporting just these is a problem already. CentOS 6 builds lack support for TLS 1.2+, as that requires GCC 4.7+ – but I can’t just drop it, since Amazon Linux (used by a surprising number of people on AWS) is based on CentOS 6. Ubuntu 12.04 support requires build-dependencies on a secret Mozilla-team maintained copy of GCC 4.7 in the archive, used to keep building Firefox releases.

Why not just use the CDN analytics to form my opinion? Well, it seems most people didn’t update their sources.list after we switched to producing per-distribution binaries some time around May 2017 – so they’re still hardcoding wheezy in their sources. And I can’t go by user agent to determine their OS, as Azure CDN helpfully aggregates all of them into “Debian APT-HTTP/1.x” rather than giving me the exact version numbers I’d need to cross-reference to determine OS release.

So, with the next set of releases coming on the horizon (e.g. Ubuntu 18.04), at what point is it okay to say “no more, sorry” to an old version?

Answers on a postcard. Or the blog comments. Or Twitter. Or Gitter.

Krebs on SecurityDomain Theft Strands Thousands of Web Sites

Newtek Business Services Corp. [NASDAQ:NEWT], a Web services conglomerate that operates more than 100,000 business Web sites and some 40,000 managed technology accounts, had several of its core domain names stolen over the weekend. The theft shut off email and stranded Web sites for many of Newtek’s customers.

An email blast Newtek sent to customers late Saturday evening made no mention of a breach or incident, saying only that the company was changing domains due to “increased” security. A copy of that message can be read here (PDF).

In reality, three of their core domains were hijacked by a Vietnamese hacker, who replaced the login page many Newtek customers used to remotely manage their Web sites (webcontrolcenter[dot]com) with a live Web chat service. As a result, Newtek customers seeking answers to why their Web sites no longer resolved correctly ended up chatting with the hijacker instead.

The PHP Web chat client that the intruder installed on Webcontrolcenter[dot]com, a domain that many Newtek customers used to manage their Web sites with the company. The perpetrator can be seen in this chat using the name “admin.”

In a follow-up email sent to customers 10 hours later (PDF), Newtek acknowledged the outage was the result of a “dispute” over three domains, webcontrolcenter[dot]com, thesba[dot]com, and crystaltech[dot]com.

“We strongly request that you eliminate these domain names from all your corporate or personal browsers, and avoid clicking on them,” the company warned its customers. “At this hour, it has become apparent that as a result over the dispute for these three domain names, we do not currently have control over the domains or email coming from them.”

The warning continued: “There is an unidentified third party that is attempting to chat and may engage with clients when visiting the three domains. It is imperative that you do not communicate or provide any sensitive data at these locations.”

Newtek did not respond to requests for comment.

Domain hijacking is not a new problem, but it can be potentially devastating to the victim organization. In control of a hijacked domain, a malicious attacker could seamlessly conduct phishing attacks to steal personal information, or use the domain to foist malicious software on visitors.

Newtek is not just a large Web hosting firm: It aims to be a one-stop shop for almost any online service a small business might need. As such, it’s a mix of very different business units rolled up into one since its founding in 1998, including lending solutions, HR, payroll, managed cloud solutions, group health insurance and disaster recovery solutions.

“NEWT’s tentacles go deep into their client’s businesses through providing data security, human resources, employee benefits, payments technology, web design and hosting, a multitude of insurance solutions, and a suite of IT services,” reads a Sept. 2017 profile of the company at SeekingAlpha, a crowdsourced market analysis publication.

Newtek’s various business lines. Source: Newtek.

Reached via the Web chat client he installed at webcontrolcenter[dot]com, the person who claimed responsibility for the hijack said he notified Newtek five days ago about a “bug” he found in the company’s online operations, but that he received no reply.

A Newtek customer who resells the company’s products to his clients said he had to spend much of the weekend helping clients regain access to email accounts and domains as a result of the incident. The customer, who asked to remain anonymous, said he was shocked that Newtek made little effort to convey the gravity of the hijack to its customers — noting that the company’s home page still makes no mention of the incident.

“They also fail to make it clear that any data sent to any host under the domain could be recorded (email passwords, web credentials, etc.) by the attacker,” he said. “I’m floored at how bad their communication was to their users. I’m not surprised, but concerned, that they didn’t publish the content in the emails directly on their website.”

The source said that at a minimum Newtek should have expired all passwords immediately and required resets through non-compromised hosts.

“And maybe put a notice about this on their home page instead of relying on email, because a lot of my customers can’t get email right now as a result of this,” the source said.

There are a few clues that suggest the perpetrator of these domain hijacks is indeed being truthful about both his nationality and that he located a bug in Newtek’s service. Two of the hijacked domains were moved to a Vietnamese domain registrar.

This individual gave me an email address to contact him at — — although he has so far not responded to questions beyond promising to reply in Vietnamese. The email is tied to two different Vietnamese-language social networking profiles.

A search at Domaintools indicates that this address is linked to the registration records for four domains, including one (giakiemnew[dot]com) that was recently hosted on a dedicated server operated by Newtek’s legacy business unit Crystaltek [full disclosure: Domaintools is an advertiser on this site]. Recall that Crystaltek[dot]com was among the three hijacked domains.

In addition, the domain giakiemnew[dot]com was registered through Newtek Technology Services, a domain registration service offered by Newtek. This suggests that the perpetrator was in fact a customer of Newtek, and perhaps did discover a vulnerability while using the service.

CryptogramInternet Security Threats at the Olympics

There are a lot:

The cybersecurity company McAfee recently uncovered a cyber operation, dubbed Operation GoldDragon, attacking South Korean organizations related to the Winter Olympics. McAfee believes the attack came from a nation state that speaks Korean, although it has no definitive proof that this is a North Korean operation. The victim organizations include ice hockey teams, ski suppliers, ski resorts, tourist organizations in Pyeongchang, and departments organizing the Pyeongchang Olympics.

Meanwhile, a Russia-linked cyber attack has already stolen and leaked documents from other Olympic organizations. The so-called Fancy Bear group, or APT28, began its operations in late 2017 -- according to Trend Micro and ThreatConnect, two private cybersecurity firms -- eventually publishing documents in 2018 outlining the political tensions between IOC officials and World Anti-Doping Agency (WADA) officials who are policing Olympic athletes. It also released documents specifying exceptions to anti-doping regulations granted to specific athletes (for instance, one athlete was given an exception because of his asthma medication). The most recent Fancy Bear leak exposed details about a Canadian pole vaulter's positive results for cocaine. This group has targeted WADA in the past, specifically during the 2016 Rio de Janeiro Olympics. Assuming the attribution is right, the action appears to be Russian retaliation for the punitive steps against Russia.

A senior analyst at McAfee warned that the Olympics may experience more cyber attacks before closing ceremonies. A researcher at ThreatConnect asserted that organizations like Fancy Bear have no reason to stop operations just because they've already stolen and released documents. Even the United States Department of Homeland Security has issued a notice to those traveling to South Korea to remind them to protect themselves against cyber risks.

One presumes the Olympics network is sufficiently protected against the more pedestrian DDoS attacks and the like, but who knows?

EDITED TO ADD: There was already one attack.

Worse Than FailureCoded Smorgasbord: If It's Stupid and It Works

On a certain level, if code works, it can only be so wrong. For today, we have a series of code blocks that work… mostly. Despite that, each one leaves you scratching your head, wondering how, exactly, this happened.

Lisa works at a web dev firm that just picked up a web app from a client. They didn’t have much knowledge about what it was or how it worked beyond, “It uses JQuery?”

Well, they’re technically correct:

if ($(document.getElementById("really_long_id_of_client_side_element")).checked) {
    $(document.getElementById("xxxx1")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx2")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx3")).css({ "background-color": "#FFFFFF", "color": "Black" });
    $(document.getElementById("xxxx4")).css({ "background-color": "#FFFFFF", "color": "Black" });

In this case, they’re ignoring the main reason people use jQuery: the ability to easily and clearly fetch DOM elements with CSS selectors. But they do use the css function as intended, giving them an object-oriented way to control styles. Then again, one probably shouldn’t set style properties directly from JS anyway; that’s what CSS classes are for. Then again, why mix #FFFFFF and Black, when you could use white or #000000?

Regardless, it does in fact use JQuery.

Dave A was recently trying to debug a test in Ruby, and found this unique construct:

if status == status = 1 || status = 2 || status = 3
  @msg.stubs(:is_reply?).returns true
else
  @msg.stubs(:is_reply?).returns false
end

This is an interesting case of syntactically correct nonsense that looks incorrect. status = 1 returns 1, a “truthy” value, thus short-circuiting the || operator. In this code, if status is undefined, the condition returns true and sets status equal to 1. The rest of the time, it returns false and still sets status equal to 1.

What the developer meant to do was check if status was 1, 2 or 3, e.g. if status == 1 || status == 2…, or, to use a more Ruby idiom: if [1, 2, 3].include? status. Still, given the setup for the test, the code actually worked until Dave changed the pre-conditions.

Meanwhile, Leonardo Scur came across this JavaScript reinvention of an array:

tags = {
  "tags": {
    "0": {"id": "asdf"},
    "1": {"id": "1234"},
    "2": {"id": "etc"}
  },
  "tagsCounter": 3,
  // … below this are reimplementations of common array methods built to work on `tags`
};

This was part of a trendy front-end framework he was using, and it’s obvious that arrays indexed by integers are simply too mainstream. Strings are where it’s at.

This library is in wide use, meant to add simple tagging widgets to an AngularJS application. It also demonstrates a strange way to reinvent the array.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianMichal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting request queue had grown too long, with some requests waiting for more than a month, so it was time to process it and include new projects. I hope this gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome, as they keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Planet DebianRuss Allbery: February Haul

Most of this is the current StoryBundle: Black Narratives, in honor of Black History Month in the United States. But there's also a random selection of other things that I couldn't resist.

(I'm still reading this year too! Just a touch behind on reviews at the moment.)

Alicia Wright Brewster — Echo (sff)
T. Thorne Coyle — To Raise a Clenched Fist to the Sky (sff)
T. Thorne Coyle — To Wrest Our Bodies from the Fire (sff)
Julie E. Czerneda — Riders of the Storm (sff)
Julie E. Czerneda — Rift in the Sky (sff)
Terah Edun — Blades of Magic (sff)
Terah Edun — Blades of Illusion (sff)
L.L. Farmer — Black Borne (sff)
Jim C. Hines — Goblin Quest (sff)
Jim C. Hines — The Stepsister Scheme (sff)
Nalo Hopkinson — The Salt Roads (sff)
S.L. Huang — Root of Unity (sff)
Ursula K. Le Guin — Steering the Craft (nonfiction)
Nnedi Okorafor — Kabu-Kabu (sff collection)
Karen Lord — Redemption in Indigo (sff)
L. Penelope — Angelborn (sff)
Elizabeth Wein — The Pearl Thief (mainstream)

I'm slowly reading through the Czerneda that I missed, since I liked the Species Imperative series so much. Most of it isn't that good, and Czerneda has a few predictable themes, but it's fun and entertaining.

The Wein is a prequel to Code Name Verity, so, uh, yes please.

Cory DoctorowHey, Australia and New Zealand, I’m coming to visit you!

I’m about to embark on a tour of Australia and New Zealand to support my novel Walkaway, with stops in Perth, Melbourne, Sydney, Adelaide, and Wellington! I really hope you’ll come out and say hello!

Perth: Feb 24-25, Perth Festival

Melbourne: Feb 27: An expansive conversation about the imperfect present and foreseeable future with C.S. Pacat, St Kilda Town Hall, 19h

Melbourne: Feb 28: How do writers get paid?, Wheeler Centre, 1815h

Sydney: Mar 1: What should we do about democracy?, City Recital Hall, 1930h

Adelaide: Mar 4-6: Adelaide Festival

Wellington: Mar 9-11: Writers and Readers Week

Wellington: Mar 12: NetHui one-day event on copyright


Planet DebianRuss Allbery: pgpcontrol 2.6

This is the legacy bundle of Usenet control message signing and verification tools, distributed primarily via (which hasn't updated yet as I write this). You can see the files for the current release at

This release adds support for using gpg for signature verification, provided by Thomas Hochstein, since gpgv may no longer support insecure digest algorithms.

Honestly, all the Perl Usenet control message code I maintain is a mess and needs some serious investment in effort, along with a major migration for the Big Eight signing key (and probably the signing key for various other archives). A lot of this stuff hasn't changed substantially in something like 20 years now, still supports software that no one has used in eons (like the PGP 2.6.3i release), doesn't use modern coding idioms, doesn't have a working test suite any longer, and is full of duplicate code to mess about with temporary files to generate signatures.

The formal protocol specification is also a pretty old and scanty description from the original project, and really should be a proper RFC.

I keep wanting to work on this, and keep not clearing the time to start properly and do a decent job of it, since it's a relatively large effort. But this could all be so much better, and I could then unify about four different software distributions I currently maintain, or at least layer them properly, and have something that would have a modern test suite and could be packaged properly. And then I could start a migration for the Big Eight signing key, which has been needed for quite some time.

Not sure when I'm going to do this, though, since it's several days of work to really get started. Maybe my next vacation?

(Alternately, I could just switch everything over to Julien's Python code. But I have a bunch of software already written in Perl of which the control message processing is just a component, so it would be easier to have a good Perl implementation.)

Planet DebianSteinar H. Gunderson: Chess960 opening position analysis

Magnus Carlsen and Hikaru Nakamura are playing an unofficial Chess960 world championship, so I thought I'd have a look at what the white advantage is for the 960 different positions. Obviously, you can't build anything like a huge opening book, but I let Stockfish run on the positions for increasing depths until I didn't have time anymore (in all, it was a little over a week, multiplied by 20 cores plus hyperthreading).

I've been asked to publish the list, so here it is. It's calculated deterministically using a prerelease version of Stockfish 9 (git from about a month before release), using only a single thread and consistently cleared 256 MB hash. All positions are calculated to depth 39, which is about equivalent to looking at the position for 2–3 hours, but a few are at depth 40. (At those depths, the white advantage varies from 0.01 to 0.57 pawns.) Unfortunately, I didn't save the principal variation, so it can be hard to know exactly why it thinks one position is particularly good for white, but generally, the best positions for white contain early attacks that are hard to counter without putting the pieces in awkward positions.
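If you want to reproduce a single data point, the setup is small enough to sketch. Below is a minimal example using the python-chess library as a UCI front-end; note this is not the harness used for the numbers above, and the library choice and binary name are assumptions for illustration, while the thread, hash and depth settings mirror the ones just described.

  import chess
  import chess.engine

  # Position 518 is the classical starting setup; any number in
  # 0..959 selects one of the Chess960 starting positions.
  board = chess.Board.from_chess960_pos(518)

  # Assumes a "stockfish" binary on $PATH. python-chess enables
  # UCI_Chess960 automatically for a Chess960 board.
  engine = chess.engine.SimpleEngine.popen_uci("stockfish")
  engine.configure({"Threads": 1, "Hash": 256})

  # Depth 39 takes hours of CPU time per position, as noted above.
  info = engine.analyse(board, chess.engine.Limit(depth=39))
  print(board.fen(), info["score"].white())
  engine.quit()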

One thing you may notice is that the evaluation varies quite a lot between depths. This means you shouldn't take the values as absolute gospel; it's fairly clear that the +0.57 position is better than the +0.01 position, but the difference between +0.5 and +0.4 is much less clear-cut, as you can easily see one position varying between e.g. +0.2 and +0.5.

Note that my analysis page doesn't use this information, since Stockfish doesn't have persistent hash; it calculates from scratch every game.

Planet DebianPetter Reinholdtsen: How hard can æ, ø and å be?

We write 2018, and it is 30 years since Unicode was introduced. Most of us in Norway have come to expect the use of our alphabet to just work with any computer system. But it is apparently still beyond the reach of the computers printing receipts at restaurants. Recently I visited a Peppes Pizza restaurant, and noticed a few details on the receipt. Notice how 'ø' and 'å' are replaced with strange symbols in 'Servitør', 'Å BETALE', 'Beløp pr. gjest', 'Takk for besøket.' and 'Vi gleder oss til å se deg igjen'.
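My guess, and it is only a guess, is that the text was encoded as UTF-8 but that the receipt printer decoded the bytes using a legacy single-byte codepage. A small Python illustration of that failure mode, with CP865 (Nordic DOS) as the assumed codepage:

  # Each Norwegian letter becomes two strange symbols, because its
  # two-byte UTF-8 sequence is decoded byte by byte as CP865.
  for word in ("Servitør", "Beløp", "besøket"):
      print(word, "->", word.encode("utf-8").decode("cp865"))
  # e.g. "Servitør" comes out roughly as "Servit├╕r"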

I would say that this state of affairs is past sad and well into embarrassing.

I removed personal and private information to be nice.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianShirish Agarwal: A hack and a snowflake

This will be a long post. Before starting, I would like to explain that I am not a native English speaker. I say this because what I’m going to write may or may not use the same terms or meanings that others understand, as I’m an Indian who uses a mix of American and British English.

So it’s very much possible that I have picked up all the bad habits of learning and understanding English and none of the good ones, since bad habits, as elsewhere in life, are the easier ones to pick up. Also, I’m not a trained writer and have never taken any writing lessons, apart from learning English in school as a language meant to communicate.

A few days back, I was reading an opinion piece (I tried to find it again but have failed to; if anybody finds it, please share in the comments and I will link it here). A feminist author described how some poets preached or depicted violence against women in their writings and poems, including some of the most famous poets we admire today. The author of the article was talking about poets and artists like William Wordsworth and others. She picked out particular poems from their body of work which seemed to convey that message, and going further than that, she chose to judge the poet apart from his larger body of work. I wished she had cared enough to look a bit more deeply into the poet’s life than to label him from one poem among the tens or hundreds he may have written. I confess I haven’t read much of Wordsworth beyond what was in school, and from those poems he seemed to be a nature lover rather than the sexual predator he is being made out to be. It is possible that I have been misinformed.


Meaning of author

– Courtesy – CC-by-SA

The reason I say this is because I’m a hack. A hack in the writing department or ‘business’ is somebody who is not supposed to tell any back story, just write and go away. Writers though, even those who write short stories, need to have a backstory at least in the back of their mind for each character they introduce into the story. Because it’s a short story they cannot reveal where the characters come from, only point at the actions they are taking. I have started the process twice, and two small stories got somewhat written through me, but both times I stopped midway.

While I was hammering at the keyboard for those stories, it was as if the characters themselves were taking me on a journey, one which was dark, and I didn’t want to venture further. I had heard this from quite a few authors, a few of them published, and I had dismissed it as a kind of sales pitch or something.

When I did write those stories, for the sake of argument, I realized that the only things the author has are an idea and the overall arc of the story. You have to have complete faith in your characters, even if they lead you astray or into unexpected places. The characters speak to you and through you rather than the other way around. It is the maddest and most mysterious of journeys, and it seemed the characters liked the darker tones more than the lighter ones. I do not know whether it is the same for all writers/hacks (at least in the beginning) or just me. Or maybe it’s a cathartic expression. I do hope to write more stories, and even complete them, even if they have dark overtones, just to understand the process. By dark I mean violence here.

That is why I wished the author of the opinion piece had taken the pains to share more of the context surrounding the poems themselves, such as when Mr. Wordsworth and the other poets wrote them; perhaps then I could identify with her argument, as could many writers and authors.

I was disappointed with the article in two ways. On the one hand they dismissed the poet/artist, and yet they seemed not to want to critique or ban all the other works, because

a. either they liked the major part of the work or

b. they knew the audience to whom they were serving the article probably likes the other works, and chose not to provoke them.

Another point: I felt that in pushing and punishing poets, the author may be doing so because they are softer targets now more than ever. Almost all the poets she talked about are unfortunately no longer in this mortal realm. On the business side of things, the publishing industry is in for grim times. Poets and poems are perhaps the easiest targets at the moment, as they are no longer the center of the world the way they used to be. Both in the United States and here in India, literature, and even fiction for that matter, has been booted out of the educational system. The point I’m trying to make is that publishers are not in a position to protect authors, or even themselves, when such articles are written and opinions are formed. Also see for an Indian viewpoint of the same.

I also did not understand what the author wanted when she named and shamed the poets. If you really want to name and shame people who have committed and are committing acts of violence against women, then the horror genre, apart from the action genre, should be an easy target. In many horror movies, in Hollywood, Bollywood and perhaps other countries as well, the female protagonist/lead is often molested, sexually assaulted, maimed, killed, or cannibalized, and so on and so forth. Should we ban such movies forthwith?

Also, does ‘banning’ a work of art really work? The movie ‘Padmavaat‘ has been mired in controversies due to a cultural history where, as the story/myth goes, ‘Rani Padmavati’ (whether she is real or an imaginary figure is probably fodder for another blog post), when confronted with Khilji, committed ‘Jauhar’, or self-immolation, so that she would remain ‘pure’. The fanatics rally around her as she is supposed to have paid the ultimate price, sacrificing herself. But if she were really a queen, shouldn’t she have thought of her people and lived to lead the fight, run away to fight another day, or, if she was cunning enough, wormed her way into Khilji’s heart and toppled him from within? The history and politics of those days offered all those options to her, if she were a real character; why commit suicide?

Because of the violence being perpetuated around Rani Padmavati, there hasn’t been a decent critique of either the movie or the historical times in which she supposedly lived. It perhaps makes the men of the land secure in the knowledge that women, then and even now, should kill themselves rather than fall in love with the ‘other’ (a Muslim), romantically cast as the ‘invader’, a thought which has been perpetuated by the English, for their own political gains, ever since the East India Company came. The idea of women as pure and oblivious rather than ‘devious’ could also be debated.

(sarcasm) Of course, the finding of actual historians that Khilji and Rani Padmavati could not have lived in the same century is too crackpot to believe, as cultural history wins over real history. (/sarcasm)

The reason this whole thing got triggered was the ‘snowflake’ comments on . The article itself is a pretty good read. Even though I’m an outsider to how the kernel comes together, and my knowledge of how the various subsystem maintainers pull and push patches up the train, and how Linus manages to eke out a kernel release every 3-4 months, is only theoretical, I did have an opportunity to observe how fly-by contributors are ignored by subsystem maintainers.

About a decade or so ago, my 2-button wheel Logitech mouse was failing, and I had no idea why it sometimes worked and sometimes didn’t. A hacker named ‘John Hill’ put up a patch. What the patch did, essentially, was trigger warnings on the console when the system was unable to get a signal from my 2-button wheel mouse. I commented and tried to get the patch pushed into the trunk, but it wasn’t, and nobody explained why it was discarded. While building the mouse module I did come to know how many types and models of mice there are, which was a revelation to me at that point in time. By pushing, I mean I commented a couple of times, both where the patch was posted and on the mailing list where patch threads are posted, saying that the patch by John Hill should be considered, but nobody got back to me or him.

It’s been a decade since then, and AFAIK we still do not have any proper error reporting process when a mouse or keyboard fails to transmit signals to the system.

That apart, the real long thread was about the term ‘snowflake’. I had been called that in the past, but had sort of tuned it out as I didn’t know what the term meant.

When I went to Wikipedia, I found that ‘snowflake’ comes with three meanings:

a. A unique crystalline shape of snow

b. A person who believes that s/he is unique and hence entitled

c. A person who is weak or thin-skinned (overly sensitive)

I believe we are all of the above; the only difference is one of degree. If we weren’t meant to be unique, we wouldn’t have been given a personality, a body type, a sense of reasoning and logic, and, perhaps most important, a sense of what is right or wrong. And with being thick-skinned comes the inability to love and to have empathy for others.

To round off on a somewhat hopeful note, I was re-reading, maybe for the umpteenth time, ‘Sacred Stone‘, an action thriller in which four Hindus, along with a corrupt, wealthy and hurt billionaire, try to blow up the most sacred sites of the Muslims, Mecca and Medina. While I don’t know whether it would be possible or not, I would for sure like to see people using the pious days for reflection. I don’t have to do anything, just be.

Similarly with the Spanish pilgrimage as shown in The Way. I don’t think any of my issues would be resolved by being in either of the two places, but they may trigger paths within me which I have not yet explored, or which I forgot a long time ago.

At the end I would like to share two interesting articles that I saw/read over the week, the first one about the ‘Alphonso‘ and the other about Samarkhand. I hope you enjoy both articles.

Don MartiTeam A vs. Team B

Let's run a technical challenge on the Internet. Team A vs. Team B.

Team A gets to work where they want, when they want. Team B has to work in an open-plan office, with people walking behind them, talking on the phone, doing all that annoying office stuff.

Members of Team A get paid for successful work within weeks or months. Members of Team B get a base salary that they have to spend on rent in an expensive location, but just might get paid extra for successful work in four years.

Team A will let anyone try to join, and those who aren't successful have to drop out quickly. Team B will only let members who are a "good cultural fit" join, and it takes a while to get rid of an unsuccessful member.

Team A can deploy unproven work for real-world testing, using infrastructure that they get for free on the Internet. Team B can only deploy their work when production-ready, on infrastructure they have to pay for.

If Team A breaks the rules, the penalty is that they have to spend a little money to register new domain names. If Team B breaks the rules, they risk lengthy regulatory and/or legal consequences.

Team A scores a win any time they can beat whoever is the weakest member of Team B at that time. Team B can only score a win when they can consistently defeat all of the most active members of Team A.

Team A is adfraud.

Why is so much marketing money being bet on Team B?


Planet DebianSteve Kemp: Decoding 433Mhz-transmissions with software-defined radio

This blog-post is a bit of a diversion, and builds upon my previous entry of using 433Mhz radio-transmitters and receivers with Arduino and/or ESP8266 devices.

As mentioned in my post I've recently been overhauling my in-house IoT buttons, and I decided to go down the route of using commercially-available buttons which broadcast signals via radio, rather than using IR, or WiFi. The advantage is that I don't need to build any devices, or worry about 3D-printing a case - the commercially available buttons are cheap, water-proof, portable, and reliable, so why not use them? Ultimately I bought around ten buttons, along with a radio-receiver and radio-transmitter modules for my ESP8266 device. I wrote code to run on my device to receive the transmissions, decode the device-ID, and take different actions based upon the specific button pressed.

In the gap between buying the buttons (read: radio transmitters) and waiting for the transmitter/receiver modules I intended to connect to my ESP8266/arduino device(s) I remembered that I'd previously bought a software-defined-radio receiver, and figured I could use it to receive and react to the transmissions directly upon my PC.

The dongle I'd bought in the past was a simple USB-device which identifies itself as follows when inserted into my desktop:

  [17844333.387774] usb 3-9: New USB device found, idVendor=0bda, idProduct=2838
  [17844333.387777] usb 3-9: New USB device strings: Mfr=1, Product=2, SerialNumber=3
  [17844333.387778] usb 3-9: Product: RTL2838UHIDIR
  [17844333.387778] usb 3-9: Manufacturer: Realtek
  [17844333.387779] usb 3-9: SerialNumber: 00000001

At the time I bought it I wrote a brief blog post, which described tracking aircraft, and I said "I know almost nothing about SDR, except that it can be used to let your computer do stuff with radio."

So my first step was finding some suitable software to listen on the right frequency and, ideally, decode the transmissions. A brief search led me to the following repository:

The RTL_433 project is pretty neat as it allows receiving transmissions and decoding them. Of course it can't decode everything, but it has the ability to recognize a bunch of commonly-used hardware, and when it does it outputs the payload in a useful way, rather than just dumping a bitstream/bytestream.

Once you've got your USB-dongle plugged in, and you've built the project you can start receiving and decoding all discovered broadcasts like so:

  skx@deagol ~$ ./build/src/rtl_433 -U -G
  trying device  0:  Realtek, RTL2838UHIDIR, SN: 00000001
  Found Rafael Micro R820T tuner
  Using device 0: Generic RTL2832U OEM
  Exact sample rate is: 250000.000414 Hz
  Sample rate set to 250000.
  Bit detection level set to 0 (Auto).
  Tuner gain set to Auto.
  Reading samples in async mode...
  Tuned to 433920000 Hz.

Here we've added flags:

  • -G
    • Enable all decoders. So we're not just listening for traffic at 433Mhz, but we're actively trying to decode the payload of the transmissions.
  • -U
    • Timestamps are in UTC

Leaving this running for a few hours I noted that there are several nearby cars which are transmitting data about their tyre-pressure:

  2018-02-10 11:53:33 :      Schrader       :      TPMS       :      25
  ID:          1B747B0
  Pressure:    2.500 bar
  Temperature: 6 C
  Integrity:   CRC

The second log is from running with "-F json" to cause output to be generated in JSON format:

  {"time" : "2018-02-10 09:51:02",
   "model" : "Toyota",
   "type" : "TPMS",
   "id" : "5e7e0637",
   "code" : "63e6026d",
   "mic" : "CRC"}

In both cases we see "TPMS", and according to wikipedia that is Tyre Pressure Monitoring System. I'm a little shocked to receive this data, unencrypted!

Other events also became visible when I left the scanner running; the following is presumably some kind of temperature sensor a neighbour has running:

  2018-02-10 13:19:08 : RF-tech
     Id:              0
     Battery:         LOW
     Button:          0
     Temperature:     0.0 C

Anyway I have a bunch of remote-controlled sockets, branded "NEXA", which look like this:

Radio-Controlled Sockets

When I press the remote I can see the transmissions and program my PC to react to them:

  2018-02-11 07:31:20 : Nexa
    House Code:  39920705
    Group:  1
    Channel: 3
    State:   ON
    Unit:    2
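Programming the PC to react is then just a matter of consuming the decoder's output. The following sketch shows the sort of glue I mean, re-using the JSON output mode shown above; the field names match that output, but the dispatch table is a made-up example rather than my real configuration:

  #!/usr/bin/env python3
  import json
  import subprocess

  # Made-up example: map (model, device ID) pairs to shell commands.
  ACTIONS = {
      ("Nexa", "39920705"): ["aplay", "/tmp/ding.wav"],
  }

  proc = subprocess.Popen(["rtl_433", "-F", "json"],
                          stdout=subprocess.PIPE, text=True)

  for line in proc.stdout:
      try:
          event = json.loads(line)
      except json.JSONDecodeError:
          continue  # rtl_433 also prints human-readable status lines
      key = (event.get("model"), str(event.get("id")))
      if key in ACTIONS:
          subprocess.run(ACTIONS[key])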

In conclusion:

  • SDR can be used to easily sniff & decode cheap and commonly available 433Mhz-based devices.
  • "Modern" cars transmit their tyre-pressure, apparently!
  • My neighbours can probably overhear my button presses.

Rondam RamblingsA Multilogue on Free Will

[Inspired by this comment thread.] The Tortoise is standing next to a railroad track when Achilles, an ancient Greek warrior, happens by.  In the distance, a train whistle sounds. Tortoise: Greetings, friend Achilles.  You have impeccable timing.  I could use your assistance. Achilles: Hello, Mr. T.  Always happy to help.  What seems to be the trouble? Tortoise: Look there. Achilles: Why, it

Planet DebianNorbert Preining: In memoriam Staszek Wawrykiewicz

We have lost a dear member of our community, Staszek Wawrykiewicz. I got notice that our friend died in an accident the other day. My heart stopped for an instant when I read the news; it cannot be that one of the most friendly, open, heart-warming of friends has passed.

Staszek was an active member of the Polish TeX community, and an incredibly valuable TeX Live team member. His insistence and perseverance have saved TeX Live from many disasters and bugs. Although I had been in contact with Staszek over the TeX Live mailing lists for some years, I met him in person for the first time at my first ever BachoTeX, the EuroBachoTeX 2007. His friendliness, openness to all new things, and his inquisitiveness all took a great place in my heart.

I dearly remember the evenings with Staszek and our Polish friends, in one of the Bachotek huts, around the bonfire place, him playing the guitar and singing traditional and not-so-traditional Polish music, inviting everyone to join and enjoy together. Rarely technical and social abilities have found such a nice combination as in Staszek.

Despite his age he often felt like someone in his twenties, always ready for a joke, always ready to party, always ready to have fun. It is this kind of attitude I would like to carry with me when I get older. Thanks for giving me a great example.

The few times I managed to come to BachoTeX from far-away Japan, Staszek was as welcoming as ever; it is the feeling between close friends that even if you haven’t seen each other for a long time, the moment you meet it feels like it was just yesterday. And wherever you went during a BachoTeX conference, his traces and his funniness were always present.

It is a very sad loss for all of those who knew Staszek. If I could, I would board a plane right now and join the final service for a great man, a great friend.

Staszek, we will miss you. BachoTeX will miss you, TeX Live will miss you, I will miss you badly. A good friend has passed away. May you rest in peace.

Photo credit goes to various people attending BachoTeX conferences.

Planet DebianJunichi Uekawa: Writing chrome extensions.

Writing chrome extensions. I am writing JavaScript with lots of async/await, and I forget. It's also a bit annoying that many system-provided functions don't support promises yet.

Don MartiFOSDEM videos

Check it out. The videos from the Mozilla room at FOSDEM are up, and here's me, talking about bug futures.

All FOSDEM videos

And, yes, the video link Just Works. Bonus link to some background on that: The Fight For Patent-Unencumbered Media Codecs Is Nearly Won by Robert O'Callahan

Another bonus link: FOSDEM video project, including what those custom boxes do.


CryptogramCalling Squid "Calamari" Makes It More Appetizing

Research shows that what a food is called affects how we think about it.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet DebianJohn Goerzen: The Big Green Weasel Song

One does not normally get into one’s car intending to sing about barfing green weasels to the tune of a Beethoven symphony for a quarter of an hour. And yet, somehow, that’s what I wound up doing today.

Little did I know that when Jacob started band last fall, it would inspire him to sing about weasels to the tune of Beethoven’s 9th. That may come as a surprise to his band teacher, too, who didn’t likely expect that having the kids learn the theme to the Ninth Symphony would inspire them to sing about large weasels.

But tonight, as we were driving, I mentioned that I knew the original German words. He asked me to sing. I did.

Then, of course, Jacob and Oliver tried to sing it back in German. This devolved into gibberish and a fit of laughter pretty quick, and ended with something sounding like “schneezel”. So of course, I had to be silly and added, “I have a big green weasel!”

From then on, they were singing about big green weasels. It wasn’t long before they decided they would sing “the chorus” and I was supposed to improvise verse after verse about these weasels. Improvising to the middle of the 9th Symphony isn’t the easiest thing, but I had verses about giving weasels a hug, about weasels smelling like a bug, about a green weasel on a chair, about a weasel barfing on the stair. And soon, Jacob wanted to record the weasel song to make a CD of it. So we did, before we even got to town. Here it is:

[Youtube link]

I love to hear children delight in making music. And I love to hear them enjoying Beethoven. Especially with weasels.

I just wonder what will happen when they learn about Mozart.

Google AdsenseAdSense now supports Tamil

Continuing our commitment to support more languages and encourage content creation on the web, we’re excited to announce the addition of Tamil, a language spoken by millions of Indians, to the family of AdSense supported languages.

AdSense provides an easy way for publishers to monetize the content they create in Tamil, and helps advertisers looking to connect with a Tamil-speaking audience to reach them with relevant ads.

To start monetizing your Tamil content website with Google AdSense:

  1. Check the AdSense program policies and make sure your website is compliant.
  2. Sign up for an AdSense account.
  3. Add the AdSense code to start displaying relevant ads to your users.

Welcome to AdSense! Sign Up now.

Posted by: The AdSense Internationalization Team

Planet DebianErich Schubert: Genius Nonsense & Spam

Booking.com just spammed me with an email that claims that I am a “frequent traveller” (which I am not), and thus would get “Genius” status and rebates (which means they are going to hide some non-partner search results from me…) - I hate such marketing spam.

What a big rip-off.

I have rarely ever used Booking.com, and in fact I last used it in 2015.

That is certainly not what you would call a “frequent traveler”.

But Booking.com sells this to their hotel customers as “most loyal guests”. As I am clearly not a “loyal guest”, I consider this claim to be borderline fraud. And beware: since this is a partner programme, it does come with a downside for the user. The partner results will be “boosted in our search results”. In other words, your search results will be biased. They will hide other results, which would otherwise come first (for example, because they are closer to your desired location, or even cheaper), in order to boost their partners.

Forget Booking.com and their “Genius program”. It’s a marketing fake.

Going to report this as spam, and kill my account there now.

Pro tip: use incognito mode whenever possible for surfing. For Chromium (or Google Chrome), add the option --incognito to your launcher icon, for Firefox use --private-window. On a smartphone, you may want to switch to Firefox Focus, or the DuckDuckGo browser.

Looks like those hotel booking brokers (who are in fierce competition) are getting quite desperate. We are certainly heading into the second big dot-com bubble, and it is probably going to burst sooner rather than later. Maybe the current stock market fragility will finally trigger it. If parts of the “old” economy have to cut their advertising budgets, this will have a very immediate effect on Google, Facebook, and many others.

Planet DebianLars Wirzenius: Qvisqve - an authorisation server, first alpha release

My company, QvarnLabs Ab, has today released the first alpha version of our new product, Qvisqve. Below is the press release. I wrote pretty much all the code, and it's free software (AGPL+).

Helsinki, Finland - 2018-02-09. QvarnLabs Ab is happy to announce the first public release of Qvisqve, an authorisation server and identity provider for web and mobile applications. Qvisqve aims to be secure, lightweight, fast, and easy to manage. "We have big plans for Qvisqve, and for helping customers manage cloud identities", says Kaius Häggblom, CEO of QvarnLabs.

In this alpha release, Qvisqve supports the OAuth2 client credentials grant, which is useful for authenticating and authorising automated systems, including IoT devices. Qvisqve can be integrated with any web service that can use OAuth2 and JWT tokens for access control.

Future releases will provide support for end-user authentication by implementing the OpenID Connect protocol, with a variety of authentication methods, including username/password, U2F, TOTP, and TLS client certificates. Multi-factor authentication will also be supported. "We will make Qvisqve be flexible for any serious use case", says Lars Wirzenius, software architect at QvarnLabs. "We hope Qvisqve will be useful to the software freedom ecosystem in general" Wirzenius adds.

Qvisqve is developed and supported by QvarnLabs Ab, and works together with the Qvarn software, which is award-winning free and open-source software for managing sensitive personal information. Qvarn is in production use in Finland and Sweden and manages over a million identities. Both Qvisqve and Qvarn are released under the Affero General Public Licence.
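For the curious, the client credentials grant that this first alpha implements is small enough to show in full. Below is a minimal sketch using Python's requests library; the URLs and credentials are illustrative placeholders, not anything Qvisqve-specific:

  import requests

  TOKEN_URL = "https://auth.example.com/token"  # placeholder

  # The client authenticates as itself - no end user is involved -
  # and receives a signed JWT access token.
  resp = requests.post(TOKEN_URL,
                       data={"grant_type": "client_credentials"},
                       auth=("my-client-id", "my-client-secret"))
  resp.raise_for_status()
  token = resp.json()["access_token"]

  # The token is then presented as a bearer token to any API that
  # validates JWTs issued by the authorisation server.
  api = requests.get("https://api.example.com/things",  # placeholder
                     headers={"Authorization": "Bearer " + token})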

Planet DebianOlivier Berger: A review of Virtual Labs virtualization solutions for MOOCs

I’ve just uploaded a new memo, A review of Virtual Labs virtualization solutions for MOOCs, as a page on my blog, before I eventually publish something more elaborate (and validated by peer review).

The subtitle is “From Virtual Machines running locally or on IaaS, to containers on a PaaS, up to hypothetical ports of tools to WebAssembly for serverless execution in the Web browser”.

Excerpt from the intro:

In this memo, we try to draw an overview of some benefits and concerns with existing approaches to using virtualization techniques for running Virtual Labs, as distributions of tools made available to distant learners.

We describe 3 main technical architectures: (1) running Virtual Machine images locally on a virtual machine manager, or (2) displaying the remote execution of similar virtual machines on an IaaS cloud, and (3) the potential of connecting to the remote execution of minimized containers on a remote PaaS cloud.

We then elaborate on some perspectives for locally running ports of applications to the WebAssembly virtual machine of the modern Web browsers.

I hope this will be of some interest to some of you.

Feel free to comment in this blog post.

CryptogramLiving in a Smart Home

In "The House that Spied on Me," Kashmir Hill outfits her home to be as "smart" as possible and writes about the results.

Worse Than FailureError'd: Whatever Happened to January 2nd?

"Skype for Business is trying to tell me something...but I'm not sure exactly what," writes Jeremy W.


"I was looking for a tactile switch. And yes, I absolutely do want an operating switch," writes Michael B.


Chris D. wrote, "While booking a hair appointment online, I found that the calendar on the website was a little confused as to how calendars work."


"Don't be fooled by the image on the left," wrote Dan D., "If you get caught in the line of fire, you will assuredly get soaked!"


Jonathan G. writes, "My local bar's Facebook ad shows that, depending on how the viewer frames it, even an error message can look appealing."


"I'll have to check my calendar - I may or may not have plans on the Nanth," wrote Brian.


[Advertisement] Easily create complex server configurations and orchestrations using both the intuitive, drag-and-drop editor and the text/script editor.  Find out more and download today!

Planet Linux AustraliaOpenSTEM: Australia Day in the early 20th century

Australia Day and its commemoration on 26 January, has long been a controversial topic. This year has seen calls once again for the date to be changed. Similar calls have been made for a long time. As early as 1938, Aboriginal civil rights leaders declared a “Day of Mourning” to highlight issues in the Aboriginal […]


Planet DebianSteve Kemp: Creating an IoT button, the smart way

There are several projects out there designed to create an IoT button:

  • You press a button.
  • Magic happens, and stuff runs on your computer, or is otherwise triggered remotely.

I made my own internet-button, an esp8266-based alarm-button, and recently I've wanted to have a few more dotted around our flat. To recap, the basic way these things work is that you have a device with a button on it.

Once deployed you would press the button, your device wakes up, connects to your WiFi and sends a "message". That message then goes on to trigger some kind of defined action. In my case my button would mute any existing audio-playback, then trigger the launch of an "alarm.mp3" file. In short - if somebody presses the button I would notice.

I wanted a few more doing slightly more complex things in the flat, such as triggering lights and various programs. Unfortunately these buttons are actually relatively heavy-weight, largely because connecting to WiFi demands a reasonable amount of power-draw. Even with deep-sleeping between invocations, driving such a device from battery-power means the life-span is not great. (In my case I cheat, my button is powered by a small phone-charger, which means power isn't a concern, but my "button" is hobbled.)

Ideally what everybody wants is security, simplicity, and availability. Running from batteries, avoiding the need to program WiFi credentials and having a decent form-factor makes an IoT button a lot simpler to sell - you don't need to do complicated setup, and things are nice and neat.

So I wondered is such an impossible dream actually possible, and it turns out that yes, such a device is trivial.

Instead of building WiFi into a bunch of buttons, you could build the smarts into one device, a receiver, connected to your PC via a USB cable. The buttons themselves can then be very simple: they don't use WiFi, don't need to be programmed, and don't draw a lot of current. How? Radio.

There exist pre-packaged and simple radio-based buttons, such as this one:


You press a button and it broadcasts a simple message on 433Mhz. There exist very cheap and reliable 433Mhz receivers which you can connect to an Arduino, or an ESP8266-based device. Which means you have a different approach:

  • You build a device based upon an Arduino/ESP8266/similar.
  • It listens for 433Mhz transmissions, decodes the device ID.
  • Once it finds something it recognizes it can write to STDOUT (more or less)
  • The host system opens /dev/ttyUSB0 and reads the output
    • Which it can then use to trigger actions.

The net result is you can buy a bunch of buttons, for €5 each and add them to your system. The transmissions are easy to decode, and each button has a unique ID. You don't need to program them with your WiFi credentials, or set them up - except on the host - and because these devices don't do much, they just sleep waiting for a press, make a brief radio-transmission, then go back to sleep, their batteries can last for months.

So that's what I've done. I've written a simple program which decodes the transmissions and posts to an MQ instance: "button-press-a", "button-press-b", etc., and I can react to each of them uniquely. (In some cases the actions taken depend upon the time of day.)

No code to show here, because it depends upon the precise flavour of button(s) that you buy. But I had fun because some of the remote-controls around my house use the same frequency - and a lot of the cheap "remote-controlled power-switches" use this fequency too. If you transmit as well as receive you can have a decent amount of fun. :)
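That said, the overall shape of the host side is simple enough to sketch. In the following, the serial framing, the device IDs and the topic name are all stand-ins, because the real values depend on the buttons and receiver firmware you end up with:

  import serial                     # pyserial
  import paho.mqtt.client as mqtt

  # Stand-in device IDs, as decoded and printed by the receiver.
  BUTTONS = {
      "a1b2c3": "button-press-a",
      "d4e5f6": "button-press-b",
  }

  client = mqtt.Client()
  client.connect("localhost", 1883)

  # The receiver writes one decoded device ID per line over USB-serial.
  with serial.Serial("/dev/ttyUSB0", 9600) as port:
      while True:
          device_id = port.readline().decode("ascii").strip()
          if device_id in BUTTONS:
              client.publish("buttons", BUTTONS[device_id])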

Of course radio is "broadcast", so somebody nearby could tell I've pressed a button, but as far as security goes there are no obvious IoT-snafus that I think I care about.

In my next post I might even talk about something more interesting - SMS-based things. In Europe (data-)roaming fees have recently been abolished, and anti-monopoly regulations make it "simple" to get your own SIMs made. This means you can buy a bunch of SIMs, stick them in your devices, and get cheap data-transfer Europe-wide. There are obvious commercial aspects available if you go down this route, if you can accept the caveat that you need to add a SIM-card to each transmitter and each receiver. If you can, a lot of things become possible, especially when coupled with GPS. Not only do you gain the ability to send messages/events/data, but you can see where each one came from, physically/geographically, and that is something that I think has a lot of interesting use-cases.

Krebs on SecurityU.S. Arrests 13, Charges 36 in ‘Infraud’ Cybercrime Forum Bust

The U.S. Justice Department announced charges on Wednesday against three dozen individuals thought to be key members of ‘Infraud,” a long-running cybercrime forum that federal prosecutors say cost consumers more than a half billion dollars. In conjunction with the forum takedown, 13 alleged Infraud members from the United States and six other countries were arrested.

A screenshot of the Infraud forum, circa Oct. 2014. Like most other crime forums, it had special sections dedicated to vendors of virtually every kind of cybercriminal goods or services imaginable. Click to enlarge.

Started in October 2010, Infraud was short for “In Fraud We Trust,” and collectively the forum referred to itself as the “Ministry of Fraudulently [sic] Affairs.” As a mostly English-language fraud forum, Infraud attracted nearly 11,000 members from around the globe who sold, traded and bought everything from stolen identities and credit card accounts to ATM skimmers, botnet hosting and malicious software.

“Today’s indictment and arrests mark one of the largest cyberfraud enterprise prosecutions ever undertaken by the Department of Justice,” said John P. Cronan, acting assistant attorney general of the Justice Department’s criminal division. “As alleged in the indictment, Infraud operated like a business to facilitate cyberfraud on a global scale.”

The complaint released by the DOJ lists 36 Infraud members — some only by their hacker nicknames, others by their alleged real names and handles, and still others just as “John Does.” Having been a fairly regular lurker on Infraud over the past seven years who has sought to independently identify many of these individuals, I can say that some of these names and nick associations sound accurate but several do not.

The government says the founder and top member of Infraud was Svyatoslav Bondarenko, a hacker from Ukraine who used the nicknames “Rector” and “Helkern.” The first nickname is well supported by copies of the forum obtained by this author several years back; indeed, Rector’s profile listed him as an administrator, and Rector can be seen on countless Infraud discussion threads vouching for sellers who had paid the monthly fee to advertise their services in “sticky” threads on the forum.

However, I’m not sure the Helkern association with Bondarenko is accurate. In December 2013, just days after breaking the story about the theft of some 40 million credit and debit cards from retail giant Target, KrebsOnSecurity posted a lengthy investigation into the identity of “Rescator” — the hacker whose cybercrime shop was identified as the primary vendor of cards stolen from Target.

That story showed that Rescator changed his nickname from Helkern after Helkern’s previous cybercrime forum (Darklife) got massively hacked, and it presented clues indicating that Rescator/Helkern was a different Ukrainian man named Andrey Hodirevski. For more on that connection, see Who’s Selling Cards from Target.

Also, Rescator was a separate vendor on Infraud, and there are no indications that I could find suggesting that Rector and Rescator were the same people. Here is Rescator’s most recent sales thread for his credit card shop on Infraud — dated almost a year after the Target breach. Notice the last comment on that thread alleges that Rescator had recently been arrested and that his shop was being run by law enforcement officials: 

Another top administrator of Infraud used the nickname “Stells.” According to the Justice Department, Stells’ real name is Sergey Medvedev. The government doesn’t describe his exact role, but it appears to have been administering the forum’s escrow service (see screenshot below).

Most large cybercrime forums have an escrow service, which holds the buyer’s virtual currency until forum administrators can confirm the seller has consummated the transaction acceptably to both parties. The escrow feature is designed to cut down on members ripping one another off — but it also can add considerably to the final price of the item(s) for sale.

In April 2016, Medvedev would take over as the “admin and owner” of Infraud, after he posted a note online saying that Bondarenko had gone missing, the Justice Department said.

One defendant in the case, a well-known vendor of stolen credit and debit cards who goes by the nickname “Zo0mer,” is listed as a John Doe. But according to a New York Times story from 2006, Zo0mer’s real name is Sergey Kozerev, and he hails from St. Petersburg, Russia.

The indictments also list two other major vendors of stolen credit and debit cards: hackers who went by the nicknames “Unicc” and “TonyMontana” (the latter being a reference to the fictional gangster character played by Al Pacino in the 1983 movie Scarface). Both hackers have long operated their own carding shops, and both shops remain in operation to this day:

Unicc shop, which sells stolen credit card data as well as Social Security numbers and other consumer information that can be used for identity theft.

The government says Unicc’s real name is Andrey Sergeevich Novak. TonyMontana is listed in the complaint as John Doe #1.

TonyMontana’s carding shop.

Perhaps the most successful vendor of skimming devices made to be affixed to ATMs and fuel pumps was a hacker known on Infraud and other crime forums as “Rafael101.” Several of my early stories about new skimming innovations came from discussions with Rafael in which this author posed as an interested buyer and asked for videos, pictures and technical descriptions of his skimming devices.

A confidential source who asked not to be named told me a few years back that Rafael had used the same password for his skimming sales accounts on multiple competing cybercrime forums. When one of those forums got hacked, it enabled this source to read Rafael’s emails (Rafael evidently used the same password for his email account as well).

The source said the emails showed Rafael was ordering the parts for his skimmers in bulk from Chinese e-commerce giant Alibaba, and that he charged a significant markup on the final product. The source said Rafael had the packages all shipped to a Jose Gamboa in Norwalk, Calif — a suburb of Los Angeles. Sure enough, the indictment unsealed this week says Rafael’s real name is Jose Gamboa and that he is from Los Angeles.

A private message from the skimmer vendor Rafael101, sent on a competing cybercrime forum in 2012.

The Justice Department says the arrests in this case took place in Australia, France, Italy, Kosovo, Serbia, the United Kingdom and the United States. The defendants face a variety of criminal charges, including identity theft, bank fraud, wire fraud and money laundering. A copy of the indictment is available here.

CryptogramWater Utility Infected by Cryptocurrency Mining Software

A water utility in Europe has been infected by cryptocurrency mining software. This is a relatively new attack: hackers compromise computers and force them to mine cryptocurrency for them. This is the first time I've seen it infect SCADA systems, though.

It seems that this mining software is benign, and doesn't affect the performance of the hacked computer. (A smart virus doesn't kill its host.) But that's not going to always be the case.

Worse Than FailureCodeSOD: I Take Exception

We've all seen code that ignores errors. We've all seen code that simply rethrows an exception. We've all seen code that wraps one exception for another. The submitter, Mr. O, took exception to this exceptionally exceptional exception handling code.

I was particularly amused by the OutOfMemoryException handler that allocates another exception object, and if it fails, another layer of exception trapping catches that and attempts to allocate yet another exception object. If that fails, it doesn't even try. So that makes this an exceptionally unexceptional exception handler?! (ouch, my head hurts)

It contains a modest amount of fairly straightforward code to read config files and write assorted XML documents. And it handles exceptions in all of the above ways.

You might note that the exception handling code was unformatted, unaligned and substantially larger than the code it is attempting to protect. To help you out, I've stripped out the fairly straightforward code being protected, and formatted the exception handling code to make it easier to see this exceptional piece of code (you may need to go full-screen to get the full impact).

After all, it's not like exceptions can contain explanatory text, or stack context information...

namespace HotfolderMerger {
  public class Merger : IDisposable {
    public Merger() {
      try {
          object section = ConfigurationManager.GetSection("HFMSettings/DataSettings");
          if (section == null) throw new MergerSetupException();
          _settings = (DataSettings)section;
      } catch (MergerSetupException) {
      } catch (ConfigurationErrorsException ex){
        throw new MergerSetupException("Error in configuration", ex);
      } catch (Exception ex) {
        throw new MergerSetupException("Unexpected error while loading configuration",ex);

    // A whole bunch of regex about as complex as this one...
    private readonly Regex _fileNameRegex = new Regex(@"^(?<System>[A-Za-z0-9]{1,10})_(?<DesignName>[A-Za-z0-9]{1,})_(?<DocumentID>\d{1,})_(?<FileTimeUTC>\d{1,})(_(?<BAMID>\d+))?\.(?<extension>\w{0,3})$");

    public void MergeFiles() {
      try {
          foreach (FileElement filElement in _settings.Filelist) {
            // Lots of declarations here...
            foreach (FileInfo fi in hotfolder.GetFiles()) {
              try {
                  // 35 lines of innocuous code..
              } catch (ArgumentException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunArgumentException),     ErrorMessages.MergePreRunArgumentException);
              } catch (ConfigurationException ex) {
                throw new BasisException(ex, int.Parse(ErrorCodes.MergePreRunConfigurationException),ErrorMessages.MergePreRunConfigurationException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);
              try {
                  // 23 lines of StreamReader code to load some XML from a file...
              } catch (OutOfMemoryException ex) {
                // OP: so if we're out of memory, how is this new exception going to be allocated? 
                //     Maybe in the wrapping "try/catch Exception" - which allocates a new UnexpectedMergerException object??? Oh, wait...
                throw new BasisException(  ex,int.Parse(ErrorCodes.MergeRunOutOfMemoryException),   ErrorMessages.MergeRunOutOfMemoryException);
              } catch (ConfigurationException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunConfigurationException),ErrorMessages.MergeRunConfigurationException);
              } catch (FormatException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunFormatException),       ErrorMessages.MergeRunFormatException);
              } catch (ArgumentException ex) { 
                throw new BasisException(    ex, int.Parse(ErrorCodes.MergeRunArgumentException),   ErrorMessages.MergeRunArgumentException);
              } catch (SecurityException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunSecurityException),     ErrorMessages.MergeRunSecurityException);
              } catch (IOException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunIOException),           ErrorMessages.MergeRunIOException);
              } catch (NotSupportedException ex) {
                throw new BasisException(  ex, int.Parse(ErrorCodes.MergeRunNotSupportedException), ErrorMessages.MergeRunNotSupportedException);
              } catch (Exception ex) {
                throw new UnexpectedMergerException("Unexpected exception while merging!", ex);
            // ...
      } catch (UnexpectedMergerException) {
      } catch (BasisException ex) {
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected error while attempting to parse settings prior to merge", ex);

    private static void prepareNewMergeFile(ref XmlTextWriter xtw, string filename, int numfiles) {
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(    int.Parse(ErrorCodes.MergeSetupNullReferenceException),       ErrorMessages.MergeSetupNullReferenceException, "filename parameter was null or empty");
      try {
          // Use XmlTextWriter to concatenate ~30 lines of canned XML...
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupInvalidOperationException),     ErrorMessages.MergeSetupInvalidOperationException);
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupArgumentException),             ErrorMessages.MergeSetupArgumentException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupIOException),                   ErrorMessages.MergeSetupIOException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupUnauthorizedAccessException),   ErrorMessages.MergeSetupUnauthorizedAccessException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeSetupSecurityException),             ErrorMessages.MergeSetupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while setting up for merge!", ex);

    private void closeMergeFile(ref XmlTextWriter xtw, ref List<FileInfo> filesComplete, string filename, double i) {
      if (xtw == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "xtw ref parameter was null");
      if (filesComplete == null)
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filesComplete ref parameter was null");
      if (string.IsNullOrEmpty(filename))
         throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeSetupNullReferenceException,   "filename parameter was null or empty");

      try {
          // ~ 30 lines of XmlTextWriter, StreamWriter and File IO...
      } catch (ArgumentException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupArgumentException),           ErrorMessages.MergeCleanupArgumentException);
      } catch (InvalidOperationException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupInvalidOperationException),   ErrorMessages.MergeCleanupInvalidOperationException);
      } catch (UnauthorizedAccessException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupUnauthorizedAccessException), ErrorMessages.MergeCleanupUnauthorizedAccessException);
      } catch (IOException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupIOException),                 ErrorMessages.MergeCleanupIOException);
      } catch (NullReferenceException ex) {
        throw new BasisException(int.Parse(ErrorCodes.MergeCleanupNullReferenceException),          ErrorMessages.MergeCleanupNullReferenceException, "unknown exception details");
      } catch (NotSupportedException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupNotSupportedException),       ErrorMessages.MergeCleanupNotSupportedException);
      } catch (MergerException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupMergerException),             ErrorMessages.MergeCleanupMergerException);
      } catch (SecurityException ex) {
        throw new BasisException(ex, int.Parse(ErrorCodes.MergeCleanupSecurityException),           ErrorMessages.MergeCleanupSecurityException);
      } catch (Exception ex) {
        throw new UnexpectedMergerException("Unexpected exception while merging!", ex);

Geek FeminismBringing the blog to a close

We’re bringing the Geek Feminism blog to a close.

First, some logistics; then some reasons and reminiscences; then, some thanks.


Logistics

The site will still be up for at least several years, barring Internet catastrophe. We won’t post to it anymore and comments will be closed, but we intend to keep the archives up and available at their current URLs, or to have durable redirects from the current URLs to the archive.

This doesn’t affect the Geek Feminism wiki, which will keep going.

There’s a Twitter feed and a Facebook page; after our last blog post, we won’t post to those again.

We don’t have a definite date yet for when we’ll post for the last time. It’ll almost certainly be this year.

I might update this post, or add notes in the comments, as things firm up. And this isn’t the absolute last post on the blog; it’d be nice to re-run a few of our best-of posts, for instance, like the ones Tim Chevalier linked to here. We’re figuring that out.

Reasons and reminiscences

Alex Bayley and a bunch of their peers — myself included — started posting on this blog in 2009. We coalesced around feminist issues in scifi/fantasy fandom, open culture projects like Wikipedia, gaming, the sciences, the tech industry and open source software development, Internet culture, and so on. Alex gave a talk at Open Source Bridge 2014 about our history to that point, and our meta tag has some further background on what we were up to over those years.

You’ve probably seen a number of these kinds of volunteer group efforts end. People’s lives shift, and our priorities change as we adapt to new challenges. And we’ve seen the birth or growth of other independent media; there are now quite a lot of places to go for a feminist take on the issues I mentioned.

We did some interesting, useful, and cool stuff for several years; I try to keep myself from dwelling too much in the sad half of “bittersweet” by thinking of the many communities that have already been carrying on without waiting for us to pass any torches.


Thanks

Thanks of course to all our contributors, past and present, and those who provided the theme, logo, and technical support and built or provided infrastructure, social and digital and financial, for this blog. Thanks to our readers and commenters. Thanks to everyone who did neat stuff for us to write about. And thanks to anyone who used things we said to go make the world happier.

More later; thanks.

Sociological ImagesBeyond Racial Binaries: How ‘White’ Latinos Can Experience Racism

Recent reports indicated that FEMA was cutting—and then not cutting—hurricane relief aid to Puerto Rico. When Donald Trump recently slandered Puerto Ricans as lazy and too dependent on aid after Hurricane Maria, Fox News host Tucker Carlson stated that Trump’s criticism could not be racist because “Puerto Rico is 75 percent white, according to the U.S. Census.”

Photo Credit: Coast Guard News, Flickr CC

This statement presents racism as a false choice between nonwhite people who experience racism and white people who don’t. It ignores the fact that someone can be classed as white by one organization but treated as non-white by another, due to the way ‘race’ is socially constructed across time, regions and social contexts.

Whiteness for Puerto Ricans is a contradiction. Racial labels that developed in Puerto Rico were much more fluid than on the U.S. mainland, with at least twenty categories. But the island came under U.S. rule at the height of American nativism and biological racism, which relied on a dichotomy between a privileged white race and a stigmatized black one that was designed to protect the privileges of slavery and segregation. So the U.S. portrayed the islanders with racist caricatures in political cartoons of the period.

Clara Rodriguez has shown how Puerto Ricans who migrated to the mainland had to conform to this white-black duality that bore no relation to their self-identifications. The Census only gave two options, white or non-white, so respondents who would have identified themselves as “indio, moreno, mulato, prieto, jabao, and the most common term, trigueño (literally, ‘wheat-colored’)” chose white by default, simply to avoid the disadvantage and stigma of being seen as black bodied.

Choosing the white option did not protect Puerto Ricans from discrimination. Those who came to the mainland to work in agriculture found themselves cast as ‘alien labor’ despite their US citizenship. When the federal government gave loans to white home buyers after 1945, Puerto Ricans were usually excluded on zonal grounds, being subjected to ‘redlining’ alongside African Americans. Redlining was also found to be operating on Puerto Rico itself in the insurance market as late as 1998, suggesting it may have even contributed to the destitution faced by islanders after natural disasters.

The racist treatment of Puerto Ricans shows how it is possible to “be white” without white privilege. There have been historical advantages in being “not black” and “not Mexican”, but they have not included the freedom to seek employment, housing and insurance without fear of exclusion or disadvantage. When a hurricane strikes, Puerto Rico finds itself closer to New Orleans than to Florida.

An earlier version of this post appeared at History News Network.

Jonathan Harrison, PhD, is an adjunct professor in sociology at Florida Gulf Coast University, Florida SouthWestern State College, and Hodges University. His PhD was in the field of racism and antisemitism.



Worse Than FailureCodeSOD: How To Creat Socket?

JR earned a bit of a reputation as the developer who could solve anything. Like most reputations, this was worse than it sounded, and it meant he got the weird heisenbugs. The weirdest and the worst heisenbugs came from Gerry, a developer who had worked for the company for many, many years, and left behind many, many landmines.

Once upon a time, in those Bad Old Days, Gerry wrote a C++ socket-server. In those days, the socket-server would crash any time there was an issue with network connectivity. Crashing services were bad, so Gerry “fixed” it. Whatever Gerry did fell into the category of “good enough”, but it had one problem: after any sort of network hiccup, the server wouldn’t crash, but it would take a very long time to start servicing requests again. Long enough that other processes would sometimes fail. It was infrequent enough that the bug had stuck around for years, but finally, someone wanted Something Done™.

JR got Something Done™, and he did it by looking at the CreatSocket method, buried deep in a "God" class of 15,000 lines.

void UglyClassThatDidEverything::CreatSocket() {
    while (true) {
        try {
            m_pSocket = new Socket((ip + ":55043").c_str());
            if (m_pSocket != null) {
                //"Creat socket");
                break;
            } else {
                //"Creat socket failed");
                // usleep(1000);
                // sleep(1);
                sleep(5);
            }
        } catch (...) {
            if (m_pSocket == null) {
                //"Creat socket failed");
                sleep(5);
                CreatSocket();
            }
            sleep(5);
        }
    }
}
The try portion of the code provides an… interesting take on handling socket creation. Create a socket, and grab a handle. If you don’t get a socket for some reason, sleep for 5 seconds, and then the infinite while loop means that it’ll try again. Eventually, this will hopefully get a socket. It might take until the heat death of the universe, or at least until the half-created-but-never-cleaned-up sockets consume all the filehandles on the OS, but eventually.

Unless of course, there’s an exception thrown. In that case, we drop down into the catch, where we sleep for 5 seconds, and then call CreatSocket recursively. If that succeeds, we still have that extra call to sleep which guarantees a little nap, presumably to congratulate ourselves for finally creating a socket.

JR had a simple fix for this code: burn it to the ground and replace it with a more normal approach to creating sockets. Unfortunately, management was a bit gun-shy about making any major changes to Gerry’s work. That recursive call might be more important than anyone imagined.
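For contrast, the "more normal approach" JR had in mind would usually be a bounded retry with a backoff. The following is only a sketch, not JR's actual patch: it borrows the Socket class and port from the snippet above, and assumes the Socket constructor throws on failure.

#include <unistd.h>   // sleep()
#include <string>

// Hypothetical replacement: bounded retries with a fixed backoff, no
// recursion, and no gratuitous nap after success. "Socket" is the class
// from the original snippet and is assumed to throw if creation fails.
Socket* createSocketWithRetry(const std::string& ip, int maxAttempts = 5) {
    for (int attempt = 1; attempt <= maxAttempts; ++attempt) {
        try {
            return new Socket((ip + ":55043").c_str());  // success: return at once
        } catch (...) {
            if (attempt == maxAttempts)
                throw;       // out of retries; let the caller decide what's next
            sleep(5);        // back off, then try again
        }
    }
    return nullptr;          // unreachable, but keeps the compiler happy
}

Because it returns the moment creation succeeds, there is no post-success sleep to comment out.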

JR had a simpler, if stupider fix: remove the final call to sleep(5) after creating the socket in the exception handler. It wouldn’t make this code any less terrible, but it would mean that it wouldn’t spend all that time waiting to proceed even after it had created a socket, thus solving the initial problem: that it takes a long time to recover after failure.

Unfortunately, management balked at removing a line of code. “It wouldn’t be there if it weren’t important. Instead of removing it, can you just comment it out?”

JR commented it out, closed VIM, and hoped never to touch this service again.


CryptogramSensitive Super Bowl Security Documents Left on an Airplane

A CNN reporter found some sensitive -- but, technically, not classified -- documents about Super Bowl security in the front pocket of an airplane seat.


TEDThe Big Idea: How to find and hire the best employees

So, you want to hire the best employee for the job? Or perhaps you’re the employee looking to be hired. Here’s some counterintuitive and hyper-intuitive advice that could get the right foot in the door.

Expand your definition of the “right” resume

Here’s the hypothetical situation: a position opens up at your company, applications start rolling in and qualified candidates are identified. Who do you choose? Person A: Ivy League, flawless resume, great recommendations — or Person B: state school, fair amount of job hopping, with odd jobs like cashier and singing waitress thrown in the mix. Both are qualified — but have you already formed a decision?

Well, you might want to take a second look at Person B.

Human resources executive Regina Hartley describes these candidates as “The Silver Spoon” (Person A), the one who clearly had advantages and was set up for success, and “The Scrapper” (Person B), who had to fight tremendous odds to get to the same point.

“To be clear, I don’t hold anything against the Silver Spoon; getting into and graduating from an elite university takes a lot of hard work and sacrifice,” she says. But if it so happens that someone’s whole life has been engineered toward success, how will that person handle the low times? Do they seem like they’re driven by passion and purpose?


Take this resume. This guy never finishes college. He job-hops quite a bit, goes on a sojourn to India for a year, and to top it off, he has dyslexia. Would you hire this guy? His name is Steve Jobs.

That’s not to say every person who has a similar story will ultimately become Steve Jobs, but it’s about extending opportunity to those whose lives have resulted in transformation and growth. Companies that are committed to diversity and inclusive practices tend to support Scrappers and outperform their peers: according to DiversityInc, their top 50 companies for diversity outperformed the S&P 500 by 25 percent.

(Check out Regina’s TED Talk: Why the best hire might not have the perfect resume for more advice and a fantastic suggested reading list full of helpful resources.)

Shake up the face-to-face time

Once you choose candidates to meet in-person, scrap that old hand-me-down list of interview questions — or if you can’t simply toss them, think about adding a couple more.

TED Ideas interview questions

Generally, these conversations ping-pong between two basic kinds of questions: those of competency and those of character. To identify the candidates who have substance and not just smarts, business consultant Anthony Tjan recommends that interviewers ask these five questions to illuminate not just skills and abilities, but intrinsic values and personality traits too.

  1. What are the one or two traits from your parents that you most want to ensure you and your kids have for the rest of your life? A rehearsal is not the result you want. This question calls for a bit more thought on the applicant’s end and sheds light on the things they most value. After hearing the person’s initial response, Tjan says you should immediately follow up with “Can you tell me more?” This is essential if you want to elicit an answer with real depth and substance.
  2. What is 25 times 25? Yes, it sounds ridiculous but trust us — the math adds up. It shows how people react under real-time pressure, and their response can show you how they’ll approach challenging or awkward situations. “It’s about whether they can roll with the embarrassment and discomfort and work with me. When a person is in a job, they’re not always going to be in situations that are in their alley,” he says.
  3. Tell me about three people whose lives you positively changed. What would they say if I called them tomorrow? If a person can’t think of a single person, that may say a lot, whatever the role you’re trying to fill. Organizations need employees who can lift each other up. When a person is naturally inclined toward compassionate mentorship, it can have a domino effect in an institution.
  4. After an interview, ask yourself (and other team members, if relevant) “Can I imagine taking this person home with me for the holidays?” This may seem overly personal (because, yes it is), but you’ll most likely trigger a gut reaction.
  5. After an interview, ask security or the receptionist: “How was the candidate’s interaction with you?” How a person treats someone they don’t feel they need to impress is important and telling. It speaks to whether they act with compassion and openness and view others as equals.

(Maybe ask them whether they’ve played a lot of Dungeons & Dragons in their life?)

The New York Times’ Adam Bryant suggests getting away from the standard job interview entirely. Reject the played-out choreography — the conference room, the resume, the “Where do you want to be in five years?” — and feel free to shake it up. Instead, get up and move about to observe how they behave in (and out of) the workplace wild.

Take them on a tour of the office (if you can’t take them out for a meal), he proposes, and if you feel so inclined, introduce them to some colleagues. Shake off that stress, walk-and-talk (as TED speaker Nilofer Merchant also advises) and most important, pay attention!

Are they curious about how everything happens? Do they show interest in what your colleagues do? These markers could be the difference between someone you work with and someone you want to work with. Monster has a series of good questions to ask yourself after meeting potential candidates.

Ultimately, Tjan and Bryant seem to agree, the art of the interview is a tricky but not impossible balance to strike.

Hire for your company’s values, not its internal culture

Culture fit is important, of course, but it can also be used as a shield. The bottom line is to hire for diversity — in all its forms.

There’s a chance you may be tired of reading about diversity and inclusion, that you get the point and we don’t need to keep addressing it. Well, tough. Suck it up. Because we do need to talk about it until there’s literally no need to talk about it, until this fundamental issue becomes an overarching non-issue (and preferably before we all sink into the sea). This is a concept that can’t just exist in science-fictional universes.

Example A: a sci-fi universe featuring a group of people that could be seen working together in a non-fictional universe.


MIT Media Lab director Joi Ito and writer Jeff Howe explain that the best way to prepare for a future of unknown complexity is to build on the strength of our differences. Race, gender,  sexual orientation, socioeconomic background and disciplinary training are all important, as are life experiences that produce cognitive diversity (aka different ways of thinking).

Thanks to an increasing body of research, diversity is becoming a strategic imperative for schools, firms and other institutions. It may be good politics and good PR and, depending on an individual’s commitment to racial and gender equity, good for the soul, say Ito and Howe. But in an era in which your challenges are likely to feature maximum complexity as well, it’s simply good management — which marks a striking departure from an age when diversity was presumed to come at the expense of ability.

As TED speaker Mellody Hobson (TED Talk: Color blind or color brave?) says: “I’m actually asking you to do something really simple.  I’m asking you to look at the people around you purposefully and intentionally. Invite people into your life who don’t look like you, don’t think like you, don’t act like you, don’t come from where you come from, and you might find that they will challenge your assumptions.”

So, in conclusion, go out and hire someone and give them the opportunity to change the world. Or at least, give them the opportunity to prove that they have the wherewithal to change something for the better.



Krebs on SecurityWould You Have Spotted This Skimmer?

When you realize how easy it is for thieves to compromise an ATM or credit card terminal with skimming devices, it’s difficult not to inspect or even pull on these machines when you’re forced to use them personally — half expecting something will come detached. For those unfamiliar with the stealth of these skimming devices and the thieves who install them, read on.

Police in Lower Pottsgrove, PA are searching for a pair of men who’ve spent the last few months installing card and PIN skimmers at checkout lanes inside of Aldi supermarkets in the region. These are “overlay” skimmers, in that they’re designed to be installed in the blink of an eye just by placing them over top of the customer-facing card terminal.

The top of the overlay skimmer models removed from several Aldi grocery store locations in Pennsylvania over the past few months.

The underside of the skimmer hides the brains of this little beauty, which is configured to capture the personal identification number (PIN) of shoppers who pay for their purchases with a debit card. This likely describes a great number of loyal customers at Aldi; the discount grocery chain only started accepting credit cards in 2016, and previously took only cash, debit cards, SNAP, and EBT cards.

The underside of this skimmer found at Aldi is designed to record PINs.

The Lower Pottsgrove police have been asking local citizens for help in identifying the men spotted on surveillance cameras installing the skimming devices, noting that multiple victims have seen their checking accounts cleaned out after paying at compromised checkout lanes.

Local police released the following video footage showing one of the suspects installing an overlay skimmer exactly like the one pictured above. The man is clearly nervous and fidgety with his feet, but the cashier can’t see his little dance and certainly doesn’t notice the half second or so that it takes him to slip the skimming device over top of the payment terminal.

I realize a great many people use debit cards for everyday purchases, but I’ve never been interested in assuming the added risk and so pay for everything with cash or a credit card. Armed with your PIN and debit card data, thieves can clone the card and pull money out of your account at an ATM. Having your checking account emptied of cash while your bank sorts out the situation can be a huge hassle and create secondary problems (bounced checks, for instance).

The Lower Pottsgrove Police have been admonishing people for blaming Aldi for the incidents, saying the thieves are extremely stealthy and that this type of crime could hit virtually any grocery chain.

While Aldi payment terminals in the United States are capable of accepting more secure chip-based card transactions, the company has yet to enable chip payments (although it does accept mobile contactless payment methods such as Apple Pay and Google Pay). This is important because these overlay skimmers are designed to steal card data stored on the magnetic stripe when customers swipe their cards.

However, many stores that have chip-enabled terminals are still forcing customers to swipe the stripe instead of dip the chip.

Want to learn more about self-checkout skimmers? Check out these other posts:

How to Spot Ingenico Self-Checkout Skimmers

Self-Checkout Skimmers Go Bluetooth

More on Bluetooth Ingenico Overlay Skimmers

Safeway Self-Checkout Skimmers Up Close

Skimmers Found at Wal-Mart: A Closer Look

Worse Than FailureFor Want of a CR…

A few years ago I was hired as an architect to help design some massive changes to a melange of existing systems so a northern foreign bank could meet some new regulatory requirements. As a development team, they gave me one junior developer with almost a year of experience. There were very few requirements and most of it would be guesswork to fill in the blanks. OK, typical Wall Street BS.

Horseshoe nails, because 'for want of a nail, the shoe was lost…'

The junior developer was, well, junior, but bright, and he remembered what you taught him, so there was a chance we could succeed.

The setup was that what little requirements there were would come from the Almighty Project Architect down to me and a few of my peers. We would design our respective pieces in as generic a way as possible, and then oversee and help with the coding.

One day, my boss+1 had my boss assign the junior guy to develop a web service, something the guy had never done before. Since I was busy, it was deemed unnecessary to tell me about it. The guy Googled a bit and put something together. However, he was unsure of how the response should be sent back to the browser (e.g., what sort of line endings to use) and admitted he had questions. Our boss said not to worry about it and had him install it on the dev server so boss+1 could demo it to users.

Demo time came, and the resulting output lines needed an extra newline between them to make the output look nice.

The boss+1 was incensed and started telling the users and other teams that our work was crap, inferior and not to be trusted.


When this got back to me, I went to have a chat with him about a) going behind my back and leaving me entirely out of the loop, b) having a junior developer do something in an unfamiliar technology and then deploying it without having someone more experienced even look at it, c) running his mouth with unjustified caustic comments ... to the world.

He was not amused and informed me that the work should be perfect every time! I pointed out that while everyone strives for just that, it was an unreasonable expectation, and one that doesn't do much to foster team morale or cooperation.

This went back and forth for a while until I decided that this idiot simply wasn't worth my time.

A few days later, I hear one of my peers having the same conversation with our boss+1. A few days later, someone else. Each time, the architect had been bypassed and some junior developer missed something; it was always some ridiculous trivial facet of the implementation.

I got together with my peers and discussed possibly instituting mandatory testing - by US - to prevent them from bypassing us to get junior developers to do stuff and then having it thrown into a user-visible environment. We agreed, and were promptly overruled by boss+1. Apparently, all programmers, even juniors, were expected to produce perfect code (even without requirements) every time, without exception, and anyone who couldn't cut it should be exposed as incompetent.

We just shot each other the expected Are you f'g kidding me? looks.

After a few weeks of this, we had all had enough of the abuse and went to boss+2, who was totally uninterested.

We all found other jobs, and made sure to bring the better junior devs with us.


Don MartiFun with numbers

(I work for Mozilla. None of this is secret. None of this is official Mozilla policy. Not speaking for Mozilla here.)

Guess what? According to Emil Protalinski at VentureBeat, the browser wars are back on.

Google is doubling down on the user experience by focusing on ads and performance, an opportunity I’ve argued its competitors have completely missed.

Good point. Jonathan Mendez has some good background on that.

The IAB road blocked the W3C Do Not Track initiative in 2012 that was led by a cross functional group that most importantly included the browser makers. In hindsight this was the only real chance for the industry to solve consumer needs around data privacy and advertising technology. The IAB wanted self-regulation. In the end, DNT died as the IAB hoped.

As third-party tracking made the ad experience crappier and crappier, browser makers tried to play nice. Browser makers tried to work in the open and build consensus.

That didn't work, which shouldn't be a surprise. Imagine if email providers had decided to build consensus with spammers about spam filtering rules. The spammers would have been all like, "It replaces the principle of consumer choice with an arrogant 'Hotmail knows best' system." Any sensible email provider would ignore the spammers but listen to deliverability concerns from senders of legit opt-in newsletters. Spammers depend on sneaking around the user's intent to get their stuff through, so email providers that want to get and keep users should stay on the user's side. Fortunately for legit mail senders and recipients, that's what happened.

On the web, though, not so much.

But now Apple Safari has Intelligent Tracking Prevention. Industry consensus achieved? No way. Safari's developers put users first and, like the man said, if you're not first you're last.

And now Google is doing their own thing. There are some positive parts to it, but by focusing on filtering annoying types of ad units they're closer to the Adblock Plus "Acceptable Ads" racket than to a real solution. So it's better to let Ben Williams at Adblock Plus explain that one. I still don't get how it is that so many otherwise capable people come up with "let's filter superficial annoyances and not fundamental issues" and "let's shake down legit publishers for cash" as solutions to the web advertising problem, though. Especially when $16 billion in adfraud is just sitting there. It's almost as if the Lumascape doesn't care about fraud because it's priced in, so it comes out of the publisher's share anyway.

So with all the money going to fraud and the intermediaries that facilitate it, local digital news publishers are looking for money in other places and writing off ads. That's good news for the surviving web ad optimists (like me) because any time Management stops caring about something you get a big opportunity to do something transformative.

Small victories

The web advertising problem looks big, but I want to think positive about it.

  • billions of web users

  • visiting hundreds of web sites

  • with tens of third-party trackers per site.

That's trillions of opportunities for tiny victories against adfraud.

Right now most browsers and most fraudbots are hard to tell apart. Both maintain a single "cookie jar" across trusted and untrusted sites, and both are subject to fingerprinting.

For fraudbots, cross-site trackability is a feature. A fraudbot can only produce valuable ad impressions on a fraud site if it is somehow trackable from a legit site.

For browsers, cross-site trackability is a bug, for two reasons.

  • Leaking activity from one context to another violates widely held user norms.

  • Because users enjoy ad-supported content, it is in the interest of users to reduce the fraction of ad budgets that go to fraud and intermediaries.

Browsers don't have to solve the whole web advertising problem to make a meaningful difference. Fraudbots make themselves more trackable than users running tracking-protected browsers do, so as soon as a trustworthy site's real users look different enough from fraudbots, low-reputation and fraud sites claiming to offer the same audience will have a harder and harder time selling impressions to agencies that can see it's not the same people.

Of course, the browser market share numbers will still over-represent any undetected fraudbots and under-represent the "conscious chooser" users who choose to turn on extra tracking protection options. But that's an opportunity for creative ad agencies that can buy underpriced post-creepy ad impressions and stay away from overvalued or worthless bot impressions. I expect that data on who has legit users—made more accurate by including tracking protection measurements—will be proprietary to certain agencies and brands that are going after customer segments with high tracking protection adoption, at least for a while.

Now even YouTube serves ads with CPU-draining cryptocurrency miners … by @dangoodin001

Remarks delivered at the World Economic Forum

Improving privacy without breaking the web

Greater control with new features in your Ads Settings

PageFair’s long letter to the Article 29 Working Party

‘Never get high on your own supply’ – why social media bosses don’t use social media

Can you detect WebDriver sessions from inside a web page? … via @wordpressdotcom

Making WebAssembly even faster: Firefox’s new streaming and tiering compiler

Newsonomics: Inside L.A.’s journalistic collapse

The State of Ad Fraud

The more Facebook examines itself, the more fault it finds

In-N-Out managers earn triple the industry average

Five loopholes in the GDPR

Why ads keep redirecting you to scammy sites and what we’re doing about it

Website operators are in the dark about privacy violations by third-party scripts

Mark Zuckerberg's former mentor says 'parasitic' Facebook threatens our health and democracy

Craft Beer Is the Strangest, Happiest Economic Story in America

The 29 Stages Of A Twitterstorm In 2018

How Facebook Helped Ruin Cambodia's Democracy

How Facebook’s Political Unit Enables the Dark Art of Digital Propaganda

Firefox 57 delays requests to tracking domains

Direct ad buys are back in fashion as programmatic declines

‘Data arbitrage is as big a problem as media arbitrage’: Confessions of a media exec

Why publishers don’t name and shame vendors over ad fraud

News UK finds high levels of domain spoofing to the tune of $1 million a month in lost revenue • Digiday

The Finish Line in the Race to the Bottom

Something doesn’t ad up about America’s advertising market

Fraud filters don't work

Ad retargeters scramble to get consumer consent


Krebs on SecurityAlleged Spam Kingpin ‘Severa’ Extradited to US

Peter Yuryevich Levashov, a 37-year-old Russian computer programmer thought to be one of the world’s most notorious spam kingpins, has been extradited to the United States to face federal hacking and spamming charges.

Levashov, in an undated photo.

Levashov, who allegedly went by the hacker names “Peter Severa,” and “Peter of the North,” hails from St. Petersburg in northern Russia, but he was arrested last year while in Barcelona, Spain with his family.

Authorities have long suspected he is the cybercriminal behind the once powerful spam botnet known as Waledac (a.k.a. “Kelihos”), a now-defunct malware strain responsible for sending more than 1.5 billion spam, phishing and malware attacks each day.

According to a statement released by the U.S. Justice Department, Levashov was arraigned last Friday in a federal court in New Haven, Ct. Levashov’s New York attorney Igor Litvak said he is eager to review the evidence against Mr. Levashov, and that while the indictment against his client is available, the complaint in the case remains sealed.

“We haven’t received any discovery, we have no idea what the government is relying on to bring these allegations,” Litvak said. “Mr. Levashov maintains his innocence and is looking forward to resolving this case, clearing his name, and returning home to his wife and 5-year-old son in Spain.”

In 2010, Microsoft — in tandem with a number of security researchers — launched a combined technical and legal sneak attack on the Waledac botnet, successfully dismantling it. The company would later do the same to the Kelihos botnet, a global spam machine which shared a great deal of computer code with Waledac.

Severa routinely rented out segments of his Waledac botnet to anyone seeking a vehicle for sending spam. For $200, vetted users could hire his botnet to blast one million pieces of spam. Junk email campaigns touting employment or “money mule” scams cost $300 per million, and phishing emails could be blasted out through Severa’s botnet for the bargain price of $500 per million.

Waledac first surfaced in April 2008, but many experts believe the spam-spewing machine was merely an update to the Storm worm, the engine behind another massive spam botnet that first surfaced in 2007. Both Waledac and Storm were major distributors of pharmaceutical and malware spam.

According to Microsoft, in one month alone approximately 651 million spam emails attributable to Waledac/Kelihos were directed to Hotmail accounts, including offers and scams related to online pharmacies, imitation goods, jobs, penny stocks, and more. The Storm worm botnet also sent billions of messages daily and infected an estimated one million computers worldwide.

Both Waledac/Kelihos and Storm were hugely innovative because they each included self-defense mechanisms designed specifically to stymie security researchers who might try to dismantle the crime machines.

Waledac and Storm sent updates and other instructions via a peer-to-peer communications system not unlike popular music and file-sharing services. Thus, even if security researchers or law-enforcement officials manage to seize the botnet’s back-end control servers and clean up huge numbers of infected PCs, the botnets could respawn themselves by relaying software updates from one infected PC to another.


According to a lengthy April 2017 story about Levashov’s arrest and the takedown of Waledac, Levashov got caught because he violated a basic security no-no: He used the same log-in credentials both to run his criminal enterprise and to log into sites like iTunes.

After Levashov’s arrest, numerous media outlets quoted his wife saying he was being rounded up as part of a dragnet targeting Russian hackers thought to be involved in alleged interference in the 2016 U.S. election. Russian news media outlets made much hay over this claim. In contesting his extradition to the United States, Levashov even reportedly told the RIA Russian news agency that he worked for Russian President Vladimir Putin‘s United Russia party, and that he would die within a year of being extradited to the United States.

“If I go to the U.S., I will die in a year,” Levashov is quoted as saying. “They want to get information of a military nature and about the United Russia party. I will be tortured, within a year I will be killed, or I will kill myself.”

But there is so far zero evidence that anyone has accused Levashov of being involved in election meddling. However, the Waledac/Kelihos botnet does have a historic association with election meddling: It was used during the Russian election in 2012 to send political messages to email accounts on computers with Russian Internet addresses. Those emails linked to fake news stories saying that Mikhail D. Prokhorov, a businessman who was running for president against Putin, had come out as gay.


If Levashov were to plead guilty in the case being prosecuted by U.S. authorities, it could shed light on the real-life identities of other top spammers.

Severa worked very closely with two major purveyors of spam. One was Alan Ralsky, an American spammer who was convicted in 2009 of paying him and other spammers to promote pump-and-dump stock scams.

The other was a spammer who went by the nickname “Cosma,” the cybercriminal thought to be responsible for managing the Rustock botnet (so named because it was a Russian botnet frequently used to send pump-and-dump stock spam). In 2011, Microsoft offered a still-unclaimed $250,000 reward for information leading to the arrest and conviction of the Rustock author.

A forum post by Severa listing prices to rent his Waledac spam botnet.

Microsoft believes Cosma’s real name may be Dmitri A. Sergeev, Artem Sergeev, or Sergey Vladomirovich Sergeev. In June 2011, KrebsOnSecurity published a brief profile of Cosma that included Sergeev’s resume and photo, both of which indicated he is a Belorussian programmer who once sought a job at Google. For more on Cosma, see “Flashy Car Got Spam Kingpin Mugged.”

Severa and Cosma had met one another several times in their years together in the stock spamming business, and they appear to have known each other intimately enough to be on a first-name basis. Both of these titans of junk email are featured prominently in “Meet the Spammers,” the 7th chapter of my book, Spam Nation: The Inside Story of Organized Cybercrime.

Much like his close associate — Cosma, the Rustock botmaster — Severa may also have a $250,000 bounty on his head, albeit indirectly. The Conficker worm, a global contagion launched in 2009 that quickly spread to an estimated 9 to 15 million computers worldwide, prompted an unprecedented international response from security experts. This group of experts, dubbed the “Conficker Cabal,” sought in vain to corral the spread of the worm.

But despite infecting huge numbers of Microsoft Windows systems, Conficker was never once used to send spam. In fact, the only thing that Conficker-infected systems ever did was download and spread a new version of the malware that powered the Waledac botnet. Later that year, Microsoft announced it was offering a $250,000 reward for information leading to the arrest and conviction of the Conficker author(s). Some security experts believe this proves a link between Severa and Conficker.

Both Cosma and Severa were quite active on Spamit[dot]com, a once closely-guarded forum for Russian spammers. In 2010, Spamit was hacked, and a copy of its database was shared with this author. In that database were all private messages between Spamit members, including many between Cosma and Severa. For more on those conversations, see “A Closer Look at Two Big Time Botmasters.”

In addition to renting out his spam botnet, Severa also managed multiple affiliate programs in which he paid other cybercriminals to distribute so-called fake antivirus products. Also known as “scareware,” fake antivirus was at one time a major scourge, using false and misleading pop-up alerts to trick and mousetrap unsuspecting computer users into purchasing worthless (and in many cases outright harmful) software disguised as antivirus software.

A screenshot of the eponymous scareware affiliate program run by “Severa,” allegedly the cybercriminal alias of Peter Levashov.

In 2011, KrebsOnSecurity published Spam & Fake AV: Like Ham & Eggs, which sought to illustrate the many ways in which the spam industry and fake antivirus overlapped. That analysis included data from Brett Stone-Gross, a cybercrime expert who later would assist Microsoft and other researchers in their successful efforts to dismantle the Waledac/Kelihos botnet.

Levashov faces federal criminal charges on eight counts, including aggravated identity theft, wire fraud, conspiracy, and intentional damage to protected computers. The indictment in his case is available here (PDF).

Further reading: Mr Waledac — The Peter North of Spamming

Cory DoctorowNominations for the Hugo Awards are now open

If you were a voting member of the World Science Fiction Convention in 2017, or are registered as a voting member for the upcoming conventions in 2018 or 2019, you are eligible to nominate for the Hugo Awards; the Locus List is a great way to jog your memory about your favorite works from last year — and may I humbly remind you that my novel Walkaway is eligible for your nomination?


Adam recently tried to claim a rebate for a purchase. Rebate websites, of course, are awful. The vendor doesn’t really want you to claim the rebate, after all, so even if they don’t actively try and make it user hostile, they’re also not going to go out of their way to make the experience pleasant.

In Adam’s case, it just didn’t work. It attempted to use a custom-built auto-complete textbox, which errored out and in some cases popped up an alert which read: [object Object]. Determined to get his $9.99 rebate, Adam did what any of us would do: he started trying to debug the page.

The HTML, of course, was a layout built from nested tables, complete with 1px transparent GIFs for spacing. But there were a few bits of JavaScript code which caught Adam’s eye.

function doTheme(myclass) {
         if ( document.getElementById ) {
                if(document.getElementById("divLog").className=="princess") {
                        document.getElementById("divLog").className = "death";
                } else {
                        if(document.getElementById("divLog").className=="death") {
                                document.getElementById("divLog").className = "clowns";
                        } else {
                                if(document.getElementById("divLog").className=="clowns") {
                                        document.getElementById("divLog").className = "princess";
                                }
                        }
                }
        } else if ( document.all ) {
                if(document.all["divLog"].className=="princess") {
                        document.all["divLog"].className = "death";
                } else {
                        if(document.all["divLog"].className=="death") {
                                document.all["divLog"].className = "clowns";
                        } else {
                                if(document.all["divLog"].className=="clowns") {
                                        document.all["divLog"].className = "princess";
                                }
                        }
                }
        }
}

This implements some sort of state machine. If the state is “princess”, become “death”. If the state is “death”, become “clowns”. If the state is “clowns”, go back to being a “princess”. Death before clowns is a pretty safe rule.

This code also will work gracefully if document.getElementById is unavailable, meaning it works all the way back to IE4. That’s backwards compatibility. Since it doesn't work in Adam's browser, it missed out on the forward compatibility, but it's got backwards compatibility.

To round out the meal, Adam also provides a little bit of dessert for this entry of awful code.


function over(myimage,str) {  


Adam used some google-fu and found an alternate site that allowed him to redeem his rebate.


Sam VargheseCricket Australia needs to get player availability policies sorted

Australian cricket authorities are short-changing fans of the national Twenty20 competition, the Big Bash League, through their policies on releasing players from national duty when needed by their BBL sides for crucial encounters.

The Adelaide Strikers and the Hobart Hurricanes, who contested Sunday’s final, were both affected by this policy.

Adelaide won, but had they failed to do so, no doubt attention would have been drawn to the fact that their main fast bowler, Billy Stanlake, did not play as he was on national duty in a tri-nation tournament involving New Zealand and England.

Even though Cricket Australia released some other players – Alex Carey of the Strikers and Darcy Short of the Hurricanes – for the BBL final, it was clear that the travel from Sydney (where a game in the tri-nation tournament was played on Saturday night) to Adelaide (where the BBL final took place on Sunday afternoon) had affected them.

Carey, whose batting has been one of the Strikers’ strengths, was out cheaply, while Short, normally an ebullient six-hitter who had rung up two 90s and a century in the BBL league stage, was totally off his game. He made a listless 68 and his strike rate was much lower than normal, something which made a big difference as his team was chasing 203 for a win. Both Carey and Short had played for the national team the previous night.

Strikers captain Travis Head was also in Sydney on Saturday but did not play in the game. He rushed back to Adelaide for the final and made a rather subdued 44, playing second fiddle to opener Jake Weatherald who made a quick century.

Given that the international cricket season clashes with the BBL, the good folk at Cricket Australia need to develop some consistent policies about player involvement in both forms of the game.

Weakening the BBL sides at crucial stages of the tournament will mean that the game becomes that much less competitive. And that will affect the crowds, who are already diminishing in numbers. Sunday’s final involved the home team and yet could not fill the Adelaide stadium.

With grandiose plans to expand the BBL next year so that each team plays each other both at home and away, and to also add an AFL style finals process – where the teams that finish higher up get a second chance at qualifying for the final – Cricket Australia would do well to pay heed to player availability policies.

Else, what was once a golden goose may be found to have no more eggs to lay.


Planet Linux AustraliaJonathan Adamczewski: Watch as the OS rewrites my buggy program.

I didn’t know that SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT) was a thing, until I wrote a bad test that wouldn’t crash.

Digging into it, I found that a movaps instruction was being rewritten as movups, which was a thoroughly confusing thing to see.

The one clue I had was that a fault due to an unaligned load had been observed in non-test code, but did not reproduce when written as a test using the google-test framework. A short hunt later (including a failed attempt at writing a small repro case), I found an explanation: google test suppresses this class of failure.

The code below will successfully demonstrate the behavior, printing out the SIMD load instruction before and after calling the function with an unaligned pointer.
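
The demo itself was embedded as a gist, which doesn't survive here. The following is a rough sketch of the same idea, not the author's code: it assumes MSVC on x64, where _mm_load_ps compiles to an aligned movaps load (opcode 0F 28), and that the OS fixup rewrites that instruction in place.

#include <windows.h>
#include <xmmintrin.h>
#include <cstdio>

// A function whose body contains an aligned SIMD load (movaps on typical
// x64 codegen). Calling it with a pointer that isn't 16-byte aligned
// should fault -- unless the OS quietly fixes it up.
__declspec(noinline) float LoadFirst(const float* p) {
    return _mm_cvtss_f32(_mm_load_ps(p));
}

// Print the first n bytes of a function's machine code. (Under incremental
// linking &LoadFirst may point at a jump thunk, so build Release.)
static void DumpCode(const void* fn, size_t n) {
    const unsigned char* b = static_cast<const unsigned char*>(fn);
    for (size_t i = 0; i < n; ++i) printf("%02x ", b[i]);
    printf("\n");
}

int main() {
    // Ask the OS to fix up alignment faults silently -- the mode that
    // googletest enables, and that masked the original bug.
    SetErrorMode(SEM_NOALIGNMENTFAULTEXCEPT);

    alignas(16) float data[8] = { 1, 2, 3, 4, 5, 6, 7, 8 };

    DumpCode(reinterpret_cast<const void*>(&LoadFirst), 16);  // expect 0f 28
    float f = LoadFirst(data + 1);  // deliberately misaligned by 4 bytes
    DumpCode(reinterpret_cast<const void*>(&LoadFirst), 16);  // 0f 28 may now read 0f 10
    printf("loaded: %f\n", f);
    return 0;
}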




CryptogramFriday Squid Blogging: Kraken Pie

Pretty, but contains no actual squid ingredients.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityAttackers Exploiting Unpatched Flaw in Flash

Adobe warned on Thursday that attackers are exploiting a previously unknown security hole in its Flash Player software to break into Microsoft Windows computers. Adobe said it plans to issue a fix for the flaw in the next few days, but now might be a good time to check your exposure to this still-ubiquitous program and harden your defenses.

Adobe said a critical vulnerability (CVE-2018-4878) exists in current and earlier versions of Adobe Flash Player. Successful exploitation could allow an attacker to take control of the affected system.

The software company warns that an exploit for the flaw is being used in the wild, and that so far the attacks leverage Microsoft Office documents with embedded malicious Flash content. Adobe said it plans to address this vulnerability in a release planned for the week of February 5.

According to Adobe’s advisory, beginning with Flash Player 27, administrators have the ability to change Flash Player’s behavior when running on Internet Explorer on Windows 7 and below by prompting the user before playing Flash content. A guide on how to do that is here (PDF). Administrators may also consider implementing Protected View for Office. Protected View opens a file marked as potentially unsafe in Read-only mode.

Hopefully, most readers here have taken my longstanding advice to disable or at least hobble Flash, a buggy and insecure component that nonetheless ships by default with Google Chrome and Internet Explorer. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the Flash cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

By default, Mozilla Firefox on Windows computers with Flash installed runs Flash in a “protected mode,” which prompts the user to decide if they want to enable the plugin before Flash content runs on a Web site.

Another, perhaps less elegant, alternative to wholesale kicking Flash to the curb is to keep it installed in a browser that you don’t normally use, and then only use that browser on sites that require Flash.

CryptogramSigned Malware

Stuxnet famously used legitimate digital certificates to sign its malware. A research paper from last year found that the practice is much more common than previously thought.

Now, researchers have presented proof that digitally signed malware is much more common than previously believed. What's more, it predated Stuxnet, with the first known instance occurring in 2003. The researchers said they found 189 malware samples bearing valid digital signatures that were created using compromised certificates issued by recognized certificate authorities and used to sign legitimate software. In total, 109 of those abused certificates remain valid. The researchers, who presented their findings Wednesday at the ACM Conference on Computer and Communications Security, found another 136 malware samples signed by legitimate CA-issued certificates, although the signatures were malformed.

The results are significant because digitally signed software is often able to bypass User Account Control and other Windows measures designed to prevent malicious code from being installed. Forged signatures also represent a significant breach of trust because certificates provide what's supposed to be an unassailable assurance to end users that the software was developed by the company named in the certificate and hasn't been modified by anyone else. The forgeries also allow malware to evade antivirus protections. Surprisingly, weaknesses in the majority of available AV programs prevented them from detecting known malware that was digitally signed even though the signatures weren't valid.

Worse Than FailureError'd: The Biggest Loser

"I don't know what's more surprising - losing $2,000,000 or that Yahoo! thought I had $2,000,000 to lose," writes Bruce W.


"Autodesk sent out an email about my account's password being changed recently. Now it's up to me to guess which $productName it is!" wrote Tom G.


Kurt C. writes, "I kept repeating my mantra: 'Must not click forbidden radio buttons...'"


"My son boarded a bus in Toronto and got a free ride when the driver showed him this crash message," Ari S. writes.


"For those who are in denial about global warming, may I please direct you to conditions in Wisconsin," wrote Chelsie S.


Billie J. wrote, "Sorry there, Walmart, but that's not how math works."



Planet Linux AustraliaOpenSTEM: Welcome Back!

Well, most of our schools are back, or about to start the new year. Did you know that there are schools using OpenSTEM materials in every state and territory of Australia? Our wide range of resources, especially those on Australian history, give detailed information about the history of all our states and territories. We pride […]


Cory DoctorowThe 2017 Locus List: a must-read list of the best science fiction and fantasy of the past year

Every year, Locus Magazine’s panel of editors reviews the entire field of science fiction and fantasy and produces its Recommended Reading List; the 2017 list is now out, and I’m proud to say that it features my novel Walkaway, in excellent company with dozens of other works I enjoyed in the past year.

2017 Locus Recommended Reading List
[Locus Magazine]

CryptogramJackpotting Attacks Against US ATMs

Brian Krebs is reporting sophisticated jackpotting attacks against US ATMs. The attacker gains physical access to the ATM, plants malware using specialized electronics, and then later returns and forces the machine to dispense all the cash it has inside.

The Secret Service alert explains that the attackers typically use an endoscope -- a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body -- to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM's computer.

"Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers," reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

"In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds," the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

Lots of details in the article.

Worse Than FailureWe Sell Bonds!


The quaint, brick-faced downtown office building was exactly the sort of place Alexis wanted her first real programming job to be. She took a moment to just soak in the big picture. The building's façade oozed history, and culture. The busy downtown crowd flowed around her like a tranquil stream. And this was where she landed right out of college-- if this interview went well.

Alexis went inside and got a really groovy start-up vibe from the place. The lobby slash waiting room slash employee lounge slash kitchen slash receptionist desk was jam-packed full of boxes of paperwork waiting to be unpacked and filed (once a filing cabinet was bought). The walls, still the color of unpainted drywall, were accented with spatters of plaster and glue-tape. Everything was permeated with chaotic beginnings and untapped potential.

Her interviewer, Mr. Chen, the CEO of the company, led her into the main conference room, which she suspected was the main conference room by virtue of being the only conference room. The faux-wood table, though moderately sized, still barely left room for herself and the five interviewers, crammed into a mish-mash of conference-room chairs, office chairs and one barstool. At least this room's walls had seen a coat of paint-- if only a single coat. Framed artwork sat on the ground, leaned up gently against the wall. She shared the artwork's anticipation-- waiting for the last coat of paint and touch-ups to dry, to hang proudly for all to see, fulfilling their destiny as the company grew and evolved around them.

"Thank you for coming in," said Mr. Chen as he sat at the head of the conference table.

"Thank you for having me," Alexis replied, sitting opposite him, flanked by the five other interviewers. She was glad she'd decided to play cautious and wear her formal 'Interview Suit'. She fit right in with the suits and ties everyone else was wearing. "I really dig the office space. How long have you been here?"

"Five years," Mr. Chen answered.

Her contextual awareness barely had time to register the whiplash of unpainted walls and unhung pictures in a long-occupied office-- not that she had time to process that thought anyways.

"Let the interview begin now," Mr. Chen said abruptly. "Tell me your qualifications."

"I-- uh, okay," Alexis sat up straight and opened her leather folder, "Does everyone have a copy of my resume? I printed extra in case-- "

"We are a green company," Mr. Chen stated.

Alexis froze for a moment, her hand inches from the stack of resumes. She folded her hands over her own copy, smiled, and recovered. "Okay. My qualifications..." She filled them in on the usual details-- college education, GPA, co-op jobs, known languages and frameworks, contributions to open source projects. It was as natural as any practice interview she'd ever had. Smile. Talk clearly. Make eye contact with each member of the interview team for an appropriate length of time before looking at the next person.

Though doing so made her acutely aware that she had no idea who the other people were. They'd never been introduced. They were just-- there.

As soon as she'd finished her last qualification point, Mr. Chen spoke. "Are you familiar with the bonds market?"

She'd done some cursory Wikipedia research before her interview, but admitted, "An introductory familiarity."

"You are not expected to know it," Mr. Chen said, "The bond market is complicated. Very complicated. Even experienced brokers who know about futures contracts, forward contractions, options, swaps and warrants often have no idea how bonds work. But their customers still want to buy a bond. The brokers are our customers, and allowing them to buy bonds is the sole purpose of 'We Sell Bonds!'."

Though Mr. Chen had a distinctly dry and matter-of-fact way of speaking, she could viscerally HEAR the exclamation point in the company's name.

"Very interesting," Alexis said. Always be sure to compliment the interviewer at some point. "What sort of programming projects would I be working on?"

"The system is very complicated," Mr. Chen retorted. "Benny is our programmer."

One of the suited individuals to her left nodded, and she smiled back at him. At least now she knew one other person's name.

"He will train you before you may modify the system. It is very important the system be working properly, and any development must be done carefully. At least six months of training. But the system gathers lots of data, from markets, and from our customers. That data must be imported into the system. That will be part of your duties."

Again, Alexis found herself sitting behind a default smile while her brain processed. The ad she'd answered had clearly been for a junior developer. It had asked for developer skills, listed must-know programming languages, and even been titled 'Junior Developer'. Without breaking the smile, she asked, "What would you say the ratio of data handling to programming would be?"

"I would say close to one hundred percent."

Alexis' heart sank, and she curled her toes to keep any physical sign of her disappointment from showing. She mentally looked to her silver-linings view. Sure, it was data entry-- but she'd still be hands-on with a complicated financial system. She'd get training. Six months of training, which would be a breeze compared to full-time college. And if there really was that much data entry, then the job would be perfect for a fresh mind. There'd be TONS of room for introducing automation and efficiency. What more could a junior developer want?

"That sounds great," Alexis said, enthusiastic as ever.

"Good," Mr. Chen said. "The job starts on Monday."

Her whiplash systems had already long gone offline from overload. Was that a job offer?

"That-- sounds great!" Alexis repeated.

"Good. Nadine will email your paperwork. Email it back promptly."

And now Alexis knew a second person's name. "I look forward to meeting the whole company," she said aloud.

"You have," Mr. Chen replied, gesturing to the others at the table. "We will return to work now. Good day."

Alexis found herself back on the sidewalk outside the quaint brick-faced downtown office building, gainfully employed and not sure if she actually understood what the heck had just happened. But that was a problem for Monday.


Alexis arrived fifteen minutes early for her first day at the quaint brick-faced downtown office-- no, make that HER quaint brick-faced downtown office.

Fourteen minutes later, Mr. Chen unlocked the front-door from the inside, and let her in.

"You're early," he stated, locking the door behind her.

"The early bird gets the worm," she clichéd.

"You don't need to be early if you are punctual. Follow."

Mr. Chen led her through the lobby, and once again into the main boardroom. As before, five people sat around the conference table. Alexis figured there'd be formalities and paperwork to file before she got a desk. HER desk! The whole company (all six of them-- though now it was seven) were here to greet her. And, for some reason, they'd brought their laptops.

"You will sit beside Benny," Mr. Chen said, taking his seat.

"I-- huh?"

Next to Benny, there was an empty chair, and an unoccupied laptop. Alexis slunk around the other chairs, careful not to knock over the framed posters that were still propped against the wall, and sat beside the lead programmer.

"Morning meeting before getting down to work, huh?" she said, smiling at him.

Benny gave her a sideways glance. "We are working."

Alexis wasn't sure what he meant-- and then she noticed, for the first time, that everyone was heads down, looking at their screens, typing away. This wasn't just a boardroom. This was her desk. This was everyone's desk.

Over the morning, Benny gave her his split attention-- interspersing his work with muttered instructions to her: how to log in, where the data files were, how to install Excel. He would only talk to her in between steps of his task, never interrupting a step to give her attention. Sometimes she just sat there and watched him watch a progress bar. She gathered he was upgrading a server's instance of SQL Server from version "way too old" to version "still way too old, but not as much".

After lunch (also eaten at the shared desk), Benny actually looked at her.

"Time for your first task," he said, giving her a sheet of paper. "We have a new financial institution buying bonds from us. They will use our Single SignOn Solution. You will need to create these accounts."

She took the sheet of paper, a simple printed table with first name, last name, company name, username and password.

Alexis was recently enough out of college that "Advanced Security Practices 401" was still fresh in her mind-- and seeing a plaintext password made her bones nearly crawl out of her skin.

"I-- um-- are there supposed to be passwords here?"

Benny nodded. "Yes. To facilitate single sign-on, usernames and passwords in 'We Sell Bonds!' website must exactly match those used in the broker's own system. Their company signs up for 'We Sell Bonds!', and they are provided with a website skinned to look like their own. The company's employees are given a seamless experience. Most don't even know they are on a different site."

Her skin gnawed on her bones to keep them in place. "But, if the passwords here are in plaintext, that is their real, actual password?"

Benny gave her the same nod. "They must be. Otherwise we could not log in to test their account."

That either made perfect sense, or had dumbfounded all the sense out of Alexis, so she just said "Ok." The rest of the day was spent creating accounts through an ASP interface, then logging into the site to test them.

When she arrived at the quaint brick-faced office building the next day, there was a large stack of papers at her spot at the communal desk. Benny said, "Mr. Chen was happy with your data entry yesterday."

Mr. Chen, who was seated at the head of the shared desk, didn't look up from his laptop screen. "You are allowed to enter this data too."

"Thank you?" Alexis settled in, and got to work. For every record she entered, a different way of optimizing this system would flitter through her mind. A better entry form, maybe auto-focus the first field? How about an XML dump on a USB disk? Or a SOAP service that could interface directly with the database? There could be a whole validation layer to check the data and pre-emptively catch data errors.

Data errors like the one she was looking at right now. She waited patiently for Benny to complete whatever step of his task he was on, and showed him the offending records.

"I don't see the problem," Benny said, shortly.

"John Smith and Jon Smith both have the same username, jsmith" she said, not sure how to make it more clear.

"Yes, they do," Benny confirmed.

"They can't both have the same username."

"They can!" Mr. Chen's sudden interjection startled her-- though she wasn't sure if it was because of the sharpness of his tone, or because she hadn't actually heard him speak for a day and a half. "Do you not see that they have different passwords?"

"Uh," Alexis checked, "They do. But the username is still the same."

There was no response. Mr. Chen was already looking back at his screen. Benny was looking at her expectantly.

"So users are differentiated by their-- password?" she said, trying to grasp what the implications of that would be. "What if someone changes their password to be-- "

"Users don't change passwords," Benny replied. "That would break single sign-on. If a user changes their password in their home system, their company will submit a change request to us to modify the password on 'We Sell Bonds!'."

Alexis blinked-- this time certain that this made no sense, and she was actually dumbfounded. But Benny must have taken her silence as 'question answered', and immediately started his next task. It made no sense, but she was still a junior developer, fresh out of school; full of ideas but short on experience. Maybe-- yeah, maybe there was a reason to do it this way. One that made sense once she learned to stop thinking so academically. That must be it.

She dutifully entered two records for jsmith and kept working on the pile.


Friday. The end of her first real work week. Such a long, long week of data entry, interspersed with being allowed to witness a small piece of the system as Benny worked on his upgrades. At least she knew now which version of SQL Server was in use; and that Benny avoided the braces-versus-no-braces argument by just using VBScript, which was "pure and brace-free"; and that stored procedures were allowed because raw SQL was too advanced to trust to human hands.

Alexis stood in front of the quaint brick-faced office building. It was familiar enough now, after even just a week, that she could see the discoloured mortar, and cracked bricks near the foundation, and the smatterings of dirt and debris that came with being on a busy downtown street.

She went into the office, and sat down at the desk. Another stack of papers for her to enter, just like the day before, just like every day this week. Though something was different. In the middle of the table, there was a box of donuts from the local bakery.

"Well, that's nice," she said as she sat down. "Happy Friday?"

Everyone looked up at her at the same time.

"No," Mr. Chen stated, "Friday is not a celebration; please do not detract from Benny's success."

She felt like she wanted to apologize, but she didn't know why. "What's the occasion, Benny?"

"He has completed the upgrade of the database. We celebrate his success."

That seemed reasonable enough. Mr. Chen opened the box. There was an assortment of donuts. Seven donuts. Exactly seven donuts. Not a dozen. Not half a dozen. Seven. Who buys seven donuts?

Mr. Chen selected one, and then the box was passed around. Alexis didn't know much about her coworkers (a fact that, upon thinking about it, was not normal)-- but she did know enough about their positions to recognize the box being passed around in order of seniority. She took the last one, a plain cake donut.

Of course.

"Well," she said, making a silver lining out of a plain donut, "Congratulations, Benny. Cheers to you."

"Thank you," he said, "I was finally able to successfully update the server for the first time last night."

"Nice. When do we roll it out to the live website?"

Benny looked at her blankly. "The website is live."

"Yeah, I know," Alexis said, swallowing the last bit of donut. It landed hard on the weird feeling she had in her stomach. "But, y'know-- you upgraded whatever environment you were experimenting on, right? So now that that's done, are you, like-- going to upgrade the live, production server over the weekend or something-- like, off hours?"

"I have upgraded the live, production server. That is our server. That is where we do all the work."

Alexis became acutely aware that the weird feeling in her stomach was a perfectly normal and natural reaction to thinking about doing work directly on a live, production server that served hundreds of customers handling millions of dollars.


Mr. Chen finished his donut and said, "Benny is a proper, careful worker. There is no need to waste resources on many environments when one can just do the job correctly in the first place. Again, good work, Benny, and now the day begins."

Everyone turned to their laptops, and so did Alexis, reflexively. She started in on the first stack of papers to enter into the database-- the live, production database she was interfacing directly with-- when she heard a sound she'd never heard before.

A phone rang.

The man beside Mr. Chen-- Trevor, she thought his name was-- stood up and excused himself to the lobby to answer the phone. He returned after a few moments, and put a piece of paper on top of her pile.

"That request should be queued at the bottom of her pile," Mr. Chen said as soon as Trevor's hand was off the paper.

"I believe this may be a case of priority," Trevor replied. He had a nice voice. Why hadn't she heard her co-worker's voice after a week of working here? "A user cannot log in."

She glanced down at the paper. There was a username and password jotted down. When she looked back up at Mr. Chen, he waved her to proceed. Alexis pulled up the "We Sell Bonds!" home page, and tried to log in as "a.sanders".

The logged-in page popped up. "Huh, seems to be working now."

"No," Trevor said, "You should be logged in as Andrew Sanders from Initech Bonds and Holdings, not Andrew Sanders from Fiduciary Interests."

"But I am logged in as a.sanders from Initiech, see?" she brought up the account details to show Trevor.

"No, I tried it myself. I will show you." Trevor took her laptop, repeated the login steps. "There."

"Huh." Alexis stared at the account information for Andrew Sanders from Fiduciary Interests. "Maybe one of us is typing in the wrong password?"

Alexis tried again, and Trevor tried-- and this time got the results reversed. They tried a new browser session, and both got Initech. Then they tried different browsers and Trevor got Initech twice in a row. They copy and pasted usernames and passwords to and from Notepad. No matter what they tried, they couldn't consistently reproduce which Andrew Sanders they got.

As Alexis tried over and over to come up with something or anything to explain it, Benny was frantically running through code, adding Response.Write("<!-- some debugging message -->") anywhere he could think might help.

By this point the whole company was watching them. While that shouldn't be noteworthy since the entire company was in the same room, being paid attention to by this particular group of coworkers was extremely noticeable.

And of all the looks that fell on her, the most disconcerting was Mr. Chen's gaze.

"Determine the cause of this disruption to our website," he said flatly.

"I don't get it," Alexis said, "This doesn't make any sense. We should be able to determine what's causing this bug, but-- um-- hang on."

Determine-- the word tugged at her, begging to be noticed. Or begging her to notice something. Something she'd seen on Benny's screen. A SQL query. It reminded her of a term from one of her Database Management exams. Deterministic. Yes, of course!

"Benny, go back to that query you had on screen!" she exclaimed! "Yes, that one!"

As she pointed at Benny's screen, Mr. Chen was already on his feet, heading over. A perfect chance for her to finally prove her worth as a developer.

"That query, right there, for getting the user record. It's using a view and-- may I?" she took over Benny's laptop, focused on the SQL Management Studio, but excitedly talking aloud as she went.

"Programmability... views... VIEW_ALL_USERS... aha! Check it out."

ORDER BY UserCreatedDate

"Which," she clicked back to the query, "Is used here..."

WHERE username=@Username and password=@Password

"... and we only use the first record we return, but I've read about this! Okay, like, the select without an ORDER BY returns in random order-- no no no, NON-DETERMINISTIC order, basically however the optimizer felt like returning them, based on which ones returned faster, or what it had for breakfast that day, but totally non-deterministic. No "ORDER BY" means no order. Or at least it is supposed to, but, like, SQL Server 2000 had this bug in the optimizer, which became this epic 'undocumented feature'. When you did TOP 100 PERCENT along with an ORDER BY in a view, the optimizer just bugged the fudge out and did the sorting wrong, but did the sorting wrong in a deterministic way. In effect, it would obey the ORDER BY inside the view, but only by accident. But, like I said, that was a bug in SQL Server 2000, and Benny, WE JUST UPGRADED TO SQL SERVER 2005!"

She held her hands out, the solution at last. Mr. Chen was standing right there. Okay, perfect-- because what had Logical Thinking and Troubleshooting 302 taught her? Don't just identify a PROBLEM, also present a SOLUTION!

"Okay, look-- I bet if I query for users with this username and this password-- " she typed the query in frantically-- "see, right there, that's Andrew Sanders from Initech AND Andrew Sanders from Fiduciary Interests. They both have the same username and password, so they're both returned. I bet no one ever noticed before. That other guy has no activity on his account. So all we really have to do is put the same ORDER BY into the query itself-- and-- click click, copy paste-- there! Log in and-- there's Mr. Initech. Log out, log in, log out, log in-- I could do this all day and we'll get the same results. Tah-dah!"

She sat back in her chair, grinning at her captive audience. But they weren't grinning back. Instead they were averting their gaze. Everyone-- except for Mr. Chen. There was no doubt he was staring right at her. Glaring.

"Undo that immediately," he said, in an extremely calm voice that did not match his reddening face.

"I, uh-- okay?" she reached for the keyboard.

"BENNY!" Mr. Chen rebuked, "Benny, you undo those changes."

Benny snatched the laptop away, and with a barrage of CTRL-Z wiped away her change.

"But-- that fixes the bug-- "

"No," Mr. Chen stated, "The CORRECT fix is to delete the second record, and inform Fiduciary Interest that their Andrew Sanders may not have access to this system until he changes his password to something unique. Then there is no code change needed."

"But but-- " Alexis stumbled, "It's a documented Microsoft bug, and if-- "

"The code of 'We Sell Bonds!' is properly and carefully written. We do not change it to conform to someone else's mistakes. This complex code change you unilaterally impose is unknown, untested, unreliable and utterly unacceptable. You would determine the course of a financial business based on an outrageous outside case?"

"But, it's happening and causing a problem now and-- "

Benny pointed at his screen, where he'd entered a query with a GROUP BY and HAVING. "Only eight usernames are duplicated like that."

"Vanishingly small," Mr. Chen said, "Benny, print out those users, and then delete them. Nadine, contact those companies and inform them those users will not have access to the website until their information is corrected. With that solved, we can all resume work."

Everyone at the company returned to their tasks. Alexis stared at her screen for a moment, at the ASP management screen that waited for her data to be entered. It didn't implement any change. It didn't introduce any progress. It was just an ASP form for data entry. And that was her job.

She entered her data.

At lunch, when everyone in the company got up to take their break, Mr. Chen motioned for her to sit back down. After the rest of the company filed out, he spoke.

"Alexis, although your ability to interface with the system is adequate, I am afraid your inability to focus on your task is not. I require a worker who is careful and proper, and you are not. Thank you for your time. You will be paid for the remainder of the day, but may go home now. I will see you out."

Alexis erred on the better side of valor, and did not shout in his face that he couldn't fire her, because she was quitting.

Mr. Chen ushered her out the front door, and locked it behind her.

Alexis stood on the busy sidewalk, the lunchtime crowd pushing and shoving their way past her. She looked back on the quaint, brick-faced office building. On the surface, it had been exactly what she'd wanted from her first programming job. She only got one "first" job, and it had ended up being-- that.

She wallowed for a moment, and then pulled herself back together. No. Data entry did not a programming job make. Her real first programming job was still ahead of her, somewhere. And next time, when she thought she'd found it, she would first look-- properly and carefully-- past the quaint surface to what lay beneath.


Planet Linux AustraliaCraige McWhirter: Querying Installed Package Versions Across An Openstack Cloud

AKA: The Joy of juju run

Package upgrades across an OpenStack cloud do not always happen at the same time. In most cases they may happen within an hour or so across your cloud, but for a variety of reasons, some upgrades may be applied inconsistently, delayed or blocked on some servers.

As these packages may be rolling out a much needed patch or perhaps carrying a bug, you may wish to know which services are impacted in fairly short order.

If your OpenStack cloud is running Ubuntu and managed by Juju and MAAS, here's where juju run can come to the rescue.

For example, perhaps there's an update to the Corosync library libcpg4 and you wish to know which of your HA clusters have what version installed.

From your Juju controller, create a list of servers managed by Juju:

Juju 1.x:

$ juju stat --format tabular > jsft.out

Now you could fashion a query like this, utilising juju run:

$ for i in $(egrep -o '[a-z]+-hacluster/[0-9]+' jsft.out | cut -d/ -f1 | sort -u);
do juju run --timeout 30s --service $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.3-1ubuntu4 == ceilometer-hacluster/1
2.3.3-1ubuntu4 == ceilometer-hacluster/0
2.3.3-1ubuntu4 == ceilometer-hacluster/2
2.3.3-1ubuntu4 == cinder-hacluster/0
2.3.3-1ubuntu4 == cinder-hacluster/1
2.3.3-1ubuntu4 == cinder-hacluster/2
2.3.3-1ubuntu4 == glance-hacluster/3
2.3.3-1ubuntu4 == glance-hacluster/4
2.3.3-1ubuntu4 == glance-hacluster/5
2.3.3-1ubuntu4 == keystone-hacluster/1
2.3.3-1ubuntu4 == keystone-hacluster/0
2.3.3-1ubuntu4 == keystone-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/1
2.3.3-1ubuntu4 == mysql-hacluster/2
2.3.3-1ubuntu4 == mysql-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/1
2.3.3-1ubuntu4 == ncc-hacluster/0
2.3.3-1ubuntu4 == ncc-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/2
2.3.3-1ubuntu4 == neutron-hacluster/1
2.3.3-1ubuntu4 == neutron-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/0
2.3.3-1ubuntu4 == osd-hacluster/1
2.3.3-1ubuntu4 == osd-hacluster/2
2.3.3-1ubuntu4 == swift-hacluster/1
2.3.3-1ubuntu4 == swift-hacluster/0
2.3.3-1ubuntu4 == swift-hacluster/2

Juju 2.x:

$ juju status > jsft.out

Now you could fashion a query like this:

$ for i in $(egrep -o 'hacluster-[a-z]+/[0-9]+' jsft.out | cut -d/ -f1 | sort -u);
do juju run --timeout 30s --application $i "dpkg-query -W -f='\${Version}' libcpg4" | \
python -c 'import yaml,sys;print("\n".join(["{} == {}".format(y["Stdout"], y["UnitId"]) for y in yaml.safe_load(sys.stdin)]))';
done

The output returned will look something like this:

2.3.5-3ubuntu2 == hacluster-ceilometer/1
2.3.5-3ubuntu2 == hacluster-ceilometer/0
2.3.5-3ubuntu2 == hacluster-ceilometer/2
2.3.5-3ubuntu2 == hacluster-cinder/1
2.3.5-3ubuntu2 == hacluster-cinder/0
2.3.5-3ubuntu2 == hacluster-cinder/2
2.3.5-3ubuntu2 == hacluster-glance/0
2.3.5-3ubuntu2 == hacluster-glance/1
2.3.5-3ubuntu2 == hacluster-glance/2
2.3.5-3ubuntu2 == hacluster-heat/0
2.3.5-3ubuntu2 == hacluster-heat/1
2.3.5-3ubuntu2 == hacluster-heat/2
2.3.5-3ubuntu2 == hacluster-horizon/0
2.3.5-3ubuntu2 == hacluster-horizon/1
2.3.5-3ubuntu2 == hacluster-horizon/2
2.3.5-3ubuntu2 == hacluster-keystone/0
2.3.5-3ubuntu2 == hacluster-keystone/1
2.3.5-3ubuntu2 == hacluster-keystone/2
2.3.5-3ubuntu2 == hacluster-mysql/0
2.3.5-3ubuntu2 == hacluster-mysql/1
2.3.5-3ubuntu2 == hacluster-mysql/2
2.3.5-3ubuntu2 == hacluster-neutron/0
2.3.5-3ubuntu2 == hacluster-neutron/2
2.3.5-3ubuntu2 == hacluster-neutron/1
2.3.5-3ubuntu2 == hacluster-nova/1
2.3.5-3ubuntu2 == hacluster-nova/2
2.3.5-3ubuntu2 == hacluster-nova/0

You can of course substitute libcpg4 in the above query for any package that you need to check.
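If the inline python -c filter is hard to read at a glance, here is the same parsing step written out longhand. This is just a reformatting of the one-liner above, resting on the same assumption: that juju run emits a YAML list of per-unit result dicts carrying Stdout and UnitId keys.

import sys
import yaml

# Read the YAML that `juju run` writes to stdout: a list of per-unit
# results, each with the command output in Stdout and the unit name
# in UnitId.
for result in yaml.safe_load(sys.stdin):
    print("{} == {}".format(result["Stdout"], result["UnitId"]))

Saved as, say, format-results.py (a name invented here), it would slot in wherever the python -c incantation appears: juju run ... | python format-results.py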

By far my favourite feature of Juju at present, juju run reminds me of knife ssh, which is unsurprisingly one of my favourite features of Chef.

Sociological ImagesSelling the Sport Spectacle

That large (and largely trademarked) sporting event is this weekend. In honor of its reputation for massive advertising, Lisa Wade tipped me off about this interesting content analysis of last year’s event by the Media Education Foundation.

MEF watched last year’s big game and tallied just how much time was devoted to playing and how much was devoted to ads and other branded content during the game. According to their data, the ball was only in play “for a mere 18 minutes and 43 seconds, or roughly 8% of the entire broadcast.”

MEF used a pie chart to illustrate their findings, but readers can get better information from comparing different heights instead of different angles. Using their data, I quickly made this chart to more easily compare branded and non-branded content.

Data Source: Media Education Foundation, 2018

One surprising thing that jumps out of this data is that, for all the hubbub about commercials, far and away the most time is devoted to replays, shots of the crowd, and shots of the field without the ball in play. We know “the big game” is a big sell, but it is interesting to see how the thing it sells the most is the spectacle of the event itself.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.



CryptogramIsraeli Scientists Accidentally Reveal Classified Information

According to this story (non-paywall English version here), Israeli scientists released some information to the public they shouldn't have.

Defense establishment officials are now trying to erase any trace of the secret information from the web, but they have run into difficulties because the information was copied and is found on a number of platforms.

Those officials have managed to ensure that the Haaretz article doesn't have any actual information about the information. I have reason to believe the information is related to Internet security. Does anyone know more?

CryptogramAfter Section 702 Reauthorization

For over a decade, civil libertarians have been fighting government mass surveillance of innocent Americans over the Internet. We've just lost an important battle. On January 18, President Trump signed the renewal of Section 702, and with it, domestic mass surveillance became effectively a permanent part of US law.

Section 702 was initially passed in 2008, as an amendment to the Foreign Intelligence Surveillance Act of 1978. As the title of that law says, it was billed as a way for the NSA to spy on non-Americans located outside the United States. It was supposed to be an efficiency and cost-saving measure: the NSA was already permitted to tap communications cables located outside the country, and it was already permitted to tap communications cables from one foreign country to another that passed through the United States. Section 702 allowed it to tap those cables from inside the United States, where it was easier. It also allowed the NSA to request surveillance data directly from Internet companies under a program called PRISM.

The problem is that this authority also gave the NSA the ability to collect foreign communications and data in a way that inherently and intentionally also swept up Americans' communications as well, without a warrant. Other law enforcement agencies are allowed to ask the NSA to search those communications, give their contents to the FBI and other agencies and then lie about their origins in court.

In 1978, after Watergate had revealed the Nixon administration's abuses of power, we erected a wall between intelligence and law enforcement that prevented precisely this kind of sharing of surveillance data under any authority less restrictive than the Fourth Amendment. Weakening that wall is incredibly dangerous, and the NSA should never have been given this authority in the first place.

Arguably, it never was. The NSA had been doing this type of surveillance illegally for years, something that was first made public in 2006. Section 702 was secretly used as a way to paper over that illegal collection, but nothing in the text of the later amendment gives the NSA this authority. We didn't know that the NSA was using this law as the statutory basis for this surveillance until Edward Snowden showed us in 2013.

Civil libertarians have been battling this law in both Congress and the courts ever since it was proposed, and the NSA's domestic surveillance activities even longer. What this most recent vote tells me is that we've lost that fight.

Section 702 was passed under George W. Bush in 2008, reauthorized under Barack Obama in 2012, and now reauthorized again under Trump. In all three cases, congressional support was bipartisan. It has survived multiple lawsuits by the Electronic Frontier Foundation, the ACLU, and others. It has survived the revelations by Snowden that it was being used far more extensively than Congress or the public believed, and numerous public reports of violations of the law. It has even survived Trump's belief that he was being personally spied on by the intelligence community, as well as any congressional fears that Trump could abuse the authority in the coming years. And though this extension lasts only six years, it's inconceivable to me that it will ever be repealed at this point.

So what do we do? If we can't fight this particular statutory authority, where's the new front on surveillance? There are, it turns out, reasonable modifications that target surveillance more generally, and not in terms of any particular statutory authority. We need to look at US surveillance law more generally.

First, we need to strengthen the minimization procedures to limit incidental collection. Since the Internet was developed, all the world's communications travel around in a single global network. It's impossible to collect only foreign communications, because they're invariably mixed in with domestic communications. This is called "incidental" collection, but that's a misleading name. It's collected knowingly, and searched regularly. The intelligence community needs much stronger restrictions on which American communications channels it can access without a court order, and rules that require it to delete the data if it is inadvertently collected. More importantly, "collection" should be defined as the point at which the NSA takes a copy of the communications, and not later when it searches its databases.

Second, we need to limit how other law enforcement agencies can use incidentally collected information. Today, those agencies can query a database of incidental collection on Americans. The NSA can legally pass information to those other agencies. This has to stop. Data collected by the NSA under its foreign surveillance authority should not be used as a vehicle for domestic surveillance.

The most recent reauthorization modified this lightly, forcing the FBI to obtain a court order when querying the 702 data for a criminal investigation. There are still exceptions and loopholes, though.

Third, we need to end what's called "parallel construction." Today, when a law enforcement agency uses evidence found in this NSA database to arrest someone, it doesn't have to disclose that fact in court. It can reconstruct the evidence in some other manner once it knows about it, and then pretend it learned of it that way. This right to lie to the judge and the defense is corrosive to liberty, and it must end.

Pressure to reform the NSA will probably first come from Europe. Already, European Union courts have pointed to warrantless NSA surveillance as a reason to keep Europeans' data out of US hands. Right now, there is a fragile agreement between the EU and the United States -- called "Privacy Shield" -- that requires Americans to maintain certain safeguards for international data flows. NSA surveillance goes against that, and it's only a matter of time before EU courts start ruling this way. That'll have significant effects on both government and corporate surveillance of Europeans and, by extension, the entire world.

Further pressure will come from the increased surveillance coming from the Internet of Things. When your home, car, and body are awash in sensors, privacy from both governments and corporations will become increasingly important. Sooner or later, society will reach a tipping point where it's all too much. When that happens, we're going to see significant pushback against surveillance of all kinds. That's when we'll get new laws that revise all government authorities in this area: a clean sweep for a new world, one with new norms and new fears.

It's possible that a federal court will rule on Section 702. Although there have been many lawsuits challenging the legality of what the NSA is doing and the constitutionality of the 702 program, no court has ever ruled on those questions. The Bush and Obama administrations successfully argued that defendants don't have legal standing to sue. That is, they have no right to sue because they don't know they're being targeted. If any of the lawsuits can get past that, things might change dramatically.

Meanwhile, much of this is the responsibility of the tech sector. This problem exists primarily because Internet companies collect and retain so much personal data and allow it to be sent across the network with minimal security. Since the government has abdicated its responsibility to protect our privacy and security, these companies need to step up: Minimize data collection. Don't save data longer than absolutely necessary. Encrypt what has to be saved. Well-designed Internet services will safeguard users, regardless of government surveillance authority.

For the rest of us concerned about this, it's important not to give up hope. Everything we do to keep the issue in the public eye -- and not just when the authority comes up for reauthorization again in 2024 -- hastens the day when we will reaffirm our rights to privacy in the digital age.

This essay previously appeared in the Washington Post.

Worse Than FailureCodeSOD: The Pythonic Wheel Reinvention

Starting with Java, a robust built-in class library has been practically a default feature of modern programming languages. Why struggle with OS-specific behaviors, write your own code, or manage a third-party library just to handle problems like accessing files or network resources?

One common class of WTF is the developer who steadfastly refuses to use it. They inevitably reinvent the wheel as a triangle with no axle. Another is the developer who is simply ignorant of what the language offers, and is too lazy to Google it. They don’t know what a wheel is, so they invent a coffee-table instead.

My personal favorite, though, is the rare person who knows about the class library, that uses the class library… to reinvent methods which exist in the class library. They’ve seen a wheel, they know what a wheel is for, and they still insist on inventing a coffee-table.

Anneke sends us one such method.

The method in question is called thus:

if output_exists("/some/path.dat"):

I want to stress, this is the only use of this method. The purpose is to check if a file containing output from a different process exists. If you’re familiar with Python, you might be thinking, “Wait, isn’t that just os.path.exists?”

Of course not.

def output_exists(full_path):
    path = os.path.dirname(full_path) + "/*"
    filename = '%s' % filename2
    files = glob.glob(path)
    back = []
    for f in re.findall(filename, " ".join(files)):
        back.append(os.path.join(os.path.dirname(full_path), f))
    return back

Now, in general, most of your directory-tree manipulating functions live in the os.path package, and you can see os.path.dirname used. That splits off the directory-only part. Then they throw a glob on it. I could, at this point, bring up the importance of os.path.join for that sort of operation, but why bother?

They knew enough to use os.path.dirname to get the directory portion of the path, but not os.path.split which can pick off the file portion of the path. The “Pythonic” way of writing that line would be (path, filename) = os.path.split(full_path). Wait, I misspoke: the “Pythonic” way would be to not write any part of this method.
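For anyone who hasn't met it, os.path.split does exactly the split this function needs, shown here with the path from the call site above:

>>> import os.path
>>> os.path.split("/some/path.dat")
('/some', 'path.dat')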

'%s' % filename2 is Python's version of printf, and I cannot for the life of me guess why it's being done here. A misguided attempt at doing an strcpy-type operation? (Never mind that filename2 isn't defined anywhere in this function.)

glob.glob isn’t just the best method name in anything, it also does a filesystem search using globs, so files contains a list of all files in that directory.

" ".join(files) is the Python idiom for joining an array, so we turn the list of files into an array and search it using re.findall… which uses a regex for searching. Note that they’re using the filename for the regex, and they haven’t placed any guards around it, so if the input file is “foo.c”, and the directory contains “foo.cpp”, this will think that’s fine.

And then last but not least, it returns the array of matches, relying on the fact that an empty array in Python is false.

To write this code required at least some familiarity with three different major packages in the class library-- os.path, glob, and re-- but just one ounce more familiarity with os.path would have replaced the entire thing with a simple call to os.path.exists. Which is what Anneke did.
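For the record, a sketch of that fix, using the call site shown earlier:

import os.path

# The entire reinvented method collapses to one standard-library call:
if os.path.exists("/some/path.dat"):
    pass  # the output file is present; carry on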


Sam VargheseBlack money continues to pour into IPL

A little more than a year ago, Indian Prime Minister Narendra Modi announced that 500 and 1000 rupee notes would be removed from circulation as a step to flushing out all the black money in the country.

He made the announcement on TV in prime time on 8 November 2016 and gave people four hours' time to be ready for the change!

But judging by the amounts which cricketers were bought for in the Indian Premier League Twenty20 auction last week, there is more black money than ever in the country.

Else, sums like US$1.5 million would not be available for the Kolkata Knight Riders to buy a cricketer like Mitchell Starc. This is black money being flushed out and made ready to be used as legal tender, the main reason why the Indian Government turns a blind eye to the process.

Former Indian spin king Bishen Singh Bedi accused the IPL of being a centre for money-laundering and he may not be far off the mark.

A little history will help explain India’s black money problem: Back in 1967, the then Indian finance minister Morarji Desai had the brilliant idea of raising taxes well beyond their existing level; the maximum marginal tax rate was raised as high as 97.75 percent.

Desai, who was better known for drinking his own urine, reasoned that people would pay up and that India’s budgetary problems would become more manageable.

Instead, the reverse happened. India has always had a problem with undeclared wealth, a kind of parallel economy. The amount of black money increased by leaps and bounds after Desai’s ridiculous laws were promulgated.

Seven years later, in 1974, the new finance minister Y.B. Chavan brought down rates by some 20 percentage points, but by then the damage had been done. The amount of black money in India today is estimated to be anything from 30 to 100 times the national budget.

The IPL attracts the best cricketers from around the world because of the money on offer. The amounts that are bid are paid for three years, and the player has to play for two months every year, with some additional promotional activity also involved.

The competition is in its 11th season and it has been dogged by controversy; in 2015, two teams were suspended for match-fixing and betting, with the incidents taking place in 2012 and 2013.

So, despite all the platitudes from Modi, don’t expect anything to change in India as far as black money is concerned. If anything, the amount will increase now that people know that these kinds of measures will be announced at the drop of a hat. They will be ready the next time Modi or anyone else comes up with some crazy initiative like this.


Krebs on SecurityDrugs Tripped Up Suspects In First Known ATM “Jackpotting” Attacks in the US

On Jan. 27, 2018, KrebsOnSecurity published what this author thought was a scoop about the first known instance of U.S. ATMs being hit with “jackpotting” attacks, a crime in which thieves deploy malware that forces cash machines to spit out money like a loose Las Vegas slot machine. As it happens, the first known jackpotting attacks in the United States were reported in November 2017 by local media on the west coast, although the reporters in those cases seem to have completely buried the lede.

Isaac Rafael Jorge Romero, Jose Alejandro Osorio Echegaray, and Elio Moren Gozalez have been charged with carrying out ATM “jackpotting” attacks that force ATMs to spit out cash like a Las Vegas casino.

On Nov. 20, 2017, Oil City News — a community publication in Wyoming — reported on the arrest of three Venezuelan nationals who were busted on charges of marijuana possession after being stopped by police.

After pulling over the van the men were driving, police on the scene reportedly detected the unmistakable aroma of pot smoke wafting from the vehicle. When the cops searched the van, they discovered small amounts of pot, THC edible gummy candies, and several backpacks full of cash.

FBI agents had already been looking for the men, who were allegedly caught on surveillance footage tinkering with cash machines in Wyoming, Colorado and Utah, shortly before those ATMs were relieved of tens of thousands of dollars.

According to a complaint filed in the U.S. District Court for the District of Colorado, the men first hit an ATM at a credit union in Parker, Colo. on October 10, 2017. The robbery occurred after business hours, but the cash machine in question was located in a vestibule available to customers 24/7.

The complaint says surveillance videos showed the men opening the top of the ATM, which housed the computer and hard drive for the ATM — but not the secured vault where the cash was stored. The video showed the subjects reaching into the ATM, and then closing it and exiting the vestibule. On the video, one of the subjects appears to be carrying an object consistent with the size and appearance of the hard drive from the ATM.

Approximately ten minutes later, the subjects returned and opened up the cash machine again. Then they closed the top of the ATM and appeared to wait while the ATM computer restarted. After that, both subjects could be seen on the video using their mobile phones. One of the subjects reportedly appeared to be holding a small wireless mini-computer keyboard.

Soon after, the ATM began spitting out cash, netting the thieves more than $24,000. When they were done, the suspects allegedly retrieved their equipment from the ATM and left.

Forensic analysis of the ATM hard drive determined that the thieves installed the Ploutus.D malware on the cash machine’s hard drive. Ploutus.D is an advanced malware strain that lets crooks interact directly with the ATM’s computer and force it to dispense money.

“Often the malware requires entering of codes to dispense cash,” reads an FBI affidavit (PDF). “These codes can be obtained by a third party, not at the location, who then provides the codes to the subjects at the ATM. This allows the third party to know how much cash is dispensed from the ATM, preventing those who are physically at the ATM from keeping cash for themselves instead of providing it to the criminal organization. The use of mobile phones is often used to obtain these dispensing codes.”

In November 2017, similar ATM jackpotting attacks were discovered in the Saint George, Utah area. Surveillance footage from those ATMs showed the same subjects were at work.

The FBI’s investigation determined that the vehicles used by the suspects in the Utah thefts were rented by Venezuelan nationals.

On Nov. 16, Isaac Rafael Jorge Romero, 29, Jose Alejandro Osorio Echegaray, 36, and two other Venezuelan nationals were detained in Teton County, Wyo. for drug possession. Two other suspects in the Utah theft were arrested in San Diego when they tried to return a rental car that was caught on surveillance camera at one of the hacked ATMs.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

All of the known ATM jackpotting attacks in the U.S. so far appear to be targeting a handful of older-model cash machines manufactured by ATM giant Diebold Nixdorf. However, security firm FireEye notes that — with minor modifications to the malware code — Ploutus.D could be used to target software that runs on ATMs from 40 different vendors in 80 countries.

Diebold’s advisory on hardening ATMs against jackpotting attacks is available here (PDF).

Jackpotting is not a new crime: Indeed, it has been a problem for ATM operators in most of the world for many years now. But for some reason, jackpotting attacks have until recently eluded U.S. ATM operators.

Jackpotting has been a real threat to ATM owners and manufacturers since at least 2010, when the late security researcher Barnaby Michael Douglas Jack (known to most as simply “Barnaby Jack”) demonstrated the attack to a cheering audience at the Black Hat security conference. A recording of that presentation is below.

Cory DoctorowPodcast: The Man Who Sold the Moon, Part 03

Here’s part three of my reading (MP3) of The Man Who Sold the Moon, my award-winning novella first published in 2015’s Hieroglyph: Stories and Visions for a Better Future, edited by Ed Finn and Kathryn Cramer. It’s my Burning Man/maker/first days of a better nation story and was a kind of practice run for my 2017 novel Walkaway.


CryptogramSubway Elevators and Movie-Plot Threats

Local residents are opposing adding an elevator to a subway station because terrorists might use it to detonate a bomb. No, really. There's no actual threat analysis, only fear:

"The idea that people can then ride in on the subway with a bomb or whatever and come straight up in an elevator is awful to me," said Claudia Ward, who lives in 15 Broad Street and was among a group of neighbors who denounced the plan at a recent meeting of the local community board. "It's too easy for someone to slip through. And I just don't want my family and my neighbors to be the collateral on that."


Local residents plan to continue to fight, said Ms. Gerstman, noting that her building's board decided against putting decorative planters at the building's entrance over fears that shards could injure people in the event of a blast.

"Knowing that, and then seeing the proposal for giant glass structures in front of my building ­- ding ding ding! -- what does a giant glass structure become in the event of an explosion?" she said.

In 2005, I coined the term "movie-plot threat" to denote a threat scenario that caused undue fear solely because of its specificity. Longtime readers of this blog will remember my annual Movie-Plot Threat Contests. I ended the contest in 2015 because I thought the meme had played itself out. Clearly there's more work to be done.

Worse Than FailureRepresentative Line: As the Clock Terns

Hydranix” can’t provide much detail about today’s code, because they’re under a “strict NDA”. All they could tell us was that it’s C++, and it’s part of a “mission critical” front end package. Honestly, I think this line speaks for itself:

(mil == 999 ? (!(mil = 0) && (sec == 59 ? 
  (!(sec = 0) && (min == 59 ? 
    (!(min = 0) && (++hou)) : ++min)) : ++sec)) : ++mil);

“Hydranix” suspects that this powers some sort of stopwatch, but they’re not really certain what its role is.
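For anyone who would rather not parse nested ternaries, here is what the line appears to compute, unrolled into plain Python (the variable roles are inferred from the names, and C++'s min is renamed to avoid shadowing Python's builtin):

def tick(hou, minute, sec, mil):
    # Roll milliseconds into seconds, seconds into minutes, and
    # minutes into hours -- advancing one millisecond per call.
    if mil == 999:
        mil = 0
        if sec == 59:
            sec = 0
            if minute == 59:
                minute = 0
                hou += 1
            else:
                minute += 1
        else:
            sec += 1
    else:
        mil += 1
    return hou, minute, sec, mil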



CryptogramLocating Secret Military Bases via Fitness Data

In November, the company Strava released an anonymized data-visualization map showing all the fitness activity by everyone using the app.

Over this weekend, someone realized that it could be used to locate secret military bases: just look for repeated fitness activity in the middle of nowhere.

News article.

TEDLee Cronin’s ongoing quest for print-your-own medicine, and more news from TED speakers

Behold, your recap of TED-related news:

Print your own pharmaceutical factory. As part of an ongoing quest to make pharmaceuticals easier to manufacture, chemist Lee Cronin and his team at the University of Glasgow have designed a way to 3D-print a portable “factory” for the complicated and multi-step chemical reactions needed to create useful drugs. It’s a grouping of vessels about the size of water bottles; each vessel houses a different type of chemical reaction. Pharmacists or doctors could create specific drugs by adding the right ingredients to each vessel from pre-measured cartridges, following a simple step-by-step recipe. The process could help replace or supplement large chemical factories, and bring helpful drugs to new markets. (Watch Cronin’s TED Talk)

How Amit Sood’s TED Talk spawned the Google Art selfie craze. While Amit Sood was preparing for his 2016 TED Talk about Google’s Cultural Institute and Art Project, his co-presenter Cyril Diagne, a digital interaction artist, suggested that he include a prototype of a project they’d been playing with, one that matched selfies to famous pieces of art. Amit added the prototype to his talk, in which he matched live video of Cyril’s face to classic artworks — and when the TED audience saw it, they broke out in spontaneous applause. Inspired, Amit decided to develop the feature and add it to the Google Arts & Culture app. The new feature launched in December 2017, and it went viral in January 2018. Just like the live TED audience before it, online users loved it so much that the app shot to the number one spot in both the Android and iOS app stores. (Watch Sood’s TED Talk)

A lyrical film about the very real danger of climate change. Funded by a Kickstarter campaign by director and producer Matthieu Rytz, the documentary Anote’s Ark focuses on the clear and present danger of global warming to the Pacific Island nation of Kiribati (population: 100,000). As sea levels rise, the small, low-lying islands that make up Kiribati will soon be entirely covered by the ocean, displacing the population and their culture. Former president Anote Tong, who’s long been fighting global warming to save his country and his constituents, provides one of two central stories within the documentary. The film (here’s the trailer) premiered at the 2018 Sundance Festival in late January, and will be available more widely soon; follow on Facebook for news. (Watch Tong’s TED Talk)

An animated series about global challenges. Sometimes the best way to understand a big idea is on a whiteboard. Throughout 2018, Rabbi Jonathan Sacks and his team are producing a six-part whiteboard-animation series that explains key themes in his theology and philosophy around contemporary global issues. The first video, called “The Politics of Hope,” examines political strife in the West, and ways to change the culture from the politics of anger to the politics of hope. Future whiteboard videos will delve into integrated diversity, the relationship between religion and science, the dignity of difference, confronting religious violence, and the ethics of responsibility. (Watch Rabbi Sacks’ TED Talk)

Nobody wins the Google Lunar X Prize competition :( Launched in 2007, the Google Lunar X Prize competition challenged entrepreneurs and engineers to design low-cost ways to explore space. The premise, if not the work itself, was simple — the first privately funded team to get a robotic spacecraft to the moon, send high-resolution photos and video back to Earth, and move the spacecraft 500 meters would win a $20 million prize, from a total prize fund of $30 million. The deadline was set for 2012, and was pushed back four times; the latest deadline was set to be March 31, 2018. On January 23, X Prize founder and executive chair Peter Diamandis and CEO Marcus Shingles announced that the competition was over: It was clear none of the five remaining teams stood a chance of launching by March 31. Of course, the teams may continue to compete without the incentive of this cash prize, and some plan to. (Watch Diamandis’ TED Talk)

15 photos of America’s journey towards inclusivity. Art historian Sarah Lewis took control of the New Yorker photo team’s Instagram last week, sharing pictures that answered the timely question: “What are 15 images that chronicle America’s journey toward a more inclusive level of citizenship?” Among the iconic images from Gordon Parks and Carrie Mae Weems, Lewis also includes an image of her grandfather, who was expelled from the eleventh grade in 1926 for asking why history books ignored his own history. In the caption, she tells how he became a jazz musician and an artist, “inserting images of African Americans in scenes where he thought they should—and knew they did—exist.” All the photos are ones she uses in her “Vision and Justice” course at Harvard, which focuses on art, race and justice. (Watch Lewis’ TED Talk)

Have a news item to share? Write us at and you may see it included in this weekly round-up.

Krebs on SecurityFile Your Taxes Before Scammers Do It For You

Today, Jan. 29, is officially the first day of the 2018 tax-filing season, also known as the day fraudsters start requesting phony tax refunds in the names of identity theft victims. Want to minimize the chances of getting hit by tax refund fraud this year? File your taxes before the bad guys can!

Tax refund fraud affects hundreds of thousands, if not millions, of U.S. citizens annually. Victims usually first learn of the crime after having their returns rejected because scammers beat them to it. Even those who are not required to file a return can be victims of refund fraud, as can those who are not actually due a refund from the IRS.

According to the IRS, consumer complaints over tax refund fraud have been declining steadily over the years as the IRS and states enact more stringent measures for screening potentially fraudulent applications.

If you file your taxes electronically and the return is rejected, and if you were the victim of identity theft (e.g., if your Social Security number and other information were leaked in the Equifax breach last year), you should submit an Identity Theft Affidavit (Form 14039). The IRS advises that if you suspect you are a victim of identity theft, continue to pay your taxes and file your tax return, even if you must do so by paper.

If the IRS believes you were likely the victim of tax refund fraud in the previous tax year, it will likely send you a special filing PIN that needs to be entered along with this year’s return before the filing will be accepted by the IRS electronically. This year marks the third out of the last five that I’ve received one of these PINs from the IRS.

Of course, filing your taxes early to beat the fraudsters requires one to have all of the tax forms needed to do so. For a sole proprietor like me, this is a great challenge because many companies take their sweet time sending out 1099 forms and such (even though they’re required to do so by Jan. 31).

A great many companies are now turning to online services to deliver tax forms to contractors, employees and others. For example, I have received several notices via email regarding the availability of 1099 forms online; most say they are sending the forms in snail mail, but that if I need them sooner I can get them online if I just create an account or enter some personal information at some third-party site.

Having seen how so many of these sites handle personal information, I’m not terribly interested in volunteering more of it. According to Bankrate, taxpayers can still file their returns even if they don’t yet have all of their 1099s — as long as you have the correct information about how much you earned.

“Unlike a W-2, you generally don’t have to attach 1099s to your tax return,” Bankrate explains. “They are just issued so you’ll know how much to report, with copies going to the IRS so return processors can double-check your entries. As long as you have the correct information, you can put it on your tax form without having the statement in hand.”

In past tax years, identity thieves have used data gleaned from a variety of third-party and government Web sites to file phony tax refund requests — including from the IRS itself! One of their perennial favorites was the IRS’s Get Transcript service, which previously had fairly lax authentication measures.

After hundreds of thousands of taxpayers had their tax data accessed through the online tool, the IRS took it offline for a bit and then brought it back online, this time requiring a host of new data elements.

But many of those elements — such as your personal account number from a credit card, mortgage, home equity loan, home equity line of credit or car loan — can be gathered from multiple locations online with almost no authentication. For example, earlier this week I heard from Jason, a longtime reader who was shocked at how little information was required to get a copy of his 2017 mortgage interest statement from his former lender.

“I called our old mortgage company (Chase) to retrieve our 1098 from an old loan today,” Jason wrote. “After I provided the last four digits of the social security # to their IVR [interactive voice response system] that was enough to validate me to request a fax of the tax form, which would have included sensitive information. I asked for a supervisor who explained to me that it was sufficient to check the SSN last 4 + the caller id phone number to validate the account.”

If you’ve taken my advice and placed a security freeze on your credit file with the major credit bureaus, you don’t have to worry about thieves somehow bypassing the security on the IRS’s Get Transcript site. That’s because the IRS uses Experian to ask a series of knowledge-based authentication questions before an online account can even be created at the IRS’s site to access the transcript.

Now, anyone who reads this site regularly should know I’ve been highly critical of these KBA questions as a means of authentication. But the upshot here is that if you have a freeze in place at Experian (and I sincerely hope you do), Experian won’t even be able to ask those questions. Thus, thieves should not be able to create an account in your name at the IRS’s site (unless of course thieves manage to successfully request your freeze PIN from Experian’s site, in which case all bets are off).

While you’re getting your taxes in order this filing season, be on guard against fake emails or Web sites that may try to phish your personal or tax data. The IRS stresses that it will never initiate contact with taxpayers about a bill or refund. If you receive a phishing email that spoofs the IRS, consider forwarding it to

Finally, tax season also is when the phone-based tax scams kick into high gear, with fraudsters threatening taxpayers with arrest, deportation and other penalties if they don’t make an immediate payment over the phone. If you care for older parents or relatives, this may be a good time to remind them about these and other phone-based scams.

CryptogramEstimating the Cost of Internet Insecurity

It's really hard to estimate the cost of an insecure Internet. Studies are all over the map. A methodical study by RAND is the best work I've seen at trying to put a number on this. The results are, well, all over the map:

"Estimating the Global Cost of Cyber Risk: Methodology and Examples":

Abstract: There is marked variability from study to study in the estimated direct and systemic costs of cyber incidents, which is further complicated by the considerable variation in cyber risk in different countries and industry sectors. This report shares a transparent and adaptable methodology for estimating present and future global costs of cyber risk that acknowledges the considerable uncertainty in the frequencies and costs of cyber incidents. Specifically, this methodology (1) identifies the value at risk by country and industry sector; (2) computes direct costs by considering multiple financial exposures for each industry sector and the fraction of each exposure that is potentially at risk to cyber incidents; and (3) computes the systemic costs of cyber risk between industry sectors using Organisation for Economic Co-operation and Development input, output, and value-added data across sectors in more than 60 countries. The report has a companion Excel-based modeling and simulation platform that allows users to alter assumptions and investigate a wide variety of research questions. The authors used a literature review and data to create multiple sample sets of parameters. They then ran a set of case studies to show the model's functionality and to compare the results against those in the existing literature. The resulting values are highly sensitive to input parameters; for instance, the global cost of cyber crime has direct gross domestic product (GDP) costs of $275 billion to $6.6 trillion and total GDP costs (direct plus systemic) of $799 billion to $22.5 trillion (1.1 to 32.4 percent of GDP).
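
Reading steps (2) and (3) as formulas (the notation here is my own shorthand for the abstract's description, not the report's): for each country $c$ and industry sector $s$,

    $\mathrm{Direct}_{c,s} = \sum_{e} E_{c,s,e} \,\rho_{s,e}$

where $E_{c,s,e}$ is the value of financial exposure $e$ and $\rho_{s,e}$ is the fraction of that exposure at risk to cyber incidents; total cost is then direct plus systemic, with the systemic term propagating direct losses between sectors via the OECD input-output and value-added tables. The enormous spread in the headline results is a direct consequence of how wide the plausible ranges for those $\rho$ parameters are.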

Here's RAND's risk calculator, if you want to play with the parameters yourself.

Note: I was an advisor to the project.

Separately, Symantec has published a new cybercrime report with their own statistics.

Worse Than FailureRepresentative Line: The Mystery of the SmallInt

PT didn’t provide very much information about today’s Representative Line.

Clearly, bits and bytes were not something this SQL stored procedure’s author ever studied. Additionally, source control versions are managed with comments. OVER 90 Thousand!

        --Declare @PrevNumber smallint
                --2015/11/18 - SMALLINT causes overflow error when it goes over 32000 something 
                -- - this sp errors but can only see that when
                -- code is run in SQL Query analyzer 
                -- - we should also check if it goes higher than 99999
        DECLARE @PrevNumber int         --2015/11/18
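
For the record, SQL Server’s SMALLINT is a signed 16-bit integer, so the “32000 something” in that comment is exactly 32,767, and T-SQL raises an arithmetic overflow error on the next increment rather than wrapping. A minimal C sketch of the same 16-bit ceiling (the variable name echoes the stored procedure; everything else is illustrative):

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        int16_t prev_number = INT16_MAX;   /* 32767: SMALLINT's ceiling */
        printf("max smallint: %d\n", prev_number);

        /* One past the ceiling no longer fits in 16 bits. T-SQL raises
           "Arithmetic overflow error"; C converts the out-of-range value
           back to int16_t, wrapping to -32768 on common compilers. */
        prev_number = (int16_t)(prev_number + 1);
        printf("after overflow: %d\n", prev_number);

        /* INT is 32 bits in both worlds: good to 2,147,483,647, far past
           the five-digit 99999 ceiling the comment frets about. */
        printf("max int: %d\n", INT32_MAX);
        return 0;
    }

The stored procedure’s fix, bumping to INT, buys headroom, but as its own comment admits, it does nothing about the five-digit limit lurking somewhere downstream.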

Fortunately, I am Remy Poirot, the world’s greatest code detective. To your untrained eyes, you simply see the kind of comment which would annoy you. But I, an expert, with experience of the worst sorts of code man may imagine, can piece together the entire lineage of this code.

Let us begin with the facts: no source control is in use. Version history is managed in the comments. From this, we can deduce a number of things: the database where this code runs is also where it is stored. Changes are almost certainly made directly in production.

A statue of Hercule Poirot in Belgium

And when those changes fail, they may only be detected when the “code is run in SQL Query Analyzer”. This ties in with the “changes in production/no source control” deduction, but it also tells us that it is possible to run this code, have it fail, and have no one notice. This means this code must be part of an unattended process, a batch job of some kind. Even an overflow error vanishes into the ether.

This code also, according to the comments, should “also check if [@PrevNumber] goes higher than 99999”. This is our most vital clue, for it tells us that the content of the value has a maximum width: more than 5 characters to represent it is a problem. This obviously means that the target system is a mainframe with a flat-file storage model.

Already, from one line and a handful of comments, we’ve learned a great deal about this code, but one need not be the world’s greatest code detective to figure out this much. Let’s see what else we can tease out.

@PrevNumber must tie to some ID in the database, likely the “last processed ID” from the previous run of the batch job. The confusion over smallint and the need to enforce a five-digit limit imply that this database isn’t actually in control of its data. Either the data comes from a front-end with no validation (certainly possible), or it comes from an external system. But a value greater than 99999 isn’t invalid in the database; otherwise they could enforce that restriction via a constraint. This means the database holds data coming from and going to different places: it’s a “business integration” database.

With these clues, we can assemble the final picture of the crime scene.

In a dark corner of a datacenter are the components of a mainframe system. The original install was likely in the 70s, and while it saw updates and maintenance for the next twenty years, starting in the 90s it was put on life support. “It’s going away, any day now…” Unfortunately, huge swathes of the business depended on it and no one is entirely certain what it does. They can’t simply port business processes to a new system, because no one knows what the business processes are. They’re encoded as millions of lines of code in a dialect of COBOL no one understands.

A conga-line of managers has passed through the organization, trying to kill the mainframe. Over time, they’ve brought in ERP systems from Oracle or SAP. Maybe they’ve taken some home-grown ERP written by users in Access and tried to extend it to an enterprise solution. Worse, over the years, acquisitions come along, bringing new systems from other vendors under the umbrella of one corporate IT group.

All these silos need to talk to each other. They also have to be integrated into the suite of home-grown reporting tools, automation, and Excel spreadsheets that run the business. Shipping can only pull from the mainframe, while the P/L dashboard the executives use can only pull from SAP. At first, it’s a zoo of ETL jobs, until someone gets the bright idea to centralize it.

They launch a project, and give it a slick three-letter-acronym, like “QPR” or “LMP”. Or, since there are only so many TLAs one can invent, they probably reuse an existing one, like “GPS” or “FBI”. The goal: have a central datastore that integrates data from every silo in the organization, and then the silos can pull back out the data they want for their processes. This project has a multi-million dollar budget, and has exceeded that budget twice over, and is not yet finished.

The code PT supplied for us is but one slice of that architecture. It’s pulling data from one of those centralized tables in the business integration database, and massaging it into a format it can pass off to the mainframe. Like a murder reconstructed from just a bit of fingernail, we’ve laid bare the entire crime with our powers of observation, our experience, and our knowledge of bad code.


TEDTalks from TEDNYC Idea Search 2018

Cloe Shasha and Kelly Stoetzel hosted the fast-paced TED Idea Search 2018 program on January 24, 2018 at TED HQ in New York, NY. (Photo: Ryan Lash / TED)

TED is always looking for new voices with fresh ideas — and earlier this winter, we opened a challenge to the world: make a one-minute audition video that makes the case for your TED Talk. More than 1,200 people applied to be a part of the Idea Search program this year, and on Wednesday night at our New York headquarters, 13 audition finalists shared their ideas in a fast-paced program. Here are voices you may not have heard before — but that you’ll want to hear more from soon.

Ruth Morgan shares her work preventing the misinterpretation of forensic evidence. (Photo: Ryan Lash / TED)

Forensic evidence isn’t as clear-cut as you think. For years, forensic science research has focused on making it easier and more accurate to figure out what a trace — such as DNA or a jacket fiber — is and who it came from, but that doesn’t help us interpret what the evidence means. “What we need to know if we find your DNA on a weapon or gunshot residue on you is how did it get there and when did it get there,” says forensic scientist Ruth Morgan. These gaps in understanding have real consequences: forensic evidence is often misinterpreted and used to convict people of crimes they didn’t commit. Morgan and her team are committed to finding ways to answer the why and how, such as determining whether it’s possible to get trace evidence on you during normal daily activities (it is) and how trace DNA can be transferred. “We need to dramatically reduce the chance of forensic evidence being misinterpreted,” she says. “We need that to happen so that you never have to be that innocent person in the dock.”

The intersection of our senses. An experienced composer and filmmaker, Philip Clemo has been on a quest to determine if people can experience imagery with the same depth that they experience music. Research has shown that sound can impact how we perceive visuals, but can visuals have a similarly profound impact? In his live performances, Clemo and his band use abstract imagery in addition to abstract music to create a visual experience for the audience. He hopes that people can have these same experiences in their everyday lives by quieting their minds to fully experience the “visual music” of our surrounding environment — and improve our connection to our world.

Reading the Bible … without omission. At a time when he was a recovering fundamentalist and longtime atheist, David Ellis Dickerson received a job offer as a quiz question writer and Bible fact-checker for the game show The American Bible Challenge. Among his responsibilities: coming up with questions that conveniently ignored the sections of the Bible that mention slavery, concubines and incest. The omission expectations he was faced with made him realize that evangelicals read the Bible in the same way they watch reality television: “with a willing, even loving, suspension of disbelief.” Now, he invites devout Christians to read the Bible in its most unedited form, to recognize its internal contradictions and to grapple with its imperfections.

Danielle Bilot suggests three simple but productive actions we can take to help bees: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food. (Photo: Ryan Lash / TED)

To bee or not to bee? The most famous bee species of recent memory has overwhelmingly been the honey bee. For years, their concerning disappearance has made international news and been the center of research. Environmental designer Danielle Bilot believes that the honey bee should share the spotlight with the 3,600 other species that pollinate much of the food we eat every day in the US, such as blueberries, tomatoes and eggplants. Honey bees, she says, aren’t even native to North America (they were originally brought over from Europe) and therefore have a tougher time successfully pollinating these and many other indigenous crops. Regardless of species, human activity is harming them, and Bilot suggests three simple but productive actions we can take to make their lives easier and revive their populations: plant flowers that bloom year-round, leave bare areas of soil for bees to nest in, and plant flower patches so that bees can more easily locate food.

What if technology protected your hearing? Computers that talk to us and voice-enabled technology like Siri, Alexa and Google Home are changing the importance of hearing, says ear surgeon Matthew Bromwich. And with more than 500 million people suffering from disabling hearing loss globally, the importance of democratizing hearing health care is more relevant than ever. “How do we use our technology to improve the clarity of our communication?” Bromwich asks. He and his team have created a hearing testing technology called “SHOEBOX,” which gives hearing health care access to more than 70,000 people in 32 countries. He proposes using technology to help prevent this disability, amplify sound clarity, and paint a new future for hearing.

Welcome to the 2025 Technology Design Awards, with your host, Tom Wujec. Rocking a glittery dinner jacket, design guru Tom Wujec presents a science-fiction-y awards show from eight years into the future, honoring the designs that made the biggest impact in technology, consumer goods and transportation — capped off by a grand impact award chosen live onstage by an AI. While the designs seem fictional — a hacked auto, a self-rising house, a cutting-edge prosthetic — the kicker to Tom’s future award show is that everything he shows is in development right now.

In collaboration with the MIT Media Lab, Nilay Kulkarni used his skills as a self-taught programmer to build a simple tech solution to prevent human stampedes during the Kumbh Mela, one of the world’s largest crowd gatherings, in India. (Photo: Ryan Lash / TED)

A 15-year-old solves a deadly problem with a low-cost device. Every four years, more than 30 million Hindus gather for the Kumbh Mela, the world’s largest religious gathering, in order to wash themselves of their sins. Once every 12 years, it takes place in Nashik, a city in western India that is ordinarily home to 1.5 million residents. With such a massive crowd in a small space, stampedes inevitably happen, and in 2003, 39 people were killed during the festival in Nashik. In 2014, then-15-year-old Nilay Kulkarni decided he wanted to find a solution. He recalls: “It seemed like a mission impossible, a dream too big.” After much trial and error, he and collaborators at the MIT Media Lab came up with a cheap, portable, effective stampede stopper called “Ashioto” (meaning footstep in Japanese): a pressure-sensor-equipped mat which counts the number of people stepping on it and sends the data over the internet to authorities so they can monitor the flow of people in real time. Five mats were deployed at the 2015 Nashik Kumbh Mela, and thanks to their use and other innovations, no stampedes occurred for the first time ever there. Much of the code is now available to the public to use for free, and Kulkarni is trying to improve the device. His dream: for Ashiotos to be used at all large gatherings, like the other Kumbh Melas, the Hajj and even at major concerts and sports events.

A new way to think about health care. Though doctors and nurses dominate people’s minds when it comes to health care, Tamekia MizLadi Smith is more interested in the role of non-clinical staff in creating effective and positive patient experiences. As an educator and spoken word poet, Smith uses the acronym “G.R.A.C.E.D.” to empower non-clinical staff to be accountable for data collection and to provide compassionate care to patients. Under the belief that compassionate care doesn’t begin and end with just clinicians, Smith asks that desk specialists, parking attendants and other non-clinical staff be trained and treated as integral parts of well-functioning health care systems.

Mad? Try some humor. “The world seems humor-impaired,” says comedian Judy Carter. “It just seems like everyone is going around angry: going, ‘Politics is making me so mad; my boss is just making me so mad.'” In a sharp, zippy talk, Carter makes the case that no one can actually make you mad — you always have a choice of how to respond — and that anger actually might be the wrong way to react. Instead, she suggests trying humor. “Comedy rips negativity to shreds,” she says.

Want a happier, healthier life? Look to your friends. In the relationship pantheon, friends tend to place third in importance (after spouses and blood relatives). But we should make them a priority in our lives, argues science writer Lydia Denworth. “The science of friendship suggests we should invest in relationships that deliver strong bonds. We should value people who are positive, stable, cooperative forces,” Denworth says. While friendship was long ignored by academics, researchers are now studying it and finding it provides us with strong physical and emotional benefits. “In this time when we’re struggling with an epidemic of loneliness and bitter political divisions, [friendships] remind us what joy and connection look like and why they matter,” Denworth says.

An accessible musical wonderland. With quick but impressive musical interludes, Matan Berkowitz introduces the “Airstrument” — a new type of instrument that allows anyone to create music in a matter of minutes by translating movement into sound. This technology is part of a series of devices Berkowitz has developed to enable musicians with disabilities (and anyone who wants to make music) to express themselves in non-traditional ways. “Creation with intention,” he raps, accompanied by a song created on the Airstrument. “Now it’s up to us to wake up and act.”

Divya Chander shares her work studying what human brains look like when they lose and regain consciousness. (Photo: Ryan Lash / TED)

Where does consciousness go when you’re under anesthesia? Divya Chander is an anesthesiologist, delivering specialized doses of drugs that put people in an altered state before surgery. She often wonders: Where do people’s brains go while they’re under? What do they perceive? The question has led her into a deeper exploration of perception, awareness and consciousness itself. In a thoughtful talk, she suggests that we have a lot to learn about consciousness … that we could learn by studying unconsciousness.

The art of creation without preparation. To close out the night, vocalist and improviser Lex Empress creates coherent lyrics from words written by audience members on paper airplanes thrown onto the stage. Empress is accompanied by virtuoso pianist and producer Gilian Baracs, who also improvises everything he plays. Their music reminds us to enjoy the great improvisation that is life.


Krebs on SecurityFirst ‘Jackpotting’ Attacks Hit U.S. ATMs

ATM “jackpotting” — a sophisticated crime in which thieves install malicious software and/or hardware at ATMs that forces the machines to spit out huge volumes of cash on demand — has long been a threat for banks in Europe and Asia, yet these attacks somehow have eluded U.S. ATM operators. But all that changed this week after the U.S. Secret Service quietly began warning financial institutions that jackpotting attacks have now been spotted targeting cash machines here in the United States.

To carry out a jackpotting attack, thieves first must gain physical access to the cash machine. From there they can use malware or specialized electronics — often a combination of both — to control the operations of the ATM.

A keyboard attached to the ATM port. Image: FireEye

On Jan. 21, 2018, KrebsOnSecurity began hearing rumblings about jackpotting attacks, also known as “logical attacks,” hitting U.S. ATM operators. I quickly reached out to ATM giant NCR Corp. to see if they’d heard anything. NCR said at the time it had received unconfirmed reports, but nothing solid yet.

On Jan. 26, NCR sent an advisory to its customers saying it had received reports from the Secret Service and other sources about jackpotting attacks against ATMs in the United States.

“While at present these appear focused on non-NCR ATMs, logical attacks are an industry-wide issue,” the NCR alert reads. “This represents the first confirmed cases of losses due to logical attacks in the US. This should be treated as a call to action to take appropriate steps to protect their ATMs against these forms of attack and mitigate any consequences.”

The NCR memo does not mention the type of jackpotting malware used against U.S. ATMs. But a source close to the matter said the Secret Service is warning that organized criminal gangs have been attacking stand-alone ATMs in the United States using “Ploutus.D,” an advanced strain of jackpotting malware first spotted in 2013.

According to that source — who asked to remain anonymous because he was not authorized to speak on the record — the Secret Service has received credible information that crooks are activating so-called “cash out crews” to attack front-loading ATMs manufactured by ATM vendor Diebold Nixdorf.

The source said the Secret Service is warning that thieves appear to be targeting Opteva 500 and 700 series Diebold ATMs using the Ploutus.D malware in a series of coordinated attacks over the past 10 days, and that there is evidence that further attacks are being planned across the country.

“The targeted stand-alone ATMs are routinely located in pharmacies, big box retailers, and drive-thru ATMs,” reads a confidential Secret Service alert sent to multiple financial institutions and obtained by KrebsOnSecurity. “During previous attacks, fraudsters dressed as ATM technicians and attached a laptop computer with a mirror image of the ATMs operating system along with a mobile device to the targeted ATM.

Reached for comment, Diebold shared an alert it sent to customers Friday warning of potential jackpotting attacks in the United States. Diebold’s alert confirms the attacks so far appear to be targeting front-loaded Opteva cash machines.

“As in Mexico last year, the attack mode involves a series of different steps to overcome security mechanism and the authorization process for setting the communication with the [cash] dispenser,” the Diebold security alert reads. A copy of the entire Diebold alert, complete with advice on how to mitigate these attacks, is available here (PDF).

The Secret Service alert explains that the attackers typically use an endoscope — a slender, flexible instrument traditionally used in medicine to give physicians a look inside the human body — to locate the internal portion of the cash machine where they can attach a cord that allows them to sync their laptop with the ATM’s computer.

An endoscope made to work in tandem with a mobile device. Source:

“Once this is complete, the ATM is controlled by the fraudsters and the ATM will appear Out of Service to potential customers,” reads the confidential Secret Service alert.

At this point, the crook(s) installing the malware will contact co-conspirators who can remotely control the ATMs and force the machines to dispense cash.

“In previous Ploutus.D attacks, the ATM continuously dispensed at a rate of 40 bills every 23 seconds,” the alert continues. Once the dispense cycle starts, the only way to stop it is to press cancel on the keypad. Otherwise, the machine is completely emptied of cash, according to the alert.

A 2017 analysis of Ploutus.D by security firm FireEye called it “one of the most advanced ATM malware families we’ve seen in the last few years.”

“Discovered for the first time in Mexico back in 2013, Ploutus enabled criminals to empty ATMs using either an external keyboard attached to the machine or via SMS message, a technique that had never been seen before,” FireEye’s Daniel Regalado wrote.

According to FireEye, the Ploutus attacks seen so far require thieves to somehow gain physical access to an ATM — either by picking its locks, using a stolen master key or otherwise removing or destroying part of the machine.

Regalado says the crime gangs typically responsible for these attacks deploy “money mules” to conduct the attacks and siphon cash from ATMs. The term refers to low-level operators within a criminal organization who are assigned high-risk jobs, such as installing ATM skimmers and otherwise physically tampering with cash machines.

“From there, the attackers can attach a physical keyboard to connect to the machine, and [use] an activation code provided by the boss in charge of the operation in order to dispense money from the ATM,” he wrote. “Once deployed to an ATM, Ploutus makes it possible for criminals to obtain thousands of dollars in minutes. While there are some risks of the money mule being caught by cameras, the speed in which the operation is carried out minimizes the mule’s risk.”

Indeed, the Secret Service memo shared by my source says the cash out crew/money mules typically take the dispensed cash and place it in a large bag. After the cash is taken from the ATM and the mule leaves, the phony technician(s) return to the site and remove their equipment from the compromised ATM.

“The last thing the fraudsters do before leaving the site is to plug the Ethernet cable back in,” the alert notes.

FireEye said all of the samples of Ploutus.D it examined targeted Diebold ATMs, but it warned that small changes to the malware’s code could enable it to be used against 40 different ATM vendors in 80 countries.

The Secret Service alert says ATMs still running on Windows XP are particularly vulnerable, and it urged ATM operators to update to a version of Windows 7 to defeat this specific type of attack.

This is a quickly developing story and may be updated multiple times over the next few days as more information becomes available.

Cory DoctorowI’m speaking at UCSD on Feb 9!

I’m appearing at UCSD on February 9, with a talk called “Scarcity, Abundance and the Finite Planet: Nothing Exceeds Like Excess,” in which I’ll discuss the potentials for scarcity and abundance — and bright-green vs austere-green futurism — drawing on my novels Walkaway, Makers and Down and Out in the Magic Kingdom.

The talk is free and open to the public; the organizers would appreciate an RSVP to

The lecture will take place in the Calit2 Auditorium in the Qualcomm Institute’s Atkinson Hall headquarters. The talk will begin at 5:00 p.m., and it will be moderated by Visual Arts professor Jordan Crandall, who chairs the gallery@calit2’s 2017-2018 faculty committee. Following the talk and Q&A session, attendees are invited to stay for a public reception.

Doctorow will discuss the economics, material science, psychology and politics of scarcity and abundance as described in his novels WALKAWAY, MAKERS and DOWN AND OUT IN THE MAGIC KINGDOM. Together, they represent what he calls “a 15-year literature of technology, fabrication and fairness.”

Among the questions he’ll pose: How can everyone in the world live like an American when we need six planets’ worth of materials to realize that dream? Doctorow also asks, “Can fully automated luxury communism get us there, or will our futures be miserable austerity-ecology hairshirts where we all make do with less?”

Author Cory Doctorow to Speak at UC San Diego on Scarcity, Abundance and the Finite Planet [Doug Ramsey/UCSD News]

Planet Linux AustraliaDonna Benjamin: Turning stories into software at LCA2018

Donna speaking in front of a large screen showing a survey and colourful graph. Photo Credit: Josh Simmons
I love free software, but sometimes I feel that free software does not love me.
Why is it so hard to use? Why is it still so buggy? Why do the things I can do simply with other tools, take so much effort? Why is the documentation so inscrutable?  Why have all the config settings been removed from the GUI? Why does this HowTo assume I can find a config file, and edit it with VI? Do I have to learn to use VI before I can stop my window manager getting in the way of the application I’m trying to use?
Tis a mystery. Or is it?
It’s fair to say that the Free Software community is still largely made up of blokes who are software developers. The idea that “user centered design” is a “Good Thing” is not evenly distributed. In fact, some seem to think it’s not a good thing at all; “patches welcome,” they say, “go fix it yourself.”
The web community on the other hand, has discovered that the key to their success is understanding and meeting the needs of the people who use their software. Ideological purity is great, but enabling people to meet their objectives, is better.
As technologists, we get excited by technology. Of course we do! Technology is modern magic. And we are wizards. It’s wonderful. But the people who use our software are not necessarily interested in the tech itself, they probably just want to use it to get something done. They probably don’t even care what language it’s written in.
Let’s say a customer walks into a hardware store and says they want a drill.  Or perhaps they walk in and stand in front of a shelf simply contemplating a dizzying array of drills, drill bits and other accessories. Which one is right for the job they wonder. Should I get a cordless one? Will I really need diamond tipped drill bits? 
There's a technique called the 5 Whys that's useful for getting under the surface of a requirement. The idea is, you keep asking why until you uncover the real reason for a request, need, feature or widget. For example, we could ask this customer...
Why do you want this drill? To drill a hole. 
Why? To hang a picture on my wall.  
Why? To be able to share and enjoy this amazing photo from my recent holiday.
So we discover our customer did not, in fact, want a drill. Our customer wanted to express something about their identity by decorating their home.  So telling them all about the voltage of the drill, and the huge range of drill bits available, may have helped them choose the right drill for the job, but if we stop to understand the job in the first place, we’re more likely to be able to help that person get what they need to get their job done.
User stories are one way we can explore the “Why” behind the software we build. Check out my talk from the Developers Developers miniconf at on Monday “Turning stories, into software.”

CryptogramFriday Squid Blogging: Squid that Mate, Die, and Then Sink

The mating and death characteristics of some squid are fascinating.

Research paper.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityRegistered at SSA.GOV? Good for You, But Keep Your Guard Up

KrebsOnSecurity has long warned readers to plant your own flag at the my Social Security online portal of the U.S. Social Security Administration (SSA) — even if you are not yet drawing benefits from the agency — because identity thieves have been registering accounts in peoples’ names and siphoning retirement and/or disability funds. This is the story of a Midwest couple that took all the right precautions and still got hit by ID thieves who impersonated them to the SSA directly over the phone.

In mid-December 2017 this author heard from Ed Eckenstein, a longtime reader in Oklahoma whose wife Ruth had just received a snail mail letter from the SSA about successfully applying to withdraw benefits. The letter confirmed she’d requested a one-time transfer of more than $11,000 from her SSA account. The couple said they were perplexed because both previously had taken my advice and registered accounts with MySocialSecurity, even though Ruth had not yet chosen to start receiving SSA benefits.

The fraudulent one-time payment that scammers tried to siphon from Ruth Eckenstein’s Social Security account.

Sure enough, when Ruth logged into her MySocialSecurity account online, there was a pending $11,665 withdrawal destined to be deposited into a Green Dot prepaid debit card account (funds deposited onto a Green Dot card can be spent like cash at any store that accepts credit or debit cards). The $11,665 amount was available for a one-time transfer because it was intended to retroactively cover monthly retirement payments back to her 65th birthday.

The letter the Eckensteins received from the SSA indicated that the benefits had been requested over the phone, meaning the crook(s) had called the SSA pretending to be Ruth and supplied them with enough information about her to enroll her to begin receiving benefits. Ed said he and his wife immediately called the SSA to notify them of fraudulent enrollment and pending withdrawal, and they were instructed to appear in person at an SSA office in Oklahoma City.

The SSA ultimately put a hold on the fraudulent $11,665 transfer, but Ed said it took more than four hours at the SSA office to sort it all out. Mr. Eckenstein said the agency also informed them that the thieves had signed his wife up for disability payments. In addition, her profile at the SSA had been changed to include a phone number in the 786 area code (Miami, Fla.).

“They didn’t change the physical address perhaps thinking that would trigger a letter to be sent to us,” Ed explained.

Thankfully, the SSA sent a letter anyway. Ed said many additional hours spent researching the matter with SSA personnel revealed that in order to open the claim on Ruth’s retirement benefits, the thieves had to supply the SSA with a short list of static identifiers about her, including her birthday, place of birth, mother’s maiden name, current address and phone number.

Unfortunately, most (if not all) of this data is available on a broad swath of the American populace for free online (think Zillow, Facebook, etc.) or else for sale in the cybercrime underground for about the cost of a latte at Starbucks.

The Eckensteins thought the matter had been resolved until Jan. 14, when Ruth received a 1099 form from the SSA indicating they’d reported to the IRS that she had in fact received an $11,665 payment.

“We’ve emailed our tax guy for guidance on how to deal with this on our taxes,” Mr. Eckenstein wrote in an email to KrebsOnSecurity. “My wife logged into SSA portal and there was a note indicating that corrected/updated 1099s would be available at the end of the month. She’s not sure whether that message was specific to her or whether everyone’s seeing that.”


Identity thieves have been exploiting authentication weaknesses to divert retirement account funds almost since the SSA launched its portal eight years ago. But the crime really picked up in 2013, around the same time KrebsOnSecurity first began warning readers to register their own accounts at the MySSA portal. That uptick coincided with a move by the U.S. Treasury to start requiring that all beneficiaries receive payments through direct deposit (though the SSA says paper checks are still available to some beneficiaries under limited circumstances).

More than 34 million Americans now conduct business with the Social Security Administration (SSA) online. A story this week from Reuters says the SSA doesn’t track data on the prevalence of identity theft. Nevertheless, the agency assured the news outlet that its anti-fraud efforts have made the problem “very rare.”

But Reuters notes that a 2015 investigation by the SSA’s Office of Inspector General identified more than 30,000 suspicious MySSA registrations, and more than 58,000 allegations of fraud related to MySSA accounts from February 2013 to February 2016.

“Those figures are small in the context of overall MySSA activity – but it will not seem small if it happens to you,” writes Mark Miller for Reuters.

The SSA has not yet responded to a request for comment.

Ed and Ruth’s experience notwithstanding, it’s still a good idea to set up a MySSA account — particularly if you or your spouse will be eligible to withdraw benefits soon. The agency has been trying to beef up online authentication for citizens logging into its MySSA portal. Last summer the SSA began requiring all users to enter a username and password in addition to a one-time security code sent to their email or phone, although as previously reported here that authentication process could be far more robust.

The Reuters story reminds readers to periodically use the MySSA portal to make sure that your personal information – such as date of birth and mailing address – is correct. “For current beneficiaries, if you notice that a monthly payment has not arrived, you should notify the SSA immediately via the agency’s toll-free line (1-800-772-1213) or at your local field office,” Miller advised. “In most cases, the SSA will make you whole if the theft is reported quickly.”

Another option is to use the SSA’s “Block Electronic Access” feature, which blocks any automatic telephone or online access to your Social Security record – including by you (although it’s unclear if blocking access this way would have stopped ID thieves who manage to speak with a live SSA representative). To restore electronic access, you’ll need to contact the Social Security Administration and provide proof of your identity.

CryptogramThe Effects of the Spectre and Meltdown Vulnerabilities

On January 3, the world learned about a series of major security vulnerabilities in modern microprocessors. Called Spectre and Meltdown, these vulnerabilities were discovered by several different researchers last summer, disclosed to the microprocessors' manufacturers, and patched -- at least to the extent possible.

This news isn't really any different from the usual endless stream of security vulnerabilities and patches, but it's also a harbinger of the sorts of security problems we're going to be seeing in the coming years. These are vulnerabilities in computer hardware, not software. They affect virtually all high-end microprocessors produced in the last 20 years. Patching them requires large-scale coordination across the industry, and in some cases drastically affects the performance of the computers. And sometimes patching isn't possible; the vulnerability will remain until the computer is discarded.

Spectre and Meltdown aren't anomalies. They represent a new area to look for vulnerabilities and a new avenue of attack. They're the future of security -- and it doesn't look good for the defenders.

Modern computers do lots of things at the same time. Your computer and your phone simultaneously run several applications -- or apps. Your browser has several windows open. A cloud computer runs applications for many different computers. All of those applications need to be isolated from each other. For security, one application isn't supposed to be able to peek at what another one is doing, except in very controlled circumstances. Otherwise, a malicious advertisement on a website you're visiting could eavesdrop on your banking details, or the cloud service purchased by some foreign intelligence organization could eavesdrop on every other cloud customer, and so on. The companies that write browsers, operating systems, and cloud infrastructure spend a lot of time making sure this isolation works.

Both Spectre and Meltdown break that isolation, deep down at the microprocessor level, by exploiting performance optimizations that have been implemented for the past decade or so. Basically, microprocessors have become so fast that they spend a lot of time waiting for data to move in and out of memory. To increase performance, these processors guess what data they're going to receive and execute instructions based on that. If the guess turns out to be correct, it's a performance win. If it's wrong, the microprocessors throw away what they've done without losing any time. This feature is called speculative execution.

Spectre and Meltdown attack speculative execution in different ways. Meltdown is more of a conventional vulnerability; the designers of the speculative-execution process made a mistake, so they just needed to fix it. Spectre is worse; it's a flaw in the very concept of speculative execution. There's no way to patch that vulnerability; the chips need to be redesigned in such a way as to eliminate it.
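
To make that concrete, here is the canonical bounds-check-bypass gadget (Spectre variant 1), lightly adapted from the pattern published in the Spectre paper; the array names and probe-array stride are illustrative:

    #include <stdint.h>
    #include <stddef.h>

    uint8_t array1[16];            /* secret data lies just past this */
    size_t  array1_size = 16;
    uint8_t array2[256 * 512];     /* probe array: one cache line per byte value */

    uint8_t victim(size_t x) {
        if (x < array1_size) {
            /* For a malicious out-of-bounds x, the CPU may mispredict this
               branch and speculatively read array1[x] anyway. That secret
               byte selects which line of array2 gets pulled into the cache.
               The speculative result is discarded, but the cache state is
               not: timing later loads of array2 reveals the secret byte. */
            return array2[array1[x] * 512];
        }
        return 0;
    }

    int main(void) { return victim(5); }

Note that nothing in the C source is incorrect, which is exactly the problem: the leak is a property of the hardware's speculation, not of the program's logic.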

Since the announcement, manufacturers have been rolling out patches to these vulnerabilities to the extent possible. Operating systems have been patched so that attackers can't make use of the vulnerabilities. Web browsers have been patched. Chips have been patched. From the user's perspective, these are routine fixes. But several aspects of these vulnerabilities illustrate the sorts of security problems we're only going to be seeing more of.

First, attacks against hardware, as opposed to software, will become more common. Last fall, vulnerabilities were discovered in Intel's Management Engine, a remote-administration feature on its microprocessors. Like Spectre and Meltdown, they affected how the chips operate. Looking for vulnerabilities on computer chips is new. Now that researchers know this is a fruitful area to explore, security researchers, foreign intelligence agencies, and criminals will be on the hunt.

Second, because microprocessors are fundamental parts of computers, patching requires coordination between many companies. Even when manufacturers like Intel and AMD can write a patch for a vulnerability, computer makers and application vendors still have to customize and push the patch out to the users. This makes it much harder to keep vulnerabilities secret while patches are being written. Spectre and Meltdown were announced prematurely because details were leaking and rumors were swirling. Situations like this give malicious actors more opportunity to attack systems before they're guarded.

Third, these vulnerabilities will affect computers' functionality. In some cases, the patches for Spectre and Meltdown result in significant reductions in speed. The press initially reported 30%, but that only seems true for certain servers running in the cloud. For your personal computer or phone, the performance hit from the patch is minimal. But as more vulnerabilities are discovered in hardware, patches will affect performance in noticeable ways.

And then there are the unpatchable vulnerabilities. For decades, the computer industry has kept things secure by finding vulnerabilities in fielded products and quickly patching them. Now there are cases where that doesn't work. Sometimes it's because computers are in cheap products that don't have a patch mechanism, like many of the DVRs and webcams that are vulnerable to the Mirai (and other) botnets -- groups of Internet-connected devices sabotaged for coordinated digital attacks. Sometimes it's because a computer chip's functionality is so core to a computer's design that patching it effectively means turning the computer off. This, too, is becoming more common.

Increasingly, everything is a computer: not just your laptop and phone, but your car, your appliances, your medical devices, and global infrastructure. These computers are and always will be vulnerable, but Spectre and Meltdown represent a new class of vulnerability. Unpatchable vulnerabilities in the deepest recesses of the world's computer hardware is the new normal. It's going to leave us all much more vulnerable in the future.

This essay previously appeared on

Worse Than FailureError'd: #TITLE_OF_ERRORD2#

Joe P. wrote, "When I tried to buy a coffee at the airport with my contactless VISA card, it apparently thought my name was '%s'."


"Instead of outsourcing to Eastern Europe or the Asian subcontinent, companies should be hiring from Malta. Just look at these people! They speak fluent base64!" writes Michael J.


Raffael wrote, "While I can proudly say that I am working on bugs, the Salesforce Chatter site should probably consider doing the same."


"Wow! Thanks! Happy Null Year to you too!" Alexander K. writes.


Joel B. wrote, "Yesterday was the first time I've ever seen a phone with a 'License Violation'. Phone still works, so I guess there's that."


"They missed me so much, they decided to give me...nothing," writes Timothy.



Planet Linux AustraliaSimon Lyall: 2018 – Day 5 – Light Talks and Close

Lightning Talks

  • Usability Fails
  • Etching
  • Diverse Events
  • Kids Space – fairly unstructured and self organising
  • Opening up LandSat imagery – NBAR-T available on NCI
  • Project Nacho – HTML -> VPN/RDP gateway. Apache Guacamole
  • Vocaloids
  • Blockchain
  • Using j2 to create C++ code
  • Memory model code update
  • CLIs are user interface too
  • Complicated git things
  • Mollygive – matching donations
  • Abusing Docker


  • LCA 2019 will be in Christchurch, New Zealand –
  • 700 Attendees at 2018
  • 400 talk and 36 Miniconf submissions




Planet Linux AustraliaSimon Lyall: 2018 – Day 5 – Session 2

QUIC: Replacing TCP for the Web Jana Iyengar

  • History
    • Protocol for http transport
    • Deployed Inside Google 2014 and Chrome / mobile apps
    • Improved performance: YouTube rebuffers down 15-18%, Google search latency down 3.6-8%
    • 35% of Google’s egress traffic (7% of Internet)
    • Working group started in 2016 to standardized QUIC
    • Turned off at the start of 2016 due to a security problem
    • Doubled in Sept 2016 when turned on for the YouTube app
  • Technology
    • Previously – IP -> TCP -> TLS -> HTTP/2
    • QUIC – IP -> UDP -> QUIC -> HTTP over QUIC
    • Includes crypto and tcp handshake
    • congestion control
    • loss recovery
    • TLS 1.3 has some of the same features that QUIC pioneered; QUIC is being updated to take it into account
  • HTTP/1
    • 1 trip for TCP
    • 2 trips for TLS
    • Single connection – Head Of Line blocking
    • Multiple TCP connections workaround.
  • HTTP/2
    • Streams within a single transport connection
    • Packet loss will stall the TCP layer
    • Unresolved problems
      • Connection setup latency
      • Middlebox interference with TCP – makes it hard to change TCP
      • Head of line blocking within TCP (see the toy model after these notes)
  • QUIC
    • Connection setup
      • 0 round trips, handshake packet followed directly by data packet
      • 1 round trip if crypto keys are not new
      • 2 round trips if QUIC version needs renegotiation
    • Streams
      • http/2 streams are sent as quic streams
  • Aspirations of protocol
    • Deployable and evolvable
    • Low latency connection establishment
    • Stream multiplexing
    • Better loss recovery and flexible congestion control
      • richer signalling (unique packet number)
      • better RTT estimates
    • Resilience to NAT-rebinding ( UDP Nat-mapping changes often, maybe every few seconds)
  • UDP is not a transport; you put something on top of UDP to build a transport
  • Why not a new protocol instead of UDP? Almost impossible to get a new protocol in middle boxes around the Internet.
  • Metrics
    • Search Latency (see paper for other metrics)
    • Enter search term > entire page is loaded
    • Mean: desktop improved 8%, mobile 3.6%
    • Low latency: desktop 1%, mobile none
    • Highest latency (90-99% of users): desktop & mobile 15-16%
    • Video similar
    • Big gain is from 0 RTT handshake
  • QUIC – Search Latency Improvements by Country
    • South Korea – 38ms RTT – 1% improvement
    • USA – 50ms – 2 – 3.5 %
    • India – 188ms – 5 – 13%
  • Middlebox ossification
    • Vendor ossified first byte of QUIC packet – flags byte
    • since it seemed to be the same on all QUIC packets
    • broke QUIC deployment when a flag was fixed
    • Encryption is the only way to protect against network ossification
    • “Greasing” by randomly changing options is also an option.
  • Other Protocols over QUIC?
    • Concentrating on http/2
    • Looking at Web RPC
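
The head-of-line-blocking contrast above is easiest to see in a toy model. In the sketch below (all numbers invented for illustration), eight packets are multiplexed across two streams and packet 2 is lost; with TCP's single ordered byte stream, every later packet waits for the retransmission, while per-stream ordering in the style of QUIC stalls only the stream that lost the packet:

    #include <stdio.h>

    #define PACKETS 8
    #define STREAMS 2
    #define LOST    2   /* packet 2 (stream 0) is dropped */
    #define RETX    4   /* its retransmission arrives at tick LOST + RETX */

    static int max(int a, int b) { return a > b ? a : b; }

    int main(void) {
        printf("pkt  stream  tcp_delivery  quic_delivery\n");
        for (int i = 0; i < PACKETS; i++) {
            int stream  = i % STREAMS;                    /* round-robin streams */
            int arrives = (i == LOST) ? LOST + RETX : i;  /* loss delays arrival */

            /* TCP: one ordered byte stream, so every packet at or after the
               lost one waits for the retransmission, whatever its stream. */
            int tcp = (i >= LOST) ? max(arrives, LOST + RETX) : arrives;

            /* QUIC-style: ordering is per stream, so only the lossy stream stalls. */
            int quic = (stream == LOST % STREAMS && i >= LOST)
                           ? max(arrives, LOST + RETX)
                           : arrives;

            printf("%3d  %6d  %12d  %13d\n", i, stream, tcp, quic);
        }
        return 0;
    }

In this run, TCP holds packets 3 through 5 until the retransmission lands at tick 6, while the QUIC-style model delivers stream 1's packets 3 and 5 on time.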

Remote Work: My first decade working from the far end of the earth John Dalton

  • “Remote work has given me a fulfilling technical career while still being able to raise my family in Tasmania”
  • First son born in 2015; wanted to stay in Tasmania with family to raise them, rather than moving to a tech hub.
  • 2017 working with High Performance Computing at the University of Tasmania
  • If everything is going to be outsourced, I want to be the one they outsourced to.
  • Wanted to do big web stuff, nobody in Tasmania doing that.
  • Was a user at LibraryThing
    • They were searching for Sysadmin/DBA in Portland, Maine
    • Knew he could do the job even though was on other side of the world
    • Negotiated into it over a couple of months
    • Knew could do the work, but not sure how the position would work out


  • Discipline
    • Feels he is not organised. Doesn’t keep planner up to date or todo lists etc
    • “You can spend a lot of time reading about time management without actually doing it”
    • Do you need to have the minimum level
  • Isolation
    • Lives 20 minutes out of Hobart
    • In semi-rural area for days at a time, doesn’t leave house all week except to ferry kids on weekends.
    • “Never considered myself an extrovert, but I do enjoy talking to people at least weekly”
    • Need to work to hook in with Hobart tech community, Goes to meetups. Plays D&D with friends.
    • Considering going to coworking space. Sometimes goes to cafes etc
  • Setting Boundaries
    • Hard to Leave work.
    • Have a dedicated work space.
  • Internet Access
    • Prioritise Coverage over cost these days for mobile.
    • Sometimes fixed providers go down, need to have a backup
  • Communication
    • Less random communication with other employees
    • Cannot assume any particular knowledge when talking with other people
    • Aware of particular cultural differences
    • Multiple chances of miscommunication


  • Access to companies and jobs and technologies that he couldn’t get locally
  • Access to people with a wider range of experiences and backgrounds

Finding remote work

  • Talk your way into it
  • Networking
  • Jobs BoF
  • can filter

Making it work

  • Be visible
  • Go home at the end of the day
  • Remember real people are at the end of the email



Planet Linux Australia: Simon Lyall: linux.conf.au 2018 – Day 5 – Session 1

Self-Documenting Coders: Writing Workshop for Devs – Heidi Waterhouse

History of Technical documentation

  • Linear Writing
    • On Paper, usually books
    • Emphasis on understanding and doing
  • Task-based writing
    • Early 90s
    • DITA
    • Concept, Procedure, Reference
  • Object-orientated writing
    • The high art form of tech writing
    • Content as code
    • Only works when compiled
    • Favoured by tech writers; built for translation. Tools cost up to $2000 per seat
  • Guerilla Writing
    • Stack Overflow
    • Wikis
    • YouTube
    • frustrated non-writers trying to help peers
  • Search-first writing
    • Every page is page one
    • Search-index driven

Writing Words

  • 5 W’s of journalism.
  • Documentation needs to be tested
  • Audiences
    • e.g. users, future self, sysadmins, experts, end users, installers
  • Writing Basics
    • Keep sentences short
    • Graphics for concepts
    • Avoid screencaps (too easily outdated)
    • Use style guides and linters
    • Accessibility is a real thing
  • Words with pictures
    • Never include settings only in an image (“set your screen to look like this” is bad)
    • Use images for concepts not instructions
  • Not all your users are readers
    • Can’t see well
    • Can’t parse easily
    • Some have terrible equipment
    • Some of the “some people” are us
    • Accessibility is not a checklist, although that helps, it is us
  • Using templates to write
    • Organise your thoughts and avoid forgetting parts
    • Add a standard look at low mental cost
  • Search-first writing – page one
    • If you didn’t answer the question or point to the answer you failed
    • answer “How do I?”
  • Indexing and search
    • All the words present are indexed
    • No false pointers
    • Use words people use and search for, Don’t use just your internal names for things
  • Semantic tagging and reuse
    • Semantic text splits form and content
    • Semantic tagging allows reuse
    • Reuse saves duplication
    • Reuse requires compiling
  • Sorting topics into buckets
    • Even with search you need some organisation
    • Group items by how they get used, not by how they get programmed
    • Grouping similar items allows serendipity
  • Links, menus and flow
    • give people a next step
    • Provide related info on same page
    • show location
    • offer a chance to see the document structure

Distributing Words

  • Static Sites
  • Hosted Sites
  • Baked into the product
    • Only available to customers
    • only updates with the product
    • Hard to encourage the average user to contribute
  • Knowledge based / CMS
    • Useful to a community that knows what it wants
    • Prone to aging and rot
    • Sometimes diverges from published docs or company message
  • Professional Writing Tools
    • Shiny and powerful
    • Learning Cliff
    • IDE
    • Super features
    • Not going to happen again
  • Paper-ish things
    • Essential for some topics
    • Reassuring to many people
    • touch is a sense we can bond with
    • Need to understand if people using docs will be online or offline when they want them.
  • Using templates to publish
    • Unified look and feel
    • Consistency and not missing things
    • Built-in checklist

Collaborating on Words

  • One weird trick: write it up as your best guess and let them correct it
  • Have a hack day
    • Set a goal of things to delete
    • Set a goal of things to fix
    • Keep track of debt you can’t handle today
    • team-building doesn’t have to be about activities

Deleting Words

  • What needs to go
    • Old stuff that is wrong and terrible
    • Wrong stuff that hides right stuff
  • What to delete
    • Anything wrong
    • Anything dangerous
    • Anything not used or updated in a year
  • How
    • Delete temporarily (put aside for a while)
    • Based on analytics
    • Ruthlessly
    • Delete or update

Documentation Must be

  • True
  • Timely
  • Testable
  • Tuned

Documentation Components

  • Who is reading and why
    • Assuming no one likes reading docs
    • What is driving them to be here
  • Pre Requisites
    • What does a user need to succeed
    • Can I change the product to reduce documentation
    • Is there any hazard in this process
  • How do I do this task
    • Steps
    • Results
    • Next steps
  • Test – How do I know that it worked
    • If you can’t test it, it is not a procedure
    • What will the system do, how does the state change
  • Reference
    • What other stuff affects this
    • What are the optional settings
    • What are the related things
  • Code and code samples
    • Best: code you can modify and run in the docs
    • 2nd Best: Code you can copy easily
    • Worst: retyping code
  • Optional
    • Why did we build it this way
    • What else might you want to know
    • Have other people done this
    • Lifecycle

Documentation Types

  • Instructions
  • Ideas (architecture, problem space, discarded options, process)
  • Action required (release notes, updates, deprecation)
  • Historical (road maps, project plans, retrospective documents)
  • Invisible docs (user experience, microinteractions, error messages)
    • Error messages – unique ID, what caused it, what the mitigation is, optional: a link to report the problem (see the sketch below)
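A hedged sketch of that error-message pattern in Go. The type, the ERR-1042 code, the path and the URL are all invented for illustration:

    package main

    // Sketch of an error carrying the pieces listed above: a unique ID,
    // the cause, a mitigation, and an optional link to report the issue.

    import "fmt"

    type UserError struct {
    	ID         string // unique, searchable identifier
    	Cause      string // what went wrong
    	Mitigation string // what the user can do about it
    	ReportURL  string // optional: where to report it
    }

    func (e UserError) Error() string {
    	return fmt.Sprintf("[%s] %s. Try: %s (report: %s)",
    		e.ID, e.Cause, e.Mitigation, e.ReportURL)
    }

    func main() {
    	fmt.Println(UserError{
    		ID:         "ERR-1042",
    		Cause:      "config file /etc/app.conf is not readable",
    		Mitigation: "check the file exists and is readable by the app user",
    		ReportURL:  "https://example.com/report/ERR-1042",
    	})
    }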



Planet Linux Australia: Simon Lyall: linux.conf.au 2018 – Day 5 – Keynote – Jess Frazelle

Keynote: Containers aka crazy user space fun

  • Works at Microsoft on open source and containers, specifically on Kubernetes
  • Containers vs Zones vs Jails vs VMs
  • Containers are not a first class concept in the kernel; they are assembled from the primitives below (see the sketch after this list)
    • Namespaces
    • Cgroups
    • AppArmor in LSM (prevents mounting, writing to /proc, etc.) (or SELinux)
    • Seccomp (syscall filters: which calls are allowed or denied) – blocks around 150 syscalls which are uncommon or dangerous
      • Got the list from testing all of Docker Hub
      • e.g. CLONE, UNSHARE
      • NoNewPrivs (exposed as “allowPrivilegeEscalation” in K8s)
      • rkt and systemd-nspawn don’t 100% follow
  • Intel Clear Containers are really VMs
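Since a container is just a process started with the right combination of those primitives, here is a minimal Linux-only Go sketch: run a shell in fresh UTS, PID and mount namespaces, inside a user namespace so no root is required. Cgroup, seccomp and filesystem setup are omitted; this is an illustration, not a real container runtime.

    package main

    // Minimal "container": a shell in new user, UTS, PID and mount
    // namespaces. Linux-only; changing the hostname inside will not
    // affect the host because of the UTS namespace.

    import (
    	"os"
    	"os/exec"
    	"syscall"
    )

    func main() {
    	cmd := exec.Command("/bin/sh")
    	cmd.Stdin, cmd.Stdout, cmd.Stderr = os.Stdin, os.Stdout, os.Stderr
    	cmd.SysProcAttr = &syscall.SysProcAttr{
    		Cloneflags: syscall.CLONE_NEWUSER | syscall.CLONE_NEWUTS |
    			syscall.CLONE_NEWPID | syscall.CLONE_NEWNS,
    		// Map the current user to root inside the user namespace.
    		UidMappings: []syscall.SysProcIDMap{
    			{ContainerID: 0, HostID: os.Getuid(), Size: 1},
    		},
    		GidMappings: []syscall.SysProcIDMap{
    			{ContainerID: 0, HostID: os.Getgid(), Size: 1},
    		},
    	}
    	if err := cmd.Run(); err != nil {
    		panic(err)
    	}
    }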

History of Containers

  • OpenVZ – released 2005
  • Linux-VServer (2008)
  • LXC (2008)
  • Docker (2013)
    • Initially used LXC as a backend
    • Switched to libcontainer in v0.7
  • lmctfy (2013)
    • By Google
  • rkt (2014)
  • runc (2015)
    • Part of the Open Container Initiative
  • Container runtimes are like the new JavaScript frameworks

Are Containers Secure?

  • Yes
  • and I can prove it
  • VMs, Zones and Jails are like Lego pieces that come already glued together
  • With containers you have the parts separate
    • You can turn on and off certain namespaces
    • You can share namespaces between containers
    • Containers in the same k8s pod share PID and NET namespaces
    • Docker has sane defaults
    • You can sandbox apps even further though
    • No one has managed to break out of the container
    • Has a very strict seccomp profile applied
    • You’d be better off attacking the app, but it is still running under the container’s default seccomp filters

Containerizing the Desktop

  • Switched to runc from docker (had to convert stuff)
  • rootless containers
  • Runc hook “netns” to do networking
  • Sandboxed desktop apps, running in containers
  • Switch from Debian to CoreOS Container Linux as base OS
    • Verify the integrity of the OS
    • Just had to add graphics drivers
    • Based on Gentoo, emerge all the way down

What if we applied the same defaults to programming languages?

  • Generate seccomp filters at build-time (see the sketch after this list)
    • Previously tried at run time; it doesn’t work that well, something is always missed
    • At build time we can ensure all code is included in the filter
    • The Go compiler writes the assembly for all the syscalls; you can hijack this, grab the list and create a seccomp filter
    • Not quite that simple:
      • plugins
      • exec of external programs
      • Go code can invoke a syscall directly, with the number passed in via arguments at runtime
    • Library for cloud-native applications
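A hedged sketch of the output end of that pipeline: given a syscall allowlist (hard-coded here; in the approach above it would be extracted from the compiled binary), emit a default-deny seccomp profile in the Docker/OCI JSON shape:

    package main

    // Turn a build-time syscall allowlist into a Docker-style seccomp
    // profile: deny by default, allow only the listed calls. The
    // allowlist here is hard-coded for illustration.

    import (
    	"encoding/json"
    	"os"
    )

    type rule struct {
    	Names  []string `json:"names"`
    	Action string   `json:"action"`
    }

    type profile struct {
    	DefaultAction string   `json:"defaultAction"`
    	Architectures []string `json:"architectures"`
    	Syscalls      []rule   `json:"syscalls"`
    }

    func main() {
    	allowed := []string{"read", "write", "mmap", "futex", "exit_group"}

    	p := profile{
    		DefaultAction: "SCMP_ACT_ERRNO", // deny everything not listed
    		Architectures: []string{"SCMP_ARCH_X86_64"},
    		Syscalls:      []rule{{Names: allowed, Action: "SCMP_ACT_ALLOW"}},
    	}

    	enc := json.NewEncoder(os.Stdout)
    	enc.SetIndent("", "  ")
    	enc.Encode(p)
    }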

Linux Containers in secure enclaves (SCONE)

  • Currently Slow
  • Lots of tradeoffs on what executes where (trusted area or untrusted area)

Soft multi-tenancy

  • Reduced threat model, users not actively malicious
  • Hard Multi-tenancy would have potentially malicious containers running next to others
  • Host OS – e.g. CoreOS
  • Container Runtime – Look at glasshouse VMs
  • Network – Lots to do; default deny in k8s is a good start (see the sketch after this list)
  • DNS – Needs to be namespaced properly or turned off. Option: kube-dns as a sidecar
  • Authentication and Authorisation – RBAC
  • Isolation of master and System nodes from nodes running containers
  • Restricting access to host resources (k8s hostpath for volumes, pod security policy)
  • Making sure everything else is “very dumb” to its surroundings
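On the network point, a default-deny policy is small and concrete. A sketch using the upstream Kubernetes Go types (the tenant-a namespace is invented; the object is printed rather than applied, and building it requires the k8s.io/api and k8s.io/apimachinery modules):

    package main

    // Default-deny-all NetworkPolicy for one tenant namespace: the empty
    // pod selector matches every pod, and with no ingress/egress rules
    // listed, all traffic is denied.

    import (
    	"encoding/json"
    	"fmt"

    	networkingv1 "k8s.io/api/networking/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
    	policy := networkingv1.NetworkPolicy{
    		ObjectMeta: metav1.ObjectMeta{
    			Name:      "default-deny-all",
    			Namespace: "tenant-a",
    		},
    		Spec: networkingv1.NetworkPolicySpec{
    			PodSelector: metav1.LabelSelector{}, // all pods in the namespace
    			PolicyTypes: []networkingv1.PolicyType{
    				networkingv1.PolicyTypeIngress,
    				networkingv1.PolicyTypeEgress,
    			},
    		},
    	}

    	out, _ := json.MarshalIndent(policy, "", "  ")
    	fmt.Println(string(out))
    }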




Sociological ImagesChildren Learn Rules for Romance in Preschool

Originally Posted at TSP Discoveries

Photo by oddharmonic, Flickr CC

In the United States we tend to think children develop sexuality in adolescence, but new research by Heidi Gansen shows that children learn the rules and beliefs associated with romantic relationships and sexuality much earlier.

Gansen spent over 400 hours in nine different classrooms in three Michigan preschools. She observed behavior from teachers and students during daytime classroom hours and concluded that children learn — via teachers’ practices — that heterosexual relationships are normal and that boys and girls have very different roles to play in them.

In some classrooms, teachers actively encouraged “crushes” and kissing between boys and girls. Teachers assumed that any form of affection between opposite gender children was romantically-motivated and these teachers talked about the children as if they were in a romantic relationship, calling them “boyfriend/girlfriend.” On the other hand, the same teachers interpreted affection between children of the same gender as friendly, but not romantic. Children reproduced these beliefs when they played “house” in these classrooms. Rarely did children ever suggest that girls played the role of “dad” or boys played the role of “mom.” If they did, other children would propose a character they deemed more gender-appropriate like a sibling or a cousin.

Preschoolers also learned that boys have power over girls’ bodies in the classroom. In one case, teachers witnessed a boy kiss a girl on the cheek without permission. While teachers in some schools enforced what the author calls “kissing consent” rules, the teachers in this school interpreted the kiss as “sweet” and as the result of a harmless crush. Teachers also did not police boys’ sexual behaviors as actively as girls’ behaviors. For instance, when girls pulled their pants down teachers disciplined them, while teachers often ignored the same behavior from boys. Thus, children learned that rules for romance also differ by gender.

Allison Nobles is a PhD candidate in sociology at the University of Minnesota and Graduate Editor at The Society Pages. Her research primarily focuses on sexuality and gender, and their intersections with race, immigration, and law.
