Planet Russell


CryptogramGCHQ Found -- and Disclosed -- a Windows 10 Vulnerability

Now this is good news. The UK's National Cyber Security Centre (NCSC) -- part of GCHQ -- found a serious vulnerability in Windows Defender (Microsoft's anti-virus component). Instead of keeping it secret and leaving all of us vulnerable, it alerted Microsoft.

I'd like to believe the US does this, too.

Worse Than FailureCodeSOD: Titration Frustration

From submitter Christoph comes a function that makes your average regex seem not all that bad, actually:

According to "What is a Titration?" we learn that "a titration is a technique where a solution of known concentration is used to determine the concentration of an unknown solution." Since this is an often needed calculation in a laboratory, we can write a program to solve this problem for us.

Part of the solver is a formula parser, which needs to accept variable names (either lower or upper case letters), decimal numbers, and any of '+-*/^()' for mathematical operators. Presented here is the part of the code for the solveTitration() function that deals with parsing of the formula. Try to read it in an 80 chars/line window. Once with wrapping enabled, and once with wrapping disabled and horizontal scrolling. Enjoy!
String solveTitration(char *yvalue)
{
    String mreport;
    lettere = 0;
    //now we have to solve the system of equations
    //yvalue contains the equation of Y-axis variable
    String tempy = "";
    end = 1;
    mreport = "";
    String tempyval;
    String ptem;
    for (int i = 0; strlen(yvalue) + 1; ++i) {
        if (!(yvalue[i]=='q' || yvalue[i]=='w' || yvalue[i]=='e' 
|| yvalue[i]=='r' || yvalue[i]=='t' || yvalue[i]=='y' || yvalue[i]=='u' || 
yvalue[i]=='i' || yvalue[i]=='o' || yvalue[i]=='p' || yvalue[i]=='a' || 
yvalue[i]=='s' || yvalue[i]=='d' || yvalue[i]=='f' || yvalue[i]=='g' || 
yvalue[i]=='h' || yvalue[i]=='j' || yvalue[i]=='k' || yvalue[i]=='l' || 
yvalue[i]=='z' || yvalue[i]=='x' || yvalue[i]=='c' || yvalue[i]=='v' || 
yvalue[i]=='b' || yvalue[i]=='n' || yvalue[i]=='m' || yvalue[i]=='+' || 
yvalue[i]=='-' || yvalue[i]=='^' || yvalue[i]=='*' || yvalue[i]=='/' || 
yvalue[i]=='(' || yvalue[i]==')' || yvalue[i]=='Q' || yvalue[i]=='W' || 
yvalue[i]=='E' || yvalue[i]=='R' || yvalue[i]=='T' || yvalue[i]=='Y' || 
yvalue[i]=='U' || yvalue[i]=='I' || yvalue[i]=='O' || yvalue[i]=='P' || 
yvalue[i]=='A' || yvalue[i]=='S' || yvalue[i]=='D' || yvalue[i]=='F' || 
yvalue[i]=='G' || yvalue[i]=='H' || yvalue[i]=='J' || yvalue[i]=='K' || 
yvalue[i]=='L' || yvalue[i]=='Z' || yvalue[i]=='X' || yvalue[i]=='C' || 
yvalue[i]=='V' || yvalue[i]=='B' || yvalue[i]=='N' || yvalue[i]=='M' || 
yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || yvalue[i]=='4' || 
yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || yvalue[i]=='8' || 
yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || yvalue[i]==',')) {
            break; //if current value is not a permitted value, this means that something is wrong
        }
        if (yvalue[i]=='q' || yvalue[i]=='w' || yvalue[i]=='e' 
|| yvalue[i]=='r' || yvalue[i]=='t' || yvalue[i]=='y' || yvalue[i]=='u' || 
yvalue[i]=='i' || yvalue[i]=='o' || yvalue[i]=='p' || yvalue[i]=='a' || 
yvalue[i]=='s' || yvalue[i]=='d' || yvalue[i]=='f' || yvalue[i]=='g' || 
yvalue[i]=='h' || yvalue[i]=='j' || yvalue[i]=='k' || yvalue[i]=='l' || 
yvalue[i]=='z' || yvalue[i]=='x' || yvalue[i]=='c' || yvalue[i]=='v' || 
yvalue[i]=='b' || yvalue[i]=='n' || yvalue[i]=='m' || yvalue[i]=='Q' || 
yvalue[i]=='W' || yvalue[i]=='E' || yvalue[i]=='R' || yvalue[i]=='T' || 
yvalue[i]=='Y' || yvalue[i]=='U' || yvalue[i]=='I' || yvalue[i]=='O' || 
yvalue[i]=='P' || yvalue[i]=='A' || yvalue[i]=='S' || yvalue[i]=='D' || 
yvalue[i]=='F' || yvalue[i]=='G' || yvalue[i]=='H' || yvalue[i]=='J' || 
yvalue[i]=='K' || yvalue[i]=='L' || yvalue[i]=='Z' || yvalue[i]=='X' || 
yvalue[i]=='C' || yvalue[i]=='V' || yvalue[i]=='B' || yvalue[i]=='N' || 
yvalue[i]=='M' || yvalue[i]=='.' || yvalue[i]==',') {
            lettere = 1; //if lettere == 0 then the equation contains only mnumbers
        }
        if (yvalue[i]=='+' || yvalue[i]=='-' || yvalue[i]=='^' || 
yvalue[i]=='*' || yvalue[i]=='/' || yvalue[i]=='(' || yvalue[i]==')' || 
yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || yvalue[i]=='4' || 
yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || yvalue[i]=='8' || 
yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || yvalue[i]==',') {
            tempyval = tempyval + String(yvalue[i]);
        } else {
            tempy = tempy + String(yvalue[i]);
            for (int i = 0; i < uid.tableWidget->rowCount(); ++i) {
                TableItem *titem = uid.table->item(i, 0);
                TableItem *titemo = uid.table->item(i, 1);
                if (!titem || titem->text().isEmpty()) {
                    break;
                } else {
                    if (tempy == uid.xaxis->text()) {
                        tempyval = uid.xaxis->text();
                        tempy = "";
                    }
                    ... /* some code omitted here */
                    if (tempy!=uid.xaxis->text()) {
                        if (yvalue[i]=='+' || yvalue[i]=='-' 
|| yvalue[i]=='^' || yvalue[i]=='*' || yvalue[i]=='/' || yvalue[i]=='(' || 
yvalue[i]==')' || yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || 
yvalue[i]=='4' || yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || 
yvalue[i]=='8' || yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || 
yvalue[i]==',') {
                            //actually nothing
                        } else {
                            end = 0;
                        }
                    }
                }
            }
        } // simbol end
        if (!tempyval.isEmpty()) {
            mreport = mreport + tempyval;
        }
        tempyval = "";
    }
    return mreport;
}
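
For contrast, here is a minimal sketch (not the submitted code, and the helper name is made up) of how the same "is this character permitted?" test could be written with the standard library:

#include <cctype>
#include <cstring>

// Minimal sketch: letters, digits, the operators + - * / ^ ( ) and the
// separators . and , are the only permitted characters, mirroring the
// giant comparison chains above.
static bool isAllowedChar(char c)
{
    return c != '\0'
        && (std::isalnum(static_cast<unsigned char>(c))
            || std::strchr("+-*/^().,", c) != nullptr);
}

With a helper like that, each of the sixty-odd yvalue[i]=='q' || … chains collapses to a single call per character, and the loop's intent becomes readable.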

Planet DebianJonathan Dowland: Containers lecture

I've repeated last year's docker lecture a couple of times recently, now revised and retitled "Introduction to Containers". The material is mostly the same; the demo steps are exactly the same, and I haven't produced any updated hand-outs this time (sorry). Revised slides: shorter version (terms: CC-BY-SA)

Whilst trying to introduce containers, the approach I've taken is to work up the history of Web site/server/app hosting from physical hosting through Virtual Machines. This gives you the context for their popularity, but I find VMs are not the best way to explain container technology. I prefer to go the other way: look at a process on a multi-user system, the problems caused by the lack of isolation, and steadily build up the isolation available with tools like chroot, etc.

The other area I've tried to expand on is the orchestration layer on top of containers, and the layers above that, including technologies such as Kubernetes and OpenShift. If I deliver this again I'd like to expand this material much more. On that note, a colleague recently forwarded a link to a Google research paper originally published in acmqueue in January 2016, Borg, Omega, and Kubernetes, which is a great read on the history of containers at Google and what led up to the open sourcing of Kubernetes, their third iteration of a container orchestrator.

Planet DebianNorbert Preining: Japan-styled Christmas Cards

A friend of mine, Kimberlee Aliasgar of Trinidad and Tobago, has created very nice Japan-styled Christmas cards over at Xmascardsjapan (Japanese version is here). They pick up some typical themes from Japan and turn them into lovingly designed cards that are a present in themselves, no need for additional presents 😉

Here is another example with “Merry Christmas” written in Katakana, giving a nice touch.

In case you are interested, head over to the English version or Japanese version of their web shop.

I hope you enjoy the cards!


Rondam RamblingsBook review: "A New Map for Relationships: Creating True Love at Home and Peace on the Planet" by Dorothie and Martin Hellman

We humans dream of meeting our soul mates, someone to be Juliet to our Romeo, Harry to our Sally, Jobs to our Woz, Larry to our Sergey.  Sadly, the odds are stacked heavily against us.  If you do the math you will find that in a typical human lifetime we can only hope to meet a tiny fraction of our 7 billion fellow humans.  And if you factor in the time it takes to properly vet someone to see if

Krebs on SecurityThe Market for Stolen Account Credentials

Past stories here have explored the myriad criminal uses of a hacked computer, the various ways that your inbox can be spliced and diced to help cybercrooks ply their trade, and the value of a hacked company. Today’s post looks at the price of stolen credentials for just about any e-commerce, bank site or popular online service, and provides a glimpse into the fortunes that an enterprising credential thief can earn selling these accounts on consignment.

Not long ago in Internet time, your typical cybercriminal looking for access to a specific password-protected Web site would most likely visit an underground forum and ping one of several miscreants who routinely leased access to their “bot logs.”

These bot log sellers were essentially criminals who ran large botnets (collections of hacked PCs) powered by malware that can snarf any passwords stored in the victim’s Web browser or credentials submitted into a Web-based login form. For a few dollars in virtual currency, a ne’er-do-well could buy access to these logs, or else he and the botmaster would agree in advance upon a price for any specific account credentials sought by the buyer.

Back then, most of the stolen credentials that a botmaster might have in his possession typically went unused or unsold (aside from the occasional bank login that led to a juicy high-value account). Indeed, these plentiful commodities held by the botmaster for the most part were simply not a super profitable line of business and so went largely wasted, like bits of digital detritus left on the cutting room floor.

But oh, how times have changed! With dozens of sites in the underground now competing to purchase and resell credentials for a variety of online locations, it has never been easier for a botmaster to earn a handsome living based solely on the sale of stolen usernames and passwords.

If the old adage about a picture being worth a thousand words is true, the one directly below is priceless because it illustrates just how profitable the credential resale business has become.

This screen shot shows the earnings panel of a crook who sells stolen credentials for hundreds of Web sites to a dark web service that resells them. This botmaster only gets paid when someone buys one of his credentials. So far this year, customers of this service have purchased more than 35,000 credentials he’s sold to this service, earning him more than $288,000 in just a few months.

The image shown above is the wholesaler division of “Carder’s Paradise,” a bustling dark web service that sells credentials for hundreds of popular Web destinations. The screen shot above is an earnings panel akin to what you would see if you were a seller of stolen credentials to this service — hence the designation “Seller’s Paradise” in the upper left hand corner of the screen shot.

This screen shot was taken from the logged-in account belonging to one of the more successful vendors at Carder’s Paradise. We can see that in just the first seven months of 2017, this botmaster sold approximately 35,000 credential pairs via the Carder’s Paradise market, earning him more than $288,000. That’s an average of $8.19 for each credential sold through the service.

Bear in mind that this botmaster only makes money based on consignment: Regardless of how much he uploads to Seller’s Paradise, he doesn’t get paid for any of it unless a Seller’s Paradise customer chooses to buy what he’s selling.

Fortunately for this guy, almost 9,000 different customers of Seller’s Paradise chose to purchase one or more of his username and password pairs. It was not possible to tell from this seller’s account how many credential pairs total that he has contributed to this service which went unsold, but it’s a safe bet that it was far more than 35,000.

[A side note is in order here because there is some delicious irony in the backstory behind the screenshot above: The only reason a source of mine was able to share it with me was because this particular seller re-used the same email address and password across multiple unrelated cybercrime services].

Based on the prices advertised at Carder’s Paradise (again, Carder’s Paradise is the retail/customer side of Seller’s Paradise) we can see that the service on average pays its suppliers about half what it charges customers for each credential. The average price of a credential for more than 200 different e-commerce and banking sites sold through this service is approximately $15.

Part of the price list for credentials sold at this dark web ID theft site.

Indeed, fifteen bucks is exactly what it costs to buy stolen logins for airbnb.com, comcast.com, creditkarma.com, logmein.com and uber.com. A credential pair from AT&T Wireless — combined with access to the victim’s email inbox — sells for $30.

The most expensive credentials for sale via this service are those for the electronics store frys.com ($190). I’m not sure why these credentials are so much more expensive than the rest, but it may be because thieves have figured out a reliable and very profitable way to convert stolen frys.com customer credentials into cash.

Usernames and passwords to active accounts at military personnel-only credit union NavyFederal.com fetch $60 apiece, while credentials to various legal and data aggregation services from Thomson Reuters properties command a $50 price tag.

The full price list of credentials for sale by this dark web service is available in this PDF. For CSV format, see this link. Both lists are sorted alphabetically by Web site name.

This service doesn’t just sell credentials: It also peddles entire identities — indexed and priced according to the unwitting victim’s FICO score. An identity with a perfect credit score (850) can demand as much as $150.

Stolen identities with high credit scores fetch higher prices.

And of course this service also offers the ability to pull full credit reports on virtually any American — from all three major credit bureaus — for just $35 per bureau.

It costs $35 through this service to pull someone’s credit file from the three major credit bureaus.

Plenty of people began freaking out earlier this year after a breach at big-three credit bureau Equifax jeopardized the Social Security Numbers, dates of birth and other sensitive data on more than 145 million Americans. But as I have been trying to tell readers for many years, this data is broadly available for sale in the cybercrime underground on a significant portion of the American populace.

If the threat of identity theft has you spooked, place a freeze on your credit file and on the file of your spouse (you may even be able to do this for your kids). Credit monitoring is useful for letting you know when someone has stolen your identity, but these services can’t be counted on to stop an ID thief from opening new lines of credit in your name.

They are, however, useful for helping to clean up identity theft after-the-fact. This story is already too long to go into the pros and cons of credit monitoring vs. freezes, so I’ll instead point to a recent primer on the topic and urge readers to check it out.

Finally, it’s a super bad idea to re-use passwords across multiple sites. KrebsOnSecurity this year has written about multiple, competing services that sell or sold access to billions of usernames and passwords exposed in high profile data breaches at places like Linkedin, Dropbox and Myspace. Crooks pay for access to these stolen credential services because they know that a decent percentage of Internet users recycle the same password at multiple sites.

One alternative to creating and remembering strong, lengthy and complex passwords for every important site you deal with is to outsource this headache to a password manager.  If the online account in question allows 2-factor authentication (2FA), be sure to take advantage of that.

Two-factor authentication makes it much harder for password thieves (or their customers) to hack into your account just by stealing or buying your password: If you have 2FA enabled, they also would need to hack that second factor (usually your mobile device) before being able to access your account. For a list of sites that support 2FA, check out twofactorauth.org.

CryptogramLessons Learned from the Estonian National ID Security Flaw

Estonia recently suffered a major flaw in the security of their national ID card. This article discusses the fix and the lessons learned from the incident:

In the future, the infrastructure dependency on one digital identity platform must be decreased, the use of several alternatives must be encouraged and promoted. In addition, the update and replacement capacity, both remote and physical, should be increased. We also recommend the government to procure the readiness to act fast in force majeure situations from the eID providers. While deciding on the new eID platforms, the need to replace cryptographic primitives must be taken into account -- particularly the possibility of the need to replace algorithms with those that are not even in existence yet.

Worse Than FailurePromising Equality

One can often hear the phrase, “modern JavaScript”. This is a fig leaf, meant to cover up a sense of shame, for JavaScript has a bit of a checkered past. It started life as a badly designed language, often delivering badly conceived features. It has a reputation for slowness, crap code, and things that make you go “wat?”

Thus, “modern” JavaScript. It’s meant to be a promise that we don’t write code like that any more. We use the class keyword and transpile from TypeScript and write fluent APIs and use promises. Yes, a promise to use promises.

Which brings us to Dewi W, who just received some code from contractors. It has some invocations that look like this:

safetyLockValidator(inputValue, targetCode).then(function () {
        // You entered the correct code.
}).catch(function (err) {
        // You entered the wrong code.
})

The use of then and catch, in this context, tells us that they’re using a Promise, presumably to wrap up some asynchronous operation. When the operation completes successfully, the then callback fires, and if it fails, the catch callback fires.

But, one has to wonder… what exactly is safetyLockValidator doing?

safetyLockValidator = function (input, target) {
        return new Promise(function (resolve, reject) {
                if (input === target)
                        return resolve()
                else return reject('Wrong code');
        })
};

It’s just doing an equality test. If input equals target, the Promise resolve()s, i.e. it completes successfully. Otherwise, it reject()s. Well, at least it’s future-proofed against the day we switch to using an EaaS platform: “Equality as a Service”.


Planet Linux AustraliaColin Charles: Percona Live Santa Clara 2018 CFP

Percona Live Santa Clara 2018 call for papers ends fairly soon — December 22 2017. It may be extended, but I suggest getting a topic in ASAP so the conference committee can view everything fairly and quickly. Remember this conference is bigger than just MySQL, so please submit topics on MongoDB, other databases like PostgreSQL, time series, etc., and of course MySQL.

What are you waiting for? Submit TODAY!
(It goes without saying that speakers get a free pass to attend the event.)

Don Martiquick question on tracking protection

One quick question for anyone who still isn't convinced that tracking protection needs to be a high priority for web browsers in 2018. Web tracking isn't just about items from your online shopping cart following you to other sites. Users who are vulnerable to abusive practices for health or other reasons have tracking protection needs too.

Screenshot from the American Cancer Society site, showing 24 web trackers

Who has access to the data from each of the 24 third-party trackers that appear on the American Cancer Society's Find Cancer Treatment and Support page, and for what purposes can they use the data?


Planet DebianRuss Allbery: End of an FTP era

I just turned off anonymous FTP service on ftp.eyrie.org.

It's bittersweet, since I've been running an anonymous FTP server since some time around 1996 (longer than HTTP has been a widely-used thing), and at ftp.eyrie.org for nearly that long. The original service was wu-ftpd, as one did at the time, but it's been vsftpd for the past decade plus. (Amusingly, I now work for the author of vsftpd.)

All of the data is still there, at archives.eyrie.org as has been the case for more than a decade. I doubt anyone but me and a few people with ancient bookmarks will ever notice. The whole world switched over to HTTP many years ago, and about the only thing that ever connected to the anonymous FTP server was search engines. I was keeping it running out of nostalgia.

Explaining why I finally pulled the plug requires a bit of background on the FTP protocol. Many of those reading this may already be familiar, but I bet some people aren't, and it's somewhat interesting. The short version is that FTP is a very old protocol from a much different era of the Internet, and it does things in some very odd ways that are partly incompatible with modern networking.

FTP uses two separate network connections between the client and server: a control channel and a data channel. The client sends commands to the server (directory navigation and file upload and download commands, for example) over the control channel. Any data, including directory listings, is sent over the data channel, instead of in-line in the control channel the way almost every other protocol works.

One way to do the data transfer is for the client to send a PORT command to the server before initiating a data transfer, telling the server the local port on which the client was listening. The FTP server would then connect back to the client on that port, using a source port of 20, to send the data. This is called active mode.

This, of course, stopped working as soon as NAT and firewalls became part of networking and servers couldn't connect to clients. (It also has some security issues. Search for FTP bounce attack if you're curious.) Nearly the entire FTP world therefore switched to a different mechanism: passive mode. (This was in the protocol from very early on, but extremely old FTP servers sometimes didn't support it.) In this mode, the client would send the PASV command (EPSV in later versions with IPv6 support), and the server would respond with the ephemeral port on the server to use for data transfer. The client would then open a second connection to the server on that port for the data transfer.
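
To make that concrete, the PASV reply encodes the server's address and data port as six decimal numbers, and the client recovers the port from the last two. A small sketch (illustrative only, not taken from any particular FTP client):

#include <cstdio>
#include <string>

// The server answers PASV with a line like
//   227 Entering Passive Mode (192,168,1,10,19,136)
// where the last two numbers encode the TCP data port as p1 * 256 + p2.
int pasv_data_port(const std::string &reply)
{
    int h1, h2, h3, h4, p1, p2;
    if (std::sscanf(reply.c_str(), "%*[^(](%d,%d,%d,%d,%d,%d)",
                    &h1, &h2, &h3, &h4, &p1, &p2) != 6)
        return -1;            // not a well-formed 227 reply
    return p1 * 256 + p2;     // e.g. 19,136 -> port 5000
}

This is exactly the reply that a firewall's FTP helper has to fish out of the control channel to learn which data port to open, which is where the conntrack machinery described below comes in.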

Everything is now fine for the client: it just opens multiple connections to the same server on different ports. The problem is the server firewall. On the modern Internet, you don't want to allow any host on the Internet to open connections to arbitrary ports on the server, even ephemeral ports, for defense in depth against exposing some random service that happens to be running on that port. In standard FTP implementations, there's also no authentication binding between the ports, so some other client could race a client to its designated data port.

You therefore need some way to tell the firewall to allow a client to connect to its provided data port, but not any other port. With iptables, this is done by using the conntrack module and a related port rule. A good implementation has to look inside the contents of the control channel traffic and look for the reply to a PASV or EPSV command to extract the port number. The related port rule will then allow connections to that port from the client for as long as the main control channel lasts.

This has mostly worked for some time, but it's complicated, requires loading several other kernel modules to do this packet inspection, and requires using conntrack, which itself causes issues for some servers because it has to maintain a state table of open connections that has a limited size in the kernel. This conntrack approach also has other security issues around matching the wrong protocol (there's a ton of good information in this article), so modern Linux kernels require setting up special raw iptables rules to enable the correct conntrack helper. I got this working briefly in Debian squeeze with a separate ExecStartPre command for vsftpd to set up the iptables magic, but then it stopped working again for some reason that I never diagnosed.

I probably could get this working again by digging deeper into how the complex conntrack machinery works, but on further reflection, I decided to just turn the service off. It's had a good run, I don't think anyone uses it, and while this corner of Linux networking is moderately interesting, I don't have the time to invest in staying current. So I've updated all of my links to point to HTTP instead and shut the server down today.

Goodbye FTP! It's been a good run, and I'll always have a soft spot in my heart for you.

Planet DebianDirk Eddelbuettel: littler 0.3.3

max-heap image

The fourth release of littler as a CRAN package is now available, following in the now more than ten-year history of the package started by Jeff in 2006 and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. In my very biased eyes it is better, as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently and still starts faster. Last but not least it is also less silly than Rscript and always loads the methods package, avoiding those bizarro bugs between code running in R itself and a scripting front-end.

littler prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

A few usage examples are highlighted at the GitHub repo.

This release brings a few new examples scripts, extends a few existing ones and also includes two fixes thanks to Carl. Again, no internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.3 (2017-12-17)

  • Changes in examples

    • The script installGithub.r now correctly uses the upgrade argument (Carl Boettiger in #49).

    • New script pnrrs.r to call the package-native registration helper function added in R 3.4.0

    • The script install2.r now has more robust error handling (Carl Boettiger in #50).

    • New script cow.r to use R Hub's check_on_windows

    • Scripts cow.r and c4c.r use #!/usr/bin/env r

    • New option --fast (or -f) for scripts build.r and rcc.r for faster package build and check

    • The build.r script now defaults to using the current directory if no argument is provided.

    • The RStudio getters now use the rvest package to parse the webpage with available versions.

  • Changes in package

    • Travis CI now uses https to fetch script, and sets the group

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian as well as soon via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianLars Wirzenius: The proof is in the pudding

I wrote these when I woke up one night and had trouble getting back to sleep, and spent a while in a very philosophical mood thinking about life, success, and productivity as a programmer.

Imagine you're developing a piece of software.

  • You don't know it works, unless you've used it.

  • You don't know it's good, unless people tell you it is.

  • You don't know you can do it, unless you've already done it.

  • You don't know it can handle a given load, unless you've already tried it.

  • The real bottlenecks are always a surprise, the first time you measure.

  • It's not ready for production until it's been used in production.

  • Your automated tests always miss something, but with only manual tests, you always miss more.

Don MartiForbidden words

You know how the US government's Centers for Disease Control and Prevention is now forbidden from using certain words?

vulnerable
entitlement
diversity
transgender
fetus
evidence-based
science-based

(source: Washington Post)

Well, in order to help slow down the spread of political speech enforcement that is apparently stopping all of us cool innovator type people from saying the Things We Can't Say, here's a Git hook to make sure that every time you blog, you include at least one of the forbidden words.

If you blog without including one of the forbidden words, you're obviously internalizing censorship and need more freedom, which you can maybe get by getting out of California for a while. After all, a lot of people here seem to think that "innovation" is building more creepy surveillance as long as you call it "growth hacking" or writing apps to get members of the precariat to do the stuff that your Mom used to do for you.

You only have to include one forbidden word every time you commit a blog entry, not in every file. You only need forbidden words in blog entries, not in scripts or templates. You can always get around the forbidden word check with the --no-verify command-line option.

Suggestions and pull requests welcome. script on GitHub

Planet DebianPetter Reinholdtsen: Cura, the nice 3D print slicer, is now in Debian Unstable

After several months of working and waiting, I am happy to report that the nice and user friendly 3D printer slicer software Cura just entered Debian Unstable. It consists of six packages: cura, cura-engine, libarcus, fdm-materials, libsavitar and uranium. The last two, uranium and cura, entered Unstable yesterday. This should make it easier for Debian users to print on at least the Ultimaker class of 3D printers. My nearest 3D printer is an Ultimaker 2+, so it will make life easier for at least me. :)

The work to make this happen was done by Gregor Riepl, and I was happy to assist him in sponsoring the packages. With the introduction of Cura, Debian is up to three 3D printer slicers at your service, Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D printer, give it a go. :)

The 3D printer software is maintained by the 3D printer Debian team, flocking together on the 3dprinter-general mailing list and the #debian-3dprinting IRC channel.

The next step for Cura in Debian is to update the cura package to version 3.0.3 and then update the entire set of packages to version 3.1.0, which showed up in the last few days.


Planet DebianRuss Allbery: Saving window position in Xfce session

TLDR: If you're having problems saving window position in your Xfce session, enable save on logout and then log out and back in. This will probably fix the problem (permanently, if you like keeping the same session and turn saving back off again). See below for the details.

I've been using Xfce for my desktop for some years now, and have had a recurring problem with saved sessions after a reboot. After logging in, all the applications from my saved session would be started, but all the workspace and window positioning data would be lost, so they'd just pile onto the default workspace like a train wreck.

Various other people on-line have reported this over the years (there are open bugs in Ubuntu, Xfce, and Red Hat bug trackers), and there was apparently a related bug fixed in Xfce 4.10, but I'm using 4.12. I would have given up (and have several times in the past), except that on one of my systems this works correctly. All the windows go back to their proper positions.

Today, I dug into the difference and finally solved it. Here it is, in case someone else stumbles across it.

Some up-front caveats that are or may be related:

  1. I rarely log out of my Xfce session, since this is a single-user laptop. I hibernate and keep restoring until I decide to do a reboot for kernel patches, or (and this is somewhat more likely) some change to the system invalidates the hibernate image and the system hangs on restore from hibernate and I force-reboot it. I also only sometimes use the Xfce toolbar to do a reboot; often, I just run reboot.

  2. I use xterm and Emacs, which are not horribly sophisticated X applications and which don't remember their own window positioning.

Xfce stores sessions in .cache/sessions in your home directory. The key discovery on close inspection is that there were two types of files in that directory on the working system, and only one on the non-working system.

The typical file will have a name like xfce4-session-hostname:0 and contains things like:

Client9_ClientId=2a654109b-e4d0-40e4-a910-e58717faa80b
Client9_Hostname=local/hostname
Client9_CloneCommand=xterm
Client9_RestartCommand=xterm,-xtsessionID,2a654109b-e4d0-40e4-a910-e58717faa80b
Client9_Program=xterm
Client9_UserId=user

This is the file that remembers all of the running applications. If you go into Settings -> Session and Startup and clear the session cache, files like this will be deleted. If you save your current session, a file like this will be created. This is how Xfce knows to start all of the same applications. But notice that nothing in the above preserves the positioning of the window. (I went down a rabbit hole thinking the session ID was somehow linking to that information elsewhere, but it's not.)

The working system had a second type of file in that directory named xfwm4-2d4c9d4cb-5f6b-41b4-b9d7-5cf7ac3d7e49.state. Looking in that file reveals entries like:

[CLIENT] 0x200000f
  [CLIENT_ID] 2a9e5b8ed-1851-4c11-82cf-e51710dcf733
  [CLIENT_LEADER] 0x200000f
  [RES_NAME] xterm
  [RES_CLASS] XTerm
  [WM_NAME] xterm
  [WM_COMMAND] (1) "xterm"
  [GEOMETRY] (860,35,817,1042)
  [GEOMETRY-MAXIMIZED] (860,35,817,1042)
  [SCREEN] 0
  [DESK] 2
  [FLAGS] 0x0

Notice the geometry and desk, which are exactly what we're looking for: the window location and the workspace it should be on. So the problem with window position not being saved was the absence of this file.

After some more digging, I discovered that while the first file is saved when you explicitly save your session, the second is not. However, it is saved on logout. So, I went to Settings -> Session and Startup and enabled automatically save session on logout in the General tab, logged out and back in again, and tada, the second file appeared. I then turned saving off again (since I set up my screens and then save them and don't want any subsequent changes saved unless I do so explicitly), and now my window position is reliably restored.

This also explains why some people see this and others don't: some people probably regularly use the Log Out button, and others ignore it and manually reboot (or just have their system crash).

Incidentally, this sort of problem, and the amount of digging that I had to do to solve it, is the reason why I'm in favor of writing man pages or some other documentation for every state file your software stores. Not only does it help people digging into weird problems, it helps you as the software author notice surprising oddities, like splitting session state across two separate state files, when you go to document them for the user.

Planet DebianSteve Kemp: IoT radio: Still in-progress ..

So back in September I was talking about building a IoT Radio, and after that I switched to talking about tracking aircraft via software-defined radio. Perhaps time for a followup.

So my initial attempt at an IoT radio was designed around the RDA5807M module. Frustratingly the damn thing was too small to solder easily! Once I did get it working though I found that either the specs lied to me, or I'd misunderstood them: It wouldn't drive headphones, and performance was poor. (Though amusingly the first time I got it working I managed to tune to Helsinki's rock-station, and the first thing I heard was Rammstein's Amerika.)

I made another attempt with an Si4703-based "evaluation board". This was a board which had most of the stuff wired in, so all you had to do was connect an MCU to it, and do the necessary software dancing. There was a headphone-socket for output, and no need to fiddle with the chip itself, it was all pretty neat.

Unfortunately the evaluation board was perfect for basic use, but not at all suitable for real use. The board did successfully output audio to a pair of headphones, but unfortunately it required the use of headphones, as the cable would be treated as an antenna. As soon as I fed the output of the headphone-jack to an op-amp to drive some speakers I was beset with the kind of noise that makes old people reminisce about how music was better back in their day.

So I'm now up to round 3. I have a TEA5767-based project in the works, which should hopefully resolve my problems:

  • There are explicit output and aerial connections.
  • I know I'll need an amplifier.
  • The hardware is easy to control via arduino/esp8266 MCUs (see the sketch below).
    • Numerous well-documented projects exist using this chip.
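
As a rough illustration of that last point, here is a tuning routine in the style of the common Arduino examples for this chip. The control byte values are assumptions lifted from those hobby sketches rather than anything from this project, so treat it as a sketch and check the datasheet:

#include <Wire.h>

// Sketch only: tune a TEA5767 over I2C (7-bit address 0x60) by writing the
// usual 5-byte block.  The first two bytes carry the 14-bit PLL word
//   N = 4 * (f_RF + 225 kHz) / 32.768 kHz   (high-side injection, XTAL reference)
const uint8_t TEA5767_ADDR = 0x60;

void tuneTo(float mhz)
{
  unsigned long freq = (unsigned long)(mhz * 1000000.0f);
  unsigned int  pll  = (4UL * (freq + 225000UL)) / 32768UL;

  Wire.beginTransmission(TEA5767_ADDR);
  Wire.write((pll >> 8) & 0x3F);   // PLL high bits; the top bit here is the mute flag
  Wire.write(pll & 0xFF);          // PLL low byte
  Wire.write(0xB0);                // search up, high-side injection (assumed value)
  Wire.write(0x10);                // use the 32.768 kHz crystal (assumed value)
  Wire.write(0x00);
  Wire.endTransmission();
}

void setup()
{
  Wire.begin();
  tuneTo(99.0);                    // e.g. tune to 99.0 MHz
}

void loop() {}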

The only downside I can see is that I have to use the op-amp for volume control too: the TEA5767 chip allows you to mute/unmute via software but doesn't allow you to set the volume. Probably for the best.

In unrelated news I've got some e-paper which is ESP8266/arduino controlled. I have no killer-app for it, but it's pretty great. I should write that up sometime.

Rondam RamblingsHere comes the next west coast mega-drought

As long as I'm blogging about extreme weather events I would also like to remind everyone that we just came off of the longest drought in California history, followed immediately by the wettest rainy season in California history.  Now it looks like that history could very well be starting to repeat itself.  The weather pattern that caused the six-year-long Great Drought is starting to form again.

Rondam RamblingsThis should convince the climate skeptics. But it probably won't.

One of the factoids that climate-change denialists cling to is the fact (and it is a fact) that major storms haven't gotten measurably worse.  The damage from storms has gotten measurably worse, but that can be attributed to increased development on coastlines.  It might be that the storms themselves have gotten worse, but the data is not good enough to disentangle the two effects. But storms

Planet DebianDirk Eddelbuettel: drat 0.1.4

drat user

A new version of drat just arrived on CRAN as another no-human-can-delay-this automatic upgrade directly from the CRAN prechecks (though I did need a manual reminder from Uwe to remove a now stale drat repo URL -- bad @hrbrmstr -- from the README in a first attempt).

This release is mostly the work of Neal Fultz who kindly sent me two squeaky-clean pull requests addressing two open issue tickets. As drat is reasonably small and simple, that was enough to motivate a quick release. I also ensured that PACKAGES.rds will always be committed along (if we're in commit mode), which is a follow-up to an initial change from 0.1.3 in September.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.4 (2017-12-16)

  • Changes in drat functionality

    • Binaries for macOS are now split by R version into two different directories (Neal Fultz in #67 addressing #64).

    • The target branch can now be set via a global option (Neal Fultz in #68 addressing #61).

    • In commit mode, add file PACKAGES.rds unconditionally.

  • Changes in drat documentation

    • Updated 'README.md' removing another stale example URL

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cory DoctorowTalking Walkaway on the Barnes and Noble podcast

I recorded this interview last summer at San Diego Comic-Con; glad to hear it finally live!

Authors are, without exception, readers, and behind every book there is…another book, and another. In this episode of the podcast, we’re joined by two writers for conversations about the vital books and ideas that influence and inform their own work. First, Cory Doctorow talks with B&N’s Josh Perilo about his recent novel of an imagined near future, Walkaway, and the difference between a dystopia and a disaster. Then we hear from Will Schwalbe, talking with Miwa Messer about the lifetime of reading behind his book Books for Living: Some Thoughts on Reading, Reflecting, and Embracing Life.


Hubert Vernon Rudolph Clayton Irving Wilson Alva Anton Jeff Harley Timothy Curtis Cleveland Cecil Ollie Edmund Eli Wiley Marvin Ellis Espinoza—known to his friends as Hubert, Etc—was too old to be at that Communist party.


But after watching the breakdown of modern society, he really has nowhere left to be—except amongst the dregs of disaffected youth who party all night and heap scorn on the sheep they see on the morning commute. After falling in with Natalie, an ultra-rich heiress trying to escape the clutches of her repressive father, the two decide to give up fully on formal society—and walk away.


After all, now that anyone can design and print the basic necessities of life—food, clothing, shelter—from a computer, there seems to be little reason to toil within the system.


It’s still a dangerous world out there, the empty lands wrecked by climate change, dead cities hollowed out by industrial flight, shadows hiding predators animal and human alike. Still, when the initial pioneer walkaways flourish, more people join them. Then the walkaways discover the one thing the ultra-rich have never been able to buy: how to beat death. Now it’s war – a war that will turn the world upside down.


Fascinating, moving, and darkly humorous, Walkaway is a multi-generation SF thriller about the wrenching changes of the next hundred years…and the very human people who will live their consequences.

Planet DebianDaniel Lange: IMAPFilter 2.6.11-1 backport for Debian Jessie AMD64 available

One of the perks you get as a Debian Developer is a @debian.org email address. And because Debian is old and the Internet used to be a friendly place this email address is plastered all over the Internet. So you get email spam, a lot of spam.

I'm using a combination of server- and client-side filtering to keep spam at bay. Unfortunately the IMAPFilter version in Debian Jessie doesn't even support "dry run" (-n) which is not so cool when developing complex filter rules. So I backported the latest (sid) version and agreed with Sylvestre Ledru, one of its maintainers, to share it here and see whether making an official backport is worth it. It's a straight recompile so no magic and no source code or packaging changes required.

Get it while it's hot:

imapfilter_2.6.11-1~bpo8+1_amd64.deb (IMAPFilter Jessie backport)
SHA1: bedb9c39e576a58acaf41395e667c84a1b400776

Clever LUA snippets for ~/.imapfilter/config.lua appreciated.

Planet DebianDirk Eddelbuettel: digest 0.6.13

A small maintenance release, version 0.6.13, of the digest package arrived on CRAN and in Debian yesterday.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.

This release accommodates a request by Luke and Tomas to make the version argument of serialize() an argument to digest() too, which was easy enough to do. The value 2L is the current default (and for now only permitted value). The ALTREP changes in R 3.5 will bring us a new, and more powerful, format with value 3L. Changes can be set in each call, or globally via options(). Other than that, we just clarified one aspect of raw vector usage in the manual page.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Don MartiMindless link propagation


Planet DebianMichael Prokop: Usage of Ansible for Continuous Configuration Management

It all started with a tweet of mine:

Screenshot of https://twitter.com/mikagrml/status/941304704004448257

I have received quite a bit of feedback since then and I’d like to expand on it here.

I’ve been a puppet user since ~2008, and since ~2015 ansible has also been part of my sysadmin toolbox. Recently certain ansible setups I’m involved in grew faster than I’d like to see, both in terms of managed hosts/services as well as the size of the ansible playbooks. I like ansible for ad hoc tasks, like `ansible -i ansible_hosts all -m shell -a 'lsb_release -rs'` to get an overview of which distribution releases the systems are running, requiring only a working SSH connection and python on the client systems. ansible-cmdb provides a nice and simple-to-use ad hoc host overview without much effort and overhead. I even have puppetdb_to_ansible scripts to query a puppetdb via its API and generate host lists for usage with ansible on-the-fly. Ansible certainly has its use case for e.g. bootstrapping systems, orchestration and handling deployments.

Ansible has an easier learning curve than e.g. puppet and this might seem to be the underlying reason for its usage for tasks it’s not really good at. To be more precise: IMO ansible is a bad choice for continuous configuration management. Some observations, though YMMV:

  • ansible’s vaults are no real replacement for something like puppet’s hiera (though Jerakia might mitigate at least the pain regarding data lookups)
  • ansible runs are slow, and get slower with every single task you add
  • having a push model with ansible instead of pull (like puppet’s agent mode) implies you don’t get/force regular runs all the time, and your ansible playbooks might just not work anymore once you (have to) touch them again
  • the lack of a DSL results in e.g. each package manager having its own module (apt, dnf, yum, …) and too many ways to do something, resulting more often than not in something I’d tend to call spaghetti code
  • the lack of community modules comparable to Puppet’s Forge
  • the lack of a central DB (like puppetdb) means you can’t do something like with puppet’s exported resources, which is useful e.g. for central ssh hostkey handling, monitoring checks,…
  • the lack of a resources DAG in ansible might look like a welcome simplification in the beginning, but its absence is becoming a problem when complexity and requirements grow (example: delete all unmanaged files from a directory)
  • it’s not easy at all to have ansible run automated and remotely on a couple of hundred hosts without stumbling over anything — Rudolph Bott
  • as complexity grows, the limitations of Ansible’s (lack of a) language become more maddening — Felix Frank

Let me be clear: I’m in no way saying that puppet doesn’t have its problems (side-rant: it took way too long until Debian/stretch was properly supported by puppet’s AIO packages). I had and still have all my ups and downs with it, though in 2017 and especially since puppet v5 it works fine enough for all my use cases at a diverse set of customers. Whenever I can choose between puppet and ansible for continuous configuration management (without having any host-specific restrictions like unsupported architectures, memory limitations,… that puppet wouldn’t properly support) I prefer puppet. Ansible can and does exist as a nice addition next to puppet for me, even if MCollective/Choria is available. Ansible has its use cases, just not for continuous configuration management for me.

The hardest part is to leave some tool behind once you’ve reached the end of its scale. Once you feel like a tool takes more effort than it is worth, you should take a step back and re-evaluate your choices. And quoting Felix Frank:

OTOH, if you bend either tool towards a common goal, you’re not playing to its respective strengths.

Thanks: Michael Renner and Christian Hofstaedtler for initial proof reading and feedback

CryptogramFriday Squid Blogging: Baby Sea Otters Prefer Shrimp to Squid

At least, this one does.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBen Saunders’ solo crossing of Antarctica, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

A solo crossing of Antarctica. With chilling detail, Ben Saunders documents his journey across Antarctica as he attempts to complete the first successful solo, unsupported and unassisted crossing. The journey is a way of honoring his friend Henry Worsley, who died attempting a similar crossing last year. While being attacked by intense winds, Saunders writes of his experiences trekking through the hills, the cold, and the ice, the weight he carries, and even the moments he’s missing, as he wishes his dear friends a jolly and fun wedding day back home. (Watch Saunders’ TED Talk)

The dark side of AI. A chilling new video, “Slaughterbots,” gives viewers a glimpse into a dystopian future where people can be targeted and killed by strangers using autonomous weapons simply for having dissenting opinions. This viral video was the brainchild of TED speaker Stuart Russell and a coalition of AI researchers and advocacy organizations. The video warns viewers that while AI has the potential to solve many of our problems, the dangers of AI weapons must be addressed first. “We have an opportunity to prevent the future you just saw,” Stuart states at the end of the video, “but the window to act is closing fast.” (Watch Russell’s TED Talk)

Corruption investigators in paradise. Charmian Gooch and her colleagues at Global Witness have been poring over the Paradise Papers, a cache of 13.4 million files released by the International Consortium of Investigative Journalists that detail the secret world of offshore financial deals. With the 2014 TED Prize, Gooch wished to end anonymously owned companies, and the Paradise Papers show how this business structure can be used to nefarious ends. Check out Global Witness’ report on how the commodities company Glencore appears to have funneled $45 million to a notorious billionaire middleman in the Democratic Republic of the Congo to help them negotiate mining rights. And their look at how a US-based bank helped one of Russia’s richest oligarchs register a private jet, despite his company being on US sanctions lists. (Watch Gooch’s TED Talk)

A metric for measuring corporate vitality. Martin Reeves, director of the Henderson Institute at BCG, and his colleagues have taken his idea that strategies need strategies and expanded it into the creation of the Fortune Future 50, a categorization of companies based on more than financial data. Companies are divided into “leaders” and “challengers,” with the former having a market capitalization over $20 billion as of fiscal year 2016 and the latter including startups with a market capitalization below $20 billion. However, instead of focusing on rear-view analytics, BCG’s assessment uses artificial intelligence and natural language processing to review a company’s vitality, or their “capacity to explore new options, renew strategy, and grow sustainably,” according to a publication by Reeves and his collaborators. Since only 7% of companies that are market-share leaders are also profit leaders, the analysis can provide companies with a new metric to judge progress. (Watch Reeves’ TED Talk)

The boy who harnessed the wind — and the silver screen. William Kamkwamba’s story will soon reach the big screen via the upcoming film The Boy Who Harnessed the Wind. Kamkwamba built a windmill that powered his home in Malawi with no formal education. He snuck into a library, deciphered physics on his own, and trusted his intuition that he had an idea he could execute. His determination ultimately saved his family from a deadly famine. (Watch Kamkwamba’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #137

Here's what happened in the Reproducible Builds effort between Sunday December 3 and Saturday December 9 2017:

Documentation update

There was more discussion on different logos being proposed for the project.

Reproducible work in other projects

Cyril Brulebois wrote about Tails' work on reproducibility

Gabriel Scherer submitted a pull request to the OCaml compiler to honour the BUILD_PATH_PREFIX_MAP environment variable.

Packages reviewed and fixed

Patches filed upstream:

  • Bernhard M. Wiedemann:
  • Eli Schwartz:
  • Foxboron
    • gopass: - use SOURCE_DATE_EPOCH in Makefile
  • Jelle
    • PHP: - use SOURCE_DATE_EPOCH for Build Date
  • Chris Lamb:
    • pylint - file ordering, nondeterministic data structure
    • tlsh - clarify error message (via diffoscope development)
  • Alexander "lynxis" Couzens:

Patches filed in Debian:

Patches filed in OpenSUSE:

  • Bernhard M. Wiedemann:
    • build-compare (merged) - handle .egg as .zip
    • neovim (merged) - hostname, username
    • perl (merged) - date, hostname, username
    • sendmail - date, hostname, username

Patches filed in OpenWRT:

  • Alexander "lynxis" Couzens:

Reviews of unreproducible packages

17 package reviews have been added, 31 have been updated and 43 have been removed this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (13)
  • Andreas Beckmann (2)
  • Emilio Pozuelo Monfort (3)

reprotest development

  • Santiago Torres:
    • Use uname -m instead of arch.

trydiffoscope development

Version 66 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts of the previous weeks as well as new ones from:

  • Chris Lamb:
    • Parse dpkg-parsechangelog instead of hard-coding version
    • Bump Standards-Version to 4.1.2
    • flake8 formatting

reproducible-website development

tests.reproducible-builds.org

reproducible Arch Linux:

reproducible F-Droid:

Misc.

This week's edition was written by Ximin Luo, Alexander Couzens, Holger Levsen, Chris Lamb, Bernhard M. Wiedemann and Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityFormer Botmaster, ‘Darkode’ Founder is CTO of Hacked Bitcoin Mining Firm ‘NiceHash’

On Dec. 6, 2017, approximately USD $52 million worth of Bitcoin mysteriously disappeared from the coffers of NiceHash, a Slovenian company that lets users sell their computing power to help others mine virtual currencies. As the investigation into the heist nears the end of its second week, many NiceHash users have expressed surprise to learn that the company’s chief technology officer recently served several years in prison for operating and reselling a massive botnet, and for creating and running “Darkode,” until recently the world’s most bustling English-language cybercrime forum.

In December 2013, NiceHash CTO Matjaž Škorjanc was sentenced to four years, ten months in prison for creating the malware that powered the ‘Mariposa‘ botnet. Spanish for “Butterfly,” Mariposa was a potent crime machine first spotted in 2008. Very soon after, Mariposa was estimated to have infected more than 1 million hacked computers — making it one of the largest botnets ever created.

An advertisement for the ButterFly Flooder, a crimeware product based on the ButterFly Bot.

ButterFly Bot, as it was more commonly known to users, was a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. The ButterFly Bot kit sold for prices ranging from $500 to $2,000.

Prior to his initial arrest in Slovenia on cybercrime charges in 2010, Škorjanc was best known to his associates as “Iserdo,” the administrator and founder of the exclusive cybercrime forum Darkode.

A message from Iserdo warning Butterfly Bot subscribers not to try to reverse his code.

On Darkode, Iserdo sold his Butterfly Bot to dozens of other members, who used it for a variety of illicit purposes, from stealing passwords and credit card numbers from infected machines to blasting spam emails and hijacking victim search results. Microsoft Windows PCs infected with the bot would then try to spread the disease over MSN Instant Messenger and peer-to-peer file sharing networks.

In July 2015, authorities in the United States and elsewhere conducted a global takedown of the Darkode crime forum, arresting several of its top members in the process. The U.S. Justice Department at the time said that out of 800 or so crime forums worldwide, Darkode represented “one of the gravest threats to the integrity of data on computers in the United States and around the world and was the most sophisticated English-speaking forum for criminal computer hackers in the world.”

Following Škorjanc’s arrest, Slovenian media reported that his mother Zdenka Škorjanc was accused of money laundering; prosecutors found that several thousand euros were sent to her bank account by her son. That case was dismissed in May of this year after prosecutors conceded she probably didn’t know how her son had obtained the money.

Matjaž Škorjanc did not respond to requests for comment. But local media reports state that he has vehemently denied any involvement in the disappearance of the NiceHash stash of Bitcoins.

In an interview with Slovenian news outlet Delo.si, the NiceHash CTO described the theft “as if his kid was kidnapped and his extremities would be cut off in front of his eyes.” A roughly-translated English version of that interview has been posted to Reddit.

According to media reports, the intruders were able to execute their heist after stealing the credentials of a user with administrator privileges at NiceHash. Less than an hour after breaking into the NiceHash servers, approximately 4,465 Bitcoins were transferred out of the company’s accounts.

NiceHash CTO Matjaž Škorjanc, as pictured on the front page of a recent edition of the Slovenian daily Delo.si

A source close to the investigation told KrebsOnSecurity that the NiceHash hackers used a virtual private network (VPN) connection with a Korean Internet address, although the source said Slovenian investigators were reluctant to say whether that meant South Korea or North Korea because they did not want to spook the perpetrators into further covering their tracks.

CNN, Bloomberg and a number of other Western media outlets reported this week that North Korean hackers have recently doubled down on efforts to steal, phish and extort Bitcoins as the price of the currency has surged in recent weeks.

“North Korean hackers targeted four different exchanges that trade bitcoin and other digital currencies in South Korea in July and August, sending malicious emails to employees, according to police,” CNN reported.

Bitcoin’s blockchain ledger system makes it easy to see when funds are moved, and NiceHash customers who lost money in the theft have been keeping a close eye on the Bitcoin payment address that received the stolen funds ever since. On Dec. 13, someone in control of that account began transferring the stolen bitcoins to other accounts, according to this transaction record.

The NiceHash theft occurred as the price of Bitcoin was skyrocketing to new highs. On January 1, 2017, a single Bitcoin was worth approximately $976. By December 6, the day of the NiceHash hack, the price had ballooned to $11,831 per Bitcoin.

Today, a single Bitcoin can be sold for more than $17,700, meaning whoever is responsible for the NiceHash hack has seen their loot increase in value by roughly $27 million in the nine days since the theft.

In a post on its homepage, NiceHash said it was in the final stages of re-launching the surrogate mining service.

“Your bitcoins were stolen and we are working with international law enforcement agencies to identify the attackers and recover the stolen funds. We understand it may take some time and we are working on a solution for all users that were affected.

“If you have any information about the attack, please email us at [email protected]. We are giving BTC rewards for the best information received. You can also join our community page about the attack on reddit.

However, many followers of NiceHash’s Twitter account said they would not be returning to the service unless and until their stolen Bitcoins were returned.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 144 work hours have been dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
  • Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
  • Brian May did 10 hours.
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
  • Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
  • Hugo Lefeuvre did 13h.
  • Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.
  • Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
  • Ola Lundqvist did 7h.
  • Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
  • Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for December).
  • Thorsten Alteholz did 13 hours.

About external support partners

You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours made in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.

In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.

Evolution of the situation

The number of sponsored hours did not change and stays at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file lists 35 (we’re a bit behind in CVE triaging, apparently).

Thanks to our sponsors

New sponsors are in bold.


Planet DebianMichal Čihař: Weblate 2.18

Weblate 2.18 has been released today. The biggest improvement is probably reviewer based workflow, but there are some other enhancements as well.

Full list of changes:

  • Extended contributor stats.
  • Improved configuration of special chars virtual keyboard.
  • Added support for DTD file format.
  • Changed keyboard shortcuts to less likely collide with browser/system ones.
  • Improved support for approved flag in Xliff files.
  • Added support for not wrapping long strings in Gettext po files.
  • Added button to copy permalink for current translation.
  • Dropped support for Django 1.10 and added support for Django 2.0.
  • Removed locking of translations while translating.
  • Added support for adding new units to monolingual translations.
  • Added support for translation workflows with dedicated reviewers.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org; the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as an official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence it by expressing support for individual issues, either with comments or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

TEDExploring the boundaries of legacy at TED@Westpac

Cyndi Stivers and Adam Spencer host TED@Westpac — a day of talks and performances themed around “The Future Legacy” — in Sydney, Australia, on Monday, December 11th. (Photo: Jean-Jacques Halans / TED)

Legacy is a delightfully complex concept, and it’s one that the TED@Westpac curators took on with gusto for the daylong event held in Sydney, Australia, on Monday December 11th. Themed around the idea of “The Future Legacy,” the day was packed with 15 speakers and two performers and hosted by TED’s Cyndi Stivers and TED speaker and monster prime number aficionado Adam Spencer. Topics ranged from education to work-health balance to designer babies to the importance of smart conversations around death.

For Westpac managing director and CEO Brian Hartzer, the day was an opportunity both to think back over the bank’s own 200-year legacy — and a chance for all gathered to imagine a bold new future that might suit everyone. He welcomed talks that explored ideas and stories that may shape a more positive global future. “We are so excited to see the ripple effect of your ideas from today,” he told the collected speakers before introducing Aboriginal elder Uncle Ray Davison to offer the audience a traditional “welcome to country.”

And with that, the speakers were up and running.

“Being an entrepreneur is about creating change,” says Linda Zhang. She suggests we need to encourage the entrepreneurial mindset in high-schoolers. (Photo: Jean-Jacques Halans / TED)

Ask questions, challenge the status quo, build solutions. Who do you think of when you hear the word “entrepreneur?” Steve Jobs, Mark Zuckerberg, Elon Musk and Bill Gates might come to mind. What about a high school student? Linda Zhang might just have graduated herself but she’s been taking entrepreneurial cues from her parents, who started New Zealand’s second-largest thread company. Zhang now runs a program to pair students with industry mentors and get them to work for 48 hours on problems they actually want to solve. The results: a change in mindset that could help prepare them for a tumultuous but opportunity-filled job market. “Being an entrepreneur is about creating change,” Zhang says. “This is what high school should be about … finding things you care about, having the curiosity to learn about those things and having the drive to take that knowledge and implement it into problems you care about solving.”

Should we bribe kids to study math? In this sparky talk, Mohamad Jebara shares a favorite quote from fellow mathematician Francis Su: “We study mathematics for play, for beauty, for truth, for justice, and for love.” Only problem: kids today, he says, often don’t tend to agree, instead finding math “difficult and boring.” Jebara has a counterintuitive potential solution: he wants to bribe kids to study math. His financial incentive plan works like this: his company charges parents a monthly subscription fee; if students complete their weekly math goal then the program refunds that amount of the fee directly into the student’s bank account; if not, the company pockets the profit. Ultimately, Jebara wants kids to discover math’s intrinsic worth and beauty, but until they get there, he’s happy to pay them. And this isn’t just about his own business model. “Unless we find a way to improve student engagement with mathematics, we’ll have not only a huge skills shortage crisis, but a fickle population easily manipulated by whoever can get the most airtime,” he says.

You, cancer and the workplace. When lawyer Sarah Donnelly was diagnosed with breast cancer, she turned to her friends and family for support — but she also sought refuge in her work. “My job and my coworkers would make me feel valuable and human at times when I would have otherwise felt like a statistic,” she says. “Work gave me focus and stability when I was dealing with so many unknowns and difficult personal decisions.” But, she says, not all employers realize that work can be a sanctuary for the sick, and often — believing themselves polite and thoughtful — cast out their employees. Now, Donnelly is striving to change the experiences of individuals coping with serious illness — and the perceptions others might have of them. Together with a colleague, she created a “Working with Cancer” toolkit that provides a framework and guidance for all those professionally involved in an employee’s life, and she is traveling to different companies around Australia to implement it.

Digital strategist Will Jenkins says we need to think about what we really want from life, not just our day-to-day. (Photo: Jean-Jacques Halans / TED)

The connection between time and money. We all need more time, says digital strategist Will Jenkins, and historically we’ve developed systems and technologies to save time for ourselves and others by reducing waste and inefficiency. But there’s a problem: even after spending centuries trying to perfect time-saving techniques, it too often still doesn’t feel like we’re getting anywhere. “As individuals, we’re busier than ever,” Jenkins points out, before calling for us to look beyond specialized techniques to think about what we actually really want from life itself, not just our day-to-day. In taking a holistic approach to time, we might, he says, channel John Maynard Keynes to figure out new ways that will allow all of us “to live wisely, agreeably, and well.”

Creating a digital future for Australia’s first people. Indigenous Australian David Unaipon (1862-1967) was called his country’s Leonardo da Vinci — he was responsible for at least 19 inventions, including a tool that led to modern sheep shears. But according to Westpac business analyst Michael Mieni, we need to find better ways to encourage future Unaipons. Right now, he says, too many Aboriginals are on the far side of the digital divide, lacking access to computers and the Internet as well as basic schooling in technology. Mieni was the first indigenous IT honors student at the University of Technology Sydney and he makes the case that tech-savvy Aboriginals are badly needed to serve as role models and teachers, as inventors of ways to record and promote their culture and as guardians of their people’s digital rights. “What if the next ground-breaking idea is already in the mind of a young Aboriginal student but will never surface because they face digital disadvantage or exclusion?” he asks. Everyone in Australia — not just the first people — gains when every citizen has the opportunity and resources to become digitally literate.

Shade Zahrai and Aric Yegudkin perform a gorgeous, sensual dance at TED@Westpac. (Photo: Jean-Jacques Halans / TED)

The beauty of a dance duet. “Partner dance embodies the coming together of two people,” Shade Zahrai‘s voice whispers to a dark auditorium as she and her partner take the TED stage. In the middle of session one, the pair perform a gorgeous and sensual modern dance, complete with Zahrai’s recorded voiceover explaining the coordination and unity that partner dance requires of its participants.

The power of inclusiveness. Inclusion strategist Hayley Yeates shares how her identity as a proud Australian was dimmed by prejudice shown towards her by those who saw her as Asian. When in school, she says, fellow students didn’t want to associate with her in classrooms, while she didn’t add a picture to her LinkedIn profile for fear her race would deem her less worthy of a job. But Yeates focuses on more than the personal stories of those who’ve been dubbed an outsider, and makes the case that diversity leads to innovation and greater profitability for companies. She calls for us all to sponsor safe spaces where authentic, unrestrained conversations about the barriers faced by cultural minorities can be held freely. And she invites leaders to think about creating environments where people’s whole selves can work, and where an organization can thrive because of, not in spite of, its employees’ differences.

Olivia Tyler tracks the complexity of global supply chains, looking to develop smart technology that can allow both corporations and consumers to understand buying decisions. (Photo: Jean-Jacques Halans / TED)

How to do yourself out of a job. As a sustainability practitioner, Olivia Tyler is trying hard to develop systems that will put her out of work. Why? For the good of us all, of course. And how? By encouraging all of us to ask questions about where what we buy, wear or eat comes from. Tyler tracks the fiendish complexity of today’s global supply chains, and she is attempting to develop smart technology that can allow both corporations and consumers to have the visibility they need to understand the buying decisions they make. When something as ostensibly simple as a baked good can include hundreds of data points about the ingredients it contains — a cake can be a minefield, she jokes — it’s time to open up the cupboard and use tech such as the blockchain to crack open the sustainability code. “We can adopt new and exciting ways to change the game on how we conduct ourselves as corporates and consumers across our increasingly smaller world,” she promises.

Can machine intelligence liberate human purpose? Much has been made of the threat robots pose to the very existence of certain jobs, with some estimates reckoning that as many as 80% of low-skill jobs have already been automated. Self-styled “datapreneur” Tomer Garzberg shares how he researched 11,000 of the world’s most widely held jobs to create the “Short-Term Automation Susceptibility Index” to identify the types of role that might be up for automation next. Perhaps unsurprisingly, highly specialized roles held by those such as neurosurgeons, chemical engineers and, well, acrobats face the least risk of being automated, while even senior blue collar positions or standard white collar roles such as pharmacists, accountants and health inspectors can expect a 25% shrinkage over the next 10 years. But Garzberg believes that we can — must — embrace this cybernated future. “Prepare your family to be okay with change, as uncomfortable as it may be,” he says. “We’ll likely be switching careers far more frequently in the near future.”

Everything’s gonna be alright. After a quick break and a breather, Westpac’s own Rowan Fitzpatrick and his band Heart of Mind played in session two with a sweet, uplifting rock ballad about better days and leaning on one another with love and hope. “Keep looking forward / Don’t lose your grip / One step at a time,” the trained jazz singer croons.

Alastair O’Neill shares the ethical wrangling his family undertook as they figured out how they felt about potentially eradicating a debilitating disease with gene editing. (Photo: Jean-Jacques Halans / TED)

You have the ability to end a hereditary disease. Do you take it? “Recently I had to sign a form promising that I wouldn’t have sex with my wife,” says a deadpan Alastair O’Neill as he kicks off the session’s talks. “Why? Because we decided to have a baby.” He waits a beat. “Let me rewind.” As the audience settles in for a rollercoaster talk of emotional highs and lows, he explains his family’s journey through the ethical minefield of embryonic genetic testing, also known as preimplantation genetic diagnosis or PGD. It was a journey prompted by a hereditary condition in his wife’s family — his father-in-law Phil had inherited the gene for retinal dystrophy and was declared legally blind at 30 years old. The odds that his own young family would have a baby either carrying or inheriting the disease were as high as one in two. In this searingly personal talk, O’Neill shares the ups and downs of both the testing process and the ethical wrangling that their entire family undertook as they tried to figure out how they felt about potentially eradicating a debilitating disease. Spoiler alert: O’Neill is in favor. “PGD gives couples the ability to choose to end a hereditary disease,” he says. “I think we should give every potential parent that choice.”

A game developer’s solution to the housing crisis. When Sarah Murray wanted to buy her first house, she discovered that home prices far exceeded her budget — and building a new house would be prohibitively costly and time-consuming. Frustrated by her lack of self-determination, Murray decided to create a computer game to give control back to buyers. The program allows you to design all aspects of your future home (even down to price and environmental impact) and then delivers the final product directly to you in modular components that can be assembled onsite. Murray’s innovative idea both cuts costs and makes for more sustainable dwellings; the first physical houses should be ready by 2018. But the digital housing developer isn’t done yet. Now she is working on adapting the program and investing in construction techniques such as 3D printing so that when a player designs and builds a home, they can also contribute to a home for someone in need. As she says, “I want to put every person who wants one in a home of their own design.”

Tough guys need mental-health help, too. In 2013 in Castlemaine, Victoria, painter and decorator Jeremy Forbes was shaken when a friend and fellow tradie (or tradesman) committed suicide. But what truly shocked him were the murmurs he overheard at the man’s wake — people asking, “Who’s next?” Tradies deal with the same struggles faced by many — depression, alcohol and drug dependency, gambling, financial hardship — but they often don’t feel comfortable opening up about them. “You’re expected to be silent in the face of adversity,” says Forbes. So he and artist Catherine Pilgrim founded HALT (Hope Assistance Local Tradies), a mental health awareness organization for tradie men and women, apprentices, builders, farmers, and their partners. HALT meets people where they are, hosting gatherings at hardware stores, football and sports clubs, and vocational training facilities. There, people learn about the warning signs of depression and anxiety and the available services. According to Forbes, who received a Westpac Social Change Fellowship in 2016, HALT has now held around 150 events, and he describes the process as both empowering and cathartic. We need to know how to respond if people are not OK, he says.

The conversation about death you need to have. “Most of us don’t want to acknowledge death, we don’t want to plan for it, and we don’t want to discuss it with the most important people in our lives,” says mortal realist and portfolio manager Michelle Knox. She’s got stats to prove it: 45% of people in Australia over the age of 18 don’t have a legal will. But dying without one is complicated and expensive for those left behind, and just one reason Knox believes it’s time we take ownership of our own deaths. Others include that talking about death before it happens can help us experience a good death, reduce stress on our loved ones, and also help us support others who are grieving. Knox experienced firsthand the power of talking about death ahead of time when her father passed away earlier this year. “I discovered this year it’s actually a privilege to help someone exit this life and although my heart is heavy with loss and sadness, it is not heavy with regret,” she says, “I knew what Dad wanted and I feel at peace knowing I could support his wishes.”

“What would water do?” asks Raymond Tang. “This simple and powerful question has changed my life for the better.” (Photo: Jean-Jacques Halans / TED)

The philosophy of water. How do we find fulfillment in a world that’s constantly changing? IT strategy manager and “agent of flow” Raymond Tang struggled mightily with this question — until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water and, inspired, he’s now applying the concepts to his everyday life. In this charming talk, he shares three lessons he’s learned so far from the “philosophy of water.” First, humility: in the same way water helps plants and animals grow without seeking reward, Tang finds fulfillment and meaning in helping others overcome their challenges. Next, harmony: just as water is able to navigate its way around obstacles without force or conflict, Tang believes we can find a greater sense of fulfillment in our endeavors by shifting our focus away from achieving success and towards achieving harmony. Finally, openness: water can be a liquid, solid or gas, and it adapts to the shape in which it’s contained. Tang finds in his professional life that the teams most open to learning (and un-learning) do the best work. “What would water do?” Tang asks. “This simple and powerful question has changed my life for the better.”

With great data comes great responsibility. Remember the hacks on companies such as Equifax and JP Morgan? Well, you ain’t seen nothing yet. As computer technology becomes more powerful (think quantum) the systems we use to protect our wells of data become ever more vulnerable. However, there is still time to plan countermeasures against the impending data apocalypse, reassures encryption expert Vikram Sharma. He and his team are designing security devices and programs that also rely on quantum physics to power a defense against the most sophisticated attacks. “The race is on to build systems that will remain secure in the face of rapid technological advance,” he says.

Rach Ranton brings the leadership lessons she learned in the military to corporations, suggesting that leaders succeed when everyone knows the final goal they’re working toward. (Photo: Jean-Jacques Halans / TED)

Leadership lessons from the front line. How does a leader give their people a sense of purpose and direction? Rach Ranton spent more than a decade in the Australian Army, including tours of Afghanistan and East Timor. Now, she brings the lessons she learned in the military to companies, blending organizational psychology aimed at corporations with the planning and best practices of a well-oiled military unit. Even in a situation of extreme uncertainty, she says, military units function best if everyone understands the leader’s objective exactly as well as they understand their own role, not just their individual part to play but also the whole. She suggests leaders spend time thinking about how to communicate “commander’s intent,” the final goal that everyone is working toward. As a test, she asks: If you as a leader were absent from the scene, would your team still know what to do … and why they were doing it?


CryptogramTracking People Without GPS

Interesting research:

The trick in accurately tracking a person with this method is finding out what kind of activity they're performing. Whether they're walking, driving a car, or riding in a train or airplane, it's pretty easy to figure out when you know what you're looking for.

The sensors can determine how fast a person is traveling and what kind of movements they make. Moving at a slow pace in one direction indicates walking. Going a little bit quicker but turning at 90-degree angles means driving. Faster yet, we're in train or airplane territory. Those are easy to figure out based on speed and air pressure.

After the app determines what you're doing, it uses the information it collects from the sensors. The accelerometer relays your speed, the magnetometer tells your relation to true north, and the barometer offers up the air pressure around you and compares it to publicly available information. It checks in with The Weather Channel to compare air pressure data from the barometer to determine how far above sea level you are. Google Maps and data offered by the US Geological Survey Maps provide incredibly detailed elevation readings.

Once it has gathered all of this information and determined the mode of transportation you're currently taking, it can then begin to narrow down where you are. For flights, four algorithms begin to estimate the target's location and narrow down the possibilities until the error rate hits zero.

If you're driving, it can be even easier. The app knows the time zone you're in based on the information your phone has provided to it. It then accesses information from your barometer and magnetometer and compares it to information from publicly available maps and weather reports. After that, it keeps track of the turns you make. With each turn, the possible locations whittle down until it pinpoints exactly where you are.

To demonstrate how accurate it is, researchers did a test run in Philadelphia. It only took 12 turns before the app knew exactly where the car was.
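
The turn-matching idea is simple enough to sketch. The following toy Python fragment illustrates the two steps described above; the speed thresholds and the miniature "map" of candidate intersections are made-up values for illustration, not data or code from the research:

def classify_activity(speed_ms):
    """Very rough mode-of-transport guess from speed in metres per second."""
    if speed_ms < 2.5:
        return "walking"
    if speed_ms < 40:
        return "driving"
    return "train or airplane"

# Hypothetical map fragment: each candidate start point maps to the sequence
# of turns ("L"/"R") a car would make driving away from it along the roads.
CANDIDATE_ROUTES = {
    "intersection A": ["L", "R", "R", "L"],
    "intersection B": ["L", "R", "L", "L"],
    "intersection C": ["R", "R", "R", "L"],
}

def narrow_down(observed_turns, candidates=CANDIDATE_ROUTES):
    """Keep only the candidates whose expected turns match what the sensors saw."""
    remaining = dict(candidates)
    for i, turn in enumerate(observed_turns):
        remaining = {place: turns for place, turns in remaining.items()
                     if i < len(turns) and turns[i] == turn}
        if len(remaining) <= 1:
            break
    return list(remaining)

print(classify_activity(1.2))         # -> walking
print(narrow_down(["L", "R", "R"]))   # -> ['intersection A']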

This is a good example of how powerful synthesizing information from disparate data sources can be. We spend too much time worried about individual data collection systems, and not enough about analysis techniques of those systems.

Research paper.

Worse Than FailureError'd: These are not the Security Questions You're Looking for

"If it didn't involve setting up my own access, I might've tried to find what would happen if I dared defy their labeling," Jameson T. wrote.

 

"I think that someone changed the last sentence in a hurry," writes George.

 

"Now I may not be able to read, or let alone type in Italian, but I bet if given this particular one, I could feel my way through it," Anatoly writes.

 

"Wow! The best rates on default text, guaranteed!" writes Peter G.

 

Thomas R. wrote, "Doing Cyber Monday properly takes some serious skills!"

 

"I'm unsure what's going on here. Is the service status page broken or is it telling me that the service is broken?" writes Neil H.

 


Planet DebianDimitri John Ledkov: What does FCC Net Neutrality repeal mean to you?

Sorry, the web page you have requested is not available through your internet connection.

We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.

If you are a home broadband customer, for more information on why certain web pages are blocked, please click here.
If you are a business customer, or are trying to view this page through your company's internet connection, please click here.

Planet DebianUrvika Gola: KubeCon + CloudNativeCon, Austin

KubeCon + CloudNativeCon, North America took place in Austin, Texas from 6th to 8th December. But before that, I stumbled upon this great opportunity from the Linux Foundation which made it possible for me to attend and expand my knowledge of cloud computing, containers and all things cloud native!


I would like to thank the diversity committee members – @michellenoorali, @Kris__Nova, @jessfraz, @evonbuelow – and everyone (+Wendy West!!) behind this for going the extra mile to make it possible for me and others, and for driving such a great initiative for diversity and inclusion. It gave me an opportunity to learn from experts and experience the power of Kubernetes.

After travelling 23+ hours in flight, I was able to attend the pre-conference sessions on 5th December. The day concluded with the amazing EmpowerHer evening event, where I met an amazing bunch of people! We had some great discussions and food. Thanks!

With diversity scholarship recipients at the EmpowerHer event (credits – Radhika Nair)

On 6th December, I was super excited to attend Day 1 of the conference. When I reached the venue, the Austin Convention Center, there was a huge hall with *4100* people talking about all things cloud native!

It started with an informational keynote by Dan Kohn, the Executive Director of the Cloud Native Computing Foundation. He pointed out how the CNCF has grown over the year: from 4 projects in 2016 to 14 projects in 2017, and from 1,400 attendees in March 2017 to 4,100 attendees in December 2017. It was really thrilling to learn about the growth and power of Kubernetes, which inspired me to contribute to this project.

Dan Kohn's keynote talk at KubeCon + CloudNativeCon

It was hard to choose which session to attend because there was just so much going on!! I mostly attended sessions which were beginner & intermediate level, and missed out on the ones which required technical expertise I don’t possess, yet! Curious to know more about what other tech companies are working on, I made sure I visited all the sponsor booths to learn what technology they are building. Apart from that, they had cool goodies and stickers: the place where people get labelled as sticker-person or non-sticker-person! 😀

There was a diversity luncheon on 7th December, where I had really interesting conversations with people about their challenges and stories related to technology. I made some great friends at the table. Thank you for voting my story as the best story of getting into open source, & thank you Samsung for sponsoring this event.

KubeCon + CloudNativeCon was a very informative and huge event put up by the Cloud Native Computing Foundation. It was interesting to see how cloud native technologies have expanded along with the growth of the community! Thank you to the Linux Foundation for this experience! 🙂

Keeping Cloud Native Weird!
Open bar all-attendee party! (Where I experienced my first snowfall)

 

Goodbye Austin!

Planet DebianSean Whitton: A second X server on vt8, running a different Debian suite

Two tensions

  1. Sometimes the contents of the Debian archive aren’t yet sufficient for working in a software ecosystem in which I’d like to work, and I want to use that ecosystem’s package manager which downloads the world into $HOME – e.g. stack, pip, lein and friends.

    But I can’t use such a package manager when $HOME contains my PGP subkeys and other valuable files, and my X session includes Firefox with lots of saved passwords, etc.

  2. I want to run Debian stable on my laptop for purposes of my day job – if I can’t open Emacs on a Monday morning, it’s going to be a tough week.

    But I also want to do Debian development on my laptop, and most of that’s a pain without either Debian testing or Debian unstable.

The solution

Have Propellor provision and boot a systemd-nspawn(1) container running Debian unstable, and start a window manager in that container with $DISPLAY pointing at an X server in vt8. Wooo!

In more detail:

  1. Laptop runs Debian stable. Main account is spwhitton.
  2. Achieve isolation from /home/spwhitton by creating a new user account, spw, that can’t read /home/spwhitton. Also, in X startup scripts for spwhitton, run xhost -local:.
  3. debootstrap a Debian unstable chroot into /var/lib/container/develacc.
  4. Install useful desktop things like task-british-desktop into /var/lib/container/develacc.
  5. Boot /var/lib/container/develacc as a systemd-nspawn container called develacc.
  6. dm-tool switch-to-greeter to start a new X server on vt8. Login as spw.
  7. Propellor installs a script enter-develacc which uses nsenter(1) to run commands in the develacc container. Create a further script enter-develacc-i3 which does

     /usr/local/bin/enter-develacc sh -c "cd ~spw; DISPLAY=$1 su spw -c i3"
    
  8. Finally, /home/spw/.xsession starts i3 in the chroot pointed at vt8’s X server:

     sudo /usr/local/bin/enter-develacc-i3 $DISPLAY
    
  9. Phew. May now pip install foo. And Ctrl-Alt-F7 to go back to my secure session. That session can read and write /home/spw, so I can dgit push etc.

The Propellor configuration

develaccProvisioned :: Property (HasInfo + DebianLike)
develaccProvisioned = propertyList "develacc provisioned" $ props
    & User.accountFor (User "spw")
    & Dotfiles.installedFor (User "spw")
    & User.hasDesktopGroups (User "spw")
    & withMyAcc "Sean has 'spw' group"
        (\u -> tightenTargets $ User.hasGroup u (Group "spw"))
    & withMyHome "Sean's homedir chmodded"
        (\h -> tightenTargets $ File.mode h 0O0750)
    & "/home/spw" `File.mode` 0O0770

    & "/etc/sudoers.d/spw" `File.hasContent`
        ["spw ALL=(root) NOPASSWD: /usr/local/bin/enter-develacc-i3"]
    & "/usr/local/bin/enter-develacc-i3" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "echo \"$1\" | grep -q -E \"^:[0-9.]+$\" || exit 1"
        , ""
        , "/usr/local/bin/enter-develacc sh -c \\"
        , "\t\"cd ~spw; DISPLAY=$1 su spw -c i3\""
        ]
    & "/usr/local/bin/enter-develacc-i3" `File.mode` 0O0755

    -- we have to start xss-lock outside of the container in order that it
    -- can interface with host logind
    & "/home/spw/.xsession" `File.hasContent`
        [ "if [ -e \"$HOME/local/wallpaper.png\" ]; then"
        , "    xss-lock -- i3lock -i $HOME/local/wallpaper.png &"
        , "else"
        , "    xss-lock -- i3lock -c 3f3f3f -n &"
        , "fi"
        , "sudo /usr/local/bin/enter-develacc-i3 $DISPLAY"
        ]

    & Systemd.nspawned develAccChroot
    & "/etc/network/if-up.d/develacc-resolvconf" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "cp -fL /etc/resolv.conf \\"
        ,"\t/var/lib/container/develacc/etc/resolv.conf"
        ]
    & "/etc/network/if-up.d/develacc-resolvconf" `File.mode` 0O0755
  where
    develAccChroot = Systemd.debContainer "develacc" $ props
        -- Prevent propellor passing --bind=/etc/resolv.conf which
        -- - won't work when system first boots as WLAN won't be up yet,
        --   so /etc/resolv.conf is a dangling symlink
        -- - doesn't keep /etc/resolv.conf up-to-date as I move between
        --   wireless networks
        ! Systemd.resolvConfed

        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.suiteAvailablePinned Experimental 1
        -- use host apt cacher (we assume I have that on any system with
        -- develaccProvisioned)
        & Apt.proxy "http://localhost:3142"

        & Apt.installed [ "i3"
                , "task-xfce-desktop"
                , "task-british-desktop"
                , "xss-lock"
                , "emacs"
                , "caffeine"
                , "redshift-gtk"
                , "gnome-settings-daemon"
                ]

        & Systemd.bind "/home/spw"
        -- note that this won't create /home/spw because that is
        -- bind-mounted, which is what we want
        & User.accountFor (User "spw")
        -- ensure that spw inside the container can read/write ~spw
        & scriptProperty
            [ "usermod -u $(stat --printf=\"%u\" /home/spw) spw"
            , "groupmod -g $(stat --printf=\"%g\" /home/spw) spw"
            ] `assume` NoChange

Comments

I first tried using a traditional chroot. I bound lots of /dev into the chroot and then tried to start lightdm on vt8. This way, the whole X server would be within the chroot; this is in a sense more straightforward and there is not the overhead of booting the container. But lightdm refuses to start.

It might have been possible to work around this, but after reading a number of reasons why chroots are less good under systemd as compared with sysvinit, I thought I’d try systemd-nspawn, which I’ve used before and rather like in general. I couldn’t get lightdm to start inside that, either, because systemd-nspawn makes it difficult to mount enough of /dev for X servers to be started. At that point I realised that I could start only the window manager inside the container, with the X server started from the host’s lightdm, and went from there.

The security isn’t that good. You shouldn’t be running anything actually untrusted, just stuff that’s semi-trusted.

  • chmod 750 /home/spwhitton, xhost -local: and the argument validation in enter-develacc-i3 are pretty much the extent of the security here. The containerisation is to get Debian sid on a Debian stable machine, not for isolation

  • lightdm still runs X servers as root even though it’s been possible to run them as non-root in Debian for a few years now (there’s a wishlist bug against lightdm)

I now have a total of six installations of Debian on my laptop’s hard drive … four traditional chroots, one systemd-nspawn container and of course the host OS. But this is easy to manage with propellor!

Bugs

Screen locking is weird because logind sessions aren’t shared into the container. I have to run xss-lock in /home/spw/.xsession before entering the container, and the window manager running in the container cannot have a keybinding to lock the screen (as it does in my secure session). To lock the spw X server, I have to shut my laptop lid, or run loginctl lock-sessions from my secure session, which requires entering the root password.

Planet Linux AustraliaOpenSTEM: Celebration Time!

Here at OpenSTEM we have a saying “we have a resource on that” and we have yet to be caught out on that one! It is a festive time of year and if you’re looking for resources reflecting that theme, then here are some suggestions: Celebrations in Australia – a resource covering the occasions we […]

,

Cory DoctorowAn 8th grader’s brilliant trailer for Walkaway

CryptogramSecurity Planner

Security Planner is a custom security advice tool from Citizen Lab. Answer a few questions, and it gives you a few simple things you can do to improve your security. It's not meant to be comprehensive, but instead to give people things they can actually do to immediately improve their security. I don't see it replacing any of the good security guides out there, but instead augmenting them.

The advice is peer reviewed, and the team behind Security Planner is committed to keeping it up to date.

Note: I am an advisor to this project.

Google AdsenseThank you for being part of the web

At AdSense we are proud to be part of your journey. This video is our big thanks to you for everything we have done together in 2017!


Posted by: your AdSense Team

Planet DebianRussell Coker: Huawei Mate9

Warranty Etc

I recently got a Huawei Mate 9 phone. My previous phone was a Nexus 6P that died shortly before its one-year warranty ran out. As there have apparently been many Nexus 6P phones dying, there are no stocks of replacements, so Kogan (the company I bought the phone from) offered me a choice of 4 phones in the same price range as a replacement.

Previously I had chosen to avoid the extended warranty offerings based on the idea that after more than a year the phone won’t be worth much and therefore getting it replaced under warranty isn’t as much of a benefit. But now that it seems that getting a phone replaced with a newer and more powerful model is a likely outcome it seems that there are benefits in a longer warranty. I chose not to pay for an “extended warranty” on my Nexus 6P because getting a new Nexus 6P now isn’t such a desirable outcome, but when getting a new Mate 9 is a possibility it seems more of a benefit to get the “extended warranty”. OTOH Kogan wasn’t offering more than 2 years of “warranty” recently when buying a phone for a relative, so maybe they lost a lot of money on replacements for the Nexus 6P.

Comparison

I chose the Mate 9 primarily because it has a large screen. Its 5.9″ display is only slightly larger than the 5.7″ displays in the Nexus 6P and the Samsung Galaxy Note 3 (my previous phone). But it is large enough to force me to change my phone use habits.

I previously wrote about matching phone size to the user’s hand size [1]. When writing that I had the theory that a Note 2 might be too large for me to use one-handed. But when I owned those phones I found that the Note 2 and Note 3 were both quite usable in one-handed mode. But the Mate 9 is just too big for that. To deal with this I now use the top corners of my phone screen for icons that I don’t tend to use one-handed, such as Facebook. I chose this phone knowing that this would be an issue because I’ve been spending more time reading web pages on my phone and I need to see more text on screen.

Adjusting my phone usage to the unusually large screen hasn’t been a problem for me. But I expect that many people will find this phone too large. I don’t think there are many people who buy jeans to fit a large phone in the pocket [2].

A widely touted feature of the Mate 9 is the Leica lens which apparently gives it really good quality photos. I haven’t noticed problems with my photos on my previous two phones and it seems likely that phone cameras have in most situations exceeded my requirements for photos (I’m not a very demanding user). One thing that I miss is the slow-motion video that the Nexus 6P supports. I guess I’ll have to make sure my wife is around when I need to make slow motion video.

My wife’s Nexus 6P is well out of warranty. Her phone was the original Nexus 6P I had. When her previous phone died I had a problem with my phone that needed a factory reset. It’s easier to duplicate the configuration to a new phone than to restore it after a factory reset (as an aside, I believe Apple does this better), so I copied my configuration to the new phone and then wiped it for my wife to use.

One noteworthy but mostly insignificant feature of the Mate 9 is that it comes with a phone case. The case is hard plastic and cracked when I unsuccessfully tried to remove it, so it seems to effectively be a single-use item. But it is good to have that in the box so that you don’t have to use the phone without a case on the first day; this is something almost every other phone manufacturer misses. There is the option of ordering a case at the same time as the phone, but the bundled case isn’t very good.

I regard my Mate 9 as fairly unattractive. Maybe if I had a choice of color I would have been happier, but it still wouldn’t have looked like EVE from Wall-E (unlike the Nexus 6P).

The Mate 9 has a resolution of 1920*1080, while the Nexus 6P (and many other modern phones) has a resolution of 2560*1440. I don’t think that’s a big deal; the pixels are small enough that I can’t see them. I don’t really need my phone to have the same resolution as the 27″ monitor on my desktop.

The Mate 9 has 4G of RAM and apps seem significantly less likely to be killed than on the Nexus 6P with 3G. I can now switch between memory hungry apps like Pokemon Go and Facebook without having one of them killed by the OS.

Security

The OS support from Huawei isn’t nearly as good as a Nexus device. Mine is running Android 7.0 and has a security patch level of the 5th of June 2017. My wife’s Nexus 6P today got an update from Android 8.0 to 8.1 which I believe has the fixes for KRACK and Blueborne among others.

Kogan is currently selling the Pixel XL with 128G of storage for $829, if I was buying a phone now that’s probably what I would buy. It’s a pity that none of the companies that have manufactured Nexus devices seem to have learned how to support devices sold under their own name as well.

Conclusion

Generally this is a decent phone. As a replacement for a failed Nexus 6P it’s pretty good. But at this time I tend to recommend not buying it as the first generation of Pixel phones are now cheap enough to compete. If the Pixel XL is out of your price range then instead of saving $130 for a less secure phone it would be better to save $400 and choose one of the many cheaper phones on offer.

Remember when Linux users used to mock Windows for poor security? Now it seems that most Android devices are facing the security problems that Windows used to face and the iPhone and Pixel are going to take the role of the secure phone.

Worse Than FailureRepresentative Line: An Array of WHY


Reader Jeremy sends us this baffling JavaScript: "Nobody on the team knows how it came to be. We think all 'they' wanted was a sequence of numbers starting at 1, but you wouldn't really know that from the code."


var numbers = new Array(maxNumber)
    .join()
    .split(',')
    .map(function(){return ++arguments[1]});

The end result: an array of integers starting at 1 and going up to maxNumber. (It works because joining the empty array yields maxNumber-1 commas, splitting that string produces maxNumber empty strings, and the map callback's second argument, the element index, gets pre-incremented and returned.) This is probably the most head-scratchingest way to get that result ever devised.


Planet DebianDirk Eddelbuettel: #13: (Much) Faster Package (Re-)Installation via Binaries

Welcome to the thirteenth post in the ridiculously rapid R recommendation series, or R4 for short. A few days ago we riffed on faster installation thanks to ccache. Today we show another way to get equally drastic gains for some (if not most) packages.

In a nutshell, there are two ways to get your R packages off CRAN. Either you install as a binary, or you use source. Most people do not think too much about this as on Windows, binary is the default. So why wouldn't one? Precisely. (Unless you are on Windows, and you develop, or debug, or test, or ... and need source. Another story.) On other operating systems, however, source is the rule, and binary is often unavailable.

Or is it? Exactly how to find out what is available will be left for another post as we do have a tool just for that. But today, just hear me out when I say that binary is often an option even when source is the default. And it matters. See below.

As a (mostly-to-always) Linux user, I sometimes whistle between my teeth that we "lost all those battles" (i.e. for the desktop(s) or laptop(s)) but "won the war". That topic merits a longer post I hope to write one day, and I won't do it justice today, but my main gist is that everybody (and here I mean mostly developers/power users) now at least also runs on Linux. And by that I mean that we all test our code in Linux environments such as e.g. Travis CI, and that many of us run deployments on cloud instances (AWS, GCE, Azure, ...) which are predominantly based on Linux. Or on local clusters. Or, if one may dream, the top500. And on and on. And frequently these are Ubuntu machines.

So here is an Ubuntu trick: Install from binary, and save loads of time. As an illustration, consider the chart below. It carries over the logic from the 'cached vs non-cached' compilation post and contrasts two ways of installing: from source, or as a binary. I use pristine and empty Docker containers as the base, and rely of course on the official r-base image which is supplied by Carl Boettiger and yours truly as part of our Rocker Project (and for which we have a forthcoming R Journal piece I might mention). So for example the timings for the ggplot2 installation were obtained via

time docker run --rm -ti r-base  /bin/bash -c 'install.r ggplot2'

and

time docker run --rm -ti r-base  /bin/bash -c 'apt-get update && apt-get install -y r-cran-ggplot2'

Here docker run --rm -ti just means to launch Docker, in 'remove leftovers at end' mode, use terminal and interactive mode and invoke a shell. The shell command then is, respectively, to install a CRAN package using install.r from my littler package, or to install the binary via apt-get after updating the apt indices (as the Docker container may have been built a few days or more ago).

Let's not focus on Docker here---it is just a convenient means to an end of efficiently measuring via a simple (wall-clock counting) time invocation. The key really is that install.r is just a wrapper to install.packages() meaning source installation on Linux (as used inside the Docker container). And apt-get install ... is how one gets a binary. Again, I will try to post another piece to determine how one finds if a suitable binary for a CRAN package exists. For now, just allow me to proceed.
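
A fuller answer to "does a binary exist?" is promised for a later post; in the meantime, a rough and entirely unofficial sketch of the idea on a Debian or Ubuntu system could look like the following. It leans on the r-cran-<name> naming convention visible in the apt-get example above, and on the fact that apt-cache policy reports a candidate of "(none)" when nothing is installable:

import subprocess

def debian_binary_name(cran_pkg):
    # CRAN package "ggplot2" ships in Debian/Ubuntu as "r-cran-ggplot2";
    # lower-casing covers packages such as BH -> r-cran-bh.
    return "r-cran-" + cran_pkg.lower()

def binary_available(cran_pkg):
    """Return True if apt knows an installable candidate for the package."""
    out = subprocess.run(["apt-cache", "policy", debian_binary_name(cran_pkg)],
                         capture_output=True, text=True).stdout
    return "Candidate:" in out and "Candidate: (none)" not in out

for pkg in ["ggplot2", "data.table", "Rcpp"]:
    print(pkg, "->", binary_available(pkg))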

So what do we see then? Well have a look:

A few things stick out. RQuantLib really is a monster. And dplyr is also fairly heavy---both rely on Rcpp, BH and lots of templating. At the other end, data.table is still a marvel. No external dependencies, and just plain C code make the source installation essentially the same speed as the binary installation. Amazing. But I digress.

We should add that one of the source installations also required installing additional libraries: QuantLib is needed along with Boost for RQuantLib. Similar for another package (not shown) which needed curl and libcurl.

So what is the upshot? If you can, consider binaries. I will try to write another post on how I do that e.g. for Travis CI where all my tests use binaries. (Yes, I know. This mattered more in the past when they did not cache. It still matters today as you a) do not need to fill the cache in the first place and b) do not need to worry about details concerning compilation from source which still throws enough people off. But yes, you can of course survive as is.)

The same approach is equally valid on AWS and related instances: I answered many StackOverflow questions where folks were failing to compile "large-enough" pieces from source on minimal installations with minimal RAM, running out of resources and failing with bizarre errors. In short: Don't. Consider binaries. It saves time and trouble.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.10

A boring little RVowpalWabbit package update to version 0.0.10 came in response to another CRAN request: We were switching directories to run tests (or examples), which is now discouraged, so we no longer do this as it turns out that we can of course refer to the files directly as well. Much cleaner.

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianSteinar H. Gunderson: Compute shaders

Movit, my GPU-accelerated video filter library, is getting compute shaders. But the experience really makes me happy that I chose to base it on fragment shaders originally and not compute! (Ie., many would claim CUDA would be the natural choice, even if it's single-vendor.) The deinterlace filter is significantly faster (10–70%, depending a bit on various factors) on my Intel card, so I'm hoping the resample filter is also going to get some win, but at this point, I'm not actually convinced it's going to be much faster… and the complexity of managing local memory effectively is sky-high. And then there's the fun of chaining everything together.

Hooray for already having an extensive battery of tests, at least!

Cory DoctorowNet Neutrality is only complicated because monopolists are paying to introduce doubt


My op-ed in New Internationalist, ‘Don’t break the 21st century nervous system’, seeks to cut through the needless complexity in the Net Neutrality debate, which is as clear-cut as climate change or the link between smoking and cancer — and, like those subjects, the complexity is only there because someone paid to introduce it.


When you want to access my web page, you ask your internet service provider to send some data to my ISP, who passes it on to my server, which passes some data back to the other ISP, who sends it to your ISP, who sends it to you.

That’s a neutral internet: ISPs await requests from their customers, then do their best to fulfill them.

In a discriminatory network, your ISP forwards your requests to mine, then decides whether to give the data I send in reply to you, or to slow it down.

If they slow it down, they can ask me for a payment to get into the ‘fast lane’, where ‘premium traffic’ goes. There’s no fair rationale for this: you’re not subscribing to the internet to get the bits that maximally enrich your ISP, you’re subscribing to get the bits you want.

An ISP who charges extra to get you the bits you ask for is like a cab driver who threatens to circle the block twice before delivering passengers to John Lewis because John Lewis hasn’t paid for ‘premium service’. John Lewis isn’t the passenger, you are, and you’re paying the cab to take you to your destination, not a destination that puts an extra pound in the driver’s pocket.

But there are a lot of taxi options, from black cabs to minicabs to Uber. This isn’t the case when it comes to the internet. For fairly obvious economic and logistical reasons, cities prefer a minimum of networks running under their main roads and into every building: doubling or tripling up on wires is wasteful and a source of long-term inconvenience, as someone’s wires will always want servicing. So cities generally grant network monopolies (historically, two monopolies: one for ‘telephone’ and one for ‘cable TV’).



‘Don’t break the 21st century nervous system’ [Cory Doctorow/The New Internationalist]

Planet DebianBernd Zeimetz: Collecting statistics from TP-Link HS110 SmartPlugs using collectd

Running a 3d printer alone at home is not necessarily the best idea - so I was looking for a way to force it off from remote. As OctoPrint user I stumbled upon a plugin to control TP-Link Smartplugs, so I decided to give them a try. What I found especially nice on the HS110 model was that it is possible to monitor power usage, current and voltage. Of course I wanted to have a long term graph of it. The result is a small collectd plugin, written in Python. It is available on github: https://github.com/bzed/collectd-tplink_hs110. Enjoy!
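To give an idea of the kind of polling such a plugin does, here is a small sketch of reading the real-time power values from an HS110. It follows the community-reverse-engineered TP-Link protocol (XOR "autokey" obfuscation over TCP port 9999 with a 4-byte length prefix); those wire-format details are assumptions that may vary by firmware, the IP address is a placeholder, and the linked repository has the actual plugin.

# Sketch: query an HS110 for emeter readings (assumed protocol details).
import json
import socket
import struct

def encrypt(payload: bytes) -> bytes:
    key, out = 171, bytearray()
    for byte in payload:
        key ^= byte              # autokey XOR: ciphertext byte becomes next key
        out.append(key)
    return bytes(out)

def decrypt(payload: bytes) -> bytes:
    key, out = 171, bytearray()
    for byte in payload:
        out.append(key ^ byte)
        key = byte
    return bytes(out)

def query(host: str, command: dict) -> dict:
    message = encrypt(json.dumps(command).encode())
    with socket.create_connection((host, 9999), timeout=5) as sock:
        sock.sendall(struct.pack(">I", len(message)) + message)
        length = struct.unpack(">I", sock.recv(4))[0]
        data = b""
        while len(data) < length:
            data += sock.recv(length - len(data))
    return json.loads(decrypt(data))

# Placeholder IP; returns voltage, current and power, ready for collectd graphs.
print(query("192.168.1.50", {"emeter": {"get_realtime": {}}}))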

Planet DebianBits from Debian: Debsources now in sources.debian.org

Debsources is a web application for publishing, browsing and searching an unpacked Debian source mirror on the Web. With Debsources, all the source code of every Debian release is available in https://sources.debian.org, both via an HTML user interface and a JSON API.
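As a quick illustration of the JSON side, here is a small sketch of querying the API for a package; the endpoint path shown is an assumption on my part, so check the Debsources documentation for the authoritative routes.

# Sketch: fetch package information from the Debsources JSON API.
# The /api/src/<package>/ path is assumed; see the Debsources docs.
import json
import urllib.request

url = "https://sources.debian.org/api/src/hello/"
with urllib.request.urlopen(url) as response:
    info = json.load(response)

# Pretty-print whatever the service returns (e.g. known package versions).
print(json.dumps(info, indent=2))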

This service was first offered in 2013 with the sources.debian.net instance, which was kindly hosted by IRILL, and is now becoming official under sources.debian.org, hosted on the Debian infrastructure.

This new instance offers all the features of the old one (an updater that runs four times a day, various plugins to count lines of code or measure the size of packages, and sub-apps to show lists of patches and copyright files), plus integration with other Debian services such as codesearch.debian.net and the PTS.

The Debsources Team has taken the opportunity of this move of Debsources onto the Debian infrastructure to officially announce the service. Read their message as well as the Debsources documentation page for more details.

TEDFree report: Bright ideas in business, distilled from TEDGlobal 2017

What’s a good way to remember an idea in the middle of a conference — so you can turn it into action? Take notes and brainstorm with others. At TEDGlobal 2017 in Tanzania, the Brightline Initiative inspired people to brainstorm ideas around talks they’d just watched, including Pierre Thiam’s celebration of the ancient grain fonio (watch this talk). (Photo: Ryan Lash/TED)

The Brightline Initiative helps executives implement ambitious ideas from business strategies, so it’s only fitting that the nonprofit group was onsite taking notes and holding brainstorms at TEDGlobal 2017 in Arusha, Tanzania. With the theme “Builders. Truth-Tellers. Catalysts.,” TEDGlobal was a celebration of doers and thinkers, including more than 70 speakers who’ve started companies, nonprofits, education initiatives and even movements.

We’re excited to share the Brightline Initiative’s just-released report on business ideas pulled from the talks of TEDGlobal 2017. These aren’t your typical business ideas — one speaker suggests a way to find brand-new markets by thinking beyond the physical address, while several others share how ancient traditions can spawn fresh ideas and even cutting-edge businesses. Whether you run a startup, sit in the C-suite or are known as a star employee, the ideas from these talks can spark new thinking and renew your inspiration.

Get the report here >>

PS: Look for more great ideas from the Brightline Initiative soon; this week at TED’s New York office, TED and Brightline partnered to produce an evening-length event of speakers who are creating change through smart, nuanced business thinking. Read about the event now, and watch for talks to appear on TED.com in the coming months.


Krebs on SecurityMirai IoT Botnet Co-Authors Plead Guilty

The U.S. Justice Department on Tuesday unsealed the guilty pleas of two men first identified in January 2017 by KrebsOnSecurity as the likely co-authors of Mirai, a malware strain that remotely enslaves so-called “Internet of Things” devices such as security cameras, routers, and digital video recorders for use in large scale attacks designed to knock Web sites and entire networks offline (including multiple major attacks against this site).

Entering guilty pleas for their roles in developing and using Mirai are 21-year-old Paras Jha from Fanwood, N.J. and Josiah White, 20, from Washington, Pennsylvania.

Jha and White were co-founders of Protraf Solutions LLC, a company that specialized in mitigating large-scale DDoS attacks. Like firemen getting paid to put out the fires they started, Jha and White would target organizations with DDoS attacks and then either extort them for money to call off the attacks, or try to sell those companies services they claimed could uniquely help fend off the attacks.

CLICK FRAUD BOTNET

In addition, the Mirai co-creators pleaded guilty to charges of using their botnet to conduct click fraud — a form of online advertising fraud that will cost Internet advertisers more than $16 billion this year, according to estimates from ad verification company Adloox. 

The plea agreements state that Jha, White and another person who also pleaded guilty to click fraud conspiracy charges — a 21-year-old from Metairie, Louisiana named Dalton Norman — leased access to their botnet for the purposes of earning fraudulent advertising revenue through click fraud activity and renting out their botnet to other cybercriminals.

As part of this scheme, victim devices were used to transmit high volumes of requests to view web addresses associated with affiliate advertising content. Because the victim activity resembled legitimate views of these websites, the activity generated fraudulent profits through the sites hosting the advertising content, at the expense of online advertising companies.

Jha and his co-conspirators admitted receiving as part of the click fraud scheme approximately two hundred bitcoin, valued on January 29, 2017 at over $180,000.

Prosecutors say Norman personally earned over 30 bitcoin, valued on January 29, 2017 at approximately $27,000. The documents show that Norman helped Jha and White discover new, previously unknown vulnerabilities in IoT devices that could be used to beef up their Mirai botnet, which at its height grew to more than 300,000 hacked devices.

MASSIVE ATTACKS

The Mirai malware is responsible for coordinating some of the largest and most disruptive online attacks the Internet has ever witnessed. The biggest and first to gain widespread media attention began on Sept. 20, 2016, when KrebsOnSecurity came under a sustained distributed denial-of-service attack from more than 175,000 IoT devices (the size estimates come from this Usenix paper (PDF) on the Mirai botnet evolution).

That September 2016 digital siege maxed out at 620 Gbps, almost twice the size of the next-largest attack that Akamai — my DDoS mitigation provider at the time — had ever seen.

The attack continued for several days, prompting Akamai to force my site off of their network (they were providing the service pro bono, and the attack was starting to cause real problems for their paying customers). For several frustrating days this Web site went dark, until it was brought under the auspices of Google’s Project Shield, a program that protects journalists, dissidents and others who might face withering DDoS attacks and other forms of digital censorship because of their publications.

At the end of September 2016, just days after the attack on this site, the authors of Mirai — who collectively used the nickname “Anna Senpai” — released the source code for their botnet. Within days of its release there were multiple Mirai botnets all competing for the same pool of vulnerable IoT devices.

The Hackforums post that includes links to the Mirai source code.

Some of those Mirai botnets grew quite large and were used to launch hugely damaging attacks, including the Oct. 21, 2016 assault against Internet infrastructure firm Dyn that disrupted Twitter, Netflix, Reddit and a host of other sites for much of that day.

A depiction of the outages caused by the Mirai attacks on Dyn, an Internet infrastructure company. Source: Downdetector.com.

The leak of the Mirai source code led to the creation of dozens of copycat Mirai botnets, all of which were competing to commandeer the same finite number of vulnerable IoT devices. One particularly disruptive Mirai variant was used in extortion attacks against a number of banks and Internet service providers in the United Kingdom and Germany.

In July 2017, KrebsOnSecurity published a story following digital clues that pointed to a U.K. man named Daniel Kaye as the apparent perpetrator of those Mirai attacks. Kaye, who went by the hacker nickname “Bestbuy,” was found guilty in Germany of launching failed Mirai attacks that nevertheless knocked out Internet service for almost a million Deutsche Telekom customers, for which he was given a suspended sentence. Kaye is now on trial in the U.K. for allegedly extorting banks in exchange for calling off targeted DDoS attacks against them.

Not long after the Mirai source code was leaked, I began scouring cybercrime forums and interviewing people to see if there were any clues that might point to the real-life identities of Mirai’s creators.

On Jan 18, 2017, KrebsOnSecurity published the results of that four-month inquiry, Who is Anna Senpai, the Mirai Worm Author? The story is easily the longest in this site’s history, and it cited a bounty of clues pointing back to Jha and White — two of the men whose guilty pleas were announced today.

A tweet from the founder and CTO of French hosting firm OVH, stating the intended target of the Sept. 2016 Mirai DDoS on his company.

According to my reporting, Jha and White primarily used their botnet to target online gaming servers — particularly those tied to the hugely popular game Minecraft. Around the same time as the attack on my site, French hosting provider OVH was hit with a much larger attack from the same Mirai botnet (see image above), and the CTO of OVH confirmed that the target of that attack was a Minecraft server hosted on his company’s network.

My January 2017 investigation also cited evidence and quotes from associates of Jha who said they suspected he was responsible for a series of DDoS attacks against Rutgers University: During the same year that Jha began studying at the university for a bachelor’s degree in computer science, the school’s servers came under repeated, massive attacks from Mirai.

With each DDoS against Rutgers, the attacker — using the nicknames “og_richard_stallman,” “exfocus” and “ogexfocus,” — would taunt the university in online posts and media interviews, encouraging the school to spend the money to purchase some kind of DDoS mitigation service.

It remains unclear if Jha (and possibly others) may face separate charges in New Jersey related to his apparent Mirai attacks on Rutgers. According to a sparsely-detailed press release issued Tuesday afternoon, the Justice Department is slated to hold a media conference at 2 p.m. today with officials from Alaska (where these cases originate) to “discuss significant cybercrime cases.”

Update: 11:43 a.m. ET: The New Jersey Star Ledger just published a story confirming that Jha also has pleaded guilty to the Rutgers DDoS attacks, as part of a separate case lodged by prosecutors in New Jersey.

PAYBACK

Under the terms of his guilty plea in the click fraud conspiracy, Jha agreed to give up 13 bitcoin, which at current market value of bitcoin (~$17,000 apiece) is nearly USD $225,000.

Jha will also waive all rights to appeal the conviction and whatever sentence gets imposed as a result of the plea. For the click fraud conspiracy charges, Jha, White and Norman each face up to five years in prison and a $250,000 fine.

In connection with their roles in creating and ultimately unleashing the Mirai botnet code, Jha and White each pleaded guilty to one count of conspiracy to violate 18 U.S.C. 1030(a)(5)(A). That is, to “causing intentional damage to a protected computer, to knowingly causing the transmission of a program, code, or command to a computer with the intention of impairing without authorization the integrity or availability of data, a program, system, or information.”

For the conspiracy charges related to their authorship and use of Mirai, Jha and White likewise face up to five years in prison, a $250,000 fine, and three years of supervised release.

This is a developing story. Check back later in the day for updates from the DOJ press conference, and later in the week for a follow-up piece on some of the lesser-known details of these investigations.

The Justice Department unsealed the documents related to these cases late in the day on Tuesday. Here they are:

Jha click fraud complaint (PDF)
Jha click fraud plea (PDF)
Jha DDoS/Mirai complaint (PDF)
Jha DDoS/Mirai plea (PDF)
White DDoS complaint (PDF)
White DDoS/Mirai Plea (PDF)
Norman click fraud complaint (PDF)
Norman click fraud plea (PDF)

CryptogramE-Mail Tracking

Good article on the history and practice of e-mail tracking:

The tech is pretty simple. Tracking clients embed a line of code in the body of an email­ -- usually in a 1x1 pixel image, so tiny it's invisible, but also in elements like hyperlinks and custom fonts. When a recipient opens the email, the tracking client recognizes that pixel has been downloaded, as well as where and on what device. Newsletter services, marketers, and advertisers have used the technique for years, to collect data about their open rates; major tech companies like Facebook and Twitter followed suit in their ongoing quest to profile and predict our behavior online.

But lately, a surprising­ -- and growing­ -- number of tracked emails are being sent not from corporations, but acquaintances. "We have been in touch with users that were tracked by their spouses, business partners, competitors," says Florian Seroussi, the founder of OMC. "It's the wild, wild west out there."

According to OMC's data, a full 19 percent of all "conversational" email is now tracked. That's one in five of the emails you get from your friends. And you probably never noticed.
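To spell out the mechanics, here is a minimal sketch of the pixel technique described above. The tracking domain and per-recipient token scheme are invented for illustration; real services differ in the details.

# Sketch: a unique 1x1 image per recipient turns "message opened" into an
# HTTP request the sender can log. Domain and token scheme are made up.
import uuid

def tracked_body(recipient: str) -> str:
    token = uuid.uuid4().hex     # unique per recipient, stored by the sender
    return f"""\
<html><body>
  <p>Hello {recipient},</p>
  <p>Here is our newsletter...</p>
  <img src="https://tracker.example.com/open/{token}.gif"
       width="1" height="1" alt="" style="display:none">
</body></html>"""

# When the mail client fetches the image, the server matches the token to the
# recipient and records the time, IP address and User-Agent of the device.
print(tracked_body("alice@example.org"))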

I admit it's enticing. I would very much like the statistics that adding trackers to Crypto-Gram would give me. But I still don't do it.

Worse Than FailureThe Interview Gauntlet

Natasha found a job posting for a defense contractor that was hiring for a web UI developer. She was a web UI developer, familiar with all the technologies they were asking for, and she’d worked for defense contractors before, and understood how they operated. She applied, and they invited her in for one of those day-long, marathon interviews.

They told her to come prepared to present some of her recent work. Natasha and half a dozen members of the team crammed into an undersized meeting room. Irving, the director, was the last to enter, and his reaction to Natasha could best be described as “hate at first sight”.

Irving sat directly across from Natasha, staring daggers at her while she pulled up some examples of her work. Picking on a recent project, she highlighted what parts she’d worked on, what techniques she’d used, and why. Aside from Irving’s glare, it played well. She got good questions, had some decent back-and-forth, and was feeling pretty confident when she said, “Now, moving onto a more recent project-”


“Oh, thank god,” Irving groaned. His tone was annoyed, and possibly sarcastic. It was really impossible to tell. He let Natasha get a few sentences into talking about the next project, and then interrupted her. “This is fine. Let’s just break out into one-on-one interviews.”

Jack, the junior developer, was up first. He moved down the table to be across from Natasha. “You’re really not a good fit for the position we’re hiring for,” he said, “but let’s go ahead and do this anyway.”

So they did. Jack had some basic web-development questions, less on the UI side and more on the tooling side. “What’s transpiling,” and “how do ES2015 modules work”. They had a pleasant back and forth, and then Jack tagged out so that Carl could come in.

Carl didn’t start by asking a question, instead he scribbled some code on the white board:

int a[10];
*(a + 5) = 1;

“What does that do?” he demanded.

Natasha recognized it as C or C++, which jostled a few neurons from back in her CS101 days. She wasn’t interviewing to do C/C++, so she just shrugged and made her best guess. “That’s some pointer arithmetic stuff, right? Um… setting the 5th element of the array?”

Carl scribbled different C code onto the board, and repeated his question: “What does that do?”

Carl’s interview set the tone for the day. Over the next few hours, she met each team member. They each interviewed her on a subject that had nothing to do with UI development. She fielded questions about Linux system administration via LDAP, how subnets are encoded in IPs under IPv6, and their database person wanted her to estimate average seek times to fetch rows from disk when using a 7,200 RPM drive formatted in Ext4.

After surviving that gauntlet of seemingly pointless questions, it was Irving’s turn. His mood hadn’t improved, and he had no intention of asking her anything relevant. His first question was: “Tell me, Natasha, how would you estimate the weight of the Earth?”

“Um… don’t you mean mass?”

Irving grunted and shrugged. He didn’t say, “I don’t like smart-asses” out loud, but it was pretty clear that’s what he thought about her question.

Off balance, she stumbled through a reply about estimating the relative components that make up the Earth, their densities, and the size of the Earth. Irving pressed her on that answer, and she eventually sputtered something about a spring scale with a known mass, and Newton’s law of gravitation.

He still didn’t seem satisfied, but Irving had other questions to ask. “How many people are in the world?” “Why is the sky blue?” “How many turkeys would it take to fill this space?”

Eventually, frustrated by the series of inane questions after a day’s worth of useless questions, Natasha finally bit back. “What is the point of these questions?”

Irving sighed and made a mark on his interview notes. “The point,” he said, “is to see how long it took you to admit you didn’t know the answers. I don’t think you’re going to be a good fit for this team.”

“So I’ve heard,” Natasha said. “And I don’t think this team’s a good fit for me. None of the questions I’ve fielded today really have anything to do with the job I applied for.”

“Well,” Irving said, “we’re hiring for a number of possible positions. Since we had you here anyway, we figured we’d interview you for all of them.”

“If you were interviewing me for all of them, why didn’t I get any UI-related questions?”

“Oh, we already filled that position.”


Planet DebianPetter Reinholdtsen: Idea for finding all public domain movies in the USA

While looking at the scanned copies for the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals are so few per year, it should be fairly quick to transcribe them all and add references to the corresponding IMDB title ID. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtract the movies with renewals, we should be left with movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the 28 years from 1950 to 1978, it should be in the range 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

A typical copyright renewal entry look like this (the first one listed for 1955):

ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558.

The movie title as well as registration and renewal dates are easy enough to locate by a program (split on first comma and look for DDmmmYY). The rest of the text is not required to find the movie in IMDB, but is useful to confirm the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.

Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.
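To make the idea concrete, here is a small sketch of the "split on the first comma, look for DDmmmYY" parsing applied to the entry above, together with building the corresponding IMDB search URL. The regular expression and the century assumption are my own illustration, not a finished transcription tool.

# Sketch: pull the title and DDmmmYY dates out of a renewal entry, then build
# an IMDB search URL of the form shown above. Illustrative only.
import re
import urllib.parse

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

title, rest = entry.split(",", 1)
dates = re.findall(r"\b\d{1,2}[A-Z][a-z]{2}\d{2}\b", rest)
registration, renewal = dates[0], dates[-1]        # 17Aug27 and 10Jun55

year = 1900 + int(registration[-2:])               # assumes a 1900s date
query = urllib.parse.urlencode({"q": f"{title.lower()} {year}", "s": "all"})
print(title, registration, renewal)
print(f"http://www.imdb.com/find?{query}")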

I suspect the best way to do this would be to make a specialised web service to make it easy for contributors to transcribe and track down IMDB title IDs. In the web service, once a entry is transcribed, the title and year could be extracted from the text, a search in IMDB conducted for the user to pick the equivalent IMDB title ID right away. By spreading out the work among volunteers, it would also be possible to make at least two persons transcribe the same entries to be able to discover any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowd sourcing the task?

Note, Project Gutenberg already have some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect they only have copies of renewal for written works. I have not been able to find any transcribed versions of movie renewals so far. Perhaps they exist somewhere?

I would love to figure out methods for finding all the public domain works in other countries too, but it is a lot harder. At least for Norway and Great Britain, such work involve tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such procedure without a registry of every person involved in making movies and their death year.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianRhonda D'Vine: #metoo

I thought for a long time about whether I should post a/my #metoo. It wasn't a rape. Nothing really happened. And a lot of these stories are very disturbing.

And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But there the door to the screening room stood open. And curious as we were, we looked through the door. The projectionist saw us and waved us in. It was exciting to see, from that perspective, a movie that was forbidden to us.

He explained to us how the machines worked, showed us how the film rolls were put in, and showed us how to spot the signals on the screen that are the cue to turn on the second projector with the new roll.

During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.

Nothing really happened, and we didn't say anything.


Planet DebianRussell Coker: Thinkpad X301

Another Broken Thinkpad

A few months ago I wrote a post about “Observing Reliability” [1] regarding my Thinkpad T420. I noted that the T420 had been running for almost 4 years which was a good run, and therefore the failed DVD drive didn’t convince me that Thinkpads have quality problems.

Since that time the plastic on the lid by the left hinge broke, every time I open or close the lid it breaks a bit more. That prevents use of that Thinkpad by anyone who wants to use it as a serious laptop as it can’t be expected to last long if opened and closed several times a day. It probably wouldn’t be difficult to fix the lid but for an old laptop it doesn’t seem worth the effort and/or money. So my plan now is to give the Thinkpad to someone who wants a compact desktop system with a built-in UPS, a friend in Vietnam can probably find a worthy recipient.

My Thinkpad History

I bought the Thinkpad T420 in October 2013 [2], it lasted about 4 years and 2 months. It cost $306.

I bought my Thinkpad T61 in February 2010 [3], it lasted about 3 years and 8 months. It cost $796 [4].

Prior to the T61 I had a T41p that I received well before 2006 (maybe 2003) [5]. So the T41p lasted close to 7 years. As it was originally bought for me by a multinational corporation, I’m sure it cost a lot of money. By the time I bought the T61 it had display problems, cooling problems, and compatibility issues with recent Linux distributions.

Before the T41p I had 3 Thinkpads in 5 years, all of which had the type of price that only made sense in the dot-com boom.

In terms of absolute lifetime the Thinkpad T420 did ok. In terms of cost per year it did very well, only $6 per month. The T61 was $18 per month, and while the T41p lasted a long time it probably cost over $2000 giving it a cost of over $20 per month. $20 per month is still good value, I definitely get a lot more than $20 per month benefit from having a laptop. While it’s nice that my most recent laptop could be said to have saved me $12 per month over the previous one, it doesn’t make much difference to my financial situation.

Thinkpad X301

My latest Thinkpad is an X301 that I found on an e-waste pile; it had a broken DVD drive, which is presumably the reason why someone decided to throw it out. It has the same power connector as my previous 2 Thinkpads, which was convenient as I didn’t find a PSU with it. I saw a review of the X301 dated 2008, which probably means it was new in 2009, but it has no obvious signs of wear so probably hasn’t been used much.

My X301 has a 1440*900 screen which isn’t as good as the T420 resolution of 1600*900. But a lower resolution is an expected trade-off for a smaller laptop. The X301 comes with a 64G SSD, which is a significant limitation.

I previously wrote about a “cloud lifestyle” [6]. I hadn’t implemented all the ideas from that post due to distractions and a lack of time. But now that I’ll have a primary PC with only 64G of storage I have more incentive to do that. The 100G disk in the T61 was a minor limitation at the time I got it, but since then everything has gotten bigger and 64G is going to be a big problem. The fact that it’s an unusual 1.8″ form factor means that I can’t cheaply upgrade it or use the SSD that I’ve used in the Thinkpad T420.

My current Desktop PC is an i7-2600 system which builds the SE Linux policy packages for Debian (the thing I compile most frequently) in about 2 minutes with about 5 minutes of CPU time used. The same compilation on the X301 takes just over 6.5 minutes with almost 9 minutes of CPU time used. The i5 CPU in the Thinkpad T420 was somewhere between those times. While I can wait 6.5 minutes for a compile to test something it is an annoyance. So I’ll probably use one of the i7 or i5 class servers I run to do builds.

On the T420 I had chroot environments running with systemd-nspawn for the last few releases of Debian in both AMD64 and i386 variants. Now I have to use a server somewhere for that.

I stored many TV shows, TED talks, and movies on the T420. Probably part of the problem with the hinge was due to adjusting the screen while watching TV in bed. Now I have a phone with 64G of storage and a tablet with 32G so I will use those for playing videos.

I’ve started to increase my use of Git recently. There’s many programs I maintain that I really should have had version control for years ago. Now the desire to develop them on multiple systems gives me an incentive to do this.

Comparing to a Phone

My latest phone is a Huawei Mate 9 (I’ll blog about that shortly) which has a 1920*1080 screen and 64G of storage. So it has a higher resolution screen than my latest Thinkpad as well as equal storage. My phone has 4G of RAM while the Thinkpad only has 2G (I plan to add RAM soon).

I don’t know of a good way of comparing CPU power of phones and laptops (please comment if you have suggestions about this). The issues of GPU integration etc will make this complex. But I’m sure that the octa-core CPU in my phone doesn’t look too bad when compared to the dual-core CPU in my Thinkpad.

Conclusion

The X301 isn’t a laptop I would choose to buy today. Since using it I’ve appreciated how small and light it is, so I would definitely consider a recent X series. But being free the value for money is NaN which makes it more attractive. Maybe I won’t try to get 4+ years of use out of it, in 2 years time I might buy something newer and better in a similar form factor.

I can just occasionally poll an auction site and bid if there’s anything particularly tempting. If I was going to buy a new laptop now before the old one becomes totally unusable I would be rushed and wouldn’t get the best deal (particularly given that it’s almost Christmas).

Who knows, I might even find something newer and better on an e-waste pile. It’s amazing the type of stuff that gets thrown out nowadays.

,

Sociological ImagesSocImages Classic—The Ugly Christmas Sweater: From ironic nostalgia to festive simulation

National Ugly Christmas Sweater Day is this Friday, December 15th. Perhaps you’ve noticed the recent ascent of the Ugly Christmas Sweater or even been invited to an Ugly Christmas Sweater Party. How do we account for this trend and its call to “don we now our tacky apparel”?

Total search of term “ugly Christmas sweater” relative to other searches over time (c/o Google Trends):

Ugly Christmas Sweater parties purportedly originated in Vancouver, Canada, in 2001. Their appeal might seem to stem from their role as a vehicle for ironic nostalgia, an opportunity to revel in all that is festively cheesy. It also might provide an opportunity to express the collective effervescence of the well-intentioned (but hopelessly tacky) holiday apparel from moms and grandmas.

However, The Atlantic points to a more complex reason why we might enjoy the cheesy simplicity offered by Ugly Christmas Sweaters: “If there is a war on Christmas, then the Ugly Christmas Sweater, awesome in its terribleness, is a blissfully demilitarized zone.” This observation pokes fun at the Fox News-style hysterics regarding the “War on Christmas”; despite being commonly called Ugly Christmas Sweaters, the notion seems to persist that their celebration is an inclusive and “safe” one.

Photo Credit: TheUglySweaterShop, Flickr CC

We might also consider the generally fraught nature of the holidays (which are financially and emotionally taxing for many), suggesting that the Ugly Sweater could offer an escape from individual holiday stress. There is no shortage of sociologists who can speak to the strain of family, consumerism, and mental health issues that plague the holidays, to say nothing of the particular gendered burdens they produce. Perhaps these parties represent an opportunity to shelve those tensions.

But how do we explain the fervent communal desire for simultaneous festive celebration and escape? Fred Davis notes that nostalgia is invoked during periods of discontinuity. This can occur at the individual level when we use nostalgia to “reassure ourselves of past happiness.” It may also function as a collective response – a “nostalgia orgy”- whereby we collaboratively reassure ourselves of shared past happiness through cultural symbols. The Ugly Christmas Sweater becomes a freighted symbol of past misguided, but genuine, familial affection and unselfconscious enthusiasm for the holidays – it doesn’t matter that we have not all really had the actual experience of receiving such a garment.

Jean Baudrillard might call the process of mythologizing the Ugly Christmas Sweater a simulation, a collapsing between reality and representation. And, as George Ritzer points out, simulation can become a ripe target for corporatization as it can be made more spectacular than its authentic counterparts. We need only look at the shift from the “authentic” prerogative to root through one’s closet for an ugly sweater bestowed by grandma (or even to retrieve from the thrift store a sweater imparted by someone else’s grandma) to the cottage-industry that has sprung up to provide ugly sweaters to the masses. There appears to be a need for collective nostalgia that is outstripped by the supply of “actual” Ugly Christmas Sweaters that we have at our disposal.

Colin Campbell states that consumption involves not just purchasing or using a good or service, but also selecting and enhancing it. Accordingly, our consumptive obligation to the Ugly Christmas Sweater becomes more demanding, individualized and, as Ritzer predicts, spectacular. For example, we can view this intensive guide for DIY ugly sweaters. If DIY isn’t your style, you can indulge your individual (but mass-produced) tastes in NBA-inspired or cultural mash-up Ugly Christmas Sweaters, or these Ugly Christmas Sweaters that aren’t even sweaters at all.

The ironic appeal of the Ugly Christmas Sweater Party is that one can be deemed festive for partaking, while simultaneously ensuring that one is participating in a “safe” celebration – or even a gentle mockery – of holiday saturation and demands. The ascent of the Ugly Christmas Sweater has involved a transition from ironic nostalgia vehicle to a corporatized form of escapism, one that we are induced to participate in as a “safe” form of  festive simulation that becomes increasingly individualized and demanding in expression.

Re-posted at Pacific Standard.

Kerri Scheer is a PhD Student working in law and regulation in the Department of Sociology at the University of Toronto. She thanks her colleague Allison Meads for insights and edits on this post. You can follow Kerri on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityPatch Tuesday, December 2017 Edition

The final Patch Tuesday of the year is upon us, with Adobe and Microsoft each issuing security updates for their software once again. Redmond fixed problems with various flavors of Windows, Microsoft Edge, Office, Exchange and its Malware Protection Engine. And of course Adobe’s got another security update available for its Flash Player software.

The December patch batch addresses more than 30 vulnerabilities in Windows and related software. As per usual, a huge chunk of the updates from Microsoft tackle security problems with the Web browsers built into Windows.

Also in the batch today is an out-of-band update that Microsoft first issued last week to fix a critical issue in its Malware Protection Engine, the component that drives the Windows Defender/Microsoft Security Essentials embedded in most modern versions of Windows, as well as Microsoft Endpoint Protection, and the Windows Intune Endpoint Protection anti-malware system.

Microsoft was reportedly made aware of the malware protection engine bug by the U.K.’s National Cyber Security Centre (NCSC), a division of the Government Communications Headquarters — the United Kingdom’s main intelligence and security agency. As spooky as that sounds, Microsoft said it is not aware of active attacks exploiting this flaw.

Microsoft said the flaw could be exploited via a booby-trapped file that gets scanned by the Windows anti-malware engine, such as an email or document. The issue is fixed in version 1.1.14405.2 of the engine. According to Microsoft, Windows users should already have the latest version because the anti-malware engine updates itself constantly. In any case, for detailed instructions on how to check whether your system has this update installed, see this link.

The Microsoft updates released today are available in one big batch from Windows Update, or automagically via Automatic Updates. If you don’t have Automatic Updates enabled, please visit Windows Update sometime soon (click the Start/Windows button, then type Windows Update).

The newest Flash update from Adobe brings the player to v. 28.0.0.126 on Windows, Macintosh, Linux and Chrome OS. Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g. Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, although users may need to manually check for updates and/or restart the browser to get the latest version.

When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then. Chrome will replace that three dot icon with an up-arrow inside of a circle when updates are waiting to be installed.

Standard disclaimer: Because Flash remains such a security risk, I continue to encourage readers to remove or hobble Flash Player unless and until it is needed for a specific site or purpose. More on that approach (as well as slightly less radical solutions) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

Another, perhaps less elegant, solution is to keep Flash installed in a browser that you don’t normally use, and then to only use that browser on sites that require it.

Planet DebianKeith Packard: Altos1.8.3

AltOS 1.8.3 — TeleMega version 3.0 support and bug fixes

Bdale and I are pleased to announce the release of AltOS version 1.8.3.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STMF042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including support for our new TeleMega v3.0 board and a selection of bug fixes.

Announcing TeleMega v3.0

TeleMega is our top of the line flight computer with 9-axis IMU, 6 pyro channels, uBlox Max 7Q GPS and 40mW telemetry system. Version 3.0 is feature compatible with version 2.0, incorporating a new higher-performance 9-axis IMU in place of the former 6-axis IMU and separate 3-axis magnetometer.

AltOS 1.8.3

In addition to support for TeleMega v3.0 boards, AltOS 1.8.3 contains some important bug fixes for all flight computers. Users are advised to upgrade their devices.

  • Ground testing the additional pyro channels on EasyMega and TeleMega could result in a sticky 'fired' status, which would prevent these channels from firing on future flights.

  • Corrupted flight log records could prevent future flights from capturing log data.

  • Fixed saving of pyro configuration that ended with 'Descending'. This would cause the configuration to revert to the previous state during setup.

The latest AltosUI and TeleGPS applications have improved functionality for analyzing flight data. The built-in graphing capabilities are improved with:

  • Graph lines have improved appearance to make them easier to distinguish. Markers may be placed at data points to show recorded data values.

  • Graphing offers the ability to adjust the smoothing of computed speed and acceleration data.

Exporting data for other applications has some improvements as well:

  • KML export now reports both barometric and GPS altitude data to make it more useful for Tripoli record reporting.

  • CSV export now includes TeleMega/EasyMega pyro voltages and tilt angle.

CryptogramRemote Hack of a Boeing 757

Last month, the DHS announced that it was able to remotely hack a Boeing 757:

"We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration," said Robert Hickey, aviation program manager within the Cyber Security Division of the DHS Science and Technology (S&T) Directorate.

"[Which] means I didn't have anybody touching the airplane, I didn't have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft." Hickey said the details of the hack and the work his team are doing are classified, but said they accessed the aircraft's systems through radio frequency communications, adding that, based on the RF configuration of most aircraft, "you can come to grips pretty quickly where we went" on the aircraft.

Worse Than FailureCodeSOD: ALM Tools Could Fix This

I’m old enough that, when I got into IT, we just called our organizational techniques “software engineering”. It drifted into “project management”, then the “software development life-cycle”, and lately “application life-cycle management (ALM)”.

No matter what you call it, you apply these techniques so that you can at least attempt to release software that meets the requirements and is reasonably free from defects.

Within the software development space, there are families of tools and software that we can use to implement some sort of ALM process… like “Harry Peckherd”’s Application Life-Cycle Management suite. By using their tool, you can release software that meets the requirements and is free from defects, right?

Well, Brendan recently attempted to upgrade their suite from 12.01 to 12.53, and it blew up with a JDBC error: [Mercury][SQLServer JDBC Driver][SQLServer]Cannot find the object "T_DBMS_SQL_BIND_VARIABLE" because it does not exist or you do not have permissions. He picked through the code that it was running, and found this blob of SQL:

DROP TABLE [t_dbms_sql_bind_variable]
DECLARE @sql AS VARCHAR(4000)
begin
SET @sql = ''
SELECT @sql = @sql + 'DROP FULLTEXT INDEX ON T_DBMS_SQL_BIND_VARIABLE'
FROM sys.fulltext_indexes
WHERE object_id = object_id('T_DBMS_SQL_BIND_VARIABLE')
GROUP BY object_id
if @sql <> '' exec (@sql)
end
ALTER TABLE [T_DBMS_SQL_BIND_VARIABLE] DROP CONSTRAINT [FK_t_dbms_sql_bind_variable_t_dbms_sql_cursor]

The upgrade script drops a table, drops the associated indexes on it, and then attempts to alter the table it just dropped. This is a real thing, released as part of software quality tools, by a major vendor in the space. They shipped this.


,

Planet DebianJoey Hess: two holiday stories

Two stories of something nice coming out of something not-so-nice for the holidays.

Story the first: The Gift That Kept on Giving

I have a Patreon account that is a significant chunk of my funding to do what I do. Patreon has really pissed off a lot of people this week, and people are leaving it in droves. My Patreon funding is down 25%.

This is an opportunity for Liberapay, which is run by a nonprofit, avoids Patreon's excessive fees, and is free software to boot. So now I have a Liberapay account and have diversified my sustainable funding some more, although only half of the people I lost from Patreon have moved over. A few others have found other ways to donate to me, including snail mail and Paypal, and others I'll just lose. Thanks, Patreon.

Yesterday I realized I should check if anyone had decided to send me Bitcoin. Bitcoin donations are weird because no one ever tells me that they made them. Also because it's never clear if the motive is to get me invested in bitcoin or send me some money. I prefer not to be invested in risky currency speculation, preferring risks like "write free software without any clear way to get paid for it", so I always cash it out immediately.

I have not used bitcoin for a long time. I could see a long time ago that its development community was unhealthy, that there was going to be a messy fork and I didn't want the drama of that. My bitcoin wallet files were long deleted. Checking my address online, I saw that in fact two people had reacted to Patreon by sending a little bit of bitcoin to me.

I checked some old notes to find the recovery seeds, and restored "hot wallet" and "cold wallet", not sure which was my public incoming wallet. Neither was, and after some concerned scrambling, I found the gpg locked file in a hidden basement subdirectory that let me access my public incoming wallet, where those two donations were indeed waiting.

What of the other two wallets? "Hot wallet" was empty. But "cold wallet" turned out to be some long forgotten wallet, and yes, this is now a story about "some long forgotten bitcoin wallet" -- you know where this is going, right?

Yeah, well it didn't have a life changing amount of bitcoin in it, but it had a little almost-dust from a long-ago bitcoin donor, which at current crazy bitcoin prices, is enough that I may need to fill out a tax form now that I've sold it. And so I will be having a happy holidays, no matter how the Patreon implosion goes. But for sustainable funding going forward, I do hope that Liberapay works out.

Story the second: "a lil' positive end note does wonders"

I added this to the end of git-annex's bug report template on a whim two years ago:

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

That prompt turned out to be much more successful than I had expected, and so I want to pass the gift of the idea on to you. Consider adding something like that to your project's bug report template.

It really works: I'll see a bug report be lost and confused and discouraged, and keep reading to make sure I see whatever nice thing there might be at the end. It's not just about meaningless politeness either, it's about getting an impression about whether the user is having any success at all, and how experienced they are in general, which is important in understanding where a bug report is coming from.

I've learned more from it than I have from most other interactions with git-annex users, including the git-annex user surveys. Out of 217 bug reports that used this template, 182 answered the question. Here are some of my favorite answers.

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

  • I do! I wouldn't even have my job, if it wasn't for git-annex. ;-)

  • Yeah, it works great! If not for it I would not have noticed this data corruption until it was too late.

  • Indeed. All my stuff (around 3.5 terabytes) is stored in git-annex with at least three copies of each file on different disks and locations, spread over various hard disks, memory sticks, servers and you name it. Unused disk space is a waste, so I fill everything up to the brim with extra copies.

    In other words, Git-Annex and I are very happy together, and I'd like to marry it. And because you are the father, I hereby respectfully ask for your blessing.

  • Yes, git with git annex has revolutionised my scientific project file organisation and thats why I want to improve it.

  • <3 <3 <3

  • We use git-annex for our open-source FreeSurfer software and find very helpful indeed. Thank you. https://surfer.nmr.mgh.harvard.edu/

  • Yes I have! I've used it manage lots of video editing disks before, and am now migrating several slightly different copies of 15TB sized documentary footage from random USB3 disks and LTO tapes to a RAID server with BTRFS.

  • Oh yeah! This software is awesome. After getting used to having "dummy" shortcuts to content I don't currently have, with the simple ability to get/drop that content, I can't believe I haven't seen this anywhere before. If there is anything more impressive than this software, it's the support it has had from Joey over all this time. I'd have pulled my hair out long ago. :P

  • kinda

  • Yep, works apart from the few tests that fail.

  • Not yet, but I'm excited to make it work!

  • Roses are red
    Violets are blue
    git-annex is awesome
    and so are you
    ;-)
    But bloody hell, it's hard to get this thing to build.

  • git-annex is awesome, I lean on it heavily nearly every single day.

  • I have a couple of repositories atm, one with my music, another that backs up our family pictures for the family and uses Amazon S3 as a backup.

  • Yes! It's by far one of my favorite apps! it works very well on my laptop, on my home file server, and on my internal storage on my Android phone :)

  • Yes! I've been using git-annex quite a bit over the past year, for everything from my music collection to my personal files. Using it for a not-for-profit too. Even trying to get some Mac and Windows users to use it for our HOA's files.

  • I use git-annex for everything. I've got 10 repositories and around 2.5TB of data in those repos which in turn is synced all over the place. It's excellent.

  • Really nice tool. Thanks Joey!

  • Git-annex rocks !!!!

  • I'd love to say I have. You'll hear my shout of joy when I do.

  • Mixed bag, works when it works, but I've had quite a few "unexplained" happenings. Perservering for now, hoping me reporting bugs will see things improve...

  • Yes !!! I'm moving all my files into my annex. It is very robust; whenever something is wrong there is always some other copy somewhere that can be used.

  • Yes! git annex has been enormously helpful. Thanks so much for this tool.

  • Oh yes! I love git-annex :) I've written the hubiC special remote for git-annex, the zsh completion, contributed to the crowdfunding campaigns, and I'm a supporter on Patreon :)

  • Yes, managing 30000 files, on operating systems other than Windows though...

  • Of course ;) All the time

  • I trust Git Annex to keep hundreds of GB of data safe, and it has never failed me - despite my best efforts

  • Oh yeah, I am still discovering this powerfull git annex tool. In fact, collegues and I are forming a group during the process to exchange about different use cases, encountered problems and help each other.

  • I love the metadata functionality so much that I wrote a gui for metadata operations and discovered this bug.

  • Sure, it works marvels :-) Also what I was trying to do is perhaps not by the book...

  • Oh, yes. It rules. :) One of the most important programs I use because I have all my valuable stuff in it. My files have never been safer.

  • I'm an extremely regular user of git-annex on OS X and Linux, for several years, using it as a podcatcher and to manage most of my "large file" media. It's one of those "couldn't live without" tools. Thanks for writing it.

  • Yes, I've been using git annex for I think a year and a half now, on several repositories. It works pretty well. I have a total of around 315GB and 23K annexed keys across them (counting each annex only once, even though they're cloned on a bunch of machines).

  • I only find (what I think are) bugs because I use it and I use it because I like it. I like it because it works (except for when I find actual bugs :]).

  • I'm new to git-annex and immediately astonished by its unique strength.

  • As mentioned before, I am very, very happy with git-annex :-) Discovery of 2015 for me.

  • git-annex is great and revolutionized my file organization and backup structure (if they were even existing before)

  • That’s just a little hiccup in, up to now, various months of merry use! ;-)

  • Yes. Love it. Donated. Have been using it for years. Recommend it and get(/force) my collaborators to use it. ;-)

  • git-annex is an essential building block in my digital life style!

  • Well, git-annex is wonderful!

A lil' positive end note turned into a big one, eh? :)

Planet DebianWouter Verhelst: Systemd, Devuan, and Debian

Somebody recently pointed me towards a blog post by a small business owner who proclaimed to the world that using Devuan (and not Debian) is better, because it's cheaper.

Hrm.

Looking at creating Devuan, which means splitting off from Debian: economically, you caused approximately infinite cost.

Well, no. I'm immensely grateful to the Devuan developers, because when they announced their fork, all the complaints about systemd on the debian-devel mailing list ceased to exist. Rather than a cost, that was an immensely gratifying experience, and it made sure that I started reading the debian-devel mailing list again, which I had stopped doing for a while before that. Meanwhile, life in Debian went on as it always has.

Debian values choice. Fedora may not be about choice, but Debian is. If there are two ways of doing something, Debian will include all four. If you want to run a Linux system, and you're not sure whether to use systemd, upstart, or something else, then Debian is for you! (well, except if you want to use upstart, which is in jessie but not in stretch). Debian defaults to using systemd, but it doesn't enforce it; and while it may require a bit of manual handholding to make sure that systemd never ever ever ends up on your system, this is essentially not difficult.

you@your-machine:~$ apt install equivs; equivs-control your-sanity; $EDITOR your-sanity

Now make sure that what you get looks something like this (ignoring comments):

Section: misc
Priority: standard
Standards-Version: <whatever was there>

Package: your-sanity
Essential: yes
Conflicts: systemd-sysv
Description: Make sure this system does not install what I don't want
 The packages in the Conflicts: header cannot be installed without
 very difficult steps, and apt will never offer to install them.

Install it on every system where you don't want to run systemd. You're done, you'll never run systemd. Well, except if someone types the literal phrase "Yes, do as I say!", including punctuation and everything, when asked to do so. If you do that, well, you get to keep both pieces. Also, did you see my pun there? Yes, it's a bit silly, I admit it.

But before you take that step, consider this.

Four years ago, I was an outspoken opponent of systemd. It was a bad idea, I thought. It is not portable. It will cause the death of Debian GNU/kFreeBSD, and a few other things. It is difficult to understand and debug. It comes with a truckload of other things that want to replace the universe. Most of all, their developers had a pretty bad reputation of being, pardon my French, arrogant assholes.

Then, the systemd maintainers filed bug 796633, asking me to provide a systemd unit for nbd-client, since it provided an rcS init script (which is really a very special case), and the compatibility support for that in systemd was complicated and support for it would be removed from the systemd side. Additionally, providing a systemd template unit would make the systemd nbd experience much better, without dropping support for other init systems (those cases can still use the init script). In order to develop that, I needed a system to test things on. Since I usually test things on my laptop, I installed systemd on my laptop. The intent was to remove it afterwards. However, for various reasons, that never happened, and I still run systemd as my pid1. Here's why:

  • Systemd is much faster. Where my laptop previously took 30 to 45 seconds to boot using sysvinit, it takes less than five. In fact, it took longer for it to do the POST than it took for the system to boot from the time the kernel was loaded. I changed the grub timeout from the default of five seconds to something more reasonable, because I found that five seconds was just ridiculously long if it takes about half that for the rest of the system to boot to a login prompt afterwards.
  • Systemd is much more reliable. That is, it will fail more often, but it will reliably fail. When it fails, it will tell you why it failed, so you can figure out what went wrong and fix it, making sure the system never fails again in the same fashion. The unfortunate fact of the matter is that there were many bugs in our init scripts, but they were never discovered and therefore lingered. For instance, you would not know about this race condition between two init scripts, because sysvinit is so dog slow that 99 times out of 100 it would not trigger, and therefore you don't see it. The one time you do see it, something didn't come up, but sysvinit doesn't log about such errors (it expects the init script to do so), so all you can do is go "damn, wtf happened?!?" and manually start things, allowing the bug to remain. These race conditions were much more likely to trigger with systemd, which caused it a lot of grief originally; but really, you should be thankful, because now that all these race conditions have been discovered by way of an init system that is much more verbose about such problems, they have also been fixed, and your sysvinit system is more reliable, too, as a result. There are other similar issues (dependency loops, to name one) that systemd helped fix.
  • Systemd is different, and that requires some re-schooling. When I first moved my laptop to systemd, I remember running into some kind of issue that I couldn't figure out how to fix. No, I don't remember the specifics of that issue, but they don't really matter. The point is this: at first, I thought "this is horrible, you can't debug it, how can you use such a system". And while it's true that undebuggable systems are not very useful, the systemd maintainers know this too, and therefore systemd is debuggable. It's just that you don't debug it by throwing some imperative init script code through a debugger (or, worse, something like sh -x), because there is no imperative init script code to throw through such a debugger, and therefore that makes little sense. Instead, there is a wealth of different tools to inspect the systemd state, and a lot of documentation on what the different things mean. It takes a while to internalize all that; and if you're not convinced that systemd is a good thing then it may mean some cursing while you're fighting your way through. But in the end, systemd is not more difficult to debug than simple init scripts -- in fact, it sometimes may be easier, because the system is easier to reason about. (A short sketch of a few of those inspection tools follows right after this list.)
  • While systemd comes with a truckload of extra daemons (systemd-networkd, systemd-resolved, systemd-hostnamed, etc etc etc), the "systemd" in their names does not imply that they are required by systemd. In fact, it's the other way around: you are required to run systemd if you want to run systemd-networkd (etc), because systemd-networkd (etc) make extensive use of the systemd infrastructure and public APIs; but nothing inside systemd requires that systemd-networkd (etc) be running. In fact, on my personal laptop, beyond systemd and udev themselves, I'm not using anything that gets built from the systemd source.
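
A small sketch of a few of those inspection tools; the unit name is purely illustrative (nbd-client stands in for whatever service you are curious about), but the commands themselves are standard systemd tooling:

systemctl status nbd-client.service     # current state, recent log lines, process tree
journalctl -u nbd-client.service -b     # everything this unit logged since the current boot
systemd-analyze critical-chain          # which units the boot actually waited on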

I'm not saying these reasons are universally true, and I'm not saying that you'll like systemd as much as I have. I am saying, however, that you should give it an honest attempt before you say "I'm not going to run systemd, ever," because you might be surprised by the huge gap of difference between what you expected and what you got. I know I was.

So, given all that, do I think that Devuan is a good idea? It is if you want flamewars. It gives those people who want to vilify systemd a place to do that without bothering Debian with their opinion. But beyond that, if you want to run Debian and you don't want to run systemd, you can! Just make sure you choose the right options, and you're done.

All that makes me wonder why today, almost half a year after the initial release of Debian 9.0 "Stretch", Devuan Ascii still hasn't been released, and why it took them over two years to release their Devuan Jessie based on Debian Jessie. But maybe that's just me.

CryptogramSurveillance inside the Body

The FDA has approved a pill with an embedded sensor that can report when it is swallowed. The pill transmits information to a wearable patch, which in turn transmits information to a smartphone.

Worse Than FailureCodeSOD: A Type of Standard

I’ve brushed up against the automotive industry in the past, and have gained a sense about how automotive companies and their suppliers develop custom software. That is to say, they hack at it until someone from the business side says, “Yes, that’s what we wanted.” 90% of the development time is spent doing re-work (because no one, including the customer, understood the requirements) and putting out fires (because no one, including the customer, understood the requirements well enough to tell you how to test it, so things are going wrong in production).

Mary is writing some software that needs to perform automated testing on automotive components. The good news is that the automotive industry has adopted a standard API for accomplishing this goal. The bad news is that the API was designed by the automotive industry. Developing standards, under ideal conditions, is hard. Developing standards in an industry that is still struggling with software quality and hasn’t quite fully adopted the idea of cross-vendor standardization in the first place?

You’re gonna have problems.

The specific problem that led Mary to send us this code was the standard's way of defining data types. As you can guess, they used an XML schema to lay out the rules. That's how enterprises do this sort of thing.

There are a bunch of “primitive” data types, like UIntVariable or BoolVariable. There are also collection types, like Vector or Map or Curve (3D plot). You might be tempted to think of the collection types in terms of generics, or you might be tempted to think about how XML schemas let you define new elements, and how these make sense as elements.

If you are thinking in those terms, you obviously aren’t ready for the fast-paced world of developing software for the automotive industry. The correct, enterprise-y way to define these types is just to list off combinations:

<xs:simpleType name="FrameworkVarType">
        <xs:annotation>
                <xs:documentation>This type is an enumeration of all available data types on Framework side.</xs:documentation>
        </xs:annotation>
        <xs:restriction base="xs:string">
                <xs:enumeration value="UIntVariable"/>
                <xs:enumeration value="IntVariable"/>
                <xs:enumeration value="FloatVariable"/>
                <xs:enumeration value="BoolVariable"/>
                <xs:enumeration value="StringVariable"/>
                <xs:enumeration value="UIntVectorVariable"/>
                <xs:enumeration value="IntVectorVariable"/>
                <xs:enumeration value="FloatVectorVariable"/>
                <xs:enumeration value="BoolVectorVariable"/>
                <xs:enumeration value="StringVectorVariable"/>
                <xs:enumeration value="UIntMatrixVariable"/>
                <xs:enumeration value="IntMatrixVariable"/>
                <xs:enumeration value="FloatMatrixVariable"/>
                <xs:enumeration value="BoolMatrixVariable"/>
                <xs:enumeration value="StringMatrixVariable"/>
                <xs:enumeration value="FloatIntCurveVariable"/>
                <xs:enumeration value="FloatFloatCurveVariable"/>
                <xs:enumeration value="FloatBoolCurveVariable"/>
                <xs:enumeration value="FloatStringCurveVariable"/>
                <xs:enumeration value="StringIntCurveVariable"/>
                <xs:enumeration value="StringFloatCurveVariable"/>
                <xs:enumeration value="StringBoolCurveVariable"/>
                <xs:enumeration value="StringStringCurveVariable"/>
                <xs:enumeration value="FloatFloatIntMapVariable"/>
                <xs:enumeration value="FloatFloatFloatMapVariable"/>
                <xs:enumeration value="FloatFloatBoolMapVariable"/>
                <xs:enumeration value="FloatFloatStringMapVariable"/>
                <xs:enumeration value="FloatStringIntMapVariable"/>
                <xs:enumeration value="FloatStringFloatMapVariable"/>
                <xs:enumeration value="FloatStringBoolMapVariable"/>
                <xs:enumeration value="FloatStringStringMapVariable"/>
                <xs:enumeration value="StringFloatIntMapVariable"/>
                <xs:enumeration value="StringFloatFloatMapVariable"/>
                <xs:enumeration value="StringFloatBoolMapVariable"/>
                <xs:enumeration value="StringFloatStringMapVariable"/>
                <xs:enumeration value="StringStringIntMapVariable"/>
                <xs:enumeration value="StringStringFloatMapVariable"/>
                <xs:enumeration value="StringStringBoolMapVariable"/>
                <xs:enumeration value="StringStringStringMapVariable"/>
        </xs:restriction>
</xs:simpleType>

So, not only is this just awkward, it’s not exhaustive. If you, for example, wanted a curve that plots integer values against integer values… you can’t have one. If you want a StringIntFloatMapVariable, your only recourse is to get the standard changed, and that requires years of politics, and agreement from all of the other automotive companies, who won’t want to change anything out of fear that their unreliable, hacky solutions will break.


Planet DebianFrançois Marier: Using all of the 5 GHz WiFi frequencies in a Gargoyle Router

WiFi in the 2.4 GHz range is usually fairly congested in urban environments. The 5 GHz band used to be better, but an increasing number of routers now support it and so it has become fairly busy as well. It turns out that there are a number of channels on that band that nobody appears to be using despite being legal in my region.

Why are the middle channels unused?

I'm not entirely sure why these channels are completely empty in my area, but I would speculate that access point manufacturers don't want to deal with the extra complexity of the middle channels. Indeed these channels are not entirely unlicensed. They are also used by weather radars, for example. If you look at the regulatory rules that ship with your OS:

$ iw reg get
global
country CA: DFS-FCC
    (2402 - 2472 @ 40), (N/A, 30), (N/A)
    (5170 - 5250 @ 80), (N/A, 17), (N/A), AUTO-BW
    (5250 - 5330 @ 80), (N/A, 24), (0 ms), DFS, AUTO-BW
    (5490 - 5600 @ 80), (N/A, 24), (0 ms), DFS
    (5650 - 5730 @ 80), (N/A, 24), (0 ms), DFS
    (5735 - 5835 @ 80), (N/A, 30), (N/A)

you will see that these channels are flagged with "DFS". That stands for Dynamic Frequency Selection and it means that WiFi equipment needs to be able to detect when the frequency is used by radars (by detecting their pulses) and automatically switch to a different channel for a few minutes.

So an access point needs extra hardware and extra code to avoid interfering with priority users. Additionally, different channels have different bandwidth limits so that's something else to consider if you want to use 40/80 MHz at once.

The first time I tried setting my access point channel to one of the middle 5 GHz channels, the SSID wouldn't show up in scans and the channel was still empty in WiFi Analyzer.

I tried changing the channel again, but this time, I ssh'd into my router and looked at the error messages using this command:

logread -f

I found a number of errors claiming that these channels were not authorized for the "world" regulatory authority.

Because Gargoyle is based on OpenWRT, there are a lot more wireless configuration options available than what's exposed in the Web UI.

In this case, the solution was to explicitly set my country in the wireless options by putting:

country 'CA'

(where CA is the country code where the router is physically located) in the 5 GHz radio section of /etc/config/wireless on the router.
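
As an aside, since Gargoyle sits on top of OpenWRT, the same setting can also be applied from that ssh session with uci instead of editing the file by hand. A small sketch, assuming the 5 GHz radio is the section named radio0 on your router (check with uci show wireless first):

uci set wireless.radio0.country='CA'   # same change as the country line above
uci commit wireless                    # write it back to /etc/config/wireless
wifi                                   # reload the wireless configuration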

Then I rebooted and I was able to set the channel successfully via the Web UI.

If you are interested, there is a lot more information about how all of this works in the kernel documentation for the wireless stack.

,

Planet DebianAndrew Cater: Debian 8.10 and Debian 9.3 released - CDs and DVDs published

Done a tiny bit of testing for this: Sledge and RattusRattus and others have done far more.

Always makes me feel good: always makes me feel as if Debian is progressing - and I'm always amazed I can persuade my oldest 32 bit laptop to work :)

,

Planet DebianDirk Eddelbuettel: #12: Know and Customize your OS and Work Environment

Welcome to the twelfth post in the randomly relevant R recommendations series, or R4 for short. This post will insert a short diversion into what was planned as a sequence of posts on faster installations that started recently with this post, but we will return to it very shortly (for various definitions of "very" or "shortly").

Earlier today Davis Vaughn posted a tweet about a blog post of his describing a (term) paper he wrote modeling bitcoin volatility using Alexios's excellent rugarch package---and all that typeset with the styling James and I put together in our little pinp package, which is indeed very suitable for such tasks of writing (R)Markdown + LaTeX + R code combinations conveniently in a single source file.

Leaving aside the need to celebrate a term paper with a blog post and tweet, pinp is indeed very nice and deserving of some additional exposure and tutorials. Now, Davis sets out to do all this inside RStudio---as folks these days seem to like to limit themselves to a single tool or paradigm. Older and wiser users prefer the flexibility of switching tools and approaches, but alas, we digress. While Davis manages of course to do all this in RStudio, which is indeed rather powerful and therefore rightly beloved, he closes on

I wish there was some way to have Live Rendering like with blogdown so that I could just keep a rendered version of the paper up and have it reload every time I save. That would be the dream!

and I can only add a forceful: Fear not, young man, for we can help thou!

Modern operating systems have support for file-change notification (inotify on Linux, kqueue on the BSDs and macOS), which can be used from the shell. Just as your pdf application refreshes automagically when the pdf file is updated, we can hook into this from the shell to actually create the pdf when the (R)Markdown file is updated. I am going to use a tool readily available on my Linux systems; macOS will surely have something similar. The entr command takes one or more file names supplied on stdin and executes a command when one of them changes. Handy for invoking make whenever one of your header or source files changes, and usable here. E.g. the last markdown file I was working on was named comments.md and contained comments to a referee, and we can auto-process it on each save via

echo comments.md | entr render.r comments.md

which uses render.r from littler (new release soon too...; a simple Rscript -e 'rmarkdown::render("comments.md")' would probably work too, but render.r is shorter and a little more powerful so I use it more often myself) on the input file comments.md, which also happens to be the (here sole) file being monitored.
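
Spelled out, the littler-free variant mentioned in passing above would look like this (a sketch; it assumes the rmarkdown package is installed and that comments.md is the file you want re-rendered on every save):

echo comments.md | entr Rscript -e 'rmarkdown::render("comments.md")'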

And that is really all there is to it. I wanted / needed something like this a few months ago at work too, and may have used an inotify-based tool there but cannot find any notes. Python has something similar via watchdog which is yet again more complicated / general.

It turns out that auto-processing is actually not that helpful as we often save before an expression is complete, leading to needless error messages. So at the end of the day, I often do something much simpler. My preferred editor has a standard interface to 'building': pressing C-x c loads a command (it recalls) that defaults to make -k (i.e., make with error skipping). Simply replacing that with render.r comments.md (in this case) means we get an updated pdf file when we want with a simple customizable command / key-combination.

So in sum: it is worth customizing your environments, learning about what your OS may have, and looking beyond a single tool / editor / approach. Even dreams may come true ...

Postscriptum: And Davis takes this in a stride and almost immediately tweeted a follow-up with a nice screen capture mp4 movie showing that entr does indeed work just as well on his macbook.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

TED“The courage to …” The talks of TED@Tommy

At TED@Tommy — held November 14, 2017, at Mediahaven in Amsterdam — fifteen creators, leaders and innovators invited us to dream, to dare and to do. (Photo: Richard Hadley / TED)

Courage comes in many forms. In the face of fear, it’s the conviction to dream, dare, innovate, create and transform. It’s the ability to try and try again, to admit when we’re wrong and stand up for what’s right.

TED and Tommy Hilfiger both believe in the power of courageous ideas to break conventions and celebrate individuality — it’s the driving force behind why the two organizations have partnered to bring experts in fashion, sustainability, design and more to the stage to share their ideas.

More than 300 Tommy associates from around the world submitted their ideas to take part in TED@Tommy, with more than 20 internal events taking place at local and regional levels, and the top 15 ideas were selected for the red circle on the TED@Tommy stage. At this inaugural event — held on November 14, 2017, at Mediahaven in Amsterdam — creators, leaders and innovators invited us to dream, to dare and to do.

After opening remarks from Daniel Grieder, CEO, Tommy Hilfiger Global and PVH Europe, and Avery Baker, Chief Brand Officer, Tommy Hilfiger Global, the talks of Session 1 kicked off.

Fashion is “about self-expression, a physical embodiment of what we portray ourselves as,” says Mahir Can Isik, speaking at TED@Tommy in Amsterdam. (Photo: Richard Hadley / TED)

Let fashion express your individuality. The stylish clothes you’re wearing right now were predicted to be popular up to two years before you ever bought them. This is thanks to trend forecasting agencies, which sell predictions of the “next big thing” to designers.  And according to Tommy Hilfiger retail buyer Mahir Can Isik, trend forecasting is, for lack of a better term, “absolutely bull.” Here’s a fun fact: More than 12,000 fashion brands all get their predictions from the same single agency — and this, Isik suggests, is the beginning of the end of true individuality. “Fashion is an art form — it’s about excitement, human interaction, touching our hearts and desires,” he says. “It’s about self-expression, a physical embodiment of what we portray ourselves as.” He calls on us to break this hold of forecasters and cherish self-expression and individuality.

Stylish clothing for the differently abled fashionista. Mindy Scheier believes that what you wear matters. “The clothes you choose can affect your mood, your health and your confidence,” she says. But when Scheier’s son Oliver was born with muscular dystrophy, a degenerative disorder that makes it hard for him to dress himself or wear clothing with buttons or zippers, she and her husband resorted to dressing him in what was easiest: sweatpants and a T-shirt. One afternoon when Oliver was eight, he came home from school and declared that he wanted to wear blue jeans like everyone else. Determined to help her son, Mindy spent the entire night MacGyvering a pair of jeans, opening up the legs to give them enough room to accommodate his braces and replacing the zipper and button with a rubber band. Oliver went to school beaming in his jeans the next day — and with that first foray into adaptive clothing, Scheier founded Runway of Dreams to educate the fashion industry about the needs of differently abled people. She explains how she designs for people who have a hard time getting dressed, and how she partnered with Tommy Hilfiger to make fashion history by producing the first mainstream adaptive clothing line, Tommy Adaptive.

Environmentally friendly, evolving fashion. The clothing industry is the world’s second largest source of pollution, second only to the oil and gas industry. (The equivalent of 200 T-shirts per person are thrown away annually in the US alone). Which is why sustainability sower Amit Kalra thinks a lot about how to be conscientious about the environment and still stay stylish. For his own wardrobe, he hits the thrift stores and stitches up his own clothing from recycled garments; as he says, “real style lives at the intersection of design and individuality.” As consumer goods companies struggle to provide consumers with the individuality they crave, Kalra suggests one way forward: Start using natural dyes (from sources such as turmeric or lichen) to color clothes sustainably. As the color fades, the clothing grows more personalized and individual to the owner. “There is no fix-all,” Kalra says, “But the fashion industry is the perfect industry to experiment and embrace change that could one day get us to the sustainable future we so desperately need.”

Tito Deler performs Big Joe Turner’s blues classic “Story to Tell” at TED@Tommy. (Photo: Richard Hadley / TED)

With a welcome musical interlude, blues musician (and VP of graphic design for Tommy Hilfiger) Tito Deler takes the stage, singing and strumming a stirring rendition of Big Joe Turner’s blues classic “Story to Tell.”

The truth we can find through literary fiction. Day by day, we’re exposed to streams of news, updates and information. Our brains are busier than ever as we try to understand the world we live in and develop our own story, and we often reach for nonfiction books to learn to become a better leader or inventor, how to increase our focus, and how to maintain a four-hour workweek. But for Tomas Elemans, brand protection manager for PVH, there’s an important reward from reading fiction that we’re leaving behind: empathy. “Empathy is the friendly enemy to our feeling of self-importance. Storytelling can help us to not only understand but feel the complexity, emotions and situations of distant others. It can be a vital antidote to the stress of all the noise around us,” Elemans says. Telling his personal story of the ups and downs of reading Dave Eggers’ Heroes of the Frontier, Elemans explains the importance of narrative immersion — how we transcend the here-and-now when we imagine being the characters in the stories we read — and how it reduces externally focused attention and increases reflection. “Literature has a way of reminding us that the stranger is not so strange,” Elemans says. “The ambition with which we turn to nonfiction books, we can also foster toward literature … Fiction can help us to disconnect from ourselves and tap into an emotional, empathetic side that we don’t often take the time to explore.”

Irene Mora shares the valuable lessons she learned being raised by a mom who was also a CEO. (Photo: Richard Hadley / TED)

Why you shouldn’t fear having a family and a career. As the child of parents who followed their passions and led successful careers, Irene Mora appreciates rather than resents their decision to have a family. Society’s perceptions of what it means to be a good parent — which usually means rejecting the dedicated pursuit of a profession — are dull and outdated, says Mora, now a merchandiser for Calvin Klein. “A lot of these conversations focus on the hypothetical negative effects, rather than the hypothetical positive effects that this could have on children,” Mora explains. “I’m living proof of the positive.” As she and her sister traveled the world with their parents due to her mother’s job as a CEO, she learned valuable lessons: adaptability, authenticity and independence. And despite her mother’s absences and limited face-to-face time, Mora didn’t feel abandoned or lacking in any way. “If your children know that you care, they will feel your love,” she says. “You don’t always have to be together to love and be loved.”

What you can learn from bad advice. Nicole Wilson, Tommy Hilfiger’s director of corporate responsibility, knows bad advice. From a young age, her father — a professional football player notorious for causing kitchen fires — would offer her unhelpful tidbits like: “It’s better to cheat than repeat,” or, at a street intersection, “No cop-y, no stop-y.” As a child, Wilson learned to steer clear of her father’s, ahem, wisdom, but as an adult, she realized that there’s an upside to bad advice. In this fun, personal talk, she shares how bad advice can be as helpful and as valuable as “so-called good advice” — because it can help you recognize extreme courses of action and develop a sense of when you should take the opposite advice from what you’re being offered. Above all, Wilson says, bad advice teaches you that “you have to learn to trust yourself — to take your own advice — because more times than not, your own advice is the best advice you are ever going to get.”

Fashion is a needed avenue of protest, says Kaustav Dey. He spoke about how we can embrace our most authentic selves at TED@Tommy. (Photo: Richard Hadley / TED)

Fashion as a  language of dissent. From a young age, fashion revolutionary and head of marketing for Tommy Hilfiger India Kaustav Dey knew that he was different, that his sense of self diverged from and even contradicted that of the majority of his classmates. He was never going to be the manly man his father hoped for and whom society privileged, he says. But it was precisely this distinct take on himself that would later land him in the streets of Milan and Paris, fashion worlds that further opened his eyes to the protest value of aesthetics. Dey explains the idea that fashion is a needed avenue of protest (but also a dangerous route to take) by speaking of the hateful comments Malala received for wearing jeans, by commenting on the repressive nature of widowed Indian women being eternally bound to white garments, and by telling the stories of the death of transgender activist Alesha and the murder of the eclectic actor Karar Nushi. Instead of focusing on society’s response to these individuals, Dey emphasizes that “fashion can give us a language of dissent.” Dey encourages us all to embrace our most authentic selves, so “in a world that’s becoming whitewashed, we will become the pinpricks of color pushing through.”

Returning to the stage to open Session 2, Tito Deler plays an original blues song, “My Fine Reward,” combining the influence of the sound of his New York upbringing with the style of pre-war Mississippi Delta blues. “I’m moving on to a place now where the streets are paved with gold,” Deler sings, “I’m gonna catch that fast express train to my reward in the sky.”

We should all make it a point not to buy fake goods and to notify officials when we see them being sold, says Alastair Gray, speaking at TED@Tommy in Amsterdam. (Photo: Richard Hadley / TED)

The deadly impact of counterfeit goods. To most consumers, the trade in knock-off goods seems harmless enough — we get to save money by buying lookalike products, and if anyone suffers, it’s only the big companies. But counterfeit investigator Alastair Gray says that those fake handbags, CDs and watches might be supporting organized crime or even terrorist organizations. “You wouldn’t buy a live scorpion because there’s a chance it will sting you on the way home,” Gray says. “But would you still buy a fake handbag if you knew the profit would enable someone to buy the bullets that might kill you and other innocent people?” This isn’t just conjecture: Saïd and Chérif Kouachi, the two brothers behind the 2015 attack on the Charlie Hebdo office in Paris that killed 12 people and wounded 11, purchased their weapons using the proceeds made from selling counterfeit sneakers. When it comes to organized crime and terrorism, most of us feel understandably helpless. But we do have the power to act, Gray says: make it a point not to buy fake goods and to notify officials (online or in real life) when we see them being sold.

Is data a designer’s hero? Data advocate Steve Brown began working in the fashion industry 15 years ago — when he would have to painstakingly sit for 12 hours each day picking every color that matched every fabric per garment he was working on. Today, however, designers can work with visualized 3D garments, fully functional with fabric, trim and prints, and they can even upload fabric choices to view the flow and drape of the design, all before a garment is ever made. Data and technology saves the designer time, Brown says, which allows for more time and attention to go into the creative tasks rather than the mundane ones. The designer’s role with data and technology is that of both a creator and a curator. He points to Amazon’s “Body Labs” and algorithms that learn a user’s personal style, both of which help companies to design custom-made garments. In this way, data can empower both the consumer and designer — and it should be embraced.

A better way to approach data. Every day, we’re inundated with far more data than our brains can process. Data translator Jonathan Koch outlines a few simple tools we can all use to understand and even critique data meant to persuade us. First: we need transparent definitions. Koch, a senior director of strategy and business development at PVH Asia Pacific, uses the example of a famous cereal brand that promised two scoops of raisins in every box of cereal (without bothering to define exactly what a “scoop” is) and a company that says that they’re the “fastest growing startup in Silicon Valley” (without providing a time period for context). The next tool: context and doubt. To get a clearer picture, we need to always question the context around data, and we need to always doubt the source, Koch says. Finally, we need to solve the problem of averages. When we deconstruct averages, which is how most data is delivered to us, into small segments, we can better understand what makes up the larger whole — and quickly get new, nuanced insights. With these three simple tools, we can use data to help us make better decisions about our health, wealth and happiness.

Conscious quitter Daniela Zamudio explains the benefits of moving on at TED@Tommy in Amsterdam. (Photo: Richard Hadley / TED)

An introduction to conscious quitting. “I’m a quitter,” says Daniela Zamudio, “and I’m very good at it.” Like many millennials, Zamudio has quit multiple jobs, cities, schools and relationships, but she doesn’t think quitting marks her as weak or lazy or commitment-phobic. Instead, she argues that leaving one path to follow another is a sign of strength and often leads to greater happiness in the long run. Now a senior marketing and communications manager for Tommy Hilfiger, Zamudio gives us an introduction to what she calls “conscious quitting.” She teaches us to weigh the pros and cons of quitting a particular situation and then instructs us to create a strategy to deal with the repercussions of our choice. For instance, after Zamudio broke off her engagement to a man she had been dating for nine years, she managed her heartbreak by scheduling every minute of her day, seven days a week. “It takes courage to quit,” says Zamudio, “but too often it feels also like it’s wrong.” She concludes her talk by reminding us that listening to our own needs and feelings (and ignoring society’s expectations) can often be just what we need.

Lessons in dissent. Have you ever presented an idea and been immediately barraged with a line of questioning that feels more like poking holes than genuinely questioning? Then you’ve probably engaged with a dissenter. Serial dissenter Andrew Millar promises these disagreements don’t come from a place of malice but rather from compassion, with an aim to improve on your idea. “At this point in time, we don’t have enough dissenters in positions of power,” says Millar. “And history shows that having yes-men is rarely a driver of progress.” He suggests that dissenters find a workplace that truly works with them, not against them — so if a company heralds conformity or relies heavily on hierarchy, then that place may not be the best for you. But even in the most welcoming environment, no dissenter gets off scot-free — each needs to understand that compromise, or dissent upon response, and the belief that you’re always right because you’re the only one to speak up, are all things that need to be mitigated to be successful. And to those in the path of a dissenter, says Millar, know this: when a dissenter speaks up, it can come across as criticism, but please do assume it stems from a place of good intent and connection.

Gabriela Roa speaks about learning to live in, and embrace, chaos, at TED@Tommy in Amsterdam. (Richard Hadley / TED)

Embrace the chaos. As the daughter of an obsessively organized mother, Gabriela Roa grew up believing that happiness was a color-coordinated closet. When she became a mom, she says, “I wanted my son to feel safe and loved in the way I did.” But he, like most toddlers, became “a chaos machine,” leaving toys and clothes in his wake. Roa, an IT project manager at PVH, felt terrible. Not only was she falling short as a disciplinarian, but she was so busy dwelling on her lapses that she wasn’t emotionally present for her son. One day, she remembered this piece of advice: “Whenever you experience a hard moment, there is always something to smile about.” In search of a smile, she began taking photos of her son’s messes. She shared them with friends and was moved by the compassion she received, so she started taking more pictures of her “happy explorer,” in which she documented her son’s creations and tried seeing life from his perspective. She realized that unlike her, he was living in the now — calm, curious and ready to investigate. The project changed her, ultimately bringing her back to playing the cello, an instrument she’d once loved. “I’m not saying that chaos is better than order,” says Roa. “But it is part of life.”

Present fathers: strong children. Dwight Stitt is a market manager for Tommy Hilfiger, but he identifies first and foremost as a father. He speaks passionately about the need for men to be involved in their children’s lives. Reminiscing about his own relationship with his father — and how it took 24 years for them to form a working bond — Stitt shares that so long as life permits, it’s never too late to recover what may seem lost. He has incorporated the lessons he learned from his father and amplified them to reach not only his children but also other people through a camp and canoeing trip. Conceiving of camp as an opportunity to foster love and growth between fathers and children, Stitt says that “camp has taught me that fatherhood is not only vital to a child’s development, but that seemingly huge hurdles can be overcome by simple acts of love and memorable moments.” He goes so far as to explain the emotional, academic and behavioral benefits of working father-child relationships and, in between tears, calls on all fathers to share his goal of reducing the alarming statistics of fatherlessness in whatever form it comes.

How magic tricks (and politicians) fool your brain. Ever wonder how a magic trick works? How did the magician pull a silver coin from behind your ear? How did they know which card was yours? According to magician and New York Times crossword puzzle constructor David Kwong, it all boils down to evolution. Because we take in an infinite number of stimuli at any given time, we only process a tiny fraction of what’s in front of us. Magic works, Kwong says, by exploiting the gaps in our awareness. We never notice the magician flipping our card to the top of the deck because we’re too busy watching him rub his sleeve three times. But the principles of illusion extend beyond a bit of sleight-of-hand, he says. Politicians also exploit us with cognitive misdirection. For instance, policymakers describe an inheritance tax (which only taxes the very wealthy) as a “death tax” to make the public think it applies to everyone. Kwong then demonstrates a few fun tricks to teach us how to see through the illusions and deceptions that surround us in everyday life. He finishes his set with some sage words of advice for everyone (magic lovers or not): “Question what seems obvious, and above all, pay attention to your attention.”


TEDBreakthroughs: The talks of TED@Merck KGaA, Darmstadt, Germany

TED and Merck KGaA, Darmstadt, Germany, have partnered to help surface and share brilliant ideas, innovations — and breakthroughs. (Photo: Paul Clarke / TED)

Humanity is defined by its immense body of knowledge. Most times it inches forward, shedding light onto the mysteries of the universe and easing life’s endeavors in small increments. But in some special moments, knowledge and understanding leap forward, when one concentrated mind or one crucial discovery redirects the course of things and changes the space of possibilities.

TED and Merck KGaA, Darmstadt, Germany, have partnered to help surface and share brilliant ideas, innovations — and breakthroughs. At the inaugural TED@Merck KGaA, Darmstadt, Germany event, hosted by TED International Curator Bruno Giussani at Here East in London on November 28, 16 brilliant minds in healthcare, technology, art, psychology and other fields shared stories of human imagination and discovery.

After opening remarks from Belén Garijo, CEO, Healthcare for Merck KGaA, Darmstadt, Germany, the talks of Session 1 kicked off.

Biochemist Bijan Zakeri explains the mechanism behind a molecular superglue that could allows us to assemble new protein shapes. (Photo: Paul Clarke / TED)

A molecular superglue made from flesh-eating bacteria. The bacteria Streptococcus pyogenes — responsible for diseases including strep throat, scarlet fever and necrotizing fasciitis (colloquially, flesh-eating disease) — has long, hair-like appendages made of proteins with a unique property: the ends of these proteins are linked by an incredibly strong chemical bond. “You can boil them, try to cut them with enzymes or throw them in strong acids and bases. Nothing happens to them,” says biochemist Bijan Zakeri. Along with his adviser Mark Howarth, Zakeri figured out a way to engineer these proteins to create what he describes as a molecular superglue. The superglue allows us to assemble new protein shapes, and “you can chemically link the glue components to other organic and inorganic molecules, like medicines, DNA, metals and more, to build new nano-scale objects that address important scientific and medical needs,” Zakeri says.

What if we could print electronics? “We must manufacture devices in a whole new way, with the electronics integrated inside the object, not just bolted in afterwards,” says advanced technologist Dan Walker. He introduces us to his vision of the fast approaching future of technology, which could take two potential paths: “The first is hyper-scale production, producing electrically functional parts along the standard centralized model of manufacturing. Think of how we print newspapers, ink on paper, repeating for thousands of copies. Electronics can be printed in this way, too,” he says. Walker designs inks that conduct electricity and can be used to print functional electronics, like wires. This ink can be used in inkjet printers, the sort that can be found in most offices and homes. But these inkjet printers are still 2D printers — they can print the electronics onto the object, but they can’t print the object itself. “The second way the manufacturing world will go is towards marrying these two techniques of digital printing, inkjet and 3D, and the result will be the ability to create electrically functional objects,” Walker explains, both unique objects bespoke for individual customers and perfect replicas printed off by the thousands.

Strategic marketer Hannah Bürckstümmer explains her work developing organic photovoltaics — and how they might change how we power the world. (Photo: Paul Clarke / TED)

A printable solar cell. Buildings consume about 40 percent of our total energy, which means reducing their energy consumption could help us significantly decrease our CO2 emissions. Solar cells could have a big role to play here, but they’re not always the most aesthetically pleasing solution. Strategic marketer Hannah Bürckstümmer is working on a totally different solar cell technology: organic photovoltaics. Unlike the solar cells you’re used to seeing, these cells are made of compounds that are dissolved in ink and can be printed using simple techniques. The result is a thin film that absorbs the energy of the sun. The solar module looks like a plastic foil and is low weight, flexible and semi-transparent. It can be used in this form or combined with conventional construction materials like glass. “With the printing process, the solar cell can change its shape and design very easily,” Bürckstümmer says, displaying a cell onstage. “This will give the flexibility to architects, to planners and building owners to integrate this electricity-producing technology as they wish.” Plus, it may just help buildings go from energy consumers to energy providers.

A robot that can grab a tomato without crushing it. Robots are excellent at many tasks — but handling delicate items isn’t one of them. Carl Vause, CEO of Soft Robotics, suggests that instead of trying to recreate the human hand, roboticists should instead seek inspiration from other parts of nature. Consider the octopus: it’s very good at manipulating items, wrapping its tentacles around objects and conforming to their shapes. So what if we could get robots to act like an octopus tentacle? That’s exactly what a group of researchers at Harvard did: in 2009, they used a composite material structure, with rubber and paper, to create a robot that can conform to and grasp soft objects. Demoing the robot onstage, Vause shows it can pick up a bath sponge, a rubber duck, a breakfast scone and even a chicken egg. Why is this important? Because until now, industries like agriculture, food manufacturing and even retail have been largely unable to benefit from robots. With a robot that can grasp something as soft as a tomato, we’ll open up whole industries to the benefits of automation.

Departing from rules can be advantageous — and hilarious. In between jokes about therapy self-assessment forms, hair salons and box junctions, human nature explorer James Veitch questions the rules and formalities people are taught to respect throughout life. An avid email and letter writer, Veitch is unafraid and unapologetic in voicing his concerns about anyone whose actions fall in line with protocols but out of line with common sense. To this effect, he questions Jennifer, a customer relations team member at Headmasters Salon, as to why he and his friend Nige received comparable haircuts when they booked appointments with different types of stylists: Nige with a Senior Stylist (who cost 34 Euros) and Veitch with a Master Hair Consultant (who cost 54 Euros). Using percentages and mathematics, and even inquiring into the reasons why Nige received a biscuit and he didn’t, Veitch argues his way into a free haircut. Though Veitch is clearly enjoying himself by questioning protocols, he derives more than amusement from this process — as he shows us how departing from rules and formalities can be advantageous and hilarious, all at once.

If we can’t fight our emotions, why not use them? Emotions are as important in science as they are in any other part of our lives, says materials researcher Ilona Stengel. The combination of emotion and logical reasoning is crucial for facing challenges and exploring new solutions. Especially in the scientific world, feelings are just as necessary as facts and logic for paving the way to breakthroughs, discoveries and cutting-edge innovations. “We shouldn’t be afraid of using our feelings to implement and to catalyze fact-based science,” Stengel says. Stengel insists that having a personal, emotional stake in the work that you do can alter your progress and output in an incredibly positive way, that emotions and logic do not oppose, but complement and reinforce each other. She asks us all — whether we’re in or outside of the science world — to reflect on our work and how it might give us a sense of belonging, dedication and empowerment.

TED International Curator Bruno Giussani, left, speaks with Scott Williams about the important role informal caregivers play in the healthcare system. (Photo: Paul Clarke / TED)

Putting the “care” back in healthcare. Once a cared-for patient and now a caregiver himself, Scott Williams asks us to recognize the role that informal caregivers — those friends and relatives who go the extra mile out of love — play in healthcare systems. Although they don’t administer medication or create treatment plans for patients, informal caregivers are instrumental in helping people return to good health, Williams says. They give up their jobs, move into hospitals with patients, know patients’ medical histories and sometimes make difficult decisions for them. Williams suggests that without informal caregivers, “our health and social systems would crumble, and yet they’re largely going unrecognized.” He invites us to recognize their selfless work — and their essential value to the smooth functioning of healthcare systems.

Tiffany Watt Smith speaks about the fascinating history of how we understand our emotions. (Photo: Paul Clarke / TED)

Yes, emotions have a history. When we look to the past, it’s easy to see that emotions changed — sometimes very dramatically — in response to new cultural expectations and new ideas about gender, ethnicity, age and politics, says Tiffany Watt Smith, a research fellow at the Centre for the History of the Emotions at the Queen Mary University of London. Take nostalgia, which was defined in 1688 as homesickness and seen as being deadly. It last appeared as a cause of death on a death certificate in 1918, for an American soldier fighting in France during WWI. Today, it means something quite different — a longing for a lost time — and it’s much less serious. This change was driven by a shift in values, says Watt Smith. “True emotional intelligence requires we understand the social, political, cultural forces that have shaped what we’ve come to believe about our emotions and understand how they might still be changing now.”

Dispelling myths about the future of work. “Could machines replace humans?” was a question once pondered by screenwriters and programmers. Today, it’s on the minds of anybody with a job to lose. Daniel Susskind, a fellow in economics at Oxford University, kicked off Session 2 by tackling three misconceptions we have about our automated future. First: the Terminator myth, which says machines will replace people at work. While that might sometimes happen, Susskind says that machines will also complement us and make us more productive. Next, the intelligence myth, which says some tasks can’t be automated because machines don’t possess the human-like reasoning to do them. Susskind dispels this by explaining how advances in processing power, data storage and algorithms have given computers the ability to handle complex tasks — like diagnosing diseases and driving cars. And finally: the superiority myth, which says technological progress will create new tasks that humans are best equipped to do. That’s simply not true, Susskind says, since machines are capable of doing different kinds of activities. “The threat of technological unemployment is real,” he declares, “Yet it is a good problem to have.” For much of our history, the biggest problem has been ensuring enough material prosperity for everyone; global GDP lingered at about a few hundred dollars per person for centuries. Now it is $10,150, and its growth shows no signs of stopping. Work has been the traditional way in which we’ve distributed wealth, so how should we do it in a world when there will be less — or even no — work? That’s the question we really need to answer.

A happy company is a healthy company, says transformation manager Lena Bieber. She suggests that we factor in employee happiness when we think about — and invest in — companies.  (Photo: Paul Clarke / TED)

Investing in the future of happiness. Can financial parameters like return on equity, cash flow or relative market share really tell us if a company is fundamentally healthy and predict its future performance? Transformation manager Lena Bieber thinks we should add one more indicator: happiness. “I would like to see the level of happiness in a company become a public indicator — literally displayed next to their share price on their homepages,” she says. With the level of happiness so prominent, people could feel more secure in the investments they’re making, knowing that employees of certain companies are in good spirits. But how does one measure something so subjective? Bieber likes the Happy Planet Index (a calculation created by TED speaker Nic Marks), which uses four variables to measure national well-being — she suggests that it can be translated for the workplace to include factors such as average longevity on the job and perceived fairness of opportunities. Bieber envisions a future where we invest not just with our minds and wallets, but with hearts as well.

Seeing intellectual disability through the eyes of a mom. When Emilie Weight’s son Mike was diagnosed with fragile X syndrome, an intellectual disability, it changed how she approached life. Mike’s differences compelled her to question her inner self and her role in the world, leading her to three essential tools that she now benefits from. Mindfulness helps her focus on the positive daily events that we often overlook. Mike also reminds Emilie of the importance of time management to use the time that she has instead of chasing it. Lastly, he’s taught her the benefit of emotional intelligence through adapting to the emotions of others. Emilie believes in harnessing the powers of people with intellectual disabilities: “Intellectually disabled people can bring added value to our society,” she says, “Being free of the mental barriers that are responsible for segregation and isolation, they are natural-born mediators.”

Christian Wickert suggests three ways that we can tap into the power of fiction — and how it could benefit our professional lives. (Photo: Paul Clarke / TED)

Can fiction make you better at your job? Forget self-help books; reading fiction might be the ticket to advancing your career. Take it from Christian Wickert, an engineer focusing on strategy and policy, who took a creative writing course — a course, he believes, that sharpened his perception, helped him understand other people’s motivations, and ultimately made him better at his job. Wickert explores three ways fiction can improve your business skills. First, he says, fiction helps you identify characters, their likes, dislikes, habits and traits. In business, this ability to identify characters can give you tremendous insights into a person’s behavior, telling you when someone is playing a role versus being authentic. Second, fiction reminds us that words have power. For example, “sorry” is a very powerful word, and when used appropriately it can transform a situation. Finally, fiction teaches you to look for a point of view — which in business is the key to good negotiation. So the next time you have a big meeting coming up, Wickert concludes, prepare by writing — and let the fiction flow.

How math can help us answer questions about our health. Mathematician Irina Kareva translates biology into mathematics, and vice versa. As a mathematical modeler, she doesn’t think about what things are but instead about what things do — about relationships between individuals, whether they’re cells, animals or people. Take an example: What do foxes and immune cells have in common? They’re both predators, except foxes feed on rabbits and immune cells feed on invaders, such as cancer cells. But from a mathematical point of view, the same system of predator-prey equations will describe both interactions between foxes and rabbits and cancer and immune cells. Understanding the dynamics between predator and prey — and the ecosystem they both inhabit — from a mathematical point of view could lead to new insights, specifically in the development of drugs that target tumors. “The power and beauty of mathematical modeling lies in the fact that it makes you formalize in a very rigorous way what we think we know,” Kareva says. “It can help guide us as to where we should keep looking, or where there might be a dead end.” It all comes down to asking the right question, and translating it to the right equation, and back.

End-to-end tracking of donated medicine. Neglected tropical diseases, or NTDs, are a diverse group of diseases that prevail in tropical and subtropical conditions. Globally, they affect more than one billion people. A coalition of pharmaceutical companies, governments, health organizations, charities and other partners, called Uniting to Combat Neglected Tropical Diseases, is committed to fighting NTDs using donated medicines — but shipping them to their destination poses complex problems. “How do we keep an overview of our shipments and make sure the tablets actually arrive where they need to go?” asks Christian Schröter, head of Pharma Business Integration at Merck KGaA, Darmstadt, Germany. Currently, the coalition is piloting a shipping tracker for their deliveries — similar to the tracking you receive for a package you order online — that tracks shipments to the first warehouse in recipient countries. This year, they took it a step further and tracked the medicines all the way to the point of treatment. “Still, many stakeholders would need to join in to achieve end-to-end tracking,” he says. “We would not change the amount of tablets donated, but we would change the amount of tablets arriving where they need to go, at the point of treatment, helping patients.”

Why we should pay doctors to keep people healthy. Most healthcare systems reimburse doctors based on the kind and number of treatments they perform, says business developer Matthias Müllenbeck. That’s why when Müllenbeck went to the dentist with a throbbing toothache, his doctor offered him a $10,000 treatment (which would involve removing his damaged tooth and screwing an artificial one into his jaw) instead of a less expensive, less invasive, non-surgical option. We’re incentivizing the wrong thing, Müllenbeck believes. Instead of fee-for-service health care, he proposes that we reimburse doctors and hospitals for the number of days that a single individual is kept healthy, and stop paying them when that individual gets sick. This radical idea could save us from unnecessary costs and risky procedures — and end up keeping people healthier.

The music of TED@Merck KGaA, Darmstadt, Germany. During the conference, music innovator Tim Exile wandered around recording ambient noises and sounds: a robot decompressing, the murmur of the audience, a hand fumbling with tape, a glass of sparkling water being poured. Onstage, he closes out the day by threading together these sounds to create an entirely new — and strangely absorbing — piece of electronic music.


TEDWhy not? Pushing and prodding the possible, at TED@IBM

The stage at TED@IBM bubbles with possibilities … at the SFJAZZ Center, December 6, 2017, San Francisco, California. Photo: Russell Edwards / TED

We know that our world — our data, our lives, our countries — is becoming more and more connected. But what should we do with that? In two sessions of TED@IBM, the answer shaped up to be: Dream as big as you can. Speakers took the stage to pitch their ideas for using connected data and new forms of machine intelligence to make material changes in the way we live our lives — and also challenged us to flip the focus back to ourselves, to think about what we still need to learn about being human in order to make better tech. From the stage of TED@IBM’s longtime home at the SFJAZZ Center, executive Ann Rubin welcomes us and introduces our two onstage hosts, TED’s own Bryn Freedman and her cohost Michaela Stribling, a longtime IBMer who’s been a great champion of new ideas. And with that, we begin.

Giving plastic a new sense of value. A garbage truck full of plastic enters the ocean every minute of every hour of every day. Plastic is now in the food chain (and your bloodstream), and scientists think it’s contributing to the fastest rate of extinction ever. But we shouldn’t be thinking about cleaning up all that ocean plastic, suggests plastics alchemist David Katz — we should be working to stop plastic from getting there in the first place. And the place to start is in extremely poor countries — the origin of 80 percent of plastic pollution — where recycling just isn’t a priority. Katz has created The Plastic Bank, a worldwide chain of stores where everything from school tuition and medical insurance to Wi-Fi and high-efficiency stoves is available to be purchased in exchange for plastic garbage. Once collected, the plastic is sorted, shredded and sold to brands like Marks & Spencer and Henkel, who have commissioned the use of “Social Plastic” in their products. “Buy shampoo or detergent that has Social Plastic packaging, and you’re indirectly contributing to the extraction of plastic from ocean-bound waterways and alleviating poverty at the same time,” Katz says. It’s a step towards closing the loop on the circular economy, it’s completely replicable, and it’s gamifying recycling. As Katz puts it: “Be a part of the solution, not the pollution.”

How can we stop plastic from piling up in the oceans? David Katz has one way: He runs an international chain of stores that trade plastic recyclables for money. Photo: Russell Edwards / TED

How do we help teens in distress? AI is great at looking for patterns. Could we leverage that skill, asks 14-year-old cognitive developer Tanmay Bakshi, to spot behavior issues lurking under the surface? “Humans aren’t very good at detecting patterns like changes in someone’s sleep, exercise levels, and public interaction,” he says. If some of those patterns in suicidal teens go unrecognized and unnoticed by the human eye, he suggests, we could let technology help us out. For the last 3 years, Bakshi and his team have been working with artificial neural networks (ANNs, for short) to develop an app that can pick up on irregularities in a person’s online behavior and build an early warning system for at-risk teens. With this technology and information access, they foresee a future where a diagnosis can be made and all-encompassing help is available right at a teen’s fingertips.

An IBMer reads Tanmay Bakshi’s bio — to confirm that, yes, he’s just 14. At TED@IBM, Bakshi made his pitch for a social listening tool that could help identify teens who might be heading for a crisis. Photo: Russell Edwards / TED

A better way to manage refugee crises. When the Syrian Civil War broke out, Rana Novack, the daughter of Syrian immigrants, watched her extended family face an impossible choice: stay home and risk their lives, leave for a refugee camp, or apply for a visa, which could take years and has no guarantee. She quickly realized there was no clear plan to handle a refugee crisis of this magnitude (it’s estimated that there are over 5 million Syrian refugees worldwide). “When it comes to refugees, we’re improvising,” she says. Frustrated with her inability to help her family, Novack eventually struck on the idea of applying predictive technology to refugee crises. “I had a vision that if we could predict it, we could enable government agencies and humanitarian aid organizations with the right information ahead of time so it wasn’t such a reactive process,” she says. Novack and her team built a prototype that will be deployed this year with a refugee organization in Denmark and next year with an organization to help prevent human trafficking. “We have to make sure that the people who want to do the right thing, who want to help, have the tools and the information they need to succeed,” she says, “and those who don’t act can no longer hide behind the excuse they didn’t know it was coming.”

After her talk onstage at TED@IBM, Rana Novack continues the conversation on how to use data to help refugees.  Photo: Russell Edwards / TED

What is information? It seems like a simple question, maybe almost too simple to ask. But Phil Tetlow is here to suggest that answering this question might be the key to understanding the universe itself. In an engaging talk, he walks the audience through the eight steps of understanding exactly what information is. It starts by getting to grips with the sheer complexity of the universe. Our minds use particular tools to organize all this raw data into relevant information, tools like pattern-matching and simplifying. Our need to organize and connect things, in turn, leads us to create networks. Tetlow offers a short course in network theory, and shows us how, over and over, vast amounts of information tend to connect to one another through a relatively small set of central hubs. We’re familiar with this property: think of airline route maps or even those nifty maps of the internet that show how vast amounts of information end up flowing through a few large sites, mainly Google and Facebook. Call it the 80/20 rule, where 80% of the interesting stuff arrives via 20% of the network. Nature, it turns out, forms the same kind of 80/20 network patterns all the time — in plant evolution, in chemical networks, in the way a tree branches out from a central trunk. And that’s why, Tetlow suggests, understanding the nature of information, and how it networks together, might give us a clue as to the nature of life, the universe, and why we’re even here at all.

Want to know what information is, exactly? Phil Tetlow does too — because understanding what information is, he suggests, might just help us understand why we exist at all. He speaks at TED@IBM. Photo: Russell Edwards / TED

Curiosity + passion = daring innovation. While driving to work in Johannesburg, South Africa, Tapiwa Chiwewe noticed a large cloud of air pollution he hadn’t seen before. While he’s not a pollution expert, he was curious — so he did some research, and discovered that the World Health Organization reported that nearly 14 percent of all deaths worldwide in 2012 were attributable to household and ambient air pollution, mostly in low- and middle-income countries. What could he do with his new knowledge? He’s not a pollution expert — but he is a computer engineer. So he paired up with colleagues from South Africa and China to create an air quality decision support system that lives in the cloud to uncover spatiotemporal trends of air pollution and create a new machine-learning technology to predict future levels of pollution. The tool gives city planners an improved understanding of how to plan infrastructure. His story shows how curiosity and concern for air pollution can lead to collaboration and creative innovation. “Ask yourself this: Why not?” Chiwewe says. “Why not just go ahead and tackle the problem head-on, as best as you can, in your own way?”

Tapiwa Chiwewe helped invent a system that tracks air pollution — blending his expertise as a computer engineer with a field he was not an expert in, air quality monitoring. He speaks at TED@IBM about using one’s particular set of skills to affect issues that matter. Photo: Russell Edwards / TED

What if AI was one of us? Well, it is. If you’re human, you’re biased. Sometimes that bias is explicit, other times it’s unconscious, says documentarian Robin Hauser. Bias can be a good thing — it informs and protects from potential danger. But this ingrained survival technique often leads to more harmful than helpful ends. The same goes for our technology, specifically artificial intelligence. It may sound obvious, but these superhuman algorithms are built by, well, humans. AI is not an objective, all-seeing solution; AI is already biased, just like the humans who built it. Thus, their biases — both implicit and completely obvious — influence what data an AI sees, understands and puts out into the world. Hauser walks through well-recorded moments in our recent history where the inherent, implicit bias of AI revealed the worst of society and the humans in it. Remember Tay? All jokes aside, we need to have a conversation about how AI should be governed and ask who is responsible for overseeing the ethical standards of these supercomputers. “We need to figure this out now,” she says. “Because once skewed data gets into deep learning machines, it’s very difficult to take it out.”

A mesmerizing journey into the world of plankton. “Hold your breath,” says inventor Thomas Zimmerman: “This is the world without plankton.” These tiny organisms produce two-thirds of our oxygen, but rising sea surface temperatures caused by climate change are threatening their very existence. This in turn endangers the fish that eat them and the roughly one billion people around the world that depend on those fish for animal protein. “Our carbon footprint is crushing the very creatures that sustain us,” says his thought partner, engineer Simone Bianco. “Why aren’t we doing something about it?” Their theory is that plankton are tiny and it’s really hard to care about something that you can’t see. So, the pair developed a microscope that allows us to enter the world of plankton and appreciate their tremendous diversity. “Yes, our world is based on fossil fuels, but we can adjust our society to run on renewable energy from the sun to create a more sustainable and secure future,” says Zimmerman. “That’s good for the little creatures here, the plankton, and that’s good for us.”

Thomas Zimmerman (in hat) and Simone Bianco share their project to make plankton more visible — and thus easier to care about and protect. Photo: Russell Edwards / TED

A poet’s call to protect our planet. “How can something this big be invisible?” asks IN-Q. “The ozone is everywhere, and yet it isn’t visible. Maybe if we saw it, we would see it’s not invincible, and have to take responsibility as individuals.” The world-renowned poet closed out the first session of TED@IBM with his original spoken-word poem “Movie Stars,” which asks us to reckon with climate change and our role in it. With feeling and urgency, IN-Q chronicles the havoc we’ve wreaked on our once-wild earth, from “all the species on the planet that are dying” to “the atmosphere we’ve been frying.” He criticizes capitalism that uses “nature as its example and excuse for competition,” the politicians who allow it, and the citizens too cynical to believe in facts. He finishes the poem with a call to action to anyone listening to take ownership of our home turf, our oceans, our forests, our mountains, our skies. “One little dot is all that we’ve got,” says IN-Q. “We just forgot that none of it’s ours; we just forgot that all of it’s ours.”

With guitar, drums and (expert) whistling, The Ferocious Few open Session 2 with a rocking, stripped-down performance of “Crying Shame.” The band’s musical journey from the streets of San Francisco to the big cities of the United States falls within this year’s TED@IBM theme, “Why not?” — encouraging musicians and others alike to question boundaries, explore limits and carry on.

Why the tech world needs more humanities majors. A few years ago, Eric Berridge’s software consultancy was in crisis, struggling to deal with a technical challenge facing his biggest client. When none of his engineers could solve the problem, they went to drown their sorrows and talk to their favorite bartender, Jeff — who said, “Let me talk to these guys.” To everyone’s surprise and delight, Jeff’s meeting the next day shifted the conversation completely, salvaged the company’s relationship with its client, and forever changed how Berridge thinks about who should work in the tech sector. At TED@IBM, he explained why tech companies should look beyond STEM graduates for new hires, and how people with backgrounds in the arts and humanities can bring creativity and unique insight into a technical workplace. Today, Berridge’s consulting company boasts 1,000 employees, only 100 of whom have degrees in computer programming. And his CTO? He’s a former English major/bike messenger.

Eric Berridge put his favorite bartender in a room with his biggest client — and walked out convinced that the tech sector needs to make room for humanities majors and people with multiple kinds of skills, not just more and more engineers. Photo: Russell Edwards / TED

The surprising and empowering truth about your emotions. “It may feel to you like your emotions are hardwired,  that they just happen to you, but they don’t. You might believe your brain is pre-wired with emotion circuits, but it’s not,” says Lisa Feldman Barrett, a psychology professor at Northeastern University who has studied emotions for 25 years. So what are emotions? They’re guesses based on past experiences that our brain generates in the moment to help us make sense of the world quickly, Barrett says. “Emotions that seem to happen to you are actually made by you,” she adds. For example, many of us hear our morning alarm go off, and as we wake up, we find ourselves enveloped by dread. We start thinking about all of our to-dos — the emails and calls to return, the drop-offs, the meals to cook. Our mind races, and we tell ourselves “I feel anxious” or “I feel overwhelmed.” This mind-racing is prediction, says Barrett. “Your brain is searching to find an explanation for those sensations in your body that you’re experiencing as wretchedness. But those sensations may have a physical cause.” In other words — you just woke up, maybe you’re just hungry. The next time you feel distressed, ask yourself: “Could this have a purely physical cause? Am I just tired, hungry, hot or dehydrated?” And we should be empowered by these findings, declares Barrett. “The actions and experiences that you make today become your brain’s predictions for tomorrow.”

In a mind-shifting talk, Lisa Feldman Barrett shares her research on what emotions are … and it’s probably not what you think. Photo: Russell Edwards / TED

Emotionally authentic relationships between AI and humans. “Imagine an AI that can know or predict what you want or need based on a sliver of information, the tone of your voice, or a particular phrase,” says IBM distinguished designer Adam Cutler. “Like when you were growing up and you’d ask your mom to make you a grilled cheese just the way you like it, and she knew exactly what you meant.” Cutler is working to create a bot that would be capable of participating in this kind of exchange with a person. More specifically, he is focusing on how to form an inside joke between machine and mortal. How? “Interpreting human intent through natural language understanding and pairing it with tone and semantic analysis in real time,” he says. Cutler contends that we humans already form relationships with machines — we name our cars, we refer to our laptops as being “cranky” or “difficult” — so we should do this with intention. Let’s design AI that responds and serves us in ways that are truly helpful and meaningful.

Adam Cutler talks about the first time he encountered an AI-enabled robot — and what it made him realize about his and our relationship to AI. Photo: Russell Edwards / TED

Can art help make AI more human? “We’re trying to create technology that you’ll want to interact with in the far future,” says artist Raphael Arar. “We’re taking a moonshot that we’ll want to be interacting with computers in deeply emotional ways.” In order for that future of AI to be a reality, Arar believes that technology will have to become a lot more human, and that art can help by translating the complexity of what it means to be human to machines. As a researcher and designer with IBM Research, Arar has designed artworks that help AI explore nostalgia, conversations and human intuition. “Our lives revolve around our devices, smart appliances, and more, and I don’t think this will let up anytime soon,” he says, “So I’m trying to embed more ‘humanness’ from the start, and I have a hunch that bringing art into an AI research process is a way to do just that.”

How can we make AI more human-friendly? Raphael Arar suggests we start with making art. Photo: Russell Edwards / TED

Where the body meets the mind. For millennia, philosophers have pondered the question of whether the mind and body exist as a duality or as part of a continuum, and it’s never been more practically relevant than it is today, as we learn more about the way the two connect. What can science teach us about this problem? Natalie Gunn studies both Alzheimer’s and colorectal cancer, and she wants to apply modern medicine and analytics to the mind-body problem. For her work on Alzheimer’s, she’s developing a blood test to screen for the disease, hoping to replace expensive PET scans and painful lumbar punctures. Her research on cancer is where the mind-body connection gets interesting: Does our mindset have an impact on cancer? There’s little conclusive evidence either way, Gunn says, but it’s time we took the question seriously, and put the wealth of analytical tools we have at our disposal to the test. “We need to investigate how a disease of the body could be impacted by our mind, particularly for a disease like cancer that is so steeped in our psyche,” Gunn says. “When we can do this, the philosophical question of where the body ends and the mind begins enters into the realm of scientific discovery rather than science fiction.”

Researcher Natalie Gunn wants to suggest a science-based lens to look at the mind-body problem. Photo: Russell Edwards / TED

Is Parkinson’s an electrical problem? Brain researcher Eleftheria Pissadaki studies Parkinson’s, but instead of focusing on the biological aspects of the disease, like genetics and dopamine depletion, she’s looking at the problem in terms of energy. Pissadaki and her team have created mathematical models of dopamine neurons, the neurons that selectively die in Parkinson’s, and they’ve found that the bigger a neuron is, the more vulnerable it becomes … simply because it needs a lot of energy. What can we do with this information? Pissadaki suggests we might someday be able to neuroprotect our brain cells by “finding the fuse box for each neuron” and figuring out how much energy it needs. Then we might be able to develop medicine tailored for people’s brain energy profiles, or drugs that turn neurons off whenever they’re getting tired but before they die. “It’s an amazingly complex problem,” Pissadaki says, “but one that is totally worth pursuing.”

Eleftheria Pissadaki is imagining new ways to think about and treat diseases like Parkinson’s, suggesting research directions that might create new hope. Photo: Russell Edwards / TED

How to build a smarter brain. Just as we can reshape our bodies and build stronger muscles with exercise, Bruno Michel thinks we can train our way to better, faster brains — brains smart enough to compete with sophisticated AI. At TED@IBM, the brain fitness advocate discussed various strategies for improving your noggin. For instance, to think in a more structured way, try studying Latin, math or music. For a boost to your general intelligence, try yoga, read, make new friends, and do new things. Or, try pursuing a specific task with transferable skills as Michel has done for 30 years. He closed his talk with the practice he credits with significantly improving both the speed of his thinking and his reaction times — tap dancing!


TEDGet ready for TED Talks India: Nayi Soch, premiering Dec. 10 on Star Plus

This billboard is showing up in streets around India, and it’s made out of pollution fumes that have been collected and made into ink — ink that’s, in turn, made into an image of TED Talks India: Nayi Soch host Shah Rukh Khan. Tune in on Sunday night, Dec. 10, at 7pm on Star Plus to see what it’s all about.

TED is a global organization with a broad global audience. With our TED Translators program working in more than 100 languages, TEDx events happening every day around the world and so much more, we work hard to present the latest ideas for everyone, regardless of language, location or platform.

Now we’ve embarked on a journey with one of the largest TV networks in the world — and one of the biggest movie stars in the world — to create a Hindi-language TV series and digital series that’s focused on a country at the peak of innovation and technology: India.

Hosted and curated by Shah Rukh Khan, the TV series TED Talks India: Nayi Soch will premiere in India on Star Plus on December 10.

The name of the show, Nayi Soch, literally means ‘new thinking’ — and this kick-off episode seeks to inspire the nation to embrace and cultivate ideas and curiosity. Watch it and discover a program of speakers from India and the world whose ideas might inspire you to some new thinking of your own! For instance — the image on this billboard above is made from the fumes of your car … a very new and surprising idea!

If you’re in India, tune in at 7pm IST on Sunday night, Dec. 10, to watch the premiere episode on Star Plus and five other channels. Then tune in to Star Plus on the next seven Sundays, at the same time, to hear even more great talks on ideas, grouped into themes that will certainly inspire conversations. You can also explore the show on the HotStar app.

On TED.com/india and for TED mobile app users in India, each episode will be conveniently turned into five to seven individual TED Talks, one talk for each speaker on the program. You can watch and share them on their own, or download them as playlists to watch one after another. The talks are given in Hindi, with professional subtitles in Hindi and in English. Almost every talk will feature a short Q&A between the speaker and the host, Shah Rukh Khan, that dives deeper into the ideas shared onstage.

Want to learn more about TED Talks? Check out this playlist that SRK curated just for you.


Don MartiAre bug futures just high-tech piecework?

Are bug futures just high-tech piecework, or worse, some kind of "gig economy" racket?

Just to catch up, bug futures, an experimental kind of agreement being developed by the Bugmark project, are futures contracts based on the status of bugs in a bug tracker.

For developers: visit Bugmark to find an open issue that matches your skills and interests. Buy a futures contract connected to that issue that will pay you when the issue is fixed. Work on the issue, in the open—then decide if you want to hold your contract until maturity, or sell it at a profit. You can also report an issue and pay to reward others to fix it.

For users: Create a new issue on the project bug tracker, or select an existing one. Buy a futures contract on that issue that will cost you a known amount when the issue is fixed, or pay out to compensate you if the issue goes unfixed. Reduce your exposure to software risks by directly signaling to the project participants which issues are important to you. In short: invest in futures on an open source market.

Bug futures also open up the possibility of incentivizing other kinds of work, such as clarifying and translating bug reports, triaging bugs, writing failing tests, or doing code reviews—and especially arbitrage of bugs from project to project.

Bug futures are different from open source bounty systems, which have been repeatedly tried but have so far failed to take off. The big problem with conventional open source bounty systems is that, as far as I can tell, they fail to incentivize cooperative work, and in a lot of situations might incentivize un-cooperative behavior. If I find a bug in a web application, and offer a bounty to fix it, the fix might require JavaScript and CSS work. A developer who fixes the JavaScript and gets stuck on the CSS might choose not to share partial work in order to contend for the entire bounty. Likewise, the developer who fixes the CSS part of the bug might get stuck on the JavaScript. Because of how bounties are structured, if the two wanted to split the bounty they would need to find, trust, and coordinate with each other. Meanwhile, if the bug was the subject of a futures contract, the JavaScript developer could write up a good commit message explaining how their partial work made progress toward a fix, and offer to sell their side of the contract. A CSS developer could take on the rest of the work by buying out that position.
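
To make those mechanics concrete, here is a minimal Python sketch of how such a two-sided contract might be modeled. This is an illustration only, not Bugmark's actual data model; the names (BugFuture, settle, the stakes and holders) are invented for the example.

from dataclasses import dataclass

@dataclass
class BugFuture:
    """Toy two-sided contract on a single tracker issue (illustrative only)."""
    issue_url: str
    fixed_stake: float     # escrowed by whoever holds the "fixed" side
    unfixed_stake: float   # escrowed by whoever holds the "unfixed" side
    fixed_holder: str      # usually a developer betting the issue gets fixed
    unfixed_holder: str    # usually a user hedging against it staying broken

    def settle(self, fixed_at_maturity: bool) -> dict:
        """At maturity the whole escrowed pot goes to the winning side."""
        pot = self.fixed_stake + self.unfixed_stake
        winner = self.fixed_holder if fixed_at_maturity else self.unfixed_holder
        return {winner: pot}

# A user stakes 100 on "unfixed" (their maximum cost if the bug does get fixed);
# a JavaScript developer stakes 20 on "fixed".
contract = BugFuture("https://tracker.example/issue/42",
                     fixed_stake=20.0, unfixed_stake=100.0,
                     fixed_holder="js_dev", unfixed_holder="user")

# Partial work: the JS developer documents their progress and sells the
# "fixed" side to a CSS developer at a price reflecting the work done so far.
contract.fixed_holder = "css_dev"

print(contract.settle(fixed_at_maturity=True))   # {'css_dev': 120.0}

The point of the sketch is the last two steps: partial work changes hands by selling a position, not by trying to split a bounty.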

Futures trading and risk shifts

But will bug futures tend to shift the risks of software development away from the "owners" of software (the owners don't have to be copyright holders, they could be those who benefit from network effects) and toward the workers who develop, maintain, and support it?

I don't know, but I think that the difference between bug trackers and piecework is where you put the brains of the operation. In piecework and the gig economy, the matching of workers to tasks is done by management, either manually or in software. Workers can set the rate at which they work in conventional piecework, or accept and reject tasks offered to them in the gig economy, but only management can have a view of all available tasks.

Bug futures operate within a commons-based peer production environment, though. In an ideal peer production scene, all participants can see all available tasks, and select the most rewarding tasks. Somewhere in the economics literature there is probably a model of task selection in open source development, and if I knew where to find it I could put an impressive LaTeX equation right around here. Of course, open source still has all kinds of barriers that make matching of workers to tasks less than ideal, but it's a good goal to keep in mind.

If you do bug futures right, they interfere as little as possible with the peer production advantage—that it enables workers to match themselves to tasks. And the futures market adds the ability for people who are knowledgeable about the likelihood of completion of a task, usually those who can do the task, to profit from that knowledge.

Rather than paying a worker directly for performing a task, bug futures are about trading on the outcomes of tasks. When participating, you're not trading labor for money, you're trading on information you hold about the likelihood of successful completion of a task. As in conventional financial markets, information must be present on the edges, with the individual participants, in order for them to participate. If a feature is worth $1000 to me, and someone knows how to fix it in five minutes, bug futures could facilitate a trade that's profitable to both ends. If the market design is done right, then most of that value gets captured by the endpoints—the user and developer who know when to make the right trade.

The transaction costs of trading in information tend to be lower than the transaction costs of trading in labor, for a variety of reasons which you will probably believe in to different extents depending on your politics. What if we could replace some direct trading in labor with trading in the outcomes of that labor by trading information? Lower transaction costs, more gains from trade, more value created.

Bug futures series so far

Planet DebianCraig Small: WordPress 4.9.1

After a much longer than expected break due to moving and the resulting lack of Internet, plus WordPress releasing a package with a non-free file, the Debian package for WordPress 4.9.1 has been uploaded!

WordPress 4.9 has a number of improvements, especially around the customiser components, which looked pretty slick. The editor for the customiser now has a series of linters that will warn if you write something bad, which is a very good thing! Unfortunately the Javascript linter is jshint, which uses a non-free license that the jshint team is attempting to fix. I have also reported the problem to WordPress upstream so they can have a look at it.

While this was all going on, there were 4 security issues found in WordPress which resulted in the 4.9.1 release.

Finally I got the time to look into the jshint problem, and the Internet access to actually download the upstream files and upload the Debian packages. So version 4.9.1-1 of the packages has now been uploaded and should be in the mirrors soon. I’ll start looking at the 4.9.1 patches to see what is relevant for Stretch and Jessie.

,

TEDDEADLINE EXTENDED: Audition for TED2018!


At last year’s TEDNYC Idea Search, artist Olalekan Jeyifous showed off his hyper-detailed and gloriously complex imaginary cities. See more of his work in this TED Gallery. Photo: Anyssa Samari / TED

Do you have an idea worth spreading? Do you want to speak on the TED2018 stage in Vancouver in April?

To find more new voices, TED is hosting an Idea Search at our office theater in New York City on January 24, 2018. Speakers who audition at this event might be chosen for the TED2018 stage or to become part of our digital archive on TED.com.

You’re invited to pitch your amazing idea to try out on the Idea Search stage in January. The theme of TED2018 is The Age of Amazement, so we are looking for ideas that connect to that theme — from all angles. Are you working on cutting-edge technology that the world needs to hear about? Are you making waves with your art or research? Are you a scientist with a new discovery or an inventor with a new vision? A performer with something spectacular to share? An incredible storyteller? Please apply to audition at our Idea Search.

Important dates:

The deadline to apply to the Idea Search is Friday, December 8, 2017, at noon Eastern.

The Idea Search event happens in New York City from the morning of January 23 through the morning of January 25, 2018. Rehearsals will take place on January 23, and the event happens in the evening of January 24.

TED2018 happens April 10–14, 2018, in Vancouver.

Don’t live in the New York City area? Don’t let that stop you from applying — we may be able to help get you here.

Here’s how to apply!

Sit down and think about what kind of talk you’d like to give, then script a one-minute preview of the talk.

Film yourself delivering the one-minute preview (here are some insider tips for making a great audition video).

Upload the film to Vimeo or YouTube, titled: “[Your name] TED2018 audition video: [name of your talk]” — so, for example: “Jane Smith TED2018 audition video: Why you should pay attention to roadside wildflowers”

Then complete the entry form, paste your URL in, and hit Submit!

Curious to learn more?

Read about a few past Idea Search events: TEDNYC auditions in 2017, in 2014 and in 2013.

Watch talks from past Idea Search events that went viral on our digital archive on TED.com:

Christopher Emdin: Teach teachers how to create magic (more than 2 million views)
Sally Kohn: Let’s try emotional correctness (more than 2 million views)
Lux Narayan: What I learned from 2,000 obituaries (currently at 1.4 million views!)
Lara Setrakian: 3 ways to fix a broken news industry (just shy of a million views)
Todd Scott: An intergalactic guide to using a defibrillator (also juuust south of a million)

And here are just a few speakers who were discovered during past talent searches:

Ashton Applewhite: Let’s end ageism (1m views)
OluTimehin Adegbeye: Who belongs in a city? (a huge hit at TEDGlobal 2017)
Richard Turere: My invention that made peace with the lions (2m views)
Zak Ebrahim: I am the son of a terrorist. Here’s how I chose peace (4.7m views and a TED Book)


CryptogramFriday Squid Blogging: Squid Embryos Coming to Life

Beautiful video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Vulnerabilities in Certificate Pinning

New research found that many banks offer certificate pinning as a security feature, but fail to authenticate the hostname. This leaves the systems open to man-in-the-middle attacks.

From the paper:

Abstract: Certificate verification is a crucial stage in the establishment of a TLS connection. A common security flaw in TLS implementations is the lack of certificate hostname verification but, in general, this is easy to detect. In security-sensitive applications, the usage of certificate pinning is on the rise. This paper shows that certificate pinning can (and often does) hide the lack of proper hostname verification, enabling MITM attacks. Dynamic (black-box) detection of this vulnerability would typically require the tester to own a high security certificate from the same issuer (and often same intermediate CA) as the one used by the app. We present Spinner, a new tool for black-box testing for this vulnerability at scale that does not require purchasing any certificates. By redirecting traffic to websites which use the relevant certificates and then analysing the (encrypted) network traffic we are able to determine whether the hostname check is correctly done, even in the presence of certificate pinning. We use Spinner to analyse 400 security-sensitive Android and iPhone apps. We found that 9 apps had this flaw, including two of the largest banks in the world: Bank of America and HSBC. We also found that TunnelBear, one of the most popular VPN apps was also vulnerable. These apps have a joint user base of tens of millions of users.
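
To illustrate the paper's point in code (this sketch is mine, not from the paper): pinning should be layered on top of normal chain and hostname verification, not used instead of it. A minimal Python sketch, where PINNED_SHA256 is a placeholder fingerprint you would obtain out of band:

import hashlib
import socket
import ssl

# Placeholder: the SHA-256 fingerprint of the server certificate you expect,
# obtained out of band. Purely illustrative.
PINNED_SHA256 = "0000000000000000000000000000000000000000000000000000000000000000"

def fetch_pinned(host: str, port: int = 443) -> None:
    ctx = ssl.create_default_context()   # verifies the chain of trust
    ctx.check_hostname = True            # and the hostname -- the check the
                                         # vulnerable apps effectively skipped
    with socket.create_connection((host, port)) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            der = tls.getpeercert(binary_form=True)
            if hashlib.sha256(der).hexdigest() != PINNED_SHA256:
                raise ssl.SSLError("certificate pin mismatch for " + host)
            # Only now is the connection both verified and pinned.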

News article.

Worse Than FailureError'd: PIck an Object, Any Object

"Who would have guessed Microsoft would have a hard time developing web apps?" writes Sam B.

 

Jerry O. writes, "So, if I eat my phone, I might get acid indigestion? That sounds reasonable."

 

"Got this when I typed into a SwaggerHub session I'd left open overnight and tried to save it," wrote Rupert, "The 'newer' draft was not, in fact, the newer version."

 

Antonio writes, "It's nice to buy software from another planet, especially if the year there is much longer."

 

"Either Meteorologist (http://heat-meteo.sourceforge.net/) is having some trouble with OpenWeatherMap data, or we're having an unusually hot November in Canada," writes Chris H.

 

"This is possibly one case where a Windows crash can result in a REAL crash," writes Ruben.

 


Planet DebianCraig Small: Back Online

I now have Internet back! Which means I can try to get the Debian WordPress packages bashed into shape. Unfortunately they still have the problem with the horrible JSON “no evil” license, which causes so many problems all over the place.

I’m hoping there is a simple way of just removing that component and going from there.

Planet Linux AustraliaOpenSTEM: Happy Holidays, Queensland!

It’s finally holidays in Queensland! Yay! Congratulations to everyone for a wonderful year and lots of hard work! Hope you all enjoy a well-earned rest! Most other states and territories have only a week to go, but the holiday spirit is in the air. Should you be looking for help with resources, rest assured that […]

Krebs on SecurityPhishers Are Upping Their Game. So Should You.

Not long ago, phishing attacks were fairly easy for the average Internet user to spot: Full of grammatical and spelling errors, and linking to phony bank or email logins at unencrypted (http:// vs. https://) Web pages. Increasingly, however, phishers are upping their game, polishing their copy and hosting scam pages over https:// connections — complete with the green lock icon in the browser address bar to make the fake sites appear more legitimate.

A brand new (and live) PayPal phishing page that uses SSL (https://) to appear more legitimate.

According to stats released this week by anti-phishing firm PhishLabs, nearly 25 percent of all phishing sites in the third quarter of this year were hosted on HTTPS domains — almost double the percentage seen in the previous quarter.

“A year ago, less than three percent of phish were hosted on websites using SSL certificates,” wrote Crane Hassold, the company’s threat intelligence manager. “Two years ago, this figure was less than one percent.”

A currently live Facebook phishing page that uses https.

As shown in the examples above (which KrebsOnSecurity found in just a few minutes of searching via phish site reporting service Phishtank.com), the most successful phishing sites tend to include not only their own SSL certificates but also a portion of the phished domain in the fake address.

Why are phishers more aggressively adopting HTTPS Web sites? Traditionally, many phishing pages are hosted on hacked, legitimate Web sites, in which case the attackers can leverage both the site’s good reputation and its SSL certificate.

Yet this, too, is changing, says PhishLabs’ Hassold.

“An analysis of Q3 HTTPS phishing attacks against PayPal and Apple, the two primary targets of these attacks, indicates that nearly three-quarters of HTTPS phishing sites targeting them were hosted on maliciously-registered domains rather than compromised websites, which is substantially higher than the overall global rate,” he wrote. “Based on data from 2016, slightly less than half of all phishing sites were hosted on domains registered by a threat actor.”

Hassold posits that more phishers are moving to HTTPS because it helps increase the likelihood that users will trust that the site is legitimate. After all, your average Internet user has been taught for years to simply “look for the lock icon” in the browser address bar as assurance that a site is safe.

Perhaps this once was useful advice, but if so its reliability has waned over the years. In November, PhishLabs conducted a poll to see how many people actually knew the meaning of the green padlock that is associated with HTTPS websites.

“More than 80% of the respondents believed the green lock indicated that a website was either legitimate and/or safe, neither of which is true,” he wrote.

What the green lock icon indicates is that the communication between your browser and the Web site in question is encrypted; it does little to ensure that you really are communicating with the site you believe you are visiting.

At a higher level, another reason phishers are more broadly adopting HTTPS is because more sites in general are using encryption: According to Let’s Encrypt, 65% of web pages loaded by Firefox in November used HTTPS, compared to 45% at the end of 2016.

Also, phishers no longer need to cough up a nominal fee each time they wish to obtain a new SSL certificate. Indeed, Let’s Encrypt now gives them away for free.

The major Web browser makers all work diligently to index and block known phishing sites, but you can’t count on the browser to save you.

So what can you do to make sure you’re not the next phishing victim?

Don’t take the bait: Most phishing attacks try to convince you that you need to act quickly to avoid some kind of loss, cost or pain, usually by clicking a link and “verifying” your account information, user name, password, etc. at a fake site. Emails that emphasize urgency should always be considered extremely suspect, and under no circumstances should you do anything suggested in the email.

Phishers count on spooking people into acting rashly because they know their scam sites have a finite lifetime; they may be shuttered at any moment. The best approach is to bookmark the sites that store your sensitive information; that way, if you receive an urgent communication that you’re unsure about, you can visit the site in question manually and log in that way. In general, it’s a bad idea to click on links in email.

Links Lie: You’re a sucker if you take links at face value. For example, this might look like a link to Bank of America, but I assure you it is not. To get an idea of where a link goes, hover over it with your mouse and then look in the bottom left corner of the browser window.

Yet, even this information often tells only part of the story, and some links can be trickier to decipher. For instance, many banks like to send links that include ridiculously long URLs which stretch far beyond the browser’s ability to show the entire thing when you hover over the link.

The most important part of a link is the “root” domain. To find that, look for the first slash (/) after the “http://” part, and then work backwards through the link until you reach the second dot; the part immediately to the right is the real domain to which that link will take you.
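
For the programmatically inclined, here is a small Python sketch of that rule of thumb (mine, not from the article). Note that the heuristic misjudges multi-label suffixes such as .co.uk; real code should consult the Public Suffix List instead.

from urllib.parse import urlparse

def root_domain(url):
    """Keep everything to the right of the hostname's second-to-last dot."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    return ".".join(parts[-2:]) if len(parts) >= 2 else host

# The "bankofamerica.com" prefix here is just a subdomain of the real destination.
print(root_domain("http://bankofamerica.com.secure-login.example.net/account"))
# -> example.net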

“From” Fields can be forged: Just because the message says in the “From:” field that it was sent by your bank doesn’t mean that it’s true. This information can be and frequently is forged.

If you want to discover who (or what) sent a message, you’ll need to examine the email’s “headers,” important data included in all email.  The headers contain a lot of information that can be overwhelming for the untrained eye, so they are often hidden by your email client or service provider, each of which may have different methods for letting users view or enable headers.

Describing succinctly how to read email headers with an eye toward thwarting spammers would require a separate tutorial, so I will link to a decent one already written at About.com. Just know that taking the time to learn how to read headers is a useful skill that is well worth the effort.

Keep in mind that phishing can take many forms: Why steal one set of login credentials for a single brand when you can steal them all? Increasingly, attackers are opting for approaches that allow them to install a password-snarfing Trojan that steals all of the sensitive data on victim PCs.

So be careful about clicking links, and don’t open attachments in emails you weren’t expecting, even if they appear to come from someone you know. Send a note back to the sender to verify the contents and that they really meant to send it. This step can be a pain, but I’m a stickler for it; I’ve been known to lecture people who send me press releases and other items as unrequested attachments.

If you didn’t go looking for it, don’t install it: Password stealing malware doesn’t only come via email; quite often, it is distributed as a Facebook video that claims you need a special “codec” to view the embedded content. There are tons of variations of this scam. The point to remember is: If it wasn’t your idea to install something from the get-go, don’t do it.

Lay traps: When you’ve mastered the basics above, consider setting traps for phishers, scammers and unscrupulous marketers. Some email providers — most notably Gmail — make this especially easy.

When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that with a “+” sign just to the left of the “@” sign in your email address. For example, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to Gmail and create a folder called “Example,” along with a new filter that sends any email addressed to that variation of my address to the Example folder.

That way, if anyone other than the company I gave this custom address to starts spamming or phishing it, that may be a clue that example.com shared my address with others (or that it got hacked!). I should note two caveats here. First, although this functionality is part of the email standard, not all email providers will recognize address variations like these. Also, many commercial Web sites freak out if they see anything other than numerals or letters, and may not permit the inclusion of a “+” sign in the email address field.

,

Planet DebianThomas Goirand: Testing OpenStack using tempest: all is packaged, try it yourself

tl;dr: this post explains how the new openstack-tempest-ci-live-booter package configures a machine to PXE boot a Debian Live system running on KVM in order to run functional testing of OpenStack. It may be of interest to you if you want to learn how to PXE boot a KVM virtual machine running Debian Live, even if you aren’t interested in OpenStack.

Moving my CI from one location to another leads to package it fully

After packaging a release of OpenStack, it’s kind of mandatory to functionally test the set of packages. This is done by running the tempest test suite on an already deployed OpenStack installation. I used to do that on real hardware, provided by my employer. But since I’ve lost my job (I’m still looking for a new employer at this time), I also lost access to the hardware they were providing to me.

As a consequence, I searched for a sponsor to provide the hardware to run tempest on. I first sent a mail to the openstack-dev list, asking for such a hardware. Then Rochelle Grober and Stephen Li from Huawei got me in touch with Zachary Smith, the CEO of Packet.net. And packet.net gave me an account on their system. I am amazed how good their service is. They provide baremetal servers around the world (15 data centers), provisioned using an API (meaning, fully automatically). A big thanks to them!

Anyway, even if I had planned for a few weeks to give a big thanks to the above people (they really deserve it!), this isn’t the only goal of this post. It is also here to introduce how to run your own tempest CI on your own machine. Since I have been in the situation where my CI had to move twice, I decided to industrialize it, and fully automate the setup of the CI server. And what does a DD do when writing software? Package it, of course. So I packaged it all, and uploaded it to the archive. Here’s how to use all of this.

General principle

The best way to run an OpenStack tempest CI is to run it on a Debian Live system. Why? Because setting-up a full OpenStack environment takes a lot of time, mostly spent on disk I/O. And on a live system, everything runs on a RAM disk, so installing in this environment is the fastest way one could go. This is what I did when working with Mirantis: I had a real baremetal server, which I was PXE booting on a Debian Live system. However nice, this requires access to 2 servers: one for running the Live system, and one running the dhcp/pxe/tftp server. Also, this means the boot server needs 2 nics, one on the internet, and one for booting the 2nd server that will run the Live system. It was not possible to have such a specific setup at Packet, so I decided to replicate this using KVM, so it would become portable. And since the servers at packet.net are very fast, it isn’t much of an issue anymore to not run on baremetal.

Anyway, let’s dive into setting-up all of this.

Network topology

We’ll assume that one of your interfaces has internet access, let’s say eth0. Since we don’t want to destroy any of your network config, the openstack-tempest-ci-live-booter package will use a dummy network interface (ie: modprobe dummy) and bridge it to the network interface of the KVM virtual machine. That dummy network interface will be configured with 192.168.100.1, and the Debian Live KVM will use 192.168.100.2. This convenient default can be changed, but then you’ll have to pass your specific network configuration to each and every script (just read the beginning of each script to read the parameters).

Configure the host machine

First install the openstack-tempest-ci-live-booter package. This runtime depends on isc-dhcp-server, tftpd-hpa, apache2, qemu-kvm and all that’s needed to run a Debian Live machine, booting it over PXE / iPXE (the package supports both, more on iPXE later). So, let’s do it:

apt-get install openstack-tempest-ci-live-booter

The package, once installed, doesn’t do much. To respect the Debian policy, it can’t touch configuration files of other packages in maintainer scripts. Therefore, you have to manually run:

openstack-tempest-ci-live-booter-config --configure-dummy-nick

Running this script will:

  • configure the kvm-intel module to allow nested virtualization (by unloading the module, adding “options kvm-intel nested=y” to /etc/modprobe.d, and reloading the module)
  • modprobe the dummy kernel module, run “ip link set name tempestnic0 dev dummy0” to create a tempestnic0 dummy interface
  • create a tempestbr bridge, set 192.168.100.1 for the bridge IP, bridge the tempestnic0 and tempesttap
  • configure tftpd-hpa to listen on 192.168.100.1
  • configure isc-dhcp-server to dhcpreply 192.168.100.2 on the tempestbr, so that the KVM machine can boot up with an IP
  • configure apache2 to serve the filesystem.squashfs root filesystem, loaded by the Linux kernel at boot time. Note that you may need to manually start and/or reload apache after this setup though.

Again, you can change the IP addresses if you like. You can also use a real interface if you intend to boot real hardware rather than a KVM machine (in which case, just omit the --configure-dummy-nick, and manually configure your 2nd interface).

Also, openstack-tempest-ci-live-booter provides a /etc/init.d/openstack-tempest-ci-live-booter script which will configure NAT on your server, so that the Debian Live machine has internet access (needed for apt-get operations). Edit the file if you need to change 192.168.100.1/24 to something else. The script will pick up the interface that is connected to the default gateway by itself.

The dhcp server is configured to support both legacy PXE and the new iPXE standard. I had to support iPXE, because that’s what the standard KVM ROM does, and I also wanted to keep legacy support for older baremetal hardware. The way iPXE works is that dhcpd tells the client where to fetch the iPXE script, which itself chains to lpxelinux.0 (instead of the standard pxelinux.0). It’s rather easy to set up once you understand how it works.

Build the live image

Now that the PXE server is configured, it’s now time to build the Debian live image. Simply do this to build the image, and copy its resulting files in the PXE server folder (ie: /var/lib/tempest-live-booter):

mkdir live
cd live
openstack-tempest-ci-build-live-image --debian-mirror-addr http://ftp.nl.debian.org/debian

Since we need to login in that server later on, the script will create an ssh key-pair. If you want your own keys, simply drop the id_rsa and id_rsa.pub files in your current folder before running the script. Then make it so that this key-pair can be later on used by default by the user who will run the tempest script (ie: copy id_rsa and id_rsa.pub in the ~/.ssh folder).

Running the openstack-tempest-ci

What the openstack-tempest-ci script does is (re-)start your KVM virtual machine, ssh into it, upgrade it to sid, install OpenStack, and eventually run the whole tempest suite. There are 2 ways to run it: either install the openstack-tempest-ci package, eventually configure it (in /etc/default/openstack-tempest-ci), and simply run the “openstack-tempest-ci” command. Or, you can skip the installation of the package, and simply run it from source:

git clone http://anonscm.debian.org/git/openstack/debian/openstack-meta-packages.git
cd openstack-meta-packages/src
./openstack-tempest-ci

Indeed, the script is designed to copy all scripts from source inside the Debian Live machine before using these scripts. The reason it’s doing that is because we want to avoid the situation where a modification needs to be uploaded to Debian before being able to test it, and also it was needed to be able to run the openstack-tempest-ci script without installing a package (which would need root access that I don’t have on casulana.debian.org, where running tempest is needed to test official OpenStack Debian images). So, definitively, feel free to hack everything in openstack-meta-packages/src before running the tempest script. Also, openstack-tempest-ci will look for a sources.list file in the current directory, and upload it to the Debian Live system before doing the upgrade/install. This way, it is easy to use the closest mirror.

Planet DebianChris Lamb: Simple media cachebusting with GitHub pages

GitHub Pages makes it really easy to host static websites, including sites with custom domains or even with HTTPS via CloudFlare.

However, one typical annoyance with static site hosting in general is the lack of cachebusting so updating an image or stylesheet does not result in any change in your users' browsers until they perform an explicit refresh.

One easy way to add cachebusting to your Pages-based site is to use GitHub's support for Jekyll-based sites. To start, first we add some scaffolding to use Jekyll:

$ cd "$(git rev-parse --show-toplevel)
$ touch _config.yml
$ mkdir _layouts
$ echo '{{ content }}' > _layouts/default.html
$ echo /_site/ >> .gitignore

Then in each of our HTML files, we prepend the following header:

---
layout: default
---

This can be performed on your index.html file using sed:

$ sed -i '1s;^;---\nlayout: default\n---\n;' index.html

Alternatively, you can run this against all of your HTML files in one go with:

$ find -not -path './[._]*' -type f -name '*.html' -print0 | \
    xargs -0r sed -i '1s;^;---\nlayout: default\n---\n;'

Due to these new headers, we can obviously no longer simply view our site by pointing our web browser directly at the local files. Thus, we now test our site by running:

$ jekyll serve --watch

... and navigate to http://127.0.0.1:4000/.


Finally, we need to append the cachebusting strings itself. For example, if we had the following HTML to include a CSS stylesheet:

<link href="/css/style.css" rel="stylesheet">

... we should replace it with:

<link href="/css/style.css?{{ site.time | date: '%s%N' }}" rel="stylesheet">

This adds the current "build" timestamp to the file, resulting in the following HTML once deployed:

<link href="/css/style.css?1507450135153299034" rel="stylesheet">

Don't forget to apply it to all your other static media, including images and Javascript:

<img src="image.jpg?{{ site.time | date: '%s%N' }}">
<script src="/js/scripts.js?{{ site.time | date: '%s%N' }}')">

To ensure that transitively-linked images are cachebusted, instead of referencing them in the CSS you can specify them directly in the HTML instead:

<header style="background-image: url(/img/bg.jpg?{{ site.time | date: '%s%N' }})">

Planet DebianSteinar H. Gunderson: Thoughts on AlphaZero

The chess world woke up to something of an earthquake two days ago, when DeepMind (a Google subsidiary) announced that they had adapted their AlphaGo engine to play chess with only minimal domain knowledge—and it was already beating Stockfish. (It also plays shogi, but who cares about shogi. :-) ) Granted, the shock wasn't as huge as what the Go community must have felt when the original AlphaGo came in from nowhere and swept with it the undisputed Go throne and a lot of egos in the Go community over the course of a few short months—computers have been better at chess than humans for a long time—but it's still a huge event.

I see people are trying to make sense of what this means for the chess world. I'm not a strong chess player, an AI expert or a top chess programmer, but I do play chess, I've worked in AI (in Google, briefly in the same division as the DeepMind team) and I run what's the strongest chess analysis website online whenever Magnus Carlsen is playing (next game 17:00 UTC tomorrow!), so I thought I should share some musings.

First some background: We've been trying to make computers play chess for almost 70 years now; originally in the hopes that it would lead us to general AI, although we sort of abandoned that eventually. In almost all of that time, we've used the same basic structure; you have an evaluation function that can look at a specific position and say “I think this is good for white”, and then a search that sees what happens with that evaluation function by playing all possible moves and countermoves (“oh wow, no matter what happens black is going to take white's queen, so maybe this wasn't so good after all”). The evaluation function roughly consists of a few hundred hand-crafted features (everything from “the queen is worth nine points and rooks are five” to more complex issues around king safety, pawn structure and piece mobility) which are more or less added together, and the search tries very hard to prune out uninteresting lines so it can go deeper into the more interesting ones. In the end, you're left with a single “principal variation” (PV) consisting of a series of chess moves (presumably the best the engine can find within the allotted time), and the evaluation of the position at the end of the PV is the final evaluation of the position.
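
To make that structure concrete, here is a deliberately tiny sketch (my own illustration, not code from Stockfish or any real engine; evaluate, legal_moves and play are hypothetical callbacks the caller would have to supply) of a negamax search with alpha-beta pruning that returns a score together with a principal variation:

def alphabeta(pos, depth, alpha, beta, evaluate, legal_moves, play):
    # Returns (score, principal variation), scored from the side to move's point of view.
    # Call at the root with alpha = -infinity and beta = +infinity.
    moves = legal_moves(pos)
    if depth == 0 or not moves:
        return evaluate(pos), []
    best_score, best_pv = float("-inf"), []
    for move in moves:
        # Negamax: the child's score is from the opponent's point of view, so negate it.
        score, pv = alphabeta(play(pos, move), depth - 1,
                              -beta, -alpha, evaluate, legal_moves, play)
        score = -score
        if score > best_score:
            best_score, best_pv = score, [move] + pv
        alpha = max(alpha, score)
        if alpha >= beta:
            break  # prune: the opponent already has a better alternative elsewhere
    return best_score, best_pv

A real engine differs in countless ways (iterative deepening, transposition tables, move ordering, quiescence search and so on), but the basic shape described above, evaluate at the leaves and search above them, is the same.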

AlphaZero is different. Instead of a hand-crafted evaluation function, it just throws the raw information about the position (where the pieces are, and a few other tidbits like right-to-castle) into a neural network and gets out something like an expected win percentage. And instead of searching for the best line, it uses Monte Carlo tree search to make sort-of a weighted average of possible outcomes, explored in a stochastic way. The neural network is simply optimized through reinforcement learning under self-play; it starts off playing what's essentially random moves (it's restricted from playing illegal ones—that's one of the very few pieces of domain-specific knowledge), but rapidly gets better as the neural network learns what works or not.

These are not new ideas (in fact, I'm hard pressed to find a single new thing in the paper), and the basic structure has been tried on chess in the past with master-level results, but it hasn't really produced anything approaching the top before now. The idea of numerical optimization through self-play is widely used, though, mostly to tune things like piece-square tables and other evaluation function weights. So I think that it's mainly
through great engineering and tons of computing power, not a radical breakthrough, that DeepMind has managed to make what's now probably the strongest chess entity on the planet. (I say “probably” because it “only” won 64–36 against Stockfish 8, which is about 100 Elo, and that's probably possible to do with a few hardware doublings and/or Stockfish improvements. Granted, it didn't lose a single game, and it's highly likely that AlphaZero's approach has a lot more room for further improvement than classical alpha-beta has.)

So what do I think AlphaZero will change? In the short term: Nothing. The paper contains ten games (presumably cherry-picked wins) of the 100-game match, and while those show beautiful chess that at times makes Stockfish seem cramped and limited, they don't seem to show any radically new chess ideas like AlphaGo did with Go. Nobody knows when or if DeepMind will release more games, although they have released a fair amount of Go games in the past, and also done Go exhibition matches. People are trying to pick out information from its opening choices (for instance, it prefers the infamous Berlin defense as black), which is interesting, but right now, there's just too little data to kill entire lines or openings.

We're also not likely to stop playing chess anytime soon, for the same reason that Magnus Carlsen nearly hitting 3000 Elo in blitz didn't stop me from playing online. AlphaZero hasn't solved chess by any means, and even though checkers has been weakly solved (Chinook provably never loses a game from the opening position, although it won't win every won position), people still play it even at the top level. Most people simply are not at the level where the existence of perfect play matters, nor is exploring its frontiers their primary motivation.

So the primary question is whether top players can use this to improve their game. Now, DeepMind is not in the business of selling software; they're an AI research company, and AlphaZero runs on hardware (TPUs) you can't buy at this moment, and hardly even rent in the cloud. (Granted, you could probably make AlphaZero run efficiently on GPUs, especially the newer ones that start to get custom blocks for accelerating neural networks, although probably slower and with higher power usage.) Thus, it's unlikely that they will be selling or open-sourcing AlphaZero anytime soon. You could imagine top players wanting to go into talks to pay for exclusive access, but if you look at the costs of developing such a thing (just the training time alone has to be significant), it's obvious that they didn't do this in the hope of recouping the development costs. If anything, you would imagine that they'd sell it as a cloud service, but nothing like that has emerged for AlphaGo, where they have a much larger competitive lead, so it seems unlikely.

Could anyone take their paper and reimplement it? The answer is: Maybe. AlphaGo was two years ago, has been backed up with several papers, and we still don't have anything public that's really close. Tencent's AI lab has made their own clone (Fine Art), and then there's DeepZenGo and others, but nothing nearly as strong that you can download or buy at this stage (as far as I know, anyway). Chess engines are typically made by teams of one or two people, and so far, deep learning-based approaches seem to require larger teams and a fair amount of (expensive) computing time, and most chess programmers are not deep learning experts anyway. It's hard to make a living off of selling chess engines even in a small team; one could again assume a for-hire project, but I think not even most of the top players have the money to hire someone for a year or two for a speculative project to build an entirely new kind of engine. A 100-Elo-stronger engine will only help you so much during opening preparation/training anyway; knowing how to work effectively with the computer is much more valuable. After all, it's not like you can use it while playing (unless it's freestyle chess).

The good news is that DeepMind's approach seems to become simpler and simpler over time. The first version of AlphaGo had all sorts of complexities and relied partially on hand-crafted features (although it wasn't very widely publicized), while the latest versions have removed a lot of the fluff. Make no mistake, though; the devil is in the details, and writing a top-class chess engine is a huge undertaking. My hunch is two to three years before you can buy something that beats Stockfish on the same hardware. But I will hedge my bet here; it's hard to make predictions, especially about the future. Even with a world-class neural network in your brain.

Sociological ImagesHow Hate Hangs On

Originally Posted at Discoveries

After the 2016 Presidential election in the United States, Brexit in the UK, and a wave of far-right election bids across Europe, white supremacist organizations are re-emerging in the public sphere and taking advantage of new opportunities to advocate for their vision of society. While these groups have always been quietly organizing in private enclaves and online forums, their renewed public presence has many wondering how they keep drawing members. Recent research in American Sociological Review by Pete Simi, Kathleen Blee, Matthew DeMichele, and Steven Windisch sheds light on this question with a new theory—people who try to leave these groups can get “addicted” to hate, and leaving requires a long period of recovery.

Photo by Dennis Skley, Flickr CC

The authors draw on 89 life history interviews with former members of white supremacist groups. These interviews were long, in-depth discussions of their pasts, lasting between four and eight hours each. After analyzing over 10,000 pages of interview transcripts, the authors found a common theme emerging from the narratives. Membership in a supremacist group took on a “master status”—an identity that was all-encompassing and touched on every part of a member’s life. Because of this deep involvement, many respondents described leaving these groups as a process akin to addiction recovery. They would experience momentary flashbacks of hateful thoughts, and even relapses into hateful behaviors that required therapeutic “self talk” to manage.

We often hear about new members (or infiltrators) of extremist groups getting “in too deep” to where they cannot leave without substantial personal risk. This research helps us understand how getting out might not be enough, because deep group commitments don’t just disappear when people leave.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianJonathan Dowland: Three Minimalism reads

"The Life-Changing Magic of Tidying Up" is a popular (New York Times best-selling) book by lifestyle consultant Marie Kondo about tidying up and decluttering. It's not strictly about minimalism, although her approach is informed by her own preferences, which are minimalist. Like all self-help books, there's some stuff in here that you might find interesting or applicable to your own life, amongst other stuff you might not. Kondo believes, however, that her methods only work if you stick to them utterly.

Next is "Goodbye, Things" by Fumio Sasaki. The end-game for this book really is minimalism, but the book is structured in such a way that readers at any point on a journey to minimalism (or coinciding with minimalism, if that isn't your end-goal) can get something out of it. A large proportion of the middle of the book is given over to a general collection of short, one-page-or-less tips on decluttering, minimising, etc. You can randomly flip through this section a bit like randomly drawing a card from a deck. I started to wonder whether there's a gap in the market for an Oblique Strategies-like minimalism product. The book recommended several blogs for further reading, but they are all written in Japanese.

Finally issue #18 of New Philosopher is the "Stuff" issue and features several articles from modern Philosophers (as well as some pertinent material from classical ones) on the nature of materialism. I've been fascinated by Philosophy from a distance ever since my brother read it as an Undergraduate so I occasionally buy the philosophical equivalent of Pop Science books or magazines, but this was the most accessible for me that I've read to date.

Planet DebianWouter Verhelst: Adding subtitles with FFmpeg

For future reference (to myself, for the most part):

ffmpeg -i foo.webm -i foo.en.vtt -i foo.nl.vtt -map 0:v -map 0:a \
  -map 1:s -map 2:s -metadata:s:a language=eng -metadata:s:s:0   \
  language=eng -metadata:s:s:1 language=nld -c copy -y           \
  foo.subbed.webm

... is one way to create a single .webm file from one .webm input file and multiple .vtt files. A little bit of explanation:

  • The -i arguments pass input files. You can have multiple input files for one output file. They are numbered internally (this is necessary for the -map and -metadata options later), starting from 0.
  • The -map options take a "mapping". With them, you specify which input streams should go where in the output stream. By default, if you have multiple streams of the same type, ffmpeg will only pick one (the "best" one, whatever that is). The mappings we specify are:

    • -map 0:v: this means to take the video stream from the first file (this is the default if you do not specify any mapping at all; but if you do specify a mapping, you need to be complete)
    • -map 0:a: take the audio stream from the first file as well (same as with the video).
    • -map 1:s: take the subtitle stream from the second (i.e., indexed 1) file.
    • -map 2:s: take the subtitle stream from the third (i.e., indexed 2) file.
  • The -metadata options set metadata on the output file. Here, we pass:

    • -metadata:s:a language=eng, to add a 's'tream metadata item on the 'a'udio stream, with name language and content eng. The language metadata in ffmpeg is special, in that it gets automatically translated to the correct way of specifying the language in the target container format.
    • -metadata:s:s:0 language=eng, to add a 's'tream metadata item on the first (indexed 0) 's'ubtitle stream in the output file. This too has the English language set.
    • -metadata:s:s:1 language=nld, to add a 's'tream metadata item on the second (indexed 1) 's'ubtitle stream in the output file. This has Dutch set as the language.
  • The -c copy option tells ffmpeg to not transcode the input video data, but just to rewrite the container. This works because all input files (WebM video plus VTT subtitles) are valid for WebM. If you do not have an input subtitle format that is valid for WebM, you can instead limit the copy modifier to the video and audio only, allowing ffmpeg to transcode the subtitles. This is done by way of -c:v copy -c:a copy (see the example just after this list).
  • Finally, we pass -y to specify that any pre-existing output file should be overwritten, and the name of the output file.
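
For example, building on the last point (a sketch of my own, assuming a hypothetical foo.en.srt input rather than the VTT files above), the following copies video and audio untouched while letting ffmpeg convert the SRT subtitles to WebVTT, the subtitle format WebM expects:

ffmpeg -i foo.webm -i foo.en.srt -map 0:v -map 0:a -map 1:s \
  -metadata:s:a language=eng -metadata:s:s:0 language=eng   \
  -c:v copy -c:a copy -y foo.subbed.webm

Adding an explicit -c:s webvtt would make that conversion explicit, should the muxer's default choice ever surprise you.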

Worse Than FailureRepresentative Line: A Case of File Handling

Tim W caught a ticket. The PHP system he inherited allowed users to upload files, and then would process those files. It worked… most of the time. It seemed like a Heisenbug. Logging was non-existent, documentation was a fantasy, and to be honest, no one was exactly 100% certain what the processing feature was supposed to do- but whatever it was doing now was the right thing, except the times that it wasn’t right.

Specifically, some files got processed. Some files didn’t. They all were supposed to.

But other than that, it worked.

Tim worried that this was going to be difficult to replicate, especially after he tried it with a few files he had handy. Digging through the code, though, made it perfectly clear what was going on. Buried on about line 1,200 in a 3,000 line file, he found this:

while (false !== ($file = readdir($handle))) {
    if ($file != "." && $file != ".." && ( $file == strtolower($file) ) ) {
        …
    }
}

For some reason, this code required that the name of the file contain no capital letters. Why? Well, again, no documentation, no comments, and the change predated the organization’s use of source control. Someone put in the effort to add the feature, but was it necessary?

Tim took the line out, and now it processes all files. Unfortunately, it’s still only working most of the time, but nobody can exactly agree on what it’s doing wrong.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Don Martithree kinds of open source metrics

Some random notes about open source metrics, related to work on CHAOSS, where Mozilla is a member and I'm on the Governing Board.

As far as I can tell, there are three kinds of open source metrics.

Impact metrics cover how much value the software creates. Possible good ones include count of projects dependent on this one, mentions of this project in job postings, books, papers, and conference talks, and, of course, sales of products that bundle this project.

Contributor reward metrics cover how the software is a positive experience for the people who contribute to it. Job postings are a contributor reward metric as well as an impact metric. Contributor retention metrics and positive results on contributor experience surveys are some other examples.

But impact metrics and contributor reward metrics tend to be harder to collect, or slower-moving, than other kinds of metrics, which I'll lump together as activity metrics. Activity metrics include most of the things you see on open source project dashboards, such as pull request counts, time to respond to bug reports, and many others. Other activity metrics can be the output of natural language processing on project discussions. An example of that is FOSS Heartbeat, which does sentiment analysis, but you could also do other kinds of metrics based on text.

IMHO, the most interesting questions in the open source metrics area are all about: how do you predict impact metrics and contributor reward metrics from activity metrics? Activity metrics are easy to automate, and make a nice-looking dashboard, but there are many activity metrics to choose from—so which ones should you look at?

Which activity metrics are correlated to any impact metrics?

Which activity metrics are correlated to any contributor reward metrics?

Those questions are key to deciding which of the activity metrics to pay attention to. I'm optimistic that we'll be seeing some interesting correlations soon.
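
As a toy illustration of what answering them might look like (entirely made-up numbers and a hypothetical pair of metrics; this is not CHAOSS code), you could compute a rank correlation between one activity metric and one impact metric across a handful of projects:

from scipy.stats import spearmanr

# Activity metric: median days until first response on a bug report (hypothetical data).
response_days = [0.5, 0.8, 1.2, 3.3, 7.0, 12.0]
# Impact metric: count of downstream projects depending on this one (hypothetical data).
dependents = [950, 600, 410, 120, 35, 10]

rho, p = spearmanr(response_days, dependents)
print("Spearman rho = %.2f (p = %.3f)" % (rho, p))

With real data and many more projects, a consistently negative rho here would suggest that slower bug responses go along with lower impact, which is exactly the kind of relationship worth checking before deciding which dashboard numbers to optimize.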

Planet DebianDirk Eddelbuettel: RcppArmadillo 0.8.300.1.0

armadillo image

Another RcppArmadillo release hit CRAN today. Since our last 0.8.100.1.0 release in October, Conrad kept busy and produced Armadillo releases 8.200.0, 8.200.1, 8.300.0 and now 8.300.1. We tend to now package these (with proper reverse-dependency checks and all) first for the RcppCore drat repo from where you can install them "as usual" (see the repo page for details). But this release resumes our normal bi-monthly CRAN release cycle.

These releases improve a few little nags on the recent switch to more extensive use of OpenMP, and round out a number of other corners. See below for a brief summary.

Armadillo is a powerful and expressive C++ template library for linear algebra aiming towards a good balance between speed and ease of use with a syntax deliberately close to Matlab. RcppArmadillo integrates this library with the R environment and language--and is widely used by (currently) 405 other packages on CRAN.

A high-level summary of changes follows.

Changes in RcppArmadillo version 0.8.300.1.0 (2017-12-04)

  • Upgraded to Armadillo release 8.300.1 (Tropical Shenanigans)

    • faster handling of band matrices by solve()

    • faster handling of band matrices by chol()

    • faster randg() when using OpenMP

    • added normpdf()

    • expanded .save() to allow appending new datasets to existing HDF5 files

  • Includes changes made in several earlier GitHub-only releases (versions 0.8.300.0.0, 0.8.200.2.0 and 0.8.200.1.0).

  • Conversion from simple_triplet_matrix is now supported (Serguei Sokol in #192).

  • Updated configure code to check for g++ 5.4 or later to enable OpenMP.

  • Updated the skeleton package to current packaging standards

  • Suppress warnings from Armadillo about missing OpenMP support and -fopenmp flags by setting ARMA_DONT_PRINT_OPENMP_WARNING

Courtesy of CRANberries, there is a diffstat report. More detailed information is on the RcppArmadillo page. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianMarkus Koschany: My Free Software Activities in November 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

  • New upstream versions this month: undertow, jackrabbit, libpdfbox2, easymock, libokhttp-java, mediathekview, pdfsam, libsejda-java, libsambox-java and libnative-platform-java.
  • I updated bnd (2.4.1-7) in order to help with the removal of Eclipse from Testing. Unfortunately there is more work to do and the only way forward is to package a newer version of Eclipse and to split the package in a way, so that such issues can be avoided in the future. P.S.: We appreciate help with maintaining Eclipse! (#681726)
  • I sponsored libimglib2-java for Ghislain Antony Vaillant.
  • I fixed a regression in libmetadata-extractor-java related to relative classpaths. (#880746)
  • I spent more time on upgrading Gradle to version 3.4.1 and finally succeeded. The package is in experimental now. Upgrading from 3.2.1 to 3.4.1 didn’t seem like a big undertaking but the 8 MB debdiff and ~170000 lines of code changes proved me wrong. I discovered two regressions with this version in mockito and bnd. The former one could be resolved but bnd requires probably an upgrade as well. I would like to avoid that at the moment because major bnd upgrades tend to affect dozens of reverse-dependencies, mostly in a negative way.
  • Netbeans was affected by a regression in jaxb and failed to build from source. (#882525) I could partly revert the damage but another bug in jaxb 2.3.0 is currently preventing a complete recovery.
  • I fixed two Java 9 transition bugs in libnative-platform-java (#874645) and  jedit (#875583).

Debian LTS

This was my twenty-first month as a paid contributor and I have been paid to work 14.75 hours (13 +1.75 from October) on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • DLA-1177-1. Issued a security update for poppler fixing 4 CVE.
  • DLA-1178-1. Issued a security update for opensaml2 fixing 1 CVE.
  • DLA-1179-1. Issued a security update for shibboleth-sp2 fixing 1 CVE.
  • DLA-1180-1. Issued a security update for libspring-ldap-java fixing 1 CVE.
  • DLA-1184-1. Issued a security update for optipng fixing 1 CVE.
  • DLA-1185-1. Issued a security update for sam2p fixing 1 CVE.
  • DLA-1197-1. Issued a security update for sox fixing 7 CVE.
  • DLA-1198-1. Issued a security update for libextractor fixing 6 CVE. I also discovered that libextractor in buster/sid is still affected by more security issues and reported my findings as Debian bug #883528.

Misc

  • I packaged a new upstream release of osmo, a neat task manager and calendar application.
  • I prepared a security update for sam2p, which will be part of the next Jessie point release, and libspring-ldap-java. (DSA-4046-1)

Thanks for reading and see you next time.

CryptogramGermany Preparing Backdoor Law

The German Interior Minister is preparing a bill that allows the government to mandate backdoors in encryption.

No details about how likely this is to pass. I am skeptical.

Worse Than FailureNews Roundup: Calculated

A long time ago, in a galaxy right here, we ran a contest. The original OMGWTF contest was a challenge to build the worst calculator you possibly could.

We got some real treats, like the Universal Calculator, which, instead of being a calculator, was a framework for defining your own calculator, or Rube Goldberg’s Calculator, which eschewed cryptic values like “0.109375”, and instead output “seven sixty-fourths” (using inlined assembly for performance!). Or, the champion of the contest, the Buggy Four Function Calculator, which is a perfect simulation of a rotting, aging codebase.

The joke, of course, is that building a usable calculator app is easy. Why, it’s so easy, that we challenged our readers to come up with ways to make it hard. To find creative ways to fail at handling this simple task. To misinterpret and violate basic principles of how calculators should work.

Well, I bring this up, because just a few days ago, iOS 11.2 left beta and went public. And finally, finally, they fixed the calculator, which has been broken since iOS 11 launched. How broken? Let's try 1+2+3+4+5+6 shall we?

For those who can't, or don't wish to watch the video, according to the calculator, 1+2+3+4+5+6 is 75. I entered the values in quickly, but not super-speed.

I personally discovered the bug for myself while scoring at the end of a round of board games. I just ran down the score-sheet to sum things up, tapping away like one does with a calculator, and got downright insane results.

The underlying cause, near as anyone has been able to tell, is a combination of input lag and display updates, so rapidly typing “1+2+3” loses one of the “+”es and becomes “1+23”.

Now Apple’s been in the news a lot recently- in addition to shipping a completely broken calculator, they messed up character encoding, causing “I” to display a placeholder character, released a macOS update which allowed anyone to log in as root with no password, patched it, but with the problem that the patch broke filesharing, and if you didn’t apply it in the “right” order, the bug could come back.

The root cause of the root bug, by the way, was due to bad error handling in the login code.

Now, I’ll leave it to the pundits to wring their hands over the decline of Apple’s code quality, worry that “is this the future of Apple?!?!!11?”, or claim “this never would have happened under Jobs”. I’m not interested in the broad trends here, or prognosticating, or prognostibating (where you please only yourself by imagining alternate realities where Steve Jobs still lives).

What I am interested in is that calculator app. Some developer, I’m gonna assume a more junior one (right? you don’t need 15 years of experience to reimplement a calculator app), really jacked that up. And at no point in testing did anyone actually attempt to use the calculator. I’m sure they ran some automated UI tests, and when they saw odd results, they started chucking some sleep() calls in there until the errors went away.

It’s just amazing to me, that we ran a contest built around designing the worst calculator you could. A decade later, Apple comes sauntering in, vying for an honorable mention, in an application they actually shipped.

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

,

Planet DebianRenata D'Avila: Creating a blog with pelican and Github pages

Today I'm going to talk about how this blog was created. Before we begin, I expect you to be familiar with using Github and creating a Python virtual environment to develop. If you aren't, I recommend you learn with the Django Girls tutorial, which covers that and more.

This is a tutorial to help you publish a personal blog hosted by Github. For that, you will need a regular Github user account (instead of a project account).

The first thing you will do is to create the Github repository where your code will live. If you want your blog to point to only your username (like rsip22.github.io) instead of a subfolder (like rsip22.github.io/blog), you have to create the repository with that full name.

Screenshot of Github, the menu to create a new repository is open and a new repo is being created with the name 'rsip22.github.io'

I recommend that you initialize your repository with a README, with a .gitignore for Python and with a free software license. If you use a free software license, you still own the code, but you make sure that others will benefit from it, by allowing them to study it, reuse it and, most importantly, keep sharing it.

Now that the repository is ready, let's clone it to the folder you will be using to store the code in your machine:

$ git clone https://github.com/YOUR_USERNAME/YOUR_USERNAME.github.io.git

And change to the new directory:

$ cd YOUR_USERNAME.github.io

Because of how Github Pages prefers to work, serving the files from the master branch, you have to put your source code in a new branch, preserving the "master" for the output of the static files generated by Pelican. To do that, you must create a new branch called "source":

$ git checkout -b source

Create the virtualenv with the Python3 version installed on your system.

On GNU/Linux systems, the command might go as:

$ python3 -m venv venv

or as

$ virtualenv --python=python3.5 venv

And activate it:

$ source venv/bin/activate

Inside the virtualenv, you have to install pelican and its dependencies. You should also install ghp-import (to help us with publishing to github) and Markdown (for writing your posts using markdown). It goes like this:

(venv)$ pip install pelican markdown ghp-import

Once that is done, you can start creating your blog using pelican-quickstart:

(venv)$ pelican-quickstart

This will prompt us with a series of questions. Before answering them, take a look at my answers below:

> Where do you want to create your new web site? [.] ./
> What will be the title of this web site? Renata's blog
> Who will be the author of this web site? Renata
> What will be the default language of this web site? [pt] en
> Do you want to specify a URL prefix? e.g., http://example.com   (Y/n) n
> Do you want to enable article pagination? (Y/n) y
> How many articles per page do you want? [10] 10
> What is your time zone? [Europe/Paris] America/Sao_Paulo
> Do you want to generate a Fabfile/Makefile to automate generation and publishing? (Y/n) Y **# PAY ATTENTION TO THIS!**
> Do you want an auto-reload & simpleHTTP script to assist with theme and site development? (Y/n) n
> Do you want to upload your website using FTP? (y/N) n
> Do you want to upload your website using SSH? (y/N) n
> Do you want to upload your website using Dropbox? (y/N) n
> Do you want to upload your website using S3? (y/N) n
> Do you want to upload your website using Rackspace Cloud Files? (y/N) n
> Do you want to upload your website using GitHub Pages? (y/N) y
> Is this your personal page (username.github.io)? (y/N) y
Done. Your new project is available at /home/username/YOUR_USERNAME.github.io

About the time zone, it should be specified as TZ Time zone (full list here: List of tz database time zones).

Now, go ahead and create your first blog post! You might want to open the project folder on your favorite code editor and find the "content" folder inside it. Then, create a new file, which can be called my-first-post.md (don't worry, this is just for testing, you can change it later). The contents should begin with the metadata which identifies the Title, Date, Category and more from the post before you start with the content, like this:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

Title: My first post
Date: 2017-11-26 10:01
Modified: 2017-11-27 12:30
Category: misc
Tags: first, misc
Slug: My-first-post
Authors: Your name
Summary: What does your post talk about? Write here.

This is the *first post* from my Pelican blog. **YAY!**

Let's see how it looks?

Go to the terminal, generate the static files and start the server. To do that, use the following command:

(venv)$ make html && make serve

While this command is running, you should be able to visit it on your favorite web browser by typing localhost:8000 on the address bar.

Screenshot of the blog home. It has a header with the title Renata\'s blog, the first post on the left, info about the post on the right, links and social on the bottom.

Pretty neat, right?

Now, what if you want to put an image in a post, how do you do that? Well, first you create a directory inside your content directory, where your posts are. Let's call this directory 'images' for easy reference. Now, you have to tell Pelican to use it. Find the pelicanconf.py, the file where you configure the system, and add a variable that contains the directory with your images:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

STATIC_PATHS = ['images']

Save it. Go to your post and add the image this way:

.lang="markdown" # DON'T COPY this line, it exists just for highlighting purposes

![Write here a good description for people who can't see the image]({filename}/images/IMAGE_NAME.jpg)

You can interrupt the server at any time by pressing CTRL+C on the terminal. But you should start it again and check if the image is correct. Can you remember how?

(venv)$ make html && make serve

One last step before your coding is "done": you should make sure anyone can read your posts using ATOM or RSS feeds. Find the pelicanconf.py, the file where you configure the system, and edit the part about feed generation:

.lang="python" # DON'T COPY this line, it exists just for highlighting purposes

FEED_ALL_ATOM = 'feeds/all.atom.xml'
FEED_ALL_RSS = 'feeds/all.rss.xml'
AUTHOR_FEED_RSS = 'feeds/%s.rss.xml'
RSS_FEED_SUMMARY_ONLY = False

Save everything so you can send the code to Github. You can do that by adding all files, committing it with a message ('first commit') and using git push. You will be asked for your Github login and password.

$ git add -A && git commit -a -m 'first commit' && git push --all

And... remember how at the very beginning I said you would be preserving the master branch for the output of the static files generated by Pelican? Now it's time for you to generate them:

$ make github

You will be asked for your Github login and password again. And... voilà! Your new blog should be live on https://YOUR_USERNAME.github.io.

If you had an error in any step of the way, please reread this tutorial, try and see if you can detect in which part the problem happened, because that is the first step to debugging. Sometimes, even something simple like a typo or, with Python, a wrong indentation, can give us trouble. Shout out and ask for help online or in your community.

For tips on how to write your posts using Markdown, you should read the Daring Fireball Markdown guide.

To get other themes, I recommend you visit Pelican Themes.

This post was adapted from Adrien Leger's Create a github hosted Pelican blog with a Bootstrap3 theme. I hope it was somewhat useful for you.

Planet DebianKeith Packard: nuttx-scheme

Scheme For NuttX

Last fall, I built a tiny lisp interpreter for AltOS. That was fun, but had some constraints:

  • Yet another lisp-like language
  • Ran only on AltOS, not exactly a widely used operating system.

To fix the first problem, I decided to try and just implement scheme. The language I had implemented wasn't far off; it had lexical scoping and call-with-current-continuation after all. The rest is pretty simple stuff.

To fix the second problem, I ported the interpreter to NuttX. NuttX is a modest operating system for 8 to 32-bit microcontrollers with a growing community of developers.

I downloaded the most recent Scheme spec, the Revised⁷ Report, which is the 'small language' follow-on to the contentious Revised⁶ Report.

Converting ao-lisp to ao-scheme

Reading through the spec, it was clear there were a few things I needed to deal with to provide something that looked like scheme:

  • quasiquote
  • syntax-rules
  • function names
  • boolean type

Quasiquote turned out to be fun -- the spec described it in terms of a set of list forms, so I hacked up the reader to convert the convenient syntax using ` , and ,@ into lists and wrote a macro to emit construction code from the generated lists.
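
As a small illustration (mine, not code from the interpreter): with x bound to 2 and xs bound to (3 4), the reader turns

`(1 ,x ,@xs)

into plain list structure, and the macro then emits construction code roughly equivalent to

(cons 1 (cons x xs))

which evaluates to (1 2 3 4).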

Syntax-rules is a 'nicer' way to write macros, and I haven't implemented it yet. There's nothing it can do which the underlying full macros cannot, so I'm planning on just writing it in scheme rather than having a pile more C code.

Scheme has a separate boolean type: rather than using the empty list, (), for false, it uses #f and has everything else be 'true'. Adding that wasn't hard, just tedious as I had to work through any place that used boolean values and switch it to using #f or #t.

There were also a pile of random function name swaps and another bunch of new functions to write.

All in all, not a huge amount of work, and now the language looks a lot more like scheme.

Adding more number types

The original language had only integers, and those were only 14 bits wide. To make the language a bit more usable, I added 24 bit integers as well, along with 32-bit floats. Then I added automatic promotion between representations and the usual scheme tests for exactness. This added a bit to the footprint, and maybe I should make it optional.

Porting to NuttX

This was the easiest piece of the process -- NuttX offers a posix-like API, just like AltOS. Getting it to build was actually a piece of cake. The only part requiring new code was the lack of any command line editing or echo -- I ended up using readline to get that to work.

I was pleased that all of the language changes I made didn't significantly impact the footprint of the resulting system. I built NuttX for the stm32f4-discovery board, compiling in basic and then comparing with and without scheme:

Before:

$ size nuttx
   text    data     bss     dec     hex filename
 183037     172    4176  187385   2dbf9 nuttx

After:

$ size nuttx
   text    data     bss     dec     hex filename
 213197     188   22672  236057   39a19 nuttx

The increase in text includes 11kB of built-in lisp code, so that when the interpreter starts, you already have all of the necessary lisp code loaded that turns the bare interpreter into a full scheme system. That makes the core interpreter around 20kB of code, which is nice and compact (at least for scheme, I think).

The BSS space includes the heap; this can be set to any size you like. It would probably be good to have that allocated on the fly instead of used even when the interpreter isn't running.

Where's the Code

I've pushed the code here:

$ git clone git://keithp.com/git/apps

Future Work

There's more work to complete the language support; here's some tasks needing attention at some point:

  • No vectors or bytevectors
  • Characters are just numbers
  • No dynamic-wind or exceptions
  • No environments
  • No ports
  • No syntax-rules
  • No record types
  • No libraries
  • Heap allocated from BSS

A Sample Application!

Here's towers of hanoi in scheme for nuttx:

;
; Towers of Hanoi
;
; Copyright © 2016 Keith Packard <keithp@keithp.com>
;
; This program is free software; you can redistribute it and/or modify
; it under the terms of the GNU General Public License as published by
; the Free Software Foundation, either version 2 of the License, or
; (at your option) any later version.
;
; This program is distributed in the hope that it will be useful, but
; WITHOUT ANY WARRANTY; without even the implied warranty of
; MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
; General Public License for more details.
;

                    ; ANSI control sequences

(define (move-to col row)
  (for-each display (list "\033[" row ";" col "H"))
  )

(define (clear)
  (display "\033[2J")
  )

(define (display-string x y str)
  (move-to x y)
  (display str)
  )

(define (make-piece num max)
                    ; A piece for position 'num'
                    ; is num + 1 + num stars
                    ; centered in a field of max *
                    ; 2 + 1 characters with spaces
                    ; on either side. This way,
                    ; every piece is the same
                    ; number of characters

  (define (chars n c)
    (if (zero? n) ""
      (+ c (chars (- n 1) c))
      )
    )
  (+ (chars (- max num 1) " ")
     (chars (+ (* num 2) 1) "*")
     (chars (- max num 1) " ")
     )
  )

(define (make-pieces max)
                    ; Make a list of numbers from 0 to max-1
  (define (nums cur max)
    (if (= cur max) ()
      (cons cur (nums (+ cur 1) max))
      )
    )
                    ; Create a list of pieces

  (map (lambda (x) (make-piece x max)) (nums 0 max))
  )

                    ; Here's all of the towers of pieces
                    ; This is generated when the program is run

(define towers ())

                    ; position of the bottom of
                    ; the stacks set at runtime
(define bottom-y 0)
(define left-x 0)

(define move-delay 25)

                    ; Display one tower, clearing any
                    ; space above it

(define (display-tower x y clear tower)
  (cond ((= 0 clear)
     (cond ((not (null? tower))
        (display-string x y (car tower))
        (display-tower x (+ y 1) 0 (cdr tower))
        )
           )
     )
    (else 
     (display-string x y "                    ")
     (display-tower x (+ y 1) (- clear 1) tower)
     )
    )
  )

                    ; Position of the top of the tower on the screen
                    ; Shorter towers start further down the screen

(define (tower-pos tower)
  (- bottom-y (length tower))
  )

                    ; Display all of the towers, spaced 20 columns apart

(define (display-towers x towers)
  (cond ((not (null? towers))
     (display-tower x 0 (tower-pos (car towers)) (car towers))
     (display-towers (+ x 20) (cdr towers)))
    )
  )

                    ; Display all of the towers, then move the cursor
                    ; out of the way and flush the output

(define (display-hanoi)
  (display-towers left-x towers)
  (move-to 1 23)
  (flush-output)
  (delay move-delay)
  )

                    ; Reset towers to the starting state, with
                    ; all of the pieces in the first tower and the
                    ; other two empty

(define (reset-towers len)
  (set! towers (list (make-pieces len) () ()))
  (set! bottom-y (+ len 3))
  )

                    ; Move a piece from the top of one tower
                    ; to the top of another

(define (move-piece from to)

                    ; references to the cons holding the two towers

  (define from-tower (list-tail towers from))
  (define to-tower (list-tail towers to))

                    ; stick the car of from-tower onto to-tower

  (set-car! to-tower (cons (caar from-tower) (car to-tower)))

                    ; remove the car of from-tower

  (set-car! from-tower (cdar from-tower))
  )

                    ; The implementation of the game

(define (_hanoi n from to use)
  (cond ((= 1 n)
     (move-piece from to)
     (display-hanoi)
     )
    (else
     (_hanoi (- n 1) from use to)
     (_hanoi 1 from to use)
     (_hanoi (- n 1) use to from)
     )
    )
  )

                    ; A pretty interface which
                    ; resets the state of the game,
                    ; clears the screen and runs
                    ; the program

(define (hanoi len)
  (reset-towers len)
  (clear)
  (display-hanoi)
  (_hanoi len 0 1 2)
  #t
  )

Krebs on SecurityAnti-Skimmer Detector for Skimmer Scammers

Crooks who make and deploy ATM skimmers are constantly engaged in a cat-and-mouse game with financial institutions, which deploy a variety of technological measures designed to defeat skimming devices. The latest innovation aimed at tipping the scales in favor of skimmer thieves is a small, battery powered device that provides crooks a digital readout indicating whether an ATM likely includes digital anti-skimming technology.

A well-known skimmer thief is marketing a product called “Smart Shield Detector” that claims to be able to detect a variety of electronic methods used by banks to foil ATM skimmers.

The device, which sells for $200, is called a “Smart Shield Detector,” and promises to detect “all kinds of noise shields, hidden shields, delayed shields and others!”

It appears to be a relatively simple machine that gives a digital numeric indicator of whether an ATM uses any of a variety of anti-skimming methods. One of the most common is known as “frequency jamming,” which uses electronic signals to scramble both the clock (timing) and the card data itself in a bid to confuse skimming devices.

“You will see current level within seconds!,” the seller enthuses in an online ad for the product, a snippet of which is shown above. “Available for sale after November 1st, market price 200usd. Preorders available at price 150usd/device. 2+ devices for your team – will give discounts.”

According to the individual selling the Smart Shield Detector, a readout of 15 or higher indicates the presence of some type of electronic shield or jamming technology — warning the skimmer thief to consider leaving that ATM alone and to find a less protected machine. In contrast, a score between 3-5 is meant to indicate “no shield,” i.e., that the ATM is ripe for compromise.

KrebsOnSecurity shared this video with Charlie Harrow, solutions manager for ATM maker NCR Corp. Harrow called the device “very interesting” but said NCR doesn’t try to hide which of its ATMs include anti-skimming technologies — such as those that claim to be detectable by the Smart Shield Detector.

“We don’t hide the fact that our ATMs are protected against this type of external skimming attack,” Harrow said. “Our Anti-Skimming product uses a uniquely shaped bezel so you can tell just by looking at the ATM that it is protected (if you know what you are looking for).”

Harrow added that NCR doesn’t rely on secrecy of design to protect its ATMs.

“The bad guys are skilled, resourced and determined enough that sooner or later they will figure out exactly what we have done, so the ATM has to be safe against a knowledgeable attacker,” he said. “That said, a little secret sauce doesn’t hurt, and can often be very effective in stopping specific attack [methods] in the short term, but it can’t be relied on to provide any long term protection.”

The best method for protecting yourself against ATM skimmers doesn’t require any fancy gadgets or technology at all: It involves merely covering the PIN pad with your hand while you enter your PIN!

That’s because the vast majority of skimming attacks involve two components: A device that fits over or inside the card reader and steals data from the card’s magnetic stripe, and a tiny hidden camera aimed at the PIN pad. While thieves who have compromised an ATM you used can still replicate your ATM card, the real value rests in your PIN, without which the thieves cannot easily drain your checking or savings account of cash.

Also, be aware of your physical surroundings while using an ATM; you’re probably more apt to get mugged physically than virtually at a cash machine. Finally, try to stick to cash machines that are physically installed inside of banks, as these tend to be much more challenging for thieves to compromise than stand-alone machines like those commonly found at convenience stores.

KrebsOnSecurity would like to thank Alex Holden, founder of Milwaukee, Wisc. based Hold Security, for sharing the above video.

Are you fascinated by skimming devices? Then check out my series, All About Skimmers, which looks at all manner of skimming scams, from fake ATMs and cash claws to PIN pad overlays and gas pump skimmers.

Sociological ImagesWhat Drives Conspiracy Theories?

From Pizzagate to more plausible stories of palace intrigue, U.S. politics has more than a whiff of conspiracy in the air these days. In sorting fact from fiction, why do some people end up believing conspiracy theories? Social science research shows that we shouldn’t think about these beliefs like delusions, because the choice to buy in stems from real structural and psychological conditions that can affect us all.

For example, research in political science shows that people who know a lot about politics, but also show low levels of generalized trust, are more likely to believe conspiracy theories. It isn't just partisan, either: both liberals and conservatives are equally likely to believe conspiracy theories—just different ones.

In sociology, research also shows how bigger structural factors elevate conspiracy concern. In an article published in Socius earlier this year, Joseph DiGrazia examined Google search trends for two major conspiracy theories between 2007 and 2014: inquiries about the Illuminati and concern about President Obama’s birth and citizenship.

DiGrazia looked at the state-level factors that had the strongest and most consistent relationships with search frequency: partisanship and employment. States with higher unemployment rates had higher search rates about the Illuminati, and more Republican states had higher searches for both conspiracies throughout the Obama administration.

These studies show it isn’t correct to treat conspiracy beliefs as simply absurd or irrational—they flare up among reasonably informed people who have lower trust in institutions, often when they feel powerless in the face of structural changes across politics and the economy.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Planet DebianJonathan Dowland: back on the Linux desktop

As forecast, I've switched from Mac back to Linux on the Desktop. I'm using a work-supplied Thinkpad T470s which is a nice form-factor machine (the T450s was the first Thinkpad to widen my perspective away from just looking at the X series).

I've installed Debian to get started and ended up with GNOME 3 as the desktop (I was surprised not to be prompted for a choice in the installer, but on reflection that makes sense: I did a non-networked install from the GNOME flavour of the live DVD). So for the time being I'm going to stick to GNOME 3 and see what's new/better/worse than last time, but once my replacement SSD arrives I can revisit.

I haven't made much progress on the sticking points I identified in my last post. I'm hoping to get 1pass up and running in the interim to read my 1Password DB so I can get by until I've found a replacement password manager that I like.

Most of my desktop configuration steps I have captured in some Ansible playbooks. I'm looking at Ansible after a long break from using puppet, and there's things I like and things I don't. I've also been exploring ownCloud for personal file sharing and despite a couple of warning signs (urgh PHP, official Debian package was dropped) I'm finding it really useful, in particular for sharing stuff with family. I might write more about both of those later.

Planet DebianJoachim Breitner: Finding bugs in Haskell code by proving it

Last week, I wrote a small nifty tool called bisect-binary, which semi-automates answering the question “To what extent can I fill this file up with zeroes and still have it working”. I wrote it in Haskell, and part of the Haskell code, in the Intervals.hs module, is a data structure for “subsets of a file” represented as a sorted list of intervals:

data Interval = I { from :: Offset, to :: Offset }
newtype Intervals = Intervals [Interval]

The code is the kind of Haskell code that I like to write: A small local recursive function, a few guards for case analysis, and I am done:

intersect :: Intervals -> Intervals -> Intervals
intersect (Intervals is1) (Intervals is2) = Intervals $ go is1 is2
  where
    go _ [] = []
    go [] _ = []
    go (i1:is1) (i2:is2)
        -- reorder for symmetry
        | to i1 < to i2 = go (i2:is2) (i1:is1)
        -- disjoint
        | from i1 >= to i2 = go (i1:is1) is2
        -- subset
        | to i1 == to i2 = I f' (to i2) : go is1 is2
        -- overlapping
        | otherwise = I f' (to i2) : go (i1 { from = to i2} : is1) is2
      where f' = max (from i1) (from i2)

But clearly, the code is already complicated enough that it is easy to make a mistake. I could have put in some QuickCheck properties to test the code, but I was in a proving mood...
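
For what it's worth, such a property might have looked roughly like this (a sketch only; member and genIntervals are helpers invented here for illustration and are not part of Intervals.hs, and the sketch assumes Show and Arbitrary instances for the types involved):

import Data.List (nub, sort)
import Test.QuickCheck

-- Hypothetical semantics: is an offset covered by the interval set?
member :: Offset -> Intervals -> Bool
member x (Intervals is) = any (\i -> from i <= x && x < to i) is

-- Hypothetical generator for well-formed values: sorted, disjoint,
-- non-empty, non-adjacent intervals (the invariant proved below as goodLIs).
genIntervals :: Gen Intervals
genIntervals = do
    bounds <- (nub . sort) <$> listOf arbitrary
    let pair (f:t:rest) = I f t : pair rest
        pair _          = []
    return (Intervals (pair bounds))

prop_intersect :: Offset -> Property
prop_intersect x =
    forAll genIntervals $ \is1 ->
    forAll genIntervals $ \is2 ->
        member x (intersect is1 is2) === (member x is1 && member x is2)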

Now available: Formal Verification for Haskell

Ten months ago I complained that there was no good way to verify Haskell code (and created the nifty hack ghc-proofs). But things have changed since then, as a group at UPenn (mostly Antal Spector-Zabusky, Stephanie Weirich and myself) has created hs-to-coq: a translator from Haskell to the theorem prover Coq.

We have used hs-to-coq on various examples, as described in our CPP'18 paper, but it is high-time to use it for real. The easiest way to use hs-to-coq at the moment is to clone the repository, copy one of the example directories (e.g. examples/successors), place the Haskell file to be verified there and put the right module name into the Makefile. I also commented out parts of the Haskell file that would drag in non-base dependencies.

Massaging the translation

Often, hs-to-coq translates Haskell code without a hitch, but sometimes, a bit of help is needed. In this case, I had to specify three so-called edits:

  • The Haskell code uses Intervals both as a name for a type and for a value (the constructor). This is fine in Haskell, which has separate value and type namespaces, but not for Coq. The line

    rename value Intervals.Intervals = ival

    changes the constructor name to ival.

  • I use the Int64 type in the Haskell code. The Coq version of Haskell’s base library that comes with hs-to-coq does not support that yet, so I change that via

    rename type GHC.Int.Int64 = GHC.Num.Int

    to the normal Int type, which itself is mapped to Coq’s Z type. This is not a perfect fit, and my verification would not catch problems that arise due to the boundedness of Int64. Since none of my code does arithmetic, only comparisons, I am fine with that.

  • The biggest hurdle is the recursion of the local go functions. Coq requires all recursive functions to be obviously (i.e. structurally) terminating, and the go above is not. For example, in the first case, the arguments to go are simply swapped. It is very much not obvious why this is not an infinite loop.

    I can specify a termination measure, i.e. a function that takes the arguments xs and ys and returns a “size” of type nat that decreases in every call: Add the lengths of xs and ys, multiply by two and add one if the first interval in xs ends before the first interval in ys.

    If the problematic function were a top-level function I could tell hs-to-coq about this termination measure and it would use this information to define the function using Program Fixpoint.

    Unfortunately, go is a local function, so this mechanism is not available to us. If I care more about the verification than about preserving the exact Haskell code, I could easily change the Haskell code to make go a top-level function, but in this case I did not want to change the Haskell code.

    Another way out offered by hs-to-coq is to translate the recursive function using an axiom unsafeFix : forall a, (a -> a) -> a. This looks scary, but as I explain in the previous blog post, this axiom can be used in a safe way.

    I should point out it is my dissenting opinion to consider this a valid verification approach. The official stand of the hs-to-coq author team is that using unsafeFix in the verification can only be a temporary state, and eventually you’d be expected to fix (heh) this, for example by moving the functions to the top-level and using hs-to-coq’s support for Program Fixpoint.

With these edits in place, hs-to-coq spits out a faithful Coq copy of my Haskell code.

Time to prove things

The rest of the work is mostly straight-forward use of Coq. I define the invariant I expect to hold for these lists of intervals, namely that they are sorted, non-empty, disjoint and non-adjacent:

Fixpoint goodLIs (is : list Interval) (lb : Z) : Prop :=
  match is with
    | [] => True
    | (I f t :: is) => (lb <= f)%Z /\ (f < t)%Z /\ goodLIs is t
   end.

Definition good is := match is with
  ival is => exists n, goodLIs is n end.

and I give them meaning as Coq type for sets, Ensemble:

Definition range (f t : Z) : Ensemble Z :=
  (fun z => (f <= z)%Z /\ (z < t)%Z).

Definition semI (i : Interval) : Ensemble Z :=
  match i with I f t => range f t end.

Fixpoint semLIs (is : list Interval) : Ensemble Z :=
  match is with
    | [] => Empty_set Z
    | (i :: is) => Union Z (semI i) (semLIs is)
end.

Definition sem is := match is with
  ival is => semLIs is end.

Now I prove for every function that it preserves the invariant and that it corresponds to the, well, corresponding function, e.g.:

Lemma intersect_good : forall (is1 is2 : Intervals),
  good is1 -> good is2 -> good (intersect is1 is2).
Proof. … Qed.

Lemma intersection_spec : forall (is1 is2 : Intervals),
  good is1 -> good is2 ->
  sem (intersect is1 is2) = Intersection Z (sem is1) (sem is2).
Proof. … Qed.

Even though I punted on the question of termination while defining the functions, I do not get around that while verifying this, so I formalize the termination argument above

Definition needs_reorder (is1 is2 : list Interval) : bool :=
  match is1, is2 with
    | (I f1 t1 :: _), (I f2 t2 :: _) => (t1 <? t2)%Z
    | _, _ => false
  end.

Definition size2 (is1 is2 : list Interval) : nat :=
  (if needs_reorder is1 is2 then 1 else 0) + 2 * length is1 + 2 * length is2.

and use it in my inductive proofs.

As I intend this to be a write-once proof, I happily copy’n’pasted proof scripts and did not do any cleanup. Thus, the resulting Proof file is big, ugly and repetitive. I am confident that judicious use of Coq tactics could greatly condense this proof.

Using Program Fixpoint after the fact?

These proofs are also an experiment in how to do induction over a locally defined recursive function without overly ugly proof goals (hence the line match goal with [ |- context [unsafeFix ?f _ _] ] => set (u := f) end.). One could improve upon this approach by following these steps:

  1. Define copies (say, intersect_go_witness) of the local go using Program Fixpoint with the above termination measure. The termination argument needs to be made only once, here.

  2. Use this function to prove that the argument f in go = unsafeFix f actually has a fixed point:

    Lemma intersect_go_sound:
      f intersect_go_witness = intersect_go_witness

    (This requires functional extensionality). This lemma indicates that my use of the axioms unsafeFix and unsafeFix_eq is actually sound, as discussed in the previous blog post.

  3. Still prove the desired properties for the go that uses unsafeFix, as before, but using the functional induction scheme for intersect_go! This way, the actual proofs are free from any noisy termination arguments.

    (The trick to define a recursive function just to throw away the function and only use its induction rule is one I learned in Isabelle, and is very useful to separate the meat from the red tape in complex proofs. Note that the induction rule for a function does not actually mention the function!)

Maybe I will get to this later.

Update: I experimented a bit in that direction, and it does not quite work as expected. In step 2 I am stuck because Program Fixpoint does not create a fixpoint-unrolling lemma, and in step 3 I do not get the induction scheme that I was hoping for. Neither problem would exist if I used the Function command, although that needs some trickery to support a termination measure on multiple arguments. The induction lemma is not quite as polished as I was hoping for, so the resulting proof is still somewhat ugly, and it requires copying code, which does not scale well.

Efforts and gains

I spent exactly 7 hours working on these proofs, according to arbtt. I am sure that writing these functions took me much less time, but I cannot calculate that easily, as they were originally in the Main.hs file of bisect-binary.

I did find and fix three bugs:

  • The intersect function would not always retain the invariant that the intervals would be non-empty.
  • The subtract function would prematurely advance through the list of intervals in the second argument, which can lead to a genuinely wrong result. (This occurred twice.)

Conclusion: Verification of Haskell code using Coq is now practically possible!

Final rant: Why is the Coq standard library so incomplete (compared to, say, Isabelle’s) that it requires me to prove so many lemmas about basic functions on Ensembles?

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #136

Here's what happened in the Reproducible Builds effort between Sunday, November 26 and Saturday, December 2, 2017:

Media coverage

Arch Linux imap key leakage

A security issue was found in the imap package in Arch Linux thanks to the reproducible builds effort in that distribution.

Due to a hardcoded key-generation routine in the build() step of imap's PKGBUILD (the standard packaging file for Arch Linux packages), a default secret key was generated and leaked on all imap installations. This was promptly reviewed, confirmed and fixed by the package maintainers.

This mirrors similar security issues found in Debian, such as #833885.

Debian packages reviewed and fixed, and bugs filed

In addition, 73 FTBFS bugs were detected and reported by Adrian Bunk.

Reviews of unreproducible Debian packages

83 package reviews have been added, 41 have been updated and 33 have been removed in this week, adding to our knowledge about identified issues.

1 issue type was updated:

LEDE / OpenWrt packages updates:

diffoscope development

reprotest development

Version 0.7.4 was uploaded to unstable by Ximin Luo. It included contributions already covered by posts of the previous weeks as well as new ones from:

reproducible-website development

tests.reproducible-builds.org

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Santiago Torres-Arias, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

CryptogramMatt Blaze on Securing Voting Machines

Matt Blaze's House testimony on the security of voting machines is an excellent read. (Details on the entire hearing are here.) I have not watched the video.

Worse Than FailureEditor's Soapbox: Protect Yourself

We lend the soapbox to snoofle today, to dispense a combination of career and financial advice. I've seen too many of my peers sell their lives for a handful of magic beans. Your time is too valuable to waste for no reward. -- Remy

There is a WTF that far too many people make with their retirement accounts at work. I've seen many, many people get massively financially burned. A friend recently lost a huge amount of money from their retirement account when the company went under, which prompted me to write this to help you prevent it from happening to you.

A pile of money

The housing bubble that led up to the 2008 financial collapse burst when overinflated housing values came back down to reality. People had been given mortgages far beyond what they could afford using traditional financial norms, and when the value of their homes came back down to realistic levels, they couldn't afford their mortgages and started missing payments, or worse, defaulted. This left the banks and brokerages that were holding the mortgage-backed securities with billions in cash flow, but upside down on the balance sheet. When it crossed a standard threshold, they went under, notably Bear Stearns and Lehman. Numerous companies (AIG, Citi, etc.) that invested in these MBS also nearly went under.

One person I knew of had worked at BS for many years and had a great deal of BS stock in their retirement account. When they left for Lehman, they left the account intact at BS. Then they spent many years at Lehman. When both melted down, that person not only lost their job, but the company stock in both retirement accounts was worth... a whole lot less.

As a general statement, if you work for a company, don't buy only stock of that company in your retirement account because if the place goes belly up, you lose twice: your job and your retirement account!

Another thing people do is accept stock options in lieu of pay. Startups are big on doing this as it limits the cash outflow when they are new. If they succeed, they'll have the cash to cover the options. If they go bust, you lose. Basically, you put in the long hours and take a large chunk of the financial risk on the hopes that the managers know what they're doing, and are one of the lucky unicorns that "makes it". But large companies also pay people (in part) in options. A friend worked their way up to Managing Director of a large firm. He was paid 20% cash and 80% company stock options, but had to hold the options for five years before he was allowed to exercise them - so that he'd be vested in the success of the company. By the time the sixth year had rolled by, he had forgotten about it and let it ride, with the options auto-exercising and being converted into regular shares. When he left the job, he left the account intact and in place. When the market tanked, so did the value of the stock that he had earned and been awarded.

When you leave a job, either voluntarily or forcibly, roll the assets in your retirement account in-kind into a personal retirement account at any bank or brokerage that provides that (custodian) service. You won't pay taxes if you do a direct transfer, but if some company where you used to work goes under, you won't have to chase lawyers to get what belongs to you.

Remember, Bill Gates routinely divested huge blocks of MS stock as part of diversifying, even while it was still increasing in value. Your numbers will be smaller but the same principle applies to you too (e.g.: Don't put all your eggs in one basket).

While the 2008 fiasco and dot-com bust will hopefully never be repeated, in the current climate of deregulation, you never know. If you've heavily weighted your retirement account with company stock, or have a trail of retirement accounts at former employers, please go talk to a financial advisor about diversifying your holdings, and collect the past corporate retirement accounts in a single personal retirement brokerage account, where you can more easily control it and keep an eye on it.

Personally, I'm retired. My assets are split foreign/domestic, bonds/equities, large/medium/small-cap and growth/blend/value. A certain percentage is professionally managed, but I keep an eye on what they're doing and the costs. The rest is in mutual funds that cover the desired sectors, etc.

The amounts and percentages across investment types in which you invest will vary by your age, total assets and time horizon. Only you can know what's best for your family, but you should discuss it with an independent advisor (before they repeal the fiduciary rule, which states that they must put your interests ahead of what their firm is pushing).

For what it's worth, over my career, I've worked at five companies that went under, more than twenty years down the road after I moved on. I have always taken the cash value of the pension/401(k) and rolled it into a brokerage account where I manage it myself. Had I left those assets at the respective companies, I would have lost over $100,000 of money that I had earned and been awarded - for absolutely no reason!

Consider for a moment that the managers that we all too often read about in this space are often the same ones who set up and manage these workplace retirement plans. Do you really want them managing money that you've already earned? Especially after you've moved on to the next gig? When you're not there to hear office gossip about Bad Things™ that may be happening?

One final point. During the first few years of my career, there were no 401(k)'s. If you didn't have a pension, your savings account was your main investment vehicle. Unless the IRA and 401(k) plan rules are changed, you can start saving very early on. At first, it seems like it accumulates very slowly, but the rate of growth increases rapidly as you get nearer to the end of your career. The sooner you start saving for the big ticket items down the road, the quicker you'll be able to pay for them. Patience, persistence and diversification are key!

As someone who has spent the last quarter century working for these massive financial institutions, I've seen too many people lose far too much; please protect yourself!

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet DebianPetter Reinholdtsen: Is the short movie «Empty Socks» from 1927 in the public domain or not?

Three years ago, a presumed lost animation film, Empty Socks from 1927, was discovered in the Norwegian National Library. At the time it was discovered, it was generally assumed to be copyrighted by The Walt Disney Company, and I blogged about my reasoning to conclude that it would enter the Norwegian equivalent of the public domain in 2053, based on my understanding of Norwegian Copyright Law. But a few days ago, I came across a blog post claiming the movie was already in the public domain, at least in the USA. The reasoning is as follows: The film was released in November or December 1927 (sources disagree), and presumably its copyright was registered that year. At that time, right holders of movies registered by the copyright office received government protection for their work for 28 years. After 28 years, the copyright had to be renewed if they wanted the government to protect it further. The blog post I found claims such renewal did not happen for this movie, and thus it entered the public domain in 1956. Yet someone else claims the copyright was renewed and the movie is still copyright protected. Can anyone help me figure out which claim is correct? I have not been able to find Empty Socks in the Catalog of copyright entries. Ser.3 pt.12-13 v.9-12 1955-1958 Motion Pictures available from the University of Pennsylvania, neither on page 45 for the first half of 1955, nor on page 119 for the second half of 1955. It is of course possible that the renewal entry was left out of the printed catalog by mistake. Is there some way to rule out this possibility? Please help, and update the Wikipedia page with your findings.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet Linux AustraliaLev Lafayette: A Tale of Two Conferences: ISC and TERATEC 2017

This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of choice at both was overwhelming.

read more

,

Planet DebianThomas Lange: FAI.me build server improvements

Only one week ago, I announced the FAI.me build service for creating your own installation images. I've got some feedback, and people would like to have root login without a password but using an ssh key. This feature is now available. You can upload your public ssh key, which will be installed as authorized_keys for the root account.

You can now also download the configuration space that is used on the installation image, and you can get the whole log file from the fai-mirror call. This command creates the partial package mirror. The log file helps you debug if you add packages which have conflicts with other packages, or if you misspelt a package name.

FAI.me

Planet DebianJoey Hess: new old thing

closeup of red knothole, purple-pink and yellow wood grain

This branch came from a cedar tree overhanging my driveway.

It was fun to bust this open and shape it with hammer and chisels. My dad once recommended learning to chisel before learning any power tools for woodworking... so I suppose this is a start.

roughly split wood branch

Some tung oil and drilling later, and I'm very pleased to have a nice place to hang my cast iron.

slightly curved wood beam with cast iron pots hung from it

Cryptogram"Crypto" Is Being Redefined as Cryptocurrencies

I agree with Lorenzo Franceschi-Bicchierai, "Cryptocurrencies aren't 'crypto'":

Lately on the internet, people in the world of Bitcoin and other digital currencies are starting to use the word "crypto" as a catch-all term for the lightly regulated and burgeoning world of digital currencies in general, or for the word "cryptocurrency" -- which probably shouldn't even be called "currency," by the way.

[...]

To be clear, I'm not the only one who is mad about this. Bitcoin and other technologies indeed do use cryptography: all cryptocurrency transactions are secured by a "public key" known to all and a "private key" known only to one party -- this is the basis for a swath of cryptographic approaches (known as public key, or asymmetric cryptography) like PGP. But cryptographers say that's not really their defining trait.

"Most cryptocurrency barely has anything to do with serious cryptography," Matthew Green, a renowned computer scientist who studies cryptography, told me via email. "Aside from the trivial use of digital signatures and hash functions, it's a stupid name."

It is a stupid name.

Worse Than FailureCodeSOD: Pounding Away

“Hey, Herbie, we need you to add code to our e-commerce package to send an email with order details in it,” was the requirement.

“You mean like a notification? Order confirmation?”

“Yes!”

So Herbie trotted off to write the code, only to learn that it was all wrong. They didn’t want a human-readable confirmation. The emails were going to a VB application, and they needed a machine-readable format. So Herbie revamped the email to have XML, and provided an XML schema.

This was also wrong. Herbie’s boss wrangled Herbie and the VB developer together on a conference call, and they tried to hammer out some sort of contract for how the data would move from system to system.

They didn’t want the data in any standard format. They had their own format. They didn’t have a clear idea about what the email was supposed to contain, either, which meant Herbie got to play the game of trying his best to constantly revamp the code as they changed the requirements on the fly.

In the end, he produced this monster:

   private function getAdminMailString(){
        $adminMailString = '';
        $mediaBeans = $this->orders->getConfiguredImageBeans();
        $mediaBeansSize = count($mediaBeans);

        $adminMailString .= '###order-start###'."\n";
        $adminMailString .= '###order-size-start###' . $mediaBeansSize . "###order-size-end###\n";
        $adminMailString .= '###date-start###' . date('d.m.Y',strtotime($this->context->getStartDate())) . "###date-end###\n";
        $adminMailString .= '###business-context-start###' . $this->context->getBusinessContextName() . "###business-context-end###\n";

        if($this->customer->getIsMassOrderUser()){

            $customers = $this->customer->getSelectedMassOrderCustomers();
            $customersSize = count($this->customer->getSelectedMassOrderCustomers());

            $adminMailString .= '###is-mass-order-start###1###is-mass-order-end###'."\n";
            $adminMailString .= '###mass-order-size-start###'.$customersSize.'###mass-order-size-end###'."\n";

            $adminMailString .= '###mass-start###'."\n";
            for($i = 0; $i < $customersSize; $i++){

                $adminMailString .= '###mass-customer-' . $i . '-start###'."\n";
                $adminMailString .= '###customer-start###' . $customers[$i]->getCompanyName() . '###customer-end###'."\n";
                $adminMailString .= '###customer-number-start###' . $customers[$i]->getCustomerNumber() . '###customer-number-end###'."\n";
                $adminMailString .= '###contact-person-start###' . $customers[$i]->getContactPerson() . '###contact-person-end###'."\n";
                $adminMailString .= '###mass-customer-' . $i . '-end###'."\n";

            }
            $adminMailString .= '###mass-end###'."\n";

        } else {
            $adminMailString .= '###is-mass-order-start###0###is-mass-order-end###'."\n";
        }

        for($i = 0; $i < $mediaBeansSize; $i++){

            $adminMailString .= '###medium-' . $i . "-start###\n";

            if($mediaBeans[$i] instanceof ConfiguredImageBean){

                $adminMailString .= '###type-start###picture###type-end###' . "\n";
                $adminMailString .= '###name-start###' . $mediaBeans[$i]->getTitle() . "###name-end###\n";
                $adminMailString .= '###url-start###' . $mediaBeans[$i]->getConfiguredImageWebPath() . "###url-end###\n";

            } else if($mediaBeans[$i] instanceof MovieBean){

                $adminMailString .= '###type-start###movie###type-end###' . "\n";
                $adminMailString .= '###name-start###' . $mediaBeans[$i]->getTitle() . "###name-end###\n";
                $adminMailString .= '###url-start###' . $mediaBeans[$i]->getMoviePath() . "###url-end###\n";

            } else {
                throw new Exception('Bean is wether of type ConfiguredImageBean nor MovieBean!');
            }

            $adminMailString .= '###medium-' . $i . "-end###\n";
        }

        $adminMailString .= '###order-end###'."\n";

        return $adminMailString;
    }

Yes, that’s XML, if instead of traditional tags you used ###some-field-start###value###some-field-end###. Note how, in many cases, the tag name itself is dynamic: $adminMailString .= '###medium-' . $i . "-start###\n";

It was bad enough to generate it, but Herbie was glad he wasn’t responsible for parsing it.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Krebs on SecurityHacked Password Service Leakbase Goes Dark

Leakbase, a Web site that indexed and sold access to billions of usernames and passwords stolen in some of the world’s largest data breaches, has closed up shop. A source close to the matter says the service was taken down in a law enforcement sting that may be tied to the Dutch police raid of the Hansa dark web market earlier this year.

Leakbase[dot]pw began selling memberships in September 2016, advertising more than two billion usernames and passwords that were stolen in high-profile breaches at sites like linkedin.com, myspace.com and dropbox.com.

But roughly two weeks ago KrebsOnSecurity began hearing from Leakbase users who were having trouble reaching the normally responsive and helpful support staff responsible for assisting customers with purchases and site issues.

Sometime this weekend, Leakbase began redirecting visitors to haveibeenpwned.com, a legitimate breach alerting service run by security researcher Troy Hunt (Hunt’s site lets visitors check if their email address has shown up in any public database leaks, but it does not store corresponding account passwords).

Leakbase reportedly came under new ownership after its hack in April. According to a source with knowledge of the matter but who asked to remain anonymous, the new owners of Leakbase dabbled in dealing illicit drugs at Hansa, a dark web marketplace that was dismantled in July by authorities in The Netherlands.

The Dutch police had secretly seized Hansa and operated it for a time in order to gather more information about and ultimately arrest many of Hansa’s top drug sellers and buyers. 

According to my source, information the Dutch cops gleaned from their Hansa takeover led authorities to identify and apprehend one of the owners of Leakbase. This information could not be confirmed, and the Dutch police have not yet responded to requests for comment. 

A message posted Dec. 2 to Leakbase’s Twitter account states that the service was being discontinued, and the final message posted to that account seems to offer paying customers some hope of recovering any unused balances stored with the site.

“We understand many of you may have lost some time, so in an effort to offer compensation please email, refund@leakbase.pw Send your LeakBase username and how much time you had left,” the message reads. “We will have a high influx of emails so be patient, this could take a while.”

My source noted that these last two messages are interesting because they are unlike every other update posted to the Leakbase Twitter account. Prior to the shutdown message on Dec. 2, all updates to that account were done via Twitter’s Web client; but the last two were sent via Mobile Web (M2).

Ironically, Leakbase was itself hacked back in April 2017 after a former administrator was found to be re-using a password from an account at x4b[dot]net, a service that Leakbase relied upon at the time to protect itself from distributed denial-of-service (DDoS) attacks intended to knock the site offline.

X4B[dot]net was hacked just days before the Leakbase intrusion, and soon after cleartext passwords and usernames from hundreds of Leakbase users were posted online by the hacker group calling itself the Money Team.

Many readers have questioned how it could be illegal to resell passwords that were leaked online in the wake of major data breaches. The argument here is generally that in most cases this information is already in the public domain and thus it can’t be a crime to index and resell it.

However, many legal experts see things differently. In February 2017, I wrote about clues that tied back to a real-life identity for one of the alleged administrators of Leakedsource, a very similar service (it’s worth noting that the subject of that story also was found out because he re-used the same credentials across multiple sites).

In the Leakedsource story, I interviewed Orin Kerr, director of the Cybersecurity Law Initiative at The George Washington University. Kerr told me that owners of services like Leakbase and Leakedsource could face criminal charges if prosecutors could show these services intended for the passwords that are for sale on the site to be used in the furtherance of a crime.

Kerr said trafficking in passwords is clearly a crime under the Computer Fraud and Abuse Act (CFAA).

Specifically, Section A6 of the CFAA, which makes it a crime to “knowingly and with intent to defraud traffic in any password or similar information through which a computer may be accessed without authorization, if…such trafficking affects interstate or foreign commerce.”

“CFAA quite clearly punishes password trafficking,” Kerr said. “The statute says the [accused] must be trafficking in passwords knowingly and with intent to defraud, or trying to further unauthorized access.”

,

Planet Linux AustraliaDavid Rowe: How Inlets Generate Thrust on Supersonic aircraft

Some time ago I read Skunk Works, a very good “engineering” read.

In the section on the SR-71, the author Ben Rich made a statement that has puzzled me ever since, something like: “Most of the engine’s thrust is developed by the intake”. I didn’t get it – surely an intake is a source of drag rather than thrust? I have since read the same statement about the Concorde and its inlets.

Lately I’ve been watching a lot of AgentJayZ Gas Turbine videos. This guy services gas turbines for a living and is kind enough to present a lot of intricate detail and answer questions from people. I find his presentation style and personality really engaging, and get a buzz out of his enthusiasm, love for his work, and willingness to share all sorts of geeky, intricate details.

So inspired by AgentJayZ I did some furious Googling and finally worked out why supersonic planes develop thrust from their inlets. I don’t feel it’s well explained elsewhere so here is my attempt:

  1. Gas turbine jet engines only work if the air is moving into the compressor at subsonic speeds. So the job of the inlet is to slow the air down from say Mach 2 to Mach 0.5.
  2. When you slow down a stream of air, the pressure increases. Like when you feel the wind pushing on your face on a bike. Imagine (don’t try) the pressure on your arm hanging out of a car window at 100 km/hr. Now imagine the pressure at 3000 km/hr. Lots. Around a 40 times increase for the inlets used in supersonic aircraft.
  3. So now we have this big box (the inlet chamber) full of high pressure air. Like a balloon this pressure is pushing equally on all sides of the box. Net thrust is zero.
  4. If we untie the balloon neck, the air can escape, and the balloon shoots off in the opposite direction.
  5. Back to the inlet on the supersonic aircraft. It has a big vacuum cleaner at the back – the compressor inlet of the gas turbine. It is sucking air out of the inlet as fast as it can. So – the air can get out, just like the balloon, and the inlet and the aircraft attached to it are thrust in the opposite direction. That’s how an inlet generates thrust.
  6. While there is also thrust from the gas turbine and its afterburner, it turns out that pressure release in the inlet contributes the majority of the thrust. I don’t know why it’s the majority. Guess I need to do some more reading and get my gas equations on (a rough sanity check of the pressure numbers follows after this list).
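
As a rough, hedged sanity check of the “around a 40 times increase” figure in point 2, the standard isentropic stagnation-pressure relation, assuming ideal, loss-free compression with γ = 1.4 at roughly Mach 3 (about the SR-71’s cruise speed), gives

\[
\frac{p_0}{p} = \left(1 + \frac{\gamma - 1}{2} M^2\right)^{\gamma/(\gamma - 1)}
             = \left(1 + 0.2 \times 3^2\right)^{3.5} \approx 37
\]

which is in the same ballpark; real inlets recover a little less, because the compression is not perfectly loss-free.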

Another important point – the aircraft really does experience that extra thrust from the inlet – e.g. it’s transmitted to the aircraft by the engine mounts on the inlet, and the mounts must be designed with those loads in mind. This helps me understand the definition of “thrust from the inlet”.

,

Krebs on SecurityFormer NSA Employee Pleads Guilty to Taking Classified Data

A former employee for the National Security Agency pleaded guilty on Friday to taking classified data to his home computer in Maryland. According to published reports, U.S. intelligence officials believe the data was then stolen from his computer by hackers working for the Russian government.

Nghia Hoang Pho, 67, of Ellicott City, Maryland, pleaded guilty today to “willful retention of national defense information.” The U.S. Justice Department says that beginning in April 2006 Pho was employed as a developer for the NSA’s Tailored Access Operations (TAO) unit, which develops specialized hacking tools to gather intelligence data from foreign targets and information systems.

According to Pho’s plea agreement, between 2010 and March 2015 he removed and retained highly sensitive classified “documents and writings that contained national defense information, including information classified as Top Secret.”

Pho is the third NSA worker to be charged in the past two years with mishandling classified data. His plea is the latest — and perhaps final — chapter in the NSA’s hunt for those responsible for leaking NSA hacking tools that have been published online over the past year by a shadowy group calling itself The Shadow Brokers.

Neither the government’s press release about the plea nor the complaint against Pho mentions what happened to the classified documents after he took them home. But a report in The New York Times cites government officials, speaking on condition of anonymity, as saying that Pho had installed on his home computer antivirus software made by the Russian security firm Kaspersky Lab, and that Russian hackers are believed to have exploited the software to steal the classified documents.

On October 5, 2017, The Wall Street Journal reported that Russian government hackers had lifted the hacking tools from an unnamed NSA contractor who’d taken them and examined them on his home computer, which happened to have Kaspersky Antivirus installed.

On October 10, The Times reported that Israeli intelligence officers looked on in real time as Russian government hackers used Kaspersky’s antivirus network as a kind of improvised search tool to scour computers around the world for the code names of American intelligence programs.

For its part, Kaspersky has said its software detected the NSA hacking tools on a customer’s computer and sent the files to the company’s anti-malware network for analysis. In a lengthy investigation report published last month, Kaspersky said it found no evidence that the files left its network, and that the company deleted the files from its system after learning what they were.

Kaspersky also noted that the computer from which the files were taken was most likely already compromised by “unknown threat actors.” It based that conclusion on evidence indicating the user of that system installed a backdoored Microsoft Office 2013 license activation tool, and that in order to run the tool the user must have disabled his antivirus protection.

The U.S. Department of Homeland Security (DHS) issued a binding directive in September ordering all federal agencies to cease using Kaspersky software by Dec. 12.

Pho faces up to 10 years in prison. He is scheduled to be sentenced April 6, 2018.

A note to readers: This author published a story earlier in the week that examined information in the metadata of Microsoft Office documents stolen from the NSA by The Shadow Brokers and leaked online. That story identified several individuals whose names were in the metadata from those documents. After the guilty plea entered this week and described above, KrebsOnSecurity has unpublished that earlier story.

Don MartiPurple box claims another victim

Linux Journal Ceases Publication. If you can stand it, let's have a look at the final damage.

LJ

40 trackers. Not bad, but not especially good either. That purple box of data leakage—third-party trackers that forced Linux Journal into an advertising race to the bottom against low-value and fraud sites—is not so deep as a well, nor so wide as a church door...but it's there. A magazine that was a going concern in print tried to make the move to the web and didn't survive.

Linux Journal is where I was working when I first started wondering why print ads tend to hold their value while web ads keep losing value. Unfortunately it's not enough for sites to just stop running trackers and make the purple box go away. But there are a few practical steps that Internet freedom lovers can take to stop the purple box from taking out your other favorite sites.

Krebs on SecurityCarding Kingpin Sentenced Again. Yahoo Hacker Pleads Guilty

Roman Seleznev, a Russian man who is already serving a record 27-year sentence in the United States for cybercrime charges, was handed a 14-year sentence this week by a federal judge in Atlanta for his role in a credit card and identity theft conspiracy that prosecutors say netted more than $50 million. Separately, a Canadian national has pleaded guilty to charges of helping to steal more than a billion user account credentials from Yahoo.

Seleznev, 33, was given the 14-year sentence in connection with two prosecutions that were consolidated in Georgia: The 2008 heist against Atlanta-based credit card processor RBS Worldpay; and a case out of Nevada where he was charged as a leading merchant of stolen credit cards at carder[dot]su, at one time perhaps the most bustling fraud forum where members openly marketed a variety of cybercrime-oriented services.

Roman Seleznev, pictured with bundles of cash. Image: US DOJ.

Seleznev’s conviction comes more than a year after he was convicted in a Seattle court on 38 counts of cybercrime charges, including wire fraud and aggravated identity theft. The Seattle conviction earned Seleznev a 27-year prison sentence — the most jail time ever given to an individual convicted of cybercrime charges in the United States.

This latest sentence will be served concurrently — meaning it will not add any time to his 27-year sentence. But it’s worth noting because Seleznev is appealing the Seattle verdict. In the event he prevails in Seattle and gets that conviction overturned, he will still serve out his 14-year sentence in the Georgia case because he pleaded guilty to those charges and waived his right to an appeal.

Prosecutors say Seleznev, known in the underworld by his hacker nicknames “nCux” and “Bulba,” enjoyed an extravagant lifestyle prior to his arrest, driving expensive sports cars and dropping tens of thousands of dollars at lavish island vacation spots. The son of an influential Russian politician, Seleznev made international headlines in 2014 after he was captured while vacationing in The Maldives, a popular destination for Russians and one that many Russian cybercriminals previously considered to be out of reach for western law enforcement agencies.

However, U.S. authorities were able to negotiate a secret deal with the Maldivian government to apprehend Seleznev. Following his capture, Seleznev was whisked away to Guam for more than a month before being transported to Washington state to stand trial for computer hacking charges.

The U.S. Justice Department says the laptop found with him when he was arrested contained more than 1.7 million stolen credit card numbers, and that evidence presented at trial showed that Seleznev earned tens of millions of dollars defrauding more than 3,400 financial institutions.

Investigators also reportedly found a smoking gun: a password cheat sheet that linked Seleznev to a decade’s worth of criminal hacking. For more on Seleznev’s arrest and prosecution, see The Backstory Behind Carder Kingpin Roman Seleznev’s Record 27-Year Sentence, and Feds Charge Carding Kingpin in Retail Hacks.

In an unrelated case, federal prosecutors in California announced a guilty plea from Karim Baratov, one of four men indicted in March 2017 for hacking into Yahoo beginning in 2014. Yahoo initially said the intrusion exposed the usernames, passwords and account data for roughly 500 million Yahoo users, but in December 2016 Yahoo said the actual number of victims was closer to one billion (read: all of its users). 

Baratov, 22, is a Canadian and Kazakh national who lived in Canada (he’s now being held in California). He was charged with being hired by two Russian FSB officer defendants in this case — Dmitry Dokuchaev, 33, and Igor Sushchin, 43 — to hack into the email accounts of thousands of individuals. According to prosecutors, Baratov’s role in the charged conspiracy was to hack webmail accounts of individuals of interest to the FSB and send those accounts’ passwords to Dokuchaev in exchange for money.

Karim Baratov, a.k.a. “Karim Taloverov,” as pictured in 2014 on his own site, mr-karim.com.

Baratov operated several businesses, advertised openly online, that could be hired to hack into email accounts at the world’s largest email providers, including Google, Yahoo and Yandex. As part of his plea agreement, Baratov not only admitted to agreeing and attempting to hack at least 80 webmail accounts on behalf of one of his FSB co-conspirators, but also to hacking more than 11,000 webmail accounts in total from in or around 2010 until his arrest by Canadian authorities.

Shortly after Baratov’s arrest and indictment, KrebsOnSecurity examined many of the email hacking services he operated and that were quite clearly tied to his name. One such business advertised the ability to steal email account passwords without actually changing the victim’s password. According to prosecutors, Baratov’s service relied on “spear phishing” emails that targeted individuals with custom content and enticed recipients to click a booby-trapped link.

For example, one popular email hacking business registered to Baratov was xssmail[dot]com, which for several years advertised the ability to break into email accounts of virtually all of the major Webmail providers. XSS is short for “cross-site-scripting.” XSS attacks rely on vulnerabilities in Web sites that don’t properly parse data submitted by visitors in things like search forms or anyplace one might enter data on a Web site.

Archive.org’s cache of xssmail.com

In the context of phishing links, the user clicks the link and is actually taken to the domain they think they are visiting (e.g., yahoo.com), but the vulnerability allows the attacker to inject malicious code into the page that the victim is visiting.

This can include fake login prompts that send any data the victim submits directly to the attacker. Alternatively, it could allow the attacker to steal “cookies,” text files that many sites place on visitors’ computers to validate whether they have visited the site previously, as well as if they have authenticated to the site already.

Baratov pleaded guilty to nine counts, including one count of aggravated identity theft and eight violations of the Computer Fraud and Abuse Act. His sentencing hearing is scheduled for Feb. 20, 2018. The aggravated identity theft charge carries a mandatory two-year sentence; each of the other counts is punishable by up to 10 years in jail and fines of $250,000, although any sentence he receives will likely be heavily tempered by U.S. federal sentencing guidelines.

Meanwhile, Baratov’s co-defendant Dokuchaev is embroiled in his own legal worries in Russia, charges that could carry a death sentence. He and his former boss Sergei Mikhailov — once deputy chief of the FSB’s Center for Information Security — were arrested in December 2016 by Russian authorities and charged with treason. Also charged with treason in connection with that case was Ruslan Stoyanov, a senior employee at Russian security firm Kaspersky Lab.

There are many competing theories for the reasons behind their treason charges, some of which are explored in this Washington Post story. I have my own theory, detailed in my January 2017 piece, A Shakeup in Russia’s Top Cybercrime Unit.

,

CryptogramFriday Squid Blogging: Research into Squid-Eating Beaked Whales

Beaked whales, living off the coasts of Ireland, feed on squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBrand-new TED Talks from TEDWomen 2017: A note from the curator

This year’s TEDWomen in New Orleans was a truly special conference, at a vital moment, and I’m sure the ripples will be felt for a long time to come. The theme this year was bridges: we build them, we cross them, sometimes we even burn them. Our speakers talked about the physical bridges we need for access and connection as well as the metaphoric ones we need to bridge the differences that increasingly divide us.

Along with the inspiring TED Talks and often game-changing ideas that were shared on the TEDWomen stage, my biggest take-away from this year’s conference was once again the importance of community and the opportunity this conference offers for women and a few good men from different countries, cultures, religions, backgrounds, from so many different sectors of work and experience, to come together to listen, to learn, to connect with each other, to build their own bridges.

Take a look at all the presentations with our detailed speaker-by-speaker coverage on the TED Blog. Between sessions, we hosted four great Facebook Live conversations in the Blue Room, diving deeper into ideas from talks with WNYC’s Manoush Zomorodi. Catch up on them right here.

And we’re starting to post TED Talks from our event to share freely with the world. First up: Gretchen Carlson, whose timely talk about sexual harassment is relevant and resonant for so many women and men at this #MeToo moment. It’s already been viewed by over 800,000 people!

Gretchen calls on women who have experienced sexual harassment to “Be Fierce!” (also the title of her recent book). Luvvie Ajayi, in another TEDWomen Talk being released today, encourages not just women, but all of us to be courageous and to Speak Up when we have something to say, even if it makes others uncomfortable — especially if it makes the speaker uncomfortable. “I want us to leave this world better than we found it,” she told the audience in her hopeful and uplifting talk, “And how I choose to effect change is by speaking up, by being the first and by being the domino.”

And don’t miss Teresa Njoroge’s powerful talk on women in prison. At Clean Start Kenya, Njoroge builds bridges connecting the formerly imprisoned to the outside world and vice versa.

And one of the highlights of the conference for me, my conversation with Leah Chase, the Queen of Creole Cuisine. Chase’s New Orleans restaurant Dooky Chase changed the course of American history over gumbo and fried chicken. During the civil rights movement, it was a place where white and black people came together, where activists planned protests and where the police entered but did not disturb — and it continues to operate in the same spirit today. In our talk, she shares her wisdom from a lifetime of activism, speaking up and cooking.

Follow me on Facebook and Twitter for updates as we publish more TEDWomen 2017 videos in coming weeks and months. And please share your thoughts with me here in the comments about TEDWomen, these videos and ideas you have for speakers at TEDWomen 2018. We’re always looking for great ideas!

Gratefully,
— Pat


CryptogramNeedless Panic Over a Wi-FI Network Name

A Turkish Airlines flight made an emergency landing because someone named his wireless network (presumably from his smartphone) "bomb on board."

In 2006, I wrote an essay titled "Refuse to be Terrorized." (I am also reminded of my 2007 essay, "The War on the Unexpected.") A decade later, it seems that the frequency of incidents like the one above is less, although not zero. Progress, I suppose.

Worse Than FailureError'd: Get Inspired

"The great words of inspirationalAuthor.firstName inspirationalAuthor.lastName move me every time," wrote Geoff O.

 

Mark R. writes, "Some countries out there must have some crazy postal codes."

 

"I certainly hope the irony isn't lost on the person who absolutely failed sending out a boiler plate email on the subject of machine learning!" Mike C. wrote.

 

Adrian R. writes, "I'd love to postpone this update, but I feel like I'm playing button roulette."

 

"It's pretty cool that Magneto asks to do a backup before an upgrade," Andrea wrote, "It's only 14TB."

 

Ky W. writes, "I sure hope that these developers didn't write the avionics software."

 

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!

CryptogramWarrant Protections against Police Searches of Our Data

The cell phones we carry with us constantly are the most perfect surveillance device ever invented, and our laws haven't caught up to that reality. That might change soon.

This week, the Supreme Court will hear a case with profound implications on your security and privacy in the coming years. The Fourth Amendment's prohibition of unlawful search and seizure is a vital right that protects us all from police overreach, and the way the courts interpret it is increasingly nonsensical in our computerized and networked world. The Supreme Court can either update current law to reflect the world, or it can further solidify an unnecessary and dangerous police power.

The case centers on cell phone location data and whether the police need a warrant to get it, or if they can use a simple subpoena, which is easier to obtain. Current Fourth Amendment doctrine holds that you lose all privacy protections over any data you willingly share with a third party. Your cellular provider, under this interpretation, is a third party with whom you've willingly shared your movements, 24 hours a day, going back months -- even though you don't really have any choice about whether to share with them. So police can request records of where you've been from cell carriers without any judicial oversight. The case before the court, Carpenter v. United States, could change that.

Traditionally, information that was most precious to us was physically close to us. It was on our bodies, in our homes and offices, in our cars. Because of that, the courts gave that information extra protections. Information that we stored far away from us, or gave to other people, afforded fewer protections. Police searches have been governed by the "third-party doctrine," which explicitly says that information we share with others is not considered private.

The Internet has turned that thinking upside-down. Our cell phones know who we talk to and, if we're talking via text or e-mail, what we say. They track our location constantly, so they know where we live and work. Because they're the first and last thing we check every day, they know when we go to sleep and when we wake up. Because everyone has one, they know whom we sleep with. And because of how those phones work, all that information is naturally shared with third parties.

More generally, all our data is literally stored on computers belonging to other people. It's our e-mail, text messages, photos, Google docs, and more -- all in the cloud. We store it there not because it's unimportant, but precisely because it is important. And as the Internet of Things computerizes the rest of our lives, even more data will be collected by other people: data from our health trackers and medical devices, data from our home sensors and appliances, data from Internet-connected "listeners" like Alexa, Siri, and your voice-activated television.

All this data will be collected and saved by third parties, sometimes for years. The result is a detailed dossier of your activities more complete than any private investigator -- or police officer -- could possibly collect by following you around.

The issue here is not whether the police should be allowed to use that data to help solve crimes. Of course they should. The issue is whether that information should be protected by the warrant process that requires the police to have probable cause to investigate you and get approval by a court.

Warrants are a security mechanism. They prevent the police from abusing their authority to investigate someone they have no reason to suspect of a crime. They prevent the police from going on "fishing expeditions." They protect our rights and liberties, even as we willingly give up our privacy to the legitimate needs of law enforcement.

The third-party doctrine never made a lot of sense. Just because I share an intimate secret with my spouse, friend, or doctor doesn't mean that I no longer consider it private. It makes even less sense in today's hyper-connected world. It's long past time the Supreme Court recognized that a months-long history of my movements is private, and my e-mails and other personal data deserve the same protections, whether they're on my laptop or on Google's servers.

This essay previously appeared in the Washington Post.

Details on the case. Two opinion pieces.

I signed on to two amicus briefs on the case.

EDITED TO ADD (12/1): Good commentary on the Supreme Court oral arguments.

Planet Linux AustraliaPia Waugh: My Canadian adventure exploring FWD50

I recently went to Ottawa for the FWD50 conference run by Rebecca and Alistair Croll. It was my first time in Canada, and it combined a number of my favourite things. I was at an incredible conference with a visionary and enthusiastic crowd, made up of government (international, Federal, Provincial and Municipal), technologists, civil society, industry, academia, and the calibre of discussions and planning for greatness was inspiring.

There was a number of people I have known for years but never met in meatspace, and equally there were a lot of new faces doing amazing things. I got to spend time with the excellent people at the Treasury Board of Canadian Secretariat, including the Canadian Digital Service and the Office of the CIO, and by wonderful coincidence I got to see (briefly) the folk from the Open Government Partnership who happened to be in town. Finally I got to visit the gorgeous Canadian Parliament, see their extraordinary library, and wander past some Parliamentary activity which always helps me feel more connected to (and therefore empowered to contribute to) democracy in action.

Thank you to Alistair Croll who invited me to keynote this excellent event and who, with Rebecca Croll, managed to create a truly excellent event with a diverse range of ideas and voices exploring where we could or should go as a society in future. I hope it is a catalyst for great things to come in Canada and beyond.

For those in Canada who are interested in the work in New Zealand, I strongly encourage you to tune into the D5 event in February which will have some of our best initiatives on display, and to tune in to our new Minister for Broadband, Digital and Open Government (such an incredible combination in a single portfolio), Minister Clare Curran and you can tune in to our “Service Innovation” work at our blog or by subscribing to our mailing list. I also encourage you to read this inspiring “People’s Agenda” by a civil society organisation in NZ which codesigned a vision for the future type of society desired in New Zealand.

Highlights

  • One of the great delights of this trip was seeing a number of people in person for the first time who I know from the early “Gov 2.0” days (10 years ago!). It was particularly great to see Thom Kearney from Canada’s TBS and his team, Alex Howard (@digiphile) who is now a thought leader at the Sunlight Foundation, and Olivia Neal (@livneal) from the UK CTO office/GDS, Joe Powell from OGP, as well as a few friends from Linux and Open Source (Matt and Danielle amongst others).
  • The speech by Canadian Minister of the Treasury Board Secretariat (which is responsible for digital government) the Hon Scott Brison, was quite interesting and I had the chance to briefly chat to him and his advisor at the speakers drinks afterwards about the challenges of changing government.
  • Meeting with Canadian public servants from a variety of departments including the transport department, innovation and science, as well as the Treasury Board Secretariat and of course the newly formed Canadian Digital Service.
  • Meeting people from a range of sub-national governments including the excellent folk from Peel, Hillary Hartley from Ontario, and hearing about the quite inspiring work to transform organisational structures, digital and other services, adoption of micro service based infrastructure, the use of “labs” for experimentation.
  • It was fun meeting some CIO/CTOs from Canada, Estonia, UK and other jurisdictions, and sharing ideas about where to from here. I was particularly impressed with Alex Benay (Canadian CIO) who is doing great things, and with Siim Sikkut (Estonian CIO) who was taking the digitisation of Estonia into a new stage of being a broader enabler for Estonians and for the world. I shared with them some of my personal lessons learned around digital iteration vs transformation, including from the DTO in Australia (which has changed substantially, including a name change since I was there). Some notes of my lessons learned are at http://pipka.org/2017/04/03/iteration-or-transformation-in-government-paint-jobs-and-engines/.
  • My final highlight was how well my keynote and other talks were taken. People were really inspired to think big picture and I hope it was useful in driving some of those conversations about where we want to collectively go and how we can better collaborate across geopolitical lines.

Below are some photos from the trip, and some observations from specific events/meetings.

My FWD50 Keynote – the Tipping Point

I was invited to give a keynote at FWD50 about the tipping point we have gone through and how we, as a species, need to embrace the major paradigm shifts that have already happened, and decide what sort of future we want and work towards that. I also suggested some predictions about the future and examined the potential roles of governments (and public sectors specifically) in the 21st century. The slides are at https://docs.google.com/presentation/d/1coe4Sl0vVA-gBHQsByrh2awZLa0Nsm6gYEqHn9ppezA/edit?usp=sharing and the full speech is on my personal blog at http://pipka.org/2017/11/08/fwd50-keynote-the-tipping-point.

I also gave a similar keynote speech at the NerHui conference in New Zealand the week after which was recorded for those who want to see or hear the content at https://2017.nethui.nz/friday-livestream

The Canadian Digital Service

Was only set up about a year ago and has a focus on building great services for users, with service design and user needs at the heart of their work. They have some excellent people with diverse skills and we spoke about what is needed to do “digital government” and what that even means, and the parallels and interdependencies between open government and digital government. They spoke about an early piece of work they did before getting set up to do a national consultation about the needs of Canadians (https://digital.canada.ca/beginning-the-conversation/) which had some interesting insights. They were very focused on open source, standards, building better ways to collaborate across government(s), and building useful things. They also spoke about their initial work around capability assessment and development across the public sector. I spoke about my experience in Australia and New Zealand, but also in working and talking to teams around the world. I gave an informal outline about the work of our Service Innovation and Service Integration team in DIA, which was helpful to get some feedback and peer review, and they were very supportive and positive. It was an excellent discussion, thank you all!

CivicTech meetup

I was invited to talk to the CivicTech group meetup in Ottawa (https://www.meetup.com/YOW_CT/events/243891738/) about the roles of government and citizens into the future. I gave a quick version of the keynote I gave at linux.conf.au 2017 (pipka.org/2017/02/18/choose-your-own-adventure-keynote/), which explores paradigm shifts and the roles of civic hackers and activists in helping forge the future whilst also considering what we should (and shouldn’t) take into the future with us. It included my amusing change.log of the history of humans and threw down the gauntlet for civic hackers to lead the way, be the light :)

CDS Halloween Mixer

The Canadian Digital Service does a “mixer” social event every 6 weeks, and this one landed on Halloween, which was also my first ever Halloween celebration. I had a traditional “beavertail”, which was a flat cinnamon doughnut with lemon; amazing! Was fun to hang out, but of course I had to retire early from jet lag.

Workshop with Alistair

The first day of FWD50 I helped Alistair Croll with a day long workshop exploring the future. We thought we’d have a small interactive group and ended up getting 300, so it was a great mind meld across different ideas, sectors, technologies, challenges and opportunities. I gave a talk on culture change in government, largely influenced by a talk a few years ago called “Collaborative innovation in the public service: Game of Thrones style” (http://pipka.org/2015/01/04/collaborative-innovation-in-the-public-service-game-of-thrones-style/). People responded well and it created a lot of discussions about the cultural challenges and barriers in government.

Thanks

Finally, just a quick shout out and thanks to Alistair for inviting me to such an amazing conference, to Rebecca for getting me organised, to Danielle and Matthew for your companionship and support, to everyone for making me feel so welcome, and to the following folk who inspired, amazed and colluded with me. In chronological order of meeting: Sean Boots, Stéphane Tourangeau, Ryan Androsoff, Mike Williamson, Lena Trudeau, Alex Benay (Canadian Gov CIO), Thom Kearney and all the TBS folk, Siim Sikkut from Estonia, James Steward from the UK, and all the other folk I met at FWD50, in between feeling so extremely unwell!

Thank you Canada, I had a magnificent time and am feeling inspired!

Planet Linux Australia: OpenSTEM: This Week in HASS – term 4, week 9

Well, we’re almost at the end of the year!! It’s a time when students and teachers alike start to look forward to the long, summer break. Generally a time for celebrations and looking back over the highlights of the year – which is reflected in the activities for the final lessons of the Understanding Our […]

,

LongNow: Music, Time and Long-Term Thinking: Brian Eno Expands the Vocabulary of Human Feeling

Brian Eno’s creative activities defy categorization. Widely known as a musician and producer, Eno has expanded the frontiers of audio and visual art for decades, and posited new ways of approaching creativity in general. He is a thinker and speaker, activist and eccentric. He formulated the idea of the Big Here and Long Now—a central conceptual underpinning of The Long Now Foundation, which he helped found with Stewart Brand and Danny Hillis in 01996. Eno’s artistic career has often dealt closely with concepts of time, scale, and, as he puts it in the liner notes to Apollo, “expanding the vocabulary of human feeling.”

Ambient and Generative Art

Brian Eno coined the term ‘ambient music’ to describe a kind of music meant to influence an ambience without necessarily demanding the listener’s full attention. The notes accompanying his 01978 album Ambient 1: Music for Airports differentiate it from the commercial music produced specifically for background listening by companies such as Muzak, Inc. in the mid-01900s. Eno explains that ambient music should enhance — not blanket — an environment’s acoustic and atmospheric characteristics, to calming and thought-inducing effect. It has to accommodate various levels of listening engagement, and therefore “must be as ignorable as it is interesting” (Eno 296).

Ambient music can have a timeless quality to it. The absence of a traditional structure of musical development withholds a clear beginning or end or middle, tapping into a sense of deeper, slower processes. It lets you “settle into time a little bit,” as Eno said in the first of Long Now’s SALT talks. As Time magazine writes, “the theme of time, foreshortened or elongated, is a defining feature of Eno’s musical and visual adventures. But it takes a long lens, pointing back, to bring into focus the ways in which his influence has seeped into the mainstream.”

Eno’s use of the term ‘ambient’ was, however, a product of a long process of musical development. He had been thinking specifically about this kind of music for several years already, and the influence of minimalist artists such as Terry Riley, Steve Reich and Philip Glass had long shaped his musical ideas and techniques. He also drew on many other genres, including Krautrock bands such as Tangerine Dream and Can, whose music was contemporaneous and influential in Eno’s early collaborations with Robert Fripp, e.g. (No Pussyfooting). While their music might not necessarily fall into the genre ‘ambient,’ David Sheppard notes that “Eno and Fripp’s lengthy essays shared with Krautrock a disavowal of verse/chorus orthodoxy and instead relied on an essentially static musical core with only gradual internal harmonic developments” (142). In his autobiography, Eno also points to developments in audio technology as key in the development of the genre, as well as one particularly insightful experience he had while bedridden after an accident:

New sound-shaping and space-making devices appeared on the market weekly (and still do), synthesizers made their clumsy but crucial debut, and people like me just sat at home night after night fiddling around with all this stuff, amazed at what was now possible, immersed in the new sonic worlds we could create.

And immersion was really the point: we were making music to swim in, to float in, to get lost inside.

This became clear to me when I was confined to bed, immobilized by an accident in early 01975. My friend Judy Nylon had visited, and brought with her a record of 17th-century harp music. I asked her to put it on as she left, which she did, but it wasn’t until she’d gone that I realized that the hi-fi was much too quiet and one of the speakers had given up anyway. It was raining hard outside, and I could hardly hear the music above the rain — just the loudest notes, like little crystals, sonic icebergs rising out of the storm. I couldn’t get up and change it, so I just lay there waiting for my next visitor to come and sort it out, and gradually I was seduced by this listening experience. I realized that this was what I wanted music to be — a place, a feeling, an all-around tint to my sonic environment.

It was not long after this realization that Eno released the album Discreet Music, which he considers to be an ambient work, mentioning a conceptual likeness to Erik Satie’s Furniture Music. One of the premises behind its creation was that it would be background for Robert Fripp to play over in concerts, and the title track is about half an hour long — as much time as was available to Eno on one side of a record.

It is also an early example in his discography of what later became another genre closely associated with Eno and with ambient: generative music. In the liner notes — which include the story of the broken speaker epiphany — he writes:

Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part.

That is to say, I tend towards the roles of planner and programmer, and then become an audience to the results.

This notion of creating a system that generates an output is an idea that artists had considered previously. In fact, in the 18th century even Mozart and others experimented with a ‘musical dice game’ in which the numerical results of rolling dice ‘generated’ a song. More relevant to Brian Eno’s use of generative systems, however, was the influence of 20th century composers such as John Cage. David Sheppard’s biography of Brian Eno describes how Tom Phillips — a teacher at Ipswich School of Art where Eno studied painting in the mid 01960s — introduced him to the musical avant garde scene with the works of Cage, Cornelius Cardew, and the previously mentioned minimalists Reich, Glass and Riley (Sheppard 35–41). These and other artists exposed Eno to ideas such as aleatory and minimalist music, tape experimentation, and performance or process-based musical concepts.

Eno notes Steve Reich’s influence on his generative music, acknowledging that “indeed a lot of my interest was directly inspired by Steve Reich’s sixties tape pieces such as Come Out and It’s Gonna Rain” (Eno 332). And looking back on a 01970 performance by the Philip Glass Ensemble at the Royal College of Art, Brian Eno highlights its impact on him:

This was one of the most extraordinary musical experiences of my life — sound made completely physical and as dense as concrete by sheer volume and repetition. For me it was like a viscous bath of pure, thick energy. Though he was at that time described as a minimalist, this was actually one of the most detailed musics I’d ever heard. It was all intricacy and exotic harmonics. (Sheppard 63–64)

The relationship between minimalism and intricacy, in a sense, is what underlies the concept of generative music. The artist designs a system with inputs which, when compared to the plethora of outputs, appear quite simple. Steve Reich’s It’s Gonna Rain is, in fact, simply a single 1.8 second recording of a preacher shouting “It’s gonna rain!” played simultaneously on two tape recorders. Due to the inconsistencies in the two devices’ hardware, however, the recordings play at slightly different speeds, producing over 17 minutes of phasing in which the relationship between the two recordings constantly changes.

Brian Eno has taken this capacity for generative music to create complexity out of simplicity much further. Discreet Music (01975) used a similar approach, but started with recordings of different lengths, used an echo system, and altered timbre over time. The sonic possibilities opened by adding just a few more variables are vast.

This experimental approach to creativity is just one of many that Eno explored, including some non-musical means of prompting unexpected outputs. The same year that Discreet Music was released, he collaborated with painter Peter Schmidt to produce Oblique Strategies: Over One Hundred Worthwhile Dilemmas.

The work is a set of cards, each one with an aphorism designed to help people think differently or to approach a problem from a different angle. These include phrases such as “Honour thy error as a hidden intention,” “Work at a different speed,” and “Use an old idea.” Schmidt had created something a few years earlier along the same lines that he called ‘Thoughts Behind the Thoughts.’ There was also inspiration to be drawn from John Cage’s use of the I Ching to direct his musical compositions and George Brecht’s 01963 Water Yam Box. Like a generative system, the Oblique Strategies provides a guiding rule or principle that is specific enough to focus creativity but general enough to yield an unknown outcome, dependent on a multitude of variables interacting within the framework of the strategy.

Three decades later, generative systems remained a central inspiration for Eno and a source of interesting cross-disciplinary collaboration. In 02006, he discussed them with Will Wright, creator of the popular video game series The Sims, at a Long Now SALT talk:

Wright observed that science is all about compressing reality to minimal rule sets, but generative creation goes the opposite direction. You look for a combination of the fewest rules that can generate a whole complex world that will always surprise you, yet within a framework that stays recognizable. “It’s not engineering and design,” he said, “so much as it is gardening. You plant seeds. Richard Dawkins says that a willow seed has only about 800K of data in it.” — Stewart Brand

Brian Eno has always been interested in this explosion of possibilities, and has in recent years created generative art that incorporates both audio and visuals. He notes that his work 77 Million Paintings would take about 10,000 years to run through all of its possibilities — at its slowest setting. Long Now produced the North American premiere of 77 Million Paintings at Yerba Buena Center for the Arts in 02007, and members were treated to a surprise visit from Mr. Eno, who spoke about his work and Long Now.

Eno also designed an art installation for The Interval, Long Now’s cafe-bar-museum venue in San Francisco. “Ambient Painting #1” is the only example of Brian’s generative light work in America, and the only ambient painting of his that is currently on permanent public display anywhere.

Ambient Painting #1, by Brian Eno. Photo by Gary Wilson.

Another generative work called Bloom, created with Peter Chilvers, is available as an app.

Part instrument, part composition and part artwork, Bloom’s innovative controls allow anyone to create elaborate patterns and unique melodies by simply tapping the screen. A generative music player takes over when Bloom is left idle, creating an infinite selection of compositions and their accompanying visualisations. — Generativemusic.com

Eno’s interest in time and scale (among other things) was shared by Long Now co-founder Stewart Brand, and they were in close correspondence in the years leading up to the creation of The Long Now Foundation. Eno’s 01995 diary, published in part in his autobiography, describes that correspondence in its introduction:

My conversation with Stewart Brand is primarily a written one — in the form of e-mail that I routinely save, and which in 1995 alone came to about 100,000 words. Often I discuss things with him in much greater detail than I would write about them for my own benefit in the diary, and occasionally I’ve excerpted from that correspondence. — Eno, ix

Out of Eno’s involvement with the establishment of The Long Now Foundation emerged his essay “The Big Here and Long Now”, which describes his experiences with small-scale perspectives and the need for larger ones, as well as the artist’s role in social change.

This imaginative process can be seeded and nurtured by artists and designers, for, since the beginning of the 20th century, artists have been moving away from an idea of art as something finished, perfect, definitive and unchanging towards a view of artworks as processes or the seeds for processes — things that exist and change in time, things that are never finished. Sometimes this is quite explicit — as in Walter de Maria’s “Lightning Field,” a huge grid of metal poles designed to attract lightning. Many musical compositions don’t have one form, but change unrepeatingly over time — many of my own pieces and Jem Finer’s Artangel installation “LongPlayer” are like this. Artworks in general are increasingly regarded as seeds — seeds for processes that need a viewer’s (or a whole culture’s) active mind in which to develop. Increasingly working with time, culture-makers see themselves as people who start things, not finish them.

And what is possible in art becomes thinkable in life. We become our new selves first in simulacrum, through style and fashion and art, our deliberate immersions in virtual worlds. Through them we sense what it would be like to be another kind of person with other kinds of values. We rehearse new feelings and sensitivities. We imagine other ways of thinking about our world and its future.

[…] In this, the 21st century, we may need icons more than ever before. Our conversation about time and the future must necessarily be global, so it needs to be inspired and consolidated by images that can transcend language and geography. As artists and culture-makers begin making time, change and continuity their subject-matter, they will legitimise and make emotionally attractive a new and important conversation.

The Chime Generator and January 07003

Brian Eno’s involvement with Long Now began through his discussions with Stewart Brand about time and long-term thinking, and the need for a carefully crafted sonic experience to help The Clock evoke deep time for its visitors posed a challenge Eno was uniquely suited to take on.

From its earliest conception, the imagined visit to the 10,000-Year Clock has had aural experience at its core. One of Danny Hillis’ earliest refrains about The Clock evokes this:

It ticks once a year, bongs once a century, and the cuckoo comes out every millennium. —Danny Hillis

In the years of brainstorming and design that have molded this vision into a tangible object, a much more detailed and complicated picture has come into focus, but sound has remained central; one of the largest components of the 10,000-Year Clock will be its Chime Generator.

Rather than a bong per century, visitors to the Clock will have the opportunity to hear it chime 10 bells in a unique sequence each day at noon. The story of how this came to be is told by Mr. Eno himself in the liner notes of January 07003: Bell Studies for The Clock of the Long Now, a collection of musical experiments he synthesized and recorded in 02003:

When we started thinking about The Clock of the Long Now, we naturally wondered what kind of sound it could make to announce the passage of time. Bells have stood the test of time in their relationship to clocks, and the technology of making them is highly evolved and still evolving. I began reading about bells, discovering the physics of their sounds, and became interested in thinking about what other sorts of bells might exist. My speculations quickly took me out of the bounds of current physical and material possibilities, but I considered some license allowable since the project was conceived in a time scale of thousands of years, and I might therefore imagine bells with quite different physical properties from those we now know (Eno 3).

Bells have a long history of marking time, so their inclusion in The Clock is a natural fit. Throughout this long history, they’ve also commonly been used in churches, meditation halls and yoga studios to offer a resonant ambiance in which to contemplate a connection to something bigger, much as The Clock’s vibrations will help inspire an awareness of one’s place in deep time. Furthermore, bells were central to some early forms of generative music. While learning about their history, Eno found a vast literature on the ways bells had been used in Britain to explore the combinatorial possibilities afforded by following a few simple rules:

Stated briefly, change-ringing is the art (or, to many practitioners, the science) of ringing a given number of bells such that all possible sequences are used without any being repeated. The mathematics of this idea are fairly simple: n bells will yield n! sequences or changes. The ! is not an expression of surprise but the sign for a factorial: a direction to multiply the number by all those lower than it. So 3 bells will yield 3 x 2 x 1 = 6 changes, while 4 bells will yield 4 x 3 x 2 x 1 = 24 changes. The ! process does become rather surprising as you continue it for higher values of n: 5! = 120, and 6! = 720 — and you watch the number of changes increasing dramatically with the number of bells. — Eno 4

Eno noticed that 10 bells in this context will provide 3,628,800 sequences. Ring one of those each day and you’ll be occupied for almost exactly 10,000 years, the proposed lifespan of The Clock.
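
To make the arithmetic concrete, here is a minimal sketch in Java of how a day number could be mapped to one of the 10! = 3,628,800 bell sequences using the factorial number system (the Lehmer code). It is purely illustrative and is not Danny Hillis's actual algorithm, which is described below.

import java.util.ArrayList;
import java.util.List;

public class ChimeSequenceSketch {
    // Decode a day index (0 .. n!-1) into a unique ordering of n bells using
    // the factorial number system: day 0 gives the natural order, and every
    // later day selects the next unused permutation.
    static List<Integer> sequenceForDay(long day, int bells) {
        List<Integer> remaining = new ArrayList<>();
        for (int b = 1; b <= bells; b++) remaining.add(b);

        List<Integer> sequence = new ArrayList<>();
        for (int i = bells; i >= 1; i--) {
            long f = factorial(i - 1);
            int index = (int) (day / f);   // which of the remaining bells rings next
            day %= f;
            sequence.add(remaining.remove(index));
        }
        return sequence;
    }

    static long factorial(int n) {
        long result = 1;
        for (int i = 2; i <= n; i++) result *= i;
        return result;
    }

    public static void main(String[] args) {
        System.out.println("10 bells give " + factorial(10) + " sequences"); // 3,628,800
        System.out.println("First day:  " + sequenceForDay(0, 10));
        System.out.println("Second day: " + sequenceForDay(1, 10));
        System.out.println("Last day:   " + sequenceForDay(factorial(10) - 1, 10));
    }
}

At one sequence per day, 3,628,800 days works out to roughly 10,000 years of daily chimes, which is why ten bells suit the Clock's design horizon so neatly.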

Following this line of thinking, he imagined using the patterns played by the bells as a method of encoding the amount of time that had elapsed since The Clock had started ringing them. Writing in 02003, he says:

I wanted to hear the bells of the month of January, 07003 — approximately halfway through the life of the Clock.

I had no idea how to generate this series, but I had a good idea who would.

I wrote to Danny Hillis asking whether he could come up with an algorithm for the job. Yes, he wrote back, and in fact he could come up with an algorithm for generating all the possible algorithms for that job. Not having the storage space for a lot of extra algorithms in my studio, I decided to settle for just the one. — Eno 6

And so, the pattern The Clock’s bells will ring was set. Using a start point (02003 in this case), one can extrapolate the order in which the Bells will ring for a given day in the future. The title track of the album features the synthesized bells played in each of the 31 sequences for the month of January in the year 07003. Other tracks on the album use different algorithms or different bells to explore alternative possibilities; taken together, the album is distinctly “ambient” in Eno’s tradition, but also unique within his work for its minimalism and procedurality.

The procedures guiding the composition are strict enough that they can be written in computer code. A Long Now Member named Sean Burke was kind enough to create a webpage that illustrates how this works. The site allows visitors to enter a future date and receive a MIDI file of the chimes from that day. You can also download the algorithm itself in the form of a Perl script or just grab the MIDI data for all 10,000 years and synthesize your own bells.

If the bell ringing algorithm is a seed, in what soil can it be planted and expected to live its full life? Compact disks, Perl scripts and MIDI files have their uses, of course, but The Clock has to really last in a physical, functional sense for many thousands of years. To serve this purpose, the Chime Generator manifests the algorithm in stainless steel Geneva wheels rotating on bearings of silicon nitride.

Eno’s Chime Generator prototype. Photo by Because We Can

One of the first prototypes for this mechanism resides at The Interval. In its operation, one can see that the Geneva wheels rotate at different intervals because of their varying numbers of slots. Together, the Geneva wheels represent the ringing algorithm and sequentially engage the hammers in all 3.6 million permutations. For this prototype, the hammers strike Tibetan Bowl Gongs to sound the notes, but any type of bell can be used.

The full scale Chime Generator will be vertically suspended in the Clock shaft within the mountain. The Geneva wheels will be about 8 feet in diameter, with the full mechanism standing over seventy feet in height.

The bells for the full scale Chime Generator won’t be Tibetan Bowl Gongs like in the smaller prototype above. Though testing has been done within the Clock chamber to find its resonant frequency, the exact tuning and design of the Clock’s bells will be left until the chamber is finished and most of the Clock is installed in order to maximize their ability to resonate within the space.

Like much of Brian Eno’s work, the chimes in the 10,000-Year Clock draw together far-flung traditions, high and low tech, and science and art to create a meditative experience, unique in a given moment, but expansive in scale and scope. They encourage the listener to live and to be present in the moment, the “now,” but to feel that moment expanding forward and backward through time, literally to experience the “Long Now.”



This is the first of a series of articles, “Music, Time and Long-Term Thinking,” in which we will discuss music and musicians who have engaged various aspects of long-term thinking, both historically and in the contemporary scene.

Sociological Images: When Social Class Chooses Your College Major

Originally Posted at Discoveries

Many different factors go into deciding your college major — your school, your skills, and your social network can all influence what field of study you choose. This is an important decision, as social scientists have shown it has consequences well into the life course — not only do college majors vary widely in terms of earnings across the life course, but income gaps between fields are often larger than gaps between those with college degrees and those without them. Natasha Quadlin finds that this gap is in many ways due to differences in funding at the start of college that determine which majors students choose.

Photo by Tom Woodward, Flickr CC

Quadlin draws on data from the Postsecondary Transcript Study, a collection of over 700 college transcripts from students who were enrolled in postsecondary education in 2012. Focusing on students’ declared major during their freshman year, Quadlin analyzes the relationship between the source of funding a student gets — loans, grants, or family funds — and the type of major the student initially chooses — applied versus academic and STEM versus non-STEM. She finds that students who pay for college with loans are more likely to major in applied non-STEM fields, such as business and nursing, and they are less likely to be undeclared. However, students whose funding comes primarily from grants or family members are more likely to choose academic majors like sociology or English and STEM majors like biology or computer science.

In other words, low- and middle-income students with significant amounts of loan debt are likely to choose “practical” applied majors that more quickly result in full-time employment. Conversely, students with grants and financially supportive parents, regardless of class, are more likely to choose what are considered riskier academic and STEM tracks that are more challenging and take longer to turn into a job. Since middle- to upper-class students are more likely to get family assistance and merit-based grants, this means that less advantaged students are most likely to rely on loans. The problem, Quadlin explains, is that applied non-STEM majors have relatively high wages at first, but very little advancement over time, while academic and STEM majors have more barriers to completion but experience more frequent promotions. The result is that inequalities established at the start of college are often maintained throughout people’s lives.

Jacqui Frost is a PhD candidate in sociology at the University of Minnesota and the managing editor at The Society Pages. Her research interests include non-religion and religion, culture, and civic engagement.

(View original at https://thesocietypages.org/socimages)

Cryptogram: NSA "Red Disk" Data Leak

ZDNet is reporting about another data leak, this one from the US Army's Intelligence and Security Command (INSCOM), which is also within the NSA.

The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM's Futures Directorate, was slated to complement the Army's so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.

[...]

Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.

[...]

Red Disk was a modular, customizable, and scalable system for sharing intelligence across the battlefield, like electronic intercepts, drone footage and satellite imagery, and classified reports, for troops to access with laptops and tablets on the battlefield. Markings on files found in several directories imply the disk is "top secret," and restricted from being shared with foreign intelligence partners.

A couple of points. One, this isn't particularly sensitive. It's an intelligence distribution system under development. It's not raw intelligence. Two, this doesn't seem to be classified data. Even the article hedges, using the unofficial term of "highly sensitive." Three, it doesn't seem that Chris Vickery, the researcher that discovered the data, has published it.

Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.

This doesn't feel like a big deal to me.

Slashdot thread.

Worse Than Failure: CodeSOD: Aarb!

C++’s template system is powerful and robust enough that template metaprogramming is Turing complete. Given that kind of power, it’s no surprise that pretty much every other object-oriented language eschews templates for code generation.

Java, for example, uses generics: essentially templates without the metaprogramming. We still keep compile-time type safety and all the benefits of generic programming, but without the complexity of compile-time code generation.
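
For contrast, here is a minimal sketch of ordinary Java generics (illustrative only, not taken from the application described below): a single type parameter buys compile-time type safety with none of the ceremony.

import java.util.ArrayList;
import java.util.List;

// One type parameter is enough to get compile-time type safety.
class Repository<T> {
    private final List<T> items = new ArrayList<>();

    void add(T item) {
        items.add(item);
    }

    T get(int index) {
        return items.get(index);
    }
}

class RepositoryDemo {
    public static void main(String[] args) {
        Repository<String> names = new Repository<>();
        names.add("Thierry");
        String first = names.get(0); // no cast needed
        // names.add(42);            // would not compile: wrong type
        System.out.println(first);
    }
}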

Thierry L inherited a Java application, and the original developer seems to miss that degree of complexity.

public abstract class CentralValidationDistributionAssemblingService<
        DC extends DistributionChannel,
        DU extends DistributionUnit<DC>,
        AC extends AssemblingContext,
        EAC extends AC,
        A extends Assembly<DC>,
        AAR extends AbstractAssemblingResult<DC>,
        AARB extends AbstractAssemblingResultBuilder<DC, AAR>
        >
        implements DistributionAssemblingService<DC, AC, DU, AAR>
{
    //…
}

The best part about this is that the type abbreviations are an onomatopoeia of the choking noises I made when I saw this code:

"DC… DU?… AC-EAC! A-AAR-AARB!"


Sam Varghese: Erratic All Blacks will need to buckle up for 2019 Cup

At the end of the 2015 Rugby World Cup, New Zealand bid goodbye to six players who had been around for what seemed like forever.

Richie McCaw, Ma’a Nonu, Conrad Smith, Keven Mealamu, Tony Woodcock and Daniel Carter left, some to play in foreign countries, others just opting out.

At that point, nobody really raised the issue of how the All Blacks would adjust, for talented players seem to come in a never-ending stream. This, despite the fact that the six mentioned above were all well above average in ability.

But two years later, the gaps are really showing.

Of the six players mentioned, only Smith played less than 100 international games, but even he was close, with 94.

Together with Nonu, he formed an extremely reliable centre combination, making up for his partner’s occasional tendency to improvise by keeping a cool head and excelling in defence. Prior to them, the last time there was a reliable 12-13 combine, it was Tana Umaga and Aaron Mauger.

Carter was perhaps the most talented fly-half to emerge since Grant Fox and McCaw was a loose forward par excellence and creative leader.

In 2016, there was not much evidence of a big gap in the team, with just one loss, to Ireland in Chicago, the entire year.

But this year, along with some injuries, some players taking up contracts abroad, and the talented Ben Smith taking a break from the game, New Zealand has looked vulnerable on many fronts. They lost two Tests and drew one, splitting a series with the British and Irish Lions at the start of the international season. In sharp contrast, the last time the Lions visited New Zealand, in 2005, they were decisively beaten in all three Tests.

At the start of the year, Sonny Bill Williams, generally a reliable substitute for Nonu in earlier years, was erratic at best, and even earned a red card for a foolish hit on a British Lions player in the second of three Tests. This cost New Zealand that match as they had to play most of the game with 14 players.

The centres combination was also affected by Ryan Crotty’s frequent exits for concussion. At fly-half, even the highly talented Beauden Barrett was erratic on occasion. And his back-up, Lima Sopoaga, is far from a finished product.

Though two losses and a draw in 16 Tests seems a good outcome, many of the wins were achieved in rather shaky fashion. There was only the occasional confident win when the team functioned on all cylinders.

Coach Steve Hansen has been experimenting with various players, no doubt with an intention to have a settled 15 by the beginning of 2019 which is a World Cup year. That is every coach’s target.

So it might be premature to pass judgement on how New Zealand will fare at the 2019 Cup. England shapes as the biggest threat, with coach Eddie Jones, an Australian, having brought a winning culture to the team. Many of his players came up with sparkling performances for the Lions against the All Blacks.

,

Worse Than Failure: Thanks, Google

Gold Certificate

"Dealing with real customers is a hard job," Katya declared from the safety of the employee breakroom. "Dealing with big companies is even harder!"

"I know what you mean," her coworker Rick replied, sipping his tiny paper cup of water. "Enterprise security requirements, arcane contract requirements, and then they're likely to have all that Oracle junk to integrate with ..."

"Huh? Well, that too, but I'm talking about Google."

"Google? What'd they do?" Rick raised an eyebrow, leaning against the wall by the cooler, as Katya began her story.

As the lead architect, Katya was responsible for keeping their customers happy—no matter what. The product was a Java application, a server that stood between legacy backends and mobile apps to push out notifications when things happened that the customer cared about. So when one of their biggest customers reported that 30% of the Google Cloud messages weren't being delivered to their devices in production, it was all hands on deck, with Katya at the helm.

"So I of course popped open the log right off," she said, her voice dropping lower for effect. "And what do you think I saw? CertPathValidatorExceptions."

"A bad SSL certificate?" Rick asked. "From Google? Can't be."

"You've done this before," Katya pouted, jokingly. "But it only happened sporadically. We even tried two concurrent calls, and got one failure, one success."

"How does that even work?" Rick wondered.

"I know, right? So we cURL'd it, verbose, and got the certificate chain," Katya said. "There was a wildcard cert, signed by an intermediate, signed by a root. I checked the root myself, it was definitely part of the global truststore. So I tried again and again until I got a second cert chain. But it was the same thing: cert, intermediate, trusted root."

"So what was the problem?" Rick asked.

"Get this: the newer cert's root CA was only added in Java 7 and 8, back in 2016. We were still bundling an older version of Java 7, before the update."

"Ouch," sympathized Rick. "So you pushed out an updated runtime to all the customers?"

"What? No way!" Katya said. "They'd have each had to do a full integration test cycle. No, we delivered a shell script that added the root CA to the bundled cacerts."

"Shouldn't they be worried about security updates?" wondered Rick

"Sure, but are they actually going to upgrade to Java 8 on our say-so? You wanna die on that hill?

"It just pissed me right off. Why didn't Google announce the change? How come they whipped through them all in two days—no canary testing or anything? I tell you, it's almost enough to make a girl quit and start an alpaca farm upstate."


Planet Linux Australia: Francois Marier: Proxy ACME challenges to a single machine

The Libravatar mirrors are set up using DNS round-robin, which makes it a little challenging to automatically provision Let's Encrypt certificates.

In order to be able to use Certbot's webroot plugin, I need to be able to simultaneously host a randomly-named file in the webroot of each mirror. The reason is that the verifier will connect to seccdn.libravatar.org, but there's no way to know which of the DNS entries it will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.

Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration

In order to serve the certbot validation files separately from the main service, I created a new hostname, acme.libravatar.org, pointing to the main Libravatar server:

CNAME acme libravatar.org.

Mirror Configuration

On each mirror, I created a new Apache vhost on port 80 to proxy the ACME challenge files, by putting the following in the existing vhost config file (/etc/apache2/sites-available/libravatar-seccdn.conf) alongside the port 443 vhost:

<VirtualHost *:80>
    ServerName __SECCDNSERVERNAME__
    ServerAdmin __WEBMASTEREMAIL__

    ProxyPass /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
    ProxyPassReverse /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
</VirtualHost>

Then I enabled the right modules and restarted Apache:

a2enmod proxy
a2enmod proxy_http
systemctl restart apache2.service

Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

#!/bin/sh
cd /etc/libravatar
/usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true

Main Configuration

On the main server, I created a new webroot:

mkdir -p /var/www/acme/.well-known

and a new vhost in /etc/apache2/sites-available/acme.conf:

<VirtualHost *:80>
    ServerName acme.libravatar.org
    ServerAdmin webmaster@libravatar.org
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>
</VirtualHost>

before enabling it and restarting Apache:

a2ensite acme
systemctl restart apache2.service

Registering a new TLS certificate

With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

certbot certonly --webroot -w /var/www/acme -d seccdn.libravatar.org

The resulting certificate will then be automatically renewed before it expires.

,

Krebs on Security: MacOS High Sierra Users: Change Root Password Now

A newly-discovered flaw in macOS High Sierra — Apple’s latest iteration of its operating system — allows anyone with local (and, apparently in some cases, remote) access to the machine to log in as the all-powerful “root” user without supplying a password. Fortunately, there is a simple fix for this until Apple patches this inexplicable bug: Change the root account’s password now.

Update, Nov. 29, 11:40 a.m. ET: Apple has released a patch for this flaw. More information on the fix is here. The update is available via the App Store app on your Mac. Click Updates in the App Store toolbar, then use the Update buttons to download and install any updates listed.

Original story:

For better or worse, this glaring vulnerability was first disclosed today on Twitter by Turkish software developer Lemi Orhan Ergin, who unleashed his findings onto the Internet with a tweet to @AppleSupport:

“Dear @AppleSupport, we noticed a *HUGE* security issue at MacOS High Sierra. Anyone can login as “root” with empty password after clicking on login button several times. Are you aware of it @Apple?”

High Sierra users should be able to replicate the exploit by opening System Preferences, then Users & Groups, and then clicking the lock to make changes. Type “root” with no password, and simply try that several times until the system relents and lets you in.

How does one change the root password? It’s simple enough. Open up a Terminal (in the Spotlight search box just type “terminal”) and type “sudo passwd root”.

Many people responding to that tweet said they were relieved to learn that this extremely serious oversight by Apple does not appear to be exploitable remotely. However, sources who have tested the bug say it can be exploited remotely if a High Sierra user a) has not changed the root password yet and b) has enabled “screen sharing” on their Mac.

Likewise, multiple sources have now confirmed that disabling the root account does not fix the problem because the exploit actually causes the account to be re-enabled.

There may be other ways that this vulnerability can be exploited: I’ll update this post as more information becomes available. But for now, if you’re using macOS High Sierra, take a moment to change the root password now, please.

LongNow: A Message from Long Now Members on #GivingTuesday

We hope you’ll consider a tax-deductible donation to Long Now this giving season.

Thank you to Long Now members and donors: we appreciate your additional support this giving season. With your help we can foster long-term thinking for generations to come.

Sociological Images: Why We Worry About Normalizing White Nationalism

The New York Times has been catching a lot of criticism this week for publishing a profile on the co-founder of the Traditionalist Worker Party. Critics argue that stories taking a human interest angle on how alt-right activists live, and how they dress, are not just puff pieces that aren’t doing due diligence in reporting—they also risk “normalizing” neo-nazi and white supremacist views in American society.

It is tempting to scoff at the buzzword “normalization,” but there is good reason for the clunky term. For sociologists, what is normal changes across time and social context, and normalization means more than whether people choose to accept deviant beliefs or behaviors. Normalization means that the everyday structure of organizations can encourage and reward deviance, even unintentionally.

Media organizations play a key role here. Research on the spread of anti-Muslim attitudes by Chris Bail shows how a small number of fringe groups with extremist views were able to craft emotionally jarring messages that caught media attention, giving them disproportionate influence in policy circles and popular culture.

Organizations are also quite good at making mistakes, and even committing atrocities, through “normal” behavior. Diane Vaughan’s research on what happened at NASA leading up to the Challenger disaster describes normalization using a theory of crime from Edwin H. Sutherland, in which people learn that deviant behavior can earn them benefits rather than sanctions. When bending the rules becomes routine in organizations, we get everything from corporate corruption up to mass atrocities. According to Vaughan:

When discovered, a horrified world defined these actions deviant, yet they were normative within the culture of the work and occupations of the participants who acted in conformity with organizational mandates

The key point is that normalization doesn’t stop just by punishing or shaming individuals for bad behavior. Businesses can be fined, scapegoats can be fired, and readers can cancel subscriptions, but if normalization is happening, the culture of an institution will continue to shape how individual people make decisions. This raises big questions about the decisions made by journalists and editors in pursuit of readership.

Research on normalization also begs us to remember that some of the most horrifying crimes and accidents in human history are linked by a common process: the way organizations can reward deviant work. Just look at the “happy young folks” photographed by Karl Höcker in 1944…while they worked at Auschwitz.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Man-in-the-Middle Attack against Electronic Car-Door Openers

This is an interesting tactic, and there's a video of it being used:

The theft took just one minute and the Mercedes car, stolen from the Elmdon area of Solihull on 24 September, has not been recovered.

In the footage, one of the men can be seen waving a box in front of the victim's house.

The device receives a signal from the key inside and transmits it to the second box next to the car.

The car's systems are then tricked into thinking the key is present and it unlocks, before the ignition can be started.

Worse Than Failure: A Handful of Beans

The startup Juan worked for was going through a growth spurt. There was more work than there were people, and plenty of money, so that meant interviews. Lots, and lots of interviews.

Enter Octavio. Octavio had an impressive resume, had worked for decades as a consultant, and was the project lead on an open source project called “JavaBachata”. Before the interview, Juan gave the project site a quick skim, and it looked like one of those end-to-end ORM/MVC frameworks.

Roasted coffee beans

Juan planned to bring it up during the interview, but Octavio beat him to the punch. “You’ve probably heard of me, and my project,” he said right after shaking hands. “JavaBachata is the fastest Java framework out there. I use it on all my projects, and my customers have been very happy.”

“Ah… we already have a framework,” Juan said, uncertain if this was an interview or a sales-pitch.

“Oh, I know, I know. But if you’re looking for my skills, that’s the place to look. It’s open source.”

While Juan pulled up the GitHub page, Octavio touted the framework’s strength. “I was doing no SQL before NoSQL was a thing,” he said. “All of our queries are executed in-memory, using TableBeans. That’s what makes it so fast.”

Juan decided to start looking in the TableBean class, since Octavio brought it up. The bulk of the class looked like this:

public String var000, var001, var002,… var199,var200;

“What’s this?” Juan asked, politely.

“Oh, yes, I know that looks awkward, but it actually makes the code much more configurable. You see, this is used in conjunction with the ObjectBean, and the appropriate dot-properties file.”

The .properties file was a mapping file, which could map ObjectBean columns to TableBean fields. So, for example, it might have a property like this:

1=178

That meant column 1 in the ObjectBean mapped to column 178 in the TableBean, so that you could conveniently access the data by calling objBean.getCol(1).
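
For illustration, here is a minimal sketch of how such a properties-driven lookup presumably works; the class and constructor here are hypothetical and not taken from JavaBachata itself. The ObjectBean column number is translated through the mapping file and then resolved to a public varNNN field on the TableBean by reflection.

import java.io.FileInputStream;
import java.io.IOException;
import java.util.Properties;

// Hypothetical reconstruction of the mapping lookup described above.
class ObjectBeanSketch {
    private final Properties mapping = new Properties();
    private final Object tableBean;

    ObjectBeanSketch(Object tableBean, String mappingFile) throws IOException {
        this.tableBean = tableBean;
        try (FileInputStream in = new FileInputStream(mappingFile)) {
            mapping.load(in); // lines like "1=178"
        }
    }

    // getCol(1) looks up "1" in the mapping, finds "178", and reads the
    // public field var178 from the underlying TableBean via reflection.
    String getCol(int objectColumn) throws ReflectiveOperationException {
        String tableColumn = mapping.getProperty(String.valueOf(objectColumn));
        String fieldName = String.format("var%03d", Integer.parseInt(tableColumn));
        return (String) tableBean.getClass().getField(fieldName).get(tableBean);
    }
}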

“Don’t you think these naming conventions are hard to maintain?” Juan asked. “It’d be nice to have names for things.”

Octavio shrugged. “I think that’s the problem with modern programmers. They just don’t know how to code without using variable names anymore.”

They didn’t hire Octavio, but he took it well. “It leaves me more time to work on my framework.”


Cory Doctorow: Hey, Kitchener-Waterloo, I’m headed your way next Monday!

I was honoured to be invited to address the University of Waterloo on the occasion of the 50th anniversary of the Cheriton School of Computer Science; my father is a proud Waterloo grad (and I’m a proud Waterloo dropout!), and so this is indeed a very special opportunity for me.


Moreover, the kind folks at U Waterloo worked with the Kitchener Public Library to book a second event that morning, at the 85 Queen branch.

Both events are free, but they’re ticketed, so book now!

Here’s details:

Both events are on December 4, 2017.

I’ll be at the 85 Queen branch of the Kitchener Public Library at 3PM. Details here.


I’m giving my speech, “Dead canary in the coalmine: we just lost the web in the war on general purpose computing”, for the Cheriton School of Computer Science, at The Theatre of the Arts in the University of Waterloo Modern Languages Building (200 University Avenue West, N2L 3G1), at 7PM. Details here.


Hope to see you there!

,

Krebs on Security: Who Was the NSA Contractor Arrested for Leaking the ‘Shadow Brokers’ Hacking Tools?

In August 2016, a mysterious entity calling itself “The Shadow Brokers” began releasing the first of several troves of classified documents and hacking tools purportedly stolen from “The Equation Group,” a highly advanced threat actor that is suspected of having ties to the U.S. National Security Agency. According to media reports, at least some of the information was stolen from the computer of an unidentified software developer and NSA contractor who was arrested in 2015 after taking the hacking tools home. In this post, we’ll examine clues left behind in the leaked Equation Group documents that may point to the identity of the mysterious software developer.

The headquarters of the National Security Agency in Fort Meade, Md.

The existence of the Equation Group was first posited in Feb. 2015 by researchers at Russian security firm Kaspersky Lab, which described it as one of the most sophisticated cyber attack teams in the world.

According to Kaspersky, the Equation Group has more than 60 members and has been operating since at least 2001. Most of the Equation Group’s targets have been in Iran, Russia, Pakistan, Afghanistan, India, Syria, and Mali.

Although Kaspersky was the first to report on the existence of the Equation Group, it also has been implicated in the group’s compromise. Earlier this year, both The New York Times and The Wall Street Journal cited unnamed U.S. intelligence officials saying Russian hackers were able to obtain the advanced Equation Group hacking tools after identifying the files through a contractor’s use of Kaspersky Antivirus on his personal computer. For its part, Kaspersky has denied any involvement in the theft.

The Times reports that the NSA has active investigations into at least three former employees or contractors, including two who had worked for a specialized hacking division of NSA known as Tailored Access Operations, or TAO.

Thanks to documents leaked from former NSA contractor Edward Snowden we know that TAO is a cyber-warfare intelligence-gathering unit of NSA that has been active since at least 1998. According to those documents, TAO’s job is to identify, monitor, infiltrate and gather intelligence on computer systems being used by entities foreign to the United States.

The third person under investigation, The Times writes, is “a still publicly unidentified software developer secretly arrested after taking hacking tools home in 2015, only to have Russian hackers lift them from his home computer.”

JEEPFLEA & EASTNETS

So who are those two unnamed NSA employees and the contractor referenced in The Times’ reporting? The hacking tools leaked by The Shadow Brokers are now in the public domain and can be accessed through this Github repository. The toolset includes reams of documentation explaining how the cyber weapons work, as well as details about their use in highly classified intelligence operations abroad.

Several of those documents are Microsoft Powerpoint (.pptx) and PDF files. These files contain interesting metadata that includes the names of at least two people who appear to be connected to the NSA’s TAO operations.

Inside the unzipped folder there are three directories: “oddjob,” “swift,” and “windows.” Looking at the “swift” folder, we can see a Powerpoint file called “JFM_Status.pptx.”

JFM stands for Operation Jeepflea Market, which appears to have been a clandestine operation aimed at siphoning confidential financial data from EastNets — a middle eastern partner in SWIFT. Short for the Society for Worldwide Interbank Financial Telecommunication, SWIFT provides a network that enables financial institutions worldwide to send and receive data about financial transactions.

Each of the Jeepflea Powerpoint slides contain two overlapping seals; one for the NSA and another for an organization called the Central Security Service (CSS). According to Wikipedia, the CSS was formed in 1972 to integrate the NSA and the Service Cryptologic Elements (SCE) of the U.S. armed forces.

In Figure 10 of the Jeepflea Powerpoint document, we can see the seal of the Texas Cryptologic Center, a NSA/CSS entity that is based out of Lackland Air Force Base in San Antonio Texas.

The metadata contained in the Powerpoint document shows that the last person to modify it has the designation “NSA-FTS32 USA,” and that this leaked version is the 9th revision of the document. What does NSA-FTS32 mean? According to multiple sources on the internet it means Tailored Access Operations.

We can also see that the creator of the Jeepflea document is the same person who last modified it. The metadata says it was last modified on 8/12/2013 at 6:52:27 PM and created on 7/1/2013 at 6:44:46 PM.

The file JFMStatus.pptx contains metadata showing that the creator of the file is one Michael A. Pecoraro. Public record searches suggest Mr. Pecoraro is in his mid- to late 30s and is from San Antonio, Texas. Pecoraro’s name appears on multiple documents in The Shadow Brokers collection. Mr. Pecoraro could not be reached for comment.

The metadata in a Microsoft Powerpoint presentation on Operation Jeepflea shows that the document was created and last modified in 2013 by a Michael A. Pecoraro.

Another person who earned the NSA-FTS32 designation in the document metadata was Nathan S. Heidbreder. Public record searches suggest that Mr. Heidbreder is approximately 30 years old and has lived in San Antonio and at Goodfellow Air Force Base in Texas, among other locations in the state.

According to Goodfellow’s Wikipedia entry, the base’s main mission is cryptologic and intelligence training for the Air Force, Army, Coast Guard, Navy, and Marine Corps. Mr. Heidbreder likewise could not be reached for comment.

The metadata contained in one of the classified Jeepflea Market Microsoft Excel documents released in The Shadow Brokers trove states that a Nathan S. Heidbreder was the last person to edit the document.

Another file in the leaked Shadow Brokers trove related to this Jeepflea/EastNets operation is potentially far more revealing, mainly because it appears to have last been touched not by an NSA employee, but by an outside contractor.

That file, “Eastnets_UAE_BE_DEC2010.vsd,” is a Microsoft Visio document created in Sept. 2013. The metadata inside of it says the last user to open the file was one Gennadiy Sidelnikov. Open records searches return several results for this name, including a young business analyst living in Moscow and a software developer based in Maryland.

As the NSA is based in Fort Meade, Md., this latter option seems far more likely. A brief Internet search turns up a 50-something database programmer named Gennadiy “Glen” Sidelnikov who works or worked for a company called Independent Software in Columbia, Md. (Columbia is just a few miles away from Ft. Meade).

The metadata contained within Eastnets_UAE_BE_Dec2010.vsd says Gennadiy Sidelnikov is the last author of the document. Click to enlarge.

What is Independent Software? Their Web site states that Independent Software is “a professional services company providing Information Technology products and services to mission-oriented Federal Civilian Agencies and the DoD. The company has focused on support to the Intelligence Community (IC) in Maryland, Florida, and North Carolina, as well as select commercial client markets outside of the IC.”

Indeed, this job advertisement from August 2017 for a junior software engineer at Independent Software says a qualified applicant will need a TOP SECRET clearance and should be able to pass a polygraph test.

WHO IS GLEN SIDELNIKOV?

The two NSA employees are something of a known commodity, but the third individual — Mr. Sidelnikov — is more mysterious. Sidelnikov did not respond to repeated requests for comment. Independent Software also did not return calls and emails seeking comment.

Sidelnikov’s LinkedIn page (PDF) says he began working for Independent Software in 2015, and that he speaks both English and Russian. In 1982, Sidelnikov earned his masters in information security from Kishinev University, a school located in Moldova — an Eastern European country that at the time was part of the Soviet Union.

Sidelnikov says he also earned a Bachelor of Science degree in “mathematical cybernetics” from the same university in 1981. Under “interests,” Mr. Sidelnikov lists on his LinkedIn profile Independent Software, Microsoft, and The National Security Agency.

Both The Times and The Journal have reported that the contractor suspected of leaking the classified documents was running Kaspersky Antivirus on his computer. It stands to reason that as a Russian native, Mr. Sidelnikov might be predisposed to using a Russian antivirus product.

Mr. Sidelnikov calls himself a senior consultant, but the many skills he self-described on his LinkedIn profile suggest that perhaps a better title for him would be “database administrator” or “database architect.”

A Google search for his name turns up numerous forums dedicated to discussions about administering databases. On the Web forum sqlservercentral.com, for example, a user named Glen Sidelnikov is listed as a “hall of fame” member for providing thousands of answers to questions that other users posted on the forum.

KrebsOnSecurity was first made aware of the metadata in the Shadow Brokers leak by Mike Poor, Rob Curtinseufert, and Larry Pesce. All three are security experts with InGuardians, a Washington, D.C.-based penetration testing firm that gets paid to break into companies and test their cybersecurity defenses.

Poor, who serves as president and managing partner of InGuardians, said he and his co-workers initially almost overlooked Sidelnikov’s name in the metadata. But he said the more they looked into Sidelnikov the less sense it made that his name was included in the Shadow Brokers metadata at all.

“He’s a database programmer, and they typically don’t have access to operational data like this,” Poor said. “Even if he did have access to an operational production database, it would be highly suspect if he accessed classified operation documents.”

Poor said that as the data owner, the NSA likely would be reviewing file access of all classified documents and systems, and it would definitely have come up as a big red flag if a software developer accessed and opened classified files during some routine maintenance or other occasion.

“He’s the only one in there that is not Agency/TAO, and I think that poses important questions,” Poor said. “Such as why did a DB programmer for a software company have access to operational classified documents? If he is or isn’t a source or a tie to Shadow Brokers, it at least begets the question of why he accessed classified operational documents.”

Curtinseufert said it may be that Sidelnikov’s skills came in handy in the Jeepflea operations, which appear to have involved compromising large financial databases, and querying those for very specific transactions.

For example, Jeepflea apparently involved surreptitiously injecting database queries into the SWIFT Alliance servers, which are responsible for handling the messaging tied to SWIFT financial transactions.

“It looks like the SWIFT data the NSA was collecting relied heavily on databases, and they were apparently also using some exploits to gather data,” Curtinseufert said. “The SWIFT databases are all records of financial transactions, and in Jeepflea they were able to query those SWIFT databases and see who’s moving money where. They were looking to pull SWIFT data on who was moving money around the Middle East. They did this with EastNets, and they tried to do it down in Venezuela. And it looks like through EastNets they were able to hack individual banks.”

The NSA did not respond to requests for comment.

InGuardians president Poor said we may never know for sure who was responsible for the Shadow Brokers leak, an incident that has been called “one of the worst security debacles ever to befall American intelligence.”  But one thing seems certain.

“I think it’s time that we state what we all know,” Poor said. “The Equation Group is Tailored Access Operations. It is the NSA.”

CryptogramUber Data Hack

Uber was hacked, losing data on 57 million driver and rider accounts. The company kept it quiet for over a year. The details are particularly damning:

The two hackers stole data about the company's riders and drivers -- including phone numbers, email addresses and names -- from a third-party server and then approached Uber and demanded $100,000 to delete their copy of the data, the employees said.

Uber acquiesced to the demands, and then went further. The company tracked down the hackers and pushed them to sign nondisclosure agreements, according to the people familiar with the matter. To further conceal the damage, Uber executives also made it appear as if the payout had been part of a "bug bounty" -- a common practice among technology companies in which they pay hackers to attack their software to test for soft spots.

And almost certainly illegal:

While it is not illegal to pay money to hackers, Uber may have violated several laws in its interaction with them.

By demanding that the hackers destroy the stolen data, Uber may have violated a Federal Trade Commission rule on breach disclosure that prohibits companies from destroying any forensic evidence in the course of their investigation.

The company may have also violated state breach disclosure laws by not disclosing the theft of Uber drivers' stolen data. If the data stolen was not encrypted, Uber would have been required by California state law to disclose that driver's license data from its drivers had been stolen in the course of the hacking.

Worse Than FailureCodeSOD: The Delivery Moose

We know stereotypes are poor placeholders for reality. Still, if we name a few nations, there are certain traits and themes that come to mind. Americans are fat, loud, gregarious, and love making pointless smalltalk. The English are reserved, love tea, and have perfected the art of queuing. The French are snobbish, the Japanese have weaponized politeness, the Finns won’t stand within ten meters of another human being at the bus stop, and so on. They can range from harmless to downright offensive and demeaning.

Laurent is Canadian, working for an insurance company. Their software is Russian, in that it comes from a Russian vendor, with a support contract that gives them access to a Russian dev team to make changes. While reviewing commits, Laurent found one simply labeled: “Fix some Sonars issue”.

The change?

public enum CorrespondenceDeliveryMethods {
    MAIL("mail"),
    NOT_MAIL("moose");
}

Apparently, the Russian team has some stereotypes of their own about how documents are sent in Canada. Laurent adds:

Since this is saved in the database and would thus imply 5 signatures to do a data migration, it is quite probable we’ll ultimately leave it as is. Which is a shame, because as we all know, the alternative to mail in Canada is to ask a Hockey player to slapshot the letter through the customer window, especially after they canceled their home insurance with us.



They should send a real Canadian hero


Planet Linux AustraliaDavid Rowe: Steve Ports an OFDM modem from Octave to C

Earlier this year I asked for some help. Steve Sampson K5OKC stepped up, and has done some fine work in porting the OFDM modem from Octave to C. I was so happy with his work I asked him to write a guest post on my blog on his experience and here it is!

On a personal level, working with Steve was a great experience for me. I always enjoy and appreciate other people working on FreeDV with me; however, it is quite rare to have people help out with programming. As you will see, Steve enjoyed the process and learned a great deal along the way.

The Problem with Porting

But first some background on the process involved. In signal processing it is common to develop algorithms in a convenient domain-specific scripting language such as GNU Octave. These languages can do a lot with one line of code and have powerful visualisation tools.

Usually, the algorithm then needs to be ported to a language suitable for real time implementation. For most of my career that has been C. For high speed operation on FPGAs it might be VHDL. It is also common to port algorithms from floating point to fixed point so they can run on low cost hardware.

We don’t develop algorithms directly in the target real-time language as signal processing is hard. Bugs are difficult to find and correct. They may be 10 or 100 times harder (in terms of person-hours) to find in C or VHDL than in, say, GNU Octave.

So a common task in my industry is porting an algorithm from one language to another. Generally the process involves taking a working simulation and injecting a bunch of hard to find bugs into the real time implementation. It’s an excellent way for engineering companies to go bankrupt and upset customers. I have seen and indeed participated in this process (screwing up real time implementations) many times.

The other problem is that algorithm development is hard, and not many people can do it. Such people are hard to find, cost a lot of money to employ, and can be very nerdy (like me). So if you can find a way to get people with C skills, but not high-level DSP skills, to work on these ports – then it’s a huge win from a resourcing perspective. The person doing the C port learns a lot, and managers are happy as there is some predictability in the engineering process and schedule.

The process I have developed allows people with C coding (but not DSP) skills to port complex signal processing algorithms from one language to another. In this case it’s from GNU Octave to floating point C. The figures below show how it all fits together.

Here is a sample output plot, in this case a buffer of received samples in the demodulator. This signal is plotted in green, and the difference between C and Octave in red. The red line is all zeros, as it should be.

This particular test generates 12 plots. Running is easy:

$ cd codec2-dev/octave
$ ../build_linux/unittest/tofdm
$ octave
>> tofdm
W........................: OK
tx_bits..................: OK
tx.......................: OK
rx.......................: OK
rxbuf in.................: OK
rxbuf....................: OK
rx_sym...................: FAIL (0.002037)
phase_est_pilot..........: FAIL (0.001318)
rx_amp...................: OK
timing_est...............: OK
sample_point.............: OK
foff_est_hz..............: OK
rx_bits..................: OK

This shows a fail case – two vectors just failed, so some further inspection is required.

Key points are:

  1. We make sure the C and Octave versions are identical. Near enough is not good enough. For floating point I set a tolerance like 1 part in 1000 (a minimal comparison sketch in this spirit appears after this list). For fixed point ports it can be bit exact – zero difference.
  2. We dump a lot of internal states, not just the inputs and outputs. This helps point us at exactly where the problem is.
  3. There is an automatic checklist to give us pass/fail reports of each stage.
  4. This process is not particularly original. It’s not rocket science, but getting people (especially managers) to support and follow such a process is. This part – the human factor – is really hard to get right.
  5. The same process can be used between any two versions of an algorithm. Fixed and float point, fixed point C and VHDL, or a reference implementation and another one that has memory or CPU optimisations. The same basic idea: take a reference version and use software to compare it.
  6. It makes porting fun and strangely satisfying. You get constant forward progress and no hard-to-find bugs. Things work when they hit real time. After months of tough, brain-hurting algorithm development, I find myself looking forward to the productivity of the porting phase.
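
The comparison step itself is tiny. Here is a minimal sketch of the kind of pass/fail vector check described in point 1 above; the function name, tolerance handling and test values are made up for illustration and are not the actual tofdm test code:

/* Illustrative only -- not the actual codec2 tofdm test code. */
#include <stdio.h>
#include <math.h>

static int check_vector(const char *name, const float *c_vec,
                        const float *octave_vec, int n, float tol)
{
    float worst = 0.0f;

    /* find the largest absolute difference between the two versions */
    for (int i = 0; i < n; i++) {
        float diff = fabsf(c_vec[i] - octave_vec[i]);
        if (diff > worst)
            worst = diff;
    }

    if (worst <= tol) {
        printf("%-25s: OK\n", name);
        return 1;
    }
    printf("%-25s: FAIL (%f)\n", name, worst);
    return 0;
}

int main(void)
{
    float c_out[4]      = {1.0f, 2.0f, 3.0f,    4.0f};
    float octave_out[4] = {1.0f, 2.0f, 3.0005f, 4.0f};

    /* 1 part in 1000 tolerance, as described in point 1 */
    check_vector("rx_sym", c_out, octave_out, 4, 1e-3f);
    return 0;
}

Run against the dumped internal states, a loop of checks like this is essentially all that is needed to produce an OK/FAIL checklist like the one shown earlier.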

In this case Steve was the man doing the C port. Here is his story…..

Initial Code Construction

I’m a big fan of the Integrated Development Environment (IDE). I’ve used various versions over the years, but mostly only use Netbeans IDE. This is my current favorite, as it works well with C and Java.

When I take on a new programming project I just create a new IDE project and paste in whatever I want to translate, and start filling-in the Java or C code. In the OFDM modem case, it was the Octave source code ofdm_lib.m.

Obviously this code won’t do anything or compile, but it allows me to write C functions for each of the Octave code blocks. Sooner or later, all the Octave code is gone, and only C code remains.

I have very little experience with Octave, but I did use some Matlab in college. It was a new system just being introduced when I was near graduation. I spent a little time trying to make the program as dynamic as the Octave code. But it became mired in memory allocation.

Once David approved the decision for me to go with fixed configuration values (Symbol rate, Sample rate, etc), I was able to quickly create the header files. We could adjust these header files as we went along.

One thing about Octave is that you don’t have to specify the array sizes. So for the C port, one of my tasks was to figure out the array sizes for all the data structures. In some cases I just typed the array name in Octave, and it printed out its value, and presto, I now knew the size. Inspector Clouseau wins again!

The include files were pretty much patterned the same as FDMDV and COHPSK modems.

Code Starting Point

When it comes to modems, the easiest thing to create first is the modulator. It proved true in this case as well. I did have some trouble early on, because of a bug I created in my testing code. My spectrum looked different than David’s. Once this bug was ironed out the spectrums looked similar. David recommended I create a test program, like he had done for other modems.

The output may look similar, but who knows really? I’m certainly not going to go line by line through comma-separated values, and anyway Octave floating point values aren’t the same as C values past some number of decimal points.

This testing program was a little over my head, and since David has written many of these before, he decided to just crank it out and save me the learning curve.

We made a few data structure changes to the C program, but generally it was straightforward. Basically we had the outputs of the C and Octave modulators, and the difference is shown by their different colors. Luckily we finally got no differences.

OFDM Design

As I was writing the modulator, I also had to try and understand this particular OFDM design. I deduced that it was basically eighteen (18) carriers that were grouped into eight (8) rows. The first row was the complex “pilot” symbols (BPSK), and the remaining 7 rows were the 112 complex “data” symbols (QPSK).

But there was a little magic going on, in that the pilots were 18 columns, but the data was only using 16. So in the 7 rows of data, the first and last columns were set to a fixed complex “zero.”

This produces the 16 x 7 or 112 complex data symbols. Each QPSK symbol is two bits, so each OFDM frame represents 224 bits of data. It wasn't until I began working on the receiver code that all of this started to make sense.
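
As a rough sketch, that frame geometry boils down to a handful of constants. The macro names here are mine, invented for illustration; the real configuration lives in the modem's header files:

/* Frame geometry as described above -- names are illustrative only */
#define NUM_CARRIERS       18  /* pilot row spans all 18 carriers             */
#define NUM_DATA_CARRIERS  16  /* first and last data columns are fixed zero  */
#define NUM_DATA_ROWS       7  /* plus 1 pilot row = 8 rows per frame         */
#define SYMBOLS_PER_FRAME  (NUM_DATA_CARRIERS * NUM_DATA_ROWS)  /* 16 x 7 = 112   */
#define BITS_PER_FRAME     (2 * SYMBOLS_PER_FRAME)              /* QPSK: 224 bits */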

With this information, I was able to drive the modulator with the correct number of bits, and collect the output and convert it to PCM for testing with Audacity.

DFT Versus FFT

This OFDM modem uses a DFT and IDFT. This greatly simplifies things. All I have to do is a multiply and summation. With only 18 carriers, this is easily fast enough for the task. We just zip through the 18 carriers, and return the frequency or time domain. Obviously this code can be optimized for firmware later on.
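
As a sketch of the multiply-and-sum idea, here is a direct NC-point DFT using C99 complex arithmetic. The sizes and names are illustrative; the actual codec2 transform works on more samples per symbol than this, so treat it as the shape of the loop rather than the real function:

#include <complex.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

#define NC 18  /* number of carriers */

/* Transform NC time-domain samples into NC carrier amplitudes by
   direct multiply-and-sum -- no FFT required at this size. */
static void dft(complex float freq[NC], const complex float time[NC])
{
    for (int k = 0; k < NC; k++) {
        freq[k] = 0.0f + 0.0f * I;
        for (int n = 0; n < NC; n++) {
            /* multiply by e^(-j*2*pi*k*n/NC) and accumulate */
            freq[k] += time[n] * cexpf(-I * 2.0f * (float)M_PI * k * n / NC);
        }
    }
}

The inverse transform is the same loop with the sign of the exponent flipped and a 1/NC scale factor.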

The final part of the modulator is the need for a guard period called the Cyclic Prefix (CP). By making a copy of the last 16 of the 144 complex time-domain samples and putting them at the head, we produce 160 complex samples for each row, giving us 160 x 8 rows, or 1280 complex samples every OFDM frame. We send this to the transmitter.
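
A sketch of that cyclic prefix insertion, with illustrative names and the sizes from the description above:

#include <complex.h>
#include <string.h>

#define ROW_SAMPLES 144                  /* complex samples per row before the CP */
#define NCP          16                  /* cyclic prefix length                  */
#define ROW_WITH_CP (ROW_SAMPLES + NCP)  /* 160 samples per row on air            */

static void add_cyclic_prefix(complex float out[ROW_WITH_CP],
                              const complex float row[ROW_SAMPLES])
{
    /* prefix: a copy of the last NCP samples of the row */
    memcpy(out, &row[ROW_SAMPLES - NCP], NCP * sizeof(complex float));

    /* followed by the row itself */
    memcpy(&out[NCP], row, ROW_SAMPLES * sizeof(complex float));
}

Eight such rows give the 160 x 8 = 1280 complex samples per frame mentioned above.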

There will probably need to be some filtering, and a function of adjusting gain in the API.

OFDM Demodulator

That left the Demodulator which looked much more complex. It took me quite a long time just to get the Octave into some semblance of C. One problem was that Octave arrays start at 1 and C starts at 0. In my initial translation, I just ignored this. I told myself we would find the right numbers when we started pushing data through it.

I won’t kid anyone, I had no idea what was going on, but it didn’t matter. Slowly, after the basic code was doing something, I began to figure out the function of various parts. Again though, we have no idea if the C code is producing the same data as the Octave code. We needed some testing functions, and these were added to tofdm.m and tofdm.c. David wrote this part of the code, and I massaged the C modem code until one day the data were the same. This was pretty exciting to see it passing tests.

One thing I found was that you can reach an underflow with single precision. Whenever I was really stumped, I would change the single precision to a double, and then see where the problem was. I was trying to stay completely within single precision floating point, because this modem is going to be embedded firmware someday.

Testing Process

There was no way that I could have reached a successful conclusion without the testing code. As a matter of fact, a lot of programming errors were found. You would be surprised at how much damage a misplaced parenthesis can do to a math equation! I’ve had enough math to know how to do the basic operations involved in DSP. I’m sure that as this code is ported to firmware, it can be simplified, optimized, and unrolled a bit for added speed. At this point, we just want valid waveforms.

C99 and Complex Math

Working with David was pretty easy, even though we are almost 16 time-zones apart. We don’t need an answer right now, and we aren’t working on a deadline. Sometimes I would send an email, and then four hours later I would find the problem myself, and the morning was still hours away in his time zone. So he sometimes got some strange emails from me that didn’t require an answer.

David was hands-off on this project, and doesn’t seem to be a control freak, so he just let me go at it, and then teamed-up when we had to merge things in giving us comparable output. Sometimes a simple answer was all I needed to blow through an Octave brain teaser.

I’ve been working in C99 for the past year. For those who haven’t kept up: 1999 was a long time ago, but we still tend to program C in the same way. In working with complex numbers, though, the C library has been greatly expanded. For example, to multiply two complex numbers, you type “A * B”. That’s it. No need to worry about a simulated complex number using a structure. If you need a complex exponent, you type “cexp(I * W)” where “I” is the sqrt(-1). But all of this is hidden away inside the compiler.

For me, this became useful when translating Octave to C. Most of the complex functions have the same name. The only things I had to create were a matrix multiply and a summation function for the DFT. The rest was straightforward. Still a lot of work, but it was enjoyable work.

Where we might have problems interfacing to legacy code, there are functions in the library to extract the real and imaginary parts. We can easily interface to the old structure method. You can see examples of this in the testing code.
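
Here is a tiny, self-contained demonstration of those idioms. The COMP struct below is just a stand-in for whatever the legacy struct-based code uses:

#include <complex.h>
#include <stdio.h>

typedef struct { float real; float imag; } COMP;  /* stand-in legacy type */

int main(void)
{
    complex float a = 1.0f + 2.0f * I;
    complex float b = cexpf(I * 0.5f);  /* complex exponent: e^(j*0.5) */
    complex float c = a * b;            /* plain '*' multiplies complex numbers */

    /* extract real/imaginary parts to hand back to struct-based code */
    COMP legacy = { crealf(c), cimagf(c) };

    printf("c = %f + j%f\n", legacy.real, legacy.imag);
    return 0;
}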

Looking back, I don’t think I would do anything different. Translating code is tedious no matter how you go. In this case Octave is 10 times easier than translating Fortran to C, or C to Java.

The best course is where you can start seeing some output early on. This keeps you motivated. I was a happy camper when I could look and listen to the modem using Audacity. Once you see progress, you can’t give up, and want to press on.

Steve/k5okc

Reading Further

The Bit Exact Fairy Tale is a story of fixed point porting. Writing this helped me vent a lot of steam at the time – I’d just left a company that was really good at messing up these sorts of projects.

Modems for HF Digital Voice Part 1 and Part 2.

The cohpsk_frame_design spreadsheet includes some design calculations on the OFDM modem and a map of where the data and pilot symbols go in time and frequency.

Reducing FDMDV Modem Memory is an example of using automated testing to port an earlier HF modem to the SM1000. In this case the goal was to reduce memory consumption without breaking anything.

Fixed Point Scaling – Low Pass Filter example – is consistently one of the most popular posts on this blog. It’s a worked example of a fixed point port of a low pass filter.

,

CryptogramMozilla's Guide to Privacy-Aware Christmas Shopping

Mozilla reviews the privacy practices of Internet-connected toys, home accessories, exercise equipment, and more.

Rondam RamblingsWhy Abortion is (not) Immoral: a followup

This is a followup to my previous post, "A Review of 'Why Abortion is Immoral'".  I want to follow up for two reasons.  First, in my original post I made a serious mistake, which I want to acknowledge, and also explain why I don't think the mistake impacts the overall validity of my original argument.  Second, Peter Donis introduced an interesting new wrinkle in the comments to that post, which I

,

Rondam RamblingsA review of "Why Abortion is Immoral"

Commenter Publius pointed me to this paper by Don Marquis, which advances a secular argument that abortion is immoral.  It's a good paper with an unusually well-reasoned -- though nonetheless incorrect -- argument.  I recommend reading it.  Finding the flaw in Marquis's argument makes an interesting and worthwhile exercise.  Seriously, go read it.  I'll wait. Publius posted this link in the

,

CryptogramFriday Squid Blogging: Fake Squid Seized in Cambodia

Falsely labeled squid snacks were seized in Cambodia. I don't know what food product it really was.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Cory DoctorowHow to get a signed, personalized copy of any of my books, shipped anywhere in the world!

The kind folks at Dark Delicacies, my local specialist horror bookstore here in beautiful Burbank, California have volunteered to fill orders for my novels; since they’re walking distance from my front door, I’ll be popping in there a couple of times every week between now and Xmas to sign and inscribe any orders you place; they make fabulous gifts and also excellent firelighters! Call them at +1-818-556-6660, or email darkdel@darkdel.com.

Krebs on SecurityName+DOB+SSN=FAFSA Data Gold Mine

KrebsOnSecurity has sought to call attention to online services which expose sensitive consumer data if the user knows a handful of static details about a person that are broadly for sale in the cybercrime underground, such as name, date of birth, and Social Security Number. Perhaps the most eye-opening example of this is on display at fafsa.ed.gov, the Web site set up by the U.S. Department of Education for anyone interested in applying for federal student financial aid.

Update, Nov. 28, 12:34 p.m. ET: The Education Department says not all of the data elements mentioned below are accessible on a FAFSA applicant if someone merely knows the static details about that person. Read on for their response to this story.

Original story:

Short for the Free Application for Federal Student Aid, FAFSA is an extremely lengthy and detailed form required at all colleges that accept and award federal aid to students.

Visitors to the login page for FAFSA have two options: Enter either the student’s FSA ID and password, or choose “enter the student’s information.” Selecting the latter brings up a prompt to enter the student’s first and last name, followed by their date of birth and Social Security Number.

Anyone who successfully supplies that information on a student who has applied for financial aid through FAFSA then gets to see a virtual colonoscopy of personal information on that individual and their family’s finances — including almost 200 different data elements.

The information returned includes all of these data fields:

1. Student’s Last Name:
2. Student’s First Name:
3. Student’s Middle Initial:
4. Student’s Permanent Mailing Address:
5. Student’s Permanent City:
6. Student’s Permanent State:
7. Student’s Permanent ZIP Code:
8. Student’s Social Security Number:
9. Student’s Date of Birth:
10. Student’s Telephone Number:
11. Student’s Driver’s License Number:
12. Student’s Driver’s License State:
13. Student’s E-mail Address:
14. Student’s Citizenship Status:
15. Student’s Alien Registration Number:
16. Student’s Marital Status:
17. Student’s Marital Status Date:
18. Student’s State of Legal Residence:
19. Was Student a Legal Resident Before January 1, 2012?
20. Student’s Legal Residence Date:
21. Is the Student Male or Female?
22. Register Student With Selective Service System?
23. Drug Conviction Affecting Eligibility?
24. Parent 1 Educational Level:
25. Parent 2 Educational Level:
26. High School or Equivalent Completed?
27a. Student’s High School Name:
27b. Student’s High School City:
27c. Student’s High School State:
28. First Bachelor’s Degree before 2017-2018 School Year?
29. Student’s Grade Level in College in 2017-2018:
30. Type of Degree/Certificate:
31. Interested in Work-study?
32. Student Filed 2015 Income Tax Return?
33. Student’s Type of 2015 Tax Form Used:
34. Student’s 2015 Tax Return Filing Status:
35. Student Eligible to File a 1040A or 1040EZ?
36. Student’s 2015 Adjusted Gross Income:
37. Student’s 2015 U.S. Income Tax Paid:
38. Student’s 2015 Exemptions Claimed:
39. Student’s 2015 Income Earned from Work:
40. Spouse’s 2015 Income Earned from Work:
41. Student’s Total of Cash, Savings, and Checking Accounts:
42. Student’s Net Worth of Current Investments:
43. Student’s Net Worth of Businesses/Investment Farms:
44a. Student’s Education Credits:
44b. Student’s Child Support Paid:
44c. Student’s Taxable Earnings from Need-Based Employment Programs:
44d. Student’s College Grant and Scholarship Aid Reported in AGI:
44e. Student’s Taxable Combat Pay Reported in AGI:
44f. Student’s Cooperative Education Earnings:
45a. Student’s Payments to Tax-Deferred Pensions & Retirement Savings:
45b. Student’s Deductible Payments to IRA/Keogh/Other:
45c. Student’s Child Support Received:
45d. Student’s Tax Exempt Interest Income:
45e. Student’s Untaxed Portions of IRA Distributions:
45f. Student’s Untaxed Portions of Pensions:
45g. Student’s Housing, Food, & Living Allowances:
45h. Student’s Veterans Noneducation Benefits:
45i. Student’s Other Untaxed Income or Benefits:
45j. Money Received or Paid on Student’s Behalf:
46. Student Born Before January 1, 1994?
47. Is Student Married?
48. Working on Master’s or Doctorate in 2017-2018?
49. Is Student on Active Duty in U.S. Armed Forces?
50. Is Student a Veteran?
51. Does Student Have Children He/She Supports?
52. Does Student Have Dependents Other than Children/Spouse?
53. Parents Deceased?/Student Ward of Court?/In Foster Care?
54. Is or Was Student an Emancipated Minor?
55. Is or Was Student in Legal Guardianship?
56. Is Student an Unaccompanied Homeless Youth as Determined by High School/Homeless Liaison?
57. Is Student an Unaccompanied Homeless Youth as Determined by HUD?
58. Is Student an Unaccompanied Homeless Youth as Determined by Director of Homeless Youth Center?
59. Parents’ Marital Status:
60. Parents’ Marital Status Date:
61. Parent 1 (Father’s/Mother’s/Stepparent’s) Social Security Number:
62. Parent 1 (Father’s/Mother’s/Stepparent’s) Last Name:
63. Parent 1 (Father’s/Mother’s/Stepparent’s) First Name Initial:
64. Parent 1 (Father’s/Mother’s/Stepparent’s) Date of Birth:
65. Parent 2 (Father’s/Mother’s/Stepparent’s) Social Security Number:
66. Parent 2 (Father’s/Mother’s/Stepparent’s) Last Name:
67. Parent 2 (Father’s/Mother’s/Stepparent’s) First Name Initial:
68. Parent 2 (Father’s/Mother’s/Stepparent’s) Date of Birth:
69. Parents’ E-mail Address:
70. Parents’ State of Legal Residence:
71. Were Parents Legal Residents Before January 1, 2012?
72. Parents’ Legal Residence Date:
73. Parents’ Number of Family Members in 2017-2018:
74. Parents’ Number in College in 2017-2018 (Parents Excluded):
75. Parents Received Medicaid or Supplemental Security Income?
76. Parents Received SNAP?
77. Parents Received Free/Reduced Price Lunch?
78. Parents Received TANF?
79. Parents Received WIC?
80. Parents Filed 2015 Income Tax Return?
81. Parents’ Type of 2015 Tax Form Used:
82. Parents’ 2015 Tax Return Filing Status:
83. Parents Eligible to File a 1040A or 1040EZ?
84. Is Parent a Dislocated Worker?
85. Parents’ 2015 Adjusted Gross Income:
86. Parents’ 2015 U.S. Income Tax Paid:
87. Parents’ 2015 Exemptions Claimed:
88. Parent 1 (Father’s/Mother’s/Stepparent’s) 2015 Income Earned from Work:
89. Parent 2 (Father’s/Mother’s/Stepparent’s) 2015 Income Earned from Work:
90. Parents’ Total of Cash, Savings, and Checking Accounts:
91. Parents’ Net Worth of Current Investments:
92. Parents’ Net Worth of Businesses/Investment Farms:
93a. Parents’ Education Credits:
93b. Parents’ Child Support Paid:
93c. Parents’ Taxable Earnings from Need-Based Employment Programs:
93d. Parents’ College Grant and Scholarship Aid Reported in AGI:
93e. Parents’ Taxable Combat Pay Reported in AGI:
93f. Parents’ Cooperative Education Earnings:
94a. Parents’ Payments to Tax-Deferred Pensions & Retirement Savings:
94b. Parents’ Deductible Payments to IRA/Keogh/Other:
94c. Parents’ Child Support Received:
94d. Parents’ Tax Exempt Interest Income:
94e. Parents’ Untaxed Portions of IRA Distributions:
94f. Parents’ Untaxed Portions of Pensions:
94g. Parents’ Housing, Food, & Living Allowances:
94h. Parents’ Veterans Noneducation Benefits:
94i. Parents’ Other Untaxed Income or Benefits:
95. Student’s Number of Family Members in 2017-2018:
96. Student’s Number in College in 2017-2018:
97. Student Received Medicaid or Supplemental Security Income?
98. Student Received SNAP?
99. Student Received Free/Reduced Price Lunch?
100. Student Received TANF?
101. Student Received WIC?
102. Is Student or Spouse a Dislocated Worker?
103a. First Federal School Code:
103b. First Housing Plans:
103c. Second Federal School Code:
103d. Second Housing Plans:
103e. Third Federal School Code:
103f. Third Housing Plans:
103g. Fourth Federal School Code:
103h. Fourth Housing Plans:
103i. Fifth Federal School Code:
103j. Fifth Housing Plans:
103k. Sixth Federal School Code:
103l. Sixth Housing Plans:
103m. Seventh Federal School Code:
103n. Seventh Housing Plans:
103o. Eighth Federal School Code:
103p. Eighth Housing Plans:
103q. Ninth Federal School Code:
103r. Ninth Housing Plans:
103s. Tenth Federal School Code:
103t. Tenth Housing Plans:
104. Date Completed:
105. Signed By:
106. Preparer’s Social Security Number:
107. Preparer’s Employer Identification Number (EIN):
108. Preparer’s Signature:

According to the Education Department, nearly 20 million students filled out this form in the 2015/2016 application cycle.

Update: The process described above was based on a demonstration this author saw while sharing a screen with a KrebsOnSecurity reader who had a family member apply for aid through FAFSA. But an Education Department spokesperson took strong exception to my experience, saying that while someone armed with an applicant’s SSN and date of birth would be able to view some of the less sensitive data elements related to an application that has already been submitted and processed, seeing the more sensitive data requires an additional authentication step.

The spokesperson said the data is displayed across several pages that require manual advancement, and that before the pages with financial data are shown the visitor is prompted to supply a username and password that all users are required to create when they start the application process. The agency said that without those credentials, the system should not display the rest of the data.

In cases where a student has saved but not completed an application, the spokesperson said, the applicant is prompted to create a “save key,” or temporary password that needs to be supplied before the financial data is displayed.

Original story: What indications are there that ID thieves might already be aware of this personal data treasure trove? In March 2017, the Internal Revenue Service (IRS) disabled an automated tool on its Web site that was used to help students and their families apply for federal financial aid — citing evidence that identity thieves were abusing it to siphon data used to commit tax refund fraud with the IRS.

The IRS found that identity thieves were abusing the automated tool — which pulled data directly from the FAFSA Web site — in order to learn the adjusted gross income (AGI) of applicant families. The AGI is crucial to successfully filing a tax refund request in someone’s name at the IRS.

On Oct. 1, the IRS brought its FAFSA data retrieval tool back online, adding additional authentication measures. In addition, the AGI data is now masked when it is electronically transferred into the FAFSA application.

Think it’s hard to find someone’s SSN and DOB? Think again. There are a multitude of Web sites on the open Internet and Dark Web alike that sell access to SSN and DOB data on hundreds of millions of Americans — all for the price of about $4-5 worth of Bitcoin.

Somehow, we need to move away from allowing online access to such a deep vein of consumer data just by supplying static data points that are broadly compromised in a thousand breaches and on sale very cheaply in the cybercrime underground.

Until that happens, anyone who is applying for federal student aid or has a child who applied should strongly consider taking several steps:

-Get on a schedule to request a free copy of your credit report. By law, consumers are entitled to a free copy of their report from each of the major bureaus once a year. Put it on your calendar to request a copy of your file every three to four months, each time from a different credit bureau. Dispute any unauthorized or suspicious activity. This is where credit monitoring services are useful: Part of their service is to help you sort this out with the credit bureaus, so if you’re signed up for credit monitoring make them do the hard work for you.

-Consider placing a “security freeze” on your credit files with the major credit bureaus. See this tutorial about why a security freeze — also known as a “credit freeze,” may be more effective than credit monitoring in blocking ID thieves from assuming your identity to open up new lines of credit. Keep in mind that having a security freeze on your credit file won’t stop thieves from committing tax refund fraud in your name; the only real defense against that is to file your taxes as early as possible — before the fraudsters can do it for you.

-Monitor, then freeze. Take advantage of any free credit monitoring available to you, and then freeze your credit file with the four major bureaus. Instructions for doing that are here.

Google Adsense[Infographic] Are you ready for the holiday season?

The holiday season is the busiest time of the year for shoppers and, therefore, for advertisers as well. The end of 2016 was a record-breaker for advertiser spend, and this year is projected to grow even more. Now is the right time to get your website and ads ready so that you can make the most of this busy season.

Check out our infographic with important stats and tips for you to get ready for this special time of the year:

(to view from mobile download it here)



Find out more about native ads, viewability and test your site speed in PageSpeed Insights.

You can also watch Google AdSense On Air with tips on how to get your site optimized for holidays.

Posted by: the AdSense Team

Not yet an AdSense user? Sign up now!

Worse Than FailureClassic WTF: The Shadow over ShipPoint

What's this? A re-run of a spooky article on the day after Thanksgiving? Well, what's spookier than Black Friday? --Remy
Original

In the winter of 2012-13, I was fired from the ill-rumored e-commerce company known as ShipPoint. Though I remained stalwart to the end, the wretched darkness embodied in ShipPoint's CTO and his twisted worshipers dogs me still, a malignant growth choking the very life out of my career aspirations. And although I fight every day to forget, to leave my time at ShipPoint behind, I still awaken in the uttermost black of night, shuddering, my mind wrenching itself free from nightmare's grip. I record this grim history only because I fear I may soon slip irredeemably into madness.

It was 2011 when, freshly downsized, I found myself wandering the LinkedIn Jobs Directory, seemingly in vain. I had almost made up my mind to hang out my shingle as a consultant when I received an email from a recruiter. I don't remember his name, nor the firm that he claimed to represent, only that he demanded that we meet in person; apparently he was privy to a lucrative opportunity whose details could only be revealed face to face. While suspicious, I must admit I was gripped by curiosity — tinged, I must now believe, with a touch of the wild. I met the recruiter, a grim, swarthy fellow of furtive glance and questionable heritage, in a refuse-choked alley far from the central business district. It was there, amidst the dumpsters and commercial-grade recycling bins, that I first heard in a grating croak the name whose syllables I would one day shudder to write.

"ShipPoint," he said in response to my question about their development environment, "is dedicated to becoming cutting-edge with their development tools and processes. They use Subversion, and I hear they have a focus on quality and testing." I proceeded through a phone interview, and then on to meet James Akeley, ShipPoint's development manager. Imploring that I call him "Jimmy", he proclaimed his easy-going attitude to be matched only by his and ShipPoint's commitment to quality. Though the pay was a bit on the low side, I accepted his offer. I was to start the following Monday, taking the train and then a bus to the ugly one-story building of nondescript gray that contained ShipPoint's offices, a geriatric hulk muttering tonelessly to itself as it wallowed in its crumbling and almost-abandoned office park by the seashore.

My first day at ShipPoint began as prosaically as one could expect with a simple task that would lead me through their codebase. As an e-commerce provider, ShipPoint's stock in trade was web applications written using ASP.NET, and I made careful note of several places where classic code smells made themselves apparent. The team went out to lunch, as was their custom. Jimmy drove, with Jack Mason, the second-most senior developer, in the passenger seat. Sharing the back with me was Rob Carter, the company's web designer — one who would prove himself my most stalwart companion in the unguessed-at trials that lay ahead. While our lunchtime discussion was generally mundane, with only Rob expressing any interest in developing familiarity with his new associate, I found an appropriate pause in the conversation to present Jimmy and Jack with the potential problems I had detected during my brief venture into the code. Given his repeated assertions regarding dedication to quality, I expected Jimmy, at least, to be keenly interested in my discoveries. My surprise was considerable when he and Jack rebuffed me, declaring that Dan Marsh — the CTO — didn't want us to spend time refactoring code. "He and the other executives think it's a waste of time," Jimmy explained, some small measure of remorse evident in his voice, while next to him Jack nodded his head approvingly. "They want us to focus on deploying new features."

I was disappointed by this, and by the subsequent revelation that, though ShipPoint did indeed mandate Subversion for source control, Jimmy and Jack only ever copied all the files to a separate, timestamped folder before committing. While the two senior developers were hesitant to discuss their mysterious and unseen leader, I was eventually able to coax from Rob what little he knew of the enigmatic Mr. Marsh. It seemed Marsh wasn't a developer, but, after joining the company a decade prior, his possession of certain esoteric scraps of scripting knowledge qualified him as ShipPoint's sole IT person. His authority spread as the years went by, unquestioned by his superiors and the developers he eventually allowed to join his staff, until he now led all technological decision-making at ShipPoint from within the only private office on their floor, an office whose door opened by invitation only.

After several months of my attempted improvements being either stutteringly denied by Jimmy or gruffly rebuked by Jack, new allies arrived at ShipPoint. Arthur Gilman was a brave and clever youth who joined the company alongside his mentor. Walter Peaslee was a hoary old veteran who had been using .NET since the framework was in beta. If anyone could help me champion sane coding and source-management practices at ShipPoint, it was these dynamic individuals. And changes were surely needed, as the months had shown me deep-rooted stability issues that would cause pages to crash or take minutes to load. It had likewise become clear that the senior developers were unwilling or uninterested in tackling these issues, holding up Mr. Marsh's desire for them to complete his endless list of superficial improvements as reason to hack as quickly as possible, leaving Rob and me to fix up the messes they left behind them.

At Christmastime, a chink in the armor appeared. Jimmy announced that he was leaving the company, taking his passive deference to Mr. Marsh with him. I decided to take action, and, with the idealistic Arthur at my heels, endeavored to implement a few changes. First, I would set up a bug-tracking system and then begin using Subversion properly, setting my protégé to create branches that would let the team collaborate without creating multiple copies of the application's source. Jack agreed to the changes in principle, and victory seemed close at hand. Only no sooner had Arthur gone live with the Subversion changes than a blood-curdling cry was heard from Jack's cubicle! His files, Jack insisted, were gone, and he accused us of the most sordid and calculated mayhem, insisting that we sought to discredit him before Mr. Marsh. Not waiting for Arthur to explain that the files had simply been moved to a branch folder, Jack stormed into the CTO's office. By the time we had returned perplexed to our workstations, a directive to return the source control repository to its previous state awaited us, bearing the CTO's imprimatur. This was merely a prelude of things to come as repeated future attempts to sanitize our source-control procedures (and reclaim the gigabytes of storage consumed by the many redundant copies of our source code) were met with similar fear, uncertainty, and doubt from Jack, rapidly followed by executive sanction.

In the venerable person of Walter Peaslee, I was sure a sane counterpart had been found to our volatile senior developer. But the hand of Marsh proved subtle. When attempting to bring Walter's vast experience to bear on our DevOps dilemma, great was my surprise when I found him languishing on a project to produce a report for ShipPoint's CEO. Harbored as the chief executive was on far alien shores, all features of the report required Mr. Marsh's approval. With a sigh that seemed to carry a weight beyond even his advanced years, Walter explained that the CTO would lead him on for weeks regarding the simplest decision, often ignoring multiple emails. With his calendar eternally full to ward off meetings, Mr. Marsh would eventually return terse feedback along the lines of "this is the wrong color", disregarding the actual functionality.

I was saddened, but not surprised, when Walter graciously notified me that he would be submitting his resignation at the end of the week. After being regaled with the sanity-challenging truth of his experience working with Mr. Marsh, I had not the heart to try to convince him to stay. Indeed, I wondered if he might have awakened to a reality that I, too, should embrace. Arthur, on the other hand, being young and impressionable perhaps to a fault, was distracted by a new assignment: the task of utterly redesigning the central UI of our flagship application. It was here, in this project, that the forces of order and of chaos manifest at the heart of ShipPoint would collide in a last, terrible sortie. My support had meanwhile been secured by a timely email from Mr. Marsh, promising to install me as the lead of a new team of developers, since, he astutely pointed out amidst aggravating hints that the two shared some dark and malignant tradition, Jack was content to be a lone wolf. I must admit that the appeal to my leadership aspirations led me to lapse into a period of content productivity, and as the months went by I mostly avoided Jack and his hasty, problematical contributions to the codebase wherever I could, bringing as much improvement to the features I implemented as possible without incurring the wrath of Jack or the dreaded and still unseen Mr. Marsh.

Arthur, alas, had no choice but to collaborate with Jack, who effectively owned the backend of the application he was redesigning. While I thought I had coached the young man to weather this abominable partnership, the elder developer proved maddeningly cunning. While Arthur attempted to coordinate front-end and back-end features in the hurried sprints that Mr. Marsh had demanded, each release was plagued by wave after wave of new bugs, lapping like a foetid, corrosive black tide at a bleak, doomed shore. It was only through Rob's fortuitous glimpse of an email seen over Jack's shoulder that we determined Mr. Marsh had been secretly communicating a list of shadow features he had apparently sold to management, and Jack was hacking code at a maddening pace to deliver said features in each release. It was with grim resignation that I entered the repository and inspected the terrible results. I perceived that Arthur's excellent front-end work had been reduced to little more than window-dressing, twisted into whatever shapes Marsh and Jack required to realize their fiendish goals. When I opened the solution containing Jack's jealously-guarded back-end code, obfuscated though it was behind incomprehensible names like "Solution1" and "MvcProject4", only then did I begin to grasp the horror that had taken root beneath the facade of a UI redesign. I saw them in a limitless stream—flopping, hopping, croaking, bleating—surging inhumanly through the spectral moonlight in a grotesque, malignant saraband of fantastic nightmare! That interminable list of poorly-implemented features, its shapeless mass extending blasphemous profusions in all directions throughout the code. It seemed to surge and breathe even as I watched...

It was with a mind gone almost entirely over to the feverish that I found myself composing email after email to Mr. Marsh, laying bare the deleterious effect that this noxious circumvention of procedure was having on our product. Rob was good enough to support this dangerous endeavor, and together we believed we may have been turning the tide of the CTO's sentiment against Jack, whose bland reassurances had apparently blinded Mr. Marsh to the depth of the horror. This last flicker of naiveté on our part was efficiently snuffed when Arthur's employment was terminated without notice. Though no word from Mr. Marsh was forthcoming, Jack's smug explanation was that the youth was slowing down the delivery of critical new features, and, worse, his incompetent code changes were found to be at the root of the catastrophic server instabilities. Perceiving the tolling of a grim bell to have begun, Rob informed me he was thinking of getting out of the technology game altogether, returning to the simple pastoral life he had known while running an organic fruit stand outside a nearby beachfront town. I tried to reassure him that we would find a way to prevail, but in truth my own hope was waning. ShipPoint and its uncouth stewards had ground my desire to write excellent code and promote best practices down to their merest remainder. Deep within me a malaise had taken root, and I knew when I looked hard into the glass that the end was drawing near.

The harbinger came, as it so often does, with a revocation: came a day that Rob needed me to reconfigure something for him on the Production server that had long been my charge, when, upon attempting to connect, I was rebuffed by the server's protestations of an incorrect password. Under my questioning, Jack hesitantly and stutteringly informed me that the password had changed and he'd forgotten to update me. No sooner had he left to fetch the promised credential than my phone rang. Shouldering the receiver, I heard the voice of the spectral Mr. Marsh for the first time. Never have the words "Could you pop by my office for a sec?" been uttered in such a sardonic and inhuman tone as to induce in the listener a shocking wave of panic fear. I felt numb as leaden limbs carried me to the unopened door. Pulled into the dark recesses the portal revealed, I came face to face with unbounded horrors that defy description. Let me only say that the stated reason for my termination was "a change of corporate direction towards a smaller, more agile development team".

Though I survived my meeting with the terrible Mr. Marsh, I was rendered practically an invalid by my abruptly-curtailed employment at ShipPoint, and made my way to a relative's country home to engage in a lengthy convalescence. I received an email from Rob soon after my firing, informing me that he had left the company and was exiting the industry altogether, going so far as to delete his LinkedIn profile. The horrifying dreams in which I blindly shoveled hastily-implemented code into a branchless Subversion repository while pursued down lightless corridors by a shapeless unseen terror had begun to pass when the first job posting appeared. ShipPoint was calling, its unspeakable tendrils reaching out across the vast cosmic gulfs of the internet to ensnare unwary developers. And while I have sworn never to take a job without assurance of sane development practices again, I do not know that my programmer's soul will ever be entirely free of its taint...

So far I have not yet deleted my LinkedIn profile as Rob did. The tense extremes of horror are lessening, and I feel queerly drawn toward the job postings instead of fearing them. I see and do strange things in Subversion, and commit my changes with a kind of exaltation instead of terror. Stupendous and unheard-of splendors await me in Marsh's cube farm, and I shall seek them soon. Iä-R'lyeh! Codethulhu fhtagn! Iä! Iä! No, I shall not delete my LinkedIn profile—I cannot be made to delete my LinkedIn profile!

I shall coax Rob back into software development, and together we shall go to marvel-shadowed ShipPoint. We shall take the bus out to that brooding industrial park by the sea and dive down through black abysses of code to the Cyclopean and many-columned database, and in that lair of the Expert Beginners we shall dwell amidst wonder and glory for ever.

Photo credit: gagilas / Foter / CC BY-SA


Planet Linux AustraliaOpenSTEM: This Week in HASS – term 4, week 8

Well, the end of term is in sight! End of year reporting is in full swing and the Understanding Our World® activities are designed to keep students engaged whilst minimising requirements for teachers, especially over these critical weeks. The current activities for all year levels are tailored to require minimal teaching, allowing teacher aides and […]

,

Worse Than FailureEditor's Soapbox: Give Thanks for Well Routed Packets

It’s Thanksgiving here in the US, so we’re taking a long weekend. In lieu of a more traditional “from the archives” post, I’m going to give thanks.

You know what I’m thankful for? I’m thankful that data packets on the Internet are routed and handled the same way, regardless of which network originated them, nor which network is their destination, nor what they may contain. You could say that networks are… neutral about packets.

A parody of the Gadsden flag that is an ethernet cable poised like a snake

A few years ago, the FCC enshrined this common sense into its regulatory framework. We were all pretty happy about it, and were optimistic that it was done. Unfortunately, it’s never over, and the new management at the FCC wants to reverse that, and plans to vote about it in a few weeks.

Remember: prior to making Network Neutrality the regulated standard, network operators largely (but not completely) followed the rule anyway. Network Neutrality was the default, and then the bean-counters recognized an unexploited revenue stream (why should Netflix get to send data to our customers without paying us for the privilege?). The Internet worked under Network Neutrality, and the FCC only needed to enforce it by rule because network operators wanted to change the playing field.

In any case, if you’re thankful for an Internet that works, between gorging yourself in typical American fashion and arguing with your racist uncle, take a few minutes to do something about network neutrality.

I’d be ever so thankful if you did.
