Planet Russell


Planet Debian: Dirk Eddelbuettel: #14: Finding Binary .deb Files for CRAN Packages

Welcome to the fourteenth post in the rationally rambling R rants series, or R4 for short. The last two posts were concerned with faster installation. First, we showed how ccache can speed up (re-)installation. This was followed by a second post on faster installation via binaries.

This last post immediately sparked some follow-up. Replying to my tweet about it, David Smith wondered how to combine binary and source installation (tl;dr: it is hard as you need to combine two package managers). Just this week, Max Ogden wondered how to install CRAN packages as binaries on Linux, and Daniel Nuest poked me on GitHub as part of his excellent containerit project as installation of binaries would of course also make Docker container builds much faster. (tl;dr: Oh yes, see below!)

So can one? Sure. We have a tool. But first the basics.

The Basics

Packages for a particular distribution are indexed by a packages file for that distribution. This is not unlike CRAN using top-level PACKAGES* files. So in principle you could just fetch those packages files, parse and index them, and then search them. In practice that is a lot of work as Debian and Ubuntu now have several tens of thousands of packages.
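
Just to make the idea concrete, here is a purely illustrative way of doing (part of) it by hand for a single release, component and architecture; the mirror URL is merely an example:

$ curl -s http://archive.ubuntu.com/ubuntu/dists/xenial/universe/binary-amd64/Packages.gz | \
    zcat | grep -c '^Package: r-cran-'

Multiply that by every component, architecture and release you care about and it becomes clear why this does not scale well.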

So it is better to use the distro tool. In my use case on .deb-based distros, this is apt-cache. Here is a quick example for the (Ubuntu 17.04) server on which I type this:

$ sudo apt-get update -qq            ## suppress stdout display
$ apt-cache search r-cran- | wc -l
419
$

So a very vanilla Ubuntu installation has "merely" 400+ binary CRAN packages. Nothing to write home about (yet) -- but read on.

cran2deb4ubuntu, or c2d4u for short

A decade ago, I was involved in two projects to turn all of CRAN into .deb binaries. We had a first ad-hoc predecessor project, and then a (much better) 'version 2' thanks to the excellent Google Summer of Code work by Charles Blundell (and mentored by me). I ran with that for a while and carried at the peak about 2500 binaries or so. And then my controlling db died, just as I visited CRAN to show it off. Very sad. Don Armstrong ran with the code and rebuilt it on better foundations and had for quite some time all of CRAN and BioC built (peaking at maybe 7k packages). Then his RAID died. The surviving effort is the one by Michael Rutter who always leaned on the Launchpad PPA system to build his packages. And those still exist and provide a core of over 10k packages (but across different Ubuntu flavours, see below).

Using cran2deb4ubuntu

In order to access c2d4u you need an Ubuntu system. For example my Travis runner script does

# Add marutter's c2d4u repository, (and rrutter for CRAN builds too)
sudo add-apt-repository -y "ppa:marutter/rrutter"
sudo add-apt-repository -y "ppa:marutter/c2d4u"

After that one can query apt-cache as above, but take advantage of a much larger pool with over 3500 packages (see below). The add-apt-repository command does the Right Thing (TM) in terms of both getting the archive key, and adding the apt source entry to the config directory.

How about from R? Sure, via RcppAPT

Now, all this command-line business is nice. But can we do all this programmatically from R? Sort of.

The RcppAPT package interfaces the libapt library, and provides access to a few of its functions. I used this feature when I argued (unsuccessfully, as it turned out) for a particular issue concerning Debian and R upgrades. But that is water under the bridge now, and the main point is that "yes we can".
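
As a quick sketch (and assuming the hasPackages() helper exported by RcppAPT), a one-liner from the shell could look like this, querying the local apt cache for two binary packages:

$ Rscript -e 'print(RcppAPT::hasPackages(c("r-cran-rcpp", "r-cran-rstan")))'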

In Docker: r-apt

Within the Rocker Project we built on top of RcppAPT by providing a particular class of containers for different Ubuntu releases which all contain i) RcppAPT and ii) the required apt source entry for Michael's repos.

So now we can do this

$ docker run --rm -ti rocker/r-apt:xenial /bin/bash -c 'apt-get update -qq; apt-cache search r-cran- | wc -l'
3525
$

This fires up the corresponding Docker container for the xenial (ie 16.04 LTS) release, updates the apt indices and then searches for r-cran-* packages. And it seems we have a little over 3500 packages. Not bad at all (especially once you realize that this skews strongly towards the more popular packages).

Example: An rstan container

A little while ago a seemingly very frustrated user came to Carl and myself and claimed that our Rocker Project sucketh because building rstan was all but impossible. I don't have the time, space or inclination to go into details, but he was just plain wrong. You do need to know a little about C++, package building, and more to do this from scratch. Plus, there was a long-standing issue with rstan and newer Boost (which also included several workarounds).

Be that as it may, it serves as a nice example here. So the first question: is rstan packaged?

$ docker run --rm -ti rocker/r-apt:xenial /bin/bash -c 'apt-get update -qq; apt-cache show r-cran-rstan'
Package: r-cran-rstan
Source: rstan
Priority: optional
Section: gnu-r
Installed-Size: 5110
Maintainer: cran2deb4ubuntu <cran2deb4ubuntu@gmail.com>
Architecture: amd64
Version: 2.16.2-1cran1ppa0
Depends: pandoc, r-base-core, r-cran-ggplot2, r-cran-stanheaders, r-cran-inline, r-cran-gridextra, r-cran-rcpp,\
   r-cran-rcppeigen, r-cran-bh, libc6 (>= 2.14), libgcc1 (>= 1:4.0), libstdc++6 (>= 5.2)
Filename: pool/main/r/rstan/r-cran-rstan_2.16.2-1cran1ppa0_amd64.deb
Size: 1481562
MD5sum: 60fe7cfc3e8813a822e477df24b37ccf
SHA1: 75bbab1a4193a5731ed105842725768587b4ec22
SHA256: 08816ea0e62b93511a43850c315880628419f2b817a83f92d8a28f5beb871fe2
Description: GNU R package "R Interface to Stan"
Description-md5: c9fc74a96bfde57f97f9d7c16a218fe5

$

It would seem so. With that, the following very minimal Dockerfile is all we need:

## Emacs, make this -*- mode: sh; -*-

## Start from xenial
FROM rocker/r-apt:xenial

## This handle reaches Carl and Dirk
MAINTAINER "Carl Boettiger and Dirk Eddelbuettel" rocker-maintainers@eddelbuettel.com

## Update and install rstan
RUN apt-get update && apt-get install -y --no-install-recommends r-cran-rstan

## Make R the default
CMD ["R"]

In essence, the Dockerfile executes one command: install rstan, but from binary, taking care of all dependencies. And lo and behold, it works as advertised:

$ docker run --rm -ti rocker/rstan:local Rscript -e 'library(rstan)'
Loading required package: ggplot2
Loading required package: StanHeaders
rstan (Version 2.16.2, packaged: 2017-07-03 09:24:58 UTC, GitRev: 2e1f913d3ca3)
For execution on a local, multicore CPU with excess RAM we recommend calling
rstan_options(auto_write = TRUE)
options(mc.cores = parallel::detectCores())
$

So there: installing from binary works, takes care of dependencies, is easy and as an added bonus even faster. What's not to like?

(And yes, a few of us are working on a system to have more packages available as binaries, but it may take another moment...)

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Sociological Images: Listen Up! Great Social Science Podcasts

Photo via Oli (Flickr CC)

Whether you’re taking a long flight, taking some time on the treadmill, or just taking a break over the holidays, ’tis the season to catch up on podcasts. Between long-running hits and some strong newcomers this year, there has never been a better time to dive into the world of social science podcasts. While we bring the sociological images, do your ears a favor and check these out.

Also, this list is far from comprehensive. If you have tips for podcasts I missed, drop a note in the comments!

New in 2017

If you’re new to sociology, or want a more “SOC 101” flavor, The Social Breakdown is perfect for you. Hosts Penn, Ellen, and Omar take a core sociological concept in each episode and break it down, offering great examples both old and new (and plenty of sass). Check out “Buddha Heads and Crosses” for a primer on cultural appropriation from Bourdieu to Notorious B.I.G.

Want to dive deeper? The Annex is at the cutting edge of sociology podcasting. Professors Joseph Cohen, Leslie Hinkson, and Gabriel Rossman banter about the news of the day and bring you interviews and commentary on big ideas in sociology. Check out the episode on Conspiracy Theories and Dover’s Greek Homosexuality for—I kid you not—a really entertaining look at research methods.

Favorite Shows Still Going Strong

In The Society Pages’ network, Office Hours brings you interviews with leading sociologists on new books and groundbreaking research. Check out their favorite episode of 2017: Lisa Wade on American Hookup!

Feeling wonky? The Scholars Strategy Network’s No Jargon podcast is a must-listen for the latest public policy talk…without jargon. Check out recent episodes on the political rumor mill and who college affirmative action policies really serve.

I was a latecomer to The Measure of Everyday Life this year, finding it from a tip on No Jargon, but I’m looking forward to catching up on their wide range of fascinating topics. So far, conversations with Kieran Healy on what we should do with nuance and the resurrection of typewriters have been wonderful listens.

And, of course, we can’t forget NPR’s Hidden Brain. Tucked away in their latest episode on fame is a deep dive into inconspicuous consumption and the new, subtle ways of wealth in America.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Friday Squid Blogging: Gonatus Squid Eating a Dragonfish

There's a video:

Last July, Choy was on a ship off the shore of Monterey Bay, looking at the video footage transmitted by an ROV many feet below. A Gonatus squid was spotted sucking off the face of a "really huge dragonfish," she says. "It took a little while to figure out what's going on here, who's eating whom, how is this going to end?" (The squid won.)

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Planet Debian: Ben Hutchings: BPF security issues in Debian

Since Debian 9 "stretch", we've shipped a Linux kernel supporting the "enhanced BPF" feature which allows unprivileged user space to upload code into the kernel. This code is written in a restricted language, but one that's much richer than the older "classic" BPF. The kernel verifies that the code is safe (doesn't loop, only accesses memory it is supposed to, etc.) before running it. However, this means that bugs in the verifier could allow unsafe programs to compromise the kernel's security.

Unfortunately, Jann Horn and others recently found many such bugs in Linux 4.14, and some of them affect older versions too. As a mitigation, consider setting the sysctl kernel.unprivileged_bpf_disabled=1. Updated packages will be available shortly.
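
For reference, applying that mitigation could look something like this (the file name under /etc/sysctl.d is just an example):

$ sudo sysctl -w kernel.unprivileged_bpf_disabled=1
$ echo kernel.unprivileged_bpf_disabled=1 | sudo tee /etc/sysctl.d/90-disable-unprivileged-bpf.conf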

Update: There is a public exploit that uses several of these bugs to get root privileges. It doesn't work as-is on stretch with the Linux 4.9 kernel, but is easy to adapt. I recommend applying the above mitigation as soon as possible to all systems running Linux 4.4 or later.

Planet Debian: Shirish Agarwal: 24×7 shopping in Maharashtra, Learning and Economics

Dear Friends,

My broadband connectivity (ADSL) by BSNL was down for a month and a bit more, hence I couldn’t post any blogs. On account of road-work there had been digging, and numerous incidents of thick copper cables being stolen, as they can be resold to different people and even melted down to extract the copper. Prices of optical fiber for communication have dropped tremendously; the only expensive and tricky part is splicing and termination of the strands. There is a lobby which has the clout and incentives to continue with this outdated and outmoded technology, which is why it continues, although this takes the discussion away from the main topic.

I could have cheated and made a blog post in bits and pieces, something I hope to do this week-end but there has been some encouraging news and views which prompted me to post this blog post –

Mumbai 24/7: Shop, dine and play all night long in the city from today
and
Hotels And Restaurants In Maharashtra To Remain Open 24×7

One of the motives, apart from being part of Debconf itself which is a valuable incentive to learn new things, is to see Taiwan’s 24×7 shopping; it is the night market bit that the Taiwan team has shared, something I looked up and was a bit hooked on when I saw what it’s all about.

Of course, if things go my way, I would probably have to do a bit more research than what I have shared above.

The real meat (figure of speech) of today’s announcements was a discussion on CNBC Awaaz

I posted the youtube link above of the discussion – it is in Hindi. While the crux of the discussion was about Mumbai (I live in Pune, the neighboring city about 250-300 kms. away), the implications are for all those places which have restaurants and small kirana stores, which had been facing a lot of competition from e-tailers. One of the things being envisaged are places to eat and shop at unearthly hours at discounted rates, which will draw in people interested in such products and services. A lot of retail services which depend upon such establishments are also reckoned to grow, bringing more stability and multiplier effects to the Indian economy. Maharashtra (one of the 29 States/Provinces in India) has been a big contributor to the Indian economy over decades but hasn’t had its share of investment vis-a-vis what it gives to the national exchequer in terms of various fees and taxes. There are figures and beliefs which support the argument; I haven’t shared them as the blog post would balloon up without adding anything, but if needed I can still share them in comments.

It was also shared that it would increase tourism but that got mixed reviews that it might not unless and until liquor timings, licenses are loosened up a bit.

I would contend though that there should be a substantial increase and more flexibility in domestic tourism and businesses, as people would be able to make plans amicable to both parties (a giver and a receiver).

If one were to specifically talk about Mumbai, Marine Drive, the Grandstand, as well as some other places in Colaba and elsewhere have been/were open all night. But with this shift of policy the civic infrastructure, which already was in deficit, might come under even more pressure, while law and order would need to be beefed up and adequately trained, both of which are under strain as well. What was also shared is that this policy would end the lower-rank corruption by police officials who used to ask for protection money if shops were even a little late in closing up.

There is also a possibility of traffic congestion at night as well, but that may translate into a bit less traffic congestion in the day-time. Again, all of this is a bit of imagination and conjecture at this point in time. People like me who can’t stand Mumbai’s humidity in the day-time would find a bit more excuse to be there at night if more budget restaurants were to be open late at night.

Also, it is not a blanket thing for everybody; there are restrictions on shops in residential areas, which might be expected to be relaxed a bit once things happen in the open. One well-known name which cropped up was the ubiquitous 7-11 stores. There would certainly be a lot of interest if such convenience stores opened up all across the city/state.

I am excited to see if this happens.

Although, this had happened in India years ago, just that it was ‘illegal’ and now it is ‘legal’. When I was in college, around 1993 – 1994 (as I had also shared at DebConf in 2016), the net/web had just started in India and I was lucky to be able to see/view the net using a service called NIIT Computerdrome.

Just to be explicit, NIIT is and was a premium offering for students who wanted to learn about programming and various MS-Windows technologies, as there were already signs that IT (Information Technology) would be a disruptive force. Nowadays they have also moved into management and administration courses, as the local IT industry has yet to grow up and a lot of H1B-Visas are under the scanner.

It was a fancy name for what is now known as a cyber-cafe but we used to get net access at discounted rates. Now this place was about 4-5 kms. from my place so I had to be really careful in planning, figuring out as I had to buy coupons which had an expiry date and everything.

A couple of years later, I came to know of a service much closer to home, in the basement of a place called Sagar Arcade. All those of us who were addicted to web access, whether for porn, net technologies, net gaming, IRC or video chatting, used to throng there. At that time, NIIT had an 8 Mbps leased line, which was a big deal and still is.

While wholesale bulk bandwidth rates have hit rock-bottom, last-mile connectivity still seems to be an issue. Because of Reliance Jio’s aggressive pitches some of the retail bandwidth rates have softened up, but we still have miles to go before I could say we have adequate bandwidth. Dropped calls (on mobile and landline) are still an issue, while bandwidth tapering off every now and then seems to be endemic behavior in both public and private sector ISPs (Internet Service Providers), most of which are Tier-3 ISPs; the only Tier-1 Indian ISP I know of is the Tatas, see this FAQ as well –

World’s largest wholly owned submarine fibre network – more than 500,000 km of subsea fibre, and more than 210,000 km of terrestrial fibre
Only Tier-1 provider that is in the top five in five continents – by internet routes
Over 24% of the world’s internet routes are on Tata Communications’ network
400+ PoPs reach more than 240 countries and territories
44 data centres and co-location centres with over one million sq. ft. of space
7600 petabytes of internet traffic travels over the Tata Communications’ internet backbone each month
15+ terabits/s of international bandwidth lit capacity

– From Tata Communications FAQ .

but I came to know they are merging their end-user business with Airtel (another Tier-3 ISP), while their under-sea fibre optic cable business (see above) will still remain with them; but this again is taking us outside the current topic.

Back to topic on hand –

I am guessing that there was practically no work being done after hours so NIIT might have in-turn leased some of the capacity to the cyber-cafe.

The cybercafe owner had two rates: normal rates, which were comparable to any other cyber-cafe, and night rates (‘happy hours’, from 2300 hrs – 0500 hrs), which were half or one-fourth. In order to indulge our net curiosity/net addiction, a few of my friends and I used to go there. A few days or a couple of weeks later, a Chinese take-away and then a juice/tea/coffee shop came up to serve the cybercafe customers.

This whole setup was illegal, as according to the laws of the time no commercial establishment was allowed to remain open 24×7 (the only exceptions being railway stations, police stations, hospitals, some specific petrol pumps and medicine dispensing shops). But even in the case of medicine shops and petrol pumps there were very few who had got permission (looking back, there might have been a combination of business/political patronage to it which was not apparent when I was a teenager). I also came to know much, much later that what we were doing was illegal, as in using a commercial establishment after hours, even though it was in connivance with the owner. See The Bombay Shop Act, 1948.

Comically, the Bombay Shop Act, which has now been superseded by the Maharashtra Shops and Establishments Act 2017, was never in the syllabus of Commerce students, even when we were graduating with Business Administration as one of the optional subjects. The Act and surrounding topics should have been there in the books, with creative discussions and consultations with students being taken up. This was in 1994, a full 46 years after the Act came into being.

But as has been shared on this blog before, this is a dream which, it seems, shall not be realized at least in the immediate future.

While reading today’s newspaper I came across this editorial, which also opens up a window onto how the elitist institutions have shrunk from their collective responsibility. While it only talks about social sciences, there is another article for students of UPSC Mains which was shared by a student friend of mine. It actually took me back to the term Dismal Science, a term I came across and whose implications I understood years ago.

While it is too early to state/predict whether it would change things in Pune and Maharashtra as a whole, I am hopeful, as it would generate both direct and indirect employment. After years of jobless inflationary growth it would be a welcome departure, especially as youngsters without adequate job skills are joining the unemployed in their millions.


Filed under: Miscellenous Tagged: # Mumbai 24x7, #Business in India, #Copper Cables, #Economic Theory 19th century, #Maharashtra Shops and Establishments Act 2017, #Optical Fiber Prices in India, #planet-debian, #Social Sciences, #Tier 1 ISP, #Tier 3 ISP's, Broadband

TED: Free report: Bright ideas in business from TEDWomen 2017

At a workshop at TEDWomen 2017, the Brightline Initiative helped attendees parse the topic, “Why great ideas fail and how to make sure they don’t.” Photo: Stacie McChesney/TED

The Brightline Initiative helps leaders from all types of organizations build bridges between ideas and results. So they felt strong thematic resonance with TEDWomen 2017, which took place in New Orleans from November 1-3, and the conference theme of “Bridges.” In listening to the 50+ speakers who shared ideas, Brightline noted many that felt especially helpful for anyone who wants to work more boldly, more efficiently or more collaboratively.

We’re pleased to share Brightline’s just-released report on business ideas from the talks of TEDWomen 2017. Give it a read to find out how thinking about language can help you shake off a rut, and why a better benchmark for success might just be your capacity to form meaningful partnerships.

Get the report here >>


Cryptogram: Amazon's Door Lock Is Amazon's Bid to Control Your Home

Interesting essay about Amazon's smart lock:

When you add Amazon Key to your door, something more sneaky also happens: Amazon takes over.

You can leave your keys at home and unlock your door with the Amazon Key app -- but it's really built for Amazon deliveries. To share online access with family and friends, I had to give them a special code to SMS (yes, text) to unlock the door. (Amazon offers other smartlocks that have physical keypads).

The Key-compatible locks are made by Yale and Kwikset, yet don't work with those brands' own apps. They also can't connect with a home-security system or smart-home gadgets that work with Apple and Google software.

And, of course, the lock can't be accessed by businesses other than Amazon. No Walmart, no UPS, no local dog-walking company.

Keeping tight control over Key might help Amazon guarantee security or a better experience. "Our focus with smart home is on making things simpler for customers -- things like providing easy control of connected devices with your voice using Alexa, simplifying tasks like reordering household goods and receiving packages," the Amazon spokeswoman said.

But Amazon is barely hiding its goal: It wants to be the operating system for your home. Amazon says Key will eventually work with dog walkers, maids and other service workers who bill through its marketplace. An Amazon home security service and grocery delivery from Whole Foods can't be far off.

This is happening all over. Everyone wants to control your life: Google, Apple, Amazon...everyone. It's what I've been calling the feudal Internet. I fear it's going to get a lot worse.

Planet Debian: Olivier Berger: Safely testing my students’ PHP graded labs with docker containers

During the course of Web architecture and applications, our students had to deliver a Silex / Symfony Web app project which I’m grading.

I had initially hacked a Docker container to be able to test that the course’s lab examples and code bases provided would be compatible with PHP 5 even though the nominal environment provided in the lab rooms was PHP 7. As I’m running a recent Debian distro with PHP 7 as the default PHP installation, being able to run PHP 5 in a container is quite handy for me. Yes, PHP 5 is dead, but some students might still have remaining installs of old Ubuntus where PHP5 was the norm. As the course was based on Symfony and Silex and these would run as well on PHP 5 or 7 (provided we configured the right stuff in the composer.json), this was supposed to be perfect.

I’ve used such a container a lot for preparing the labs and it served me well. Most of the time I’ve used it to start the PHP command line interpreter from the current dir to launch the embedded Web server with “php -S”, which is the standard way to run programs in a dev/test environment with Silex or Symfony (yes, Symfony requires something like “php -S localhost:8000 -t web/” maybe).
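
For the curious, such an invocation looks roughly like the following; this is a minimal sketch using the stock php:5.6-cli image from the Docker Hub rather than my own container, and assuming a Symfony-style layout served from web/ (binding on 0.0.0.0 so the published port is reachable from the host):

$ docker run --rm -v "$PWD":/app -w /app -p 8000:8000 \
    php:5.6-cli php -S 0.0.0.0:8000 -t web/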

I later discovered an additional benefit of using such a container, when the time comes to grade the work that our students have submitted and I need to test their code. Of course, it ensures that I may run it even if they used PHP 5 and I rely on PHP 7 on my machine. But it also assures that I’m only at risk of trashing stuff in the current directory if sh*t happens. Of course, no student would dare deliver malicious PHP code trying to mess with my files… but better safe than sorry. If the contents of the container get trashed, I’m rather on the safe side.

Of course one may give a grade only by reading the students’ code and not testing it, but that would be bad taste. And yes, there are probably ways to escape the container safety net in PHP… but I should maybe not tempt my smartest students into continuing on this path 😉

If you feel like testing the container, I’ve uploaded the necessary bits to a public repo : https://gitlab.com/olberger/local-php5-sqlite-debian.

Planet Debian: Gustavo Noronha Silva: CEF on Wayland

TL;DR: we have patches for CEF to enable its usage on Wayland and X11 through the Mus/Ozone infrastructure that is to become Chromium’s streamlined future. And also for Content Shell!

At Collabora we recently assisted a customer who wanted to upgrade their system from X11 to Wayland. The problem: they use CEF as a runtime for web applications and CEF was not Wayland-ready. They also wanted to have something which was as future-proof and as upstreamable as possible, so the Chromium team’s plans were quite relevant.

Chromium is at the same time very modular and quite monolithic. It supports several platforms and has slightly different code paths in each, while at the same time acting as a desktop shell for Chromium OS. To make it even more complex, the Chromium team is constantly rewriting bits or doing major refactorings.

That means you’ll often find several different and incompatible ways of doing something in the code base. You will usually not find clear and stable interfaces, which is where tools like CEF come in, to provide some stability to users of the framework. CEF neutralizes some of the instability, providing a more stable API.

So we started by looking at 1) where is Chromium headed and 2) what kind of integration CEF needed with Chromium’s guts to work with Wayland? We quickly found that the Chromium team is trying to streamline some of the infrastructure so that it can be better shared among the several use cases, reducing duplication and complexity.

That’s where the mus+ash (pronounced “mustache”) project comes in. It wants to make a better split of the window management and shell functionalities of Chrome OS from the browser while at the same time replacing obsolete IPC systems with Mojo. That should allow a lot more code sharing with the “Linux Desktop” version. It also meant that we needed to get CEF to talk Mus.

Chromium already has Wayland support that was built by Intel a while ago for the Ozone display platform abstraction layer. More recently, the ozone-wayland-dev branch was started by our friends at Igalia to integrate that work with mus+ash, implementing the necessary Mus and Mojo interfaces, window decorations, menus and so on. That looked like the right base to use for our CEF changes.

It took quite a bit of effort and several Collaborans participated in the effort, but we eventually managed to convince CEF to properly start the necessary processes and set them up for running with Mus and Ozone. Then we moved on to make the use cases our customer cared about stable and to port their internal runtime code.

We contributed touch support for the Wayland Ozone backend, which we are in the process of upstreaming, reported a few bugs on the Mus/Ozone integration, and did some debugging for others, which we still need to figure out better fixes for.

For instance, the way Wayland fd polling works does not integrate nicely with the Chromium run loop, since there needs to be some locking involved. If you don’t lock/unlock the display for polling, you may end up in a situation in which you’re told there is something to read and before you actually do the read the GL stack may do it in another thread, causing your blocking read to hang forever (or until there is something to read, like a mouse move). As a work-around, we avoided the Chromium run loop entirely for Wayland polling.

More recently, we have started working on an internal project for adding Mus/Ozone support to Content Shell, which is a test shell simpler than Chromium the browser. We think it will be useful as a test bed for future work that uses Mus/Ozone and the content API but not the browser UI, since it lives inside the Chromium code base. We are looking forward to upstreaming it soon!

PS: if you want to build it and try it out, here are some instructions:

# Check out Google build tools and put them on the path
$ git clone https://chromium.googlesource.com/a/chromium/tools/depot_tools.git
$ export PATH=$PATH:`pwd`/depot_tools

# Check out chromium; note the 'src' after the git command, it is important
$ mkdir chromium; cd chromium
$ git clone -b cef-wayland https://gitlab.collabora.com/web/chromium.git src
$ gclient sync  --jobs 16 --with_branch_heads

# To use CEF, download it and look at or use the script we put in the repository
$ cd src # cef goes inside the chromium source tree
$ git clone -b cef-wayland https://gitlab.collabora.com/web/cef.git
$ sh ./cef/build.sh # NOTE: you may need to edit this script to adapt to your directory structure
$ out/Release_GN_x64/cefsimple --mus --use-views

# To build Content Shell you do not need to download CEF, just switch to the branch and build
$ cd src
$ git checkout -b content_shell_mus_support origin/content_shell_mus_support
$ gn args out/Default --args="use_ozone=true enable_mus=true use_xkbcommon=true"
$ ninja -C out/Default content_shell
$ ./out/Default/content_shell --mus --ozone-platform=wayland

Planet Debian: Michal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting requests queue has grown too long, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

There are also some notable additions to existing projects:

If you want to support this effort, please donate to Weblate, especially recurring donations are welcome to make this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Worse Than Failure: Error'd: 'Tis the Season for Confidentiality

"For the non-German speaking people: it's highly confidential & highly restricted information that our canteen is closed between Christmas and New Year's Eve. Now, sue me for disclosing this," Stella writes.

 

Jeff C. writes, "Since when did Ingenico start installing card readers on the other side of the looking glass?!"

 

"Well, looks like someone let their intern into Production unsupervised," wrote Lincoln K.

 

"Clearly, this is a marketing tactic that's being perpetuated by the Keebler Elves in hopes that the lure of (theoretically) saving millions of dollars will make consumers want to buy the entire display," wrote Jared S.

 

George writes, "Glad to see I will be protected against all the latest nulls."

 

"Ok...it's a stretch, I guess that you could make a connection between the items," wrote Ian T.

 



TED: The Big Idea: TED’s 4 step guide to the holiday season

More charmingly referred to as a garbage fire that just keeps burning, 2017 has been a tough, relentless year of tragedy and strife. As we approach the holiday season, it’s important to connect and reconnect with those you love and want in your life. So, in these last few weeks of the year, here are a few ways to focus on building and honoring the meaningful relationships in your life.

1. Do some emotional housekeeping

Before you get into the emotional trenches with anyone (or walk into a house full of people you don’t agree with), check in with yourself. How you engage with your inner world drives everything from your ability to lead and moderate your mood, to your quality of sleep. Be compassionate and understanding of where you are in your life, internally and externally.

Psychologist Guy Winch makes a compelling case to practice emotional hygiene — taking care of our emotions, our minds, with the same diligence we take care of our bodies.

“We sustain psychological injuries even more often than we do physical ones, injuries like failure or rejection or loneliness. And they can also get worse if we ignore them, and they can impact our lives in dramatic ways,” he says. “And yet, even though there are scientifically proven techniques we could use to treat these kinds of psychological injuries, we don’t. It doesn’t even occur to us that we should. ‘Oh, you’re feeling depressed? Just shake it off; it’s all in your head. Can you imagine saying that to somebody with a broken leg: ‘Oh, just walk it off; it’s all in your leg.’”

In his article, 7 ways to practice emotional first aid, Winch lays out useful ways to reboot and fortify your emotional health:

  1. Pay attention to emotional pain — recognize it when it happens and work to treat it before it feels all-encompassing. The body evolved the sensation of physical pain to alert us that something is wrong and we need to address it. The same is true for emotional pain. If a rejection, failure or bad mood is not getting better, it means you’ve sustained a psychological wound and you need to treat it. For example, loneliness can be devastatingly damaging to your psychological and physical health, so when you or your friend or loved one is feeling socially or emotionally isolated, you need to take action.
  2. Monitor and protect your self-esteem. When you feel like putting yourself down, take a moment to be compassionate to yourself. Self-esteem is like an emotional immune system that buffers you from emotional pain and strengthens your emotional resilience. As such, it is very important to monitor it and avoid putting yourself down, particularly when you are already hurting. One way to “heal” damaged self-esteem is to practice self-compassion. When you’re feeling critical of yourself, do the following exercise: imagine a dear friend is feeling bad about him or herself for similar reasons and write an email expressing compassion and support. Then read the email. Those are the messages you should be giving yourself.
  3. Find meaning in loss. Loss is a part of life, but it can scar us and keep us from moving forward if we don’t treat the emotional wounds it creates — and the holidays are normally a time when these wounds become sensitive or even reopen completely. If sufficient time has passed and you’re still struggling to move forward after a loss, you need to introduce a new way of thinking about it. Specifically, the most important thing you can do to ease your pain and recover is to find meaning in the loss and derive purpose from it. It might be hard, but think of what you might have gained from the loss (for instance, “I lost my spouse but I’ve become much closer to my kids”). Sometimes, being rejected by your friends and/or family also feels like loss. Consider how you might gain or help others gain a new appreciation for life, or imagine the changes you could make that will help you live a life more aligned with your values and purpose.
  4. Learn what treatments for emotional wounds work for you. Pay attention to yourself and learn how you, personally, deal with common emotional wounds. For instance, do you shrug them off, get really upset but recover quickly, get upset and recover slowly, squelch your feelings, or …? Use this analysis to help yourself understand which emotional first aid treatments work best for you in various situations (just as you would identify which of the many pain relievers on the shelves works best for you). The same goes for building emotional resilience. Try out various techniques and figure out which are easiest for you to implement and which tend to be most effective for you. But mostly, get into the habit of taking note of your psychological health on a regular basis — and especially after a stressful, difficult, or emotionally painful situation.

Yes, practicing emotional hygiene takes a little time and effort, but it will seriously elevate your entire quality of life, the good doctor promises.

2. Sit down and have a chat

Friends are one thing; family, on the other hand, can be an entirely different (and potentially more stressful) situation. More than likely, it’s possible that you’ll get caught in a discussion that you don’t want to be a part of, or a seemingly harmless conversation that may take a frustrating turn.

There’s no reason to reinvent the conversation. But it’s useful to understand how to expertly pivot a talk between you and another person.

Radio host Celeste Headlee (TED Talk: 10 ways to have a better conversation) interviews people for her day job. As such, she accrued a helpful set of strategies and rules to follow when a discussion doesn’t go quite as planned. Check out her article (above) for insights on what to do when:

  • You want to go beyond small talk to have a meaningful conversation
  • An awkward silence happens and you don’t know what to say next
  • It seems like the other person isn’t listening
  • You start, or another person, starts a conversation that might end in an argument
  • You unintentionally offend someone

3. Make new memories while resurfacing old (good) ones

One of the best parts of getting everyone together for holidays or similar events is reminiscing, gathering around and talking about when your grandmother was young or that funny thing your cousin did when he was seven that no one is quite ready to let go of just yet. Resurfacing those moments everyone can enjoy, in one way or another, is a great way to fortify existing bonds and feel closer to loved ones. Who knows, from these stories, you may uncover ones never heard before.

Storycorps, a nonprofit whose founder, Dave Isay, won the 2015 TED Prize, is dedicated to preserving humanity’s cultural history through storytelling and has an expansive collection of great questions to ask just about anyone.

These questions are great for really digging into memories that are both cherished and important to preserve for generations to come. It may be interesting, fascinating and potentially emotional to hear about a loved one’s thoughts, feelings and experiences from their lifetime.

For a good place to start, you can download the Storycorps app to start recording from your phone; it will walk you through a few simple instructions. Then, you can start with these questions to warm up the conversation:

  • What was your childhood like?
  • Tell me about the traditions that have been passed down through our family. How did they get started?
  • What are your most vivid memories of school?
  • How did you meet your wife/husband/partner?
  • What piece of wisdom or advice would you like to share with future generations?

4. Or if you’re far and can’t make it home to visit your friends and family regularly — get old fashioned.

With the speed and ease of email and texting, it may be hard to see the point in sitting down with a pen and paper.

But being abroad or unable to afford a ticket home is a reality that can feel equal parts isolating and emotionally-exhausting, no matter how many Skype sessions you have. Letter-writing is a lasting way to connect with your loved ones, a tangible collection of your thoughts and feelings at a specific point in your life. If you can’t always send home souvenirs, a thoughtful letter is a delightful, tangible reminder that you care — and helps the person on the receiving end just as much.

Lakshmi Pratury makes a beautiful case for letters to remember the people in your life, that they are a way to keep a person with you long after they’ve passed.

However, if family isn’t so big in your life for one reason or another, or you’d like to send some thoughtful words to someone who may need them — write a letter to a stranger. The concept may sound strange, but the holiday season is habitually a rough one for those without close connections.

Hannah Brencher’s mother always wrote her letters. So when she felt herself bottom into depression after college, she did what felt natural — she wrote love letters and left them for strangers to find. The act has become a global initiative, The World Needs More Love Letters, which rushes handwritten letters to those in need of a boost. Brencher’s website will set you up with how to format your letter, who to write it to, and even the return address to write on the envelope.

So, those are four things to do for yourself, but there are also several ways to give back during the holiday season and year-round. Happy holidays from the TED staff!


Krebs on Security: U.K. Man Avoids Jail Time in vDOS Case

A U.K. man who pleaded guilty to launching more than 2,000 cyberattacks against some of the world’s largest companies has avoided jail time for his role in the attacks. The judge in the case reportedly was moved by pleas for leniency that cited the man’s youth at the time of the attacks and a diagnosis of autism.

In early July 2017, the West Midlands Police in the U.K. arrested 19-year-old Stockport resident Jack Chappell and charged him with using a now-defunct attack-for-hire service called vDOS to launch attacks against the Web sites of Amazon, BBC, BT, Netflix, T-Mobile, Virgin Media, and Vodafone, between May 1, 2015 and April 30, 2016.

One of several taunting tweets Chappell sent to his DDoS victims.

Chappell also helped launder money for vDOS, which until its demise in September 2016 was by far the most popular and powerful attack-for-hire service — allowing even completely unskilled Internet users to launch crippling assaults capable of knocking most Web sites offline.

Using the Twitter handle @fractal_warrior, Chappell would taunt his victims while  launching attacks against them. The tweet below was among several sent to the Jisc Janet educational support network and Manchester College, where Chappell was a student. In total, Chappell attacked his school at least 21 times, prosecutors showed.

Another taunting Chappell tweet.

Chappell was arrested in April 2016 after investigators traced his Internet address to his home in the U.K. For more on the clues that likely led to his arrest, check out this story.

Nevertheless, the judge in the case was moved by pleas from Chappell’s lawyer, who argued that his client was just an impressionable youth at the time who has autism, a range of conditions characterized by challenges with social skills, repetitive behaviors, speech and nonverbal communication.

The defense called on an expert who reportedly testified that Chappell was “one of the most talented people with a computer he had ever seen.”

“He is in some ways as much of a victim, he has been exploited and used,” Chappell’s attorney Stuart Kaufman told the court, according to the Manchester Evening News. “He is not malicious, he is mischievous.”

The same publication quoted Judge Maurice Greene at Chappell’s sentencing this week, saying to the young man: “You were undoubtedly taken advantage of by those more criminally sophisticated than yourself. You would be extremely vulnerable in a custodial element.”

Judge Greene decided to suspend a sentence of 16 months at a young offenders institution; Chappell will instead “undertake 20 days rehabilitation activity,” although it’s unclear exactly what that will entail.

ANALYSIS/RANT

It’s remarkable when someone so willingly and gleefully involved in a crime spree such as this can emerge from it looking like the victim. “Autistic Hacker Had Been Exploited,” declared a headline about the sentence in the U.K. newspaper The Times.

After reading the coverage of this case in the press, I half expected to see another story saying someone had pinned a medal on Chappell or offered him a job.

Jack Chappell, outside of a court hearing in the U.K. earlier this year.

Yes, Chappell will have the stain of a criminal conviction on his record, and yes autism can be a very serious and often debilitating illness. Let me be clear: I am not suggesting that offenders like this young man should be tossed in jail with violent criminals.

But courts around the world continue to send a clear message that young men essentially can do whatever they like when it comes to DDoS attacks and that there will be no serious consequences as a result.

Chappell launched his attacks via vDOS, which provided a simple, point-and-click service that allowed even completely unskilled Internet users to launch massive DDoS attacks. vDOS made more than $600,000 in just two of the four years it was in operation, launching more than 150,000 attacks against thousands of victims (including this site).

In September 2016, vDOS was taken offline and its alleged co-creators — two Israeli men who created the business when they were 14 and 15 years old — were arrested and briefly detained by Israeli authorities. But despite assurances that the men (now adults) would be tried for their crimes, neither has been prosecuted.

In July 2017, a court in Germany issued a suspended sentence for Daniel Kaye, a 29-year-old man who allegedly launched extortionist DDoS attacks against several bank Web sites.

After the source code for the Mirai botnet malware was released in September 2016, Kaye built his own Mirai botnet and used it in several high-profile attacks, including a fumbled assault that knocked out Internet service to more than 900,000 Deutsche Telekom customers.

In his trial, Kaye admitted that a customer of his paid him $10,000 to attack the Liberian ISP Lonestar. He’s also thought to have launched DDoS attacks on Lloyds Banking Group and Barclays banks in January 2017. Kaye is now facing related cybercrime charges in the U.K.

Last week, the U.S. Justice Department unsealed the cases of two young men in the United States who have pleaded guilty to co-authoring Mirai, an “Internet of Things” (IoT) malware strain that has been used to create dozens of copycat Mirai botnets responsible for countless DDoS attacks over the past 15 months. The defendants in that case launched highly disruptive and extortionist attacks against a number of Web sites and used their creation to conduct lucrative click fraud schemes.

Like Chappell, the core author of Mirai — 21-year-old Fanwood, N.J. resident Paras Jha — launched countless DDoS attacks against his school, costing Rutgers University between $3.5 million and $9 million to defend against and clean up after the assaults (the actual damages will be decided at Jha’s sentencing in March 2018).

Time will tell if Kaye or Jha and his co-defendants receive any real punishment for their crimes. But I would submit that if we don’t have the stomach to put these “talented young hackers” in jail when they’re ultimately found guilty, perhaps we should consider harnessing their skills in less draconian but still meaningfully punitive ways, such as requiring them to serve several years participating in programs designed to keep other kids from following in their footsteps.

Doing anything less smacks of a disservice to justice, glorifies DDoS as an essentially victimless crime, and offers little deterrent that might otherwise reduce the number of such cases going forward.

Planet Debian: Vincent Fourmond: Run QSoas completely non-interactively

QSoas can run scripts, and, since version 2.0, it can be run completely without user interaction from the command-line (though an interface may be briefly displayed). This possibility relies on the following command-line options:

  • --run, which runs the command given on the command-line;
  • --exit-after-running, which closes automatically QSoas after all the commands specified by --run were run;
  • --stdout (since version 2.1), which redirects QSoas's terminal directly to the shell output.
If you create a script.cmds file containing the following commands:
generate-buffer -10 10 sin(x)
save sin.dat
and run the following command from your favorite command-line interpreter:
~ QSoas --stdout --run '@ script.cmds' --exit-after-running
This will create a sin.dat file containing a sinusoid. However, if you run it twice, an Overwrite file 'sin.dat'? dialog box will pop up. You can prevent that by adding the /overwrite=true option to save. As a general rule, you should avoid all commands that may ask questions in the scripts; a /overwrite=true option is also available for save-buffers for instance.
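
For instance, with that option added, the script above can be re-run as often as needed without any question being asked:
generate-buffer -10 10 sin(x)
save sin.dat /overwrite=true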

I use this possibility massively because I don't like to store processed files, I prefer to store the original data files and run a script to generate the processed data when I want to plot or to further process them. It can also be used to generate fitted data from saved parameters files. I use this to run automatic tests on Linux, Windows and Mac for every single build, in order to quickly spot platform-specific regressions.

To help you make use of this possibility, here is a shell function (Linux/Mac users only, add to your $HOME/.bashrc file or equivalent, and restart a terminal) to run directly on QSoas command files:

qs-run () {
        QSoas --stdout --run "@ $1" --exit-after-running
}
To run the script.cmds script above, just run
~ qs-run script.cmds

About QSoas

QSoas is a powerful open source data analysis program that focuses on flexibility and powerful fitting capacities. It is described in Fourmond, Anal. Chem., 2016, 88 (10), pp 5050–5052. Current version is 2.1

Planet Debian: Sandro Knauß: Kontact on Debian

When working on Kontact you normally don't have to care a lot about dependencies between the different KDE Pim packages, because there are great tools available already. kdesrc-build finds a solution to build all KDE Pim packages in the correct order. The Kde Pim docker image gives you an environment with all dependencies preinstalled, so you can start hacking on KDE Pim directly.

While hacking on master is nice, most users are not using master on their computers in daily life. To reach the users, distributions need to compile and ship KDE Pim. I am active within Debian and would like to make the newest version of KDE Pim available to Debian users. Because Qt deprecated Qt Webkit with Qt 5.5, KDE Pim had to switch from Qt Webkit to Qt WebEngine. Unfortunately Qt WebEngine wasn't available in Debian, so I had to package Qt WebEngine for Debian before packaging KDE Pim for Debian. Qt WebEngine itself is a beast to package. It was only possible to package Qt WebEngine in time for the last stable release named "Stretch" with the help of other Debian Qt/KDE maintainers, especially Scarlett Clark, Dmitry Shachnev and Simon Quigley, and we could only upload it some hours before the deep freeze. So if you have asked yourself why Debian doesn't ship 16.08 within their last stable release, this is the answer: the missing dependency for KDE Pim named Qt WebEngine.
There is a second consequence of the switch: Kontact will only be available for those architectures that are supported by Qt WebEngine. Of 19 supported architectures for 16.04, we can only support five architectures in future.

Now after Debian has woken up again from its slumber, we first had to update Qt and KDE Frameworks. After the first attempt at packaging KDE Pim 17.08.0, which was released to experimental, we are now finally reaching the point where we can package and deliver KDE Pim 17.08.3 to Debian unstable. Because Pino Toscano and I had time, we started packaging it and stumbled across the issue of having to package 58 source packages, all dependent on each other. Keep in mind that packaging work is not a one-man or two-man show; almost all of the Debian Qt/KDE maintainers are involved somehow, either by putting their name under an upload or by being available via IRC and mail, answering questions, making jokes or whatever else. Jonathan Riddell visualized the dependencies for KDE Pim 16.08 with graphviz. But KDE Pim is a fast moving target, and I wanted to make my own graphs and make them more useful for packaging.

Full dependency graph for KDE Pim 17.08

The dependencies you see on this graph are created out of the Build dependencies within Debian for KDE Pim 17.08. I stripped out every dependency that isn't part of KDE Pim. In contrast to Jonathan, I made the arrows from dependency to package. So the starting point of the arrow is the dependency and it is pointing to the packages that can be built from it. The green colour shows you packages that have no dependency inside KDE Pim. The blue indicates packages with nothing depending on them. But to be honest, neither Jonathan's nor my graph tells me any more than they do you. They are simply too convoluted. The only thing these graphs make apparent is that packaging KDE Pim is a very complex task :D

But fortunately we can simplify the graphs. For packaging, I'm not interested in "every" dependency, but only in "new" ones. That means, if a <package> depends on a, b and c, and b depends on a, then I know: okay, I need to package b and c before <package>, and a before b. I would call a an implicit dependency of <package>. Here again in a dot style syntax:

a -> <package>
b -> <package>
c -> <package>
a -> b

can be simplified to:

b -> <package>
c -> <package>
a -> b

With this quite simple rule to strip all implicit dependencies out of the graph we end up with a more useful one:

Simplified dependency graph for KDE Pim 17.08

(You can find the dot file and the code to create such a graph at pkg-kde.alioth.debian.org)

At least this is a lot easier to consume and create a package ordering from. But still it looks scary. So I came up with the idea to define tiers, influenced by the tier model in KDE Frameworks. I defined one tier as the maximum set of packages that are independent from each other and only depend on lower tiers:

Build tiers for KDE Pim 17.08 (The dot file and the code to create such a graph you can find at pkg-kde.alioth.debian.org)
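
To make the tier rule concrete with the small a/b/c example from above: a and c depend on nothing within the set, so they form tier 0; b depends only on a, so it lands in tier 1; and <package>, depending on a, b and c, ends up in tier 2.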

Additionally I only show the dependencies from the last tier to the current one. So a dependency from tier 0 -> tier 1 is shown, but not one from tier 0 -> tier 2. That's why it looks like nothing depends on kdav or ktnef. But the ellipse shape tells you that something in a higher tier depends on them. The lightblue diamond shaped ones, in contrast, indicate that nothing depends on them anymore. So here you can see the "hotpath" of dependencies. This shows that the bottleneck is libkdepim->pimcommon. Interestingly, this is also, more or less, the border of the former split between kdepimlibs and kdepim during the KDE SC 4 times.
I think this is a useful visualization of the dependencies, and it may be a starting point to define a goal for what the dependencies should look like.

You may also ask yourself why an application needs so many more tiers than the complete KDE Frameworks? Well, the third tier of KDE Frameworks is more of a collection of leftovers that don't reach tier 1 or tier 2. See the definition of tier 3: "Tier 3 Frameworks can depend only on other Tier 3 Frameworks, Tier 2 Frameworks, Tier 1 Frameworks, Qt official frameworks, or other system libraries." The relevant part is that a tier 3 framework can depend on other tier 3 frameworks. If you use my tier definition instead, then you end up with more than ten tiers for KDE Frameworks, too.

After building all of these nice graphs for Debian, I wanted to see if I could create such graphs for KDE Pim directly. As KDE mostly uses kde-build-metadata.git for documenting dependencies, I updated my scripts to create graphs from this data directly:

Simplified dependency graph for KDE Pim 17.12
Build tiers for KDE Pim 17.12

(the code to build the graphs yourselves is available here: kde-dev-scripts.git/pim-build-deps-graphs.py)

In detail this graph looks different, and not just because of the version difference (17.08 vs. master). I think we need to update the dependency data. This may also explain why kdesrc-build sometimes doesn't manage to compile all of KDE Pim in the first run.

Worse Than FailureNotepad Development

Nelson thought he hit the jackpot by getting a paid internship the summer after his sophomore year of majoring in Software Engineering. Not only was it a programming job, it was in his hometown at the headquarters of a large hardware store chain known as ValueAce. Making money and getting real world experience was the ideal situation for a college kid. If it went well enough, perhaps he could climb the ranks of ValueAce IT and never have to relocate to find a good paying job.

A notebook with a marker and a pen resting on it

He was assigned to what was known as the "Internet Team", the group responsible for the ValueAce eCommerce website. It all sounded high-tech and fun, sure to continue to inspire Nelson towards his intended career. On his first day he met his supervisor, John, who escorted him to his first-ever cubicle. He sat down in his squeaky office chair and soaked in the sterile office environment.

"Welcome aboard! This is your development machine," John said, pressing the power buttons on an aging desktop and CRT monitor. "You can start by setting up everything you will need to do your development. I'll be just down the hall in my office if you have any issues!"

Eager to get started, Nelson went down the checklist John provided. He would have to install TortoiseSVN, check out the Internet Team's codebase, then install all the dependencies. Nelson figured it would take the rest of the day, then by Tuesday morning he could get into some real coding. That's when the security prompts started.

Anything Nelson tried to access was met with an abrupt "Access denied" prompt and login dialog that asked for admin credentials. "Ok... I guess they just don't want me installing any old thing on here, makes sense," Nelson said to himself. He tried to do a few other benign things like launching Calculator and Notepad, only to be met with the same roadblocks. He went down the hall to fetch John to find out how to proceed.

"Dammit, they just implemented a bunch of new security policies on our workstations. Only managers like me can do anything on our own machines," John bemoaned. "I'll come by and enter my credentials for now so you can get set up."

The trick worked and Nelson was able to get the codebase and begin poking around on it. He was curious about some of the things they were doing in code, so he opened a web browser to search for them. He was allowed to open the browser only to get nothing but "The page is not available" and a login prompt for any site he tried to browse. "Son of a..." he muttered under his breath. He got up for another trip to John's office.

"Hey John, sorry to bother you again. You'll love this one. As a member of the Internet Team, I'm unable to access the internet," Nelson quipped with a nervous chuckle. "I was just hoping to learn some things about how the code works."

"Oh no, don't even bother with that," John told him, rolling his eyes. "Internet is a four-letter word around here if you aren't a manager. The internet is dark and full of terrors and is not to be trusted in the hands of anyone else. They expect you to learn everything from good old-fashioned books." John motioned to his vast library of programming books. Nelson grabbed a few and took them home to study after a frustrating initial day.

After a late-night cram session, Nelson arrived Tuesday morning prepared to actually accomplish something. He hoped to fire up a local instance of the eCommerce site and make some modifications just to see what he could do. As it turned out, he still couldn't do much of anything. He was still getting blocked on local web pages. To add insult to injury, any of the .aspx pages he tried to access showed only the HTML for "page not found" in their source.

After travelling the familiar route to John's office, Nelson explained what happened, hoping to borrow admin credentials again. "Sorry, kid. I can't help you," John told him, sounding dejected. "The network overlords noticed that I logged in to your machine, so they wrote me up for it. Any coding you want to do will have to be done via notepad."

"I already said I can't even launch Notepad though... literally everything is locked down!" Nelson exclaimed, growing further irritated.

"Oh I didn't mean Notepad the program. An actual notepad." John pulled a spiral pad of paper and a pen out of his drawer and slid it over to Nelson." Write down what you want on here, give it to me, and I'll enter it into source and check it in. That's the best I can do."

Nelson grabbed his new "development environment" and went back to his desk to brood. It was going to be a long summer. Perhaps Software Engineering wasn't the right major for him. Maybe something like Anthropology or Art would be more fulfilling.


Planet DebianBastian Blank: Google Cloud backed Debian mirror

It's been some time since someone at Google told us that they had problems providing a stable mirror of Debian for use by their cloud platform. I wanted to give it a try and see what the platform can give us. At the time I was already responsible for the Debian mirror network inside Microsoft Azure.

So I started to generalize a setup of Debian mirrors in cloud environments. I applied the setup to both Google Compute Engine and Amazon EC2. The setup on the Google Cloud works pretty well. I scrapped the EC2 setup for now, as it can provide neither the throughput nor the inter-region connectivity at a level that can compete with Google.

So I'd like to proudly present a test setup of a Google Cloud backed Debian mirror. It provides access to the main and security archives. I would be glad to see a bit more traffic on it, as I'd like to assess whether there are problems with either synchronization or reachability.

The mirror can be used by adding one of the following lines to your sources.list:

deb http://debian.gce-test.mirrors.debian.org/debian stretch main contrib non-free
deb http://debian.gce-test.mirrors.debian.org/debian buster main contrib non-free
deb http://debian.gce-test.mirrors.debian.org/debian sid main contrib non-free
deb http://debian.gce-test.mirrors.debian.org/debian experimental main contrib non-free

If you do and see problems, please report them back to me at waldi@debian.org. Also please note that Google stores load balancer logs for seven days, including the client IP.

Planet DebianMartin Pitt: Migration from PhantomJS to Chrome DevTools Protocol

Being a web interface, Cockpit has a comprehensive integration test suite which exercises all of its functionality on a real web browser that is driven by the tests. Until recently we used PhantomJS for this, but there was an ever-increasing pressure to replace it.

Why replace PhantomJS?

Phantom’s engine is becoming really outdated: it cannot understand even simple ES6 constructs like Set, arrow functions, or promises, which have been in real browsers for many years; this currently blocks hauling in some new code from the welder project. It also doesn’t understand reasonably modern CSS, which is particularly important for making mobile-friendly pages, so we had to put workarounds for crashes and other misbehaviour into our code. Also, development was officially declared abandoned last April.

So about two months ago I started some research for possible replacements. Fortunately, Cockpit’s tests are not directly written in JavaScript using the PhantomJS API, but they use an abstract Browser Python class with methods like open(url), wait_visible(selector), and click(selector). So I “only” needed to reimplement that Browser class, and didn’t have to rewrite the entire test suite.
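
As an illustration of what such a class has to do, here is a minimal Python sketch (not Cockpit’s actual implementation) that drives the DevTools protocol directly; it assumes the requests and websocket-client packages are installed and that a Chromium was started with --remote-debugging-port=9222:

# Minimal sketch of a Browser-style wrapper speaking the Chrome DevTools
# protocol over its websocket endpoint.  Illustrative only.
import json
import requests
from websocket import create_connection

class Browser:
    def __init__(self, port=9222):
        # Ask the browser for its debuggable targets and attach to the first page
        targets = requests.get("http://localhost:%d/json" % port).json()
        page = next(t for t in targets if t["type"] == "page")
        self._ws = create_connection(page["webSocketDebuggerUrl"])
        self._id = 0

    def _call(self, method, **params):
        # Send one CDP command and wait for the matching response,
        # skipping any event notifications that arrive in between.
        self._id += 1
        self._ws.send(json.dumps({"id": self._id, "method": method, "params": params}))
        while True:
            msg = json.loads(self._ws.recv())
            if msg.get("id") == self._id:
                return msg.get("result", {})

    def open(self, url):
        return self._call("Page.navigate", url=url)

    def eval(self, expression):
        return self._call("Runtime.evaluate", expression=expression)

    def click(self, selector):
        return self.eval("document.querySelector(%s).click()" % json.dumps(selector))

The real class obviously needs waiting, timeouts, frame handling and error reporting on top of this, which is where most of the work described below went.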

Candidates

The contenders in the ring which are currently popular and most likely supported for a fair while, with their pros and cons:

  1. Electron. This is the rendering engine of Chromium plus the JS engine from nodejs. It is widely adopted and used, and relatively compact (much smaller than Chromium itself).

    • pro: It has a built in REPL to use it interactively (node_modules/.bin/electron -i) and this API is relatively simple and straightforward to use, if your test is built around an external process. This is the case for our Python tests.
    • pro: If your tests are in JS, there is Nightmare as API for electron. This is a really nice one, and super-easy to get started; npm install nightmare, write your first test in 5 lines of JS, done.
    • pro: Has nice features such as verbose debug logging to watch every change, signal, and action that’s going on. You can also enable the graphical window where you see your test actions fly by, you can click around, and use the builtin inspector/debugger/console.
    • It lags behind the latest Chromium a bit. E. g. latest chromium-browser in Fedora 27 is v62, latest Electron is based on 58. (But this might be a good or bad thing depending on your project - sometimes you actually don’t want to require the very latest browser)
    • con: Not currently packaged in Fedora or Debian, so you need to install it through npm (~ 130 MB uncompressed). I. e. almost twice as big as PhantomJS, although the difference in the compressed download is much smaller.
    • con: It does not represent a “real-life” browser as it uses a custom JS engine. While this should not make much of a difference in theory, there’s always little quirks and bugs to be aware of.

  2. Use a real browser (Chromium, Firefox, Edge) with Selenium

    • pro: Gives very realistic results
    • pro: Long-established standard, so most likely will continue to stay around for a while
    • con: Much harder to set up than the other two
    • con: API is low-level, so you need to have some helper API to write tests in a sensible manner.

  3. Use Chromium itself with the DevTools Protocol, possibly in the stripped down headless variant. You have to use a library on top of that: chrome-remote-interface seems to be the standard one, but it’s tiny and straightforward.

    • pro: This is becoming an established standard which other browsers start to support as well (e. g. Edge)
    • pro By nature, gives very realistic test results, and you can choose which Chrome version to test against.
    • pro: Chromium is packaged in all distros, so this doesn’t require a big npm download for running the tests.
    • con: Relatively hard to set up compared to electron or phantom: you manually need to control the actual chromium process plus your own chrome-remote-interface controller process, and allocate port numbers in a race-free manner (to run tests in parallel).
    • con: Relatively low-level protocol (roughly comparable to Selenium), so this is not directly appropriate for writing tests - you need to create your own high-level library on top of this. (But in Cockpit we already have that)

  4. puppeteer is a high-level JS library on top of the Chromium DevTools Protocol.

    • pro: Comfortable and abstract API, comparable to Nightmare.
    • pro: It does the job of launching and controlling Chromium, so similarly simple to set up as Nightmare or Phantom.
    • con: Does not work with the already installed/packaged Chromium, it bundles its own.

After evaluating all these, my conclusion is that for a new project I can recommend puppeteer. If you can live with pulling in the browser through NPM for every test run (CI services like Semaphore cache your node_modules directory, so it might not be a big issue) and are fine with writing your tests in JavaScript, then puppeteer provides the easiest setup and a comfortable and abstract API.

For our existing Cockpit project however, I eventually went with option 3, i. e. Chrome DevTools protocol directly. puppeteer’s own abstraction does not actually help our tests as we already have the Browser class abstraction, and for our CI system and convenience of local test running it actually does make a difference whether you can use the already installed/packaged Chrome or have to download an entire copy. I also suspect that my troubles with SSL certificates (see below) would be much harder or even impossible to solve/workaround with puppeteer.

Interacting with Chromium

The API documentation is excellent, and one can tinker around in the REPL interpreter in a simple and straightforward way and watch the result in an interactive Chromium that runs with a temporary $HOME (to avoid interfering with your real config):

$ rm -rf /tmp/h; HOME=/tmp/h chromium-browser --remote-debugging-port=9222 about:blank &
$ mkdir /tmp/test; cd /tmp/test
$ npm install chrome-remote-interface
$ node_modules/.bin/chrome-remote-interface inspect

In the chrome-remote-interface shell one can directly use the CDP commands, for example: Open Google’s search page, focus the search input line, type a query, and check the current URL afterwards:

>>> Page.navigate({url: "https://www.google.de"})
{ frameId: '4521.1' }
>>> Runtime.evaluate({expression: "document.querySelector('input[name=\"q\"]').focus()"})

>>> // type in the search term and Enter key by key
>>> "cockpit\r".split('').map(c => Input.dispatchKeyEvent({type: "char", text: c}))

>>> Runtime.evaluate({expression: "window.location.toString()"})
{ result:
   { type: 'string',
     value: 'https://www.google.de/search?source=hp&ei=T5...&q=cockpit&oq=cockpit&gs_l=[...]' } }

The porting process

After getting an initial idea and feeling how the DevTools protocol works, the actual porting process went in a pretty typical Pareto way. After two days I had around 150 out of our ~ 180 tests working, and porting most of the API from PhantomJS to CDP invocations was straightforward. A lot of the remaining test failures were due to “ordinary” flakes and bugs in the tests themselves, and a series of four PRs fixed them.

There were three major issues on which I spent the “other 90%” of the time on this though - perhaps this blog post and my upstream bug reports help other people to avoid the same traps:

  • Frame handling: Cockpit is built around the concept of iframes, with each frame representing an “application” on your “server Linux session”. To make an assertion or run a query in an iframe, you need to “drill through” into the desired iframe from the root page DOM. I started with a naïve JavaScript-only solution:

    if (current_frame)
      frame_doc = document.querySelector(`iframe[name="${current_frame}"]`).contentDocument.documentElement;
    else
      frame_doc = document;
    

    and then do queries on frame_doc. This actually works well for all but one of our tests, which checks embedding a Cockpit page into a custom HTML page. There this approach (rightfully) fails due to the browser’s Same-origin Policy.

    So I went ahead and implemented a solution using the DevTools “mirror” DOM and API. It took me three different attempts to get that right, and in that regard neither the API documentation nor a Google search was particularly instructive. This is an area where the protocol really could be improved. I posted my solution and a few suggestions to devtools-protocol issue #72.

  • SSL client certs: Our OpenShift tests kept failing when the OAuth page came up, but only when using Headless mode. I initially thought this was due to the OAuth server having an invalid SSL certificate, as the initial error message suggests something like that. But all approaches with --ignore-certificate-errors or a more elaborate usage of the Security API or even actually installing the OAuth server’s certificate didn’t work - quite frustrating.

    It finally helped to enable a third kind of logs (besides console messages and --enable-logging --v=1) which finally revealed what it was complaining about: OAuth was sending a request for presenting a client-side SSL certificate, and this just causes Chromium Headless to throw its hands into the air. As there is no workaround with Chromium Headless, I had to bite the bullet and install the full graphical Chromium (plus half a metric ton of X/mesa dependencies) and Xvfb into our test containers, plus write the logic to bring these up and down in an orderly and parallel fashion.

  • Silently broken pushState API: One of our tests was reproducibly failing on the infrastructure, and only sometimes locally; the screenshot showed that it clearly was on the wrong page, although the previous navigation requests caused no error. Single-stepping through them also worked. Peter and I spent about three days debugging this and figuring out why adding a simple sleep(3) at a random place in the test made the test succeed.

    It turned out that a few months ago the window.history.pushState() method changed behaviour: When there are too many calls to it (> 50 in 10 seconds) it ignores the function call, without returning or logging an error. This was by far the most frustrating and biggest time-sink, but after finally discovering it, we had a good justification why a static sleep() is actually warranted in this case. (Related upstream bug reports: #794923 and #769592)

After figuring all that out, the final patch turned out to be reasonably small and readable. Most of the commits are minor test adjustments which weren’t possible to implement exactly as before in the API. Of course this got preceded with half a dozen preparatory commits, to adjust dependencies in containers, fix test races, and the like.

Now that this has landed, we could clean up a bunch of PhantomJS-related hacks, it is now possible to write tests for the mobile navigation, and we can also test ES6 code (such as welder-web). Debugging tests is much more fun now, as you can run them on an interactive graphical browser to see widgets and pages flying around, and interactively mess around with or inspect them.

,

Planet DebianRussell Coker: Designing Shared Cars

Almost 10 years ago I blogged about car sharing companies in Melbourne [1]. Since that time the use of such services appears to have slowly grown (judging by the slow growth in the reserved parking spots for such cars). This isn’t the sudden growth that public transport advocates and the operators of those companies hoped for, but it is still positive. I have just watched the documentary The Human Scale [2] (which I highly recommend) about the way that cities are designed for cars rather than for people.

I think that it is necessary to make cities more suited to the needs of people and that car share and car hire companies are an important part of converting from a car based city to a human based city. As this sort of change happens the share cars will be an increasing portion of the new car sales and car companies will have to design cars to better suit shared use.

Personalising Cars

Luxury car brands like Mercedes support storing the preferred seat position for each driver, once the basic step of maintaining separate driver profiles is done it’s an easy second step to have them accessed over the Internet and also store settings like preferred radio stations, Bluetooth connection profiles, etc. For a car share company it wouldn’t be particularly difficult to extrapolate settings based on previous use, EG knowing that I’m tall and using the default settings for a tall person every time I get in a shared car that I haven’t driven before. Having Bluetooth connections follow the user would mean having one slave address per customer instead of the current practice of one per car, the addressing is 48bit so this shouldn’t be a problem.

Most people accumulate many items in their car, some they don’t need, but many are needed. Some of the things in my car are change for parking meters, sunscreen, tools, and tissues. Car share companies have deals with councils for reserved parking spaces so it wouldn’t be difficult for them to have a deal for paying for parking and billing the driver thus removing the need for change (and the risk of a car window being smashed by some desperate person who wants to steal a few dollars). Sunscreen is a common enough item in Australia that a car share company might just provide it as a perk of using a shared car.

Most people have items like tools, a water bottle, and spare clothes that can’t be shared which tend to end up distributed in various storage locations. The solution to this might be to have a fixed size storage area, maybe based on some common storage item like a milk crate. Then everyone who is a frequent user of shared cars could buy a container designed to fit that space which is divided in a similar manner to a Bento box to contain whatever they need to carry.

There is a lot of research into having computers observing the operation of a car and warning the driver or even automatically applying the brakes to avoid a crash. For shared cars this is more important as drivers won’t necessarily have a feel for the car and can’t be expected to drive as well.

Car Sizes

Generally cars are designed to have 2 people (sports car, Smart car, van/ute/light-truck), 4/5 people (most cars), or 6-8 people (people movers). These configurations are based on what most people are able to use all the time. Most car travel involves only one adult. Most journeys appear to have no passengers or only children being driven around by a single adult.

Cars are designed for what people can drive all the time rather than what would best suit their needs most of the time. Almost no-one is going to buy a personal car that can only take one person even though most people who drive will be on their own for most journeys. Most people will occasionally need to take passengers and that occasional need will outweigh the additional costs in buying and fueling a car with the extra passenger space.

I expect that when car share companies get a larger market they will have several vehicles in the same location to allow users to choose which to drive. If such a choice is available then I think that many people would sometimes choose a vehicle with no space for passengers but extra space for cargo and/or being smaller and easier to park.

For the common case of one adult driving small children the front passenger seat can’t be used due to the risk of airbags killing small kids. A car with storage space instead of a front passenger seat would be more useful in that situation.

Some of these possible design choices can also be after-market modifications. I know someone who removed the rear row of seats from a people-mover to store the equipment for his work. That gave a vehicle with plenty of space for his equipment while also having a row of seats for his kids. If he was using shared vehicles he might have chosen to use either a vehicle well suited to cargo (a small van or ute) or a regular car for transporting his kids. It could be that there’s an untapped demand for ~4 people in a car along with cargo so a car share company could remove the back row of seats from people movers to cater to that.

CryptogramDetails on the Mirai Botnet Authors

Brian Krebs has a long article on the Mirai botnet authors, who pled guilty.

Worse Than FailureCodeSOD: How is an Employee ID like a Writing Desk?

Chris D. has a problem. We can see a hint of the kind of problem he needs to deal with by looking at this code:

FUNCTION WHOIS (EMPLOYEE_ID IN VARCHAR2, Action_Date IN DATE)
   RETURN varchar2
IS
  Employee_Name varchar2(50);
BEGIN
   SELECT  employee_name INTO Employee_Name
     FROM eyps_manager.tbemployee_history
    WHERE  ROWNUM=1 AND   employee_id = EMPLOYEE_ID
          AND effective_start_date <= Action_Date
          AND (Action_Date < effective_end_date OR effective_end_date IS NULL);

   RETURN (Employee_Name);
END WHOIS;

This particular function was written many years ago. The developer responsible, Artie, was fired a short time later, because he broke the production database in an unrelated accident involving a badly aimed `DELETE FROM…`.

It’s a weird function- given an EMPLOYEE_ID, it returns an EMPLOYEE_NAME… but why all this work? Why check dates?

This particular business system was purchased back in 1997. The vendor didn’t ship the product with anything so mundane as an EMPLOYEES table- since every business was a unique and special snowflake, there was no way for the vendor to give them exactly the employee-tracking features they needed, so instead it fell to the customer to build the employee features themselves. The vendor would then take that code on for maintenance.

Which brings us to Artie. Artie was told to implement some employee tracking for the sales team. So he did. He gave everyone on the sales team an EMPLOYEE_ID, but instead of using an auto-numbered sequence, or a UUID, he invented a complicated algorithm for generating his own kind-of-unique IDs. These were grouped in blocks, so, for example, all of the IDs in the range “AA1000-AA9999” were assigned to widget sales, while “AB1000A-AB9999A” were for office supply sales.

This introduced a new problem. You see, EMPLOYEE_ID wasn’t a unique ID for an employee. It was actually a sales portfolio ID, a pile of customers and their orders and sales. Sales people would swap portfolios around as one employee left, or a new hire took on a new portfolio. This made it impossible to know who was actually responsible for which sale.

Artie was ready to solve that problem, though, as he quickly added the EFFECTIVE_START_DATE and EFFECTIVE_END_DATE fields. Instead of updating rows as portfolios moved around, you could simply add new rows, keeping an ongoing history of which employee held which portfolio at any given time.

There’s also a UI to manage this data, which was written circa 2000. It is a simple data-grid with absolutely no validation on any of the fields, which means anyone using it corrupts data on a fairly regular basis, and then Chris or one of his peers has to go into the production database and manually correct the data.


Planet DebianRenata D'Avila: My project with Outreachy

Let's get to the project I actually applied to:

To build a calendar for FOSS events

We have a page on the Debian wiki where we centralize the information needed to make that a reality; you can find it here: SocialEventAndConferenceCalendars

So, in fact, the first thing I did on my internship was:

  • Search for more sources for FOSS events that hadn't been mentioned in that page yet

  • Update said page with these sources

  • Add some attributes for events that I believe could be useful for people wanting to attend them, such as:

    • Is the registration (and not just the CFP) still open?
    • Does the event have a code of conduct?
    • What about accessibility?

I understand that some of this information might not be readily available for most events, but maybe the mere act of mentioning it in our aggregation system will be enough to make an organizer think about it, if they aim to have their event mentioned "by us"?

Both my mentor, Daniel, and I have been looking around to find projects that have worked on a goal similar to this one, to study them and see what can be learned from what has been done already and what can be reused from it. They are mentioned on the wiki page as well. If you know any others, feel free to add there or to let us know!

Among the proposed deliverables for this project:

  • making a plugin for other community web sites to maintain calendars within their existing web site (plugin for Discourse forums, MoinMoin, Drupal, MediaWiki, WordPress, etc) and export it as iCalendar data
  • developing tools for parsing iCalendar feeds and storing the data into a large central database
  • developing tools for searching the database to help somebody find relevant events or see a list of deadlines for bursary applications

My dear mentor Daniel Pocock suggested that I considered working on a plugin for MoinMoinWiki, because Debian and FSFE use MoinMoin for their wikis. I have to admit that I thought that was an awesome idea as soon as I read it, but I was a bit afraid that it would be a very steep learning curve to learn how MoinMoin worked and how I could contribute to it. I'm glad Daniel calmed my fears and reminded me that the mentors are on my side and glad to help!

So, what else have I been doing?

So far? I would say studying! Studying documentation for MoinMoin, studying code that has already been written by others, studying how to plan and to implement this project.

And what have I learned so far?

What is MoinMoin Wiki?

MoinMoin logo, sort of a white "M" inside a circle with light blue background. The corners of the M are rounded and seem connected like nodes

MoinMoin is a wiki written in... Python (YAY! \o/). Let's say that I have... interacted with development on a wiki-like system back when I created my first (and now defunct) post-Facebook blog.

Ikiwiki logo, the first 'iki' is black and mirrors the second one, with a red 'W' in the middle

Ikiwiki was written in Perl, a language I know close to nothing about, which limited a lot how I could interact with it. I am glad that I will be able to work with a language that I am way more familiar with. (And, in Prof. Masanori's words: "Python is a cool language.")

I also learned that MoinMoin's storage mechanism is based on flat files and folders, rather than a database (I swear that, despite my defense of flat file systems, this is a coincidence. I mean, if you believe in coincidences). I also found out that the development uses Mercurial for version control. I look forward to learning and exploring it, because so far I have only used git.

The past few days I set up a local MoinMoin instance. Even though there is a HowTo guide to get MoinMoinWiki working on Debian, I had a little trouble setting it up by following it. Mostly because the guide is sort of confusing about permissions, I think? I mean, it says to create a new user with no login, but then it gives commands that can only be executed by root or with sudo. That doesn't seem very wise. So I went on and found a docker image for MoinMoin wiki and was able to work on MoinMoin with it. This image is based on Debian Jessie, so maybe that is something that I might work to improve in the future.

Only after I got everything working with docker did I find this page with instructions for Linux, which was what I should've tried in the first place, because I didn't really need a fully configured server with nginx and uwsgi, only a local instance to play with. It happens.

I studied the development guide for MoinMoin and I have also worked to understand the development process (and what Macros, Plugins and such are in this context), so I could figure out where and how to develop!

Macros

A macro is entered as wiki markup and it processes a few parameters to generate an output, which is displayed in the content area.

Searching for Macros, I found out there is a Calendar Macro. And I have discovered that, besides the Calendar Macro, there is also an EventCalendar macro that was developed years ago. I expect to use the next few days to study the EventCalendar code more thoroughly, but my first impression is that this is code that can be reused and improved for the FOSS calendar.
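
To give an idea of what a macro looks like, here is a rough sketch following the macro_* plugin convention of MoinMoin 1.9 (the names are made up for illustration, the exact formatter calls may differ between versions, and this is not the EventCalendar code):

# Would live in data/plugin/macro/EventList.py and be called from a wiki page
# as <<EventList(category=FOSS)>>.  Purely illustrative.
def macro_EventList(macro, category=u'FOSS'):
    fmt = macro.formatter   # renders to whatever output format is requested

    # A real macro would read events from wiki pages or an iCalendar feed;
    # here we just hard-code two entries.
    events = [(u'2018-02-03', u'FOSDEM'), (u'2018-08-01', u'DebConf18')]

    out = [fmt.bullet_list(1)]
    for date, name in events:
        out.append(fmt.listitem(1))
        out.append(fmt.text(u'%s - %s (%s)' % (date, name, category)))
        out.append(fmt.listitem(0))
    out.append(fmt.bullet_list(0))
    return u''.join(out)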

Parsers

A parser is entered as wiki markup and it processes a few parameters and a multiline block of text data to generate an output, which is displayed in the content area.

Actions

An action is mostly called using the menu (or a macro) and generates a complete HTML page on its own.

So maybe I will have to work a bit on this afterwards, to interact with the macro and customize the information to be displayed? I am not sure, I will have to look more into this.

I guess that is all I have to report for now. See you in two weeks (or less)!

Planet DebianRenata D'Avila: My contribution to Github-icalendar

Hello!

Now that you already know a bit about me, let me start talking about my internship with Outreachy.

One of the steps to apply to the internship is to pick the project you would like to work on. I chose the one with Debian to build a calendar database of social events and conferences.

It is also part of the application process to make some contribution to the project. At first, it wasn't clear to me what that contribution would be (I hadn't found that URL yet), so I went to the #debian-outreach IRC channel and... well, asked, of course. That is when I found the page with a description of the task. I was supposed to learn about the iCalendar format (I didn't even know what it was, back then!) and work on an issue in the github-icalendar project: to use repository labels in one of the suggested ways.

My contribution for github-icalendar

Github-icalendar works by accessing the open issues in all repositories that the user has access to and transforming them into an iCalendar feed of VTODO items.

I chose to solve the labels issue using them to filter the list of issues that should appear in a feed. I imagined two use cases for it:

  1. A user wants to get issues from all their repositories that contain a given label (getting all 'bug' issues, for instance)

  2. A user wants to get issues from only a specific repository that contain a given label.

Therefore, the label system should support both of these uses.

Working on this contribution taught me not only about the iCalendar format, but it also gave me hands-on experience interacting with the Github Issues API.
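
To sketch the idea (hypothetical example code, not the actual github-icalendar implementation): fetch the open issues carrying a given label from one repository through the GitHub Issues API and turn them into VTODO items with the icalendar library. It assumes the requests and icalendar packages are installed, and the owner/repo/label values are placeholders.

import requests
from icalendar import Calendar, Todo

def issues_as_ical(owner, repo, label):
    # GET /repos/{owner}/{repo}/issues, filtered by state and label
    url = "https://api.github.com/repos/%s/%s/issues" % (owner, repo)
    issues = requests.get(url, params={"state": "open", "labels": label}).json()

    cal = Calendar()
    cal.add("prodid", "-//github-icalendar sketch//EN")
    cal.add("version", "2.0")
    for issue in issues:
        if "pull_request" in issue:   # the issues endpoint also lists pull requests
            continue
        todo = Todo()
        todo.add("uid", issue["html_url"])
        todo.add("summary", "%s #%d: %s" % (repo, issue["number"], issue["title"]))
        todo.add("description", issue.get("body") or "")
        cal.add_component(todo)
    return cal.to_ical()

# Example call (placeholder values):
# print(issues_as_ical("some-user", "some-repo", "bug"))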

Back in October, I was able to attend Python Brasil, the national conference about Python, during which I stayed in an accommodation with other PyLadies and allies. I used this opportunity to share what I had developed so far and to get some feedback. That's how I learned about pudb and how to use it to debug my code (and find out where I was getting the Github Issues API wrong). Because I found it so useful, in my pull request I proposed adding it to the project to help with future development. I also started adding some tests and wrote some specifications as suggestions for anyone who keeps working on it.

I would like to take this opportunity to thank the friends who pointed me in the right direction during the application process and made this internship a reality for me, in particular Elias Dorneles.

,

Krebs on SecurityBuyers Beware of Tampered Gift Cards

Prepaid gift cards make popular presents and no-brainer stocking stuffers, but before you purchase one be on the lookout for signs that someone may have tampered with it. A perennial scam that picks up around the holidays involves thieves who pull back and then replace the decals that obscure the card’s redemption code, allowing them to redeem or transfer the card’s balance online after the card is purchased by an unwitting customer.

Last week KrebsOnSecurity heard from Colorado reader Flint Gatrell, who reached out after finding that a bunch of Sam’s Club gift cards he pulled off the display rack at Wal-Mart showed signs of compromise. The redemption code was obscured by a watermarked sticker that is supposed to make it obvious if it has been tampered with, and many of the cards he looked at clearly had stickers that had been peeled back and then replaced.

“I just identified five fraudulent gift cards on display at my local Wal-Mart,” Gatrell said. “They each had their stickers covering their codes peeled back and replaced. I can only guess that the thieves call the service number to monitor the balances, and try to consume them before the victims can.  I’m just glad I thought to check!”

In the picture below, Gatrell is holding up three of the Sam’s Club cards. The top two showed signs of tampering, but the one on the bottom appeared to be intact.

The top two gift cards show signs that someone previously peeled back the protective sticker covering the redemption code. Image: Flint Gatrell.

Kevin Morrison, a senior analyst on the retail banking and payments team at market analysis firm Aite Group, said the gift card scheme is not new but that it does tend to increase in frequency around the holidays, when demand for the cards is far higher.

“Store employees are instructed to look for abnormalities at the [register] but this happens [more] around the holiday season as attention spans tend to shorten,” he said. “While gift card packaging has improved and some safe-guards put in place, fraudsters look for the weakest link and hit hard when they find one.”

Gift cards make great last-minute gifts, but don’t let your guard down in your haste to wrap up your holiday shopping. There are so many variations on the above-described scheme that many stores have taken to keeping gift cards at or behind the register, where cashiers can more easily spot customers trying to tamper with the cards. As a result, stores that take this basic precaution may be the safest place to purchase gift cards.

Update, Dec. 20, 7:30 a.m. ET: Mr. Gatrell just shared a link to this story, which incredibly is about another man who was found to have bought tampered gift cards in the very same Wal-Mart where Gatrell found the above-pictured cards.

That story includes some other security tips when buying and/or giving gift cards:

When purchasing a gift card, pull from the middle of the pack because those are less likely to be tampered with. Also, get a receipt when buying the card so you have proof of the purchase. Include that receipt if you give the card as a gift. Finally, activate the card quickly and use it quickly and keep a close eye on the balance.

Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #138

Here's what happened in the Reproducible Builds effort between Sunday December 10 and Saturday December 16 2017:

Upcoming events

The Reproducible Builds project are organising an assembly at 34C3 (the "Galactic Congress") in Leipzig, Germany. We will informally meet every day at 13:37 UTC and would be delighted if you joined us there.

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

43 package reviews have been added, 48 have been updated and 51 have been removed in this week, adding to our knowledge about identified issues.

4 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (55)
  • Andreas Beckmann (2)
  • Laurent Bigonville (1)
  • Michael Biebl (1)
  • Pierre Saramito (2)

diffoscope development

reprotest development

Versions 0.7.5, 0.7.6 and 0.7.7 were uploaded to unstable by Ximin Luo.

It included contributions already covered by posts of the previous weeks as well as new changes:

buildinfo.debian.net development

reproducible-website development

jenkins.debian.net development

Misc.

This week's edition was written by Alexander Couzens, Bernhard M. Wiedemann, Chris Lamb and Holger Levsen & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianColin Watson: An odd test failure

Weird test failures are great at teaching you things that you didn’t realise you might need to know.

As previously mentioned, I’ve been working on converting Launchpad from Buildout to virtualenv and pip, and I finally landed that change on our development branch today. The final landing was mostly quite smooth, except for one test failure on our buildbot that I hadn’t seen before:

ERROR: lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked
worker ID: unknown worker (bug in our subunit output?)
----------------------------------------------------------------------
Traceback (most recent call last):
_StringException: log: {{{
36.384  creating repository in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/.bzr/.
36.388  creating branch <bzrlib.branch.BzrBranchFormat7 object at 0xeb85b36c> in file:///tmp/testbzr-6CwSLV.tmp/lp.codehosting.codeimport.tests.test_worker.TestBzrSvnImport.test_stacked/work/stacked-on/
}}}

Traceback (most recent call last):
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/lib/lp/codehosting/codeimport/tests/test_worker.py", line 1108, in test_stacked
    stacked_on.fetch(Branch.open(source_details.url))
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/branch.py", line 186, in open
    possible_transports=possible_transports, _unsupported=_unsupported)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 689, in open
    _unsupported=_unsupported)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 718, in open_from_transport
    find_format, transport, redirected)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/transport/__init__.py", line 1719, in do_catching_redirections
    return action(transport)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 706, in find_format
    probers=probers)
  File "/srv/buildbot/lpbuildbot/lp-devel-xenial/build/env/local/lib/python2.7/site-packages/bzrlib/controldir.py", line 1155, in find_format
    raise errors.NotBranchError(path=transport.base)
NotBranchError: Not a branch: "/tmp/tmpdwqrc6/trunk/".

When I investigated this locally, I found that I could reproduce it if I ran just that test on its own, but not if I ran it together with the other tests in the same class. That’s certainly my favourite way round for test isolation failures to present themselves (it’s more usual to find state from one test leaking out and causing another one to fail, which can make for a very time-consuming exercise of trying to find the critical combination), but it’s still pretty odd.

I stepped through the Branch.open call in each case in the hope of some enlightenment. The interesting difference was that the custom probers installed by the bzr-svn plugin weren’t installed when I ran that one test on its own, so it was trying to open a branch as a Bazaar branch rather than using the foreign-branch logic for Subversion, and this presumably depended on some configuration that only some tests put in place. I was on the verge of just explicitly setting up that plugin in the test suite’s setUp method, but I was still curious about exactly what was breaking this.

Launchpad installs several Bazaar plugins, and lib/lp/codehosting/__init__.py is responsible for putting most of these in place: anything in Launchpad itself that uses Bazaar is generally supposed to do something like import lp.codehosting to set everything up. I therefore put a breakpoint at the top of lp.codehosting and stepped through it to see whether anything was going wrong in the initial setup. Sure enough, I found that bzrlib.plugins.svn was failing to import due to an exception raised by bzrlib.i18n.load_plugin_translations, which was being swallowed silently but meant that its custom probers weren’t being installed. Here’s what that function looks like:

def load_plugin_translations(domain):
    """Load the translations for a specific plugin.

    :param domain: Gettext domain name (usually 'bzr-PLUGINNAME')
    """
    locale_base = os.path.dirname(
        unicode(__file__, sys.getfilesystemencoding()))
    translation = install_translations(domain=domain,
        locale_base=locale_base)
    add_fallback(translation)
    return translation

In this case, sys.getfilesystemencoding was returning None, which isn’t a valid encoding argument to unicode. But why would that be? It gave me a sensible result when I ran it from a Python shell in this environment. A bit of head-scratching later and it occurred to me to look at a backtrace:

(Pdb) bt
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(703)<module>()
-> main()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(694)main()
-> execsitecustomize()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/site.py(548)execsitecustomize()
-> import sitecustomize
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/env/lib/python2.7/sitecustomize.py(7)<module>()
-> lp_sitecustomize.main()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(193)main()
-> dont_wrap_bzr_branch_classes()
  /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp_sitecustomize.py(139)dont_wrap_bzr_branch_classes()
-> import lp.codehosting
> /home/cjwatson/src/canonical/launchpad/lp-branches/testfix/lib/lp/codehosting/__init__.py(54)<module>()
-> load_plugins([_get_bzr_plugins_path()])

I wonder if there’s something interesting about being imported from a sitecustomize hook? Sure enough, when I went to look at Python for where sys.getfilesystemencoding is set up, I found this in Py_InitializeEx:

    if (!Py_NoSiteFlag)
        initsite(); /* Module site */
    ...
#if defined(Py_USING_UNICODE) && defined(HAVE_LANGINFO_H) && defined(CODESET)
    /* On Unix, set the file system encoding according to the
       user's preference, if the CODESET names a well-known
       Python codec, and Py_FileSystemDefaultEncoding isn't
       initialized by other means. Also set the encoding of
       stdin and stdout if these are terminals, unless overridden.  */

    if (!overridden || !Py_FileSystemDefaultEncoding) {
        ...
    }

I moved this out of sitecustomize, and it’s working better now. But did you know that a sitecustomize hook can’t safely use anything that depends on sys.getfilesystemencoding? I certainly didn’t, until it bit me.
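
A tiny demonstration of the pitfall, in case you want to see it for yourself (this is just a sketch of the failure mode on the affected Python 2.7 builds, not Launchpad’s fix):

# sitecustomize.py -- put this on sys.path of a Python 2.7 interpreter on Linux.
# When site.py imports it during start-up, the filesystem encoding has not been
# initialised yet, so code that relies on it (like load_plugin_translations
# above) misbehaves.
import sys

enc = sys.getfilesystemencoding()
sys.stderr.write("filesystem encoding during sitecustomize: %r\n" % (enc,))

if enc is None:
    # unicode(__file__, enc) would raise TypeError here -- exactly what broke
    # the bzrlib.plugins.svn import.  Deferring such work until after start-up
    # avoids the problem.
    pass

The same call from an already-running interpreter returns the expected locale encoding, which is what makes the problem so easy to miss.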

CryptogramGCHQ Found -- and Disclosed -- a Windows 10 Vulnerability

Now this is good news. The UK's National Cyber Security Centre (NCSC) -- part of GCHQ -- found a serious vulnerability in Windows Defender (their anti-virus component). Instead of keeping it secret and all of us vulnerable, it alerted Microsoft.

I'd like to believe the US does this, too.

Worse Than FailureCodeSOD: Titration Frustration

From submitter Christoph comes a function that makes your average regex seem not all that bad, actually:

According to "What is a Titration?" we learn that "a titration is a technique where a solution of known concentration is used to determine the concentration of an unknown solution." Since this is an often needed calculation in a laboratory, we can write a program to solve this problem for us.

Part of the solver is a formula parser, which needs to accept variable names (either lower or upper case letters), decimal numbers, and any of '+-*/^()' for mathematical operators. Presented here is the part of the code for the solveTitration() function that deals with parsing of the formula. Try to read it in an 80 chars/line window. Once with wrapping enabled, and once with wrapping disabled and horizontal scrolling. Enjoy!
String solveTitration(char *yvalue)
{
    String mreport;
    lettere = 0;
    //now we have to solve the system of equations
    //yvalue contains the equation of Y-axis variable
    String tempy = "";
    end = 1;
    mreport = "";
    String tempyval;
    String ptem;
    for (int i = 0; strlen(yvalue) + 1; ++i) {
        if (!(yvalue[i]=='q' || yvalue[i]=='w' || yvalue[i]=='e' 
|| yvalue[i]=='r' || yvalue[i]=='t' || yvalue[i]=='y' || yvalue[i]=='u' || 
yvalue[i]=='i' || yvalue[i]=='o' || yvalue[i]=='p' || yvalue[i]=='a' || 
yvalue[i]=='s' || yvalue[i]=='d' || yvalue[i]=='f' || yvalue[i]=='g' || 
yvalue[i]=='h' || yvalue[i]=='j' || yvalue[i]=='k' || yvalue[i]=='l' || 
yvalue[i]=='z' || yvalue[i]=='x' || yvalue[i]=='c' || yvalue[i]=='v' || 
yvalue[i]=='b' || yvalue[i]=='n' || yvalue[i]=='m' || yvalue[i]=='+' || 
yvalue[i]=='-' || yvalue[i]=='^' || yvalue[i]=='*' || yvalue[i]=='/' || 
yvalue[i]=='(' || yvalue[i]==')' || yvalue[i]=='Q' || yvalue[i]=='W' || 
yvalue[i]=='E' || yvalue[i]=='R' || yvalue[i]=='T' || yvalue[i]=='Y' || 
yvalue[i]=='U' || yvalue[i]=='I' || yvalue[i]=='O' || yvalue[i]=='P' || 
yvalue[i]=='A' || yvalue[i]=='S' || yvalue[i]=='D' || yvalue[i]=='F' || 
yvalue[i]=='G' || yvalue[i]=='H' || yvalue[i]=='J' || yvalue[i]=='K' || 
yvalue[i]=='L' || yvalue[i]=='Z' || yvalue[i]=='X' || yvalue[i]=='C' || 
yvalue[i]=='V' || yvalue[i]=='B' || yvalue[i]=='N' || yvalue[i]=='M' || 
yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || yvalue[i]=='4' || 
yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || yvalue[i]=='8' || 
yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || yvalue[i]==',')) {
            break; //if current value is not a permitted value, this means that something is wrong
        }
        if (yvalue[i]=='q' || yvalue[i]=='w' || yvalue[i]=='e' 
|| yvalue[i]=='r' || yvalue[i]=='t' || yvalue[i]=='y' || yvalue[i]=='u' || 
yvalue[i]=='i' || yvalue[i]=='o' || yvalue[i]=='p' || yvalue[i]=='a' || 
yvalue[i]=='s' || yvalue[i]=='d' || yvalue[i]=='f' || yvalue[i]=='g' || 
yvalue[i]=='h' || yvalue[i]=='j' || yvalue[i]=='k' || yvalue[i]=='l' || 
yvalue[i]=='z' || yvalue[i]=='x' || yvalue[i]=='c' || yvalue[i]=='v' || 
yvalue[i]=='b' || yvalue[i]=='n' || yvalue[i]=='m' || yvalue[i]=='Q' || 
yvalue[i]=='W' || yvalue[i]=='E' || yvalue[i]=='R' || yvalue[i]=='T' || 
yvalue[i]=='Y' || yvalue[i]=='U' || yvalue[i]=='I' || yvalue[i]=='O' || 
yvalue[i]=='P' || yvalue[i]=='A' || yvalue[i]=='S' || yvalue[i]=='D' || 
yvalue[i]=='F' || yvalue[i]=='G' || yvalue[i]=='H' || yvalue[i]=='J' || 
yvalue[i]=='K' || yvalue[i]=='L' || yvalue[i]=='Z' || yvalue[i]=='X' || 
yvalue[i]=='C' || yvalue[i]=='V' || yvalue[i]=='B' || yvalue[i]=='N' || 
yvalue[i]=='M' || yvalue[i]=='.' || yvalue[i]==',') {
            lettere = 1; //if lettere == 0 then the equation contains only mnumbers
        }
        if (yvalue[i]=='+' || yvalue[i]=='-' || yvalue[i]=='^' || 
yvalue[i]=='*' || yvalue[i]=='/' || yvalue[i]=='(' || yvalue[i]==')' || 
yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || yvalue[i]=='4' || 
yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || yvalue[i]=='8' || 
yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || yvalue[i]==',') {
            tempyval = tempyval + String(yvalue[i]);
        } else {
            tempy = tempy + String(yvalue[i]);
            for (int i = 0; i < uid.tableWidget->rowCount(); ++i) {
                TableItem *titem = uid.table->item(i, 0);
                TableItem *titemo = uid.table->item(i, 1);
                if (!titem || titem->text().isEmpty()) {
                    break;
                } else {
                    if (tempy == uid.xaxis->text()) {
                        tempyval = uid.xaxis->text();
                        tempy = "";
                    }
                    ... /* some code omitted here */
                    if (tempy!=uid.xaxis->text()) {
                        if (yvalue[i]=='+' || yvalue[i]=='-' 
|| yvalue[i]=='^' || yvalue[i]=='*' || yvalue[i]=='/' || yvalue[i]=='(' || 
yvalue[i]==')' || yvalue[i]=='1' || yvalue[i]=='2' || yvalue[i]=='3' || 
yvalue[i]=='4' || yvalue[i]=='5' || yvalue[i]=='6' || yvalue[i]=='7' || 
yvalue[i]=='8' || yvalue[i]=='9' || yvalue[i]=='0' || yvalue[i]=='.' || 
yvalue[i]==',') {
                            //actually nothing
                        } else {
                            end = 0;
                        }
                    }
                }
            }
        } // simbol end
        if (!tempyval.isEmpty()) {
            mreport = mreport + tempyval;
        }
        tempyval = "";
    }
    return mreport;
}
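
For contrast with the code above (and in the spirit of the introduction’s nod to regexes), the same character-class tests can be expressed in a couple of lines; a Python sketch, not a drop-in replacement for the C++ parser:

import re

# Characters the formula may contain at all: letters, digits, operators,
# parentheses and decimal separators.
ALLOWED = re.compile(r'^[A-Za-z0-9+\-^*/().,]*$')
# Roughly the "lettere" test from above (the original also counts '.' and ',').
HAS_LETTER = re.compile(r'[A-Za-z]')

def check_formula(formula):
    if not ALLOWED.match(formula):
        raise ValueError("formula contains an illegal character")
    return bool(HAS_LETTER.search(formula))   # True if not purely numeric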

Planet DebianJonathan Dowland: Containers lecture

I've repeated last year's docker lecture a couple of times recently, now revised and retitled "Introduction to Containers". The material is mostly the same; the demo steps exactly the same and I haven't produced any updated hand-outs this time (sorry). Revised slides: shorter version (terms: CC-BY-SA)

Whilst trying to introduce containers, the approach I've taken is to work up the history of Web site/server/app hosting, from physical hosting via Virtual Machines. This gives you the context for their popularity, but I find VMs are not the best way to explain container technology. I prefer to go the other way: look at a process on a multi-user system and the problems caused by the lack of isolation, and steadily build up the isolation available with tools like chroot, etc.

The other area I've tried to expand on is the orchestration layer on top of containers, and above, including technologies such as Kubernetes and Openshift. If I deliver this again I'd like to expand this material much more. On that note, a colleague recently forwarded a link to a Google research paper originally published in acmqueue in January 2016, Borg, Omega, and Kubernetes which is a great read on the history of containers in Google and what led up to the open sourcing of Kubernetes, their third iteration at designing a container orchestrator.

Planet DebianNorbert Preining: Japan-styled Christmas Cards

A friend of mine, Kimberlee Aliasgar of Trinidad and Tobago, has created very nice Japan-styled Christmas cards over at Xmascardsjapan (the Japanese version is here). They pick up some typical themes from Japan and turn them into lovely designed cards that are a present in themselves, no need for additional presents 😉

Here is another example with “Merry Christmas” written in Katakana, giving a nice touch.

In case you are interested, head over to the English version or the Japanese version of their web shop.

I hope you enjoy the cards!

Planet DebianColin Watson: Kitten Block equivalent for Firefox 57

I’ve been using Kitten Block for years, since I don’t really need the blood pressure spike caused by accidentally following links to certain UK newspapers. Unfortunately it hasn’t been ported to Firefox 57. I tried emailing the author a couple of months ago, but my email bounced.

However, if your primary goal is just to block the websites in question rather than seeing kitten pictures as such (let’s face it, the internet is not short of alternative sources of kitten pictures), then it’s easy to do with uBlock Origin. After installing the extension if necessary, go to Tools → Add-ons → Extensions → uBlock Origin → Preferences → My filters, and add www.dailymail.co.uk and www.express.co.uk, each on its own line. (Of course you can easily add more if you like.) Voilà: instant tranquility.

Incidentally, this also works fine on Android. The fact that it was easy to install a good ad blocker without having to mess about with a rooted device or strange proxy settings was the main reason I switched to Firefox on my phone.

,

Rondam RamblingsBook review: "A New Map for Relationships: Creating True Love at Home and Peace on the Planet" by Dorothie and Martin Hellman

We humans dream of meeting our soul mates, someone to be Juliet to our Romeo, Harry to our Sally, Jobs to our Woz, Larry to our Sergey.  Sadly, the odds are stacked heavily against us.  If you do the math you will find that in a typical human lifetime we can only hope to meet a tiny fraction of our 7 billion fellow humans.  And if you factor in the time it takes to properly vet someone to see if

Krebs on SecurityThe Market for Stolen Account Credentials

Past stories here have explored the myriad criminal uses of a hacked computer, the various ways that your inbox can be spliced and diced to help cybercrooks ply their trade, and the value of a hacked company. Today’s post looks at the price of stolen credentials for just about any e-commerce, bank site or popular online service, and provides a glimpse into the fortunes that an enterprising credential thief can earn selling these accounts on consignment.

Not long ago in Internet time, your typical cybercriminal looking for access to a specific password-protected Web site would most likely visit an underground forum and ping one of several miscreants who routinely leased access to their “bot logs.”

These bot log sellers were essentially criminals who ran large botnets (collections of hacked PCs) powered by malware that can snarf any passwords stored in the victim’s Web browser or credentials submitted into a Web-based login form. For a few dollars in virtual currency, a ne’er-do-well could buy access to these logs, or else he and the botmaster would agree in advance upon a price for any specific account credentials sought by the buyer.

Back then, most of the stolen credentials that a botmaster might have in his possession typically went unused or unsold (aside from the occasional bank login that led to a juicy high-value account). Indeed, these plentiful commodities held by the botmaster for the most part were simply not a super profitable line of business and so went largely wasted, like bits of digital detritus left on the cutting room floor.

But oh, how times have changed! With dozens of sites in the underground now competing to purchase and resell credentials for a variety of online locations, it has never been easier for a botmaster to earn a handsome living based solely on the sale of stolen usernames and passwords alone.

If the old adage about a picture being worth a thousand words is true, the one directly below is priceless because it illustrates just how profitable the credential resale business has become.

This screen shot shows the earnings panel of a crook who sells stolen credentials for hundreds of Web sites to a dark web service that resells them. This botmaster only gets paid when someone buys one of his credentials. So far this year, customers of this service have purchased more than 35,000 credentials he’s sold to this service, earning him more than $288,000 in just a few months.

The image shown above is the wholesaler division of “Carder’s Paradise,” a bustling dark web service that sells credentials for hundreds of popular Web destinations. The screen shot above is an earnings panel akin to what you would see if you were a seller of stolen credentials to this service — hence the designation “Seller’s Paradise” in the upper left hand corner of the screen shot.

This screen shot was taken from the logged-in account belonging to one of the more successful vendors at Carder’s Paradise. We can see that in just the first seven months of 2017, this botmaster sold approximately 35,000 credential pairs via the Carder’s Paradise market, earning him more than $288,000. That’s an average of $8.19 for each credential sold through the service.

Bear in mind that this botmaster only makes money based on consignment: Regardless of how much he uploads to Seller’s Paradise, he doesn’t get paid for any of it unless a Carder’s Paradise customer chooses to buy what he’s selling.

Fortunately for this guy, almost 9,000 different customers of Carder’s Paradise chose to purchase one or more of his username and password pairs. It was not possible to tell from this seller’s account how many credential pairs in total he contributed to this service that went unsold, but it’s a safe bet that it was far more than 35,000.

[A side note is in order here because there is some delicious irony in the backstory behind the screenshot above: The only reason a source of mine was able to share it with me was because this particular seller re-used the same email address and password across multiple unrelated cybercrime services].

Based on the prices advertised at Carder’s Paradise (again, Carder’s Paradise is the retail/customer side of Seller’s Paradise) we can see that the service on average pays its suppliers about half what it charges customers for each credential. The average price of a credential for more than 200 different e-commerce and banking sites sold through this service is approximately $15.

Part of the price list for credentials sold at this dark web ID theft site.

Indeed, fifteen bucks is exactly what it costs to buy stolen logins for airbnb.com, comcast.com, creditkarma.com, logmein.com and uber.com. A credential pair from AT&T Wireless — combined with access to the victim’s email inbox — sells for $30.

The most expensive credentials for sale via this service are those for the electronics store frys.com ($190). I’m not sure why these credentials are so much more expensive than the rest, but it may be because thieves have figured out a reliable and very profitable way to convert stolen frys.com customer credentials into cash.

Usernames and passwords to active accounts at military personnel-only credit union NavyFederal.com fetch $60 apiece, while credentials to various legal and data aggregation services from Thomson Reuters properties command a $50 price tag.

The full price list of credentials for sale by this dark web service is available in this PDF. For CSV format, see this link. Both lists are sorted alphabetically by Web site name.

This service doesn’t just sell credentials: It also peddles entire identities — indexed and priced according to the unwitting victim’s FICO score. An identity with a perfect credit score (850) can demand as much as $150.

Stolen identities with high credit scores fetch higher prices.

And of course this service also offers the ability to pull full credit reports on virtually any American — from all three major credit bureaus — for just $35 per bureau.

It costs $35 through this service to pull someone’s credit file from the three major credit bureaus.

Plenty of people began freaking out earlier this year after a breach at big-three credit bureau Equifax jeopardized the Social Security Numbers, dates of birth and other sensitive data on more than 145 million Americans. But as I have been trying to tell readers for many years, this data is broadly available for sale in the cybercrime underground on a significant portion of the American populace.

If the threat of identity theft has you spooked, place a freeze on your credit file and on the file of your spouse (you may even be able to do this for your kids). Credit monitoring is useful for letting you know when someone has stolen your identity, but these services can’t be counted on to stop an ID thief from opening new lines of credit in your name.

They are, however, useful for helping to clean up identity theft after-the-fact. This story is already too long to go into the pros and cons of credit monitoring vs. freezes, so I’ll instead point to a recent primer on the topic and urge readers to check it out.

Finally, it’s a super bad idea to re-use passwords across multiple sites. KrebsOnSecurity this year has written about multiple, competing services that sell or sold access to billions of usernames and passwords exposed in high profile data breaches at places like Linkedin, Dropbox and Myspace. Crooks pay for access to these stolen credential services because they know that a decent percentage of Internet users recycle the same password at multiple sites.

One alternative to creating and remembering strong, lengthy and complex passwords for every important site you deal with is to outsource this headache to a password manager.  If the online account in question allows 2-factor authentication (2FA), be sure to take advantage of that.

Two-factor authentication makes it much harder for password thieves (or their customers) to hack into your account just by stealing or buying your password: If you have 2FA enabled, they also would need to hack that second factor (usually your mobile device) before being able to access your account. For a list of sites that support 2FA, check out twofactorauth.org.

Planet DebianElena 'valhalla' Grandi: New note by valhalla

www.eyrie.org/~eagle/journal/2 Includes a description of why keeping an FTP service alive is nowadays enough of a hassle that it's not really worth doing any longer.

CryptogramLessons Learned from the Estonian National ID Security Flaw

Estonia recently suffered a major flaw in the security of their national ID card. This article discusses the fix and the lessons learned from the incident:

In the future, the infrastructure dependency on one digital identity platform must be decreased, the use of several alternatives must be encouraged and promoted. In addition, the update and replacement capacity, both remote and physical, should be increased. We also recommend the government to procure the readiness to act fast in force majeure situations from the eID providers. While deciding on the new eID platforms, the need to replace cryptographic primitives must be taken into account -- particularly the possibility of the need to replace algorithms with those that are not even in existence yet.

Worse Than FailurePromising Equality

One can often hear the phrase, “modern JavaScript”. This is a fig leaf, meant to cover up a sense of shame, for JavaScript has a bit of a checkered past. It started life as a badly designed language, often delivering badly conceived features. It has a reputation for slowness, crap code, and things that make you go “wat?”

Thus, “modern” JavaScript. It’s meant to be a promise that we don’t write code like that any more. We use the class keyword and transpile from TypeScript and write fluent APIs and use promises. Yes, a promise to use promises.

Which brings us to Dewi W, who just received some code from contractors. It has some invocations that look like this:

safetyLockValidator(inputValue, targetCode).then(function () {
        // You entered the correct code.
}).catch(function (err) {
        // You entered the wrong code.
})

The use of then and catch, in this context, tells us that they’re using a Promise, presumably to wrap up some asynchronous operation. When the operation completes successfully, the then callback fires, and if it fails, the catch callback fires.

But, one has to wonder… what exactly is safetyLockValidator doing?

safetyLockValidator = function (input, target) {
        return new Promise(function (resolve, reject) {
                if (input === target)
                        return resolve()
                else return reject('Wrong code');
        })
};

It’s just doing an equality test. If input equals target, the Promise resolve()s, i.e. completes successfully. Otherwise, it reject()s. Well, at least it’s future-proofed against the day we switch to using an EaaS platform: “Equality as a Service”.


Planet Linux AustraliaColin Charles: Percona Live Santa Clara 2018 CFP

Percona Live Santa Clara 2018 call for papers ends fairly soon — December 22 2017. It may be extended, but I suggest getting a topic in ASAP so the conference committee can view everything fairly and quickly. Remember this conference is bigger than just MySQL, so please submit topics on MongoDB, other databases like PostgreSQL, time series, etc., and of course MySQL.

What are you waiting for? Submit TODAY!
(It goes without saying that speakers get a free pass to attend the event.)

Don Martiquick question on tracking protection

One quick question for anyone who still isn't convinced that tracking protection needs to be a high priority for web browsers in 2018. Web tracking isn't just about items from your online shopping cart following you to other sites. Users who are vulnerable to abusive practices for health or other reasons have tracking protection needs too.

Screenshot from the American Cancer Society site, showing 24 web trackers

Who has access to the data from each of the 24 third-party trackers that appear on the American Cancer Society's Find Cancer Treatment and Support page, and for what purposes can they use the data?

,

Planet DebianRuss Allbery: End of an FTP era

I just turned off anonymous FTP service on ftp.eyrie.org.

It's bittersweet, since I've been running an anonymous FTP server since some time around 1996 (longer than HTTP has been a widely-used thing), and at ftp.eyrie.org for nearly that long. The original service was wu-ftpd, as one did at the time, but it's been vsftpd for the past decade plus. (Amusingly, I now work for the author of vsftpd.)

All of the data is still there, at archives.eyrie.org as has been the case for more than a decade. I doubt anyone but me and a few people with ancient bookmarks will ever notice. The whole world switched over to HTTP many years ago, and about the only thing that ever connected to the anonymous FTP server was search engines. I was keeping it running out of nostalgia.

Explaining why I finally pulled the plug requires a bit of background on the FTP protocol. Many of those reading this may already be familiar, but I bet some people aren't, and it's somewhat interesting. The short version is that FTP is a very old protocol from a much different era of the Internet, and it does things in some very odd ways that are partly incompatible with modern networking.

FTP uses two separate network connections between the client and server: a control channel and a data channel. The client sends commands to the server (directory navigation and file upload and download commands, for example) over the control channel. Any data, including directory listings, is sent over the data channel, instead of in-line in the control channel the way almost every other protocol works.

One way to do the data transfer is for the client to send a PORT command to the server before initiating a data transfer, telling the server the local port on which the client was listening. The FTP server would then connect back to the client on that port, using a source port of 20, to send the data. This is called active mode.

This, of course, stopped working as soon as NAT and firewalls became part of networking and servers couldn't connect to clients. (It also has some security issues. Search for FTP bounce attack if you're curious.) Nearly the entire FTP world therefore switched to a different mechanism: passive mode. (This was in the protocol from very early on, but extremely old FTP servers sometimes didn't support it.) In this mode, the client would send the PASV command (EPSV in later versions with IPv6 support), and the server would respond with the ephemeral port on the server to use for data transfer. The client would then open a second connection to the server on that port for the data transfer.
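To make that concrete, here is a made-up example of what the passive-mode handshake looks like on the control channel (the address and numbers are purely illustrative):

PASV
227 Entering Passive Mode (192,0,2,10,195,149)

The six numbers in the reply are the server's IP address followed by the data port split into two bytes, so here the client would open its data connection to 192.0.2.10 on port 195 * 256 + 149 = 50069.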

Everything is now fine for the client: it just opens multiple connections to the same server on different ports. The problem is the server firewall. On the modern Internet, you don't want to allow any host on the Internet to open connections to arbitrary ports on the server, even ephemeral ports, for defense in depth against exposing some random service that happens to be running on that port. In standard FTP implementations, there's also no authentication binding between the ports, so some other client could race a client to its designated data port.

You therefore need some way to tell the firewall to allow a client to connect to its provided data port, but not any other port. With iptables, this is done by using the conntrack module and a related port rule. A good implementation has to look inside the contents of the control channel traffic and look for the reply to a PASV or EPSV command to extract the port number. The related port rule will then allow connections to that port from the client for as long as the main control channel lasts.

This has mostly worked for some time, but it's complicated, requires loading several other kernel modules to do this packet inspection, and requires using conntrack, which itself causes issues for some servers because it has to maintain a state table of open connections that has a limited size in the kernel. This conntrack approach also has other security issues around matching the wrong protocol (there's a ton of good information in this article), so modern Linux kernels require setting up special raw iptables rules to enable the correct conntrack helper. I got this working briefly in Debian squeeze with a separate ExecStartPre command for vsftpd to set up the iptables magic, but then it stopped working again for some reason that I never diagnosed.
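For anyone curious what that iptables magic looks like, a minimal sketch is below. This is my illustration of the general approach rather than Russ's actual configuration, and it omits any site-specific policy:

# load the FTP connection-tracking helper
modprobe nf_conntrack_ftp
# on modern kernels, explicitly assign the helper to FTP control traffic via the raw table
iptables -t raw -A PREROUTING -p tcp --dport 21 -j CT --helper ftp
# accept the control channel itself
iptables -A INPUT -p tcp --dport 21 -j ACCEPT
# accept data connections that conntrack has marked as RELATED to an existing control channel
iptables -A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT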

I probably could get this working again by digging deeper into how the complex conntrack machinery works, but on further reflection, I decided to just turn the service off. It's had a good run, I don't think anyone uses it, and while this corner of Linux networking is moderately interesting, I don't have the time to invest in staying current. So I've updated all of my links to point to HTTP instead and shut the server down today.

Goodbye FTP! It's been a good run, and I'll always have a soft spot in my heart for you.

Planet DebianDirk Eddelbuettel: littler 0.3.3


The fourth release of littler as a CRAN package is now available, continuing the now more than ten-year history of a package started by Jeff in 2006 and joined by me a few weeks later.

littler is the first command-line interface for R and predates Rscript. In my very biased eyes it is better as it allows for piping as well as shebang scripting via #!, uses command-line arguments more consistently, and still starts faster. Last but not least, it is also less silly than Rscript and always loads the methods package, avoiding those bizarro bugs between code running in R itself and a scripting front-end.

littler prefers to live on Linux and Unix, has its difficulties on OS X due to yet-another-braindeadedness there (who ever thought case-insensitive filesystems were a good idea?) and simply does not exist on Windows (yet -- the build system could be extended -- see RInside for an existence proof, and volunteers welcome!).

A few examples are highlighted at the GitHub repo.

This release brings a few new example scripts, extends a few existing ones and also includes two fixes thanks to Carl. Again, no internals were changed. The NEWS file entry is below.

Changes in littler version 0.3.3 (2017-12-17)

  • Changes in examples

    • The script installGithub.r now correctly uses the upgrade argument (Carl Boettiger in #49).

    • New script pnrrs.r to call the package-native registration helper function added in R 3.4.0

    • The script install2.r now has more robust error handling (Carl Boettiger in #50).

    • New script cow.r to use R Hub's check_on_windows

    • Scripts cow.r and c4c.r use #!/usr/bin/env r

    • New option --fast (or -f) for scripts build.r and rcc.r for faster package build and check

    • The build.r script now defaults to using the current directory if no argument is provided.

    • The RStudio getters now use the rvest package to parse the webpage with available versions.

  • Changes in package

    • Travis CI now uses https to fetch script, and sets the group

Courtesy of CRANberries, there is a comparison to the previous release. Full details for the littler release are provided as usual at the ChangeLog page. The code is available via the GitHub repo, from tarballs off my littler page and the local directory here -- and now of course all from its CRAN page and via install.packages("littler"). Binary packages are available directly in Debian, and soon also via Ubuntu binaries at CRAN thanks to the tireless Michael Rutter.

Comments and suggestions are welcome at the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianLars Wirzenius: The proof is in the pudding

I wrote these when I woke up one night and had trouble getting back to sleep, and spent a while in a very philosophical mood thinking about life, success, and productivity as a programmer.

Imagine you're developing a piece of software.

  • You don't know it works, unless you've used it.

  • You don't know it's good, unless people tell you it is.

  • You don't know you can do it, unless you've already done it.

  • You don't know it can handle a given load, unless you've already tried it.

  • The real bottlenecks are always a surprise, the first time you measure.

  • It's not ready for production until it's been used in production.

  • Your automated tests always miss something, but with only manual tests, you always miss more.

Don MartiForbidden words

You know how the US government's Centers for Disease Control and Prevention is now forbidden from using certain words?

vulnerable
entitlement
diversity
transgender
fetus
evidence-based
science-based

(source: Washington Post)

Well, in order to help slow down the spread of political speech enforcement that is apparently stopping all of us cool innovator type people from saying the Things We Can't Say, here's a Git hook to make sure that every time you blog, you include at least one of the forbidden words.

If you blog without including one of the forbidden words, you're obviously internalizing censorship and need more freedom, which you can maybe get by getting out of California for a while. After all, a lot of people here seem to think that "innovation" is building more creepy surveillance as long as you call it "growth hacking" or writing apps to get members of the precariat to do the stuff that your Mom used to do for you.

You only have to include one forbidden word every time you commit a blog entry, not in every file. You only need forbidden words in blog entries, not in scripts or templates. You can always get around the forbidden word check with the --no-verify command-line option.
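To illustrate the idea, a pre-commit hook along these lines could be as small as the sketch below. This is not Don's actual script (that one is linked below), and the path pattern for blog entries is just a placeholder:

#!/bin/sh
# pre-commit: refuse a commit of blog entries that contains none of the forbidden words
WORDS='vulnerable|entitlement|diversity|transgender|fetus|evidence-based|science-based'
# look only at staged blog entries; adjust the path pattern to your site layout
entries=$(git diff --cached --name-only --diff-filter=ACM -- 'posts/*')
[ -z "$entries" ] && exit 0
if git diff --cached -- $entries | grep -Eiq "$WORDS"; then
    exit 0
fi
echo "No forbidden word found in this entry; add one or commit with --no-verify." >&2
exit 1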

Suggestions and pull requests welcome; the script is on GitHub.

Planet DebianPetter Reinholdtsen: Cura, the nice 3D print slicer, is now in Debian Unstable

After several months of working and waiting, I am happy to report that the nice and user friendly 3D printer slicer software Cura just entered Debian Unstable. It consists of six packages: cura, cura-engine, libarcus, fdm-materials, libsavitar and uranium. The last two, uranium and cura, entered Unstable yesterday. This should make it easier for Debian users to print on at least the Ultimaker class of 3D printers. My nearest 3D printer is an Ultimaker 2+, so it will make life easier for at least me. :)

The work to make this happen was done by Gregor Riepl, and I was happy to assist him in sponsoring the packages. With the introduction of Cura, Debian is up to three 3D printer slicers at your service, Cura, Slic3r and Slic3r Prusa. If you own or have access to a 3D printer, give it a go. :)

The 3D printer software is maintained by the 3D printer Debian team, flocking together on the 3dprinter-general mailing list and the #debian-3dprinting IRC channel.

The next step for Cura in Debian is to update the cura package to version 3.0.3 and then update the entire set of packages to version 3.1.0 which showed up the last few days.

,

Planet DebianRuss Allbery: Saving window position in Xfce session

TLDR: If you're having problems saving window position in your Xfce session, enable save on logout and then log out and back in. This will probably fix the problem (permanently, if you like keeping the same session and turn saving back off again). See below for the details.

I've been using Xfce for my desktop for some years now, and have had a recurring problem with saved sessions after a reboot. After logging in, all the applications from my saved session would be started, but all the workspace and window positioning data would be lost, so they'd just pile onto the default workspace like a train wreck.

Various other people on-line have reported this over the years (there are open bugs in Ubuntu, Xfce, and Red Hat bug trackers), and there was apparently a related bug fixed in Xfce 4.10, but I'm using 4.12. I would have given up (and have several times in the past), except that on one of my systems this works correctly. All the windows go back to their proper positions.

Today, I dug into the difference and finally solved it. Here it is, in case someone else stumbles across it.

Some up-front caveats that are or may be related:

  1. I rarely log out of my Xfce session, since this is a single-user laptop. I hibernate and keep restoring until I decide to do a reboot for kernel patches, or (and this is somewhat more likely) some change to the system invalidates the hibernate image and the system hangs on restore from hibernate and I force-reboot it. I also only sometimes use the Xfce toolbar to do a reboot; often, I just run reboot.

  2. I use xterm and Emacs, which are not horribly sophisticated X applications and which don't remember their own window positioning.

Xfce stores sessions in .cache/sessions in your home directory. The key discovery on close inspection is that there were two types of files in that directory on the working system, and only one on the non-working system.

The typical file will have a name like xfce4-session-hostname:0 and contains things like:

Client9_ClientId=2a654109b-e4d0-40e4-a910-e58717faa80b
Client9_Hostname=local/hostname
Client9_CloneCommand=xterm
Client9_RestartCommand=xterm,-xtsessionID,2a654109b-e4d0-40e4-a910-e58717faa80b
Client9_Program=xterm
Client9_UserId=user

This is the file that remembers all of the running applications. If you go into Settings -> Session and Startup and clear the session cache, files like this will be deleted. If you save your current session, a file like this will be created. This is how Xfce knows to start all of the same applications. But notice that nothing in the above preserves the positioning of the window. (I went down a rabbit hole thinking the session ID was somehow linking to that information elsewhere, but it's not.)

The working system had a second type of file in that directory named xfwm4-2d4c9d4cb-5f6b-41b4-b9d7-5cf7ac3d7e49.state. Looking in that file reveals entries like:

[CLIENT] 0x200000f
  [CLIENT_ID] 2a9e5b8ed-1851-4c11-82cf-e51710dcf733
  [CLIENT_LEADER] 0x200000f
  [RES_NAME] xterm
  [RES_CLASS] XTerm
  [WM_NAME] xterm
  [WM_COMMAND] (1) "xterm"
  [GEOMETRY] (860,35,817,1042)
  [GEOMETRY-MAXIMIZED] (860,35,817,1042)
  [SCREEN] 0
  [DESK] 2
  [FLAGS] 0x0

Notice the geometry and desk, which are exactly what we're looking for: the window location and the workspace it should be on. So the problem with window position not being saved was the absence of this file.

After some more digging, I discovered that while the first file is saved when you explicitly save your session, the second is not. However, it is saved on logout. So, I went to Settings -> Session and Startup and enabled automatically save session on logout in the General tab, logged out and back in again, and tada, the second file appeared. I then turned saving off again (since I set up my screens and then save them and don't want any subsequent changes saved unless I do so explicitly), and now my window position is reliably restored.

This also explains why some people see this and others don't: some people probably regularly use the Log Out button, and others ignore it and manually reboot (or just have their system crash).

Incidentally, this sort of problem, and the amount of digging that I had to do to solve it, is the reason why I'm in favor of writing man pages or some other documentation for every state file your software stores. Not only does it help people digging into weird problems, it helps you as the software author notice surprising oddities, like splitting session state across two separate state files, when you go to document them for the user.

Planet DebianSteve Kemp: IoT radio: Still in-progress ..

So back in September I was talking about building an IoT radio, and after that I switched to talking about tracking aircraft via software-defined radio. Perhaps it's time for a followup.

So my initial attempt at an IoT radio was designed around the RDA5807M module. Frustratingly the damn thing was too small to solder easily! Once I did get it working though I found that either the specs lied to me, or I'd misunderstood them: It wouldn't drive headphones, and performance was poor. (Though amusingly the first time I got it working I managed to tune to Helsinki's rock-station, and the first thing I heard was Rammstein's Amerika.)

I made another attempt with an Si4703-based "evaluation board". This was a board which had most of the stuff wired in, so all you had to do was connect an MCU to it, and do the necessary software dancing. There was a headphone-socket for output, and no need to fiddle with the chip itself, it was all pretty neat.

Unfortunately the evaluation board was perfect for basic use, but not at all suitable for real use. The board did successfully output audio to a pair of headphones, but unfortunately it required the use of headphones, as the cable would be treated as an antenna. As soon as I fed the output of the headphone-jack to an op-amp to drive some speakers I was beset with the kind of noise that makes old people reminisce about how music was better back in their day.

So I'm now up to round 3. I have a TEA5767-based project in the works, which should hopefully resolve my problems:

  • There are explicit output and aerial connections.
  • I know I'll need an amplifier.
  • The hardware is easy to control via arduino/esp8266 MCUs.
    • Numerous well-documented projects exist using this chip.

The only downside I can see is that I have to use the op-amp for volume control too - the TEA5767-chip allows you to mute/unmute via software but doesn't allow you to set the volume. Probably for the best.

In unrelated news I've got some e-paper which is ESP8266/arduino controlled. I have no killer-app for it, but it's pretty great. I should write that up sometime.

Rondam RamblingsHere comes the next west coast mega-drought

As long as I'm blogging about extreme weather events I would also like to remind everyone that we just came off of the longest drought in California history, followed immediately by the wettest rainy season in California history.  Now it looks like that history could very well be starting to repeat itself.  The weather pattern that caused the six-year-long Great Drought is starting to form again.

Rondam RamblingsThis should convince the climate skeptics. But it probably won't.

One of the factoids that climate-change denialists cling to is the fact (and it is a fact) that major storms haven't gotten measurably worse.  The damage from storms has gotten measurably worse, but that can be attributed to increased development on coastlines.  It might be that the storms themselves have gotten worse, but the data is not good enough to disentangle the two effects. But storms

Planet DebianDirk Eddelbuettel: drat 0.1.4


A new version of drat just arrived on CRAN as another no-human-can-delay-this automatic upgrade directly from the CRAN prechecks (though I did need a manual reminder from Uwe to remove a now stale drat repo URL -- bad @hrbrmstr -- from the README in a first attempt).

This release is mostly the work of Neal Fultz who kindly sent me two squeaky-clean pull requests addressing two open issue tickets. As drat is reasonably small and simple, that was enough to motivate a quick release. I also ensured that PACKAGES.rds will always be committed along (if we're in commit mode), which is a follow-up to an initial change from 0.1.3 in September.

drat stands for drat R Archive Template, and helps with easy-to-create and easy-to-use repositories for R packages. Since its inception in early 2015 it has found reasonably widespread adoption among R users because repositories with marked releases are the better way to distribute code.

The NEWS file summarises the release as follows:

Changes in drat version 0.1.4 (2017-12-16)

  • Changes in drat functionality

    • Binaries for macOS are now split by R version into two different directories (Neal Fultz in #67 addressing #64).

    • The target branch can now be set via a global option (Neal Fultz in #68 addressing #61).

    • In commit mode, add file PACKAGES.rds unconditionally.

  • Changes in drat documentation

    • Updated 'README.md' removing another stale example URL

Courtesy of CRANberries, there is a comparison to the previous release. More detailed information is on the drat page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Cory DoctorowTalking Walkaway on the Barnes and Noble podcast

I recorded this interview last summer at San Diego Comic-Con; glad to hear it finally live!

Authors are, without exception, readers, and behind every book there is…another book, and another. In this episode of the podcast, we’re joined by two writers for conversations about the vital books and ideas that influence and inform their own work. First, Cory Doctorow talks with B&N’s Josh Perilo about his recent novel of an imagined near future, Walkaway, and the difference between a dystopia and a disaster. Then we hear from Will Schwalbe, talking with Miwa Messer about the lifetime of reading behind his book Books for Living: Some Thoughts on Reading, Reflecting, and Embracing Life.


Hubert Vernon Rudolph Clayton Irving Wilson Alva Anton Jeff Harley Timothy Curtis Cleveland Cecil Ollie Edmund Eli Wiley Marvin Ellis Espinoza—known to his friends as Hubert, Etc—was too old to be at that Communist party.


But after watching the breakdown of modern society, he really has nowhere left to be—except amongst the dregs of disaffected youth who party all night and heap scorn on the sheep they see on the morning commute. After falling in with Natalie, an ultra-rich heiress trying to escape the clutches of her repressive father, the two decide to give up fully on formal society—and walk away.


After all, now that anyone can design and print the basic necessities of life—food, clothing, shelter—from a computer, there seems to be little reason to toil within the system.


It’s still a dangerous world out there, the empty lands wrecked by climate change, dead cities hollowed out by industrial flight, shadows hiding predators animal and human alike. Still, when the initial pioneer walkaways flourish, more people join them. Then the walkaways discover the one thing the ultra-rich have never been able to buy: how to beat death. Now it’s war – a war that will turn the world upside down.


Fascinating, moving, and darkly humorous, Walkaway is a multi-generation SF thriller about the wrenching changes of the next hundred years…and the very human people who will live their consequences.

Planet DebianDaniel Lange: IMAPFilter 2.6.11-1 backport for Debian Jessie AMD64 available

One of the perks you get as a Debian Developer is a @debian.org email address. And because Debian is old and the Internet used to be a friendly place, this email address is plastered all over the Internet. So you get email spam, a lot of spam.

I'm using a combination of server- and client-side filtering to keep spam at bay. Unfortunately the IMAPFilter version in Debian Jessie doesn't even support "dry run" (-n) which is not so cool when developing complex filter rules. So I backported the latest (sid) version and agreed with Sylvestre Ledru, one of its maintainers, to share it here and see whether making an official backport is worth it. It's a straight recompile so no magic and no source code or packaging changes required.

Get it while it's hot:

imapfilter_2.6.11-1~bpo8+1_amd64.deb (IMAPFilter Jessie backport)
SHA1: bedb9c39e576a58acaf41395e667c84a1b400776
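If it helps, installing the backport and doing a dry run with the -n flag mentioned above looks roughly like this (the config path is just the default location):

$ sha1sum imapfilter_2.6.11-1~bpo8+1_amd64.deb    ## compare against the checksum above
$ sudo dpkg -i imapfilter_2.6.11-1~bpo8+1_amd64.deb
$ imapfilter -n -c ~/.imapfilter/config.lua       ## dry run the filter rules first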

Clever LUA snippets for ~/.imapfilter/config.lua appreciated.

Planet DebianDirk Eddelbuettel: digest 0.6.13

A small maintenance release, version 0.6.13, of the digest package arrived on CRAN and in Debian yesterday.

digest creates hash digests of arbitrary R objects (using the 'md5', 'sha-1', 'sha-256', 'crc32', 'xxhash' and 'murmurhash' algorithms) permitting easy comparison of R language objects.

This release accommodates a request by Luke and Tomas to make the version argument of serialize() an argument to digest() too, which was easy enough to do. The value 2L is the current default (and for now the only permitted value). The ALTREP changes in R 3.5 will bring us a new, and more powerful, format with value 3L. The value can be set in each call, or globally via options(). Other than that, we just clarified one aspect of raw vector usage in the manual page.

CRANberries provides the usual summary of changes to the previous version.

For questions or comments use the issue tracker off the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianMichael Prokop: Usage of Ansible for Continuous Configuration Management

It all started with a tweet of mine:

Screenshot of https://twitter.com/mikagrml/status/941304704004448257

I received quite some feedback since then and I’d like to iterate on this.

I’ve been a puppet user since ~2008, and since ~2015 ansible has also been part of my sysadmin toolbox. Recently certain ansible setups I’m involved in have grown faster than I’d like to see, both in terms of managed hosts/services and the size of the ansible playbooks. I like ansible for ad hoc tasks, like `ansible -i ansible_hosts all -m shell -a 'lsb_release -rs'` to get an overview of what distribution releases systems are running, requiring only a working SSH connection and python on the client systems. ansible-cmdb provides a nice and simple-to-use ad hoc host overview without much effort and overhead. I even have puppetdb_to_ansible scripts to query a puppetdb via its API and generate host lists for usage with ansible on-the-fly. Ansible certainly has its use cases for e.g. bootstrapping systems, orchestration and handling deployments.

Ansible has an easier learning curve than e.g. puppet and this might seem to be the underlying reason for its usage for tasks it’s not really good at. To be more precise: IMO ansible is a bad choice for continuous configuration management. Some observations, though YMMV:

  • ansible’s vaults are no real replacement for something like puppet’s hiera (though Jerakia might mitigate at least the pain regarding data lookups)
  • ansible runs are slow, and get slower with every single task you add
  • having a push model with ansible instead of pull (like puppet’s agent mode) implies you don’t get/force regular runs all the time, and your ansible playbooks might just not work anymore once you (have to) touch them again
  • the lack of a DSL results in e.g. each single package management having its own module (apt, dnf, yum,….), having too many ways how to do something, resulting more often than not in something I’d tend to call spaghetti code
  • the lack of community modules comparable to Puppet’s Forge
  • the lack of a central DB (like puppetdb) means you can’t do something like with puppet’s exported resources, which is useful e.g. for central ssh hostkey handling, monitoring checks,…
  • the lack of a resources DAG in ansible might look like a welcome simplification in the beginning, but its absence is becoming a problem when complexity and requirements grow (example: delete all unmanaged files from a directory)
  • it’s not easy at all to have ansible run automated and remotely on a couple of hundred hosts without stumbling over anything — Rudolph Bott
  • as complexity grows, the limitations of Ansible’s (lack of a) language become more maddening — Felix Frank

Let me be clear: I’m in no way saying that puppet doesn’t have its problems (side-rant: it took way too long until Debian/stretch was properly supported by puppet’s AIO packages). I had and still have all my ups and downs with it, though in 2017 and especially since puppet v5 it works fine enough for all my use cases at a diverse set of customers. Whenever I can choose between puppet and ansible for continuous configuration management (without having any host-specific restrictions like unsupported architectures, memory limitations,… that puppet wouldn’t properly support) I prefer puppet. Ansible can and does exist as a nice addition next to puppet for me, even if MCollective/Choria is available. Ansible has its use cases, just not for continuous configuration management for me.

The hardest part is to leave a tool behind once you’ve reached the end of its scale. Once you feel like a tool takes more effort than it is worth, you should take a step back and re-evaluate your choices. And quoting Felix Frank:

OTOH, if you bend either tool towards a common goal, you’re not playing to its respective strengths.

Thanks: Michael Renner and Christian Hofstaedtler for initial proof reading and feedback

CryptogramFriday Squid Blogging: Baby Sea Otters Prefer Shrimp to Squid

At least, this one does.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBen Saunders’ solo crossing of Antarctica, and more news from TED speakers

As usual, the TED community has lots of news to share this week. Below, some highlights.

A solo crossing of Antarctica. With chilling detail, Ben Saunders documents his journey across Antarctica as he attempts to complete the first successful solo, unsupported and unassisted crossing. The journey is a way of honoring his friend Henry Worsley, who died attempting a similar crossing last year. Battling intense winds, Saunders writes of his experiences trekking through the hills, the cold and the ice, of the weight he carries, and even of the moments he’s missing back home, as he wishes his dear friends a jolly and fun wedding day. (Watch Saunders’ TED Talk)

The dark side of AI. A chilling new video, “Slaughterbots,” gives viewers a glimpse into a dystopian future where people can be targeted and killed by strangers using autonomous weapons simply for having dissenting opinions. This viral video was the brainchild of TED speaker Stuart Russell and a coalition of AI researchers and advocacy organizations. The video warns viewers that while AI has the potential to solve many of our problems, the dangers of AI weapons must be addressed first. “We have an opportunity to prevent the future you just saw,” Stuart states at the end of the video, “but the window to act is closing fast.” (Watch Russell’s TED Talk)

Corruption investigators in paradise. Charmian Gooch and her colleagues at Global Witness have been poring over the Paradise Papers, a cache of 13.4 million files released by the International Consortium of Investigative Journalists that detail the secret world of offshore financial deals. With the 2014 TED Prize, Gooch wished to end anonymously owned companies, and the Paradise Papers show how this business structure can be used to nefarious ends. Check out Global Witness’ report on how the commodities company Glencore appears to have funneled $45 million to a notorious billionaire middleman in the Democratic Republic of the Congo to help them negotiate mining rights. And their look at how a US-based bank helped one of Russia’s richest oligarchs register a private jet, despite his company being on US sanctions lists. (Watch Gooch’s TED Talk)

A metric for measuring corporate vitality. Martin Reeves, director of the Henderson Institute at BCG, and his colleagues have taken his idea that strategies need strategies and expanded it into the creation of the Fortune Future 50, a categorization of companies based on more than financial data. Companies are divided into “leaders” and “challengers,” with the former having a market capitalization over $20 billion as of fiscal year 2016 and the latter including startups with a market capitalization below $20 billion. However, instead of focusing on rear-view analytics, BCG’s assessment uses artificial intelligence and natural language processing to review a company’s vitality, or their “capacity to explore new options, renew strategy, and grow sustainably,” according to a publication by Reeves and his collaborators. Since only 7% of companies that are market-share leaders are also profit leaders, the analysis can provide companies with a new metric to judge progress. (Watch Reeves’ TED Talk)

The boy who harnessed the wind — and the silver screen. William Kamkwamba’s story will soon reach the big screen via the upcoming film The Boy Who Harnessed the Wind. With no formal education, Kamkwamba built a windmill that powered his home in Malawi. He snuck into a library, deciphered physics on his own, and trusted his intuition that he had an idea he could execute. His determination ultimately saved his family from a deadly famine. (Watch Kamkwamba’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Planet DebianReproducible builds folks: Reproducible Builds: Weekly report #137

Here's what happened in the Reproducible Builds effort between Sunday December 3 and Saturday December 9 2017:

Documentation update

There was more discussion on different logos being proposed for the project.

Reproducible work in other projects

Cyril Brulebois wrote about Tails' work on reproducibility

Gabriel Scherer submitted a pull request to the OCaml compiler to honour the BUILD_PATH_PREFIX_MAP environment variable.

Packages reviewed and fixed

Patches filed upstream:

  • Bernhard M. Wiedemann:
  • Eli Schwartz:
  • Foxboron
    • gopass: - use SOURCE_DATE_EPOCH in Makefile
  • Jelle
    • PHP: - use SOURCE_DATE_EPOCH for Build Date
  • Chris Lamb:
    • pylint - file ordering, nondeterminstic data structure
    • tlsh - clarify error message (via diffoscope development)
  • Alexander "lynxis" Couzens:

Patches filed in Debian:

Patches filed in OpenSUSE:

  • Bernhard M. Wiedemann:
    • build-compare (merged) - handle .egg as .zip
    • neovim (merged) - hostname, username
    • perl (merged) - date, hostname, username
    • sendmail - date, hostname, username

Patches filed in OpenWRT:

  • Alexander "lynxis" Couzens:

Reviews of unreproducible packages

17 package reviews have been added, 31 have been updated and 43 have been removed in this week, adding to our knowledge about identified issues.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (13)
  • Andreas Beckmann (2)
  • Emilio Pozuelo Monfort (3)

reprotest development

  • Santiago Torres:
    • Use uname -m instead of arch.

trydiffoscope development

Version 66 was uploaded to unstable by Chris Lamb. It included contributions already covered by posts of the previous weeks as well as new ones from:

  • Chris Lamb:
    • Parse dpkg-parsechangelog instead of hard-coding version
    • Bump Standards-Version to 4.1.2
    • flake8 formatting

reproducible-website development

tests.reproducible-builds.org

reproducible Arch Linux:

reproducible F-Droid:

Misc.

This week's edition was written by Ximin Luo, Alexander Couzens, Holger Levsen, Chris Lamb, Bernhard M. Wiedemann and Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Krebs on SecurityFormer Botmaster, ‘Darkode’ Founder is CTO of Hacked Bitcoin Mining Firm ‘NiceHash’

On Dec. 6, 2017, approximately USD $52 million worth of Bitcoin mysteriously disappeared from the coffers of NiceHash, a Slovenian company that lets users sell their computing power to help others mine virtual currencies. As the investigation into the heist nears the end of its second week, many NiceHash users have expressed surprise to learn that the company’s chief technology officer recently served several years in prison for operating and reselling a massive botnet, and for creating and running ‘Darkode,’ until recently the world’s most bustling English-language cybercrime forum.

In December 2013, NiceHash CTO Matjaž Škorjanc was sentenced to four years, ten months in prison for creating the malware that powered the ‘Mariposa‘ botnet. Spanish for “Butterfly,” Mariposa was a potent crime machine first spotted in 2008. Very soon after, Mariposa was estimated to have infected more than 1 million hacked computers — making it one of the largest botnets ever created.

An advertisement for the ButterFly Flooder, a crimeware product based on the ButterFly Bot.

ButterFly Bot, as it was more commonly known to users, was a plug-and-play malware strain that allowed even the most novice of would-be cybercriminals to set up a global operation capable of harvesting data from thousands of infected PCs, and using the enslaved systems for crippling attacks on Web sites. The ButterFly Bot kit sold for prices ranging from $500 to $2,000.

Prior to his initial arrest in Slovenia on cybercrime charges in 2010, Škorjanc was best known to his associates as “Iserdo,” the administrator and founder of the exclusive cybercrime forum Darkode.

A message from Iserdo warning Butterfly Bot subscribers not to try to reverse his code.

On Darkode, Iserdo sold his Butterfly Bot to dozens of other members, who used it for a variety of illicit purposes, from stealing passwords and credit card numbers from infected machines to blasting spam emails and hijacking victim search results. Microsoft Windows PCs infected with the bot would then try to spread the disease over MSN Instant Messenger and peer-to-peer file sharing networks.

In July 2015, authorities in the United States and elsewhere conducted a global takedown of the Darkode crime forum, arresting several of its top members in the process. The U.S. Justice Department at the time said that out of 800 or so crime forums worldwide, Darkode represented “one of the gravest threats to the integrity of data on computers in the United States and around the world and was the most sophisticated English-speaking forum for criminal computer hackers in the world.”

Following Škorjanc’s arrest, Slovenian media reported that his mother Zdenka Škorjanc was accused of money laundering; prosecutors found that several thousand euros were sent to her bank account by her son. That case was dismissed in May of this year after prosecutors conceded she probably didn’t know how her son had obtained the money.

Matjaž Škorjanc did not respond to requests for comment. But local media reports state that he has vehemently denied any involvement in the disappearance of the NiceHash stash of Bitcoins.

In an interview with Slovenian news outlet Delo.si, the NiceHash CTO described the theft “as if his kid was kidnapped and his extremities would be cut off in front of his eyes.” A roughly-translated English version of that interview has been posted to Reddit.

According to media reports, the intruders were able to execute their heist after stealing the credentials of a user with administrator privileges at NiceHash. Less than an hour after breaking into the NiceHash servers, approximately 4,465 Bitcoins were transferred out of the company’s accounts.

NiceHash CTO Matjaž Škorjanc, as pictured on the front page of a recent edition of the Slovenian daily Delo.si

A source close to the investigation told KrebsOnSecurity that the NiceHash hackers used a virtual private network (VPN) connection with a Korean Internet address, although the source said Slovenian investigators were reluctant to say whether that meant South Korea or North Korea because they did not want to spook the perpetrators into further covering their tracks.

CNN, Bloomberg and a number of other Western media outlets reported this week that North Korean hackers have recently doubled down on efforts to steal, phish and extort Bitcoins as the price of the currency has surged in recent weeks.

“North Korean hackers targeted four different exchanges that trade bitcoin and other digital currencies in South Korea in July and August, sending malicious emails to employees, according to police,” CNN reported.

Bitcoin’s blockchain ledger system makes it easy to see when funds are moved, and NiceHash customers who lost money in the theft have been keeping a close eye on the Bitcoin payment address that received the stolen funds ever since. On Dec. 13, someone in control of that account began transferring the stolen bitcoins to other accounts, according to this transaction record.

The NiceHash theft occurred as the price of Bitcoin was skyrocketing to new highs. On January 1, 2017, a single Bitcoin was worth approximately $976. By December 6, the day of the NiceHash hack, the price had ballooned to $11,831 per Bitcoin.

Today, a single Bitcoin can be sold for more than $17,700, meaning whoever is responsible for the NiceHash hack has seen their loot increase in value by roughly $27 million in the nine days since the theft.

In a post on its homepage, NiceHash said it was in the final stages of re-launching the surrogate mining service.

“Your bitcoins were stolen and we are working with international law enforcement agencies to identify the attackers and recover the stolen funds. We understand it may take some time and we are working on a solution for all users that were affected.

“If you have any information about the attack, please email us at [email protected]. We are giving BTC rewards for the best information received. You can also join our community page about the attack on reddit.”

However, many followers of NiceHash’s Twitter account said they would not be returning to the service unless and until their stolen Bitcoins were returned.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, November 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In November, about 144 work hours were dispatched among 12 paid contributors. Their reports are available:

  • Antoine Beaupré did 8.5h (out of 13h allocated + 3.75h remaining, thus keeping 8.25h for December).
  • Ben Hutchings did 17 hours (out of 13h allocated + 4 extra hours).
  • Brian May did 10 hours.
  • Chris Lamb did 13 hours.
  • Emilio Pozuelo Monfort did 14.5 hours (out of 13 hours allocated + 15.25 hours remaining, thus keeping 13.75 hours for December).
  • Guido Günther did 14 hours (out of 11h allocated + 5.5 extra hours, thus keeping 2.5h for December).
  • Hugo Lefeuvre did 13h.
  • Lucas Kanashiro did not request any work hours, but he had 3 hours left. He did not publish any report yet.
  • Markus Koschany did 14.75 hours (out of 13 allocated + 1.75 extra hours).
  • Ola Lundqvist did 7h.
  • Raphaël Hertzog did 10 hours (out of 12h allocated, thus keeping 2 extra hours for December).
  • Roberto C. Sanchez did 32.5 hours (out of 13 hours allocated + 24.50 hours remaining, thus keeping 5 extra hours for December).
  • Thorsten Alteholz did 13 hours.

About external support partners

You might notice that there is sometimes a significant gap between the number of distributed work hours each month and the number of sponsored hours reported in the “Evolution of the situation” section. This is mainly due to some work hours that are “externalized” (but also because some sponsors pay too late). For instance, since we don’t have Xen experts among our Debian contributors, we rely on credativ to do the Xen security work for us. And when we get an invoice, we convert that to a number of hours that we drop from the available hours in the following month. And in the last months, Xen has been a significant drain on our resources: 35 work hours done in September (invoiced in early October and taken off from the November hours detailed above), 6.25 hours in October, 21.5 hours in November. We also have a similar partnership with Diego Biurrun to help us maintain libav, but here the number of hours tends to be very low.

In both cases, the work done by those paid partners is made freely available for others under the original license: credativ maintains a Xen 4.1 branch on GitHub, Diego commits his work on the release/0.8 branch in the official git repository.

Evolution of the situation

The number of sponsored hours did not change and remains at 183 hours per month. It would be nice if we could continue to find new sponsors as the amount of work seems to be slowly growing too.

The security tracker currently lists 55 packages with a known CVE and the dla-needed.txt file lists 35 (we’re a bit behind in CVE triaging, apparently).

Thanks to our sponsors

New sponsors are in bold.


Planet DebianMichal Čihař: Weblate 2.18

Weblate 2.18 has been released today. The biggest improvement is probably reviewer based workflow, but there are some other enhancements as well.

Full list of changes:

  • Extended contributor stats.
  • Improved configuration of special chars virtual keyboard.
  • Added support for DTD file format.
  • Changed keyboard shortcuts to less likely collide with browser/system ones.
  • Improved support for approved flag in Xliff files.
  • Added support for not wrapping long strings in Gettext po files.
  • Added button to copy permalink for current translation.
  • Dropped support for Django 1.10 and added support for Django 2.0.
  • Removed locking of translations while translating.
  • Added support for adding new units to monolingual translations.
  • Added support for translation workflows with dedicated reviewers.

If you are upgrading from older version, please follow our upgrading instructions.

You can find more information about Weblate on https://weblate.org, and the code is hosted on GitHub. If you are curious how it looks, you can try it out on the demo server. You can log in there with the demo account using the demo password, or register your own user. Weblate is also being used on https://hosted.weblate.org/ as the official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared, and you can influence it by expressing support for individual issues, either by commenting or by providing a bounty for them.

Filed under: Debian English SUSE Weblate

TEDExploring the boundaries of legacy at TED@Westpac

Cyndi Stivers and Adam Spencer host TED@Westpac — a day of talks and performances themed around “The Future Legacy” — in Sydney, Australia, on Monday, December 11th. (Photo: Jean-Jacques Halans / TED)

Legacy is a delightfully complex concept, and it’s one that the TED@Westpac curators took on with gusto for the daylong event held in Sydney, Australia, on Monday December 11th. Themed around the idea of “The Future Legacy,” the day was packed with 15 speakers and two performers and hosted by TED’s Cyndi Stivers and TED speaker and monster prime number aficionado Adam Spencer. Topics ranged from education to work-health balance to designer babies to the importance of smart conversations around death.

For Westpac managing director and CEO Brian Hartzer, the day was an opportunity both to think back over the bank’s own 200-year legacy — and a chance for all gathered to imagine a bold new future that might suit everyone. He welcomed talks that explored ideas and stories that may shape a more positive global future. “We are so excited to see the ripple effect of your ideas from today,” he told the collected speakers before introducing Aboriginal elder Uncle Ray Davison to offer the audience a traditional “welcome to country.”

And with that, the speakers were up and running.

“Being an entrepreneur is about creating change,” says Linda Zhang. She suggests we need to encourage the entrepreneurial mindset in high-schoolers. (Photo: Jean-Jacques Halans / TED)

Ask questions, challenge the status quo, build solutions. Who do you think of when you hear the word “entrepreneur?” Steve Jobs, Mark Zuckerberg, Elon Musk and Bill Gates might come to mind. What about a high school student? Linda Zhang might just have graduated herself but she’s been taking entrepreneurial cues from her parents, who started New Zealand’s second-largest thread company. Zhang now runs a program to pair students with industry mentors and get them to work for 48 hours on problems they actually want to solve. The results: a change in mindset that could help prepare them for a tumultuous but opportunity-filled job market. “Being an entrepreneur is about creating change,” Zhang says. “This is what high school should be about … finding things you care about, having the curiosity to learn about those things and having the drive to take that knowledge and implement it into problems you care about solving.”

Should we bribe kids to study math? In this sparky talk, Mohamad Jebara shares a favorite quote from fellow mathematician Francis Su: “We study mathematics for play, for beauty, for truth, for justice, and for love.” Only problem: kids today, he says, often don’t tend to agree, instead finding math “difficult and boring.” Jebara has a counterintuitive potential solution: he wants to bribe kids to study math. His financial incentive plan works like this: his company charges parents a monthly subscription fee; if students complete their weekly math goal then the program refunds that amount of the fee directly into the student’s bank account; if not, the company pockets the profit. Ultimately, Jebara wants kids to discover math’s intrinsic worth and beauty, but until they get there, he’s happy to pay them. And this isn’t just about his own business model. “Unless we find a way to improve student engagement with mathematics, we’ll have not only a huge skills shortage crisis, but a fickle population easily manipulated by whoever can get the most airtime,” he says.

You, cancer and the workplace. When lawyer Sarah Donnelly was diagnosed with breast cancer, she turned to her friends and family for support — but she also sought refuge in her work. “My job and my coworkers would make me feel valuable and human at times when I would have otherwise felt like a statistic,” she says. “Work gave me focus and stability when I was dealing with so many unknowns and difficult personal decisions.” But, she says, not all employers realize that work can be a sanctuary for the sick, and often — believing themselves polite and thoughtful — cast out their employees. Now, Donnelly is striving to change the experiences of individuals coping with serious illness — and the perceptions others might have of them. Together with a colleague, she created a “Working with Cancer” toolkit that provides a framework and guidance for all those professionally involved in an employee’s life, and she is traveling to different companies around Australia to implement it.

Digital strategist Will Jenkins says we need to think about what we really want from life, not just our day-to-day. (Photo: Jean-Jacques Halans / TED)

The connection between time and money. We all need more time, says digital strategist Will Jenkins, and historically we’ve developed systems and technologies to save time for ourselves and others by reducing waste and inefficiency. But there’s a problem: even after spending centuries trying to perfect time-saving techniques, it too often still doesn’t feel like we’re getting anywhere. “As individuals, we’re busier than ever,” Jenkins points out, before calling for us to look beyond specialized techniques to think about what we actually really want from life itself, not just our day-to-day. In taking a holistic approach to time, we might, he says, channel John Maynard Keynes to figure out new ways that will allow all of us “to live wisely, agreeably, and well.”

Creating a digital future for Australia’s first people. Indigenous Australian David Unaipon (1862-1967) was called his country’s Leonardo da Vinci — he was responsible for at least 19 inventions, including a tool that led to modern sheep shears. But according to Westpac business analyst Michael Mieni, we need to find better ways to encourage future Unaipons. Right now, he says, too many Aboriginals are on the far side of the digital divide, lacking access to computers and the Internet as well as basic schooling in technology. Mieni was the first indigenous IT honors student at the University of Technology Sydney, and he makes the case that tech-savvy Aboriginals are badly needed to serve as role models and teachers, as inventors of ways to record and promote their culture and as guardians of their people’s digital rights. “What if the next ground-breaking idea is already in the mind of a young Aboriginal student but will never surface because they face digital disadvantage or exclusion?” he asks. Everyone in Australia — not just the first people — gains when every citizen has the opportunity and resources to become digitally literate.

Shade Zahrai and Aric Yegudkin perform a gorgeous, sensual dance at TED@Westpac. (Photo: Jean-Jacques Halans / TED)

The beauty of a dance duet. “Partner dance embodies the coming together of two people,” Shade Zahrai‘s voice whispers to a dark auditorium as she and her partner take the TED stage. In the middle of session one, the pair perform a gorgeous and sensual modern dance, complete with Zahrai’s recorded voiceover explaining the coordination and unity that partner dance requires of its participants.

The power of inclusiveness. Inclusion strategist Hayley Yeates shares how her identity as a proud Australian was dimmed by prejudice shown towards her by those who saw her as Asian. When in school, she says, fellow students didn’t want to associate with her in classrooms, while she didn’t add a picture to her LinkedIn profile for fear her race would deem her less worthy of a job. But Yeates focuses on more than the personal stories of those who’ve been dubbed an outsider, and makes the case that diversity leads to innovation and greater profitability for companies. She calls for us all to sponsor safe spaces where authentic, unrestrained conversations about the barriers faced by cultural minorities can be held freely. And she invites leaders to think about creating environments where people’s whole selves can work, and where an organization can thrive because of, not in spite of, its employees’ differences.

Olivia Tyler tracks the complexity of global supply chains, looking to develop smart technology that can allow both corporations and consumers to understand buying decisions. (Photo: Jean-Jacques Halans / TED)

How to do yourself out of a job. As a sustainability practitioner, Olivia Tyler is trying hard to develop systems that will put her out of work. Why? For the good of us all, of course. And how? By encouraging all of us to ask questions about where what we buy, wear or eat comes from. Tyler tracks the fiendish complexity of today’s global supply chains, and she is attempting to develop smart technology that can allow both corporations and consumers to have the visibility they need to understand the buying decisions they make. When something as ostensibly simple as a baked good can include hundreds of data points about the ingredients it contains — a cake can be a minefield, she jokes — it’s time to open up the cupboard and use tech such as the blockchain to crack open the sustainability code. “We can adopt new and exciting ways to change the game on how we conduct ourselves as corporates and consumers across our increasingly smaller world,” she promises.

Can machine intelligence liberate human purpose? Much has been made of the threat robots pose to the very existence of certain jobs, with some estimates reckoning that as much as 80% of low-skill jobs have already been automated. Self-styled “datapreneur” Tomer Garzberg shares how he researched 11,000 of the world’s most widely held jobs to create the “Short-Term Automation Susceptibility Index” to identify the types of role that might be up for automation next. Perhaps unsurprisingly, highly specialized roles held by those such as neurosurgeons, chemical engineers and, well, acrobats face the least risk of being automated, while even senior blue collar positions or standard white collar roles such as pharmacists, accountants and health inspectors can expect a 25% shrinkage over the next 10 years. But Garzberg believes that we can — must — embrace this cybernated future. “Prepare your family to be okay with change, as uncomfortable as it may be,” he says. “We’ll likely be switching careers far more frequently in the near future.”

Everything’s gonna be alright. After a quick break and a breather, Westpac’s own Rowan Fitzpatrick and his band Heart of Mind played in session two with a sweet, uplifting rock ballad about better days and leaning on one another with love and hope. “Keep looking forward / Don’t lose your grip / One step at a time,” the trained jazz singer croons.

Alastair O’Neill shares the ethical wrangling his family undertook as they figured out how they felt about potentially eradicating a debilitating disease with gene editing. (Photo: Jean-Jacques Halans / TED)

You have the ability to end a hereditary disease. Do you take it? “Recently I had to sign a form promising that I wouldn’t have sex with my wife,” says a deadpan Alastair O’Neill as he kicks off the session’s talks. “Why? Because we decided to have a baby.” He waits a beat. “Let me rewind.” As the audience settles in for a rollercoaster talk of emotional highs and lows, he explains his family’s journey through the ethical minefield of embryonic genetic testing, also known as preimplantation genetic diagnosis or PGD. It was a journey prompted by a hereditary condition in his wife’s family — his father-in-law Phil had inherited the gene for retinal dystrophy and was declared legally blind at 30 years old. The odds that his own young family would have a baby either carrying or inheriting the disease were as high as one in two. In this searingly personal talk, O’Neill shares the ups and downs of both the testing process and the ethical wrangling that their entire family undertook as they tried to figure out how they felt about potentially eradicating a debilitating disease. Spoiler alert: O’Neill is in favor. “PGD gives couples the ability to choose to end a hereditary disease,” he says. “I think we should give every potential parent that choice.”

A game developer’s solution to the housing crisis. When Sarah Murray wanted to buy her first house, she discovered that home prices far exceeded her budget — and building a new house would be prohibitively costly and time-consuming. Frustrated by her lack of self-determination, Murray decided to create a computer game to give control back to buyers. The program allows you to design all aspects of your future home (even down to price and environmental impact) and then delivers the final product directly to you in modular components that can be assembled onsite. Murray’s innovative idea both cuts costs and makes more sustainable dwellings; the first physical houses should be ready by 2018. But the digital housing developer isn’t done yet. Now she is working on adapting the program and investing in construction techniques such as 3D printing so that when a player designs and builds a home, they can also contribute to a home for someone in need. As she says, “I want to put every person who wants one in a home of their own design.”

Tough guys need mental-health help, too. In 2013 in Castlemaine, Victoria, painter and decorator Jeremy Forbes was shaken when a friend and fellow tradie (or tradesman) committed suicide. But what truly shocked him were the murmurs he overheard at the man’s wake — people asking, “Who’s next?” Tradies deal with the same struggles faced by many — depression, alcohol and drug dependency, gambling, financial hardship — but they often don’t feel comfortable opening up about them. “You’re expected to be silent in the face of adversity,” says Forbes. So he and artist Catherine Pilgrim founded HALT (Hope Assistance Local Tradies), a mental health awareness organization for tradie men and women, apprentices, builders, farmers, and their partners. HALT meets people where they are, hosting gatherings at hardware stores, football and sports clubs, and vocational training facilities. There, people learn about the warning signs of depression and anxiety and the available services. According to Forbes, who received a Westpac Social Change Fellowship in 2016, HALT has now held around 150 events, and he describes the process as both empowering and cathartic. We need to know how to respond if people are not OK, he says.

The conversation about death you need to have. “Most of us don’t want to acknowledge death, we don’t want to plan for it, and we don’t want to discuss it with the most important people in our lives,” says mortal realist and portfolio manager Michelle Knox. She’s got stats to prove it: 45% of people in Australia over the age of 18 don’t have a legal will. But dying without one is complicated and expensive for those left behind, and that’s just one reason Knox believes it’s time we take ownership of our own deaths. Others include that talking about death before it happens can help us experience a good death, reduce stress on our loved ones, and also help us support others who are grieving. Knox experienced firsthand the power of talking about death ahead of time when her father passed away earlier this year. “I discovered this year it’s actually a privilege to help someone exit this life and although my heart is heavy with loss and sadness, it is not heavy with regret,” she says. “I knew what Dad wanted and I feel at peace knowing I could support his wishes.”

“What would water do?” asks Raymond Tang. “This simple and powerful question has changed my life for the better.” (Photo: Jean-Jacques Halans / TED)

The philosophy of water. How do we find fulfillment in a world that’s constantly changing? IT strategy manager and “agent of flow” Raymond Tang struggled mightily with this question — until he came across the ancient Chinese philosophy of the Tao Te Ching. In it, he found a passage comparing goodness to water and, inspired, he’s now applying the concepts to his everyday life. In this charming talk, he shares three lessons he’s learned so far from the “philosophy of water.” First, humility: in the same way water helps plants and animals grow without seeking reward, Tang finds fulfillment and meaning in helping others overcome their challenges. Next, harmony: just as water is able to navigate its way around obstacles without force or conflict, Tang believes we can find a greater sense of fulfillment in our endeavors by shifting our focus away from achieving success and towards achieving harmony. Finally, openness: water can be a liquid, solid or gas, and it adapts to the shape in which it’s contained. Tang finds in his professional life that the teams most open to learning (and un-learning) do the best work. “What would water do?” Tang asks. “This simple and powerful question has changed my life for the better.”

With great data comes great responsibility. Remember the hacks on companies such as Equifax and JP Morgan? Well, you ain’t seen nothing yet. As computer technology becomes more powerful (think quantum) the systems we use to protect our wells of data become ever more vulnerable. However, there is still time to plan countermeasures against the impending data apocalypse, reassures encryption expert Vikram Sharma. He and his team are designing security devices and programs that also rely on quantum physics to power a defense against the most sophisticated attacks. “The race is on to build systems that will remain secure in the face of rapid technological advance,” he says.

Rach Ranton brings the leadership lessons she learned in the military to corporations, suggesting that leaders succeed when everyone knows the final goal they’re working toward. (Photo: Jean-Jacques Halans / TED)

Leadership lessons from the front line. How does a leader give their people a sense of purpose and direction? Rach Ranton spent more than a decade in the Australian Army, including tours of Afghanistan and East Timor. Now, she brings the lessons she learned in the military to companies, blending organizational psychology aimed at corporations with the planning and best practices of a well-oiled military unit. Even in a situation of extreme uncertainty, she says, military units function best if everyone understands the leader’s objective exactly as well as they understand their own role, not just their individual part to play but also the whole. She suggests leaders spend time thinking about how to communicate “commander’s intent,” the final goal that everyone is working toward. As a test, she asks: If you as a leader were absent from the scene, would your team still know what to do … and why they were doing it?


CryptogramTracking People Without GPS

Interesting research:

The trick in accurately tracking a person with this method is finding out what kind of activity they're performing. Whether they're walking, driving a car, or riding in a train or airplane, it's pretty easy to figure out when you know what you're looking for.

The sensors can determine how fast a person is traveling and what kind of movements they make. Moving at a slow pace in one direction indicates walking. Going a little bit quicker but turning at 90-degree angles means driving. Faster yet, we're in train or airplane territory. Those are easy to figure out based on speed and air pressure.

After the app determines what you're doing, it uses the information it collects from the sensors. The accelerometer relays your speed, the magnetometer tells your relation to true north, and the barometer offers up the air pressure around you and compares it to publicly available information. It checks in with The Weather Channel to compare air pressure data from the barometer to determine how far above sea level you are. Google Maps and data offered by the US Geological Survey Maps provide incredibly detailed elevation readings.

Once it has gathered all of this information and determined the mode of transportation you're currently taking, it can then begin to narrow down where you are. For flights, four algorithms begin to estimate the target's location and narrow down the possibilities until the error rate hits zero.

If you're driving, it can be even easier. The app knows the time zone you're in based on the information your phone has provided to it. It then accesses information from your barometer and magnetometer and compares it to information from publicly available maps and weather reports. After that, it keeps track of the turns you make. With each turn, the possible locations whittle down until it pinpoints exactly where you are.

To demonstrate how accurate it is, researchers did a test run in Philadelphia. It only took 12 turns before the app knew exactly where the car was.
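The classification step in the excerpt above is simple enough to sketch in code. The following Python snippet is a toy illustration only, not the researchers' implementation; the thresholds, feature names and function are assumptions made purely for the sake of the example.

# Toy illustration of the activity-classification idea described above.
# Thresholds and feature names are illustrative assumptions, not taken
# from the research paper.
def classify_activity(speed_mps, heading_changes_deg, pressure_drop_hpa):
    """Guess a transport mode from coarse sensor-derived features."""
    if speed_mps < 2:
        return "walking"    # slow movement in one direction
    # Repeated roughly 90-degree turns at moderate speed suggest driving.
    if speed_mps < 40 and any(80 <= abs(t) <= 100 for t in heading_changes_deg):
        return "driving"
    # A large barometric pressure drop implies high altitude: airplane.
    if pressure_drop_hpa > 100:
        return "airplane"
    return "train"

print(classify_activity(1.2, [5, -3], 0))      # walking
print(classify_activity(15, [90, -88], 0))     # driving
print(classify_activity(230, [2], 180))        # airplane

The real system then feeds the classified mode into the location-estimation algorithms described above; the point of the sketch is only to show how little sensor data is needed to make that first guess.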

This is a good example of how powerful synthesizing information from disparate data sources can be. We spend too much time worried about individual data collection systems, and not enough about analysis techniques of those systems.

Research paper.

Worse Than FailureError'd: These are not the Security Questions You're Looking for

"If it didn't involve setting up my own access, I might've tried to find what would happen if I dared defy their labeling," Jameson T. wrote.

 

"I think that someone changed the last sentence in a hurry," writes George.

 

"Now I may not be able to read, or let alone type in Italian, but I bet if given this particular one, I could feel my way through it," Anatoly writes.

 

"Wow! The best rates on default text, guaranteed!" writes Peter G.

 

Thomas R. wrote, "Doing Cyber Monday properly takes some serious skills!"

 

"I'm unsure what's going on here. Is the service status page broken or is it telling me that the service is broken?" writes Neil H.

 


Planet DebianDimitri John Ledkov: What does FCC Net Neutrality repeal mean to you?

Sorry, the web page you have requested is not available through your internet connection.

We have received an order from the Courts requiring us to prevent access to this site in order to help protect against Lex Julia Majestatis infringement.

If you are a home broadband customer, for more information on why certain web pages are blocked, please click here.
If you are a business customer, or are trying to view this page through your company's internet connection, please click here.

Planet DebianUrvika Gola: KubeCon + CloudNativeCon, Austin

KubeCon + CloudNativeCon, North America took place in Austin, Texas from 6th to 8th December. But before that, I stumbled upon this great opportunity from the Linux Foundation which would make it possible for me to attend and expand my knowledge about cloud computing, containers and all things cloud native!


I would like to thank the diversity committee members – @michellenoorali, @Kris__Nova, @jessfraz, @evonbuelow – and everyone (+Wendy West!!) behind this for going the extra mile to make it possible for me and others to benefit from this great diversity and inclusion initiative. It gave me an opportunity to learn from experts and experience the power of Kubernetes.

After travelling 23+ hours in flight, I was able to attend the pre-conference sessions on 5th December. The day concluded with the amazing EmpowerHer evening event, where I met an amazing bunch of people! We had some great discussions and food. Thanks!

With diversity scholarship recipients at the EmpowerHer event (credits – Radhika Nair)

On 6th December, I was super excited to attend Day 1 of the conference. When I reached the venue, the Austin Convention Center, there was a huge hall with *4100* people talking about all things cloud native!

It started with an informational keynote by Dan Kohn, the Executive Director of the Cloud Native Computing Foundation. He pointed out how CNCF has grown over the year, from 4 projects in 2016 to 14 projects in 2017, and from 1,400 attendees in March 2017 to 4,100 attendees in December 2017. It was really thrilling to learn about the growth and power of Kubernetes, which really inspired me to contribute towards this project.

Dan Kohn’s keynote talk at KubeCon + CloudNativeCon

It was hard to choose which sessions to attend because there was just so much going on!! I mostly attended sessions at beginner & intermediate level, and missed out on the ones which required technical expertise I don’t possess, yet! Curious to know more about what other tech companies are working on, I made sure I visited all the sponsor booths and learned what technology they are building. Apart from that, they had cool goodies and stickers; conferences are the place where people get labelled as sticker-person or non-sticker-person! 😀

There was a diversity luncheon on 7th December, where I had really interesting conversations with people about their challenges and stories related to technology. I made some great friends at the table. Thank you for voting my story as the best story of getting into open source, and thank you Samsung for sponsoring this event.

KubeCon + CloudNativeCon was a very informative and huge event put up by the Cloud Native Computing Foundation. It was interesting to learn how cloud native technologies have expanded along with the growth of the community! Thank you to the Linux Foundation for this experience! 🙂

Keeping Cloud Native Weird!
Open bar all-attendee party! (Where I experienced my first snowfall)

 

Goodbye Austin!

Planet DebianSean Whitton: A second X server on vt8, running a different Debian suite

Two tensions

  1. Sometimes the contents of the Debian archive aren’t yet sufficient for working in a software ecosystem in which I’d like to work, and I want to use that ecosystem’s package manager which downloads the world into $HOME – e.g. stack, pip, lein and friends.

    But I can’t use such a package manager when $HOME contains my PGP subkeys and other valuable files, and my X session includes Firefox with lots of saved passwords, etc.

  2. I want to run Debian stable on my laptop for purposes of my day job – if I can’t open Emacs on a Monday morning, it’s going to be a tough week.

    But I also want to do Debian development on my laptop, and most of that’s a pain without either Debian testing or Debian unstable.

The solution

Have Propellor provision and boot a systemd-nspawn(1) container running Debian unstable, and start a window manager in that container with $DISPLAY pointing at an X server in vt8. Wooo!

In more detail:

  1. Laptop runs Debian stable. Main account is spwhitton.
  2. Achieve isolation from /home/spwhitton by creating a new user account, spw, that can’t read /home/spwhitton. Also, in X startup scripts for spwhitton, run xhost -local:.
  3. debootstrap a Debian unstable chroot into /var/lib/container/develacc.
  4. Install useful desktop things like task-british-desktop into /var/lib/container/develacc.
  5. Boot /var/lib/container/develacc as a systemd-nspawn container called develacc.
  6. dm-tool switch-to-greeter to start a new X server on vt8. Log in as spw.
  7. Propellor installs a script enter-develacc which uses nsenter(1) to run commands in the develacc container. Create a further script enter-develacc-i3 which does

     /usr/local/bin/enter-develacc sh -c "cd ~spw; DISPLAY=$1 su spw -c i3"
    
  8. Finally, /home/spw/.xsession starts i3 in the chroot pointed at vt8’s X server:

     sudo /usr/local/bin/enter-develacc-i3 $DISPLAY
    
  9. Phew. May now pip install foo. And Ctrl-Alt-F7 to go back to my secure session. That session can read and write /home/spw, so I can dgit push etc.

The Propellor configuration

develaccProvisioned :: Property (HasInfo + DebianLike)
develaccProvisioned = propertyList "develacc provisioned" $ props
    & User.accountFor (User "spw")
    & Dotfiles.installedFor (User "spw")
    & User.hasDesktopGroups (User "spw")
    & withMyAcc "Sean has 'spw' group"
        (\u -> tightenTargets $ User.hasGroup u (Group "spw"))
    & withMyHome "Sean's homedir chmodded"
        (\h -> tightenTargets $ File.mode h 0O0750)
    & "/home/spw" `File.mode` 0O0770

    & "/etc/sudoers.d/spw" `File.hasContent`
        ["spw ALL=(root) NOPASSWD: /usr/local/bin/enter-develacc-i3"]
    & "/usr/local/bin/enter-develacc-i3" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "echo \"$1\" | grep -q -E \"^:[0-9.]+$\" || exit 1"
        , ""
        , "/usr/local/bin/enter-develacc sh -c \\"
        , "\t\"cd ~spw; DISPLAY=$1 su spw -c i3\""
        ]
    & "/usr/local/bin/enter-develacc-i3" `File.mode` 0O0755

    -- we have to start xss-lock outside of the container in order that it
    -- can interface with host logind
    & "/home/spw/.xsession" `File.hasContent`
        [ "if [ -e \"$HOME/local/wallpaper.png\" ]; then"
        , "    xss-lock -- i3lock -i $HOME/local/wallpaper.png &"
        , "else"
        , "    xss-lock -- i3lock -c 3f3f3f -n &"
        , "fi"
        , "sudo /usr/local/bin/enter-develacc-i3 $DISPLAY"
        ]

    & Systemd.nspawned develAccChroot
    & "/etc/network/if-up.d/develacc-resolvconf" `File.hasContent`
        [ "#!/bin/sh"
        , ""
        , "cp -fL /etc/resolv.conf \\"
        ,"\t/var/lib/container/develacc/etc/resolv.conf"
        ]
    & "/etc/network/if-up.d/develacc-resolvconf" `File.mode` 0O0755
  where
    develAccChroot = Systemd.debContainer "develacc" $ props
        -- Prevent propellor passing --bind=/etc/resolv.conf which
        -- - won't work when system first boots as WLAN won't be up yet,
        --   so /etc/resolv.conf is a dangling symlink
        -- - doesn't keep /etc/resolv.conf up-to-date as I move between
        --   wireless networks
        ! Systemd.resolvConfed

        & osDebian Unstable X86_64
        & Apt.stdSourcesList
        & Apt.suiteAvailablePinned Experimental 1
        -- use host apt cacher (we assume I have that on any system with
        -- develaccProvisioned)
        & Apt.proxy "http://localhost:3142"

        & Apt.installed [ "i3"
                , "task-xfce-desktop"
                , "task-british-desktop"
                , "xss-lock"
                , "emacs"
                , "caffeine"
                , "redshift-gtk"
                , "gnome-settings-daemon"
                ]

        & Systemd.bind "/home/spw"
        -- note that this won't create /home/spw because that is
        -- bind-mounted, which is what we want
        & User.accountFor (User "spw")
        -- ensure that spw inside the container can read/write ~spw
        & scriptProperty
            [ "usermod -u $(stat --printf=\"%u\" /home/spw) spw"
            , "groupmod -g $(stat --printf=\"%g\" /home/spw) spw"
            ] `assume` NoChange

Comments

I first tried using a traditional chroot. I bound lots of /dev into the chroot and then tried to start lightdm on vt8. This way, the whole X server would be within the chroot; this is in a sense more straightforward and there is not the overhead of booting the container. But lightdm refused to start.

It might have been possible to work around this, but after reading a number of reasons why chroots are less good under systemd as compared with sysvinit, I thought I’d try systemd-nspawn, which I’ve used before and rather like in general. I couldn’t get lightdm to start inside that, either, because systemd-nspawn makes it difficult to mount enough of /dev for X servers to be started. At that point I realised that I could start only the window manager inside the container, with the X server started from the host’s lightdm, and went from there.

The security isn’t that good. You shouldn’t be running anything actually untrusted, just stuff that’s semi-trusted.

  • chmod 750 /home/spwhitton, xhost -local: and the argument validation in enter-develacc-i3 are pretty much the extent of the security here. The containerisation is to get Debian sid on a Debian stable machine, not for isolation

  • lightdm still runs X servers as root even though it’s been possible to run them as non-root in Debian for a few years now (there’s a wishlist bug against lightdm)

I now have a total of six installations of Debian on my laptop’s hard drive … four traditional chroots, one systemd-nspawn container and of course the host OS. But this is easy to manage with propellor!

Bugs

Screen locking is weird because logind sessions aren’t shared into the container. I have to run xss-lock in /home/spw/.xsession before entering the container, and the window manager running in the container cannot have a keybinding to lock the screen (as it does in my secure session). To lock the spw X server, I have to shut my laptop lid, or run loginctl lock-sessions from my secure session, which requires entering the root password.

Planet Linux AustraliaOpenSTEM: Celebration Time!

Here at OpenSTEM we have a saying “we have a resource on that” and we have yet to be caught out on that one! It is a festive time of year and if you’re looking for resources reflecting that theme, then here are some suggestions: Celebrations in Australia – a resource covering the occasions we […]

,

CryptogramSecurity Planner

Security Planner is a custom security advice tool from Citizen Lab. Answer a few questions, and it gives you a few simple things you can do to improve your security. It's not meant to be comprehensive, but instead to give people things they can actually do to immediately improve their security. I don't see it replacing any of the good security guides out there, but instead augmenting them.

The advice is peer reviewed, and the team behind Security Planner is committed to keeping it up to date.

Note: I am an advisor to this project.

Google AdsenseThank you for being part of the web

At AdSense we are proud to be part of your journey. This video is our big thanks to you for everything we have done together in 2017!


Posted by: your AdSense Team

Planet DebianRussell Coker: Huawei Mate9

Warranty Etc

I recently got a Huawei Mate 9 phone. My previous phone was a Nexus 6P that died shortly before its one-year warranty ran out. As there have apparently been many Nexus 6P phones dying, there are no stocks of replacements, so Kogan (the company I bought the phone from) offered me a choice of 4 phones in the same price range as a replacement.

Previously I had chosen to avoid the extended warranty offerings based on the idea that after more than a year the phone won’t be worth much and therefore getting it replaced under warranty isn’t as much of a benefit. But now that getting a phone replaced with a newer and more powerful model seems a likely outcome, there are clear benefits in a longer warranty. I chose not to pay for an “extended warranty” on my Nexus 6P because getting a new Nexus 6P now isn’t such a desirable outcome, but when getting a new Mate 9 is a possibility it seems more of a benefit to get the “extended warranty”. OTOH Kogan wasn’t offering more than 2 years of “warranty” when I recently bought a phone for a relative, so maybe they lost a lot of money on replacements for the Nexus 6P.

Comparison

I chose the Mate 9 primarily because it has a large screen. Its 5.9″ display is only slightly larger than the 5.7″ displays in the Nexus 6P and the Samsung Galaxy Note 3 (my previous phone). But it is large enough to force me to change my phone use habits.

I previously wrote about matching phone size to the user’s hand size [1]. When writing that I had the theory that a Note 2 might be too large for me to use one-handed. But when I owned those phones I found that the Note 2 and Note 3 were both quite usable in one-handed mode. But the Mate 9 is just too big for that. To deal with this I now use the top corners of my phone screen for icons that I don’t tend to use one-handed, such as Facebook. I chose this phone knowing that this would be an issue because I’ve been spending more time reading web pages on my phone and I need to see more text on screen.

Adjusting my phone usage to the unusually large screen hasn’t been a problem for me. But I expect that many people will find this phone too large. I don’t think there are many people who buy jeans to fit a large phone in the pocket [2].

A widely touted feature of the Mate 9 is the Leica lens which apparently gives it really good quality photos. I haven’t noticed problems with my photos on my previous two phones and it seems likely that phone cameras have in most situations exceeded my requirements for photos (I’m not a very demanding user). One thing that I miss is the slow-motion video that the Nexus 6P supports. I guess I’ll have to make sure my wife is around when I need to make slow motion video.

My wife’s Nexus 6P is well out of warranty. Her phone was the original Nexus 6P I had. When her previous phone died I had a problem with my phone that needed a factory reset. As it’s easier to duplicate the configuration to a new phone than to restore it after a factory reset (as an aside, I believe Apple does this better), I copied my configuration to the new phone and then wiped the old one for my wife to use.

One noteworthy but mostly insignificant feature of the Mate 9 is that it comes with a phone case. The case is hard plastic and cracked when I unsuccessfully tried to remove it, so it seems to effectively be a single-use item. But it is good to have that in the box so that you don’t have to use the phone without a case on the first day; this is something almost every other phone manufacturer misses. Admittedly there is usually the option of ordering a case at the same time as a phone, and the bundled case isn’t very good anyway.

I regard my Mate 9 as fairly unattractive. Maybe if I had a choice of color I would have been happier, but it still wouldn’t have looked like EVE from Wall-E (unlike the Nexus 6P).

The Mate 9 has a resolution of 1920*1080, while the Nexus 6P (and many other modern phones) has a resolution of 2560*1440. I don’t think that’s a big deal; the pixels are small enough that I can’t see them. I don’t really need my phone to have the same resolution as the 27″ monitor on my desktop.

The Mate 9 has 4GB of RAM, and apps seem significantly less likely to be killed than on the Nexus 6P with 3GB. I can now switch between memory-hungry apps like Pokemon Go and Facebook without having one of them killed by the OS.

Security

The OS support from Huawei isn’t nearly as good as a Nexus device. Mine is running Android 7.0 and has a security patch level of the 5th of June 2017. My wife’s Nexus 6P today got an update from Android 8.0 to 8.1 which I believe has the fixes for KRACK and Blueborne among others.

Kogan is currently selling the Pixel XL with 128G of storage for $829, if I was buying a phone now that’s probably what I would buy. It’s a pity that none of the companies that have manufactured Nexus devices seem to have learned how to support devices sold under their own name as well.

Conclusion

Generally this is a decent phone. As a replacement for a failed Nexus 6P it’s pretty good. But at this time I tend to recommend not buying it as the first generation of Pixel phones are now cheap enough to compete. If the Pixel XL is out of your price range then instead of saving $130 for a less secure phone it would be better to save $400 and choose one of the many cheaper phones on offer.

Remember when Linux users used to mock Windows for poor security? Now it seems that most Android devices are facing the security problems that Windows used to face and the iPhone and Pixel are going to take the role of the secure phone.

Worse Than FailureRepresentative Line: An Array of WHY


Reader Jeremy sends us this baffling JavaScript: "Nobody on the team knows how it came to be. We think all 'they' wanted was a sequence of numbers starting at 1, but you wouldn't really know that from the code."


var numbers = new Array(maxNumber)
    .join()
    .split(',')
    .map(function(){return ++arguments[1]});

The end result: an array of integers starting at 1 and going up to maxNumber. This is probably the most head-scratchingest way to get that result ever devised.


Planet DebianDirk Eddelbuettel: #13: (Much) Faster Package (Re-)Installation via Binaries

Welcome to the thirteenth post in the ridiculously rapid R recommendation series, or R4 for short. A few days ago we riffed on faster installation thanks to ccache. Today we show another way to get equally drastic gains for some (if not most) packages.

In a nutshell, there are two ways to get your R packages off CRAN. Either you install as a binary, or you use source. Most people do not think too much about this as on Windows, binary is the default. So why wouldn't one? Precisely. (Unless you are on Windows, and you develop, or debug, or test, or ... and need source. Another story.) On other operating systems, however, source is the rule, and binary is often unavailable.

Or is it? Exactly how to find out what is available will be left for another post as we do have a tool just for that. But today, just hear me out when I say that binary is often an option even when source is the default. And it matters. See below.

As a (mostly-to-always) Linux user, I sometimes whistle between my teeth that we "lost all those battles" (i.e. for the desktop(s) or laptop(s)) but "won the war". That topic merits a longer post I hope to write one day, and I won't do it justice today, but my main gist is that everybody (and here I mean mostly developers/power users) now at least also runs on Linux. And by that I mean that we all test our code in Linux environments such as e.g. Travis CI, and that many of us run deployments on cloud instances (AWS, GCE, Azure, ...) which are predominantly based on Linux. Or on local clusters. Or, if one may dream, the top500. And on and on. And frequently these are Ubuntu machines.

So here is an Ubuntu trick: Install from binary, and save loads of time. As an illustration, consider the chart below. It carries over the logic from the 'cached vs non-cached' compilation post and contrasts two ways of installing: from source, or as a binary. I use pristine and empty Docker containers as the base, and rely of course on the official r-base image which is supplied by Carl Boettiger and yours truly as part of our Rocker Project (and for which we have a forthcoming R Journal piece I might mention). So for example the timings for the ggplot2 installation were obtained via

time docker run --rm -ti r-base  /bin/bash -c 'install.r ggplot2'

and

time docker run --rm -ti r-base  /bin/bash -c 'apt-get update && apt-get install -y r-cran-ggplot2'

Here docker run --rm -ti just means to launch Docker, in 'remove leftovers at end' mode, use terminal and interactive mode and invoke a shell. The shell command then is, respectively, to install a CRAN package using install.r from my littler package, or to install the binary via apt-get after updating the apt indices (as the Docker container may have been built a few days or more ago).

Let's not focus on Docker here---it is just a convenient means to an end of efficiently measuring via a simple (wall-clock counting) time invocation. The key really is that install.r is just a wrapper to install.packages() meaning source installation on Linux (as used inside the Docker container). And apt-get install ... is how one gets a binary. Again, I will try to post another piece to determine how one finds if a suitable binary for a CRAN package exists. For now, just allow me to proceed.

So what do we see then? Well, have a look:

A few things stick out. RQuantLib really is a monster. And dplyr is also fairly heavy---both rely on Rcpp, BH and lots of templating. At the other end, data.table is still a marvel. No external dependencies, and just plain C code make the source installation essentially the same speed as the binary installation. Amazing. But I digress.

We should add that one of the source installations also required installing additional libraries: QuantLib is needed along with Boost for RQuantLib. Similarly for another package (not shown) which needed curl and libcurl.

So what is the upshot? If you can, consider binaries. I will try to write another post on how I do that, e.g. for Travis CI where all my tests use binaries. (Yes, I know. This mattered more in the past when they did not cache. It still matters today as you a) do not need to fill the cache in the first place and b) do not need to worry about details concerning compilation from source which still throws enough people off. But yes, you can of course survive as is.)

The same approach is equally valid on AWS and related instances: I answered many StackOverflow questions where folks were failing to compile "large-enough" pieces from source on minimal installations with minimal RAM, running out of resources and failing with bizarre errors. In short: Don't. Consider binaries. It saves time and trouble.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianDirk Eddelbuettel: RVowpalWabbit 0.0.10

A boring little RVowpalWabbit package update to version 0.0.10 came in response to another CRAN request: We were switching directories to run tests (or examples), which is now discouraged, so we no longer do this as it turns out that we can of course refer to the files directly as well. Much cleaner.

No new code or features were added.

We should mention once more that there is parallel work ongoing in a higher-level package interfacing the vw binary -- rvw -- as well as a plan to redo this package via the external libraries. If that sounds interesting to you, please get in touch.

More information is on the RVowpalWabbit page. Issues and bugreports should go to the GitHub issue tracker.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

,

Planet DebianSteinar H. Gunderson: Compute shaders

Movit, my GPU-accelerated video filter library, is getting compute shaders. But the experience really makes me happy that I chose to base it on fragment shaders originally and not compute! (Ie., many would claim CUDA would be the natural choice, even if it's single-vendor.) The deinterlace filter is significantly faster (10–70%, depending a bit on various factors) on my Intel card, so I'm hoping the resample filter is also going to get some win, but at this point, I'm not actually convinced it's going to be much faster… and the complexity of managing local memory effectively is sky-high. And then there's the fun of chaining everything together.

Hooray for already having an extensive battery of tests, at least!

Cory DoctorowNet Neutrality is only complicated because monopolists are paying to introduce doubt


My op-ed in New Internationalist, ‘Don’t break the 21st century nervous system’, seeks to cut through the needless complexity in the Net Neutrality debate, which is as clear-cut as climate change or the link between smoking and cancer — and, like those subjects, the complexity is only there because someone paid to introduce it.


When you want to access my web page, you ask your internet service provider to send some data to my ISP, who passes it on to my server, which passes some data back to the other ISP, who sends it to your ISP, who sends it to you.

That’s a neutral internet: ISPs await requests from their customers, then do their best to fulfill them.

In a discriminatory network, your ISP forwards your requests to mine, then decides whether to give the data I send in reply to you, or to slow it down.

If they slow it down, they can ask me for a payment to get into the ‘fast lane’, where ‘premium traffic’ goes. There’s no fair rationale for this: you’re not subscribing to the internet to get the bits that maximally enrich your ISP, you’re subscribing to get the bits you want.

An ISP who charges extra to get you the bits you ask for is like a cab driver who threatens to circle the block twice before delivering passengers to John Lewis because John Lewis hasn’t paid for ‘premium service’. John Lewis isn’t the passenger, you are, and you’re paying the cab to take you to your destination, not a destination that puts an extra pound in the driver’s pocket.

But there are a lot of taxi options, from black cabs to minicabs to Uber. This isn’t the case when it comes to the internet. For fairly obvious economic and logistical reasons, cities prefer a minimum of networks running under their main roads and into every building: doubling or tripling up on wires is wasteful and a source of long-term inconvenience, as someone’s wires will always want servicing. So cities generally grant network monopolies (historically, two monopolies: one for ‘telephone’ and one for ‘cable TV’).



‘Don’t break the 21st century nervous system’ [Cory Doctorow/The New Internationalist]

Planet DebianBernd Zeimetz: Collecting statistics from TP-Link HS110 SmartPlugs using collectd

Running a 3D printer alone at home is not necessarily the best idea - so I was looking for a way to force it off remotely. As an OctoPrint user I stumbled upon a plugin to control TP-Link SmartPlugs, so I decided to give them a try. What I found especially nice about the HS110 model was that it is possible to monitor power usage, current and voltage. Of course I wanted to have a long-term graph of it. The result is a small collectd plugin, written in Python. It is available on GitHub: https://github.com/bzed/collectd-tplink_hs110. Enjoy!
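For a rough idea of what such a plugin involves, here is a minimal sketch of a collectd Python read callback for an HS110. This is not the code from the repository above; it assumes the plug's usual JSON-over-TCP protocol on port 9999 with the XOR "autokey" obfuscation, the IP address is a placeholder, and the emeter field names differ between hardware revisions.

# Minimal sketch of a collectd Python read plugin for a TP-Link HS110.
# Not the code from the repository above; the plug's IP address is a
# placeholder and the emeter field names vary between hardware revisions.
import json
import socket
import struct

import collectd

PLUG_ADDRESS = "192.168.1.50"          # placeholder, set to your plug's IP
EMETER_CMD = '{"emeter":{"get_realtime":{}}}'

def _obfuscate(plaintext):
    # TP-Link "autokey" XOR scheme, key starts at 171; 4-byte length prefix.
    key, out = 171, bytearray()
    for byte in plaintext.encode():
        key ^= byte
        out.append(key)
    return struct.pack(">I", len(plaintext)) + bytes(out)

def _deobfuscate(payload):
    key, out = 171, bytearray()
    for byte in payload:
        out.append(key ^ byte)
        key = byte
    return out.decode()

def read_hs110():
    with socket.create_connection((PLUG_ADDRESS, 9999), timeout=5) as sock:
        sock.sendall(_obfuscate(EMETER_CMD))
        length = struct.unpack(">I", sock.recv(4))[0]
        reply = json.loads(_deobfuscate(sock.recv(length)))
    realtime = reply["emeter"]["get_realtime"]
    for type_name in ("power", "voltage", "current"):
        collectd.Values(plugin="tplink_hs110",
                        type=type_name,
                        values=[realtime[type_name]]).dispatch()

collectd.register_read(read_hs110)

Loaded via collectd's python plugin (ModulePath and Import in collectd.conf), the values then show up like any other collectd metric and can be graphed with your frontend of choice.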

Planet DebianBits from Debian: Debsources now in sources.debian.org

Debsources is a web application for publishing, browsing and searching an unpacked Debian source mirror on the Web. With Debsources, all the source code of every Debian release is available in https://sources.debian.org, both via an HTML user interface and a JSON API.

This service was first offered in 2013 with the sources.debian.net instance, which was kindly hosted by IRILL, and is now becoming official under sources.debian.org, hosted on the Debian infrastructure.

This new instance offers all the features of the old one (an updater that runs four times a day, various plugins to count lines of code or measure the size of packages, and sub-apps to show lists of patches and copyright files), plus integration with other Debian services such as codesearch.debian.net and the PTS.

The Debsources Team has taken the opportunity of this move of Debsources onto the Debian infrastructure to officially announce the service. Read their message as well as the Debsources documentation page for more details.
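As a quick taste of the JSON API, a request along the lines of the sketch below should list the versions Debsources knows for a given source package. The /api/src/ endpoint path is an assumption based on the Debsources API documentation, so check the documentation linked above for the authoritative reference.

# Minimal sketch of querying the Debsources JSON API with the standard library.
# The endpoint path is an assumption; see the Debsources documentation.
import json
from urllib.request import urlopen

def list_versions(package):
    """Return the raw JSON Debsources publishes for a source package."""
    url = "https://sources.debian.org/api/src/{}/".format(package)
    with urlopen(url) as response:
        return json.load(response)

if __name__ == "__main__":
    print(json.dumps(list_versions("hello"), indent=2))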

TEDFree report: Bright ideas in business, distilled from TEDGlobal 2017

What’s a good way to remember an idea in the middle of a conference — so you can turn it into action? Take notes and brainstorm with others. At TEDGlobal 2017 in Tanzania, the Brightline Initiative inspired people to brainstorm ideas around talks they’d just watched, including Pierre Thiam’s celebration of the ancient grain fonio (watch this talk). (Photo: Ryan Lash/TED)

The Brightline Initiative helps executives implement ambitious ideas from business strategies, so it’s only fitting that the nonprofit group was onsite taking notes and holding brainstorms at TEDGlobal 2017 in Arusha, Tanzania. With the theme “Builders. Truth-Tellers. Catalysts.,” TEDGlobal was a celebration of doers and thinkers, including more than 70 speakers who’ve started companies, nonprofits, education initiatives and even movements.

We’re excited to share the Brightline Initiative’s just-released report on business ideas pulled from the talks of TEDGlobal 2017. These aren’t your typical business ideas — one speaker suggests a way to find brand-new markets by thinking beyond the physical address, while several others share how ancient traditions can spawn fresh ideas and even cutting-edge businesses. Whether you run a startup, sit in the C-suite or are known as a star employee, the ideas from these talks can spark new thinking and renew your inspiration.

Get the report here >>

PS: Look for more great ideas from the Brightline Initiative soon; this week at TED’s New York office, TED and Brightline partnered to produce an evening-length event of speakers who are creating change through smart, nuanced business thinking. Read about the event now, and watch for talks to appear on TED.com in the coming months.


Krebs on SecurityMirai IoT Botnet Co-Authors Plead Guilty

The U.S. Justice Department on Tuesday unsealed the guilty pleas of two men first identified in January 2017 by KrebsOnSecurity as the likely co-authors of Mirai, a malware strain that remotely enslaves so-called “Internet of Things” devices such as security cameras, routers, and digital video recorders for use in large scale attacks designed to knock Web sites and entire networks offline (including multiple major attacks against this site).

Entering guilty pleas for their roles in developing and using Mirai are 21-year-old Paras Jha from Fanwood, N.J. and Josiah White, 20, from Washington, Pennsylvania.

Jha and White were co-founders of Protraf Solutions LLC, a company that specialized in mitigating large-scale DDoS attacks. Like firemen getting paid to put out the fires they started, Jha and White would target organizations with DDoS attacks and then either extort them for money to call off the attacks, or try to sell those companies services they claimed could uniquely help fend off the attacks.

CLICK FRAUD BOTNET

In addition, the Mirai co-creators pleaded guilty to charges of using their botnet to conduct click fraud — a form of online advertising fraud that will cost Internet advertisers more than $16 billion this year, according to estimates from ad verification company Adloox. 

The plea agreements state that Jha, White and another person who also pleaded guilty to click fraud conspiracy charges — a 21-year-old from Metairie, Louisiana named Dalton Norman — leased access to their botnet for the purposes of earning fraudulent advertising revenue through click fraud activity and renting out their botnet to other cybercriminals.

As part of this scheme, victim devices were used to transmit high volumes of requests to view web addresses associated with affiliate advertising content. Because the victim activity resembled legitimate views of these websites, the activity generated fraudulent profits through the sites hosting the advertising content, at the expense of online advertising companies.

Jha and his co-conspirators admitted receiving as part of the click fraud scheme approximately two hundred bitcoin, valued on January 29, 2017 at over $180,000.

Prosecutors say Norman personally earned over 30 bitcoin, valued on January 29, 2017 at approximately $27,000. The documents show that Norman helped Jha and White discover new, previously unknown vulnerabilities in IoT devices that could be used to beef up their Mirai botnet, which at its height grew to more than 300,000 hacked devices.

MASSIVE ATTACKS

The Mirai malware is responsible for coordinating some of the largest and most disruptive online attacks the Internet has ever witnessed. The biggest and first to gain widespread media attention began on Sept. 20, 2016, when KrebsOnSecurity came under a sustained distributed denial-of-service attack from more than 175,000 IoT devices (the size estimates come from this Usenix paper (PDF) on the Mirai botnet evolution).

That September 2016 digital siege maxed out at 620 Gbps, almost twice the size of the next-largest attack that Akamai — my DDoS mitigation provider at the time — had ever seen.

The attack continued for several days, prompting Akamai to force my site off of their network (they were providing the service pro bono, and the attack was starting to cause real problems for their paying customers). For several frustrating days this Web site went dark, until it was brought under the auspices of Google’s Project Shield, a program that protects journalists, dissidents and others who might face withering DDoS attacks and other forms of digital censorship because of their publications.

At the end of September 2016, just days after the attack on this site, the authors of Mirai — who collectively used the nickname “Anna Senpai” — released the source code for their botnet. Within days of its release there were multiple Mirai botnets all competing for the same pool of vulnerable IoT devices.

The Hackforums post that includes links to the Mirai source code.

Some of those Mirai botnets grew quite large and were used to launch hugely damaging attacks, including the Oct. 21, 2016 assault against Internet infrastructure firm Dyn that disrupted Twitter, Netflix, Reddit and a host of other sites for much of that day.

A depiction of the outages caused by the Mirai attacks on Dyn, an Internet infrastructure company. Source: Downdetector.com.

The leak of the Mirai source code led to the creation of dozens of copycat Mirai botnets, all of which were competing to commandeer the same finite number of vulnerable IoT devices. One particularly disruptive Mirai variant was used in extortion attacks against a number of banks and Internet service providers in the United Kingdom and Germany.

In July 2017, KrebsOnSecurity published a story following digital clues that pointed to a U.K. man named Daniel Kaye as the apparent perpetrator of those Mirai attacks. Kaye, who went by the hacker nickname “Bestbuy,” was found guilty in Germany of launching failed Mirai attacks that nevertheless knocked out Internet service for almost a million Deutsche Telekom customers, for which he was given a suspended sentence. Kaye is now on trial in the U.K. for allegedly extorting banks in exchange for calling off targeted DDoS attacks against them.

Not long after the Mirai source code was leaked, I began scouring cybercrime forums and interviewing people to see if there were any clues that might point to the real-life identities of Mirai’s creators.

On Jan 18, 2017, KrebsOnSecurity published the results of that four-month inquiry, Who is Anna Senpai, the Mirai Worm Author? The story is easily the longest in this site’s history, and it cited a bounty of clues pointing back to Jha and White — two of the men whose guilty pleas were announced today.

A tweet from the founder and CTO of French hosting firm OVH, stating the intended target of the Sept. 2016 Mirai DDoS on his company.

According to my reporting, Jha and White primarily used their botnet to target online gaming servers — particularly those tied to the hugely popular game Minecraft. Around the same time as the attack on my site, French hosting provider OVH was hit with a much larger attack from the same Mirai botnet (see image above), and the CTO of OVH confirmed that the target of that attack was a Minecraft server hosted on his company’s network.

My January 2017 investigation also cited evidence and quotes from associates of Jha who said they suspected he was responsible for a series of DDoS attacks against Rutgers University: During the same year that Jha began studying at the university for a bachelor’s degree in computer science, the school’s servers came under repeated, massive attacks from Mirai.

With each DDoS against Rutgers, the attacker — using the nicknames “og_richard_stallman,” “exfocus” and “ogexfocus,” — would taunt the university in online posts and media interviews, encouraging the school to spend the money to purchase some kind of DDoS mitigation service.

It remains unclear if Jha (and possibly others) may face separate charges in New Jersey related to his apparent Mirai attacks on Rutgers. According to a sparsely-detailed press release issued Tuesday afternoon, the Justice Department is slated to hold a media conference at 2 p.m. today with officials from Alaska (where these cases originate) to “discuss significant cybercrime cases.”

Update: 11:43 a.m. ET: The New Jersey Star Ledger just published a story confirming that Jha also has pleaded guilty to the Rutgers DDoS attacks, as part of a separate case lodged by prosecutors in New Jersey.

PAYBACK

Under the terms of his guilty plea in the click fraud conspiracy, Jha agreed to give up 13 bitcoin, which at current market value of bitcoin (~$17,000 apiece) is nearly USD $225,000.

Jha will also waive all rights to appeal the conviction and whatever sentence gets imposed as a result of the plea. For the click fraud conspiracy charges, Jha, White and Norman each face up to five years in prison and a $250,000 fine.

In connection with their roles in creating and ultimately unleashing the Mirai botnet code, Jha and White each pleaded guilty to one count of conspiracy to violate 18 U.S.C. 1030(a)(5)(A). That is, to “causing intentional damage to a protected computer, to knowingly causing the transmission of a program, code, or command to a computer with the intention of impairing without authorization the integrity or availability of data, a program, system, or information.”

For the conspiracy charges related to their authorship and use of Mirai, Jha and White likewise face up to five years in prison, a $250,000 fine, and three years of supervised release.

This is a developing story. Check back later in the day for updates from the DOJ press conference, and later in the week for a follow-up piece on some of the lesser-known details of these investigations.

The Justice Department unsealed the documents related to these cases late in the day on Tuesday. Here they are:

Jha click fraud complaint (PDF)
Jha click fraud plea (PDF)
Jha DDoS/Mirai complaint (PDF)
Jha DDoS/Mirai plea (PDF)
White DDoS complaint (PDF)
White DDoS/Mirai Plea (PDF)
Norman click fraud complaint (PDF)
Norman click fraud plea (PDF)

CryptogramE-Mail Tracking

Good article on the history and practice of e-mail tracking:

The tech is pretty simple. Tracking clients embed a line of code in the body of an email -- usually in a 1x1 pixel image, so tiny it's invisible, but also in elements like hyperlinks and custom fonts. When a recipient opens the email, the tracking client recognizes that pixel has been downloaded, as well as where and on what device. Newsletter services, marketers, and advertisers have used the technique for years, to collect data about their open rates; major tech companies like Facebook and Twitter followed suit in their ongoing quest to profile and predict our behavior online.

But lately, a surprising -- and growing -- number of tracked emails are being sent not from corporations, but acquaintances. "We have been in touch with users that were tracked by their spouses, business partners, competitors," says Florian Seroussi, the founder of OMC. "It's the wild, wild west out there."

According to OMC's data, a full 19 percent of all "conversational" email is now tracked. That's one in five of the emails you get from your friends. And you probably never noticed.

I admit it's enticing. I would very much like the statistics that adding trackers to Crypto-Gram would give me. But I still don't do it.

Worse Than FailureThe Interview Gauntlet

Natasha found a job posting for a defense contractor that was hiring for a web UI developer. She was a web UI developer, familiar with all the technologies they were asking for, and she’d worked for defense contractors before, and understood how they operated. She applied, and they invited her in for one of those day-long, marathon interviews.

They told her to come prepared to present some of her recent work. Natasha and half a dozen members of the team crammed into an undersized meeting room. Irving, the director, was the last to enter, and his reaction to Natasha could best be described as “hate at first sight”.

Irving sat directly across from Natasha, staring daggers at her while she pulled up some examples of her work. Picking on a recent project, she highlighted what parts she’d worked on, what techniques she’d used, and why. Aside from Irving’s glare, it played well. She got good questions, had some decent back-and-forth, and was feeling pretty confident when she said, “Now, moving onto a more recent project-”

A blue sky, highlighted by a 'y' formed out of contrails

“Oh, thank god,” Irving groaned. His tone was annoyed, and possibly sarcastic. It was really impossible to tell. He let Natasha get a few sentences into talking about the next project, and then interrupted her. “This is fine. Let’s just break out into one-on-one interviews.”

Jack, the junior developer, was up first. He moved down the table to be across from Natasha. “You’re really not a good fit for the position we’re hiring for,” he said, “but let’s go ahead and do this anyway.”

So they did. Jack had some basic web-development questions, less on the UI side and more on the tooling side. “What’s transpiling,” and “how do ES2015 modules work”. They had a pleasant back and forth, and then Jack tagged out so that Carl could come in.

Carl didn’t start by asking a question, instead he scribbled some code on the white board:

int a[10];
*(a + 5) = 1;

“What does that do?” he demanded.

Natasha recognized it as C or C++, which jostled a few neurons from back in her CS101 days. She wasn’t interviewing to do C/C++, so she just shrugged and made her best guess. “That’s some pointer arithmetic stuff, right? Um… setting the 5th element of the array?”

Carl scribbled different C code onto the board, and repeated his question: “What does that do?”

Carl’s interview set the tone for the day. Over the next few hours, she met each team member. They each interviewed her on a subject that had nothing to do with UI development. She fielded questions about Linux system administration via LDAP, how subnets are encoded in IPs under IPv6, and their database person wanted her to estimate average seek times to fetch rows from disk when using a 7,200 RPM drive formatted in Ext4.

After surviving that gauntlet of seemingly pointless questions, it was Irving’s turn. His mood hadn’t improved, and he had no intention of asking her anything relevant. His first question was: “Tell me, Natasha, how would you estimate the weight of the Earth?”

“Um… don’t you mean mass?”

Irving grunted and shrugged. He didn’t say, “I don’t like smart-asses” out loud, but it was pretty clear that’s what he thought about her question.

Off balance, she stumbled through a reply about estimating the relative components that make up the Earth, their densities, and the size of the Earth. Irving pressed her on that answer, and she eventually sputtered something about a spring scale with a known mass, and Newton’s law of gravitation.

He still didn’t seem satisfied, but Irving had other questions to ask. “How many people are in the world?” “Why is the sky blue?” “How many turkeys would it take to fill this space?”

Eventually, frustrated by the series of inane questions after a day’s worth of useless questions, Natasha finally bit back. “What is the point of these questions?”

Irving sighed and made a mark on his interview notes. “The point,” he said, “is to see how long it took you to admit you didn’t know the answers. I don’t think you’re going to be a good fit for this team.”

“So I’ve heard,” Natasha said. “And I don’t think this team’s a good fit for me. None of the questions I’ve fielded today really have anything to do with the job I applied for.”

“Well,” Irving said, “we’re hiring for a number of possible positions. Since we had you here anyway, we figured we’d interview you for all of them.”

“If you were interviewing me for all of them, why didn’t I get any UI-related questions?”

“Oh, we already filled that position.”


Planet DebianPetter Reinholdtsen: Idea for finding all public domain movies in the USA

While looking at the scanned copies for the copyright renewal entries for movies published in the USA, an idea occurred to me. The number of renewals per year is so small that it should be fairly quick to transcribe them all and add references to the corresponding IMDB title IDs. This would give the (presumably) complete list of movies published 28 years earlier that did _not_ enter the public domain for the transcribed year. By fetching the list of USA movies published 28 years earlier and subtracting the movies with renewals, we should be left with movies registered in IMDB that are now in the public domain. For the year 1955 (which is the one I have looked at the most), the total number of pages to transcribe is 21. For the 28 years from 1950 to 1978, it should be in the range 500-600 pages. It is just a few days of work, and spread among a small group of people it should be doable in a few weeks of spare time.

A typical copyright renewal entry looks like this (the first one listed for 1955):

ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); 10Jun55; R151558.

The movie title as well as registration and renewal dates are easy enough to locate by a program (split on first comma and look for DDmmmYY). The rest of the text is not required to find the movie in IMDB, but is useful to confirm the correct movie is found. I am not quite sure what the L and R numbers mean, but suspect they are reference numbers into the archive of the US Copyright Office.
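
To make that idea concrete, here is a rough sketch in Python (my own illustration, untested against the real scans, not code from any existing tool) of the "split on first comma and look for DDmmmYY" approach applied to the entry above:

import re

entry = ("ADAM AND EVIL, a photoplay in seven reels by Metro-Goldwyn-Mayer "
         "Distribution Corp. (c) 17Aug27; L24293. Loew's Incorporated (PWH); "
         "10Jun55; R151558.")

# The title is everything before the first comma; the dates follow the DDmmmYY pattern.
title, rest = entry.split(",", 1)
dates = re.findall(r"\d{1,2}[A-Z][a-z]{2}\d{2}", rest)
registration, renewal = dates[0], dates[-1]

print(title.strip(), registration, renewal)
# ADAM AND EVIL 17Aug27 10Jun55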

Tracking down the equivalent IMDB title ID is probably going to be a manual task, but given the year it is fairly easy to search for the movie title using for example http://www.imdb.com/find?q=adam+and+evil+1927&s=all. Using this search, I find that the equivalent IMDB title ID for the first renewal entry from 1955 is http://www.imdb.com/title/tt0017588/.
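
Building that search URL from the parsed entry is just as easy; continuing the sketch above (again only an illustration, and the 1900 offset is an assumption that only holds for these pre-2000 registration dates):

from urllib.parse import urlencode

year = 1900 + int(registration[-2:])          # '17Aug27' -> 1927
query = urlencode({"q": "%s %d" % (title.strip().lower(), year), "s": "all"})
print("http://www.imdb.com/find?" + query)
# http://www.imdb.com/find?q=adam+and+evil+1927&s=all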

I suspect the best way to do this would be to make a specialised web service that makes it easy for contributors to transcribe entries and track down IMDB title IDs. In the web service, once an entry is transcribed, the title and year could be extracted from the text and a search in IMDB conducted so the user can pick the equivalent IMDB title ID right away. By spreading out the work among volunteers, it would also be possible to have at least two people transcribe the same entries, to discover any typos introduced. But I will need help to make this happen, as I lack the spare time to do all of this on my own. If you would like to help, please get in touch. Perhaps you can draft a web service for crowdsourcing the task?

Note, Project Gutenberg already has some transcribed copies of the US Copyright Office renewal protocols, but I have not been able to find any film renewals there, so I suspect it only covers renewals for written works. I have not been able to find any transcribed versions of movie renewals so far. Perhaps they exist somewhere?

I would love to figure out methods for finding all the public domain works in other countries too, but it is a lot harder. At least for Norway and Great Britain, such work involves tracking down the people involved in making the movie and figuring out when they died. It is hard enough to figure out who was part of making a movie, but I do not know how to automate such a procedure without a registry of every person involved in making movies and their death years.

As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

Planet DebianRhonda D'Vine: #metoo

I long thought about whether I should post a/my #metoo. It wasn't a rape. Nothing really happened. And a lot of these stories are very disturbing.

And yet it still bothers me every now and then. I was of school age, late elementary or lower school ... In my hometown there is a cinema. Young as we were, we weren't allowed to see Rambo/Rocky. Not that I was very interested in the movie ... But there the door to the screening room stood open. And curious as we were, we looked through the door. The projectionist saw us and waved us in. It was exciting to see a movie from that perspective that was forbidden to us.

He explained to us how the machines worked, showed us how the film rolls were put in and showed us how to see the signals on the screen which are the sign to turn on the second projector with the new roll.

During these explanations he was standing very close to us. Really close. He put his arm around us. The hand moved towards the crotch. It was unpleasant and we knew that it wasn't all right. But screaming? We weren't allowed to be there ... So we thanked him nicely and retreated, disturbed. The movie wasn't that good anyway.

Nothing really happened, and we didn't say anything.


Planet DebianRussell Coker: Thinkpad X301

Another Broken Thinkpad

A few months ago I wrote a post about “Observing Reliability” [1] regarding my Thinkpad T420. I noted that the T420 had been running for almost 4 years which was a good run, and therefore the failed DVD drive didn’t convince me that Thinkpads have quality problems.

Since that time the plastic on the lid by the left hinge broke; every time I open or close the lid it breaks a bit more. That prevents use of that Thinkpad by anyone who wants to use it as a serious laptop, as it can’t be expected to last long if opened and closed several times a day. It probably wouldn’t be difficult to fix the lid, but for an old laptop it doesn’t seem worth the effort and/or money. So my plan now is to give the Thinkpad to someone who wants a compact desktop system with a built-in UPS; a friend in Vietnam can probably find a worthy recipient.

My Thinkpad History

I bought the Thinkpad T420 in October 2013 [2], it lasted about 4 years and 2 months. It cost $306.

I bought my Thinkpad T61 in February 2010 [3], it lasted about 3 years and 8 months. It cost $796 [4].

Prior to the T61 I had a T41p that I received well before 2006 (maybe 2003) [5]. So the T41p lasted close to 7 years, as it was originally bought for me by a multinational corporation I’m sure it cost a lot of money. By the time I bought the T61 it had display problems, cooling problems, and compatibility issues with recent Linux distributions.

Before the T41p I had 3 Thinkpads in 5 years, all of which had the type of price that only made sense in the dot-com boom.

In terms of absolute lifetime the Thinkpad T420 did ok. In terms of cost over its lifetime it did very well, only $6 per month. The T61 was $18 per month, and while the T41p lasted a long time it probably cost over $2000, giving it a cost of over $20 per month. $20 per month is still good value, I definitely get a lot more than $20 per month benefit from having a laptop. While it’s nice that my most recent laptop could be said to have saved me $12 per month over the previous one, it doesn’t make much difference to my financial situation.

Thinkpad X301

My latest Thinkpad is an X301 that I found on an e-waste pile; it had a broken DVD drive, which is presumably the reason why someone decided to throw it out. It has the same power connector as my previous 2 Thinkpads, which was convenient as I didn’t find a PSU with it. I saw a review of the X301 dated 2008 which probably means it was new in 2009, but it has no obvious signs of wear so probably hasn’t been used much.

My X301 has a 1440*900 screen which isn’t as good as the T420 resolution of 1600*900. But a lower resolution is an expected trade-off for a smaller laptop. The X301 comes with a 64G SSD which is a significant limitation.

I previously wrote about a “cloud lifestyle” [6]. I hadn’t implemented all the ideas from that post due to distractions and a lack of time. But now that I’ll have a primary PC with only 64G of storage I have more incentive to do that. The 100G disk in the T61 was a minor limitation at the time I got it, but since then everything got bigger and 64G is going to be a big problem. The fact that it’s an unusual 1.8″ form factor means that I can’t cheaply upgrade it or use the SSD that I’ve used in the Thinkpad T420.

My current Desktop PC is an i7-2600 system which builds the SE Linux policy packages for Debian (the thing I compile most frequently) in about 2 minutes with about 5 minutes of CPU time used. The same compilation on the X301 takes just over 6.5 minutes with almost 9 minutes of CPU time used. The i5 CPU in the Thinkpad T420 was somewhere between those times. While I can wait 6.5 minutes for a compile to test something it is an annoyance. So I’ll probably use one of the i7 or i5 class servers I run to do builds.

On the T420 I had chroot environments running with systemd-nspawn for the last few releases of Debian in both AMD64 and i386 variants. Now I have to use a server somewhere for that.

I stored many TV shows, TED talks, and movies on the T420. Probably part of the problem with the hinge was due to adjusting the screen while watching TV in bed. Now I have a phone with 64G of storage and a tablet with 32G so I will use those for playing videos.

I’ve started to increase my use of Git recently. There’s many programs I maintain that I really should have had version control for years ago. Now the desire to develop them on multiple systems gives me an incentive to do this.

Comparing to a Phone

My latest phone is a Huawei Mate 9 (I’ll blog about that shortly) which has a 1920*1080 screen and 64G of storage. So it has a higher resolution screen than my latest Thinkpad as well as equal storage. My phone has 4G of RAM while the Thinkpad only has 2G (I plan to add RAM soon).

I don’t know of a good way of comparing CPU power of phones and laptops (please comment if you have suggestions about this). The issues of GPU integration etc will make this complex. But I’m sure that the octa-core CPU in my phone doesn’t look too bad when compared to the dual-core CPU in my Thinkpad.

Conclusion

The X301 isn’t a laptop I would choose to buy today. Since using it I’ve appreciated how small and light it is, so I would definitely consider a recent X series. But being free, the value for money is NaN, which makes it more attractive. Maybe I won’t try to get 4+ years of use out of it; in 2 years time I might buy something newer and better in a similar form factor.

I can just occasionally poll an auction site and bid if there’s anything particularly tempting. If I was going to buy a new laptop now before the old one becomes totally unusable I would be rushed and wouldn’t get the best deal (particularly given that it’s almost Christmas).

Who knows, I might even find something newer and better on an e-waste pile. It’s amazing the type of stuff that gets thrown out nowadays.


Sociological ImagesSocImages Classic—The Ugly Christmas Sweater: From ironic nostalgia to festive simulation

National Ugly Christmas Sweater Day is this Friday, December 15th. Perhaps you’ve noticed the recent ascent of the Ugly Christmas Sweater or even been invited to an Ugly Christmas Sweater Party. How do we account for this trend and its call to “don we now our tacky apparel”?

Total search of term “ugly Christmas sweater” relative to other searches over time (c/o Google Trends):

Ugly Christmas Sweater parties purportedly originated in Vancouver, Canada, in 2001. Their appeal might seem to stem from their role as a vehicle for ironic nostalgia, an opportunity to revel in all that is festively cheesy. It also might provide an opportunity to express the collective effervescence of the well-intentioned (but hopelessly tacky) holiday apparel from moms and grandmas.

However, The Atlantic points to a more complex reason why we might enjoy the cheesy simplicity offered by Ugly Christmas Sweaters: “If there is a war on Christmas, then the Ugly Christmas Sweater, awesome in its terribleness, is a blissfully demilitarized zone.” This observation pokes fun at the Fox News-style hysterics regarding the “War on Christmas”; despite being commonly called Ugly Christmas Sweaters, the notion seems to persist that their celebration is an inclusive and “safe” one.

Photo Credit: TheUglySweaterShop, Flickr CC

We might also consider the generally fraught nature of the holidays (which are financially and emotionally taxing for many), suggesting that the Ugly Sweater could offer an escape from individual holiday stress. There is no shortage of sociologists who can speak to the strain of family, consumerism, and mental health issues that plague the holidays, to say nothing of the particular gendered burdens they produce. Perhaps these parties represent an opportunity to shelve those tensions.

But how do we explain the fervent communal desire for simultaneous festive celebration and escape? Fred Davis notes that nostalgia is invoked during periods of discontinuity. This can occur at the individual level when we use nostalgia to “reassure ourselves of past happiness.” It may also function as a collective response – a “nostalgia orgy”- whereby we collaboratively reassure ourselves of shared past happiness through cultural symbols. The Ugly Christmas Sweater becomes a freighted symbol of past misguided, but genuine, familial affection and unselfconscious enthusiasm for the holidays – it doesn’t matter that we have not all really had the actual experience of receiving such a garment.

Jean Baudrillard might call the process of mythologizing the Ugly Christmas Sweater a simulation, a collapsing between reality and representation. And, as George Ritzer points out, simulation can become a ripe target for corporatization as it can be made more spectacular than its authentic counterparts. We need only look at the shift from the “authentic” prerogative to root through one’s closet for an ugly sweater bestowed by grandma (or even to retrieve from the thrift store a sweater imparted by someone else’s grandma) to the cottage-industry that has sprung up to provide ugly sweaters to the masses. There appears to be a need for collective nostalgia that is outstripped by the supply of “actual” Ugly Christmas Sweaters that we have at our disposal.

Colin Campbell states that consumption involves not just purchasing or using a good or service, but also selecting and enhancing it. Accordingly, our consumptive obligation to the Ugly Christmas Sweater becomes more demanding, individualized and, as Ritzer predicts, spectacular. For example, we can view this intensive guide for DIY ugly sweaters. If DIY isn’t your style, you can indulge your individual (but mass-produced) tastes in NBA-inspired or cultural mash-up Ugly Christmas Sweaters, or these Ugly Christmas Sweaters that aren’t even sweaters at all.

The ironic appeal of the Ugly Christmas Sweater Party is that one can be deemed festive for partaking, while simultaneously ensuring that one is participating in a “safe” celebration – or even a gentle mockery – of holiday saturation and demands. The ascent of the Ugly Christmas Sweater has involved a transition from ironic nostalgia vehicle to a corporatized form of escapism, one that we are induced to participate in as a “safe” form of  festive simulation that becomes increasingly individualized and demanding in expression.

Re-posted at Pacific Standard.

Kerri Scheer is a PhD Student working in law and regulation in the Department of Sociology at the University of Toronto. She thanks her colleague Allison Meads for insights and edits on this post. You can follow Kerri on Twitter.

(View original at https://thesocietypages.org/socimages)

Krebs on SecurityPatch Tuesday, December 2017 Edition

The final Patch Tuesday of the year is upon us, with Adobe and Microsoft each issuing security updates for their software once again. Redmond fixed problems with various flavors of Windows, Microsoft Edge, Office, Exchange and its Malware Protection Engine. And of course Adobe’s got another security update available for its Flash Player software.

The December patch batch addresses more than 30 vulnerabilities in Windows and related software. As per usual, a huge chunk of the updates from Microsoft tackle security problems with the Web browsers built into Windows.

Also in the batch today is an out-of-band update that Microsoft first issued last week to fix a critical issue in its Malware Protection Engine, the component that drives the Windows Defender/Microsoft Security Essentials embedded in most modern versions of Windows, as well as Microsoft Endpoint Protection, and the Windows Intune Endpoint Protection anti-malware system.

Microsoft was reportedly made aware of the malware protection engine bug by the U.K.’s National Cyber Security Centre (NCSC), a division of the Government Communications Headquarters — the United Kingdom’s main intelligence and security agency. As spooky as that sounds, Microsoft said it is not aware of active attacks exploiting this flaw.

Microsoft said the flaw could be exploited via a booby-trapped file that gets scanned by the Windows anti-malware engine, such as an email or document. The issue is fixed in version 1.1.14405.2 of the engine. According to Microsoft, Windows users should already have the latest version because the anti-malware engine updates itself constantly. In any case, for detailed instructions on how to check whether your system has this update installed, see this link.

The Microsoft updates released today are available in one big batch from Windows Update, or automagically via Automatic Updates. If you don’t have Automatic Updates enabled, please visit Windows Update sometime soon (click the Start/Windows button, then type Windows Update).

The newest Flash update from Adobe brings the player to v. 28.0.0.126 on Windows, Macintosh, Linux and Chrome OS. Windows users who browse the Web with anything other than Internet Explorer may need to apply the Flash patch twice, once with IE and again using the alternative browser (e.g. Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart, although users may need to manually check for updates and/or restart the browser to get the latest version.

When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then. Chrome will replace that three dot icon with an up-arrow inside of a circle when updates are waiting to be installed.

Standard disclaimer: Because Flash remains such a security risk, I continue to encourage readers to remove or hobble Flash Player unless and until it is needed for a specific site or purpose. More on that approach (as well as slightly less radical solutions ) can be found in A Month Without Adobe Flash Player. The short version is that you can probably get by without Flash installed and not miss it at all.

For readers still unwilling to cut the cord, there are half-measures that work almost as well. Fortunately, disabling Flash in Chrome is simple enough. Paste “chrome://settings/content” into a Chrome browser bar and then select “Flash” from the list of items. By default it should be set to “Ask first” before running Flash, although users also can disable Flash entirely here or whitelist and blacklist specific sites.

Another, perhaps less elegant, solution is to keep Flash installed in a browser that you don’t normally use, and then to only use that browser on sites that require it.

Planet DebianKeith Packard: AltOS 1.8.3

AltOS 1.8.3 — TeleMega version 3.0 support and bug fixes

Bdale and I are pleased to announce the release of AltOS version 1.8.3.

AltOS is the core of the software for all of the Altus Metrum products. It consists of firmware for our cc1111, STM32L151, STMF042, LPC11U14 and ATtiny85 based electronics and Java-based ground station software.

This is a minor release of AltOS, including support for our new TeleMega v3.0 board and a selection of bug fixes.

Announcing TeleMega v3.0

TeleMega is our top of the line flight computer with 9-axis IMU, 6 pyro channels, uBlox Max 7Q GPS and 40mW telemetry system. Version 3.0 is feature compatible with version 2.0, incorporating a new higher-performance 9-axis IMU in place of the former 6-axis IMU and separate 3-axis magnetometer.

AltOS 1.8.3

In addition to support for TeleMega v3.0 boards, AltOS 1.8.3 contains some important bug fixes for all flight computers. Users are advised to upgrade their devices.

  • Ground testing EasyMega and TeleMega additional pyro channels could result in a sticky 'fired' status which would prevent these channels from firing on future flights.

  • Corrupted flight log records could prevent future flights from capturing log data.

  • Fixed saving of pyro configuration that ended with 'Descending'. This would cause the configuration to revert to the previous state during setup.

The latest AltosUI and TeleGPS applications have improved functionality for analyzing flight data. The built-in graphing capabilities are improved with:

  • Graph lines have improved appearance to make them easier to distinguish. Markers may be placed at data points to show captured recorded data values.

  • Graphing offers the ability to adjust the smoothing of computed speed and acceleration data.

Exporting data for other applications has some improvements as well:

  • KML export now reports both barometric and GPS altitude data to make it more useful for Tripoli record reporting.

  • CSV export now includes TeleMega/EasyMega pyro voltages and tilt angle.

CryptogramRemote Hack of a Boeing 757

Last month, the DHS announced that it was able to remotely hack a Boeing 757:

"We got the airplane on Sept. 19, 2016. Two days later, I was successful in accomplishing a remote, non-cooperative, penetration," said Robert Hickey, aviation program manager within the Cyber Security Division of the DHS Science and Technology (S&T) Directorate.

"[Which] means I didn't have anybody touching the airplane, I didn't have an insider threat. I stood off using typical stuff that could get through security and we were able to establish a presence on the systems of the aircraft." Hickey said the details of the hack and the work his team are doing are classified, but said they accessed the aircraft's systems through radio frequency communications, adding that, based on the RF configuration of most aircraft, "you can come to grips pretty quickly where we went" on the aircraft.

Worse Than FailureCodeSOD: ALM Tools Could Fix This

I’m old enough that, when I got into IT, we just called our organizational techniques “software engineering”. It drifted into “project management”, then the “software development life-cycle”, and lately “application life-cycle management (ALM)”.

No matter what you call it, you apply these techniques so that you can at least attempt to release software that meets the requirements and is reasonably free from defects.

Within the software development space, there are families of tools and software that we can use to implement some sort of ALM process… like “Harry Peckherd”’s Application Life-Cycle Management suite. By using their tool, you can release software that meets the requirements and is free from defects, right?

Well, Brendan recently attempted to upgrade their suite from 12.01 to 12.53, and it blew up with a JDBC error: [Mercury][SQLServer JDBC Driver][SQLServer]Cannot find the object "T_DBMS_SQL_BIND_VARIABLE" because it does not exist or you do not have permissions. He picked through the code that it was running, and found this blob of SQL:

DROP TABLE [t_dbms_sql_bind_variable]
DECLARE @sql AS VARCHAR(4000)
begin
SET @sql = ''
SELECT @sql = @sql + 'DROP FULLTEXT INDEX ON T_DBMS_SQL_BIND_VARIABLE'
FROM sys.fulltext_indexes
WHERE object_id = object_id('T_DBMS_SQL_BIND_VARIABLE')
GROUP BY object_id
if @sql <> '' exec (@sql)
end
ALTER TABLE [T_DBMS_SQL_BIND_VARIABLE] DROP CONSTRAINT [FK_t_dbms_sql_bind_variable_t_dbms_sql_cursor]

The upgrade script drops a table, drops the associated indexes on it, and then attempts to alter the table it just dropped. This is a real thing, released as part of software quality tools, by a major vendor in the space. They shipped this.



Planet DebianJoey Hess: two holiday stories

Two stories of something nice coming out of something not-so-nice for the holidays.

Story the first: The Gift That Kept on Giving

I have a Patreon account that is a significant chunk of my funding to do what I do. Patreon has really pissed off a lot of people this week, and people are leaving it in droves. My Patreon funding is down 25%.

This is an opportunity for Liberapay, which is run by a nonprofit, avoids Patreon's excessive fees, and is free software to boot. So now I have a Liberapay account and have diversified my sustainable funding some more, although only half of the people I lost from Patreon have moved over. A few others have found other ways to donate to me, including snail mail and Paypal, and others I'll just lose. Thanks, Patreon.

Yesterday I realized I should check if anyone had decided to send me Bitcoin. Bitcoin donations are weird because no one ever tells me that they made them. Also because it's never clear if the motive is to get me invested in bitcoin or send me some money. I prefer not to be invested in risky currency speculation, preferring risks like "write free software without any clear way to get paid for it", so I always cash it out immediately.

I have not used bitcoin for a long time. I could see a long time ago that its development community was unhealthy, that there was going to be a messy fork and I didn't want the drama of that. My bitcoin wallet files were long deleted. Checking my address online, I saw that in fact two people had reacted to Patreon by sending a little bit of bitcoin to me.

I checked some old notes to find the recovery seeds, and restored "hot wallet" and "cold wallet", not sure which was my public incoming wallet. Neither was, and after some concerned scrambling, I found the gpg locked file in a hidden basement subdirectory that let me access my public incoming wallet.

What of the other two wallets? "Hot wallet" was empty. But "cold wallet" turned out to be some long forgotten wallet, and yes, this is now a story about "some long forgotten bitcoin wallet" -- you know where this is going right?

Yeah, well it didn't have a life changing amount of bitcoin in it, but it had a little almost-dust from a long-ago bitcoin donor, which at current crazy bitcoin prices, is enough that I may need to fill out a tax form now that I've sold it. And so I will be having a happy holidays, no matter how the Patreon implosion goes. But for sustainable funding going forward, I do hope that Liberapay works out.

Story the second: "a lil' positive end note does wonders"

I added this to the end of git-annex's bug report template on a whim two years ago:

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

That prompt turned out to be much more successful than I had expected, and so I want to pass the gift of the idea on to you. Consider adding something like that to your project's bug report template.

It really works: I'll see a bug report be lost and confused and discouraged, and keep reading to make sure I see whatever nice thing there might be at the end. It's not just about meaningless politeness either, it's about getting an impression about whether the user is having any success at all, and how experienced they are in general, which is important in understanding where a bug report is coming from.

I've learned more from it than I have from most other interactions with git-annex users, including the git-annex user surveys. Out of 217 bug reports that used this template, 182 answered the question. Here are some of my favorite answers.

Have you had any luck using git-annex before? (Sometimes we get tired of reading bug reports all day and a lil' positive end note does wonders)

  • I do! I wouldn't even have my job, if it wasn't for git-annex. ;-)

  • Yeah, it works great! If not for it I would not have noticed this data corruption until it was too late.

  • Indeed. All my stuff (around 3.5 terabytes) is stored in git-annex with at least three copies of each file on different disks and locations, spread over various hard disks, memory sticks, servers and you name it. Unused disk space is a waste, so I fill everything up to the brim with extra copies.

    In other words, Git-Annex and I are very happy together, and I'd like to marry it. And because you are the father, I hereby respectfully ask for your blessing.

  • Yes, git with git annex has revolutionised my scientific project file organisation and thats why I want to improve it.

  • <3 <3 <3

  • We use git-annex for our open-source FreeSurfer software and find very helpful indeed. Thank you. https://surfer.nmr.mgh.harvard.edu/

  • Yes I have! I've used it manage lots of video editing disks before, and am now migrating several slightly different copies of 15TB sized documentary footage from random USB3 disks and LTO tapes to a RAID server with BTRFS.

  • Oh yeah! This software is awesome. After getting used to having "dummy" shortcuts to content I don't currently have, with the simple ability to get/drop that content, I can't believe I haven't seen this anywhere before. If there is anything more impressive than this software, it's the support it has had from Joey over all this time. I'd have pulled my hair out long ago. :P

  • kinda

  • Yep, works apart from the few tests that fail.

  • Not yet, but I'm excited to make it work!

  • Roses are red
    Violets are blue
    git-annex is awesome
    and so are you
    ;-)
    But bloody hell, it's hard to get this thing to build.

  • git-annex is awesome, I lean on it heavily nearly every single day.

  • I have a couple of repositories atm, one with my music, another that backs up our family pictures for the family and uses Amazon S3 as a backup.

  • Yes! It's by far one of my favorite apps! it works very well on my laptop, on my home file server, and on my internal storage on my Android phone :)

  • Yes! I've been using git-annex quite a bit over the past year, for everything from my music collection to my personal files. Using it for a not-for-profit too. Even trying to get some Mac and Windows users to use it for our HOA's files.

  • I use git-annex for everything. I've got 10 repositories and around 2.5TB of data in those repos which in turn is synced all over the place. It's excellent.

  • Really nice tool. Thanks Joey!

  • Git-annex rocks !!!!

  • I'd love to say I have. You'll hear my shout of joy when I do.

  • Mixed bag, works when it works, but I've had quite a few "unexplained" happenings. Perservering for now, hoping me reporting bugs will see things improve...

  • Yes !!! I'm moving all my files into my annex. It is very robust; whenever something is wrong there is always some other copy somewhere that can be used.

  • Yes! git annex has been enormously helpful. Thanks so much for this tool.

  • Oh yes! I love git-annex :) I've written the hubiC special remote for git-annex, the zsh completion, contributed to the crowdfunding campaigns, and I'm a supporter on Patreon :)

  • Yes, managing 30000 files, on operating systems other than Windows though...

  • Of course ;) All the time

  • I trust Git Annex to keep hundreds of GB of data safe, and it has never failed me - despite my best efforts

  • Oh yeah, I am still discovering this powerfull git annex tool. In fact, collegues and I are forming a group during the process to exchange about different use cases, encountered problems and help each other.

  • I love the metadata functionality so much that I wrote a gui for metadata operations and discovered this bug.

  • Sure, it works marvels :-) Also what I was trying to do is perhaps not by the book...

  • Oh, yes. It rules. :) One of the most important programs I use because I have all my valuable stuff in it. My files have never been safer.

  • I'm an extremely regular user of git-annex on OS X and Linux, for several years, using it as a podcatcher and to manage most of my "large file" media. It's one of those "couldn't live without" tools. Thanks for writing it.

  • Yes, I've been using git annex for I think a year and a half now, on several repositories. It works pretty well. I have a total of around 315GB and 23K annexed keys across them (counting each annex only once, even though they're cloned on a bunch of machines).

  • I only find (what I think are) bugs because I use it and I use it because I like it. I like it because it works (except for when I find actual bugs :]).

  • I'm new to git-annex and immediately astonished by its unique strength.

  • As mentioned before, I am very, very happy with git-annex :-) Discovery of 2015 for me.

  • git-annex is great and revolutionized my file organization and backup structure (if they were even existing before)

  • That’s just a little hiccup in, up to now, various months of merry use! ;-)

  • Yes. Love it. Donated. Have been using it for years. Recommend it and get(/force) my collaborators to use it. ;-)

  • git-annex is an essential building block in my digital life style!

  • Well, git-annex is wonderful!

A lil' positive end note turned into a big one, eh? :)

Planet DebianWouter Verhelst: Systemd, Devuan, and Debian

Somebody recently pointed me towards a blog post by a small business owner who proclaimed to the world that using Devuan (and not Debian) is better, because it's cheaper.

Hrm.

Looking at creating Devuan, which means splitting of Debian, economically, you caused approximately infinite cost.

Well, no. I'm immensely grateful to the Devuan developers, because when they announced their fork, all the complaints about systemd on the debian-devel mailing list ceased to exist. Rather than a cost, that was an immensely gratifying experience, and it made sure that I started reading the debian-devel mailing list again, which I had stopped for a while before that. Meanwhile, life in Debian went on as it always has.

Debian values choice. Fedora may not be about choice, but Debian is. If there are two ways of doing something, Debian will include all four. If you want to run a Linux system, and you're not sure whether to use systemd, upstart, or something else, then Debian is for you! (well, except if you want to use upstart, which is in jessie but not in stretch). Debian defaults to using systemd, but it doesn't enforce it; and while it may require a bit of manual handholding to make sure that systemd never ever ever ends up on your system, this is essentially not difficult.

you@your-machine:~$ apt install equivs; equivs-control your-sanity; $EDITOR your-sanity

Now make sure that what you get looks something like this (ignoring comments):

Section: misc
Priority: standard
Standards-Version: <whatever was there>

Package: your-sanity
Essential: yes
Conflicts: systemd-sysv
Description: Make sure this system does not install what I don't want
 The packages in the Conflicts: header cannot be installed without
 very difficult steps, and apt will never offer to install them.

Install it on every system where you don't want to run systemd. You're done, you'll never run systemd. Well, except if someone types the literal phrase "Yes, do as I say!", including punctuation and everything, when asked to do so. If you do that, well, you get to keep both pieces. Also, did you see my pun there? Yes, it's a bit silly, I admit it.

But before you take that step, consider this.

Four years ago, I was an outspoken opponent of systemd. It was a bad idea, I thought. It is not portable. It will cause the death of Debian GNU/kFreeBSD, and a few other things. It is difficult to understand and debug. It comes with a truckload of other things that want to replace the universe. Most of all, their developers had a pretty bad reputation of being, pardon my French, arrogant assholes.

Then, the systemd maintainers filed bug 796633, asking me to provide a systemd unit for nbd-client, since it provided an rcS init script (which is really a very special case), and the compatibility support for that in systemd was complicated and support for it would be removed from the systemd side. Additionally, providing a systemd template unit would make the systemd nbd experience much better, without dropping support for other init systems (those cases can still use the init script). In order to develop that, I needed a system to test things on. Since I usually test things on my laptop, I installed systemd on my laptop. The intent was to remove it afterwards. However, for various reasons, that never happened, and I still run systemd as my pid1. Here's why:

  • Systemd is much faster. Where my laptop previously took 30 to 45 seconds to boot using sysvinit, it takes less than five. In fact, it took longer for it to do the POST than it took for the system to boot from the time the kernel was loaded. I changed the grub timeout from the default of five seconds to something more reasonable, because I found that five seconds was just ridiculously long if it takes about half that for the rest of the system to boot to a login prompt afterwards.
  • Systemd is much more reliable. That is, it will fail more often, but it will reliably fail. When it fails, it will tell you why it failed, so you can figure out what went wrong and fix it, making sure the system never fails again in the same fashion. The unfortunate fact of the matter is that there were many bugs in our init scripts, but they were never discovered and therefore lingered. For instance, you would not know about this race condition between two init scripts, because sysvinit is so dog slow that 99 times out of 100 it would not trigger, and therefore you don't see it. The one time you do see it, something didn't come up, but sysvinit doesn't log about such errors (it expects the init script to do so), so all you can do is go "damn, wtf happened?!?" and manually start things, allowing the bug to remain. These race conditions were much more likely to trigger with systemd, which caused it a lot of grief originally; but really, you should be thankful, because now that all these race conditions have been discovered by way of an init system that is much more verbose about such problems, they have also been fixed, and your sysvinit system is more reliable, too, as a result. There are other similar issues (dependency loops, to name one) that systemd helped fix.
  • Systemd is different, and that requires some re-schooling. When I first moved my laptop to systemd, I remember running into some kind of issue that I couldn't figure out how to fix. No, I don't remember the specifics of that issue, but they don't really matter. The point is this: at first, I thought "this is horrible, you can't debug it, how can you use such a system". And while it's true that undebuggable systems are not very useful, the systemd maintainers know this too, and therefore systemd is debuggable. It's just that you don't debug it by throwing some imperative init script code through a debugger (or, worse, something like sh -x), because there is no imperative init script code to throw through such a debugger, and therefore that makes little sense. Instead, there is a wealth of different tools to inspect the systemd state, and a lot of documentation on what the different things mean. It takes a while to internalize all that; and if you're not convinced that systemd is a good thing then it may mean some cursing while you're fighting your way through. But in the end, systemd is not more difficult to debug than simple init scripts -- in fact, it sometimes may be easier, because the system is easier to reason about.
  • While systemd comes with a truckload of extra daemons (systemd-networkd, systemd-resolved, systemd-hostnamed, etc etc etc), the systemd in their name do not imply that they are required by systemd. In fact, it's the other way around: you are required to run systemd if you want to run systemd-networkd (etc), because systemd-networkd (etc) make extensive use of the systemd infrastructure and public APIs; but nothing inside systemd requires that systemd-networkd (etc) are running. In fact, on my personal laptop, beyond systemd and udev themselves, I'm not using anything that gets built from the systemd source.

I'm not saying these reasons are universally true, and I'm not saying that you'll like systemd as much as I have. I am saying, however, that you should give it an honest attempt before you say "I'm not going to run systemd, ever," because you might be surprised by the huge gap of difference between what you expected and what you got. I know I was.

So, given all that, do I think that Devuan is a good idea? It is if you want flamewars. It gives those people who want to vilify systemd a place to do that without bothering Debian with their opinion. But beyond that, if you want to run Debian and you don't want to run systemd, you can! Just make sure you choose the right options, and you're done.

All that makes me wonder why today, almost half a year after the initial release of Debian 9.0 "Stretch", Devuan Ascii still hasn't released, and why it took them over two years to release their Devuan Jessie based on Debian Jessie. But maybe that's just me.

CryptogramSurveillance inside the Body

The FDA has approved a pill with an embedded sensor that can report when it is swallowed. The pill transmits information to a wearable patch, which in turn transmits information to a smartphone.

Worse Than FailureCodeSOD: A Type of Standard

I’ve brushed up against the automotive industry in the past, and have gained a sense about how automotive companies and their suppliers develop custom software. That is to say, they hack at it until someone from the business side says, “Yes, that’s what we wanted.” 90% of the development time is spent doing re-work (because no one, including the customer, understood the requirements) and putting out fires (because no one, including the customer, understood the requirements well enough to tell you how to test it, so things are going wrong in production).

Mary is writing some software that needs to perform automated testing on automotive components. The good news is that the automotive industry has adopted a standard API for accomplishing this goal. The bad news is that the API was designed by the automotive industry. Developing standards, under ideal conditions, is hard. Developing standards in an industry that is still struggling with software quality and hasn’t quite fully adopted the idea of cross-vendor standardization in the first place?

You’re gonna have problems.

The specific problem that led Mary to send us this code was the way of defining data types. As you can guess, they used an XML schema to lay out the rules. That’s how enterprises do this sort of thing.

There are a bunch of “primitive” data types, like UIntVariable or BoolVariable. There are also collection types, like Vector or Map or Curve (3D plot). You might be tempted to think of the collection types in terms of generics, or you might be tempted to think about how XML schemas let you define new elements, and how these make sense as elements.

If you are thinking in those terms, you obviously aren’t ready for the fast-paced world of developing software for the automotive industry. The correct, enterprise-y way to define these types is just to list off combinations:

<xs:simpleType name="FrameworkVarType">
        <xs:annotation>
                <xs:documentation>This type is an enumeration of all available data types on Framework side.</xs:documentation>
        </xs:annotation>
        <xs:restriction base="xs:string">
                <xs:enumeration value="UIntVariable"/>
                <xs:enumeration value="IntVariable"/>
                <xs:enumeration value="FloatVariable"/>
                <xs:enumeration value="BoolVariable"/>
                <xs:enumeration value="StringVariable"/>
                <xs:enumeration value="UIntVectorVariable"/>
                <xs:enumeration value="IntVectorVariable"/>
                <xs:enumeration value="FloatVectorVariable"/>
                <xs:enumeration value="BoolVectorVariable"/>
                <xs:enumeration value="StringVectorVariable"/>
                <xs:enumeration value="UIntMatrixVariable"/>
                <xs:enumeration value="IntMatrixVariable"/>
                <xs:enumeration value="FloatMatrixVariable"/>
                <xs:enumeration value="BoolMatrixVariable"/>
                <xs:enumeration value="StringMatrixVariable"/>
                <xs:enumeration value="FloatIntCurveVariable"/>
                <xs:enumeration value="FloatFloatCurveVariable"/>
                <xs:enumeration value="FloatBoolCurveVariable"/>
                <xs:enumeration value="FloatStringCurveVariable"/>
                <xs:enumeration value="StringIntCurveVariable"/>
                <xs:enumeration value="StringFloatCurveVariable"/>
                <xs:enumeration value="StringBoolCurveVariable"/>
                <xs:enumeration value="StringStringCurveVariable"/>
                <xs:enumeration value="FloatFloatIntMapVariable"/>
                <xs:enumeration value="FloatFloatFloatMapVariable"/>
                <xs:enumeration value="FloatFloatBoolMapVariable"/>
                <xs:enumeration value="FloatFloatStringMapVariable"/>
                <xs:enumeration value="FloatStringIntMapVariable"/>
                <xs:enumeration value="FloatStringFloatMapVariable"/>
                <xs:enumeration value="FloatStringBoolMapVariable"/>
                <xs:enumeration value="FloatStringStringMapVariable"/>
                <xs:enumeration value="StringFloatIntMapVariable"/>
                <xs:enumeration value="StringFloatFloatMapVariable"/>
                <xs:enumeration value="StringFloatBoolMapVariable"/>
                <xs:enumeration value="StringFloatStringMapVariable"/>
                <xs:enumeration value="StringStringIntMapVariable"/>
                <xs:enumeration value="StringStringFloatMapVariable"/>
                <xs:enumeration value="StringStringBoolMapVariable"/>
                <xs:enumeration value="StringStringStringMapVariable"/>
        </xs:restriction>
</xs:simpleType>

So, not only is this just awkward, it’s not exhaustive. If you, for example, wanted a curve that plots integer values against integer values… you can’t have one. If you want a StringIntFloatMapVariable, your only recourse is to get the standard changed, and that requires years of politics, and agreement from all of the other automotive companies, who won’t want to change anything out of fear that their unreliable, hacky solutions will break.
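For contrast, here is a minimal sketch of what a compositional version of the same idea might look like in XSD. This is purely illustrative and not part of any real automotive standard: the ScalarType, ContainerType and FrameworkVar names, and the attributes on them, are invented for this example.

<xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema">
        <!-- Hypothetical: the scalar value types, listed exactly once -->
        <xs:simpleType name="ScalarType">
                <xs:restriction base="xs:string">
                        <xs:enumeration value="UInt"/>
                        <xs:enumeration value="Int"/>
                        <xs:enumeration value="Float"/>
                        <xs:enumeration value="Bool"/>
                        <xs:enumeration value="String"/>
                </xs:restriction>
        </xs:simpleType>
        <!-- Hypothetical: the container shapes, also listed exactly once -->
        <xs:simpleType name="ContainerType">
                <xs:restriction base="xs:string">
                        <xs:enumeration value="Scalar"/>
                        <xs:enumeration value="Vector"/>
                        <xs:enumeration value="Matrix"/>
                        <xs:enumeration value="Curve"/>
                        <xs:enumeration value="Map"/>
                </xs:restriction>
        </xs:simpleType>
        <!-- A variable combines a container with its value type. keyType only
             makes sense for Curve and Map (and the Map entries in the real
             list appear to carry two key types, so a real version would need
             more than this); plain XSD 1.0 cannot enforce such conditions,
             which is a genuine trade-off of the compositional approach. -->
        <xs:complexType name="FrameworkVar">
                <xs:attribute name="container" type="ContainerType" use="required"/>
                <xs:attribute name="valueType" type="ScalarType" use="required"/>
                <xs:attribute name="keyType" type="ScalarType" use="optional"/>
        </xs:complexType>
</xs:schema>

Under a scheme like this, an integer-against-integer curve would just be another combination of attribute values rather than a new enumeration entry that has to be negotiated into the standard. The trade-off is that the schema alone no longer rules out pairings nobody implements, which may well be why the combinatorial list looked safer to its authors.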


,

TED“The courage to …” The talks of TED@Tommy

At TED@Tommy — held November 14, 2017, at Mediahaven in Amsterdam — fifteen creators, leaders and innovators invited us to dream, to dare and to do. (Photo: Richard Hadley / TED)

Courage comes in many forms. In the face of fear, it’s the conviction to dream, dare, innovate, create and transform. It’s the ability to try and try again, to admit when we’re wrong and stand up for what’s right.

TED and Tommy Hilfiger both believe in the power of courageous ideas to break conventions and celebrate individuality — it’s the driving force behind why the two organizations have partnered to bring experts in fashion, sustainability, design and more to the stage to share their ideas.

More than 300 Tommy associates from around the world submitted their ideas to take part in TED@Tommy, with more than 20 internal events taking place at local and regional levels, and the top 15 ideas were selected for the red circle on the TED@Tommy stage. At this inaugural event — held on November 14, 2017, at Mediahaven in Amsterdam — creators, leaders and innovators invited us to dream, to dare and to do.

After opening remarks from Daniel Grieder, CEO, Tommy Hilfiger Global and PVH Europe, and Avery Baker, Chief Brand Officer, Tommy Hilfiger Global, the talks of Session 1 kicked off.

Fashion is “about self-expression, a physical embodiment of what we portray ourselves as,” says Mahir Can Isik, speaking at TED@Tommy in Amsterdam. (Photo: Richard Hadley / TED)

Let fashion express your individuality. The stylish clothes you’re wearing right now were predicted to be popular up to two years before you ever bought them. This is thanks to trend forecasting agencies, which sell predictions of the “next big thing” to designers.  And according to Tommy Hilfiger retail buyer Mahir Can Isik, trend forecasting is, for lack of a better term, “absolutely bull.” Here’s a fun fact: More than 12,000 fashion brands all get their predictions from the same single agency — and this, Isik suggests, is the beginning of the end of true individuality. “Fashion is an art form — it’s about excitement, human interaction, touching our hearts and desires,” he says. “It’s about self-expression, a physical embodiment of what we portray ourselves as.” He calls on us to break this hold of forecasters and cherish self-expression and individuality.

Stylish clothing for the differently abled fashionista. Mindy Scheier believes that what you wear matters. “The clothes you choose can affect your mood, your health and your confidence,” she says. But when Scheier’s son Oliver was born with muscular dystrophy, a degenerative disorder that makes it hard for him to dress himself or wear clothing with buttons or zippers, she and her husband resorted to dressing him in what was easiest: sweatpants and a T-shirt. One afternoon when Oliver was eight, he came home from school and declared that he wanted to wear blue jeans like everyone else. Determined to help her son, Mindy spent the entire night MacGyvering a pair of jeans, opening up the legs to give them enough room to accommodate his braces and replacing the zipper and button with a rubber band. Oliver went to school beaming in his jeans the next day — and with that first foray into adaptive clothing, Scheier founded Runway of Dreams to educate the fashion industry about the needs of differently abled people. She explains how she designs for people who have a hard time getting dressed, and how she partnered with Tommy Hilfiger to make fashion history by producing the first mainstream adaptive clothing line, Tommy Adaptive.

Environmentally friendly, evolving fashion. The clothing industry is the world’s second largest source of pollution, second only to the oil and gas industry. (The equivalent of 200 T-shirts per person is thrown away annually in the US alone.) Which is why sustainability sower Amit Kalra thinks a lot about how to be conscientious about the environment and still stay stylish. For his own wardrobe, he hits the thrift stores and stitches up his own clothing from recycled garments; as he says, “real style lives at the intersection of design and individuality.” As consumer goods companies struggle to provide consumers with the individuality they crave, Kalra suggests one way forward: Start using natural dyes (from sources such as turmeric or lichen) to color clothes sustainably. As the color fades, the clothing grows more personalized and individual to the owner. “There is no fix-all,” Kalra says, “But the fashion industry is the perfect industry to experiment and embrace change that could one day get us to the sustainable future we so desperately need.”

Tito Deler performs Big Joe Turner’s blues classic “Story to Tell” at TED@Tommy. (Photo: Richard Hadley / TED)

With a welcome musical interlude, blues musician (and VP of graphic design for Tommy Hilfiger) Tito Deler takes the stage, singing and strumming a stirring rendition of Big Joe Turner’s blues classic “Story to Tell.”

The truth we can find through literary fiction. Day by day, we’re exposed to streams of news, updates and information. Our brains are busier than ever as we try to understand the world we live in and develop our own story, and we often reach for nonfiction books to learn how to become a better leader or inventor, how to increase our focus, and how to maintain a four-hour workweek. But for Tomas Elemans, brand protection manager for PVH, there’s an important reward from reading fiction that we’re leaving behind: empathy. “Empathy is the friendly enemy to our feeling of self-importance. Storytelling can help us to not only understand but feel the complexity, emotions and situations of distant others. It can be a vital antidote to the stress of all the noise around us,” Elemans says. Telling his personal story of the ups and downs of reading Dave Eggers’ Heroes of the Frontier, Elemans explains the importance of narrative immersion — how we transcend the here-and-now when we imagine being the characters in the stories we read — and how it reduces externally focused attention and increases reflection. “Literature has a way of reminding us that the stranger is not so strange,” Elemans says. “The ambition with which we turn to nonfiction books, we can also foster toward literature … Fiction can help us to disconnect from ourselves and tap into an emotional, empathetic side that we don’t often take the time to explore.”

Irene Mora shares the valuable lessons she learned being raised by a mom who was also a CEO. (Photo: Richard Hadley / TED)

Why you shouldn’t fear having a family and a career. As the child of parents who followed their passions and led successful careers, Irene Mora appreciates rather than resents their decision to have a family. Society’s perceptions of what it means to be a good parent — which usually means rejecting the dedicated pursuit of a profession — are dull and outdated, says Mora, now a merchandiser for Calvin Klein. “A lot of these conversations focus on the hypothetical negative effects, rather than the hypothetical positive effects that this could have on children,” Mora explains. “I’m living proof of the positive.” As she and her sister traveled the world with their parents due to her mother’s job as a CEO, she learned valuable lessons: adaptability, authenticity and independence. And despite her mother’s absences and limited face-to-face time, Mora didn’t feel abandoned or lacking in any way. “If your children know that you care, they will feel your love,” she says. “You don’t always have to be together to love and be loved.”

What you can learn from bad advice. Nicole Wilson, Tommy Hilfiger’s director of corporate responsibility, knows bad advice. From a young age, her father — a professional football player notorious for causing kitchen fires — would offer her unhelpful tidbits like: “It’s better to cheat than repeat,” or, at a street intersection, “No cop-y, no stop-y.” As a child, Wilson learned to steer clear of her father’s, ahem, wisdom, but as an adult, she realized that there’s an upside to bad advice. In this fun, personal talk, she shares how bad advice can be as helpful and as valuable as “so-called good advice” — because it can help you recognize extreme courses of action and develop a sense of when you should take the opposite advice from what you’re being offered. Above all, Wilson says, bad advice teaches you that “you have to learn to trust yourself — to take your own advice — because more times than not, your own advice is the best advice you are ever going to get.”

Fashion is a needed avenue of protest, says Kaustav Dey. He spoke about how we can embrace our most authentic selves at TED@Tommy. (Photo: Richard Hadley / TED)

Fashion as a  language of dissent. From a young age, fashion revolutionary and head of marketing for Tommy Hilfiger India Kaustav Dey knew that he was different, that his sense of self diverged from and even contradicted that of the majority of his classmates. He was never going to be the manly man his father hoped for and whom society privileged, he says. But it was precisely this distinct take on himself that would later land him in the streets of Milan and Paris, fashion worlds that further opened his eyes to the protest value of aesthetics. Dey explains the idea that fashion is a needed avenue of protest (but also a dangerous route to take) by speaking of the hateful comments Malala received for wearing jeans, by commenting on the repressive nature of widowed Indian women being eternally bound to white garments, and by telling the stories of the death of transgender activist Alesha and the murder of the eclectic actor Karar Nushi. Instead of focusing on society’s response to these individuals, Dey emphasizes that “fashion can give us a language of dissent.” Dey encourages us all to embrace our most authentic selves, so “in a world that’s becoming whitewashed, we will become the pinpricks of color pushing through.”

Returning to the stage to open Session 2, Tito Deler plays an original blues song, “My Fine Reward,” combining the influence of the sound of his New York upbringing with the style of pre-war Mississippi Delta blues. “I’m moving on to a place now where the streets are paved with gold,” Deler sings, “I’m gonna catch that fast express train to my reward in the sky.”

We should all make it a point not to buy fake goods and to notify officials when we see them being sold, says Alastair Gray, speaking at TED@Tommy in Amsterdam. (Photo: Richard Hadley / TED)

The deadly impact of counterfeit goods. To most consumers, the trade in knock-off goods seems harmless enough — we get to save money by buying lookalike products, and if anyone suffers, it’s only the big companies. But counterfeit investigator Alastair Gray says that those fake handbags, CDs and watches might be supporting organized crime or even terrorist organizations. “You wouldn’t buy a live scorpion because there’s a chance it will sting you on the way home,” Gray says. “But would you still buy a fake handbag if you knew the profit would enable someone to buy the bullets that might kill you and other innocent people?” This isn’t just conjecture: Saïd and Chérif Kouachi, the two brothers behind the 2015 attack on the Charlie Hebdo office in Paris that killed 12 people and wounded 11, purchased their weapons using the proceeds made from selling counterfeit sneakers. When it comes to organized crime and terrorism, most of us feel understandably helpless. But we do have the power to act, Gray says: make it a point not to buy fake goods and to notify officials (online or in real life) when we see them being sold.

Is data a designer’s hero? Data advocate Steve Brown began working in the fashion industry 15 years ago — when he would have to painstakingly sit for 12 hours each day picking every color that matched every fabric per garment he was working on. Today, however, designers can work with visualized 3D garments, fully functional with fabric, trim and prints, and they can even upload fabric choices to view the flow and drape of the design, all before a garment is ever made. Data and technology save the designer time, Brown says, which allows for more time and attention to go into the creative tasks rather than the mundane ones. The designer’s role with data and technology is that of both a creator and a curator. He points to Amazon’s “Body Labs” and algorithms that learn a user’s personal style, both of which help companies to design custom-made garments. In this way, data can empower both the consumer and designer — and it should be embraced.

A better way to approach data. Every day, we’re inundated with far more data than our brains can process. Data translator Jonathan Koch outlines a few simple tools we can all use to understand and even critique data meant to persuade us. First: we need transparent definitions. Koch, a senior director of strategy and business development at PVH Asia Pacific, uses the example of a famous cereal brand that promised two scoops of raisins in every box of cereal (without bothering to define exactly what a “scoop” is) and a company that says that they’re the “fastest growing startup in Silicon Valley” (without providing a time period for context). The next tool: context and doubt. To get a clearer picture, we need to always question the context around data, and we need to always doubt the source, Koch says. Finally, we need to solve the problem of averages. When we deconstruct averages, which is how most data is delivered to us, into small segments, we can better understand what makes up the larger whole — and quickly get new, nuanced insights. With these three simple tools, we can use data to help us make better decisions about our health, wealth and happiness.

Conscious quitter Daniela Zamudio explains the benefits of moving on at TED@Tommy in Amsterdam. (Photo: Richard Hadley / TED)

An introduction to conscious quitting. “I’m a quitter,” says Daniela Zamudio, “and I’m very good at it.” Like many millennials, Zamudio has quit multiple jobs, cities, schools and relationships, but she doesn’t think quitting marks her as weak or lazy or commitment-phobic. Instead, she argues that leaving one path to follow another is a sign of strength and often leads to greater happiness in the long run. Now a senior marketing and communications manager for Tommy Hilfiger, Zamudio gives us an introduction to what she calls “conscious quitting.” She teaches us to weigh the pros and cons of quitting a particular situation and then instructs us to create a strategy to deal with the repercussions of our choice. For instance, after Zamudio broke off her engagement to a man she had been dating for nine years, she managed her heartbreak by scheduling every minute of her day, seven days a week. “It takes courage to quit,” says Zamudio, “but too often it feels also like it’s wrong.” She concludes her talk by reminding us that listening to our own needs and feelings (and ignoring society’s expectations) can often be just what we need.

Lessons in dissent. Have you ever presented an idea and been immediately barraged with a line of questioning that seems more intent on poking holes than on understanding the idea? Then you’ve probably engaged with a dissenter. Serial dissenter Andrew Millar promises these disagreements don’t come from a place of malice but rather from compassion, with an aim to improve on your idea. “At this point in time, we don’t have enough dissenters in positions of power,” says Millar. “And history shows that having yes-men is rarely a driver of progress.” He suggests that dissenters find a workplace that truly works with them, not against them — so if a company heralds conformity or relies heavily on hierarchy, then that place may not be the best for you. But even in the most welcoming environment, no dissenter gets off scot-free: to be successful, each needs to learn to compromise, to accept dissent in response, and to avoid assuming they’re always right just because they’re the only one to speak up. And to those in the path of a dissenter, says Millar, know this: when a dissenter speaks up, it can come across as criticism, but please do assume it stems from a place of good intent and connection.

Gabriela Roa speaks about learning to live in, and embrace, chaos, at TED@Tommy in Amsterdam. (Richard Hadley / TED)

Embrace the chaos. As the daughter of an obsessively organized mother, Gabriela Roa grew up believing that happiness was a color-coordinated closet. When she became a mom, she says, “I wanted my son to feel safe and loved in the way I did.” But he, like most toddlers, became “a chaos machine,” leaving toys and clothes in his wake. Roa, an IT project manager at PVH, felt terrible. Not only was she falling short as a disciplinarian, but she was so busy dwelling on her lapses that she wasn’t emotionally present for her son. One day, she remembered this piece of advice: “Whenever you experience a hard moment, there is always something to smile about.” In search of a smile, she began taking photos of her son’s messes. She shared them with friends and was moved by the compassion she received, so she started taking more pictures of her “happy explorer,” in which she documented her son’s creations and tried seeing life from his perspective. She realized that unlike her, he was living in the now — calm, curious and ready to investigate. The project changed her, ultimately bringing her back to playing the cello, an instrument she’d once loved. “I’m not saying that chaos is better than order,” says Roa. “But it is part of life.”

Present fathers: strong children. Dwight Stitt is a market manager for Tommy Hilfiger, but he identifies first and foremost as a father. He speaks passionately about the need for men to be involved in their children’s lives. Reminiscing about his own relationship with his father — and how it took 24 years for them to form a working bond — Stitt shares that so long as life permits, it’s never too late to recover what may seem lost. He has incorporated the lessons he learned from his father and amplified them to reach not only his children but also other people through a camp and canoeing trip. Conceiving of camp as an opportunity to foster love and growth between fathers and children, Stitt says that “camp has taught me that fatherhood is not only vital to a child’s development, but that seemingly huge hurdles can be overcome by simple acts of love and memorable moments.” He goes so far as to explain the emotional, academic and behavioral benefits of working father-child relationships and, in between tears, calls on all fathers to share his goal of reducing the alarming statistics of fatherlessness in whatever form it comes.

How magic tricks (and politicians) fool your brain. Ever wonder how a magic trick works? How did the magician pull a silver coin from behind your ear? How did they know which card was yours? According to magician and New York Times crossword puzzle constructor David Kwong, it all boils down to evolution. Because we take in an infinite number of stimuli at any given time, we only process a tiny fraction of what’s in front of us. Magic works, Kwong says, by exploiting the gaps in our awareness. We never notice the magician flipping our card to the top of the deck because we’re too busy watching him rub his sleeve three times. But the principles of illusion extend beyond a bit of sleight-of-hand, he says. Politicians also exploit us with cognitive misdirection. For instance, policymakers describe an inheritance tax (which only taxes the very wealthy) as a “death tax” to make the public think it applies to everyone. Kwong then demonstrates a few fun tricks to teach us how to see through the illusions and deceptions that surround us in everyday life. He finishes his set with some sage words of advice for everyone (magic lovers or not): “Question what seems obvious, and above all, pay attention to your attention.”


TEDBreakthroughs: The talks of TED@Merck KGaA, Darmstadt, Germany

TED and Merck KGaA, Darmstadt, Germany, have partnered to help surface and share brilliant ideas, innovations — and breakthroughs. (Photo: Paul Clarke / TED)

Humanity is defined by its immense body of knowledge. Most times it inches forward, shedding light onto the mysteries of the universe and easing life’s endeavors in small increments. But in some special moments, knowledge and understanding leap forward, when one concentrated mind or one crucial discovery redirects the course of things and changes the space of possibilities.

TED and Merck KGaA, Darmstadt, Germany, have partnered to help surface and share brilliant ideas, innovations — and breakthroughs. At the inaugural TED@Merck KGaA, Darmstadt, Germany event, hosted by TED International Curator Bruno Giussani at Here East in London on November 28, 16 brilliant minds in healthcare, technology, art, psychology and other fields shared stories of human imagination and discovery.

After opening remarks from Belén Garijo, CEO, Healthcare for Merck KGaA, Darmstadt, Germany, the talks of Session 1 kicked off.

Biochemist Bijan Zakeri explains the mechanism behind a molecular superglue that could allow us to assemble new protein shapes. (Photo: Paul Clarke / TED)

A molecular superglue made from flesh-eating bacteria. The bacteria Streptococcus pyogenes — responsible for diseases including strep throat, scarlet fever and necrotizing fasciitis (colloquially, flesh-eating disease) — has long, hair-like appendages made of proteins with a unique property: the ends of these proteins are linked by an incredibly strong chemical bond. “You can boil them, try to cut them with enzymes or throw them in strong acids and bases. Nothing happens to them,” says biochemist Bijan Zakeri. Along with his adviser Mark Howarth, Zakeri figured out a way to engineer these proteins to create what he describes as a molecular superglue. The superglue allows us to assemble new protein shapes, and “you can chemically link the glue components to other organic and inorganic molecules, like medicines, DNA, metals and more, to build new nano-scale objects that address important scientific and medical needs,” Zakeri says.

What if we could print electronics? “We must manufacture devices in a whole new way, with the electronics integrated inside the object, not just bolted in afterwards,” says advanced technologist Dan Walker. He introduces us to his vision of the fast-approaching future of technology, which could take two potential paths: “The first is hyper-scale production, producing electrically functional parts along the standard centralized model of manufacturing. Think of how we print newspapers, ink on paper, repeating for thousands of copies. Electronics can be printed in this way, too,” he says. Walker designs inks that conduct electricity and can be used to print functional electronics, like wires. This ink can be used in inkjet printers, the sort that can be found in most offices and homes. But these inkjet printers are still 2D printers — they can print the electronics onto the object, but they can’t print the object itself. “The second way the manufacturing world will go is towards marrying these two techniques of digital printing, inkjet and 3D, and the result will be the ability to create electrically functional objects,” Walker explains: both unique objects bespoke for individual customers and perfect replicas printed off by the thousands.

Strategic marketer Hannah Bürckstümmer explains her work developing organic photovoltaics — and how they might change how we power the world. (Photo: Paul Clarke / TED)

A printable solar cell. Buildings consume about 40 percent of our total energy, which means reducing their energy consumption could help us significantly decrease our CO2 emissions. Solar cells could have a big role to play here, but they’re not always the most aesthetically pleasing solution. Strategic marketer Hannah Bürckstümmer is working on a totally different solar cell technology: organic photovoltaics. Unlike the solar cells you’re used to seeing, these cells are made of compounds that are dissolved in ink and can be printed using simple techniques. The result is a thin film that absorbs the energy of the sun. The solar module looks like a plastic foil and is low weight, flexible and semi-transparent. It can be used in this form or combined with conventional construction materials like glass. “With the printing process, the solar cell can change its shape and design very easily,” Bürckstümmer says, displaying a cell onstage. “This will give the flexibility to architects, to planners and building owners to integrate this electricity-producing technology as they wish.” Plus, it may just help buildings go from energy consumers to energy providers.

A robot that can grab a tomato without crushing it. Robots are excellent at many tasks — but handling delicate items isn’t one of them. Carl Vause, CEO of Soft Robotics, suggests that instead of trying to recreate the human hand, roboticists should instead seek inspiration from other parts of nature. Consider the octopus: it’s very good at manipulating items, wrapping its tentacles around objects and conforming to their shapes. So what if we could get robots to act like an octopus tentacle? That’s exactly what a group of researchers at Harvard did: in 2009, they used a composite material structure, with rubber and paper, to create a robot that can conform to and grasp soft objects. Demoing the robot onstage, Vause shows it can pick up a bath sponge, a rubber duck, a breakfast scone and even a chicken egg. Why is this important? Because until now, industries like agriculture, food manufacturing and even retail have been largely unable to benefit from robots. With a robot that can grasp something as soft as a tomato, we’ll open up whole industries to the benefits of automation.

Departing from rules can be advantageous — and hilarious. In between jokes about therapy self-assessment forms, hair salons and box junctions, human nature explorer James Veitch questions the rules and formalities people are taught to respect throughout life. An avid email and letter writer, Veitch is unafraid and unapologetic in voicing his concerns about anyone whose actions fall in line with protocols but out of line with common sense. To this effect, he questions Jennifer, a customer relations team member at Headmasters Salon, as to why he and his friend Nige received comparable haircuts when they booked appointments with different types of stylists: Nige with a Senior Stylist (who cost 34 Euros) and Veitch with a Master Hair Consultant (who cost 54 Euros). Using percentages and mathematics, and even inquiring into the reasons why Nige received a biscuit and he didn’t, Veitch argues his way into a free haircut. Though Veitch is clearly enjoying himself by questioning protocols, he derives more than amusement from this process — as he shows us how departing from rules and formalities can be advantageous and hilarious, all at once.

If we can’t fight our emotions, why not use them? Emotions are as important in science as they are in any other part of our lives, says materials researcher Ilona Stengel. The combination of emotion and logical reasoning is crucial for facing challenges and exploring new solutions. Especially in the scientific world, feelings are just as necessary as facts and logic for paving the way to breakthroughs, discoveries and cutting-edge innovations. “We shouldn’t be afraid of using our feelings to implement and to catalyze fact-based science,” Stengel says. Stengel insists that having a personal, emotional stake in the work that you do can alter your progress and output in an incredibly positive way, that emotions and logic do not oppose, but complement and reinforce each other. She asks us all — whether we’re in or outside of the science world — to reflect on our work and how it might give us a sense of belonging, dedication and empowerment.

TED International Curator Bruno Giussani, left, speaks with Scott Williams about the important role informal caregivers play in the healthcare system. (Photo: Paul Clarke / TED)

Putting the “care” back in healthcare. Once a cared-for patient and now a caregiver himself, Scott Williams asks us to recognize the role that informal caregivers — those friends and relatives who go the extra mile out of love — play in healthcare systems. Although they don’t administer medication or create treatment plans for patients, informal caregivers are instrumental in helping people return to good health, Williams says. They give up their jobs, move into hospitals with patients, know patients’ medical histories and sometimes make difficult decisions for them. Williams suggests that without informal caregivers, “our health and social systems would crumble, and yet they’re largely going unrecognized.” He invites us to recognize their selfless work — and their essential value to the smooth functioning of healthcare systems.

Tiffany Watt Smith speaks about the fascinating history of how we understand our emotions. (Photo: Paul Clarke / TED)

Yes, emotions have a history. When we look to the past, it’s easy to see that emotions changed — sometimes very dramatically — in response to new cultural expectations and new ideas about gender, ethnicity, age and politics, says Tiffany Watt Smith, a research fellow at the Centre for the History of the Emotions at the Queen Mary University of London. Take nostalgia, which was defined in 1688 as homesickness and seen as being deadly. It last appeared as a cause of death on a death certificate in 1918, for an American soldier fighting in France during WWI. Today, it means something quite different — a longing for a lost time — and it’s much less serious. This change was driven by a shift in values, says Watt Smith. “True emotional intelligence requires we understand the social, political, cultural forces that have shaped what we’ve come to believe about our emotions and understand how they might still be changing now.”

Dispelling myths about the future of work. “Could machines replace humans?” was a question once pondered by screenwriters and programmers. Today, it’s on the minds of anybody with a job to lose. Daniel Susskind, a fellow in economics at Oxford University, kicked off Session 2 by tackling three misconceptions we have about our automated future. First: the Terminator myth, which says machines will replace people at work. While that might sometimes happen, Susskind says that machines will also complement us and make us more productive. Next, the intelligence myth, which says some tasks can’t be automated because machines don’t possess the human-like reasoning to do them. Susskind dispels this by explaining how advances in processing power, data storage and algorithms have given computers the ability to handle complex tasks — like diagnosing diseases and driving cars. And finally: the superiority myth, which says technological progress will create new tasks that humans are best equipped to do. That’s simply not true, Susskind says, since machines are capable of doing different kinds of activities. “The threat of technological unemployment is real,” he declares, “Yet it is a good problem to have.” For much of our history, the biggest problem has been ensuring enough material prosperity for everyone; global GDP lingered at about a few hundred dollars per person for centuries. Now it is $10,150, and its growth shows no signs of stopping. Work has been the traditional way in which we’ve distributed wealth, so how should we do it in a world when there will be less — or even no — work? That’s the question we really need to answer.

A happy company is a healthy company, says transformation manager Lena Bieber. She suggests that we factor in employee happiness when we think about — and invest in — companies.  (Photo: Paul Clarke / TED)

Investing in the future of happiness. Can financial parameters like return on equity, cash flow or relative market share really tell us if a company is fundamentally healthy and predict its future performance? Transformation manager Lena Bieber thinks we should add one more indicator: happiness. “I would like to see the level of happiness in a company become a public indicator — literally displayed next to their share price on their homepages,” she says. With the level of happiness so prominent, people could feel more secure in the investments they’re making, knowing that employees of certain companies are in good spirits. But how does one measure something so subjective? Bieber likes the Happy Planet Index (a calculation created by TED speaker Nic Marks), which uses four variables to measure national well-being — she suggests that it can be translated for the workplace to include factors such as average longevity on the job and perceived fairness of opportunities. Bieber envisions a future where we invest not just with our minds and wallets, but with hearts as well.

Seeing intellectual disability through the eyes of a mom. When Emilie Weight’s son Mike was diagnosed with fragile X syndrome, an intellectual disability, it changed how she approached life. Mike’s differences compelled her to question her inner self and her role in the world, leading her to three essential tools that she now benefits from. Mindfulness helps her focus on the positive daily events that we often overlook. Mike also reminds Emilie of the importance of time management to use the time that she has instead of chasing it. Lastly, he’s taught her the benefit of emotional intelligence through adapting to the emotions of others. Emilie believes in harnessing the powers of people with intellectual disabilities: “Intellectually disabled people can bring added value to our society,” she says, “Being free of the mental barriers that are responsible for segregation and isolation, they are natural-born mediators.”

Christian Wickert suggests three ways that we can tap into the power of fiction — and how it could benefit our professional lives. (Photo: Paul Clarke / TED)

Can fiction make you better at your job? Forget self-help books; reading fiction might be the ticket to advancing your career. Take it from Christian Wickert, an engineer focusing on strategy and policy, who took a creative writing course — a course, he believes, that sharpened his perception, helped him understand other people’s motivations, and ultimately made him better at his job. Wickert explores three ways fiction can improve your business skills. First, he says, fiction helps you identify characters, their likes, dislikes, habits and traits. In business, this ability to identify characters can give you tremendous insights into a person’s behavior, telling you when someone is playing a role versus being authentic. Second, fiction reminds us that words have power. For example, “sorry” is a very powerful word, and when used appropriately it can transform a situation. Finally, fiction teaches you to look for a point of view — which in business is the key to good negotiation. So the next time you have a big meeting coming up, Wickert concludes, prepare by writing — and let the fiction flow.

How math can help us answer questions about our health. Mathematician Irina Kareva translates biology into mathematics, and vice versa. As a mathematical modeler, she doesn’t think about what things are but instead about what things do — about relationships between individuals, whether they’re cells, animals or people. Take an example: What do foxes and immune cells have in common? They’re both predators, except foxes feed on rabbits and immune cells feed on invaders, such as cancer cells. But from a mathematical point of view, the same system of predator-prey equations will describe both interactions between foxes and rabbits and cancer and immune cells. Understanding the dynamics between predator and prey — and the ecosystem they both inhabit — from a mathematical point of view could lead to new insights, specifically in the development of drugs that target tumors. “The power and beauty of mathematical modeling lies in the fact that it makes you formalize in a very rigorous way what we think we know,” Kareva says. “It can help guide us as to where we should keep looking, or where there might be a dead end.” It all comes down to asking the right question, and translating it to the right equation, and back.

End-to-end tracking of donated medicine. Neglected tropical diseases, or NTDs, are a diverse group of diseases that prevail in tropical and subtropical conditions. Globally, they affect more than one billion people. A coalition of pharmaceutical companies, governments, health organizations, charities and other partners, called Uniting to Combat Neglected Tropical Diseases, is committed to fighting NTDs using donated medicines — but shipping them to their destination poses complex problems. “How do we keep an overview of our shipments and make sure the tablets actually arrive where they need to go?” asks Christian Schröter, head of Pharma Business Integration at Merck KGaA, Darmstadt, Germany. Currently, the coalition is piloting a shipping tracker for their deliveries — similar to the tracking you receive for a package you order online — that tracks shipments to the first warehouse in recipient countries. This year, they took it a step further and tracked the medicines all the way to the point of treatment. “Still, many stakeholders would need to join in to achieve end-to-end tracking,” he says. “We would not change the amount of tablets donated, but we would change the amount of tablets arriving where they need to go, at the point of treatment, helping patients.”

Why we should pay doctors to keep people healthy. Most healthcare systems reimburse doctors based on the kind and number of treatments they perform, says business developer Matthias Müllenbeck. That’s why when Müllenbeck went to the dentist with a throbbing toothache, his doctor offered him a $10,000 treatment (which would involve removing his damaged tooth and screwing an artificial one into his jaw) instead of a less expensive, less invasive, non-surgical option. We’re incentivizing the wrong thing, Müllenbeck believes. Instead of fee-for-service health care, he proposes that we reimburse doctors and hospitals for the number of days that a single individual is kept healthy, and stop paying them when that individual gets sick. This radical idea could save us from unnecessary costs and risky procedures — and end up keeping people healthier.

The music of TED@Merck KGaA, Darmstadt, Germany. During the conference, music innovator Tim Exile wandered around recording ambient noises and sounds: a robot decompressing, the murmur of the audience, a hand fumbling with tape, a glass of sparkling water being poured. Onstage, he closes out the day by threading together these sounds to create an entirely new — and strangely absorbing — piece of electronic music.


TEDWhy not? Pushing and prodding the possible, at TED@IBM

The stage at TED@IBM bubbles with possibilities … at the SFJAZZ Center, December 6, 2017, San Francisco, California. Photo: Russell Edwards / TED

We know that our world — our data, our lives, our countries — is becoming more and more connected. But what should we do with that? In two sessions of TED@IBM, the answer shaped up to be: Dream as big as you can. Speakers took the stage to pitch their ideas for using connected data and new forms of machine intelligence to make material changes in the way we live our lives — and also challenged us to flip the focus back to ourselves, to think about what we still need to learn about being human in order to make better tech. From the stage of TED@IBM’s longtime home at the SFJAZZ Center, executive Ann Rubin welcomes us and introduces our two onstage hosts, TED’s own Bryn Freedman and her cohost Michaela Stribling, a longtime IBMer who’s been a great champion of new ideas. And with that, we begin.

Giving plastic a new sense of value. A garbage truck full of plastic enters the ocean every minute of every hour of every day. Plastic is now in the food chain (and your bloodstream), and scientists think it’s contributing to the fastest rate of extinction ever. But we shouldn’t be thinking about cleaning up all that ocean plastic, suggests plastics alchemist David Katz — we should be working to stop plastic from getting there in the first place. And the place to start is in extremely poor countries — the origin of 80 percent of plastic pollution — where recycling just isn’t a priority. Katz has created The Plastic Bank, a worldwide chain of stores where everything from school tuition and medical insurance to Wi-Fi and high-efficiency stoves is available to be purchased in exchange for plastic garbage. Once collected, the plastic is sorted, shredded and sold to brands like Marks & Spencer and Henkel, who have commissioned the use of “Social Plastic” in their products. “Buy shampoo or detergent that has Social Plastic packaging, and you’re indirectly contributing to the extraction of plastic from ocean-bound waterways and alleviating poverty at the same time,” Katz says. It’s a step towards closing the loop on the circular economy, it’s completely replicable, and it’s gamifying recycling. As Katz puts it: “Be a part of the solution, not the pollution.”

How can we stop plastic from piling up in the oceans? David Katz has one way: He runs an international chain of stores that trade plastic recyclables for money. Photo: Russell Edwards / TED

How do we help teens in distress? AI is great at looking for patterns. Could we leverage that skill, asks 14-year-old cognitive developer Tanmay Bakshi, to spot behavior issues lurking under the surface? “Humans aren’t very good at detecting patterns like changes in someone’s sleep, exercise levels, and public interaction,” he says. If some of the patterns from these suicidal teens go unrecognized and unnoticed by the human eye, he suggests, we could let technology help us out. For the last three years, Bakshi and his team have been working with artificial neural networks (ANNs, for short) to develop an app that can pick up on irregularities in a person’s online behavior and build an early warning system for at-risk teens. With this technology and information access, they foresee a future where a diagnosis is given and all-encompassing help is available right at their fingertips.

An IBMer reads Tanmay Bakshi’s bio — to confirm that, yes, he’s just 14. At TED@IBM, Bakshi made his pitch for a social listening tool that could help identify teens who might be heading for a crisis. Photo: Russell Edwards / TED

A better way to manage refugee crises. When the Syrian Civil War broke out, Rana Novack, the daughter of Syrian immigrants, watched her extended family face an impossible choice: stay home and risk their lives, leave for a refugee camp, or apply for a visa, which could take years and has no guarantee. She quickly realized there was no clear plan to handle a refugee crisis of this magnitude (it’s estimated that there are over 5 million Syrian refugees worldwide). “When it comes to refugees, we’re improvising,” she says. Frustrated with her inability to help her family, Novack eventually struck on the idea of applying predictive technology to refugee crises. “I had a vision that if we could predict it, we could enable government agencies and humanitarian aid organizations with the right information ahead of time so it wasn’t such a reactive process,” she says. Novack and her team built a prototype that will be deployed this year with a refugee organization in Denmark and next year with an organization to help prevent human trafficking. “We have to make sure that the people who want to do the right thing, who want to help, have the tools and the information they need to succeed,” she says, “and those who don’t act can no longer hide behind the excuse they didn’t know it was coming.”

After her talk onstage at TED@IBM, Rana Novack continues the conversation on how to use data to help refugees.  Photo: Russell Edwards / TED

What is information? It seems like a simple question, maybe almost too simple to ask. But Phil Tetlow is here to suggest that answering this question might be the key to understanding the universe itself. In an engaging talk, he walks the audience through the eight steps of understanding exactly what information is. It starts by getting to grips with the sheer complexity of the universe. Our minds use particular tools to organize all this raw data into relevant information, tools like pattern-matching and simplifying. Our need to organize and connect things, in turn, leads us to create networks. Tetlow offers a short course in network theory, and shows us how, over and over, vast amounts of information tend to connect to one another through a relatively small set of central hubs. We’re familiar with this property: think of airline route maps or even those nifty maps of the internet that show how vast amounts of information end up flowing through a few large sites, mainly Google and Facebook. Call it the 80/20 rule, where 80% of the interesting stuff arrives via 20% of the network. Nature, it turns out, forms the same kind of 80/20 network patterns all the time — in plant evolution, in chemical networks, in the way a tree branches out from a central trunk. And that’s why, Tetlow suggests, understanding the nature of information, and how it networks together, might give us a clue as to the nature of life, the universe, and why we’re even here at all.

Want to know what information is, exactly? Phil Tetlow does too — because understanding what information is, he suggests, might just help us understand why we exist at all. He speaks at TED@IBM. Photo: Russell Edwards / TED

Curiosity + passion = daring innovation. While driving to work in Johannesburg, South Africa, Tapiwa Chiwewe noticed a large cloud of air pollution he hadn’t seen before. He’s no pollution expert, but he was curious — so he did some research, and discovered that the World Health Organization reported that nearly 14 percent of all deaths worldwide in 2012 were attributable to household and ambient air pollution, mostly in low- and middle-income countries. What could he do with his new knowledge? He’s not a pollution expert — but he is a computer engineer. So he paired up with colleagues from South Africa and China to create a cloud-based air quality decision support system that uncovers spatiotemporal trends in air pollution and uses new machine-learning technology to predict future levels of pollution. The tool gives city planners an improved understanding of how to plan infrastructure. His story shows how curiosity and concern for air pollution can lead to collaboration and creative innovation. “Ask yourself this: Why not?” Chiwewe says. “Why not just go ahead and tackle the problem head-on, as best as you can, in your own way?”

Tapiwa Chiwewe helped invent a system that tracks air pollution — blending his expertise as a computer engineer with a field he was not an expert in, air quality monitoring. He speaks at TED@IBM about using one’s particular set of skills to affect issues that matter. Photo: Russell Edwards / TED

What if AI was one of us? Well, it is. If you’re human, you’re biased. Sometimes that bias is explicit, other times it’s unconscious, says documentarian Robin Hauser. Bias can be a good thing — it informs and protects from potential danger. But this ingrained survival technique often leads to more harmful than helpful ends. The same goes for our technology, specifically artificial intelligence. It may sound obvious, but these superhuman algorithms are built by, well, humans. AI is not an objective, all-seeing solution; AI is already biased, just like the humans who built it. Thus, their biases — both implicit and completely obvious — influence what data an AI sees, understands and puts out into the world. Hauser walks through well-recorded moments in our recent history where the inherent, implicit bias of AI revealed the worst of society and the humans in it. Remember Tay? All jokes aside, we need to have a conversation about how AI should be governed and ask who is responsible for overseeing the ethical standards of these supercomputers. “We need to figure this out now,” she says. “Because once skewed data gets into deep learning machines, it’s very difficult to take it out.”

A mesmerizing journey into the world of plankton. “Hold your breath,” says inventor Thomas Zimmerman: “This is the world without plankton.” These tiny organisms produce two-thirds of our oxygen, but rising sea surface temperatures caused by climate change are threatening their very existence. This in turn endangers the fish that eat them and the roughly one billion people around the world that depend on those fish for animal protein. “Our carbon footprint is crushing the very creatures that sustain us,” says his thought partner, engineer Simone Bianco. “Why aren’t we doing something about it?” Their theory is that plankton are tiny and it’s really hard to care about something that you can’t see. So, the pair developed a microscope that allows us to enter the world of plankton and appreciate their tremendous diversity. “Yes, our world is based on fossil fuels, but we can adjust our society to run on renewable energy from the sun to create a more sustainable and secure future,” says Zimmerman. “That’s good for the little creatures here, the plankton, and that’s good for us.”

Thomas Zimmerman (in hat) and Simone Bianco share their project to make plankton more visible — and thus easier to care about and protect. Photo: Russell Edwards / TED

A poet’s call to protect our planet. “How can something this big be invisible?” asks IN-Q. “The ozone is everywhere, and yet it isn’t visible. Maybe if we saw it, we would see it’s not invincible, and have to take responsibility as individuals.” The world-renowned poet closed out the first session of TED@IBM with his original spoken-word poem “Movie Stars,” which asks us to reckon with climate change and our role in it. With feeling and urgency, IN-Q chronicles the havoc we’ve wreaked on our once-wild earth, from “all the species on the planet that are dying” to “the atmosphere we’ve been frying.” He criticizes capitalism that uses “nature as its example and excuse for competition,” the politicians who allow it, and the citizens too cynical to believe in facts. He finishes the poem with a call to action to anyone listening to take ownership of our home turf, our oceans, our forests, our mountains, our skies. “One little dot is all that we’ve got,” says IN-Q. “We just forgot that none of it’s ours; we just forgot that all of it’s ours.”

With guitar, drums and (expert) whistling, The Ferocious Few open Session 2 with a rocking, stripped-down performance of “Crying Shame.” The band’s musical journey from the streets of San Francisco to the big cities of the United States falls within this year’s TED@IBM theme, “Why not?” — encouraging musicians and others alike to question boundaries, explore limits and carry on.

Why the tech world needs more humanities majors. A few years ago, Eric Berridge’s software consultancy was in crisis, struggling to deal with a technical challenge facing his biggest client. When none of his engineers could solve the problem, they went to drown their sorrows and talk to their favorite bartender, Jeff — who said, “Let me talk to these guys.” To everyone’s surprise and delight, Jeff’s meeting the next day shifted the conversation completely, salvaged the company’s relationship with its client, and forever changed how Berridge thinks about who should work in the tech sector. At TED@IBM, he explained why tech companies should look beyond STEM graduates for new hires, and how people with backgrounds in the arts and humanities can bring creativity and unique insight into a technical workplace. Today, Berridge’s consulting company boasts 1,000 employees, only 100 of whom have degrees in computer programming. And his CTO? He’s a former English major/bike messenger.

Eric Berridge put his favorite bartender in a room with his biggest client — and walked out convinced that the tech sector needs to make room for humanities majors and people with multiple kinds of skills, not just more and more engineers. Photo: Russell Edwards / TED

The surprising and empowering truth about your emotions. “It may feel to you like your emotions are hardwired,  that they just happen to you, but they don’t. You might believe your brain is pre-wired with emotion circuits, but it’s not,” says Lisa Feldman Barrett, a psychology professor at Northeastern University who has studied emotions for 25 years. So what are emotions? They’re guesses based on past experiences that our brain generates in the moment to help us make sense of the world quickly, Barrett says. “Emotions that seem to happen to you are actually made by you,” she adds. For example, many of us hear our morning alarm go off, and as we wake up, we find ourselves enveloped by dread. We start thinking about all of our to-dos — the emails and calls to return, the drop-offs, the meals to cook. Our mind races, and we tell ourselves “I feel anxious” or “I feel overwhelmed.” This mind-racing is prediction, says Barrett. “Your brain is searching to find an explanation for those sensations in your body that you’re experiencing as wretchedness. But those sensations may have a physical cause.” In other words — you just woke up, maybe you’re just hungry. The next time you feel distressed, ask yourself: “Could this have a purely physical cause? Am I just tired, hungry, hot or dehydrated?” And we should be empowered by these findings, declares Barrett. “The actions and experiences that you make today become your brain’s predictions for tomorrow.”

In a mind-shifting talk, Lisa Feldman Barrett shares her research on what emotions are … and it’s probably not what you think. Photo: Russell Edwards / TED

Emotionally authentic relationships between AI and humans. “Imagine an AI that can know or predict what you want or need based on a sliver of information, the tone of your voice, or a particular phrase,” says IBM distinguished designer Adam Cutler. “Like when you were growing up and you’d ask your mom to make you a grilled cheese just the way you like it, and she knew exactly what you meant.” Cutler is working to create a bot that would be capable of participating in this kind of exchange with a person. More specifically, he is focusing on how to form an inside joke between machine and mortal. How? “Interpreting human intent through natural language understanding and pairing it with tone and semantic analysis in real time,” he says. Cutler contends that we humans already form relationships with machines — we name our cars, we refer to our laptops as being “cranky” or “difficult” — so we should do this with intention. Let’s design AI that responds and serves us in ways that are truly helpful and meaningful.

Adam Cutler talks about the first time he encountered an AI-enabled robot — and what it made him realize about his and our relationship to AI. Photo: Russell Edwards / TED

Can art help make AI more human? “We’re trying to create technology that you’ll want to interact with in the far future,” says artist Raphael Arar. “We’re taking a moonshot that we’ll want to be interacting with computers in deeply emotional ways.” In order for that future of AI to be a reality, Arar believes that technology will have to become a lot more human, and that art can help by translating the complexity of what it means to be human to machines. As a researcher and designer with IBM Research, Arar has designed artworks that help AI explore nostalgia, conversations and human intuition. “Our lives revolve around our devices, smart appliances, and more, and I don’t think this will let up anytime soon,” he says, “So I’m trying to embed more ‘humanness’ from the start, and I have a hunch that bringing art into an AI research process is a way to do just that.”

How can we make AI more human-friendly? Raphael Arar suggests we start with making art. Photo: Russell Edwards / TED

Where the body meets the mind. For millennia, philosophers have pondered the question of whether the mind and body exist as a duality or as part of a continuum, and it’s never been more practically relevant than it is today, as we learn more about the way the two connect. What can science teach us about this problem? Natalie Gunn studies both Alzheimer’s and colorectal cancer, and she wants to apply modern medicine and analytics to the mind-body problem. For her work on Alzheimer’s, she’s developing a blood test to screen for the disease, hoping to replace expensive PET scans and painful lumbar punctures. Her research on cancer is where the mind-body connection gets interesting: Does our mindset have an impact on cancer? There’s little conclusive evidence either way, Gunn says, but it’s time we took the question seriously, and put the wealth of analytical tools we have at our disposal to the test. “We need to investigate how a disease of the body could be impacted by our mind, particularly for a disease like cancer that is so steeped in our psyche,” Gunn says. “When we can do this, the philosophical question of where the body ends and the mind begins enters into the realm of scientific discovery rather than science fiction.”

Researcher Natalie Gunn suggests a science-based lens for looking at the mind-body problem. Photo: Russell Edwards / TED

Is Parkinson’s an electrical problem? Brain researcher Eleftheria Pissadaki studies Parkinson’s, but instead of focusing on the biological aspects of the disease, like genetics and dopamine depletion, she’s looking at the problem in terms of energy. Pissadaki and her team have created mathematical models of dopamine neurons, the neurons that selectively die in Parkinson’s, and they’ve found that the bigger a neuron is, the more vulnerable it becomes … simply because it needs a lot of energy. What can we do with this information? Pissadaki suggests we might someday be able to neuroprotect our brain cells by “finding the fuse box for each neuron” and figuring out how much energy it needs. Then we might be able to develop medicine tailored for people’s brain energy profiles, or drugs that turn neurons off whenever they’re getting tired but before they die. “It’s an amazingly complex problem,” Pissadaki says, “but one that is totally worth pursuing.”

Eleftheria Pissadaki is imagining new ways to think about and treat diseases like Parkinson’s, suggesting research directions that might create new hope. Photo: Russell Edwards / TED

How to build a smarter brain. Just as we can reshape our bodies and build stronger muscles with exercise, Bruno Michel thinks we can train our way to better, faster brains — brains smart enough to compete with sophisticated AI. At TED@IBM, the brain fitness advocate discussed various strategies for improving your noggin. For instance, to think in a more structured way, try studying Latin, math or music. For a boost to your general intelligence, try yoga, read, make new friends, and do new things. Or, try pursuing a specific task with transferable skills as Michel has done for 30 years. He closed his talk with the practice he credits with significantly improving both the speed of his thinking and his reaction times— tap dancing!


TEDGet ready for TED Talks India: Nayi Soch, premiering Dec. 10 on Star Plus

This billboard is showing up in streets around India, and it’s made out of pollution fumes that have been collected and made into ink — ink that’s, in turn, made into an image of TED Talks India: Nayi Soch host Shah Rukh Khan. Tune in on Sunday night, Dec. 10, at 7pm on Star Plus to see what it’s all about.

TED is a global organization with a broad global audience. With our TED Translators program working in more than 100 languages, TEDx events happening every day around the world and so much more, we work hard to present the latest ideas for everyone, regardless of language, location or platform.

Now we’ve embarked on a journey with one of the largest TV networks in the world — and one of the biggest movie stars in the world — to create a Hindi-language TV series and digital series that’s focused on a country at the peak of innovation and technology: India.

Hosted and curated by Shah Rukh Khan, the TV series TED Talks India: Nayi Soch will premiere in India on Star Plus on December 10.

The name of the show, Nayi Soch, literally means ‘new ideas’ — and this kick-off episode seeks to inspire the nation to embrace and cultivate ideas and curiosity. Watch it and discover a program of speakers from India and the world whose ideas might inspire you to some new thinking of your own! For instance — the image on this billboard above is made from the fumes of your car … a very new and surprising idea!

If you’re in India, tune in at 7pm IST on Sunday night, Dec. 10, to watch the premiere episode on Star Plus and five other channels. Then tune in to Star Plus on the next seven Sundays, at the same time, to hear even more great talks on ideas, grouped into themes that will certainly inspire conversations. You can also explore the show on the HotStar app.

On TED.com/india and for TED mobile app users in India, each episode will be conveniently turned into five to seven individual TED Talks, one talk for each speaker on the program. You can watch and share them on their own, or download them as playlists to watch one after another. The talks are given in Hindi, with professional subtitles in Hindi and in English. Almost every talk will feature a short Q&A between the speaker and the host, Shah Rukh Khan, that dives deeper into the ideas shared onstage.

Want to learn more about TED Talks? Check out this playlist that SRK curated just for you.


Don MartiAre bug futures just high-tech piecework?

Are bug futures just high-tech piecework, or worse, some kind of "gig economy" racket?

Just to catch up, bug futures, an experimental kind of agreement being developed by the Bugmark project, are futures contracts based on the status of bugs in a bug tracker.

For developers: visit Bugmark to find an open issue that matches your skills and interests. Buy a futures contract connected to that issue that will pay you when the issue is fixed. Work on the issue, in the open—then decide if you want to hold your contract until maturity, or sell it at a profit. Report an issue and pay to reward others to fix it

For users: Create a new issue on the project bug tracker, or select an existing one. Buy a futures contract on that issue that will cost you a known amount when the issue is fixed, or pay you to compensate you if the issue goes unfixed. Reduce your exposure to software risks by directly signaling the project participants about what issues are important to you. Invest in futures on an open source market

Bug futures also open up the possibility of incentivizing other kinds of work, such as clarifying and translating bug reports, triaging bugs, writing failing tests, or doing code reviews—and especially arbitrage of bugs from project to project.

Bug futures are different from open source bounty systems, which have been repeatedly tried but have so far failed to take off. The big problem with conventional open source bounty systems is that, as far as I can tell, they fail to incentivize cooperative work, and in a lot of situations might incentivize un-cooperative behavior. If I find a bug in a web application, and offer a bounty to fix it, the fix might require JavaScript and CSS work. A developer who fixes the JavaScript and gets stuck on the CSS might choose not to share partial work in order to contend for the entire bounty. Likewise, the developer who fixes the CSS part of the bug might get stuck on the JavaScript. Because of how bounties are structured, if the two wanted to split the bounty they would need to find, trust, and coordinate with each other. Meanwhile, if the bug was the subject of a futures contract, the JavaScript developer could write up a good commit message explaining how their partial work made progress toward a fix, and offer to sell their side of the contract. A CSS developer could take on the rest of the work by buying out that position.

Futures trading and risk shifts

But will bug futures tend to shift the risks of software development away from the "owners" of software (the owners don't have to be copyright holders, they could be those who benefit from network effects) and toward the workers who develop, maintain, and support it?

I don't know, but I think that the difference between bug trackers and piecework is where you put the brains of the operation. In piecework and the gig economy, the matching of workers to tasks is done by management, either manually or in software. Workers can set the rate at which they work in conventional piecework, or accept and reject tasks offered to them in the gig economy, but only management can have a view of all available tasks.

Bug futures operate within a commons-based peer production environment, though. In an ideal peer production scene, all participants can see all available tasks, and select the most rewarding tasks. Somewhere in the economics literature there is probably a model of task selection in open source development, and if I knew where to find it I could put an impressive LaTeX equation right around here. Of course, open source still has all kinds of barriers that make matching of workers to tasks less than ideal, but it's a good goal to keep in mind.
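
Purely as a stand-in for that missing equation, here is a back-of-the-envelope sketch (not from the literature): worker i picks, from the set of tasks T_i visible to them, the task that maximizes expected reward net of effort:

    t_i^* = \arg\max_{t \in T_i} \bigl( p_{i,t} \, r_t - c_{i,t} \bigr)

where p_{i,t} is the probability that worker i completes task t, r_t is the reward attached to the task, and c_{i,t} is the worker's cost of doing it. The barriers just mentioned show up as a smaller T_i and noisier estimates of p_{i,t}.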

If you do bug futures right, they interfere as little as possible with the peer production advantage—that it enables workers to match themselves to tasks. And the futures market adds the ability for people who are knowledgeable about the likelihood of completion of a task, usually those who can do the task, to profit from that knowledge.

Rather than paying a worker directly for performing a task, bug futures are about trading on the outcomes of tasks. When participating, you're not trading labor for money, you're trading on information you hold about the likelihood of successful completion of a task. As in conventional financial markets, information must be present on the edges, with the individual participants, in order for them to participate. If a feature is worth $1000 to me, and someone knows how to fix it in five minutes, bug futures could facilitate a trade that's profitable to both ends. If the market design is done right, then most of that value gets captured by the endpoints—the user and developer who know when to make the right trade.
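
To make the five-minute-fix example concrete, here is a toy Python sketch (illustrative numbers and structure only, not Bugmark's actual contract terms) of the two sides of such a contract: the face value goes to the "fixed" side if the issue is fixed by maturity, and to the "unfixed" side otherwise.

    from dataclasses import dataclass

    @dataclass
    class BugFuture:
        face_value: float   # total paid out to the winning side at maturity
        fixed_price: float  # stake paid in by the "fixed" side (e.g. the developer)

        def payout(self, issue_fixed: bool):
            """Return (fixed_side_profit, unfixed_side_profit) at maturity."""
            unfixed_price = self.face_value - self.fixed_price  # stake of the "unfixed" side
            if issue_fixed:
                return self.face_value - self.fixed_price, -unfixed_price
            return -self.fixed_price, self.face_value - unfixed_price

    # A user who values the fix funds most of the contract; a developer who
    # knows the fix is quick work buys the cheap "fixed" side.
    contract = BugFuture(face_value=100.0, fixed_price=10.0)
    print(contract.payout(issue_fixed=True))   # (90.0, -90.0): the developer profits, the user pays for a fix they wanted
    print(contract.payout(issue_fixed=False))  # (-10.0, 10.0): the user is compensated for the unfixed issue

The numbers are arbitrary; the point is that each side's stake is known up front, which is the "known amount" in the description above, and that the price at which the two sides trade is where the information gets expressed.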

The transaction costs of trading in information tend to be lower than the transaction costs of trading in labor, for a variety of reasons which you will probably believe in to different extents depending on your politics. What if we could replace some direct trading in labor with trading in the outcomes of that labor by trading information? Lower transaction costs, more gains from trade, more value created.

Bug futures series so far


TEDDEADLINE EXTENDED: Audition for TED2018!


At last year’s TEDNYC Idea Search, artist Olalekan Jeyifous showed off his hyper-detailed and gloriously complex imaginary cities. See more of his work in this TED Gallery. Photo: Anyssa Samari / TED

Do you have an idea worth spreading? Do you want to speak on the TED2018 stage in Vancouver in April?

To find more new voices, TED is hosting an Idea Search at our office theater in New York City on January 24, 2018. Speakers who audition at this event might be chosen for the TED2018 stage or to become part of our digital archive on TED.com.

You’re invited to pitch your amazing idea to try out on the Idea Search stage in January. The theme of TED2018 is The Age of Amazement, so we are looking for ideas that connect to that theme — from all angles. Are you working on cutting-edge technology that the world needs to hear about? Are you making waves with your art or research? Are you a scientist with a new discovery or an inventor with a new vision? A performer with something spectacular to share? An incredible storyteller? Please apply to audition at our Idea Search.

Important dates:

The deadline to apply to the Idea Search is Friday, December 8, 2017, at noon Eastern.

The Idea Search event happens in New York City from the morning of January 23 through the morning of January 25, 2018. Rehearsals will take place on January 23, and the event happens in the evening of January 24.

TED2018 happens April 10–14, 2018, in Vancouver.

Don’t live in the New York City area? Don’t let that stop you from applying — we may be able to help get you here.

Here’s how to apply!

Sit down and think about what kind of talk you’d like to give, then script a one-minute preview of the talk.

Film yourself delivering the one-minute preview (here are some insider tips for making a great audition video).

Upload the film to Vimeo or YouTube, titled: “[Your name] TED2018 audition video: [name of your talk]” — so, for example: “Jane Smith TED2018 audition video: Why you should pay attention to roadside wildflowers”

Then complete the entry form, paste your URL in, and hit Submit!

Curious to learn more?

Read about a few past Idea Search events: TEDNYC auditions in 2017, in 2014 and in 2013.

Watch talks from past Idea Search events that went viral on our digital archive on TED.com:

Christopher Emdin: Teach teachers how to create magic (more than 2 million views)
Sally Kohn: Let’s try emotional correctness (more than 2 million views)
Lux Narayan: What I learned from 2,000 obituaries (currently at 1.4 million views!)
Lara Setrakian: 3 ways to fix a broken news industry (just shy of a million views)
Todd Scott: An intergalactic guide to using a defibrillator (also juuust south of a million)

And here are just a few speakers who were discovered during past talent searches:

Ashton Applewhite: Let’s end ageism (1m views)
OluTimehin Adegbeye: Who belongs in a city? (a huge hit at TEDGlobal 2017)
Richard Turere: My invention that made peace with the lions (2m views)
Zak Ebrahim: I am the son of a terrorist. Here’s how I chose peace (4.7m views and a TED Book)


CryptogramFriday Squid Blogging: Squid Embryos Coming to Life

Beautiful video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity Vulnerabilities in Certificate Pinning

New research found that many banks offer certificate pinning as a security feature, but fail to authenticate the hostname. This leaves the systems open to man-in-the-middle attacks.

From the paper:

Abstract: Certificate verification is a crucial stage in the establishment of a TLS connection. A common security flaw in TLS implementations is the lack of certificate hostname verification but, in general, this is easy to detect. In security-sensitive applications, the usage of certificate pinning is on the rise. This paper shows that certificate pinning can (and often does) hide the lack of proper hostname verification, enabling MITM attacks. Dynamic (black-box) detection of this vulnerability would typically require the tester to own a high security certificate from the same issuer (and often same intermediate CA) as the one used by the app. We present Spinner, a new tool for black-box testing for this vulnerability at scale that does not require purchasing any certificates. By redirecting traffic to websites which use the relevant certificates and then analysing the (encrypted) network traffic we are able to determine whether the hostname check is correctly done, even in the presence of certificate pinning. We use Spinner to analyse 400 security-sensitive Android and iPhone apps. We found that 9 apps had this flaw, including two of the largest banks in the world: Bank of America and HSBC. We also found that TunnelBear, one of the most popular VPN apps was also vulnerable. These apps have a joint user base of tens of millions of users.
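
To illustrate the class of bug in a few lines (a Python sketch for illustration only, not code from the paper or any affected app): a pin check by itself only proves the server presented the expected certificate, not that the certificate was verified against the hostname you meant to reach, so hostname verification has to stay on alongside the pin.

    import hashlib
    import socket
    import ssl

    EXPECTED_PIN = "<hex sha-256 of the pinned certificate>"  # hypothetical placeholder

    def connect_pinned(host: str, port: int = 443) -> ssl.SSLSocket:
        ctx = ssl.create_default_context()
        # The vulnerable apps effectively skipped this step; with hostname
        # checking disabled, any certificate matching the pin is accepted for
        # any server, which is what enables the man-in-the-middle attack.
        ctx.check_hostname = True
        ctx.verify_mode = ssl.CERT_REQUIRED

        sock = ctx.wrap_socket(socket.create_connection((host, port)),
                               server_hostname=host)
        der = sock.getpeercert(binary_form=True)
        if hashlib.sha256(der).hexdigest() != EXPECTED_PIN:  # the pin check itself
            sock.close()
            raise ssl.SSLError("certificate pin mismatch")
        return sock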

News article.

Worse Than FailureError'd: Pick an Object, Any Object

"Who would have guessed Microsoft would have a hard time developing web apps?" writes Sam B.

 

Jerry O. writes, "So, if I eat my phone, I might get acid indigestion? That sounds reasonable."

 

"Got this when I typed into a SwaggerHub session I'd left open overnight and tried to save it," wrote Rupert, "The 'newer' draft was not, in fact, the newer version."

 

Antonio writes, "It's nice to buy software from another planet, especially if year there is much longer."

 

"Either Meteorologist (http://heat-meteo.sourceforge.net/) is having some trouble with OpenWeatherMap data, or we're having an unusually hot November in Canada," writes Chris H.

 

"This is possibly one case where a Windows crash can result in a REAL crash," writes Ruben.

 

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet Linux AustraliaOpenSTEM: Happy Holidays, Queensland!

It’s finally holidays in Queensland! Yay! Congratulations to everyone for a wonderful year and lots of hard work! Hope you all enjoy a well-earned rest! Most other states and territories have only a week to go, but the holiday spirit is in the air. Should you be looking for help with resources, rest assured that […]

Krebs on SecurityPhishers Are Upping Their Game. So Should You.

Not long ago, phishing attacks were fairly easy for the average Internet user to spot: Full of grammatical and spelling errors, and linking to phony bank or email logins at unencrypted (http:// vs. https://) Web pages. Increasingly, however, phishers are upping their game, polishing their copy and hosting scam pages over https:// connections — complete with the green lock icon in the browser address bar to make the fake sites appear more legitimate.

A brand new (and live) PayPal phishing page that uses SSL (https://) to appear more legitimate.

According to stats released this week by anti-phishing firm PhishLabs, nearly 25 percent of all phishing sites in the third quarter of this year were hosted on HTTPS domains — almost double the percentage seen in the previous quarter.

“A year ago, less than three percent of phish were hosted on websites using SSL certificates,” wrote Crane Hassold, the company’s threat intelligence manager. “Two years ago, this figure was less than one percent.”

A currently live Facebook phishing page that uses https.

As shown in the examples above (which KrebsOnSecurity found in just a few minutes of searching via phish site reporting service Phishtank.com), the most successful phishing sites tend to include not only their own SSL certificates but also a portion of the phished domain in the fake address.

Why are phishers more aggressively adopting HTTPS Web sites? Traditionally, many phishing pages are hosted on hacked, legitimate Web sites, in which case the attackers can leverage both the site’s good reputation and its SSL certificate.

Yet this, too, is changing, says PhishLabs’ Hassold.

“An analysis of Q3 HTTPS phishing attacks against PayPal and Apple, the two primary targets of these attacks, indicates that nearly three-quarters of HTTPS phishing sites targeting them were hosted on maliciously-registered domains rather than compromised websites, which is substantially higher than the overall global rate,” he wrote. “Based on data from 2016, slightly less than half of all phishing sites were hosted on domains registered by a threat actor.”

Hassold posits that more phishers are moving to HTTPS because it helps increase the likelihood that users will trust that the site is legitimate. After all, your average Internet user has been taught for years to simply “look for the lock icon” in the browser address bar as assurance that a site is safe.

Perhaps this once was useful advice, but if so its reliability has waned over the years. In November, PhishLabs conducted a poll to see how many people actually knew the meaning of the green padlock that is associated with HTTPS websites.

“More than 80% of the respondents believed the green lock indicated that a website was either legitimate and/or safe, neither of which is true,” he wrote.

What the green lock icon indicates is that the communication between your browser and the Web site in question is encrypted; it does little to ensure that you really are communicating with the site you believe you are visiting.

At a higher level, another reason phishers are more broadly adopting HTTPS is because more sites in general are using encryption: According to Let’s Encrypt, 65% of web pages loaded by Firefox in November used HTTPS, compared to 45% at the end of 2016.

Also, phishers no longer need to cough up a nominal fee each time they wish to obtain a new SSL certificate. Indeed, Let’s Encrypt now gives them away for free.

The major Web browser makers all work diligently to index and block known phishing sites, but you can’t count on the browser to save you.

So what can you do to make sure you’re not the next phishing victim?

Don’t take the bait: Most phishing attacks try to convince you that you need to act quickly to avoid some kind of loss, cost or pain, usually by clicking a link and “verifying” your account information, user name, password, etc. at a fake site. Emails that emphasize urgency should always be considered extremely suspect, and under no circumstances should you do anything suggested in the email.

Phishers count on spooking people into acting rashly because they know their scam sites have a finite lifetime; they may be shuttered at any moment. The best approach is to bookmark the sites that store your sensitive information; that way, if you receive an urgent communication that you’re unsure about, you can visit the site in question manually and log in that way. In general, it’s a bad idea to click on links in email.

Links Lie: You’re a sucker if you take links at face value. For example, this might look like a link to Bank of America, but I assure you it is not. To get an idea of where a link goes, hover over it with your mouse and then look in the bottom left corner of the browser window.

Yet, even this information often tells only part of the story, and some links can be trickier to decipher. For instance, many banks like to send links that include ridiculously long URLs which stretch far beyond the browser’s ability to show the entire thing when you hover over the link.

The most important part of a link is the “root” domain. To find that, look for the first slash (/) after the “http://” part, and then work backwards through the link until you reach the second dot; the part immediately to the right is the real domain to which that link will take you.
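
That manual rule translates into a few lines of code; the Python sketch below is only an illustration, and its last-two-labels shortcut misreads suffixes like co.uk, which is why serious tooling consults the Public Suffix List instead.

    from urllib.parse import urlparse

    def root_domain(link: str) -> str:
        # Everything between "http(s)://" and the first "/" is the hostname;
        # its last two dot-separated labels approximate the "root" domain.
        host = urlparse(link).hostname or ""
        labels = host.split(".")
        return ".".join(labels[-2:]) if len(labels) >= 2 else host

    # Hypothetical phish-style link: the familiar brand is only a subdomain.
    print(root_domain("http://bankofamerica.com.example.net/login"))  # -> example.net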

“From” Fields can be forged: Just because the message says in the “From:” field that it was sent by your bank doesn’t mean that it’s true. This information can be and frequently is forged.

If you want to discover who (or what) sent a message, you’ll need to examine the email’s “headers,” important data included in all email.  The headers contain a lot of information that can be overwhelming for the untrained eye, so they are often hidden by your email client or service provider, each of which may have different methods for letting users view or enable headers.

Describing succinctly how to read email headers with an eye toward thwarting spammers would require a separate tutorial, so I will link to a decent one already written at About.com. Just know that taking the time to learn how to read headers is a useful skill that is well worth the effort.

Keep in mind that phishing can take many forms: Why steal one set of login credentials for a single brand when you can steal them all? Increasingly, attackers are opting for approaches that allow them to install a password-snarfing Trojan that steals all of the sensitive data on victim PCs.

So be careful about clicking links, and don’t open attachments in emails you weren’t expecting, even if they appear to come from someone you know. Send a note back to the sender to verify the contents and that they really meant to send it. This step can be a pain, but I’m a stickler for it; I’ve been known to lecture people who send me press releases and other items as unrequested attachments.

If you didn’t go looking for it, don’t install it: Password stealing malware doesn’t only come via email; quite often, it is distributed as a Facebook video that claims you need a special “codec” to view the embedded content. There are tons of variations of this scam. The point to remember is: If it wasn’t your idea to install something from the get-go, don’t do it.

Lay traps: When you’ve mastered the basics above, consider setting traps for phishers, scammers and unscrupulous marketers. Some email providers — most notably Gmail — make this especially easy.

When you sign up at a site that requires an email address, think of a word or phrase that represents that site for you, and then add that with a “+” sign just to the left of the “@” sign in your email address. For example, if I were signing up at example.com, I might give my email address as krebsonsecurity+example@gmail.com. Then, I simply go back to Gmail and create a folder called “Example,” along with a new filter that sends any email addressed to that variation of my address to the Example folder.
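
If you want to script that bookkeeping, the tagging itself is trivial; here is a short, illustrative Python sketch reusing the example address above.

    def tagged_address(base: str, site: str) -> str:
        user, domain = base.split("@", 1)  # split only on the first "@"
        return f"{user}+{site}@{domain}"

    print(tagged_address("krebsonsecurity@gmail.com", "example"))
    # -> krebsonsecurity+example@gmail.com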

That way, if anyone other than the company I gave this custom address to starts spamming or phishing it, that may be a clue that example.com shared my address with others (or that it got hacked!). I should note two caveats here. First, although this functionality is part of the email standard, not all email providers will recognize address variations like these. Also, many commercial Web sites freak out if they see anything other than numerals or letters, and may not permit the inclusion of a “+” sign in the email address field.


Sociological ImagesHow Hate Hangs On

Originally Posted at Discoveries

After the 2016 Presidential election in the United States, Brexit in the UK, and a wave of far-right election bids across Europe, white supremacist organizations are re-emerging in the public sphere and taking advantage of new opportunities to advocate for their vision of society. While these groups have always been quietly organizing in private enclaves and online forums, their renewed public presence has many wondering how they keep drawing members. Recent research in American Sociological Review by Pete Simi, Kathleen Blee, Matthew DeMichele, and Steven Windisch sheds light on this question with a new theory—people who try to leave these groups can get “addicted” to hate, and leaving requires a long period of recovery.

Photo by Dennis Skley, Flickr CC

The authors draw on 89 life history interviews with former members of white supremacist groups. These interviews were long, in-depth discussions of their pasts, lasting between four and eight hours each. After analyzing over 10,000 pages of interview transcripts, the authors found a common theme emerging from the narratives. Membership in a supremacist group took on a “master status”—an identity that was all-encompassing and touched on every part of a member’s life. Because of this deep involvement, many respondents described leaving these groups as a process akin to addiction recovery. They would experience momentary flashbacks of hateful thoughts, and even relapses into hateful behaviors that required therapeutic “self talk” to manage.

We often hear about new members (or infiltrators) of extremist groups getting “in too deep” to where they cannot leave without substantial personal risk. This research helps us understand how getting out might not be enough, because deep group commitments don’t just disappear when people leave.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureRepresentative Line: A Case of File Handling

Tim W caught a ticket. The PHP system he inherited allowed users to upload files, and then would process those files. It worked… most of the time. It seemed like a Heisenbug. Logging was non-existent, documentation was a fantasy, and to be honest, no one was exactly 100% certain what the processing feature was supposed to do- but whatever it was doing now was the right thing, except the times that it wasn’t right.

Specifically, some files got processed. Some files didn’t. They all were supposed to.

But other than that, it worked.

Tim worried that this was going to be difficult to replicate, especially after he tried it with a few files he had handy. Digging through the code though, made it perfectly clear what was going on. Buried on about line 1,200 in a 3,000 line file, he found this:

while (false !== ($file = readdir($handle))) {
    if ($file != "." && $file != ".." && ( $file == strtolower($file) ) ) {
        …
    }
}

For some reason, this code required that the name of the file contain no capital letters. Why? Well, again, no documentation, no comments, and the change predated the organization’s use of source control. Someone put in the effort to add the feature, but was it necessary?

Tim took the line out, and now it processes all files. Unfortunately, it’s still only working most of the time, but nobody can exactly agree on what it’s doing wrong.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Don Martithree kinds of open source metrics

Some random notes about open source metrics, related to work on CHAOSS, where Mozilla is a member and I'm on the Governing Board.

As far as I can tell, there are three kinds of open source metrics.

Impact metrics cover how much value the software creates. Possible good ones include count of projects dependent on this one, mentions of this project in job postings, books, papers, and conference talks, and, of course, sales of products that bundle this project.

Contributor reward metrics cover how the software is a positive experience for the people who contribute to it. Job postings are a contributor reward metric as well as an impact metric. Contributor retention metrics and positive results on contributor experience surveys are some other examples.

But impact metrics and contributor reward metrics tend to be harder to collect, or slower-moving, than other kinds of metrics, which I'll lump together as activity metrics. Activity metrics include most of the things you see on open source project dashboards, such as pull request counts, time to respond to bug reports, and many others. Other activity metrics can be the output of natural language processing on project discussions. An example of that is FOSS Heartbeat, which does sentiment analysis, but you could also do other kinds of metrics based on text.

IMHO, the most interesting questions in the open source metrics area are all about: how do you predict impact metrics and contributor reward metrics from activity metrics? Activity metrics are easy to automate, and make a nice-looking dashboard, but there are many activity metrics to choose from—so which ones should you look at?

Which activity metrics are correlated to any impact metrics?

Which activity metrics are correlated to any contributor reward metrics?

Those questions are key to deciding which of the activity metrics to pay attention to. I'm optimistic that we'll be seeing some interesting correlations soon.
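
As a concrete sketch of the kind of check I mean (entirely hypothetical numbers and column names; in practice the activity columns would come from CHAOSS-style tooling and the impact column from something like dependency counts):

    import pandas as pd

    # Hypothetical per-project metrics.
    df = pd.DataFrame({
        "pull_requests":         [120, 15, 300, 42, 7],        # activity
        "median_issue_response": [2.0, 14.0, 1.0, 6.0, 30.0],  # activity, in days
        "dependent_projects":    [900, 12, 2500, 80, 3],       # impact
    })

    # Spearman rank correlation is a reasonable first look for skewed counts.
    print(df.corr(method="spearman")["dependent_projects"])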


CryptogramGermany Preparing Backdoor Law

The German Interior Minister is preparing a bill that allows the government to mandate backdoors in encryption.

No details about how likely this is to pass. I am skeptical.

Worse Than FailureNews Roundup: Calculated

A long time ago, in a galaxy right here, we ran a contest. The original OMGWTF contest was a challenge to build the worst calculator you possibly could.

We got some real treats, like the Universal Calculator, which, instead of being a calculator, was a framework for defining your own calculator, or Rube Goldberg’s Calculator, which eschewed cryptic values like “0.109375”, and instead output “seven sixty-fourths” (using inlined assembly for performance!). Or, the champion of the contest, the Buggy Four Function Calculator, which is a perfect simulation of a rotting, aging codebase.

The joke, of course, is that building a usable calculator app is easy. Why, it’s so easy, that we challenged our readers to come up with ways to make it hard. To find creative ways to fail at handling this simple task. To misinterpret and violate basic principles of how calculators should work.

Well, I bring this up, because just a few days ago, iOS 11.2 left beta and went public. And finally, finally, they fixed the calculator, which has been broken since iOS 11 launched. How broken? Let's try 1+2+3+4+5+6, shall we?

For those who can't, or don't wish to watch the video, according to the calculator, 1+2+3+4+5+6 is 75. I entered the values in quickly, but not super-speed.

I personally discovered the bug for myself while scoring at the end of a round of board games. I just ran down the score-sheet to sum things up, tapping away like one does with a calculator, and got downright insane results.

The underlying cause, near as anyone has been able to tell, is a combination of input lag and display updates, so rapidly typing “1+2+3” loses one of the “+”es and becomes “1+23”. (Drop two of the “+”es from the full sequence above, for instance, and 1+2+3+4+5+6 becomes 1+23+45+6, which does equal 75.)

Now Apple’s been in the news a lot recently- in addition to shipping a completely broken calculator, they messed up character encoding, causing “I” to display a placeholder character, released a macOS update which allowed anyone to log in as root with no password, patched it, but with the problem that the patch broke filesharing, and if you didn’t apply it in the “right” order, the bug could come back.

The root cause of the root bug, by the way, was due to bad error handling in the login code.

Now, I’ll leave it to the pundits to wring their hands over the decline of Apple’s code quality, worry that “is this the future of Apple?!?!!11?”, or claim “this never would have happened under Jobs”. I’m not interested in the broad trends here, or prognosticating, or prognostibating (where you please only yourself by imagining alternate realities where Steve Jobs still lives).

What I am interested in is that calculator app. Some developer, I’m gonna assume a more junior one (right? you don’t need 15 years of experience to reimplement a calculator app), really jacked that up. And at no point in testing did anyone actually attempt to use the calculator. I’m sure they ran some automated UI tests, and when they saw odd results, they started chucking some sleep() calls in there until the errors went away.

It’s just amazing to me, that we ran a contest built around designing the worst calculator you could. A decade later, Apple comes sauntering in, vying for an honorable mention, in an application they actually shipped.

[Advertisement] High availability, Load-balanced or Basic – design your own Universal Package Manager, allow the enterprise to scale as you grow. Download and see for yourself!


Krebs on SecurityAnti-Skimmer Detector for Skimmer Scammers

Crooks who make and deploy ATM skimmers are constantly engaged in a cat-and-mouse game with financial institutions, which deploy a variety of technological measures designed to defeat skimming devices. The latest innovation aimed at tipping the scales in favor of skimmer thieves is a small, battery powered device that provides crooks a digital readout indicating whether an ATM likely includes digital anti-skimming technology.

A well-known skimmer thief is marketing a product called “Smart Shield Detector” that claims to be able to detect a variety of electronic methods used by banks to foil ATM skimmers.

The device, which sells for $200, is called a “Smart Shield Detector,” and promises to detect “all kinds of noise shields, hidden shields, delayed shields and others!”

It appears to be a relatively simple machine that gives a digital numeric indicator of whether an ATM uses any of a variety of anti-skimming methods. One of the most common is known as “frequency jamming,” which uses electronic signals to scramble both the clock (timing) and the card data itself in a bid to confuse skimming devices.

“You will see current level within seconds!,” the seller enthuses in an online ad for the product, a snippet of which is shown above. “Available for sale after November 1st, market price 200usd. Preorders available at price 150usd/device. 2+ devices for your team – will give discounts.”

According to the individual selling the Smart Shield Detector, a readout of 15 or higher indicates the presence of some type of electronic shield or jamming technology — warning the skimmer thief to consider leaving that ATM alone and to find a less protected machine. In contrast, a score between 3-5 is meant to indicate “no shield,” i.e., that the ATM is ripe for compromise.

KrebsOnSecurity shared this video with Charlie Harrow, solutions manager for ATM maker NCR Corp. Harrow called the device “very interesting” but said NCR doesn’t try to hide which of its ATMs include anti-skimming technologies — such as those the Smart Shield Detector claims to be able to detect.

“We don’t hide the fact that our ATMs are protected against this type of external skimming attack,” Harrow said. “Our Anti-Skimming product uses a uniquely shaped bezel so you can tell just by looking at the ATM that it is protected (if you know what you are looking for).”

Harrow added that NCR doesn’t rely on secrecy of design to protect its ATMs.

“The bad guys are skilled, resourced and determined enough that sooner or later they will figure out exactly what we have done, so the ATM has to be safe against a knowledgeable attacker,” he said. “That said, a little secret sauce doesn’t hurt, and can often be very effective in stopping specific attack [methods] in the short term, but it can’t be relied on to provide any long term protection.”

The best method for protecting yourself against ATM skimmers doesn’t require any fancy gadgets or technology at all: It involves merely covering the PIN pad with your hand while you enter your PIN!

That’s because the vast majority of skimming attacks involve two components: A device that fits over or inside the card reader and steals data from the card’s magnetic stripe, and a tiny hidden camera aimed at the PIN pad. While thieves who have compromised an ATM you used can still replicate your ATM card, the real value rests in your PIN, without which the thieves cannot easily drain your checking or savings account of cash.

Also, be aware of your physical surroundings while using an ATM; you’re probably more apt to get mugged physically than virtually at a cash machine. Finally, try to stick to cash machines that are physically installed inside of banks, as these tend to be much more challenging for thieves to compromise than stand-alone machines like those commonly found at convenience stores.

KrebsOnSecurity would like to thank Alex Holden, founder of Milwaukee, Wisc. based Hold Security, for sharing the above video.

Are you fascinated by skimming devices? Then check out my series, All About Skimmers, which looks at all manner of skimming scams, from fake ATMs and cash claws to PIN pad overlays and gas pump skimmers.

Sociological ImagesWhat Drives Conspiracy Theories?

From Pizzagate to more plausible stories of palace intrigue, U.S. politics has more than a whiff of conspiracy in the air these days. In sorting fact from fiction, why do some people end up believing conspiracy theories? Social science research shows that we shouldn’t think about these beliefs like delusions, because the choice to buy in stems from real structural and psychological conditions that can affect us all.

For example, research in political science shows that people who know a lot about politics, but also show low levels of generalized trust, are more likely to believe conspiracy theories. It isn’t just partisan, either: both liberals and conservatives are equally likely to believe conspiracy theories—just different ones.

In sociology, research also shows how bigger structural factors elevate conspiracy concern. In an article published in Socius earlier this year, Joseph DiGrazia examined Google search trends for two major conspiracy theories between 2007 and 2014: inquiries about the Illuminati and concern about President Obama’s birth and citizenship.

DiGrazia looked at the state-level factors that had the strongest and most consistent relationships with search frequency: partisanship and employment. States with higher unemployment rates had higher search rates about the Illuminati, and more Republican states had higher searches for both conspiracies throughout the Obama administration.

These studies show it isn’t correct to treat conspiracy beliefs as simply absurd or irrational—they flare up among reasonably informed people who have lower trust in institutions, often when they feel powerless in the face of structural changes across politics and the economy.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramMatt Blaze on Securing Voting Machines

Matt Blaze's House testimony on the security of voting machines is an excellent read. (Details on the entire hearing are here.) I have not watched the video.

Worse Than FailureEditor's Soapbox: Protect Yourself

We lend the soapbox to snoofle today, to dispense a combination of career and financial advice. I've seen too many of my peers sell their lives for a handful of magic beans. Your time is too valuable to waste for no reward. -- Remy

There is a WTF that far too many people make with their retirement accounts at work. I've seen many many people get massively financially burned. A friend recently lost a huge amount of money from their retirement account when the company went under, which prompted me to write this to help you prevent it from happening to you.


The housing bubble that led up to the 2008 financial collapse burst when overinflated housing values came back down to reality. People had been given mortgages far beyond what they could afford using traditional financial norms, and when the value of their homes came back down to realistic values, they couldn't afford their mortgages and started missing payments, or worse, defaulted. This left the banks and brokerages that were holding the mortgage-backed securities with billions in cash flow, but upside down on the balance sheet. When the losses crossed a critical threshold, they went under. Notably Bear Stearns and Lehman. Numerous companies (AIG, Citi, etc.) that invested in these MBS also nearly went under.

One person I knew of had worked at BS for many years and had a great deal of BS stock in their retirement account. When they left for Lehman, they left the account intact at BS. Then they spent many years at Lehman. When both melted down, that person not only lost their job, but the company stock in both retirement accounts was worth... a whole lot less.

As a general statement, if you work for a company, don't buy only stock of that company in your retirement account because if the place goes belly up, you lose twice: your job and your retirement account!

Another thing people do is accept stock options in lieu of pay. Startups are big on doing this as it limits the cash outflow when they are new. If they succeed, they'll have the cash to cover the options. If they go bust, you lose. Basically, you put in the long hours and take a large chunk of the financial risk on the hopes that the managers know what they're doing, and are one of the lucky unicorns that "makes it". But large companies also pay people (in part) in options. A friend worked his way up to Managing Director of a large firm. He was paid 20% cash and 80% company stock options, but had to hold the options for five years before he was allowed to exercise them - so that he'd be vested in the success of the company. By the time the sixth year had rolled by, he had forgotten about it and let it ride, with the options auto-exercising and being converted into regular shares. When he left the job, he left the account intact and in place. When the market tanked, so did the value of the stock that he had earned and been awarded.

When you leave a job, either voluntarily or forcibly, roll the assets in your retirement account in-kind into a personal retirement account at any bank or brokerage that provides that (custodian) service. You won't pay taxes if you do a direct transfer, but if some company where you used to work goes under, you won't have to chase lawyers to get what belongs to you.

Remember, Bill Gates routinely divested huge blocks of MS stock as part of diversifying, even while it was still increasing in value. Your numbers will be smaller but the same principle applies to you too (e.g.: Don't put all your eggs in one basket).

While the 2008 fiasco and dot-com bust will hopefully never be repeated, in the current climate of deregulation, you never know. If you've heavily weighted your retirement account with company stock, or have a trail of retirement accounts at former employers, please go talk to a financial advisor about diversifying your holdings, and collect the past corporate retirement accounts in a single personal retirement brokerage account, where you can more easily control it and keep an eye on it.

Personally, I'm retired. My assets are split foreign/domestic, bonds/equities, large/medium/small-cap and growth/blend/value. A certain percentage is professionally managed, but I keep an eye on what they're doing and the costs. The rest is in mutual funds that cover the desired sectors, etc.

The amounts and percentages across investment types in which you invest will vary by your age, total assets and time horizon. Only you can know what's best for your family, but you should discuss it with an independent advisor (before they repeal the fiduciary rule, which states that they must put your interests ahead of what their firm is pushing).

For what it's worth, over my career, I've worked at five companies that went under, more than twenty years down the road after I moved on. I have always taken the cash value of the pension/401(k) and rolled it into a brokerage account where I manage it myself. Had I left those assets at the respective companies, I would have lost over $100,000 of money that I had earned and been awarded - for absolutely no reason!

Consider for a moment that the managers that we all too often read about in this space are often the same ones who set up and manage these workplace retirement plans. Do you really want them managing money that you've already earned? Especially after you've moved on to the next gig? When you're not there to hear office gossip about Bad Things™ that may be happening?

One final point. During the first few years of my career, there were no 401(k)'s. If you didn't have a pension, your savings account was your main investment vehicle. Unless the IRA and 401(k) plan rules are changed, you can start saving very early on. At first, it seems like it accumulates very slowly, but the rate of growth increases rapidly as you get nearer to the end of your career. The sooner you start saving for the big ticket items down the road, the quicker you'll be able to pay for them. Patience, persistence and diversification are key!

As someone who has spent the last quarter century working for these massive financial institutions, I've seen too many people lose far too much; please protect yourself!

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet Linux AustraliaLev Lafayette: A Tale of Two Conferences: ISC and TERATEC 2017

This year the International Supercomputing Conference and TERATEC were held in close proximity, the former in Frankfurt from June 17-21 and the latter in Paris from June 27-28. Whilst the two conferences differ greatly in scope (one international, one national) and language (one Anglophone, the other Francophone), the dominance of Linux as the operating system of choice at both was overwhelming.

read more


Cryptogram"Crypto" Is Being Redefined as Cryptocurrencies

I agree with Lorenzo Franceschi-Bicchierai, "Cryptocurrencies aren't 'crypto'":

Lately on the internet, people in the world of Bitcoin and other digital currencies are starting to use the word "crypto" as a catch-all term for the lightly regulated and burgeoning world of digital currencies in general, or for the word "cryptocurrency" -- which probably shouldn't even be called "currency," by the way.

[...]

To be clear, I'm not the only one who is mad about this. Bitcoin and other technologies indeed do use cryptography: all cryptocurrency transactions are secured by a "public key" known to all and a "private key" known only to one party­ -- this is the basis for a swath of cryptographic approaches (known as public key, or asymmetric cryptography) like PGP. But cryptographers say that's not really their defining trait.

"Most cryptocurrency barely has anything to do with serious cryptography," Matthew Green, a renowned computer scientist who studies cryptography, told me via email. "Aside from the trivial use of digital signatures and hash functions, it's a stupid name."

It is a stupid name.

Worse Than FailureCodeSOD: Pounding Away

“Hey, Herbie, we need you to add code to our e-commerce package to send an email with order details in it,” was the requirement.

“You mean like a notification? Order confirmation?”

“Yes!”

So Herbie trotted off to write the code, only to learn that it was all wrong. They didn’t want a human-readable confirmation. The emails were going to a VB application, and they needed a machine-readable format. So Herbie revamped the email to have XML, and provided an XML schema.

This was also wrong. Herbie’s boss wrangled Herbie and the VB developer together on a conference call, and they tried to hammer out some sort of contract for how the data would move from system to system.

They didn’t want the data in any standard format. They had their own format. They didn’t have a clear idea about what the email was supposed to contain, either, which meant Herbie got to play the game of trying his best to constantly revamp the code as they changed the requirements on the fly.

In the end, he produced this monster:

   private function getAdminMailString(){
        $adminMailString = '';
        $mediaBeans = $this->orders->getConfiguredImageBeans();
        $mediaBeansSize = count($mediaBeans);

        $adminMailString .= '###order-start###'."\n";
        $adminMailString .= '###order-size-start###' . $mediaBeansSize . "###order-size-end###\n";
        $adminMailString .= '###date-start###' . date('d.m.Y',strtotime($this->context->getStartDate())) . "###date-end###\n";
        $adminMailString .= '###business-context-start###' . $this->context->getBusinessContextName() . "###business-context-end###\n";

        if($this->customer->getIsMassOrderUser()){

            $customers = $this->customer->getSelectedMassOrderCustomers();
            $customersSize = count($this->customer->getSelectedMassOrderCustomers());

            $adminMailString .= '###is-mass-order-start###1###is-mass-order-end###'."\n";
            $adminMailString .= '###mass-order-size-start###'.$customersSize.'###mass-order-size-end###'."\n";

            $adminMailString .= '###mass-start###'."\n";
            for($i = 0; $i < $customersSize; $i++){

                $adminMailString .= '###mass-customer-' . $i . '-start###'."\n";
                $adminMailString .= '###customer-start###' . $customers[$i]->getCompanyName() . '###customer-end###'."\n";
                $adminMailString .= '###customer-number-start###' . $customers[$i]->getCustomerNumber() . '###customer-number-end###'."\n";
                $adminMailString .= '###contact-person-start###' . $customers[$i]->getContactPerson() . '###contact-person-end###'."\n";
                $adminMailString .= '###mass-customer-' . $i . '-end###'."\n";

            }
            $adminMailString .= '###mass-end###'."\n";

        } else {
            $adminMailString .= '###is-mass-order-start###0###is-mass-order-end###'."\n";
        }

        for($i = 0; $i < $mediaBeansSize; $i++){

            $adminMailString .= '###medium-' . $i . "-start###\n";

            if($mediaBeans[$i] instanceof ConfiguredImageBean){

                $adminMailString .= '###type-start###picture###type-end###' . "\n";
                $adminMailString .= '###name-start###' . $mediaBeans[$i]->getTitle() . "###name-end###\n";
                $adminMailString .= '###url-start###' . $mediaBeans[$i]->getConfiguredImageWebPath() . "###url-end###\n";

            } else if($mediaBeans[$i] instanceof MovieBean){

                $adminMailString .= '###type-start###movie###type-end###' . "\n";
                $adminMailString .= '###name-start###' . $mediaBeans[$i]->getTitle() . "###name-end###\n";
                $adminMailString .= '###url-start###' . $mediaBeans[$i]->getMoviePath() . "###url-end###\n";

            } else {
                throw new Exception('Bean is wether of type ConfiguredImageBean nor MovieBean!');
            }

            $adminMailString .= '###medium-' . $i . "-end###\n";
        }

        $adminMailString .= '###order-end###'."\n";

        return $adminMailString;
    }

Yes, that’s XML, if instead of traditional tags you used ###some-field-start###value###some-field-end###. Note how in many cases, the tag name itself is dynamic: $adminMailString .= '###medium-' . $i . "-start###\n";

It was bad enough to generate it, but Herbie was glad he wasn’t responsible for parsing it.
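
Out of morbid curiosity, here is a rough, illustrative sketch in Python (rather than the VB the real consumer used) of what parsing that format has to look like. It treats every ###name-start### ... ###name-end### pair as a field; nested blocks such as ###order-start### simply come back as one big string to be re-parsed the same way.

    import re

    FIELD_RE = re.compile(
        r"###(?P<name>[\w-]+)-start###(?P<value>.*?)###(?P=name)-end###", re.S)

    def parse_admin_mail(body: str) -> dict:
        # Map each field name to its raw value; dynamic names like "medium-0"
        # are matched the same way as fixed ones.
        return {m.group("name"): m.group("value") for m in FIELD_RE.finditer(body)}

    sample = ("###date-start###05.12.2017###date-end###\n"
              "###order-size-start###2###order-size-end###\n")
    print(parse_admin_mail(sample))  # {'date': '05.12.2017', 'order-size': '2'}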

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Krebs on SecurityHacked Password Service Leakbase Goes Dark

Leakbase, a Web site that indexed and sold access to billions of usernames and passwords stolen in some of the world’s largest data breaches, has closed up shop. A source close to the matter says the service was taken down in a law enforcement sting that may be tied to the Dutch police raid of the Hansa dark web market earlier this year.

Leakbase[dot]pw began selling memberships in September 2016, advertising more than two billion usernames and passwords that were stolen in high-profile breaches at sites like linkedin.com, myspace.com and dropbox.com.

But roughly two weeks ago KrebsOnSecurity began hearing from Leakbase users who were having trouble reaching the normally responsive and helpful support staff responsible for assisting customers with purchases and site issues.

Sometime this weekend, Leakbase began redirecting visitors to haveibeenpwned.com, a legitimate breach alerting service run by security researcher Troy Hunt (Hunt’s site lets visitors check if their email address has shown up in any public database leaks, but it does not store corresponding account passwords).

Leakbase reportedly came under new ownership after its hack in April. According to a source with knowledge of the matter but who asked to remain anonymous, the new owners of Leakbase dabbled in dealing illicit drugs at Hansa, a dark web marketplace that was dismantled in July by authorities in The Netherlands.

The Dutch police had secretly seized Hansa and operated it for a time in order to gather more information about and ultimately arrest many of Hansa’s top drug sellers and buyers. 

According to my source, information the Dutch cops gleaned from their Hansa takeover led authorities to identify and apprehend one of the owners of Leakbase. This information could not be confirmed, and the Dutch police have not yet responded to requests for comment. 

A message posted Dec. 2 to Leakbase’s Twitter account states that the service was being discontinued, and the final message posted to that account seems to offer paying customers some hope of recovering any unused balances stored with the site.

“We understand many of you may have lost some time, so in an effort to offer compensation please email, refund@leakbase.pw Send your LeakBase username and how much time you had left,” the message reads. “We will have a high influx of emails so be patient, this could take a while.”

My source noted that these last two messages are interesting because they are unlike every other update posted to the Leakbase Twitter account. Prior to the shutdown message on Dec. 2, all updates to that account were done via Twitter’s Web client; but the last two were sent via Mobile Web (M2).

Ironically, Leakbase was itself hacked back in April 2017 after a former administrator was found to be re-using a password from an account at x4b[dot]net, a service that Leakbase relied upon at the time to protect itself from distributed denial-of-service (DDoS) attacks intended to knock the site offline.

X4B[dot]net was hacked just days before the Leakbase intrusion, and soon after cleartext passwords and usernames from hundreds of Leakbase users were posted online by the hacker group calling itself the Money Team.

Many readers have questioned how it could be illegal to resell passwords that were leaked online in the wake of major data breaches. The argument here is generally that in most cases this information is already in the public domain and thus it can’t be a crime to index and resell it.

However, many legal experts see things differently. In February 2017, I wrote about clues that tied back to a real-life identity for one of the alleged administrators of Leakedsource, a very similar service (it’s worth noting that the subject of that story also was found out because he re-used the same credentials across multiple sites).

In the Leakedsource story, I interviewed Orin Kerr, director of the Cybersecurity Law Initiative at The George Washington University. Kerr told me that owners of services like Leakbase and Leakedsource could face criminal charges if prosecutors could show these services intended for the passwords that are for sale on the site to be used in the furtherance of a crime.

Kerr said trafficking in passwords is clearly a crime under the Computer Fraud and Abuse Act (CFAA).

He was referring to Section A6 of the CFAA, which makes it a crime to “knowingly and with intent to defraud traffic in any password or similar information through which a computer may be accessed without authorization, if…such trafficking affects interstate or foreign commerce.”

“CFAA quite clearly punishes password trafficking,” Kerr said. “The statute says the [accused] must be trafficking in passwords knowingly and with intent to defraud, or trying to further unauthorized access.”

,

Planet Linux AustraliaDavid Rowe: How Inlets Generate Thrust on Supersonic aircraft

Some time ago I read Skunk Works, a very good “engineering” read.

In the section on the SR-71, the author Ben Rich made a statement that has puzzled me ever since, something like: “Most of the engine’s thrust is developed by the intake”. I didn’t get it – surely an intake is a source of drag rather than thrust? I have since read the same statement about the Concorde and its inlets.

Lately I’ve been watching a lot of AgentJayZ Gas Turbine videos. This guy services gas turbines for a living and is kind enough to present a lot of intricate detail and answer questions from people. I find his presentation style and personality really engaging, and get a buzz out of his enthusiasm, love for his work, and willingness to share all sorts of geeky, intricate details.

So inspired by AgentJayZ I did some furious Googling and finally worked out why supersonic planes develop thrust from their inlets. I don’t feel it’s well explained elsewhere so here is my attempt:

  1. Gas turbine jet engines only work if the air is moving into the compressor at subsonic speeds. So the job of the inlet is to slow the air down from say Mach 2 to Mach 0.5.
  2. When you slow down a stream of air, the pressure increases. Like when you feel the wind pushing on your face on a bike. Imagine (don’t try) the pressure on your arm hanging out of a car window at 100 km/hr. Now imagine the pressure at 3000 km/hr. Lots. Around a 40 times increase for the inlets used in supersonic aircraft (the short calculation after this list shows roughly where that figure comes from).
  3. So now we have this big box (the inlet chamber) full of high pressure air. Like a balloon this pressure is pushing equally on all sides of the box. Net thrust is zero.
  4. If we untie the balloon neck, the air can escape, and the balloon shoots off in the opposite direction.
  5. Back to the inlet on the supersonic aircraft. It has a big vacuum cleaner at the back – the compressor inlet of the gas turbine. It is sucking air out of the inlet as fast as it can. So – the air can get out, just like the balloon, and the inlet and the aircraft attached to it are thrust in the opposite direction. That’s how an inlet generates thrust.
  6. While there is also thrust from the gas turbine and its afterburner, it turns out that pressure release in the inlet contributes the majority of the thrust. I don’t know why it’s the majority. Guess I need to do some more reading and get my gas equations on.
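
To get a feel for point 2, here is a minimal back-of-the-envelope sketch in Python (my own check, not from the original post) using the standard isentropic flow relation. Real inlets recover somewhat less than this because of shock losses, so treat the numbers as upper bounds.

    # Rough sanity check of the "around a 40 times increase" figure in point 2.
    # Uses the ideal isentropic relation for compressible flow; real inlets
    # lose some pressure through shocks, so these are upper bounds.

    GAMMA = 1.4  # ratio of specific heats for air

    def total_over_static(mach):
        """Ideal ratio of stagnation pressure to static pressure at a given Mach number."""
        return (1.0 + (GAMMA - 1.0) / 2.0 * mach ** 2) ** (GAMMA / (GAMMA - 1.0))

    def inlet_pressure_rise(flight_mach, duct_mach=0.5):
        """Ideal static pressure rise from slowing air at flight_mach down to duct_mach."""
        return total_over_static(flight_mach) / total_over_static(duct_mach)

    for m in (1.0, 2.0, 3.0, 3.2):
        print(f"Mach {m}: pressure rises about {inlet_pressure_rise(m):.0f}x in the inlet")
    # Roughly 7x at Mach 2 and about 40x at Mach 3.2 (SR-71 cruise speeds),
    # which is where the "around 40 times" figure comes from.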

Another important point – the aircraft really does experience that extra thrust from the inlet – e.g. it’s transmitted to the aircraft by the engine mounts on the inlet, and the mounts must be designed with those loads in mind. This helps me understand the definition of “thrust from the inlet”.

,

Krebs on SecurityFormer NSA Employee Pleads Guilty to Taking Classified Data

A former employee for the National Security Agency pleaded guilty on Friday to taking classified data to his home computer in Maryland. According to published reports, U.S. intelligence officials believe the data was then stolen from his computer by hackers working for the Russian government.

Nghia Hoang Pho, 67, of Ellicott City, Maryland, pleaded guilty today to “willful retention of national defense information.” The U.S. Justice Department says that beginning in April 2006 Pho was employed as a developer for the NSA’s Tailored Access Operations (TAO) unit, which develops specialized hacking tools to gather intelligence data from foreign targets and information systems.

According to Pho’s plea agreement, between 2010 and March 2015 he removed and retained highly sensitive classified “documents and writings that contained national defense information, including information classified as Top Secret.”

Pho is the third NSA worker to be charged in the past two years with mishandling classified data. His plea is the latest — and perhaps final — chapter in the NSA’s hunt for those responsible for leaking NSA hacking tools that have been published online over the past year by a shadowy group calling itself The Shadow Brokers.

Neither the government’s press release about the plea nor the complaint against Pho mention what happened to the classified documents after he took them home. But a report in The New York Times cites government officials speaking on condition of anonymity saying that Pho had installed on his home computer antivirus software made by a Russian security firm Kaspersky Lab, and that Russian hackers are believed to have exploited the software to steal the classified documents.

On October 5, 2017, The Wall Street Journal reported that Russian government hackers had lifted the hacking tools from an unnamed NSA contractor who’d taken them and examined them on his home computer, which happened to have Kaspersky Antivirus installed.

On October 10, The Times reported that Israeli intelligence officers looked on in real time as Russian government hackers used Kaspersky’s antivirus network as a kind of improvised search tool to scour computers around the world for the code names of American intelligence programs.

For its part, Kaspersky has said its software detected the NSA hacking tools on a customer’s computer and sent the files to the company’s anti-malware network for analysis. In a lengthy investigation report published last month, Kaspersky said it found no evidence that the files left its network, and that the company deleted the files from its system after learning what they were.

Kaspersky also noted that the computer from which the files were taken was most likely already compromised by “unknown threat actors.” It based that conclusion on evidence indicating the user of that system installed a backdoored Microsoft Office 2013 license activation tool, and that in order to run the tool the user must have disabled his antivirus protection.

The U.S. Department of Homeland Security (DHS) issued a binding directive in September ordering all federal agencies to cease using Kaspersky software by Dec. 12.

Pho faces up to 10 years in prison. He is scheduled to be sentenced April 6, 2018.

A note to readers: This author published a story earlier in the week that examined information in the metadata of Microsoft Office documents stolen from the NSA by The Shadow Brokers and leaked online. That story identified several individuals whose names were in the metadata from those documents. After the guilty plea entered this week and described above, KrebsOnSecurity has unpublished that earlier story.

Don MartiPurple box claims another victim

Linux Journal Ceases Publication. If you can stand it, let's have a look at the final damage.


40 trackers. Not bad, but not especially good either. That purple box of data leakage—third-party trackers that forced Linux Journal into an advertising race to the bottom against low-value and fraud sites—is not so deep as a well, nor so wide as a church door...but it's there. A magazine that was a going concern in print tried to make the move to the web and didn't survive.

Linux Journal is where I was working when I first started wondering why print ads tend to hold their value while web ads keep losing value. Unfortunately it's not enough for sites to just stop running trackers and make the purple box go away. But there are a few practical steps that Internet freedom lovers can take to stop the purple box from taking out your other favorite sites.

Krebs on SecurityCarding Kingpin Sentenced Again. Yahoo Hacker Pleads Guilty

Roman Seleznev, a Russian man who is already serving a record 27-year sentence in the United States for cybercrime charges, was handed a 14-year sentence this week by a federal judge in Atlanta for his role in a credit card and identity theft conspiracy that prosecutors say netted more than $50 million. Separately, a Canadian national has pleaded guilty to charges of helping to steal more than a billion user account credentials from Yahoo.

Seleznev, 33, was given the 14-year sentence in connection with two prosecutions that were consolidated in Georgia: The 2008 heist against Atlanta-based credit card processor RBS Worldpay; and a case out of Nevada where he was charged as a leading merchant of stolen credit cards at carder[dot]su, at one time perhaps the most bustling fraud forum where members openly marketed a variety of cybercrime-oriented services.

Roman Seleznev, pictured with bundles of cash. Image: US DOJ.

Seleznev’s conviction comes more than a year after he was convicted in a Seattle court on 38 counts of cybercrime charges, including wire fraud and aggravated identity theft. The Seattle conviction earned Seleznev a 27-year prison sentence — the most jail time ever given to an individual convicted of cybercrime charges in the United States.

This latest sentence will be served concurrently — meaning it will not add any time to his 27-year sentence. But it’s worth noting because Seleznev is appealing the Seattle verdict. In the event he prevails in Seattle and gets that conviction overturned, he will still serve out his 14-year sentence in the Georgia case because he pleaded guilty to those charges and waived his right to an appeal.

Prosecutors say Seleznev, known in the underworld by his hacker nicknames “nCux” and “Bulba,” enjoyed an extravagant lifestyle prior to his arrest, driving expensive sports cars and dropping tens of thousands of dollars at lavish island vacation spots. The son of an influential Russian politician, Seleznev made international headlines in 2014 after he was captured while vacationing in The Maldives, a popular destination for Russians and one that many Russian cybercriminals previously considered to be out of reach for western law enforcement agencies.

However, U.S. authorities were able to negotiate a secret deal with the Maldivian government to apprehend Seleznev. Following his capture, Seleznev was whisked away to Guam for more than a month before being transported to Washington state to stand trial for computer hacking charges.

The U.S. Justice Department says the laptop found with him when he was arrested contained more than 1.7 million stolen credit card numbers, and that evidence presented at trial showed that Seleznev earned tens of millions of dollars defrauding more than 3,400 financial institutions.

Investigators also reportedly found a smoking gun: a password cheat sheet that linked Seleznev to a decade’s worth of criminal hacking. For more on Seleznev’s arrest and prosecution, see The Backstory Behind Carder Kingpin Roman Seleznev’s Record 27-Year Sentence, and Feds Charge Carding Kingpin in Retail Hacks.

In an unrelated case, federal prosecutors in California announced a guilty plea from Karim Baratov, one of four men indicted in March 2017 for hacking into Yahoo beginning in 2014. Yahoo initially said the intrusion exposed the usernames, passwords and account data for roughly 500 million Yahoo users, but in December 2016 Yahoo said the actual number of victims was closer to one billion (read: all of its users). 

Baratov, 22, is a Canadian and Kazakh national who lived in Canada (he’s now being held in California). He was charged with being hired by two Russian FSB officer defendants in this case — Dmitry Dokuchaev, 33, and Igor Sushchin, 43 — to hack into the email accounts of thousands of individuals. According to prosecutors, Baratov’s role in the charged conspiracy was to hack webmail accounts of individuals of interest to the FSB and send those accounts’ passwords to Dokuchaev in exchange for money.

Karim Baratov, a.k.a. “Karim Taloverov,” as pictured in 2014 on his own site, mr-karim.com.

Baratov operated several businesses, advertised openly online, that could be hired to hack into email accounts at the world’s largest email providers, including Google, Yahoo and Yandex. As part of his plea agreement, Baratov not only admitted to agreeing and attempting to hack at least 80 webmail accounts on behalf of one of his FSB co-conspirators, but also to hacking more than 11,000 webmail accounts in total from in or around 2010 until his arrest by Canadian authorities.

Shortly after Baratov’s arrest and indictment, KrebsOnSecurity examined many of the email hacking services he operated and that were quite clearly tied to his name. One such business advertised the ability to steal email account passwords without actually changing the victim’s password. According to prosecutors, Baratov’s service relied on “spear phishing” emails that targeted individuals with custom content and enticed recipients to click a booby-trapped link.

For example, one popular email hacking business registered to Baratov was xssmail[dot]com, which for several years advertised the ability to break into email accounts of virtually all of the major Webmail providers. XSS is short for “cross-site-scripting.” XSS attacks rely on vulnerabilities in Web sites that don’t properly parse data submitted by visitors in things like search forms or anyplace one might enter data on a Web site.

Archive.org’s cache of xssmail.com

In the context of phishing links, the user clicks the link and is actually taken to the domain they think they are visiting (e.g., yahoo.com), but the vulnerability allows the attacker to inject malicious code into the page that the victim is visiting.

This can include fake login prompts that send any data the victim submits directly to the attacker. Alternatively, it could allow the attacker to steal “cookies,” text files that many sites place on visitors’ computers to validate whether they have visited the site previously, as well as if they have authenticated to the site already.

Baratov pleaded guilty to nine counts, including one count of aggravated identity theft and eight violations of the Computer Fraud and Abuse Act. His sentencing hearing is scheduled for Feb. 20, 2018. The aggravated identity theft charge carries a mandatory two-year sentence; each of the other counts is punishable by up to 10 years in jail and fines of $250,000, although any sentence he receives will likely be heavily tempered by U.S. federal sentencing guidelines.

Meanwhile, Baratov’s co-defendant Dokuchaev is embroiled in his own legal worries in Russia, charges that could carry a death sentence. He and his former boss Sergei Mikhailov — once deputy chief of the FSB’s Center for Information Security — were arrested in December 2016 by Russian authorities and charged with treason. Also charged with treason in connection with that case was Ruslan Stoyanov, a senior employee at Russian security firm Kaspersky Lab.

There are many competing theories for the reasons behind their treason charges, some of which are explored in this Washington Post story. I have my own theory, detailed in my January 2017 piece, A Shakeup in Russia’s Top Cybercrime Unit.

,

CryptogramFriday Squid Blogging: Research into Squid-Eating Beaked Whales

Beaked whales, living off the coasts of Ireland, feed on squid.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TEDBrand-new TED Talks from TEDWomen 2017: A note from the curator

This year’s TEDWomen in New Orleans was a truly special conference, at a vital moment, and I’m sure the ripples will be felt for a long time to come. The theme this year was bridges: we build them, we cross them, sometimes we even burn them. Our speakers talked about the physical bridges we need for access and connection as well as the metaphoric ones we need to bridge the differences that increasingly divide us.

Along with the inspiring TED Talks and often game-changing ideas that were shared on the TEDWomen stage, my biggest take-away from this year’s conference was once again the importance of community and the opportunity this conference offers for women and a few good men from different countries, cultures, religions, backgrounds, from so many different sectors of work and experience, to come together to listen, to learn, to connect with each other, to build their own bridges.

Take a look at all the presentations with our detailed speaker-by-speaker coverage on the TED Blog. Between sessions, we hosted four great Facebook Live conversations in the Blue Room, diving deeper into ideas from talks with WNYC’s Manoush Zomorodi. Catch up on them right here.

And we’re starting to post TED Talks from our event to share freely with the world. First up: Gretchen Carlson, whose timely talk about sexual harassment is relevant and resonant for so many women and men at this #MeToo moment. It’s already been viewed by over 800,000 people!

Gretchen calls on women who have experienced sexual harassment to “Be Fierce!” (also the title of her recent book). Luvvie Ajayi, in another TEDWomen Talk being released today, encourages not just women, but all of us to be courageous and to Speak Up when we have something to say, even if it makes others uncomfortable — especially if it makes the speaker uncomfortable. “I want us to leave this world better than we found it,” she told the audience in her hopeful and uplifting talk, “And how I choose to effect change is by speaking up, by being the first and by being the domino.”

And don’t miss Teresa Njoroge’s powerful talk on women in prison. At Clean Start Kenya, Njoroge builds bridges connecting the formerly imprisoned to the outside world and vice versa.

And one of the highlights of the conference for me, my conversation with Leah Chase, the Queen of Creole Cuisine. Chase’s New Orleans restaurant Dooky Chase changed the course of American history over gumbo and fried chicken. During the civil rights movement, it was a place where white and black people came together, where activists planned protests and where the police entered but did not disturb — and it continues to operate in the same spirit today. In our talk, she shares her wisdom from a lifetime of activism, speaking up and cooking.

Follow me on Facebook and Twitter for updates as we publish more TEDWomen 2017 videos in coming weeks and months. And please share your thoughts with me here in the comments about TEDWomen, these videos and ideas you have for speakers at TEDWomen 2018. We’re always looking for great ideas!

Gratefully,
— Pat


CryptogramNeedless Panic Over a Wi-Fi Network Name

A Turkish Airlines flight made an emergency landing because someone named his wireless network (presumably from his smartphone) "bomb on board."

In 2006, I wrote an essay titled "Refuse to be Terrorized." (I am also reminded of my 2007 essay, "The War on the Unexpected.") A decade later, it seems that the frequency of incidents like the one above is less, although not zero. Progress, I suppose.

Worse Than FailureError'd: Get Inspired

"The great words of inspirationalAuthor.firstName inspirationalAuthor.lastName move me every time," wrote Geoff O.

 

Mark R. writes, "Some countries out there must have some crazy postal codes."

 

"I certainly hope the irony isn't lost on the person who absolutely failed sending out a boiler plate email on the subject of machine learning!" Mike C. wrote.

 

Adrian R. writes, "I'd love to postpone this update, but I feel like I'm playing button roulette."

 

"It's pretty cool that Magneto asks to do a backup before an upgrade," Andrea wrote, "It's only 14TB."

 

Ky W. writes, "I sure hope that these developers didn't write the avionics software."

 


CryptogramWarrant Protections against Police Searches of Our Data

The cell phones we carry with us constantly are the most perfect surveillance device ever invented, and our laws haven't caught up to that reality. That might change soon.

This week, the Supreme Court will hear a case with profound implications for your security and privacy in the coming years. The Fourth Amendment's prohibition of unlawful search and seizure is a vital right that protects us all from police overreach, and the way the courts interpret it is increasingly nonsensical in our computerized and networked world. The Supreme Court can either update current law to reflect the world, or it can further solidify an unnecessary and dangerous police power.

The case centers on cell phone location data and whether the police need a warrant to get it, or if they can use a simple subpoena, which is easier to obtain. Current Fourth Amendment doctrine holds that you lose all privacy protections over any data you willingly share with a third party. Your cellular provider, under this interpretation, is a third party with whom you've willingly shared your movements, 24 hours a day, going back months -- even though you don't really have any choice about whether to share with them. So police can request records of where you've been from cell carriers without any judicial oversight. The case before the court, Carpenter v. United States, could change that.

Traditionally, information that was most precious to us was physically close to us. It was on our bodies, in our homes and offices, in our cars. Because of that, the courts gave that information extra protections. Information that we stored far away from us, or gave to other people, afforded fewer protections. Police searches have been governed by the "third-party doctrine," which explicitly says that information we share with others is not considered private.

The Internet has turned that thinking upside-down. Our cell phones know who we talk to and, if we're talking via text or e-mail, what we say. They track our location constantly, so they know where we live and work. Because they're the first and last thing we check every day, they know when we go to sleep and when we wake up. Because everyone has one, they know whom we sleep with. And because of how those phones work, all that information is naturally shared with third parties.

More generally, all our data is literally stored on computers belonging to other people. It's our e-mail, text messages, photos, Google docs, and more -- all in the cloud. We store it there not because it's unimportant, but precisely because it is important. And as the Internet of Things computerizes the rest of our lives, even more data will be collected by other people: data from our health trackers and medical devices, data from our home sensors and appliances, data from Internet-connected "listeners" like Alexa, Siri, and your voice-activated television.

All this data will be collected and saved by third parties, sometimes for years. The result is a detailed dossier of your activities more complete than any private investigator -- or police officer -- could possibly collect by following you around.

The issue here is not whether the police should be allowed to use that data to help solve crimes. Of course they should. The issue is whether that information should be protected by the warrant process that requires the police to have probable cause to investigate you and get approval by a court.

Warrants are a security mechanism. They prevent the police from abusing their authority to investigate someone they have no reason to suspect of a crime. They prevent the police from going on "fishing expeditions." They protect our rights and liberties, even as we willingly give up our privacy to the legitimate needs of law enforcement.

The third-party doctrine never made a lot of sense. Just because I share an intimate secret with my spouse, friend, or doctor doesn't mean that I no longer consider it private. It makes even less sense in today's hyper-connected world. It's long past time the Supreme Court recognized that a months'-long history of my movements is private, and my e-mails and other personal data deserve the same protections, whether they're on my laptop or on Google's servers.

This essay previously appeared in the Washington Post.

Details on the case. Two opinion pieces.

I signed on to two amicus briefs on the case.

EDITED TO ADD (12/1): Good commentary on the Supreme Court oral arguments.

Planet Linux AustraliaPia Waugh: My Canadian adventure exploring FWD50

I recently went to Ottawa for the FWD50 conference run by Rebecca and Alistair Croll. It was my first time in Canada, and it combined a number of my favourite things. I was at an incredible conference with a visionary and enthusiastic crowd, made up of government (international, Federal, Provincial and Municipal), technologists, civil society, industry and academia. The calibre of discussions and planning for greatness was inspiring.

There were a number of people I have known for years but never met in meatspace, and equally there were a lot of new faces doing amazing things. I got to spend time with the excellent people at the Treasury Board of Canada Secretariat, including the Canadian Digital Service and the Office of the CIO, and by wonderful coincidence I got to see (briefly) the folk from the Open Government Partnership who happened to be in town. Finally I got to visit the gorgeous Canadian Parliament, see their extraordinary library, and wander past some Parliamentary activity, which always helps me feel more connected to (and therefore empowered to contribute to) democracy in action.

Thank you to Alistair Croll who invited me to keynote this excellent event and who, with Rebecca Croll, managed to create a truly excellent event with a diverse range of ideas and voices exploring where we could or should go as a society in future. I hope it is a catalyst for great things to come in Canada and beyond.

For those in Canada who are interested in the work in New Zealand, I strongly encourage you to tune into the D5 event in February, which will have some of our best initiatives on display, and to follow our new Minister for Broadband, Digital and Open Government (such an incredible combination in a single portfolio), Minister Clare Curran. You can keep up with our “Service Innovation” work at our blog or by subscribing to our mailing list. I also encourage you to read this inspiring “People’s Agenda” by a civil society organisation in NZ, which codesigned a vision for the type of society desired in New Zealand’s future.

Highlights

  • One of the great delights of this trip was seeing a number of people in person for the first time who I know from the early “Gov 2.0″ days (10 years ago!). It was particularly great to see Thom Kearney from Canada’s TBS and his team, Alex Howard (@digiphile) who is now a thought leader at the Sunlight Foundation, and Olivia Neal (@livneal) from the UK CTO office/GDS, Joe Powell from OGP, as well as a few friends from Linux and Open Source (Matt and Danielle amongst others).
  • The speech by Canadian Minister of the Treasury Board Secretariat (which is responsible for digital government) the Hon Scott Brison, was quite interesting and I had the chance to briefly chat to him and his advisor at the speakers drinks afterwards about the challenges of changing government.
  • Meeting with Canadian public servants from a variety of departments including the transport department, innovation and science, as well as the Treasury Board Secretariat and of course the newly formed Canadian Digital Service.
  • Meeting people from a range of sub-national governments, including the excellent folk from Peel and Hillary Hartley from Ontario, and hearing about the quite inspiring work to transform organisational structures and digital and other services, the adoption of microservice-based infrastructure, and the use of “labs” for experimentation.
  • It was fun meeting some CIO/CTOs from Canada, Estonia, UK and other jurisdictions, and sharing ideas about where to from here. I was particularly impressed with Alex Benay (Canadian CIO) who is doing great things, and with Siim Sikkut (Estonian CIO) who was taking the digitisation of Estonia into a new stage of being a broader enabler for Estonians and for the world. I shared with them some of my personal lessons learned around digital iteration vs transformation, including from the DTO in Australia (which has changed substantially, including a name change since I was there). Some notes of my lessons learned are at http://pipka.org/2017/04/03/iteration-or-transformation-in-government-paint-jobs-and-engines/.
  • My final highlight was how well my keynote and other talks were taken. People were really inspired to think big picture and I hope it was useful in driving some of those conversations about where we want to collectively go and how we can better collaborate across geopolitical lines.

Below are some photos from the trip, and some observations from specific events/meetings.

My FWD50 Keynote – the Tipping Point

I was invited to give a keynote at FWD50 about the tipping point we have gone through and how we, as a species, need to embrace the major paradigm shifts that have already happened, and decide what sort of future we want and work towards that. I also suggested some predictions about the future and examined the potential roles of governments (and public sectors specifically) in the 21st century. The slides are at https://docs.google.com/presentation/d/1coe4Sl0vVA-gBHQsByrh2awZLa0Nsm6gYEqHn9ppezA/edit?usp=sharing and the full speech is on my personal blog at http://pipka.org/2017/11/08/fwd50-keynote-the-tipping-point.

I also gave a similar keynote speech at the NetHui conference in New Zealand the week after, which was recorded for those who want to see or hear the content at https://2017.nethui.nz/friday-livestream

The Canadian Digital Service

The CDS was only set up about a year ago and has a focus on building great services for users, with service design and user needs at the heart of its work. The team has some excellent people with diverse skills, and we spoke about what is needed to do “digital government” and what that even means, as well as the parallels and interdependencies between open government and digital government. They spoke about an early piece of work they did before getting set up: a national consultation about the needs of Canadians (https://digital.canada.ca/beginning-the-conversation/), which had some interesting insights. They were very focused on open source, standards, building better ways to collaborate across government(s), and building useful things. They also spoke about their initial work on capability assessment and development across the public sector. I spoke about my experience in Australia and New Zealand, but also about working and talking with teams around the world. I gave an informal outline of the work of our Service Innovation and Service Integration team in DIA, which was helpful for getting some feedback and peer review, and they were very supportive and positive. It was an excellent discussion; thank you all!

CivicTech meetup

I was invited to talk to the CivicTech group meetup in Ottawa (https://www.meetup.com/YOW_CT/events/243891738/) about the roles of government and citizens into the future. I gave a quick version of the keynote I gave at linux.conf.au 2017 (pipka.org/2017/02/18/choose-your-own-adventure-keynote/), which explores paradigm shifts and the roles of civic hackers and activists in helping forge the future whilst also considering what we should (and shouldn’t) take into the future with us. It included my amusing change.log of the history of humans and threw down the gauntlet for civic hackers to lead the way, be the light :)

CDS Halloween Mixer

The Canadian Digital Service does a “mixer” social event every 6 weeks, and this one landed on Halloween, which was also my first ever Halloween celebration. I had a traditional “beavertail”, a flat cinnamon doughnut with lemon, which was amazing! It was fun to hang out, but of course I had to retire early thanks to jet lag.

Workshop with Alistair

On the first day of FWD50 I helped Alistair Croll with a day-long workshop exploring the future. We thought we’d have a small interactive group and ended up with 300 people, so it was a great mind meld across different ideas, sectors, technologies, challenges and opportunities. I gave a talk on culture change in government, largely influenced by a talk I gave a few years ago called “Collaborative innovation in the public service: Game of Thrones style” (http://pipka.org/2015/01/04/collaborative-innovation-in-the-public-service-game-of-thrones-style/). People responded well and it created a lot of discussion about the cultural challenges and barriers in government.

Thanks

Finally, just a quick shout out and thanks to Alistair for inviting me to such an amazing conference, to Rebecca for getting me organised, to Danielle and Matthew for your companionship and support, to everyone for making me feel so welcome, and to the following folk who inspired, amazed and colluded with me. In chronological order of meeting: Sean Boots, Stéphane Tourangeau, Ryan Androsoff, Mike Williamson, Lena Trudeau, Alex Benay (Canadian Gov CIO), Thom Kearney and all the TBS folk, Siim Sikkut from Estonia, James Steward from UK, and all the other folk I met at FWD50, in between feeling so extremely unwell!

Thank you Canada, I had a magnificent time and am feeling inspired!

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 4, week 9

Well, we’re almost at the end of the year!! It’s a time when students and teachers alike start to look forward to the long, summer break. Generally a time for celebrations and looking back over the highlights of the year – which is reflected in the activities for the final lessons of the Understanding Our […]

,

LongNowMusic, Time and Long-Term Thinking: Brian Eno Expands the Vocabulary of Human Feeling

Brian Eno’s creative activities defy categorization. Widely known as a musician and producer, Eno has expanded the frontiers of audio and visual art for decades, and posited new ways of approaching creativity in general. He is a thinker and speaker, activist and eccentric. He formulated the idea of the Big Here and Long Now—a central conceptual underpinning of The Long Now Foundation, which he helped found with Stewart Brand and Danny Hillis in 01996. Eno’s artistic career has often dealt closely with concepts of time, scale, and, as he puts it in the liner notes to Apollo, “expanding the vocabulary of human feeling.”

Ambient and Generative Art

Brian Eno coined the term ‘ambient music’ to describe a kind of music meant to influence an ambience without necessarily demanding the listener’s full attention. The notes accompanying his 01978 album Ambient 1: Music for Airports differentiate it from the commercial music produced specifically for background listening by companies such as Muzak, Inc. in the mid-01900s. Eno explains that ambient music should enhance — not blanket — an environment’s acoustic and atmospheric characteristics, to calming and thought-inducing effect. It has to accommodate various levels of listening engagement, and therefore “must be as ignorable as it is interesting” (Eno 296).

Ambient music can have a timeless quality to it. The absence of a traditional structure of musical development withholds a clear beginning, middle or end, tapping into a sense of deeper, slower processes. It lets you “settle into time a little bit,” as Eno said in the first of Long Now’s SALT talks. As Time Magazine writes, “the theme of time, foreshortened or elongated, is a defining feature of Eno’s musical and visual adventures. But it takes a long lens, pointing back, to bring into focus the ways in which his influence has seeped into the mainstream.”

Eno’s use of the term ‘ambient’ was, however, a product of a long process of musical development. He had been thinking specifically about this kind of music for several years already, and the influence of minimalist artists such as Terry Riley, Steve Reich and Philip Glass had long shaped his musical ideas and techniques. He also drew on many other genres, including Krautrock bands such as Tangerine Dream and Can, whose music was contemporaneous and influential in Eno’s early collaborations with Robert Fripp, e.g. (No Pussyfooting). While their music might not necessarily fall into the genre ‘ambient,’ David Sheppard notes that “Eno and Fripp’s lengthy essays shared with Krautrock a disavowal of verse/chorus orthodoxy and instead relied on an essentially static musical core with only gradual internal harmonic developments” (142). In his autobiography, Eno also points to developments in audio technology as key in the development of the genre, as well as one particularly insightful experience he had while bedridden after an accident:

New sound-shaping and space-making devices appeared on the market weekly (and still do), synthesizers made their clumsy but crucial debut, and people like me just sat at home night after night fiddling around with all this stuff, amazed at what was now possible, immersed in the new sonic worlds we could create.

And immersion was really the point: we were making music to swim in, to float in, to get lost inside.

This became clear to me when I was confined to bed, immobilized by an accident in early 01975. My friend Judy Nylon had visited, and brought with her a record of 17th-century harp music. I asked her to put it on as she left, which she did, but it wasn’t until she’d gone that I realized that the hi-fi was much too quiet and one of the speakers had given up anyway. It was raining hard outside, and I could hardly hear the music above the rain — just the loudest notes, like little crystals, sonic icebergs rising out of the storm. I couldn’t get up and change it, so I just lay there waiting for my next visitor to come and sort it out, and gradually I was seduced by this listening experience. I realized that this was what I wanted music to be — a place, a feeling, an all-around tint to my sonic environment.

It was not long after this realization that Eno released the album Discreet Music, which he considers to be an ambient work, mentioning a conceptual likeness to Erik Satie’s Furniture Music. One of the premises behind its creation was that it would be background for Robert Fripp to play over in concerts, and the title track is about half an hour long — as much time as was available to Eno on one side of a record.

It is also an early example in his discography of what later became another genre closely associated with Eno and with ambient: generative music. In the liner notes — which include the story of the broken speaker epiphany — he writes:

Since I have always preferred making plans to executing them, I have gravitated towards situations and systems that, once set into operation, could create music with little or no intervention on my part.

That is to say, I tend towards the roles of planner and programmer, and then become an audience to the results.

This notion of creating a system that generates an output is an idea that artists had considered previously. In fact, in the 18th century even Mozart and others experimented with a ‘musical dice game’ in which the numerical results of rolling dice ‘generated’ a song. More relevant to Brian Eno’s use of generative systems, however, was the influence of 20th century composers such as John Cage. David Sheppard’s biography of Brian Eno describes how Tom Phillips — a teacher at Ipswich School of Art where Eno studied painting in the mid 01960s — introduced him to the musical avant garde scene with the works of Cage, Cornelius Cardew, and the previously mentioned minimalists Reich, Glass and Riley (Sheppard 35–41). These and other artists exposed Eno to ideas such as aleatory and minimalist music, tape experimentation, and performance or process-based musical concepts.

Eno notes Steve Reich’s influence on his generative music, acknowledging that “indeed a lot of my interest was directly inspired by Steve Reich’s sixties tape pieces such as Come Out and It’s Gonna Rain” (Eno 332). And looking back on a 01970 performance by the Philip Glass Ensemble at the Royal College of Art, Brian Eno highlights its impact on him:

This was one of the most extraordinary musical experiences of my life — sound made completely physical and as dense as concrete by sheer volume and repetition. For me it was like a viscous bath of pure, thick energy. Though he was at that time described as a minimalist, this was actually one of the most detailed musics I’d ever heard. It was all intricacy and exotic harmonics. (Sheppard 63–64)

The relationship between minimalism and intricacy, in a sense, is what underlies the concept of generative music. The artist designs a system with inputs which, when compared to the plethora of outputs, appear quite simple. Steve Reich’s It’s Gonna Rain is, in fact, simply a single 1.8 second recording of a preacher shouting “It’s gonna rain!” played simultaneously on two tape recorders. Due to the inconsistencies in the two devices’ hardware, however, the recordings play at slightly different speeds, producing over 17 minutes of phasing in which the relationship between the two recordings constantly changes.

Brian Eno has taken this capacity for generative music to create complexity out of simplicity much further. Discreet Music (01975) used a similar approach, but started with recordings of different lengths, used an echo system, and altered timbre over time. The sonic possibilities opened by adding just a few more variables are vast.
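
As a rough illustration of the phasing idea, here is a small Python sketch (my own illustration, with an assumed speed difference rather than a measurement of Reich’s actual tape machines) showing how quickly two near-identical loops drift apart:

    # Phasing, sketched: two copies of the same 1.8-second loop, one machine
    # running slightly slow. The speed difference is an assumed value chosen
    # for illustration, not a measurement of Reich's tape recorders.

    LOOP_SECONDS = 1.8     # length of the "It's gonna rain!" loop
    SPEED_RATIO = 0.995    # machine B runs 0.5% slower than machine A (assumption)

    def lag_after(n_loops):
        """Seconds by which machine B trails machine A after n loops of machine A."""
        return n_loops * LOOP_SECONDS * (1.0 / SPEED_RATIO - 1.0)

    for n in (1, 50, 100, 199):
        print(f"after {n:3d} passes: machine B lags by {lag_after(n):5.2f} s")
    # The lag grows by about 9 ms per pass, so after roughly 200 passes (about
    # six minutes) the copies are a full loop apart and momentarily realign --
    # a tiny rule producing a long, constantly shifting structure.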

This experimental approach to creativity is just one of many that Eno explored, including some non-musical means of prompting unexpected outputs. The same year that Discreet Music was released, he collaborated with painter Peter Schmidt to produce Oblique Strategies: Over One Hundred Worthwhile Dilemmas.

The work is a set of cards, each one with an aphorism designed to help people think differently or to approach a problem from a different angle. These include phrases such as “Honour thy error as a hidden intention,” “Work at a different speed,” and “Use an old idea.” Schmidt had created something a few years earlier along the same lines that he called ‘Thoughts Behind the Thoughts.’ There was also inspiration to be drawn from John Cage’s use of the I Ching to direct his musical compositions and George Brecht’s 01963 Water Yam Box. Like a generative system, the Oblique Strategies provides a guiding rule or principle that is specific enough to focus creativity but general enough to yield an unknown outcome, dependent on a multitude of variables interacting within the framework of the strategy.

Three decades later, generative systems remained a central inspiration for Eno and a source of interesting cross-disciplinary collaboration. In 02006, he discussed them with Will Wright, creator of popular video game series The Sims, at a Long Now SALT talk:

Wright observed that science is all about compressing reality to minimal rule sets, but generative creation goes the opposite direction. You look for a combination of the fewest rules that can generate a whole complex world that will always surprise you, yet within a framework that stays recognizable. “It’s not engineering and design,” he said, “so much as it is gardening. You plant seeds. Richard Dawkins says that a willow seed has only about 800K of data in it.” — Stewart Brand

Brian Eno has always been interested in this explosion of possibilities, and has in recent years created generative art that incorporates both audio and visuals. He notes that his work 77 Million Paintings would take about 10,000 years to run through all of its possibilities — at its slowest setting. Long Now produced the North American premiere of 77 Million Paintings at Yerba Buena Center for the Arts in 02007, and members were treated to a surprise visit from Mr. Eno who spoke about his work and Long Now.

Eno also designed an art installation for The Interval, Long Now’s cafe-bar-museum venue in San Francisco. “Ambient Painting #1” is the only example of Brian’s generative light work in America, and the only ambient painting of his that is currently on permanent public display anywhere.

Ambient Painting #1, by Brian Eno. Photo by Gary Wilson.

Another generative work called Bloom, created with Peter Chilvers, is available as an app.

Part instrument, part composition and part artwork, Bloom’s innovative controls allow anyone to create elaborate patterns and unique melodies by simply tapping the screen. A generative music player takes over when Bloom is left idle, creating an infinite selection of compositions and their accompanying visualisations. — Generativemusic.com

Eno’s interest in time and scale (among other things) was shared by Long Now co-founder Stewart Brand, and they were in close correspondence in the years leading up to the creation of The Long Now Foundation. Eno’s 01995 diary, published in part in his autobiography, describes that correspondence in its introduction:

My conversation with Stewart Brand is primarily a written one — in the form of e-mail that I routinely save, and which in 1995 alone came to about 100,000 words. Often I discuss things with him in much greater detail than I would write about them for my own benefit in the diary, and occasionally I’ve excerpted from that correspondence. — Eno, ix

Out of Eno’s involvement with the establishment of The Long Now Foundation emerged his essay “The Big Here and Long Now”, which describes his experiences with small-scale perspectives and the need for larger ones, as well as the artist’s role in social change.

This imaginative process can be seeded and nurtured by artists and designers, for, since the beginning of the 20th century, artists have been moving away from an idea of art as something finished, perfect, definitive and unchanging towards a view of artworks as processes or the seeds for processes — things that exist and change in time, things that are never finished. Sometimes this is quite explicit — as in Walter de Maria’s “Lightning Field,” a huge grid of metal poles designed to attract lightning. Many musical compositions don’t have one form, but change unrepeatingly over time — many of my own pieces and Jem Finer’s Artangel installation “LongPlayer” are like this. Artworks in general are increasingly regarded as seeds — seeds for processes that need a viewer’s (or a whole culture’s) active mind in which to develop. Increasingly working with time, culture-makers see themselves as people who start things, not finish them.

And what is possible in art becomes thinkable in life. We become our new selves first in simulacrum, through style and fashion and art, our deliberate immersions in virtual worlds. Through them we sense what it would be like to be another kind of person with other kinds of values. We rehearse new feelings and sensitivities. We imagine other ways of thinking about our world and its future.

[…] In this, the 21st century, we may need icons more than ever before. Our conversation about time and the future must necessarily be global, so it needs to be inspired and consolidated by images that can transcend language and geography. As artists and culture-makers begin making time, change and continuity their subject-matter, they will legitimise and make emotionally attractive a new and important conversation.

The Chime Generator and January 07003

Brian Eno’s involvement with Long Now began through his discussions with Stewart Brand about time and long-term thinking, and the need for a carefully crafted sonic experience to help The Clock evoke deep time for its visitors posed a challenge Eno was uniquely suited to take on.

From its earliest conception, the imagined visit to the 10,000-Year Clock has had aural experience at its core. One of Danny Hillis’ earliest refrains about The Clock evokes this:

It ticks once a year, bongs once a century, and the cuckoo comes out every millennium. —Danny Hillis

In the years of brainstorming and design that have molded this vision into a tangible object, a much more detailed and complicated picture has come into focus, but sound has remained central; one of the largest components of the 10,000-Year Clock will be its Chime Generator.

Rather than a bong per century, visitors to the Clock will have the opportunity to hear it chime 10 bells in a unique sequence each day at noon. The story of how this came to be is told by Mr. Eno himself in the liner notes of January 07003: Bell Studies for The Clock of the Long Now, a collection of musical experiments he synthesized and recorded in 02003:

When we started thinking about The Clock of the Long Now, we naturally wondered what kind of sound it could make to announce the passage of time. Bells have stood the test of time in their relationship to clocks, and the technology of making them is highly evolved and still evolving. I began reading about bells, discovering the physics of their sounds, and became interested in thinking about what other sorts of bells might exist. My speculations quickly took me out of the bounds of current physical and material possibilities, but I considered some license allowable since the project was conceived in a time scale of thousands of years, and I might therefore imagine bells with quite different physical properties from those we now know (Eno 3).

Bells have a long history of marking time, so their inclusion in The Clock is a natural fit. Throughout this long history, they’ve also commonly been used in churches, meditation halls and yoga studios to offer a resonant ambiance in which to contemplate a connection to something bigger, much as The Clock’s vibrations will help inspire an awareness of one’s place in deep time. Furthermore, bells were central to some early forms of generative music. While learning about their history, Eno found a vast literature on the ways bells had been used in Britain to explore the combinatorial possibilities afforded by following a few simple rules:

Stated briefly, change-ringing is the art (or, to many practitioners, the science) of ringing a given number of bells such that all possible sequences are used without any being repeated. The mathematics of this idea are fairly simple: n bells will yield n! sequences or changes. The ! is not an expression of surprise but the sign for a factorial: a direction to multiply the number by all those lower than it. So 3 bells will yield 3 x 2 x 1 = 6 changes, while 4 bells will yield 4 x 3 x 2 x 1 = 24 changes. The ! process does become rather surprising as you continue it for higher values of n: 5! = 120, and 6! = 720 — and you watch the number of changes increasing dramatically with the number of bells. — Eno 4

Eno noticed that 10 bells in this context will provide 3,628,800 sequences. Ring one of those each day and you’ll be occupied for almost exactly 10,000 years, the proposed lifespan of The Clock.
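
The arithmetic is easy to check. This short Python snippet simply restates the figures above (it is not code from Eno or Long Now):

    # Change-ringing arithmetic: n bells give n! distinct sequences ("changes").
    # Ten bells, rung in one new sequence per day, last just under 10,000 years.

    from math import factorial

    for n in range(3, 11):
        print(f"{n:2d} bells -> {factorial(n):>9,} changes")

    days = factorial(10)                          # 3,628,800 daily sequences
    print(f"{days:,} days is about {days / 365.25:,.0f} years")  # roughly 9,935 years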

Following this line of thinking, he imagined using the patterns played by the bells as a method of encoding the amount of time that had elapsed since The Clock had started ringing them. Writing in 02003, he says:

I wanted to hear the bells of the month of January, 07003 — approximately halfway through the life of the Clock.

I had no idea how to generate this series, but I had a good idea who would.

I wrote to Danny Hillis asking whether he could come up with an algorithm for the job. Yes, he wrote back, and in fact he could come up with an algorithm for generating all the possible algorithms for that job. Not having the storage space for a lot of extra algorithms in my studio, I decided to settle for just the one. — Eno 6

And so, the pattern The Clock’s bells will ring was set. Using a start point (02003 in this case), one can extrapolate the order in which the Bells will ring for a given day in the future. The title track of the album features the synthesized bells played in each of the 31 sequences for the month of January in the year 07003. Other tracks on the album use different algorithms or different bells to explore alternative possibilities; taken together, the album is distinctly “ambient” in Eno’s tradition, but also unique within his work for its minimalism and procedurality.

The procedures guiding the composition are strict enough that they can be written in computer code. A Long Now Member named Sean Burke was kind enough to create a webpage that illustrates how this works. The site allows visitors to enter a future date and receive a MIDI file of the chimes from that day. You can also download the algorithm itself in the form of a Perl script or just grab the MIDI data for all 10,000 years and synthesize your own bells.
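
The article doesn’t spell out the algorithm Hillis supplied, and Sean Burke’s Perl script remains the reference implementation. Purely as an illustration of the underlying idea, the sketch below (hypothetical, in C, using the factorial number system) maps a day index between 0 and 10! − 1 to one of the 3,628,800 orderings of 10 bells; it should not be read as the Clock’s actual algorithm:

#include <stdio.h>

#define BELLS 10

/* Convert a day index (0 .. 10!-1) into a permutation of the 10 bells. */
static void day_to_permutation(unsigned long day, int order[BELLS]) {
    int pool[BELLS];
    unsigned long radix = 1;

    for (int i = 0; i < BELLS; i++)
        pool[i] = i + 1;                /* bells numbered 1..10         */
    for (int i = 2; i < BELLS; i++)
        radix *= i;                     /* 9!, the first "digit" radix  */

    for (int i = 0; i < BELLS; i++) {
        int idx = (int)(day / radix);   /* which remaining bell is next */
        day %= radix;
        order[i] = pool[idx];
        for (int j = idx; j < BELLS - 1 - i; j++)
            pool[j] = pool[j + 1];      /* remove the chosen bell       */
        if (BELLS - 1 - i > 0)
            radix /= (BELLS - 1 - i);
    }
}

int main(void) {
    int order[BELLS];

    day_to_permutation(1234567, order); /* an arbitrary day index */
    for (int i = 0; i < BELLS; i++)
        printf("%d ", order[i]);
    printf("\n");
    return 0;
}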

If the bell ringing algorithm is a seed, in what soil can it be planted and expected to live its full life? Compact disks, Perl scripts and MIDI files have their uses, of course, but The Clock has to really last in a physical, functional sense for many thousands of years. To serve this purpose, the Chime Generator manifests the algorithm in stainless steel Geneva wheels rotating on bearings of silicon nitride.

Eno’s Chime Generator prototype. Photo by Because We Can

One of the first prototypes for this mechanism resides at The Interval. In its operation, one can see that the Geneva wheels rotate at different intervals because of their varying numbers of slots. Together, the Geneva wheels represent the ringing algorithm and sequentially engage the hammers in all 3.6 million permutations. For this prototype, the hammers strike Tibetan Bowl Gongs to sound the notes, but any type of bell can be used.

The full scale Chime Generator will be vertically suspended in the Clock shaft within the mountain. The Geneva wheels will be about 8 feet in diameter, with the full mechanism standing over seventy feet in height.

The bells for the full scale Chime Generator won’t be Tibetan Bowl Gongs like in the smaller prototype above. Though testing has been done within the Clock chamber to find its resonant frequency, the exact tuning and design of the Clock’s bells will be left until the chamber is finished and most of the Clock is installed in order to maximize their ability to resonate within the space.

Like much of Brian Eno’s work, the chimes in the 10,000-Year Clock draw together far-flung traditions, high and low tech, and science and art to create a meditative experience, unique in a given moment, but expansive in scale and scope. They encourage the listener to live and to be present in the moment, the “now,” but to feel that moment expanding forward and backward through time, literally to experience the “Long Now.”



This is the first of a series of articles, “Music, Time and Long-Term Thinking,” in which we will discuss music and musicians who have engaged various aspects of long-term thinking, both historically and in the contemporary scene.

Sociological ImagesWhen Social Class Chooses Your College Major

Originally Posted at Discoveries

Many different factors go into deciding your college major — your school, your skills, and your social network can all influence what field of study you choose. This is an important decision, as social scientists have shown it has consequences well into the life course — not only do college majors vary widely in terms of earnings across the life course, but income gaps between fields are often larger than gaps between those with college degrees and those without them. Natasha Quadlin finds that this gap is in many ways due to differences in funding at the start of college that determine which majors students choose.

Photo by Tom Woodward, Flickr CC

Quadlin draws on data from the Postsecondary Transcript Study, a collection of over 700 college transcripts from students who were enrolled in postsecondary education in 2012. Focusing on students’ declared major during their freshman year, Quadlin analyzes the relationship between the source of funding a student gets — loans, grants, or family funds — and the type of major the student initially chooses — applied versus academic and STEM versus non-STEM. She finds that students who pay for college with loans are more likely to major in applied non-STEM fields, such as business and nursing, and they are less likely to be undeclared. However, students whose funding comes primarily from grants or family members are more likely to choose academic majors like sociology or English and STEM majors like biology or computer science.

In other words, low- and middle-income students with significant amounts of loan debt are likely to choose “practical” applied majors that more quickly result in full-time employment. Conversely, students with grants and financially supportive parents, regardless of class, are more likely to choose what are considered riskier academic and STEM tracks that are more challenging and take longer to turn into a job. Since middle- to upper-class students are more likely to get family assistance and merit-based grants, this means that less advantaged students are most likely to rely on loans. The problem, Quadlin explains, is that applied non-STEM majors have relatively high wages at first, but very little advancement over time, while academic and STEM majors have more barriers to completion but experience more frequent promotions. The result is that inequalities established at the start of college are often maintained throughout people’s lives.

Jacqui Frost is a PhD candidate in sociology at the University of Minnesota and the managing editor at The Society Pages. Her research interests include non-religion and religion, culture, and civic engagement.

(View original at https://thesocietypages.org/socimages)

CryptogramNSA "Red Disk" Data Leak

ZDNet is reporting about another data leak, this one from the US Army's Intelligence and Security Command (INSCOM), which is also within the NSA.

The disk image, when unpacked and loaded, is a snapshot of a hard drive dating back to May 2013 from a Linux-based server that forms part of a cloud-based intelligence sharing system, known as Red Disk. The project, developed by INSCOM's Futures Directorate, was slated to complement the Army's so-called distributed common ground system (DCGS), a legacy platform for processing and sharing intelligence, surveillance, and reconnaissance information.

[...]

Red Disk was envisioned as a highly customizable cloud system that could meet the demands of large, complex military operations. The hope was that Red Disk could provide a consistent picture from the Pentagon to deployed soldiers in the Afghan battlefield, including satellite images and video feeds from drones trained on terrorists and enemy fighters, according to a Foreign Policy report.

[...]

Red Disk was a modular, customizable, and scalable system for sharing intelligence across the battlefield, like electronic intercepts, drone footage and satellite imagery, and classified reports, for troops to access with laptops and tablets on the battlefield. Marking files found in several directories imply the disk is "top secret," and restricted from being shared to foreign intelligence partners.

A couple of points. One, this isn't particularly sensitive. It's an intelligence distribution system under development. It's not raw intelligence. Two, this doesn't seem to be classified data. Even the article hedges, using the unofficial term of "highly sensitive." Three, it doesn't seem that Chris Vickery, the researcher that discovered the data, has published it.

Chris Vickery, director of cyber risk research at security firm UpGuard, found the data and informed the government of the breach in October. The storage server was subsequently secured, though its owner remains unknown.

This doesn't feel like a big deal to me.

Slashdot thread.

Worse Than FailureCodeSOD: Aarb!

C++’s template system is powerful and robust enough that template metaprogramming is Turing complete. Given that kind of power, it’s no surprise that pretty much every other object-oriented language eschews templates for code generation.

Java, for example, uses generics: essentially templates without the metaprogramming. We still keep compile-time type-safety and all the benefits of generic programming, but without the complexity of compile-time code generation.

Thierry L inherited a Java application, and the original developer seems to miss that degree of complexity.

public abstract class CentralValidationDistributionAssemblingService<
        DC extends DistributionChannel,
        DU extends DistributionUnit<DC>,
        AC extends AssemblingContext,
        EAC extends AC,
        A extends Assembly<DC>,
        AAR extends AbstractAssemblingResult<DC>,
        AARB extends AbstractAssemblingResultBuilder<DC, AAR>
        >
        implements DistributionAssemblingService<DC, AC, DU, AAR>
{
    //…
}

The best part about this is that the type abbreviations are an onomatopoeia of the choking noises I made when I saw this code:

"DC… DU?… AC-EAC! A-AAR-AARB!"


Sam VargheseErratic All Blacks will need to buckle up for 2019 Cup

At the end of the 2015 Rugby World Cup, New Zealand bid goodbye to six players who had been around for what seemed like forever.

Richie McCaw, Ma’a Nonu, Conrad Smith, Keven Mealamu, Tony Woodcock and Daniel Carter left, some to play in foreign countries, others just opting out.

At that point, nobody really raised the issue of how the All Blacks would adjust, for talented players seem to come in a never-ending stream. This, despite the fact that the six mentioned above were all well above average in ability.

But two years later, the gaps are really showing.

Of the six players mentioned, only Smith played less than 100 international games, but even he was close, with 94.

Together with Nonu, he formed an extremely reliable centre combination, making up for his partner’s occasional tendency to improvise by keeping a cool head and excelling in defence. Prior to them, the last time there was a reliable 12-13 combine, it was Tana Umaga and Aaron Mauger.

Carter was perhaps the most talented fly-half to emerge since Grant Fox and McCaw was a loose forward par excellence and creative leader.

In 2016, there was not much evidence of a big gap in the team, with just one loss, to Ireland in Chicago, the entire year.

But this year, along with some injuries, some players taking up contracts abroad, and the talented Ben Smith taking a break from the game, New Zealand has looked vulnerable on many fronts. They lost two Tests and drew one, splitting a series with the British and Irish Lions at the start of the international season. In sharp contrast, the last time the Lions visited New Zealand, in 2005, they were decisively beaten in all three Tests.

At the start of the year, Sonny Bill Williams, generally a reliable substitute for Nonu in earlier years, was erratic at best, and even earned a red card for a foolish hit on a British Lions player in the second of three Tests. This cost New Zealand that match as they had to play most of the game with 14 players.

The centres combination was also affected by Ryan Crotty’s frequent exits for concussion. At fly-half, even the highly talented Beauden Barrett was erratic on occasion. And his back-up, Lima Sopoaga, is far from a finished product.

Though two losses and a draw in 16 Tests seems a good outcome, many of the wins were achieved in rather shaky fashion. There was only the occasional confident win when the team functioned on all cylinders.

Coach Steve Hansen has been experimenting with various players, no doubt with the intention of having a settled 15 by the beginning of 2019, which is a World Cup year. That is every coach’s target.

So it might be premature to pass judgement on how New Zealand will fare at the 2019 Cup. England shapes as the biggest threat, with coach Eddie Jones, an Australian, having brought a winning culture to the team. Many of his players came up with sparkling performances for the Lions against the All Blacks.

,

Worse Than FailureThanks, Google

Gold Certificate

"Dealing with real customers is a hard job," Katya declared from the safety of the employee breakroom. "Dealing with big companies is even harder!"

"I know what you mean," her coworker Rick replied, sipping his tiny paper cup of water. "Enterprise security requirements, arcane contract requirements, and then they're likely to have all that Oracle junk to integrate with ..."

"Huh? Well, that too, but I'm talking about Google."

"Google? What'd they do?" Rick raised an eyebrow, leaning against the wall by the cooler, as Katya began her story.

As the lead architect, Katya was responsible for keeping their customers happy—no matter what. The product was a Java application, a server that stood between legacy backends and mobile apps to push out notifications when things happened that the customer cared about. So when one of their biggest customers reported that 30% of the Google Cloud messages weren't being delivered to their devices in production, it was all hands on deck, with Katya at the helm.

"So I of course popped open the log right off," she said, her voice dropping lower for effect. "And what do you think I saw? CertPathValidatorExceptions."

"A bad SSL certificate?" Rick asked. "From Google? Can't be."

"You've done this before," Katya pouted, jokingly. "But it only happened sporadically. We even tried two concurrent calls, and got one failure, one success."

"How does that even work?" Rick wondered.

"I know, right? So we cURL'd it, verbose, and got the certificate chain," Katya said. "There was a wildcard cert, signed by an intermediate, signed by a root. I checked the root myself, it was definitely part of the global truststore. So I tried again and again until I got a second cert chain. But it was the same thing: cert, intermediate, trusted root."

"So what was the problem?" Rick asked.

“Get this: the newer cert’s root CA was only added to the Java 7 and 8 truststores back in 2016. We were still bundling an older version of Java 7, from before that update.”

"Ouch," sympathized Rick. "So you pushed out an updated runtime to all the customers?"

"What? No way!" Katya said. "They'd have each had to do a full integration test cycle. No, we delivered a shell script that added the root CA to the bundled cacerts."

“Shouldn’t they be worried about security updates?” wondered Rick.

"Sure, but are they actually going to upgrade to Java 8 on our say-so? You wanna die on that hill?

"It just pissed me right off. Why didn't Google announce the change? How come they whipped through them all in two days—no canary testing or anything? I tell you, it's almost enough to make a girl quit and start an alpaca farm upstate."


Planet Linux AustraliaFrancois Marier: Proxy ACME challenges to a single machine

The Libravatar mirrors are set up using DNS round-robin, which makes it a little challenging to automatically provision Let's Encrypt certificates.

In order to be able to use Certbot's webroot plugin, I need to be able to simultaneously serve a randomly-named file from the webroot of each mirror. The reason is that the verifier will connect to seccdn.libravatar.org, but there's no way to know which of the DNS entries it will hit. I could copy the file over to all of the mirrors, but that would be annoying since some of the mirrors are run by volunteers and I don't have direct access to them.

Thankfully, Scott Helme has shared his elegant solution: proxy the .well-known/acme-challenge/ directory from all of the mirrors to a single validation host. Here's the exact configuration I ended up with.

DNS Configuration

In order to serve the certbot validation files separately from the main service, I created a new hostname, acme.libravatar.org, pointing to the main Libravatar server:

CNAME acme libravatar.org.

Mirror Configuration

On each mirror, I created a new Apache vhost on port 80 to proxy the acme challenge files by putting the following in the existing port 443 vhost config (/etc/apache2/sites-available/libravatar-seccdn.conf):

<VirtualHost *:80>
    ServerName __SECCDNSERVERNAME__
    ServerAdmin __WEBMASTEREMAIL__

    ProxyPass /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
    ProxyPassReverse /.well-known/acme-challenge/ http://acme.libravatar.org/.well-known/acme-challenge/
</VirtualHost>

Then I enabled the right modules and restarted Apache:

a2enmod proxy
a2enmod proxy_http
systemctl restart apache2.service

Finally, I added a cronjob in /etc/cron.daily/commit-new-seccdn-cert to commit the new cert to etckeeper automatically:

#!/bin/sh
cd /etc/libravatar
/usr/bin/git commit --quiet -m "New seccdn cert" seccdn.crt seccdn.pem seccdn-chain.pem > /dev/null || true

Main Configuration

On the main server, I created a new webroot:

mkdir -p /var/www/acme/.well-known

and a new vhost in /etc/apache2/sites-available/acme.conf:

<VirtualHost *:80>
    ServerName acme.libravatar.org
    ServerAdmin webmaster@libravatar.org
    DocumentRoot /var/www/acme
    <Directory /var/www/acme>
        Options -Indexes
    </Directory>
</VirtualHost>

before enabling it and restarting Apache:

a2ensite acme
systemctl restart apache2.service

Registering a new TLS certificate

With all of this in place, I was able to register the cert easily using the webroot plugin on the main server:

certbot certonly --webroot -w /var/www/acme -d seccdn.libravatar.org

The resulting certificate will then be automatically renewed before it expires.

,

Krebs on SecurityMacOS High Sierra Users: Change Root Password Now

A newly-discovered flaw in macOS High Sierra — Apple’s latest iteration of its operating system — allows anyone with local (and, apparently in some cases, remote) access to the machine to log in as the all-powerful “root” user without supplying a password. Fortunately, there is a simple fix for this until Apple patches this inexplicable bug: Change the root account’s password now.

Update, Nov. 29, 11:40 a.m. ET: Apple has released a patch for this flaw. More information on the fix is here. The update is available via the App Store app on your Mac. Click Updates in the App Store toolbar, then use the Update buttons to download and install any updates listed.

Original story:

For better or worse, this glaring vulnerability was first disclosed today on Twitter by Turkish software developer Lemi Orhan Ergin, who unleashed his findings onto the Internet with a tweet to @AppleSupport:

“Dear @AppleSupport, we noticed a *HUGE* security issue at MacOS High Sierra. Anyone can login as “root” with empty password after clicking on login button several times. Are you aware of it @Apple?”

High Sierra users should be able to replicate the exploit by opening System Preferences, then Users & Groups, and then clicking the lock to make changes. Type “root” with no password, and simply try that several times until the system relents and lets you in.

How does one change the root password? It’s simple enough. Open up a Terminal (in the Spotlight search box just type “terminal”) and type “sudo passwd root”.

Many people responding to that tweet said they were relieved to learn that this extremely serious oversight by Apple does not appear to be exploitable remotely. However, sources who have tested the bug say it can be exploited remotely if a High Sierra user a) has not changed the root password yet and b) has enabled “screen sharing” on their Mac.

Likewise, multiple sources have now confirmed that disabling the root account does not fix the problem because the exploit actually causes the account to be re-enabled.

There may be other ways that this vulnerability can be exploited: I’ll update this post as more information becomes available. But for now, if you’re using macOS High Sierra, take a moment to change the root password now, please.

LongNowA Message from Long Now Members on #GivingTuesday

We hope you’ll consider a tax-deductible donation to Long Now this giving season.

Thank you to Long Now members and donors: we appreciate your additional support this giving season. With your help we can foster long-term thinking for generations to come.

Sociological ImagesWhy We Worry About Normalizing White Nationalism

The New York Times has been catching a lot of criticism this week for publishing a profile on the co-founder of the Traditionalist Worker Party. Critics argue that stories taking a human interest angle on how alt-right activists live, and how they dress, are not just puff pieces that aren’t doing due diligence in reporting—they also risk “normalizing” neo-nazi and white supremacist views in American society.

It is tempting to scoff at the buzzword “normalization,” but there is good reason for the clunky term. For sociologists, what is normal changes across time and social context, and normalization means more than whether people choose to accept deviant beliefs or behaviors. Normalization means that the everyday structure of organizations can encourage and reward deviance, even unintentionally.

Media organizations play a key role here. Research on the spread of anti-Muslim attitudes by Chris Bail shows how a small number of fringe groups with extremist views were able to craft emotionally jarring messages that caught media attention, giving them disproportionate influence in policy circles and popular culture.

Organizations are also quite good at making mistakes, and even committing atrocities, through “normal” behavior. Research on what happened at NASA leading up to the Challenger disaster by Diane Vaughan describes normalization using a theory of crime from Edwin H. Sutherland where people learn that deviant behavior can earn them benefits, rather than sanctions. When bending the rules becomes routine in organizations, we get everything from corporate corruption up to mass atrocities. According to Vaughan:

When discovered, a horrified world defined these actions deviant, yet they were normative within the culture of the work and occupations of the participants who acted in conformity with organizational mandates

The key point is that normalization doesn’t just stop by punishing or shaming individuals for bad behavior. Businesses can be fined, scapegoats can be fired, and readers can cancel subscriptions, but if normalization is happening, the culture of an institution will continue to shape how individual people make decisions. This raises big questions about the decisions made by journalists and editors in pursuit of readership.

Research on normalization also begs us to remember that some of the most horrifying crimes and accidents in human history are linked by a common process: the way organizations can reward deviant work. Just look at the “happy young folks” photographed by Karl Höcker in 1944…while they worked at Auschwitz.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramMan-in-the-Middle Attack against Electronic Car-Door Openers

This is an interesting tactic, and there's a video of it being used:

The theft took just one minute and the Mercedes car, stolen from the Elmdon area of Solihull on 24 September, has not been recovered.

In the footage, one of the men can be seen waving a box in front of the victim's house.

The device receives a signal from the key inside and transmits it to the second box next to the car.

The car's systems are then tricked into thinking the key is present and it unlocks, before the ignition can be started.

Worse Than FailureA Handful of Beans

The startup Juan worked for was going through a growth spurt. There was more work than there were people, and plenty of money, so that meant interviews. Lots, and lots of interviews.

Enter Octavio. Octavio had an impressive resume, had worked for decades as a consultant, and was the project lead on an open source project called “JavaBachata”. Before the interview, Juan gave the project site a quick skim, and it looked like one of those end-to-end ORM/MVC frameworks.

Roasted coffee beans

Juan planned to bring it up during the interview, but Octavio beat him to the punch. “You’ve probably heard of me, and my project,” he said right after shaking hands. “JavaBachata is the fastest Java framework out there. I use it on all my projects, and my customers have been very happy.”

“Ah… we already have a framework,” Juan said, uncertain if this was an interview or a sales-pitch.

“Oh, I know, I know. But if you’re looking for my skills, that’s the place to look. It’s open source.”

While Juan pulled up the GitHub page, Octavio touted the framework’s strength. “I was doing no SQL before NoSQL was a thing,” he said. “All of our queries are executed in-memory, using TableBeans. That’s what makes it so fast.”

Juan decided to start looking in the TableBean class, since Octavio brought it up. The bulk of the class looked like this:

public String var000, var001, var002, …, var199, var200;

“What’s this?” Juan asked, politely.

“Oh, yes, I know that looks awkward, but it actually makes the code much more configurable. You see, this is used in conjunction with the ObjectBean, and the appropriate dot-properties file.”

The .properties file was a mapping file, which could map ObjectBean columns to TableBean fields. So, for example, it might have a property like this:

1=178

That meant column 1 in the ObjectBean mapped to column 178 in the TableBean, so that you could conveniently access the data by calling objBean.getCol(1).

“Don’t you think these naming conventions are hard to maintain?” Juan asked. “It’d be nice to have names for things.”

Octavio shrugged. “I think that’s the problem with modern programmers. They just don’t know how to code without using variable names anymore.”

They didn’t hire Octavio, but he took it well. “It leaves me more time to work on my framework.”


Cory DoctorowHey, Kitchener-Waterloo, I’m headed your way next Monday!

I was honoured to be invited to address the University of Waterloo on the occasion of the 50th anniversary of the Cheriton School of Computer Science; my father is a proud Waterloo grad (and I’m a proud Waterloo dropout!), and so this is indeed a very special opportunity for me.


Moreover, the kind folks at U Waterloo worked with the Kitchener Public Library to book a second event that morning, at the 85 Queen branch.

Both events are free, but they’re ticketed, so book now!

Here’s details:

Both events are on December 4, 2017.

I’ll be at the 85 Queen branch of the Kitchener Public Library at 3PM. Details here.


I’m giving my speech, “Dead canary in the coalmine: we just lost the web in the war on general purpose computing”, for the Cheriton School of Computer Science, at The Theatre of the Arts in the University of Waterloo Modern Languages Building (200 University Avenue West, N2L 3G1), at 7PM. Details here.


Hope to see you there!

,

Krebs on SecurityWho Was the NSA Contractor Arrested for Leaking the ‘Shadow Brokers’ Hacking Tools?

In August 2016, a mysterious entity calling itself “The Shadow Brokers” began releasing the first of several troves of classified documents and hacking tools purportedly stolen from “The Equation Group,” a highly advanced threat actor that is suspected of having ties to the U.S. National Security Agency. According to media reports, at least some of the information was stolen from the computer of an unidentified software developer and NSA contractor who was arrested in 2015 after taking the hacking tools home. In this post, we’ll examine clues left behind in the leaked Equation Group documents that may point to the identity of the mysterious software developer.

The headquarters of the National Security Agency in Fort Meade, Md.

The existence of the Equation Group was first posited in Feb. 2015 by researchers at Russian security firm Kaspersky Lab, which described it as one of the most sophisticated cyber attack teams in the world.

According to Kaspersky, the Equation Group has more than 60 members and has been operating since at least 2001. Most of the Equation Group’s targets have been in Iran, Russia, Pakistan, Afghanistan, India, Syria, and Mali.

Although Kaspersky was the first to report on the existence of the Equation Group, it also has been implicated in the group’s compromise. Earlier this year, both The New York Times and The Wall Street Journal cited unnamed U.S. intelligence officials saying Russian hackers were able to obtain the advanced Equation Group hacking tools after identifying the files through a contractor’s use of Kaspersky Antivirus on his personal computer. For its part, Kaspersky has denied any involvement in the theft.

The Times reports that the NSA has active investigations into at least three former employees or contractors, including two who had worked for a specialized hacking division of NSA known as Tailored Access Operations, or TAO.

Thanks to documents leaked from former NSA contractor Edward Snowden we know that TAO is a cyber-warfare intelligence-gathering unit of NSA that has been active since at least 1998. According to those documents, TAO’s job is to identify, monitor, infiltrate and gather intelligence on computer systems being used by entities foreign to the United States.

The third person under investigation, The Times writes, is “a still publicly unidentified software developer secretly arrested after taking hacking tools home in 2015, only to have Russian hackers lift them from his home computer.”

JEEPFLEA & EASTNETS

So who are those two unnamed NSA employees and the contractor referenced in The Times’ reporting? The hacking tools leaked by The Shadow Brokers are now in the public domain and can be accessed through this Github repository. The toolset includes reams of documentation explaining how the cyber weapons work, as well as details about their use in highly classified intelligence operations abroad.

Several of those documents are Microsoft Powerpoint (.pptx) and PDF files. These files contain interesting metadata that includes the names of at least two people who appear to be connected to the NSA’s TAO operations.

Inside the unzipped folder there are three directories: “oddjob,” “swift,” and “windows.” Looking at the “swift” folder, we can see a Powerpoint file called “JFM_Status.pptx.”

JFM stands for Operation Jeepflea Market, which appears to have been a clandestine operation aimed at siphoning confidential financial data from EastNets — a middle eastern partner in SWIFT. Short for the Society for Worldwide Interbank Financial Telecommunication, SWIFT provides a network that enables financial institutions worldwide to send and receive data about financial transactions.

Each of the Jeepflea Powerpoint slides contains two overlapping seals: one for the NSA and another for an organization called the Central Security Service (CSS). According to Wikipedia, the CSS was formed in 1972 to integrate the NSA and the Service Cryptologic Elements (SCE) of the U.S. armed forces.

In Figure 10 of the Jeepflea Powerpoint document, we can see the seal of the Texas Cryptologic Center, an NSA/CSS entity that is based out of Lackland Air Force Base in San Antonio, Texas.

The metadata contained in the Powerpoint document shows that the last person to modify it has the designation “NSA-FTS32 USA,” and that this leaked version is the 9th revision of the document. What does NSA-FTS32 mean? According to multiple sources on the internet it means Tailored Access Operations.

We can also see that the creator of the Jeepflea document is the same person who last modified it. The metadata says it was last modified on 8/12/2013 at 6:52:27 PM and created on 7/1/2013 at 6:44:46 PM.

The file JFM_Status.pptx contains metadata showing that the creator of the file is one Michael A. Pecoraro. Public record searches suggest Mr. Pecoraro is in his mid- to late 30s and is from San Antonio, Texas. Pecoraro’s name appears on multiple documents in The Shadow Brokers collection. Mr. Pecoraro could not be reached for comment.

The metadata in a Microsoft Powerpoint presentation on Operation Jeepflea shows that the document was created and last modified in 2013 by a Michael A. Pecoraro.

Another person who earned the NSA-FTS32 designation in the document metadata was Nathan S. Heidbreder. Public record searches suggest that Mr. Heidbreder is approximately 30 years old and has lived in San Antonio and at Goodfellow Air Force Base in Texas, among other locations in the state.

According to Goodfellow’s Wikipedia entry, the base’s main mission is cryptologic and intelligence training for the Air Force, Army, Coast Guard, Navy, and Marine Corps. Mr. Heidbreder likewise could not be reached for comment.

The metadata contained in one of the classified Jeepflea Market Microsoft Excel documents released in The Shadow Brokers trove states that a Nathan S. Heidbreder was the last person to edit the document.

Another file in the leaked Shadow Brokers trove related to this Jeepflea/EastNets operation is potentially far more revealing, mainly because it appears to have last been touched not by an NSA employee, but by an outside contractor.

That file, “Eastnets_UAE_BE_DEC2010.vsd,” is a Microsoft Visio document created in Sept. 2013. The metadata inside of it says the last user to open the file was one Gennadiy Sidelnikov. Open records searches return several results for this name, including a young business analyst living in Moscow and a software developer based in Maryland.

As the NSA is based in Fort Meade, Md., this latter option seems far more likely. A brief Internet search turns up a 50-something database programmer named Gennadiy “Glen” Sidelnikov who works or worked for a company called Independent Software in Columbia, Md. (Columbia is just a few miles away from Ft. Meade).

The metadata contained within Eastnets_UAE_BE_Dec2010.vsd says Gennadiy Sidelnikov is the last author of the document.

What is Independent Software? Their Web site states that Independent Software is “a professional services company providing Information Technology products and services to mission-oriented Federal Civilian Agencies and the DoD. The company has focused on support to the Intelligence Community (IC) in Maryland, Florida, and North Carolina, as well as select commercial client markets outside of the IC.”

Indeed, this job advertisement from August 2017 for a junior software engineer at Independent Software says a qualified applicant will need a TOP SECRET clearance and should be able to pass a polygraph test.

WHO IS GLEN SIDELNIKOV?

The two NSA employees are something of a known commodity, but the third individual — Mr. Sidelnikov — is more mysterious. Sidelnikov did not respond to repeated requests for comment. Independent Software also did not return calls and emails seeking comment.

Sidelnikov’s LinkedIn page (PDF) says he began working for Independent Software in 2015, and that he speaks both English and Russian. In 1982, Sidelnikov earned his masters in information security from Kishinev University, a school located in Moldova — an Eastern European country that at the time was part of the Soviet Union.

Sidelnikov says he also earned a Bachelor of Science degree in “mathematical cybernetics” from the same university in 1981. Under “interests,” Mr. Sidelnikov lists on his LinkedIn profile Independent Software, Microsoft, and The National Security Agency.

Both The Times and The Journal have reported that the contractor suspected of leaking the classified documents was running Kaspersky Antivirus on his computer. It stands to reason that as a Russian native, Mr. Sidelnikov might be predisposed to using a Russian antivirus product.

Mr. Sidelnikov calls himself a senior consultant, but the many skills he self-described on his LinkedIn profile suggest that perhaps a better title for him would be “database administrator” or “database architect.”

A Google search for his name turns up numerous forums dedicated to discussions about administering databases. On the Web forum sqlservercentral.com, for example, a user named Glen Sidelnikov is listed as a “hall of fame” member for providing thousands of answers to questions that other users posted on the forum.

KrebsOnSecurity was first made aware of the metadata in the Shadow Brokers leak by Mike Poor, Rob Curtinseufert, and Larry Pesce. All three are security experts with InGuardians, a Washington, D.C.-based penetration testing firm that gets paid to break into companies and test their cybersecurity defenses.

Poor, who serves as president and managing partner of InGuardians, said he and his co-workers initially almost overlooked Sidelnikov’s name in the metadata. But he said the more they looked into Sidelnikov the less sense it made that his name was included in the Shadow Brokers metadata at all.

“He’s a database programmer, and they typically don’t have access to operational data like this,” Poor said. “Even if he did have access to an operational production database, it would be highly suspect if he accessed classified operation documents.”

Poor said that as the data owner, the NSA likely would be reviewing file access of all classified documents and systems, and it would definitely have come up as a big red flag if a software developer accessed and opened classified files during some routine maintenance or other occasion.

“He’s the only one in there that is not Agency/TAO, and I think that poses important questions,” Poor said. “Such as why did a DB programmer for a software company have access to operational classified documents? If he is or isn’t a source or a tie to Shadow Brokers, it at least begets the question of why he accessed classified operational documents.”

Curtinseufert said it may be that Sidelnikov’s skills came in handy in the Jeepflea operations, which appear to have involved compromising large financial databases, and querying those for very specific transactions.

For example, Jeepflea apparently involved surreptitiously injecting database queries into the SWIFT Alliance servers, which are responsible for handling the messaging tied to SWIFT financial transactions.

“It looks like the SWIFT data the NSA was collecting relied heavily on databases, and they were apparently also using some exploits to gather data,” Curtinseufert said. “The SWIFT databases are all records of financial transactions, and in Jeepflea they were able to query those SWIFT databases and see who’s moving money where. They were looking to pull SWIFT data on who was moving money around the Middle East. They did this with EastNets, and they tried to do it down in Venezuela. And it looks like through EastNets they were able to hack individual banks.”

The NSA did not respond to requests for comment.

InGuardians president Poor said we may never know for sure who was responsible for the Shadow Brokers leak, an incident that has been called “one of the worst security debacles ever to befall American intelligence.”  But one thing seems certain.

“I think it’s time that we state what we all know,” Poor said. “The Equation Group is Tailored Access Operations. It is the NSA.”

CryptogramUber Data Hack

Uber was hacked, losing data on 57 million driver and rider accounts. The company kept it quiet for over a year. The details are particularly damning:

The two hackers stole data about the company's riders and drivers -- including phone numbers, email addresses and names -- from a third-party server and then approached Uber and demanded $100,000 to delete their copy of the data, the employees said.

Uber acquiesced to the demands, and then went further. The company tracked down the hackers and pushed them to sign nondisclosure agreements, according to the people familiar with the matter. To further conceal the damage, Uber executives also made it appear as if the payout had been part of a "bug bounty" -- a common practice among technology companies in which they pay hackers to attack their software to test for soft spots.

And almost certainly illegal:

While it is not illegal to pay money to hackers, Uber may have violated several laws in its interaction with them.

By demanding that the hackers destroy the stolen data, Uber may have violated a Federal Trade Commission rule on breach disclosure that prohibits companies from destroying any forensic evidence in the course of their investigation.

The company may have also violated state breach disclosure laws by not disclosing the theft of Uber drivers' stolen data. If the data stolen was not encrypted, Uber would have been required by California state law to disclose that driver's license data from its drivers had been stolen in the course of the hacking.


Worse Than FailureCodeSOD: The Delivery Moose

We know stereotypes are poor placeholders for reality. Still, if we name a few nations, there are certain traits and themes that come to mind. Americans are fat, loud, gregarious, and love making pointless smalltalk. The English are reserved, love tea, and have perfected the art of queuing. The French are snobbish, the Japanese have weaponized politeness, the Finns won’t stand within ten meters of another human being at the bus stop, and so on. They can range from harmless to downright offensive and demeaning.

Laurent is Canadian, working for an insurance company. Their software is Russian, in that it comes from a Russian vendor, with a support contract that gives them access to a Russian dev team to make changes. While reviewing commits, Laurent found one simply labeled: “Fix some Sonars issue”.

The change?

public enum CorrespondenceDeliveryMethods {
    MAIL("mail"),
    NOT_MAIL("moose");
}

Apparently, the Russian team has some stereotypes of their own about how documents are sent in Canada. Laurent adds:

Since this saved in the database and would thus imply 5 signatures to do a data migration, it is quite probable we’ll ultimately leave it as is. Which is a shame, because as we all know, the alternative to mail in Canada is to ask a Hockey player to slapshot the letter through the customer window, especially after they canceled their home insurance with us.



They should send a real Canadian hero


Planet Linux AustraliaDavid Rowe: Steve Ports an OFDM modem from Octave to C

Earlier this year I asked for some help. Steve Sampson K5OKC stepped up, and has done some fine work in porting the OFDM modem from Octave to C. I was so happy with his work I asked him to write a guest post on my blog on his experience and here it is!

On a personal level, working with Steve was a great experience for me. I always enjoy and appreciate other people working on FreeDV with me; however, it is quite rare to have people help out with programming. As you will see, Steve enjoyed the work and learned a great deal in the process.

The Problem with Porting

But first some background on the process involved. In signal processing it is common to develop algorithms in a convenient domain-specific scripting language such as GNU Octave. These languages can do a lot with one line of code and have powerful visualisation tools.

Usually, the algorithm then needs to be ported to a language suitable for real time implementation. For most of my career that has been C. For high speed operation on FPGAs it might be VHDL. It is also common to port algorithms from floating point to fixed point so they can run on low cost hardware.

We don’t develop algorithms directly in the target real-time language as signal processing is hard. Bugs are difficult to find and correct. They may be 10x or 100x times harder (in terms of person-hours) to find in C or VHDL than say GNU Octave.

So a common task in my industry is porting an algorithm from one language to another. Generally the process involves taking a working simulation and injecting a bunch of hard to find bugs into the real time implementation. It’s an excellent way for engineering companies to go bankrupt and upset customers. I have seen and indeed participated in this process (screwing up real time implementations) many times.

The other problem is algorithm development is hard, and not many people can do it. They are hard to find, cost a lot of money to employ, and can be very nerdy (like me). So if you can find a way to get people with C, but not high level DSP skills, to work on these ports – then it’s a huge win from a resourcing perspective. The person doing the C port learns a lot, and managers are happy as there is some predictability in the engineering process and schedule.

The process I have developed allows people with C coding (but not DSP) skills to port complex signal processing algorithms from one language to another. In this case it’s from GNU Octave to floating point C. The figures below show how it all fits together.

Here is a sample output plot, in this case a buffer of received samples in the demodulator. This signal is plotted in green, and the difference between C and Octave in red. The red line is all zeros, as it should be.

This particular test generates 12 plots. Running is easy:

$ cd codec2-dev/octave
$ ../build_linux/unittest/tofdm
$ octave
>> tofdm
W........................: OK
tx_bits..................: OK
tx.......................: OK
rx.......................: OK
rxbuf in.................: OK
rxbuf....................: OK
rx_sym...................: FAIL (0.002037)
phase_est_pilot..........: FAIL (0.001318)
rx_amp...................: OK
timing_est...............: OK
sample_point.............: OK
foff_est_hz..............: OK
rx_bits..................: OK

This shows a fail case – two vectors just failed, so some further inspection is required.

Key points are:

  1. We make sure the C and Octave versions are identical. Near enough is not good enough. For floating point I set a tolerance like 1 part in 1000. For fixed point ports it can be bit exact – zero difference.
  2. We dump a lot of internal states, not just the inputs and outputs. This helps point us at exactly where the problem is.
  3. There is an automatic checklist to give us pass/fail reports of each stage (a minimal sketch of such a check appears after this list).
  4. This process is not particularly original. It’s not rocket science, but getting people (especially managers) to support and follow such a process is. This part – the human factor – is really hard to get right.
  5. The same process can be used between any two versions of an algorithm. Fixed and float point, fixed point C and VHDL, or a reference implementation and another one that has memory or CPU optimisations. The same basic idea: take a reference version and use software to compare it.
  6. It makes porting fun and strangely satisfying. You get constant forward progress and no hard-to-find bugs. Things work when they hit real time. After months of tough, brain-hurting algorithm development, I find myself looking forward to the productivity of the porting phase.
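
To make the checklist idea concrete, here is a minimal sketch of such a pass/fail check in C. The function name, the output format and the 1E-3 tolerance are illustrative assumptions; the real comparisons live in the codec2 unit test code (tofdm):

#include <math.h>
#include <stdio.h>

/* Compare a C vector against the dumped Octave reference; returns 1 on pass */
static int check_vector(const char *name, const float *ref, const float *c,
                        int n, float tol) {
    float max_diff = 0.0f;

    for (int i = 0; i < n; i++) {
        float d = fabsf(ref[i] - c[i]);
        if (d > max_diff)
            max_diff = d;
    }

    if (max_diff < tol) {
        printf("%-25s: OK\n", name);
        return 1;
    }
    printf("%-25s: FAIL (%f)\n", name, max_diff);
    return 0;
}

int main(void) {
    float octave[3] = { 1.0f, 2.0000f, 3.0f };   /* reference dump   */
    float cport[3]  = { 1.0f, 2.0005f, 3.0f };   /* C implementation */

    check_vector("rx_sym", octave, cport, 3, 1E-3f);
    return 0;
}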

In this case Steve was the man doing the C port. Here is his story…..

Initial Code Construction

I’m a big fan of the Integrated Debugging Environment (IDE). I’ve used various versions over the years, but mostly only use Netbeans IDE. This is my current favorite, as it works well with C and Java.

When I take on a new programming project I just create a new IDE project and paste in whatever I want to translate, and start filling-in the Java or C code. In the OFDM modem case, it was the Octave source code ofdm_lib.m.

Obviously this code won’t do anything or compile, but it allows me to write C functions for each of the Octave code blocks. Sooner or later, all the Octave code is gone, and only C code remains.

I have very little experience with Octave, but I did use some Matlab in college. It was a new system just being introduced when I was near graduation. I spent a little time trying to make the program as dynamic as the Octave code. But it became mired in memory allocation.

Once David approved the decision for me to go with fixed configuration values (Symbol rate, Sample rate, etc), I was able to quickly create the header files. We could adjust these header files as we went along.

One thing about Octave, is you don’t have to specify the array sizes. So for the C port, one of my tasks was to figure out the array sizes for all the data structures. In some cases I just typed the array name in Octave, and it printed out its value, and then presto I now knew the size. Inspector Clouseau wins again!

The include files were pretty much patterned the same as FDMDV and COHPSK modems.

Code Starting Point

When it comes to modems, the easiest thing to create first is the modulator. It proved true in this case as well. I did have some trouble early on, because of a bug I created in my testing code. My spectrum looked different from David’s. Once this bug was ironed out, the spectrums looked similar. David recommended I create a test program, like he had done for other modems.

The output may look similar, but who knows really? I’m certainly not going to go line by line through comma-separated values, and anyway Octave floating point values aren’t the same as C values past some number of decimal points.

This testing program was a little over my head, and since David has written many of these before, he decided to just crank it out and save me the learning curve.

We made a few data structure changes to the C program, but generally it was straightforward. Basically we had the outputs of the C and Octave modulators, and the difference is shown by their different colors. Luckily, we eventually got zero differences.

OFDM Design

As I was writing the modulator, I also had to try and understand this particular OFDM design. I deduced that it was basically eighteen (18) carriers that were grouped into eight (8) rows. The first row was the complex “pilot” symbols (BPSK), and the remaining 7 rows were the 112 complex “data” symbols (QPSK).

But there was a little magic going on, in that the pilots were 18 columns, but the data was only using 16. So in the 7 rows of data, the first and last columns were set to a fixed complex “zero.”

This produces the 16 x 7 or 112 complex data symbols. Each QPSK symbol is two-bits, so each OFDM frame represents 224 bits of data. It wasn’t until I began working on the receiver code that all of this started to make sense.
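
Written down as fixed configuration constants, the frame geometry described above looks something like the following sketch (the macro names are illustrative, not the actual ofdm header):

/* One OFDM frame: a pilot row of 18 carriers plus 7 data rows that only
 * use the middle 16 carriers, giving 112 QPSK symbols = 224 bits.       */
#define OFDM_CARRIERS        18
#define OFDM_DATA_CARRIERS   16
#define OFDM_DATA_ROWS        7

#define OFDM_SYMS_PER_FRAME  (OFDM_DATA_CARRIERS * OFDM_DATA_ROWS)  /* 112 */
#define OFDM_BITS_PER_FRAME  (2 * OFDM_SYMS_PER_FRAME)              /* 224 */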

With this information, I was able to drive the modulator with the correct number of bits, and collect the output and convert it to PCM for testing with Audacity.

DFT Versus FFT

This OFDM modem uses a DFT and IDFT. This greatly simplifies things. All I have to do is a multiply and summation. With only 18 carriers, this is easily fast enough for the task. We just zip through the 18 carriers, and return the frequency or time domain. Obviously this code can be optimized for firmware later on.
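
As a sketch of what “multiply and summation” means here, the 18-point transforms can be written directly with C99 complex arithmetic. The function and parameter names are assumptions for illustration, not the actual codec2 routines; w[] holds each carrier’s frequency in radians per sample:

#include <complex.h>

#define NC 18      /* carriers                       */
#define M  144     /* complex samples per symbol row */

/* DFT: correlate the received block against each carrier */
static void dft_carriers(complex float sym[NC], const complex float rx[M],
                         const float w[NC]) {
    for (int c = 0; c < NC; c++) {
        sym[c] = 0.0f;
        for (int n = 0; n < M; n++)
            sym[c] += rx[n] * cexpf(-I * w[c] * n);   /* multiply and sum */
        sym[c] /= M;                                  /* scale            */
    }
}

/* IDFT: rebuild the time-domain row by summing the carriers */
static void idft_carriers(complex float tx[M], const complex float sym[NC],
                          const float w[NC]) {
    for (int n = 0; n < M; n++) {
        tx[n] = 0.0f;
        for (int c = 0; c < NC; c++)
            tx[n] += sym[c] * cexpf(I * w[c] * n);
    }
}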

The final part of the modulator, is the need for a guard period called the Cyclic Prefix (CP). So by making a copy of the last 16 of the 144 complex time-domain samples, and putting them at the head, we produce 160 complex samples for each row, giving us 160 x 8 rows, or 1280 complex samples every OFDM frame. We send this to the transmitter.
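
A minimal sketch of that guard-period step (the names and fixed sizes are illustrative, not the actual codec2 code):

#include <complex.h>
#include <string.h>

#define M    144   /* complex samples per row before the guard period */
#define NCP   16   /* cyclic prefix length                            */

/* Copy the last NCP samples to the front: 144 samples in, 160 out. */
static void add_cyclic_prefix(complex float out[M + NCP],
                              const complex float in[M]) {
    memcpy(out, &in[M - NCP], NCP * sizeof(complex float));
    memcpy(&out[NCP], in, M * sizeof(complex float));
}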

There will probably need to be some filtering, and a function of adjusting gain in the API.

OFDM Demodulator

That left the Demodulator which looked much more complex. It took me quite a long time just to get the Octave into some semblance of C. One problem was that Octave arrays start at 1 and C starts at 0. In my initial translation, I just ignored this. I told myself we would find the right numbers when we started pushing data through it.

I won’t kid anyone, I had no idea what was going on, but it didn’t matter. Slowly, after the basic code was doing something, I began to figure out the function of various parts. Even so, we had no idea if the C code was producing the same data as the Octave code. We needed some testing functions, and these were added to tofdm.m and tofdm.c. David wrote this part of the code, and I massaged the C modem code until one day the data were the same. It was pretty exciting to see it passing tests.

One thing I found was that you can reach an underflow with single precision. Whenever I was really stumped, I would change the single precision to a double, and then see where the problem was. I was trying to stay completely within single precision floating point, because this modem is going to be embedded firmware someday.

Testing Process

There was no way that I could have reached a successful conclusion without the testing code. As a matter of fact, a lot of programming errors were found. You would be surprised at how much damage a misplaced parenthesis can do to a math equation! I’ve had enough math to know how to do the basic operations involved in DSP. I’m sure that as this code is ported to firmware, it can be simplified, optimized, and unrolled a bit for added speed. At this point, we just want valid waveforms.

C99 and Complex Math

Working with David was pretty easy, even though we are almost 16 time-zones apart. We don’t need an answer right now, and we aren’t working on a deadline. Sometimes I would send an email, and then four hours later I would find the problem myself, and the morning was still hours away in his time zone. So he sometimes got some strange emails from me that didn’t require an answer.

David was hands-off on this project, and doesn’t seem to be a control freak, so he just let me go at it, and then teamed-up when we had to merge things in giving us comparable output. Sometimes a simple answer was all I needed to blow through an Octave brain teaser.

I’ve been working in C99 for the past year. For those who haven’t kept up, 1999 was a long time ago, but we still tend to program C in the same way. When working with complex numbers, though, the C library has been greatly expanded. For example, to multiply two complex numbers, you type “A * B”. That’s it. No need to worry about a simulated complex number using a structure. If you need a complex exponent, you type “cexp(I * W)”, where “I” is the sqrt(-1). But all of this is hidden away inside the compiler.

For me, this became useful when translating Octave to C. Most of the complex functions have the same name. The only things I had to write were a matrix multiply and a summation function for the DFT. The rest was straightforward. Still a lot of work, but it was enjoyable work.

Where we might have problems interfacing to legacy code, there are functions in the library to extract the real and imaginary parts. We can easily interface to the old structure method. You can see examples of this in the testing code.
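
For instance, bridging between C99 complex floats and a legacy structure-based complex type can be done with crealf() and cimagf(). The COMP struct below is an assumption patterned on the older struct style, not a definitive interface:

#include <complex.h>

typedef struct {
    float real;
    float imag;
} COMP;                                  /* legacy struct-style complex */

static COMP comp_from_c99(complex float x) {
    COMP y = { crealf(x), cimagf(x) };   /* extract real and imaginary  */
    return y;
}

static complex float c99_from_comp(COMP x) {
    return x.real + I * x.imag;
}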

Looking back, I don’t think I would do anything differently. Translating code is tedious no matter how you go. In this case, translating Octave to C is 10 times easier than translating Fortran to C, or C to Java.

The best course is one where you can start seeing some output early on. This keeps you motivated. I was a happy camper when I could look at and listen to the modem output using Audacity. Once you see progress, you can’t give up, and want to press on.

Steve/k5okc

Reading Further

The Bit Exact Fairy Tale is a story of fixed point porting. Writing this helped me vent a lot of steam at the time – I’d just left a company that was really good at messing up these sorts of projects.

Modems for HF Digital Voice Part 1 and Part 2.

The cohpsk_frame_design spreadsheet includes some design calculations on the OFDM modem and a map of where the data and pilot symbols go in time and frequency.

Reducing FDMDV Modem Memory is an example of using automated testing to port an earlier HF modem to the SM1000. In this case the goal was to reduce memory consumption without breaking anything.

Fixed Point Scaling – Low Pass Filter example – is consistently one of the most popular posts on this blog. It’s a worked example of a fixed point port of a low pass filter.