Planet Russell


Planet Debian: Brett Parker: The Psion Gemini

So, I backed the Gemini and received my shiny new device just a few months after they said that it'd ship, not bad for an indiegogo project! Out of the box, I flashed it, using the non-approved linux flashing tool at that time, and failed to backup the parts that, err, I really didn't want blatted... So within hours I had a new phone that I, err, couldn't make calls on, which was marginally annoying. And the tech preview of Debian wasn't really worth it, as it was fairly much unusable (which was marginally upsetting, but hey) - after a few more hours / days of playing around I got the IMEI number back in to the Gemini and put back on the stock android image. I didn't at this point have working bluetooth or wifi, which was a bit of a pain too, turns out the mac addresses for those are also stored in the nvram (doh!), that's now mostly working through a bit of collaboration with another Gemini owner, my Gemini currently uses the mac addresses from his device... which I'll need to fix in the next month or so, else we'll have a mac address collision, probably.

Overall, it's not a bad machine, the keyboard isn't quite as good as I was hoping for, the phone functionality is not bad once you're on a call, but not great until you're on a call, and I certainly wouldn't use it to replace the Samsung Galaxy S7 Edge that I currently use as my full time phone. It is however really rather useful as a sysadmin tool when you don't want to be lugging a full laptop around with you, the keyboard is better than using the on screen keyboard on the phone, the ssh client is "good enough" to get to what I need, and the terminal font isn't bad. I look forward to seeing where it goes, I'm happy to have been an early backer, as I don't think I'd pay the current retail price for one.

Cryptogram: An Example of Deterrence in Cyberspace

In 2016, the US was successfully deterred from attacking Russia in cyberspace because of fears of Russian capabilities against the US.

I have two citations for this. The first is from the book Russian Roulette: The Inside Story of Putin's War on America and the Election of Donald Trump, by Michael Isikoff and David Corn. Here's the quote:

The principals did discuss cyber responses. The prospect of hitting back with cyber caused trepidation within the deputies and principals meetings. The United States was telling Russia this sort of meddling was unacceptable. If Washington engaged in the same type of covert combat, some of the principals believed, Washington's demand would mean nothing, and there could be an escalation in cyber warfare. There were concerns that the United States would have more to lose in all-out cyberwar.

"If we got into a tit-for-tat on cyber with the Russians, it would not be to our advantage," a participant later remarked. "They could do more to damage us in a cyber war or have a greater impact." In one of the meetings, Clapper said he was worried that Russia might respond with cyberattacks against America's critical infrastructure­ -- and possibly shut down the electrical grid.

The second is from the book The World as It Is, by President Obama's deputy national security advisor Ben Rhodes. Here's the New York Times writing about the book.

Mr. Rhodes writes he did not learn about the F.B.I. investigation until after leaving office, and then from the news media. Mr. Obama did not impose sanctions on Russia in retaliation for the meddling before the election because he believed it might prompt Moscow into hacking into Election Day vote tabulations. Mr. Obama did impose sanctions after the election but Mr. Rhodes's suggestion that the targets include President Vladimir V. Putin was rebuffed on the theory that such a move would go too far.

When people try to claim that there's no such thing as deterrence in cyberspace, this serves as a counterexample.

Worse Than Failure: Improv for Programmers: The Internet of Really Bad Things

Things might get a little dark in the season (series?) finale of Improv for Programmers, brought to you by Raygun. Remy, Erin, Ciarán and Josh are back, and not only is everything you're about to hear entirely made up on the spot: everything you hear will be a plot point in the next season of Mr. Robot.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] BuildMaster allows you to create a self-service release management platform that allows different teams to manage their applications. Explore how!

Planet Debian: Mario Lang: Debian on a synthesizer

Bela is a low latency optimized platform for audio applications built using Debian and Xenomai, running on a BeagleBone Black. I recently stumbled upon this platform while skimming through a modular synthesizer related forum. Bela has teamed up with the guys at Rebel Technologies to build a Bela based system in eurorack module format, called Salt. Luckily enough, I managed to secure a unit for my modular synthesizer.

Picture of the front panel of a Salt and Salt+ module

Inputs and Outputs

Salt features 2 audio (44.1kHz) in, 2 audio out, 8 analog (22kHz) in, 8 analog out, and a number of digital I/Os. And it also features a USB host port, which is what I need to connect a Braille display to it.

Accessible synthesizers

do not really exist. Complex devices like sequencers or basically anything with an elaborate menu structure are usually not usable by the blind. However, Bela, or more specifically, Salt, is actually a game changer. I was able to install brltty and libbrlapi-dev (and a number of C++ libraries I like to use) with just a simple apt invocation.
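
For reference, a minimal sketch of that invocation (run as root on the Debian system on the board; the package names are the ones mentioned above):

apt update
apt install brltty libbrlapi-dev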

Programmable module

Salt is marketed as a programmable module. To make life easy for creative people, the Bela platform provides integration for well-known audio processing systems like PureData, SuperCollider and, more recently, Csound. This is great to get started. However, it also allows you to write your own C++ applications, which is what I am doing right now, since I want to implement full Braille integration. So the display of my synthesizer is going to be tactile!

A stable product

Bought in May 2018, Salt shipped with Debian Stretch preinstalled. This means I get to use GCC 6.4 (C++14). Nice to see stable ship in commercial products.

Pepper

pepper is an obvious play on words. The goal for this project is to provide a Bela application for braille display users.

As a proof of concept, I already managed to successfully run a number of LV2 plugins via pepper on my Salt module. In the upcoming days, I hope I can manage to secure enough spare time to actually make more progress with this programming project.

Planet Linux Australia: Gary Pendergast: Podcasting: Tavern Style

Earlier today, I joined JJJ and Jeff on episode 319 of the WP Tavern’s WordPress Weekly podcast!

We chatted about GitHub being acquired by Microsoft (and what that might mean for the future of WordPress using Trac), the state of Gutenberg, WordCamp Europe, as well as getting into a bit of the philosophy that drives WordPress’ auto-update system.

Finally, Jeff was kind enough to name me a Friend of the Show, despite my previous appearance technically not being a WordPress Weekly episode. 🎉

WPWeekly Episode 319 – The Gutenberg Plugin Turns 30

Planet Debian: Norbert Preining: Git and Subversion collaboration

Git is great, we all know that, but there are use cases where the completely distributed development model does not shine (see here and here). And while my old git-svn mirror of the TeX Live Subversion repository was working well, git pull and git svn rebase didn’t play well together, re-pulling the same changes again and again. Finally, I took the time to experiment and fix this!

Most of the material in this blog post has already been written up elsewhere; the best sources I found are here and here. Practically everything is written down there, but when one gets down to business some things work out a bit differently. So here we go.

Aim

The aim of the setup is to allow several developers to work on a git-svn mirror of a central Subversion repository. “Work” here means:

  • pull from the git mirror to get the latest changes
  • normal git workflows: branch, develop new features, push new branches to the git mirror
  • commit to the subversion repository using git svn dcommit

and all that with as much redundancy removed as possible.

One solution would be for each developer to create their own git-svn mirror. While this is fine in principle, it is error prone, costs a lot of time, and everyone has to do git svn rebase etc. We want to be able to use normal git workflows as far as possible.

Layout

The basic layout of our setup is as follows:

The following entities are shown in the above diagram:

  • SvnRepo: the central subversion repository
  • FetchingRepo: the git-svn mirror which does regular fetches and pushes to the BareRepo
  • BareRepo: the central repository which is used by all developers to pull and collaborate
  • DevRepo: normal git clones of the BareRepo on the developers’ computer

The flow of data is also shown in the above diagram:

  • git svn fetch: the FetchingRepo is updated regularly (using cron) to fetch new revisions and new branches/tags from the SvnRepo
  • git push (1): the FetchingRepo pushes changes regularly (using cron) to the BareRepo
  • git pull: developers pull from the BareRepo, can check out remote branches and do normal git workflows
  • git push (2): developers push their changes, and newly created branches, to the BareRepo
  • git svn dcommit: developers rebase-merge their changes into the main branch and commit from there to the SvnRepo

Besides the requirement to use git svn dcommit for submitting the changes to the SvnRepo, and the requirement by git svn to have linear histories, everything else can be done with normal workflows.

Procedure

Let us for the following assume that SVNREPO points to the URI of the Subversion repository, and BAREREPO points to the URI of the BareRepo. Furthermore, we refer to the path on the system (server, local) with variables like $BareRepo etc.
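
For concreteness, a purely hypothetical set of values could look like this (all hosts and paths are made up for illustration):

SVNREPO=svn://svn.example.org/myproject
BAREREPO=git.example.org:/srv/git/myproject-mirror.git
FetchingRepo=/srv/mirror/myproject-fetch      # git-svn working clone on the mirror host
BareRepo=/srv/git/myproject-mirror.git        # bare repository served to developers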

Step 1 – preparation of authors-file

To get consistent entries for committers, we need to set up an authors file, giving a mapping from Subversion users to names and emails:

svnuser1 = AAA BBB <aaa@example.com>
svnuser2 = CCC DDD <ccc@example.com>
...

Let us assume that the AUTHORSFILE environment variable points to this file.
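
For example (the path is, again, just a hypothetical placeholder):

export AUTHORSFILE=/srv/mirror/authors.txt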

Step 2 – creation of fetching repository

This step creates a git-svn mirror; please read the documentation for further details. If the Subversion repository follows the standard layout (trunk, branches, tags), then the following line will work:

git svn clone --prefix="" --authors-file=$AUTHORSFILE -s $SVNREPO

The important part here is the --prefix one. The documentation of git svn says here:

Setting a prefix (with a trailing slash) is strongly encouraged in any case, as your SVN-tracking refs will then be located at “refs/remotes/$prefix/”, which is compatible with Git’s own remote-tracking ref layout (refs/remotes/$remote/). Setting a prefix is also useful if you wish to track multiple projects that share a common repository. By default, the prefix is set to origin/.

Note: Before Git v2.0, the default prefix was “” (no prefix). This meant that SVN-tracking refs were put at “refs/remotes/*”, which is incompatible with how Git’s own remote-tracking refs are organized. If you still want the old default, you can get it by passing --prefix "" on the command line.

While one might be tempted to use a prefix of “svn” or “origin”, both of which I have done, this will complicate (make impossible?) later steps, in particular the synchronization of git pull with git svn fetch.

The original blogs I mentioned in the beginning were written before the switch to default=”origin” was made, so this was the part that puzzled me and I didn’t understand why the old descriptions didn’t work anymore.

Step 3 – cleanup of the fetching repository

By default, git svn creates and checks out a master branch. In this case, the Subversion repository’s “master” is the “trunk” branch, and we want to keep it like this. Thus, let us check out the trunk branch and remove master; after entering the FetchingRepo, do

cd $FetchingRepo
git checkout trunk
git checkout -b trunk
git branch -d master

The two checkouts are necessary because the first one leaves you with a detached head. In fact, no checkout at all would be fine, too, but git svn does not work on bare repositories, so we need to check out some branch.

Step 4 – init the bare BareRepo

This is done in the usual way, I guess you know that:

git init --bare $BareRepo

Step 5 – setup FetchingRepo to push all branches and push them

The cron job we will introduce later will fetch all new revisions, including new branches. We want to push all branches to the BareRepo. This is done by adjusting the fetch and push configuration after changing into the FetchingRepo:

cd $FetchingRepo
git remote add origin $BAREREPO
git config remote.origin.fetch '+refs/remotes/*:refs/remotes/origin/*'
git config remote.origin.push 'refs/remotes/*:refs/heads/*'
git push origin

What has been done is that fetch updates the remote-tracking branches, and push sends those remote-tracking branches to the BareRepo as branch heads. This ensures that new Subversion branches (or tags, which are nothing else than branches) are also pushed to the BareRepo.

Step 6 – adjust the default checkout branch in the BareRepo

By default, git clones and checks out the master branch, but we don’t have a master branch; “trunk” plays its role. Thus, let us adjust the default in the BareRepo:

cd $BareRepo
git symbolic-ref HEAD refs/heads/trunk

Step 7 – developers branch

Now we are ready to use the bare repo and clone it onto one of the developers’ machines:

git clone $BAREREPO

But before we can actually use this clone, we need to make sure that git commits sent to the Subversion repository have the same user name and email for the committer. The reason for this is that the commit hash is computed from various information including the name/email (see details here). Thus we need to make sure that the git svn dcommit in the DeveloperRepo and the git svn fetch in the FetchingRepo create the very same hash! Therefore, each developer needs to set up an authorsfile with at least his own entry:

cd $DeveloperRepo
echo 'mysvnuser = My Name <my@email.example>' > .git/usermap
git config svn.authorsfile '.git/usermap'

Important: the line for mysvnuser must exactly match the one in the original authorsfile from Step 1!

The final step is to allow the developer to commit to the SvnRepo by adding the necessary information to the git configuration:

git svn init -s $SVNREPO

Warning: Here we rely on two items: First, that the git clone initializes the default origin for the remote name, and second, that git svn init uses the default prefix “origin”, as discussed above.

If this is too shaky for you, the other option is to define the remote name during clone, and use that for the prefix:

git clone -o mirror $BAREREPO
git svn init --prefix=mirror/ -s $SVNREPO

This way the default remote will be “mirror” and all is fine.

Note: Upon your first git svn usage in the DeveloperRepo, as well as always after a pull, you will see messages like:

Rebuilding .git/svn/refs/remotes/origin/trunk/.rev_map.c570f23f-e606-0410-a88d-b1316a301751 ...
rNNNN = 1bdc669fab3d21ed7554064dc461d520222424e2
rNNNM = 2d1385fdd8b8f1eab2a95d325b0d596bd1ddb64f
...

This is a good sign, meaning that git svn does not re-fetch the whole set of revisions, but reuses those pulled from the BareRepo and only rebuilds the mapping, which should be fast.

Updating the FetchingRepo

Updating the FetchingRepo should be done automatically using cron, the necessary steps are:

cd $FetchingRepo
git svn fetch --all
git push

This will fetch all new revisions and push the configured default branches, that is, all remote heads, to the BareRepo.
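
For example, a crontab entry on the mirror host performing exactly these steps every 15 minutes could look like this (the path and the log file are assumptions):

*/15 * * * * cd /srv/mirror/myproject-fetch && git svn fetch --all && git push >/var/log/git-svn-mirror.log 2>&1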

Note: If a developer commits a change to the SvnRepo using git svn dcommit and then runs git pull before the FetchingRepo has updated the BareRepo (i.e., before the next cron run), he will see something like:

$ git pull
From preining.info:texlive2
 + 10cc435f163...953f9564671 trunk      -> origin/trunk  (forced update)
Already up to date.

This is due to the fact that the remote head is still behind the local head, which can easily be seen by looking at the output of git log. Before the FetchingRepo has updated the BareRepo, one would see something like:

$ git log
commit 3809fcc9aa6e0a70857cbe4985576c55317539dc (HEAD -> trunk)
Author: ....

commit eb19b9e6253dbc8bdc4e1774639e18753c4cd08f (origin/trunk, origin/HEAD)
...

and afterwards all three refs would point to the same top commit. This is nothing to worry about and is normal behavior. In fact, the default setup for fetching remotes is to force-update the remote-tracking refs.

Protecting the trunk branch

I found myself sometimes pushing to trunk by mistake instead of using git svn dcommit. This can be avoided by imposing restrictions on pushing. With gitolite, simply add a rule

- refs/heads/trunk = USERID

to the repo stanza of your mirror. When using Git(Lab|Hub) there are options to protect branches.

A more advanced restriction policy would be to require that branches created by users are within a certain namespace. For example, a gitolite rule

repo yoursvnmirror
    RW+      = fetching-user
    RW+ dev/ = USERID
    R        = USERID

would allow only the FetchingRepo (identified by fetching-user) to push everywhere, while I (USERID) can push/rewind/delete only branches starting with “dev/”, but can read everything.

Workflow for developers

The recommended workflow compatible with this setup is

  • use git pull to update the local developers repository
  • use only branches that are not created/updated via git-svn
  • at commit time, (1) rebase your branch on trunk, (2) merge (fast-forward) your branch into trunk, (3) commit your changes with git svn dcommit (see the sketch below this list)
  • rinse and repeat
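
A minimal sketch of the commit-time steps above, using a hypothetical branch dev/myfeature:

git checkout trunk && git pull        # make sure trunk is current
git checkout dev/myfeature
git rebase trunk                      # (1) rebase the branch on trunk
git checkout trunk
git merge --ff-only dev/myfeature     # (2) fast-forward trunk to the branch
git svn dcommit                       # (3) send the commits to the SvnRepo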

The more detailed discussion and safety measures laid out in the git-svn documentation apply as well and are worth reading!


Planet Debian: Sylvain Beucler: Best GitHub alternative: us

Why try to choose the host that sucks less, when hosting a single-file (S)CGI gets you decentralized git-like + tracker + wiki?

Fossil

https://www.fossil-scm.org/
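
As a rough illustration of the claim above (all paths are hypothetical, and it assumes Fossil is installed and your web server executes CGI scripts): after a one-time fossil init /home/user/myproject.fossil, the entire server-side glue is a two-line CGI script:

#!/usr/bin/fossil
repository: /home/user/myproject.fossil

Fossil then serves the repository browser, wiki and ticket tracker from that single file.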

We gotta take the power back.

Planet Debian: Joey Hess: the single most important criteria when replacing Github

I could write a lot of things about the Github acquisition by Microsoft. About Github's embrace and extend of git, and how it passed unnoticed by people who now fear the same thing now that Microsoft is in the picture. About the stultifying effects of Github's centralization, and its retardant effect on general innovation in spaces around git and software development infrastructure.

Instead I'd rather highlight one simple criterion you can consider when you are evaluating any git hosting service, whether it's Gitlab or something self-hosted, or federated, or P2P[1], or whatever:

Consider all the data that's used to provide the value-added features on top of git. Issue tracking, wikis, notes in commits, lists of forks, pull requests, access controls, hooks, other configuration, etc.
Is that data stored in a git repository?

Github avoids doing that and there's a good reason why: By keeping this data in their own database, they lock you into the service. Consider if Github issues had been stored in a git repository next to the code. Anyone could quickly and easily clone the issue data, consume it, and write alternative issue tracking interfaces, which would then start accepting git pushes of issue updates and syncing all around. That would have quickly become the de facto distributed issue tracking data format.

Instead, Github stuck it in a database, with a rate-limited API, and while this probably had as much to do with expediency, and a certain centralized mindset, as intentional lock-in at first, it's now become such good lock-in that Microsoft felt Github was worth $7 billion.

So, if whatever thing you're looking at instead of Github doesn't do this, it's at worst hoping to emulate that, or at best it's neglecting an opportunity to get us out of the trap we now find ourselves in.


[1] Although in the case of a P2P system which uses a distributed data structure, that can have many of the same benefits as using git. So, git-ssb, which stores issues etc as ssb messages, is just as good, for example.

Rondam Ramblings: PSA: Blogger comment notifications appear to be kerfliggered

I normally get an email notification whenever anyone posts a comment here, but I just noticed that this feature doesn't seem to be working any more.  I hope this is temporary, but I wouldn't bet my life savings on it.  I don't think the Blogger platform is a top priority for Google.  So until I can figure out what to do about it just be aware that I might not be as responsive to comments as I

Krebs on Security: Further Down the Trello Rabbit Hole

Last month’s story about organizations exposing passwords and other sensitive data via collaborative online spaces at Trello.com only scratched the surface of the problem. A deeper dive suggests a large number of government agencies, marketing firms, healthcare organizations and IT support companies are publishing credentials via public Trello boards that quickly get indexed by the major search engines.

By default, Trello boards for both enterprise and personal use are set to either private (requires a password to view the content) or team-visible only (approved members of the collaboration team can view).

But individual users may be able to manually share personal boards that include personal or proprietary employer data, information that gets cataloged by Internet search engines and made available to anyone with a Web browser.

David Shear is an analyst at Flashpoint, a New York City based threat intelligence company. Shear spent several weeks last month exploring the depths of sensitive data exposed on Trello. Amid his digging, Shear documented hundreds of public Trello boards that were exposing passwords and other sensitive information. KrebsOnSecurity worked with Shear to document and report these boards to Trello.

Shear said he’s amazed at the number of companies selling IT support services that are using Trello not only to store their own passwords, but even credentials to manage customer assets online.

“There’s a bunch of different IT shops using it to troubleshoot client requests, and to do updates to infrastructure,” Shear said. “We also found a Web development team that’s done a lot of work for various dental offices. You could see who all their clients were and see credentials for clients to log into their own sites. These are IT companies doing this. And they tracked it all via [public] Trello pages.”

One particularly jarring misstep came from someone working for Seceon, a Westford, Mass. cybersecurity firm that touts the ability to detect and stop data breaches in real time. But until a few weeks ago the Trello page for Seceon featured multiple usernames and passwords, including credentials to log in to the company’s WordPress blog and iPage domain hosting.

Credentials shared on Trello by an employee of Seceon, a cybersecurity firm.

Shear also found that a senior software engineer working for Red Hat Linux in October 2017 posted administrative credentials to two different servers apparently used to test new builds.

Credentials posted by a senior software engineer at Red Hat.

The Maricopa County Department of Public Health (MCDPH) in Arizona used public Trello boards to document a host of internal resources that are typically found behind corporate intranets, such as this board that aggregated information for new hires (including information about how to navigate the MCDPH’s payroll system):

The (now defunct) Trello page for the Maricopa County Department of Public Health.

Even federal health regulators have made privacy missteps with Trello. Shear’s sleuthing uncovered a public Trello page maintained by HealthIT.gov — the official Web site of the National Coordinator for Health Information Technology, a component of the U.S. Department of Health and Human Services (HHS) — that was leaking credentials.

There appear to be a great many marketers and realtors who are using public Trello boards as their personal password notepads. One of my favorites is a Trello page maintained by a “virtual assistant” who specializes in helping realtors find new clients and sales leads. Apparently, this person re-used her Trello account password somewhere else (and/or perhaps re-used it from a list of passwords available on her Trello page), and as a result someone added a “You hacked” card to the assistant’s Trello board, urging her to change the password.

One realtor from Austin, Texas who posted numerous passwords to her public Trello board apparently had her Twitter profile hijacked and defaced with a photo featuring a giant Nazi flag and assorted Nazi memorabilia. It’s not clear how the hijacker obtained her password, but it appears to have been on Trello for some time.

Other entities that inadvertently shared passwords for private resources via public Trello boards included a Chinese aviation authority; the International AIDS Society; and the global technology consulting and research firm Analysis Mason, which also exposed its Twitter account credentials on Trello until very recently.

Trello responded to this report by making private many of the boards referenced above; other reported boards appear to remain public, minus the sensitive information. Trello said it was working with Google and other search engine providers to have any cached copies of the exposed boards removed.

“We have put many safeguards in place to make sure that public boards are being created intentionally and have clear language around each privacy setting, as well as persistent visibility settings at the top of each board,” a Trello spokesperson told KrebsOnSecurity in response to this research. “With regard to the search-engine indexing, we are currently sending the correct HTTP response code to Google after a board is made private. This is an automated, immediate action that happens upon users making the change. But we are trying to see if we can speed up the time it takes Google to realize that some of the URLs are no longer available.”

If a Trello board is Team Visible it means any members of that team can view, join, and edit cards. If a board is Private, only members of that specific board can see it. If a board is Public, anyone with the link to the board can see it.

Flashpoint’s Shear said Trello should be making a more concerted effort to proactively find sensitive data exposed by its users. For example, Shear said Trello’s platform could perform some type of automated analysis that looks for specific keywords (like “password”) and, if the page is public, display a reminder to the board’s author about how to make the page private.

“They could easily do input validation on things like passwords if they’re not going to proactively search their own network for this stuff,” Shear said.

Trello co-founder Michael Pryor said the company was grateful for the suggestion and would consider it.

“We are looking at other cloud apps of our size and how they balance the vast majority of useful sharing of public info with helping people not make a mistake,” Pryor said. “We’ll continue to explore the topic and potential solutions, and appreciate the work you put into the list you shared with us.”

Shear said he doubts his finds even come close to revealing the true extent of the sensitive data organizations are exposing via misconfigured Trello boards. He added that even in cases where public Trello boards don’t expose passwords or financial data, the information that countless organizations publish to these boards can provide plenty of ammunition for phishers and cybercriminals looking to target specific entities.

“I don’t think we’ve even uncovered the real depth of what’s probably there,” he said. “I’d be surprised if someone isn’t at least trying to collect a bunch of user passwords and configuration files off lots of Trello accounts for bad guy operations.”

Update, 11:56 p.m. ET: Corrected location of MCDPH.

Worse Than Failure: Sponsor Post: Six Months of Free Monitoring at Panopta for TDWTF Readers

You may not have noticed, but in the footer of the site, there is a little banner that says:

Monitored by Panopta

Actually, The Daily WTF has been monitored with Panopta for nearly ten years. I've also been using it to monitor Inedo's important public and on-prem servers, and to have it email and text us when there are issues.

I started using Panopta because it's easy to use and allows you to monitor using a number of different methods (public probes, private probes and server agents). I may install agents for more detailed monitoring going forward, but having Panopta probe HTTP, HTTPS, VPN, and SMTP is sufficient for our needs at the moment. We send custom HTTP payloads to mimic our actual use cases, especially with our registration APIs.

If you're not using a monitoring / alerting platform or want to try something new, now's the time to start!

Panopta is offering The Daily WTF readers six months of free monitoring!

Give it a shot. You may find yourself coming to dread those server outage emails and SMS messages. PROTIP: configure the alerting workflow to send outage notices to someone else to worry about.

Disclaimer: while Panopta is not a paid sponsor, they have been generously providing free monitoring for The Daily WTF (and Inedo) because they're fans of the site; I thought it was high time to tell you about them!

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

Cryptogram: The Habituation of Security Warnings

We all know that it happens: when we see a security warning too often -- and without effect -- we start tuning it out. A new paper uses fMRI, eye tracking, and field studies to prove it.

EDITED TO ADD (6/6): This blog post summarizes the findings.

Planet Debian: Russell Coker: BTRFS and SE Linux

I’ve had problems with systems running SE Linux on BTRFS losing the XATTRs used for storing the SE Linux file labels after a power outage.

Here is the link to the patch that fixes this [1]. Thanks to Hans van Kranenburg and Holger Hoffstätte for the information about this patch which was already included in kernel 4.16.11. That was uploaded to Debian on the 27th of May and got into testing about the time that my message about this issue got to the SE Linux list (which was a couple of days before I sent it to the BTRFS developers).

The kernel from Debian/Stable still has the issue. So using a testing kernel might be a good option to deal with this problem at the moment.

Below is the information on reproducing this problem. It may be useful for people who want to reproduce similar problems. Also all sysadmins should know about “reboot -nffd”, if something really goes wrong with your kernel you may need to do that immediately to prevent corrupted data being written to your disks.

The command “reboot -nffd” (kernel reboot without flushing kernel buffers or writing status) when run on a BTRFS system with SE Linux will often result in /var/log/audit/audit.log being unlabeled. It also results in some systemd-journald files like /var/log/journal/c195779d29154ed8bcb4e8444c4a1728/system.journal being unlabeled but that is rarer. I think that the same problem afflicts both systemd-journald and auditd but it’s a race condition that on my systems (both production and test) is more likely to affect auditd.

root@stretch:/# xattr -l /var/log/audit/audit.log 
security.selinux: 
0000   73 79 73 74 65 6D 5F 75 3A 6F 62 6A 65 63 74 5F    system_u:object_ 
0010   72 3A 61 75 64 69 74 64 5F 6C 6F 67 5F 74 3A 73    r:auditd_log_t:s 
0020   30 00                                              0.

SE Linux uses the xattr “security.selinux”; you can see what it’s doing with xattr(1), but generally using “ls -Z” is easiest.

If this issue just affected “reboot -nffd” then a solution might be to just not run that command. However this affects systems after a power outage.

I have reproduced this bug with kernel 4.9.0-6-amd64 (the latest security update for Debian/Stretch which is the latest supported release of Debian). I have also reproduced it in an identical manner with kernel 4.16.0-1-amd64 (the latest from Debian/Unstable). For testing I reproduced this with a 4G filesystem in a VM, but in production it has happened on BTRFS RAID-1 arrays, both SSD and HDD.

#!/bin/bash 
set -e 
COUNT=$(ps aux|grep [s]bin/auditd|wc -l) 
date 
if [ "$COUNT" = "1" ]; then 
 echo "all good" 
else 
 echo "failed" 
 exit 1 
fi

Firstly, the above is the script /usr/local/sbin/testit; I test for auditd running because it aborts if the context on its log file is wrong. When SE Linux is in enforcing mode an incorrect/missing label on the audit.log file causes auditd to abort.

root@stretch:~# ls -liZ /var/log/audit/audit.log 
37952 -rw-------. 1 root root system_u:object_r:auditd_log_t:s0 4385230 Jun  1 12:23 /var/log/audit/audit.log

Above is before I do the tests.

while ssh stretch /usr/local/sbin/testit ; do 
 ssh stretch "reboot -nffd" > /dev/null 2>&1 & 
 sleep 20 
done

Above is the shell code I run to do the tests. Note that the VM in question runs on SSD storage which is why it can consistently boot in less than 20 seconds.

Fri  1 Jun 12:26:13 UTC 2018 
all good 
Fri  1 Jun 12:26:33 UTC 2018 
failed

Above is the output from the shell code in question. After the first reboot it fails. The probability of failure on my test system is greater than 50%.

root@stretch:~# ls -liZ /var/log/audit/audit.log  
37952 -rw-------. 1 root root system_u:object_r:unlabeled_t:s0 4396803 Jun  1 12:26 /var/log/audit/audit.log

Now the result. Note that the inode has not changed. I could understand a newly created file missing an xattr, but this is an existing file which shouldn’t have had its xattr changed. But somehow it gets corrupted.

The first possibility I considered was that SE Linux code might be at fault. I asked on the SE Linux mailing list (I haven’t been involved in SE Linux kernel code for about 15 years) and was informed that this isn’t likely at all. There have been no problems like this reported with other filesystems.
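
As an aside, if you are bitten by this, the label can usually be restored from the loaded policy and auditd restarted afterwards (a sketch, assuming the stock Debian policy defines auditd_log_t for /var/log/audit):

restorecon -v /var/log/audit/audit.log
systemctl restart auditd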

Worse Than Failure: CodeSOD: Many Happy Returns

We've all encountered a situation where changing requirements caused some function that had a single native return type to need to return a second value. One possible solution is to put the two return values in some wrapper class as follows:

  class ReturnValues {
    private int    numDays;
    private String lastName;

    public ReturnValues(int i, String s) {
      numDays  = i;
      lastName = s;
    }

    public int    getNumDays()  { return numDays;  }
    public String getLastname() { return lastName; }
  }

It is trivial to add additional return values to this mechanism. If this is used as the return value to an interface function and you don't have access to change the ReturnValues object itself, you can simply subclass the ReturnValues wrapper to include additional fields as needed and return the base class reference.

Then you see something like this spread out over a codebase and wonder if maybe they should have been just a little less agile and that perhaps a tad more planning was required:

  class AlsoReturnTransactionDate extends ReturnValues {
    private Date txnDate;
    public AlsoReturnTransactionDate(int i, String s, Date td) {
      super(i,s);
      txnDate = td;
    }
    public Date getTransactionDate() { return txnDate; }
  }
  
  class AddPriceToReturn extends AlsoReturnTransactionDate {
    private BigDecimal price;
    public AddPriceToReturn(int i, String s, Date td, BigDecimal px) {
      super(i,s,td);
      price = px;
    }
    public BigDecimal getPrice() { return price; }
  }

  class IncludeTransactionData extends AddPriceToReturn {
    private Transaction txn;
    public IncludeTransactionData(int i, String s, Date td, BigDecimal px, Transaction t) {
      super(i,s,td,px);
      txn = t;
    }
    public Transaction getTransaction() { return txn; }
  }

  class IncludeParentTransactionId extends IncludeTransactionData {
    private long id;
    public IncludeParentTransactionId(int i, String s, Date td, BigDecimal px, Transaction t, long id) {
      super(i,s,td,px,t);
      this.id = id;
    }
    public long getParentTransactionId() { return id; }
  }

  class ReturnWithRelatedData extends IncludeParentTransactionId {
    private RelatedData rd;
    public ReturnWithRelatedData(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd) {
      super(i,s,td,px,t,id);
      this.rd = rd;
    }
    public RelatedData getRelatedData() { return rd; }
  }

  class ReturnWithCalculatedFees extends ReturnWithRelatedData {
    private BigDecimal calcedFees;
    public ReturnWithCalculatedFees(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf) {
      super(i,s,td,px,t,id,rd);
      calcedFees = cf;
    }
    public BigDecimal getCalculatedFees() { return calcedFees; }
  }

  class ReturnWithExpiresDate extends ReturnWithCalculatedFees {
    private Date expiresDate;
    public ReturnWithExpiresDate(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed) {
      super(i,s,td,px,t,id,rd,cf);
      expiresDate = ed;
    }
    public Date getExpiresDate() { return expiresDate; }
  }

  class ReturnWithRegulatoryQuantities extends ReturnWithExpiresDate {
    private RegulatoryQuantities regQty;
    public ReturnWithRegulatoryQuantities(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed, RegulatoryQuantities rq) {
      super(i,s,td,px,t,id,rd,cf,ed);
      regQty = rq;
    }
    public RegulatoryQuantities getRegulatoryQuantities() { return regQty; }
  }

  class ReturnWithPriorities extends ReturnWithRegulatoryQuantities {
    private Map<String,Double> priorities;
    public ReturnWithPriorities(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed, RegulatoryQuantities rq, Map<String,Double> p) {
      super(i,s,td,px,t,id,rd,cf,ed,rq);
      priorities = p;
    }
    public Map<String,Double> getPriorities() { return priorities; }
  }

  class ReturnWithRegulatoryValues extends ReturnWithPriorities {
    private Map<String,Double> regVals;
    public ReturnWithRegulatoryValues(int i, String s, Date td, BigDecimal px, Transaction t, long id, RelatedData rd, BigDecimal cf, Date ed, RegulatoryQuantities rq, Map<String,Double> p, Map<String,Double> rv) {
        super(i,s,td,px,t,id,rd,cf,ed,rq,p);
        regVals = rv;
    }
    public Map<String,Double> getRegulatoryValues() { return regVals; }
  }

The icing on the cake is that everywhere the added values are used, the base return type has to be cast to at least the level that contains the needed field.
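
To illustrate, a hypothetical call site (not from the submission) ends up looking something like this:

  // The interface still returns the base ReturnValues type, so every consumer
  // has to down-cast to at least the subclass that carries the field it needs.
  ReturnValues rv = service.processOrder(request);   // hypothetical interface call
  BigDecimal fees = ((ReturnWithCalculatedFees) rv).getCalculatedFees();
  Date expires    = ((ReturnWithExpiresDate) rv).getExpiresDate();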

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Debian: Evgeni Golov: Not-So-Self-Hosting

I planned to write about this for quite some time now (last time end of April), and now, thanks to the GitHub acquisition by Microsoft and all that #movingtogitlab traffic, I am finally sitting here and writing these lines.

This post is not about Microsoft, GitHub or GitLab, nor is it about any other SaaS solution out there; the named companies and products are just examples. It's more about "do you really want to self-host?"

Every time a big company acquires, shuts down or changes an online service (SaaS - Software as a Service), you hear people say "told you so, you should better have self-hosted from the beginning". And while I do run quite a lot of own infrastructure, I think this statement is too general and does not work well for many users out there.

Software as a Service

There are many code-hosting SaaS offerings: GitHub (proprietary), GitLab (open core), Pagure (FOSS) to name just a few. And while their licenses, ToS, implementations and backgrounds differ, they have a few things in common.

Benefits:

  • (sort of) centralized service
  • free (as in beer) tier available
  • high number of users (and potential collaborators)
  • high number of hosted projects
  • good (fsvo "good") connection from around the globe
  • no maintenance required from the users

Limitations:

  • dependency on the interest/goodwill of the owner to continue the service
  • some features might require signing up for a paid tier

Overall, SaaS is handy if you're lazy, just want to get the job done and benefit from others being able to easily contribute to your code.

Hosted Solutions

All of the above mentioned services also offer a hosted solution: GitHub Enterprise, GitLab CE and EE, Pagure.

As those are software packages you can install essentially everywhere, you can host the service "in your basement", in the cloud or in any data center you have hardware or VMs running.

However, with self-hosting, the above list of common things shifts quite a bit.

Benefits:

  • the service is configured and secured exactly like you need it
  • the data remains inside your network/security perimeter if you want it

Limitations:

  • requires users to create their own account on your instance for collaboration
  • probably low number of users (and potential collaborators)
  • connection depends on your hosting connection
  • infrastructure (hardware, VM, OS, software) requires regular maintenance
  • dependency on your (free) time to keep the service running
  • dependency on your provider (network/hardware/VM/cloud)

I think especially the first and last points are very important here.

First, many contributions happen because someone sees something small and wants to improve it, be it a typo in the documentation, a formatting error in the manpage or a trivial improvement of the code. But these contributions only happen when the complexity to submit it is low. Nobody not already involved in OpenStack would submit a typo-fix to their Gerrit which needs a Launchpad account… A small web-edit on GitHub or GitLab on the other hand is quickly done, because "everybody" has an account anyways.

Second, while it is called "self-hosting", in most cases it's more of a "self-running" or "self-maintaining" as most people/companies don't own the whole infrastructure stack.

Let's take this website as an example (even though it does not host any Git repositories): the webserver runs in a container (LXC) on a VM I rent from netcup. In the past, netcup used to get their infrastructure from Hetzner - however I am not sure that this is still the case. So worst case, the hosting of this website depends on me maintaining the container and the container host, netcup maintaining the virtualization infrastructure and Hetzner maintaining the actual data center. This also implies that I have to trust those companies and their suppliers as I only "own" the VM upwards, not the underlying infrastructure and not the supporting infrastructure (network etc).

SaaS vs Hosted

There is no silver bullet to that. One important question is "how much time/effort can you afford?" and another "which security/usability constraints do you have?".

Hosted for a dedicated group

If you need a solution for a dedicated group (your work, a big FOSS project like Debian or a social group like riseup), a hosted solution seems like a good idea. Just ensure that you have enough infrastructure and people to maintain it as a 24x7 service or at least close to that, for a long time, as people will depend on your service.

The same also applies if you need/want to host your code inside your network/security perimeter.

Hosted for an individual

Contrary to a group, I don't think a hosted solution makes sense for an individual most of the time. The burden of maintenance quite often outweighs the benefits, especially as you'll have to keep track of (security) updates for the software and the underlying OS as otherwise the "I own my data" benefit becomes "everyone owns me" quite quickly. You also have to pay for the infrastructure, even if the OS and the software are FOSS.

You're also probably missing out on potential contributors, which might have an account on the common SaaS platforms, but won't submit a pull-request for a small change if they have to register on your individual instance.

SaaS for a dedicated group

If you don't want to maintain an own setup (resources/costs), you can also use a SaaS platform for a group. Some SaaS vendors will charge you for some features (they have to pay their staff and bills too!), but it's probably still cheaper than having the right people in-house unless you have them anyways.

You also benefit from a networking effect, as other users of the same SaaS platform can contribute to your projects "at no cost".

SaaS for an individual

For an individual, a SaaS solution is probably the best fit as it's free (as in beer) in the most cases and allows the user to do what they intend to do, instead of shaving yaks and stacking turtles (aka maintaining infrastructure instead of coding).

And you again get the networking effect of the drive-by contributors who would not sign up for a quick fix.

Selecting the right SaaS

When looking for a SaaS solution, try to answer the following questions:

  • Do you trust the service to be present next year? In ten years? Is there a sustainable business model?
  • Do you trust the service with your data?
  • Can you move between SaaS and hosted easily?
  • Can you move to a different SaaS (or hosted solution) easily?
  • Does it offer all the features and integrations you want/need?
  • Can you leverage the network effect of being on the same platform as others?

Selecting the right hosted solution

And answer these when looking for a hosted one:

  • Do you trust the vendor to ship updates next year? In ten years?
  • Do you understand the involved software stack, and are you willing to debug it when things go south?
  • Can you get additional support from the vendor (for money)?
  • Does it offer all the features and integrations you want/need?

So, do you really want to self-host?

I can't speak for you, but for my part, I don't want to run a full-blown Git hosting just for my projects, GitHub is just fine for that. And yes, GitLab would be equally good, but there is little reason to move at the moment.

And yes, I do run my own Nextcloud instance, mostly because I don't want to backup the pictures from my phone to "a cloud". YMMV.

Planet Debian: Thomas Lange: FAI 5.7

The new FAI release 5.7 is now available. Packages are uploaded to unstable and are available from the fai-project.org repository. I've also created new FAI ISO images and the special Ubuntu only installation FAI CD is now installing Ubuntu 18.04 aka Bionic. The FAI.me build service is also using the new FAI release.

In summary, the process for this release went very smoothly and I am happy that the update of the ISO images and the FAI.me service happened very shortly after the new release.

Planet Debian: Louis-Philippe Véronneau: Disaster a-Brewing

I brewed two new batches of beer last March and I've been so busy since I haven't had time to share how much of a failure it was.

See, after three years I thought I was getting better at brewing beer and the whole process of mashing, boiling, fermenting and bottling was supposed to be all figured out by now.

Turns out I was both greedy and unlucky and - woe is me! - one of my carboys exploded. Imagine 15 liters (out of a 19L batch) spilling out in my bedroom at 1AM with such force that the sound of the rubber bung shattering on the ceiling woke me up in panic. I legitimately thought someone had been shot in my bedroom.

This carboy was full to the brim prior to the beerxplosion

The aftermath left the walls, the ceiling and the wooden floor covered in thick semi-sweet brown liquid.

This was the first time I tried a "new" brewing technique called parti-gyle. When doing a parti-gyle, you reuse the same grains twice to make two different batches of beer: typically, the first batch is strong, whereas the second one is pretty low in alcohol. Parti-gyle used to be the way beer was brewed a few hundred years ago. The Belgian monks made their Tripels with the first mash, the Dubbels with the second mash, and the final mash was brewed with funky yeasts to make lighter beers like Saisons.

The reason for my carboy exploding was twofold. First of all, I was greedy and filled the carboy too much for the high-gravity porter I was brewing. When your wort is very sweet, the yeast tends to degas a whole lot more and needs more head space not to spill over. At this point, any homebrewer with experience will revolt and say something like "Why didn't you use a blow-off tube you dummy!". A blow-off tube is a tube that comes out the airlock into a large tub of water and helps contain the effects of violent primary fermentation. With a blow-off tube, instead of having beer spill out everywhere (or worse, having your airlock completely explode), the mess is contained to the water vessel the tube is in.

The thing is, I did use a blow-off tube. Previous experience taught me how useful they can be. No, the real reason my carboy exploded was that my airlock clogged up and let pressure build up until the bung gave way. The particular model of airlock I used was a three piece airlock with a little cross at the end of the plastic tube[1]. Turns out that little cross accumulated yeast and when that yeast dried up, it created a solid plug. Needless to say, my airlocks don't have these little crosses anymore...

On a more positive note, it was also the first time I dry-hopped with full cones instead of pellets. I had some leftover cones in the freezer from my summer harvest and decided to use them. The result was great as the cones make for less trub than pellets when dry-hopping.

Recipes

What was left of the porter came out great. Here's the recipe if you want to try to replicate it. The second mash was also surprisingly good and turned out to be a very drinkable brown beer.

Closeup shot of hops floating in my carboy

Party Porter (first mash)

The target boil volume is 23L and the target batch size 17L. Mash at 65°C and ferment at 19°C.

Since this is a parti-gyle, do not sparge. If you don't reach the desired boil size in the kettle, top it off with water until you reach 23L.

Black Malt gives very nice toasty aromas to this porter, whereas the Oat Flakes and the unmalted Black Barley make for a nice black and foamy head.

Malt:

  • 5.7 kg x Superior Pale Ale
  • 450 g x Amber Malt
  • 450 g x Black Barley (not malted)
  • 400 g x Oat Flakes
  • 300 g x Crystal Dark
  • 200 g x Black Malt

Hops:

  • 13 g x Bravo (15.5% alpha acid) - 60 min Boil
  • 13 g x Bramling Cross (6.0% alpha acid) - 30 min Boil
  • 13 g x Challenger (7.0% alpha acid) - 30 min Boil

Yeast:

  • White Labs - American Ale Yeast Blend - WLP060

Party Brown (second mash)

The target boil volume is 26L and the target batch size 18L. Mash at 65°C for over an hour, sparge slowly and ferment at 19°C.

The result is a very nice table beer.

Malt:

same as for the Party Porter, since we are doing a parti-gyle.

Hops:

  • 31 g x Northern Brewer (9.0% alpha acid) - 60 min Boil
  • 16 g x Kent Goldings (5.5% alpha acid) - 15 min Boil
  • 13 g x Kent Goldings (5.5% alpha acid) - 5 min Boil
  • 13 g x Chinook (cones) - Dry Hop

Yeast:

  • White Labs - Nottingham Ale Yeast - WLP039

  1. The same kind of cross you can find in sinks to keep you from inadvertently dropping objects down the drain. 


Rondam Ramblings: Blame where it's due

I can't say I'm even a little bit surprised that the summit with North Korea has fallen through.  I wouldn't even bother blogging about this except that back in April I expressed some cautious optimism that maybe, just maybe, Trump's bull-in-the-china-shop tactics could be working.  Nothing makes me happier than having my pessimistic prophecies be proven wrong, but alas, Donald Trump seems to be

Planet Linux Australia: Michael Still: Mirroring all your repos from github


So let me be clear here, I don’t think it’s a bad thing that Microsoft bought github. No one is forcing you to use their services; in fact they make it trivial to stop using them. So what’s the big deal?

I’ve posted about a few git mirror scripts I run at home recently: one to mirror gerrit repos; and one to mirror arbitrary github users.

It was therefore trivial to whip up a slightly nicer script intended to help you forklift your repos out of github if you’re truly concerned. It’s posted on github now (irony intended).

Now you can just do something like:

$ pip install -U -r requirements.txt
$ python download.py --github_token=foo --username=mikalstill

I intend to add support for auto-creating and importing gitlab repos into the script, but haven’t gotten around to that yet. Pull requests welcome.



Rondam Ramblings: SCOTUS got the Masterpiece Cake Shop decision badly wrong

The Supreme Court issued its much-anticipated decision in the gay wedding cake case yesterday.  It hasn't made as much of a splash as expected because the justices tried to split the baby and sidestep making what might otherwise have been a contentious decision.  But I think they failed and got it wrong anyway. The gist of the ruling was that Jack Phillips, the cake shop owner, wins the case

Planet Debian: Thomas Goirand: Using a dummy network interface

For a long time, I’ve been very much annoyed by network setups on virtual machines. Either you choose a bridge interface (which is very easy with something like Virtualbox), or you choose NAT. The issue with NAT is that you can’t easily get into your VM (for example, virtualbox doesn’t expose the gateway to your VM). With bridging, you get into trouble because your VM will attempt to get DHCP from the outside network, which means that first, you’ll get a different IP depending on where your laptop runs, and second, the external server may refuse your VM because it’s not authenticated (for example because of a MAC address filter, or 802.1x auth).

But there’s a solution to it. I’m now very happy with my network setup, which is using a dummy network interface. Let me share how it works.

In the modern Linux kernel, there’s a “fake” network interface available through a module called “dummy”. To add such an interface, simply load the kernel module (i.e. “modprobe dummy”) and start playing. Then you can bridge that interface, add a tap device to the bridge, and plug your VM into it. Since the dummy interface really lives in your computer, you do have access to this internal network with a route to it.

I’m using this setup for connecting both KVM and Virtualbox VMs, you can even mix both. For Virtualbox, simply use the dropdown list for the bridge. For KVM, use something like this in the command line: -device e1000,netdev=net0,mac=08:00:27:06:CF:CF -netdev tap,id=net0,ifname=mytap0,script=no,downscript=no
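
For context, a complete (hypothetical) KVM invocation using that fragment could look like this; the disk image name and memory size are placeholders, and it has to run with enough privileges to attach to the pre-created mytap0:

qemu-system-x86_64 -enable-kvm -m 2048 -hda disk.qcow2 \
  -device e1000,netdev=net0,mac=08:00:27:06:CF:CF \
  -netdev tap,id=net0,ifname=mytap0,script=no,downscript=no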

Here’s a simple script to set that up, with, on top, masquerading for both IPv4 and IPv6:

# Load the dummy interface module
modprobe dummy

# Create a dummy interface called mynic0
ip link set name mynic0 dev dummy0

# Set its MAC address
ifconfig mynic0 hw ether 00:22:22:dd:ee:ff

# Add a tap device
ip tuntap add dev mytap0 mode tap user root

# Create a bridge, and bridge to it mynic0 and mytap0
brctl addbr mybr0
brctl addif mybr0 mynic0
brctl addif mybr0 mytap0

# Set IP addresses on the bridge
ifconfig mybr0 192.168.100.1 netmask 255.255.255.0 up
ip addr add fd5d:12c9:2201:1::1/64 dev mybr0

# Make sure all interfaces are up
ip link set mybr0 up
ip link set mynic0 up
ip link set mytap0 up

# Set basic masquerading for both ipv4 and 6
iptables -I FORWARD -j ACCEPT
iptables -t nat -I POSTROUTING -s 192.168.100.0/24 -j MASQUERADE
ip6tables -I FORWARD -j ACCEPT
ip6tables -t nat -I POSTROUTING -s fd5d:12c9:2201:1::/64 -j MASQUERADE

Planet Debian: Daniel Pocock: Public Money Public Code: a good policy for FSFE and other non-profits?

FSFE has been running the Public Money Public Code (PMPC) campaign for some time now, requesting that software produced with public money be licensed for public use under a free software license. You can request a free box of stickers and posters here (donation optional).

Many non-profits and charitable organizations receive public money directly from public grants and indirectly from the tax deductions given to their supporters. If the PMPC argument is valid for other forms of government expenditure, should it also apply to the expenditures of these organizations?

Where do we start?

A good place to start could be FSFE itself. Donations to FSFE are tax deductible in Germany, the Netherlands and Switzerland. Therefore, the organization is partially supported by public money.

Personally, I feel that for an organization like FSFE to be true to its principles and its affiliation with the FSF, it should be run without any non-free software or cloud services.

However, in my role as one of FSFE's fellowship representatives, I proposed a compromise: rather than my preferred option, an immediate and outright ban on non-free software in FSFE, I simply asked the organization to keep a register of dependencies on non-free software and services, by way of a motion at the 2017 general assembly:

The GA recognizes the wide range of opinions in the discussion about non-free software and services. As a first step to resolve this, FSFE will maintain a public inventory on the wiki listing the non-free software and services in use, including details of which people/teams are using them, the extent to which FSFE depends on them, a list of any perceived obstacles within FSFE for replacing/abolishing each of them, and for each of them a link to a community-maintained page or discussion with more details and alternatives. FSFE also asks the community for ideas about how to be more pro-active in spotting any other non-free software or services creeping into our organization in future, such as a bounty program or browser plugins that volunteers and staff can use to monitor their own exposure.

Unfortunately, it failed to receive enough votes (minutes: item 24, votes: 0 for, 21 against, 2 abstentions)

In a blog post on the topic of using proprietary software to promote freedom, FSFE's Executive Director Jonas Öberg used the metaphor of taking a journey. Isn't a journey more likely to succeed if you know your starting point? Wouldn't it be even better to have a map that shows which roads are a dead end?

In any IT project, it is vital to understand your starting point before changes can be made. A register like this would also serve as a good model for other organizations hoping to secure their own freedoms.

For a community organization like FSFE, there is significant goodwill from volunteers and other free software communities. A register of exposure to proprietary software would allow FSFE to crowdsource solutions from the community.

Back in 2018

I'll be proposing the same motion again for the 2018 general assembly meeting in October.

If you can see something wrong with the text of the motion, please help me improve it so it may be more likely to be accepted.

Offering a reward for best practice

I've observed several discussions recently where people have questioned the impact of FSFE's campaigns. How can we measure whether the campaigns are having an impact?

One idea may be to offer an annual award for other non-profit organizations, outside the IT domain, who demonstrate exemplary use of free software in their own organization. An award could also be offered for some of the individuals who have championed free software solutions in the non-profit sector.

An award program like this would help to showcase best practice and provide proof that organizations can run successfully using free software. Seeing compelling examples of success makes it easier for other organizations to believe freedom is not just a pipe dream.

Therefore, I hope to propose an additional motion at the FSFE general assembly this year, calling for an award program to commence in 2019 as a new phase of the PMPC campaign.

Please share your feedback

Any feedback on this topic is welcome through the FSFE discussion list. You don't have to be a member to share your thoughts.

Krebs on Security: Researcher Finds Credentials for 92 Million Users of DNA Testing Firm MyHeritage

MyHeritage, an Israel-based genealogy and DNA testing company, disclosed today that a security researcher found on the Internet a file containing the email addresses and hashed passwords of more than 92 million of its users.

MyHeritage says it has no reason to believe other user data was compromised, and it is urging all users to change their passwords. It says sensitive customer DNA data is stored on IT systems that are separate from its user database, and that user passwords were “hashed” — or churned through a mathematical model designed to turn them into unique pieces of gibberish text that is (in theory, at least) difficult to reverse.

MyHeritage did not say in its blog post which method it used to obfuscate user passwords, but suggested that it had added some uniqueness to each password (beyond the hashing) to make them all much harder to crack.

“MyHeritage does not store user passwords, but rather a one-way hash of each password, in which the hash key differs for each customer,” wrote Omer Deutsch, MyHeritage’s chief information security officer. “This means that anyone gaining access to the hashed passwords does not have the actual passwords.”

The company said the security researcher who found the user database reported it on Monday, June 4. The file contained the email addresses and hashed passwords of 92,283,889 users who created accounts at MyHeritage up to and including Oct. 26, 2017, which MyHeritage says was “the date of the breach.”

MyHeritage added that it is expediting work on an upcoming two-factor authentication option that the company plans to make available to all MyHeritage users soon.

“This will allow users interested in taking advantage of it, to authenticate themselves using a mobile device in addition to a password, which will further harden their MyHeritage accounts against illegitimate access,” the blog post concludes.

MyHeritage has not yet responded to requests for comment and clarification on several points. I will update this post if that changes.

ANALYSIS

MyHeritage’s repeated assurances that nothing related to user DNA ancestry tests and genealogy data was impacted by this incident are not reassuring. Much depends on the strength of the hashing routine used to obfuscate user passwords.

Thieves can use open-source tools to crack large numbers of passwords that are scrambled by weaker hashing algorithms (MD5 and SHA-1, e.g.) with very little effort. Passwords jumbled by more advanced hashing methods — such as Bcrypt — are typically far more difficult to crack, but I would expect any breach victim who was using Bcrypt to disclose this and point to it as a mitigating factor in a cybersecurity incident.

In its blog post, MyHeritage says it enabled a unique “hash key” for each user password. It seems likely the company is talking about adding random “salt” to each password, which can be a very effective method for blunting large-scale password cracking attacks (if implemented properly).
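To make the distinction concrete, here is a small Python sketch (my own illustration, not anything from MyHeritage’s code) of the difference between a fast, unsalted hash and a per-password salted, deliberately slow one using the bcrypt library:

# Illustration only: unsalted fast hash vs. per-password salted bcrypt hash.
import hashlib

import bcrypt  # pip install bcrypt

password = b"correct horse battery staple"

# Weak: unsalted MD5. Identical passwords produce identical hashes, and
# attackers can test billions of guesses per second on commodity hardware.
weak_hash = hashlib.md5(password).hexdigest()

# Better: bcrypt generates a random salt for each password and is
# deliberately slow, so every guess costs the attacker real CPU time.
salt = bcrypt.gensalt(rounds=12)
strong_hash = bcrypt.hashpw(password, salt)

# Verification re-uses the salt embedded in the stored hash.
assert bcrypt.checkpw(password, strong_hash)

If MyHeritage’s per-customer “hash key” amounts to something like that salt on a slow hash, it is a genuine mitigating factor; if it is merely a per-user tweak on a fast hash, much less so.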

If indeed the MyHeritage user database was taken and stored by a malicious hacker (as opposed to inadvertently exposed by an employee), there is a good chance that the attackers will be trying to crack all user passwords. And if any of those passwords are crackable, the attackers will then of course get access to the more personal data on those users.

In light of this and the sensitivity of the data involved, it would seem prudent for MyHeritage to simply expire all existing passwords and force a password reset for all of its users, instead of relying on them to do it themselves at some point (hopefully, before any attackers might figure out how to crack the user password hashes).

Finally, it’s astounding that 92 million+ users thought it was okay to protect such sensitive data with just a username and password. And that MyHeritage is only now getting around to developing two-factor solutions.

It’s now 2018, and two-factor authentication is not a new security technology by any stretch. A word of advice: If a Web site you trust with sensitive personal or financial information doesn’t offer some form of multi-factor authentication, it’s time to shop around.

Check out twofactorauth.org, and compare how your bank, email, Web/cloud hosting or domain name provider stacks up against the competition. If you find a competitor with better security, consider moving your data and business there.

Every company (including MyHeritage) likes to say that “your privacy and the security of your data are our highest priority.” Maybe it’s time we stopped patronizing companies that don’t outwardly demonstrate that priority.

For more on MyHeritage, check out this March 2018 story in The Atlantic about how the company recently mapped out a 13-million person family tree.

Update, June 6, 3:12 p.m. ET: MyHeritage just updated their statement to say that they are now forcing a password reset for all users. From the new section:

“To maximize the security of our users, we have started the process of expiring ALL user passwords on MyHeritage. This process will take place over the next few days. It will include all 92.3 million affected user accounts plus all 4 million additional accounts that have signed up to MyHeritage after the breach date of October 26, 2017.”

“As of now, we’ve already expired the passwords of more than half of the user accounts on MyHeritage. Users whose passwords were expired are forced to set a new password and will not be able to access their account and data on MyHeritage until they complete this. This procedure can only be done through an email sent to their account’s email address at MyHeritage. This will make it more difficult for any unauthorized person, even someone who knows the user’s password, to access the account.”

“We plan to complete the process of expiring all the passwords in the next few days, at which point all the affected passwords will no longer be usable to access accounts and data on MyHeritage. Note that other websites and services owned and operated by MyHeritage, such as Geni.com and Legacy Family Tree, have not been affected by the incident.”

Planet Debian: Jonathan McDowell: Getting started with Home Assistant

Having set up some MQTT sensors and controllable lights, the next step was to start tying things together with a nicer interface than mosquitto_pub and mosquitto_sub. I don’t yet have enough devices set up to be able to do some useful scripting (turning on the snug light when the study is cold is not helpful), but a web control interface makes things easier to work with as well as providing a suitable platform for expansion as I add devices.

There are various home automation projects out there to help with this. I’d previously poked openHAB and found it quite complex, and I saw reference to Domoticz which looked viable, but in the end I settled on Home Assistant, which is written in Python and has a good range of integrations available out of the box.

I shoved the install into a systemd-nspawn container (I have an Ansible setup which makes spinning one of these up with a basic Debian install simple, and it makes it easy to cleanly tear things down as well). One downside of Home Assistant is that it decides it’s going to install various Python modules once you actually configure up some of its integrations. This makes me a little uncomfortable, but I set it up with its own virtualenv to make it easy to see what had been pulled in. Additionally I separated out the logs, config and state database, all of which normally go in ~/.homeassistant/. My systemd service file went in /etc/systemd/system/home-assistant.service and looks like:

[Unit]
Description=Home Assistant
After=network-online.target

[Service]
Type=simple
User=hass
ExecStart=/srv/hass/bin/hass -c /etc/homeassistant --log-file /var/log/homeassistant/homeassistant.log

MemoryDenyWriteExecute=true
ProtectControlGroups=true
PrivateDevices=true
ProtectKernelTunables=true
ProtectSystem=true
RestrictRealtime=true
RestrictNamespaces=true

[Install]
WantedBy=multi-user.target

Moving the state database needs an edit to /etc/homeassistant/configuration.yaml (a default will be created on first startup, I’ll only mention the changes I made here):

recorder:
  db_url: sqlite:///var/lib/homeassistant/home-assistant_v2.db

I disabled the Home Assistant cloud piece, as I’m not planning on using it:

# cloud:

And the introduction card:

# introduction:

The existing MQTT broker was easily plumbed in:

mqtt:
  broker: mqtt-host
  username: hass
  password: !secret mqtt_password
  port: 8883
  certificate: /etc/ssl/certs/ca-certificates.crt

Then the study temperature sensor (part of the existing sensor block that had weather prediction):

sensor:
  - platform: mqtt
    name: "Study Temperature"
    state_topic: "collectd/mqtt.o362.us/mqtt/temperature-study"
    value_template: "{{ value.split(':')[1] }}"
    device_class: "temperature"
    unit_of_measurement: "°C"

The templating ability let me continue to log into MQTT in a format collectd could parse, while also being able to pull the information into Home Assistant.
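If I read the template right, the payload collectd publishes is something like an “epoch-timestamp:value” string, and the template simply keeps the part after the colon. A plain-Python equivalent of that transformation, with a made-up example payload, would be:

# Hypothetical collectd-style MQTT payload: "<epoch timestamp>:<reading>".
payload = "1528237342.497:19.6"

# Equivalent of the Jinja template "{{ value.split(':')[1] }}".
temperature = payload.split(":")[1]

print(temperature)  # prints "19.6"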

Finally the Sonoff controlled light:

light:
  - platform: mqtt
    name: snug
    command_topic: 'cmnd/sonoff-snug/power'

I set http_password (to prevent unauthenticated access) and mqtt_password in /etc/homeassistant/secrets.yaml. Then systemctl start home-assistant brought the system up on http://hass-host:8123/, and the default interface presented the study temperature and a control for the snug light, as well as the default indicators of whether the sun is up or not and the local weather status.

I do have a few niggles with Home Assistant:

  • Single password for access: There’s one password for accessing the API endpoint, so no ability to give different users different access or limit what an external integration can do.
  • Wants an entire subdomain: This is a common issue with webapps; they don’t want to live in a subdirectory under a main site (I also have this issue with my UniFi controller and Keybase, who don’t want to believe my main website is old skool with /~noodles/). There’s an open configurable webroot feature request, but no sign of it getting resolved. Sadly it involves work to both the backend and the frontend - I think a modicum of hacking could fix up the backend bits, but have no idea where to start with a Polymer frontend.
  • Installs its own software: I don’t like the fact the installation of Python modules isn’t an up front thing. I’d rather be able to pull a dependency file easily into Ansible and lock down the installation of new things. I can probably get around this by enabling plugins, allowing the modules to be installed and then locking down permissions but it’s kludgy and feels fragile.
  • Textual configuration: I’m not really sure I have a good solution to this, but it’s clunky to have to do all the configuration via a text file (and I love scriptable configuration). This isn’t something that’s going to work out of the box for non-technical users, and even for those of us happy hand editing YAML there’s a lot of functionality that’s hard to discover without some digging. One of my original hopes with Home Automation was to get a better central heating control and if it’s not usable by any household member it isn’t going to count as better.

Some of these are works in progress, some are down to my personal preferences. There’s active development, which is great to see, and plenty of documentation - both official on the project website, and in the community forums. And one of the nice things about tying everything together with MQTT is that if I do decide Home Assistant isn’t the right thing down the line, I should be able to drop in anything else that can deal with an MQTT broker.

Sociological Images: Staying Cool as Social Policy

This week I came across a fascinating working paper on air conditioning in schools by Joshua Goodman, Michael Hurwitz, Jisung Park, and Jonathan Smith. Using data from ten million students, the authors find a relationship between hotter school instruction days and lower PSAT scores. They also find that air conditioning offsets this problem, but students of color in lower income school districts are less likely to attend schools with adequate air conditioning, making them more vulnerable to the effects of hot weather.

Climate change is a massive global problem, and the heat is a deeply sociological problem, highlighting who has the means or the social ties to survive dangerous heat waves. For much of our history, however, air conditioning has been understood as a luxury good, from wealthy citizens in ancient Rome to cinemas in the first half of the twentieth century. Classic air conditioning ads make the point:

This is a key problem for making social policy in a changing world. If global temperatures are rising, at what point does adequate air conditioning become essential for a school to serve students? At what point is it mandatory to provide AC for the safety of residents, just like landlords have to provide heat? If a school has to undergo budget cuts today, I would bet that most politicians or administrators wouldn’t think to fix the air conditioning first. The estimates from Goodman and coauthors suggest that doing so could offset the cost, though, boosting learning to the tune of thousands of dollars in future earnings for students, all without a curriculum overhaul.

Making such improvements requires cultural changes as well as policy changes. We would need to shift our understanding of what air conditioning means and what it provides: security, rather than luxury. It also means we can’t always frame social policy as something that provides just the bare minimum; we also have to think about what it means to provide for a thriving society, rather than one that just squeaks by. In an era of climate change, it might be time to rethink the old cliché, “if you can’t stand the heat, get out of the kitchen.”

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Cryptogram: Regulating Bitcoin

Ross Anderson has a new paper on cryptocurrency exchanges. From his blog:

Bitcoin Redux explains what's going wrong in the world of cryptocurrencies. The bitcoin exchanges are developing into a shadow banking system, which do not give their customers actual bitcoin but rather display a "balance" and allow them to transact with others. However if Alice sends Bob a bitcoin, and they're both customers of the same exchange, it just adjusts their balances rather than doing anything on the blockchain. This is an e-money service, according to European law, but is the law enforced? Not where it matters. We've been looking at the details.

The paper.

Worse Than Failure: Representative Line: A Test Configuration

Tyler Zale's organization is an automation success story of configuration-as-code. Any infrastructure change is scripted, those scripts are tested, and deployments happen at the push of a button.

They'd been running so smoothly that Tyler was shocked when his latest automated pull request for changes to their HAProxy load balancer config triggered a stack of errors long enough to circle the moon and back.

The offending line in the test:

assert File(check_lbconfig).exists and File(check_lbconfig).size == 2884

check_lbconfig points to their load balancer config. Their test asserts that the file exists… and that it's exactly 2884 bytes long. Which, of course, raises its own question: if this worked for years, how on Earth was the file size never changing? I'll let Tyler explain:

To make matters worse, the file being checked is one of the test files, not the actual haproxy config being changed.

As it turns out, at least when it comes to the load balancer, they've never actually tested the live config script. In fact, Tyler is the one who broke their tests by actually changing the test config file and making his own assertions about what it should do.

It was a lot more changes before the tests actually became useful.
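A check that would actually catch something useful isn't hard to sketch. The following is just one possible shape (plain pytest rather than whatever framework the team actually uses, and it assumes the haproxy binary and the standard /etc/haproxy/haproxy.cfg path are available on the test host): validate the real config with haproxy's own check mode, and assert on content rather than on a byte count.

# Sketch of a more meaningful load balancer config test (pytest style).
import subprocess

LB_CONFIG = "/etc/haproxy/haproxy.cfg"  # the real config, not a test fixture


def test_haproxy_config_is_valid():
    # "haproxy -c -f <file>" parses the config and exits non-zero on errors.
    subprocess.run(["haproxy", "-c", "-f", LB_CONFIG], check=True)


def test_haproxy_config_mentions_expected_backend():
    with open(LB_CONFIG) as fh:
        config = fh.read()
    # Assert on the content we care about, not on an arbitrary file size.
    assert "backend" in config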


Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #162

Here’s what happened in the Reproducible Builds effort between Sunday May 27 and Saturday June 2 2018:

Packages reviewed and fixed, and bugs filed

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework that powers tests.reproducible-builds.org, including:

reproducible-builds.org website updates

There were a number of changes to the reproducible-builds.org website this week too, including:

Chris Lamb also updated the diffoscope.org website, including adding a progress bar animation as well as making the “try it online” link more prominent and correcting the source tarball link.

Misc.

This week’s edition was written by Bernhard M. Wiedemann, Chris Lamb, Holger Levsen, Jelle van der Waa, Santiago Torres & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Don Marti: Evil stuff on the Internet and following the money

Rule number one of dealing with the big Internet companies is: never complain to them about all the evil stuff they support. It's a waste of time and carpal tunnels. All of the major Internet companies have software, processes, and, most important, contract moderators, to attenuate complaints. After all, if Big Company employees came in to work and saw real user screenshots of the beheading videos, or the child abuse channel, or the ethnic cleansing memes, then that would harsh their mellow and severely interfere with their ability to, as they say in California, bro down and crush code.

Fortunately, we have better options than engaging with a process that's designed to mute a complaint. Follow the money.

Your average Internet ad does not come from some ominous all-seeing data-driven Panopticon. It's probably placed by some marketing person looking at an ad dashboard screen that's just as confusing to them as the ad placement is confusing to you.

So I'm borrowing the technique that "Spocko" started for talk radio, and Sleeping Giants scaled up for ads on extremist sites.

  • Contact a brand's marketing decision makers directly.

  • Briefly make a specific request.

  • Put your request in terms that make not granting it riskier and more time-consuming.

This should be pretty well known by now. What's new is a change in European privacy regulations. The famous European GDPR applies not just to Europeans, but to natural persons. So I'm going to test the idea that if I ask for something specific and easy to do, it will be easier for people to just do it, instead of having to figure out that (1) they have a different policy for people who they won't honor GDPR requests from and (2) they can safely assign me to the non-GDPR group and ignore me.

My simple request is not to include me in a Facebook Custom Audience. I can find the brands that are doing this by downloading ad data from Facebook, and here's a letter-making web thingy that I can use. Try it if you like. I'll follow up with how it's going.

Planet Debian: Russ Allbery: Review: The Obelisk Gate

Review: The Obelisk Gate, by N.K. Jemisin

Series: The Broken Earth #2
Publisher: Orbit
Copyright: August 2016
ISBN: 0-316-22928-8
Format: Kindle
Pages: 448

The Obelisk Gate is the sequel to The Fifth Season and picks up right where it left off. This is not a series to read out of order.

The complexity of The Fifth Season's three entwined stories narrows down to two here, which stay mostly distinct. One follows Essun, who found at least a temporary refuge at the end of the previous book and now is split between learning a new community and learning more about the nature of the world and orogeny. The second follows Essun's daughter, whose fate had been left a mystery in the first book. This is the middle book of a trilogy, and it's arguably less packed with major events than the first book, but the echoing ramifications of those events are vast and provide plenty to fill a novel. The Obelisk Gate never felt slow. The space between major events is filled with emotional processing and revelations about the (excellent) underlying world-building.

We do finally learn at least something about the stone-eaters, although many of the details remain murky. We also learn something about Alabaster's goals, which were the constant but mysterious undercurrent of the first book. Mixed with this is the nature of the Guardians (still not quite explicit, but much clearer now than before), the purpose of the obelisks, something of the history that made this world such a hostile place, and the underlying nature of orogeny.

The last might be a touch disappointing to some readers (I admit it was a touch disappointing to me). There are enough glimmers of forgotten technology and alternative explanations that I was wondering if Jemisin was setting up a quasi-technological explanation for orogeny. This book makes it firmly clear that she's not: this is a fantasy, and it involves magic. I have a soft spot in my heart for apparent magic that's some form of technology, so I was a bit sad, but I do appreciate the clarity. The Obelisk Gate is far more open with details and underlying systems (largely because Essun is learning more), which provides a lot of meat for the reader to dig into and understand. And it remains a magitech world that creates artifacts with that magic and uses them (or, more accurately, used them) to build advanced civilizations. I still see some potential pitfalls for the third book, depending on how Jemisin reconciles this background with one quasi-spiritual force she's introduced, but the world building has been so good that I have high hopes those pitfalls will be avoided.

The world-building is not the best part of this book, though. That's the characters, and specifically the characters' emotions. Jemisin manages the feat of both giving protagonists enough agency that the story doesn't feel helpless while still capturing the submerged rage and cautious suspicion that develops when the world is not on your side. As with the first book of this series, Jemisin captures the nuances, variations, and consequences of anger in a way that makes most of fiction feel shallow.

I realized, while reading this book, that so many action-oriented and plot-driven novels show anger in only two ways, which I'll call "HULK SMASH!" and "dark side" anger. The first is the righteous anger when the protagonist has finally had enough, taps some heretofore unknown reservoir of power, and brings the hurt to people who greatly deserved it. The second is the Star Wars cliche: anger that leads to hate and suffering, which the protagonist has to learn to control and the villain gives into. I hadn't realized how rarely one sees any other type of anger until Jemisin so vividly showed me the vast range of human reaction that this dichotomy leaves out.

The most obvious missing piece is that both of those modes of anger are active and empowered. Both are the anger of someone who can change the world. The argument between them is whether anger changes the world in a good way or a bad way, but the ability of the angry person to act on that anger and for that anger to be respected in some way by the world is left unquestioned. One might, rarely, see helpless anger, but it's usually just the build-up to a "HULK SMASH!" moment (or, sometimes, leads to a depressing sort of futility that makes me not want to read the book at all).

The Obelisk Gate felt like a vast opening-up of emotional depth that has a more complicated relationship to power: hard-earned bitterness that brings necessary caution, angry cynicism that's sometimes wrong but sometimes right, controlled anger, anger redirected as energy into other actions, anger that flares and subsides but doesn't disappear. Anger that one has to live with, and work around, and understand, instead of getting an easy catharsis. Anger with tradeoffs and sacrifices that the character makes consciously, affected by emotion but not driven by it. There is a moment in this book where one character experiences anger as an overwhelming wave of tiredness, a sharp realization that they're just so utterly done with being angry all the time, where the emotion suddenly shifts into something more introspective. It was a beautifully-captured moment of character depth that I don't remember seeing in another book.

This may sound like it would be depressing and exhausting to read, but at least for me it wasn't at all. I didn't feel like I was drowning in negative emotions — largely, I think, because Jemisin is so good at giving her characters agency without having the world give it to them by default. The protagonists are self-aware. They know what they're angry about, they know when anger can be useful and when it isn't, and they know how to guide it and live with it. It feels more empowering because it has to be fought for, carved out of a hostile world, earned with knowledge and practice and stubborn determination. Particularly in Essun, Jemisin is writing an adult whose life is full of joys and miseries, who doesn't forget her emotions but also isn't controlled by them, and who doesn't have the luxury of either being swept away by anger or reaching some zen state of unperturbed calm.

I think one key to how Jemisin pulls this off is the second-person perspective used for Essun's part of the book (and carried over into the other strand, which has the same narrator but a different perspective since this story is being told to Essun). That's another surprise, since normally this style strikes me as affected and artificial, but here it serves the vital purpose of giving the reader a bit of additional distance from Essun's emotions. Following an emotionally calmer retelling of someone else's perspective on Essun made it easier to admire what Jemisin is doing with the nuances of anger without getting too caught up in it.

It helps considerably that the second-person perspective here has a solid in-story justification (not explicitly explained here, but reasonably obvious by the end of the book), and is not simply a gimmick. The answers to who is telling this story and why they're telling it to a protagonist inside the story are important, intriguing, and relevant.

This series is doing something very special, and I'm glad I stuck to it through the confusing and difficult parts in the first book. There's a reason why every book in it was nominated for the Hugo and The Obelisk Gate won in 2017 (and The Fifth Season in 2016). Despite being the middle book of a trilogy, and therefore still leaving unresolved questions, this book was even better than The Fifth Season, which already set a high bar. This is very skillful and very original work and well worth the investment of time (and emotion).

Followed by The Stone Sky.

Rating: 9 out of 10

Planet Linux Australia: Simon Lyall: Audiobooks – May 2018

Ramble On by Sinclair McKay

The history of walking in Britain and some of the author’s experiences. A pleasant listen. 7/10

Inherit the Stars by James P. Hogan

Very hard-core Sci Fi (all tech, no character) about a 50,000 year old astronaut’s body being found on the moon. Dated in places (everybody smokes) but I liked it. 7/10

Sapiens: A Brief History of Humankind by Yuval Noah Harari

A good overview of pre-history of human species plus an overview of central features of cultures (government, religion, money, etc). Interesting throughout. 9/10

The Adventures of Sherlock Holmes II by Sir Arthur Conan Doyle, read by David Timson

Another four Holmes stories. I’m pretty happy with Timson’s version. Each is only about an hour long. 7/10

The Happy Traveler: Unpacking the Secrets of Better Vacations by Jaime Kurtz

Written by a “happiness researcher” rather than a travel expert. A bit different from what I expected. Lots about structuring your trips to maximize your memories. 7/10

Mrs. Kennedy and Me: An Intimate Memoir by Clint Hill with Lisa McCubbin

I’ve read several of Hill’s books of his time in the US Secret Service, this overlaps a lot of these but with some extra Jackie-orientated material. I’d recommend reading the others first. 7/10

The Lost Continent: Travels in Small Town America by Bill Bryson

The author drives through small-town American making funny observations. Just 3 hours long so good bang for buck. Almost 30 years old so feels a little dated. 7/10

A Splendid Exchange: How Trade Shaped the World by William J. Bernstein

A pretty good overview of the growth of trade. Concentrates on the evolution of  routes between Asia and Europe. Only brief coverage post-1945. 7/10

The Adventures of Sherlock Holmes III by Sir Arthur Conan Doyle

The Adventure of the Cardboard Box; The Musgrave Ritual; The Man with the Twisted Lip; The Adventure of the Blue Carbuncle. All well done. 7/10

The Gentle Giants of Ganymede (Giants Series, Book 2) by James P. Hogan

Almost as hard-core as the previous book but with less of a central mystery. Worth reading if you like the 1st in the series. 7/10

An Army at Dawn: The War in North Africa, 1942-1943 – The Liberation Trilogy, Book 1 by Rick Atkinson

I didn’t like this as much as I expected or as much as similar books. Can’t quite place the problem though. Perhaps works better when written. 7/10

The Adventures of Sherlock Holmes IV by Sir Arthur Conan Doyle

A Case of Identity; The Crooked Man; The Naval Treaty; The Greek Interpreter. I’m happy with Timson’s version . 7/10


Planet Linux Australia: Michael Still: Quick note: pre-pulling docker images for ONAP OOM installs


Writing this down here because it took me a while to figure out for myself…

ONAP OOM deploys ONAP using Kubernetes, which effectively means Docker images at the moment. It needs to fetch a lot of Docker images, so there is a convenient script provided to pre-pull those images to make install faster and more reliable.

The script in the OOM codebase isn’t very flexible, so Jira issue OOM-655 was filed for a better script. The script was covered in code review 30169. Disappointingly, the code reviewer there doesn’t seem to have actually read the jira issue or the code before abandoning the patch — which isn’t very impressive.

So how do you get the nicer pre-pull script?

It’s actually not too hard once you know the review ID. Just do this inside your OOM git clone:

$ git review -d 30169

You might be prompted for your gerrit details because the ONAP gerrit requires login. Once git review has run, you’ll be left sitting in a branch from when the review was uploaded that includes the script:

$ git branch
  master
* review/james_forsyth/30169

Now just rebase that to bring it in line with master and get on with your life:

$ git rebase -i origin
Successfully rebased and updated refs/heads/review/james_forsyth/30169.

You’re welcome. I’d like to see the ONAP community take code reviews a bit more seriously, but ONAP seems super corporate (even compared to OpenStack), so I’m not surprised that they haven’t done a very good job here.


The post Quick note: pre-pulling docker images for ONAP OOM installs appeared first on Made by Mikal.

Planet Debian: Norbert Preining: Hyper Natural Deduction

After quite some years of research, the paper on Hyper Natural Deduction by my colleague Arnold Beckmann and me has finally been published in the Journal of Logic and Computation. This paper was the difficult but necessary first step in our program to develop a Curry-Howard style correspondence between standard Gödel logic (and its Hypersequent calculus) and some kind of parallel computations.

The results of this article were first announced at the LICS (Logic in Computer Science) conference in 2015, but the current version is much more intuitive due to a switch to an inductive definition, the use of a graph representation for proofs, and finally a fix for a serious error. The abstract of the current article reads:

We introduce a system of Hyper Natural Deduction for Gödel Logic as an extension of Gentzen’s system of Natural Deduction. A deduction in this system consists of a finite set of derivations which uses the typical rules of Natural Deduction, plus additional rules providing means for communication between derivations. We show that our system is sound and complete for infinite-valued propositional Gödel Logic, by giving translations to and from Avron’s Hypersequent Calculus. We provide conversions for normalization extending usual conversions for Natural Deduction and prove the existence of normal forms for Hyper Natural Deduction for Gödel Logic. We show that normal deductions satisfy the subformula property.

The article (preprint version) by itself is rather long (around 70 pages including the technical appendix), but for those interested, the first 20 pages give a nice introduction and the inductive definition of our system, which suffices for building upon this work. The rest of the paper is dedicated to an extensional definition – not an inductive definition, but one via clearly defined properties of the final object – and the proof of normalization.

Starting point of our investigations was Arnon Avron‘s comments on parallel computations and communication when he introduced the Hypersequent calculus (Hypersequents, Logical Consequence and Intermediate Logics for Concurrency, Ann.Math.Art.Int. 4 (1991) 225-248):

The second, deeper objective of this paper is to contribute towards a better understanding of the notion of logical consequence in general, and especially its possible relations with parallel computations.

We believe that these logics […] could serve as bases for parallel λ-calculi.

The name “communication rule” hints, of course, at a certain intuitive interpretation that we have of it as corresponding to the idea of exchanging information between two multiprocesses: […]

In working towards a Curry-Howard (CH) correspondence between Gödel logics and some kind of process calculus, we are guided by the original path, as laid out in the above graphics: starting from Intuitionistic Logic (IL) and its sequent calculus (LJ), a natural deduction system (ND) provided the link to the λ-calculus. We started from Gödel logics (GL) and its Hypersequent calculus (HLK) and in this article developed a Hyper Natural Deduction with properties similar to those of the original Natural Deduction system.

Curry-Howard correspondences provide deep conceptual links between formal proofs and computational programs. A whole range of such CH correspondences have been identified and explored. The most fundamental one is between the natural deduction proof formalism for intuitionistic logic, and a foundational programming language called lambda calculus. This CH correspondence interprets formulas in proofs as types in programs, proof transformations like cut-reduction to computation steps like beta-reduction in lambda calculus. These insights have led to applications of logical tools to programming language technology, and the development of programming languages like CAML and of proof assistants like Coq.

CH correspondences are well established for sequential programming, but are far less clear for parallel programming. Current approaches to establish such links for parallel programming always start from established models of parallel programming like process algebra (CSP, CCS, pi-calculus) and define a related logical formalism. For example the linear logic proof formalism is inspired by the pi-calculus process algebra. Although some links between linear logic and pi-calculus have been established, a deep, inspiring connection is missing. Another problem is that logical formalisms established in this way often lack a clear semantics which is independent of the related computational model on which they are based. Thus, despite years of intense research on this topic, we are far from having a clear and useful answer which leads to strong applications in programming language technology as we have seen for the fundamental CH correspondence for sequential programming.

Although it was a long and tiring path to the current status, it is only the beginning.


Planet Debian: Markus Koschany: My Free Software Activities in May 2018

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java

Debian LTS

This was my twenty-seventh month as a paid contributor and I have been paid to work 24.25 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • From 21.05.2018 until 27.05.2018 I was in charge of our LTS frontdesk. I investigated and triaged CVE in glusterfs, tomcat7, zookeeper, imagemagick, strongswan, radare2, batik, mupdf and graphicsmagick.
  • I drafted an announcement for Wheezy’s EOL that was later released as DLA-1393-1 and as an official Debian news item.
  • DLA-1384-1. I reviewed and uploaded xdg-utils for Abhijith PA.
  • DLA-1381-1. Issued a security update for imagemagick/Wheezy fixing 3 CVE.
  • DLA-1385-1. Issued a security update for batik/Wheezy fixing 1 CVE.
  • Prepared a backport of Tomcat 7.0.88 for Jessie which fixes all open CVE (5) in Jessie. From now on we intend to provide the latest upstream releases for a specific Tomcat branch. We hope this will improve the user experience. It also allows Debian users to get more help from Tomcat developers directly because there is no significant Debian specific delta anymore. The update is pending review by the security team.
  • Prepared a security update for graphicsmagick fixing 19 CVE. I also investigated CVE-2017-10794 and CVE-2017-17913 and came to the conclusion that the Jessie version is not affected. I merged and reviewed another update by László Böszörményi. At the moment the update is pending review by the security team. Together these updates will fix the most important issues in Graphicsmagick/Jessie.
  • DSA-4214-1. Prepared a security update for zookeeper fixing 1 CVE.
  • DSA-4215-1. Prepared a security update for batik/Jessie fixing 1 CVE.
  • Prepared a security update for memcached in Jessie and Stretch fixing 2 CVE. This update is also pending review by the security team.
  • Finished the security update for JRuby (Jessie and Stretch) fixing 5 respectively 7 CVE. However we discovered that JRuby fails to build from source in Jessie and a fix or workaround will most likely break reverse-dependencies. Thus we have decided to mark JRuby as end-of-life in Jessie also because the version is already eight years old.

Misc

  • I reviewed and sponsored xtrkcad for Jörg Frings-Fürst.

Thanks for reading and see you next time.

Planet Debian: Raphaël Hertzog: My Free Software Activities in May 2018

My monthly report covers a large part of what I have been doing in the free software world. I write it for my donors (thanks to them!) but also for the wider Debian community because it can give ideas to newcomers and it’s one of the best ways to find volunteers to work with me on projects that matter to me.

distro-tracker

With the disappearance of many alioth mailing lists, I took the time to finish proper support of a team email in distro-tracker. There’s no official documentation yet but it’s already used by a bunch of teams. If you look at the pkg-security team on tracker.debian.org, it has used “pkg-security” as its unique identifier and has thus inherited team+pkg-security@tracker.debian.org as an email address that can be used in the Maintainer field (and it can be used to communicate between all team subscribers that have the contact keyword enabled on their team subscription).

I also dealt with a few merge requests:

I also filed ticket #7283 on rt.debian.org to have local_part_suffix = “+” for tracker.debian.org’s exim config. This will let us bounce emails sent to invalid email addresses. Right now all emails are delivered in a Maildir, valid messages are processed and the rest is silently discarded. At the time of processing, it’s too late to send bounces back to the sender.

pkg-security team

This month my activity is limited to sponsorship of new packages:

  • grokevt_0.5.0-2.dsc fixing one RC bug (missing build-dep on python3-distutils)
  • dnsrecon_0.8.13-1.dsc (new upstream release)
  • recon-ng_4.9.3-1.dsc (new upstream release)
  • wifite_2.1.0-1.dsc (new upstream release)
  • aircrack-ng (add patch from upstream git)

I also interacted multiple times with Samuel Henrique who started to work on the Google Summer of Code porting Kali packages to Debian. He mainly worked on getting some overview of the work to do.

Misc Debian work

I reviewed multiple changes submitted by Hideki Yamane on debootstrap (on the debian-boot mailing list, and also in MR 2 and MR 3). I reviewed and merged some changes on live-boot too.

Extended LTS

I spent a good part of the month dealing with the setup of the Wheezy Extended LTS program. Given the lack of interest of the various Debian teams, it’s hosted on a Freexian server and not on any debian.org infrastructure. But the principle is basically the same as Debian LTS except that the package list is reduced to the set of packages used by Extended LTS sponsors. But the updates prepared in this project are freely available for all.

It’s not too late to join the program, you can always contact me at deblts@freexian.com with a source package list that you’d like to see supported and I’ll send you back an estimation of the cost.

Thanks to an initial contribution from Credativ, Emilio Pozuelo Monfort has prepared a merge request making it easy for third parties to host their own security tracker that piggy-backs on Debian’s one. For Extended LTS, we thus have our own tracker.

Thanks

See you next month for a new summary of my activities.


Planet Debian: Andrew Cater: Colour me untrusting

... but a leopard doesn't change its spots. My GitHub account - opened eight years ago and not used - is now deleted. amacater@github.com should not be associated with me in any way, shape or form from here on in.

Cryptogram: E-Mail Vulnerabilities and Disclosure

Last week, researchers disclosed vulnerabilities in a large number of encrypted e-mail clients: specifically, those that use OpenPGP and S/MIME, including Thunderbird and AppleMail. These are serious vulnerabilities: An attacker who can alter mail sent to a vulnerable client can trick that client into sending a copy of the plaintext to a web server controlled by that attacker. The story of these vulnerabilities and the tale of how they were disclosed illustrate some important lessons about security vulnerabilities in general and e-mail security in particular.

But first, if you use PGP or S/MIME to encrypt e-mail, you need to check the list on this page and see if you are vulnerable. If you are, check with the vendor to see if they've fixed the vulnerability. (Note that some early patches turned out not to fix the vulnerability.) If not, stop using the encrypted e-mail program entirely until it's fixed. Or, if you know how to do it, turn off your e-mail client's ability to process HTML e-mail or -- even better -- stop decrypting e-mails from within the client. There's even more complex advice for more sophisticated users, but if you're one of those, you don't need me to explain this to you.

Consider your encrypted e-mail insecure until this is fixed.

All software contains security vulnerabilities, and one of the primary ways we all improve our security is by researchers discovering those vulnerabilities and vendors patching them. It's a weird system: Corporate researchers are motivated by publicity, academic researchers by publication credentials, and just about everyone by individual fame and the small bug-bounties paid by some vendors.

Software vendors, on the other hand, are motivated to fix vulnerabilities by the threat of public disclosure. Without the threat of eventual publication, vendors are likely to ignore researchers and delay patching. This happened a lot in the 1990s, and even today, vendors often use legal tactics to try to block publication. It makes sense; they look bad when their products are pronounced insecure.

Over the past few years, researchers have started to choreograph vulnerability announcements to make a big press splash. Clever names -- the e-mail vulnerability is called "Efail" -- websites, and cute logos are now common. Key reporters are given advance information about the vulnerabilities. Sometimes advance teasers are released. Vendors are now part of this process, trying to announce their patches at the same time the vulnerabilities are announced.

This simultaneous announcement is best for security. While it's always possible that some organization -- either government or criminal -- has independently discovered and is using the vulnerability before the researchers go public, use of the vulnerability is essentially guaranteed after the announcement. The time period between announcement and patching is the most dangerous, and everyone except would-be attackers wants to minimize it.

Things get much more complicated when multiple vendors are involved. In this case, Efail isn't a vulnerability in a particular product; it's a vulnerability in a standard that is used in dozens of different products. As such, the researchers had to ensure both that everyone knew about the vulnerability in time to fix it and that no one leaked the vulnerability to the public during that time. As you can imagine, that's close to impossible.

Efail was discovered sometime last year, and the researchers alerted dozens of different companies between last October and March. Some companies took the news more seriously than others. Most patched. Amazingly, news about the vulnerability didn't leak until the day before the scheduled announcement date. Two days before the scheduled release, the researchers unveiled a teaser -- honestly, a really bad idea -- which resulted in details leaking.

After the leak, the Electronic Frontier Foundation posted a notice about the vulnerability without details. The organization has been criticized for its announcement, but I am hard-pressed to find fault with its advice. (Note: I am a board member at EFF.) Then, the researchers published -- and lots of press followed.

All of this speaks to the difficulty of coordinating vulnerability disclosure when it involves a large number of companies or -- even more problematic -- communities without clear ownership. And that's what we have with OpenPGP. It's even worse when the bug involves the interaction between different parts of a system. In this case, there's nothing wrong with PGP or S/MIME in and of themselves. Rather, the vulnerability occurs because of the way many e-mail programs handle encrypted e-mail. GnuPG, an implementation of OpenPGP, decided that the bug wasn't its fault and did nothing about it. This is arguably true, but irrelevant. They should fix it.

Expect more of these kinds of problems in the future. The Internet is shifting from a set of systems we deliberately use -- our phones and computers -- to a fully immersive Internet-of-things world that we live in 24/7. And like this e-mail vulnerability, vulnerabilities will emerge through the interactions of different systems. Sometimes it will be obvious who should fix the problem. Sometimes it won't be. Sometimes it'll be two secure systems that, when they interact in a particular way, cause an insecurity. In April, I wrote about a vulnerability that arose because Google and Netflix make different assumptions about e-mail addresses. I don't even know who to blame for that one.

It gets even worse. Our system of disclosure and patching assumes that vendors have the expertise and ability to patch their systems, but that simply isn't true for many of the embedded and low-cost Internet of things software packages. They're designed at a much lower cost, often by offshore teams that come together, create the software, and then disband; as a result, there simply isn't anyone left around to receive vulnerability alerts from researchers and write patches. Even worse, many of these devices aren't patchable at all. Right now, if you own a digital video recorder that's vulnerable to being recruited for a botnet -- remember Mirai from 2016? -- the only way to patch it is to throw it away and buy a new one.

Patching is starting to fail, which means that we're losing the best mechanism we have for improving software security at exactly the same time that software is gaining autonomy and physical agency. Many researchers and organizations, including myself, have proposed government regulations enforcing minimal security-standards for Internet-of-things devices, including standards around vulnerability disclosure and patching. This would be expensive, but it's hard to see any other viable alternative.

Getting back to e-mail, the truth is that it's incredibly difficult to secure well. Not because the cryptography is hard, but because we expect e-mail to do so many things. We use it for correspondence, for conversations, for scheduling, and for record-keeping. I regularly search my 20-year e-mail archive. The PGP and S/MIME security protocols are outdated, needlessly complicated and have been difficult to properly use the whole time. If we could start again, we would design something better and more user friendly, but the huge number of legacy applications that use the existing standards mean that we can't. I tell people that if they want to communicate securely with someone, to use one of the secure messaging systems: Signal, Off-the-Record, or -- if having one of those two on your system is itself suspicious -- WhatsApp. Of course they're not perfect, as last week's announcement of a vulnerability (patched within hours) in Signal illustrates. And they're not as flexible as e-mail, but that makes them easier to secure.

This essay previously appeared on Lawfare.com.

Worse Than Failure: CodeSOD: A/F Testing

A/B testing is a strange beast, to me. I understand the motivations, but to me, it smacks of "I don't know what the requirements should be, so I'll just randomly show users different versions of my software until something 'sticks'". Still, it's a standard practice in modern UI design.

What isn't standard is this little blob of code sent to us anonymously. It was found in a bit of code responsible for A/B testing.

    var getModalGreen = function() {
      d = Math.random() * 100;
      if ((d -= 99.5) < 0) return 1;
      return 2;
    };

You might suspect that this code controls the color of a modal dialog on the page. You'd be wrong. It controls which state of the A/B test this run should use, which has nothing to do with the color green or modal dialogs. Perhaps it started that way, but it isn't used that way. Documentation in code can quickly become outdated as the code changes, and this apparently extends to self documenting code.

The key logic of this is that 0.5% of the time, we want to go down the 2 path. You or I might do a check like Math.random() < 0.005. Perhaps, for "clarity" we might multiply the values by 100, maybe. What we wouldn't do is subtract 99.5. What we definitely wouldn't do is subtract using the assignment operator.
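For contrast, a minimal version of what this helper presumably intends, written here in Python purely for illustration, is a named constant and a one-line comparison:

# Minimal sketch of the intended behaviour: variant 2 for roughly 0.5% of calls.
import random

VARIANT_2_FRACTION = 0.005


def pick_ab_variant():
    """Return 1 most of the time, 2 for about 0.5% of calls."""
    return 2 if random.random() < VARIANT_2_FRACTION else 1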

You'll note that d isn't declared with a var or let keyword. JavaScript doesn't particularly care, but it does mean that if the containing scope declared a d variable, this would be touching that variable.

In fact, there did just so happen to be a global variable d, and many functions dropped values there, for no reason.

This A/B test gets a solid D-.


Planet DebianDaniel Pocock: Free software, GSoC and ham radio in Kosovo

After the excitement of OSCAL in Tirana, I travelled up to Prishtina, Kosovo, with some of Debian's new GSoC students. We don't always have so many students participating in the same location. Being able to meet with all of them for a coffee each morning gave some interesting insights into the challenges people face in these projects and things that communities can do to help new contributors.

On the evening of 23 May, I attended a meeting at the Prishtina hackerspace where a wide range of topics, including future events, were discussed. There are many people who would like to repeat the successful Mini DebConf and Fedora Women's Day events from 2017. A wiki page has been created for planning but no date has been confirmed yet.

On the following evening, 24 May, we had a joint meeting with SHRAK, the ham radio society of Kosovo, at the hackerspace. Acting director Vjollca Caka gave an introduction to the state of ham radio in the country and then we set up a joint demonstration using the equipment I brought for OSCAL.

On my final night in Prishtina, we had a small gathering for dinner: Debian's three GSoC students, Elena, Enkelena and Diellza, Renata Gegaj, who completed Outreachy with the GNOME community and Qendresa Hoti, one of the organizers of last year's very successful hackathon for women in Prizren.

Promoting free software at Doku:tech, Prishtina, 9-10 June 2018

One of the largest technology events in Kosovo, Doku:tech, will take place on 9-10 June. It is not too late for people from other free software communities to get involved, please contact the FLOSSK or Open Labs communities in the region if you have questions about how you can participate. A number of budget airlines, including WizzAir and Easyjet, now have regular flights to Kosovo and many larger free software organizations will consider requests for a travel grant.

Planet DebianRuss Allbery: rra-c-util 7.2

rra-c-util is my collection of portability and utility code that I reuse in all the C and Perl projects I maintain.

Most of the changes in this release are Autoconf macro improvements prompted by Julien ÉLIE. This release incorporates his work on RRA_PROG_PYTHON, RRA_LIB_PYTHON, and RRA_PYTHON_MODULE macros, which support both Python 2 and Python 3. It also improves the RRA_PROG_PERL macro to make PERL a substitution variable and to check that building against libperl actually works. Finally, RRA_LIB_BDB, RRA_LIB_OPENSSL, RRA_LIB_SASL, and RRA_LIB_ZLIB now check that the headers for the library are found as well as the library itself (based on Julien's work in INN).

The docs/urls test, which was always misnamed, is now style/obsolete-strings, since its role is to check for obsolete patterns in my code (old URLs, that sort of thing). It now checks for my old RRA_MAINTAINER_TESTS environment variable, which I replaced with the Perl Lancaster Consensus environment variables a long time ago.

This release also fixes a few more minor issues with test code and the script to update the version of all Perl modules in a package.

You can get the latest release from the rra-c-util distribution page.

Planet DebianRuss Allbery: wallet 1.4

wallet is a secret management system that I developed at Stanford, primarily to distribute keytab management. As mentioned in an earlier post, I'm not entirely sure it has significant advantages over Vault, but it does handle Kerberos natively and we're still using it for some things, so I'm still maintaining it.

This release incorporates a bunch of improvements to the experimental support for managing keytabs for Active Directory principals, all contributed by Bill MacAllister and Dropbox. Anyone using the previous experimental Active Directory support should read through the configuration options, since quite a lot has changed (for the better).

Also fixed in this release are some stray strlcpy and strlcat references that were breaking systems that include them in libc, better krb5.conf configuration handling, better support for Perl in non-standard locations, and a bunch of updates and modernization to the build and test frameworks.

You can get the latest release from the wallet distribution page.

Planet Linux AustraliaDavid Rowe: Rowetel Blog Post Archive

I’ve written so many blog posts in the last 12 years I can’t find them when I need them again. So here is an Archive page…


Planet Linux AustraliaDavid Rowe: Bench Testing HF Radios with a HackRF

This post describes how we implemented a HF channel simulator to bench test a digital HF radio using modern SDRs.

Yesterday Mark and I bench tested a HF radio with calibrated SNR over simulated AWGN and HF channels. We recorded the radio's transmit signal with an AirSpy HF and GQRX, added calibrated noise and “CCIR Poor” fading, and replayed the signal using a HackRF.

For the FreeDV 700C and 700D work I have developed a utility called cohpsk_ch that takes a real modem signal, adds channel impairments like noise and fading, and outputs another real signal. It has a built-in Hilbert Transformer so it can do complex math cleverness like small frequency shifts and ITU-T/CCIR HF fading channel models.

Set Up

The basic idea is to upconvert an 8 kHz real sample file to HF in real time. I have some utilities to help with this in codec2-dev:

$ svn co https://svn.code.sf.net/p/freetel/code/codec2-dev codec2-dev
$ cd codec2-dev/octave
$ octave --no-gui
octave:1> cohpsk_ch_fading("../raw/fast_fading_samples.float", 8000, 1.0, 8000*60)
octave:2> cohpsk_ch_fading("../raw/slow_fading_samples.float", 8000, 0.1, 8000*60)
$ exit
$ cd ..
$ cd codec2-dev && mkdir build_linux && cd build_linux
$ cmake -DCMAKE_BUILD_TYPE=Debug ..
$ make
$ cd unittest 

You also need GNU Octave to generate the HF fading files for cohpsk_ch, and you need to install the very useful CSDR tools.

Connect the HackRF to your SSB receiver; we put a 30dB attenuator in line. Tune the radio to 7.177 MHz LSB. First generate a carrier with your HackRF, offset so we get a 500Hz tone in the SSB radio in LSB mode:

$ hackrf_transfer -f 7176500 -s 8000000 -c 127

Now lets try some DSB audio:

$ cat ../../wav/ve9qrp.wav | csdr mono2stereo_s16 | ./tsrc - - 10 -c |
./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

Don’t change the frequency, but try switching the mode between USB and LSB. Should sound about the same, with a slight frequency offset due to the HackRF. Note that HackRF is tuned to Fs/4 = 2MHz beneath 7.177MHz. “tlininterp” has a simple Fs/4 mixer that we use to shift the signal away from the HackRF DC spike. We up-sample from 8 kHz to 8 MHz in two steps to save MIPs.

The “csdr mono2stereo_s16” just repeats the real output samples, so we get a DSB signal at HF. A bit lazy, I know; a better approach would be to modify cohpsk_ch to have a complex output option. Let me know if you want to modify cohpsk_ch – I can tell you how.

Checking Calibration

Now I’m pretty confident that cohpsk_ch works well at baseband on digital signals as I have used it extensively in my HF DV work. However I wanted to make sure the off air signal had the correct SNR.

To check the calibration, we generated a 1000 Hz sine wave Signal + Noise signal:

$ ./mksine - 1000 30  | ./../src/cohpsk_ch - - -30 --Fs 8000 --ssbfilt 0 | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 12177000 -s 8000000  -t - 2>/dev/null 

Then just a noise signal:

cat /dev/zero | ./../src/cohpsk_ch - - -30 --Fs 8000 --ssbfilt 0 | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

With moderate SNRs (say 10dB), Signal + Noise power is roughly Signal power. So I measured the off air power of the above signals using my FT817 connected to a USB sound card, and an Octave script:

$ rec -t raw -r 8000 -s -2 -c 1 - -q | octave --no-gui -qf power_from_stdio.m

I used alsamixer and the plots from the script to make sure I wasn’t overloading the ADC. You need to turn your receiver AGC OFF, and adjust RF/AF gain to get the levels right.

However from the FT817 I was getting results a few dB off due to the crystal filter bandwidth and non-rectangular shape factor. Mark hooked up his AirSpy HF and GQRX, and we piped the received audio over the LAN to the script:

nc -ul 7355 | octave --no-gui -qf power_from_stdio.m

GQRX had a nice flat response from a few 100 Hz to 3kHz, the same bandwidth cohpsk_ch uses for SNR measurement. OK, so now we had sensible numbers, within 0.2dB of the SNR reported by cohpsk_ch. We moved the levels up and down 3dB, made sure everything was repeatable and linear. We went down to 0dB, where signal and noise power is the same, and Signal+Noise power should be 3dB more than Noise alone. Check.
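
The 3dB check is just power arithmetic. A few lines of Python (purely illustrative, not part of the tool chain above) show the numbers we expected to see:

    import math

    def db_to_power(db):
        return 10 ** (db / 10.0)

    def power_to_db(power):
        return 10 * math.log10(power)

    noise = db_to_power(0)                        # reference noise power
    print(power_to_db(db_to_power(0) + noise))    # 0dB SNR: S+N comes out ~3.01dB above N alone
    print(power_to_db(db_to_power(10) + noise))   # 10dB SNR: S+N is ~10.41dB, i.e. roughly just S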

Tests

Then we could play the HF tx signal at a variety of SNRs by tweaking the third (No) argument. In this case we set No to -100dB, so no noise:

cat tx_file_from_radio.wav | ./../src/cohpsk_ch - - -100 --Fs 8000 --ssbfilt 0 | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

At the end of the cohpsk_ch run, it will print the SNR it has measured. So you read that and tweak No as required to get the SNR you need. In our case around -30 was 8dB SNR. You can also add fast (–fast) or slow (–slow) fading; here is a fast fading run at about 2dB SNR:

cat tx_file_from_radio.wav | ./../src/cohpsk_ch - - -24 --Fs 8000 --ssbfilt 0 --fast | csdr mono2stereo_s16 | ./tsrc - - 10 -c | ./tlininterp - - 100 -df | hackrf_transfer -f 5177000 -s 8000000  -t - 2>/dev/null

The “–ssbfilt 0” option switches off the 300-2600 Hz filter inside cohpsk_ch that is used to simulate a SSB radio crystal filter. For our tests, the modem waveform was too wide for that filter.

Thoughts

I guess we could also have used the HackRF to sample the signal. The nice thing about SDRs is that the frequency response is “flat”, with no crystal filters messing things up.

The only thing we weren’t sure about was the sample rate and frequency offset accuracy of the HackRF, for example if the sample clock was a bit off that might upset modems.

The radio we tested delivered performance pretty much on its data sheet at the SNRs tested, giving us extra confidence in the bench testing system described here.

Reading Further

Measuring SDR Noise Figure in Real Time
High Speed Balloon Data Link, here we bench test UHF FSK data radios
README_ofdm.txt, Lots of examples of using cohpsk_ch to test the latest and greatest OFDM modem.
PathSim is a very nice Windows GUI HF path simulator, that runs well on Linux using Wine.

Planet DebianRuss Allbery: Free software log (May 2018)

The wonders of a week of vacation that was spent mostly working on free software! The headline releases were remctl 3.15, which fixes a long-standing correctness bug on the server and adds more protocol validation and far better valgrind support, and podlators 4.11, which fixes a bunch of long-standing bugs in Pod::Text and its subclasses.

In support of those releases, I also released new versions of my three major development infrastructure packages:

On the Debian front, I realized that I had intended to donate libnet-duo-perl to the Debian Perl team but never finished uploading the package I had prepared (and even signed). I merged that with some other pending changes in Git and actually uploaded it. (I'm still hanging on to maintenance of the upstream Net::Duo Perl module because I'm kicking around the idea of using Duo on a small scale for some personal stuff, although at the moment I'm not using the module at all and therefore am not making changes to it.)

I also finally started working on wallet again, although I'm of two minds about the future of that package. It needs a ton of work — the Perl style and general backend approach is all wrong, and I've learned far better ways to do equivalent things since. And one could make a pretty solid argument that Vault does essentially the same thing, has a lot more resources behind it, and has a ton of features that I haven't implemented or may never implement. I think I still like my ACL model better, and of course there's the Kerberos support (which is probably superior to Vault), but I haven't looked at Vault closely enough to be sure and it may be that it's better in those areas as well.

I don't use wallet for my personal stuff, but we still do use it in a few places at work. I kind of want to overhaul the package and fix it, since I like the concept, but in the broader scheme of things it's probably a "waste" of my time to do this.

Free software seems full of challenges like this. I'll at least put out another release, and then probably defer making a decision for a while longer.

Planet Linux AustraliaLev Lafayette: Installation of MrTrix 3.0_RC2 on HPC Systems

MrTrix is "a set of tools to perform various types of diffusion MRI analyses, from various forms of tractography through to next-generation group-level analyses". It is mostly designed with post-processing visualisation in mind, but for intensive computational tasks it can make use of high-performance computing systems. It is not designed with messing-passing in mind, but it can be useful for job arrays.

Download the tarball from github and extract. Curiously, MrTrix has had version inflation, moving from 0.x versions to 3.x versions. One is nevertheless thankful that they use conventional versioning numbers at all, given how many software projects don't bother these days (every commit is a version, right?).

MrTrix has a number of dependencies, and in this particular example Eigen/3.2.9, Python/3.5.2, and GMP/6.1.1 are included in the environment. The build system block is a "ConfigureMakePythonPackage", to use the Easybuild vernacular. This means building a Python package and module with python configure/make/make install. The configuration option configure -nogui is recommended; if not, start adding the appropriate dependencies to the installation.

Now one of the annoying things about this use of the Python ConfigureMake build block is that prefix commands typical in standard autotools are absent. Thus one must add these after installation. As one of their developers has said "our configure script, while written in python, is completely specific to MRtrix3 – there’s no way you could possibly have come across anything like it before."

As usual, HPC systems (and development environments) find it very useful to have multiple versions of the same software available. Thus, create an appropriate directory for the software to live in (e.g., mkdir -p /usr/local/MRtrix/3-3.0_RC3).

Following this, the MRtrix software will be built in the source directory, which is, again, less than ideal. Separation between source, build, and install directories would be a useful improvement. However, these can be copied over to the preferred directories.

cp -r bin/ lib/ share/ docs/ /usr/local/MRtrix/3-3.0_RC3

Copying over the docs directory is particularly important, as it provides RST files of core concepts and examples. It is essential that these are provided in a manner that is readable by users on the system they are using, without context-switching and in their immediate environment (external sources may not be available). Others have expressed disagreement, but it is fairly obvious that they are not speaking from a position of familiarity with such environments.

The following is a sample EasyBuild script for MRtrix (MRtrix-3.0_RC2-GCC-6.2.0-Python-3.5.2.eb).

easyblock = 'ConfigureMakePythonPackage'
name = 'MRtrix'
version = '3.0_RC2'
homepage = 'http://www.brain.org.au/software/index.html#mrtrix'
description = """MRtrix provides a set of tools to perform diffusion-weighted MR white-matter tractography in a manner robust to crossing fibres, using constrained spherical deconvolution (CSD) and probabilistic streamlines."""
toolchain = {'name': 'GCC', 'version': '6.2.0'}
toolchainopts = {'cstd': 'c++11'}
configopts = ['configure -nogui']
buildcmd = ['build']
source_urls = ['https://github.com/MRtrix3/mrtrix3/archive/']
sources = ['%(version)s.tar.gz']
checksums = ['88187f3498f4ee215b2a51d16acb7f2e6c33217e72403a7d48c2ec5da6e2218b']
dependencies = [
('Eigen', '3.2.9'),
('Python', '3.5.2'),
('GMP', '6.1.1'),
]
moduleclass = 'bio'

Planet DebianDavid Kalnischkies: APT for package self-builders

One of the main jobs of a package manager like apt is to download packages (ideally in a secure way) from a repository so that they can be processed further – usually installed. FSVO "normal user" this is all there ever is to it in terms of getting packages.

Package maintainers and other users rolling their own binary packages on the other hand tend to have the packages they want to install and/or play-test with already on their disk. For them, it seems like an additional hassle to push their packages to a (temporary) repository, so apt can download data from there again… for the love of supercow, there must be a better way… right?

For the sake of a common start lets say I want to modify (and later upload) hello, so I acquire the source via apt source hello. Friendly as apt is it ran dpkg-source for me already, so I have (at the time of writing) the files hello_2.10.orig.tar.gz, hello_2.10-1.debian.tar.xz and hello_2.10-1.dsc in my working directory as well as the extracted tarballs in the subdirectory hello-2.10.

Anything slightly more complex than hello probably has a bunch of build-dependencies, so what I should do next is install build-dependencies: Everyone knows apt build-dep hello and that works in this case, but given that you have a dsc file we could just as well use that and free us from our reliance on the online repository: apt build-dep ./hello_2.10-1.dsc. We still depend on having a source package built previously this way… but wait! We have the source tree and this includes the debian/control file so… apt build-dep ./hello-2.10 – the latter is especially handy if you happen to add additional build-dependencies while hacking on your hello.

So now that we can build the package have fun hacking on it! You probably have your preferred way of building packages, but for simplicity lets just continue using apt for now: apt source hello -b. If all worked out well we should have now (if you are on a amd64 machine) also a hello_2.10-1_amd64.changes file as well as two binary packages named hello_2.10-1_amd64.deb and hello-dbgsym_2.10-1_amd64.deb (you will also get a hello_2.10-1_amd64.buildinfo which you can hang onto, but apt has currently no way of making use of it, so I ignore it for the moment).

Everyone should know by now that you can install a deb via apt install ./hello_2.10-1_amd64.deb but that quickly gets boring with increasing numbers, especially if the packages you want to install have tight relations. So feel free to install all debs included in a changes file with apt install ./hello_2.10-1_amd64.changes.

So far so good, but all might be a bit much. What about installing only some debs from a changes file? Here it gets interesting as if you play your cards right you can test upgrades this way as well. So let's add a temporary source of metadata (and packages) – but before you get your preferred repository builder setup and your text editor ready: You just have to add an option to your apt call. Coming back to our last example of installing packages via a changes file, let's say we just want to install hello and not hello-dbgsym: apt install --with-source ./hello_2.10-1_amd64.changes hello.

That will install hello just fine, but if you happen to have hello installed already… apt is going to tell you it has already the latest version installed. You can look at this situation e.g. with apt policy --with-source ./hello_2.10-1_amd64.changes hello. See, the Debian repository ships a binary-only rebuild as 2.10-1+b1 at the moment, which is a higher version than the one we have locally built. Your usual apt-knowledge will tell you that you can force apt to install your hello with apt install --with-source ./hello_2.10-1_amd64.changes hello=2.10-1 but that isn't why I went down this path: As you have seen now metadata inserted via --with-source participates as usual in the candidate selection process, so you can actually perform upgrade tests this way: apt upgrade --with-source ./hello_2.10-1_amd64.changes (or full-upgrade).

The hello example reaches its limits here, but if you consider time travel a possibility we will jump back into a time in which hello-debhelper existed. To be exact: Right to the moment its maintainer wanted to rename hello-debhelper to hello. Most people consider package renames hard. You need to get file overrides and maintainer scripts just right, but at least with figuring out the right dependency relations apt can help you a bit. How you can feed in changes files we have already seen, so let's imagine you deal with multiple packages from different sources – or just want to iterate quickly! In that case you want to create a Packages file which you would normally find in a repository. You can write those by hand of course, but it's probably easier to just call dpkg-scanpackages . > Packages (if you have dpkg-dev installed) or apt-ftparchive packages . > Packages (available via apt-utils) – they behave slightly differently, but for our purposes it's all the same. Either way, ending up with a Packages file nets you another file you can feed to --with-source (sorry, you can't install a Packages file). This also allows you to edit the dependency relations of multiple packages in a single file without constant "fiddle and build" loops of the included packages – just make sure to run as non-root & in simulation mode (-s) only or you will make dpkg (and in turn apt) very sad.

Of course upgrade testing is only complete if you can influence what is installed on your system before you try to upgrade easily. You can with apt install --with-source ./Packages hello=2.10-1 -s -o Dir::state::status=/dev/null (it will look like nothing is installed) or feed a self-crafted file (or some compressed /var/backups/dpkg.status file from days past), but to be fair that gets a bit fiddly, so at some point its probably easier to write an integration test for apt which are just little shellscript in which (nearly) everything is possible, but that might be the topic of another post some day.

Q: How long do I have to wait to use this?

A: I think I have implemented the later parts of this in the 1.3 series. Earlier parts have been in apt starting with 1.0. Debian stable (stretch) has the 1.4 series, so… you can use it now. Otherwise use your preferred package manager to upgrade your system to the latest stable release. I hope it is clear which package manager that should be… 😉︎

Q: Does this only work with apt?

A: This works just the same with apt-cache (where the --with-source option is documented in the manpage btw) and apt-get. Everything else using libapt (so aptitude included) does not at the moment, but potentially can and probably will in the future. If you feel like typing a little bit more you can at least replicate the --with-source examples by using the underlying generic option: aptitude install -s hello-dbgsym -o APT::Sources::With::=./hello_2.10-1_amd64.changes (That is all you really need anyhow, the rest is syntactic sugar). Before you start running off to report bugs: Check before reporting duplicates (and don't forget to attach patches)!

Q: Why are you always typing ./packages.deb?

A: With the --with-source option the ./ is not needed actually, but for consistency I wrote it everywhere. In the first examples we need it as apt needs to know somehow if the string it sees here is a package name, a glob, a regex, a task, … or a filename. The string "package.deb" could be a regex after all. And any string could be a directory name… Combine this with picking up files and directories in the current directory and you would have a potential security risk looming here if you start apt in /tmp (No worries, we hadn't realized this from the start either).

Q: But, but, but … security anyone?!?

The files are on your disk and apt expects that you have verified that they aren't some system-devouring malware. How should apt verify that, after all, when there is no trust path? So don't think that downloading a random deb suddenly became a safe thing to do because you used apt instead of dpkg -i. If the dsc or changes files you use are signed and you verified them, you can rest assured that apt is verifying that the hashes mentioned in those files apply to the files they index. Doesn't help you at all if the files are unsigned or other users are able to modify the files after you verified them, but apt will check hashes in those cases anyhow.

Q: I ❤︎ u, 🍑︎ tl;dr

Just 🏃︎ those, you might 😍︎ some of them:

apt source hello
apt build-dep ./hello-*/ -s
apt source -b hello
apt install ./hello_*.deb -s
apt install ./hello_*.changes -s
apt install --with-source ./hello_*.changes hello -s
apt-ftparchive packages . > ./Packages
apt upgrade --with-source ./Packages -s

P.S.: If you have expected this post to be published sometime inbetween the last two months… welcome to the club! I thought I would do it, too. Lets see how long I will need for the next one… I have it partly written already, but that was the case for this one as well… we will see.

Planet DebianMichael Stapelberg: Looking for a new Raspberry Pi image maintainer

(Cross-posting this message I sent to pkg-raspi-maintainers for broader visibility.)

I started building Raspberry Pi images because I thought there should be an easy, official way to install Debian on the Raspberry Pi.

I still believe that, but I’m not actually using Debian on any of my Raspberry Pis anymore¹, so my personal motivation to do any work on the images is gone.

On top of that, I realize that my commitments exceed my spare time capacity, so I need to get rid of responsibilities.

Therefore, I’m looking for someone to take up maintainership of the Raspberry Pi images. Numerous people have reached out to me with thank you notes and questions, so I think the user interest is there. Also, I’ll be happy to answer any questions that you might have and that I can easily answer. Please reply here (or in private) if you’re interested.

If I can’t find someone within the next 7 days, I’ll put up an announcement message in the raspi3-image-spec README, wiki page, and my blog posts, stating that the image is unmaintained and looking for a new maintainer.

Thanks for your understanding,

① just in case you’re curious, I’m now running cross-compiled Go programs directly under a Linux kernel and minimal userland, see https://gokrazy.org/

Planet DebianLouis-Philippe Véronneau: Let's migrate away from GitHub

As many of you heard today, Microsoft is acquiring GitHub. What this means for the future of GitHub is not yet clear, but the folks at Gitlab think Microsoft's end goal is to integrate GitHub in their Azure empire. To me, this makes a lot of sense.

Even though I still reluctantly use GitHub for some projects, I migrated all my personal repositories to Gitlab instances a while ago1. Now is time for you to do the same and ditch GitHub.

Microsoft loven't Linux

Some people might be fine with Microsoft's takeover, but to me it's the straw that breaks the camel's back. For a few years now, MS has been running a large marketing campaign on how they love Linux and suddenly decided to embrace Free Software in all of its forms. More like MS BS to me.

Let us take a moment to remind ourselves that:

  • Windows is still a huge proprietary monster that rips billions of people from their privacy and rights every day.
  • Microsoft is known for spreading FUD about "the dangers" of Free Software in order to keep governments and schools from dropping Windows in favor of FOSS.
  • To secure their monopoly, Microsoft hooks up kids on Windows by giving out "free" licences to primary schools around the world. Drug dealers use the same tactics and give out free samples to secure new clients.
  • Microsoft's Azure platform - even though it can run Linux VMs - is still a giant proprietary hypervisor.

I know moving git repositories around can seem like a pain in the ass, but the folks at Gitlab are riding the wave of people leaving GitHub and made the migration easy by providing a GitHub importer.

If you don't want to use Gitlab's main instance (gitlab.org), here are two other alternative instances you can use for Free Software projects:

Friends don't let friends use GitHub anymore.


  1. Gitlab is pretty good, but it should not be viewed as a panacea: it's still an open-core product made by a for-profit enterprise that could one day be sold to a large corp like Oracle or Microsoft. 

  2. See the Salsa FAQ for more details. 


Planet DebianElana Hashman: I'm hosting a small Debian BSP in Brooklyn

The time has come for NYC Debian folks to gather. I've bravely volunteered to host a local bug squashing party (or BSP) in late June.

Details

  • Venue: 61 Local: 61 Bergen St., Brooklyn, NY, USA
  • More about the venue: website, good vegetarian options available
  • Date: Sunday, June 24, 2018
  • Start: 3pm
  • End: 8pm or so
  • RSVP: Please RSVP! Click here

I'm an existing contributor, what should I work on?

The focus of this BSP is to give existing contributors some dedicated time to work on their projects. I don't have a specific outcome in mind. I do not plan on tagging bugs specifically for the BSP, but that shouldn't stop you from doing so if you want to.

Personally, I am going to spend some time on fixing the alternatives logic in the clojure and clojure1.8 packages.

If you don't really have a project you want to work on, but you're interested in helping mentor new contributors that show up, please get in touch.

I'm a new contributor and want to join but I have no idea what I'm doing!

At some point, that was all of us!

Even though this BSP is aimed at existing contributors, you are welcome to attend! We'll have a dedicated mentor available to coordinate and help out new contributors.

If you've never contributed to Debian before, I recommend you check out "How can you help Debian?" and the beginner's HOWTO for BSPs in advance of the BSP. I also wrote a tutorial and blog post on packaging that might help. Remember, you don't have to code or build packages to make valuable contributions!

See you there!

Happy hacking.

Planet DebianHolger Levsen: 20180602-lts-201805

My LTS work in May 2018

Organizing the MiniDebConf 2018 in Hamburg definitely took more time than planned, and then some things didn't work out as I had imagined, so I could only start working on LTS at the end of May, and then there was this Alioth2Salsa migration too… But at least I managed to get started working on LTS again \o/

I managed to spend 6.5h working on:

  • reviewing the list of open CVEs against tiff and tiff3 in wheezy
  • prepare tiff 4.0.2-6+debu21, test and upload to wheezy-security, fixing CVE-2017-11613 and CVE-2018-5784.

  • review procps 1:3.3.3-3+deb7u1 by Abhijith PA, spot an error, re-review, quick test and upload to wheezy-security, then re-upload after building with -sa :) This upload fixes CVE-2018-1122 CVE-2018-1123 CVE-2018-1124 CVE-2018-1125 and CVE-2018-1126.

  • write and release DLA-1390-1 and DLA-1301 for those two uploads.

I still need to mark CVE-2017-9815 as fixed in wheezy, as the fix for CVE-2017-9403 also fixes this issue.

Planet DebianSylvain Beucler: Reproducible Windows builds

I'm working again on making reproducible .exe-s. I thought I'd share my process:

Pros:

  • End users get a bit-for-bit reproducible .exe, known not to contain trojans and auditable from source
  • Point releases can reuse the exact same build process and avoid introducing bugs

Steps:

  • Generate a source tarball (non reproducibly)
  • Debian Docker as a base, with fixed version + snapshot.debian.org sources.list
    • Dockerfile: install packaged dependencies and MXE(.cc) from a fixed Git revision
    • Dockerfile: compile MXE with SOURCE_DATE_EPOCH + fix-ups
  • Build my project in the container with SOURCE_DATE_EPOCH and check SHA256
  • Copy-on-release

Result:

git.savannah.gnu.org/gitweb/?p=freedink/dfarc.git;a=tree;f=autobuild/dfarc-w32-snapshot

Generate a source tarball (non reproducibly)

This is not reproducible due to using non-reproducible tools (gettext, automake tarballs, etc.) but it doesn't matter: only building from source needs to be reproducible, and the source is the tarball.

It would be better if the source tarball were perfectly reproducible, especially for large generated content (./configure, wxGlade-generated GUI source code...), but that can be a second step.

Debian Docker as a base

AFAIU the Debian Docker images are made by Debian developers but are in no way official images. That's a pity, and to be 100% safe I should start anew from debootstrap, but Docker is providing a very efficient framework to build images, notably with caching of every build step, immediate fresh containers, and a public image repository.

This means with a single:

sudo -g docker make

you get my project reproducibly built from scratch with nothing to setup at all.

I avoid using a :latest tag, since it will change, and also backports, since they can be updated anytime. Here I'm using stretch:9.4 and no backports.

Using snapshot.debian.org in sources.list makes sure the installed packaged dependencies won't change at next build. For a dot release however (not for a rebuild), they should be updated in case there was a security fix that has an effect on built software (rare, but exists).

Last but not least, APT::Install-Recommends "false"; for better dependency control.

MXE

mxe.cc is a compilation environment to get MinGW (GCC for Windows) and selected dependencies rebuilt unattended with a single make. Doing this manually would be tedious because every other day, upstream breaks MinGW cross-compilation, and debugging an hour-long build process takes ages. Been there, done that.

MXE has a reproducible-boosted binutils with a patch for SOURCE_DATE_EPOCH that avoids getting date-based and/or random build timestamps in the PE (.exe/.dll) files. It's also compiled with --enable-deterministic-archives to avoid timestamp issues in .a files (but no automatic ordering).

I set SOURCE_DATE_EPOCH to the fixed Git commit date and I run MXE's build.

This does not apply to GCC however, so I needed to e.g. patch a __DATE__ in wxWidgets.

In addition, libstdc++.a has a file ordering issue (said ordering surprisingly stays stable between a container and a host build, but varies when using a different computer with the same distros and tools versions). I hence re-archive libstdc++.a manually.

It's worth noting that PE files don't have issues with build paths (and varying BuildID-s - unlike ELF... T_T).

Again, for a dot release, it makes sense to update the MXE Git revision so as to catch security fixes, but at least I have the choice.

Build project

With this I can start a fresh Docker container and run the compilation process inside, as a non-privileged user just in case.

I set SOURCE_DATE_EPOCH to the release date at 00:00UTC, or the Git revision date for snapshots.

This rebuild framework is excluded from the source tarball, so the latter stays stable during build tuning. I see it as a post-release tool, hence not part of the release (just like distros packaging).

The generated .exe is statically compiled which helps getting a stable result (only the few needed parts of dependencies get included in the final executable).

Since MXE is not itself reproducible, differences may come from MXE itself, which may need fixes as explained above. This is annoying and hopefully will be easier once they ship GCC6. To debug I unzip the different .zip-s, upx -d my .exe-s, and run diffoscope.

I use various tricks (stable ordering, stable timestamping, metadata cleaning) to make the final .zip reproducible as well. Post-processing tools would be an alternative if they were fixed.
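
To give an idea of the kind of tricks involved, here is a minimal Python sketch of building a zip with a fixed timestamp, sorted member order and constant permissions. It is one way to do it under those assumptions, not the exact script used for these builds, and the paths are hypothetical:

    import os
    import zipfile

    def deterministic_zip(output, root):
        """Zip 'root' with sorted member order, a fixed timestamp and fixed permissions."""
        paths = []
        for dirpath, dirnames, filenames in os.walk(root):
            dirnames.sort()                       # stable directory traversal
            paths.extend(os.path.join(dirpath, f) for f in filenames)
        with zipfile.ZipFile(output, "w") as zf:
            for path in sorted(paths):            # stable member ordering
                info = zipfile.ZipInfo(os.path.relpath(path, root),
                                       date_time=(1980, 1, 1, 0, 0, 0))
                info.compress_type = zipfile.ZIP_DEFLATED
                info.external_attr = 0o644 << 16  # constant permissions, no umask leakage
                with open(path, "rb") as fh:
                    zf.writestr(info, fh.read())

    deterministic_zip("dfarc-w32.zip", "dist/")   # hypothetical output and input paths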

reprotest

Any process is moot if it can't be tested.

reprotest helps by running 2 successive compilations with varying factors (build path, file system ordering, etc.), and checks that we get the exact same binary. As a trade-off, I don't run it on the full build environment, just on the project itself. I plugged reprotest into the Docker container by running an sshd on the fly. I have another Makefile target to run reprotest in my host system where I also installed MXE, so I can compare results and sometimes find differences (e.g. due to using a different filesystem). In addition, the host setup is faster for debugging, since changing anything in the early Dockerfile steps means a full 1h rebuild.

Copy-on-release

At release time I make a copy of the directory that contains all the self-contained build scripts and the Dockerfile, and rename it after the new release version. I'll continue improving upon the reproducible build system in the 'snapshot' directory, but the versioned directory will stay as-is and can be used in the future to get the same bit-for-bit identical .exe anytime.

This is the technique I used in my Android Rebuilds project.

Other platforms

For now I don't control the build process for other platforms: distros have their own autobuilders, so does F-Droid. Their problem :P

I have plans to make reproducible GNU/Linux AppImage-based builds in the future though. I should be able to use a finer-grained, per-dependency process rather than the huge MXE-based chunk I currently do.

I hope this helps other projects provide reproducible binaries directly! Comments/suggestions welcome.

Don MartiRon Estes, US Congress

If Ron Estes, running for US Congress was a candidate with the same name as a well-known Democratic Party politician, clearly the right-wing pranksters of the USA would give him a bunch of inbound links just for lulz, and to force the better-known politician to spend money on SEO of his own.

But he's not, so people will probably just tweet about the election and stuff.

Don MartiOpting into European mode

Trans Europa Express was covered on ghacks.net. This is an experimental Firefox extension that tries to get web sites to give you European-level privacy rights, even if the site classifies you as non-European.

Since the version they mentioned, I have updated it with a few new features.

Anyway, check it out. Seems to have actual users now, so I've got that going for me. But lots of secret European mode switches still remain unactivated. If you see one, please make a new issue.

Planet DebianSteve Kemp: A brief metric-update, and notes on golang-specific metrics

My previous post briefly described the setup of system-metric collection. (At least the server-side setup required to receive the metrics submitted by various clients.)

When it came to the clients I was complaining that collectd was too heavyweight, as installing it pulled in a ton of packages. A kind twitter user pointed out that you can get most of the stuff you need via the use of the collectd-core package:

 # apt-get install collectd-core

I guess I should have known that! So for the moment that's what I'm using to submit metrics from my hosts. In the future I will spend more time investigating telegraf, and other "modern" solutions.

Still with collectd-core installed we've got the host-system metrics pretty well covered. Some other things I've put together also support metric-submission, so that's good.

I hacked up a quick package for automatically submitting metrics to a remote server, specifically for golang applications. To use it simply add an import to your golang application:

  import (
    ..
    _ "github.com/skx/golang-metrics"
    ..
  )

Add the import, rebuild your application, and that's it! Configuration is carried out solely via environment variables, and the only one you need to specify is the end-point for your metrics host:

$ METRICS=metrics.example.com:2003 ./foo

Now your application will be running as usual and will also be submitting metrics to your central host every 10 seconds or so. Metrics include the number of running goroutines, application-uptime, and memory/cpu stats.
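
Port 2003 is conventionally the Graphite/Carbon plaintext port, so I'm assuming that is what the metrics host speaks here. If so, you can sanity-check the endpoint from any machine with a few lines of Python (the host name is just the example value from above):

    import socket
    import time

    # Carbon plaintext protocol: "<metric.path> <value> <unix-timestamp>\n"
    line = "test.manual.ping 1 %d\n" % int(time.time())

    with socket.create_connection(("metrics.example.com", 2003), timeout=5) as sock:
        sock.sendall(line.encode("ascii"))

If the test value shows up on your central host, the collection side is fine and any remaining problem is on the application side.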

I've added a JSON-file to import as a grafana dashboard, and you can see an example of what it looks like there too.


CryptogramFriday Squid Blogging: Do Cephalopods Contain Alien DNA?

Maybe not DNA, but biological somethings.

"Cause of Cambrian explosion -- Terrestrial or Cosmic?":

Abstract: We review the salient evidence consistent with or predicted by the Hoyle-Wickramasinghe (H-W) thesis of Cometary (Cosmic) Biology. Much of this physical and biological evidence is multifactorial. One particular focus are the recent studies which date the emergence of the complex retroviruses of vertebrate lines at or just before the Cambrian Explosion of ~500 Ma. Such viruses are known to be plausibly associated with major evolutionary genomic processes. We believe this coincidence is not fortuitous but is consistent with a key prediction of H-W theory whereby major extinction-diversification evolutionary boundaries coincide with virus-bearing cometary-bolide bombardment events. A second focus is the remarkable evolution of intelligent complexity (Cephalopods) culminating in the emergence of the Octopus. A third focus concerns the micro-organism fossil evidence contained within meteorites as well as the detection in the upper atmosphere of apparent incoming life-bearing particles from space. In our view the totality of the multifactorial data and critical analyses assembled by Fred Hoyle, Chandra Wickramasinghe and their many colleagues since the 1960s leads to a very plausible conclusion -- life may have been seeded here on Earth by life-bearing comets as soon as conditions on Earth allowed it to flourish (about or just before 4.1 Billion years ago); and living organisms such as space-resistant and space-hardy bacteria, viruses, more complex eukaryotic cells, fertilised ova and seeds have been continuously delivered ever since to Earth so being one important driver of further terrestrial evolution which has resulted in considerable genetic diversity and which has led to the emergence of mankind.

Two commentaries.

This is almost certainly not true.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramDamaging Hard Drives with an Ultrasonic Attack

Playing a sound over the speakers can cause computers to crash and possibly even physically damage the hard drive.

Academic paper.

Krebs on SecurityAre Your Google Groups Leaking Data?

Google is reminding organizations to review how much of their Google Groups mailing lists should be public and indexed by Google.com. The notice was prompted in part by a review that KrebsOnSecurity undertook with several researchers who’ve been busy cataloging thousands of companies that are using public Google Groups lists to manage customer support and in some cases sensitive internal communications.

Google Groups is a service from Google that provides discussion groups for people sharing common interests. Because of the organic way Google Groups tend to grow as more people are added to projects — and perhaps given the ability to create public accounts on otherwise private groups — a number of organizations with household names are leaking sensitive data in their message lists.

Many Google Groups leak emails that should probably not be public but are nevertheless searchable on Google, including personal information such as passwords and financial data, and in many cases comprehensive lists of company employee names, addresses and emails.

By default, Google Groups are set to private. But Google acknowledges that there have been “a small number of instances where customers have accidentally shared sensitive information as a result of misconfigured Google Groups privacy settings.”

In early May, KrebsOnSecurity heard from two researchers at Kenna Security who started combing through Google Groups for sensitive data. They found thousands of organizations that seem to be inadvertently leaking internal or customer information.

The researchers say they discovered more than 9,600 organizations with public Google Groups settings, and estimate that about one-third of those organizations are currently leaking some form of sensitive email. Those affected include Fortune 500 companies, hospitals, universities and colleges, newspapers and television stations and U.S. government agencies.

In most cases, to find sensitive messages it’s enough to load the company’s public Google Groups page and start typing in key search terms, such as “password,” “account,” “hr,” “accounting,” “username” and “http:”.

Many organizations seem to have used Google Groups to index customer support emails, which can contain all kinds of personal information — particularly in cases where one employee is emailing another.

Here are just a few of their more eyebrow-raising finds:

• Re: Document(s) for Review for Customer [REDACTED]. Group: Accounts Payable
• Re: URGENT: Past Due Invoice. Group: Accounts Payable
• Fw: Password Recovery. Group: Support
• GitHub credentials. Group: [REDACTED]
• Sandbox: Finish resetting your Salesforce password. Group: [REDACTED]
• RE: [REDACTED] Suspension Documents. Group: Risk and Fraud Management

Apart from exposing personal and financial data, misconfigured Google Groups accounts sometimes publicly index a tremendous amount of information about the organization itself, including links to employee manuals, staffing schedules, reports about outages and application bugs, as well as other internal resources.

This information could be a potential gold mine for hackers seeking to conduct so-called “spearphishing” attacks that single out specific employees at a targeted organization. Such information also would be useful for criminals who specialize in “business email compromise” (BEC) or “CEO fraud” schemes, in which thieves spoof emails from top executives to folks in finance asking for large sums of money to be wired to a third-party account in another country.

“The possible implications include spearphishing, account takeover, and a wide variety of case-specific fraud and abuse,” the Kenna Security team wrote.

In its own blog post on the topic, Google said organizations using Google Groups should carefully consider whether to change the access to groups from “private” to “public” on the Internet. The company stresses that public groups have the marker “shared publicly” right at the top, next to the group name.

“If you give your users the ability to create public groups, you can always change the domain-level setting back to private,” Google said. “This will prevent anyone outside of your company from accessing any of your groups, including any groups previously set to public by your users.”

If your organization is using Google Groups mailing lists, please take a moment to read Google’s blog post about how to check for oversharing.

Also, unless you require some groups to be available to external users, it might be a good idea to turn your domain-level Google Group settings to default “private,” Kenna Security advises.

“This will prevent new groups from being shared to anonymous users,” the researchers wrote. “Secondly, check the settings of individual groups to ensure that they’re configured as expected. To determine if external parties have accessed information, Google Groups provides a feature that counts the number of ‘views’ for a specific thread. In almost all sampled cases, this count is currently at zero for affected organizations, indicating that neither malicious nor regular users are utilizing the interface.”

Planet DebianVincent Sanders: You can't make a silk purse from a sow's ear

Pile of network switches
I needed a small Ethernet network switch in my office so went to my pile of devices and selected an old Dell PowerConnect 2724 from the stack. This seemed the best candidate as the others were intended for data centre use and known to be very noisy.

I installed it into place and immediately ran into a problem: the switch was not quiet enough; in fact, I could not concentrate at all with it turned on.

Graph of quiet office sound pressure
Believing I could not fix what I could not measure I decided to download an app for my phone that measured raw sound pressure. This would allow me to empirically examine what effects any changes to the switch made.

The app is not calibrated, so it can only be used to examine relative changes; a reference level is therefore required. I took a reading in the office with the switch turned off but all other equipment operating to obtain a baseline measurement.

All measurements were made with the switch and phone in the same positions about a meter apart. The resulting yellow curves are the average for a thirty second sample period with the peak values in red.

The peak between 50Hz and 500Hz initially surprised me but after researching how a human perceives sound it appears we must apply the equal loudness curve to correct the measurement.

Graph of office sound pressure with switch turned on
With this in mind we can concentrate on the data between 200Hz and 6000Hz as the part of the frequency spectrum with the most impact. So in the reference sample we can see that the audio pressure is around the -105dB level.

I turned the switch on and performed a second measurement, which showed a level around the -75dB level with peaks at the -50dB level. This is a difference of some 30dB; if we assume our reference is a "calm room" at 25dB(SPL), then the switch is causing the ambient noise level to be similar to a "normal conversation" at 55dB(SPL).

Something had to be done if I were to keep using this device so I opened the switch to examine the possible sources of noise.

Dell PowerConnect 2724 with replacement Noctua fan
There was a single 40x40x20mm 5v high capacity Sunon brand fan in the rear of the unit. I unplugged the fan and the noise level immediately returned to ambient, indicating that all the noise was being produced by this single device; unfortunately the switch soon overheated without the cooling fan operating.

I thought the fan might be defective so I purchased a high quality "quiet" NF-A4x20 replacement from Noctua. The fan has rubber mounting fixings to further reduce noise and I was hopeful this would solve the issue.

Graph of office sound pressure with modified switch turned on
The initial results were promising, with noise above 2000Hz largely being eliminated. However, the way the switch enclosure was designed caused airflow to make sound, which produced a level of around 40dB(SPL) between 200Hz and 2000Hz.

I had the switch in service for several weeks in this configuration; eventually the device proved impractical on several points:

  • The management interface was dreadful to use.
  • The network performance was not very good especially in trunk mode.
  • The lower frequency noise became a distraction for me in an otherwise quiet office.

In the end I purchased an 8 port zyxel switch which is passively cooled and otherwise silent in operation and has none of the other drawbacks.

From this experience I have learned some things:

  • Higher frequency noise (2000Hz and above) is much more difficult to ignore than other types of noise.
  • As I have become older my tolerance for equipment noise has decreased and it actively affects my concentration levels.
  • Some equipment has a design which means its audio performance cannot be improved sufficiently.
  • Measuring and interpreting noise sources is quite difficult.

Planet DebianMichal Čihař: Weblate 3.0

Weblate 3.0 has been released today. It contains a brand new access control module and 61 fixed issues.

Full list of changes:

  • Rewritten access control.
  • Several code cleanups that lead to moved and renamed modules.
  • New addon for automatic component discovery.
  • The import_project management command has now slightly different parameters.
  • Added basic support for Windows RC files.
  • New addon to store contributor names in PO file headers.
  • The per component hook scripts are removed, use addons instead.
  • Add support for collecting contributor agreements.
  • Access control changes are now tracked in history.
  • New addon to ensure all components in a project have same translations.
  • Support for more variables in commit message templates.
  • Add support for providing additional textual context.

If you are upgrading from older version, please follow our upgrading instructions, the upgrade is more complex this time.

You can find more information about Weblate on https://weblate.org, the code is hosted on Github. If you are curious how it looks, you can try it out on demo server. Weblate is also being used on https://hosted.weblate.org/ as official translating service for phpMyAdmin, OsmAnd, Turris, FreedomBox, Weblate itself and many other projects.

Should you be looking for hosting of translations for your project, I'm happy to host them for you or help with setting it up on your infrastructure.

Further development of Weblate would not be possible without people providing donations; thanks to everybody who has helped so far! The roadmap for the next release is just being prepared; you can influence this by expressing support for individual issues, either by comments or by providing a bounty for them.

Filed under: Debian English phpMyAdmin SUSE Weblate

Worse Than FailureError'd: I Beg Your Entschuldigung?

"Delta does not seem to be so sure of what language to address me in," writes Pat.

 

"I'm wondering if the person writing the release notes made that typo when their mind was...ahem...somewhere else?" writes Pieter V.

 

Brad W. wrote, "For having "Caterpillar," "Revolver," and "Steel Toe" in the description the shoe seems a bit wimpy...maybe the wearer is expected to have an actual steel toe?"

 

"Tomato...tomahto...potato...potahto...GDPR...GPRD...all the same thing. Right?" writes Paul K.

 

"Apparently installing Ubuntu 18.04 on your laptop comes with free increase of battery capacity by almost 40x! Now that's what I call FREE software!" Jordan D. wrote.

 

Ian O. writes, "I don't know why Putin cares about the NE-2 Democratic primary, but I'm sure he added those eight extra precincts for a good reason."

 


Planet Debianbisco: Second GSoC Report

A lot has happened since the last report. The main change in nacho was probably the move to integrate django-ldapdb. This abstracts a lot of operations one would have to do on the directory using bare ldap and it also provides the possibility of having the LDAP objects in the Django admin interface, as those are addressed as Django models. By using django-ldapdb i was able to remove around 90% of the self-written ldap logic. The only functionality that still remains where i have to directly use the ldap library are the password operations. It would be possible to implement these features with django-ldapdb, but then i would have to integrate password hashing functionality into nacho and above all i would have to adjust the hashing function for every ldap server with a different hashing algorithm setting. This way the ldap server does the hashing and i won't have to set the algorithm in two places.
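
To make that concrete, here is a minimal sketch of what a django-ldapdb model typically looks like; the base DN, object classes and attributes below are illustrative and not nacho's actual schema:

    import ldapdb.models
    from ldapdb.models.fields import CharField

    class LdapUser(ldapdb.models.Model):
        # entries live under this subtree and carry these object classes
        base_dn = "ou=users,dc=example,dc=org"
        object_classes = ["inetOrgPerson"]

        uid = CharField(db_column="uid", primary_key=True, max_length=200)
        cn = CharField(db_column="cn", max_length=200)
        mail = CharField(db_column="mail", max_length=200)

        def __str__(self):
            return self.uid

Once such a model exists, entries can be queried and changed with the usual ORM calls (e.g. LdapUser.objects.get(uid="...")) and registered with the Django admin like any other model.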

This led to the next feature i implemented, which was the password reset functionality. It works as known from most other sites: one enters a username and gets an email with a password reset link. Related to this is also the modification operation of the mail attribute: i wasn't sure if the email address should be changeable right away or if a new address should be confirmed with a token sent by mail. We talked about this during our last mentors-student meeting and both formorer and babelouest said it would be good to have a confirmation for email addresses. So that was another feature i implemented.

Two more attributes that weren't part of nacho up until now were SSH Keys and a profile image. Especially the ssh keys led to a redesign of the profile page, because there can be multiple ssh keys. So i changed the profile container to be a bootstrap card and the individual areas are tabs in this card:

Screenshot of the profile page

For the image i had to create a special upload form that saves the bytestream of the file directly to ldap which stores it as base64 encoded data. The display of the jpegPhoto field is then done via

<img src=data:image/png;base64,...

This way we don’t have to store the image files on the server at all.
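
A rough sketch of the encoding step (not the exact nacho code; the MIME type is an assumption):

import base64

def photo_as_data_uri(photo_bytes, mime="image/jpeg"):
    # Base64-encode the raw bytes of the jpegPhoto attribute so the
    # template can embed them directly in an <img src="data:..."> tag.
    encoded = base64.b64encode(photo_bytes).decode("ascii")
    return "data:%s;base64,%s" % (mime, encoded)

The returned string can be dropped straight into the src attribute of the <img> tag shown above.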

A short note about the ssh key schema

We are using this openssh-ldap schema. To include the schema in the slapd installation it has to be converted to an ldif file. For that I had to create a temporary file, let's call it schema_convert.conf, with the line

include /path/to/openssh-ldap.schema

Then, using

sudo slaptest -f schema_convert.conf -F /tmp/temporaryfolder

one gets a folder containing the ldif file in /tmp/temporaryfolder/cn=config/cn=schema/cn={0}openssh-ldap.ldif. This file has to be edited (remove the metadata) and can then be added to ldap using:

ldapadd -Y EXTERNAL -H ldapi:/// -f openssh-ldap.ldif

What else happened

Another big improvement is the admin site. Using django-ldapdb I have a model view of selected areas of the ldap tree and can manage them through the web interface. Using the group mapping feature of django-auth-ldap I was able to give management permissions to groups that are also stored in ldap.
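
For reference, such a group mapping with django-auth-ldap is configured in settings.py roughly like this (a sketch; the server URI, DNs and group names are placeholders for whatever the real directory uses):

import ldap
from django_auth_ldap.config import LDAPSearch, GroupOfNamesType

AUTH_LDAP_SERVER_URI = "ldap://localhost"
AUTH_LDAP_GROUP_SEARCH = LDAPSearch(
    "ou=groups,dc=example,dc=org", ldap.SCOPE_SUBTREE, "(objectClass=groupOfNames)"
)
AUTH_LDAP_GROUP_TYPE = GroupOfNamesType()

# Members of the LDAP admin group get staff/superuser rights in Django,
# and with that, access to the admin interface.
AUTH_LDAP_USER_FLAGS_BY_GROUP = {
    "is_staff": "cn=admins,ou=groups,dc=example,dc=org",
    "is_superuser": "cn=admins,ou=groups,dc=example,dc=org",
}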

I updated the nacho Debian package. Now that django-ldapdb is in testing, all the dependencies can be installed from Debian packages. I started to use the salsa issue tracker for the issues, which makes it a lot easier to keep track of things to do. I took a whole day to start getting into unit tests and began writing some. On day two of the unit test experience I started using the GitLab continuous integration feature of salsa, so now every commit is checked against the test suite. There are only around 20 tests at the moment, covering just registration, login and password reset; I guess there are around 100 test cases for all the other stuff that I still have to write ;)
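
To give an idea, a view test in this style might look roughly like the following (a simplified sketch; the URL name and the behaviour on failed login are assumptions, not necessarily nacho's actual tests):

from django.test import TestCase
from django.urls import reverse

class LoginViewTests(TestCase):
    def test_login_page_renders(self):
        response = self.client.get(reverse("login"))
        self.assertEqual(response.status_code, 200)

    def test_wrong_credentials_are_rejected(self):
        response = self.client.post(
            reverse("login"), {"username": "nobody", "password": "wrong"}
        )
        # assume a failed login re-renders the form instead of redirecting
        self.assertEqual(response.status_code, 200)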

Planet DebianPaul Wise: FLOSS Activities May 2018

Changes

Issues

Review

Administration

  • iotop: merge patch
  • Debian: buildd check, install package, redirect support, fix space in uid/gid, reboot lock workaround
  • Debian mentors: reboot for security updates
  • Debian wiki: whitelist email addresses,
  • Openmoko: web server restart

Communication

Sponsors

The tesseract/purple-discord work, bug reports for samba/git-lab/octotree/dh-make-golang and AutomaticPackagingTools change were sponsored by my employer. All other work was done on a volunteer basis.

,

TEDEbola and the future of vaccines: In conversation with Seth Berkley

At TED2015, Seth Berkley showed two Ebola vaccines under review at the time. One of these vaccines is now being deployed in the current Ebola outbreak in the DRC. Photo: Bret Hartman/TED

Dr. Seth Berkley is an epidemiologist and the CEO of Gavi, the Vaccine Alliance, a global health organization dedicated to improving access to vaccines in developing countries. When he last spoke at TED, in 2015, Seth showed the audience two experimental vaccines for Ebola — both of them in active testing at the time, as the world grappled with the deadly 2014–2016 outbreak. Just last week, one of these vaccines, the Merck rVSV-ZEBOV, was deployed in the Democratic Republic of the Congo to help slow the spread of a new Ebola outbreak in and around the city of Mbandaka. With more than 30 confirmed cases and a contact list of more than 600 people who may be at risk, the situation in the DRC is “on a knife edge,” according to the World Health Organization. Seth flew to the DRC to help launch the vaccine; now back in Geneva, he spoke to TED on the challenges of vaccine development and the stunning risks we are overlooking around global health epidemics.

This interview has been edited and condensed.

You were on the scene in Mbandaka; what were you working on there?

My role was to launch the vaccine — to make sure that this technology which wasn’t going to get made was made, and was made available in case there was another big emergency. And lo and behold, there it is. Obviously, given the emergency nature, a lot of the activity recently has been about how to accelerate the work and prepare the critical pieces that are going to be necessary to get this under control, and not have it spin out of control.

Health workers in the DRC prepare the first dose of the Ebola vaccine. Photo: Pascal Emmanuel Barollier/Gavi

This is the ninth outbreak in the DRC. They are more experienced [with Ebola] than any other country in the world, but the DRC is a massive country, and the people in Mbandaka, Bikoro and Iboko are in very isolated communities. The challenge right now is to set up the basic pillars of Ebola care — basic infection control procedures, making sure that you identify every case, that you create a line-list of cases, and that you identify the context that those cases have had. All of that is the prerequisite to vaccination.

The other thing you have to do is educate the population. They know vaccines — we vaccinate for all diseases in the DRC, as we do across most countries in Africa — but the challenge is that people are used to vaccine campaigns where everybody goes to a clinic and gets vaccinated, so the idea that somebody comes to your community, goes to a sick person's house, and vaccinates just the people in that house and the surrounding family and friends is a concept that won't make sense to them. The other important thing is that although the vaccine was 100% effective in the clinical trial … well, it's 100% effective after 10 days, so people who were already incubating Ebola will still go on to develop the disease. If people don't understand that, then they're going to say the vaccine didn't work and that the vaccine gave them Ebola.

The good news is, logistics are set up. There is an air bridge from Kinshasa, there are helicopters to go out to Bikoro, a cold chain for the vaccine is set up in Mbandaka and Bikoro, and there are these cool carriers that keep the vaccine cold so you can transport it out to vaccination campaigns in isolated areas. We have 16,000 doses there, out of 300,000 doses total, and we can release more doses as it makes sense.

You mentioned the local communities — how do you navigate that intersection of medical necessity and the lack of education or misinformation? I read that some people are refusing medical treatment and are turning to local healers or churches, instead of getting vaccinated.

There is no treatment right now available in DRC; the hope is that some experimental treatments will come in. We don’t have the equivalent for the vaccines on the treatment side. It’s going to be very important to get those treatments because, without them, what you’re saying to people is: Leave your loved ones, go to an Ebola care facility and get isolated until you most likely die, and if you don’t die, you’ll be sick for a long time. Compare that to the normal process when you get hospitalized in the DRC, which is that your family will take care of you, feed you and provide nursing care. These are tough issues for people to understand even in the best of circumstances. In an ideal world, [health workers will] work with anthropologists and social scientists, but of course, it all has to be done in the local language by people who are trusted. It’s a matter of working to bring in workers from the DRC, religious leaders and elders to educate the community so that they understand what is happening, and can cooperate with the rather chaotic but rapid effort that needs to occur to get this under control.

We know now it's in three different health zones; we don't yet know whether cases are connected to other cases or if these are the correct numbers of cases. It could be twice or three or ten times as many. You don't know until you begin to do the detective work of line-listing. In an ideal world, you know you're getting where you need to get when 100% of new cases are from the contact list of previous cases, but if 50% or 30% or 80% of the cases are not connected to previous cases, then there are rings of transmission occurring that you haven't yet identified. This is painstaking, careful detective work.

The EPI manager Dr. Guillaume Ngoie Mwamba is vaccinated in the DRC in response to the 2018 Ebola outbreak. Photo: Pascal Emmanuel Barollier/Gavi

What is different about this outbreak from the 2014 crisis? What will be the impact of this particular vaccine?

It's the same strain, Ebola Zaire, just like in West Africa. The difference in West Africa is that they hadn't seen Ebola before; they initially thought it was Lassa fever or cholera, so it took a long time for them to realize this was Ebola. As I said, the DRC has had nine outbreaks, so the government and health workers are familiar with the situation and were able to say, "Okay, we know this is Ebola, let's call for help and bring people in." For the vaccine campaign, they brought in a lot of the vaccinators that worked in Guinea and other countries to help do the vaccination work, because it's an experimental vaccine under clinical trial protocols, so informed consent is required.

The impact of the vaccine is that once the line-listings are there — it was highly effective in Guinea — if this is an accelerating epidemic and you get good listing of cases, you can stop the epidemic with intervention. The other thing is that you don’t want health workers or others to say “Oh, I got the vaccine now, I don’t have to worry about it!” They still need to use full precautions, because although the vaccine was 100% effective in previous trials, the confidence interval given the size was between 78% and 100%.

In your TED Talk, you mentioned the inevitability of deadly viruses; that they will incubate, that they are an evolutionary reality. On a global level, what more can be done to anticipate epidemics, and how can we be more proactive?

I talked about the concept of prevention: How do you build vaccines for these diseases before they become real problems, and try to treat them like a global health emergency before they become one? At last year's Davos a new initiative called CEPI (the Coalition for Epidemic Preparedness Innovations) was created, which is working to develop new vaccines against agents that haven't yet caused major epidemics but have caused small outbreaks, with an understanding that they could. The idea is to make a risk assessment and leave the vaccines frozen, like they were with Ebola; you can't do a human trial until you have an outbreak.

In 2015, at the TED Conference, Seth Berkley showed this outbreak map. During our conversation last week, he told us: “The last outbreak in 2014 was the first major outbreak. There had been 24 previous outbreaks, a handful of cases to a few hundred cases, but that was the first case that had gone in the tens of thousands. This vaccine was tried in the waning days of that outbreak, so we know what it looks like in an emergency situation.” Photo: Bret Hartman/TED

Now, the biggest threat of all — and I did a different TED talk on this — is global flu. We’re not prepared in case of a flu pandemic. A hundred years ago, the Spanish flu killed between 50 and 100 million people, and today in an interconnected world, it could be many, many times more than that. A billion people travel outside of their countries these days, and there are 66 million displaced people. I often have dinner in Nairobi, breakfast in London, and lunch in New York, and that’s within the incubation period of any of these infections. It’s a very different world now, and we really have to take that seriously. Flu is the worst one; the good thing about Ebola is that it’s not so easy to transmit, whereas the flu is really easy to transmit, as are many other infectious diseases.

It's interesting to go back to the panic that existed with Ebola — there were only a few cases in the US but this was the "ISIS of diseases," "the news story of the decade". The challenge is, people get so worked up and there's such fear, and then as soon as the epidemic goes away, they forget about it. I tried to raise money after that TED Talk, and people in general weren't interested: "Oh, that's yesterday's disease." We persevered and made sure in our agreement with Merck that they would produce those doses, even though these are not licensed doses — as soon as they get licensed, they'll have to get rid of those doses and make more. This was a big commitment, but we said, "Can you imagine what would happen if they had a 100% efficacious vaccine and then an outbreak occurred and we didn't have any doses of the vaccine?" It was a risky thing to do, but it was the right thing to do from a global risk perspective, and here we are in an outbreak. Maybe it'll stay small, but right now in the DRC, we're seeing new cases occurring every day. It's a scary thing.

The idea that we can make a difference is exciting — we announced the Advance Purchase Commitment in January 2017, and it’s now about a year later and here we have it being used. And it’s amazing that Merck has put this much effort in. They’ve done great work and they deserve credit for this, because it’s not like they’re going to make any money out of this. If they break even, it’ll be lucky. They’re doing this because it’s important and because they can help. We need to bring together all of the groups who can help in these circumstances — it’s the dedication of all the people on the ground from the DRC, as well as international volunteers and agencies, that will provide the systems to get this epidemic under control. There’s a lot of heroes here.

The Wangata Hospital in Mbandaka. Photo: Pascal Emmanuel Barollier/Gavi

The financial aspect is interesting — with the scale and scope of a potential global health crisis like Ebola or the flu, once it’s too late, you wouldn’t even be thinking about the relatively small financial risk of creating a vaccine that could have kept us prepared. Even if there is an immediate financial risk, in the long term, it seems incomparable.

The costs of the last Ebola outbreak were huge. In those three countries, their GDP went from positive to negative, health workers died, and it affected health work going forward, travel on the continent, the selling of commodities, etc. Even in the US, the cost of dealing with the few cases that were there was huge. Even if you're a cynic and say, "I don't care about the people, I'm only interested in a capitalistic view of the world", these outbreaks are really expensive. The problem is there isn't necessarily a direct link between that and getting products developed and having them stockpiled and ready to go.

The challenge is investing years ahead of time not knowing when a virus will occur or what the strain is going to be. That’s the same thing here with Ebola — we agreed to invest up to $390 million to create a stockpile, at a time when we didn’t have the money and when others weren’t interested. But if we didn’t have those doses, we’d be sitting here saying, “Well gee, shouldn’t we make some doses now?” — it takes a long time to produce the doses, to quality assure and check them, to fill and finish them, and to get them to the site. [It’s important to have] that be done by the world even when the financial incentives aren’t there.

In an interview with NPR's TED Radio Hour, you mention the "paradox of prevention", the idea that we seem to view health care with a treatment-centered approach, rather than prevention. With diseases that kill quickly and spread rapidly, we can't have a solely treatment-focused mindset; we have to think about preventing them from becoming epidemics.

That is right, but we can’t ignore the treatment too [and the context in which you give it]. Personalize it: If your mother gets sick, and you’re dedicated — you would give your life for your mother in that culture, family takes care of family — do you now ship your mother to a center that you’ve heard through the grapevine will lock her up and isolate her, where she will die alone, or do you hide her and pretend she has malaria or something else? But if a doctor can say, “There might be treatment that can save your mother’s life,” well, then you want to do that for her. It [helps create] the right mindset in the population, to know that people are trying to give the best treatment, that this isn’t hopeless.

How do you think that the current Ebola situation will affect the way that we approach vaccine development? The Advance Purchase Commitment was an instance of an industry innovation. How can we continue to create incentives for pharmaceutical companies to invest in long-term development of vaccines that don’t have an immediate or guaranteed market demand?

Every time we support industry with this type of public-private partnership, it increases confidence that vaccines will be bought and supported, and increases the likelihood of industry engagement for future projects. However, it is important to state that this will not be a highly profitable vaccine. There are opportunity costs associated with it, and risks. The commitment helps but doesn’t fully solve the problem. Using push mechanisms like the funding from BARDA, Wellcome Trust and others, or a mechanism like CEPI, also helps with the risk. In an ideal world, there would be more generous mechanisms to actively incentivize industry engagement. Also, by [offering] priority review vouchers, fast track designations and others, governments can put in really good incentives for these types of programs.

Outside of closely monitoring the DRC, what are the next steps in your work?

We just opened a window for typhoid vaccines. And this is perfect timing as we have just seen the first cluster of extreme antibiotic-resistant typhoid in Pakistan, with a case exported to the UK. Pakistan has already submitted an application for support, and the Gates Foundation has provided some doses in the interim. This is an example where prevention is way, way better than cure.

Planet DebianMike Gabriel: I do it my way: Let's Encrypt

There are as many ways of doing the Let's Encrypt thing as there are site admins on this planet. So here is my way of doing it, mainly as documentation for myself and as a tutorial for a supervision class I'll be teaching tomorrow morning.

TL;DR;

This blog post describes how to obtain certificates from Let's Encrypt on a production web server in a non-privileged user context. We use the small and well-readable acme-tiny [1] Python script for it.

Assumptions

  • You know how e.g. Apache2 gets configured (in general)
  • and you have a host running Apache2 that is reachable on the internet
  • and that host has at least one DNS hostname associated with its public IP address.
  • You have an idea about OpenSSL and requesting a signed certificate
  • You know what privileges on a *nix system are and why it is mostly a bad idea to run self-updating scripts under a privileged user account (e.g. root)... (finger pointing at certbot development...)

Creating SSL Key and the Certificate Signing Request

For creating an SSL Key and a corresponding Certificate Signing Request, I use this little script, named web_certrequest.sh:

#!/bin/bash

# Usage: ./web_certrequest.sh <fqdn>

FQDN="$1"
FQDNunderscores="$(echo "$FQDN" | sed 's/\./_/g')"

base="$(pwd)"

test -d "$base/private" || mkdir -p "$base/private"

if [ -f "$base/private/${FQDNunderscores}.key" ]; then
    # Key already exists: only create a new CSR for it.
    openssl req -config "$base/openssl::${FQDNunderscores}.cnf" \
                -nodes -new \
                -key "$base/private/${FQDNunderscores}.key" \
                -out "$base/${FQDNunderscores}.csr"
else
    # No key yet: create key and CSR in one go.
    openssl req -config "$base/openssl::${FQDNunderscores}.cnf" \
                -nodes -new \
                -keyout "$base/private/${FQDNunderscores}.key" \
                -out "$base/${FQDNunderscores}.csr"
fi

If you are doing all this for the first time, create an empty folder, put the script in it and make the script executable:

$ chmod u+x web_certrequest.sh

For each host (FQDN) that I need SSL certificates for I have a small openssl::<fqdn-with-underscores-instead-of-dots>.cnf configuration file:

[ req ]
default_bits            = 4096                  # Size of keys
distinguished_name      = req_distinguished_name
req_extensions          = v3_req
x509_extensions         = v3_req

[ req_distinguished_name ]
# Variable name           Prompt string
#----------------------   ----------------------------------
0.organizationName      = Organization Name (company)
organizationalUnitName  = Organizational Unit Name (department, division)
emailAddress            = Email Address
emailAddress_max        = 40
localityName            = Locality Name (city, district)
stateOrProvinceName     = State or Province Name (full name)
countryName             = Country Name (2 letter code)
countryName_min         = 2
countryName_max         = 2
commonName              = Common Name (hostname, IP, or your name)
commonName_max          = 64

# Default values for the above, for consistency and less typing.
# Variable name                   Value
#------------------------------   ------------------------------
0.organizationName_default      = My Company              # adapt as needed
localityName_default            = My City                 # adapt as needed
stateOrProvinceName_default     = My County/State         # adapt as needed
countryName_default             = C                       # country code: e.g. DE for Germany
commonName_default              = host.example.com        # hostname of the webserver
emailAddress_default            = hostmasters@example.com # adapt as needed
organizationalUnitName_default  = Webmastery              # adapt as needed

[ v3_req ]
# Extensions to add to a certificate request
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment, dataEncipherment, keyAgreement

# Some CAs do not yet support subjectAltName in CSRs.
# Instead the additional names are form entries on web
# pages where one requests the certificate...
subjectAltName          = @alt_names

[alt_names]
### host.example.com is the FQDN
DNS.1 = host.example.com
DNS.2 = www.example.com
DNS.3 = example.com
DNS.4 = wiki.example.com
DNS.5 = www.old-company-name.biz

The above config I place into the same directory as the web_certrequest.sh script and name it openssl::host_example_com.cnf.

Note: I will use host.example.com further down as my host's FQDN. Adapt to your host name, please.

You then run the above script (you need to answer several questions on the way...)...

$ ./web_certrequest.sh host.example.com

and you'll get two files from that:

  • private/host_example_com.key: The private key belonging to the to-be-signed certificate
  • host_example_com.csr: The file containing the Certificate Signing Request. This file we will send to the Let's Encrypt signing engine...

Note: Once you have the .key and the .csr file, you only need to recreate them on rare occasions:

  • .key file recreation:

    • if you need more cryptographic strength (e.g. more than 4096 bits)
    • apart from that, leave it as is...
  • .csr file recreation:

    • only if you recreate the .key file
    • if you need to include another host alias in subjectAltName
    • whenever any of the DNS names in the certificate is not in DNS anymore
    • if the certificate's metadata is wrong/bad/needs to be adapted (e.g. company names sometimes change)

Next step: The host_example_com.csr we will soon send to the Let's Encrypt signing engine, but let's prepare the webserver first.

Setting up the Webserver

With Let's Encrypt there are several ways of getting your mastery over a machine on the internet confirmed. One is to request a challenge from the LE API that you will answer via a webserver you control. Another approach is responding to an LE challenge via a DNS server you control. This blog post looks at web server based responses to the Let's Encrypt challenge.

Place the private key on the web server

Copy the private host_example_com.key file to the web server under /etc/ssl/private. Make sure the file permissions are 0600 and that the file is owned by root:root.

Create a letsencrypt system user

For the acme-tiny call we need a non-privileged system user account:

$ sudo adduser --system --home /var/lib/letsencrypt --shell /bin/bash letsencrypt

Setup this letsencrypt user's home directory in the following way:

(root@host) {/var/lib/letsencrypt} # ls -al 
total 40
drwx--x---  8 letsencrypt www-data 4096 Mar  1  2017 .
drwxr-xr-x 80 root        root     4096 Apr 21 08:01 ..
drwx------  2 letsencrypt nogroup  4096 Mar  6  2017 bin
drwx--x---  2 letsencrypt www-data 4096 May  1 00:00 challenges
drwx------  2 letsencrypt nogroup  4096 Mar  6  2017 .letsencrypt

The bin/ subfolder contains this:

(root@host) {/var/lib/letsencrypt/bin} # ls -al 
total 12
drwx------ 2 letsencrypt nogroup  4096 Mar  6  2017 .
drwx--x--- 8 letsencrypt www-data 4096 Mar  1  2017 ..
-rwxr-xr-x 1 root        root   464 Mar  6  2017 letsencrypt-renew-certs

The letsencrypt-renew-certs script has this content:

#!/bin/bash

INTERMEDIATE=lets-encrypt-x3-cross-signed.pem

acme-tiny --account-key ~/.letsencrypt/account-host_example_com.key \
          --csr ~/.letsencrypt/host_example_com.csr \
          --acme-dir ~/challenges/ \
          1> ~/.letsencrypt/host_example_com.crt && \
    \
    cat ~/.letsencrypt/host_example_com.crt \
        ~/.letsencrypt/${INTERMEDIATE} \
        1> ~/.letsencrypt/host_example_com.fullchain.crt

Note: The script above needs the acme-tiny tool, so don't forget to install it:

$ sudo apt-get install acme-tiny

And the .letsencrypt/ subfolder contains this:

(root@host) {/var/lib/letsencrypt} # ls -al .letsencrypt/
total 32
drwx------ 2 letsencrypt nogroup  4096 Mar  6  2017 .
drwx--x--- 8 letsencrypt www-data 4096 Mar  1  2017 ..
-rw-r--r-- 1 root        root     3247 Mar  1  2017 account-host_example_com.key
-rw-r--r-- 1 root        root     1984 Mar  1  2017 letsencryptauthorityx3.pem
-rw-r--r-- 1 root        root     1647 Nov 16  2016 lets-encrypt-x3-cross-signed.pem
-rw-r--r-- 1 root        root     2480 Mar  1  2017 host_example_com.csr

The file host_example_com.csr needs to be copied over. We just created it above. Remember?

The files letsencryptauthorityx3.pem [2] and lets-encrypt-x3-cross-signed.pem [3] are intermediate certificates for the two different certificate chains offered by Let's Encrypt. For more info, see here [4].

The file account-host_example_com.key is our account key for Let's Encrypt. Personally, I prefer one account key per web server I run, but some people have one account.key file per admin. The file can be created with this command (do this as the letsencrypt system user we just created in subdir .letsencrypt/):

$ openssl genrsa 4096 > account-host_example_com.key   

Note: When imitating the above, make sure that the file permissions and file ownerships on your web server system match what you see above.

Getting started with Apache2

For retrieving the first certificate, nearly nothing needs to be configured in Apache2, except that it must be reachable on port 80 and that the challenges path must be available as the URL http://host.example.com/.well-known/acme-challenge:

Create /etc/apache2/conf-available/acme-tiny.conf with this content:

Alias /.well-known/acme-challenge/ /var/lib/letsencrypt/challenges/

<Directory /var/lib/letsencrypt/challenges>
    Require all granted
    Options -Indexes
</Directory>

... and enable it:

$ sudo a2enconf acme-tiny

... and restart Apache2:

$ sudo invoke-rc.d apache2 restart

Obtaining the Certificate from Let's Encrypt

Then, for (regularly) obtaining / updating our certificate, we create a script that can be found via $PATH and should normally be called with root privileges (/usr/local/sbin/letsencrypt-renew-certs); don't forget to make it executable (chmod 0700 /usr/local/sbin/letsencrypt-renew-certs):

#!/bin/bash

su - letsencrypt -c ~letsencrypt/bin/letsencrypt-renew-certs

invoke-rc.d apache2 restart

This script drops privileges for the certificate file update part and then re-launches Apache2. This is the only part that requires root privileges on the web server. The point is: after the certificate has been updated, we need to restart (or at least reload) the web server, and this requires root privileges.

If all is correctly in place, you should be able to obtain (and later on update) the web server's SSL certificate from Let's Encrypt:

$ sudo letsencrypt-renew-certs

After the successful acme-tiny call, the .letsencrypt/ subfolder in /var/lib/letsencrypt looks like this:

(root@host) {/var/lib/letsencrypt} # ls -al .letsencrypt/
total 32
drwx------ 2 letsencrypt nogroup  4096 Mar  6  2017 .
drwx--x--- 8 letsencrypt www-data 4096 Mar  1  2017 ..
-rw-r--r-- 1 root        root     3247 Mar  1  2017 account-host_example_com.key
-rw-r--r-- 1 root        root     1984 Mar  1  2017 letsencryptauthorityx3.pem
-rw-r--r-- 1 root        root     1647 Nov 16  2016 lets-encrypt-x3-cross-signed.pem
-rw-r--r-- 1 letsencrypt nogroup     0 May  1 00:00 host_example_com.crt
-rw-r--r-- 1 root        root     2480 Mar  1  2017 host_example_com.csr
-rw-r--r-- 1 letsencrypt nogroup  4704 Apr  1 00:17 host_example_com.fullchain.crt

Plumbing it all into the Apache2 Config

First, the webserver needs to know where to find the SSL certificate file and the corresponding key file.

Adapting the basic Apache2 SSL Setup

We earlier copied the key file over to /etc/ssl/private/host_example_com.key.

The certificate file should now exist in /var/lib/letsencrypt/.letsencrypt. Personally, I prefer having it symlinked to /etc/ssl/certs/host_example_com.fullchain.crt:

$ sudo ln -s /var/lib/letsencrypt/.letsencrypt/host_example_com.fullchain.crt /etc/ssl/certs/host_example_com.fullchain.crt

Apache2 needs to be pointed to these two files (the .key and the .crt file). Enable the SSL module in Apache2, enable the default SSL site and modify the default SSL config like this:

diff --git a/apache2/sites-available/default-ssl.conf b/apache2/sites-available/default-ssl.conf
index 7e37a9c..4e8784a 100644
--- a/apache2/sites-available/default-ssl.conf
+++ b/apache2/sites-available/default-ssl.conf
@@ -29,8 +29,10 @@
                #   /usr/share/doc/apache2/README.Debian.gz for more info.
                #   If both key and certificate are stored in the same file, only the
                #   SSLCertificateFile directive is needed.
-               SSLCertificateFile      /etc/ssl/certs/ssl-cert-snakeoil.pem          
-               SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
+               #SSLCertificateFile     /etc/ssl/certs/ssl-cert-snakeoil.pem
+               #SSLCertificateKeyFile /etc/ssl/private/ssl-cert-snakeoil.key
+               SSLCertificateFile      /etc/ssl/certs/host_example_com.fullchain.crt
+               SSLCertificateKeyFile /etc/ssl/private/host_example_com.key

                #   Server Certificate Chain:
                #   Point SSLCertificateChainFile at a file containing the

Enable the SSL module:

$ sudo a2enmod ssl

Enable the default-ssl site:

$ sudo a2ensite default-ssl

Redirecting all traffic to https://

In the webserver, we now need to make sure that the URL (folder) http://host.example.com/.well-known/acme-challenge/ is always reachable for the Let's Encrypt signing engine on a non-encrypted connection.

Note 1: Once you have Let's Encrypt deployed, you want all requests coming in on http:// be redirected to https://.

Note 2: The challenges URL should be reachable without encryption (http://, rather than https://). Why? If your certificate renewal fails some day and you want to obtain a new certificate, you can only do it over a non-encrypted connection (as your https:// certificate has expired).

This is the configuration snippet that you need to place into all VirtualHost definitions (for non-encrypted access to the webserver, so normally under <VirtualHost <address>:80>). Watch out! Manual adaptation is needed in the third line:

RewriteEngine on
RewriteCond %{REQUEST_URI} !/\.well-known/acme-challenge/.*
RewriteRule /(.*) https://vhost-server-name.example.com/$1 [L,NC,NE]

Enable the rewrite module:

$ sudo a2enmod rewrite

Then restart Apache2:

$ sudo invoke-rc.d apache2 restart

Testing in a Web Browser

Now open the URL http://host.example.com in a web browser and it should switch to https://host.example.com and the certificate should be trustworthy.

Testing another Let's Encrypt Signing Request

If the browser test above works ok, then try the sudo letsencrypt-renew-certs run again. It should get you another new certificate just fine (like when you ran it just before). You might want to check time stamps under /var/lib/letsencrypt/.letsencrypt.

If you do this re-test many many times, the Let's Encrypt / ACME site will block you at some point. Don't worry, this is only temporary and a DDoS protection mechanism.

Getting a new Certificate every Month

Every month I run the CRON job below. The CRONTAB entry (use e.g. VISUAL=mcedit crontab -e) looks like this:

0 0  1 * * /usr/local/sbin/letsencrypt-renew-certs 1>/root/letsencrypt-renewal.log 2>&1

References

Planet DebianChris Lamb: Free software activities in May 2018

Here is my monthly update covering what I have been doing in the free software world during May 2018 (previous month):

Coding-wise, I:


Reproducible builds


Whilst anyone can inspect the source code of free software for malicious flaws, almost all software is distributed pre-compiled to end users.

The motivation behind the Reproducible Builds effort is to ensure no flaws have been introduced during this compilation process by ensuring identical results are generated from a given source. This allows multiple third-parties to come to a consensus on whether a build was compromised.

This month I:

  • Fixed an issue in disorderfs (our FUSE-based filesystem that deliberately introduces non-determinism into directory system calls in order to flush out issues) to ensure readdir(2) calls returns consistent and unique inode numbers. (#898287)
  • Presented on our diffoscope "diff-on-steroids" tool, as well as provided an update on the Reproducible Builds effort at the MiniDebConf in Hamburg, Germany.
  • Filed reproducibility-related issues upstream for Fontconfig, tweeny, vcr.py and zstd, as well as authored two patches for GNU mtools to fix reproducibility-related toolchain issues. (#900409 & #900410)
  • Make extensive changes to our website, including overhauling and updating our growing list of talks.
  • Submitted three Debian-specific patches to fix reproducibility issues in telepathy-gabble, vitrage & weston.
  • I categorised a large number of packages and issues in the notes repository and worked on publishing our weekly reports. (#157, #158, #159 & #160)
  • Provided three improvements to our extensive testing infrastructure:
    • Correct the "notes" link URL. [...]
    • Move the package name to the beginning of the "status change" subject lines. [...]
    • Add a X-Reproducible-Builds-Source header to "status change" emails. [...]
  • I also made the following changes to diffoscope, our in-depth and content-aware diff utility that can locate and diagnose reproducibility issues:
    • Clarified the No file format specific differences found inside, yet data differs message. [...]
    • Don't append rather useless "(data)" suffix in the output. [...]
    • Made a number of PEP8-related fixups. (eg. [...], [...], [...], etc.)
  • Finally, I updated the diffoscope.org website, including moving it to a Jekyll-based instance [...], adding a progress bar animation [...], updating the list of supported formats [...], etc.


Debian

  • Made some team-wide changes to packages under the care of the Debian Python Modules Team (DPMT) including:
    • Use HTTPS for Source field in debian/copyright files (eg. [...], [...], [...], etc.)
    • Made a large number of PEP8-related changes to Debian-specific scripts including limiting the line-length [...], placing colon-separated compound statement on separate lines [...], adding blank lines after end of function or class [...], fixing spacing after a comment [...], fixing indentation [...], etc.
    • Use HTTPS URLs for the Homepage field in debian/control. (eg. [...], [...], [...], etc.)
  • Fixed a permissions issue in an Alioth to Salsa repository migration script. [...]
  • Contributed specific patches:
    • cryptsetup: Make the failsleep parameter configurable. (#898495)
    • debhelper: Clarify the order of packages returned from dh_listpackages. (#897949)
    • mssh: Correct "develop" grammar in manual page. (#899368)
    • norwegian: Duplicate dh_build/dh_auto_build in debian/rules. (#900290)
  • Suggested a handful of PEP8-related changes to the Debian Archive Kit (dak) (eg. [...], [...], [...], etc.)
  • Removed build artefacts committed to the repository in the tvb-geodesic packaging. [...]
  • Use the <!nocheck> build profile over an explicit comment in the Python packaging of yarl. [...]
  • I also filed the following bug reports:
    • apt: Inconsistency between apt install ./binary.deb and dpkg -i ./binary.deb if package already up-to-date. (#900142)
    • ftp.debian.org: Please move the website.git repository to salsa. (#899109)
    • git-buildpackage: Add setting to ~/.gbp.conf to prevent debian/gbp.conf overrides. (#898613)
    • plymouth: Repository missing latest upload. (#898511)
    • python-aniso8601: Please revert Python 2.x package drop. (#898245)
    • lastpass-cli: error: Peer certificate cannot be authenticated with given CA certificates. (#898940)
  • Lastly, I submitted 5 patches to fix typos in debian/rules files against catch, grr, imanx, pd-purest-json & tinyos.

Debian LTS


This month I have been paid to work on the Debian Long Term Support (LTS). In that time I did the following:

  • Extensive "Frontdesk" duties including triaging CVEs, following-up with other developers, upstream developers.
  • Filing and cross-referencing bugs in the Debian BTS (eg. #898856).
  • Issued DLA 1379-1 for curl to prevent a heap-based buffer overflow.
  • Preparing uploads to the jessie distribution.
  • Helping prepare the "end-of-life" of the wheezy distribution.

Uploads

  • redis (5:4.0.9-2) — Ignore test failures on problematic architectures to allow migration to testing.
  • ruby-rjb (1.5.5-3) — Replace call to the now-deprecated javah binary. (#897664)
  • python-django (1:1.11.13-1, 2:2.0.5-1 & 2:2.1~alpha1-1) — New upstream releases.
  • gunicorn (19.8.1-1) & redisearch (1.2.0-1) — New upstream releases.

I also performed the following sponsored uploads:


Cryptogram1834: The First Cyberattack

Tom Standage has a great story of the first cyberattack against a telegraph network.

The Blanc brothers traded government bonds at the exchange in the city of Bordeaux, where information about market movements took several days to arrive from Paris by mail coach. Accordingly, traders who could get the information more quickly could make money by anticipating these movements. Some tried using messengers and carrier pigeons, but the Blanc brothers found a way to use the telegraph line instead. They bribed the telegraph operator in the city of Tours to introduce deliberate errors into routine government messages being sent over the network.

The telegraph's encoding system included a "backspace" symbol that instructed the transcriber to ignore the previous character. The addition of a spurious character indicating the direction of the previous day's market movement, followed by a backspace, meant the text of the message being sent was unaffected when it was written out for delivery at the end of the line. But this extra character could be seen by another accomplice: a former telegraph operator who observed the telegraph tower outside Bordeaux with a telescope, and then passed on the news to the Blancs. The scam was only uncovered in 1836, when the crooked operator in Tours fell ill and revealed all to a friend, who he hoped would take his place. The Blanc brothers were put on trial, though they could not be convicted because there was no law against misuse of data networks. But the Blancs' pioneering misuse of the French network qualifies as the world's first cyber-attack.

Planet DebianShirish Agarwal: Authoritarianism and the slow death of Indian Railways

Definition of a non-answer – an answer which is not actually an answer; it does everything except answer the question that was actually asked. Understand this art and you understand how Indian politics and the Indian bureaucracy work.

I was reading an article about how nations, and most of all my own country, are sinking into a well of authoritarianism and a cycle of fear and non-answers generated by the present dispensation.

While I believe myself to be partly at fault for self-censoring, I will try to share some of the issues which have been lying dormant in me for quite some time.

To start with, there were a couple of questions asked by my economics professor when I was studying economics almost 20 years back.

The first question he asked was –

1. Why do people like the status quo so much?

Some of the answers the professor gave were –

a. People are happy with the way things are.

b. People do not know how a change will affect them. The fear of change is the fear of the unknown: as with magicians, only one part/feature is known and perceived while the other part stays hidden. It took me quite a few years of life and of reading newspapers before I understood what he meant by it.

c. Special interests who survive and thrive because of the way the status quo is, or, as I later understood it, ‘follow the money’.

I am going to use Indian Railways to explore the ‘follow the money’ model, as I have loved Indian Railways since my childhood and because it is pertinent to the majority of Indians, for whom it is the only means of cheap transport to get from A to B.

Indian Railways logo

A bit of history

Before we get to the present condition, a bit of historical reminiscing is important. The Indian Railways has been like the son that was never wanted, right from its birth. The British made a lot of investment when they were ruling, for their own benefit; most of what they built is still standing today, which says something about the kind of materials that were used. Independence and Partition were two gifts which the English gave us when they left, and which led to millions of souls being killed on either side. I won't go much into it, as Khushwant Singh's 'Train to Pakistan' covers it. It is probably one of the hardest books I have read, as there are just too many threads to follow and one is simply unable to grasp the horror that Partition wrought on the Indian subcontinent.

The reason I am writing about Partition is that trains were the only means for a lot of people to cover huge distances in those times. After Partition, when Pandit Nehruji became the P.M., he, along with many other leaders including Dr. Babasaheb Ambedkar (who is known as the architect of the Constitution of India), wanted a secular, socialist India that would be self-sufficient in nature. The experiment was also tried later by his daughter Mrs. Gandhi and later still by his grandson Mr. Rajiv Gandhi. All of the Prime Ministers invested a lot in whatever they thought was best for the country, except for Indian Railways. Especially from the 1980s onwards there was a dramatic shift (downwards) in the creation of public infrastructure, especially the Railways, even though the governments knew we would be a young country in the coming years.

The 90’s

Before India's Independence, India was a collection of several princely states covering today's India, Pakistan, parts of Burma and Nepal, so when the British came with the rails, it was an innovation. As the railways spread during that period, three different railway systems were spawned based on gauge width: the narrow gauge, the standard gauge and the (Indian) broad gauge. Wikipedia has a nice article about the different gauge networks, so I will leave the details to them.

In the 90's, apart from the dramatic policy shift from socialism to capitalism and the limited entry of foreign capital into specific sectors, one of the good intentions was Project Unigauge for Indian Railways, which was supposed to be finished by the end of the century but has still not been completed to date.

The other thing which was also supposed to happen was a push for the electrification of Indian Railways, which is still far from over. There is lobbying from the diesel lobby, at least in the locomotive space: as almost all the locomotive designs have been bought from various foreign vendors and then Indianized, those vendors do not want their interests to be diluted.

Present situation

The present situation is that Indian Railways is in dire straits, with an operating ratio of 94.9 percent.

See the image below of an average Indian household's spend on various services.

Average Indian household spend per month – Copyright: Times of India.

As can be seen, the biggest expenses are travel and eating out. In most developed economies, the share of travel expenses is not more than 4% of a typical household budget, but for Indians the proportion is much higher.

The Indian Railways has been the worst performer as far as on-time performance is concerned, at least since the present dispensation took over.

So who gains if trains run late? The private buses and air services. Private bus operators have been known to raise prices every year, especially around holidays and festivals. The same is the case with airfares; it is such a common occurrence that it doesn't register any shock anymore. There is a proposal to give fair compensation in cases of flight delay and cancellation, somewhat like what is available in the European sector, but most operators say it will inevitably lead to higher fares across the board.

The airport infrastructure is also under severe strain while clocking increasing growth, as people look to be at places at appointed times. The growth has been amazing, while on-time performance has been going in the opposite direction due to poor planning and mismanagement. We need more CISF personnel and much larger airports (both land-side and air-side) to accommodate the increasing number of people traveling.

Just yesterday came across an interesting article on civil aviation which brings out all what I wanted to share and more.

Indian Railways, meanwhile, seems to be running out of options: even though infrastructure is being added, it is just not happening fast enough, and there is not enough talent, which is going to cost us in both the short and the medium term 😦

I could share quite a lot of operational and policy issues, but that might be boring for people who are not rail-fanners. I'll just end with the simple observation that if you look at the work of Indian Railway Ministers over at least the last couple of decades, most of them presented budgets where the emphasis was more on announcing new railway services, with something like a 3-4% budget increase for the creation of infrastructure, which many a time lies unused without proper explanation or with only a non-answer.

Somewhat Good news

The only bright spot seems to be the Dedicated Freight Corridors, which hopefully should increase the Railways' freight earnings and give more room for maintenance to happen on Indian Railways. The freight share of Indian Railways, which used to be 90%, has now shrunk to less than 18% due to a number of reasons, among them opening up the freight sector from railways to roads, putting freight trains on passing loops and making freight a second-class citizen on the railways, although roads have their own issues. A Livemint report from a couple of years back also shed some light on the situation.

On the passenger front, only the metro railways offer some sort of good news, but that is not enough. I am just hanging on to hope, as until we add value and move up the value chain on exports, I don't really see India doing well.

There are a lot of challenges as well as opportunities for whichever government comes next. I have been thoroughly disappointed with the performance of the present government in everything, including international trade, which was supposed to be unlocked by the present PM.

Worse Than FailureImprov for Programmers: When Harddrives Attack

Put on some comfy pants, we're back again with a little something different, brought to you by Raygun. This week's installment starts with exploding hard drives, and only Steve Buscemi can save us. Today's episode contains small quantities of profanity.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Otter - Provision your servers automatically without ever needing to log-in to a command prompt. Get started today!

Planet DebianHideki Yamane: Enabled power-saving feature in Linux 4.17

I've committed a change to enable the power-saving feature on laptops in the Linux 4.17 Debian package, and it has entered the experimental repository. Please try it (and file a report if you run into any trouble with it).

Thanks to the Fedora people for noting this feature in their release notes :)

,

Planet Linux AustraliaDavid Rowe: FreeDV 700D Released

Here is a sample of Mark, VK5QI, sending a FreeDV 700D signal from Adelaide, South Australia, to a Kiwi SDR at the Bay of Islands, New Zealand. It was a rather poor channel with a path length of 3200km (2000 miles). First SSB, then FreeDV 700D, then SSB again:

Here is FreeDV 700D on the waterfall of Mark’s IC7610. That little narrow signal at 7.176 MHz is 700D, note the “overweight” SSB signals to the right! This is a very bandwidth efficient mode.

Last weekend FreeDV GUI 1.3 was released, which includes the new 700D mode. I’ve been working hard for the last few months to get 700D out of the lab and onto the air. Overall, I estimate about 1000 hours were required to develop FreeDV 700D over the last 12 months.

For the last few weeks teams of beta testers dotted around the world have been running FreeDV 1.3 in the wild. FreeDV 700D is fussy about lost samples, so I had to do some work with the care and feeding of the sound drivers, especially on the Windows build. Special thanks to Steve K5OKC, Richard KF5OIM, Mark VK5QI, Bill VK5DSP; Hans PA0HWB and the Dutch team; Eric GW8LJJ, Kieth GW8TRO and the UK team; Mel K0PFX, Walt K5WH and the US team, Peter VK5APR, Peter VK2TPM, Bruce K6BP, Gerhard OE3GBB, John VK5DM/VK3IC, Peter VK3RV and the Sunbury team, and my local AREG club. I apologise if I have missed anyone; all input is greatly appreciated.

Anyone who writes software should be sentenced to use it. So I've poked a few antennas up into the air and, conditions permitting, have made 700D contacts, getting annoyed with things that don't work, then tweaking and improving. Much to my surprise it really does handle some nasty fading, and it really does work better than SSB in many cases. Engineers aren't used to things working, so this is a bit of an adjustment for me personally.

Results

Here's a demo video of FreeDV 1.3 decoding a low SNR transatlantic contact between Gerhard OE3GBB and Walt, K5WH:

You can see the fast fading on the signal. The speech quality is not great, but you get used to it after a little while and it supports conversations just fine. Remember at this stage we are targeting low SNR communications, as that has been the major challenge to date.

Here’s a screen shot of the FreeDV QSO Finder (thanks John K7VE) chat log, when the team tried SSB shortly afterwards:

FreeDV 700D also has some robustness to urban HF Noise. I’m not sure why, this still needs to be explored. Here is the off-air signal I received from Peter, VK2TPM. It’s full of nasty buzzing switching power supply noises, and is way down in the noise, but I obtained an 80% decode:

It’s hard to hear the modem signal in there!

FreeDV 700D Tips

Lots of information of FreeDV, and the latest software, at freedv.org. Here are some tips on using 700D:

  1. The 700 bit/s codec is sensitive to your microphone and the FreeDV microphone equaliser settings (Tools-Filter). I suggest you set up a local loopback to hear your own voice and tune the quality using the Tools-Filter Mic equaliser. You can play pre-recorded wave files of your own voice using Tools-Play File to Mic in or with the “voice keyer” feature.
  2. The current 700D modem is sensitive to tuning; you need to be within +/- 20Hz for it to acquire. This is not a practical problem with modern radios that are accurate to +/- 1Hz. Once you have acquired sync it can track drift of 0.2Hz/s. I’ll get around to improving the sync range one day.
  3. Notes on the new features in FreeDV 1.3 User Guide.
  4. Look for people to talk to on the FreeDV QSO Finder (thanks John K7VE)
  5. Adjust the transmit drive to your radio so it’s just moving the ALC. Don’t hammer your PA! Less is more with DV. Aim for about 20W average power output on a 100W PEP radio.
  6. If you get stuck reach out for help on the Digital Voice mailing list (digitalvoice at googlegroups.com)

Significance

The last time a new HF voice mode was introduced was the 1950’s and it was called Single Side Band (SSB). It’s lasted so long because it works well.

So a new voice mode that competes with SSB is something rare and special. We don’t want the next HF Voice mode to be locked down by codec vendors. We want it to be open source.

I feel 700D is a turning point for FreeDV and open source digital voice. After 10 years of working on Codec 2 and FreeDV, we are now competitive with SSB on HF multipath channels at low SNRs. The 700 bit/s codec isn’t great. It’s fussy about microphones, EQ settings, and background noise. But it’s a start, and we can improve from here.

It takes some getting used to, but our growing experience has shown 700D is quite usable for conversations. Bear in mind SSB isn’t pretty at low SNRs either (see sample at the top), indeed untrained listeners struggle with SSB even at high SNRs.

Quite remarkably, the 700 bit/s codec outperforms locked down, proprietary, expensive, no you can’t look at my source or modify me, codecs like MELP and TWELP at around the same bit rate.

The FreeDV 700D waveform (the combined speech codec, FEC, modem, protocol) is competitive at low SNRs (-2dB AWGN, +2dB CCIR Poor channel), with several closed source commercial HF DV systems that we have explored.

FreeDV 700D requires about 1000 Hz of RF bandwidth, half of SSB.

Most importantly FreeDV and Codec 2 are open source. It’s freely available to not just Radio Amateurs, but emergency services, the military, humanitarian organisations, and commercial companies.

Now that we have some traction with low SNR HF fading channels, the next step is to improve the speech quality. We can further improve HF performance with experience, and I’d like to look at VHF/UHF again, and push down to 300 bit/s. The Lower SNR limit of Digital Voice is around -8dB SNR.

This is experimental radio. DV over HF is a very tough problem. Unlike almost all other voice services (mobile phones, VHF/UHF radio), HF is still dominated by analog SSB modulation. I’m doing much of the development by myself, so I’m taking one careful, 1000 man-hour, step at a time. Unlike other digital voice modes (I’m looking at you, DStar/C4FM/DMR/P25) – we get to set the standard (especially the codec), rather than following it and being told “this is how it is”.

Get Involved

My work excites a lot of people, and gets the brainstorms flowing. I get overwhelmed by people making well meaning suggestions about what I should do with my volunteer time, and underwhelmed by those who will step up and help me do it.

I actually know what to do, and the track record above demonstrates it. What I need is help to make it happen. I need people who can work with me on the items below:

  1. Support this work via Patreon or PayPal
  2. Refactor and maintain the FreeDV GUI source code. I should be working on DSP code where my skills are unique, not GUI programs and Windows sound problems. See bottom of FreeDV GUI README.
  3. Experienced or not, if you want to play DSP, I have some work for you too. You will learn a lot. Like Steve Did.
  4. Find corner cases where 700D breaks. Then help me fix it.
  5. Work with me to port 700D to the SM1000.
  6. Make freedv.org look great and maintain it.
  7. Help me use Deep Learning to make Codec 2 even better.
  8. Start a FreeDV Net.
  9. Set up a FreeDV beacon.
  10. Help me get some UHF/VHF FreeDV modes on the air. Some coding and messing with radios required.
  11. Help others get set up on FreeDV. 700D voice quality depends on the right microphone and equaliser settings, and noobs tend to overdrive their PA.
  12. Create and Post Demo/instructional Videos.

Like the good people above, you have the opportunity to participate in the evolution of HF radio. This has happened once in the last 60 years. Let’s get started.

If you are interested in development, please subscribe to the Codec 2 Mailing List.

Reading Further

Peter VK2TPM, blogs on 700D.
AREG Blog Post on FreeDV 700D
Steve Ports an OFDM modem from Octave to C. This is the sort of support I really need – thanks Steve for stepping up and helping!
Windows Installers for development versions of FreeDV.
Codec 2 700C
AMBE+2 and MELPe 600 Compared to Codec 2
Lower SNR limit of Digital Voice
700D OFDM modem README and specs
FreeDV User Guide, including new 700D features.
Bill, VK5DSP designed the LDPC code used in 700D and has helped with its care and feeding. He also encouraged me to carefully minimise the synchronisation (pilot symbol) overhead for the OFDM modem used in 700D.

TEDTED en Español: the first-ever Spanish-language TED speaker event

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

On April 26, the first TED en Español speaker event took place, hosted by TED at its New York office. The all-Spanish event featured eight speakers, a musical performance, five short films and 13 one-minute talks given by members of the audience.

The New York event is the latest addition to TED’s “TED en Español” initiative, designed to spread ideas in Spanish to the global Hispanic community. The event was hosted by Gerry Garbulsky, director of TED en Español (and also director of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina). TED en Español also includes its page on TED.com, a Facebook community, a Twitter feed, a weekly “Boletín” newsletter, a YouTube channel and, as of earlier this month, an original podcast created in partnership with Univision.

Should we automate democracy? “Is it just me, or are there other people who are a little disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor and researcher wants to make sure we have elected governments that truly represent our values and wishes. His solution: what if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter can teach their own AI how to think like them, using quizzes, reading lists and other kinds of data. Once you have trained your AI and validated a few of the decisions it makes for you, you can leave it on autopilot, voting and representing you… or you can choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a terrible user interface. If we could improve the interface, we could use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business had failed in Ancient Greece, you would have had to stand in the town square with a basket over your head. Fortunately, we’ve come a long way… or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to turn the guilt and shame of a venture gone wrong into an accelerator for growth. Thus were born “Fuckup Nights” (FUN), a series of events around the world for sharing stories of professional failure, and The Failure Institute, a research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical group LADAMA brought much more than music to the TED en Español stage. Venezuelan María Fernanda González, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms ranging from South American styles to Caribbean fusions, inviting the audience to dance with them. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a musical space worth spreading.

Gastón Acurio shares stories about the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power food has to change people’s lives. As ceviche appeared in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and take pride in its own culture. But food hasn’t always been used to bring good to the world. Because of the industrial revolution and the rise of consumerism, “more people are dying of obesity than of hunger,” he says, and many people’s lifestyles aren’t sustainable. By engaging with and caring about the food we eat, says Gastón, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have the answers for how to turn this into a systematic movement that politicians can get behind, but renowned cooks around the world are already taking these ideas into their kitchens. He tells stories of a restaurant in Peru that helps native people by sourcing ingredients from them, a famous chef in New York who fights against the use of monocultures, and an emblematic restaurant in France that has dropped meat from its menu. “Cooks around the world are convinced that we cannot wait for others to make the changes and that we must take action,” he says. But professional cooks can’t do it all. If we want to make deep change, Gastón urges, we need home cooking to be the key.

The interconnectedness of music and life. Chilean conductor Paolo Bortolameolli wraps his view of music around his memory of crying the very first time he heard classical music performed live. Sharing the emotions the music stirred in him, Bortolameolli presents music as a metaphor for life, full of the expected and the unexpected. He believes we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and at the same time he suggests that every time we listen to a piece, we bring the music to life, imbuing it with the potential not only to be recognized but also to be rediscovered.

We reap what we sow – let’s sow something different. Until the mid-1980s, incomes in the major Latin American countries were on par with Korea’s. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enríquez, lies in a national prioritization of brainpower and in identifying, educating and celebrating the best minds. What would happen if in Latin America we started selecting for academic excellence the way we select the national soccer team today? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds hunger for nourishment, competition and achievement and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not to alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent her life moving between various identities, Hwang says that having a varied background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people around the world. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their origins and to create a world where identities are used to bring people together, not to alienate them.

Marine ecologist Enric Sala wants to protect the ocean’s last wild places. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans. If you jumped into the ocean anywhere, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone: a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. Focusing on the high seas, he proposes a radical solution to help protect the oceans: fostering the creation of a reserve that would include two-thirds of the planet’s oceans. By safeguarding our high seas, Sala believes we will restore the ocean’s ecological, economic and social benefits and can make sure that when our grandchildren jump into any spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up… In an improvised rap performance with plenty of well-timed dance moves, psychologist, rapper and dancer César Silveyra closes the event. In a spectacular display of his skills, Silveyra ties together ideas from the event’s earlier speakers, including Enric Sala’s warnings about overfishing in the oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout to speaker Rebeca Hwang’s grandmother… all the while “feeling like Beyoncé.”

Cory DoctorowPodcast: Petard, Part 03


Here’s the third part of my reading (MP3) of Petard (part one, part two), a story from MIT Tech Review’s Twelve Tomorrows, edited by Bruce Sterling; a story inspired by, and dedicated to, Aaron Swartz — about elves, Net Neutrality, dorms and the collective action problem.

MP3

Planet DebianBits from Debian: Debian welcomes its GSoC 2018 and Outreachy interns

GSoC logo

Outreachy logo

We're excited to announce that Debian has selected twenty-six interns to work with us during the next months: one person for Outreachy, and twenty-five for the Google Summer of Code.

Here is the list of projects and the interns who will work on them:

A calendar database of social events and conferences

Android SDK Tools in Debian

Automatic builds with clang using OBS

Automatic Packages for Everything

Click To Dial Popup Window for the Linux Desktop

Design and implementation of a Debian SSO solution

EasyGnuPG Improvements

Extracting data from PDF invoices and bills for financial accounting

Firefox and Thunderbird plugin for free software habits

GUI app for EasyGnuPG

Improving Distro Tracker to better support Debian teams

Kanban Board for Debian Bug Tracker and CalDAV servers

OwnMailbox Improvements

P2P Network Boot with BitTorrent

PGP Clean Room Live CD

Port Kali Packages to Debian

Quality assurance for biological applications inside Debian

Reverse Engineering Radiator Bluetooth Thermovalves

Virtual LTSP Server

Wizard/GUI helping students/interns apply and get started

Congratulations and welcome to all of them!

The Google Summer of Code and Outreachy programs are possible in Debian thanks to the efforts of Debian developers and contributors who dedicate part of their free time to mentoring interns and to outreach tasks.

Join us and help extend Debian! You can follow the interns' weekly reports on the debian-outreach mailing list, chat with us on our IRC channel or on each project's team mailing lists.

TEDTED en Español: TED’s first-ever Spanish-language speaker event in NYC

Host Gerry Garbulsky opens the TED en Español event in the TEDNYC theater, New York, NY. (Photo: Dian Lofton / TED)

Thursday marked the first-ever TED en Español speaker event hosted by TED in its New York City office. The all-Spanish daytime event featured eight speakers, a musical performance, five short films and fifteen one-minute talks given by members of the audience.

The New York event is just the latest addition to TED’s sweeping new Spanish-language TED en Español initiative, designed to spread ideas to the global Hispanic community. Led by TED’s Gerry Garbulsky, also head of the world’s largest TEDx event, TEDxRiodelaPlata in Argentina, TED en Español includes a Facebook community, Twitter feed, weekly “Boletín” newsletter, YouTube channel and — as of earlier this month — an original podcast created in partnership with Univision Communications.

Should we automate democracy? “Is it just me, or are there other people here that are a little bit disappointed with democracy?” asks César A. Hidalgo. Like other concerned citizens, the MIT physics professor wants to make sure we have elected governments that truly represent our values and wishes. His solution: What if scientists could create an AI that votes for you? Hidalgo envisions a system in which each voter could teach her own AI how to think like her, using quizzes, reading lists and other types of data. So once you’ve trained your AI and validated a few of the decisions it makes for you, you could leave it on autopilot, voting and advocating for you … or you could choose to approve every decision it suggests. It’s easy to poke holes in his idea, but Hidalgo believes it’s worth trying out on a small scale. His bottom line: “Democracy has a very bad user interface. If you can improve the user interface, you might be able to use it more.”

When the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully,” says Leticia Gasca. (Photo: Jasmina Tomic / TED)

How to fail mindfully. If your business failed in Ancient Greece, you’d have to stand in the town square with a basket over your head. Thankfully, we’ve come a long way — or have we? Failed-business owner Leticia Gasca doesn’t think so. Motivated by her own painful experience, she set out to create a way for others like her to convert the guilt and shame of a business venture gone bad into a catalyst for growth. Thus was born “Fuckup Nights” (FUN), a global movement and event series for sharing stories of professional failure, and The Failure Institute, a global research group that studies failure and its impact on people, businesses and communities. For Gasca, when the focus of failure shifts from what is lost to what is gained, we can all learn to “fail mindfully” and see endings as doorways to empathy, resilience and renewal.

From four countries to one stage. The pan-Latin-American musical ensemble LADAMA brought much more than just music to the TED en Español stage. Inviting the audience to dance with them, Venezuelan Maria Fernanda Gonzalez, Brazilian Lara Klaus, Colombian Daniela Serna and American Sara Lucas sing and dance to a medley of rhythms that range from South American to Caribbean-infused styles. Playing “Night Traveler” and “Porro Maracatu,” LADAMA transformed the stage into a place of music worth spreading.

Gastón Acurio shares stories of the power of food to change lives. (Photo: Jasmina Tomic / TED)

World change starts in your kitchen. In his pioneering work to bring Peruvian cuisine to the world, Gastón Acurio discovered the power that food has to change people’s lives. As ceviche started appearing in renowned restaurants worldwide, Gastón saw his home country of Peru begin to appreciate the diversity of its gastronomy and become proud of its own culture. But food hasn’t always been used to bring good to the world. With the industrial revolution and the rise of consumerism, “more people in the world are dying from obesity than hunger,” he notes, and many people’s lifestyles aren’t sustainable. 
By interacting with and caring about the food we eat, Gastón says, we can change our priorities as individuals and change the industries that serve us. He doesn’t yet have all the answers on how to make this a systematic movement that politicians can get behind, but world-renowned cooks are already taking these ideas into their kitchens. He tells the stories of a restaurant in Peru that supports native people by sourcing ingredients from them, a famous chef in NYC who’s fighting against the use of monocultures and an emblematic restaurant in France that has barred meat from the menu. “Cooks worldwide are convinced that we cannot wait for others to make changes and that we must jump into action,” he says. But professional cooks can’t do it all. If we want real change to happen, Gastón urges, we need home cooking to be at the center of everything.

The interconnectedness of music and life. Chilean musical director Paolo Bortolameolli wraps his views on music within his memory of crying the very first time he listened to live classical music. Sharing the emotions music evoked in him, Bortolameolli presents music as a metaphor for life — full of the expected and the unexpected. He thinks that we listen to the same songs again and again because, as humans, we like to experience life from a standpoint of expectation and stability, and he simultaneously suggests that every time we listen to a musical piece, we enliven the music, imbuing it with the potential to be not just recognized but rediscovered.

We reap what we sow — let’s sow something different. Up until the mid-’80s, the average incomes in major Latin American countries were on par with those in Korea. But now, less than a generation later, Koreans earn two to three times more than their Latin American counterparts. How can that be? The difference, says futurist Juan Enriquez, lies in a national prioritization of brainpower — and in identifying, educating and celebrating the best minds. What if in Latin America we started selecting for academic excellence the way we would for an Olympic soccer team? If Latin American countries are to thrive in the era of technology and beyond, they should look to establish their own top universities rather than letting their brightest minds thirst for nourishment, competition and achievement — and find it elsewhere, in foreign lands.

Rebeca Hwang shares her dream of a world where identities are used to bring people together, not alienate them. (Photo: Jasmina Tomic / TED)

Diversity is a superpower. Rebeca Hwang was born in Korea, raised in Argentina and educated in the United States. As someone who has spent a lifetime juggling various identities, Hwang can attest that having a blended background, while sometimes challenging, is actually a superpower. The venture capitalist shared how her fluency in many languages and cultures allows her to make connections with all kinds of people from around the globe. As the mother of two young children, Hwang hopes to pass this perspective on to her kids. She wants to teach them to embrace their unique backgrounds and to create a world where identities are used to bring people together, not alienate them.

Marine ecologist Enric Sala wants to protect the last wild places in the ocean. (Photo: Jasmina Tomic / TED)

How we’ll save our oceans. If you jumped in the ocean at any random spot, says Enric Sala, you’d have a 98 percent chance of diving into a dead zone — a barren landscape empty of large fish and other forms of marine life. As a marine ecologist and National Geographic Explorer-in-Residence, Sala has dedicated his life to surveying the world’s oceans. He proposes a radical solution to help protect the oceans by focusing on our high seas, advocating for the creation of a reserve that would include two-thirds of the world’s ocean. By safeguarding our high seas, Sala believes we will restore the ecological, economic and social benefits of the ocean — and ensure that when our grandchildren jump into any random spot in the sea, they’ll encounter an abundance of glorious marine life instead of empty space.

And to wrap it up … In an improvised rap performance with plenty of well-timed dance moves, psychologist and dance therapist César Silveyra closes the session with 15 of what he calls “nano-talks.” In a spectacular showdown of his skills, Silveyra ties together ideas from previous speakers at the event, including Enric Sala’s warnings about overfished oceans, Gastón Acurio’s Peruvian cooking revolution and even a shoutout for speaker Rebeca Hwang’s grandmother … all the while “feeling like Beyoncé.”

Worse Than FailurePassing Messages

About 15 years ago, I had this job where I was asked to set up and administer an MQ connection from our company to the Depository Trust & Clearing Corporation (DTCC). Since I had no prior experience with MQ, I picked up the manual, learned a few commands, and in a day or so, had a script to create queue managers, queues, disk backing stores, etc. I got the system analysts (SA's) at both ends on the phone and in ten minutes had connectivity to their test and production environments. Access was applied for and granted to relevant individuals and applications, and application coding could begin.

Pyramid of Caius Cestius exterior, showing the giant wall which blocks everything
By Torquatus - Own work

I didn't know the full and complete way to manage most of the features of MQ, but I had figured out enough to properly support what we needed. Total time was 2.5 man-days of effort.
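
For a sense of scale, the whole thing boils down to something like the following sketch, using IBM MQ's standard crtmqm/strmqm/runmqsc tooling (every queue manager, queue, channel and host name below is illustrative, not what was actually used):

# create and start a queue manager for the link
crtmqm DTCC.TEST.QM
strmqm DTCC.TEST.QM
# define the queues and the sender channel via MQSC
runmqsc DTCC.TEST.QM <<'EOF'
* local queue our applications read replies from
DEFINE QLOCAL(DTCC.REPLY.QUEUE) DEFPSIST(YES) REPLACE
* transmission queue and sender channel pointing at the remote queue manager
DEFINE QLOCAL(DTCC.XMITQ) USAGE(XMITQ) REPLACE
DEFINE CHANNEL(TO.DTCC) CHLTYPE(SDR) TRPTYPE(TCP) CONNAME('dtcc-gateway(1414)') XMITQ(DTCC.XMITQ) REPLACE
* remote queue definition our applications write to
DEFINE QREMOTE(DTCC.OUT) RNAME(DTCC.INBOUND) RQMNAME(DTCC.QM) XMITQ(DTCC.XMITQ) REPLACE
EOF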

Fast forward to the next job, where file-drops and cron job checks for file-drops were the way interprocess communication was implemented. At some point, they decided to use a third party vendor to communicate with DTCC via MQ. Since I had done it before, they asked me to set it up. OK, easy enough. All I had to do was install it, set up a queue manager and a couple of queues to test things. Easy peasy. I put MQ on my laptop and created the queue managers and queues. Then I introduced myself to the SA at the vendor (who was in the same position I had been in at my previous job) and explained to him what I had done in the past and that I needed it set up at the current job. He agreed it was a good way to go. I then got our SAs and my counterpart at the vendor on the phone and asked them to hash out the low level connectivity. That's when I hit the wall of bureaucracy-run-amok™.

It turns out that our SA's wouldn't talk to SA's outside the firm. That's what the networking team was for. OK, get them in the loop. We won't set up connectivity with outside entities without security approval. The networking team also informed me that they wouldn't support a router with connections outside the firewall, but that they would allow a router physically outside the firewall IF the vendor would support it (that's like saying I want to connect to Google, so I'll pay Google to support a router outside my firewall to connect to them).

The security people wanted to know whether hardware had been purchased yet (is the hardware "appropriate" for connecting to outside entities). The fact that it was just for a test queue to send one message fell on deaf ears; proper hardware must be approved, funded and in-place first!

The hardware people wanted to know if the hardware had been reviewed by the capacity planning team to be sure that it supported future growth (our plans were to replace a task that moved 4 (f-o-u-r) 2MB messages per day, and if successful, add 5-6 subsequent tasks comprising 10-20 similar messages each per day; a ten year old laptop would have been overpowered for this task).

This lunacy continued until we had 33 teams involved in a 342 line project plan comprising multiple man-years of effort - to set up a queue manager and 2 queues from a laptop to a vendor, to send a single test message.

At this point, everybody at the vendor was enraged at the stupidity to which our various departments were subjecting them (e.g.: you must program your firewall rules like xxx, you must provide and support a router outside our firewall, will a message sent from your hardware be able to be received on our hardware (seriously, MQ supported both platforms!), etc.), and ALL of them got on the phone to try and force me to change my mind (it wasn't coming from me: it was the other departments).

I was finally forced to say the stupidest thing I've ever had to say: Yes, I agree that the way you are proposing that we set things up is well understood, redundant, reliable, easy to set up and support, cost effective, efficient, secure, reasonably simple and generally accepted as the right way to do this, and our company will have none of that!

I then had to tell them yet again that it wasn't coming from me, and to beg them to just do it the way the bureaucracy wanted it done, or said bureaucracy would never let it happen.

At that point, I convinced my boss of the stupidity that was being inflicted on this vendor, and so he agreed to sign a five year contract, at premium rates, to get them to do it the way that our company wanted, even though we knew it was idiotic, wasteful and just plain wrong.

This went back and forth for a year. Long story short: we paid the vendor a crapton of money to supply, configure and remotely support a router at our location outside the firewall, we paid them a fortune for five years of capacity to push 10 million messages per day, and we spent more than $750,000 on super high powered redundant hot/standby hardware across dev, test, qa, pre-production, production and DR environments, all before we were allowed to send one test message across a test queue.

Our company then decided not to move to the more modern messaging technology because it was too difficult to set up and that they would continue to use cron job checks for file-drops as message-ready indicators. I pointed out that the difficulty was from the internal bureaucracy and not the vendor or the technology... <crickets/>. They never sent another message down that queue, and left all that iron - dedicated to the cancelled project - unused, fully supported and running in all the environments for five years, after which the process of decommissioning hardware was triggered (I'll leave this for your nightmares to imagine).

I later found out that I was the 5th person (out of 8 over 10 years) hired to perform this same migration. Apparently each of us ran into the same impenetrable wall-o-bureaucracy.

To this day, they are still doing interprocess communication via file-drops and cron jobs to check for the dropped files.


Planet DebianRuss Allbery: Review: Bull by the Horns

Review: Bull by the Horns, by Sheila Bair

Publisher: Simon & Schuster
Copyright: 2012
Printing: September 2013
ISBN: 1-4516-7249-7
Format: Trade paperback
Pages: 365

Sheila Bair was the Chair of the Federal Deposit Insurance Corporation from 2006 to 2011, a period that spans the heart of the US housing crisis and the start of the Great Recession. This is her account, based on personal notes, of her experience heading the FDIC, particularly focused on the financial crisis and its immediate aftermath.

Something I would like to do in theory but rarely manage to do in practice is to read more thoughtful political writing from people who disagree with me. Partly that's to broaden my intellectual horizons; partly it's a useful reminder that the current polarized political climate in the United States does not imply that the intellectual tradition of conservatism is devoid of merit. While it's not a complete solution, one way to edge up on such reading is to read books by conservatives that are focused on topics where they and I largely agree.

In this case, that topic is the appalling spectacle of consequence-free government bailouts of incompetently-run financial institutions, coordinated by their co-conspirators inside the federal government and designed to ensure that obscenely large salaries and bonuses continued to flow to exactly the people most responsible for the financial crisis. If I sound a little heated on this topic, well, consider it advance warning for the rest of the review. Suffice it to say that I consider Timothy Geithner to be one of the worst Secretaries of the Treasury in the history of the United States, a position for which the competition is fierce.

Some background on the US financial regulatory system might be helpful here. I'm reasonably well-read on this topic and still learned more about some of the subtleties.

The FDIC, which Bair headed, provides deposit insurance to all of the banks. This ensures that whatever happens to the bank, depositors are guaranteed to get every cent of their money back, up to $100,000 (now $250,000 due to a law that was passed as part of the events of this book). This deposit insurance is funded by fees charged to every bank, not by general taxes, although the FDIC has an emergency line of credit with the Treasury it can call on (and had to during the savings and loan crisis in the early 1990s).

The FDIC is also the primary federal regulator for state banks. It is not the regulator for federal banks; those are regulated by the Office of the Comptroller of the Currency (OCC) and, at the time of events in this book, the Office of Thrift Supervision (OTS), which regulated Savings and Loans. Some additional regulation of federal banks is done by the Federal Reserve. The FDIC is a "backup" regulator to those other institutions and has some special powers related to its function of providing deposit insurance, but it doesn't in general have the power to demand changes of federal banks, only the smaller state banks.

This turns out to be rather important in the financial crisis: bad state banks regulated by the FDIC were sold off or closed, but the huge federal banks regulated by the OCC and OTS were bailed out via various arranged mergers, loan guarantees, or direct infusions of taxpayer money. Bair's argument is that this difference is partly due to the ethos of the FDIC and its well-developed process for closing troubled banks. The standard counter-argument is that the large national banks were far too large to put through that or some similar process without massive damage to the economy. (Bair strenuously disagrees.)

Bair's account starts in 2006, by which point the crisis was already probably inevitable, and contains a wealth of information about the banking side of the crisis itself and its immediate aftermath. Her story is one of consistent pressure by the FDIC to increase bank capital requirements and downgrade risk ratings of institutions, and consistent pressure by the OCC, OTS, and Geithner (first as the head of the New York branch of the Federal Reserve and then as Treasury Secretary) to decrease capital requirements even in the height of the crisis and allow banks to use ever-more-creative funding models backed by government guarantees. Bair fleshes this out with considerable detail about how capital requirements are measured, how the loan guarantees were structured, the internal arguments over how to get control of the crisis, and the subsequent fights in Congress over Dodd-Frank and how TARP money was spent.

(TARP, the Troubled Asset Relief Program, was the Congressional emergency measure passed during the height of the crisis to fund government purchases and restructuring of troubled mortgage debt. As Bair describes, and has been exhaustively detailed elsewhere, it was never really used for that. The government almost immediately repurposed it for direct bailouts of financial institutions and provided almost no meaningful mortgage restructuring.)

This account also passes my primary sniff test for books about this crisis. Fannie and Freddie (two oddly-named US government institutions with a mandate to support mortgage lending and home ownership) are treated as bad actors and horribly mismanaged entities that made the same irresponsible investments as the private banking industry, but they aren't put at the center of the crisis and aren't blamed for the entire mortgage mess. This disagrees with some corners of Republican politics, but agrees with all other high-quality reporting about the crisis.

Besides fascinating details about banking regulation in a crisis, the primary conclusion I drew from this book is the power of institutions, systems, and rules. One becomes good at things one does regularly. The FDIC closes failing banks without losing insured depositor money, and has been doing that since 1933, often multiple times a year. They therefore have a tested system for doing this, which they practice implementing reliably, efficiently, and quickly. Bair states as a point of deep institutional pride that no insured depositor had to wait more than one business day for access to their funds during the financial crisis. Banks are closed after business hours and, whenever possible, the branches are open for business under new supervision the next morning. This is as important as the insurance itself in preventing runs on the bank, which would make the closing even more costly.

Part of that system, built into the FDIC principles and ethos, was a ranking of priorities and a deep sense of the importance of consequences. Insured depositors are sacrosanct. Uninsured depositors are not, but often they can be protected by selling the bank assets to another, healthier bank, since the uninsured depositors are often the bank's best customers. Investors in the bank, in contrast, are wiped out. And other creditors may also be wiped out, or at least have to take a significant haircut on their investment. That is the price of investing in a failed institution; next time, pay more attention to the health of the business you're investing in. The FDIC is legally required to choose the resolution approach that is the least costly to the deposit insurance fund, without regard to the impact on the bank's other creditors.

And, finally, when the FDIC takes over a failing bank, one of the first things they do is fire all of the bank management. Bair presents this as obvious and straight-forward common sense, as it should be. These were the people who created the problem. Why would you want to let them continue to mismanage the bank? The FDIC may retain essential personnel needed to continue bank operations, but otherwise gets rid of the people who should bear direct responsibility for the bank's failure.

The contrast with the government's approach with AIG, Citigroup, and other failed financial institutions, as spearheaded by Timothy Geithner, could not be more stark. I remember following the news at the time and seeing straight-faced and serious statements that it was important to preserve the compensation and bonuses of the CEOs of failed institutions so that they would continue to work for the institution to unwind all of its bad trades and troubled assets. Bair describes herself as furious over that decision.

The difficulty in critiques of the government's approach to the financial crisis has always been that it was a crisis, with unknown possible consequences, and the size of the shadow banking sector and the level of entangled risk was so large that any systematic bankruptcy process would have been too risky. I'm with Bair in finding this argument dubious but not clearly incorrect. The Lehman Brothers bankruptcy was rocky, but it's not clear to me that a similar process couldn't have worked for other firms. But that aside, retaining the corporate management (and their salaries and bonuses!) seems a clear indication to me of the corruption of the system. (Bair, possibly more to her credit than mine, carefully avoids using that term.)

Bair highlights this as one of the critical reasons why the FDIC process is legally akin to bankruptcy: these sorts of executives write themselves sweetheart employment contracts that guarantee huge payouts even if their company fails. In the FDIC resolution process, those contracts can be broken. If, as Geithner did, you take heroic measures to avoid going anywhere near bankruptcy law, breaking those contracts becomes more legally murky. (Dodd-Frank has a provision, strongly supported by Bair, to create a legal framework for clawing back compensation to executives after certain types of financial misreporting, although it's still far more limited than the FDIC resolution process.)

A note of caution here: this book is obviously Bair's personal account, and she's not an unbiased party. She took specific public positions during the crisis and defends them here, including against analysis in other books about the crisis. She also describes lots of private positions, some of which are disputed. (Andrew Ross Sorkin's book is the subject of some particularly pointed disagreement.) I have read enough other books about the crisis to believe that Bair's account is probably essentially correct, particularly given the nature of the contemporaneous criticism against her. But, that said, the public position against bailouts had become quite clear by the time she was writing this book, and there was doubtless some temptation to remember her previous positions as more in line with later public opinion than they were. This sort of insider account is always worth a note of caution and some effort to balance it with other accounts, particularly given Bair's love of the spotlight (which shines through in a few places in this book).

Bair is a life-long Republican and a Bush appointee. I suspect she and I would disagree on most political positions. But her position as head of the FDIC was that bank failure should come with consequences for those running the bank, that the priority of the government should be protection of insured bank depositors first and the deposit insurance fund second, and that other creditors should bear the brunt of their bad investment decisions, all of which I agree with wholeheartedly. This account is an argument for the importance of moral hazard, and an indictment and diagnosis of regulatory capture from someone who (refreshingly) is not just using that as a stalking horse to argue for eliminating regulation. Bair also directly tackles the question of whether the same moral hazard argument applies to the individual loan holders and concludes no, but this part of the argument was a bit light on detail and probably won't convince someone with the opposite opinion.

It's quite frustrating, reading this in 2018, how many of the reforms Bair argues for in this book never happened. (A ban on naked credit default swaps, for example, which Bair argues increase systemic risk by increasing the consequences of institutional bankruptcy, thus creating new "too big to fail" analyses like that applied to AIG. Timothy Geithner was central to defeating an effort to outlaw them.) It's also a tragic reminder of how blindly partisan our national debates over economic policies are. You can watch, in Bair's account, the way that Democrats who were sharply critical of the Bush administration's handling of the financial crisis, including his appointed regulators, swung behind the exact same regulators and essentially the same policies when Obama appointed Geithner to head Treasury. Democrats are traditionally the party favoring stronger regulation, but that's less important than tribal affiliation. The change is sharp enough that at a few points I was caught by surprise at the political affiliation of a member of Congress who was supporting or opposing one of Bair's positions.

As infuriating as this book is in places, it is a strong reminder that there are conservatives with whom I can find common cause despite being on the hard left of US economic politics. Those tend to be the people who believe in the power of institutions, consistent principles, and repeated and efficient execution of processes developed through hard-fought political compromise. I think Bair and I would agree that it's very dangerous to start making up policies on the spot to deal with the crisis du jour. Corruption can more easily enter the system, and very bad decisions are made. This is a failure on both the left and the right. I suspect Bair would turn to a principle of smaller government far more than I would, but we both believe in better government and clear, principled regulation, and on that point we could easily find workable compromises.

You should not read this as your first in-depth look at the US financial crisis. For that, I still recommend McLean & Nocera's All the Devils are Here. But this is a good third or fourth book on the topic, and a deep look at the internal politics around TARP. If that interests you, recommended.

Rating: 8 out of 10

,

Planet DebianJonathan McDowell: Actually switching something with the SonOff

Getting a working MQTT temperature monitoring setup is neat, but not really what we think of when someone talks about home automation. For that we need some element of control. There are various intelligent light bulb systems out there that are obvious candidates, but I decided I wanted the more simple approach of switching on and off an existing lamp. I ended up buying a pair of Sonoff Basic devices; I’d rather not get into trying to safely switch mains voltages myself. As well as being cheap the Sonoff is based upon an ESP8266, which I already had some experience in hacking around with (I have a long running project to build a clock I’ll eventually finish and post about). Even better, the Sonoff-Tasmota project exists, providing an alternative firmware that has some support for MQTT/TLS. Perfect for my needs!

There’s an experimental OTA upgrade approach to getting a new firmware on the Sonoff, but I went the traditional route of soldering a serial header onto the board and flashing using esptool. Additionally, none of the precompiled images have MQTT/TLS enabled, so I needed to build the image myself. Both of these turned out to be the right move, because using the latest release (v5.13.1 at the time) I hit problems with the device rebooting as soon as it got connected to the MQTT broker. The serial console allowed me to see the reboot messages, and as I’d built the image myself it was easy to tweak things in the hope of improving matters. It seems the problem is related to the memory consumption that enabling TLS requires. I went back a few releases until I hit on one that works, with everything else disabled. I also had to nail the Espressif Arduino library version to an earlier one to get a reliable wifi connection - using the latest worked fine when the device was powered via USB from my laptop, but not once I hooked it up to the mains.
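
The flashing step itself is just a couple of esptool invocations; roughly something like the following, assuming esptool.py and the serial adapter appearing as /dev/ttyUSB0 (both assumptions about the local setup):

# back up the stock firmware first (the Sonoff Basic has 1MB of flash)
esptool.py --port /dev/ttyUSB0 read_flash 0x0 0x100000 sonoff-stock.bin
# wipe the flash and write the freshly built image
esptool.py --port /dev/ttyUSB0 erase_flash
esptool.py --port /dev/ttyUSB0 write_flash 0x0 sonoff-image.bin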

Once the image is installed on the device (just the normal ESP8266 esptool write_flash 0 sonoff-image.bin approach), start mosquitto_sub up somewhere. Plug the Sonoff in (you CANNOT have the Sonoff plugged into the mains while connected to the serial console, because it’s not fully isolated), and you should see something like the following:

$ mosquitto_sub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -v -t '#' -u user1 -P foo
tele/sonoff/LWT Online
cmnd/sonoff/POWER (null)
tele/sonoff/INFO1 {"Module":"Sonoff Basic","Version":"5.10.0","FallbackTopic":"DVES_123456","GroupTopic":"sonoffs"}
tele/sonoff/INFO3 {"RestartReason":"Power on"}
stat/sonoff/RESULT {"POWER":"OFF"}
stat/sonoff/POWER OFF
tele/sonoff/STATE {"Time":"2018-05-25T10:09:06","Uptime":0,"Vcc":3.176,"POWER":"OFF","Wifi":{"AP":1,"SSId":"My SSID Is Here","RSSI":100,"APMac":"AA:BB:CC:12:34:56"}}

Each of the Sonoff devices will want a different topic rather than the generic ‘sonoff’, and this can be set via MQTT:

$ mosquitto_pub -h mqtt.o362.us -p 8883 --capath /etc/ssl/certs/ -t 'cmnd/sonoff/topic' -m 'sonoff-snug' -u user1 -P foo

The device will provide details of the switchover via MQTT:

cmnd/sonoff/topic sonoff-snug
tele/sonoff/LWT (null)
stat/sonoff-snug/RESULT {"Topic":"sonoff-snug"}
tele/sonoff-snug/LWT Online
cmnd/sonoff-snug/POWER (null)
tele/sonoff-snug/INFO1 {"Module":"Sonoff Basic","Version":"5.10.0","FallbackTopic":"DVES_123456","GroupTopic":"sonoffs"}
tele/sonoff-snug/INFO3 {"RestartReason":"Software/System restart"}
stat/sonoff-snug/RESULT {"POWER":"OFF"}
stat/sonoff-snug/POWER OFF
tele/sonoff-snug/STATE {"Time":"2018-05-25T10:16:29","Uptime":0,"Vcc":3.103,"POWER":"OFF","Wifi":{"AP":1,"SSId":"My SSID Is Here","RSSI":76,"APMac":"AA:BB:CC:12:34:56"}}

Controlling the device is a matter of sending commands to the cmnd/sonoff-snug/POWER topic - 0 for off, 1 for on. All of the available commands are listed on the Sonoff-Tasmota wiki.
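
For example, switching the lamp on and then off again looks something like this (reusing the placeholder host and credentials from the mosquitto_sub example above); the device acknowledges each command on stat/sonoff-snug/POWER:

$ mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'cmnd/sonoff-snug/POWER' -m '1' -u user1 -P foo
$ mosquitto_pub -h mqtt-host -p 8883 --capath /etc/ssl/certs/ -t 'cmnd/sonoff-snug/POWER' -m '0' -u user1 -P foo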

At this point I have a wifi connected mains switch, controllable over MQTT via my internal MQTT broker.

(If you want to build your own Sonoff Tasmota image it’s actually not too bad; the build system is Arduino style on top of PlatformIO. That means downloading a bunch of bits before you can actually build, but the core is Python based so it can be done as a normal user within a virtualenv. Here’s what I did:

# Make a directory to work in and change to it
mkdir sonoff-ws
cd sonoff-ws
# Build a virtual Python environment and activate it
virtualenv platformio
source platformio/bin/activate
# Install PlatformIO core
pip install -U platformio
# Clone Sonoff Tasmota tree
git clone https://github.com/arendst/Sonoff-Tasmota.git
cd Sonoff-Tasmota
# Checkout known to work release
git checkout v5.10.0
# Only build the sonoff firmware, not all the language variants
sed -i 's/;env_default = sonoff$/env_default = sonoff/' platformio.ini
# Force older version of espressif to get more reliable wifi
sed -i 's/platform = espressif8266$/&@1.5.0/' platformio.ini
# Edit the configuration to taste; essentially comment out all the USE_*
# defines and enable USE_MQTT_TLS
vim sonoff/user_config.h
# Actually build. Downloads a bunch of deps the first time.
platformio run

I’ve put my Sonoff-Tasmota user_config.h up in case it’s of help when trying to get up and going. At some point I need to try the latest version and see if I can disable enough to make it happy with MQTT/TLS, but for now I have an image that does what I need.)

LongNowOverview: Earth and Civilization in Macroscope

“Once a photograph of the Earth, taken from outside, is available…a new idea as powerful as any in history will be let loose.“ — Astronomer Fred Hoyle, 01948

I. “Why Do You Look In A Mirror?”

In February 01966, Stewart Brand, a month removed from launching a multimedia psychedelic festival that inaugurated the hippie counterculture, sat on the roof of his apartment in San Francisco’s North Beach, doing what he usually did when he was bored and uncertain. He took some LSD and got to scheming.

Stewart Brand and Ken Kesey, 01966. California Historical Society.

“There I sat,” Brand later recalled, “wrapped in a blanket in the chill afternoon sun, trembling with cold and inchoate emotion, gazing at the San Francisco skyline, waiting for my vision. The buildings were not parallel — because the Earth curved under them, and me, and all of us; it closed on itself. I remembered that Buckminster Fuller had been harping on this at a recent lecture — that people perceived the Earth as flat and infinite, and that that was the root of all their misbehavior. Now from my altitude of three stories and one hundred mikes, I could see that it was curved, think it, and finally feel it. But how to broadcast it?”

Scribbled in his journal entry for that day was the answer, in the form of a question: “Why haven’t we seen a photograph of the whole earth yet?”

Stewart Brand’s journal entry when he conceived of his “Why Haven’t We Seen A Photograph of the Whole Earth?” campaign. Stanford University Special Collections.

In the aftermath of World War II, the United States and the Soviet Union competed for nuclear dominion on Earth. With the 01957 launch of Sputnik, the contest expanded to space. But in the race to the moon, neither side had given much thought to the value of training their satellites’ apertures on the world left behind. Brand glimpsed the power such an image could hold.

Brand in the midst of his Whole Earth campaign, 01966.

“A photograph would do it — a color photograph from space of the earth,” Brand said. “There it would be for all to see, the earth complete, tiny, adrift, and no one would ever perceive things the same way.”

Brand mounted a spirited campaign selling buttons that posed the question “Why Haven’t We Seen A Photograph of the Whole Earth?” on college campuses across the country. He often showed up in costume, and he often was chucked out by security. He sent buttons to Marshall McLuhan, Buckminster Fuller, NASA officials, and members of Congress.

According to a 01966 Village Voice article, a student at Columbia asked Brand: “What would happen if we did have a picture? Would it eliminate slums, or meanness, or anything?”

“Maybe not,” said Brand, “but it might tell us something about ourselves.”

“What?” asked the girl.

“It might tell us where we’re at,” said Brand.

“What for?” asked the girl.

“Why do you look in the mirror?” asked Brand.

“Oh,” said the girl, and bought a button.

The first color photograph of the whole earth, from ATS-3 (01967).

Brand would soon get his photo. On November 10, 01967, the NASA geostationary weather and communications satellite ATS-3 captured the first color photograph of the whole earth. Brand used a reproduction of the photo for the cover of the first Whole Earth Catalog, a countercultural bible and forerunner to the World Wide Web that Steve Jobs once called “Google in paperback form.”

But the image didn’t enter the mainstream, as the first copies of The Whole Earth Catalog seldom strayed far from the communes. (That would change by 01972, when The Last Whole Earth Catalog won a National Book Award).

The moment of revelation for a global audience came in 01968, at the conclusion of a year of violence and unrest that saw the assassinations of Martin Luther King and Robert Kennedy, the escalation of war in Vietnam, and the brutal suppression of student protests across the globe.

During the Apollo 8 lunar mission on Christmas Eve, 01968, Astronauts Frank Borman, James Lovell, and Bill Anders left Earth’s orbit for the moon, traveling further than any humans before. And then they looked back.

The first time humans saw the whole earth (01968).

Anders later said the view of a fragile earth hanging suspended in the void “caught us hardened test pilots.”

“Here we came all this way to the Moon, and yet the most significant thing we’re seeing is our own home planet, the Earth. “— Astronaut Bill Anders

The descriptions of awe, connection, and transcendence Lovell, Borman and Anders said they felt that day when they looked back at Earth would be echoed by future astronauts.

“You develop an instant global consciousness, a people orientation, an intense dissatisfaction with the state of the world, and a compulsion to do something about it. From out there on the moon, international politics look so petty. You want to grab a politician by the scruff of the neck and drag him a quarter of a million miles out and say, ‘Look at that, you son of a bitch.’” — Astronaut Edgar Mitchell

Psychologists call this cognitive shift of awareness during spaceflight the “overview effect.”

The view of Earth for TV audiences during the Apollo 8 Christmas Eve broadcast.

The Apollo 8 astronauts reached for their cameras and started snapping photos. Later that day, in what was, at that time, the most watched television broadcast in history, the astronauts read from the Book of Genesis as the cameras showed a grainy, black and white image of the Earth.

When the astronauts returned to Earth three days later, they brought with them the boon of their new whole earth perspective in the form of a photograph. Earthrise captured what the grainy television cameras could not.

“Earthrise, Seen For The First Time By Human Eyes” (01968). NASA.

“Up there, it’s a black-and-white world,” James Lovell later recalled. “There’s no color. In the whole universe, wherever we looked, the only bit of color was back on Earth…It was the most beautiful thing there was to see in all the heavens.”

Earthrise and its companion Blue Marble (01972) are among the most widely disseminated images in human history. By approximating the overview effect for the earthbound, the photos helped launch the modern environmental movement and reframed how we think about our relationship to the planet.

Blue Marble (01972).

“The sight of the whole Earth, small, alive, and alone, caused scientific and philosophical thought to shift away from the assumption that the Earth was a fixed environment, unalterably given to humankind, and towards a model of the Earth as an evolving environment, conditioned by life and alterable by human activity,” writes historian Robert Poole.² “It was the defining moment of the twentieth century.”

Be that as it may, historian Benjamin Lazier argues that by the twenty-first century, Earthrise and Blue Marble became victims of their own success.

“Views of Earth are now so ubiquitous as to go unremarked,” he writes. “These two images and their progeny now grace T-shirts and tote bags, cartoons and coffee cups, stamps commemorating Earth Day and posters feting the exploits of suicide bombers.” The whole earth’s very omnipresence means that “we ceased, in a fashion, to see it.”

Perhaps. Benjamin Grant, founder of the Daily Overview, believes we just need to look closer.


The Mount Whaleback Iron Ore Mine in the Pilbara region of Western Australia. 98% of world’s mined iron ore is used to make steel and is thus a major component in the construction of buildings, automobiles, and appliances such as refrigerators. Daily Overview.

II. An Amazing Mistake

In 02013, Benjamin Grant, then a brand strategist at a buttoned-up consulting firm in New York City, found himself thinking less about marketing and more about outer space. Earlier that year, a meteor whose light shone brighter than the sun exploded into fragments across Russian skies. In September, NASA confirmed that the Voyager space probe entered interstellar space, becoming the first human-made object to leave the solar system. And SpaceX was making strides with the rockets it hoped would one day carry humans to Mars. Grant was fascinated, and decided to start a space club at work.

“It was not a normal thing for anyone at my job to start a club of any kind,” Grant says. “But I figured I would do it and if I got fired for doing it then it probably was not the right place for me to work anyway.”

Grant started giving talks at the firm, and soon became known to his colleagues as the space guy. One introduced him to a short film by Planetary Collective called Overview.

The film explored the overview effect in meditative detail and shared astronauts’ reactions to seeing the earth from space.

“It was so powerful to me, so profound,” Grant says of watching Overview. “Maybe I was searching for something like that.”

Grant began sharing the video with everyone he knew. The overview effect was very much on his mind when he started preparing for a space club talk on GPS satellites. As he was pulling some satellite imagery for the talk, he entered “Earth” into the Apple Maps search bar, hoping it would take him to a zoomed out view of the whole earth. What he saw instead stunned him: Earth, Texas, a small town in the Northern part of the state with a population of 1,048.

The screenshot Benjamin Grant took of Earth, TX, seen from above. Benjamin Grant

Viewed from above, Earth, Texas is dappled by perfect circle after circle of fields, looking not unlike a pattern of verdant vinyl records.

“I had no idea what I was seeing at the time, but I’d studied art history and was dabbling in photography,” Grant says. “This was so stunningly beautiful and I had absolutely no idea what it was. It was this amazing mistake that set me off on this adventure.”

Grant went back to his apartment, plugged his computer into his big-screen TV, and showed the image to his roommates. They discovered that they were looking at pivot irrigation fields. The image inspired an evening of searching for similarly arresting satellite imagery of man-made systems. A friend from Europe showed him the shipping containers of the Port of Rotterdam, Europe’s largest sea port. Another friend who worked in energy asked if Grant had ever looked at solar concentrators before. A friend’s girlfriend who worked for an NGO at the time showed them the Dadaab Refugee Camp in Kenya.

Top left: The Port of Rotterdam. Top right: A solar farm in Seville, Spain. Bottom: The Dadaab Refugee camp, Kenya. Daily Overview

In an epiphanous moment not unlike Stewart Brand’s whole earth vision—sans LSD—Grant realized that these seldom-considered perspectives might inspire something akin to what seeing the Earth from space did for astronauts.

He launched the Daily Overview on Instagram soon after. Each day, the account shares an image of the Earth from above, called an Overview, that is optimized to capture fleeting attention on social media. Underneath each arresting image is a bite-size caption of two to three sentences describing what you’re seeing, along with geocoordinates. Daily Overview is one of the most popular blogs on social media. On Instagram, no account with an environmental focus has more followers.

“I think we’re inundated and saturated with so much information all the time now,” Grant tells me, “that if you can focus it to a few simple things it can actually stick with someone.”

The Eixample District in Barcelona, Spain. The neighborhood is characterized by its strict grid pattern, octagonal intersections, and apartments with communal courtyards. Daily Overview.

There’s a key difference between these Overviews and the whole earth photographs of yore: Blue Marble and Earthrise showed a planet seemingly unaffected by human activity. (“Raging nationalistic interests, famines, wars, pestilences don’t show from that distance,” Apollo 8 astronaut Frank Borman said). Zooming in changes that.

What one witnesses from this vantage — intricate and vibrant patterns of human activity, construction, and destruction — is still aesthetically pleasing. But in asking how those systems came to be, and learning about their impact, Grant hopes that one gains a planetary awareness, and, ideally, a motivation to act in a way that ensures planetary flourishing.

Ipanema Beach, Rio de Janeiro, Brazil. Daily Overview.

“If people have a better understanding of what is going on they’re more likely to behave in a way that serves the planet rather than serving themselves,” Grant tells me. “These images are a way to introduce things that people would never look at. If you were like, ‘I want you to look at waste ponds from this iron ore mine,’ people would say, ‘Why would I spend my time doing that?’ But if you can do that in a beautiful way that gets people engaged and gets people to ask questions about why it looks a certain way or is a certain color that’s an opportunity to educate and potentially change behaviors.”

Left: Iron Ore Mine, Tailings Pond, Negaunee, Michigan, USA. Right: Tulip fields in Lisse, Netherlands. Daily Overview

For Grant, inspiring awe with his overviews is as important as inspiring awareness.

“The things that stimulate awe, such as exposure to perceptually vast things, that you can experience if you go to the Grand Canyon or look out your airplane window, results in fascinating behaviors,” Grant says.

A 02014 study found that exposure to perceptually vast stimuli that transcend current frames of reference (i.e., awe) resulted in increased ethical decision making, generosity, and prosocial values while leading to decreased feelings of entitlement. “Awe,” the study’s authors concluded, “may help situate individuals within broader social contexts and enhance collective concern.”

Evaporation ponds at a Potash mine, Moab, Utah. The mine produces muriate of potash, a potassium-containing salt that is a major component in fertilizers. Daily Overview.

For Grant, stimulating awe with an overview comes down to not just what the satellite image portrays, but its artfulness. Each overview is stitched together out of as many as 25 images, purposefully cropped with balance and composition in mind. Many of Grant’s overviews evoke the works of Piet Mondrian, Mark Rothko, and Ellsworth Kelly.

“My favorite art is abstract expressionist painting—very simple, almost flat two dimensional painting,” Grant says. “When you look at the world from outer space it also appears flat and two dimensional.”

A juxtaposition of an Overview with a Piet Mondrian tableau. Via Benjamin Grant.

“If I can get people to experience awe,” Grant says, “not only because they’re seeing something that’s visually vast, like seeing an entire city in one frame or an entire mine in one frame, but if also I can compose it in such a way that the artistry of the image itself gets someone to feel awe, perhaps I’m being doubly as effective at getting them to think more prosocially or think beyond themselves or think of the collective.”


The first fully illuminated snapshot of the Earth captured by the DSCOVR satellite, a joint NASA, NOAA, and U.S. Air Force mission (02015).

III. A New Icon?

Grant’s notions about his overviews as art remind me of something Stewart Brand once said when asked to elaborate on his intentions in getting NASA to release an image of the earth from space.

“I saw the whole earth as an icon, mainly,” he said, “one that did indeed replace the mushroom cloud as the main image for understanding our world.”

These days, Brand’s focus has shifted to creating a new icon for a different age, The Long Now Foundation’s Clock of the Long Now. “Ideally, it would do for thinking about time what the photographs of Earth from space have done for thinking about the environment,” Brand writes. “Such icons reframe how people think.”

Brand’s co-founder at Long Now, Brian Eno, sees both the whole earth photographs and The Clock of The Long Now as works of art that are imbued with a “mythic, metaphorical presence.”

“The 20th Century yielded its share of icons,” Eno writes. “In this, the 21st century, we may need icons more than ever before.”

Grant’s overviews present the Earth in piecemeal — fragments of a larger whole delivered to a global audience on platforms engineered for ephemerality.

When asked if he thinks it’s possible for a single image of the Earth to serve as an icon for our current age like the whole Earth photos did half a century ago, Grant says he doesn’t think so.

“I don’t know if you could unify people in that way now,” he says. “It’s certainly necessary.”

Elon Musk recently sent a Tesla roadster into space.

Nonetheless, Grant believes advances in technology and the current space revolution will make the overview effect more and more a part of our lives. Geostationary satellites with better cameras are creating new Blue Marbles. Space tourism is on the rise, with trips to Mars on the horizon. The perspective the whole earth icon points to could—for those fortunate enough to “slip the surly bonds of earth”—become a direct experience.

“The overview effect is going to become more of a thing,” Grant says. “Whether or not it’s called that, or whether or not people are experiencing it first hand…if awe is generated, regardless of how it happens, it will lead to more prosocial values and more collaboration, and that will create a better planet.”


Notes

[1] The Long Now Foundation uses five-digit dates to serve as a reminder of the time scale that we endeavor to work in. Since the Clock of the Long Now is meant to run well past the Gregorian year 10,000, the extra zero is there to solve the deca-millennium bug, which will come into effect in about 8,000 years.

[2] Poole, Robert. Earthrise: How Man First Saw the Earth (02008), Yale University Press, 198–9.

Learn More

  • Watch Benjamin Grant’s Long Now talk and conversation with Stewart Brand.
  • Read Benjamin Grant’s book about the Daily Overview project, Overview (02016).
  • Read “The Overview Effect: Awe and Self-Transcendent Experience in Space Flight” in Psychology of Consciousness: Theory, Research, and Practice (02016), Vol. 3, №1, 1–11.
  • Read “Awe, the Small Self, and Prosocial Behavior” in Journal of Personality and Social Psychology (02015), Vol. 108, №6, 883–899.
  • Read “The Man Who Changed The World, Twice” by David Brooks.
  • Watch Benjamin Grant’s 02017 TED talk.

TED: Calling all social entrepreneurs + nonprofit leaders: Apply for The Audacious Project

Our first collection of Audacious Project winners takes the stage after a stellar session at TED2018, in which each winner made a big, big wish to move their organization’s vision to the next level with help from a new consortium of nonprofits. Photo: Ryan Lash / TED

Creating wide-scale change isn’t easy. It takes incredible passion around an issue, and smart ideas on how to move the needle and, hopefully, improve people’s lives. It requires bottomless energy, a dedicated team, an extraordinary amount of hope. And, of course, it demands real resources.

TED would like to help, on the last part at least. This is an open invitation to all social entrepreneurs and nonprofit leaders: apply to be a part of The Audacious Project in 2019. We’re looking for big, bold, unique ideas that are capable of affecting more than a million people or driving transformational change on a key issue. We’re looking for unexplored plans that have a real, credible path to execution, and that can inspire people around the world to come together to act.

Applications for The Audacious Project are open now through June 10. And here’s the best part — this isn’t a long, detailed grant application that will take hours to complete. We’ve boiled it down to the essential questions that can be answered swiftly. So apply as soon as you can. If your idea feels like a good fit, we’ll be in touch with an extended application that you’ll have four weeks to complete.

The Audacious Project process is rigorous — if selected as a Finalist, you’ll participate in an ideation workshop to help clarify your approach and work with us and our partners on a detailed project proposal spanning three to five years. But the work will be worth it, as it can turbocharge your drive toward change.

More than $406 million has already been committed to the first ideas in The Audacious Project. And further support is coming in following the simultaneous launch of the project at both TED2018 and the annual Skoll World Forum last week. Watch the full session from TED, or the highlight reel above, which screened the next day at Skoll. And who knows? Perhaps you’ll be a part of the program in 2019.

From left in the photo at the top of this post: The Bail Project‘s Robin Steinberg; Heidi M. Sosik of the Woods Hole Oceanographic Institution; Caroline Harper of Sightsavers; Vanessa Garrison and T. Morgan Dixon of GirlTrek; Fred Krupp from Environmental Defense Fund; Chloe Davis and Maleek Washington of Camille A. Brown and Dancers and pianist Scott Patterson, who gave an astonishing performance of “New Second Line”; Andrew Youn of the One Acre Fund; and Catherine Foster, Camille A. Brown, Timothy Edwards, Juel D. Lane from Camille A. Brown and Dancers. Obscured behind Catherine Foster is Raj Panjabi of Last Mile Health (and dancer Mayte Natalio is offstage).

TED: A behind-the-scenes view of TED2018, to inspire you to apply for The Audacious Project

What’s it like to stand in the wings, preparing to give your TED Talk and share a big idea to create ripples of change? This video, captured at TED2018, gives a taste of that. It follows the first speakers of The Audacious Project, TED’s new initiative to fund big ideas for global change. These speakers had a lot on the line as they gave their talks — in addition to a packed house at the conference, their talks were viewed around the world via Facebook Watch. And they all crushed it, sharing their ideas with unique power. (Want goosebumps? Watch Robin Steinberg’s talk about ending the injustice of the US bail system.)

Have an idea for the social good that feels in the same spirit? Apply to be a part of The Audacious Project next year. Applications are open now through June 10, 2018 — and the questionnaire is intentionally short to encourage you to apply. So go for it. Share your biggest, wildest vision for how to tackle one of the world’s most pressing problems.

Apply for The Audacious Project »

Sociological Images: Summer Reading with BBQ Becky

Over the past few months, we have seen several high-profile news stories about white Americans threatening to call, or calling, police on people of color for a range of everyday activities like looking out of place on a college tour, speaking Spanish with cashiers at a local restaurant, meeting at Starbucks, and removing luggage from an Airbnb. Perhaps most notably, one viral YouTube video showing a white woman calling the police on a group of Black people supposedly violating park rules by using charcoal on their grill spawned the meme “BBQ Becky.”

While the meme pokes fun at white fears of people of color, these incidents reflect bigger trends about who we think belongs in social settings and public spaces. Often, these perceptions — about who should and shouldn’t be at particular places — are rooted in race and racial difference.

There’s research on that! Beliefs about belonging particularly affect how Black people are treated in America. Sociologist Elijah Anderson has written extensively about how certain social settings are cast as a “white space” or a “black space.” Often, these labels extend to public settings, including businesses, shopping malls, and parks. Labels like these are important because they can lead to differences in how some people are treated, like the exclusion of the two Black men from Starbucks.

When addressing race and social space, social scientists often focus on residential segregation, where certain neighborhoods are predominantly comprised of members of one racial group. While these dynamics have been studied since the mid 20th century, research shows that race is still an important factor in determining where people live and who their neighbors are — an effect compounded by the 2008 financial crisis and its impacts on housing.

The memes are funny, but they can also launch important conversations about core sociological trends in who gets to be in certain social spaces.

Amber Joy is a PhD student in sociology at the University of Minnesota. Her current research interests include punishment, sexual violence and the intersections of race, gender, age, and sexuality. Her work examines how state institutions construct youth victimization.

Neeraj Rajasekar is a Ph.D. student in sociology at the University of Minnesota studying race and media.

(View original at https://thesocietypages.org/socimages)

Krebs on Security: Will the Real Joker’s Stash Come Forward?

For as long as scam artists have been around so too have opportunistic thieves who specialize in ripping off other scam artists. This is the story about a group of Pakistani Web site designers who apparently have made an impressive living impersonating some of the most popular and well known “carding” markets, or online stores that sell stolen credit cards.

An ad for new stolen cards on Joker’s Stash.

One wildly popular carding site that has been featured in-depth at KrebsOnSecurity — Joker’s Stash — brags that the millions of credit and debit card accounts for sale via their service were stolen from merchants firsthand.

That is, the people running Joker’s Stash say they are hacking merchants and directly selling card data stolen from those merchants. Joker’s Stash has been tied to several recent retail breaches, including those at Saks Fifth Avenue, Lord and Taylor, Bebe Stores, Hilton Hotels, Jason’s Deli, Whole Foods, Chipotle and Sonic. Indeed, with most of these breaches, the first sign that any of the companies were hacked was when their customers’ credit cards started showing up for sale on Joker’s Stash.

Joker’s Stash maintains a presence on several cybercrime forums, and its owners use those forum accounts to remind prospective customers that its Web site — jokerstash[dot]bazar — is the only way in to the marketplace.

The administrators constantly warn buyers to be aware there are many look-alike shops set up to steal logins to the real Joker’s Stash or to make off with any funds deposited with the impostor carding shop as a prerequisite to shopping there.

But that didn’t stop a prominent security researcher (not this author) from recently plunking down $100 in bitcoin at a site he thought was run by Joker’s Stash (jokersstash[dot]su). Instead, the proprietors of the impostor site said the minimum deposit for viewing stolen card data on the marketplace had increased to $200 in bitcoin.

The researcher, who asked not to be named, said he obliged with an additional $100 bitcoin deposit, only to find that his username and password to the card shop no longer worked. He’d been conned by scammers scamming scammers.

As it happens, prior to hearing from this researcher I’d received a mountain of research from Jett Chapman, another security researcher who swore he’d unmasked the real-world identity of the people behind the Joker’s Stash carding empire.

Chapman’s research, detailed in a 57-page report shared with KrebsOnSecurity, pivoted off of public information leading from the same jokersstash[dot]su that ripped off my researcher friend.

“I’ve gone to a few cybercrime forums where people who have used jokersstash[dot]su that were confused about who they really were,” Chapman said. “Many of them left feedback saying they’re scammers who will just ask for money to deposit on the site, and then you’ll never hear from them again.”

But the conclusion of Chapman’s report — that somehow jokersstash[dot]su was related to the real criminals running Joker’s Stash — didn’t ring completely accurate, although it was expertly documented and thoroughly researched. So with Chapman’s blessing, I shared his report with both the researcher who’d been scammed and a law enforcement source who’d been tracking Joker’s Stash.

Both confirmed my suspicions: Chapman had unearthed a vast network of sites registered and set up over several years to impersonate some of the biggest and longest-running criminal credit card theft syndicates on the Internet.

THE REAL JOKER’S STASH

The real Joker’s Stash can only be reached after installing a browser extension known as “blockchain DNS.” This component is needed to access any sites ending in the top-level domain names of .bazar, .bit (Namecoin), .coin, .lib and .emc (Emercoin).

Most Web sites use the global Domain Name System (DNS), which serves as a kind of phone book for the Internet by translating human-friendly Web site names (example.com) into numeric Internet addresses that are easier for computers to manage.

Regular DNS maps domains to Internet addresses by relying on a series of distributed, hierarchical lookups. If one server does not know how to find a domain, that server simply asks another server for the information.

Blockchain-based DNS systems also disseminate that mapping information in a distributed fashion, although via a peer-to-peer method. The entities that operate blockchain-based top level domains (e.g., .bazar) don’t answer to any one central authority — such as the Internet Corporation for Assigned Names and Numbers (ICANN), which oversees the global DNS and domain name space. This potentially makes these domains much more difficult for law enforcement agencies to take down.
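
To make the difference concrete, here is a small Python sketch (my own illustration, not something from Chapman's research or the article): a standard resolver answers happily for an ordinary domain, but a blockchain TLD such as .bazar does not exist in the ICANN-rooted hierarchy, so the lookup fails unless something like the blockchain DNS extension does the resolving.

    import socket

    def resolve(hostname):
        try:
            return socket.gethostbyname(hostname)   # ordinary recursive DNS lookup
        except socket.gaierror as err:
            return "unresolvable via standard DNS ({})".format(err)

    print(resolve("example.com"))        # resolves through the ICANN-rooted hierarchy
    print(resolve("jokerstash.bazar"))   # .bazar is not delegated by the DNS root, so this fails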

This batch of some five million cards put up for sale Sept. 26, 2017 on the (real) carding site Joker’s Stash has been tied to a breach at Sonic Drive-In.

Dark Reading explains further: “When an individual registers a .bit — or another blockchain-based domain — they are able to do so in just a few steps online, and the process costs mere pennies. Domain registration is not associated with an individual’s name or address but with a unique encrypted hash of each user. This essentially creates the same anonymous system as Bitcoin for Internet infrastructure, in which users are only known through their cryptographic identity.”

And cybercriminals have taken notice. According to security firm FireEye, over the last year there’s been a surge in the number of threat actors that have started incorporating support for blockchain domains in their malware tools.

THE FAKE JOKER’S STASH

In contrast, the fake version of Joker’s Stash — jokersstash[dot]su — exists on the clear Web and displays a list of “trusted” Joker’s Stash domains that can be used to get on the impostor marketplace.  These lists are common on the login pages of carding and other cybercrime sites that tend to lose their domains frequently when Internet do-gooders report them to authorities. The daily reminder helps credit card thieves easily find the new domain should the primary domain get seized by law enforcement or the site’s domain registrar.

Jokersstash[dot]su lists mirror sites in case the generic domain becomes inaccessible.

Most of the domains in the image above are hosted on the same Internet address: 190.14.38.6 (Offshore Racks S.A. in Panama). But Chapman found that many of these domains map back to just a handful of email addresses, including domain@paysafehost.com, fkaboot@gmail.com, and zanebilly30@gmail.com.

Chapman found that adding credit cards to his shopping cart in the fake Joker’s Stash site caused those same cards to show up in his cart when he accessed his account at one of the alternative domains listed in the screenshot above, suggesting that the sites were all connected to the same back-end database.

The email address fkaboot@gmail.com is tied to the name or alias “John Kelly,” as well as 35 domains, according to DomainTools (the full list is here). Most of the sites at those domains borrow names and logos from established credit card fraud sites, including VaultMarket, T12Shop, BriansClub (which uses the head of yours truly on a moving crab to advertise its stolen cards); and the now defunct cybercrime forum Infraud.

DomainTools says the address domain@paysafehost.com also maps to 35 domains, including look-alike domains for major carding sites Bulba, GoldenDumps, ValidShop, McDucks, Mr. Bin, Popeye, and the cybercrime forum Omerta.

The address zanebilly30@gmail.com is connected to 36 domains that feature many of the same impersonated criminal brands as the first two lists.

The domain “paysafehost.com” is not responding at the moment, but until very recently it redirected to a site that tried to scam or phish customers seeking to buy stolen credit card data from VaultMarket. It looks more or less the same as the real VaultMarket’s login page, but Chapman noticed that in the bottom right corner of the screen was a Zendesk chat service soliciting customer questions.

Signing up for an account at paysafehost.com (the fake VaultMarket site) revealed a site that looked like VaultMarket but prominently displayed ads for another carding service — isellz[dot]cc (one of the domains registered to domain@paysafehost.com).

This same Zendesk chat service also was embedded in the homepage of jokersstash[dot]su.

And on isellz[dot]cc:

Notice the same Zendesk chat client in the bottom right corner of the Isellz home page.

According to Farsight Security, a company that maps historical connections between Internet addresses and domain names, several other interesting domains used paysafehost[dot]com as their DNS servers, including cvv[dot]kz (CVV stands for the card verification value and it refers to stolen credit card numbers, names and cardholder address that can be used to conduct e-commerce fraud).

All three domains — cvv[dot]kz, isellz[dot]cc and paysafehost[dot]com — list in their Web site registration records the email address xperiasolution@gmail.com, the site xperiasol.com, and the name “Bashir Ahmad.”

XPERIA SOLUTIONS

Searching online for the address xperiasolution@gmail.com turns up a help wanted ad on the Qatar Living Jobs site from October 2017 for a freelance system administrator. The ad was placed by the user “junaidky”, and gives the xperiasolution@gmail.com email address for interested applicants to contact.

Chapman says at this point in his research he noticed that xperiasolution@gmail.com was also used to register the domain xperiasol.info, which for several years was hosted on the same server as a handful of other sites, such as xperiasol.com — the official Web site of Xperia Solution (this site also features a Zendesk chat client in the lower right portion of the homepage).

Xperiasol.com’s Web site says the company is a Web site development firm and domain registrar in Islamabad, Pakistan. The site’s “Meet our Team” page states the founder and CEO of the company is a guy named Muhammad Junaid. Another man pictured as Yasir Ali is the company’s project manager.

The top dogs at Xperia Sol.

We’ll come back to both of these individuals in a moment. Xperiasol.info is also no longer responding, but not long ago the home page showed several open file directories:

Clicking in the projects directory and drilling down into a project dated Feb. 8, 2018 turns up some kind of chatroom application in development. Recall that dozens of the fake carding domains mentioned above were registered to a “John Kelly” at fkaboot@gmail.com. Have a look at the name next to the chatroom application Web site that was archived at xperiasol.info:

Could Yasir Ali, the project manager of Xperiasol, be the same person who registered so many fake carding domains? What else do we know about Mr. Ali? It appears he runs another business called Agile: Institute of Information Technology. Agile’s domain — aiit.com.pk — was registered to Xperia Sol Technologies in 2016 and hosted on the same server.

Who else that we know besides Mr. Ali is listed on Agile’s “Meet the Team” page? Why Mr. Muhammad Junaid, of course, the CEO and founder of Xperia Sol.

Notice the placeholder “lorem ipsum” content. This can be seen throughout the Web sites for Xperia Sol’s “customers.”

Chapman shared pages of documentation showing that most of the “customer testimonials” supposedly from Xperia Sol’s Web design clients appear to be half-finished sites with plenty of broken links and “lorem ipsum” placeholder content (as is the case with the aiit.com.pk Web site pictured above).

Another “valuable client” listed on Xperia Sol’s home page is Softlottery[dot]com (previously softlogin[dot]com). This site appears to be a business that sells Web site design templates, but it lists its address as Sailor suite room V124, DB 91, Someplace 71745 Earth.

Softlottery/Softlogin features a “corporate business” Web site template that includes a slogan from a major carding forum.

Among the “awesome” corporate design templates that Softlottery has for sale is one loosely based on a motto that has shown up on several carding sites: “We are those, who we are: Verified forum, verified people, serious deals.” Probably the most well-known cybercrime forum using that motto is Omerta (recall from above that the Omerta forum is another brand impersonated by this group).

Flower Land, with the Web address flowerlandllc.com, is also listed as a happy Xperia Sol customer and is hosted by Xperia Sol. But most of the links on that site are dead. More importantly, the site’s content appears to have been lifted from the Web site of an actual flower care business in Michigan called myflowerland.com.

Zalmi-TV (zalmi.tv) is supposedly a news media partner of Xperia Sol, but again the Xperia-hosted site is half-finished and full of “lorem ipsum” placeholder content.

THE MASTER MIND?

But what about Xperia Sol’s founder, Muhammad Junaid, you ask? Mr. Junaid is known by several aliases, including his stage name, “Masoom Parinda,” a.k.a. “Master Mind.” As Chapman unearthed in his research, Junaid has starred in some B-movie action films in Pakistan, and Masoom Parinda is his character’s name.

The fan page for Masoom Parinda, the character played by Muhammad Junaid Ahmed.

Mr. Junaid also goes by the names Junaid Ahmad Khan, and Muhammad Junaid Ahmed. The latter is the one included in a flight itinerary that Junaid posted to his Facebook page in 2014.

There are also some interesting photos of his various cars — all of which have the Masoom Parinda nickname “Master Mind” written on the back window. There is also something else on each car’s rear window: A picture of a black and red scorpion.

Recall the logo that was used at the top of isellz[dot]cc, the main credit card fraud site tied to xperiasolution@gmail.com. It features a giant black and red scorpion:

The isellz Web site features a scorpion as a logo.

I reached out to Mr. Junaid/Khan via his Facebook page. Soon after that, his Facebook profile disappeared. But not before KrebsOnSecurity managed to get a copy of the page going back several years. Mr. Junaid/Khan is apparently friends with a local man named Bashir Ahmad. Recall that a “Bashir Ahmad” was the name tied to the domain registrations — cvv[dot]kz, isellz[dot]cc and paysafehost[dot]com — and to the email address xperiasolution@gmail.com.

Mr. Ahmad also has a Facebook page going back more than seven years. In one of those posts, he publishes a picture of a scorpion very similar to the one on isellz[dot]cc and on Mr. Khan’s automobiles.

A screen shot from Bashir Ahmad’s Facebook postings.

At the conclusion of his research, Chapman said he discovered one final and jarring connection between Xperia Sol and the carding site isellz[dot]cc: When isellz customers have trouble using the site, they can submit a support ticket. Where does that support ticket go? Would you believe to xperiasol@gmail.com? Click the image below to enlarge.

The support page of the carding site isellz[dot]cc points to Xperia Sol. Click to enlarge.

It could be that all of this evidence pointing back to Xperia Sol is just a coincidence, or an elaborate character assassination scheme cooked up by one of the company’s competitors. Or perhaps Mr. Junaid/Khan is simply researching a new role as a hacker in an upcoming Pakistani cinematic thriller:

Mr. Junaid/Khan, in an online promotion for a movie he stars in about crime.

In many ways, creating a network of fake carding sites is the perfect cybercrime. After all, nobody is going to call the cops on people who make a living ripping off cybercriminals. Nor will anyone help the poor sucker who gets snookered by one of these fake carding sites. Caveat Emptor!

Cryptogram: Kidnapping Fraud

Fake kidnapping fraud:

"Most commonly we have unsolicited calls to potential victims in Australia, purporting to represent the people in authority in China and suggesting to intending victims here they have been involved in some sort of offence in China or elsewhere, for which they're being held responsible," Commander McLean said.

The scammers threaten the students with deportation from Australia or some kind of criminal punishment.

The victims are then coerced into providing their identification details or money to get out of the supposed trouble they're in.

Commander McLean said there are also cases where the student is told they have to hide in a hotel room, provide compromising photos of themselves and cut off all contact.

This simulates a kidnapping.

"So having tricked the victims in Australia into providing the photographs, and money and documents and other things, they then present the information back to the unknowing families in China to suggest that their children who are abroad are in trouble," Commander McLean said.

"So quite circular in a sense...very skilled, very cunning."

Worse Than Failure: CodeSOD: Modern Art: The Funnel

They say a picture is worth a thousand words, and when it's a picture of code, you could say that it contains a thousand words, too. Especially when it's bad code.

A 35-line enum definition which channels down to a funnel shape. I apologize for not providing the code in textual form, but honestly, this needs to be seen to be believed.

Here we have a work of true art. The symmetry hearkens back to the composition of a frame of a Wes Anderson film, and the fact that this snippet starts on line 418 tells us that there’s more to this story, something exotic happening just outside of frame. The artist is actively asking questions about what we know is true, with the method calls (I think they’re method calls?) which take too many parameters, most of which are false. There are hints of an Inner Platform, but they’re left for the viewer to discover. And holding it all together are the funnel-like lines which pull the viewer’s eyes, straight through the midline, all the way down to the final DataType.STRING, which really says it all, doesn’t it? DataType.STRING indeed.

If I ran an art gallery, I would hang this on a wall.

If I ran a programming team, I'd hang the developer instead.


Planet Debian: Tim Retout: Tokenizing IT jobs

One size does not fit all when it comes to building search applications - it is important to think about the business domain and user expectations. Here's a classic example from recruitment search (a domain which has absorbed six years of my life already...) - imagine you are a candidate searching for IT jobs on your favourite job board.

Recall how a full-text index works as implemented in Solr or Elasticsearch - the job posting documents are treated as a bag of words (i.e. the order of the words doesn't matter in the first instance). When indexing each job, the search engine tokenizes the document to get a list of which words are included. Then, for each individual word we create a list of which documents include each word.

Normally you tell the indexer to exclude so-called "stopwords" which do not provide any useful information to the searcher - e.g. "a", "is", "it", "to", "and". These terms are present in most if not all documents, so would take up a huge amount of space in your index for little benefit. The same stopwords are excluded from queries to reduce the complexity of the search problem.
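
As a rough illustration (a toy Python sketch of the principle, not how Solr or Elasticsearch are actually implemented), building an inverted index with the stopword list above might look like this:

    import re
    from collections import defaultdict

    STOPWORDS = {"a", "is", "it", "to", "and"}

    def tokenize(text):
        # lowercase, split into words, drop stopwords
        return [t for t in re.findall(r"\w+", text.lower()) if t not in STOPWORDS]

    def build_index(docs):
        index = defaultdict(set)
        for doc_id, text in docs.items():
            for token in tokenize(text):
                index[token].add(doc_id)   # word -> the set of documents containing it
        return index

    jobs = {
        1: "IT support engineer to join a growing helpdesk team",
        2: "Marketing manager for a fashion brand",
    }
    print(dict(build_index(jobs)))
    # "it" never reaches the index at all, which is exactly the problem
    # described next: with this naive setup, an IT job cannot be found by
    # the query [it].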

However, look at the word "it". It matches the term "IT" case-insensitively - and it's quite common for candidates to use lowercase when entering queries. So we want the query [it] to return jobs containing "IT" - this means "it" cannot be a stopword for queries!

To solve this in Solr, we end up doing something much more complicated:

  1. First, "it" is not included in our stopwords list.
  2. At index time, the term "IT" is mapped to "informationtechnology", case-sensitively. (I believe this is so that phrase matches might work? You can ensure that the phrase "Information Technology" maps to the same token.)
  3. At query time, the term "it" and similar is mapped to the same token.

To implement this in Solr, use a separate analyzer for index/query time on the field, pointing at different synonym files.
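
In toy form, the index-time/query-time asymmetry looks something like the following Python sketch. It only illustrates the principle; a real deployment expresses this as the two Solr analyzer chains and synonym files described above, and the "informationtechnology" token is just the placeholder already mentioned.

    import re

    QUERY_STOPWORDS = {"a", "is", "to", "and"}   # "it" is deliberately NOT a query stopword

    def index_tokens(text):
        # Index time: map the acronym case-sensitively, so the pronoun "it" is untouched...
        text = re.sub(r"\bIT\b", "informationtechnology", text)
        text = re.sub(r"\binformation technology\b", "informationtechnology", text, flags=re.IGNORECASE)
        # ...and any remaining lowercase "it" is the pronoun, safe to drop at index time.
        return [t for t in re.findall(r"\w+", text.lower())
                if t not in QUERY_STOPWORDS | {"it"}]

    def query_tokens(text):
        # Query time: candidates usually type lowercase, so "it" is assumed to mean the acronym.
        return ["informationtechnology" if t == "it" else t
                for t in re.findall(r"\w+", text.lower())
                if t not in QUERY_STOPWORDS]

    doc = "IT support engineer with Information Technology helpdesk experience"
    print(index_tokens(doc))        # both "IT" and "Information Technology" become the same token
    print(query_tokens("it jobs"))  # ['informationtechnology', 'jobs'] -- matches the document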

While the implementation is quite ugly, the principle is simple: the recruiter and the candidate intended different things when writing the job posting versus the query, and we need to handle each according to the intention of the author. For a different application that had nothing to do with IT, you could safely ignore the word "it".

Planet Debian: Reproducible builds folks: Reproducible Builds: Weekly report #161

Here’s what happened in the Reproducible Builds effort between Sunday May 20 and Saturday May 26 2018:

Packages reviewed and fixed, and bugs filed

diffoscope development

Version 95 was uploaded to unstable by Mattia Rizzolo. It includes contributions already covered by posts in previous weeks as well as new ones from:

tests.reproducible-builds.org development

There were a number of changes to our Jenkins-based testing framework, including:

Misc.

This week’s edition was written by Arnout Engelen, Bernhard M. Wiedemann, Chris Lamb, Holger Levsen and Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet Debian: Michal Čihař: Improved Docker container for Weblate

The Docker container for Weblate got several improvements in the past few days, and if you're using it, it might be worth reviewing your setup.

It has been upgraded to Python 3 and Django 2. This should cause no problems as Weblate itself has supported both for quite some time, but if you were extending Weblate somehow, you might have to update these extensions to make them compatible.

The default cache backend is now redis. It will be required in the future for some features, so you will have to switch at some point anyway. The memcached support is still there in case you want to stick with your current setup.

Cron jobs have been integrated into the main container, so you no longer need to trigger them externally. This saves quite some pain with offloaded indexing and other features which rely on regular execution.

Another important change is in logging - all logs now go to standard output, so you will get them via docker-compose logs and other Docker management commands. This will make debugging easier.
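
For readers wondering what "logs to standard output" looks like in practice, here is a minimal, generic Django-style sketch of the pattern; it is my own illustration, not Weblate's actual settings, and the handler name is arbitrary:

    import logging.config

    # Generic Django-style LOGGING dict that routes everything to stdout,
    # which is what lets `docker-compose logs` show the application output.
    LOGGING = {
        "version": 1,
        "disable_existing_loggers": False,
        "handlers": {
            "console": {
                "class": "logging.StreamHandler",
                "stream": "ext://sys.stdout",  # stdout rather than a log file inside the container
            },
        },
        "root": {
            "handlers": ["console"],
            "level": "INFO",
        },
    }

    logging.config.dictConfig(LOGGING)
    logging.getLogger(__name__).info("this line ends up in `docker-compose logs`")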

Filed under: Debian English SUSE Weblate

Planet Debian: Clint Adams: Guidance counselor

“We will have to leave this planet,” he said, according to Geek Wire. “We’re going to leave it, and it’s going to make this planet better.”

“I wonder who ‘we’ is,” she said, “but I have no doubt it will make this planet better.”

Posted on 2018-05-29
Tags: umismu


Krebs on Security: FBI: Kindly Reboot Your Router Now, Please

The Federal Bureau of Investigation (FBI) is warning that a new malware threat has rapidly infected more than a half-million consumer devices. To help arrest the spread of the malware, the FBI and security firms are urging home Internet users to reboot routers and network-attached storage devices made by a range of technology manufacturers.

The growing menace — dubbed VPNFilter — targets Linksys, MikroTik, NETGEAR and TP-Link networking equipment in the small and home office space, as well as QNAP network-attached storage (NAS) devices, according to researchers at Cisco.

Experts are still trying to learn all that VPNFilter is built to do, but for now they know it can do two things well: Steal Web site credentials; and issue a self-destruct command, effectively rendering infected devices inoperable for most consumers.

Cisco researchers said they’re not yet sure how these 500,000 devices were infected with VPNFilter, but that most of the targeted devices have known public exploits or default credentials that make compromising them relatively straightforward.

“All of this has contributed to the quiet growth of this threat since at least 2016,” the company wrote on its Talos Intelligence blog.

The Justice Department said last week that VPNFilter is the handiwork of “APT28,” the security industry code name for a group of Russian state-sponsored hackers also known as “Fancy Bear” and the “Sofacy Group.” This is the same group accused of conducting election meddling attacks during the 2016 U.S. presidential race.

“Foreign cyber actors have compromised hundreds of thousands of home and office routers and other networked devices worldwide,” the FBI said in a warning posted to the Web site of the Internet Crime Complaint Center (IC3). “The actors used VPNFilter malware to target small office and home office routers. The malware is able to perform multiple functions, including possible information collection, device exploitation, and blocking network traffic.”

According to Cisco, here’s a list of the known affected devices:

LINKSYS DEVICES:

E1200
E2500
WRVS4400N

MIKROTIK ROUTEROS VERSIONS FOR CLOUD CORE ROUTERS:

1016
1036
1072

NETGEAR DEVICES:

DGN2200
R6400
R7000
R8000
WNR1000
WNR2000

QNAP DEVICES:

TS251
TS439 Pro

Other QNAP NAS devices running QTS software

TP-LINK DEVICES:

R600VPN

Image: Cisco

Unfortunately, there is no easy way to tell if your device is infected. If you own one of these devices and it is connected to the Internet, you should reboot (or unplug, wait a few seconds, replug) the device now. This should wipe part of the infection, if there is one. But you’re not out of the woods yet.

Cisco said part of the code used by VPNFilter can still persist until the affected device is reset to its factory-default settings. Most routers and NAS devices will have a tiny, recessed button that can only be pressed with something small and pointy, such as a paper clip. Hold this button down for at least 10 seconds (some devices require longer) with the device powered on, and that should be enough to reset the device back to its factory-default settings. In some cases, you may need to hold the tiny button down and keep it down while you plug in the power cord, and then hold it for 30 seconds.

After resetting the device, you’ll need to log in to its administrative page using a Web browser. The administrative page of most commercial routers can be accessed by typing 192.168.1.1, or 192.168.0.1 into a Web browser address bar. If neither of those work, try looking up the documentation at the router maker’s site, or checking to see if the address is listed here. If you still can’t find it, open the command prompt (Start > Run/or Search for “cmd”) and then enter ipconfig. The address you need should be next to Default Gateway under your Local Area Connection.
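
If you are comfortable with a few lines of Python, the ipconfig step above can also be scripted. This is just a convenience sketch of mine (Windows only, not something from the original advice): it runs ipconfig and prints any "Default Gateway" entries, which normally hold the router's administrative address.

    import subprocess

    def default_gateways():
        # Run ipconfig, as described above, and pull out the "Default Gateway" lines.
        output = subprocess.run(["ipconfig"], capture_output=True, text=True).stdout
        gateways = []
        for line in output.splitlines():
            if "Default Gateway" in line:
                value = line.split(":", 1)[-1].strip()
                if value:
                    gateways.append(value)
        return gateways

    if __name__ == "__main__":
        for address in default_gateways():
            print(address)   # e.g. 192.168.1.1 -- try this address in your browser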

Once you’re there, make sure you’ve changed the factory-default password that allows you to log in to the device (pick something strong that you can remember).

You’ll also want to make sure your device has the latest firmware updates. Most router Web interfaces have a link or button you click to check for newer device firmware. If there are any updates available, install those before doing anything else.

If you’ve reset the router’s settings, you’ll also want to encrypt your connection if you’re using a wireless router (one that broadcasts your modem’s Internet connection so that it can be accessed via wireless devices, like tablets and smart phones). WPA2 is the strongest encryption technology available in most modern routers, followed by WPA and WEP (the latter is fairly trivial to crack with open source tools, so don’t use it unless it’s your only option).

But even users who have a strong router password and have protected their wireless Internet connection with a strong WPA2 passphrase may have the security of their routers undermined by security flaws built into these routers. At issue is a technology called “Wi-Fi Protected Setup” (WPS) that ships with many routers marketed to consumers and small businesses. According to the Wi-Fi Alliance, an industry group, WPS is “designed to ease the task of setting up and configuring security on wireless local area networks. WPS enables typical users who possess little understanding of traditional Wi-Fi configuration and security settings to automatically configure new wireless networks, add new devices and enable security.”

However, WPS also may expose routers to easy compromise. Read more about this vulnerability here. If your router is among those listed as using WPS, see if you can disable WPS from the router’s administration page. If you’re not sure whether it can be, or if you’d like to see whether your router maker has shipped an update to fix the WPS problem on their hardware, check this spreadsheet.

Turning off any remote administration features that may be turned on by default is always a good idea, as is disabling Universal Plug and Play (UPnP), which can easily poke holes in your firewall without you knowing it. However, Cisco researchers say there is no indication that VPNFilter uses UPnP.

For more tips on how to live with your various Internet of Things (IoT) devices without becoming a nuisance to yourself or the Internet at large, please see Some Basic Rules for Securing Your IoT Stuff.

Update, June 2, 10:30 a.m. ET: Netgear provided the following statement about VPNFilter:

To protect against this possible malware, we strongly advise all NETGEAR router owners to take the following steps:

•       Make sure that you are running the latest firmware on your NETGEAR router. Firmware updates include important security fixes and upgrades. For more information, see How do I update my NETGEAR router firmware using the Check button in the router web interface?.
•       Make sure that you have changed your default admin password. For more information, see How do I change the admin password on my NETGEAR router?.
•       Make sure that remote management is turned off on your router. Remote management is turned off by default and can only be turned on in your router’s advanced settings.

To make sure that remote management is turned off on your router:
1.      On a computer that is part of your home network, type http://www.routerlogin.net in the address bar of your browser and press Enter.
2.      Enter your admin user name and password and click OK.
If you never changed your user name and password after setting up your router, the user name is admin and the password is password.
3.      Click Advanced > Remote Management.
4.      If the check box for Turn Remote Management On is selected, clear it and click Apply to save your changes.
If the check box for Turn Remote Management On is not selected, you do not need to take any action.

NETGEAR is investigating and will update this advisory as more information becomes available.

Sky Croeser: ICA18 Day 4: labour in the gig economy; resistant media; feminist peer review; love, sex, and friendship; illiberal democracy in Eastern and Central Europe

Voices for Social Justice in the Gig Economy: Where Labor, Policy, Technology, and Activism Converge
Voices for Social Justice in the Gig Economy, Michelle Rodino-Colocino.
This research discusses the App-Based Driver Association, looking specifically at Seattle. There’s no “there” for gig economy work: previous spaces of organising, such as the shop floor, aren’t available. One space is a parking lot, where people sit waiting to get lifts. There’s one shady tree, where people tend to converge. Another space is an Ethiopian grocery store, as many drivers are East African. The ABDA is largely funded and supported by the teamsters. Drivers interviewed definitely understand that they’re producing for Uber, and that they’re being exploited. They spoke about the challenges of planning – they can’t go watch a movie. Above all, Uber sells drivers’ availability. One driver was told: “we can always get another Mohammed”. Drivers feel dehumanized. They’re not provided with toilets, there’s nowhere to pray. They’re also cautious about organising, as Uber is clearly anti-union.

Work in the European Gig Economy. Kaire Holts, University of Hertfordshire. This research aims to survey and measure the extent and characteristics of crowd work in Europe. Working conditions are characterised by precariousness (including frequent changes to pay levels), unpredictability, work intensity, the impact of customer ratings, abuse from customers, and poor communication with platform staff (including a lack of face to face contact, and no social etiquette). One driver was asked to deliver drugs to a criminal gang late at night. When she told the platform about it they said it was her responsibility to check what was in the bags. Workers face both physical risks and stresses, and issues with mental health. There are some attempts at collective representation of platform workers in Europe. In the UK, for example, there’s the Independent Workers Union of Great Britain representing Deliveroo drivers, and the United Private Hire Drivers (UPHD) representing Uber drivers.

Reimagining Work [didn’t quite catch the current title], Laura Forlano. This draws on a project with Megan Halpern, using workshops and games that helped people collaborate to imagine what work might look like in the future. One participant spoke of the importance of the shift from talking around each other to needing to actually physically move as part of the workshop process. Shifts in work are linked to reimagining the city as a (new, urban) factory, so we need to reimagine relationships between work, technology, and the city to embed social justice values into our future.

Information and the Gig Economy. Brian Dolber.
Talks about shifting from a tenure-track position to adjunct work, and then taking up work with Uber and Unite Here (campaigning against Airbnb). From 2008 to 2012, Silicon Valley received little of the broader critique addressed at capitalism more generally. Silicon Valley can be seen within Nancy Fraser’s concept of ‘progressive neoliberalism’, but we’re also seeing a shift towards an emergent neofascism. Airbnb’s valuation is greater than all the hotel chains, which is odd when we think about ‘hosts’ as small business owners. Airbnb has created online communities called ‘Airbnb citizen’ which aim to mobilise hosts to affect city policy. The narrative is very much about facilitating people staying in their homes, paying medical bills, supporting the creative industries, which Dolber argues is cultivating a petit bourgeois attitude that shifts us towards an emergent neofascism.

Power Politics of Resistant Media: Critical Voices From Margins to Center

The opening speaker (whose name I unfortunately didn’t get) discusses the ways in which pop feminism works, and the complexity of vulnerability. There’s a distorted mirroring of vulnerability between popular feminism and white misogyny.

Polemology: counterinsurgency and culture jamming, Jack Bratich.
We need a genealogy to elaborate and understand the persistence and connection of struggles across time.

Rosemary Clark-Parsons (University of Pennsylvania) will discuss de Certeau’s concept of “tactics” within the context of her ethnographic work among grassroots feminist collectives in the city of Philadelphia. She focuses on ‘girl army’, a secret Facebook group developed as a space for women and nonbinary people to share experiences. Tilly and Tarrow’s definition of contentious politics would exclude this group, which isn’t in line with women and nonbinary people’s solidarity and organising work within the group. De Certeau’s concept of tactics allows us to take the everyday seriously; can teach us about strategies; and allows explicit recognition of agency within systems of power. There are limitations, too, including issues with addressing differential access to agency, and theorizing structural change over time. The strategies/tactics binary can be reductive and reify power relations.

#HashtagActivism: race and gender in America’s Networked Counterpublics. Sarah J. Jackson (Northeastern University). Networked counterpublics theory is one way to understand how marginalised communities create their own public spheres. Mainstream media coverage of the public response to #myNYPD mostly treated it as ‘trolling’, or a PR disaster, that could happen to anyone. In the coverage of #Ferguson, there was a flow of the narrative from ordinary people’s framing through to social movement organisations, and finally the media. #GirlsLikeUs is a useful case, because even within counterpublics, there are people at the margins, who produce their own counter-counterpublics.

Jessa Lingel (University of Pennsylvania) focused on “mainstream creep,” referring to the uneasy relationships between countercultural communities and dominant media platforms, where the former uses the latter reluctantly or in highly-limited ways. How do we construct particular bodies as vulnerable: the language of ‘marginalised people’ is important for understanding structures of power, but does it also construct people as essentially weaker?

Gendered Voices and Practices of Open Peer Review
I opened this panel by reflecting on some of the ways in which I am currently trying to understand, and reconfigure, my approaches to both mothering and academia. I’ll put up a blog post about this later.

The Fembot Collective’s Global South Initiatives. Radhika Gajjala, Bowling Green State University. Problems for women in academia in the Global South start with the much-more-oppressive system of neocolonialism. To participate in autoethnography or other feminist methodologies would be a problem because it’s devalued within universities that see it as navel-gazing. Women need to publish in top-tier journals in order to be successful (or even survive) within their academic spaces. How do we as feminist publishers work with women in the Global South to help them access the resources that their institutions value? How do we support them without asking them to do a lot of extra activist work within their institutions? We need to think about power differences within the networks of solidarity and resistance we build across borders. It’s a messy terrain. We need to work to allow women in academia in the Global South to get access to a space where they can speak (and be heard).

Voicing New Forms of Scholarly Publishing. Sarah Kember, Goldsmith’s, University of London. There’s a seismic shift happening at the moment in academic publishing. Revolution and disruption are not the same thing. We need to understand this within the context of efforts to police and politicise scholarly practices: there’s no distinction between these two at the moment. We need to both uphold something (the trust in academic work), but also change it (the opacity of peer review processes). We’re currently seeing a “pay to say” model of academic publishing in open access, at least in the UK. “Openness” works in different ways, with an asymmetrical structure. Goldsmiths has to be open, Google doesn’t. “Open access” publishing is often incredibly expensive, especially where academics are pushed to continue publishing with traditional academic publishers. Kember cites ADA as a big intervention in these models. The disruptions of scholarly publishing models is a by-product of neoliberalism. The disruption of academia isn’t. We need to restate the university press mission, revise it, and rethink it. The policies around scholarly publishing need careful examination. The issue is not about adding ever-more OA panels, which are entrepreneurial, and technicist.

Peer Review is Dead, Long Live Peer Review: Conflicts in the Field of Academic Production. Bryce Peake, University of Maryland, Baltimore County. Academics often undertake review because it gives access to particular networks. Women tend to receive much more negative feedback from review, and to engage in (be asked to do?) more peer review. There are different ways of understanding peer review: as enforcer (for example, of particular norms), networker, gatekeeper (of one particular journal), and/or mentor.

Ada and Affective Labor. Roopika Risam, Salem State University. ADA and the peer review process intervenes in scholarly systems, but is at risk particularly because of that. Risam talks about an experience drawing on theory from the margins: journal editors for a journal with a more experimental peer review process decided to shift from post-publication review to the traditional peer review process. Generosity in peer review is not the same as being ‘nice’: it’s about the level of engagement in the process. It means that the community takes seriously the project that the author is engaged in, rather than what they think the author should be doing. This means that the community has developed and perpetuated a set of norms. Even when editors are advising authors that their text is not ready for publishing, they are kind. Too often, ‘rigor’ has been set up as opposing kindness. This kind of peer review presents a challenge to the masculinist mode of academic production: it’s collectivist rather than individualist, seeing knowledge as an open system rather than a closed hierarchy. How can we look at the intersection of rigor and kindness? Scholarship is more rigorous when it makes its multiple genealogies visible, writing voices which have been made invisible back into academia.

Carol Stabile, in beginning discussion, prompted us to read Toward a Zombie Epistemology by Deanna Day, asking whether we should be considering a nonreproductive (or even antireproductive) approach to academia: one not concerned with leaving behind a specific legacy, either institutional or theoretical. Radhika’s answer was very much in line with my thinking on this: that in trying to rethink our approach not only to academia but also to mothering, she (and I) want to think of mothering not as a process of reproducing ourselves, but as a way of making space for children (and students, and colleagues) to be their own people. Thinking about the important challenges and prompts that (re)reading Revolutionary Mothering, The Argonauts, and more informal conversations with the many amazing people I know reflecting on their parenting experiences, have given me, I’d add that it’s also important to consider the ways in which feminist practices of peer review (and academia more generally), should not only not be about reproducing ourselves, but should be about allowing ourselves to be changed.

There was also some excellent discussion about the role of institutions (like the committees that evaluate promotions and tenure), and citation practices. In a response to a question about how to balance attempts to create change against the requirements of tenure, Carol and Sarah spoke on the importance of joining evaluation panels, both to get a better understanding of how they work and to intervene in them. Sarah notes that when we’re forced to write and research more quickly, it can be hard to find sources to draw on beyond the standard offerings. (I’ve particularly noted this myself: after managing not to cite any men, I think, in my last publication before giving birth, my writing since returning to work has relied far more heavily on the most well-known literature.) Sarah prompts peer reviewers to actively consider the breadth of sources that research draws on.

Love, Sex, Friendship: LGBTQ Relationships and Intimacies
Lover(s), Partner(s), and Friends: Exploring Privacy Management Tactics of Consensual Non-Monogamists in Online Spaces. Jade Metzger, Wayne State University. In 1986 a researcher surveyed around 3,000 people, and found that 15-28% of that population didn’t define themselves as monogamous, and more recent research has also found that many young people don’t define themselves as strictly monogamous. Consensual non-monogamy is often stigmatised. How do we understand disclosure of consensual non-monogamy? Metzger notes that one of the main researchers in this area doesn’t engage in consensual non-monogamy herself. Metzger’s research, which included open-ended interviews and self-disclosure, found that self-disclosure varied, including ‘keeping it an open secret’, using ambiguous terms (like ‘friend’ or ‘partner…s’), or using terms open to interpretation (‘cuties’, ‘comets’, ‘cat’). Reasons cited for privacy included family disapproval, repercussions at work, harm to parental custody, and general discomfort. Privacy is often negotiated at the small-group community level: self-disclosure often implicates others. For some, social media is a risk that has to be navigated carefully: blocking family, for example, or using multiple accounts. Often, it can be hard not to be connected online: it can be painful to not be able to acknowledge people important to you online. Some sites don’t allow you to list multiple partners, embedding heteronormativity into their structure. We need to see privacy as negotiated at the community level (as opposed to individually, as many neoliberal approaches to privacy understand it). The transparency of networks on social media places risks and burdens on those wanting (or needing) to remain private.

Does Gender Matter? Exploring Friendship Patterns of LGBTQ Youth in a Gender-Neutral Environment. Traci Gillig, USC Annenberg, Leila Bighash, USC – Annenberg School for Communication and Journalism. Gender is not a binary, but we constantly encounter spaces structured by the social gender binary, and gender stereotypes. Gender is a major driver of peer relationships among youth, including LGBTQ people. This research looked at the Brave Trails LGBTQ youth camp, which is gender neutral. Gillig and Bighash found that here, where students weren’t separated out by gender, friendship groupings didn’t cluster by gender.

Hissing and Hollering: Performing Radical Queerness at Dinner. Greg Niedt, Drexel University. The word ‘radical’ is often seen as a confrontational challenge to the mainstream, which is certainly a part of it. But radical queerness can also be about more quiet, everyday moments of queerness: the queer ordinary. In discussing radical queer ‘family dinners’, there is an act of radical queerness to reconstituting family as chosen family. Radical Faeries came out of activism in the 1970s, borrowing – or appropriating – from various forms of paganism and spirituality. Harry Hay was particularly central (and some of his statements about what it means to be queer are kind of what you might expect from a relatively privileged white man). Existing research is limited, and focuses on the high ritual and performativity. Niedt focuses, instead, on weekly fa(e)mily dinners in Center City Philadelphia. The research methodology drew on Dell Hymes (1974).

Music in Queer Intimate Relationships. Marion Wasserbauer, Universiteit Antwerpen. Tia DeNora discusses music as a touchstone of social relations, but there’s a dearth of biographical analysis in sociological studies of music consumption. Wasserbauer talked about one interview in which a 44-year-old woman tracked the entanglement of her relationship with music, and how after the breakup she’d never experienced music again. Another 27-year-old woman, who mostly enjoyed classical and 1920s music, found herself almost crying at a Bryan Adams concert she attended because a woman she was in a relationship with loved him so much.

I rounded out the day at an excellent panel with Maria Bakardjieva, Jakub Macek, Alena Macková, and Monika Metykova (I think – the last two were not listed in the program), discussing attacks on media and political freedoms in the Czech Republic, Hungary, and Bulgaria. Metykova outlined the incredibly worrying range of attacks on independent press and political opposition in Hungary (some of which are outlined here), noting that these have been legal and difficult to fully track, let alone resist. Because there was a small audience (the last panel on the last day sadly often suffers), it was more of a discussion and I didn’t take notes in the panel, but I strongly encourage you to follow up the speakers’ work – and the situation in Central and Eastern Europe. It was a bit strange to me that ICA as an institution did little to address the specific situation of communications in the Czech Republic – the odd floating ‘placelessness’ of Western-centric academia (with numerous panels addressing US politics).

Cory DoctorowTalking Walkaway, anarchism, social justice and revolution with The Final Straw Radio

I recorded a great interview (MP3) about my novel Walkaway and how it fits into radical politics; a free, fair and open internet; the Nym Wars, parenting, and insurgency.

Worse Than FailureCodeSOD: Classic WTF: Quantum Computering

When does anything but [0-9A-F] equal "2222"? Well, it's a holiday in the US today, so take a look at this classic WTF where that's exactly what happens… -Remy

A little while back, I posted a function that generated random hexadecimal-like strings for a GUID-like string to identify events. At first, I thought it (and the rest of the system that Taka's company purchased) was just bad code. But now that I look at it further, I'm stunned at its unbelievable complexity. I can honestly say that I've never seen code that is actually prepared to run on a quantum computer, where binary just isn't as simple as 1's and 0's ...

Function hex2bin(hex)
  Select Case hex
    Case "0"
      hex2bin = "0000"
    Case "1"
      hex2bin = "0001"
    Case "2"
      hex2bin = "0010"
    Case "3"
      hex2bin = "0011"
    Case "4"
      hex2bin = "0100"
    Case "5"
      hex2bin = "0101"
    Case "6"
      hex2bin = "0110"
    Case "7"
      hex2bin = "0111"
    Case "8"
      hex2bin = "1000"
    Case "9"
      hex2bin = "1001"
    Case "A"
      hex2bin = "1010"
    Case "B"
      hex2bin = "1011"
    Case "C"
      hex2bin = "1100"
    Case "D"
      hex2bin = "1101"
    Case "E"
      hex2bin = "1110"
    Case "F"
      hex2bin = "1111"
    Case Else
      hex2bin = "2222"
  End Select
End Function

The library codefiles for this system have plenty of other ultra-advanced functions. We'll have to explore these another day, but I will leave you with this method of handling quantum hexadecimal ...

Function hex2dec(hex)
  Select Case hex
    Case "0"
      hex2dec = 0
    Case "1"
      hex2dec = 1
    Case "2"
      hex2dec = 2
    Case "3"
      hex2dec = 3
    Case "4"
      hex2dec = 4
    Case "5"
      hex2dec = 5
    Case "6"
      hex2dec = 6
    Case "7"
      hex2dec = 7
    Case "8"
      hex2dec = 8
    Case "9"
      hex2dec = 9
    Case "A"
      hex2dec = 10
    Case "B"
      hex2dec = 11
    Case "C"
      hex2dec = 12
    Case "D"
      hex2dec = 13
    Case "E"
      hex2dec = 14
    Case "F"
      hex2dec = 15
    Case Else
      hex2dec = -1
 End Select
End Function
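
For contrast, and purely as a hypothetical aside (this is not from Taka's system, which is classic VBScript): in any language with ordinary base conversion built in, both of these functions collapse into one-liners, and invalid input raises an error instead of inventing a fourth quantum state. A sketch in Python:

def hex2bin(digit):
    # One hex digit to four binary digits, e.g. "A" -> "1010".
    return format(int(digit, 16), "04b")

def hex2dec(digit):
    # One hex digit to its decimal value, e.g. "F" -> 15.
    return int(digit, 16)

# Anything outside [0-9A-Fa-f] raises ValueError here,
# rather than quietly becoming "2222" or -1.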
[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

Planet DebianMichal Čihař: New projects on Hosted Weblate

Hosted Weblate also provides free hosting for free software projects. The hosting request queue has grown too long and has been waiting for more than a month, so it's time to process it and include new projects. I hope that gives you good motivation to spend the Christmas break translating free software.

This time, the newly hosted projects include:

  • ASS Danmaku - firefox extension for downloading danmaku
  • KeePass DX - KeePass implementation for Android
  • Liberapay Everywhere - the official Liberapay browser extension
  • Offline QR Code Generator - browser add-on allowing you to quickly generate a QR code
  • Guake - drop-down terminal for GNOME
  • Planner - Project and Task manager designed for elementary OS
  • EasySSH - The SSH connection manager to make your life easier.
  • Misskey - A Twitter-like SNS
  • Starke Verben - Android Application to learn your strong verbs
  • FMIT - Free Music Instrument Tuner
  • Crappy Firetitle - A firefox extension allowing customisation of windows' title
  • Piano Booster - a MIDI file player that displays the musical notes AND teaches you how to play the piano

If you want to support this effort, please donate to Weblate; recurring donations are especially welcome to keep this service alive. You can do that easily on Liberapay or Bountysource.

Filed under: Debian English SUSE Weblate

Planet DebianRuss Allbery: Slightly excessive haul

Okay, yes, I've been buying a lot of books. I seem to be ensuring the rate of acquisition outpaces the rate of consumption even though I've gotten back into reading daily on the train. Well, also, Diane Duane had a crisis and therefore a book sale, so I picked up a whole pile of stuff that I don't have an immediate intention to read.

I still have three written but not yet posted reviews to go through, which are waiting on me finding time to do some editing, and another finished book to review. I'm finishing more non-fiction lately than fiction, possibly due to a less-than-ideal choice of the fiction book for my mood.

The problem with non-fiction is that non-fiction authors keep recommending other interesting-sounding books to read!

Ilona Andrews — On the Edge (sff)
Catherine Asaro (ed.) — Irresistible Forces (sff anthology)
Isaac Asimov (ed.) — The New Hugo Winners (sff anthology)
Fredrik Backman — Beartown (mainstream)
Steven Brust — Good Guys (sff)
Steven Brust & Skyler White — The Skill of Our Hands (sff)
Jo Clayton — Skeen's Leap (sff)
Jo Clayton — Skeen's Return (sff)
Jo Clayton — Skeen's Search (sff)
Diane Duane — So You Want to Be a Wizard (sff)
Diane Duane — Deep Wizardry (sff)
Diane Duane — High Wizardry (sff)
Diane Duane — A Wizard Abroad (sff)
Diane Duane — The Wizard's Dilemma (sff)
Diane Duane — A Wizard Alone (sff)
Diane Duane — Wizard's Holiday (sff)
Diane Duane — Wizards at War (sff)
Diane Duane — A Wizard of Mars (sff)
Diane Duane — Tale of the Five (sff)
Diane Duane — The Big Meow (sff)
Charles Duhigg — The Power of Habit (nonfiction)
Max Gladstone — Four Roads Cross (sff)
Max Gladstone — The Ruin of Angels (sff)
Alison Green — Ask a Manager (nonfiction)
Nicola Griffith — So Lucky (mainstream)
Dorothy J. Heydt — The Witch of Syracuse (sff)
N.K. Jemisin — The Awakened Kingdom (sff)
Richard Kadrey — From Myst to Riven (nonfiction)
T. Kingfisher — The Wonder Engine (sff)
Ilana C. Myer — Last Song Before Night (sff)
Cal Newport — Deep Work (nonfiction)
Cal Newport — So Good They Can't Ignore You (nonfiction)
Emilie Richards — When We Were Sisters (mainstream)
Graydon Saunders — The Human Dress (sff)
Bruce Schneier — Data and Goliath (nonfiction)
Brigid Schulte — Overwhelmed (nonfiction)
Rivers Solomon — An Unkindness of Ghosts (sff)
Douglas Stone & Sheila Heen — Thanks for the Feedback (nonfiction)
Jodi Taylor — Just One Damned Thing After Another (sff)
Catherynne M. Valente — Space Opera (sff)

Phew.

You'll notice a few in here that I've already read and reviewed.

The anthologies, the Backman, and a few others are physical books my parents were getting rid of.

So much good stuff in there I really want to read! And of course I've now started various other personal projects that don't involve spending all of my evenings and weekends reading.

,

Planet DebianDominique Dumont: Shutter, a nice Perl application, may be removed from Debian

Hello

Debian is moving away from Gnome2::VFS. This obsolete module will be removed from the next release of Debian.

Unfortunately, Shutter, a very nice Gtk2 screenshot application, depends on Gnome2::VFS, which means that Shutter will be removed from Debian unless this dependency is removed from shutter. This would be a shame, as Shutter is one of the best screenshot tools available on Linux and one of the best looking Perl applications. And its popularity is still growing.

Shutter also provides a way to edit screenshots, for instance to mask confidential data. This graphical editor is based on Goo::Canvas which is already gone from Debian.

To be kept on Debian, Shutter must be updated:

  • to use Gnome GIO instead of Gnome2::VFS
  • to use GooCanvas2 instead of Goo::Canvas
  • maybe, to be ported to Gtk3 (that’s less urgent)

I’ve done some work to port Shutter to GIO, but I need to face reality: Maintaining cme is taking most of my free time and I don’t have the time to overhaul Shutter.

To view or clone the code, you can either:

See also the bug reports about Shutter problems on the Ubuntu bug tracker

I hope this blog post will help find someone to maintain Shutter…

All the best.


Planet DebianEvgeni Golov: Building Legacy 2.0

I've recently read an article by my dear friend and colleague @liquidat about using Ansible to manage RHEL5, and promised him a nice bashing reply.

Background

Ansible, while being agent-less, is not interpreter-less and requires a working Python installation on the target machine. Up until Ansible 2.3 the minimum Python version was 2.4, which is available in EL5. Starting with Ansible 2.4 this requirement has been bumped to Python 2.6 to accommodate future compatibility with Python 3. Sadly Python 2.6 is not easily available for EL5 and people who want/need to manage such old systems with Ansible have to find a new way to do so.

First, I think it's actually not possible to effectively manage a RHEL5 system (or any other legacy/EOL system). Running ad-hoc changes in a mostly controlled manner - yes, but not fully managing them. Just imagine how much cruft might have been collected on a system that was first released in 2007 (that's as old as Debian 4.0 Etch). To properly manage a system you need to be aware of its whole lifecycle, and that's simply not the case here. But this is not the main reason I wanted to write this post.

Possible solutions

liquidat's article shows three ways to apply changes to an EL5 system, which I'd like to discuss.

Use the power of RAW

Ansible contains two modules (raw and script) that don't require Python at all and thus can be used on "any" target. While this is true, you're also losing about every nice feature and safety net that Ansible provides you with its Python-based modules. The raw and script modules are useful to bootstrap Python on a target system, but that's about it. When using these modules, Ansible becomes a glorified wrapper around scp and ssh. With almost the same benefits you could use that for-loop that has been lingering in your shell history since 1998.
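
To make that concrete, here is a minimal, purely illustrative sketch of the "bootstrap with raw, then switch to real modules" idea, driven from Python through the ansible CLI (the inventory group name and the python26 package are my assumptions, not something from liquidat's article):

import subprocess

def bootstrap_python(group="el5"):
    # The raw module runs a plain command over SSH and needs no Python on the target.
    subprocess.run(
        ["ansible", group, "-m", "raw", "-a", "yum -y install python26"],
        check=True,
    )
    # Afterwards, point ansible_python_interpreter at /usr/bin/python26
    # (e.g. in the inventory) so the regular Python-based modules can run.

if __name__ == "__main__":
    bootstrap_python()

That single bootstrap step is about all raw is good for; everything after it should go back through proper modules.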

Using Ansible for the sake of being able to say "I used Ansible"? Nope, not gonna happen.

Also, this makes all the playbooks that were written for Ansible 2.3 unusable and widens the gap between the EL5 systems and properly managed ones :(

Upgrade to a newer Python version

You can't just upgrade the system Python to a newer version in EL5 - too many tools expect it to be 2.4. But you can install a second version, parallel to the current one.

There are just a few gotchas with that:

1. The easiest way to get a newer Python for EL5 is to install python26 from EPEL. But EPEL for EL5 is EOL and does not get any updates anymore.
2. Python 2.6 is also EOL itself, and I am not aware of any usable 2.7 packages for EL5.
3. Even if you get Python 2.6 working, what about all the libs that you might need for the various Ansible modules? The system ones will almost certainly not work with 2.6.
4. (That's my favorite) Are you sure there are no (init) scripts that check for the existence of /usr/bin/python26 and execute the code with that instead of the system Python? Now see 3, 2 and 1 again. Initially you said "but it's only for Ansible", right?
5. Oh, and where do you get approval for such a change to production systems anyways? ;)

Also, this kinda reminds me of the "Python environment" XKCD:

XKCD: Python Environment

Use Ansible 2.3

This is probably the sanest option available. It does not require changes to your managed systems, nor does it limit you (much) in what you can do in your playbooks.

If only Ansible 2.3 was still supported and getting updates…

And yet, I still think that's the sanest solution available. Just make sure you don't use any modules that communicate with the world (which includes the dig lookup!) and only use 2.3 on an as-needed basis for EL5 hosts.

Conclusion

First of all, please get rid of those EL5 systems. The Extended Life-cycle Support for them ends in 2020, and nobody even talks about support for the hardware they're running on. Document the costs and risks those systems are bringing into the environment and get the workloads migrated, please. (I wrote "please" twice in one paragraph, it must be really important.)

I called this post "Building Legacy 2.0" because I fear that's a recurring pattern we'll be seeing. On the one hand legacy systems that need to be kept alive. On the other the wish (and also pressure) to introduce automation with tools that are either not compatible with those legacy systems today or won't be tomorrow as the tool develop much faster than the systems you control using them.

And by still forcing those tools into our legacy environments, we just add more oil to the fire. Instead of maintaining that legacy system, we now also maintain a legacy automation stack to pseudo-manage that legacy system. More legacy, yay.

Sky CroeserICA18, Day 3: activism, subalterns, more activism, post/colonial imaginations, and cultural symbols

Activism and Social Media
Mamfakinch: From Protest Slogan to Mediated Activism. Annemarie Iddins, Fairfield University. [CN: rape.]
Iddins argues that the digital must be understood as part of a network of different media – the Mamfakinch collective only makes sense as a response to the limitations of the Moroccan media (which combines strong state influence with neoliberal tendencies). Morocco’s uprising, referred to as M20, used “Mamfakinch” (no concessions) as a slogan. Mamfakinch was developed as a citizen media portal, modelled on Nawaat. M20 was largely focused on reform of the existing political system. Protests were mostly planned online. The collective moves effectively between on and offline locations, supporting some campaigns and sparking others. Amina Filali was a 16-year-old who swallowed rat poison after marrying her rapist. Protests took place in physical space and online to change the laws, and nearly two years after Filali’s death the laws that allowed rapists to escape prosecution if they married those they’d raped were changed. Mamfakinch was closed in 2014 after a government-backed spyware attack and a loss of momentum. Founders started the Association for Digital Rights (ADN), which is still attempting to register as an organisation. What began as an attempt to establish a viable opposition in Morocco has resulted in a restructuring of the norms of how Moroccans interact with power.

The Purchase of Witnessing in Human Rights Activism. Sandra Ristovska, University of Colorado Boulder. Witnessing is often associated with notions of ‘truth-telling’: this paper maps out two different modes of witnessing. Witnessing an event: bearing witness for historical and ethical reasons. Today, we see a shift towards witnessing for a purpose. This second mode means that witnessing is very much shaped by a sense of strategic framing for a particular audience. If your end-goal is to appeal to a public audience, or a court, the imperatives are different: do you focus on a particular aesthetic, or on making sure that you get key details (such as badge numbers of police, or landmark shots to show where an event takes place)? The push towards shaping witnessing towards particular audiences and institutional contexts can constrain, or even silence, the voices of activists. Activists may feel they can’t let their own passion, or own voice, speak through as they attempt to meet institutional needs to be heard.

Citizen Media and Civic Engagement. Divya C. McMillin, University of Washington – Tacoma. This research examined the conditions that support particular forms of mobilisation and engagement on the ground: how do movements endure, and how do grassroots movements reclaim local spaces? There were two local case studies of grassroots tourism efforts which aim to preserve heritage and promote eco-friendly environments: Anthony’s Kolkata Heritage Tours, and Native Place in Bangalore. McMillin draws on Massey’s understanding of place as not already-existing, but as becoming – place is transformed by use. Indian cities are changing massively, with seven major Indian cities targeted for “megacity” or “smart city” development which makes them sites of urgent struggle for those living there. Using translation as a theoretical framework allows us to understand negotiations within the global economy: a translation of meaning through the opportunities of encounter. The way in which a space is translated into a place of consumption can also work to reclaim places in ways that the government doesn’t facilitate.

Whose Voices Matter? Digital Media Spaces and the Formation of New Publics in the Global South
What Happens When the Subaltern Speaks?: Worker’s Voice in Post-Socialist China. Bingchun Meng, London School of Economics. It is important to emphasise the class dimension of how we understand the subaltern. Chinese migrant workers can be understood as the subaltern (drawing on Sun 2014). The Hukou system divides and discriminates against the rural population. There is a concentration of symbolic resources and an exercise of epistemic violence, with the marginalisation of migrant workers within China. Migrant workers are represented as the other: the looming spectre of social slippage for the children of middle-class urban people, a force for social instability that needs to be contained. Xu Lizhi’s poetry explores the experiences of migrant workers (he committed suicide while working for Foxconn). Fan Yusu’s writing is, however, more well-known within China, and some is available in English translation. She’s in her mid-40s, from rural Hubei, and works in Beijing as a domestic helper. Her writing draws extensively on Chinese literary tradition, and demonstrates a strong egalitarian view. Responses to her writing have included an outpouring of sympathy from the urban middle-class (which positions the subaltern as disadvantaged); warnings from urban elites against mixing literary criteria with moral judgement (seeing the subaltern as uneducated); and criticism of Fan’s writing about her employer (seeing the subaltern as ungrateful). Fan Yusu’s responses to journalists are not always what they expect: for example, she refuses the valuing of intellectual over physical work.

Social Media and Censorship: the Queer Art Exhibition Case in Brazil. Michel Nicolau Netto, State University of Campinas, and Olívia Bandeira, Federal University of Rio de Janeiro. [CN: homophobia.] Physical violence cannot be understood if we don’t take into account symbolic violence. As an emblematic example, we see the murder of Marielle Franco, which can be understood as a violent response to seeing the subaltern voice start to be valued. This research looks at the Queermuseum Art Exhibition. After the exhibition opened, a man visited wearing a shirt reading “I’m a sexist, indeed”, and recorded a video calling visitors names such as “perverted” and “pedophile” – he shared this on a right-wing Facebook group (“Free Brazil Movement”). After this was further shared, the Santander bank hosting the exhibition cancelled it. Posts about the exhibition were then shared even more widely: right-wing groups were empowered by their success. Most-shared posts in Brazil are disproportionately those from the right wing. The bank’s actions can be seen as a way of supporting the extension of neoliberalism in Brazil, via the strengthening of right-wing extremism.

Sound Clouds: Listening and Citizenship in Indian Public Culture. Aswin Punathambekar, University of Michigan, Ann Arbor.
This paper examines the centrality of sound in conveying voice. Sound technologies and practices serve as a vital infrastructure for political culture. The sonic dimensions of the digital turn have received comparatively little attention. This work disagrees with Tung-Hui Hu’s claims that the prehistory of the cloud is one of silences [I may have misunderstood this], focusing on Kolaveri – a song which was widely shared and remixed. Kolaveri became a sonic text that sparked discussion of inequality, violence, and caste.

Selfies as Voice?: Digital Media, Transnational Publics and the Ironic Performance of Selves. Wendy Willems, London School of Economics and Political Science. African digital users are often seen as being on the other side of the digital divide, not contributing to digital culture. This research looks at responses to boastful selfies from a Zimbabwean businessman, Philip Chiyangwa, mostly in Shona and aimed at discussion within the Zimbabwean diaspora (rather than aimed at an external public). There’s an online archive of 3000 images – often playful and ironic selfies and videos exploring the idea of zvirikufaya (“things are fine”). Discussions between diasporic and home-based Zimbabweans played with the history of colonisation, and reinforced or subverted the idea that diasporic Zimbabweans take on demeaning work overseas (for example, a woman in Australia filming herself being served in a cafe by a white man). Willems is keen to situate discussions of the transnational within a particular historical context, and to shift from ‘flowspeak’ to thinking more about mediated encounters. Diasporas can be seen as fundamentally postcolonial, understanding shifts as being responses specifically to the impacts of colonisation (“we are here because you were there” – A. Sivanandan). How do we understand the role of digital media in transnationalising publics?

Digital Constellations: The Individuation of Digital Media and the Assemblage of Female Voices in South Korea. Jaeho Kang, SOAS, University of London. We need to go beyond the limitations of ‘network’ theory, which reduces the social world to ‘actor-constellations’. One alternative is to understand protests in terms of assemblages of social individuals: non-conscious cognitive assemblages, collective individuation, the connective action of affect, and non-representative democracy.

In the response, Nick Couldry invited us to think more about the metaphors around sound, including not only the sonic resonance, but also interference. We also need to think about the ways in which the theoretical language that we use reinforces neoliberal values, rather than subverting them.

Hashtag Activism
#BlackLivesMatter and #AliveWhileBlack: A Study of Topical Orientation of Hashtags and Message Content. Chamil Rathnayake, Middlesex University, Jenifer Sunrise Winter, University of Hawaii at Manoa, and Wayne Buente, University of Hawaii at Manoa. The use of hashtags can be seen within the context of collective coping, which can increase resiliency (while not necessarily leading to political change).

The Voices of #MeToo: From Grassroots Activism to a Viral Roar. Carly Michele Gieseler. Tarana Burke’s original goals for the #metoo mission can be seen as largely silenced (or pushed aside) as the roar grew around the hashtag, echoing broader patterns in white feminism. Outrage is selectively deployed – the wall between white women and Black women within feminism isn’t new, but perhaps the digital space can do something to change it. We need to think about the ways in which white feminisms within academia have ignored or appropriated the work of women of colour. Patricia Hill Collins talks about the painstaking process of collecting ideas and experiences of thrown-away Black women, even when these women started the dialogue.

Voice, Domestic Violence, and Digital Activism: Examining Contradictions in Hashtag Feminism. Jasmine Linabary, Danielle Corple, and Cheryl Cooky, Purdue University. This research looks at #WhyIStayed or #WhyILeft within a postfeminist lens, supplementing data gathered online with interviews. This research highlighted the importance of inviting voice (opening spaces for sharing experiences – but with a focus on the individual, which often led to victim-blaming); multivocality (with openings for a multitude of identities – but this also opened up the conversation for trolling and co-opting); immediacy in action (which allows responses to current events); and the creation of visibility around domestic violence (unfortunately often neglecting broader structural context). Looking at these hashtags with reference to postfeminist contradictions allows an understanding both of how they were important for those participating, and of the limitations of the focus on the individual.

Women’s Voices in the Saudi Arabian Twittersphere. Walaa Bajnaid, Einar Thorsen, and Chindu Sreedharan, Bournemouth University. This research focuses on women’s resistance to the system of male guardianship, asking how Twitter facilitated cross-gender communication during the campaign. Women’s tweets connected online and offline mobilisation, for example by posting videos of themselves walking in public unaccompanied. Protesters actively tried to keep the hashtag trending, and to gain international attention. Tweets from male opponents attempted to defend the status quo by derailing the campaign, accusing the protesters of being atheists and/or foreign agents trying to destabilise Saudi Arabia. Men frequently seemed hesitant to support the campaign to end male guardianship.

The Mediated Life of Social Movements: The Case of the Women’s March. Katarzyna Elliott-Maksymowicz, Drexel University. This research draws on the literature on new social movement theory, collective identity, and visuality in social movements. Changing dynamics of hashtags and embedded images is a useful way of understanding how the movement changed over time.

Colonial Imaginations, Techno-Oligarchs, and Digital Technology
(The discussion here was interesting and important, but I struggled a bit to take good notes given the flow of the format. Please excuse the especially fragmentary notes gathered under each presenter, as that seemed easier than taking notes following the flow of discussion.)

[Correction: I initially attributed Payal Arora’s excellent prompts to discussion to Radhika Gajjala.]

Discussant: Payal Arora, Erasmus University Rotterdam
We have to remember that colonial theory is buried in different areas, including development discourse. It’s also important that ‘the margins’ aren’t always positive – the extreme right were also once on the margins (though they are being brought to the centre in many places, including Brazil). Is identity politics toxic to our cause, or should we be leveraging aspects of it? When we talk about visibility in the Global South, we largely celebrate it (“They’ve gained visibility! They’re speaking for themselves!”), without recognising the complicated nature of different identities within nations. There’s a lot of talk about data activism and data justice – we need to also look at data resistance. How do we conceptualise resistance in a broader way without moralising it? We also need to think not just about values in design, but also about who the curators of design are (and how they are embedded within particular territorial spaces and power structures). We also need to think about who is operationalising design.

Digital Neo-Colonization: A Perspective From China, Min Jiang, University of North Carolina – Charlotte.
Min Jiang talks about the challenge of working out: is China the colonised, or the coloniser? Looking at the role of large digital companies, we could see Google as colonising China…but also see Chinese companies as having largely replaced Google now, and as colonising Africa. China has its own colonial history. In China today, there’s been so much crackdown on resistance: colleagues in China working in journalism are forbidden from even mentioning the word resistance.

Islamic State’s Digital Warfare and the Global Media System, Marwan M. Kraidy, Annenberg, University of Pennsylvania
North American white supremacists use digital technologies to mess around with spatial perceptions. Social media platforms are working in tandem with all kinds of techniques of spatial control and surveillance. There’s something about the ways in which these platforms claim innocence from the kinds of feelings that they spark, and we shouldn’t release them from responsibility. Kraidy notes the environmental, social, and economic issues tied up in the ways that data works, using data centres that need to be air-conditioned as an example.

Non-Spectacular Politics: Global Social Media and Ideological Formation, Sahana Udupa, LMU Munich
We need to understand not just intersectional oppression, but also nested inequalities, and the ways in which the digital has led to increased expressions of nationalism. A decolonial approach requires that we recognise the resurgence of previous forms of racism. Is digital media just a tool for discourses of racism and neonationalism that exist outside it? Udupa argues that we should see digital media cultures as inducing effects on users themselves. In India, Facebook is having a huge (but largely invisible) impact on politics. For example, the BJP uses data extensively in crafting particular political narratives.

Decolonial Computing and the Global Politics of Social Media platforms; Wendy Willems, London School of Economics and Political Science.
A decolonial approach means bringing back in structures, and seeing colonisation as fundamental (rather than additive) to processes of identity formation. It resists claims to speak ‘from nowhere’, and helps us to understand the global aspects of platforms. How might we understand the colonisation of digital space by platforms, including the extraction of data? These platforms are positioned as beneficial (‘connecting the unconnected’) – Willems mentions Zuckerberg visiting Africa in shorts and a t-shirt, the image of white innocence this portrays. There’s a challenge around provoking more discussion of these platforms in Africa. There’s a discussion of Internet shut-downs – the state is being seen as the enemy as it shuts down particular services, but we’re not turning the same critical eye on the platforms themselves. She also distinguished between the use of digital media in resistance, and resistance to digital media and datafication itself – there’s been less of the latter. In South Africa, there was #datamustfall in the wake of #RhodesMustFall (focusing on the costs of accessing digital media, rather than contesting platforms themselves). Operators are crucial gatekeepers in accessing the Internet – we need to look at the relationship between operators, platforms, and the state.

Media Representation of Cultural Symbols, Nationalism and Ethnic and Racial Politics
Framing the American Turban: Media Representations of Sikhs, Islamophobia, and Racialized Violence. Srividya Ramasubramanian and Angie Galal, Texas A&M University.
Sikhism is the fifth largest religion in the world. There have been several waves of Sikh immigration to the US, with various degrees of control. There’s a history of hate crimes against Sikhs in the US, but disaggregated data only began to be collected (by the FBI) in 2015. Anti-Sikh views, and violence, are tied to the othering and dehumanization of Muslims. There’s a long history of negative portrayals of Sikhs (tangled in with Hindus and Muslims) before 9/11. Going on from this research, it’s also important to look at how Sikhs are resisting negative media portrayals. This research located three key moments of rupture in US media portrayals: 9/11, the Wisconsin shootings, and the Muslim Ban/Trump era.

Selfie Nationalism: Twitter, Narendra Modi, and the Making of a Symbolically Hindu/Ethnic Nation. Shakuntala Rao, SUNY, Plattsburgh.
Modi’s use of Twitter has been seen as particularly strategic, with extensive use of selfies. He always presents himself as someone who can speak to the layperson as “I”. Rao’s methods involve reading, rather than quantifying, tweets, including replies. For example, as soon as Modi starts ‘praying’ online, people upload videos of him praying. He tweets in seven languages (using local languages when he travels), but mostly a combination of Gujarati, Hindi, and English. He portrays himself as a Hindu god – some people talk about the ‘banalisation of Hindutva’. Part of this is portraying “every Indian” as special. ‘Selfie Nationalism’ has four characteristics: Modi’s personification of a symbolic self (and driven by him, not others); a rejection of plural religious/cultural narratives of India; a discourse with a short shelf life driven by optics, as in the frequent launch of new policy initiatives (which are then discarded); and less concern with media access and more with media use.

Representing the Divine Cow: Indian Media, Cattle Slaughter and Identity Politics. Sudeshna Roy, Stephen F. Austin State University. What are the discursive strategies used to generate, resist, sustain, or reify discourses of Hindu nationalism surrounding the Divine Cow? Modi has had a lifelong association with the Hindu nationalist organisation RSS. He has been providing the conditions to support the growth of violent identity politics. In 2014, as Gujarat chief minister, he started attacking the beef export industry. In 2017 he instituted a ban against small-time Muslim and low-caste Dalit leather-workers. Some low-caste Dalit Hindus do eat beef. Roy notes that while we commonly understand culture as private, our common associations and larger context shape how we understand culture. There have been several cases of Hindu mobs murdering Muslim people for (allegedly) eating beef. Newspaper articles on these events frequently refer to the ceremonial, ritual, and religious roles of the cow, including its sanctity and ahimsa (harmlessness), and to pastoral Krishna. There is, however, no monolithic adherence to the sanctity of the cow for Hindus. There’s a forced conflation of private and public culture in the media’s coverage of the symbolic cow. Hindutva is being presented as a way of life.

Do We Truly Belong: Ethnic and Racial Politics of Post-Disaster News Coverage of Puerto Rico. Sumana Chattopadhyay, Marquette University. In surveys, only a slim majority of people in the mainland US knew that Puerto Ricans are American citizens. However, they can’t vote in the national elections, because they’re not represented in the Electoral College. US mainstream media coverage of Hurricane Maria in Puerto Rico was like its coverage of foreign countries.

 

Planet DebianIan Campbell: qcontrol 0.5.6

  • Fix for kernels which have permissions 0200 (write-only) on gpio export device.
  • Updates to systemd unit files.
  • Update to README for (not so) new homepage (thanks to Martin Michlmayr).
  • Add a configuration option in the examples to handle QNAP devices which lack a fan (Debian bug #712841, thanks to Martin Michlmayr for the patch and to Axel Sommerfeldt).

Get it from git or http://www.hellion.org.uk/qcontrol/releases/0.5.6/.

The Debian package will be uploaded shortly.

,

Planet DebianAntoine Beaupré: Diversity, education, privilege and ethics in technology

This article is part of a series on KubeCon Europe 2018.

This is a rant I wrote while attending KubeCon Europe 2018. I do not know how else to frame this deep discomfort I have with the way one of the most cutting edge projects in my community is moving. I see it as a symptom of so many things wrong in society at large and figured it was as good a way as any to open the discussion regarding how free software communities seem to naturally evolved into corporate money-making machines with questionable ethics.

A white male looking at his phone while a hair-dresser prepares him for a video shoot, with plants and audio-video equipment in the background A white man groomed by a white woman

Diversity and education

There is often a great point made of diversity at KubeCon, and that is something I truly appreciate. It's one of the places where I have seen the largest efforts towards that goal; I was impressed by the efforts done in Austin, and mentioned it in my overview of that conference back then. Yet it is still one of the less diverse places I've ever participated in: in comparison, Pycon "feels" more diverse, for example. And then, of course, there's real life out there, where women constitute basically half the population. This says something about the actual effectiveness of diversity efforts in our communities.

a large conference room full of people that mostly look like white male, with a speaker on a large stage illuminated in white 4000 white men

The truth is that contrary to programmer communities, "operations" knowledge (sysadmin, SRE, DevOps, whatever it's called these days) comes not from institutional education, but from self-learning. Even though I have years of university training, the day to day knowledge I need in my work as a sysadmin comes not from the university, but from late night experiments on my personal computer network. This was first on the Macintosh, then on the FreeBSD source code passed down as a magic word from an uncle, and finally through Debian, consecrated as the leftist's true computing way. Sure, my programming skills were useful there, but I acquired those before going to university: even there, teachers expected students to learn programming languages (such as C!) in-between sessions.

A bunch of white geeks hanging out with their phones next to a sign that says 'Thanks to our Diversity Scholarship Sponsors' with a bunch of corporate logos Diversity program

The real solutions to the lack of diversity in our communities come not only from a change in culture, but also from real investments in society at large. The mega-corporations subsidizing events like KubeCon make sure they get a lot of good press from those diversity programs. However, the money they spend on those is nothing compared to tax evasion in their home states. As an example, Amazon recently put 7000 jobs on hold because of a tax the city of Seattle wanted to impose on corporations to help the homeless population. Google, Facebook, Microsoft, and Apple all evade taxes like gangsters. This is important because society changes partly through education, and that costs money. Education is how more traditional STEM sectors like engineering and medicine have changed: women, minorities, and poorer populations were finally allowed into schools after the epic social struggles of the 1970s finally yielded more accessible education. In the same way that culture changes are seeing a backlash, the tide is turning there as well, and the trend is reversing towards more costly, less accessible education, of course. But not everywhere. The impacts of education changes are long-lasting. By evading taxes, those companies are keeping the state from revenues that could level the playing field through affordable education.

Hell, any education in the field would help. There is basically no sysadmin education curriculum right now. Sure, you can follow Cisco CCNA or Microsoft MCSE private trainings. But anyone who's been seriously involved in running any computing infrastructure knows those are a scam: they will tie you down in a proprietary universe (Cisco and Microsoft, respectively) and probably just to "remote hands monkey" positions, and rarely to executive positions.

Velocity

Besides, providing an education curriculum would require the field to slow down so that knowledge would settle down and trickle into a curriculum. Configuration management is pretty old, but because the changes in tooling are fast, any curriculum built in the last decade (or even less) quickly becomes irrelevant. Puppet publishes a new release every 6 months, and Kubernetes, barely 4 years old now, is changing rapidly with a ~3 month release schedule.

Here at KubeCon, Mark Zuckerberg's mantra of "move fast and break things" is everywhere. We call it "velocity": where you are going does not matter as much as how fast you're going there. At one of the many keynotes, Abby Kearns from the Cloud Foundry Foundation boasted about how Home Depot, in trying to sell more hammers than Amazon, is now deploying code to production multiple times a day. I am still unclear as to whether this made Home Depot actually sell more hammers, or if it's something that we should even care about in the first place. Shouldn't we converge on selling fewer hammers? Making them more solid, reliable, so that they are passed down through generations instead of breaking and having to be replaced all the time?

Slide from Kearn's keynote that shows a women with perfect nail polish considering a selection of paint colors with the Home Depot logo and stats about 'speed' in their deployment Home Depot ecstasy

We're solving a problem that wasn't there in some new absurd faith that code deployments will naturally make people happier, by making sure Home Depot sells more hammers. And that's after telling us that Cloud Foundry helped the USAF save 600M$ by moving their databases to the cloud. No one seems bothered by the idea that the most powerful military in existence would move state secrets into a private cloud, out of the control of any government. It's the name of the game, at KubeCon.

Picture of a jet fighter flying over clouds, the logo of the USAF and stats about the cost savings due their move to the cloud USAF saves (money)

In his keynote, Alexis Richardson, CEO of Weaveworks, presented the toaster project as an example of what not to do. "He did not use any sourced components, everything was built from scratch, by hand", obviously missing the fact that toasters are deliberately not built from reusable parts, as part of the planned obsolescence design. The goal of the toaster experiment is also to show how fragile our civilization has become precisely because we depend on layers upon layers of parts. In this totalitarian view of the world, people are also "reusable" or, in that case "disposable components". Not just the white dudes in California, but also workers outsourced out of the USA decades ago; it depends on precious metals and the miners of Africa, the specialized labour of the factories and intricate knowledge of the factory workers in Asia, and the flooded forests of the first nations powering this terrifying surveillance machine.

Privilege

Photo of the Toaster Project book which shows a molten toster that looks like it came out of a H.P. Lovecraft novel "Left to his own devices he couldn’t build a toaster. He could just about make a sandwich and that was it." -- Mostly Harmless, Douglas Adams, 1992

Staying in an hotel room for a week, all expenses paid, certainly puts things in perspectives. Rarely have I felt more privileged in my entire life: someone else makes my food, makes my bed, and cleans up the toilet magically when I'm gone. For me, this is extraordinary, but for many people at KubeCon, it's routine: traveling is part of the rock star agenda of this community. People get used to being served, both directly in their day to day lives, but also through the complex supply chain of the modern technology that is destroying the planet.

An empty shipping container probably made of cardboard hanging over the IBM booth Nothing is like corporate nothing.

The nice little boxes and containers we call the cloud all abstract this away from us and those dependencies are actively encouraged in the community. We like containers here and their image is ubiquitous. We acknowledge that a single person cannot run a Kube shop because the knowledge is too broad to be possibly handled by a single person. While there are interesting collaborative and social ideas in that approach, I am deeply skeptical of its impact on civilization in the long run. We already created systems so complex that we don't truly know who hacked the Trump election or how. Many feel it was hacked, but it's really just a hunch: there were bots, maybe they were Russian, or maybe from Cambridge? The DNC emails, was that really Wikileaks? Who knows! Never mind failing closed or open: the system has become so complex that we don't even know how we fail when we do. Even those in the highest positions of power seem unable to protect themselves; politics seem to have become a game of Russian roulette: we cock the bot, roll the secret algorithm, and see what dictator will shoot out.

Ethics

All this is to build a new Skynet; not this one or that one, those already exist. I was able to pleasantly joke about the AI takeover during breakfast with a random stranger without raising as much as an eyebrow: we know it will happen, oh well. I've skipped that track in my attendance, but multiple talks at KubeCon are about AI, TensorFlow (it's opensource!), self-driving cars, and removing humans from the equation as much as possible, as a general principle. Kubernetes is often shortened to "Kube", which I always think of as a reference to the Star Trek Borg all mighty ship, the "cube". This might actually make sense given that Kubernetes is an open source version of Google's internal software incidentally called... Borg. To make such fleeting, tongue-in-cheek references to a totalitarian civilization is not harmless: it makes more acceptable the notion that AI domination is inescapable and that resistance truly is futile, the ultimate neo-colonial scheme.

Captain Jean-Luc Picard, played by Patrick Stewart, assimilated by the Borg as 'Locutus' "We are the Borg. Your biological and technological distinctiveness will be added to our own. Resistance is futile."

The "hackers" of our age are building this machine with conscious knowledge of the social and ethical implications of their work. At best, people admit to not knowing what they really are. In the worse case scenario, the AI apocalypse will bring massive unemployment and a collapse of the industrial civilization, to which Silicon Valley executives are responding by buying bunkers to survive the eventual roaming gangs of revolted (and now armed) teachers and young students coming for revenge.

Only the most privileged people in society could imagine such a scenario and actually opt out of society as a whole. Even the robber barons of the 20th century knew they couldn't survive the coming revolution: Andrew Carnegie built libraries after creating the steel empire that drove much of US industrialization near the end of the century and John D. Rockefeller subsidized education, research and science. This is not because they were humanists: you do not become an oil tycoon by tending to the poor. Rockefeller said that "the growth of a large business is merely a survival of the fittest", a social darwinist approach he gladly applied to society as a whole.

But the 70's rebel beat offspring, the children of the cult of Job, do not seem to have the depth of analysis to understand what's coming for them. They want to "hack the system" not for everyone, but for themselves. Early on, we have learned to be selfish and self-driven: repressed as nerds and rejected in the schools, we swore vengeance on the bullies of the world, and boy are we getting our revenge. The bullied have become the bullies, and it's not small boys in schools we're bullying, it is entire states, with which companies are now negotiating as equals.

The fraud

A t-shirt from the Cloudfoundry booth that reads 'Freedom to create' ...but what are you creating exactly?

And that is the ultimate fraud: to make the world believe we are harmless little boys, so repressed that we can't communicate properly. We're so sorry we're awkward, it's because we're all somewhat on the autism spectrum. Isn't that, after all, a convenient affliction for people that would not dare to confront the oppression they are creating? It's too easy to hide behind such a real and serious condition that does affect people in our community, but also truly autistic people that simply cannot make it in the fast-moving world the magical rain man is creating. But the real con is hacking power and political control away from traditional institutions, seen as too slow-moving to really accomplish the "change" that is "needed". We are creating an inextricable technocracy that no one will understand, not even us "experts". Instead of serving the people, the machine is at the mercy of markets and powerful oligarchs.

A recurring pattern at Kubernetes conferences is the KubeCon chant where Kelsey Hightower reluctantly engages the crowd in a pep chant:

When I say 'Kube!', you say 'Con!'

'Kube!' 'Con!' 'Kube!' 'Con!' 'Kube!' 'Con!'

Cube Con indeed...

I wish I had some wise parting thoughts of where to go from here or how to change this. The tide seems so strong that all I can do is observe and tell stories. My hope is that the people that need to hear this will take it the right way, but I somehow doubt it. With chance, it might just become irrelevant and everything will fix itself, but somehow I fear things will get worse before they get better.

Krebs on SecurityWhy Is Your Location Data No Longer Private?

The past month has seen one blockbuster revelation after another about how our mobile phone and broadband providers have been leaking highly sensitive customer information, including real-time location data and customer account details. In the wake of these consumer privacy debacles, many are left wondering who’s responsible for policing these industries? How exactly did we get to this point? What prospects are there for changes to address this national privacy crisis at the legislative and regulatory levels? These are some of the questions we’ll explore in this article.

In 2015, the Federal Communications Commission under the Obama Administration reclassified broadband Internet companies as telecommunications providers, which gave the agency authority to regulate broadband providers the same way as telephone companies.

The FCC also came up with so-called “net neutrality” rules designed to prohibit Internet providers from blocking or slowing down traffic, or from offering “fast lane” access to companies willing to pay extra for certain content or for higher quality service.

In mid-2016, the FCC adopted new privacy rules for all Internet providers that would have required providers to seek opt-in permission from customers before collecting, storing, sharing and selling anything that might be considered sensitive — including Web browsing, application usage and location information, as well as financial and health data.

But the Obama administration’s new FCC privacy rules didn’t become final until December 2016, a month after then President-elect Trump was welcomed into office by a Republican controlled House and Senate.

Congress still had 90 legislative days (when lawmakers are physically in session) to pass a resolution killing the privacy regulations, and on March 23, 2017 the Senate voted 50-48 to repeal them. Approval of the repeal in the House passed quickly thereafter, and President Trump officially signed it on April 3, 2017.

In an op-ed published in The Washington Post, Ajit Pai — a former Verizon lawyer and President Trump’s pick to lead the FCC — said “despite hyperventilating headlines, Internet service providers have never planned to sell your individual browsing history to third parties.”

FCC Commissioner Ajit Pai.

“That’s simply not how online advertising works,” Pai wrote. “And doing so would violate ISPs’ privacy promises. Second, Congress’s decision last week didn’t remove existing privacy protections; it simply cleared the way for us to work together to reinstate a rational and effective system for protecting consumer privacy.”

Sen. Bill Nelson (D-Fla.) came to a different conclusion, predicting that the repeal of the FCC privacy rules would allow broadband providers to collect and sell a “gold mine of data” about customers.

Sky CroeserICA18 Day 2: narrating voice, digital media and the body, feminist theorisation beyond western cultures, collective memory, and voices of freedom and constraint

Narrating Voice and Building Self on Digital and Social Media
‘This is Lebanon’: Narrating Migrant Labor to Resistive Public. Rayya El Zein, University of Pennsylvania. This research looks at the calling into being of an ideal political subject through social media. ‘This is Lebanon’ is a platform run by a Nepalese immigrant, Dipendra Upetry, where migrant workers have been sharing stories of labour abuses. The Lebanese system for migrant work is particularly conducive to labour abuses, as workers often have a ‘sponsor’ whom they may also live with. El Zein is looking at how the voices of labourers affect the political imagination around what it means to be Lebanese. ‘This is Lebanon’ inverts a popular tourism hashtag, #thisislebanon, and when Lebanese citizens complain that “this isn’t Lebanon”, Upetry invites them to change working conditions if they want that to be true. The Kafa campaign, run by a Lebanese NGO in coordination with the International Labour Union, shared a series of ads about a young couple trying to decide what the right thing to do is regarding the person doing domestic work for them, imagining change as coming from educated middle-class people who just need guidance. These are ideologically-inflected ideas of politics that position the individual as the mechanism of change.

Instagramming Persian Identity: Ritual Identity Negotiations of Iranians and Persians in/out of Iran. Samira Rajabi, University of Pennsylvania. This research came out of trying to understand why some people refer to themselves as Persians, and others as Iranians. Rajabi looked at how identity is being negotiated on social media, particularly Instagram, which led to exploring the ways in which identity is written on women’s bodies. Many women were part of the Iranian revolution, but they were the first losers after the revolution. Trauma has had a huge impact on how identity is negotiated, and tactical media can be one way to respond to the deep symbolic trauma many people from Iran have experienced.

Hijacking Religion on Facebook. Mona Abdel-Fadil, University of Oslo. This focuses on the Norwegian Cross-Case – a newsreader tried to wear a cross while reading the news, and was told she was in breach of guidelines. There’s a Facebook group: “Yes to wearing the cross whenever I choose”. This is a good case study for understanding identity politics, the role of social media users in amplifying conflicts about religion, modes of performing conflict (and understanding who they are performing to), and the politics of affect. The Facebook group is dominated by conservative Christians who are worried about losing Norway’s Christian heritage; nationalists who see Norwegian identity as inextricably tied to Christianity; humanists (predominantly women) who try to bridge differences; fortified secularists, who argue ferociously, particularly against the nationalists; and ardent atheists (predominantly men), who tend to fan the flames by abusing religious people, then step back. The group is shaped by master narratives that require engagement: that wearing the cross is an act of defiance (often against Muslim attack); that Norwegian cultural heritage is under threat (with compliance from politicians). There’s an intensification and amplification of conflict, including distorting and adding to the original conflict. We need to understand that for some people this is entertainment – an attraction to the tension in the group, and how easy it is to inflame emotions.

Discussion session: Lilie Chouliaraki, in responding, noted the role of trauma and victimhood, inviting speakers to reflect on the role of victimhood and self-victimhood in constituting subjects and identities here. Rajabi noted that trauma requires a different level of response – the stakes are different. But trauma is medicalised: we treat it as something to be dealt with individually rather than politically. Abdel-Fadil is trying to work out how to write about this from a place of vulnerability: how to take seriously the sense of suffering expressed by people who feel that Christianity or Norwegian identity is under threat, while not necessarily accepting that they are actually victims.

Digital Media and the Body


Drawing from Abigail Selzer King

Towards a theory of projectilic media: Notes on Islamic State’s Deployment of Fire. Marwan M. Kraidy, Annenberg, University of Pennsylvania. Kraidy asks why ISIS uses the symbolism of fire so frequently. There’s a distinction between digital images; operative images (for example, drone footage), which are part of an operation; projectilic images (images as weapons); and prophylactic images (which build a sense of safety and security). In ISIS’s symbolism, fire becomes a metaphor for sudden birth and sudden death, for the war machine, and for flames of justice. Speed is essential to the war machine, and to fire. A one-hour ISIS video would have about half an hour of projectilic sequences. ISIS uses a torch as a metaphor for the war machine, and the hearth as a metaphor for the utopian homeland. Fire activates new connections between words and images. Immolation confuses the customary chronology (for example, of beheading videos).

You Have Been Tagged: Incanting Names and Incarnating Bodies on Social Media. Paul Frosh, Hebrew University of Jerusalem. Tagging has become a prevalent technique for circulating images on social media, and serves various purposes for social media platforms (for example, adding more data). Naming and figuration are linked to the life of the self. Names aren’t just linguistic designators – they’re also signifiers of power. Names perform the entanglement of the social subject. Tagging requires a systematic circulation of the name (you must join the platform). Tagging interpellates us as subjects of a particular system, and revitalises the ancient magical power of action at a distance through naming. Tagging is a magical act of germination. Being tagged carries a social weight, prompting us to respond. Tagging sends social signals through others’ images, as opposed to selfies. Tagging goes against the grain of networked selfhood in digital culture, re-centring the body. Tagging is the fleshing out of informational networks.


Selfies as Testimonies of the Flesh. Lilie Chouliaraki, London School of Economics and Political Science. Aesthetic corporeality becomes important when we think about vulnerable bodies. Digital testimonies produced in conflict zones are elements of a broader landscape of violence and suffering. How does the selfie mediate the faces of refugees? What does the remediation of these faces in Western news sites tell us? Three types of images: refugees being photographed while taking selfies; refugee selfies with global leaders; celebrities taking photos as if they were refugees. Chouliaraki notes that refugees taking selfies in Lesbos are celebrating not just having arrived, but also having survived the deadliest sea crossing. Refugee selfies are remediated through a series of disembodiments; their faces are, at best, an absent presence, or, at worst, fully absent.

Feminist Theorizations Beyond Western Cultures
Orientalism, Gender, and Media Representation: A Textual Analysis of Afghan Women in US, Afghan, and Chinese Media. Azeta Hatef, Pennsylvania State University and Luwei Rose Luqiu, Hong Kong Baptist University. This study looks at media representations of women in Afghanistan, thinking about the purposes these images serve in relation to the war on Afghanistan. Media coverage in China is controlled by the government, but soft news is offered a bit more leeway than hard news outlets. Nevertheless, in China mainstream media conveys the same theme: Afghan women oppressed by brown men. Both US and Chinese media portray Afghanistan as backwards, with women’s freedoms entirely limited. While violence against women in Afghanistan is worthy of attention, these media representations operate to amplify distinctions between “us” and “them”, justifying intervention (and failing to recognise the violence done by that intervention).

Production of subject of politics through social media: a practice of Iranian women activists. Gilda Seddighi, University of Bergen. This research looked at an Iranian online network of mourning mothers, drawing on Butler’s conceptualization of politicization. There was a group, “Supporters of Mourning Mothers Harstad”, composed mainly of asylum seekers, connected by Facebook and other mechanisms. Motherhood can be seen here as a source of recognition of political subjects across national borders. The notion of motherhood was expanded to include children beyond their own. Nevertheless, many women interviewed spoke of their activism as apolitical, and belonging to a particular nation-state was taken for granted.

Subject Transformations: New Media, New Feminist Discourses. Nithila Kanagasabai, Tata Institute of Social Sciences. This research attempts to look at new strands of feminism in India, particularly in smaller towns in Tamil Nadu. Work from urban areas has tended to position Women’s Studies as urban, upper-caste, middle-class, English-speaking, online, and speaking for marginalised groups. Students who Kanagasabai interviewed drew on ‘the feminist canon’ (for example, Virginia Woolf, Shulamith Firestone), but also on little magazines – small local literary magazines in regional dialects of Tamil, which previously circulated predominantly among unemployed, educated men. These magazines have shifted to allow women, Dalits, and people from scheduled tribes to express themselves. Little magazines open space for subjectivity, offering a critique of seemingly universal social norms, including casteism and gender roles. Students interviewed mention these magazines alongside sources like Jstor and Economic and Political Weekly, which speaks to the development of new methodologies. Publishing in little magazines (as opposed to mainstream feminist journals) is seen not just as convenient, but also as a political decision. Moving online did not mean that little magazines transcended the local or temporal – readership remains limited and local, but they are still important spaces. Following feminists online has led to a deeper everyday engagement with feminist literature. Lurking needs to be viewed within the framework of collaborative learning, and engagement can happen during key moments. Most students didn’t relate to the title of feminism (which they felt required a particular kind of academic competence), but instead related to women’s studies.

Collective Identities and Memories
Collective Memory Matters: Mobilizing Activist Memory in Autonomous Media. Kamilla Petrick, Sandra Jeppesen, Ellen Craig, Cassidy Croft, & Sharmeen Khan, Lakehead University. Unpaid labour within collectives means that institutional memory isn’t actively shared, but is instead embodied in long-term members (who may leave).


By Király-Seth – Own work, CC BY-SA 4.0, https://commons.wikimedia.org/w/index.php?curid=42295509

Emergent Voices in Material Memories: Conceptualizing Public Voices of Segregated Memories in Detroit. Scott Mitchell, Wayne State University. Detroit’s Eight Mile Wall remains a visible reminder of the city’s history of segregation, also serving as a space of education and hope. The wall was constructed by developers to raise property values for the White area by separating it from Black communities. Grassroots efforts to add a mural have shifted its meaning.

 

Repertoires, Identities, and Issues of Collective Actions of the Candlelight Movements in S. Korea. Young-Gil Chae, Hankuk University of Foreign Studies; Inho Cho, Hanyang University; and Jaehee Cho, Chung-Ang University.

The Mnemonic Black Hole at Guantánamo: Memory and Counter-Memory Digital Practices on Twitter. Muira McCammon, Annenberg School for Communication at the University of Pennsylvania. Guantánamo is often left off maps: Johan Steyn has called it a “legal black hole”. McCammon tried to visit the detainee library at Guantánamo – being unsuccessful, she tried following the Joint Task Force for Guantánamo on Twitter. McCammon asks what some of the mnemonic strategies used on the Twitter feed are. Only images of higher-up command and celebrities are posted. Traces of Guantánamo as a ‘space of exception’ have been deleted (for example, tweets noting the lack of Internet connection). The official ‘memory maker’, when posting on Twitter, can’t escape others’ memory-making (for example, responses to an official tweet about sexual harassment training at Guantánamo which pointed out the tremendous irony). When studying these issues, there are few systematic ways to track and trace digital military memory makers.

The Voice of Silence: Practices of Participation Among East Jerusalem Palestinians. Maya de Vries, Hebrew University of Jerusalem. This research focuses on participation avoidance, for example the boycotting of Facebook over the ways in which it censors Palestinian content, as an active form of resistance. de Vries notes the complexity of power relations in working with Palestinians in East Jerusalem. Interviewees choose not to engage in anything political on Facebook, knowing that it is monitored by the Israeli state. This state monitoring affects their choices around Facebook. There is also kinship monitoring – knowing that family are reading. Self-monitoring also plays a role. One interviewee notes that when she had to put her location down, there was no option for “East Jerusalem, Palestine”. These layers of monitoring mean that Palestinians negotiate their engagement with Facebook cautiously, frequently choosing non-participation.

Voices of Freedom, Voices of Constraint: Race, Citizenship and Public Memory – Then and Now
Selected Research: “The Fire Next Time in the Civil Sphere: Literary Journalism and Justice in America 1963.” Kathy Roberts Forde, Associate Professor, Journalism Department, University of Massachusetts-Amherst. After the end of slavery, new systems were put in place to control Black people and exploit their labour. Black resistance continued, building a vibrant Black public sphere and paving the way for the civil rights movement. James Baldwin wrote that the only thing that White people had that Black people needed was power. White people should not be a model for how to live. White people destroyed, and were destroying, thousands of lives, and did not know it, and did not want to know it. Baldwin’s writing was hugely influential.

Selected Research: Newspaper Wars: Civil Rights and White Resistance in South Carolina, 1935-1965, 2017. Sid Bedingfield, Assistant Professor, Hubbard School of Journalism and Mass Communication, University of Minnesota-Twin Cities. Talks about NAACP leader Roy Wilkins’ 1964 opinion piece complaining about Black youth crime. This had parallels with segregationists’ narratives, and Wilkins had cordial communications with some segregationists. These narratives stripped away historical context and ongoing oppression when covering Black protests and expressions of anger and frustration.

Selected Research: Framing the Black Panthers: The Spectacular Rise of a Black Power Icon, 2017, 2nd edition; Rebel Media: Adventures in the History of the Black Public Sphere, In Progress; Jane Rhodes, Professor and Department Head, African American Studies, University of Illinois at Chicago. Almost everything Rhodes finds in the discourses of the 1960s is still relevant today in discourses of nationalism and race. Stuart Hall argues that each surge of social anxiety finds a temporary respite in the projection of fears onto compellingly anxiety-laden themes – like moral panics about Black people and other racialised others. US coverage of Britain in the 1960s tended to frame Britain as having issues with race, but an unwillingness to deal with it. Meanwhile, the British press seemed to have almost a lurid fascination with racial violence in the US (with an undercurrent of fear for white safety in the US, and subsequently in Britain). Deep-seated anxieties around race and social change aren’t subtle. As Enoch Powell rose to prominence, media seemed to be tangled in debates about whether US or UK racism was worse.

,

CryptogramFriday Squid Blogging: Squid Comic

It's not very good, but it has a squid in it.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramSecurity and Human Behavior (SHB 2018)

I'm at Carnegie Mellon University, at the eleventh Workshop on Security and Human Behavior.

SHB is a small invitational gathering of people studying various aspects of the human side of security, organized each year by Alessandro Acquisti, Ross Anderson, and myself. The 50 or so people in the room include psychologists, economists, computer security researchers, sociologists, political scientists, neuroscientists, designers, lawyers, philosophers, anthropologists, business school professors, and a smattering of others. It's not just an interdisciplinary event; most of the people here are individually interdisciplinary.

The goal is to maximize discussion and interaction. We do that by putting everyone on panels, and limiting talks to 7-10 minutes. The rest of the time is left to open discussion. Four hour-and-a-half panels per day over two days equals eight panels; six people per panel means that 48 people get to speak. We also have lunches, dinners, and receptions -- all designed so people from different disciplines talk to each other.

I invariably find this to be the most intellectually stimulating conference of my year. It influences my thinking in many different, and sometimes surprising, ways.

This year's program is here. This page lists the participants and includes links to some of their work. As he does every year, Ross Anderson is liveblogging the talks. (Ross also maintains a good webpage of psychology and security resources.)

Here are my posts on the first, second, third, fourth, fifth, sixth, seventh, eighth, ninth, and tenth SHB workshops. Follow those links to find summaries, papers, and occasionally audio recordings of the various workshops.

Next year, I'll be hosting the event at Harvard.

Planet Linux AustraliaJonathan Adamczewski: Modern C++ Randomness

This thread happened…

So I did a little digging to satisfy my own curiosity about the “modern C++” version, and have learned a few things that I didn’t know previously…

(this is a manually unrolled twitter thread that starts here, with slight modifications)

Nearly all of this I gleaned from a couple of invaluable online references. Comments about implementation refer specifically to the gcc-8.1 C++ standard library, examined using Compiler Explorer and the -E command line option.

std::random_device is a platform-specific source of entropy.

std::mt19937 is a parameterized typedef of std::mersenne_twister_engine

specifically:
std::mersenne_twister_engine<uint_fast32_t, 32, 624, 397, 31, 0x9908b0df, 11, 0xffffffff, 7, 0x9d2c5680, 15, 0xefc60000, 18, 1812433253>
(What do those numbers mean? I don’t know.)

And std::uniform_int_distribution produces uniformly distributed random numbers over a specified range, from a provided generator.

The default constructor for std::random_device takes an implementation-defined argument, with a default value.

The meaning of the argument is implementation-defined – but the type is not: std::string. (I’m not sure why a dynamically modifiable string object was the right choice to be the configuration parameter for an entropy generator.)

There are out-of-line private functions for much of this implementation of std::random_device. The constructor that calls the out-of-line init function is itself inline – so the construction and destruction of the default std::string param is also generated inline.

Also, peeking inside std::random_device, there is a union with two members:

void* _M_file, which I guess would be used to store a file handle for /dev/urandom or similar.

std::mt19937 _M_mt, which is a … parameterized std::mersenne_twister_engine object.

So it seems reasonable to me that if you can’t get entropy* from outside your program, generate your own approximation. It looks like it is possible that the entropy for the std::mersenne_twister_engine will be provided by a std::mersenne_twister_engine.

Unlike std::random_device, which has its implementation out of line, std::mersenne_twister_engine‘s implementation seems to be all inline. It is unclear what benefits this brings, but it results in a few hundred additional instructions generated.

And then there’s std::uniform_int_distribution, which seems mostly unsurprising. It is again fully inline, which (from a cursory eyeballing) may allow a sufficiently insightful compiler to avoid a couple of branches and function calls.

The code that got me started on this was presented in jest – but (std::random_device + std::mt19937 + std::uniform_int_distribution) is a commonly recommended pattern for generating random numbers using these modern C++ library features.
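
For context, here is roughly what that commonly recommended pattern looks like in use. This is a minimal sketch of my own rather than code from the thread; the variable names are mine, and note that seeding std::mt19937 with a single 32-bit value from std::random_device is itself a commonly criticised shortcut.

    #include <cstdio>
    #include <random>

    int main()
    {
        std::random_device rd;                          // platform-specific entropy source
        std::mt19937 gen(rd());                         // Mersenne Twister engine, seeded once
        std::uniform_int_distribution<int> dist(1, 6);  // maps engine output onto the closed range [1, 6]

        // Roll a six-sided die ten times.
        for (int i = 0; i < 10; ++i)
            std::printf("%d ", dist(gen));
        std::printf("\n");
    }

All three pieces here are the ones examined above, which is where the extra codegen relative to plain rand() comes from.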

My takeaways:
std::random_device is potentially very expensive to use – and doesn’t provide strong cross-platform guarantees about the randomness it provides. It is configured with an std::string – the meaning of which is platform dependent. I am not compelled to use this type.

std::mt19937 adds a sizeable chunk of codegen via its inline implementation – and there are better options than Mersenne Twister.

Bottom line: I’m probably going to stick with rand(), and if I need something a little fancier, I’ll use one of the other suggestions provided as replies to the twitter thread.

Addition: the code I was able to gather, representing some relevant parts

Sky CroeserICA Day 1: Kurdish transnational media, racism online, digital labour, and public scholarship

My rough and very incomplete notes from the first day of ICA. There were a bunch of interesting points that I haven’t noted because I was distracted or tired or too busy listening, and great papers that I sadly missed. I mostly use these notes to follow up on work later, but if they’re useful to you too, that’s great!

Understanding Kurdish Media and Communications: Space, Place and Materiality
Theaters of Inhibition and Cinemas of Strategy: Censorship, Space, and Struggle at a Film Festival in Turkey. Josh Carney, American University of Beirut, spoke about Bakur (North), a film about the everyday life of PKK guerrillas. When the Turkish government banned screenings of Bakur, people met at the theatres anyway to discuss the censorship. The directors of Bakur will go on trial in a few days for ‘terrorist propaganda’. Struggles over censorship were tied to struggles over the city space of Istanbul, perhaps in response to the Turkish government’s attempts to erase ideas and spaces that it finds disagreeable. The government wanted to erase Bakur because it was a testament to the peace process, and to the government’s withdrawal from it. This censorship can be seen as an attempt to erase the promise and possibility of peace.

Cinematic Spaces of Solitude, Exile, and Resistance: Telling Kurdish Stories from Norway, Iran, and Turkey. Suncem Koçer, Kadir Has University, spoke on Kurdish filmmaking as a transnational platform for identity politics. Bahman Ghobadi talks about Kurds as a people on the move, and says that cinema as the art of movement is therefore the most suitable medium for documenting Kurdish stories.

Infrastructures, Colonialism and Struggle. Burce Celik, Loughborough University, argues that Kurdish transnational media is still embedded in historical, political, and territorial contexts. Technical and economic concerns, as well as national borders, also shape networks. State interventions can take place at multiple levels. For example, while the Turkish government may not be able to stop television transmissions from Europe, there are reports of police smashing satellite antennas in Kurdish villages. While there are no country-wide Internet shut-downs, there have been region-wide shut-downs in Kurdish provinces of Turkey. We need to consider the materiality of media infrastructures.

Questions: I asked if there were attempts to shift film screenings and other spaces that had been shut down online. Carney noted that film-makers were very resistant to doing this, as film screenings and movie festivals were seen as important. Bakur was leaked online, and the directors asked that people didn’t share or watch it. Koçer affirmed this, and said that censorship in a way also served a generative purpose for film-makers.

Racism in Digital Media Space

Racism in the Hybrid Media System: Analyzing the Finnish ‘Immigration Debate’. Gavan Titley, University of Helsinki; Mervi Pantti, University of Helsinki; and Kaarina Nikunen, University of Tampere. Pantti opens by noting that even naming racism as racism is often contentious. ‘Hybra’ project – looking at understandings of racism shaped and contested in the interactive everyday cultures of digital media. This paper looks particularly at Suomi24, ‘Finland 24’, one of the largest non-English-language commenting sites online. Anti-racist activism in the 1990s helped to fix racism in the public imagination as a result of movements of people, rather than deeper structures. ‘Racism’ is used broadly in Finnish public discourse to mean ‘discrimination’ (for example, ‘obesity racism’), which removes it from its particular context. Conservatives talk about “opinion racism”: claims that journalists and others with a ‘multicultural agenda’ are intolerant of other viewpoints. Politically, it’s very difficult to mobilise in terms of racism and anti-racism because of the ways in which this language works.

More Than Meets the Eye: Understanding Networks of Images in Controversies Around Racism on Social Media. Tim Highfield, Digital Media Research Centre, Queensland University of Technology, and Ariadna Matamoros-Fernandez, Queensland University of Technology. This research, focused on everyday visual representations of racism and counter-racism practices, comes out of the wider literature on racism online, which has largely focused on text. It draws on Matamoros-Fernandez’s conceptual work around platform racism. This article looks at the online responses to Adam Goodes’ war cry, many of which used images as a way to push the boundaries for racist viewpoints (often via homophobia). Indigenous social media users frequently added their own images to push back against the racism expressed against Goodes. Mainstream media, though, frequently reinforced hegemonic discourses of racism, rather than giving space to Indigenous voices. There were salient practices on Twitter that are interesting when thinking about platform racism: visual call-outs of racism, many of which were a way of performing distance from Australian racism, which had the effect of amplifying racism. Rather than performing ‘white solidarity’ by amplifying racism, it would be useful to do more to share Indigenous voices and critiques of racism, and link this particular incident to broader structures of racism in Australian society. Visual cultures are an opportunity to understand covert and everyday racism on social media platforms. Even with changes introduced by various platforms to combat racism (after user pressure), there is a lack of consistency and transparency in responses to platformed racism.

Online Hate Speech: Genealogies, Tensions and Contentions. Eugenia Siapera, Dublin City University, Paloma Viejo, Dublin City University and Elena Moreo, Dublin City University.

Theorising Online Racism: The Stream, Affect and Power Laws. Sanjay Sharma, Brunel University. Racism isn’t an individual act; it’s embedded in material techno-social relations. Ambient racism creates an atmosphere of background hostility. Microaggressions may seem isolated and minor, but they can be all-pervasive.

Working it Out: Emergent Forms of Labor in the Global Digital Economy
Nothing left to lose: bureaucrats in Googleland: Vicki Mayer, Tulane. Stories about Google’s centrality to the economy are highly mediated, even for those working within the organisation. Bureaucrats aren’t meant to sell Google, but they have been pushed to ‘samenwerking’ (planned collaboration) to ‘solve problems’ individually with little structural support. Interviewees used the word “innovative” most often to describe how workers were trying to do more varied tasks with less time and money, while also trying to publicise their achievements. New companies come in all the time saying that they’ll create thousands of jobs, but with limited real results.

Developing a Farmworker Low-Power Radio Station in Southern California. Carlos Jimenez, University of Denver. Local Indigenous workers speak Mixteco and Zapotec (sp?) (which are very different from English and Spanish), and listen to Chilena songs – no radio stations in Oxnard catered to these languages or musical tastes. The Mixteco Indigena Community Organizing Project partnered with the community. When an application was made for a relatively low-powered antenna for Radio Indígena, another station fifty miles away, KDB 93.7 FM, registered a complaint. At first Radio Indígena organisers called to ask them to remove the complaint, but they refused until they received a letter from farmworkers in the area. After a while, the radio community wanted to try shifting towards online transmissions rather than broadcasting through the radio antenna. But they found that farmworkers’ typical data plans would stop them from listening in. The cost of new media technologies places a greater burden on individual listeners, rather than on the broadcaster.

Production, moderation, representation: three ways of seeing women of color labor in digital culture, Lisa Nakamura, University of Michigan. The lower you go in the chain of production, the more people who aren’t white men you see. It is useful to ask whose labour we misattribute to white men, or even algorithms, on digital platforms. US digital work has been both outsourced and insourced, including to women on reservations. Fairchild ‘invaded’ reservations, and was one of the largest employers in the Navajo Nation until resistance to firings from the American Indian Movement, and unionisation, led to them leaving. The plant there had produced “high reliability” components, which needed very low failure rates. Employing Navajo workers allowed Fairchild to pay less than the minimum wage. Workers were told that they were building parts for televisions, radios, calculators, and so on (with military applications not mentioned). In a current analogue, moderation work on sites like Facebook is outsourced, sometimes to volunteers. We might also look at the ways in which people like Alexis Ohanian (of Reddit) took credit for the work of teenager Rayouf Alhumedhi in the creation of a hijab emoji.

Riot Practices: Immaterial Labor and the Prison-Industrial Complex. Li Cornfeld, Amherst College. There’s a ‘mock prison riot’ at the former state penitentiary in Moundsville yearly, which is a combination of a trade show and a training exercise for ‘correctional officers’. This isn’t what we think of when we consider ‘tech events’, but we should take its claims to be a tech event seriously. It’s a private event, with global attendees. This is one of the ways in which the US exports its technologies of control and norms. It’s also a space to incorporate participants in the tech development process (for example, adding cords to radios for places where batteries are scarce). Technologies of control aren’t just weapons, they include phones, wristbands, and other tracking technologies – many of these are marketed as being not just for prisons, but also for other settings, such as hospitals.

Moving Broadband From Sea to Land: Internet Infrastructure and Labor in Tanzania. Lisa Parks, Massachusetts Institute of Technology. Parks wanted to understand how the internet moves from sea to land, and what kinds of digital labor exist in Tanzania to help carry out these operations. She spoke to people who are both formal and informal IT workers, often carrying out risky forms of labour to make the internet more widely available. Drawing on Vicki Mayer, and Lobato and Thomas’ The Informal Media Economy. IT ‘development’ projects often lead to unused infrastructure – technology that’s in place, but left unpowered, disconnected, in need of assembly or repair. In Bunda, there are people working in vital jobs like repairing or charging phones. The cost of charging phones is scaled by income. Mobile phone repair workers have designed their own phone which they are going to ask Foxconn to manufacture.

Public Scholars: Engaging With Mainstream Media as Activism

This was a panel discussion, with Amy Adele Hasinoff, University of Colorado Denver; Charlton McIlwain, New York University; Jean Burgess, Queensland University of Technology; Victor W. Pickard and Maria Repnikova, Georgia State University.
The benefits of media engagement aren’t always direct and obvious – sometimes, for example, they connect unexpected groups and help build alliances. Framing material for a public audience with interventions from editors can be useful in thinking about how we communicate our research, including to other academics outside our own disciplines. Speakers were unsure about the benefits of engaging in hostile spaces – are there useful ways to engage with right-wing media, for example?

There was a lot of interest in the potential issues with engaging with the media. People’s experiences with engaging have differed – some speakers had been discouraged for engaging too much, while others felt it was seen as a fundamental part of their job. However, there can be a problem keeping a balance between public scholarship (including dealing with hostile responses) and more traditional academic outputs. It’s important to discriminate between ‘high value’ engagement opportunities and junk.

University support for academics under attack can vary – sometimes they’ll provide legal support, but this isn’t necessarily reliable (or publicised). You’ll often only find out what the university responses to these issues are when a problem comes up. Many of the attacks academics face when speaking publicly aren’t necessarily overt: they might include subtle red-baiting, or questioning about how your background (for example, noting Maria Repnikova’s Russian surname) impacts on your ideas.

There were suggestions for those starting out with media engagement and not yet inundated with media requests:

  • Make sure your colleagues know that you’re interested in media engagement: they should be passing on relevant media queries;
  • Actively contact media when you have research that’s relevant and important – this might involve proposing stories to journalists/editors, or tweeting at journalists.
  • Have useful research to share (especially quantitative data).

How not to get fired? You can’t avoid making any controversial statements – if the press decide to go after you, they will. But aim to have evidence to back your point up, and, ideally, solidarity networks as well. (I’d add: maybe join your union!)

When engaging with the media, consider the formats that work for you: text, radio, or television?

Activism, Social Justice and the Role of Contemporary Scholarship
Sasha Costanza-Chock, Massachusetts Institute of Technology. Out of the Shadows, into the Streets! was the result of hands-on, participatory media processes. There isn’t a divide between scholarship and working with social justice organisations: it makes the work more accountable to the people working on the ground, and to their needs. Work with Out for Change led Costanza-Chock to shift their theoretical framework to one of transformative media: it’s about media-making as a healing and identity-forming process.

Kevin Michael Carragee, Suffolk University, began by making a distinction between activist scholarship and scholarship on activism. The former requires establishing partnerships with organisations and movements – there are more calls for this than actual examples. Carragee talked about his work with the Media Research and Action Project. One of the lessons of MRAP is that you want to try to increase the resources available to the group you’re working with. We need to recognise activists as lay scholars. Activists and scholars don’t share the same goals, discourses, and practices – we need to remember that.

Rosemary Clark-Parsons, The Annenberg School for Communication at the University of Pennsylvania. Clark-Parsons draws on feminist standpoint theory: all knowledge is contextually situated; marginalised communities are situated in ways that give them a broader view of power relations; research on those power relations should begin with and centre marginalised communities. To do participatory research, we must position ourselves with activists, but we have to be reflexive about what solidarity means and what power relationships are involved. It’s important to ground theory in practitioners’ perspectives.

Jack Linchuan Qiu, The Chinese University of Hong Kong, talked about the problems with the ‘engagement and impact’ framework, which doesn’t consider how our work has an impact, and to what ends. We need to have hope. As academics we have the luxury of finding hope, and using our classrooms and publications to share that hope.

Chenjerai Kumanyika, Rutgers University – School of Communication and Information. This kind of research offers a corrective to some of the tendencies that exist in our field. Everything Kumanyika has done that’s had an impact has been an “irresponsible job decision”. We have to push back against the priorities of the university, which are about extending empire. We have to push back against understanding class just as an identity parameter, as opposed to a relation between struggles. We need to sneak into the university, be in but not of it.

It was a wrench leaving this final panel of the day, but I had to go meet my partner and Nonsense Baby, so sadly I left before the end.

Planet Linux AustraliaAnthony Towns: Buying in and selling out

I figured “Someday we’ll find it: the Bitcoin connection; the coders, exchanges, and me” was too long for a title. Anyhoo, since very late February I’ve been gainfully employed in the cryptocurrency space, as a developer on Bitcoin Core at Xapo (it always sounds pretentious to shorten that to “bitcoin core developer” to me).

I mentioned this to Rusty, whose immediate response (after “Congratulations”) was “Xapo is weird”. I asked if he could name a Bitcoin company that’s not weird — turns out that’s still an open research problem. A lot of Bitcoin is my kind of weird: open source, individualism, maths, intense arguments, economics, political philosophies somewhere between techno-libertarianism and anarcho-capitalism (“ancap”, which shouldn’t be confused with the safety rating), and a general “we’re going to make the world a better place with more freedom and cleverer technology” vibe of the thing. Xapo in particular is also my kind of weird. For one, it’s founded by Argentinians who have experience with the downsides of inflation (currently sitting at 20% pa, down from 40% and up from 10%), even if that pales in comparison to Venezuela, the world’s current socialist basket case suffering from hyperinflation; and Xapo’s CEO makes what I think are pretty good points about Bitcoin improving global well-being by removing a lot of discretion from monetary policy — as opposed to doing blockchains to make finance more financey, or helping criminals and terrorists out, or just generally getting rich quick. Relatedly, Xapo (seems to me to be) much more of a global company than many cryptocurrency places, which often seem very Silicon Valley focussed (or perhaps NYC, or wherever their respective HQ is); it might be a bit self-indulgent, but I really like being surrounded by people with oddly different cultures, and at least my general impression of a lot of Silicon Valley style tech companies these days is more along the lines of “dysfunctional monoculture” than anything positive. Xapo’s tech choices also seem to be fairly good, or at least in line with my preferences (python! using bitcoin core! microservices!). Xapo is also one of pretty few companies that’s got a strong Bitcoin focus, rather than trying to support every crazy new cryptocurrency or subtoken out there: I tend to think Bitcoin’s the only cryptocurrency that really has good technical and economic fundamentals; so I like “Bitcoin maximalism” in principle, though I guess I’m hard pressed to argue it’s optimal at the business level.

For anyone who follows Bitcoin politics, Xapo might seem a strange choice — Xapo not long ago was on the losing side of the S2X conflict, and why team up with a loser instead of the winners? I don’t take that view for a couple of reasons: I didn’t ever really think doubling the blocksize (the 2X part) was a fundamentally bad idea (not least, because segwit (the S part) already does that and more under some circumstances), but rather the problem was the implementation plan of doing it in just a few months, against the advice of all the most knowledgeable developers, and having an absolutely terrible response when problems with the implementation were found. But although that was probably unavoidable considering the mandate to activate S2X within just a few months, I think the majority of the blame is rightly put on the developers doing the shoddy work, and the solution is for companies to work with developers who can say “no” convincingly, or, preferably, can say “yes, and this is how” long enough in advance that solving the problem well is actually possible. So working with any (or at least most) of the S2X companies just seems like being part of the solution to me. And in any event, I want to live in a world where different viewpoints are welcome and disagreement is okay, and finding out that you’re wrong just means you learned something new, not that you get punished and ostracised.

Likewise, you could argue that anyone who wants to really use Bitcoin should own their private keys, rather than use something like Xapo as a wallet or even a vault, and that working on Xapo is kind-of opposed to the “be your own bank” philosophy at the heart of Bitcoin. My belief is that there’s still a use for banks with Bitcoin: safely storing valuables is hard even when they’re protected by maths instead of (or as well as) locks or guns; so it still makes sense for many people to want to outsource the work of maintaining private keys, and unless you’re an IT professional, it’s probably more sensible to do that to a company that looks kind of like a bank (ie, a custodial wallet like Xapo) rather than one that looks like a software vendor (bitcoin core, electrum, etc) or a hardware vendor (ledger or trezor, eg). In that case, the key benefit that Bitcoin offers is protection from government monetary policy, and, hopefully better/cheaper access or storage of your wealth, which isn’t nothing, even if it’s not fully autonomous control over your wealth.

For the moment, there’s plenty of things to work on at Xapo: I’ve been delaying writing this until I could answer the obvious “when segwit?” question (“now!”), but there’s still more bits to do there, and obviously there are lots of neat things to do improving the app, and even more non-development things to do like dealing with other financial institutions, compliance concerns, and what not. Mostly that’s stuff I help with, but not my focus: instead, the things I’m lucky enough to get to work on are the ones that will make a difference in months/years to come, rather than the next few weeks, which gives me an excuse to keep up to date with things like lightning and Schnorr signatures and work on open source bitcoin stuff in general. It’s pretty fantastic. The biggest risk as I see it is I end up doing too much work on getting some awesome new feature or project prototyped for Xapo and end up having to maintain it, downgrading this from dream job to just a motherforking fantastic one. I mean, aside from the bigger risks like cryptocurrency turns out to be a fad, or we all die from nuclear annihilation or whatever.

I don’t really think disclosure posts are particularly necessary — it’s better to assume everyone has undisclosed interests and biases and judge what they say and do on its own merits. But in the event they are a good idea: financially, I’ve got as yet unvested stock options in Xapo which I plan on exercising and hope will be worth something someday, and some Bitcoin which I’m holding onto and hope will still be worth something some day. I expect those to be highly correlated, so anything good for one will be good for the other. Technically, I think Bitcoin is fascinating, and I’ve put a lot of work into understanding it: I’ve looked through the code, I’ve talked with a bunch of the developers, I’ve looked at a bunch of the crypto, and I’ve even done a graduate diploma in economics over the last couple of years to have some confidence in my ability to judge the economics of it (though to be fair, that wasn’t the reason I had for enrolling initially), and I think it all makes pretty good sense. I can’t say the same about other cryptocurrencies, eg Litecoin’s essentially the same software, but the economics of having a “digital silver” to Bitcoin’s “digital gold” doesn’t seem to make a lot of sense to me, and while Ethereum aims at a bunch of interesting problems and gets the attention it deserves as a result, I’m a long way from convinced it’s got the fundamentals right, and a lot of other cryptocurrency things seem to essentially be scams. Oh, perhaps I should also disclose that I don’t have access to private keys for $10 billion dollars worth of Bitcoin; I’m happily on the open source technology side of things, not on the access to money side.

Of course, my opinions on any of that might change, and my financial interests might change to reflect my changed opinions. I don’t expect to update this blog post, and may or may not post about any new opinions I might form. Which is to say that this isn’t financial advice, I’m not a financial advisor, and if I were, I’m certainly not your financial advisor. If you still want financial advice on crypto, I think Wences’s is reasonable: take 1% of what you’re investing, stick it in Bitcoin, and ignore it for a decade. If Bitcoin goes crazy, great, you’ve doubled your money and can brag about getting in before Bitcoin went up two orders of magnitude; if it goes terrible, you’ve lost next to nothing.

One interesting note: the press is generally reporting Bitcoin as doing terribly this year, maintaining a value of around $7000-$9000 USD after hitting highs of up to $19000 USD mid December. That’s not fake news, but it’s a pretty short term view: for comparison, Wences’s advice linked just above from less than 12 months ago (when the price was about $2500 USD) says “I have seen a number of friends buy at “expensive” prices (say, $300+ per bitcoin)” — but that level of “expensive” is still 20 or 30 times cheaper than today. As a result, in spite of the “bad” news, I think every cryptocurrency company that’s been around for more than a few months is feeling pretty positive at the moment, and most of them are hiring, including Xapo. So if you want to work with me on Xapo’s backend team we’re looking for Python devs. But like every Bitcoin company, expect it to be a bit weird.

CryptogramDetecting Lies through Mouse Movements

Interesting research: "The detection of faked identity using unexpected questions and mouse dynamics," by Merylin Monaro, Luciano Gamberini, and Giuseppe Sartori.

Abstract: The detection of faked identities is a major problem in security. Current memory-detection techniques cannot be used as they require prior knowledge of the respondent's true identity. Here, we report a novel technique for detecting faked identities based on the use of unexpected questions that may be used to check the respondent identity without any prior autobiographical information. While truth-tellers respond automatically to unexpected questions, liars have to "build" and verify their responses. This lack of automaticity is reflected in the mouse movements used to record the responses as well as in the number of errors. Responses to unexpected questions are compared to responses to expected and control questions (i.e., questions to which a liar also must respond truthfully). Parameters that encode mouse movement were analyzed using machine learning classifiers and the results indicate that the mouse trajectories and errors on unexpected questions efficiently distinguish liars from truth-tellers. Furthermore, we showed that liars may be identified also when they are responding truthfully. Unexpected questions combined with the analysis of mouse movement may efficiently spot participants with faked identities without the need for any prior information on the examinee.

Boing Boing post.

Worse Than FailureError'd: Go Home Google News, You're Drunk

"Well, it looks like Google News was inebriated as well!" Daniel wrote.

 

"(Translation: Given names similar to Otto) One must wonder which distance measure algorithm they used to decide that 'Faseaha' is more similar to Otto than Otto," writes Peter W.

 

Andrei V. writes, "What amazing discounts for rental cars offered by Air Baltic!"

 

"I know that Amazon was trying to tell me something about my Kindle author status, but the message appears to have been lost in translation," Bob wrote.

 

"I tried to sign up for severe weather alerts and I'm 100% sure I'm actually signed up. NOT!" writes, Eric R.

 

Lorens writes, "I think the cryptocurrency bubble may have exploded. Or imploded."

 


Don MartiHappy GDPR day. Here's some sensitive data about me.

I know I haven't posted for a while, but I can't skip GDPR Day. You don't see a lot of personal info from me here on this blog. But just for once, I'm going to share something.

I'm a blood donor.

This doesn't seem like a lot of information. People sign up for blood drives all the time. But the serious privacy problem here is that when I give blood, they also test me for a lot of diseases, many of which could have a big impact on my life and how much of certain kinds of healthcare products and services I'm likely to need. The fact that I'm a blood donor might also help people infer something about my sex life but the health data is TMI already.

And I have some bad news. I recently got the ad info from my Facebook account and there it is, in the file advertisers_who_uploaded_a_contact_list_with_your_information.html. American Red Cross Blood Donors. Yes, it looks like the people I chose to trust with some of my most sensitive personal info have given it to the least trusted company on the Internet.

In today's marketing scene, the fact that my blood donor information leaked to Facebook isn't too surprising. The Red Cross clearly has some marketing people, and targeting the existing contact list on Facebook is just one of the things that marketing people do without thinking about it too much. (Not thinking about privacy concerns is a problem for Marketing as a career field long-term: if everyone thinks of Marketing as the Department of Creepy Stuff, it's going to be harder to recruit creative people.)

So, wait a minute. Why am I concerned that Facebook has positive health info on me? Doesn't that help maintain my status in the data-driven economy? What's the downside? (Obvious joke about healthy-blood-craving Facebook board member Peter Thiel redacted—you're welcome.)

The problem is that my control over my personal data isn't just a problem for me. As Prof. Arvind Narayanan said (video), "Poor privacy harms society as a whole." Can I trust Facebook to use my blood info just to target me for the Red Cross, and not to sort people by health for other purposes? Of course not. Facebook has crossed every creepy line that they have promised not to cross. To be fair, that's not just a Facebook thing. Tech bros do risky and mean things all the time without really thinking them through, and even when they do set appropriate defaults they half-ass the implementation and shit happens.

Will blood donor status get you better deals, or apartments, or jobs, in the future? I don't know. I do know that the Red Cross made a big point about confidentiality when they got me signed up. I'm waiting for a reply from the Red Cross privacy officer about this, and will post an update.

Anyway, happy GDPR Day, and, in case you missed it, Salesforce CEO Marc Benioff Calls for a National Privacy Law.

TEDIn Case You Missed It: The dawn of “The Age of Amazement” at TED2018

More than 100 speakers — activists, scientists, adventurers, change-makers and more — took the stage to give the talk of their lives this week in Vancouver at TED2018. One blog post could never hope to hold all of the extraordinary wisdom they shared. Here’s a (shamelessly inexhaustive) list of the themes and highlights we heard throughout the week — and be sure to check out full recaps of day 1, day 2, day 3 and day 4.

Discomfort is a proxy for progress. If we hope to break out of the filter bubbles that are defining this generation, we have to talk to and connect with people we disagree with. This message resonated across the week at TED, with talks from Zachary R. Wood and Dylan Marron showing us the power of reaching out, even when it’s uncomfortable. As Wood, a college student who books “uncomfortable speakers,” says: “Tuning out opposing viewpoints doesn’t make them go away.” To understand how society can progress forward, he says, “we need to understand the counterforces.” Marron’s podcast “Conversations With People Who Hate Me” showcases him engaging with people who have attacked him on the internet. While it hasn’t led to world peace, it has helped him develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

The Audacious Project, a new initiative for launching big ideas, seeks to create lasting change at scale. (Photo: Ryan Lash / TED)

Audacious ideas for big impact. The Audacious Project, TED’s newest initiative, aims to be the nonprofit version of an IPO. Housed at TED, it’s a collaboration among some of the biggest names in philanthropy that asks for nonprofit groups’ most audacious dreams; each year, five will be presented at TED with an invitation for the audience and world to get involved. The inaugural Audacious group includes public defender Robin Steinberg, who’s working to end the injustice of bail; oceanographer Heidi M. Sosik, who wants to explore the ocean’s twilight zone; Caroline Harper from Sight Savers, who’s working to end the scourge of trachoma; conservationist Fred Krupp, who wants to use the power of satellites and data to track methane emissions in unprecedented detail; and T. Morgan Dixon and Vanessa Garrison, who are inspiring a nationwide movement for Black women’s health. Find out more (and how you can get involved) at AudaciousProject.org.

Living means acknowledging death. Philosopher-comedian Emily Levine has stage IV lung cancer — but she says there’s no need to “oy” or “ohhh” over her: she’s OK with it. Life and death go hand in hand, she says; you can’t have one without the other. Therein lies the importance of death: it sets limits on life, limits that “demand creativity, positive energy, imagination” and force you to enrich your existence wherever and whenever you can. Jason Rosenthal’s journey of loss and grief began when his wife, Amy Krouse Rosenthal, wrote about their lives in an article read by millions of people: “You May Want to Marry My Husband” — a meditation on dying disguised as a personal ad for her soon-to-be-solitary spouse. By writing their story, Amy made Jason’s grief public — and challenged him to begin anew. He speaks to others who may be grieving: “I would like to offer you what I was given: a blank sheet of paper. What will you do with your intentional empty space, with your fresh start?”

“It’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” says Yuval Noah Harari. (Photo: Ryan Lash / TED)

Can we rediscover the humanity in our tech?  In a visionary talk about a “globally tragic, astoundingly ridiculous mistake” companies like Google and Facebook made at the foundation of digital culture, Jaron Lanier suggested a way we can fix the internet for good: pay for it. “We cannot have a society in which, if two people wish to communicate, the only way that can happen is if it’s financed by a third person who wishes to manipulate them,” he says. Historian Yuval Noah Harari, appearing onstage as a hologram live from Tel Aviv, warns that with consolidation of data comes consolidation of power. Fascists and dictators, he says, have a lot to gain in our new digital age; and “it’s the responsibility of all of us to get to know our weaknesses, and make sure they don’t become weapons in the hands of enemies of democracy,” he says. Gizmodo writers Kashmir Hill and Surya Mattu survey the world of “smart devices” — the gadgets that “sit in the middle of our home with a microphone on, constantly listening,” and gathering data — to discover just what they’re up to. Hill turned her family’s apartment into a smart home, loading up on 18 internet-connected appliances; her colleague Mattu built a router that tracked how often the devices connected, who they were transmitting to, what they were transmitting. Through the data, he could decipher the Hill family’s sleep schedules, TV binges, even their tooth-brushing habits. And a lot of this data can be sold, including deeply intimate details. “Who is the true beneficiary of your smart home?” he asks. “You, or the company mining you?”

An invitation to build a better world. Actor and activist Tracee Ellis Ross came to TED with a message: the global collection of women’s experiences will not be ignored, and women will no longer be held responsible for the behaviors of men. Ross believes it is past time that men take responsibility to change men’s bad behavior — and she offers an invitation to men, calling them in as allies with the hope they will “be accountable and self-reflective.” She offers a different invitation to women: Acknowledge your fury. “Your fury is not something to be afraid of,” she says. “It holds lifetimes of wisdom. Let it breathe, and listen.”

Wow! discoveries. Among the TED Fellows, explorer and conservationist Steve Boyes’ efforts to chart Africa’s Okavango Delta have led scientists to identify more than 25 new species; University of Arizona astrophysicist Burçin Mutlu-Pakdil discovered a galaxy with an outer ring and a reddish inner ring that was unlike any ever seen before (her reward: it’s now called Burçin’s Galaxy). Another astronomer, the University of Hawaii’s Karen Meech, saw — and studied for an exhilarating few days — ‘Oumuamua, the first interstellar comet observed from Earth. Meanwhile, engineer Aaswath Raman is harnessing the cold of deep space to invent new ways to keep us cooler and more energy-efficient. Going from the sublime to the ridiculous, roboticist Simone Giertz showed just how much there is to be discovered from the process of inventing useless things.

Walter Hood shares his work creating public spaces that illuminate shared memories without glossing over past — and present — injustices. (Photo: Ryan Lash / TED)

Language is more than words. Even though the stage program of TED2018 consisted primarily of talks, many went beyond words. Architects Renzo Piano, Vishaan Chakrabarti, Ian Firth and Walter Hood showed how our built structures, while still being functional, can lift spirits, enrich lives, and pay homage to memories. Smithsonian Museum craft curator Nora Atkinson shared images from Burning Man and explained how, in the desert, she found a spirit of freedom, creativity and collaboration not often found in the commercial art world. Designer Ingrid Fetell Lee uncovered the qualities that make everyday objects a joy to behold. Illustrator Christoph Niemann reminded us how eloquent and hilarious sketches can be; in her portraits of older individuals, photographer Isadora Kosofsky showed us that visuals can be poignant too. Paul Rucker discussed his painful collection of artifacts from America’s racial past and how the artistic act of making scores of Ku Klux Klan robes has brought him some catharsis. Our physical movements are another way we speak — for choreographer Elizabeth Streb, it’s expressing the very human dream to fly. For climber Alex Honnold, it was attaining a sense of mastery when he scaled El Capitan alone without ropes. Dolby Laboratories chief scientist Poppy Crum demonstrated the emotions that can be read through physical tells like body temperature and exhalations, and analytical chemist Simone Francese revealed the stories told through the molecules in our fingerprints.

Kate Raworth presents her vision for what a sustainable, universally beneficial economy could look like. (Photo: Bret Hartman / TED)

Is human growth exponential or limited? There will be almost ten billion people on earth by 2050. How are we going to feed everybody, provide water for everybody and get power to everybody? Science journalist Charles C. Mann has spent years asking these questions to researchers, and he’s found that their answers fall into two broad categories: wizards and prophets. Wizards believe that science and technology will let us produce our way out of our dilemmas — think: hyper-efficient megacities and robots tending genetically modified crops. Prophets believe close to the opposite; they see the world as governed by fundamental ecological processes with limits that we transgress to our peril. As he says: “The history of the coming century will be the choice we make as a species between these two paths.” Taking up the cause of the prophets is Oxford economist Kate Raworth, who says that our economies have become “financially, politically and socially addicted” to relentless GDP growth, and too many people (and the planet) are being pummeled in the process. What would a sustainable, universally beneficial economy look like? A doughnut, says Raworth. She says we should strive to move countries out of the hole — “the place where people are falling short on life’s essentials” like food, water, healthcare and housing — and onto the doughnut itself. But we shouldn’t move too far lest we end up on the doughnut’s outside and bust through the planet’s ecological limits.

Seeing opportunity in adversity. “I’m basically nuts and bolts from the knee down,” says MIT professor Hugh Herr, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He dreams of a future where humans have augmented their bodies in a way that redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. In a beautiful, touching talk in the closing session of TED2018, Mark Pollock and Simone George take us inside their relationship — detailing how Pollock became paralyzed and the experimental work they’ve undertaken to help him regain motion. In collaboration with a team of engineers who created an exoskeleton for Pollock, as well as Dr. Reggie Edgerton’s team at UCLA, who developed a way to electrically stimulate the spinal cord of those with paralysis, Pollock was able to pull his knee into his chest during a lab test — proving that progress is definitely still possible.

TED Fellow and anesthesiologist Rola Hallam started the world’s first crowdfunded hospital in Syria. (Photo: Ryan Lash / TED)

Spotting the chance to make a difference. The TED Fellows program was full of researchers, activists and advocates capitalizing on the spaces that go unnoticed. Psychiatrist Essam Daod found a “golden hour” in refugees’ treks when their narratives can sometimes be reframed into heroes’ journeys; landscape architect Kotchakorn Voraakhom realized that a park could be designed to allow her flood-prone city of Bangkok to mitigate the impact of climate change; pediatrician Lucy Marcil seized on the countless hours that parents spend in doctors’ waiting rooms to offer tax assistance; sustainability expert DeAndrea Salvador realized the profound difference to be made by helping low-income North Carolina residents with their energy bills; and anesthesiologist Rola Hallam is addressing aid shortfalls for local nonprofits, resulting in the world’s first crowdfunded hospital in Syria.

Catch up on previous In Case You Missed It posts from April 10 (Day 1), April 11 (Day 2), April 12 (Day 3), and yesterday, April 13 (Day 4).

TEDIn Case You Missed It: Bold visions for humanity at day 4 of TED2018

In Case You Missed It TED2018Three sessions of memorable TED Talks covering life, death and the future of humanity made the penultimate day of TED2018 a remarkable space for tech breakthroughs and dispatches from the edges of culture.

Here are some of the themes we heard echoing through the day, as well as some highlights from around the conference venue in Vancouver.

The future built on genetic code. DNA is built on four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the four letters of the genetic alphabet are not all that unique. He and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. And maybe soon, we’ll be able to use that expanded DNA alphabet to teleport. That’s right, you read it here first: teleportation is real. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit the most fundamental parts of who we are: our DNA. It’s called biological teleportation, and the idea is that biological entities including viruses and living cells can be reconstructed in a distant location if we can read and write the sequence of that DNA code. The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one worlds away, say like, Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines.

“If we want to create meaningful technology to counter radicalization, we have to start with the human journey at its core,” says technologist Yasmin Green at Session 8 at TED2018: The Age of Amazement, April 13, Vancouver. (Photo: Jason Redmond / TED)

Dispatches from the fight against hate online. At Jigsaw (a division of Alphabet), Yasmin Green and her colleagues were given the mandate to build technology that could help make the world safer from extremism and persecution. In 2016, Green collaborated with Moonshot CVE to pilot a new approach, the “Redirect Method.” She and a team interviewed dozens of former members of violent extremist groups, and used what they learned to create targeted advertising aimed at people susceptible to ISIS’s recruiting — and counter those messages. In English and Arabic, the eight-week pilot program reached more than 300,000 people. “If technology has any hope of overcoming today’s challenges,” Green says, “we must throw our entire selves into understanding these issues and create solutions that are as human as the problems they aim to solve.” Dylan Marron is taking a different approach to the problem of hate on the internet. His video series, such as “Sitting in Bathrooms With Trans People,” have racked up millions of views, and they’ve also sent a slew of internet poison in his direction. He developed a coping mechanism: he calls up the people who leave hateful remarks, opening their chats with a simple question: “Why did you write that?” These exchanges have been captured on Marron’s podcast “Conversations With People Who Hate Me.” While it hasn’t led to world peace, he says it’s caused him to develop empathy for his bullies. “Empathizing with someone I profoundly disagree with doesn’t suddenly erase my deeply held beliefs and endorse theirs,” he cautions. “I simply am acknowledging the humanity of a person who has been taught to think a certain way, someone who thinks very differently than me.”

Is artificial intelligence actually intelligence? Not yet, says Kevin Frans. Earlier in his teen years (he’s now just 18) he joined the OpenAI lab to think about the fascinating problem of making AI that has true intelligence. Right now, he says, a lot of what we call intelligence is just trial-and-error on a massive scale — a machine can try every possible solution, even ones too absurd for a human to imagine, until it finds the thing that works best to solve a single discrete problem. Which really isn’t general intelligence. So Frans is conceptualizing instead a way to think about AI from a skills perspective — specifically, the ability to learn simple skills and assemble them to accomplish tasks. It’s early days for this approach, and for Kevin himself, who is part of the first generation to grow up as AI natives. Picking up on the thread of pitfalls of current AI, artist and technology critic James Bridle describes how automated copycats on YouTube mimic trusted videos by using algorithmic tricks to create “fake news” for kids. End result: children exploring YouTube videos from their favorite cartoon characters are sent down autoplaying rabbit holes, where they can find eerie, disturbing videos filled with very real violence and very real trauma. Algorithms are touted as the fix, but as Bridle says, machine learning is really just what we call software that does things we don’t understand … and we have enough of that already, no?

Chetna Gala Sinha tells us about a bank in India that meets the needs of rural poor women who want to save and borrow. (Photo: Jason Redmond / TED)

Listen and learn. Takemia MizLadi Smith spoke up for the front-desk staffer, the checkout clerk, and everyone who’s ever been told they need to start collecting information from customers, whether it be an email, zip code or data about their race and gender. Smith makes the case to empower every front desk employee who collects data — by telling them exactly how that data will be used. Chetna Gala Sinha, meanwhile, started a bank in India that meets the needs of rural poor women who want to save and borrow — and whom traditional banks would not touch. How does the bank improve its service? As Chetna says: simply by listening. Meanwhile, sex educator Emily Nagoski talked about a phenomenon called emotional nonconcordance, where what your body seems to want runs counter to what you actually want. In an intimate situation, ahem, it can be hard to figure out which one to listen to, head or body. Nagoski gives us full permission and encouragement to listen to your head, and to the words coming out of the mouth of your partner. And Harvard Business School prof Frances Frei gave a crash course in trust — building it, keeping it, and the hardest, rebuilding it. She shares lessons from her stint as an embed at Uber, where, far from listening in meetings, staffers would actually text each other during meetings — about the meeting. True listening, the kind that builds trust, starts with putting away your phone.

Bionic man Hugh Herr envisions humanity soaring out of the 21st century. (Photo: Ryan Lash / TED)

A new way to heal our bodies … and build new ones. Optical engineer Mary Lou Jepsen shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it and doesn’t let it pass through. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. MIT professor Hugh Herr is working on a different way to heal — and augment — our bodies. He’s working toward a goal that’s long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it “NeuroEmbodied Design,” a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend who lost a foot in a climbing accident. Using the Agonist-antagonist Myoneural Interface, or AAMI, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. What might be next? Maybe, the ability to fly.

Announcements! Back in 2014, space scientist Will Marshall introduced us to his company, Planet, and their proposed fleet of tiny satellites. The goal: to image the planet every day, showing us how Earth changes in near-real time. In 2018, that vision has come good: a fleet of about 200 small satellites now pictures every inch of the planet, taking 1.5 million 29-megapixel images a day (about 6TB of data daily) and gathering data on changes both natural and human-made. This week at TED, Marshall announced a consumer version of Planet, called Planet Stories, to let ordinary people play with these images. Start playing now here. Another announcement comes from futurist Ray Kurzweil: a new way to query the text inside books using something called semantic search — which is a search on ideas and concepts, rather than specific words. Called TalkToBooks, the beta-stage product uses an experimental AI to query a database of 120,000 books in about half a second. (As Kurzweil jokes: “It takes me hours to read a hundred thousand books.”) Jump in and play with TalkToBooks here. Also announced today: “TED Talks India: Nayi Soch” — the wildly popular Hindi-language TV series, created in partnership with StarTV and hosted by Shah Rukh Khan — will be back for three more seasons.

TEDBody electric: Notes from Session 9 of TED2018

Mary Lou Jepsen demonstrates the ability of red light to scatter when it hits our bodies. Can we leverage this property to see inside ourselves? She speaks at TED2018 on April 13, 2018. Photo: Ryan Lash / TED

During the week of TED, it’s tempting to feel like a brain in a jar — to think on a highly abstracted, intellectual, hypertechnical level about every single human issue. But the speakers in this session remind us that we’re still just made of meat. And that our carbon-based life forms aren’t problems to be transcended but, if you will, platforms. Let’s build on them, explore them, and above all feel at home in them.

When red light means go. The last time Mary Lou Jepsen took the TED stage, she shared the science of knowing what’s inside another person’s mind. This time, the celebrated optical engineer shares an exciting new tool for reading what’s inside our bodies. It exploits the properties of red light, which behaves differently in different body materials. Our bones and flesh scatter red light (as she demonstrates on a piece of raw chicken breast), while our red blood absorbs it. By measuring how light scatters, or doesn’t, inside our bodies, and using a technique called holography to study the resulting patterns as the light comes through the other side, Jepsen believes we can gain a new way to spot tumors and other anomalies, and eventually to create a smaller, more efficient replacement for the bulky MRI. Her demo doubles as a crash course in optics, with red and green lasers and all kinds of cool gear (some of which juuuuust squeaked through customs in time). And it’s a wildly inspiring look at a bold effort to solve an old problem in a new way.

Floyd E. Romesberg imagines a couple new letters in DNA that might allow us to create … who knows what. Photo: Jason Redmond / TED

What if DNA had more letters to work with? DNA is built on only four letters: G, C, A, T. These letters determine the sequences of the 20 amino acids in our cells that build the proteins that make life possible. But what if that “alphabet” got bigger? Synthetic biologist and chemist Floyd Romesberg suggests that the letters of the genetic alphabet are not all that unique. For the problem of life, perhaps, “maybe we’re not the only solution, maybe not even the best solution — just a solution.” And maybe new parts can be built to work alongside the natural parts. Inspired by these insights, Romesberg and his colleagues constructed the first “semi-synthetic” life forms based on a 6-letter DNA. With these extra building blocks, cells can construct hitherto unseen proteins. Someday, we could tailor these cells to fulfill all sorts of functions — building new, hyper-targeted medicines, seeking out and destroying cancer, or “eating” toxic materials. Worried about unintended consequences? Romesberg says that his augmented 6-letter DNA cannot be replenished within the body. As the unnatural genetic materials are depleted, the semi-synthetic cells die off, protecting us against nightmarish sci-fi scenarios of rogue microorganisms.

On the slide behind Dan Gibson: a teleportation machine, more or less. It’s a “printer” that can convert digital information into biological material, and it holds the promise of sending things like vaccines and medicines over the internet. Photo: Ryan Lash / TED

Beam our DNA up, Scotty. Teleportation is real. That’s right, you read it here first. This method isn’t quite like what the minds behind Star Trek brought to life, but the massive implications attached are just as futuristic. Biologist and engineer Dan Gibson reports from the front lines of science fact that we are now able to transmit not our entire selves, but the most fundamental parts of who we are: our DNA. Or, simply put, biological teleportation. “The characteristics and functions of all biological entities including viruses and living cells are written into the code of DNA,” says Gibson. “They can be reconstructed in a distant location if we can read and write the sequence of that DNA code.” The machines that perform this fantastic feat, the BioXP and the DBC, stitch together both long and short forms of genetic code that can be downloaded from the internet. That means that in the future, with an at-home version of these machines (or even one literally worlds away, say, on Mars), we may be able to download and print personalized therapeutic medications, prescriptions and even vaccines. The process takes weeks now, but could someday come down to 1–2 days. (And don’t worry: Gibson, his team and the government screen every synthesis order against a database to make sure viruses and pathogens aren’t being made.) He says: “For now, I will be satisfied beaming new medicines across the globe, fully automated and on-demand to save lives from emerging deadly infectious diseases and to create personalized cancer medicines for those who don’t have time to wait.”

In a powerful talk, sex educator Emily Nagoski educates us about emotional nonconcordance — when our body and our mind “say” different things in an intimate situation. Which to listen to? Photo: Ryan Lash / TED

Busting one of our most dangerous myths about sex. When it comes to pleasure, humans have something that’s often called “the reward center” — but, explains sex educator Emily Nagoski, that “reward center” is actually three intertwined, separate systems: liking, or whether it feels good or bad; wanting, which motivates us to move toward or away from a stimulus; and learning. Learning is best explained by Pavlov’s dogs, whom he trained to salivate when he rang a bell. Were the dogs hungry for the bell (wanting)? Did they find the bell delicious (liking)? Of course not: “What Pavlov did was make the bell food-related.” The separateness of these three things, wanting, liking and learning, helps explain a phenomenon called emotional nonconcordance, when our physiological response doesn’t match our subjective experience. This happens with all sorts of emotional and motivational systems, including sex. “Research over the last thirty years has found that genital blood flow can increase in response to sex-related stimuli, even if those sex-related stimuli are not also associated with a subjective experience of wanting and liking,” she says. The problem is that we don’t recognize nonconcordance when it comes to sex: in fact, there is a dangerous myth that even if someone says they don’t want it or don’t like it, their body can say differently, and the body is the one telling the “truth.” This myth has serious consequences for victims of unwanted and nonconsensual sexual contact, who are sometimes told that their nonconcordant genital response invalidates their experience … and who can even have that response held up as evidence in sexual assault cases. Nagoski urges all of us to share this crucial information with someone — judges, lawyers, your partners, your kids. “The roots of this myth are deep and they are entangled with some very dark forces in our culture, but with every brave conversation we have, we make the world that little bit better,” she says to one of the biggest standing Os in a standing-O-heavy session.

The musicians and songwriters of LADAMA perform and speak at TED2018. Photo: Ryan Lash / TED

Bringing Latin alternative music to Vancouver. Singing in Spanish, Portuguese and English, LADAMA enliven the TED stage with a vibrant, energizing and utterly danceable musical set. The multinational ensemble of women — Maria Fernanda Gonzalez from Venezuela, Lara Klaus from Brazil, Daniela Serna of Colombia, and Sara Lucas from the US — and their bass player collaborator combine traditional South American and Caribbean styles like cumbia, maracatu and joropo with pop, soul and R&B to deliver a pulsing musical experience. The group took attendees on a musical journey with their modern and soulful compositions, playing original songs “Night Traveler” and “Porro Maracatu.”

Hugh Herr lost both legs below the knee, but the new legs he built allow him once again to run, climb and even dance. Photo: Ryan Lash / TED

“The robot became part of me.” MIT professor Hugh Herr takes the TED stage, his sleek bionic legs conspicuous under his sharp grey suit. “I’m basically nuts and bolts from the knee down,” Herr says, demonstrating how his bionic legs — made up of 24 sensors, 6 microprocessors and muscle-tendon-like actuators — allow him to walk, skip and run. Herr builds body parts, and he’s working toward realizing a goal that has long been thought of as science fiction: for synthetic limbs to be integrated into the human nervous system. He calls it “NeuroEmbodied Design,” a methodology to create cyborg function where the lines between the natural and synthetic world are blurred. This future will provide humanity with new bodies and end disability, Herr says — and it’s already happening. He introduces us to Jim Ewing, a friend of Herr’s who was in a climbing accident that resulted in the amputation of his foot. Using the Agonist-antagonist Myoneural Interface, a method Herr and his team developed at MIT to connect nerves to a prosthetic, Jim’s bones and muscles were integrated with a synthetic limb, re-establishing the neural connection between his ankle and foot muscles and his brain. “Jim moves and behaves as if the synthetic limb is part of him,” Herr says. And he’s even back climbing again. Taking a few moments to dream, Herr describes a future where humans have augmented their bodies in a way that fundamentally redefines human potential, giving us unimaginable physical strength — and, maybe, the ability to fly. “I believe humans will become superheroes,” Herr says. “During the twilight years of this century, I believe humans will be unrecognizable in morphology and dynamics from what we are today. Humanity will take flight and soar.”

Jim Ewing, left, lost a limb in a climbing accident; he partnered with MIT professor Hugh Herr, right, to build a limb that got him back up and climbing again. Photo: Ryan Lash / TED

,

Harald WelteOsmoCon 2018 CfP closes on 2018-05-30

One of the difficulties with OsmoCon2017 last year was that almost nobody submitted talks / discussions within the deadline, early enough to allow for proper planning.

This led to a situation where the sysmocom team had to come up with a schedule/agenda on their own. Later on, well after the CfP deadline, people then squeezed in talks, making the overall schedule too full.

It is up to you to avoid this situation again in 2018 at OsmoCon2018 by submitting your talk RIGHT NOW. We will be very strict regarding late submissions. So if you would like to shape the Agenda of OsmoCon 2018, this is your chance. Please use it.

We will have to create a schedule soon, as [almost] nobody will register for a conference unless the schedule is known. If there's not sufficient contribution in terms of CfP response from the wider community, don't complain later that 90% of the talks are from sysmocom team members and cover only Cellular Network Infrastructure topics.

You have been warned. Please make your CfP submission in time at https://pretalx.sysmocom.de/osmocon2018/cfp before the CfP deadline on 2018-05-30 23:59 (Europe/Berlin)

Harald Welteopenmoko.org archive down due to datacenter issues

Unfortunately, since about 11:30 am CEST on May 24, openmoko.org is down due to some power outage related issues at Hetzner, the hosting company at which openmoko.org has been hosted for more than a decade now.

The problem seems to have caused quite a lot of fall-out for many servers (Hetzner hosts some 200k machines; not sure how many were affected, though), and Hetzner is anything but verbose when it comes to actually explaining what the issue is.

All they have published is https://www.hetzner-status.de/en.html#8842 - which is rather tight lipped about some power grid issues. But then, what do you have UPSs for if not for "a strong voltage reduction in the local power grid"?

The openmoko.org archive machine is running in Hetzner DC10, by the way. This is where they've had the largest number of tickets.

In any case, we'll have to wait for them to resolve their tickets. They appear to be working day and night on that.

I have a number of machines hosted at Hetzner, and I'm actually rather happy that none of the more important systems were affected that long. Some machines simply lost their uplink connectivity for some minutes, while some others were rebooted (power outage). The openmoko.org archive is the only machine that didn't automatically boot after the outage, maybe the power supply needs replacement.

In any case, I hope the service will be back up again soon.

btw: Guess who's been paying for hosting costs ever since Openmoko, Inc. shut down? Yes, yours truly. It was OK for something like 9 years, but now I want to recursively pull the dynamic content through some cache, which can then be made permanent. The resulting static archive can then be moved to some VM somewhere, without requiring a dedicated root server. That should reduce the costs down to almost nothing.
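For the curious, here is a minimal sketch of what such a one-off static mirroring run could look like, assuming GNU wget is installed; the host names and output directory are made-up examples, not the actual migration plan:

    #!/usr/bin/env python3
    """Hypothetical sketch: pull dynamic openmoko.org content into a static
    tree that could later be served from a small VM. Assumes GNU wget is
    installed; the host list and output directory are illustrative only."""

    import subprocess

    HOSTS = ["wiki.openmoko.org", "lists.openmoko.org"]   # example hosts only
    OUTDIR = "openmoko-static-archive"

    for host in HOSTS:
        subprocess.run(
            [
                "wget",
                "--mirror",            # recursive download with timestamping
                "--convert-links",     # rewrite links so the copy works offline
                "--adjust-extension",  # save dynamically generated pages as .html
                "--page-requisites",   # also fetch CSS, JS and images
                "--no-parent",         # stay below the start URL
                "--wait=1",            # be gentle with the (recovering) server
                "-P", OUTDIR,
                f"https://{host}/",
            ],
            check=False,               # keep going even if one host errors out
        )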

Krebs on Security3 Charged In Fatal Kansas ‘Swatting’ Attack

Federal prosecutors have charged three men with carrying out a deadly hoax known as “swatting,” in which perpetrators call or message a target’s local 911 operators claiming a fake hostage situation or a bomb threat in progress at the target’s address — with the expectation that local police may respond to the scene with deadly force. While only one of the three men is accused of making the phony call to police that got an innocent man shot and killed, investigators say the other two men’s efforts to taunt and deceive one another ultimately helped point the gun.

Tyler “SWAuTistic” Barriss. Photo: AP

According to prosecutors, the tragic hoax started with a dispute over a match in the online game “Call of Duty.” The indictment says Shane M. Gaskill, a 19-year-old Wichita, Kansas resident, and Casey S. Viner, 18, had a falling out over a $1.50 game wager.

Viner allegedly wanted to get back at Gaskill, and so enlisted the help of another man — Tyler R. Barriss — a serial swatter known by the alias “SWAuTistic” who’d bragged of “swatting” hundreds of schools and dozens of private residences.

The federal indictment references transcripts of alleged online chats among the three men. In an exchange on Dec. 28, 2017, Gaskill taunts Barriss on Twitter after noticing that Barriss’s Twitter account (@swattingaccount) had suddenly started following him.

Viner and Barriss both allegedly say if Gaskill isn’t scared of getting swatted, he should give up his home address. But the address that Gaskill gave Viner to pass on to Barriss no longer belonged to him and was occupied by a new tenant.

Barriss allegedly then called the emergency 911 operators in Wichita and said he was at the address provided by Viner, that he’d just shot his father in the head, was holding his mom and sister at gunpoint, and was thinking about burning down the home with everyone inside.

Wichita police quickly responded to the fake hostage report and surrounded the address given by Gaskill. Seconds later, 28-year-old Andrew Finch exited his mom’s home and was killed by a single shot from a Wichita police officer. Finch, a father of two, was no party to the gamers’ dispute and was simply in the wrong place at the wrong time.

Just minutes after the fatal shooting, Barriss — who is in Los Angeles  — is allegedly anxious to learn if his Kansas swat attempt was successful. Someone has just sent Barriss a screenshot of a conversation between Viner and Gaskill mentioning police at Gaskill’s home and someone getting killed. So Barriss allegedly then starts needling Gaskill via instant message:

Defendant BARRISS: Yo answer me this
Defendant BARRISS: Did police show up to your house yes or no
Defendant GASKILL: No dumb fuck
Defendant BARRISS: Lmao here’s how I know you’re lying

Prosecutors say Barriss then posted a screen shot showing the following conversation between Viner and Gaskill:

Defendant VINER: Oi
Defendant GASKILL: Hi
Defendant VINER: Did anyone show @ your house?
Defendant VINER: Be honest
Defendant GASKILL: Nope
Defendant GASKILL: The cops are at my house because someone ik just killed his dad

Barriss and Gaskill then allegedly continued their conversation:

Defendant GASKILL: They showed up to my old house retard
Defendant BARRISS: That was the call script
Defendant BARRISS: Lol
Defendant GASKILL: Your literally retarded
Defendant GASKILL: Ik dumb ass
Defendant BARRISS: So you just got caught in a lie
Defendant GASKILL: No I played along with you
Defendant GASKILL: They showed up to my old house that we own and rented out
Defendant GASKILL: We don’t live there anymore bahahaha
Defendant GASKILL: ik you just wasted your time and now your pissed
Defendant BARRISS: Not really
Defendant BARRISS: Once you said “killed his dad” I knew it worked lol
Defendant BARRISS: That was the call lol
Defendant GASKILL: Yes it did buy they never showed up to my house
Defendant GASKILL: You guys got trolled
Defendant GASKILL: Look up who live there we moved out almost a year ago
Defendant GASKILL: I give you props though you’re the 1% that can actually swat babahaha
Defendant BARRISS: Dude MY point is You gave an address that you dont live at but you were acting tough lol
Defendant BARRISS: So you’re a bitch

Later on the evening of Dec. 28, after news of the fatal swatting started blanketing the local television coverage in Kansas, Gaskill allegedly told Barriss to delete their previous messages. “Bape” in this conversation refers to a nickname allegedly used by Casey Viner:

Defendant GASKILL: Dm asap
Defendant GASKILL: Please it’s very fucking impi
Defendant GASKILL: Hello
Defendant BARRISS: ?
Defendant BARRISS: What you want
Defendant GASKILL: Dude
Defendant GASKILL: Me you and bape
Defendant GASKILL: Need to delete everything
Defendant GASKILL: This is a murder case now
Defendant GASKILL: Casey deleted everything
Defendant GASKILL: You need 2 as well
Defendant GASKILL: This isn’t a joke K troll anymore
Defendant GASKILL: If you don’t you’re literally retarded I’m trying to help you both out
Defendant GASKILL: They know it was swat call

The indictment also features chat records between Viner and others in which he admits to his role in the deadly swatting attack. In the following chat excerpt, Viner was allegedly talking with someone identified only as “J.D.”

Defendant VINER: I literally said you’re gonna be swatted, and the guy who swatted him can easily say I convinced him or something when I said hey can you swat this guy and then gave him the address and he said yes and then said he’d do it for free because I said he doesn’t think anything will happen
Defendant VINER: How can I not worry when I googled what happens when you’re involved and it said a eu [sic] kid and a US person got 20 years in prison min
Defendant VINER: And he didn’t even give his address he gave a false address apparently
J.D.: You didn’t call the hoax in…
Defendant VINER: Does t [sic] even matter ?????? I was involved I asked him to do it in the first place
Defendant VINER: I gave him the address to do it, but then again so did the other guy he gave him the address to do it as well and said do it pull up etc

Barriss is charged with multiple counts of making false information and hoaxes; cyberstalking; threatening to kill another or damage property by fire; interstate threats; conspiracy; and wire fraud. Viner and Gaskill were both charged with wire fraud, conspiracy and obstruction of justice. A copy of the indictment is available here.

The Associated Press reports that the most serious charge of making a hoax call carries a potential life sentence because it resulted in a death, and that some of the other charges carry sentences of up to 20 years.

The moment that police in Kansas fired a single shot that killed Andrew Finch.

As I told the AP, swatting has been a problem for years, but it seems to have intensified around the time that top online gamers started being able to make serious money playing games online and streaming those games live to thousands or even tens of thousands of paying subscribers. Indeed, Barriss himself had earned a reputation as someone who delighted in watching police kick in doors behind celebrity gamers who were live-streaming.

This case is not the first time federal prosecutors have charged multiple people in the same swatting attacks even if only one person was involved in actually making the phony hoax calls to police. In 2013, my home was the target of a swatting attack that thankfully ended without incident. The government ultimately charged four men — several of whom were minors at the time — with conducting that swat attack as well as many others they’d perpetrated against public figures and celebrities.

But despite spending considerable resources investigating those crimes, prosecutors were able to secure only light punishments for those involved in the swatting spree. One of those men, a serial swatter and cyberstalker named Mir Islam, was sentenced to just one year in jail for his role in multiple swattings. Another individual who was part of that group, Eric “Cosmo the God” Taylor, got three years of probation.

Something tells me Barriss, Gaskill and Viner aren’t going to be so lucky. Barriss has admitted his role in many swattings, and he admitted to his last, fatal swatting in an interview he gave to KrebsOnSecurity less than 24 hours after Andrew Finch’s murder — saying he was not the person who pulled the trigger.

Sociological ImagesEnglish/Gibberish

One major part of introducing students to sociology is getting to the “this is water” lesson: the idea that our default experiences of social life are often strange and worthy of examining. This can be challenging, because the default is often boring or difficult to grasp, but asking the right questions is a good start (with some potentially hilarious results).

Take this one: what does English sound like to a non-native speaker? For students who grew up speaking it, this is almost like one of those Zen koans that you can’t quite wrap your head around. If you intuitively know what the language means, it is difficult to separate that meaning from the raw sounds.

That’s why I love this video from Italian pop singer Adriano Celentano. The whole thing is gibberish written to imitate how English slang sounds to people who don’t speak it.


Another example to get class going with a laugh is the 1990s video game Fighting Baseball for the SNES. Released in Japan, the game didn’t have the licensing to use real players’ names, so they used names that sounded close enough. A list of some of the names still bounces around the internet.

The popular idea of the Uncanny Valley in horror and science fiction works really well for languages, too. The funny (and sometimes unsettling) feelings we get when we watch imitations of our default assumptions fall short is a great way to get students thinking about how much work goes into our social world in the first place.

Evan Stewart is a Ph.D. candidate in sociology at the University of Minnesota. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

Worse Than FailureImprov for Programmers: Just for Transformers

We're back again with a little something different, brought to you by Raygun. Once again, the cast of "Improv for Programmers" is going to create some comedy on the fly for you, and this time… you could say it's… transformative. Today's episode contains small quantities of profanity.

Raygun provides a window into how users are really experiencing your software applications.

Unlike traditional logging, Raygun silently monitors applications for issues affecting end users in production, then allows teams to pinpoint the root cause behind a problem with greater speed and accuracy by providing detailed diagnostic information for developers. Raygun makes fixing issues 1000x faster than traditional debugging methods using logs and incomplete information.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integrated, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] ProGet can centralize your organization's software applications and components to provide uniform access to developers and servers. Check it out!

,

Harald WelteMailing List hosting for FOSS Projects

Recently I've encountered several occasions in which a FOSS project would have been interested in some reliable, independent mailing list hosting for their project communication.

I was surprised how difficult it was to find anyone running such a service.

From the user / FOSS project point of view, the criteria that I would have are:

  • operated by some respected entity that is unlikely to turn hostile, discontinue the service or go out of business altogether
  • free of any type of advertisements (we all know how annoying those are)
  • cares about privacy, i.e. doesn't sell the subscriber lists or non-public archives
  • use FOSS to run the service itself, such as GNU mailman, listserv, ezmlm, ...
  • an easy path to migrate away to another service (or self-hosting) as they grow or their requirements change. A simple mail forward to that new address for the related addresses is typically sufficient for that

If you think mailing lists serve no purpose these days anyways, and everyone is on github: Please have a look at the many thousands of FOSS project mailing lists out there still in use. Not everyone wants to introduce a dependency to the whim of a proprietary software-as-a-service provider.

I never had this problem as I always hosted my own mailman instance on lists.gnumonks.org anyway, and all the entities that I've been involved in (whether non-profit or businesses) had their own mailing list hosts. From franken.de in the 1990s to netfilter.org, openmoko.org and now osmocom.org, we all pride ourselves on self-hosting.

But then there are plenty of smaller projects that have neither the skills nor the funding available. So they go to yahoo groups or some other service that will then hold them hostage without a way to switch their list archives from private to public, without downloadable archives or forwarding in case they want to move away :(

Of course the larger FOSS projects also have their own list servers, starting from vger.kernel.org to Linux distributions like Debian GNU/Linux. But what if your FOSS project is not specifically Linux related?

The sort-of obvious candidates that I found all don't really fit:

Now don't get me wrong, I'm of course not expecting that there are commercial entities operating free-of-charge list hosting services where you pay neither with money, nor with your data, nor by becoming a spam receiver.

But still, in the wider context of the Free Software community, I'm seriously surprised that none of the various not-for-profit / non-commercial foundations or associations are offering a public mailing list hosting service for FOSS projects.

One can of course always approach any of the entities from the above list and ask for a mailing list, even though it's strictly speaking off-topic to them. But who will do that, if they have to ask uninvited for a favor?

I think there's something missing. I don't have the time to set up a related service, but I would certainly want to contribute in terms of funding in case any existing FOSS related legal entity wanted to expand. If you already have a legal entity, abuse contacts, a team of sysadmins, then it's only half the required effort.

Worse Than FailureBusiness Driven Development

Every now and then, you come across a special project. You know the sort, where some business user decides that they know exactly what they need and exactly how it should be built. They get the buy-in of some C-level shmoe by making sure that their lips have intimate knowledge of said C-level butt. Once they have funding, they have people hired and begin to bark orders.

Toonces, the Driving Cat

About 8 years ago, I had the privilege (read: experience) of being on such a project. When we were given the phase-I specs, all the senior tech people immediately said that there was no way to perform a sane daily backup and data-roll for the next day. The response was "We're not going to worry about backups and daily book-rolls until later". We all just cringed, made like good little HPCs and followed our orders to march onward.

Fast forward about 10 months and the project had a sufficient amount of infrastructure that the business user had no choice but to start thinking about how to close the books each day, and roll things forward for the next day. The solution he came up with was as follows:

   1. Shut down all application servers and the DB
   2. Remove PK/FK relationships and rename all the tables in the database from: xxx to: xxx.yyyymmdd
   3. Create all new empty tables in the database (named: xxx)
   4. Create all the PK/FK relationships, indices, triggers, etc.
   5. Prime the new: xxx tables with data from the: xxx.<prev-business-date> tables
   6. Run a job to mirror the whole thing to offsite DB servers
   7. Run the nightly backups (to tape)
   8. Fire up the DB and application servers

Naturally, all the tech people groaned, mentioning things like history tables, wasted time regenerating indices, nightmares if errors occurred while renaming tables, etc., but they were ignored.

Then it happened. As is usually the case when non-technical people try to do technical designs, the business user found himself designed into a corner.

The legitimate business-need came up to make adjustments to transactions for the current business day after the table-roll to the next business day had completed.

The business user pondered it for a bit and came up with the following:

    1. Shut down all application servers and the DB
    2. Remove PK/FK relationships and rename the post-roll tables of tomorrow from xxx to xxx.tomorrow
    3. Copy SOME of the xxx.yyyymmdd tables from the pre-roll current day back to: xxx
       (leaving the PK's and indices notably absent)
    4. Restart the DB and application servers (with some tables rolled and some not rolled)
    5. Let the users make changes as needed
    6. Shut down the application and DB servers
    7. Manually run ad-hoc SQL to propagate all changes to the xxx.tomorrow table(s)
    8. Rename the: xxx tables to: xxx.yyyymmdd.1 
       (or 2 or 3, depending upon how many times this happened per day)
    9. Rename the xxx.tomorrow tables back to: xxx
   10. Rebuild all the PK/FK relationships, create new indices and re-associate triggers, etc.
   11. Rerun the mirroring and backup scripts
   12. Restart the whole thing

When we pointed out the insanity of all of this, and the extremely high likelihood of any failure in the table-renaming/moving/manual-updating causing an uncorrectable mess that would result in losing the entire day of transactions, we were summarily terminated as our services were no longer required — because they needed people who knew how to get things done.

I'm the first to admit that there are countless things that I do not know, and the older I get, the more that list seems to grow.

I'm also adamant about not making mistakes I know will absolutely blow up in my face - even if it costs me a job. If you need to see inside of a gas tank, throwing a lit match into it will illuminate the inside, but you probably won't like how it works out for you.

Five of us walked out of there, unemployed and laughing hysterically. We went to our favorite watering hole and decided to keep tabs on the place for the inevitable explosion.

Sure enough, 5 weeks after they had junior offshore developers (who didn't have the spine to say "No") build what they wanted, someone goofed in the rollback, and then goofed again while trying to unroll the rollback.

It took them three days to figure out what to restore and in what sequence, then restore it, rebuild everything and manually re-enter all of the transactions since the last backup. During that time, none of their customers got the data files that they were paying for, and had to find alternate sources for the information.

When they finally got everything restored, rebuilt and updated, they went to their customers and said "We're back". In response, the customers told them that they had found other ways of getting the time-sensitive information and no longer required their data product.

Not only weren't the business users fired, but they got big bonuses for handling the disaster that they had created.

[Advertisement] Forget logs. Next time you're struggling to replicate error, crash and performance issues in your apps - Think Raygun! Installs in minutes. Learn more.

,

Cory DoctorowWhere to find me at Phoenix Comics Fest this week

I’m heading to Phoenix Comics Fest tomorrow (going straight to the airport from my daughter’s elementary school graduation) (!), and I’ve got a busy schedule so I thought I’d produce a comprehensive list of the places you can find me in Phoenix:


Wednesday, May 23: Elevenageddon at Poisoned Pen books, 4014 N Goldwater Blvd, Scottsdale, AZ 85251, 7-8PM (“A Multi-Author Sci-Fi Event”)

Thursday, May 24:

Transhumans and Transhumanism in Fiction, North 126AB, with Emily Devenport and Sylvain Neuvel, 12PM-1PM

Prophets of Sci-Fi, North 125AB, with Emily Devenport, Sylvain Neuvel and John Scalzi, 3PM-4PM

Tor Authors Signing, Exhibitor Hall Author Signing area, 4:30PM-5:30PM

Building a Franken-Book, North 126C, with Bob Beard, Joey Eschrich and Ed Finn


Friday, May 25:

Two Truths and a Lie, North 122ABC, with Myke Cole, Emily Devenport, K Arsenault Rivera and John Scalzi, 10:30AM-11:30AM

Solo Presentation, North 122ABC, 1:30PM-2:30PM

Signing, Exhibitor Hall Author Signing Area, 3PM-4PM

Saturday, May 26:

Cory Doctorow & John Scalzi in Conversation about Politics in Sci Fi and Fantasy, North 125AB, 12PM-1PM

Signing, North 124AB, 1:15PM-2:15PM

Rondam RamblingsA quantum mechanics puzzle, part drei

[This post is the third part of a series.  You should read parts one and two before reading this or it won't make any sense.] So we have two more cases to consider: Case 3: we pulse the laser with very short pulses, emitting only one photon at a time.  This is actually not possible with a laser, but it is possible with something like this single-photon-emitting light source (which was actually

Krebs on SecurityMobile Giants: Please Don’t Share the Where

Your mobile phone is giving away your approximate location all day long. This isn’t exactly a secret: It has to share this data with your mobile provider constantly to provide better call quality and to route any emergency 911 calls straight to your location. But now, the major mobile providers in the United States — AT&T, Sprint, T-Mobile and Verizon — are selling this location information to third party companies — in real time — without your consent or a court order, and with apparently zero accountability for how this data will be used, stored, shared or protected.

Think about what’s at stake in a world where anyone can track your location at any time and in real-time. Right now, to be free of constant tracking, the only thing you can do is remove the SIM card from your mobile device and never put it back in unless you want people to know where you are.

It may be tough to put a price on one’s location privacy, but here’s something of which you can be sure: The mobile carriers are selling data about where you are at any time, without your consent, to third-parties for probably far less than you might be willing to pay to secure it.

The problem is that as long as anyone but the phone companies and law enforcement agencies with a valid court order can access this data, it is always going to be at extremely high risk of being hacked, stolen and misused.

Consider just two recent examples. Earlier this month The New York Times reported that a little-known data broker named Securus was selling local police forces around the country the ability to look up the precise location of any cell phone across all of the major U.S. mobile networks. Then it emerged that Securus had been hacked, its database of hundreds of law enforcement officer usernames and passwords plundered. We also found out that Securus’ data was ultimately obtained from a California-based location tracking firm LocationSmart.

On May 17, KrebsOnSecurity broke the news of research by Carnegie Mellon University PhD student Robert Xiao, who discovered that a LocationSmart try-before-you-buy opt-in demo of the company’s technology was wide open — allowing real-time lookups from anyone on anyone’s mobile device — without any sort of authentication, consent or authorization.

Xiao said it took him all of about 15 minutes to discover that LocationSmart’s lookup tool could be used to track the location of virtually any mobile phone user in the United States.

Securus seems equally clueless about protecting the priceless data to which it was entrusted by LocationSmart. Over the weekend KrebsOnSecurity discovered that someone — almost certainly a security professional employed by Securus — has been uploading dozens of emails, PDFs, password lists and other files to Virustotal.com — a service owned by Google that can be used to scan any submitted file against dozens of commercial antivirus tools.

Antivirus companies willingly participate in Virustotal because it gives them early access to new, potentially malicious files being spewed by cybercriminals online. Virustotal users can submit suspicious files of all kinds; in return they’ll see whether any of the 60+ antivirus tools think the file is bad or benign.
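As an aside for readers who want to perform that kind of lookup programmatically: Virustotal also exposes a public API, which this article does not cover. The short sketch below is a hedged illustration only, assuming the v2 file-report endpoint and using a placeholder API key and file hash:

    #!/usr/bin/env python3
    """Hedged illustration: fetch the scan report for a file hash from
    VirusTotal's public v2 API. The API key and hash below are placeholders."""

    import json
    import urllib.parse
    import urllib.request

    API_KEY = "YOUR_VT_API_KEY"   # placeholder -- substitute your own key
    SHA256 = "0" * 64             # placeholder file hash

    params = urllib.parse.urlencode({"apikey": API_KEY, "resource": SHA256})
    url = "https://www.virustotal.com/vtapi/v2/file/report?" + params

    with urllib.request.urlopen(url) as resp:
        report = json.load(resp)

    if report.get("response_code") == 1:
        # 'positives' = engines that flagged the file, 'total' = engines consulted
        print(f"{report['positives']}/{report['total']} engines flagged this file")
    else:
        print("No report found for this resource")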

One basic rule that all Virustotal users need to understand is that any file submitted to Virustotal is also available to customers who purchase access to the service’s file repository. Nevertheless, for the past two years someone at Securus has been submitting a great deal of information about the company’s operations to Virustotal, including copies of internal emails and PDFs about visitation policies at a number of local and state prisons and jails that made up much of Securus’ business.
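
For readers curious about the mechanics, the sketch below shows the general shape of a file-report lookup against VirusTotal's public v2 API, the same data that repository customers can mine at scale. It is purely illustrative: the API key and file hash are placeholders, and nothing in it is drawn from the Securus uploads.

# Minimal sketch of a VirusTotal v2 file-report lookup (illustrative only).
# API_KEY and FILE_HASH are placeholders, not values from the Securus incident.
import requests

API_KEY = "your-virustotal-api-key"
FILE_HASH = "d41d8cd98f00b204e9800998ecf8427e"  # any MD5/SHA-1/SHA-256 will do

resp = requests.get(
    "https://www.virustotal.com/vtapi/v2/file/report",
    params={"apikey": API_KEY, "resource": FILE_HASH},
    timeout=30,
)
report = resp.json()
# "positives" is how many engines flagged the file; "total" is how many scanned it.
print(report.get("positives"), "of", report.get("total"), "engines flagged this file")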

Some of the many, many files uploaded to Virustotal.com over the years by someone at Securus Technologies.

One of the files, submitted on April 27, 2018, is titled “38k user pass microsemi.com – joomla_production.mic_users_blockedData.txt”.  This file includes the names and what appear to be hashed/scrambled passwords of some 38,000 accounts — supposedly taken from Microsemi, a company that’s been called the largest U.S. commercial supplier of military and aerospace semiconductor equipment.

Many of the usernames in that file do map back to names of current and former employees at Microsemi. KrebsOnSecurity shared a copy of the database with Microsemi, but has not yet received a reply. Securus also has not responded to requests for comment.

These files that someone at Securus apparently submitted regularly to Virustotal also provide something of an internal roadmap of Securus’ business dealings, revealing the names and login pages for several police departments and jails across the country, such as the Travis County Jail site’s Web page to access Securus’ data.

Check out the screen shot below. Notice that forgot password link there? Clicking that prompts the visitor to enter their username and to select a “security question” to answer. There are but three questions: “What is your pet’s name? What is your favorite color? And what town were you born in?” There don’t appear to be any limits on the number of times one can attempt to answer a secret question.

Choose wisely and you, too, could gain the ability to look up anyone’s precise mobile location.

Given such robust, state-of-the-art security, how long do you think it would take for someone to figure out how to reset the password for any authorized user at Securus’ Travis County Jail portal?

Yes, companies like Securus and LocationSmart have been careless with securing our prized location data, but why should they care if their paying customers are happy and the real-time data feeds from the mobile industry keep flowing?

No, the real blame for this sorry state of affairs comes down to AT&T, Sprint, T-Mobile and Verizon. T-Mobile was the only one of the four major providers that admitted providing Securus and LocationSmart with the ability to perform real-time location lookups on their customers. The other three carriers declined to confirm or deny that they did business with either company.

As noted in my story last Thursday, LocationSmart included the logos of the four carriers on their home page — in addition to those of several other major firms (that information is no longer available on the company’s site, but it can still be viewed by visiting this historic record of it over at the Internet Archive).

Now, don’t think for a second that these two tiny companies are the only ones with permission from the mobile giants to look up such sensitive information on demand. At a minimum, each one of these companies can in theory resell (or leak) this information and access to others. On 15 May, ZDNet reported that Securus was getting its data from the carriers by going through an intermediary: 3Cinteractive, which was getting it from LocationSmart.

However, it is interesting that the first insight we got that the mobile firms were being so promiscuous with our private location data came in the Times story about law enforcement officials seeking the ability to access any mobile device’s location data in real time.

All technologies are double-edged swords, which means that each can be used both for good and malicious ends. As much as police officers may wish to avoid the hassle and time constraints of having to get a warrant to determine the precise location of anyone they please whenever they wish, those same law enforcement officers should remember that this technology works both ways: It also can just as easily be abused by criminals to track the real-time movements of police and their families, informants, jurors, witnesses and even judges.

Consider the damage that organized crime syndicates — human traffickers, drug smugglers and money launderers — could inflict if armed with an app that displays the precise location of every uniformed officer, whether that officer is 300 feet away or across the country. All because they just happened to know the cell phone number tied to each law enforcement official.

Maybe you have children or grandchildren who — like many of their peers these days — carry a mobile device at all times for safety and for quick communication with parents or guardians. Now imagine that anyone in the world has the instant capability to track where your kid is at any time of day. All they’d need is your kid’s digits.

Maybe you’re the current or former target of a stalker, jilted ex-spouse, or vengeful co-worker. Perhaps you perform sensitive work for the government. All of the above-mentioned parties and many more are put at heightened personal risk by having their real-time location data exposed to commercial third parties.

Some people might never sell their location data for any price: I suspect most of us would like this information always to be private unless and until we change the defaults (either in a binary “on/off” way or app-specific). On the other end of the spectrum there are probably plenty of people who don’t care one way or another provided that sharing their location information brings them some real or perceived financial or commercial benefit.

The point is, for many of us location privacy is priceless because, without it, almost everything else we’re doing to safeguard our privacy goes out the window.

And this sad reality will persist until the mobile providers state unequivocally that they will no longer sell or share customer location data without first having received and validated some kind of legal demand — such as a court-ordered subpoena.

But even that won’t be enough, because companies can and do change their policies all the time without warning or recourse (witness the current reality). It won’t be enough until lawmakers in this Congress step up and do their jobs — to prevent the mobile providers from selling our last remaining bastion of privacy in the free world to third party companies who simply can’t or won’t keep it secure.

The next post in this series will examine how we got here, and what Congress and federal regulators have done and might do to rectify the situation.

Update, May 23, 12:34 am ET: Securus responded with the following comment:

“Securus Technologies does not use the Google tool, Virustotal.com as part of our normal business practice for confidential information. We use other antivirus tools that meet our high standards for security and reliability. Importantly, Virustotal.com will associate a file with a URL or domain merely because the URL or domain is included in the file. Our initial review concluded that the overwhelming majority of files that Virustotal.com associates with www.securustech.net were not uploaded by Securus. Our review also showed that a few employees accessed the site in an abundance of caution to verify that outside emails were virus free. As a result, many of the files indicated in your article were not directly uploaded by Securus and/or are not Securus documents. A vast majority of files merely mention our URL. Our review also determined that the Microsemi file mentioned in your article is only associated with Securus because two Securus employee email addresses were included in the file, and not because Securus uploaded the file.”

“Because we take the security of information very seriously, we are continuing to look into this matter to ensure proper procedures are followed to protect company and client information. We will update you if we learn that procedures were not followed.”

CryptogramAnother Spectre-Like CPU Vulnerability

Google and Microsoft researchers have disclosed another Spectre-like CPU side-channel vulnerability, called "Speculative Store Bypass." Like the others, the fix will slow the CPU down.

The German tech site Heise reports that more are coming.

I'm not surprised. Writing about Spectre and Meltdown in January, I predicted that we'll be seeing a lot more of these sorts of vulnerabilities.

Spectre and Meltdown are pretty catastrophic vulnerabilities, but they only affect the confidentiality of data. Now that they -- and the research into the Intel ME vulnerability -- have shown researchers where to look, more is coming -- and what they'll find will be worse than either Spectre or Meltdown.

I still predict that we'll be seeing lots more of these in the coming months and years, as we learn more about this class of vulnerabilities.

Cory DoctorowThe paperback of Walkaway is out today, along with reissues of all my adult novels in matching covers!

Today marks the release of the paperback of Walkaway, along with reissues of my five other adult novels, all in matching covers designed by the incredible Will Stahle (and if ebooks are your thing, check out my fair-trade ebook store, where you can get all my audiobooks and ebooks sold on the same terms as physical editions, with no DRM and no license agreements!).

Worse Than FailureRepresentative Line: Aggregation of Concatenation

A few years back, JSON crossed the “really good hammer” threshold. It has a good balance of being human readable, relatively compact, and simple to parse. It thus has become the go-to format for everything. “KoHHeKT” inherited a service which generates some JSON from an in-memory tree structure. This is exactly the kind of situation where JSON shines, and it would be trivial to employ one of the many JSON serialization libraries available for C# to generate JSON on demand.

Orrrrr… you could use LINQ aggregations, string formatting and trims…

private static string GetChildrenValue(int childrenCount)
{
        string result = Enumerable.Range(0, childrenCount).Aggregate("", (s, i) => s + $"\"{i}\",");
        return $"[{result.TrimEnd(',')}]";
}

Now, the concatenation and trims and all of that is bad. But I’m mostly stumped by what this method is supposed to accomplish. It’s called GetChildrenValue, but it doesn’t return a value- it returns an array of the numbers from 0 up to childrenCount. Well, not an array, obviously- a string that can be parsed into an array. And they’re not actually numbers- they’re enclosed in quotes, so it’s actually text, not that any JavaScript client would care about the difference.

Why? How is this consumed? KoHHeKT couldn’t tell us, and we certainly aren’t going to figure it out from this block. But it is representative of the entire JSON-constructing library- aggregations and concatenations with minimal exception handling, and no way to confirm that it outputs syntactically valid JSON because nothing sanitizes its inputs.
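
For comparison, the whole method collapses to a single serializer call. The sketch below uses Python's standard json module purely to show the shape of the output; in the C# service in question, a library call such as Json.NET's JsonConvert.SerializeObject over the same range would do the equivalent job in one line.

import json

def get_children_value(children_count):
    # Same output as the C# GetChildrenValue: a JSON array of stringified indices.
    return json.dumps([str(i) for i in range(children_count)], separators=(",", ":"))

print(get_children_value(3))  # prints ["0","1","2"]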


Planet Linux AustraliaOpenSTEM: Nellie Bly – investigative journalist extraordinaire!

May is the birth month of Elizabeth Cochrane Seaman, better known as “Nellie Bly“. Here at OpenSTEM, we have a great fondness for Nellie Bly – an intrepid 19th century journalist and explorer, who emulated Jules Verne’s fictional character, Phileas Fogg, in racing around the world in less than 80 days in 1889/1890. Not only […]

,

CryptogramJapan's Directorate for Signals Intelligence

The Intercept has a long article on Japan's equivalent of the NSA: the Directorate for Signals Intelligence. Interesting, but nothing really surprising.

The directorate has a history that dates back to the 1950s; its role is to eavesdrop on communications. But its operations remain so highly classified that the Japanese government has disclosed little about its work -- even the location of its headquarters. Most Japanese officials, except for a select few of the prime minister's inner circle, are kept in the dark about the directorate's activities, which are regulated by a limited legal framework and not subject to any independent oversight.

Now, a new investigation by the Japanese broadcaster NHK -- produced in collaboration with The Intercept -- reveals for the first time details about the inner workings of Japan's opaque spy community. Based on classified documents and interviews with current and former officials familiar with the agency's intelligence work, the investigation shines light on a previously undisclosed internet surveillance program and a spy hub in the south of Japan that is used to monitor phone calls and emails passing across communications satellites.

The article includes some new documents from the Snowden archive.

Worse Than FailureThe New Guy (Part I)

After working mind-numbing warehouse jobs for several years, Jesse was ready for a fresh start in Information Technology. The year 2015 brought him a newly-minted Computer and Networking Systems degree from Totally Legit Technical Institute. It would surely help him find gainful employment; all he had to do was find the right opportunity.

Seeking the right opportunity soon turned into seeking any opportunity. Jesse came across a posting for an IT Systems Administrator that piqued his interest, but the requirements and responsibilities left a lot to be desired. They sought someone with C++ and Microsoft Office experience who would perform "General IT Admin Work" and "Other Duties as assigned". None of those things seemed to fit together, but he applied anyway.

During the interview, it became clear that Jesse and this small company were essentially in the same boat. While he was seeking any IT employment, they were seeking any IT Systems admin. Their lone admin recently departed unexpectedly and barely left any documentation of what he actually did. Despite several red flags about the position, he decided to accept anyway. Jesse was assured of little oversight and freedom to do things his way - an extreme rarity for a young IT professional.

Jesse got to work on his first day determined to map out the minefield he was walking into. The notepad with all the admin passwords his predecessor left behind was useful for logging in to things. Over the next few days, he prodded through the network topology to uncover all the horrors that lay within. Among them:

  • The front-end of their most-used internal application was using Access 97 that interfaced with a SQL Server 2008 machine
  • The desktop computers were all using Windows XP (Half of them upgraded from NT 4.0)
  • The main file server and domain controller were still running on NT 4.0
  • There were two other mystery servers that didn't seem to perform any discernible function. Jesse confirmed this by unplugging them and leaving them off

While sorting through the tangled mess he inherited, Jesse got a high priority email from Ralph, the ancient contracted Networking Admin whom he hadn't yet had the pleasure of meeting. "U need to fix the website. FTP not working." While Ralph wasn't one for details, Jesse did learn something from him - they had a website, it used FTP for something, and it was on him to fix it.

Jesse scanned the magic password notepad and came across something called "Website admin console". He decided to give that a shot, only to be told the password was expired and needed to be reset. Unfortunately the reset email was sent to his predecessor's deactivated account. He replied to Ralph telling him he wasn't able to get to the admin console to fix anything.

All that he got in return was a ticket submitted by a customer explaining the problem and the IP address of the FTP server. It seemed they were expecting to be able to fetch PDF reports from an FTP location and were no longer able to. He went to the FTP server and didn't find anything out of the ordinary, other than the fact that it should really be using SFTP. Despite the lack of security, something was still blocking the client from accessing it.

Jesse suddenly had an idea born of inexperience for how to fix the problem. When he was having connectivity issues on his home WiFi network, all he had to do was reboot the router and it would work! That same logic could surely apply here. After tracking down the router, he found the outlet wasn't easily accessible. So he decided to hit the (factory) Reset button on the back.

Upon returning to his desk, he was greeted by nearly every user in their small office. Nobody's computer worked any more. After turning a deep shade of red, Jesse assured everyone he would fix it. He remembered something from TL Tech Institute called DNS that was supposed to let computers talk to each other. He went around and set everyone's DNS server to 192.168.1.0, the address they always used in school. It didn't help.

Jesse put in a call to Ralph and explained the situation. All he got was a lecture from the gravelly-voiced elder on the other end, "You darn kids! Why don't ye just leave things alone! I've been working networks since before there were networks! Give me a bit, I'll clean up yer dang mess!" Within minutes, Ralph managed to restore connectivity to the office. Jesse checked his DNS settings out of curiosity to find that the proper setting was 2.2.2.0.

The whole router mishap made him completely forget about the original issue - the client's FTP. Before he could start looking at it again, Ralph forwarded him an email from the customer thanking them for getting their reports back. Jesse had no idea how or why that was working now, but he was willing to accept the praise. He solved his first problem, but the fun was just beginning...

To be continued...


,

Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV June 2018 Workshop: Being an Acrobat: Linux and PDFs

Jun 16 2018 12:30
Jun 16 2018 16:30
Location: 
Infoxchange, 33 Elizabeth St. Richmond

Portable Document Format (PDF) is a file format first specified by Adobe Systems in 1993. It was a proprietary format until it was released as an open standard on July 1, 2008, and published by the International Organization for Standardization.

This workshop presentation will cover various ways that PDF files can be efficiently manipulated in Linux and other free software, manipulations that may not be easy in proprietary operating systems or applications.

The meeting will be held at Infoxchange, 33 Elizabeth St. Richmond 3121.  Late arrivals please call (0421) 775 358 for access to the venue.

LUV would like to acknowledge Infoxchange for the venue.

Linux Users of Victoria is a subcommittee of Linux Australia.

June 16, 2018 - 12:30


Planet Linux AustraliaLinux Users of Victoria (LUV) Announce: LUV June 2018 Main Meeting: VoxxedDays conference report

Jun 5 2018 18:30
Jun 5 2018 20:30
Location: 
Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

PLEASE NOTE NEW LOCATION

6:30 PM to 8:30 PM Tuesday, June 5, 2018
Meeting Room 3, Kathleen Syme Library, 251 Faraday Street Carlton VIC 3053

Speakers:

  • Andrew Pam, Voxxed Days conference report

Andrew will report on a conference he recently attended, covering Language-Level Virtualization with GraalVM, Aggressive Web Apps and more.

Many of us like to go for dinner nearby after the meeting, typically at Trotters Bistro in Lygon St.  Please let us know if you'd like to join us!

Linux Users of Victoria is a subcommittee of Linux Australia.

June 5, 2018 - 18:30

,

CryptogramFriday Squid Blogging: Flying Squid

Flying squid are real.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Krebs on SecurityT-Mobile Employee Made Unauthorized ‘SIM Swap’ to Steal Instagram Account

T-Mobile is investigating a retail store employee who allegedly made unauthorized changes to a subscriber’s account in an elaborate scheme to steal the customer’s three-letter Instagram username. The modifications, which could have let the rogue employee empty bank accounts associated with the targeted T-Mobile subscriber, were made even though the victim customer already had taken steps recommended by the mobile carrier to help minimize the risks of account takeover. Here’s what happened, and some tips on how you can protect yourself from a similar fate.

Earlier this month, KrebsOnSecurity heard from Paul Rosenzweig, a 27-year-old T-Mobile customer from Boston who had his wireless account briefly hijacked. Rosenzweig had previously adopted T-Mobile’s advice to customers about blocking mobile number port-out scams, an increasingly common scheme in which identity thieves armed with a fake ID in the name of a targeted customer show up at a retail store run by a different wireless provider and ask that the number be transferred to the competing mobile company’s network.

So-called “port out” scams allow crooks to intercept your calls and messages while your phone goes dark. Porting a number to a new provider shuts off the phone of the original user, and forwards all calls to the new device. Once in control of the mobile number, thieves who have already stolen a target’s password(s) can request any second factor that is sent to the newly activated device, such as a one-time code sent via text message or an automated call that reads the one-time code aloud.

In this case, however, the perpetrator didn’t try to port Rosenzweig’s phone number: Instead, the attacker called multiple T-Mobile retail stores within an hour’s drive of Rosenzweig’s home address until he succeeded in convincing a store employee to conduct what’s known as a “SIM swap.”

A SIM swap is a legitimate process by which a customer can request that a new SIM card (the tiny, removable chip in a mobile device that allows it to connect to the provider’s network) be added to the account. Customers can request a SIM swap when their existing SIM card has been damaged, or when they are switching to a different phone that requires a SIM card of another size.

However, thieves and other ne’er-do-wells can abuse this process by posing as a targeted mobile customer or technician and tricking employees at the mobile provider into swapping in a new SIM card for that customer on a device that they control. If successful, the SIM swap accomplishes more or less the same result as a number port out (at least in the short term) — effectively giving the attackers access to any text messages or phone calls that are sent to the target’s mobile account.

Rosenzweig said the first inkling he had that something wasn’t right with his phone was on the evening of May 2, 2018, when he spotted an automated email from Instagram. The message said the email address tied to the three-letter account he’d had on the social media platform for seven years — instagram.com/par — had been changed. He quickly logged in to his Instagram account, changed his password and then reverted the email on the account back to his original address.

By this time, the SIM swap conducted by the attacker had already been carried out, although Rosenzweig said he didn’t notice his phone displaying zero bars and no connection to T-Mobile at the time because he was at home and happily surfing the Web on his device using his own wireless network.

The following morning, Rosenzweig received another notice — this one from Snapchat — stating that the password for his account there (“p9r”) had been changed. He subsequently reset the Instagram password and then enabled two factor authentication on his Snapchat account.

“That was when I realized my phone had no bars,” he recalled. “My phone was dead. I couldn’t even call 611,” [the mobile short number that all major wireless providers make available to reach their customer service departments].

It appears that the perpetrator of the SIM swap abused not only internal knowledge of T-Mobile’s systems, but also a lax password reset process at Instagram. The social network allows users to enable notifications on their mobile phone when password resets or other changes are requested on the account.

But this isn’t exactly two-factor authentication because it also lets users reset their passwords via their mobile account by requesting a password reset link to be sent to their mobile device. Thus, if someone is in control of your mobile phone account, they can reset your Instagram password (and probably a bunch of other types of accounts).

Rosenzweig said even though he was able to reset his Instagram password and restore his old email address tied to the account, the damage was already done: All of his images and other content he’d shared on Instagram over the years was still tied to his account, but the attacker had succeeded in stealing his “par” username, leaving him with a slightly less sexy “par54384321,” (apparently chosen for him at random by either Instagram or the attacker).

As I wrote in November 2015, short usernames are something of a prestige or status symbol for many youngsters, and some are willing to pay surprising sums of money for them. Known as “OG” (short for “original” and also “original gangster”) in certain circles online, these can be usernames for virtually any service, from email accounts at Webmail providers to social media services like Instagram, Snapchat, Twitter and YouTube.

People who traffic in OG accounts prize them because they can make the account holder appear to have been a savvy, early adopter of the service before it became popular and before all of the short usernames were taken.

Rosenzweig said a friend helped him work with T-Mobile to regain control over his account and deactivate the rogue SIM card. He said he’s grateful the attackers who hijacked his phone for a few hours didn’t try to drain bank accounts that also rely on his mobile device for authentication.

“It definitely could have been a lot worse given the access they had,” he said.

But throughout all of this ordeal, it struck Rosenzweig as odd that he never once received an email from T-Mobile stating that his SIM card had been swapped.

“I’m a software engineer and I thought I had pretty good security habits to begin with,” he said. “I never re-use passwords, and it’s hard to see what I could have done differently here. The flaw here was with T-Mobile mostly, but also with Instagram. It seems like by having the ability to change one’s [Instagram] password by email or by mobile alone negates the second factor and it becomes either/or from the attackers point of view.”

Sources close to the investigation say T-Mobile is investigating a current or former employee as the likely culprit. The mobile company also acknowledged that it does not currently send customers an email to the email address on file when SIM swaps take place. A T-Mobile spokesperson said the company was considering changing the current policy, which sends the customer a text message to alert them about the SIM swap.

“We take our customers privacy and security very seriously and we regret that this happened,” the company said in a written statement. “We notify our customers immediately when SIM changes occur, but currently we do not send those notifications via email. We are actively looking at ways to improve our processes in this area.”

In summary, when a SIM swap happens on a T-Mobile account, T-Mobile will send a text message to the phone equipped with the new SIM card. But obviously that does not help someone who is the target of a SIM swap scam.

As we can see, just taking T-Mobile’s advice to place a personal identification number (PIN) on your account to block number port out scams does nothing to flag one’s account to make it harder to conduct SIM swap scams.

Rather, T-Mobile says customers need to call in to the company’s customer support line and place a separate “SIM lock” on their account, which can only be removed if the customer shows up at a retail store with ID (or, presumably, anyone with a fake ID who also knows the target’s Social Security Number and date of birth).

I checked with the other carriers to see if they support locking the customer’s current SIM to the account on file. I suspect they do, and will update this piece when/if I hear back from them. In the meantime, it might be best just to phone up your carrier and ask.

Please note that a SIM lock on your mobile account is separate from a SIM PIN that you can set via your mobile phone’s operating system. A SIM PIN is essentially an additional layer of physical security that locks the current SIM to your device, requiring you to input a special PIN when the device is powered on in order to call, text or access your data plan on your phone. This feature can help block thieves from using your phone or accessing your data if you lose your phone, but it won’t stop thieves from physically swapping in their own SIM card.

iPhone users can follow these instructions to set or change a device’s SIM PIN. Android users can see this page. You may need to enter a carrier-specific default PIN before being able to change it. By default, the SIM PIN for all Verizon and AT&T phones is “1111;” for T-Mobile and Sprint it should default to “1234.”

Be advised, however, that if you forget your SIM PIN and enter the wrong PIN too many times, you may end up having to contact your wireless carrier to obtain a special “personal unlocking key” (PUK).

At the very least, if you haven’t already done so please take a moment to place a port block PIN on your account. This story explains exactly how to do that.

Also, consider reviewing twofactorauth.org to see whether you are taking full advantage of any multi-factor authentication offerings so that your various accounts can’t be trivially hijacked if an attacker happens to guess, steal, phish or otherwise know your password.

One-time login codes produced by mobile apps such as Authy, Duo or Google Authenticator are more secure than one-time codes sent via automated phone call or text — mainly because crooks can’t steal these codes if they succeed in porting your mobile number to another service or by executing a SIM swap on your mobile account [full disclosure: Duo is an advertiser on this blog].
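
To make the difference concrete, here is a minimal sketch of how app-based one-time codes work, using the pyotp library as an example. The shared secret is provisioned once (typically via a QR code) and lives only on your device and the service's servers, so taking over your phone number gains an attacker nothing; the values below are placeholders.

import pyotp

# The secret never travels over SMS, so a SIM swap or port-out can't intercept it.
secret = pyotp.random_base32()   # placeholder; a real service generates and stores this
totp = pyotp.TOTP(secret)

code = totp.now()                # the six-digit code your authenticator app displays
print("Current code:", code)
print("Valid right now?", totp.verify(code))  # True while the 30-second window is open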

Update, May 19, 3:16 pm ET: Rosenzweig reports that he has now regained control over his original Instagram account name, “par.” Good on Instagram for fixing this, but it’s not clear the company has a real strong reporting process for people who find their usernames are hijacked.

CryptogramMaliciously Changing Someone's Address

Someone changed the address of UPS corporate headquarters to his own apartment in Chicago. The company discovered it three months later.

The problem, of course, is that in the US there isn't any authentication of change-of-address submissions:

According to the Postal Service, nearly 37 million change-of-address requests -- known as PS Form 3575 -- were submitted in 2017. The form, which can be filled out in person or online, includes a warning below the signature line that "anyone submitting false or inaccurate information" could be subject to fines and imprisonment.

To cut down on possible fraud, post offices send a validation letter to both an old and new address when a change is filed. The letter includes a toll-free number to call to report anything suspicious.

Each year, only a tiny fraction of the requests are ever referred to postal inspectors for investigation. A spokeswoman for the U.S. Postal Inspection Service could not provide a specific number to the Tribune, but officials have previously said that the number of change-of-address investigations in a given year totals 1,000 or fewer typically.

While fraud involving change-of-address forms has long been linked to identity thieves, the targets are usually unsuspecting individuals, not massive corporations.

Worse Than FailureError'd: Perfectly Technical Difficulties

David G. wrote, "For once, I'm glad to see technical issues being presented in a technical way."

 

"Springer has a very interesting pricing algorithm for downloading their books: buy the whole book at some 10% of the sum of all its individual chapters," writes Bernie T.

 

"While browsing PlataGO! forums, I noticed the developers are erasing technical debt...and then some," Dariusz J. writes.

 

Bill K. wrote, "Hooray! It's an 'opposite sale' on Adidas' website!"

 

"A trail camera disguised at a salad bowl? Leave that at an all you can eat buffet and it'll blend right in," wrote Paul T.

 

Brian writes, "Amazon! That's not how you do math!"

 


Planet Linux AustraliaMichael Still: How to maintain a local mirror of github repositories


Similarly to yesterday’s post about mirroring ONAP’s git, I also want to mirror all of the git repositories for certain github projects. In this specific case, all of the Kubernetes repositories.

So once again, here is a script based on something Tony Breeds and I cooked up a long time ago for OpenStack…

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

from github import Github as github


GITHUB_ACCESS_TOKEN = '...use yours!...'


def get_github_projects():
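    # Query the GitHub API (via PyGithub) and yield a (base_url, full_name) pair
    # for every repository owned by the accounts listed below.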
    g = github(GITHUB_ACCESS_TOKEN)
    for user in ['kubernetes']:
        for repo in g.get_user(login=user).get_repos():
            yield('https://github.com', repo.full_name)


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = []
for res in list(get_github_projects()):
    if len(res) == 3:
        projects.append(res)
    else:
        projects.append((res[0], res[1], res[1]))
    
random.shuffle(projects)

for base_url, project, subdir in projects:
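    # Existing mirrors get their remotes updated in place; anything new is
    # cloned as a bare mirror further down.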
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(starting_dir)

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

This script is basically the same as the ONAP one, but it understands how to get a project list from github and doesn’t need to handle ONAP’s slightly strange repository naming scheme.

I hope it is useful to someone other than me.


The post How to maintain a local mirror of github repositories appeared first on Made by Mikal.

,

Krebs on SecurityTracking Firm LocationSmart Leaked Location Data for Customers of All Major U.S. Mobile Carriers Without Consent in Real Time Via Its Web Site

LocationSmart, a U.S. based company that acts as an aggregator of real-time data about the precise location of mobile phone devices, has been leaking this information to anyone via a buggy component of its Web site — without the need for any password or other form of authentication or authorization — KrebsOnSecurity has learned. The company took the vulnerable service offline early this afternoon after being contacted by KrebsOnSecurity, which verified that it could be used to reveal the location of any AT&T, Sprint, T-Mobile or Verizon phone in the United States to an accuracy of within a few hundred yards.

On May 10, The New York Times broke the news that a different cell phone location tracking company called Securus Technologies had been selling or giving away location data on customers of virtually any major mobile network provider to a sheriff’s office in Mississippi County, Mo.

On May 15, ZDnet.com ran a piece saying that Securus was getting its data through an intermediary — Carlsbad, CA-based LocationSmart.

Wednesday afternoon Motherboard published another bombshell: A hacker had broken into the servers of Securus and stolen 2,800 usernames, email addresses, phone numbers and hashed passwords of authorized Securus users. Most of the stolen credentials reportedly belonged to law enforcement officers across the country — stretching from 2011 up to this year.

Several hours before the Motherboard story went live, KrebsOnSecurity heard from Robert Xiao, a security researcher at Carnegie Mellon University who’d read the coverage of Securus and LocationSmart and had been poking around a demo tool that LocationSmart makes available on its Web site for potential customers to try out its mobile location technology.

LocationSmart’s demo is a free service that allows anyone to see the approximate location of their own mobile phone, just by entering their name, email address and phone number into a form on the site. LocationSmart then texts the phone number supplied by the user and requests permission to ping that device’s nearest cellular network tower.

Once that consent is obtained, LocationSmart texts the subscriber their approximate longitude and latitude, plotting the coordinates on a Google Street View map. [It also potentially collects and stores a great deal of technical data about your mobile device. For example, according to their privacy policy that information “may include, but is not limited to, device latitude/longitude, accuracy, heading, speed, and altitude, cell tower, Wi-Fi access point, or IP address information”].

But according to Xiao, a PhD candidate at CMU’s Human-Computer Interaction Institute, this same service failed to perform basic checks to prevent anonymous and unauthorized queries. Translation: Anyone with a modicum of knowledge about how Web sites work could abuse the LocationSmart demo site to figure out how to conduct mobile number location lookups at will, all without ever having to supply a password or other credentials.

“I stumbled upon this almost by accident, and it wasn’t terribly hard to do,” Xiao said. “This is something anyone could discover with minimal effort. And the gist of it is I can track most peoples’ cell phone without their consent.”

Xiao said his tests showed he could reliably query LocationSmart’s service to ping the cell phone tower closest to a subscriber’s mobile device. Xiao said he checked the mobile number of a friend several times over a few minutes while that friend was moving and found he was then able to plug the coordinates into Google Maps and track the friend’s directional movement.

“This is really creepy stuff,” Xiao said, adding that he’d also successfully tested the vulnerable service against one Telus Mobility mobile customer in Canada who volunteered to be found.

Before LocationSmart’s demo was taken offline today, KrebsOnSecurity pinged five different trusted sources, all of whom gave consent to have Xiao determine the whereabouts of their cell phones. Xiao was able to determine within a few seconds of querying the public LocationSmart service the near-exact location of the mobile phone belonging to all five of my sources.

LocationSmart’s demo page.

One of those sources said the longitude and latitude returned by Xiao’s queries came within 100 yards of their then-current location. Another source said the location found by the researcher was 1.5 miles away from his current location. The remaining three sources said the location returned for their phones was off by roughly 1/5 to 1/3 of a mile at the time.

Reached for comment via phone, LocationSmart Founder and CEO Mario Proietti said the company was investigating.

“We don’t give away data,” Proietti said. “We make it available for legitimate and authorized purposes. It’s based on legitimate and authorized use of location data that only takes place on consent. We take privacy seriously and we’ll review all facts and look into them.”

LocationSmart’s home page features the corporate logos of all four of the major wireless providers, as well as companies like Google, Neustar, ThreatMetrix, and U.S. Cellular. The company says its technologies help businesses keep track of remote employees and corporate assets, and that it helps mobile advertisers and marketers serve consumers with “geo-relevant promotions.”

LocationSmart’s home page lists many partners.

It’s not clear exactly how long LocationSmart has offered its demo service or for how long the service has been so permissive; this link from archive.org suggests it dates back to at least January 2017. This link from The Internet Archive suggests the service may have existed under a different company name — loc-aid.com — since mid-2011, but it’s unclear if that service used the same code. Loc-aid.com is one of four other sites hosted on the same server as locationsmart.com, according to Domaintools.com.

LocationSmart’s privacy policy says the company has security measures in place…”to protect our site from the loss or misuse of information that we have collected. Our servers are protected by firewalls and are physically located in secure data facilities to further increase security. While no computer is 100% safe from outside attacks, we believe that the steps we have taken to protect your personal information drastically reduce the likelihood of security problems to a level appropriate to the type of information involved.”

But these assurances may ring hollow to anyone with a cell phone who’s concerned about having their physical location revealed at any time. The component of LocationSmart’s Web site that can be abused to look up mobile location data at will is an insecure “application programming interface” or API — an interactive feature designed to display data in response to specific queries by Web site visitors.

Although LocationSmart’s demo page required users to consent to having their phone located by the service, LocationSmart apparently did nothing to prevent or authenticate direct interaction with the API itself.

API authentication weaknesses are not uncommon, but they can lead to the exposure of sensitive data on a great many people in a short period of time. In April 2018, KrebsOnSecurity broke the story of an API at the Web site of fast-casual bakery chain PaneraBread.com that exposed the names, email and physical addresses, birthdays and last four digits of credit cards on file for tens of millions of customers who’d signed up for an account at PaneraBread to order food online.
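
To illustrate the class of flaw, the sketch below shows the sort of minimal checks such an endpoint could enforce: a server-issued API key plus a recorded, per-number consent check before any lookup is answered. This is a generic illustration (Flask, with made-up keys and a placeholder lookup), not a description of LocationSmart's actual code.

from flask import Flask, request, abort, jsonify

app = Flask(__name__)

API_KEYS = {"demo-key-123"}                       # hypothetical issued API keys
CONSENTED = {("demo-key-123", "+15551234567")}    # (key, number) pairs with recorded consent

@app.route("/locate")
def locate():
    key = request.headers.get("X-Api-Key")
    number = request.args.get("msisdn", "")
    if key not in API_KEYS:
        abort(401)        # no anonymous lookups at all
    if (key, number) not in CONSENTED:
        abort(403)        # no lookup without the subscriber's recorded consent
    # The real carrier-side location query is deliberately omitted from this sketch.
    return jsonify({"msisdn": number, "location": "withheld in this sketch"})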

In a May 9 letter sent to the top four wireless carriers and to the U.S. Federal Communications Commission in the wake of revelations about Securus’ alleged practices, Sen. Ron Wyden (D-Ore.) urged all parties to take “proactive steps to prevent the unrestricted disclosure and potential abuse of private customer data.”

“Securus informed my office that it purchases real-time location information on AT&T’s customers — through a third party location aggregator that has a commercial relationship with the major wireless carriers — and routinely shares that information with its government clients,” Wyden wrote. “This practice skirts wireless carrier’s legal obligation to be the sole conduit by which the government may conduct surveillance of Americans’ phone records, and needlessly exposes millions of Americans to potential abuse and unchecked surveillance by the government.”

Securus, which reportedly gets its cell phone location data from LocationSmart, told The New York Times that it requires customers to upload a legal document — such as a warrant or affidavit — and to certify that the activity was authorized. But in his letter, Wyden said “senior officials from Securus have confirmed to my office that it never checks the legitimacy of those uploaded documents to determine whether they are in fact court orders and has dismissed suggestions that it is obligated to do so.”

Securus did not respond to requests for comment.

THE CARRIERS RESPOND

It remains unclear what, if anything, AT&T, Sprint, T-Mobile and Verizon plan to do about any of this. A third-party firm leaking customer location information not only would almost certainly violate each mobile provider’s own stated privacy policies, but the real-time exposure of this data poses serious privacy and security risks for virtually all U.S. mobile customers (and perhaps beyond, although all my willing subjects were inside the United States).

None of the major carriers would confirm or deny a formal business relationship with LocationSmart, despite LocationSmart listing them each by corporate logo on its Web site.

AT&T spokesperson Jim Greer said AT&T does not permit the sharing of location information without customer consent or a demand from law enforcement.

“If we learn that a vendor does not adhere to our policy we will take appropriate action,” Greer said.

T-Mobile referred me to their privacy policy, which says T-Mobile follows the “best practices” document (PDF) for subscriber location data as laid out by the CTIA, the international association for the wireless telecommunications industry.

A T-Mobile spokesperson said that after receiving Sen. Wyden’s letter, the company quickly shut down any transaction of customer location data to Securus and LocationSmart.

“We take the privacy and security of our customers’ data very seriously,” the company said in a written statement. “We have addressed issues that were identified with Securus and LocationSmart to ensure that such issues were resolved and our customers’ information is protected. We continue to investigate this.”

Verizon also referred me to their privacy policy.

Sprint officials shared the following statement:

“Protecting our customers’ privacy and security is a top priority, and we are transparent about our Privacy Policy. To be clear, we do not share or sell consumers’ sensitive information to third parties. We share personally identifiable geo-location information only with customer consent or in response to a lawful request such as a validated court order from law enforcement.”

“We will answer the questions raised in Sen. Wyden’s letter directly through appropriate channels. However, it is important to note that Sprint’s relationship with Securus does not include data sharing, and is limited to supporting efforts to curb unlawful use of contraband cellphones in correctional facilities.”

WHAT NOW?

Stephanie Lacambra, a staff attorney with the nonprofit Electronic Frontier Foundation, said that wireless customers in the United States cannot opt out of location tracking by their own mobile providers. For starters, carriers constantly use this information to provide more reliable service to their customers. Also, by law wireless companies need to be able to ascertain at any time the approximate location of a customer’s phone in order to comply with emergency 911 regulations.

But unless and until Congress and federal regulators make it more clear how and whether customer location information can be shared with third-parties, mobile device customers may continue to have their location information potentially exposed by a host of third-party companies, Lacambra said.

“This is precisely why we have lobbied so hard for robust privacy protections for location information,” she said. “It really should be only that law enforcement is required to get a warrant for this stuff, and that’s the rule we’ve been trying to push for.”

Chris Calabrese is vice president of the Center for Democracy & Technology, a policy think tank in Washington, D.C. Calabrese said the current rules about mobile subscriber location information are governed by the Electronic Communications Privacy Act (ECPA), a law passed in 1986 that hasn’t been substantially updated since.

“The law here is really out of date,” Calabrese said. “But I think any processes that involve going to third parties who don’t verify that it’s a lawful or law enforcement request — and that don’t make sure the evidence behind that request is legitimate — are hugely problematic and they’re major privacy violations.”

“I would be very surprised if any mobile carrier doesn’t think location information should be treated sensitively, and I’m sure none of them want this information to be made public,” Calabrese continued. “My guess is the carriers are going to come down hard on this, because it’s sort of their worst nightmare come true. We all know that cell phones are portable tracking devices. There’s a sort of an implicit deal where we’re okay with it because we get lots of benefits from it, but we all also assume this information should be protected. But when it isn’t, that presents a major problem and I think these examples would be a spur for some sort of legislative intervention if they weren’t fixed very quickly.”

For his part, Xiao says we’re likely to see more leaks from location tracking companies like Securus and LocationSmart as long as the mobile carriers are providing third party companies any access to customer location information.

“We’re going to continue to see breaches like this happen until access to this data can be much more tightly controlled,” he said.

Sen. Wyden issued a statement on Friday in response to this story:

“This leak, coming only days after the lax security at Securus was exposed, demonstrates how little companies throughout the wireless ecosystem value Americans’ security. It represents a clear and present danger, not just to privacy but to the financial and personal security of every American family. Because they value profits above the privacy and safety of the Americans whose locations they traffic in, the wireless carriers and LocationSmart appear to have allowed nearly any hacker with a basic knowledge of websites to track the location of any American with a cell phone.”

“The threats to Americans’ security are grave – a hacker could have used this site to know when you were in your house so they would know when to rob it. A predator could have tracked your child’s cell phone to know when they were alone. The dangers from LocationSmart and other companies are limitless. If the FCC refuses to act after this revelation then future crimes against Americans will be on the commissioners’ heads.”

 

Sen. Mark Warner (D-Va.) also issued a statement:

“This is one of many developments over the last year indicating that consumers are really in the dark on how their data is being collected and used,” Sen. Warner said. “It’s more evidence that we need 21st century rules that put users in the driver’s seat when it comes to the ways their data is used.”

In a statement provided to KrebsOnSecurity on Friday, LocationSmart said:

“LocationSmart provides an enterprise mobility platform that strives to bring secure operational efficiencies to enterprise customers. All disclosure of location data through LocationSmart’s platform relies on consent first being received from the individual subscriber. The vulnerability of the consent mechanism recently identified by Mr. Robert Xiao, a cybersecurity researcher, on our online demo has been resolved and the demo has been disabled. We have further confirmed that the vulnerability was not exploited prior to May 16th and did not result in any customer information being obtained without their permission.”

“On that day as many as two dozen subscribers were located by Mr. Xiao through his exploitation of the vulnerability. Based on Mr. Xiao’s public statements, we understand that those subscribers were located only after Mr. Xiao personally obtained their consent. LocationSmart is continuing its efforts to verify that not a single subscriber’s location was accessed without their consent and that no other vulnerabilities exist. LocationSmart is committed to continuous improvement of its information privacy and security measures and is incorporating what it has learned from this incident into that process.”

It’s not clear who LocationSmart considers “customers” in the phrase, “did not result in any customer information being obtained without their permission,” since anyone whose location was looked up through abuse of the service’s buggy API could not fairly be considered a “customer.”

Update, May 18, 11:31 AM ET: Added comments from Sens. Wyden and Warner, as well as updated statements from LocationSmart and T-Mobile.

Sociological Images“I Felt Like Destroying Something Beautiful”

When I was eight, my brother and I built a card house. He was obsessed with collecting baseball cards and had amassed thousands, taking up nearly every available corner of his childhood bedroom. After watching a particularly gripping episode of The Brady Bunch, in which Marsha and Greg settled a dispute by building a card house, we decided to stack the cards in our favor and build. Forty-eight hours later a seven-foot monstrosity emerged…and it was glorious.

I told this story to a group of friends as I ran a stack of paper coasters through my fingers. We were attending Oktoberfest 2017 in a rural university town in the Midwest. They collectively decided I should flex my childhood skills and construct a coaster card house. Supplies were in abundance and time was no constraint. 

I began to construct. Four levels in, people around us began to take notice; a few snapped pictures. Six levels in, people began to stop, actively take pictures, and inquire as to my progress and motivation. Eight stories in, a small crowd emerged. Everyone remained cordial and polite. At this point it became clear that I was too short to continue building. In solidarity, one of my friends stood on a chair to encourage the build. We built the last three levels together, atop chairs, in the middle of the convention center. 

Where inquiries had been friendly in the early stages of building, the mood soon turned. The moment chairs were used to facilitate the building process was the moment nearly everyone in attendance began to take notice. As the final tier went up, objects began flying at my head. Although women remained cordial throughout, a fraction of the men in the crowd became more and more aggressive. Whispers of “I bet you $50 that you can’t knock it down” or “I’ll give you $20 if you go knock it down” were heard throughout. A man chatted with my husband, criticizing the structural integrity of the house and offering insight as to how his house would be better…if he were the one building. Finally, a group of very aggressive men began circling like vultures. One man chucked empty plastic cups from a few tables away. The card house stood complete for a total of two minutes before it fell. The life of the tower ended as such:

Man: “Would you be mad if someone knocked it down?”

Me: “I’m the one who built it so I’m the one who gets to knock it down.”

Man: “What? You’re going to knock it down?”

The man proceeded to punch the right side of the structure; a quarter of the house fell. Before he could strike again, I stretched out my arms knocking down the remainder. A small curtsey followed, as if to say thank you for watching my performance. There was a mixture of cheers and boos. Cheers, I imagine from those who sat in nearby tables watching my progress throughout the night. Boos, I imagine, from those who were denied the pleasure of knocking down the structure themselves.

As an academic it is difficult to remove my everyday experiences from research analysis.  Likewise, as a gender scholar the aggression displayed by these men was particularly alarming. In an era of #metoo, we often speak of toxic masculinity as enacting masculine expectations through dominance, and even violence. We see men in power, typically white men, abuse this very power to justify sexual advances and sexual assault. We even see men justify mass shootings and attacks based on their perceived subordination and the denial of their patriarchal rights.

Yet toxic masculinity also exists on a smaller scale, in men’s everyday social worlds. Hegemonic masculinity is a more apt description for this destructive, rather than outright violent, behavior: it describes a system of cultural meanings that gives men power, one embedded in everything from religious doctrines, to wage structures, to mass media. As men learn hegemonic expectations by way of popular culture—from Humphrey Bogart to John Wayne—one cannot help but think of the famous line from the hyper-masculine Fight Club (1999): “I felt like destroying something beautiful.”

Power over women through hegemonic masculinity may best explain the actions of the men at Oktoberfest. Alcohol consumption at the event allowed men greater freedom to justify their destructive behavior. Daring one another to physically remove a product of female labor, and their surprise at a woman’s choice to knock the tower down herself, are both in line with this type of power over women through the destruction of something “beautiful”.

Physical violence is not always a key feature of hegemonic masculinity (Connell 1987: 184). When we view toxic masculinity on a smaller scale, away from mass shootings and other high-profile tragedies, we find a form of masculinity that embraces aggression and destruction in our everyday social worlds, but is often excused as being innocent or unworthy of discussion.

Sandra Loughrin is an Assistant Professor at the University of Nebraska at Kearney. Her research areas include gender, sexuality, race, and age.

(View original at https://thesocietypages.org/socimages)

Cory DoctorowTalking education and technology with the Future Trends Forum

“Science fiction writer and cyberactivist Cory Doctorow joined the Future Trends Forum to explore possibilities for technology and education.”

CryptogramWhite House Eliminates Cybersecurity Position

The White House has eliminated the cybersecurity coordinator position.

This seems like a spectacularly bad idea.

Worse Than FailureImprov for Programmers: Inventing the Toaster

We always like to change things up a little bit here at TDWTF, and thanks to our sponsor Raygun, we've got a chance to bring you a little treat, or at least something a little different.

We're back with a new podcast, but this one isn't a talk show or storytelling format, or even a radio play. Remy rounded up some of the best comedians in Pittsburgh who were also in IT, and bundled them up to do some improv, using articles from our site and real-world IT news as inspiration. It's… it's gonna get weird.

Thanks to Erin Ross, Ciarán Ó Conaire, and Josh Cox for lending their time and voices to this project.

Music: "Happy Happy Game Show" Kevin MacLeod (incompetech.com) Licensed under Creative Commons: By Attribution 3.0 License http://creativecommons.org/licenses/by/3.0/

Raygun gives you a window into the real user-experience for your software. With a few minutes of setup, all the errors, crashes, and performance issues will be identified for you, all in one tool. Not only does it make your applications better, with Raygun APM, it proactively identifies performance issues and builds a workflow for solving them. Raygun APM sorts through the mountains of data for you, surfacing the most important issues so they can be prioritized, triaged and acted on, cutting your Mean Time to Resolution (MTTR) and keeping your users happy.

Now’s the time to sign up. In a few minutes, you can have a build of your app with Raygun integration, and you’ll be surprised at how many issues it can identify. There’s nothing to lose with a 14-day free trial, and there are pricing options available that fit any team size.

[Advertisement] Continuously monitor your servers for configuration changes, and report when there's configuration drift. Get started with Otter today!

Planet Linux AustraliaMichael Still: How to maintain a local mirror of ONAP’s git repositories

For various reasons, I like to maintain a local mirror of git repositories I use a lot, in this case ONAP. This is mostly because of the generally poor network connectivity in Australia, but it's also because it makes cloning a new repository super fast.

Tony Breeds and I baked up a script to do this for OpenStack repositories a while ago. I therefore present a version of that mirror script which does the right thing for ONAP projects.

One important thing to note here that differs from OpenStack — ONAP projects aren’t named in a way where they will consistently sit in a directory structure together. For example, there is an “oom” repository, as well as an “oom/registrator” repository. We therefore need to normalise repository names on clone to ensure they don’t clobber each other — I do that by replacing path separators with underscores.

So here’s the script:

#!/usr/bin/env python

from __future__ import print_function

import datetime
import json
import os
import subprocess
import random
import requests

ONAP_GIT_BASE = 'ssh://mikal@gerrit.onap.org:29418'


def get_onap_projects():
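    """Yield (base_url, gerrit_project, local_subdir) for every ONAP project."""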
    data = subprocess.check_output(
               ['ssh', 'gerrit.onap.org', 'gerrit',
                'ls-projects']).split('\n')
    for project in data:
        if not project:
            # skip the empty entry left by check_output's trailing newline
            continue
        yield (ONAP_GIT_BASE, project,
               'onap/%s' % project.replace('/', '_'))


def _ensure_path(path):
    if not path:
        return

    full = []
    for elem in path.split('/'):
        full.append(elem)
        if not os.path.exists('/'.join(full)):
            os.makedirs('/'.join(full))


starting_dir = os.getcwd()
projects = list(get_onap_projects())
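# Process the projects in a random order.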
random.shuffle(projects)

for base_url, project, subdir in projects:
    print('%s Considering %s %s'
          %(datetime.datetime.now(), base_url, project))
    os.chdir(os.path.abspath(starting_dir))

    if os.path.isdir(subdir):
        os.chdir(subdir)

        print('%s Updating %s'
              %(datetime.datetime.now(), project))
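        # Existing mirror: fetch updates from all of its remotes.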
        try:
            subprocess.check_call(
                ['git', 'remote', '-vvv', 'update'])
        except Exception as e:
            print('%s FAILED: %s'
                  %(datetime.datetime.now(), e))
    else:
        git_url = os.path.join(base_url, project)
        _ensure_path('/'.join(subdir.split('/')[:-1]))

        print('%s Cloning %s'
              %(datetime.datetime.now(), project))
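        # New repository: create a bare mirror clone, throttled to idle I/O priority.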
        subprocess.check_call(
            ['ionice', '-c', 'idle', 'git', 'clone',
             '-vvv', '--mirror', git_url, subdir])

Note that your ONAP gerrit username probably isn’t “mikal”, so you might want to change that.

This script will checkout all ONAP git repositories into a directory named “onap” in your current working directory. A second run will add any new repositories, as well as updating the existing ones. Note that these are clones intended to be served with a local git server, instead of being clones you’d edit directly. To clone one of the mirrored repositories for development, you would then do something like:

$ git clone onap/aai_babel development/aai_babel

Or similar.

The post How to maintain a local mirror of ONAP’s git repositories appeared first on Made by Mikal.

,

CryptogramAccessing Cell Phone Location Information

The New York Times is reporting about a company called Securus Technologies that gives police the ability to track cell phone locations without a warrant:

The service can find the whereabouts of almost any cellphone in the country within seconds. It does this by going through a system typically used by marketers and other companies to get location data from major cellphone carriers, including AT&T, Sprint, T-Mobile and Verizon, documents show.

Another article.

Boing Boing post.

Worse Than FailureCodeSOD: Return of the Mask

Sometimes, you learn something new, and you suddenly start seeing it show up anywhere. The Baader-Meinhof Phenomenon is the name for that. Sometimes, you see one kind of bad code, and the same kind of bad code starts showing up everywhere. Yesterday we saw a nasty attempt to use bitmasks in a loop.

Today, we have Michele’s contribution: a strange way of interacting with bitmasks. The culprit behind this code was a previous PLC programmer, even though this code wasn’t running straight on the PLC.

public static bool DecodeBitmask(int data, int bitIndex)
{
        var value = data.ToString();
        var padding = value.PadLeft(8, '0');
        return padding[bitIndex] == '1';
}

Take a close look at the parameters there: data is an int. That’s about what you’d expect here… but then we call data.ToString(), which is where things start to break down. We pad that string out to 8 characters, and then check and see if a '1' happens to be in the spot we’re checking.

This, of course, defeats the entire purpose and elegance of bit masks, and worse, doesn’t end up being any more readable. Passing a number like 2 isn’t going to return true for any index.

Why does this work this way?

Well, let’s say you wanted a bitmask in the form 0b00000111. You might say, “well, that’s a 7”. What Michele’s predecessor said was, “that’s text… '00000111'”. But the point of bitmasks is to use an int to pass data around, so this developer went ahead and turned "00000111" into an integer by simply parsing it, creating the integer 111. But there’s no possible way to check whether a given digit of that integer is 1 or not, so we have to convert it back into a string to check the bitmask.
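For contrast, here’s roughly what a conventional bitmask test looks like, sketched with the same signature (a sketch only; note that bitIndex here counts from the least-significant bit, so any caller of the string-indexing version above would need its indices remapped):

public static bool DecodeBitmask(int data, int bitIndex)
{
        // Shift a single set bit into position and test it with a bitwise AND,
        // working on the integer directly instead of round-tripping through strings.
        return (data & (1 << bitIndex)) != 0;
}

With that version, passing 7 (0b00000111) returns true for indices 0 through 2, and a value like 2 behaves exactly as a bitmask should.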

Unfortunately, the software is so fragile and unreliable that no one is willing to let the developers make any changes beyond “it’s on fire, put it out”.

[Advertisement] ProGet supports your applications, Docker containers, and third-party packages, allowing you to enforce quality standards across all components. Download and see how!

,

LongNowThe Role of Art in Addressing Climate Change: An interview with José Luis de Vicente

“Sounds super depressing,” she texted. “That’s why I haven’t gone. Sort of went full ostrich.”

That was my friend’s response when I asked her if she had attended Després de la fi del món (After the End of the World), the exhibition on the present and future of climate change at the Center of Contemporary Culture in Barcelona (CCCB).

Burying one’s head in the sand when it comes to climate change is a widespread impulse. It is, to put it brusquely, a bummer story — one whose drama is slow-moving, complex, and operating at planetary scale. The media, by and large, underreports it. Politicians who do not deny its existence struggle to coalesce around long-term solutions. And while a majority of people are concerned about climate change, few talk about it with friends and family.

Given all of this, it would seem unlikely that art, of all things, can make much of a difference in how we think about that story.

José Luis de Vicente, the curator of Després de la fi del món, believes that it can.

“The arts can play a role of fleshing out social scenarios showing that other worlds are possible, and that we are going to be living in them,” de Vicente wrote recently. “Imagining other forms of living is key to producing them.”

Scenes from “After the End of the World.” Via CCCB.

The forms of living on display at Després de la fi del món are an immersive, multi-sensory confrontation. The show consists of nine scenes, each a chapter in a spatial essay on the present and future of the climate crisis by some of the foremost artists and thinkers contemplating the implications of the anthropocene.

“Mitigation of Shock” by Superflux. Via CCCB.

In one, I find myself in a London apartment in the year 02050.¹ The familiar confines of cookie-cutter IKEA furniture give way to an unsettling feeling as the radio on the kitchen counter speaks of broken food supply chains, price hikes, and devastating hurricanes. A newspaper on the living room table asks “HOW WILL WE EAT?” The answer is littered throughout the apartment, in the form of domestic agriculture experiments glowing under purple lights, improvised food computers, and recipes for burgers made out of flies.

“Overview” by Benjamin Grant. Via Daily Overview.

In another, I am surrounded by satellite imagery of the Earth that reveals the beauty of human-made systems and their impact on the planet.

“Win><Win” by Rimini Protokoll. Via CCCB.

The most radical scene, Rimini Protokoll’s “Win><Win,” is one de Vicente has asked me not to disclose in detail, so as to not ruin the surprise when Després de la fi del món goes on tour in the United Kingdom and Singapore. All I can say is that it has something to do with jellyfish, and that it is one of the most remarkable pieces of interactive theater I have ever seen.

A “decompression chamber” featuring philosopher Timothy Morton. Via CCCB.

Visitors transition between scenes via waiting rooms that de Vicente describes as “decompression chambers.” In each chamber, the Minister Of The Future, played by philosopher Timothy Morton, frames his program. The Minister claims to represent the interests of those who cannot exert influence on the political process, either because they have not yet been born, or because they are non-human, like the Great Barrier Reef.

“Aerocene” by Tomás Seraceno. Via Aerocene Foundation.

A key thesis of Després de la fi del món is that knowing the scientific facts of climate change is not enough to adequately address its challenges. One must be able to feel its emotional impact, and find the language to speak about it.

My fear—and the reason I go “full ostrich”—has long been that such a feeling would come about only once we experience climate change’s deleterious effects as an irrevocable part of daily life. My hope, after attending the exhibition and speaking with José Luis de Vicente, is that it might come, at least in part, through art.


“This Civilization is Over. And Everybody Knows It.”

The following interview has been edited for length and clarity.

AHMED KABIL: I suspect that for a lot of us, when we think about climate change, it seems very distant — both in terms of time and space. If it’s happening, it’s happening to people over there, or to people in the future; it’s not happening over here, or right now. The New York Times, for example, published a story finding that while most in the United States think that climate change will harm Americans, few believe that it will harm them personally. One of the things that I found most compelling about Després de la fi del món was how the different scenes of the exhibition made climate change feel much more immediate. Could you say a little bit about how the show was conceived and what you hoped to achieve?

José Luis de Vicente. Photo by Ahmed Kabil.

JOSÉ LUIS DE VICENTE: We wanted the show to be a personal journey, but not necessarily a cohesive one. We wanted it to be like a hallucination, like the recollection of a dream where you’re picking up the pieces here and there.

We didn’t want to do a didactic, encyclopedic show on the science and challenge of climate change. Because that show has been done many, many times. And also, we thought the problem with the climate crisis is not a problem of information. We don’t need to be told more times things that we’ve been told thousands of times.

“Unravelled” by Unknown Fields Division. Via CCCB.

We wanted something that would address the elephant in the room. And the elephant in the room for us was: if this is the most important crisis that we face as a species today, if it transcends generations, if this is going to be the background crisis of our lives, why don’t we speak about it? Why don’t we know how to relate to it directly? Why does it not lead newspapers in five columns when we open them in the morning? That emotional distance was something that we wanted to investigate.

One of the reasons that distance happens is because we’re living in a kind of collective trauma. We are still in the denial phase of that trauma. The metaphor I always like to use is, our position right now is like the one you’re in when you go to the doctor, and the doctor gives you a diagnosis saying that actually, there’s a big, big problem, and yet you still feel the same. You don’t feel any different after being given that piece of news, but at the same time intellectually you know at that point that things are never going to be the same. That’s where we are collectively when it comes to climate change. So how do we transition out of this position of trauma to one of empathy?

“Win><Win” by Rimini Protokoll. Via CCCB.

We also wanted to look at why this was politically an unmanageable crisis. And there’s two reasons for that. One is because it’s a political message no politician will be able to channel into a marketable idea, which is: “We cannot go on living the way we live.” There is no political future for any way you market that idea.

The other is—and Timothy Morton’s work was really influential in this idea—the notion that: “What if simply our senses and communicative capacities are not tuned to understanding the problem because it moves in a different resolution, because it proceeds on a scale that is not the scale of our senses?”

Morton’s notion of the hyper-object—this idea that there are things that are too big and move too slow for us to see—was very important. The title of the show comes from the title of his book Hyperobjects: Philosophy and Ecology after the End of the World (02013).

AHMED KABIL: One of the recent instances of note where climate change did make front-page news was the 02015 Paris Agreement. In Després de la fi del món, the Paris Agreement plays a central role in framing the future of climate change. Why?

JOSÉ LUIS DE VICENTE: If we follow the Paris Agreement to its final consequences, what it’s saying is that, in order to prevent global temperature from rising from 3.6 to 4.8 median degrees Celsius by the end of the 21st century, we have to undertake the biggest transformation that we’ve ever done. And even doing that will mean that we’re only halfway to our goal of having global temperatures not rise more than 2 degrees, ideally 1.5, and we’re already at 1 degree. So that gives a sense of the challenge. And we need to do it for the benefit of the humans and non-humans of 02100, who don’t have a say in this conversation.

“Overview” by Benjamin Grant. Via CCCB.

There are two possibilities here: either we make the goals of the Paris Agreement—the bad news here being that this problem is much, much bigger than just replacing fossil fuels with renewable energies. The Tesla way of going at it, of replacing every car in the world with a Tesla—the numbers just don’t add up. We’re going to have to rethink most systems in society to make this a possibility. That’s possibility number one.

Possibility number two: if we don’t make the goals of the Paris Agreement, we know that there’s no chance that life in the end of the 21st century is going to look remotely similar to today. We know that the kind of systemic crises we have are way more serious than the ones that would allow essential normalcy as we understand it today. So whether we make the goals of the Paris Agreement or not, there is no way that life in the second part of the 21st century looks as it does today.

That’s why we open the exhibition with McKenzie Wark’s quote.

“This civilization is over. And everybody knows it.” — McKenzie Wark

This civilization is over, not in the apocalyptic sense that the end of the world is coming, but that the civilization we built from the mid-nineteenth century onward on this capacity of taking fossil fuels out of the Earth and turning that into a labor force and turning that into an equation of “growth equals development equals progress” is just not sustainable.

“Environmental Health Clinic” by Natalie Jeremijenko. Via CCCB.

So with all these reference points, the show asks: What does it mean to understand this story? What does it mean to be citizens acknowledging this reality? What are possible scenes that look at either aspects of the anthropocene planet today or possible post-Paris futures?

This show should mean different things for you whether you’re fifty-five or you’re twelve. Because if you’re fifty-five, these are all hypothetical scenarios for a world that you’re not going to see. But if you’re twelve this is the world that you’re going to grow up into.

02100 may seem very far away, but the people who will see the world of 02100 are already born.

AHMED KABIL: What role will technology play in our climate change future?

JOSÉ LUIS DE VICENTE: Technology will, of course, play a role, but I think we have to be non-utopian about what that role will be.

The climate crisis is not a technological or socio-cultural or political problem; it’s all three. So the problem can only be solved at the three axes. The one that I am less hopeful about is the political axis, because how do we do it? How do we break that cycle of incredibly short-term incentives built into the political power structure? How do we incorporate the idea of: “Okay, what you want as my constituent is not the most important thing in the world, so I cannot just give you what you want if you vote for me and my position of power.” Especially when we’re seeing the collapse of systems and mechanisms of political representation.

“Sea State 9: Proclamation” by Charles Lim. Via CCCB.

I want to believe—and I’m not a political scientist—that huge social transformations translate to political redesigns, in spite of everything. I’m not overly optimistic or utopian about where we are right now. But our capacity to coalesce and gather around powerful ideas that transmit very easily to the masses allows for shifts of paradigm better than previously. Not only good ones, but bad ones as well.

AHMED KABIL: Is there a case for optimism on climate change?

JOSÉ LUIS DE VICENTE: I cannot be optimistic looking at the data on the table and the political agendas, but I am in the sense of saying that incredible things are happening in the world. We’re witnessing a kind of political awakening. These huge social shifts can happen at any moment.

And I think, for instance, that the fossil fuel industry knows that it’s the end of the party. What we’re seeing now is their awareness that their business model is not going to be viable for much longer. And obviously neither Putin nor Trump are good news for the climate, but nevertheless these huge shifts are coming.

“Mitigation of Shock” by Superflux. Via CCCB.

Kim Stanley Robinson always mentions this “pessimism of the intellect, optimism of the will.” I think that’s where you need to be, knowing that big changes are possible. Of course, I have no utopian expectations about it—this is going to be the backstory for the rest of our lives and we’re going to have traumatic, sad things happening because they’re already happening. But I’m quite positive that the world will definitely not look like this one in many aspects, and many things that big social revolutions in the past tried to make possible will be made possible.

If this show has done anything I hope it’s made a small contribution in answering the question of how we think about the future of climate change, how we talk about it, and how we understand what it means. We have to exist on timescales more expansive than the tiny units of time of our lives. We have to think of the world in ways that are non-anthropocentric. We have to think that the needs and desires of the humans of now are not the only thing that matters. That’s a huge philosophical revolution. But I think it’s possible.


Notes

[1] The Long Now Foundation uses five digit dates to serve as a reminder of the time scale that we endeavor to work in. Since the Clock of the Long Now is meant to run well past the Gregorian year 10,000, the extra zero is to solve the deca-millennium bug which will come into effect in about 8,000 years.

Learn More

  • Stay updated on the After The End of The World exhibition.
  • Read The Guardian’s 02015 profile of Timothy Morton.
  • Watch Benjamin Grant’s upcoming Seminar About Long-Term Thinking, “Overview: Earth and Civilization in the Macroscope.”
  • Watch Kim Stanley Robinson’s 02016 talk at The Interval At Long Now on how climate will evolve government and society.
  • Read José Luis de Vicente’s interview with Kim Stanley Robinson.