Planet Russell


Cryptogram: US Army Researching Bot Swarms

The US Army Research Laboratory (ARL) is funding research into autonomous bot swarms. From the announcement:

The objective of this CRA is to perform enabling basic and applied research to extend the reach, situational awareness, and operational effectiveness of large heterogeneous teams of intelligent systems and Soldiers against dynamic threats in complex and contested environments and provide technical and operational superiority through fast, intelligent, resilient and collaborative behaviors. To achieve this, ARL is requesting proposals that address three key Research Areas (RAs):

RA1: Distributed Intelligence: Establish the theoretical foundations of multi-faceted distributed networked intelligent systems combining autonomous agents, sensors, tactical super-computing, knowledge bases in the tactical cloud, and human experts to acquire and apply knowledge to affect and inform decisions of the collective team.

RA2: Heterogeneous Group Control: Develop theory and algorithms for control of large autonomous teams with varying levels of heterogeneity and modularity across sensing, computing, platforms, and degree of autonomy.

RA3: Adaptive and Resilient Behaviors: Develop theory and experimental methods for heterogeneous teams to carry out tasks under the dynamic and varying conditions in the physical world.

Slashdot thread.

And while we're on the subject, this is an excellent report on AI and national security.

Worse Than Failure: CodeSOD: This or That

Software that processes financial transactions is not the kind of software you want to make mistakes in. If something is supposed to happen, it is definitely supposed to happen. Not partially happen. Not maybe happen.

Thus, a company like the one Charles R. works for uses a vendor-supplied accounting package. That vendor has a professional services team, so when the behavior needs to be customized, Charles’s company outsources that development to the vendor.

Of course, years later, that code needs to get audited, and it’s about then that you find out that the vendor outsourced their “professional services” to the lowest bidder, creating a less-than-professional service result.

And if you want to make sure that the country code is equal to "HND", you want to be really sure.

if(transaction.country == config.country_code.HND || transaction.country == config.country_code.HND) // both sides of this || test the exact same condition
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, ","); // Honduras: comma as the thousands separator (1234567 -> 1,234,567)
else
    parts[0] = parts[0].replace(/\B(?=(\d{3})+(?!\d))/g, "."); // everywhere else: a period instead (1234567 -> 1.234.567)
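
For what it’s worth, the regex itself is a standard digit-grouping trick: at every position inside the number that is followed by one or more complete groups of three digits, insert a separator. A quick sketch in Python, whose re module supports the same lookahead (the function name is mine, for illustration):

    import re

    def group_digits(number_string, separator):
        # Insert `separator` before each complete group of three digits,
        # mirroring the JavaScript regex in the snippet above.
        return re.sub(r"\B(?=(\d{3})+(?!\d))", separator, number_string)

    print(group_digits("1234567", ","))  # 1,234,567 (the HND branch)
    print(group_digits("1234567", "."))  # 1.234.567 (the else branch)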

Planet Debian: Jonathan Carter: Plans for DebCamp17

In a few days, I’ll be attending DebCamp17 in Montréal, Canada. Here are a few things I plan to pay attention to:

  • Calamares: My main goal is to get Calamares in a great state for inclusion in the archives, along with sane default configuration for Debian (calamares-settings-debian) that can easily be adapted for derivatives. Calamares itself is already looking good and might make it into the archives before DebCamp even starts, but the settings package will still need some work.
  • Gnome Shell Extensions: During the stretch development cycle I made great progress in packaging some of the most popular shell extensions, but there are a few more that I’d like to get in for buster: apt-update-indicator, dash-to-panel (done), proxy-switcher, tilix-dropdown, tilix-shortcut
  • zram-tools: Fix a few remaining issues in zram-tools and get it packaged into the archive.
  • DebConf Committee: Since January, I’ve been a delegate on the DebConf committee, and I’m hoping we get some time to catch up in person before DebConf starts. We’ve been tackling some issues together recently, and so far the collaboration has gone really well. We’re working on keeping DebConf organisation stable and improving community processes, and I hope that by the end of DebConf we’ll have some proposals that will prevent some recurring problems and also help mend old wounds from previous years.
  • ISO Image Writer: I plan to try out Jonathan Riddell’s ISO Image Writer tool and check whether it works with the use cases I’m interested in (Debian install and live media, images created with Calamares, boots UEFI/BIOS on optical and USB media). If it does what I want, I’ll probably package it too, if Jonathan Riddell hasn’t gotten to it yet.
  • Hashcheck: Kyle Robertze wrote a tool called Hashcheck for checking install media checksums from a live environment. If he gets a break from video team duties during DebCamp, I’m hoping we can look at some improvements for it and at getting it packaged in Debian.


Planet Linux Australia: OpenSTEM: This Week in HASS – term 3, week 3

This week our youngest students are playing games from different places around the world, in the past. Slightly older students are completing the Timeline Activity. Students in Years 4, 5 and 6 are starting to sink their teeth into their research project for the term, using the Scientific Process.

Foundation/Prep/Kindy to Year 3

Playing hoops

This week students in stand-alone Foundation/Prep/Kindy classes (Unit F.3) and those integrated with Year 1 (Unit F-1.3) are examining games from the past. The teacher can choose to match these to the stories from Week 1 of the unit, as games are listed matching each of the places and time periods included in those stories. However, some games are more practical to play than others, and some require running around, so the teacher may wish to choose games which suit the circumstances of each class. Teachers can discuss how different places have different types of games and why these games might be chosen in those places (e.g. dragons in China and lions in Africa).

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) have this week to finish off the Timeline Activity. The Timeline Activity requires some investment of time, which can be spent as two half-hour sessions or one longer session. Some flexible timing is built into the unit for teachers who want to match this activity to the number line in Maths, or to revise or cover the number line in more depth as a complement to this activity.

Years 3 to 6

Arthur Phillip

Last week students in Years 3 to 6 chose a research topic, related to a theme in Australian History. Different themes are studied by different year levels. Students in Year 3 (Unit 3.7) study a topic in the history of their capital city or local community. Students in Year 4 (Unit 4.3) study a topic from Australian history in the precolonial or early colonial periods. Students in Year 5 (Unit 5.3) study a topic from Australian colonial history and students in Year 6 (Unit 6.3) study a topic related to Federation or 20th century Australian history. These research topics are undertaken as a Scientific Investigation. This week the focus is on defining a Research Question and undertaking Background Research. Student workbooks will guide students through the process of choosing a research question within their chosen topic, and then how to start the Background Research. These sections will be included in the Scientific Report each student produces at the end of this unit. OpenSTEM resources available with each unit provide a starting point for this Background Research.


Rondam Ramblings: Donald Trump shows that democracy is working. Alas.

I must confess to indulging in a certain amount of schadenfreude watching Donald Trump squirm.  I have been an unwavering never-Trumper since before he announced he was running for president.  And yet I am mindful of the fact that nearly all of the predictions I have made about Trump's political fortunes have been wrong.  In fact, while researching links for this post I realized that I wrote

Planet Debian: Gregor Herrmann: RC bugs 2017/08-29

long time no blog post. – & the stretch release happened without many RC bug fixes from me; in practice, the auto-removals are faster & more convenient.

what I nevertheless did in the last months was to fix RC bugs in pkg-perl packages (it still surprises me how fast rotting & constantly moving code is); prepare RC bug fixes for jessie (also for pkg-perl packages); & in the last weeks provide patches & carry out NMUs for perl packages as part of the ongoing perl 5.26 transition.

  • #783656 – libhtml-microformats-perl: "libhtml-microformats-perl: missing dependency on libmodule-pluggable-perl"
    fix in jessie (pkg-perl)
  • #788008 – libcgi-application-plugin-anytemplate-perl: "libcgi-application-plugin-anytemplate-perl: missing dependency on libclone-perl"
    fix in jessie (pkg-perl)
  • #788350 – libhttp-proxy-perl: "libhttp-proxy-perl: FTBFS - proxy tests"
    fix in jessie (pkg-perl)
  • #808454 – src:libdata-faker-perl: "libdata-faker-perl: FTBFS under some locales (eg. fr_CH.UTF-8)"
    fix in jessie (pkg-perl)
  • #824843 – libsys-syscall-perl: "libsys-syscall-perl: FTBFS on arm64: test suite failures"
    fix in jessie (pkg-perl)
  • #824936 – libsys-syscall-perl: "libsys-syscall-perl: FTBFS on mips*: test failures"
    fix in jessie (pkg-perl)
  • #825012 – libalgorithm-permute-perl: "libalgorithm-permute-perl: FTBFS with Perl 5.24: undefined symbol: PUSHBLOCK"
    upload new upstream release (pkg-perl)
  • #826136 – libsys-syscall-perl: "libsys-syscall-perl FTBFS on hppa arch (with patch)"
    fix in jessie (pkg-perl)
  • #826471 – intltool: "intltool: Unescaped left brace in regex is deprecated at /usr/bin/intltool-update line 1070"
    extend existing patch, uploaded by maintainer
  • #826502 – quilt: "quilt: FTBFS with perl 5.26: Unescaped left brace in regex"
    apply patch from ntyni (pkg-perl)
  • #826505 – swissknife: "swissknife: FTBFS with perl 5.26: Unescaped left brace in regex"
    add patches from ntyni, upload to DELAYED/5
  • #839208 – libio-compress-perl: "libio-compress-perl: uninstallable, current version superseded by Perl 5.24.1"
    upload newer upstream release (pkg-perl)
  • #839218 – nama: "nama: FTBFS because of perl's lack of stack reference counting"
    apply patch from Balint Reczey (pkg-perl)
  • #848060 – src:libx11-protocol-other-perl: "libx11-protocol-other-perl: FTBFS randomly (failing tests)"
    fix in jessie (pkg-perl)
  • #855920 – src:fail2ban: "fail2ban: FTBFS: test_rewrite_file: AssertionError: False is not true"
    try to reproduce, later propose different fix
  • #855951 – src:libsecret: "libsecret FTBFS with test failures on many architectures"
    try to reproduce
  • #856133 – src:shiboken: "shiboken FTBFS on i386/armel/armhf: other_collector_external_operator test failed"
    try to reproduce
  • #857087 – nsscache: "nsscache: /usr/share/nsscache not installed"
    triaging, later fixed by maintainer
  • #858501 – libmojomojo-perl: "libmojomojo-perl: broken symlink: /usr/share/perl5/MojoMojo/root/static/js/jquery.autocomplete.js -> ../../../../../javascript/jquery-ui/ui/jquery.ui.autocomplete.min.js"
    fix symlink (pkg-perl)
  • #858509 – cl-csv-data-table: "cl-csv-data-table: broken symlink: /usr/share/common-lisp/systems/cl-csv-data-table.asd -> ../source/cl-csv-data-table/cl-csv-data-table.asd"
    propose a patch
  • #859539 – filezilla: "filezilla: Filezilla crashes at startup"
    simply close the bug :)
  • #859963 – src:mimetic: "mimetic FTBFS on architectures where char is unsigned"
    add patch to make variable signed
  • #860142 – libgeo-ip-perl: "libgeo-ip-perl: should recommend geoip-database and geoip-database-extra"
    analyse, lower severity (pkg-perl)
  • #863049 – release.debian.org: "jessie-pu: package shutter/0.92-0.1+deb8u2"
    fix in jessie (pkg-perl)
  • #865045 – xmltv: "xmltv: FTBFS with Perl 5.26: t/test_filters.t failure"
    sponsor maintainer upload
  • #865224 – src:uwsgi: "uwsgi: ftbfs with multiple supported python3 versions"
    adjust build dependencies, upload NMU with maintainer's permission
  • #865380 – libtest-unixsock-perl: "libtest-unixsock-perl: Build-Conflicts-Indep: libtest-simple-perl (>= 1.3), including Perl 5.26"
    conditionally skip a test (pkg-perl)
  • #865888 – nagios-plugin-check-multi: "nagios-plugin-check-multi: FTBFS with Perl 5.26: Unescaped left brace in regex is deprecated here"
    update unescaped-left-brace-in-regex.patch, upload to DELAYED/5
  • #866317 – html2ps: "html2ps: relies on deprecated Perl syntax/features, breaks with 5.26"
    prepare a patch, later do QA upload
  • #866934 – libhttp-oai-perl: "libhttp-oai-perl: /usr/bin/oai_pmh uses the 'encoding' pragma, breaks with Perl 5.26"
    patch out "use encoding" (pkg-perl)
  • #866944 – libmecab-perl: "libmecab-perl: FTBFS: no such file or directory: /var/lib/mecab/dic/debian/dicrc"
    build-depend on fixed version (pkg-perl)
  • #866981 – src:libpgobject-type-datetime-perl: "libpgobject-type-datetime-perl FTBFS: You planned 15 tests but ran 5."
    upload new upstream release (pkg-perl)
  • #866984 – src:libpgobject-type-json-perl: "libpgobject-type-json-perl FTBFS: You planned 9 tests but ran 5."
    upload new upstream release (pkg-perl)
  • #866991 – libcpan-meta-perl: "libcpan-meta-perl: uninstallable in unstable"
    fix versioned Breaks/Replaces (pkg-perl)
  • #867046 – gwhois: "gwhois: fails with perl 5.26: The encoding pragma is no longer supported at /usr/bin/gwhois line 80."
    drop "use encoding", upload to DELAYED/5
  • #867208 – src:libpgobject-type-bytestring-perl: "libpgobject-type-bytestring-perl FTBFS: Failed 1/5 test programs. 0/8 subtests failed."
    upload new upstream release (pkg-perl)
  • #867210 – libtext-mecab-perl: "libtext-mecab-perl: FTBFS: test failures: Failed to create mecab instance"
    build-depend on fixed version (pkg-perl)
  • #867983 – libclass-load-perl: "libclass-load-perl: FTBFS: t/012-without-implementation.t failure"
    upload new upstream release (pkg-perl)
  • #867984 – libclass-load-xs-perl: "libclass-load-xs-perl: FTBFS: t/012-without-implementation.t failure"
    upload new upstream release (pkg-perl)
  • #868069 – liburi-namespacemap-perl: "liburi-namespacemap-perl: unbuildable with sbuild"
    fix build dependencies (pkg-perl)
  • #868075 – libperlx-assert-perl: "libperlx-assert-perl: missing dependency on libkeyword-simple-perl | libdevel-declare-perl"
    fix dependencies (pkg-perl)
  • #868613 – src:liblog-report-lexicon-perl: "liblog-report-lexicon-perl FTBFS: Failed 2/10 test programs. 2/305 subtests failed."
    upload new upstream release (pkg-perl)
  • #868888 – get-flash-videos: "Fails: Can't locate Term/ProgressBar.pm"
    add missing dependency (pkg-perl)
  • #869208 – src:libalgorithm-svm-perl: "libalgorithm-svm-perl FTBFS on 32bit: SVM.c: loadable library and perl binaries are mismatched (got handshake key 0x80c0080, needed 0x81c0080)"
    add patch to add ccflags (pkg-perl)
  • #869360 – slic3r: "slic3r: missing dependency on perlapi-*"
    propose a patch
  • #869383 – src:clanlib: "clanlib FTBFS with perl 5.26"
    propose a patch
  • #869418 – src:libtest-taint-perl: "libtest-taint-perl FTBFS on armel: not ok 2 - $ENV{TEST_ACTIVE} is tainted"
    add a patch (pkg-perl)
  • #869429 – vile: "vile: Uninstallable after BinNMU / not binNMU-safe"
    some triaging

Planet Debian: Norbert Preining: Gaming: Refunct

A lovely little game, Refunct, just landed in my Steam library. A simple platformer developed with much love. The gameplay consists of reviving a lost area by stepping on towers, finding buttons, and making more towers appear in the process.

The simple gameplay idea is enriched with wonderful lighting due to the movement of the sun. I really enjoyed the changing of environment and mood created by this effect.

Although not terribly useful for now, one can also swim and dive and enjoy the world from below, which is a nice added bonus.

The game is currently (till Monday night as far as I see) on sale on Steam for 149 Yen, i.e., slightly above a Euro/Dollar. Well worth the investment for about 1h of game play.

Addition: The fastest run at the moment is 2:46.57, that is, under 3 minutes! I barely managed to finish under 25 minutes. Incredible.

TED: Our podcast “Sincerely, X” co-produced with Audible now available free worldwide

Last year, TED and Audible co-produced a new audio series that invited speakers to share ideas—anonymously. Our goal was to make room for an entirely new trove of ideas: those that could only be broadcast publicly if the speaker’s identity remained private.

The series debuted with a number of powerful stories, and we learned a lot in the process (read about producer Cloe Shasha’s personal experience here).

Now, we’re bringing that first season for free to Apple Podcasts, the TED Android app, or wherever you get your podcasts.

We begin with our first episode, “Dr. Burnout,” featuring a doctor who says she committed a fatal mistake with a patient, leading her to a disturbing diagnosis: the medical field pushes for professional burnout. She unveils a powerful perspective on how doctors must deepen their self-awareness.

We’ll be releasing new episodes every Thursday for the next 10 weeks.

Fans can also access all the episodes today at audible.com/sincerelyx.



Planet Debian: Vincent Fourmond: Screen that hurts my eyes, take 2

Six months ago, I wrote a lengthy post about my new computer hurting my eyes. I haven't made any progress with that, but I accidentally upgraded my work computer from kernel 4.4 to 4.8 and the nvidia drivers from 340.96-4 to 375.66-2. Well, my work computer now hurts my eyes too. I've switched back to the previous kernel and drivers, and I hope it'll be back to normal.

Any ideas of something specific that changed, either between 4.4 and 4.8 (kernel startup code, default framebuffer modes, etc.?) or between the 340.96 and 375.66 drivers? In any case, I'll try that specific combination of kernel and drivers at home to see if I can get it to a usable state.

Update, July 23rd

I reverted to the earlier drivers/kernel, to no avail. But it seems the problem with my work computer is in fact linked to an allergy of some kind, since antihistamine drugs have an effect (there are construction works in my lab; perhaps they are the cause?). Still no idea for my home computer, for which the problem is definitely not solved.

Planet Linux Australia: Gabriel Noronha: test post

test posting from wordpress.com

Can’t post from the WordPress app since the server move; might be a mod_security issue.

Don Marti: My bot parsed 12,387 RSS feeds and all I got were these links.

Bryan Alexander has a good description of an "open web" reading pipeline in I defy the world and go back to RSS. I'm all for the open web, but 40 separate folders for 400 feeds? That would drive me nuts. I'm a lumper, not a splitter. I have one folder for 12,387 feeds.

My chosen way to use RSS (and one of the great things about RSS is you can choose UX independently of information sources) is a "scored river". Something like Dave Winer's River of News concept, that you can navigate by just scrolling, but not exactly a river of news.

  • with full text if available, but without images. I can click through if I want the images.

  • items grouped by score, not feed. (Scores are assigned by a dirt-simple algorithm where a feed "invests" a percentage of its points in every link, and the investments pay out in a higher score for that feed if the user likes a link; see the sketch below.)
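
That scoring rule is essentially a tiny reputation market for feeds. Here is a minimal sketch of the idea in Python; the investment rate, the payout multiplier and all names are my own illustrative assumptions, not the actual implementation:

    # Sketch of a "scored river": every feed stakes a fraction of its
    # points on each link it carries; liked links pay the stake back with
    # a bonus, so feeds that surface good links accumulate higher scores.
    INVEST_RATE = 0.10   # assumed: fraction of points staked per link
    PAYOUT = 3.0         # assumed: multiplier paid when a link is liked

    feed_points = {"feed-a": 100.0, "feed-b": 100.0}
    stakes = {}  # link URL -> list of (feed, amount) investments

    def see_link(feed, url):
        stake = feed_points[feed] * INVEST_RATE
        feed_points[feed] -= stake
        stakes.setdefault(url, []).append((feed, stake))

    def like_link(url):
        for feed, stake in stakes.pop(url, []):
            feed_points[feed] += stake * PAYOUT

    see_link("feed-a", "https://example.com/good-story")
    see_link("feed-b", "https://example.com/boring-story")
    like_link("https://example.com/good-story")
    print(feed_points)  # feed-a recovered its stake with a bonus; feed-b
                        # did not, so feed-a's items rank higher in the river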

I also put the byline at the bottom of each item. Anyway, one thing I have found out about manipulating my own filter bubble is that linklog feeds and blogrolls are great inputs. So here's a linklog feed. (It's mirrored from the live site, which annoys everyone except me.)

Here are some actual links.

This might look funny: How I ran my kids like an Atlassian team for a month. But think about it for a minute. Someone at every app or site your kids use is doing the same thing, and their goals don't include "Dignity and Respect" or "Hard Work Smart Work".

Global network of 'hunters' aim to take down terrorists on the internet It took me a few days to figure things out and after a few weeks I was dropping accounts like flies…

Google's been running a secret test to detect bogus ads — and its findings should make the industry nervous. (This is a hella good idea. Legit publishers could borrow it: just go ad-free for a few minutes at random, unannounced, a couple of times a week, then send the times straight to CMOs. Did you buy ads that someone claimed ran on our site at these times? Well, you got played.)

For an Inclusive Culture, Try Working Less As I said, to this day, my team at J.D. Edwards was the most diverse I’ve ever worked on....Still, I just couldn’t get over that damned tie.

The Al Capone theory of sexual harassment Initially, the connection eluded us: why would the same person who made unwanted sexual advances also fake expense reports, plagiarize, or take credit for other people’s work?

Jon Tennant - The Cost of Knowledge But there’s something much more sinister to consider; recently a group of researchers saw fit to publish Ebola research in a ‘glamour magazine’ behind a paywall; they cared more about brand association than the content. This could be life-saving research, why did they not at least educate themselves on the preprint procedure....

Twitter Is Still Dismissing Harassment Reports And Frustrating Victims

This Is How Your Fear and Outrage Are Being Sold for Profit (Profit? What about TEH LULZ??!?!1?)

Fine, have some cute animal photos, I was done with the other stuff anyway: Photographer Spends Years Taking Adorable Photos of Rats to Break the Stigma of Rodents


Planet Debian: Sean Whitton: I'm going to DebCamp17, Montréal, Canada

Here’s what I’m planning to work on – please get in touch if you want to get involved with any of these items. In rough order of priority/likelihood of completion:

  • Debian Policy sprint

  • conversations about using git for Debian packaging, especially with regard to dgit

    • writing up or following up on feature requests
  • Emacs team sprint

    • talking to maintainers about transitioning their packages to use dh-elpa
  • submitting and revising patch series to dgit

  • writing a test suite for git-remote-gcrypt

Cory Doctorow: Come see me at San Diego Comic-Con!


There are three more stops on my tour for Walkaway: tomorrow at San Diego Comic-Con, next weekend at Defcon 25 in Las Vegas, and August 10th at the Burbank Public Library.


My Comic-Con day is tomorrow/Sunday, July 23: first, a 10AM signing at the Tor Books booth (#2701); then a panel, The Future is Bleak, with Annalee Newitz, Scott Westerfeld, Scott Reintgen and Alex R. Kahler; and finally a 1:15PM signing at autographic area AA06.


(Image: Gage Skidmore, CC-BY-SA)

Planet Debian: Norbert Preining: Making fun of Trump – thanks France

I mean, it is easy to make fun of Trump; he is just too stupid, incapable and uneducated. But what the French president Emmanuel Macron did on Bastille Day, in the presence of the usual Trumpies, went well beyond the usual level of making fun of Trump. The French made Trump watch a French band playing a medley of Daft Punk. And as we know, Trump seemed to be very unimpressed, most probably because he doesn’t have a clue.

I mean, normally they play pathetic rubbish at these events (look at the average US, Chinese or North Korean parade), and here we have the celebration of an event much older than anything the US can put on the table, and they are playing Daft Punk!

France, thanks. You made my day – actually not only one!

Planet Debian: Niels Thykier: Improving bulk performance in debhelper

Since debhelper/10.3, there have been a number of performance-related changes.  The vast majority primarily improve bulk performance or only have visible effects at larger “input” sizes.

Most visible cases are:

  • dh + dh_* now scale a lot better for large numbers of binary packages.  Even more so with parallel builds.
  • Most dh_* tools are now a lot faster when creating many directories or installing files.
  • dh_prep and dh_clean now bulk their removals.
  • dh_install can now bulk some installations.  For a concrete corner case, libssl-doc went from approximately 11 seconds to less than a second.  This optimization is implicitly disabled with --exclude (among others).
  • dh_installman now scales a lot better with many manpages.  Even more so with parallel builds.
  • dh_installman has restored its performance under fakeroot (regression since 10.2.2)


For debhelper, this mostly involved:

  • avoiding fork+exec of commands for things doable natively in Perl, especially when each fork+exec only processes one file or dir.
  • bulking as many files/dirs into the call as possible, where fork+exec is still used (see the sketch after this list).
  • caching/memoizing slow calls (e.g. in parts of pkgfile inside Dh_Lib)
  • adding an internal API for dh to do bulk checks for pkgfiles. This is useful for dh when checking if it should optimize out a helper.
  • and, of course, doing things in parallel where trivially possible.
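
As a language-agnostic illustration of the bulking point above (debhelper itself does this in Perl, via helpers such as xargs()), here is a minimal Python sketch; the scratch files and chunk size are made up for the example:

    import os
    import subprocess
    import tempfile

    # Create some scratch files to operate on.
    workdir = tempfile.mkdtemp()
    files = []
    for i in range(1000):
        path = os.path.join(workdir, "file%d" % i)
        open(path, "w").close()
        files.append(path)

    # Slow pattern: one fork+exec per file.
    #   for path in files:
    #       subprocess.run(["chmod", "0644", path], check=True)

    # Bulked pattern: pass as many paths as possible per call, xargs-style,
    # so the fork+exec cost is paid once per chunk instead of once per file.
    CHUNK = 500  # assumed chunk size; real code would respect ARG_MAX
    for i in range(0, len(files), CHUNK):
        subprocess.run(["chmod", "0644", *files[i:i + CHUNK]], check=True)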


How to take advantage of these improvements in tools that use Dh_Lib:

  • If you use install_{file,prog,lib,dir}, then it will come out of the box.  These functions are available in Debian/stable.  On a related note, if you use “doit” to call “install” (or “mkdir”), then please consider migrating to these functions instead.
  • If you need to reset owner+mode (chown 0:0 FILE + chmod MODE FILE), consider using reset_perm_and_owner.  This is also available in Debian/stable.
    • CAVEAT: It is not recursive and YMMV if you do not need the chown call (due to fakeroot).
  • If you have a lot of items to be processed by an external tool, consider using xargs().  Since 10.5.1, it is now possible to insert the items anywhere in the command rather than just at the end.
  • If you need to remove files, consider using the new rm_files function.  It removes files and silently ignores if a file does not exist. It is also available since 10.5.1.
  • If you need to create symlinks, please consider using make_symlink (available in Debian/stable) or make_symlink_raw_target (since 10.5.1).  The former creates policy compliant symlinks (e.g. fixup absolute symlinks that should have been relative).  The latter is closer to a “ln -s” call.
  • If you need to rename a file, please consider using rename_path (since 10.5).  It behaves mostly like “mv -f” but requires dest to be a (non-existing) file.
  • Have a look at whether on_pkgs_in_parallel() / on_items_in_parallel() would be suitable for enabling parallelization in your tool.
    • The emphasis for these functions is on making parallelization easy to add with minimal code changes.  It pre-distributes the items, which can lead to unbalanced workloads where some processes are idle while a few keep working (illustrated below).
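
The pre-distribution caveat in that last bullet is easy to picture: if items are split into fixed chunks up front, one slow chunk can leave the other workers idle. A tiny Python illustration of static pre-distribution (nothing debhelper-specific about it):

    # Pre-distributing 10 items across 3 workers, round-robin style.
    items = list(range(10))
    workers = 3
    buckets = [items[i::workers] for i in range(workers)]
    print(buckets)  # [[0, 3, 6, 9], [1, 4, 7], [2, 5, 8]]
    # If items 0, 3, 6 and 9 happen to be the expensive ones, worker 0
    # keeps working while workers 1 and 2 sit idle; a work-stealing queue
    # would avoid that, at the cost of more coordination.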

Credits:

I would like to thank the following for reporting performance issues, regressions and/or providing patches.  The list is in no particular order:

  • Helmut Grohne
  • Kurt Roeckx
  • Gianfranco Costamagna
  • Iain Lane
  • Sven Joachim
  • Adrian Bunk
  • Michael Stapelberg

Should I have missed your contribution, please do not hesitate to let me know.



Filed under: Debhelper, Debian

Planet Debian: Junichi Uekawa: asterisk fails to start on my raspberry pi.

asterisk fails to start on my raspberry pi. I don't quite understand what the error message is, but systemctl tells me there was a timeout. I don't know which timeout it hits.

Don Marti: the other dude

Making the rounds, this is a fun one: A computer was asked to predict which start-ups would be successful. The results were astonishing.

  • 2014: When there's no other dude in the car, the cost of taking an Uber anywhere becomes cheaper than owning a vehicle. So the magic there is, you basically bring the cost below the cost of ownership for everybody, and then car ownership goes away.

  • 2018 (?): When there's no other dude in the fund, the cost of financing innovation anywhere becomes cheaper than owning a portfolio of public company stock. So the magic there is, you basically bring the transaction costs of venture capital below the cost of public company ownership for everybody, and then public companies go away.

Could be a thing for software/service companies faster than we might think. Futures contracts on bugs→equity crowdfunding and pre-sales of tokens→bot-managed follow-on fund for large investors.


TED: Prosthetics that feel more natural, how mushrooms may help save bees, and more

Please enjoy your roundup of TED-related news:

Prosthetics that feel more natural. A study in Science Robotics lays out a surgical technique developed by Shriya Srinivasan, Hugh Herr and others that may help prosthetics feel more like natural limbs. During an amputation, the muscle pairs that allow our brains to sense how much force is applied to a limb and where it is in space are severed, halting sensory feedback to and from the brain and affecting one’s ability to balance, handle objects and move. But nerves that send signals to the amputated limb remain intact in many amputees. Using rats, the scientists connected these nerves with muscles grafted from other parts of the body — a technique that successfully restored the muscle pair relationship and sensory feedback being sent to the brain. Combined with other research on translating nerve signals into instructions for moving the prosthetic limb, the technique could help amputees regain the ability to sense where the prosthetic is in space and the forces applied to it. They plan to begin implementing this technique in human amputees. (Watch Herr’s TED Talk)

From mathematician to politician. Emmanuel Macron wants France to be at the forefront of science, and science to be incorporated into global politics, but this is easier said than done. The election of Cédric Villani, a mathematician, Fields Medalist and TED speaker, to the French National Assembly provides a reason for optimism. “Currently, scientific knowledge within French political circles is close to zero,” Villani said in an interview with Science. “It’s important that some scientific expertise is present in the National Assembly.” Villani’s election is a step in that direction. (Watch Villani’s TED Talk)

A digital upgrade for the US government. The United States Digital Service, of which Matt Cutts is acting administrator, released its July Report to Congress. Since 2014, the USDS has worked with Silicon Valley engineers and experienced government employees to streamline federal websites and online services. Currently, the USDS is working with seven federal agencies, including the Department of Defense, the Department of Health and Human Services and the Department of Education. Ultimately, the USDS’ digital intervention is not just about reducing cost and increasing efficiency; it’s about restoring people’s trust in government. (Watch Cutts’ TED Talk)

Can mushrooms help save bees? Bee populations have been in decline for the past decade, and the consequences could be dire. But in a video for Biographic, produced by Louie Schwartzberg and including mycologist Paul Stamets, scientists discuss an unexpected solution: mushrooms. The spores and extract from Metarhizium anisopliae, a common species of mushroom, are toxic to varroa mites, the vampiric parasite which sucks blood from bees and causes colony collapse disorder. However, bees can tolerate low doses free of harm. Metarhizium anisopliae has even been shown to promote beehive longevity. This could be a step forward in curbing the mortality rate of nature’s most prolific pollinator. (Watch Schwartzberg’s TED Talk and Stamets’ TED Talk)

Support for women entrepreneurs. The World Bank Group announced its creation of The Women Entrepreneurs Finance Initiative (We-Fi), a facility that will create a $1 billion fund to support and encourage female entrepreneurship. Initiated by the U.S. and Germany, it quickly received support from other nations including Canada, Japan, Saudi Arabia and South Korea. Nearly 70% of small and medium-sized enterprises owned by women in developing countries are denied or unable to receive adequate financial services. We-Fi aims to overcome these and many other obstacles by providing early support, networking opportunities and access to markets. “Women’s economic empowerment is critical to achieve the inclusive economic growth required to end extreme poverty, which is why it has been such a longstanding priority for us,” World Bank Group President Jim Yong Kim said. “This new facility offers an unprecedented opportunity to harness both the public and private sectors to open new doors of opportunity for women entrepreneurs and women-owned firms in developing countries around the globe.” (Watch Kim’s TED Talk)

Daring to drive. Getting behind the wheel of a car is something many of us take for granted. However, as Manal al-Sharif details in her new memoir, Daring to Drive: A Saudi Woman’s Awakening, it’s not that way for everybody. The daughter of a taxi driver, al-Sharif got an education and landed a good job. The real challenge was simply getting to work—as a rule, Saudi women are not allowed to drive. Daring to Drive tells the story of her activism in the face of adversity. (Watch al-Sharif’s TED Talk)

Have a news item to share? Write us at contact@ted.com and you may see it included in this biweekly round-up.


Cryptogram: Friday Squid Blogging: Giant Squid Caught Off the Coast of Ireland

It's the second in two months. Video.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

TED: What if? … and other questions that lead to big ideas: The talks of TED@UPS

Hosts Bryn Freedman and Kelly Stoetzel welcome us to the show at TED@UPS, July 20, 2017, at SCADshow in Atlanta, Georgia. (Photo: Mary Anne Morgan / TED)

What if one person could change the world? What if we could harness our collective talent, insight and wisdom? And what if, together, we could spark a movement with positive impact far into the future?

For a third year, UPS has partnered with TED to bring experts in business, logistics, design and technology to the stage to share ideas from the forefront of innovation. At this year’s TED@UPS — held on July 20, 2017, at SCADShow in Atlanta, Georgia — 18 speakers and performers showed how daring human imagination can solve our most difficult problems. 

After opening remarks from Juan Perez, UPS’s chief information and engineering officer, the talks in Session 1 kicked off.

Why protectionism isn’t a good deal. We’ve heard a lot of rhetoric lately suggesting that importers, like the US, are losing valuable manufacturing jobs to exporters like China, Mexico and Vietnam. In reality, those manufacturing jobs haven’t disappeared for the reasons you may think, says border and logistics specialist Augie Picado. Automation, not offshoring, is really to blame, he says; in fact, of the 5.7 million manufacturing jobs lost in the US between 2000 and 2010, 87 percent of them were lost to automation. If that trend continues, it means that future protectionist policies would save 1 in 10 manufacturing jobs, at best — but, more likely, they’d lead to tariffs and trade wars. And with the nature of modern manufacturing inexorably trending toward shared production, in which individual products are manufactured using materials produced in many different countries, protectionist policies make even less sense. Shared production allows us to manufacture higher-quality products at prices we can afford, but it’s impossible without efficient cross-border movement of materials and products. As Picado asks: “Does it make more sense to drive up prices to the point where we can’t afford basic goods, for the sake of protecting a job that might be eliminated by automation in a few years anyway?” 

Christine Thach shares her experience growing up in a refugee community — and the lessons it taught her about life and business — at TED@UPS. (Photo: Mary Anne Morgan / TED)

Capitalism for the collective. Christine Thach was raised within a tight-knit community of Cambodian refugees in the United States. Time after time, she witnessed the triumphs of community-first thinking through her own family’s hardships, steadfast relationships and continuous investment in refugee-owned businesses. “This collective-success mindset we’ve seen in refugees can actually improve the way we do business,” she says. “The self-interested foundations of capitalism, and the refugee collectivist mindset, are not in direct conflict with each other. They’re actually complementary.” Thach thinks an all-for-one, one-for-all mentality may just be able to shake up capitalism in a way that benefits everyone — if companies shift away from the individual and rally for group prosperity.

In defense of perfectionism. Some people think perfectionism is a bad thing, that it only leaves us disappointed. Jon Bowers disagrees; he sees perfectionism as “a willingness to do what is difficult to achieve what is right.” Bowers manages a facility where he trains professional delivery drivers. The stakes are high — 100 people in the US die every day in car accidents. So he’s a fan of striving to get as close to perfect as possible. We shouldn’t lower our standards because we’re afraid to fail, Bowers says. “We need to fail … failure is a natural stepping stone toward perfection.”

Uma Adwani shares the joys of teaching math at TED@UPS. (Photo: Mary Anne Morgan / TED)

Math’s hidden messages. “I hated math until it saved my life,” says Uma Adwani. As a young woman, Adwani left her small hometown of Akola, India, to start a career and life for herself in an unfamiliar city on her own. For months, she scraped by on three dollars a day — until a primary school hired her to teach the subject she loathed the most: math. But as Uma worked to prepare her lessons (and keep her job!), she started to discover “the magic of even and odd numbers, the poetry, the symmetry.” She shares the secret wisdom she found in the multiplication tables, like this one: if I am even to myself, no matter what I am multiplied with or what I go through in life, the result will always be even!

Truck driver turned activist John McKown tells sobering stories of human trafficking at TED@UPS. (Photo: Mary Anne Morgan / TED)

Activism on the road. As a small-town police officer, John McKown dealt with his share of prostitution cases. But after he left the force and became a truck driver, he faced prostitution in a new light — at truck stops. After first brushing them off as an annoyance, McKown came to realize that the many prostitutes who go from truck to truck offering “dates” at truck stops weren’t just stuck, they were enslaved. According to the FBI, 293,000 American children are at risk of enslavement, McKown says, and now he sees it as a moral imperative to help. When he pulls into a truck stop, he’s not just looking for a parking spot; he’s looking for a way to help — and he encourages others not to turn a blind eye to this problem.

A life of awe. For artist Jennifer Allison, getting dressed can feel like rubbing against a cactus, the lights at the grocery store seem more like strobes at a disco, and the number four is always royal blue. It wasn’t until Allison was an adult that she was given a name for the strange, and often painful, way her brain processes information — Sensory Processing Disorder (SPD). Allison shares the many ways she tried to cope with her condition — from stealing cars (and returning them) to self-medication and eventually an overdose — before returning to her childhood love: art. In an intimate talk, Allison shares how art saved her life, transforming her world “from pain and chaos to mesmerizing awe and wonder.” She urges us to find what transforms our own worlds, “whether it’s through art or science, nature or religion.” Because, she explains, it’s this sense of awe that connects us to the bigger picture and each other, grounding us and making life worth living.

Johnny Staats grew up singing gospel in church and his family band. Now a UPS driver and bluegrass virtuoso, he plays music with people along his route and at Carnegie Hall. Joined by multi-instrumentalist Davey Vaughn, Staats closes out Session 1 of TED@UPS with a performance of his original song, “His Love Has Got a Hold on Me.”

Singer Stella Stevenson and pianist Danny Bauer open Session 2 by transforming the TED@UPS stage into a jazz lounge with a bold, smoky cover of “Our Day Will Come.”

What’s the point of living in the city? Leading organizations predict that by 2050, 66 percent of the population will live in cities with worsening crime, congestion and inequality. Julio Gil believes the opposite. Trends come and go, he says, and city living will eventually go too, as people realize we can now get the same benefits of the city while living in the countryside. With the delivery innovations and ubiquitous technology of modern life, there’s no reason not to settle outside the city for a bigger piece of land. Soon enough, he says, “city life” will be something that can be lived anywhere, with the help of drones, social media and augmented reality. Gil challenges the TED@UPS audience to think outside big-city walls and consider the advantages of greener pastures.

Sebastian Guo heralds the arrival of the Chinese millennials — the biggest emerging consumer demographic in the world — at TED@UPS. (Photo: Mary Anne Morgan / TED)

Pay attention to Chinese millennials. The business world is obsessed with American millennials, but Sebastian Guo suggests that a different group is about to take over the world: Chinese millennials. If they were their own country, Chinese millennials would be the world’s third largest. They’re well-educated and super motivated — 57 percent have a bachelor’s degree and 23 percent have a master’s, and they’re choosing majors that give them a competitive edge, specifically STEM and business management. As the biggest emerging consumer demographic on the planet, Chinese millennials spend four times more on mobile purchases than their American counterparts. And then there are the intangibles. The Chinese are big-picture people whose thinking starts from the overview and makes its way to the specific, Guo says, which means they focus on growth and the future in the workplace. And 10 years of smartphones hasn’t erased thousands of years of Confucian ideals, which emphasize a sense of hierarchy in social relations and suggest that a Chinese millennial might be more deferential to their managers at work. The world is tilted towards China now, Guo says, and Chinese millennials are ready to be explorers in this new adventure.

Robot-proof our jobs. “Driver” is the most common job in 29 of the 50 states — and with self-driving cars on the horizon, this could quickly turn into a big problem. To keep robots from taking our jobs, innovation architect David Lee says that we should stop asking people to work like robots and let work feel like … the weekend! “Human beings are amazing on weekends,” Lee says. They’re artists, carpenters, chefs and athletes. The key is to start asking people what problems they are inspired to solve and what talents they want to bring to work. Let them lead the way. “When you invite people to be more, they can amaze us with how much more they can be,” Lee says.

Back with a welcomed musical interlude, Johnny Staats and Davey Vaughn return to the TED@UPS stage to perform an original song, “The West Virginia Coal Miner.”

How drones are revolutionizing healthcare. Partnering across disciplines, UPS has helped create the world’s first drone-based medical delivery system, Zipline. The scalable system transports emergency medical supplies to remote villages in Rwanda. On track to its goal of saving thousands of lives a year, it could help transform how we deliver medical resources in the future as populations outgrow aging infrastructure. Learn more about Zipline in the mini-doc “Collaboration Lifeline,” shown for the first time at TED@UPS.

Planning happiness. City planners are already busy designing futures full of bike paths and LEED-certified buildings. But are they designing for our happiness? It’s hard to define, and even harder to plan for, but urban planner Thomas Madrecki has a simple solution: Ask the public. “Our quality of life improves most when we feel engaged and empowered,” he explains, and one of the best ways planners can do this is by making public participation a priority. He calls for an “overhaul of the planning process” through public engagement, clear communication, and meetings the public actually want to attend. It’s not enough for urban planners to be trained in zoning regulations, data methods and planning history — they need to be trained in people, says Madrecki. After all, happiness and health are not engineering problems; they’re people problems.

Innovators don’t see different things; they see things differently. As a Colonel in the Air Force Reserve and an MD-11 Captain at UPS, Jeff Kozak thinks a lot about fuel, and for good reason. For his airline, fuel is by far the largest expense, at over $1.3 billion a year. Kozak tells the story of a counterintuitive idea he had to optimize fuel efficiency and cut carbon emissions by focusing on finding the exact amount of fuel needed for each plane to get to each leg of its journey. Initially met with resistance by an industry that believed more fuel was always better, the plan worked — after just ten days the airline saved $500,000 and eliminated 1,300 tons of CO2 emissions. “Let’s all continue to strive to see things differently and stay open to ideas that go against conventional thinking,” Kozak says. “Despite the resistance this type of thinking can often bring, embracing the counterintuitive can make all the difference.”

Former professional wrestler Mike Kinney encourages us all to turn ourselves up at TED@UPS. (Photo: Mary Anne Morgan / TED)

That’s me … in the chaps. How do you go from a typical high school senior to a sweaty wild man in chaps and a cowboy hat? “You turn yourself up!” says retired professional wrestler and UPS sales supervisor Mike Kinney. For years Kinney was a professional wrestler with the stage name Cowboy Gator Magraw, a persona he invented for the ring by amplifying the best parts of himself, the things about him that made him unique. In a talk equal parts funny and smart, Kinney taps into some locker-room wisdom to show us how we can all turn up to reach our full potential.

To close out the show, violinist Jessica Cambron and flutist Paige James play a moving rendition of the goodnight waltz (and Ken Burns fan favorite) “Ashokan Farewell,” accompanied by Johnny Staats and Davey Vaughn.


TED: TEDGlobal 2017: Announcing the speaker lineup for our Arusha conference

TEDGlobal 2017 kicks off August 27–30, 2017, in Arusha, Tanzania. Ten years after the last TEDGlobal in Arusha, we’ll again gather a community from across the continent and around the world to explore ideas that may propel Africa’s next leap — in business, politics and justice, creativity and entrepreneurship, science and tech.

Today, we’re thrilled to announce our speaker lineup for TEDGlobal 2017! It’s a powerful list you can skim here — to dive into speaker bios and learn about the 8 themed sessions of TEDGlobal 2017, visit our full Program Guide.

OluTimehin Adegbeye, Writer and activist: Writing on gender justice, sexual and reproductive rights, urban poverty and media, OluTimehin Adegbeye shares her (often very strong) opinions on Twitter and in long-form work. @OhTimehin

Oshiorenoya Agabi, Neurotechnology entrepreneur: Oshiorenoya Agabi is engineering neurons to express synthetic receptors which give them an unprecedented ability to become aware of surroundings. koniku.io

Nabila Alibhai, Place-maker: Nabila Alibhai leads inCOMMONS, a new organization focused on civic engagement, public spaces, and building collective responsibility for our shared places. @NabilaAlibhai

Bibi Bakare-Yusuf, Publisher: Bibi Bakare-Yusuf is co-founder and publishing director of one of Africa’s leading publishing houses, Cassava Republic Press. cassavarepublic.biz

Christian Benimana, Architect: Christian Benimana is co-founder of the African Design Center, a training program for young architects. massdesigngroup.org

Gus Casely-Hayford, Cultural historian: Gus Casely-Hayford writes, lectures, curates and broadcasts widely about African culture.

In Session 5, Repatterning, speakers will talk about the worlds we create — in fiction, fashion, design, music.

Natsai Audrey Chieza, Designer: Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. @natsaiaudrey

Tania Douglas, Biomedical engineer: Tania Douglas imagines how biomedical engineering can help address some of Africa’s health challenges. @tania_douglas

Touria El Glaoui, Art fair curator: To showcase vital new art from African nations and the diaspora, Touria El Glaoui founded the powerhouse 1:54 Contemporary African Art Fair. @154artfair

Meron Estefanos, Refugee activist: Meron Estefanos is the executive director of the Eritrean Initiative on Refugee Rights, advocating for refugees and victims of trafficking and torture. @meronina

Chika Ezeanya-Esiobu, Indigenous knowledge expert: Working across disciplines, Chika Ezeanya-Esiobu explores indigenous knowledge, homegrown and grassroots approaches to the sustainable advancement of Sub-Saharan Africa. chikaforafrica.com

Kamau Gachigi, Technologist: At Gearbox, Kamau Gachigi empowers Kenya’s next generation of creators to prototype and fabricate their visions. @kamaufablab

Ameenah Gurib-Fakim, President of Mauritius: Ameenah Gurib-Fakim is the 6th president of the island of Mauritius. As a biodiversity scientist, she also explores the medical and nutrition secrets of her home. @aguribfakim

Leo Igwe, Human rights activist: Leo Igwe works to end a variety of human rights violations that are rooted in superstition, including witchcraft accusations, anti-gay hate, caste discrimination and ritual killing. @leoigwe

Joel Jackson, Transport entrepreneur: Joel Jackson is the founder and CEO of Mobius Motors, set to launch a durable, low-cost SUV made in Africa. mobiusmotors.com

Tunde Jegede, Composer, cellist, kora virtuoso: TED Fellow Tunde Jegede combines musical traditions to preserve classical forms and create new ones. tundejegede.com

Paul Kagame, President of the Republic of Rwanda: As president of Rwanda, Paul Kagame has received recognition for his leadership in peace-building, development, good governance, promotion of human rights and women’s empowerment, and advancement of education and ICT. @PaulKagame

Zachariah Mampilly, Political scientist: Zachariah Mampilly is an expert on the politics of both violent and non-violent resistance. He is the author of “Rebel Rulers: Insurgent Governance and Civilian Life during War” and “Africa Uprising: Popular Protest and Political Change.” @Ras_Karya

Vivek Maru, Legal empowerment advocate: Vivek Maru is the founder of Namati, a movement for legal empowerment around the world powered by cadres of grassroots legal advocates. Global Legal Empowerment Network

In Session 6: A Hard Look, these speakers will confront myths and hard facts about the continent, through the lens of politics and human rights as well as the reality of life as a small farmer.

Kola Masha, Agricultural leader: Kola Masha is the managing director of Babban Gona, an award-winning, high-impact, financially sustainable and highly scalable social enterprise, part-owned by the farmers they serve. @BabbanGona

Clapperton Chakanetsa Mavhunga, MIT professor, grassroots thinker-doer, author: Clapperton Chakanetsa Mavhunga studies the history, theory, and practice of science, technology, innovation, and entrepreneurship in the international context, with a focus on Africa. sts-program.mit.edu/people/sts-faculty/c-clapperton-mavhunga/

Thandiswa Mazwai, Singer: Thandiswa is one of the most influential South African musicians of this generation. @thandiswamazwai

Yvonne Chioma Mbanefo, Digital learning advocate: After searching for an Igbo language learning tool for her kids, digital strategist Yvonne Mbanefo helped create the first illustrated Igbo dictionary for children. Now she’s working on Yoruba, Hausa, Gikuyu and more. @yvonnembanefo

Sara Menker, Technology entrepreneur: Sara Menker is founder and CEO of Gro Intelligence, a tech company that marries the application of machine learning with domain expertise and enables users to understand and predict global food and agriculture markets. @SaraMenker

Eric Mibuari, Computer scientist: Eric Mibuari studies the blockchain at IBM Research, and is the founder of the Laara Community Technology Centre in Meru, Kenya. laare.csail.mit.edu

Kingsley Moghalu, Political economist: Kingsley Moghalu is a global leader who has made contributions to the stability, progress and wealth of nations, societies and individuals across such domains as academia, economic policy, banking and finance, entrepreneurship, law and diplomacy. kingsleycmoghalu.com

Sethembile Msezane, Artist: Sethembile Msezane explores the act of public commemoration — how it creates myths, constructs histories, includes some and excludes others. @sthemse

Kisilu Musya, Farmer and filmmaker: For six years, Kisilu Musya has filmed his life on a small farm in South East Kenya, to make the documentary “Thank You for the Rain.” thankyoufortherain.com

Robert Neuwirth, Author: To research his book “Stealth of Nations,” Robert Neuwirth spent four years among street vendors, smugglers and “informal” import/export firms. @RobertNeuwirth

Kevin Njabo, Biodiversity scientist: Kevin Njabo is coordinating the development of UCLA’s newly established Congo Basin Institute (CBI) in Yaoundé, Cameroon.

Alsarah and the Nubatones, East African retro-popsters: Inspired by both the golden age of Sudanese pop music of the ’70s and the New York effervescence, Alsarah & the Nubatones have built a repertoire where an exhilarating oud plays electric melodies on beautiful jazz-soul bass lines, and where sharp and modern percussion breathes new life into age-old rhythms. alsarah.com

Ndidi Nwuneli, Social innovation expert: Through her work in food and agriculture, and as a leadership development mentor, Ndidi Okonkwo Nwuneli commits to building economies in West Africa. @ndidiNwuneli

Dayo Ogunyemi, Cultural media builder: Dayo Ogunyemi is the founder of 234 Media, which makes principal investments in the media, entertainment and technology sectors. @AfricaMET

Nnedi Okorafor, Science fiction writer: Nnedi Okorafor weaves African cultures into the evocative settings and memorable characters of her science fiction work for kids and adults. @Nnedi

Fredros Okumu, Mosquito scientist: Fredros Okumu studies human-mosquito interactions, hoping to understand how to keep people from getting malaria. ihi.or.tz

Qudus Onikeku, Dancer, choreographer: With a background as an acrobat and dancer, Qudus Onikeku is one of the preeminent Nigerian choreographers working today. @qudusonikeku

DK Osseo-Asare, Designer: DK Osseo-Asare is a designer who makes buildings, landscapes, cities, objects and digital tools. @dkoa

Keller Rinaudo, Robotics entrepreneur: Keller Rinaudo is CEO and co-founder of Zipline, building drone delivery for global public health customers. @kellerrinaudo

Reeta Roy, President and CEO, The Mastercard Foundation: A thoughtful leader and an advocate for the world’s most vulnerable, Reeta Roy has worked tirelessly to build a foundation that is collaborative and known for its lasting impact. mastercardfdn.org

Chris Sheldrick, Co-founder & CEO, what3words: With what3words, Chris Sheldrick is providing a precise and simple way to talk about location, by dividing the world into a grid of 3m x 3m squares and assigning each one a unique 3 word address. what3words.com

George Steinmetz, Aerial photographer: Best known for his exploration photography, George Steinmetz has a restless curiosity for the unknown: remote deserts, obscure cultures, the mysteries of science and technology. georgesteinmetz.com

Olúfẹ́mi Táíwò, Historian and philosopher: Drawing on a rich cultural and personal history, Olúfẹ́mi Táíwò studies philosophy of law, social and political philosophy, Marxism, and African and Africana philosophy. africana.cornell.edu/

Pierre Thiam, Chef: Pierre Thiam shares the cuisine of his home in Senegal through global restaurants and highly praised cookbooks. pierrethiam.com

Iké Udé, Artist: The work of Nigerian-born Iké Udé explores a world of dualities: photographer/performance artist, artist/spectator, African/postnationalist, mainstream/marginal, individual/everyman and fashion/art. ikeude.com

Washington Wachira, Wildlife ecologist and nature photographer: Birder and ecologist Washington Wachira started the Youth Conservation Awareness Programme (YCAP) to nurture young environmental enthusiasts in Kenya. washingtonwachira.com

Ghada Wali, Designer: A pioneering graphic designer in Egypt, Ghada Wali has designed fonts, brands and design-driven art projects. ghadawali.com


CryptogramHacking a Segway

The Segway has a mobile app. It is hackable:

While analyzing the communication between the app and the Segway scooter itself, Kilbride noticed that a user PIN number meant to protect the Bluetooth communication from unauthorized access wasn't being used for authentication at every level of the system. As a result, Kilbride could send arbitrary commands to the scooter without needing the user-chosen PIN.

He also discovered that the hoverboard's software update platform didn't have a mechanism in place to confirm that firmware updates sent to the device were really from Segway (often called an "integrity check"). This meant that in addition to sending the scooter commands, an attacker could easily trick the device into installing a malicious firmware update that could override its fundamental programming. In this way an attacker would be able to nullify built-in safety mechanisms that prevented the app from remote-controlling or shutting off the vehicle while someone was on it.

"The app allows you to do things like change LED colors, it allows you to remote-control the hoverboard and also apply firmware updates, which is the interesting part," Kilbride says. "Under the right circumstances, if somebody applies a malicious firmware update, any attacker who knows the right assembly language could then leverage this to basically do as they wish with the hoverboard."

Worse Than FailureError'd: No Thanks Necessary

"I guess we're not allowed to thank the postal carriers?!" Brian writes.

 

"So, does the CPU time mean that Microsoft has been listening to every noise I have made since before I was born?" writes Shaun F.

 

"No problem. I will not attempt to re-use your error message without permission," wrote Alex K.

 

Mark B. writes, "Ah, if only we could have this in real life."

 

"Good work Google! Another perfect translation into German," Kolja wrote.

 

"I was searching for an Atmel MCU, so I naturally opened Atmel's Product Finder. I kind of wish that I didn't," writes Michael B.,

 

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianMichal Čihař: Making Weblate more secure and robust

Running a public web application always brings challenges in terms of security and, more generally, in handling untrusted data. Security-wise, Weblate has always been quite good (mostly thanks to using Django, which comes with built-in protection against many vulnerabilities), but there were always things to improve in input validation or possible information leaks.

When Weblate joined HackerOne (see our first month experience with it), I was hoping to get some security-driven code review, but apparently most people there are focused on black-box testing. I can certainly understand that - it's easier to conduct and requires much less knowledge of the tested website.

One big area where reports against Weblate came in was authentication. Originally we relied mostly on the default authentication pipeline that comes with Python Social Auth, but that turned out to have some security implications, and we ended up with a heavily customized authentication pipeline to avoid several risks. Some patches were submitted back and some issues reported, but we have still diverged quite a lot in this area.

The second area where scanning was apparently performed, but from which almost no reports came, was input validation. Thanks to the excellent XSS protection in Django, nothing was really found. On the other hand, the scanning triggered several internal server errors on our side. At this point I was really happy to have Rollbar configured to track all errors happening in production. With all such errors properly recorded and grouped, it was really easy to go through them and fix them in our codebase.

Most of the related fixes have landed in Weblate 2.14 and 2.15, but this is obviously an ongoing effort to make Weblate better with every release.

Filed under: Debian English SUSE Weblate

,

TEDAnonymous ideas worth spreading — and the surprising discoveries behind their curation

The intimacy of listening: Producer Cloe Shasha shares what she and her team learned while producing TED and Audible’s original audio series “Sincerely, X.”

In the spring of 2016, we put out a call for submissions for anonymous talks from around the world for the first season of our new podcast, Sincerely, X. We received hundreds of ideas — stories touching on a broad range of topics. As we read through them, we found ourselves flooded by tragedy, comedy, intrigue and surprise. Stories of victims of abuse, struggles with mental health, lessons from prison, insider secrets within companies and governmental organizations, and so much more.

>> Sincerely, X was co-produced with Audible. Episode 1, “Dr. Burnout,” is available now on Apple Podcasts and the TED Android app. <<

The premise of the podcast Sincerely, X felt simple at first: sharing important ideas, anonymously. The episodes would include speakers who need to separate their professional ideas from their personal lives; those who want to share an idea, but fear it would hurt someone in their family if they did so publicly; and quiet idealists whose solutions could transform lives. Why anonymous? Our theory was that inviting people to share ideas without having to reveal their identity might allow for an entirely new category of talks.

We dove into this pool of submissions to figure out who would make a great speaker for the show, and started interviewing people by phone. We were looking for compelling stories that had a strong need for anonymity while also considering them through the lens that we use for TED Talk submissions. In other words, did each story have an idea worth spreading?

Throughout the process of creating Sincerely, X season 1, we realized that we had to think about these talks quite differently from TED Talks on a stage, and we adapted along the way.

Signposting in an audio talk

When you’re watching a speaker on a stage, context and sentiment are communicated through the speaker’s body language, facial expressions and images (if they have slides). In audio, with only one of our senses engaged, a lot more information has to be transmitted through a speaker’s voice alone.

This came up when we worked with the speaker in episode 2, “Pepper Spray.” It’s the story of a woman who lived a normal-seeming life — until one day she lashed out in a department store and began pepper-spraying strangers. There are a lot of details that she shares about her life in that episode — both before and after the pepper spray incident. If she were telling this story on a stage, the audience would experience visual cues that would indicate whether she were reflecting on the far past versus the recent past, or whether she felt ashamed or justified in her actions. (Watch a TED Talk with the sound off sometime, and you’ll be surprised at how much context you can pick up!) But when we shared the audio with colleagues for their feedback, they were at times confused by the sequence of events in the story. So we worked with the speaker to help her find places to include signposting sentences such as, “But I want to come back to the hero of the story.” In other words, phrases that could ground the listener in what’s about to come.

The intimacy of listening

In the same way that hearing a ghost story around a campfire conjures up scary visualizations, hearing a difficult story on a podcast can build intense images in your mind. Drawing the line between deeply moving content and manipulative content can be tricky and nuanced.

In the case of some Sincerely, X episodes, a few of the early drafts of talks contained details that felt disturbingly intimate — details that might have packed an emotional punch from the distance of a stage, but that felt too intimate coming out of earbuds. We had to learn how to mitigate that intensity by listening to the content and getting feedback from early screeners who shared honest reactions.

This was a relevant dynamic for several of our speakers, including our speaker in episode 6, “Rescued by Ritual.” This speaker talks about a private ritual she invented in order to cope with the horror of her abusive marriage before she left her ex-husband. In the earliest draft, in order to provide context for the purpose of her ritual, the leadup to the reenactment of the ritual involved details that were difficult to hear for some early listeners. So we worked with the speaker to figure out which details she felt were most needed in order to paint an accurate picture of that time in her life.

To read or to memorize?

When it comes to our TED speakers on the stage, we typically encourage two ways of preparing for a talk: either memorizing their content so thoroughly that they can recite it seamlessly while standing on one foot with the television blaring, or memorizing an outline and riffing off that rehearsed structure once onstage. As Chris Anderson says, partially memorizing a talk produces an “uncanny valley” effect — a seemingly robotic or artificial performance. It’s hard to appear authentic while devoting a fair amount of energy to the process of recall. So if someone is not a great memorizer, we encourage improvising the sentences based on a solid outline of the concepts. Both of these forms of preparation are aimed at fostering an authentic delivery from the speaker, which cultivates a powerful connection between the speaker and the audience.

In the context of Sincerely, X, we thought about how to foster that authentic delivery, and considered that preparing speakers to read their talks might be a lower-stress way to record speakers in the studio. But it soon became clear that unless a speaker had acting experience, reading a talk sounded like… reading. So we experimented with having speakers memorize their talks extremely thoroughly before coming into the studio. And this worked for some speakers; when we recorded the speaker in episode 1, “Dr. Burnout,” she delivered her talk beautifully once she had fully committed it to memory.

Sincerely, X was co-produced with Audible. Episode 1, “Dr. Burnout,” is available now on Apple Podcasts and the TED Android app. We’ll be releasing new episodes every Thursday for the next ten weeks.


Krebs on SecurityExclusive: Dutch Cops on AlphaBay ‘Refugees’

Following today’s breaking news about U.S. and international authorities taking down the competing Dark Web drug bazaars AlphaBay and Hansa Market, KrebsOnSecurity caught up with the Dutch investigators who took over Hansa on June 20, 2017. When U.S. authorities shuttered AlphaBay on July 5, police in The Netherlands saw a massive influx of AlphaBay refugees who were unwittingly fleeing directly into the arms of investigators. What follows are snippets from an exclusive interview with Petra Haandrikman, team leader of the Dutch police unit that infiltrated Hansa.

Vendors on both AlphaBay and Hansa sold a range of black market items — most especially controlled substances like heroin. According to the U.S. Justice Department, AlphaBay alone had some 40,000 vendors who marketed a quarter-million sales listings for illegal drugs to more than 200,000 customers. The DOJ said that as of earlier this year, AlphaBay had 238 vendors selling heroin. Another 122 vendors advertised Fentanyl, an extremely potent synthetic opioid that has been linked to countless overdoses and deaths.

In our interview, Haandrikman detailed the dual challenges of simultaneously dealing with the exodus of AlphaBay users to Hansa and keeping tabs on the giant increase in new illicit drug orders that were coming in daily as a result.

The profile and feedback of a top AlphaBay vendor. Image: ShadowDragon.io

KrebsOnSecurity (K): Talk a bit about how your team was able to seize control over Hansa.

Haandrikman (H): When we knew the FBI was working on AlphaBay, we thought ‘What’s better than if they come to us?’ The FBI wanted [the AlphaBay takedown] to look like an exit scam [where the proprietors of a dark web marketplace suddenly abscond with everyone’s money]. And we knew a lot of vendors on AlphaBay would probably come over to Hansa when AlphaBay was closed.

K: Where was Hansa physically based?

H: We knew the Hansa servers were in Lithuania, so we sent an MLAT (mutual legal assistance treaty) request to Lithuania and requested if we could proceed with our planned actions in their country. They were very willing to help us in our investigations.

K: So you made a copy of the Hansa servers?

H: We gained physical access to the machines in Lithuania, and were able to set up some clustering between the [Hansa] database servers in Lithuania and servers we were running in our country. With that, we were able to get a real time copy of the Hansa database, and then copy over the Web site code itself.

K: Did you have to take Hansa offline for a while during this process?

H: No, it didn’t really go offline. We were able to create our own copy of the site that was running on servers in the Netherlands. So there were two copies of the site running simultaneously.

The now-defunct Hansa Market.

K: At a press conference on this effort at the U.S. Justice Department in Washington, D.C. today, Rob Wainwright, director of the European law enforcement organization Europol, detailed how the closure of AlphaBay caused a virtual stampede of former AlphaBay buyers and sellers taking their business to Hansa Market. Tell us more about what that influx was like, and how you handled it.

H: Yes, we called them “AlphaBay refugees.” It wasn’t the technical challenge that caused problems. Because this was a police operation, we wanted to keep up with the orders to see if there were any large amounts [of drugs] being ordered to one place, [so that] we could share information with our law enforcement partners internationally.

K: How exactly did you deal with that? Were you able to somehow slow down the orders coming in?

H: We just closed registration on Hansa for new users for a few days. So there was a temporary restriction for being able to register on the site, which slowed down the orders each day to make sure that we could cope with the orders that were coming in.

K: Did anything unexpected happen as a result?

H: Some people started selling their Hansa accounts on Reddit. I read somewhere that one Hansa user sold his account for $40. The funny part about that was that sale happened about five minutes before we re-opened registration. There was a lot of frustration from ex-AlphaBay users that weren’t allowed to register on the site. But we also got defended by the Hansa community on social media, who said it was a great decision by us to educate certain AlphaBay users on Hansa etiquette, which doesn’t allow the sale of things permitted on AlphaBay and other dark markets, such as child pornography and firearms.

A message from Dutch authorities listing the top dark market vendors by nickname.

K: You mentioned earlier that the FBI wanted AlphaBay users to think that the reason for the closure of that marketplace was that its operators and administrators had conducted an ‘exit scam’ where they ran off with all of the Bitcoin and virtual currency that vendors and buyers had stored in their marketplace wallets temporarily. Why do you think they wanted this to look like an exit scam?

H: The idea was to hit the dark markets even harder when they think they’re just moving to another market and it turns out to be law enforcement. Breaking the trust, so that [users] would not feel safe on a dark market.

K: It has been reported that just a few days ago the Hansa market administrators decided to ban the sale of Fentanyl. Were Dutch police involved in that at all?

H: It was a combination of things. One of the site’s employees or moderators started a discussion about this drug. We obviously also had our own opinion about it. It was a pretty good dialogue between us and the Hansa moderators to ban this from the site, and [that decision received] a lot of support from the community. But we didn’t instigate that discussion.

K: Have the Dutch police arrested anyone in connection with this investigation so far?

H: Yes, we identified several people in the Netherlands using the site, and there have already been several arrests made [tied to] Fentanyl.

K: Can you talk about whether your control over Hansa helped you identify users?

H: We did use some technical tricks to find out who people are, but we can’t go into that a lot because the investigation is still going on. But we did try to change the behavior [of some Hansa users] by asking for things that helped us to identify a lot of people and money.

K: What is your overall strategy in all of this?

H: Our strategy is that we want people to know that the Dark Web is not an anonymous place for criminals. Don’t think you can just buy or sell your drugs there without eventually getting caught by law enforcement. We want people to know you’re not safe on the Dark Web. Sooner or later we will come to get you.

Further reading: After AlphaBay’s Demise, Customers Flocked to Dark Market Run by Dutch Police

Krebs on SecurityAfter AlphaBay’s Demise, Customers Flocked to Dark Market Run by Dutch Police

Earlier this month, news broke that authorities had seized the Dark Web marketplace AlphaBay, an online black market that peddled everything from heroin to stolen identity and credit card data. But it wasn’t until today, when the U.S. Justice Department held a press conference to detail the AlphaBay takedown that the other shoe dropped: Police in The Netherlands for the past month have been operating Hansa Market, a competing Dark Web bazaar that enjoyed a massive influx of new customers immediately after the AlphaBay takedown.

The normal home page for the dark Web market Hansa has been replaced by this message from U.S. law enforcement authorities.

U.S. Attorney General Jeff Sessions called the AlphaBay closure “the largest takedown in world history,” targeting some 40,000 vendors who marketed a quarter-million listings for illegal drugs to more than 200,000 customers.

“By far, most of this activity was in illegal drugs, pouring fuel on the fire of a national drug epidemic,” Sessions said. “As of earlier this year, 122 vendors advertised Fentanyl. 238 advertised heroin. We know of several Americans who were killed by drugs on AlphaBay.”

Andrew McCabe, acting director of the FBI, said AlphaBay was roughly 10 times the size of the Silk Road, a similar dark market that was shuttered in a global law enforcement sting in October 2013.

As impressive as those stats may be, the real coup in this law enforcement operation became evident when Rob Wainwright, director of the European law enforcement organization Europol, detailed how the closure of AlphaBay caused a virtual stampede of former AlphaBay buyers and sellers taking their business to Hansa Market, which had been quietly and completely taken over by Dutch police one month earlier — on June 20.

“What this meant…was that we could identify and disrupt the regular criminal activity that was happening on Hansa Market but also sweep up all of those new users that were displaced from AlphaBay and looking for a new trading platform for their criminal activities,” Wainwright told the media at today’s press conference, which seemed more interested in asking Attorney General Sessions about a recent verbal thrashing from President Trump.

“In fact, they flocked to Hansa in droves,” Wainwright continued. “We recorded an eight times increase in the number of human users on Hansa immediately following the takedown of AlphaBay. Since the undercover operation to take over Hansa market by the Dutch Police, usernames and passwords of thousands of buyers and sellers of illicit commodities have been identified and are the subject of follow-up investigations by Europol and our partner agencies.”

On July 5, the same day that AlphaBay went offline, authorities in Thailand arrested Alexandre Cazes — a 25-year-old Canadian citizen living in Thailand — on suspicion of being the creator and administrator of AlphaBay. He was charged with racketeering, conspiracy to distribute narcotics, conspiracy to commit identity theft and money laundering, among other alleged crimes.

Alexandre Cazes, standing in front of one of four Lamborghini sports cars he owned. Image: Hanke.io.

Law enforcement authorities in the US and abroad also seized millions of dollars worth of Bitcoin and other assets allegedly belonging to Cazes, including four Lamborghini cars and three properties.

However, law enforcement officials never got a chance to extradite Cazes to the United States to face trial. Cazes, who allegedly went by the nicknames “Alpha02” and “Admin,” reportedly committed suicide while still in custody in Thailand.

Online discussions dedicated to the demise of AlphaBay, Hansa and other Dark Web markets — such as this megathread over at Reddit — observe that law enforcement officials may have won this battle with their clever moves, but that another drug bazaar will simply step in to fill the vacuum.

But Ronnie Tokazowski, a senior analyst at New York City-based threat intelligence firm Flashpoint, said the actions by the Dutch and American authorities could make it more difficult for established vendors from AlphaBay and Hansa to build a presence using the same identities at alternative Dark Web marketplaces.

Vendors on Dark Web markets tend to re-use the same nickname across multiple marketplaces, partly so that other cybercriminals won’t try to assume and abuse their good names on other forums, but also because a reputation for quality customer service means everything on these marketplaces and is worth a pretty penny.

But Tokazowski said even if top vendors from AlphaBay/Hansa already have a solid reputation among buyers on other marketplaces, some of those vendors may choose to walk away from their former identities and start anew.

“One of the things [the Dutch Police and FBI] mentioned was they were going after other markets using some of the several thousand password credentials they had from AlphaBay and Hansa, as a way to get access to vendor accounts,” on other marketplaces, he said. “These actions are really going to have a lot of people asking who they can trust.”

A message from Dutch authorities listing the top dark market vendors by nickname.

“There are dozens of these Dark Web markets, people will start to scatter to them, and it will be interesting to see who steps up to become the next AlphaBay,” Tokazowski continued. “But if people were re-using usernames and passwords across dark markets, it’s going to be a bad day for them. And from a vendor perspective, [the takedowns] make it harder for sellers to transfer reputation to another market.”

For more on how the Dutch Police’s National High Tech Crimes Unit (NHTCU) quietly assumed control over the Hansa Market, check out this story.

This story may be updated throughout the day (as per usual, any updates will be noted with a timestamp). In the meantime, the Justice Department has released a redacted copy of the indictment against Cazes (PDF), as well as a forfeiture complaint (PDF).

Update, 4:00 p.m. ET: Added perspectives from Flashpoint, and link to exclusive interview with the leader of the Dutch police unit that infiltrated Hansa.

CryptogramEthereum Hacks

The press is reporting a $32M theft of the cryptocurrency Ethereum. Like all such thefts, it is not the result of a cryptographic failure in the currency itself, but of a vulnerability in the software surrounding the currency -- in this case, digital wallets.

This is the second Ethereum hack this week. The first tricked people into sending their Ethereum to another address.

This is my concern about digital cash. The cryptography can be bulletproof, but the computer security will always be an issue.

Worse Than FailureFinding the Lowest Value

Max’s team moved into a new office, which brought with it the low-walled, “bee-hive” style cubicle partitions. Their project manager cheerfully explained that the new space “would optimize collaboration”, which in practice meant that every random conversation between any two developers turned into a work-stopping distraction for everyone else.

That, of course, wasn’t the only change their project manager instituted. The company had been around for a bit, and their original application architecture was a Java-based web application. At some point, someone added a little JavaScript to the front end. Then a bit more. This eventually segregated the team into two clear roles: back-end Java developers, and front-end JavaScript developers.

An open pit copper mine

“Silos,” the project manager explained, “are against the ethos of collaboration. We’re all going to be full stack developers now.” Thus everyone’s job description and responsibilities changed overnight.

Add an overly ambitious release schedule and some unclear requirements, and the end result is a lot of underqualified developers rushing to hit targets with tools that they don’t fully understand, in an environment that isn’t conducive to concentration in the first place.

Max was doing his best to tune out the background noise, when Mariella stopped into Dalton’s cube. Dalton, sitting straight across from Max, was the resident “front-end expert”, or at least, he had been before everyone was now a full-stack developer. Mariella was a long-time backend JEE developer who hadn’t done much of the web portion of their application at all, and was doing her best to adapt to the new world.

“Dalton, what’s the easiest way to get the minimum value of an array of numbers in JavaScript?” Mariella asked.

Max did his best to ignore the conversation. He was right in the middle of a particularly tricky ORM-related bug, and was trying to figure out why one fetch operation was generating just awful SQL.

“Hrmmmm…” Dalton said, tapping at his desk and adding to the distraction while he thought. “That’s a tough one. Oh! You should use a filter!”

“A filter, what would I filter on?”

Max combed through the JPA annotations that controlled their data access, cursing the “magic” that generated SQL queries, but as he started to piece it together, Dalton and Mariella continued their “instructional” session.

“In the filter callback, you’d just check to see if each value is the lowest one, and if it is, return true, otherwise return false.” Dalton knocked out a little drum solo on his desk, to celebrate his cleverness.

“But… I wouldn’t know which value is the lowest one, yet,” Mariella said.

“Oh, yeah… I see what you mean. Yeah, this is a tricky one.”

Max traced through the code. Okay, so the @JoinColumn is CUST_ID, so why is it generating a LIKE comparison instead of an equals? Wait, I think I’ve-

“Ah ha!” Dalton said, chucking Max’s train of thought off the rails and through an HO-scale village. “You just sort the array and take the first value!” *Thumpa thumpa tadatada* went Dalton’s little desk drum solo.

“I guess that makes sense,” Mariella said.

At this point, Max couldn’t stay out of the conversation. “No! Don’t do that. Use reduce. Sorting’s an O(n lg n) operation.”

“Hunh?” Dalton said. His fingers nervously hovered over his desk, ready to play his next drum solo once he had a vague clue what Max was talking about. “In logs in? We’re not doing logging…”

Max tried again, in simple English. “Sorting is slow. The computer does a lot of extra work to sort all the elements.”

“No it won’t,” Dalton said. “It’ll just take the first element.”

“Ahem.” Max turned to discover the project manager looming over his cube. “We want to encourage collaboration,” the PM said, sternly, “but right now, Max, you’re being disruptive. Please be quiet and let the people around you work.”

And that was how Dalton’s Minimum Finding Algorithm got implemented, and released as part of their production code base.
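For the record, here is a minimal sketch of the two approaches side by side (the sample data is ours, not from the production codebase):

const values = [42, 7, 19, 3, 88];

// Dalton's way: O(n lg n), and it needs a comparator anyway --
// a bare sort() compares elements as strings, so [10, 9].sort()
// stays [10, 9].
const minBySort = [...values].sort((a, b) => a - b)[0];

// Max's way: a single O(n) pass with reduce.
const minByReduce = values.reduce((min, v) => (v < min ? v : min));

// For a plain array of numbers, the built-in does the same job.
const minBuiltin = Math.min(...values);

console.log(minBySort, minByReduce, minBuiltin); // 3 3 3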

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianGunnar Wolf: Hey, everybody, come share the joy of work!

I got several interesting and useful replies, both via the blog and by personal email, to my two previous posts where I mentioned I would be starting a translation of the Made With Creative Commons book. It is my pleasure to say: Welcome everybody, come and share the joy of work!

Some weeks ago, our project was accepted as part of Hosted Weblate, lowering the bar for any interested potential contributor. So, whoever wants to be a part of this: You just have to log in to Weblate (or create an account if needed), and start working!

What is our current status? Amazingly better than anything I had expected: Not only have we made great progress in Spanish, reaching >28% of translated source strings, but other people have also started translating into Norwegian Bokmål (hi Petter!) and Dutch (hats off to Heimen Stoffels!). So far, Spanish (where Leo Arias and myself are working) is most active, but anything can happen.

I still want to work a bit on the initial, pre-po4a text filtering, as there are a small number of issues to fix. But they are few and easy to spot, and your translations will not be hampered much while I sort out the missing pieces.

So, go ahead and get to work! :-D Oh, and if you translate sizeable amounts of work into Spanish: As my university wants to publish the resulting work on paper, we would be most grateful if you could fill in the (needless! But still, they ask me to do this...) authorization for your work to be a part of a printed book.

Planet DebianNorbert Preining: The poison of academia.edu

All those working in academia or research have surely heard about academia.edu. It started out as a service for academics; in their own words:

Academia.edu is a platform for academics to share research papers. The company’s mission is to accelerate the world’s research.

But as with most of these platforms, they need to make money, and for some months now academia.edu has been pressing users to pay for a premium account at the incredible rate of USD 8.25 per month.

This is about the same as you pay for Netflix or some other streaming service. If you remain on the free side, what is left for you is SNS-like features, and uploading your papers so that academia.edu can make money from them.

What really surprises me is that they can pull this off at a .edu domain. The registry requirements state:

For Institutions Within the United States. To obtain an Internet name in the .edu domain, your institution must be a postsecondary institution that is institutionally accredited by an agency on the U.S. Department of Education’s list of Nationally Recognized Accrediting Agencies (see recognized accrediting bodies).
Educause web site

Seeing what they are doing, I think it is high time to request removal of the domain name.

So let us see what they are offering for their paid service:

  • Reader “The Readers feature tells you who is reading, downloading, and bookmarking your papers.”
  • Mentions “Get notified when you’re cited or mentioned, including in papers, books, drafts, theses, and syllabuses that Google Scholar can’t find.”
  • Advanced Search “Search the full text and citations of over 18 million papers”
  • Analytics “Learn more about who visits your profile”
  • Homepage – automatically generated home page from the data you enter into the system

On the other hand, the free service consists of SNS elements: you can follow other researchers, see when they upload a paper or enter an event, and that is more or less it. They have lured a considerable number of academics into this service, gathered lots of papers, and now they are showing their real face – money.

In contrast to LinkedIn, which also offers a paid tier but keeps the free tier reasonably usable, academia.edu has broken its promise to “accelerate the world’s research” and, even worse, it is NOT a “platform for academics to share research papers”. They are collecting papers and selling access to them, like the publishers’ paywalls.

I consider this kind of service highly poisonous for the academic environment and researchers.

Planet Linux AustraliaOpenSTEM: New Dates for Earliest Archaeological Site in Aus!

Thylacine or Tasmanian Tiger.

This morning, news was released of a date of 65,000 years for archaeological material at the site of Madjedbebe rock shelter in the Jabiluka mineral lease area, surrounded by Kakadu National Park. The site is on the land of the Mirarr people, who have partnered with archaeologists from the University of Queensland for this investigation. It has also produced evidence of the earliest use of ground-stone tool technology, the oldest seed-grinding tools in Australia and stone points, which may have been used as spears. Most fascinating of all, there is the jawbone of a Tasmanian Tiger or Thylacine (which was found across continental Australia during the Ice Age) coated in a red pigment, thought to be the reddish rock, ochre. There is much evidence of use of ochre at the site, with chunks and ground ochre found throughout the site. Ochre is often used for rock art and the area has much beautiful rock art, so we can deduce that these rock art traditions are as old as the occupation of people in Australia, i.e. at least 65,000 years old! The decoration of the jawbone hints at a complex realm of abstract thought, and possibly belief, amongst our distant ancestors – the direct forebears of modern Aboriginal people.

Kakadu view, NT Tourism.

Placing the finds from Madjedbebe rock shelter within the larger context, the dating, undertaken by Professor Zenobia Jacobs from the University of Wollongong, shows that people were living at the site during the Ice Age, a time when many now-extinct giant animals roamed Australia and the tiny Homo floresiensis was living in Indonesia. These finds show that the ancestors of Aboriginal people came to Australia with much of the toolkit of their rich, complex lives already in place. This technology, extremely advanced for the time, allowed them to populate the entire continent of Australia, first managing to survive in the harsh Ice Age environment and then also managing to adapt to the enormous changes in sea level, climate and vegetation at the end of the Ice Age.

The team of archaeologists working at Madjedbebe rock shelter, in conjunction with Mirarr traditional owners, are finding all sorts of wonderful archaeological material, from which they can deduce a wealth of rich, detailed information about the lives of the earliest people in Australia. We look forward to hearing more from them in the future. Students who are interested, especially those in Years 4, 5 and 6, can read more about these sites and the animals and lives of people in Ice Age Australia in our resources People Reach Australia, Early Australian Sites, Ice Age Animals and the Last Ice Age, which are covered in Units 4.1, 5.1 and 6.1.

Planet DebianBenjamin Mako Hill: Testing Our Theories About “Eternal September”

Graph of subscribers and moderators over time in /r/NoSleep. The image is taken from our 2016 CHI paper.

Last year at CHI 2016, my research group published a qualitative study examining the effects of a large influx of newcomers to the /r/nosleep online community in Reddit. Our study began with the observation that most research on sustained waves of newcomers focuses on the destructive effect of newcomers and frequently invokes Usenet’s infamous “Eternal September.” Our qualitative study argued that the /r/nosleep community managed its surge of newcomers gracefully through strategic preparation by moderators, technological systems to rein in norm violations, and a shared sense of protecting the community’s immersive environment among participants.

We are thrilled that, less than a year after the publication of our study, Zhiyuan “Jerry” Lin and a group of researchers at Stanford have published a quantitative test of our study’s findings! Lin analyzed 45 million comments and upvote patterns from 10 Reddit communities that experienced a massive inundation of newcomers like the one we studied on /r/nosleep. Lin’s group found that these communities retained their quality despite a slight dip during the initial growth period.

Our team discussed doing a quantitative study like Lin’s at some length, and our paper ends with a lament that our findings merely reflected “propositions for testing in future work.” Lin’s study provides exactly such a test! Lin et al.’s results suggest that our qualitative findings generalize and that a sustained influx of newcomers need not doom a community to a descent into an “Eternal September.” Through strong moderation and the use of a voting system, the subreddits analyzed by Lin appear to retain their identities despite the surge of new users.

There are always limits to research projects, quantitative and qualitative alike. We think Lin’s paper complements ours beautifully, we are excited that Lin built on our work, and we’re thrilled that our propositions seem to have held up!

This blog post was written with Charlie Kiene. Our paper about /r/nosleep, written with Charlie Kiene and Andrés Monroy-Hernández, was published in the Proceedings of CHI 2016 and is released as open access. Lin’s paper was published in the Proceedings of ICWSM 2017 and is also available online.

TED10 books from TEDWomen for your summer reading list — and beyond

There’s no doubt that the speakers we invite to TEDWomen each year have amazing stories to tell. And many of them are published authors (or about to be!) whose work is worth exploring beyond their brief moments in the TED spotlight. So, if you’re looking for some inspiring, instructive and provocative books to add to your summer reading list, these recent books from 2016 TEDWomen speakers are worthy additions.

1. Beyond Respectability: The Intellectual Thought of Race Women by Brittney Cooper

Brittney Cooper wowed us at TEDWomen with her presentation on the racial politics of time. And in her new book, Beyond Respectability: The Intellectual Thought of Race Women, released in May, she doesn’t disappoint. Brittney says she got started studying black women intellectuals in graduate school. Although she learned a lot about the histories of black male intellectuals as an undergrad at Howard University, she “somehow managed not to learn anything about” the storied history of black women intellectuals in her four years there.

In her book, Brittney looks at the far-reaching intellectual achievements of female thinkers and activists like Ida B. Wells, Anna Julia Cooper, Mary Church Terrell, Fannie Barrier Williams, Pauli Murray and Toni Cade Bambara. NPR’s Genevieve Valentine writes that Brittney’s book is “a work of crucial cultural study … [that] lays out the complicated history of black woman as intellectual force, making clear how much work she has done simply to bring that category into existence.”

2. South of Forgiveness by Thordis Elva and Tom Stranger

One of the most intensely personal talks in San Francisco came from Thordis Elva and Tom Stranger. In 1996, 16-year-old Thordis shared a teenage romance with Tom, an exchange student from Australia. After a school dance, Tom raped Thordis. They didn’t speak for many years. Then, in her twenties, Thordis wrote to Tom, wanting to talk about what he did to her, and remarkably, he responded. For the first time, in front of the TEDWomen audience, Thordis and Tom talked openly about what happened and why she wanted to talk to him, and he to her.

South of Forgiveness: A True Story of Rape and Responsibility is a profoundly moving, open-chested and critical book. It is an exploration into sexual violence and self-knowledge that shines a healing light into the shrouded corners of our universal humanity. There is a disarming power in these pages that has the potential to change our language, shift our divisions, and invite us to be brave in discussing this pressing, global issue.

3. Girls & Sex by Peggy Orenstein

In a TED Talk that has already been viewed over 1.5 million times, author and journalist Peggy Orenstein shared some of the things she learned about young girls and how they think about sex while researching her 2016 book, Girls & Sex: Navigating the Complicated New Landscape. In it, she explores the changing landscape of modern sexual expectations and its troubling impact on adolescents and especially young women. If you’re the parent of a young girl (or boy), it’s a must-read for understanding the “hidden truths, hard lessons, and important possibilities of girls’ sex lives in the modern world.”

4. Born Bright by C. Nicole Mason

At TEDWomen, C. Nicole Mason talked about what happens when we disrupt the path that society has paved for us based on where we were born, stereotypes and stigma. In her memoir, Born Bright: A Young Girl’s Journey from Nothing to Something in America, Nicole talks about how she did it in her own life, chronicling her own path out of poverty. In a beautifully written book, she examines “the conditions that make it nearly impossible to escape” and her own struggles with feeling like an outsider in academia and professional settings because of the way she talked, dressed and wore her hair.

5. The Gutsy Girl by Caroline Paul

Caroline Paul has a pretty amazing backstory. Once a young self-described “scaredy-cat,” Caroline grew up to fly planes, raft rivers, climb mountains, and fight fires. That’s right, she was one of the first women to work for the San Francisco Fire Department — a job that inspired her first work of nonfiction, Fighting Fire. In her most recent book, The Gutsy Girl: Escapades for Your Life of Epic Adventure, she expands on some of the stories she shared in her TED Talk, writing about “her greatest escapades — as well as those of other girls and women from throughout history.”

6. Marrow: A Love Story by Elizabeth Lesser

In a beautiful and surprisingly funny talk about strained family relationships and the death of a loved one, Elizabeth Lesser described the healing process of putting aside pride and defensiveness to make way for honest communication. “You don’t have to wait for a life-or-death situation to clean up the relationships that matter to you,” she says. “Be like a new kind of first responder … the one to take the first courageous step toward the other.”

In her courageous memoir, Marrow: A Love Story, the bestselling author of Broken Open shares the full story of her sister Maggie’s cancer and the difficult conversations they had during her illness as they healed their imperfect relationship and learned to love each other’s true selves.

7. I Know How She Does It by Laura Vanderkam

The theme of last year’s TEDWomen, as many of you will recall, was Time — all of us wrestle with how to be more productive, more engaged, more informed, to use our time wisely and well, to be more fully present in our lives. Writer and author Laura Vanderkam tackled the practical aspects of time management in her TED Talk. There are 168 hours in each week. How do we find time for what matters most?

In her book I Know How She Does It, Laura explains how successful women make the most of their time. With research, hard data and a lot of analysis, Laura “offers a framework for anyone who wants to thrive at work and life.”

8. Always Another Country by Sisonke Msimang

In her work, South African writer and activist Sisonke Msimang untangles the threads of race, class and gender that run through the fabric of African and global culture. In her popular TED Talk, she addressed the power of stories to promote change in our world and their “limitations, particularly for those of us who are interested in social justice.”

I am so pleased to report that after a very competitive bidding war, Sisonke will be publishing her first book, to be titled Always Another Country, in October.  The book, a memoir, will cover “her childhood in exile in Zambia and Kenya, her young adulthood and student years in North America and her return to South Africa during the euphoria of the 1990s.” I am so looking forward to reading her book and so should you.

9. When They Call You a Terrorist by Patrisse Cullors

Patrisse Cullors, one of the three co-founders of Black Lives Matter, is also working on a memoir due out in January 2018 titled When They Call You a Terrorist. Activist Eve Ensler writes that Patrisse “is a leading visionary and activist, feminist, civil rights leader who has literally changed the trajectory of politics and resistance in America.” Co-written with asha bandele, the memoir will recount the founding of the movement and serve as a reminder “that protest in the interest of the most vulnerable comes from love.”

10. On Intersectionality: Essential Writings by Kimberlé Crenshaw

Civil rights advocate Kimberlé Crenshaw had the TEDWomen audience on their feet during her passionate talk dissecting intersectionality, a term she coined 20 years ago that describes the double bind faced by victims of simultaneous racial and gender prejudice. “What do you call being impacted by multiple forces and then abandoned to fend for yourself?” she asked the audience. “Intersectionality seemed to do it for me.”

In a new collection of her writing, titled On Intersectionality: Essential Writings, due to be released next year, “readers will find the key essays and articles that have defined the concept of intersectionality and made Crenshaw a legal superstar.” Don’t miss it.

TEDWomen 2017

I also want to mention that registration for TEDWomen 2017 is open, so if you haven’t registered yet, please click this link and apply today — space is limited. This year, TEDWomen will be held November 1–3 in New Orleans. The theme is Bridges: We build them, we cross them, and sometimes we even burn them. We’ll explore the many aspects of this year’s theme through curated TED Talks, community dinners and activities.

Join us!
– Pat

Featured image: Reading a book at the beach (Simon Cocks, Flickr CC 2.0)


,

Cory DoctorowRudy Rucker on Walkaway



Walkaway is my first novel for adults since 2009 and I had extremely high hopes (and not a little anxiety) for it as it entered the world, back in April. Since then, I’ve been gratified by the kind words of many of my literary heroes, from William Gibson to Bruce Sterling to the kind cover quotes from Edward Snowden, Neal Stephenson and Kim Stanley Robinson.


Today I got a most welcome treat on those lines: a review by Rudy Rucker, lavishly illustrated with some of his excellent photos. Rucker really got the novel, got excited about the parts that excited me, and you can’t really ask for better than that.

“I’m groundhog daying again, aren’t I?”

Who’s saying this? It’s the character Dis. Her body is dead, but before she died, they managed (thanks to Dis’s work) to copy or transfer the brain processes into the cloud, that is, into a network of computers. And she can run as a sim in there. And she’s having trouble getting her sim to stabilize. It keeps freaking out and crashing. And each time she restarts the character Iceweasel sits there talking to the computer sim, trying to mellow it out, and Dis will realize she’s been rebooted, or restarted like Bill Murray in that towering cinematic SF masterpiece Groundhog Day. And Cory has the antic wit to make that verb.

The first half of the book is kind of a standard good young people against evil corporate rich people thing. But then, when Dis is talking about groundhog dayhing, it kicks into another gear. Cory pulls out a different stop on the mighty SF Wurlitzer organ: the software immortality trope. As I’m fond of saying, in my 1980 novel Software, I became one of the very first authors to write about the by-now-familiar notion of the mind as software. That is, your mind is in some sense like software running on your physical body. If we could create a sufficiently rich and flexible computer, the computer might be able to emulate a person.

There’s been a zillion movies, TV shows, SF stories and novels using this idea since then. What I liked so much about Walkaway is that Cory finds a way to make this (still fairly fantastic and unlikely) idea seem real and new.

Cory Doctorow’s WALKAWAY [Rudy Rucker]

LongNowInterview: Alexander Rose and Phil Libin on Long-Term Thinking

Long Now Executive Director Alexander Rose and former Evernote CEO Phil Libin recently spoke with the design agency Dialogue about the layers of civilization, the future of products, and the Clock of the Long Now.

The interview is wide-ranging, covering everything from the early tech, design and science fiction influences in Rose and Libin’s childhoods to how Long Now’s pace layers theory helps reconcile the tension between long-term planning and Silicon Valley’s fast-paced approach to entrepreneurship and product innovation.

The interview also provides a look at a little-known chapter in Long Now’s history, namely, how Alexander Rose left a career in video games and virtual world design after hearing about The Clock Project:

Stewart told me about The Clock Project. Back then the project was just a conversation between Danny Hillis, Brian Eno, and Stewart, but I just couldn’t get it out of my head when I heard about it. By strange luck, there was a Board meeting a week after where I met Danny for the first time. It was then that he told me he had a funder for the first prototype of the Clock and asked if I wanted to help build it. I immediately said, “Yes, this is what I want to do. I don’t want to work on video games anymore.”

Read Dialogue’s interview with Alexander Rose and Phil Libin in full (LINK).

Watch Stewart Brand and Long Now board member Paul Saffo discuss the Pace Layers of Civilization in a 02015 Conversation at The Interval (LINK).

Krebs on SecurityTrump Hotels Hit By 3rd Card Breach in 2 Years

Maybe some of you missed this amid all the breach news recently (I know I did), but Trump International Hotels Management LLC last week announced its third credit-card data breach in the past two years. I thought it might be useful to see these events plotted on a timeline, because it suggests that virtually anyone who used a credit card at a Trump property in the past two years likely has had their card data stolen and put on sale in the cybercrime underground as a result.

On May 2, 2017, KrebsOnSecurity broke the story that travel industry giant Sabre Corp. experienced a significant breach of its payment and customer data tied to bookings processed through a reservations system that serves more than 32,000 hotels and other lodging establishments. Last week, Trump International Hotels disclosed that the Sabre breach impacted at least 13 Trump Hotel properties between August 2016 and March 2017. Trump Hotels said it was first notified of the breach on June 5.

A timeline of Trump Hotels’ credit card woes over the past two years. Click to enlarge.

According to Verizon‘s latest annual Data Breach Investigations Report (DBIR), malware attacks on point-of-sale systems used at front desk and hotel restaurant systems “are absolutely rampant” in the hospitality sector. Accommodation was the top industry for point-of-sale intrusions in this year’s data, with 87% of breaches within that pattern.

Other hotel chains that disclosed this past week getting hit in the Sabre breach include 11 Hard Rock properties (another chain hit by multiple card breach incidents); Four Seasons Hotels and Resorts; and at least two dozen Loews Hotels in the United States and Canada.

ANALYSIS/RANT

Given its abysmal record of failing to protect customer card data, you might think the hospitality industry would be anxious to assuage guests who may already be concerned that handing over their card at the hotel check-in desk also means consigning that card to cybercrooks (e.g. at underground carding shops like Trumps Dumps).

However, so far this year I’ve been hard-pressed to find any of the major hotel chains that accept more secure chip-based cards, which are designed to make card data stolen by point-of-sale malware and skimmers much more difficult to turn into counterfeit cards. I travel quite a bit — at least twice a month — and I have yet to experience a single U.S.-based hotel in the past year asking me to dip my chip-based card as opposed to swiping it.

A carding shop that sells stolen credit cards and invokes 45’s likeness and name. No word yet on whether this cybercriminal store actually sold any cards stolen from Trump Hotel properties.

True, chip cards alone aren’t going to solve the whole problem. Hotels and other merchants that implement the ability to process chip cards still need to ensure the data is encrypted at every step of the transaction (known as “point-to-point” or “end-to-end” encryption). Investing in technology like tokenization — which allows merchants to store a code that represents the customer’s card data instead of the card data itself — also can help companies become less of a target.
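To make the tokenization idea concrete, here is a toy sketch — hypothetical code, not any vendor's product; real deployments keep the vault in a separate, hardened service rather than an in-process map:

const crypto = require('crypto');

// The vault is the only place the real card number lives.
const vault = new Map();

function tokenize(cardNumber) {
  const token = crypto.randomBytes(16).toString('hex'); // random, meaningless on its own
  vault.set(token, cardNumber);
  return token; // the merchant stores only this
}

function detokenize(token) {
  return vault.get(token); // resolvable only by the vault
}

A token stolen from the merchant is worthless without access to the vault, which is exactly why tokenization makes merchants less attractive targets.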

Maybe it wouldn’t be so irksome if those of us concerned about security or annoyed enough at getting our cards replaced three or four times a year due to fraud could stay at a major hotel chain in the United States and simply pay with cash. But alas, we’re talking about an industry that essentially requires customers to pay by credit card.

Well, at least I’ll continue to accrue reward points on my credit card that I can use toward future rounds of Russian roulette with the hotel’s credit card systems.

It’s bad enough that cities and states routinely levy huge taxes on lodging establishments (the idea being the tax is disproportionately paid by people who don’t vote or live in the area); now we have the industry-wide “carder tax” conveniently added to every stay.

What’s the carder tax you ask? It’s the sense of dread and the incredulous “really?” that wells up when one watches his chip card being swiped yet again at the check-out counter.

It’s the time wasted on the phone with your bank trying to sort out whether you really made all those fraudulent purchases, and then having to enter your new card number at all those sites and services where the old one was stored. It’s that awkward moment when the waiter says in front of your date or guests that your card has been declined.

If you’re brave enough to pay for everything with a debit card (bad idea), it may be the time you spend without access to cash while your bank sorts things out. It may be the aggravation of dealing with bounced checks as a result of the fraud.

I can recall a recent stay wherein right next to the credit card machine at the hotel’s front desk was a stack of various daily newspapers, one of which had a very visible headline warning of an ongoing credit card breach at the same hotel that was getting ready to swipe my card yet again (by the way, I’m still kicking myself for not snapping a selfie right then).

After I checked out of that particular hotel, I descended to the parking garage to retrieve a rental car. The garage displayed large signs everywhere warning customers that the property was not responsible for any damage or thefts that may be inflicted on vehicles parked there. I recall thinking at the time that this same hotel probably should have been required to display a similar sign over their credit card machines (actually, they all should).

“The privacy and protection of our guests’ information is a matter we take very seriously.” This is from boilerplate text found in both the Trump Hotels and Loews Hotel statements. It sounds nice. Too bad it’s all hogwash. Once again, the timeline above speaks far more about the hospitality industry’s attitudes on credit card security than any platitudes offered in these all-too-common breach notifications.

Further reading:

Banks: Card Breach at Trump Hotel Properties
Trump Hotel Collection Confirms Card Breach
Sources: Trump Hotels Breached Again
Trump Hotels Settles Over Data Breach: To Pay $50,000 for 70,000 Stolen Cards
Breach at Sabre Corp.’s Hospitality Unit

CryptogramPassword Masking

Slashdot asks if password masking -- replacing password characters with asterisks as you type them -- is on the way out. I don't know if that's true, but I would be happy to see it go. Shoulder surfing, the threat it defends against, is largely nonexistent. And it is becoming harder to type in passwords on small screens and annoying interfaces. The IoT will only exacerbate this problem, and when passwords are harder to type in, users choose weaker ones.

Planet DebianDirk Eddelbuettel: RcppAPT 0.0.4

A new version of RcppAPT -- our interface from R to the C++ library behind the awesome apt, apt-get, apt-cache, ... commands and their cache powering Debian, Ubuntu and the like -- arrived on CRAN yesterday.

We added a few more functions in order to compute on the package graph. A concrete example is shown in this vignette which determines the (minimal) set of remaining Debian packages requiring a rebuild under R 3.4.* to update their .C() and .Fortran() registration code. It has been used for the binNMU request #868558.
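For readers who want to poke at the same Debian package graph without R, here is a rough sketch of equivalent queries using python-apt; this is not RcppAPT's API, the package name is an arbitrary example, and the full-cache scan is deliberately naive:

import apt

cache = apt.Cache()
target = "r-base-core"

# Forward dependencies (cf. getDepends) of the candidate version.
for dep in cache[target].candidate.dependencies:
    print(" | ".join(base.name for base in dep.or_dependencies))

# Reverse dependencies (cf. reverseDepends): walk every package in the cache.
rdeps = sorted(
    pkg.name
    for pkg in cache
    if pkg.candidate is not None
    and any(base.name == target
            for dep in pkg.candidate.dependencies
            for base in dep.or_dependencies)
)
print(len(rdeps), "packages depend on", target)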

As we also added a NEWS file, its (complete) content covering all releases follows below.

Changes in version 0.0.4 (2017-07-16)

  • New function getDepends

  • New function reverseDepends

  • Added package registration code

  • Added usage examples in scripts directory

  • Added vignette, also in docs as rendered copy

Changes in version 0.0.3 (2016-12-07)

  • Added dumpPackages, showSrc

Changes in version 0.0.2 (2016-04-04)

  • Added reverseDepends, dumpPackages, showSrc

Changes in version 0.0.1 (2015-02-20)

  • Initial version with getPackages and hasPackages

A bit more information about the package is available here as well as as the GitHub repo.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Worse Than FailureCodeSOD: A Pre-Packaged Date

Microsoft’s SQL Server Integration Services is an ETL tool that attempts to mix visual programming (for designing data flows) with the reality that at some point, you’re just going to need to write some code. Your typical SSIS package starts as a straightforward process that quickly turns into a sprawling mix of spaghetti-fied .NET code, T-SQL stored procedures, and developer tears.

TJ L. inherited an SSIS package. This particular package contained a step where a C# sub-module needed to pass a date (but not a date-time) to the database. Now, this could be done easily by using C#’s date-handling objects, or even in the database by simply using the DATE type, instead of the DATETIME type.

Instead, TJ’s predecessor took this route:

CREATE PROC [dbo].[SetAsOfDate]
        @Date datetime = NULL
AS
SELECT @Date = CASE WHEN YEAR(@DATE) < 1950 THEN GETDATE()
                                        WHEN @Date IS NULL THEN GETDATE()
                                        ELSE @Date
                                END;

SELECT CAST(FLOOR(CAST(@Date AS FLOAT)) AS DATETIME) AS CurrentDate

The good about this code is that it checks its input parameters. That’s defensive programming. The ugly is the less-than-1950 check, which I can only assume is a relic of some Y2K bugfixes. The bad is the `CAST(FLOOR(CAST(@Date AS FLOAT)) AS DATETIME)` expression.
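For contrast, the straightforward truncation the paragraph above alludes to is a one-liner in most environments (casting to DATE in T-SQL, or DateTime.Date in C#); here is a minimal sketch of the same guard-plus-truncate logic in Python:

from datetime import datetime

def set_as_of_date(d=None):
    # Same guard as the stored procedure: NULL or pre-1950 dates
    # fall back to the current date and time.
    if d is None or d.year < 1950:
        d = datetime.now()
    # Dropping the time component directly replaces the whole
    # CAST(FLOOR(CAST(@Date AS FLOAT)) AS DATETIME) dance.
    return d.date()

print(set_as_of_date(datetime(2017, 7, 16, 14, 30, 5)))  # 2017-07-16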

[Advertisement] Otter, ProGet, BuildMaster – robust, powerful, scalable, and reliable additions to your existing DevOps toolchain.

Planet DebianLars Wirzenius: Dropping Yakking from Planet Debian

A couple of people objected to having Yakking on Planet Debian, so I've removed it.

,

Harald WelteVirtual Um interface between OsmoBTS and OsmocomBB

During the last couple of days, I've been working on completing, cleaning up and merging a Virtual Um interface (i.e. virtual radio layer) between OsmoBTS and OsmocomBB. I started the implementation and left it at an early stage in January 2016; Sebastian Stumpf then completed it around early 2017, and I have now added some subsequent fixes and improvements. The combined result allows us to run a complete GSM network with 1-N BTSs and 1-M MSs without any actual radio hardware, which is of course excellent for all kinds of testing scenarios.

The Virtual Um layer is based on sending L2 frames (blocks) encapsulated via GSMTAP UDP multicast packets. There are two separate multicast groups, one for uplink and one for downlink. The multicast nature simulates the shared medium and enables any simulated phone to receive the signal from multiple BTSs via the downlink multicast group.
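As a minimal sketch of that shared-medium trick, here is a downlink receiver in Python; the group address is an invented example (the post does not specify one), though 4729 is the standard GSMTAP UDP port:

import socket
import struct

GROUP = "239.193.23.1"  # invented downlink multicast group, not from the post
PORT = 4729             # standard GSMTAP UDP port

# Join the downlink group, as every simulated phone would.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
mreq = struct.pack("4sl", socket.inet_aton(GROUP), socket.INADDR_ANY)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

while True:
    frame, addr = sock.recvfrom(4096)
    # Each datagram carries one GSMTAP-encapsulated L2 block; every member
    # of the group sees it, which models the shared radio medium.
    print(len(frame), "bytes from", addr)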

[Diagram: /images/osmocom-virtum.png]

In OsmoBTS, this is implemented via the new osmo-bts-virtual BTS model.

In OsmocomBB, this is realized by adding virtphy, a virtual L1 which speaks the same L1CTL protocol that is used between the real OsmocomBB Layer1 and the Layer2/3 programs such as mobile and the like.

Now many people would argue that GSM without the radio and actual handsets is no fun. I tend to agree, as I'm a hardware person at heart and I am not a big fan of simulation.

Nevertheless, this forms the basis for all kinds of automated (regression) testing, in ways and at layers/interfaces that osmo-gsm-tester cannot cover, as it uses a black-box proprietary mobile phone (modem). It is also pretty useful if you're traveling a lot and don't want to carry around a BTS and phones all the time, or want to get some development done in airplanes or other places where operating a radio transmitter is not really a (viable) option.

If you're curious and want to give it a shot, I've put together some setup instructions at the Virtual Um page of the Osmocom Wiki.

Planet DebianDaniel Silverstone: Yay, finished my degree at last

A little while back, in June, I sat my last exam for what I hoped would be the last module in my degree. For seven years, I've been working on a degree with the Open University and have been taking advantage of the opportunity to have a somewhat self-directed course load by taking the 'Open' degree track. When asked why I bothered to do this, I guess my answer has been a little varied. In principle it's because I felt like I'd already done a year's worth of degree and didn't want it wasted, but it's also because I have been, in the dim and distant past, overlooked for jobs simply because I had no degree and thus was an easy "bin the CV".

Fed up with this, I decided to commit to the Open University and thus began my journey toward 'qualification' in 2010. I started by transferring the level 1 credits from my stint at UCL back in 1998/1999 which were in a combination of basic programming in Java, some mathematics including things like RSA, and some psychology and AI courses which at the time were aiming at a degree called 'Computer Science with Cognitive Sciences'.

Then I took level 2 courses, M263 (Building blocks of software), TA212 (The technology of music) and MS221 (Exploring mathematics). I really enjoyed the mathematics course and so...

At level 3 I took MT365 (Graphs, networks and design), M362 (Developing concurrent distributed systems), TM351 (Data management and analysis - which I ended up hating), and finally finishing this June with TM355 (Communications technology).

I received an email this evening telling me the module result for TM355 had been posted, and I logged in to find I had done well enough to be offered my degree. I could have claimed my degree 18+ months ago, but I persevered through another two courses in order to qualify for an honours degree, which I have now been awarded. Since I don't particularly fancy any ceremonial awarding, I just went through the clicky clicky and accepted my qualification of 'Bachelor of Science (Honours) Open, Upper Second-class Honours (2.1)', which grants me the letters 'BSc (Hons) Open (Open)' which, knowing me, will likely never even make it onto my CV because I'm too lazy.

It has been a significant effort, over the course of the past few years, to complete a degree without giving up too much of my personal commitments. In addition to earning the degree, I have worked, for six of the seven years it has taken, for Codethink doing interesting work in and around Linux systems and Trustable software. I have designed and built Git server software which is in use in some universities, and many companies, along with a good few of my F/LOSS colleagues. And I've still managed to find time to attend plays, watch films, read an average of 2 novel-length stories a week (some of which were even real books), and be a member of the Manchester Hackspace.

Right now, I'm looking forward to a stress free couple of weeks, followed by an immense amount of fun at Debconf17 in Montréal!

Krebs on SecurityExperts in Lather Over ‘gSOAP’ Security Flaw

Axis Communications — a maker of high-end security cameras whose devices can be found in many high-security areas — recently patched a dangerous coding flaw in virtually all of its products that an attacker could use to remotely seize control over or crash the devices.

The problem wasn’t specific to Axis, which seems to have reacted far more quickly than competitors to quash the bug. Rather, the vulnerability resides in open-source, third-party computer code that has been used in countless products and technologies (including a great many security cameras), meaning it may be some time before most vulnerable vendors ship out a fix — and even longer before users install it.

At issue is a flaw in a bundle of reusable code (often called a “code library“) known as gSOAP, a widely-used toolkit that software or device makers can use so that their creations can talk to the Internet (or “parse XML” for my geek readers). By some estimates, there are hundreds — if not thousands — of security camera types and other so-called “Internet of Things”(IoT) devices that rely upon the vulnerable gSOAP code.

By exploiting the bug, an attacker could force a vulnerable device to run malicious code, block the owner from viewing any video footage, or crash the system. Basically, lots of stuff you don’t want your pricey security camera system to be doing.

Genivia, the company that maintains gSOAP, released an update on June 21, 2017 that fixes the flaw. In short order, Axis released a patch to plug the gSOAP hole in nearly 250 of its products.

Genivia chief executive Robert Van Engelen said his company has already reached out to all of its customers about the issue. He said a majority of customers use the gSOAP software to develop products, but that mostly these are client-side applications or non-server applications that are not affected by this software crash issue.

“It’s a crash, not an exploit as far as we know,” Van Engelen said. “I estimate that over 85% of the applications are unlikely to be affected by this crash issue.”

Still, there are almost certainly dozens of other companies that use the vulnerable gSOAP code library and haven’t (or won’t) issue updates to fix this flaw, says Stephen Ridley, chief technology officer and founder of Senrio — the security company that discovered and reported the bug. What’s more, because the vulnerable code is embedded within device firmware (the built-in software that powers hardware), there is no easy way for end users to tell if the firmware is affected without word one way or the other from the device maker.

“It is likely that tens of millions of products — software products and connected devices — are affected by this,” Ridley said.

“Genivia claims to have more than 1 million downloads of gSOAP (most likely developers), and IBM, Microsoft, Adobe and Xerox as customers,” the Senrio report reads. “On Sourceforge, gSOAP was downloaded more than 1,000 times in one week, and 30,000 times in 2017. Once gSOAP is downloaded and added to a company’s repository, it’s likely used many times for different product lines.”

Anyone familiar with the stories published on this blog over the past year knows that most IoT devices — security cameras in particular — do not have a stellar history of shipping in a default-secure state (heck, many of these devices are running versions of Linux that date back more than a decade). Left connected to the Internet in an insecure state, these devices can quickly be infected with IoT threats like Mirai, which enslave them for use in high-impact denial-of-service attacks designed to knock people and Web sites offline.

When I heard about this bug I pinged the folks over at IPVM, a trade publication that tracks the video surveillance industry. IPVM Business Analyst Brian Karas said the type of flaw (known as a buffer overflow) in this case doesn’t expose the vulnerable systems to IoT worms like Mirai, which can spread to devices that are running under factory-default usernames and passwords.

IPVM polled almost a dozen top security camera makers, and said only two (including Axis) responded that they used the vulnerable gSOAP library in their products. Another three said they hadn’t yet determined whether any of their products were potentially vulnerable.

“You probably wouldn’t be able to make a universal, Mirai-style exploit for this flaw because it lacks the elements of simplicity and reproduceability,” Karas said, noting that the exploit requires that an attacker be able to upload at least a 2 GB file to the Web interface for a vulnerable device.

“In my experience, I don’t think it’s that common for embedded systems to accept a 2-gigabyte file upload,” Karas said. “Every device is going to respond slightly differently, and it would probably take a lot of time to research each device and put together some kind of universal attack tool. Yes, people should be aware of this and patch if they can, but this is nowhere near as bad as [the threat from] Mirai.”

Karas said that, as with most other cybersecurity vulnerabilities in network devices, restricting network access to the unit will greatly reduce the chance of exploitation.

“Cameras utilizing a VMS (video management system) or recorder for remote access, instead of being directly connected to the internet, are essentially immune from remote attack (though it is possible for the VMS itself to have vulnerabilities),” IPVM wrote in an analysis of the gSOAP bug. In addition, changing the factory default settings (e.g., picking decent administrator passwords) and updating the firmware on the devices to the latest version may go a long way toward sidestepping any vulnerabilities.

TEDSneak preview lineup unveiled for Africa’s next TED Conference

On August 27, an extraordinary group of people will gather in Arusha, Tanzania, for TEDGlobal 2017, a four-day TED Conference for “those with a genuine interest in the betterment of the continent,” says curator Emeka Okafor.

As Okafor puts it: “Africa has an opportunity to reframe the future of work, cultural production, entrepreneurship, agribusiness. We are witnessing the emergence of new educational and civic models. But there is, on the flip side, a set of looming challenges that include the youth bulge and under-/unemployment, a food crisis, a risky dependency on commodities, slow industrializations, fledgling and fragile political systems. There is a need for a greater sense of urgency.”

He hopes the speakers at TEDGlobal will catalyze discussion around “the need to recognize and amplify solutions from within Africa and the global diaspora.”

Who are these TED speakers? A group of people with “fresh, unique perspectives in their initiatives, pronouncements and work,” Okafor says. “Doers as well as thinkers — and contrarians in some cases.” The curation team, which includes TED head curator Chris Anderson, went looking for speakers who take “a hands-on approach to solution implementation, with global-level thinking.”

Here’s the first sneak preview — a shortlist of speakers who, taken together, give a sense of the breadth and topics to expect, from tech to the arts to committed activism and leadership. Look for the long list of 35–40 speakers in upcoming weeks.

The TEDGlobal 2017 conference happens August 27–30, 2017, in Arusha, Tanzania. Apply to attend >>

Kamau Gachigi, Maker

“In five to ten years, Kenya will truly have a national innovation system, i.e. a system that by its design audits its population for talented makers and engineers and ensures that their skills become a boon to the economy and society.” — Kamau Gachigi on Engineering for Change

Dr. Kamau Gachigi is the executive director of Gearbox, Kenya’s first open makerspace for rapid prototyping, based in Nairobi. Before establishing Gearbox, Gachigi headed the University of Nairobi’s Science and Technology Park, where he founded a Fab Lab full of manufacturing and prototyping tools in 2009, then built another one at the Riruta Satellite in an impoverished neighborhood in the city. At Gearbox, he empowers Kenya’s next generation of creators to build their visions. @kamaufablab

Mohammed Dewji, Business leader

“My vision is to facilitate the development of a poverty-free Tanzania. A future where the opportunities for Tanzanians are limitless.” — Mohammed Dewji

Mohammed Dewji is a Tanzanian businessman, entrepreneur, philanthropist, and former politician. He serves as the President and CEO of MeTL Group, a Tanzanian conglomerate operating in 11 African countries. The Group operates in areas as diverse as trading, agriculture, manufacturing, energy and petroleum, financial services, mobile telephony, infrastructure and real estate, transport, logistics and distribution. He served as Member of Parliament for Singida-Urban from 2005 until his retirement in 2015. Dewji is also the Founder and Trustee of the Mo Dewji Foundation, focused on health, education and community development across Tanzania. @moodewji

Meron Estefanos, Refugee activist

“Q: What’s a project you would like to move forward at TEDGlobal?
A: Bringing change to Eritrea.” —Meron Estefanos

Meron Estefanos is an Eritrean human rights activist, and the host and presenter of Radio Erena’s weekly program “Voices of Eritrean Refugees,” aired from Paris. Estefanos is executive director of the Eritrean Initiative on Refugee Rights (EIRR), advocating for the rights of Eritrean refugees, victims of trafficking, and victims of torture. Ms Estefanos has been key in identifying victims throughout the world who have been blackmailed to pay ransom for kidnapped family members, and was a key witness in the first trial in Europe to target such blackmailers. She is co-author of Human Trafficking in the Sinai: Refugees between Life and Death and The Human Trafficking Cycle: Sinai and Beyond, and was featured in the film Sound of Torture. She was nominated for the 2014 Raoul Wallenberg Award for her work on human rights and victims of trafficking. @meronina

Touria El Glaoui, Art fair founder

“I’m looking forward to discussing the roles we play as leaders and tributaries in redressing disparities within arts ecosystems. The art fair is one model which has had a direct effect on the ways in which audiences engage with art, and its global outlook has contributed to a highly mobile and dynamic means of interaction.” — Touria El Glaoui

Touria El Glaoui is the founding director of the 1:54 Contemporary African Art Fair, which takes place in London and New York every year and, in 2018, launches in Marrakech. The fair highlights work from artists and galleries across Africa and the diaspora, bringing visibility in global art markets to vital upcoming visions. El Glaoui began her career in the banking industry before founding 1:54 in 2013. Parallel to her career, Touria has organised and co-curated exhibitions of her father’s work, the Moroccan artist Hassan El Glaoui, in London and Morocco. @154artfair

Gus Casely-Hayford, Historian

“Technological, demographic, economic and environmental change are recasting the world profoundly and rapidly. The sentiment that we are traveling through unprecedented times has left many feeling deeply unsettled, but there may well be lessons to learn from history — particularly African history — lessons that show how brilliant leadership and strategic intervention have galvanised and united peoples around inspirational ideas.” — Gus Casely-Hayford

Dr. Gus Casely-Hayford is a curator and cultural historian who writes, lectures and broadcasts widely on African culture. He has presented two series of The Lost Kingdoms of Africa for the BBC and has lectured widely on African art and culture, advising national and international bodies on heritage and culture. He is currently developing a National Portrait Gallery exhibition that will tell the story of abolition of slavery through 18th- and 19th-century portraits — an opportunity to bring many of the most important paintings of black figures together in Britain for the first time.

Oshiorenoya Agabi, Computational neuroscientist

“Koniku eventually aims to build a device that is capable of thinking in the biological sense, like a human being. We think we can do this in the next two to five years.” — Oshiorenoya Agabi on IndieBio.co

With his startup Koniku, Oshiorenoya Agabi is working to integrate biological neurons and silicon computer chips, to build computers that can think like humans can. Faster, cleverer computer chips are key to solving the next big batch of computing problems, like particle detection or sophisticated climate modeling — and to get there, we need to move beyond the limitations of silicon, Agabi believes. Born and raised in Lagos, Nigeria, Agabi is now based in the SF Bay Area, where he and his lab mates are working on the puzzle of connecting silicon to biological systems.

Natsai Audrey Chieza, Design researcher

Photo: Natsai Audrey Chieza

Natsai Audrey Chieza is a design researcher whose fascinating work crosses boundaries between technology, biology, design and cultural studies. She is founder and creative director of Faber Futures, a creative R&D studio that conceptualises, prototypes and evaluates the resilience of biomaterials emerging through the convergence of bio-fabrication, digital fabrication and traditional craft processes. As Resident Designer at the Department of Biochemical Engineering, University College London, she established a design-led microbiology protocol that replaces synthetic pigments with natural dyes excreted by bacteria — producing silk scarves dyed brilliant blues, reds and pinks. The process demands a rethink of the entire system of fashion and textile production — and is also a way to examine issues like resource scarcity, provenance and cultural specificity. @natsaiaudrey

Stay tuned for more amazing speakers, including leaders, creators, and more than a few truth-tellers … learn more >>


Valerie AuroraThe Al Capone theory of sexual harassment

This post was co-authored by Valerie Aurora and Leigh Honeywell and cross-posted on both of our blogs.

Mural of Al Capone, laughing and smoking a cigar (CC BY-SA 2.0, r2hox)

We’re thrilled with the recent trend towards sexual harassment in the tech industry having actual consequences – for the perpetrator, not the target, for a change. We decided it was time to write a post explaining what we’ve been calling “the Al Capone Theory of Sexual Harassment.” (We can’t remember which of us came up with the name, Leigh or Valerie, so we’re taking joint credit for it.) We developed the Al Capone Theory over several years of researching and recording racism and sexism in computer security, open source software, venture capital, and other parts of the tech industry. To explain, we’ll need a brief historical detour – stick with us.

As you may already know, Al Capone was a famous Prohibition-era bootlegger who, among other things, ordered murders to expand his massively successful alcohol smuggling business. The U.S. government was having difficulty prosecuting him for either the murdering or the smuggling, so they instead convicted Capone for failing to pay taxes on the income from his illegal business. This technique is standard today – hence the importance of money-laundering for modern successful criminal enterprises – but at the time it was a novel approach.

The U.S. government recognized a pattern in the Al Capone case: smuggling goods was a crime often paired with failing to pay taxes on the proceeds of the smuggling. We noticed a similar pattern in reports of sexual harassment and assault: often people who engage in sexually predatory behavior also faked expense reports, plagiarized writing, or stole credit for other people’s work. Just three examples: Mark Hurd, the former CEO of HP, was accused of sexual harassment by a contractor, but resigned for falsifying expense reports to cover up the contractor’s unnecessary presence on his business trips. Jacob Appelbaum, the former Tor evangelist, left the Tor Foundation after he was accused of both sexual misconduct and plagiarism. And Randy Komisar, a general partner at venture capital firm KPCB, gave a book of erotic poetry to another partner at the firm, and accepted a board seat (and the credit for a successful IPO) at RPX that would ordinarily have gone to her.

Initially, the connection eluded us: why would the same person who made unwanted sexual advances also fake expense reports, plagiarize, or take credit for other people’s work? We remembered that people who will admit to attempting or committing sexual assault also disproportionately commit other types of violence and that “criminal versatility” is a hallmark of sexual predators. And we noted that taking credit for others’ work is a highly gendered behavior.

Then we realized what the connection was: all of these behaviors are the actions of someone who feels entitled to other people’s property – regardless of whether it’s someone else’s ideas, work, money, or body. Another common factor was the desire to dominate and control other people. In venture capital, you see the same people accused of sexual harassment and assault also doing things like blacklisting founders for objecting to abuse and calling people nasty epithets on stage at conferences. This connection between dominance and sexual harassment also shows up as overt, personal racism (that’s one reason why we track both racism and sexism in venture capital).

So what is the Al Capone theory of sexual harassment? It’s simple: people who engage in sexual harassment or assault are also likely to steal, plagiarize, embezzle, engage in overt racism, or otherwise harm their business. (Of course, sexual harassment and assault harms a business – and even entire fields of endeavor – but in ways that are often discounted or ignored.) Ask around about the person who gets handsy with the receptionist, or makes sex jokes when they get drunk, and you’ll often find out that they also violated the company expense policy, or exaggerated on their résumé, or took credit for a colleague’s project. More than likely, they’ve engaged in sexual misconduct multiple times, and a little research (such as calling previous employers) will show this, as we saw in the case of former Uber and Google employee Amit Singhal.

Organizations that understand the Al Capone theory of sexual harassment have an advantage: they know that reports or rumors of sexual misconduct are a sign they need to investigate for other incidents of misconduct, sexual or otherwise. Sometimes sexual misconduct is hard to verify because a careful perpetrator will make sure there aren’t any additional witnesses or records beyond the target and the target’s memory (although with the increase in use of text messaging in the United States over the past decade, we are seeing more and more cases where victims have substantial written evidence). But one of the implications of the Al Capone theory is that even if an organization can’t prove allegations of sexual misconduct, the allegations themselves are a sign that it should also urgently investigate a wide range of aspects of an employee’s conduct.

Some questions you might ask: Can you verify their previous employment and degrees listed on their résumé? Do their expense reports fall within normal guidelines and include original receipts? Does their previous employer refuse to comment on why they left? When they give references, are there odd patterns of omission? For example, a manager who doesn’t give a single reference from a person who reported to them can be a hint that they have mistreated people they had power over.

Another implication of the Al Capone theory is that organizations should put more energy into screening potential employees or business partners for allegations of sexual misconduct before entering into a business relationship with them, as recently advocated by LinkedIn cofounder and Greylock partner Reid Hoffman. This is where tapping into the existing whisper network of targets of sexual harassment is incredibly valuable. The more marginalized a person is, the more likely they are to be the target of this kind of behavior and to be connected with other people who have experienced this behavior. People of color, queer people, people with working class jobs, disabled people, people with less money, and women are all more likely to know who sends creepy text messages after a business meeting. Being a member of more than one of these groups makes people even more vulnerable to this kind of harassment – we don’t think it was a coincidence that many of the victims of sexual harassment who spoke out last month were women of color.

What about people whose well-intentioned actions are unfairly misinterpreted, or people who make a single mistake and immediately regret it? The Al Capone theory of sexual harassment protects these people, because when the organization investigates their overall behavior, they won’t find a pattern of sexual harassment, plagiarism, or theft. A broad-ranging investigation in this kind of case will find only minor mistakes in expense reports or an ambiguous job title in a resume, not a pervasive pattern of deliberate deception, theft, or abuse. To be perfectly clear, it is possible for someone to sexually harass someone without engaging in other types of misconduct. In the absence of clear evidence, we always recommend erring on the side of believing accusers who have less power or privilege than the people they are accusing, to counteract the common unconscious bias against believing those with less structural power and to take into account the enormous risk of retaliation against the accuser.

Some people ask whether the Al Capone theory of sexual harassment will subject men to unfair scrutiny. It’s true, the majority of sexual harassment is committed by men. However, people of all genders commit sexual harassment. We personally know of two women who have sexually touched other people without consent at tech-related events, and we personally took action to stop these women from abusing other people. At the same time, abuse more often occurs when the abuser has more power than the target – and that imbalance of power is often the result of systemic oppression such as racism, sexism, cissexism, or heterosexism. That’s at least one reason why a typical sexual harasser is more likely to be one or all of straight, white, cis, or male.

What does the Al Capone theory of sexual harassment mean if you are a venture capitalist or a limited partner in a venture fund? Your first priority should be to carefully vet potential business partners for a history of unethical behavior, whether it is sexual misconduct, lying about qualifications, plagiarism, or financial misdeeds. If you find any hint of sexual misconduct, take the allegations seriously and step up your investigation into related kinds of misconduct (plagiarism, lying on expense reports, embezzlement) as well as other incidents of sexual misconduct.

Because sexual harassers sometimes go to great lengths to hide their behavior, you almost certainly need to expand your professional network to include more people who are likely to be targets of sexual harassment by your colleagues – and gain their trust. If you aren’t already tapped into this crucial network, here are some things you can do to get more access:

These are all aspects of ally skills – concrete actions that people with more power and privilege can take to support people who have less.

Finally, we’ve seen a bunch of VCs pledging to donate the profits of their investments in funds run by accused sexual harassers to charities supporting women in tech. We will echo many other women entrepreneurs and say: don’t donate that money, invest it in women-led ventures – especially those led by women of color.


Tagged: ally skills, feminism, tech

TEDThe TED2018 Fellows application is open. Apply now!

TED is looking for early-career, visionary thinkers from around the world to join the Fellows program at the upcoming TED2018 conference in Vancouver, British Columbia.

Do you have an original approach to your work that’s worth sharing with the world? Are you working to uplift and empower your local community through innovative science, art or entrepreneurship? Are you ready to take full advantage of the TED platform and the support of a dynamic global community of innovators? If yes, you should apply to be a TED Fellow.

TED Fellows are a multidisciplinary group of remarkable individuals who are chosen through an open and rigorous application process. For each TED conference, we select a class of 20 Fellows based on their exceptional achievement and an innovative approach to tackling the world’s toughest problems, as well as on their character, grit and collaborative spirit.

Apply by September 10 at go.ted.com/tedfellowsapply.

TED2018 — themed “The Age of Amazement” — will take a deep-dive into the key developments driving our future, from jaw-dropping AI to glorious new forms of creativity to courageous advocates of radical social change. If selected, you will attend the TED2018 conference and participate in a Fellows-only pre-conference designed especially to inspire, empower and support your work. Fellows also deliver a TED Talk at the conference, filmed and considered for publication on TED.com.  

The TED Fellows program is designed to catapult your career through transformational support like coaching and mentorship, public relations guidance for sharing your latest projects, hands-on speaker training — and, most importantly, access to the vibrant global network of more than 400 Fellows from over 90 countries.

The online application includes general biographical questions, short essays on your work and three references. Only those aged 18 and older can apply. If selected, Fellows must reserve April 10 – April 15, 2018 on their calendars for the TED2018 conference in Vancouver, British Columbia.

Think you have what it takes to be a TED Fellow? Apply now.

More information
Questions?: ted.com/participate/ted-fellows-program
Visit: ted.com/fellows
Follow: @TEDFellow
Like: facebook.com/TEDFellow
Read: fellowsblog.ted.com


CryptogramMany of My E-Books for Cheap

Humble Bundle is selling a bunch of cybersecurity books very cheaply. You can get copies of Applied Cryptography, Secrets and Lies, and Cryptography Engineering -- and also Ross Anderson's Security Engineering, Adam Shostack's Threat Modeling, and many others.

This is the cheapest you'll ever see these books. And they're all DRM-free.

Worse Than FailureThe Little Red Button

Bryan T. had worked for decades to amass the skills, expertise and experience to be a true architect, but never quite made the leap. Finally, he got a huge opportunity in the form of an interview with a Silicon Valley semi-conductor firm project manager who was looking for a consultant to do just that. The discussions revolved around an application that three developers couldn't get functioning correctly in six months, and Bryan was to be the man to rein it in and make it work; he was lured with the promise of having complete control of the software.

The ZF-1 pod weapon system from the Fifth Element

Upon starting and spelunking through the code-base, Bryan discovered the degree of total failure that caused them to yield complete control to him. It was your typical hodgepodge of code slapped together with anti-patterns, snippets of patterns claiming to be the real deal, and the usual Assortment-o-WTF™ we've all come to expect.

Once he recognized the futility of attempting to fix this mess, Bryan scrapped it and rewrote it as a full-blown modular and compositional application, utilizing MVVM, DDD, SOA, pub/sub; the works. Within three weeks, he had it back to the point it was when he started, only his version actually worked.

He had righted the sinking ship, but the project became so successful that the project team started managing it again, which proved to be its undoing.

Given the sudden success of the project, the department head committed the application to all the divisions company wide within three quarters - without informing Bryan or anyone else on the team. After all, it's not like developers need to plan for code and resource scalability issues beyond the original design requirements or anything.

We've read countless stories about how difficult it is to work with things like dates and even booleans, but buttons are pretty much solidly understood. Some combination of text, text+image or just image, and an onAction callback pretty much covers it. Oh sure, you can set fg/bg colors and the font, but that's usually to just give visual clues. Unfortunately, buttons would be the beginning of a downward spiral so steep, that sheer inertia would derail the project.

The project manager decided that images were incredibly confusing, so all buttons should have text instead of icons. Bryan had created several toolbars (because ribbons were shot down already) which, according to management, made the application unusable. In particular, there was a fairly standard user icon with a pencil bullet that was meant to (as you might have guessed) edit users...

  Manager:  So I looked at it with Lisa and she had no clue what it was.  
            It was so confusing that no one would ever be able to use our 
            application with it.  Buttons should all be text and not images!

OK, let’s forget that ribbons and toolbars have been an application standard for decades now; let’s focus on how confusing this really is. To that end, Bryan did the nanny test. He asked his nanny what she thought it meant and she thought that the button had something to do with people. Awesome, on the ball! After explaining what it did she agreed it made sense.

  Bryan: How about we explain it to the users and add a tooltip?
  Mgr:   Tooltips take way too long to display and it’s still 
         incredibly confusing – no one would remember it. We
         don't want people pressing the wrong buttons!
         And why are some of the buttons different colors than others?

Bryan wasn't sure if the manager realized how stupidly he was treating his users, if he was simply oblivious, or if he was just pushing for his personal preference. In the end, all the toolbars were removed and the icons were replaced with text. This left an application with assorted colored buttons with text. Unfortunately, some of the buttons were so small that the text got displayed as a truncated string. Also, no amount of explanation could get across that color can also convey meaning (think traffic lights).

As his opinion in UI matters dwindled to nothing over the next couple of months, one of the four BAs on the team of six pinged Bryan for a meeting about scalability. He wanted to make sure that the project was scalable for the next three quarters. Enter the Holy Hell Twilight Zone moment in the land where no ribbons or toolbars exist, as the project manager was also involved.

  Mgr:   I’ve got to make sure we have everything we need to 
         scale for the next three quarters.
  Bryan: I can’t get the project manager to commit to lay out
         three weeks of planning for development. I can’t even 
         begin to guess if we have what we need for the next three 
         quarters.
  Mgr:   Well the vice president has a commitment to deploy this 
         to all divisions in the company within three quarters and 
         I’m tasked to make sure we have what we need.

Now Bryan could make up statistics better than 84.3% of people, but what was asked was impossible to determine. Additionally, there was a flat-out refusal to even vaguely commit to development more than a week or two in advance, so there was heavy resistance to getting even the information needed to try!

At this point in time, Bryan felt the need to bail out, but before he left town he grabbed his prized coffee mug from the office. He wasn’t going to be back in town for at least three weeks and from his prior experiences he knew where this was going.

Of course the guy who originally sank the ship in the first place had a true killer instinct, apparently knew better than Bryan, and was left to steer the ship again. All the problems and issues that Bryan saw coming were deemed either exaggerations or without merit. The project manager felt so comfortable with the architecture and frameworks that Bryan had put in place that he was confident there was absolutely nothing he couldn't handle. After all, he now had the buttons he wanted and understood. Bryan repeatedly asked if he wanted code walkthroughs and was denied. He didn't need to know what the different colors on the buttons were for. Bryan was even given 40 free consulting hours, and told not to check in his latest bug fixes.

Bryan sent his final farewell with a picture of him drinking from his coffee mug at home and out of state.

A real killer, when handed the ZF-1, would've immediately asked about the little red button on the bottom of the gun.

[Advertisement] Infrastructure as Code built from the start with first-class Windows functionality and an intuitive, visual user interface. Download Otter today!

Planet DebianFoteini Tsiami: Internationalization, part three

The first builds of the LTSP Manager were uploaded and ready for testing. Testing involves installing or purging the ltsp-manager package, along with its dependencies, and using its GUI to configure LTSP, create users, groups, shared folders etc. Obviously, those tasks are better done on a clean system. And the question that emerges is: how can we start from a clean state, without having to reinstall the operating system each time?

My mentors pointed me to an answer for that: VirtualBox snapshots. VirtualBox is a virtualization application (others are KVM or VMware) that allows users to install an operating system like Debian in a contained environment inside their host operating system. It comes with an easy to use GUI, and supports snapshots, which are points in time where we mark the guest operating system state, and can revert to that state later on.

So I started by installing Debian Stretch with the MATE desktop environment in VirtualBox, and I took a snapshot immediately after the installation. Now whenever I want to test LTSP Manager, I revert to that snapshot, and that way I have a clean system where I can properly check the installation procedure and all of its features!
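For anyone wanting to automate the same revert-test cycle, the snapshot commands can be driven from a script; here is a minimal sketch in Python, with a hypothetical VM name:

import subprocess

VM = "debian-stretch-mate"  # hypothetical VM name

def vbox(*args):
    # Thin wrapper around the VBoxManage command-line tool.
    subprocess.run(["VBoxManage", *args], check=True)

# Taken once, right after the clean installation:
vbox("snapshot", VM, "take", "clean-install")

# Before each test run (the VM must be powered off first):
vbox("snapshot", VM, "restore", "clean-install")
vbox("startvm", VM, "--type", "gui")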


Planet DebianReproducible builds folks: Reproducible Builds: week 116 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 9 and Saturday July 15 2017:

Packages reviewed and fixed, and bugs filed

Reviews of unreproducible packages

13 package reviews have been added, 12 have been updated and 19 have been removed this week, adding to our knowledge about identified issues.

2 issue types have been added:

3 issue types have been updated:

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (47)

diffoscope development

Version 84 was uploaded to unstable by Mattia Rizzolo. It included contributions already reported from the previous weeks, as well as new ones:

After the release, development continued in git with contributions from:

strip-nondeterminism development

Versions 0.036-1, 0.037-1 and 0.038-1 were uploaded to unstable by Chris Lamb. They included contributions from:

reprotest development

Development continued in git with contributions from:

buildinfo.debian.net development

tests.reproducible-builds.org

  • Mattia Rizzolo:
    • Make database backups quicker to restore by avoiding pg_dump's --column-inserts option.
    • Fixup the deployment scripts after the stretch migration.
    • Fixup Apache redirects that were broken after introducing the buster suite.
    • Fixup diffoscope jobs that were not always installing the highest possible version of diffoscope.
  • Holger Levsen:
    • Add a node health check for a too big jenkins.log.

Misc.

This week's edition was written by Bernhard M. Wiedemann, Chris Lamb, Mattia Rizzolo, Vagrant Cascadian & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Don MartiStupid ideas department

Here's a probably stupid idea: give bots the right to accept proposed changes to a software project. Can automation encourage less burnout-provoking behavior?

A set of bots could interact in interesting ways.

  • Regression-test-bot: If a change only adds a test, applies cleanly to both the current version and to a previous version, and the previous version passes the test, accept it, even if the test fails for the current version.

  • Harmless-change-bot: If a change is below a certain size, does not modify existing tests, and all tests (including any new ones) pass, accept it.

  • Revert-bot: If any tests are failing on the current version, and have been failing for more than a certain amount of time, revert back to a version that passes.

Would more people write regression tests for their issues if they knew that a bot would accept them? Or say that someone makes a bad change but gets it past harmless-change-bot because no existing test covers it. No lengthy argument needed. Write a regression test and let regression-test-bot and revert-bot team up to take care of the problem. In general, move contributor energy away from arguing with people and toward test writing, and reduce the size of the maintainer's to-do list.
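As a concrete sketch, here is what harmless-change-bot's acceptance rule might look like; every name below is hypothetical, since the post only describes the policy:

MAX_HARMLESS_LINES = 20  # assumed size threshold

def harmless_change_accepts(change, ci):
    """Accept a change iff it is small, leaves existing tests
    untouched, and all tests (including new ones) pass."""
    if change.lines_changed > MAX_HARMLESS_LINES:
        return False
    if any(change.modifies_existing_file(path) and path.startswith("tests/")
           for path in change.paths):
        return False  # touching an existing test is never "harmless"
    return ci.run_all_tests(change).all_passed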

Planet DebianMatthew Garrett: Avoiding TPM PCR fragility using Secure Boot

In measured boot, each component of the boot process is "measured" (ie, hashed and that hash recorded) in a register in the Trusted Platform Module (TPM) built into the system. The TPM has several different registers (Platform Configuration Registers, or PCRs) which are typically used for different purposes - for instance, PCR0 contains measurements of various system firmware components, PCR2 contains any option ROMs, PCR4 contains information about the partition table and the bootloader. The allocation of these is defined by the PC Client working group of the Trusted Computing Group. However, once the boot loader takes over, we're outside the spec[1].

One important thing to note here is that the TPM doesn't actually have any ability to directly interfere with the boot process. If you try to boot modified code on a system, the TPM will contain different measurements but boot will still succeed. What the TPM can do is refuse to hand over secrets unless the measurements are correct. This allows for configurations where your disk encryption key can be stored in the TPM and then handed over automatically if the measurements are unaltered. If anybody interferes with your boot process then the measurements will be different, the TPM will refuse to hand over the key, your disk will remain encrypted and whoever's trying to compromise your machine will be sad.

The problem here is that a lot of things can affect the measurements. Upgrading your bootloader or kernel will do so. At that point, if you reboot, your disk fails to unlock and you become unhappy. To get around this, your update system needs to notice that a new component is about to be installed, generate the new expected hashes and re-seal the secret to the TPM using the new hashes. If there are several different points in the update where this can happen, this can quite easily go wrong. And if it goes wrong, you're back to being unhappy.

Is there a way to improve this? Surprisingly, the answer is "yes" and the people to thank are Microsoft. Appendix A of a basically entirely unrelated spec defines a mechanism for storing the UEFI Secure Boot policy and used keys in PCR 7 of the TPM. The idea here is that you trust your OS vendor (since otherwise they could just backdoor your system anyway), so anything signed by your OS vendor is acceptable. If someone tries to boot something signed by a different vendor then PCR 7 will be different. If someone disables secure boot, PCR 7 will be different. If you upgrade your bootloader or kernel, PCR 7 will be the same. This simplifies things significantly.

I've put together a (not well-tested) patchset for Shim that adds support for including Shim's measurements in PCR 7. In conjunction with appropriate firmware, it should then be straightforward to seal secrets to PCR 7 and not worry about things breaking over system updates. This makes tying things like disk encryption keys to the TPM much more reasonable.

However, there's still one pretty major problem, which is that the initramfs (ie, the component responsible for setting up the disk encryption in the first place) isn't signed and isn't included in PCR 7[2]. An attacker can simply modify it to stash any TPM-backed secrets or mount the encrypted filesystem and then drop to a root prompt. This, uh, reduces the utility of the entire exercise.

The simplest solution to this that I've come up with depends on how Linux implements initramfs files. In its simplest form, an initramfs is just a cpio archive. In its slightly more complicated form, it's a compressed cpio archive. And in its peak form of evolution, it's a series of compressed cpio archives concatenated together. As the kernel reads each one in turn, it extracts it over the previous ones. That means that any files in the final archive will overwrite files of the same name in previous archives.
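A minimal sketch of that concatenation in Python; the filenames are invented, and in practice the bootloader would assemble this in memory rather than on disk:

# The kernel unpacks each compressed cpio archive in turn, so files in the
# second archive overwrite same-named files from the first.
with open("initrd.img", "wb") as out:
    for part in ("distro-initramfs.cpio.gz", "signed-secrets.cpio.gz"):
        with open(part, "rb") as src:
            out.write(src.read())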

My proposal is to generate a small initramfs whose sole job is to get secrets from the TPM and stash them in the kernel keyring, and then measure an additional value into PCR 7 in order to ensure that the secrets can't be obtained again. Later disk encryption setup will then be able to set up dm-crypt using the secret already stored within the kernel. This small initramfs will be built into the signed kernel image, and the bootloader will be responsible for appending it to the end of any user-provided initramfs. This means that the TPM will only grant access to the secrets while trustworthy code is running - once the secret is in the kernel it will only be available for in-kernel use, and once PCR 7 has been modified the TPM won't give it to anyone else. A similar approach for some kernel command-line arguments (the kernel, module-init-tools and systemd all interpret the kernel command line left-to-right, with later arguments overriding earlier ones) would make it possible to ensure that certain kernel configuration options (such as the iommu) weren't overridable by an attacker.
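A toy illustration of that left-to-right, last-value-wins parsing; the helper function is invented for illustration:

def parse_cmdline(cmdline):
    # Later duplicates silently override earlier ones, which is what
    # lets an appended argument pin down a security-relevant option.
    opts = {}
    for token in cmdline.split():
        key, _, value = token.partition("=")
        opts[key] = value
    return opts

print(parse_cmdline("intel_iommu=off intel_iommu=on")["intel_iommu"])  # 'on'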

There are obviously a few things that have to be done here (standardise how to embed such an initramfs in the kernel image, ensure that luks knows how to use the kernel keyring, teach all relevant bootloaders how to handle these images), but overall this should make it practical to use PCR 7 as a mechanism for supporting TPM-backed disk encryption secrets on Linux without introducing a huge support burden in the process.

[1] The patchset I've posted to add measured boot support to Grub uses PCRs 8 and 9 to measure various components during the boot process, but other bootloaders may have different policies.

[2] This is because most Linux systems generate the initramfs locally rather than shipping it pre-built. It may also get rebuilt on various userspace updates, even if the kernel hasn't changed. Including it in PCR 7 would entirely break the fragility guarantees and defeat the point of all of this.

Planet DebianNorbert Preining: Calibre and rar support

Thanks to the cooperation with upstream authors and the maintainer Martin Pitt, the Calibre package in Debian is now up-to-date at version 3.4.0, and has adopted a more standard packaging following upstream. In particular, all the desktop files and man pages have been replaced by what is shipped by Calibre. What remains to be done is work on RAR support.

Rar support is needed when an eBook uses rar compression, which happens quite often with comic books (cbr extension). Calibre 3 has split rar support out into a dynamically loaded module, so what needs to be done is packaging it. I have prepared a package for the Python library unrardll, which allows Calibre to read rar-compressed ebooks, but it depends on the unrar shared library, which unfortunately is not built in Debian. I have sent a patch fixing this to the maintainer (see bug 720051), but have received no reaction.

Thus, I am publishing updated unrar packages that also ship libunrar5, along with the unrardll Python package, in my calibre repository. After installing python-unrardll, Calibre will happily import meta-data from rar-compressed eBooks, as well as display them.

deb http://www.preining.info/debian/ calibre main
deb-src http://www.preining.info/debian/ calibre main

The releases are signed with my Debian key 0x6CACA448860CDC13

Enjoy

,

Cory DoctorowSan Diego! Come hear me read from Walkaway tomorrow night at Comickaze Liberty Station!

I’m teaching the Clarion Science Fiction writing workshop at UCSD in La Jolla this week, and tomorrow night at 7PM, I’ll be reading from my novel Walkaway at Comickaze Liberty Station, 2750 Historic Decatur Rd #101, San Diego, CA 92106. Hope to see you!

Planet DebianJonathan McDowell: Just because you can, doesn't mean you should

There was a recent Cryptoparty Belfast event that was aimed at a wider audience than usual; rather than concentrating on how to protect oneself on the internet, the 3 speakers concentrated more on why you might want to. As seems to be the way these days, I was asked to say a few words about the intersection of technology and the law. I think people were most interested in all the gadgets on show at the end, but I hope they got something out of my talk. It was a very high-level overview of some of the issues around the Investigatory Powers Act - if you’re familiar with it then I’m not adding anything new here, just trying to provide some detail about why it’s a bad thing from both a technological and a legal perspective.

Download

CryptogramAustralia Considering New Law Weakening Encryption

News from Australia:

Under the law, internet companies would have the same obligations telephone companies do to help law enforcement agencies, Prime Minister Malcolm Turnbull said. Law enforcement agencies would need warrants to access the communications.

"We've got a real problem in that the law enforcement agencies are increasingly unable to find out what terrorists and drug traffickers and pedophile rings are up to because of the very high levels of encryption," Turnbull told reporters.

"Where we can compel it, we will, but we will need the cooperation from the tech companies," he added.

Never mind that the law 1) would not achieve the desired results because all the smart "terrorists and drug traffickers and pedophile rings" will simply use a third-party encryption app, and 2) would make everyone else in Australia less secure. But that's all ground I've covered before.

I found this bit amusing:

Asked whether the laws of mathematics behind encryption would trump any new legislation, Mr Turnbull said: "The laws of Australia prevail in Australia, I can assure you of that.

"The laws of mathematics are very commendable but the only law that applies in Australia is the law of Australia."

Next Turnbull is going to try to legislate that pi = 3.2.

Another article. BoingBoing post.

EDITED TO ADD: More commentary.

Planet DebianSteinar H. Gunderson: Solskogen 2017: Nageru all the things

Solskogen 2017 is over! What a blast that was; I especially enjoyed that so many old-timers came back to visit, it really made the party for me.

This was the first year we were using Nageru for not only the stream but also for the bigscreen mix, and I was very relieved to see the lack of problems; I've had nightmares about crashes with 150+ people watching (plus 200-ish more on stream), but there were no crashes and hardly a dropped frame. The transition to a real mixing solution as well as from HDMI to SDI everywhere gave us a lot of new opportunities, which allowed a number of creative setups, some of them cobbled together on-the-spot:

  • Nageru with two cameras, of which one was through an HDMI-to-SDI converter battery-powered from a 20000 mAh powerbank (and sent through three extended SDI cables in series): Live music compo (with some, er, interesting entries).
  • 1080p60 bigscreen Nageru with two computer inputs (one of them through a scaler) and CasparCG graphics run from an SQL database, sent on to a 720p60 mixer Nageru (SDI pass-through from the bigscreen) with two cameras mixed in: Live graphics compo
  • Bigscreen Nageru switching from 1080p50 to 1080p60 live (and stream between 720p50 and 720p60 correspondingly), running C64 inputs from the Framemeister scaler: combined intro compo
  • And finally, it's Nageru all the way down: A camera run through a long extended SDI cable to a laptop running Nageru, streamed over TCP to a computer running VLC, input over SDI to bigscreen Nageru and sent on to streamer Nageru: Outdoor DJ set/street basket compo (granted, that one didn't run entirely smoothly, and you can occasionally see Windows device popups :-) )

It's been a lot of fun, but also a lot of work. And work will continue for an even better show next year… after some sleep. :-)

Cory DoctorowI’m profiled in the new issue of Locus Magazine

Cory Doctorow: Bugging In:

‘‘Walkaway is an ‘optimistic disaster novel.’ It’s about people who, in a crisis, come together, rather than turning on each other. Its villains aren’t the people next door, who’ve secretly been waiting for civilization’s breakdown as an excuse to come and eat you, but the super-rich who are convinced that without the state and its police, the poors are coming to eat them.

‘‘In Walkaway, the economy has comprehensively broken down, and so has the planet. Climate refugees drift in huge, unstoppable numbers from place to place, seeking refuge. The world has no jobs for most people, because when robots do all the work, the forces of capital require a few foremen to boss the robots, and a few unemployed people mooching around the factory gates to threaten the supervisors with if they demand higher wages. Everyone else is surplus to requirements.

‘‘Awareness of self-deception is a tactic that’s deployed very usefully by a lot of people now. It’s at the core of things like cognitive behavioral therapy – the idea that you must become an empiricist of your emotions because your recollections of emotions are always tainted, so you have to write down your experiences and go back to see what actually happened. Do you remember the term Endless September? It’s from when AOL came on to the net, and suddenly new people were getting online all the time, who didn’t know how things worked. The onboarding process to your utopian project is always difficult. It’s a thing Burning Man is struggling with, and it’s a thing fandom is struggling with right now. We were just talking about what it’s like to go to a big media convention, a San Diego Comic-Con or something, and to what extent that’s a new culture, or it’s continuous with the old culture, or it’s preserving the best things or bringing in the worst things, or it’s overwhelming the old, or whatever. It’s a real problem, and there is a shibboleth, which is, ‘I don’t object to all these newcomers, but they’re coming in such numbers that they’re overwhelming our ability to assimilate them.’ This is what every xenophobe who voted for Brexit said, but you hear that lament in science fiction too, and you hear it even about such things as gender parity in the workplace.”

*

‘‘For me, I live by the aphorism, ‘fail better, fail faster.’ To double your success rate, triple your failure rate. What the walkaways figured out how to do is reduce the cost of failure, to make it cheaper to experiment with new ways of succeeding. One of the great bugaboos of the rationalist movement is loss aversion. There is another name for it, ‘the entitlement effect’: basically, people value something they have more than they would pay for it before they got it. How much is your IKEA furniture worth before and after you assemble it? People grossly overestimate the value of their furniture after they’ve assembled it, because having infused it with their labor and ownership, they feel an attachment to it that is not economically rational. Sunk cost is another great fallacy. You can offer somebody enough money to buy the furniture again, and pay somebody to assemble it, and they’ll turn you down, because now that they have it, they don’t want to lose it. That was the wisdom of Obama with Obamacare. He understood that Obamacare is not sustainable, that basically letting insurance companies set the price without any real limits means that the insurance companies will eventually price it out of the government’s ability to pay, but he also understood that once you give 22 million people healthcare, when the insurance companies blew it up, the people would then demand some other healthcare system be found. The idea of just going without healthcare, which was a thing that people were willing to put up with for decades, is something they’ll never go back to. Any politician who proposes that when Obamacare blows up that we replace it with nothing, as opposed to single payer – where it’s going to end up – that politician is dead in the water. ”


More…

Worse Than FailureCodeSOD: Impersonated Programming

Once upon a time, a long long time ago, I got contracted to show a government office how to build and deliver applications… in Microsoft Access. I’m sorry. I’m so, so sorry. As horrifying and awful as it is, Access is actually built with some mechanisms to support that- you can break the UI and behavior off into one file, while keeping the data in another, and you can construct linked tables that connect to a real database, if you don’t mind gluing a UI made out of evil and sin to your “real” database.

Which brings us to poor Alex Rao. Alex has an application built in Access. This application uses linked tables, which he wants to convert to local tables. The VBA API exposed by Access doesn’t give him any way to do this, so he came up with this solution…

Public Function convertToLocal()
    Dim dBase As DAO.Database
    Dim tdfTable As DAO.TableDef

    Set dBase = CurrentDb

    For Each tdfTable In dBase.TableDefs
        If tdfTable.Connect <> "" Then
            ' OH. MY. GOSH. I hate myself so much for doing this. For the love of everything holy,
            ' dear reader, if you can come up with a better way to do this, please tell me about it
            ' AS SOON AS POSSIBLE
            ' I have literally been trying to do this for the past week. For reference, here is what I
            ' am trying to do:
            '   Convert a "linked" table to a "local" one
            '   Keep relationships intact.
            ' Now, Access has this handy tool, "Convert to Local Table" - you'll see it if you right click
            ' on a linked table. However, THERE IS NO WAY IN VBA TO DO THIS. I am aware of the following:
            '   DoCmd.SelectObject acTable, "Company", True RunCommand acCmdConvertLinkedTableToLocal
            ' Note that this no longer works as of Access 2016 because the wonderful programmers at Microsoft decided
            ' that "It wasn't used anymore".
            '
            ' So, onto my solution:
            '   First, I select the table object, making sure it's actually selected (i.e., like a user selected it)
            '   Then, I pause for one second (I hope to the man upstairs that's long enough)
            '   Then, I send the "Context Menu" key (SHIFT+F10)
            '   Then, I pause for another second (Again, fingers crossed)
            '   Then, I send the "v" key - to activate the "ConVert to Local Table" command shortcut
            '
            ' I literally send KEYPRESSES to the active application, and hope to God that Access is ready to go.
            ' And if the user selected a different application (or literally anything else) in that time? Well,
            ' then Screw you, user.
            '
            ' God help us.
            DoCmd.SelectObject acTable, tdfTable.Name, True
            Pause 1
            SendKeys "+{F10}", True
            Pause 1
            SendKeys "v", True
        End If
    Next tdfTable
End Function

Don’t feel bad, Alex. I’m certain this isn’t the worst thing ever built in Access.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Don MartiPlaying for third place

Just tried a Twitter advertising trick that a guy who goes by "weev" posted two years ago.

It still works.

They didn't fix it.

Any low-budget troll who can read that old blog post and come up with a valid credit card number can still do it.

Maybe Twitter is a bad example, but the fast-moving nationalist right wing manages to outclass its opponents on other social marketing platforms, too. Facebook won't even reveal how badly they got played in 2016. They thought they were putting out cat food for cute Internet kittens, but the rats ate it.

This is not new. Right-wing shitlords, at least the best of them, are the masters of database marketing. They absolutely kill it, and they have been ever since Marketing as we know it became a thing. Some good examples:

All the creepy surveillance marketing stuff they're doing today is just another set of tools in an expanding core competency.

Every once in a while you get an exception. The environmental movement became a direct mail operation in response to Interior Secretary James G. Watt, who alarmed environmentalists enough that organizations could reliably fundraise with direct mail copy quoting from Watt's latest speech. And the Democrats tried that "Organizing for America" thing for a little while, but, man, their heart just wasn't in it. They dropped it like a Moodle site during summer vacation. Somehow, the creepier the marketing, the more it skews "red". The more creativity involved, the more it skews "blue" (using the USA meanings of those colors.) When we make decisions about how much user surveillance we're going to allow on a platform, we're making a political decision.

Anyway. News Outlets to Seek Bargaining Rights Against Google and Facebook.

The standings so far.

  1. Shitlords and fraud hackers

  2. Adtech and social media bros

  3. NEWS SITES HERE (?)

News sites want to go to Congress, to get permission to play for third place in their own business? You want permission to bring fewer resources and less experience to a surveillance marketing game that the Internet companies are already losing?

We know the qualities of a medium that you win by being creepier, and we know the qualities of a medium that you can win with reputation and creativity. Why waste time and money asking Congress for the opportunity to lose, when you could change the game instead?

Maybe achieving balance in political views depends on achieving balance in business model. Instead of buying in to the surveillance marketing model 100%, and handing an advantage to one side, maybe news sites should help users control what data they share in order to balance competing political interests.

Planet Linux Australiasthbrx - a POWER technical blog: XDP on Power

This post is a bit of a break from the standard IBM fare of this blog, as I now work for Canonical. But I have a soft spot for Power from my time at IBM - and Canonical officially supports 64-bit, little-endian Power - so when I get a spare moment I try to make sure that cool, officially-supported technologies work on Power before we end up with a customer emergency! So, without further ado, this is the story of XDP on Power.

XDP

eXpress Data Path (XDP) is a cool Linux technology to allow really fast processing of network packets.

Normally in Linux, a packet is received by the network card, an SKB (socket buffer) is allocated, and the packet is passed up through the networking stack.

This introduces an inescapable latency penalty: we have to allocate some memory and copy stuff around. XDP allows some network cards and drivers to process packets early - even before the allocation of the SKB. This is much faster, and so has applications in DDoS mitigation and other high-speed networking use-cases. The IOVisor project has much more information if you want to learn more.

eBPF

XDP processing is done by an eBPF program. eBPF - the extended Berkeley Packet Filter - is an in-kernel virtual machine with a limited set of instructions. The kernel can statically validate eBPF programs to ensure that they terminate and are memory safe. From this it follows that the programs cannot be Turing-complete: they do not have backward branches, so they cannot do fancy things like loops. Nonetheless, they're surprisingly powerful for packet processing and tracing. eBPF programs are translated into efficient machine code using in-kernel JIT compilers on many platforms, and interpreted on platforms that do not have a JIT. (Yes, there are multiple JIT implementations in the kernel. I find this a terrifying thought.)

Rather than requiring people to write raw eBPF programs, you can write them in a somewhat-restricted subset of C, and use Clang's eBPF target to translate them. This is super handy, as it gives you access to the kernel headers - which define a number of useful data structures like headers for various network protocols.
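
To make this concrete, here is a minimal sketch of what such a restricted-C program can look like - a toy filter of my own for illustration, not something from the projects mentioned below. It drops IPv6 frames and passes everything else, and would be compiled with something like clang -O2 -target bpf -c xdp_drop6.c -o xdp_drop6.o (the section name your loader expects may differ):

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <asm/byteorder.h>

#define SEC(NAME) __attribute__((section(NAME), used))

SEC("xdp_drop6")
int xdp_drop_ipv6(struct xdp_md *ctx)
{
    void *data     = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    struct ethhdr *eth = data;

    /* The verifier makes us prove the Ethernet header is within the packet. */
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;

    /* Drop IPv6 frames; pass everything else up the stack as usual. */
    if (eth->h_proto == __constant_htons(ETH_P_IPV6))
        return XDP_DROP;

    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";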

Trying it

There are a few really interesting projects already up and running that allow you to explore XDP without learning the innards of both eBPF and the kernel networking stack. I explored the samples in bcc (the BPF Compiler Collection) and also the samples from the netoptimizer/prototype-kernel repository.

The easiest way to get started with these is with a virtual machine, as recent virtio network drivers support XDP. If you are using Ubuntu, you can use the uvt-kvm tooling to trivially set up a VM running Ubuntu Zesty on your local machine.
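
For example (sketching from memory - the VM name is arbitrary, and uvt-simplestreams-libvirt fetches Ubuntu's cloud images), something like this gets you a Zesty guest:

uvt-simplestreams-libvirt sync release=zesty arch=amd64
uvt-kvm create xdp-test release=zesty
uvt-kvm ssh xdp-test --insecure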

Once your VM is installed, you need to shut it down and edit the virsh XML.

You need 2 vCPUs (or more) and a virtio+vhost network card. You also need to edit the 'interface' section and add the following snippet (with thanks to the xdp-newbies list):

<driver name='vhost' queues='4'>
    <host tso4='off' tso6='off' ecn='off' ufo='off'/>
    <guest tso4='off' tso6='off' ecn='off' ufo='off'/>
</driver>

(If you have more than 2 vCPUs, set the queues parameter to 2x the number of vCPUs.)

Then, install a modern clang (we've had issues with 3.8 - I recommend v4+), and the usual build tools.

I recommend testing with the prototype-kernel tools - the DDoS prevention tool is a good demo. Then - on x86 - you just follow their instructions. I'm not going to repeat that here.

POWERful XDP

What happens when you try this on Power? Regular readers of my posts will know to expect some minor hitches.

XDP does not disappoint.

Firstly, the prototype-kernel repository hard codes x86 as the architecture for kernel headers. You need to change it for powerpc.

Then, once you get the stuff compiled, and try to run it on a current-at-time-of-writing Zesty kernel, you'll hit a massive debug splat ending in:

32: (61) r1 = *(u32 *)(r8 +12)
misaligned packet access off 0+18+12 size 4
load_bpf_file: Permission denied

It turns out this is because in Ubuntu's Zesty kernel, CONFIG_HAVE_EFFICIENT_UNALIGNED_ACCESS is not set on ppc64el. Because of that, the eBPF verifier will check that all loads are aligned - and this load (part of checking some packet header) is not, and so the verifier rejects the program. Unaligned access is not enabled because the Zesty kernel is being compiled for CPU_POWER7 instead of CPU_POWER8, and we don't have efficient unaligned access on POWER7.

As it turns out, IBM never released any officially supported Power7 LE systems - LE was only ever supported on Power8. So, I filed a bug and sent a patch to build Zesty kernels for POWER8 instead, and that has been accepted and will be part of the next stable update due real soon now.

Sure enough, if you install a kernel with that config change, you can verify the XDP program and load it into the kernel!

If you have real powerpc hardware, that's enough to use XDP on Power! Thanks to Michael Ellerman, maintainer extraordinaire, for verifying this for me.

If - like me - you don't have ready access to Power hardware, you're stuffed. You can't use qemu in TCG mode: to use XDP with a VM, you need multi-queue support, which only exists in the vhost driver, which is only available for KVM guests. Maybe IBM should release a developer workstation. (Hint, hint!)

Overall, I was pleasantly surprised by how easy things were for people with real ppc hardware - it's encouraging to see something not require kernel changes!

eBPF and XDP are definitely growing technologies - as Brendan Gregg notes, now is a good time to learn them! (And those on Power have no excuse either!)

,

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 2.

This week older students start their research projects for the term, whilst younger students are doing the Timeline Activity. Our youngest students are thinking about the places where people live and can join with older students as buddies to Build A Humpy together.

Foundation/Prep/Kindy to Year 3

Students in stand-alone Foundation/Prep/Kindy classes (Unit F.3), or those in classes integrated with Year 1 (Unit F-1.3) are considering different types of homes this week. They will think about where the people in the stories from last week live and compare that to their own houses. They can consider how homes were different in the past and how our homes help us meet our basic needs. There is an option this week for these students to buddy with older students, especially those in Years 4, 5 and 6, to undertake the Building A Humpy activity together. In this activity students collect materials to build a replica Aboriginal humpy or shelter outside. Many teachers find that both senior primary and the younger students get a lot of benefit from helping each other with activities, enriching the learning experience. The Building a Humpy activity is one where the older students can assist the younger students with the physical requirements of building a humpy, whilst each group considers aspects of the activity relevant to their own studies, and comparing past ways of life to their own.

Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) are undertaking the Timeline Activity this week. This activity is designed to complement the concept of the number line from the Mathematics curriculum, whilst helping students to learn to visualise the abstract concepts of the past and different lengths of time between historical events and the present. In this activity students walk out a timeline, preferably across a large open space such as the school Oval, whilst attaching pieces of paper at intervals to a string. The pieces of paper refer to specific events in history (starting with their own birth years) and cover a wide range of events from the material covered this year. Teachers can choose from events in Australian and world history, covering 100s, 1000s and even millions of years, back to the dinosaurs. Teachers can also add their own events. Thus the details of the activity are able to be altered in different years to maintain student interest. Depending on the class, the issue of scale can be addressed in various ways. By physically moving their bodies, students will start to understand the lengths of time involved in examinations of History. This activity is repeated in increasing detail in higher years, to make sure that the fundamental concepts are absorbed by students over time.

Years 3 to 6

Students in Years 3 to 6 are starting their term research projects on Australian history this week. Students in Year 3 (Unit 3.7) concentrate on topics from the history of their capital city or local community. Suggested topics are included for Brisbane, Melbourne, Sydney, Adelaide, Darwin, Hobart, Perth and Canberra. Teachers can substitute their own topics for a local community study. Students will undertake a Scientific Investigation into an aspect of their chosen research project and will produce a Scientific Report. It is recommended that teachers supplement the resources provided with old photographs, books, newspapers etc, many of which can be accessed online, to provide the students with extra material for their investigation.

First Fleet 1788

Students in Year 4 (Unit 4.3) will be focusing on Australia in the period up to and including the arrival of the First Fleet and the early colonial period. OpenSTEM’s Understanding Our World® program encompasses the whole Australian curriculum for HASS and thus does not simply rely on “flogging the First Fleet to death”! There are 7 research themes for Year 4 students: “Australia Before 1788”; “The First Fleet”; “Convicts and Settlers”; “Aboriginal People in Colonial Australia”; “Australia and Other Nations in the 17th, 18th and 19th centuries”; “Colonial Children”; “Colonial Animals and their Impact”. These themes are allocated to groups of students and each student chooses an individual research topic within their group’s themes. Suggested topics are given in the Teacher Handbook, as well as suggested resources.

19th century china dolls

Year 5 (Unit 5.3) students focus on the colonial period in Australia. There are 9 research themes for Year 5 students. These are: “The First Fleet”; “Convicts and Settlers”; “The 6 Colonies”; “Aboriginal People in Colonial Australia”; “Resistance to Colonial Authorities”; “Sugar in Queensland”; “Colonial Children”; “Colonial Explorers” and “Colonial Animals and their Impact”. As well as themes unique to Year 5, some overlap is provided to facilitate teaching in multi-year classes. The range of themes also allows for the possibility of teachers choosing different themes in different years. Once again individual topics and resources are suggested in the Teacher Handbook.

Year 6 (Unit 6.3) students will examine research themes around Federation and the early 20th century. There are 8 research themes for Year 6 students: “Federation and Sport”; “Women’s Suffrage”; “Aboriginal Rights in Australia”; “Henry Parkes and Federation”; “Edmund Barton and Federation”; “Federation and the Boer War”; “Samuel Griffith and the Constitution”; “Children in Australian History”. Individual research topics and resources are suggested in the Teachers Handbook. It is expected that students in Year 6 will be able to research largely independently, with weekly guidance from their teacher. OpenSTEM’s Understanding Our World® program is aimed at developing research skills in students progressively, especially over the upper primary years. If the program is followed throughout the primary years, students are well prepared for high school by the end of Year 6, having practised individual research skills for several years.

 

Rondam RamblingsThings I wish someone had told me before I started angel investing

Back in 2005 I suddenly found myself sitting on a big pile of money after the Google IPO so I did what any young nouveau-riche high-tech dilettante would do: I started angel investing.  I figured it would be more fun to be the beggee than the beggor for a change, and I was right about that.  But I was wrong about just about everything else, and I got a very expensive education as a result. Now

Planet DebianJose M. Calhariz: Crossgrading a complex Desktop and Debian Developer machine running Debian 9

This article describes an experiment in progress; please check back, as I am updating it with new information.

I have a very old installation of Debian, possibly dating from version 2 (I do not remember), that I have upgraded since then both in software and hardware. The hardware is now 64-bit and runs a 64-bit kernel, but the userland is still 32-bit. For 99% of tasks this works very well. After running many simulations, I may have found a procedure to crossgrade my desktop. I write down the tentative procedure here and will update it with more ideas on the problems that I find.

First, you need to install a 64-bit kernel and boot into it. See my previous post on how to do that.

Second, you need to bootstrap the crossgrading and install all the libraries as amd64:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 dpkg --list > original.dpkg
 for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do 
   echo $pack32 ; 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
   fi ; 
 done
 cd /var/cache/apt/archives/
 dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
 dpkg --configure --pending
 dpkg -i --skip-same-version dpkg_1.18.24_amd64.deb apt_1.4.6_amd64.deb bash_4.4-5_amd64.deb dash_0.5.8-2.4_amd64.deb mawk_1.3.3-17+b3_amd64.deb *.deb

 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

But this step does not prevent apt-get install from running into broken dependencies. So instead of only installing the libs with dpkg -i, I am going to try to install all the packages with dpkg -i:

apt-get clean
apt-get upgrade
apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
dpkg --list > original.dpkg
for pack32 in $(grep i386 original.dpkg | awk '{print $2}' ) ; do 
  echo $pack32 ; 
  if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
    apt-get --download-only install -y --allow-remove-essential ${pack32%:i386}:amd64 ; 
  fi ; 
done
cd /var/cache/apt/archives/
dpkg --install dpkg_*amd64.deb tar_*amd64.deb apt_*amd64.deb bash_*amd64.deb dash_*amd64.deb *.deb
dpkg --configure --pending
dpkg --install dpkg_*_amd64.deb apt_*_amd64.deb bash_*_amd64.deb dash_*_amd64.deb mawk_*_amd64.deb *.deb

dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --install /var/cache/apt/archives/*_amd64.deb
dpkg --print-architecture
dpkg --print-foreign-architectures

Fourth, do a full crossgrade:

 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi

Planet DebianVasudev Kamath: Overriding version information from setup.py with pbr

I recently raised a pull request on zfec for converting its Python packaging from a pure setup.py to a pbr-based one. Today I got a review from Brian Warner, and one of the issues mentioned was that python setup.py --version does not give the same output as the previous version of setup.py.

The previous version used versioneer, which extracts the version information needed from VCS tags. Versioneer also provides the flexibility of specifying the type of VCS used, the style of version, the tag prefix (for the VCS), etc. pbr also extracts version information from git tags, but it expects tags to be of the form refs/tags/x.y.z, whereas zfec prefixed its tags with zfec- (for example zfec-1.4.24), and pbr does not process this. The end result is that I get a version of the form 0.0.devNN, where NN is the number of commits in the repository since its inception.

Brian and I spent a few hours trying to figure out a way to tell pbr that we would like to override the version information it deduces automatically, but there is none other than putting the version string in the PBR_VERSION environment variable. That documentation was contributed to the pbr project by me three years back.

So finally I used versioneer to create a version string and put it in the environment variable PBR_VERSION.

import os

# setup() comes from setuptools; pbr hooks into it via setup_requires below
from setuptools import setup

import versioneer

os.environ['PBR_VERSION'] = versioneer.get_version()

...
setup(
    setup_requires=['pbr'],
    pbr=True,
    ext_modules=extensions
)

And I added the below snippet to setup.cfg, which is how versioneer is configured with various information, including tag prefixes.

[versioneer]
VCS = git
style = pep440
versionfile_source = zfec/_version.py
versionfile_build = zfec/_version.py
tag_prefix = zfec-
parentdir_prefix = zfec-

Though this workaround gets the job done, it does not feel correct to set an environment variable to change the logic of another part of the same program. If you know a better way, do let me know! I should also probably consider filing a feature request against pbr to provide a way to pass a tag prefix to its version calculation logic.

Planet DebianLior Kaplan: PDO_IBM: tracking changes publicly

As part of my work at Zend (now a Rogue Wave company), I maintain various patch sets. One of those is the set of changes for the PDO_IBM extension for PHP.

After some patch exchanges I decided it would be easier to manage the whole process in a public git repository, and maybe gain some more review / feedback along the way. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches

Another aspect of this is having the IBMi-specific patches from YIPS (Young i Professionals) at http://www.youngiprofessionals.com/wiki/index.php/XMLService/PHP, which are themselves patches on top of vanilla releases. Info at https://github.com/kaplanlior/pecl-database-pdo_ibm/commits/zend-patches-for-yips

So keeping track of these changes is easier using git's ability to rebase efficiently: when a new release is done, I can adapt my patches quite easily and make sure the changes can be ported back and forward between the vanilla and IBMi versions of the extension.
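
As a sketch of what that rebase step looks like (the tag and branch names here are made up for illustration, not the actual ones from those repositories):

# a new vanilla release has been tagged upstream; replay the patch branch onto it
git fetch origin
git rebase --onto new-release-tag previous-release-tag zend-patches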


Filed under: PHP

Krebs on SecurityPorn Spam Botnet Has Evil Twitter Twin

Last month KrebsOnSecurity published research into a large distributed network of apparently compromised systems being used to relay huge blasts of junk email promoting “online dating” programs — affiliate-driven schemes traditionally overrun with automated accounts posing as women. New research suggests that another bot-promoting botnet of more than 80,000 automated female Twitter accounts has been pimping the same dating scheme and prompting millions of clicks from Twitter users in the process.

One of the 80,000+ Twitter bots ZeroFOX found that were enticing male Twitter users into viewing their profile pages.


Not long after I published Inside a Porn-Pimping Spam Botnet, I heard from researchers at ZeroFOX, a security firm that helps companies block attacks coming through social media.

Zack Allen, manager of threat operations at ZeroFOX, said he had a look at some of the spammy, adult-themed domains being promoted by the botnet in my research and found they were all being promoted through a botnet of bogus Twitter accounts.

Those phony Twitter accounts all featured images of attractive or scantily-clad women, and all were being promoted via suggestive tweets, Allen said.

Anyone who replied was ultimately referred to subscription-based online dating sites run by Deniro Marketing, a company based in California. This was the same company that was found to be the beneficiary of spam from the porn botnet I’d written about in June. Deniro did not respond to requests for comment.

“We’ve been tracking this thing since February 2017, and we concluded that the social botnet controllers are probably not part of Deniro Marketing, but most likely are affiliates,” Allen said.

ZeroFOX found more than 86,262 Twitter accounts were responsible for more than 8.6 million posts on Twitter promoting porn-based sites, many of them promoting domains in a swath of Internet address space owned by Deniro Marketing (ASN19884).

Allen said 97.4% of the bot display names followed the pattern “Firstname Surname,” with the first letter of each name capitalized and the two names separated by a single whitespace character, and that the names corresponded to common female names.

An analysis of the Twitter bot names used in the scheme. Graphic: ZeroFOX.


The accounts advertise adult content by routinely injecting links from their twitter profiles to a popular hashtag, or by @-mentioning a popular user or influencer on Twitter. Those profile links are shortened with Google’s goo.gl link shortening service, which then redirects to a free hosting domain in the dot-tk (.tk) domain space (.tk is the country code for Tokelau — a group of atolls in the South Pacific).

From there the system is smart enough to redirect users back to Twitter if they appear to be part of any automated attempt to crawl the links (e.g. by using site download and mirroring tools like cURL), the researchers found. They said this was likely a precaution on the part of the spammers to avoid detection by automated scanners looking for bot activity on Twitter. Requests from visitors who look like real users responding to tweets are redirected to the porn spam sites.

Because the links promoted by those spammy Twitter accounts all abused short link services from Twitter and Google, the researchers were able to see that this entire botnet has generated more than 30 million unique clicks from February to June 2017.

[SIDE NOTE: Anyone seeking more context about what’s being promoted here can check out the Web site datinggold[dot]com [Caution: Not-Safe-for-Work], which suggests it’s an affiliate program that rewards marketers who drive new signups to its array of “online dating” offerings — mostly “cheating,” “hookup” and “affair-themed” sites like “AdsforSex,” “Affair Hookups,” and “LocalCheaters.” Note that this program is only interested in male signups.]

The datinggold affiliate site which pays spammers to bring male signups to "online dating" services.


Allen said the Twitter botnet relies heavily on accounts that have been “aged” for a period of time as another method to evade anti-spam techniques used by Twitter, which may treat tweets from new accounts with more prejudice than those from established accounts. ZeroFOX said about 20 percent of the Twitter accounts identified as part of the botnet were aged at least one year before sending their first tweet, and that the botnet overall demonstrates that these affiliate programs have remained lucrative by evolving to harness social media.

“The final redirect sites encourage the user to sign up for subscription pornography, webcam sites, or fake dating,” ZeroFOX wrote in a report being issued this week. “These types of sites, although legal, are known to be scams.”

Perhaps the most well-known example of the subscription-based dating/cheating service that turned out to be mostly phony was AshleyMadison. After AshleyMadison’s user databases were plundered and published online, the company admitted that its service used at least 70,000 female chatbots that were programmed to message new users and try to entice them into replying — which required a paid account.

“Many of the sites’ policies claim that the site owners operate most of the profiles,” ZeroFOX charged. “They also have overbearing policies that can use personally information of their customers to send to other affiliate programs, yielding more spam to the victim. Much like the infamous ‘partnerka’ networks from the Russian Business Network, money is paid out via clicks and signups on affiliate programs” [links added].

Although the Twitter botnet discovered by ZeroFOX has since been dismantled, it is not hard to see how this same approach could be very effective at spreading malware. Keep your wits about you while using or cruising social media sites, and be wary of any posts or profiles that match the descriptions and behavior of the bot accounts described here.

For more on this research, see ZeroFOX’s blog post Inside a Massive Siren Social Network Spam Botnet.

,

Planet DebianJoey Hess: Functional Reactive Propellor

I wrote this code, and it made me super happy!

data Variety = Installer | Target
    deriving (Eq)

seed :: UserInput -> Versioned Variety Host
seed userinput ver = host "foo"
    & ver (   (== Installer) --> hostname "installer"
          <|> (== Target)    --> hostname (inputHostname userinput)
          )
    & osDebian Unstable X86_64
    & Apt.stdSourcesList
    & Apt.installed ["linux-image-amd64"]
    & Grub.installed PC
    & XFCE.installed
    & ver (   (== Installer) --> desktopUser defaultUser
          <|> (== Target)    --> desktopUser (inputUsername userinput)
          )
    & ver (   (== Installer) --> autostartInstaller )

This is doing so much in so little space and with so little fuss! It's completely defining two different versions of a Host. One version is the Installer, which in turn installs the Target. The code above provides all the information that propellor needs to convert a copy of the Installer into the Target, which it can do very efficiently. For example, it knows that the default user account should be deleted, and a new user account created based on the user's input of their name.

The germ of this idea comes from a short presentation I made about propellor in Portland several years ago. I was describing RevertableProperty, and Joachim Breitner pointed out that to use it, the user essentially has to keep track of the evolution of their Host in their head. It would be better for propellor to know what past versions looked like, so it can know when a RevertableProperty needs to be reverted.

I didn't see a way to address the objection for years. I was hung up on the problem that propellor's properties can't be compared for equality, because functions can't be compared for equality (generally). And on the problem that it would be hard for propellor to pull old versions of a Host out of git. But then I ran into the situation where I needed these two closely related hosts to be defined in a single file, and it all fell into place.

The basic idea is that propellor first reverts all the revertible properties for other versions. Then it ensures the property for the current version.

Another use for it would be if you wanted to be able to roll back changes to a Host. For example:

foos :: Versioned Int Host
foos ver = host "foo"
    & hostname "foo.example.com"
    & ver (   (== 1) --> Apache.modEnabled "mpm_worker"
          <|> (>= 2) --> Apache.modEnabled "mpm_event"
          )
    & ver ( (>= 3)   --> Apt.unattendedUpgrades )

foo :: Host
foo = foos `version` (4 :: Int)

Versioned properties can also be defined:

foobar :: Versioned Int -> RevertableProperty DebianLike DebianLike
foobar ver =
    ver (   (== 1) --> (Apt.installed "foo" <!> Apt.removed "foo")
        <|> (== 2) --> (Apt.installed "bar" <!> Apt.removed "bar")
        )

Notice that I've embedded a small DSL for versioning into the propellor config file syntax. While implementing versioning took all day, that part was super easy; Haskell config files win again!

API documentation for this feature

PS: Not really FRP, probably. But time-varying in a FRP-like way.


Development of this was sponsored by Jake Vosloo on Patreon.

Planet DebianDirk Eddelbuettel: Rcpp 0.12.12: Rounding some corners

The twelfth update in the 0.12.* series of Rcpp landed on CRAN this morning, following two days of testing at CRAN preceded by five full reverse-depends checks we did (and which are always logged in this GitHub repo). The Debian package has been built and uploaded; Windows and macOS binaries should follow at CRAN as usual. This 0.12.12 release follows the 0.12.0 release from late July, the 0.12.1 release in September, the 0.12.2 release in November, the 0.12.3 release in January, the 0.12.4 release in March, the 0.12.5 release in May, the 0.12.6 release in July, the 0.12.7 release in September, the 0.12.8 release in November, the 0.12.9 release in January, the 0.12.10 release in March, and the 0.12.11 release in May, making it the sixteenth release at the steady and predictable bi-monthly release frequency.

Rcpp has become the most popular way of enhancing GNU R with C or C++ code. As of today, 1097 packages (and hence 71 more since the last release in May) on CRAN depend on Rcpp for making analytical code go faster and further, along with another 91 in BioConductor.

This release contains a fairly large number of fairly small and focused pull requests, most of which either correct some corner cases or improve other aspects. JJ tirelessly improved the package registration added in the previous release and following R 3.4.0. Kirill tidied up a number of small issues allowing us to run compilation in even more verbose modes---usually a good thing. Jeroen, Elias Pipping and Yo Gong all contributed as well, and we thank everybody for their contributions.

All changes are listed below in some detail.

Changes in Rcpp version 0.12.12 (2017-07-13)

  • Changes in Rcpp API:

    • The tinyformat.h header now ends in a newline (#701).

    • Fixed rare protection error that occurred when fetching stack traces during the construction of an Rcpp exception (Kirill Müller in #706).

    • Compilation is now also possible on Haiku-OS (Yo Gong in #708 addressing #707).

    • Dimension attributes are explicitly cast to int (Kirill Müller in #715).

    • Unused arguments are no longer declared (Kirill Müller in #716).

    • Visibility of exported functions is now supported via the R macro attribute_visible (Jeroen Ooms in #720).

    • The no_init() constructor accepts R_xlen_t (Kirill Müller in #730).

    • Loop unrolling now uses R_xlen_t (Kirill Müller in #731).

    • Two unused-variables warnings are now avoided (Jeff Pollock in #732).

  • Changes in Rcpp Attributes:

    • Execute tools::package_native_routine_registration_skeleton within package rather than current working directory (JJ in #697).

    • The R portion no longer uses dir.exists so as to not require R 3.2.0 or newer (Elias Pipping in #698).

    • Fix native registration for exports with name attribute (JJ in #703 addressing #702).

    • Automatically register init functions for Rcpp Modules (JJ in #705 addressing #704).

    • Add Shield around parameters in Rcpp::interfaces (JJ in #713 addressing #712).

    • Replace dot (".") with underscore ("_") in package names when generating native routine registrations (JJ in #722 addressing #721).

    • Generate C++ native routines with underscore ("_") prefix to avoid exporting when standard exportPattern is used in NAMESPACE (JJ in #725 addressing #723).

Thanks to CRANberries, you can also look at a diff to the previous release. As always, even fuller details are on the Rcpp Changelog page and the Rcpp page which also leads to the downloads page, the browseable doxygen docs and zip files of doxygen output for the standard formats. A local directory has source and documentation too. Questions, comments etc should go to the rcpp-devel mailing list off the R-Forge page.

This post by Dirk Eddelbuettel originated on his Thinking inside the box blog. Please report excessive re-aggregation in third-party for-profit settings.

Planet DebianJunichi Uekawa: revisiting libjson-spirit.

revisiting libjson-spirit. I tried compiling a program that uses libjson-spirit and noticed that it is still broken. New programs compiled against the header do not link with the provided static library. Rebuilding the library fixes that, but it uses compat version 8, and that needs to be fixed (trivially). Hmm... actually the code doesn't build anymore and there are multiple new upstream versions. ... and then I noticed that it was a stale copy already removed from the Debian repository. What's a good C++ JSON implementation these days?

Planet DebianVasudev Kamath: debcargo: Replacing subprocess crate with git2 crate

In my previous post I talked about using the subprocess crate to extract the beginning and ending years from a git repository for generating the debian/copyright file. In this post I'm going to talk about how I replaced subprocess with the native git2 crate and achieved the same result in a much cleaner and safer way.

git2 is a native Rust crate which provides access to Git repository internals. git2 does not involve any unsafe invocation as it is built against libgit2-sys, which uses Rust FFI to bind directly to the underlying libgit2 library. Below is the new copyright_fromgit function using the git2 crate.

fn copyright_fromgit(repo_url: &str) -> Result<String> {
   let tempdir = TempDir::new_in(".", "debcargo")?;
   let repo = Repository::clone(repo_url, tempdir.path())?;

   let mut revwalker = repo.revwalk()?;
   revwalker.push_head()?;

   // Get the latest and the first commit ids. This is a bit ugly:
   let latest_id = revwalker.next().unwrap()?;
   let first_id = revwalker.last().unwrap()?; // last() consumes the revwalker

   let first_commit = repo.find_commit(first_id)?;
   let latest_commit = repo.find_commit(latest_id)?;

   let first_year =
       DateTime::<Utc>::from_utc(
               NaiveDateTime::from_timestamp(first_commit.time().seconds(), 0),
               Utc).year();

   let latest_year =
       DateTime::<Utc>::from_utc(
             NaiveDateTime::from_timestamp(latest_commit.time().seconds(), 0),
             Utc).year();

   let notice = match first_year.cmp(&latest_year) {
       Ordering::Equal => format!("{}", first_year),
       _ => format!("{}-{}", first_year, latest_year),
   };

   Ok(notice)
}
So here is what I'm doing (a short usage sketch follows the list):
  1. Use git2::Repository::clone to clone the given URL. We thus avoid the exec of the git clone command.
  2. Get a revision walker object. git2::RevWalk implements the Iterator trait and allows walking through the git history. This is what we use to avoid the exec of the git log command.
  3. revwalker.push_head() is important because we want to tell the revwalker from where to walk the history. In this case we are asking it to walk from the repository HEAD. Without this line the next step will not work (I learned that the hard way :-) ).
  4. Then we extract git2::Oid values, which are roughly the commit hashes, and can be used to look up a particular commit. We take the latest commit using the RevWalk::next call and the first commit using RevWalk::last; note the order: RevWalk::last consumes the revwalker, so doing it the other way around would make the borrow checker unhappy :-). This replaces the exec of the head -n1 command.
  5. Look up the git2::Commit objects using git2::Repository::find_commit
  6. Then convert the git2::Time to chrono::DateTime and take out the years.
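
As a usage sketch (the URL and surrounding names are illustrative; the ? operator assumes the caller also returns a Result):

// Hypothetical call site: derive the copyright year range for a crate
// from the repository URL found in its Cargo.toml.
let years = copyright_fromgit("https://github.com/example/somecrate")?;
println!("Copyright: {} Upstream Authors", years);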

After this change I found an obvious error which went unnoticed in the previous version: the case where there is no repository key in Cargo.toml. When there was no repo URL, the git clone exec did not error out, and our shell commands happily extracted the years from the debcargo repository itself! Since I was always testing from inside the debcargo repository, this never failed; when I executed it from a non-git folder, git did throw an error, but from git log, not git clone. With git2 the error was spotted right away, because it told me I had given it an empty URL.

When it comes to performance, I see that debcargo is faster than the previous version. This makes sense because it previously did 5 fork-and-exec calls, which are now avoided.

,

CryptogramBook Review: Twitter and Tear Gas, by Zeynep Tufekci

There are two opposing models of how the Internet has changed protest movements. The first is that the Internet has made protesters mightier than ever. This comes from the successful revolutions in Tunisia (2010-11), Egypt (2011), and Ukraine (2013). The second is that it has made them more ineffectual. Derided as "slacktivism" or "clicktivism," the ease of action without commitment can result in movements like Occupy petering out in the US without any obvious effects. Of course, the reality is more nuanced, and Zeynep Tufekci teases that out in her new book Twitter and Tear Gas.

Tufekci is a rare interdisciplinary figure. As a sociologist, programmer, and ethnographer, she studies how technology shapes society and drives social change. She has a dual appointment in both the School of Information Science and the Department of Sociology at University of North Carolina at Chapel Hill, and is a Faculty Associate at the Berkman Klein Center for Internet and Society at Harvard University. Her regular New York Times column on the social impacts of technology is a must-read.

Modern Internet-fueled protest movements are the subjects of Twitter and Tear Gas. As an observer, writer, and participant, Tufekci examines how modern protest movements have been changed by the Internet­ -- and what that means for protests going forward. Her book combines her own ethnographic research and her usual deft analysis, with the research of others and some big data analysis from social media outlets. The result is a book that is both insightful and entertaining, and whose lessons are much broader than the book's central topic.

"The Power and Fragility of Networked Protest" is the book's subtitle. The power of the Internet as a tool for protest is obvious: it gives people newfound abilities to quickly organize and scale. But, according to Tufekci, it's a mistake to judge modern protests using the same criteria we used to judge pre-Internet protests. The 1963 March on Washington might have culminated in hundreds of thousands of people listening to Martin Luther King Jr. deliver his "I Have a Dream" speech, but it was the culmination of a multi-year protest effort and the result of six months of careful planning made possible by that sustained effort. The 2011 protests in Cairo came together in mere days because they could be loosely coordinated on Facebook and Twitter.

That's the power. Tufekci describes the fragility by analogy. Nepalese Sherpas assist Mt. Everest climbers by carrying supplies, laying out ropes and ladders, and so on. This means that people with limited training and experience can make the ascent, which is no less dangerous -- to sometimes disastrous results. Says Tufekci: "The Internet similarly allows networked movements to grow dramatically and rapidly, but without prior building of formal or informal organizational and other collective capacities that could prepare them for the inevitable challenges they will face and give them the ability to respond to what comes next." That makes them less able to respond to government counters, change their tactics -- a phenomenon Tufekci calls "tactical freeze" -- make movement-wide decisions, and survive over the long haul.

Tufekci isn't arguing that modern protests are necessarily less effective, but that they're different. Effective movements need to understand these differences, and leverage these new advantages while minimizing the disadvantages.

To that end, she develops a taxonomy for talking about social movements. Protests are an example of a "signal" that corresponds to one of several underlying "capacities." There's narrative capacity: the ability to change the conversation, as Black Lives Matter did with police violence and Occupy did with wealth inequality. There's disruptive capacity: the ability to stop business as usual. An early Internet example is the 1999 WTO protests in Seattle. And finally, there's electoral or institutional capacity: the ability to vote, lobby, fund raise, and so on. Because of various "affordances" of modern Internet technologies, particularly social media, the same signal -- a protest of a given size -- reflects different underlying capacities.

This taxonomy also informs government reactions to protest movements. Smart responses target attention as a resource. The Chinese government responded to 2015 protesters in Hong Kong by not engaging with them at all, denying them camera-phone videos that would go viral and attract the world's attention. Instead, they pulled their police back and waited for the movement to die from lack of attention.

If this all sounds dry and academic, it's not. Twitter and Tear Gas is infused with a richness of detail stemming from her personal participation in the 2013 Gezi Park protests in Turkey, as well as personal on-the-ground interviews with protesters throughout the Middle East -- particularly Egypt and her native Turkey -- Zapatistas in Mexico, WTO protesters in Seattle, Occupy participants worldwide, and others. Tufekci writes with a warmth and respect for the humans that are part of these powerful social movements, gently intertwining her own story with the stories of others, big data, and theory. She is adept at writing for a general audience, and -- despite being published by the intimidating Yale University Press -- her book is more mass-market than academic. What rigor is there is presented in a way that carries readers along rather than distracting.

The synthesist in me wishes Tufekci would take some additional steps, taking the trends she describes outside of the narrow world of political protest and applying them more broadly to social change. Her taxonomy is an important contribution to the more-general discussion of how the Internet affects society. Furthermore, her insights on the networked public sphere have applications for understanding technology-driven social change in general. These are hard conversations for society to have. We largely prefer to allow technology to blindly steer society or -- in some ways worse -- leave it to unfettered for-profit corporations. When you're reading Twitter and Tear Gas, keep current and near-term future technological issues such as ubiquitous surveillance, algorithmic discrimination, and automation and employment in mind. You'll come away with new insights.

Tufekci twice quotes historian Melvin Kranzberg from 1985: "Technology is neither good nor bad; nor is it neutral." This foreshadows her central message. For better or worse, the technologies that power the networked public sphere have changed the nature of political protest as well as government reactions to and suppressions of such protest.

I have long characterized our technological future as a battle between the quick and the strong. The quick -- dissidents, hackers, criminals, marginalized groups -- are the first to make use of a new technology to magnify their power. The strong are slower, but have more raw power to magnify. So while protesters are the first to use Facebook to organize, the governments eventually figure out how to use Facebook to track protesters. It's still an open question who will gain the upper hand in the long term, but Tufekci's book helps us understand the dynamics at work.

This essay originally appeared on Vice Motherboard.

The book on Amazon.com.

CryptogramFriday Squid Blogging: Eyeball Collector Wants a Giant-Squid Eyeball

They're rare:

The one Dubielzig really wants is an eye from a giant squid, which has the biggest eye of any living animal -- it's the size of a dinner plate.

"But there are no intact specimens of giant squid eyes, only rotten specimens that have been beached," he says.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramForged Documents and Microsoft Fonts

A set of documents in Pakistan were detected as forgeries because their fonts were not in circulation at the time the documents were dated.

Worse Than FailureError'd: Unfortunate Timing

"Apparently, I viewed the page during one of those special 31 seconds of the year," wrote Richard W.

 

"Well, it looks like I'll be paying full price for this repair," wrote Ryan W.

 

Marco writes, "So...that's like September 2nd?"

 

"Office 365????? I guess so, but ONLY if you're sure, Microsoft..." writes Leonid T.

 

Brandon writes, "Being someone who lives in 'GMT +1', I have to wonder, is it the 6th, 11th, or 12th of July or the 7th of June, November, or December we're talking about here?"

 

"Translated, it reads, 'Bayrou and Modem in angry mode', assuming of course it really is Mr. Bayrou in the pic," wrote Matt.

 

[Advertisement] Universal Package Manager - ProGet easily integrates with your favorite Continuous Integration and Build Tools, acting as the central hub to all your essential components. Learn more today!

Planet DebianChris Lamb: Installation birthday

Fancy receiving congratulations on the anniversary of when you installed your system?

Installing the installation-birthday package on your Debian machines will celebrate each birthday of your system by automatically sending a message to the local system administrator.

The installation date is based on when the system itself was installed, not on when the package was installed.

installation-birthday is available in Debian 9 ("stretch") via the stretch-backports repository, as well as in the testing and unstable distributions:

$ apt install installation-birthday

Enjoy, and patches welcome. :)

Don MartiSmart futures contracts on software issues talk, and bullshit walks?

Previously: Benkler’s Tripod, transactions from a future software market, more transactions from a future software market

Owning "equity" in an outcome

John Robb: Revisiting Open Source Ventures:

Given this, it appears that an open source venture (a company that can scale to millions of worker/owners creating a new economic ecosystem) that builds massive human curated databases and decentralizes the processing load of training these AIs could become extremely competitive.

But what if the economic ecosystem could exist without the venture? Instead of trying to build a virtual company with millions of workers/owners, build a market economy with millions of participants in tens of thousands of projects and tasks? All of this stuff scales technically much better than it scales organizationally—you could still be part of a large organization or movement while only participating directly on a small set of issues at any one time. Instead of holding equity in a large organization with all its political risk, you could hold a portfolio of positions in areas where you have enough knowledge to be comfortable.

Robb's opportunity is in training AIs, not in writing code. The "oracle" for resolving AI-training or dataset-building contracts would have to be different, but the futures market could be the same.
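
To make the mechanics concrete, here's a minimal sketch (hypothetical names and amounts throughout, Python purely for illustration) of what a contract on a bug outcome might look like: two sides escrow funds, and the oracle's verdict at maturity decides who collects the pot.

from dataclasses import dataclass

@dataclass
class BugFuture:
    """Hypothetical futures contract on a software issue: both sides
    escrow stakes, and the oracle's report at maturity decides which
    side is paid the whole pot."""
    issue_url: str       # the bug the contract is written against
    maturity: str        # ISO date at which the contract resolves
    fixed_stake: float   # escrowed by those betting the bug gets fixed
    open_stake: float    # escrowed by those betting it stays open

    def settle(self, oracle_says_fixed):
        # The oracle is whatever resolution source the market trusts;
        # for code issues that's probably the project's bug tracker,
        # which is exactly why the cheating problem below matters.
        pot = self.fixed_stake + self.open_stake
        return ("fixed side collects", pot) if oracle_says_fixed else ("open side collects", pot)

contract = BugFuture("https://bugs.example.org/1234", "2018-06-30", 60.0, 40.0)
print(contract.settle(oracle_says_fixed=True))   # ('fixed side collects', 100.0)

For an AI-training or dataset-building contract, per Robb, only the settle step (the oracle) would need to change.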

The cheating project problem

Why would you invest in a futures contract on bug outcomes when the project maintainer controls the bug tracker?

And what about employees who are incentivized from both sides: paid to fix a bug but able to buy futures contracts (anonymously) that will let them make more on the market by leaving it open?

In order for the market to function, the total reputation of the project and contributors must be high enough that outside participants believe that developers are more motivated to maintain that reputation than to "take a dive" on a bug.

That implies that there is some kind of relationship between the total "reputation capital" of a project and the maximum market value of all the futures contracts on it.

Open source metrics

To put that another way, there must be some relationship between the market value of futures contracts on a project and the maximum reputation value of the project. So that could be a proxy for a difficult-to-measure concept such as "open source health."

Open source journalism

Hey, tickers to put into stories! Sparklines! All the charts and stuff that finance and sports reporters can build stories around!

Planet DebianNorbert Preining: TeX Live contrib repository (re)new(ed)

It is my pleasure to announce the renewal/rework/restart of the TeX Live contrib repository service. The repository is collecting packages that cannot enter TeX Live directly (mostly due to license reasons), but are free to distribute. The basic idea is to provide a repository mimicking Debian’s nonfree branch.

The packages on this site are not distributed inside TeX Live proper for one or another of the following reasons:

  • because it is not free software according to the FSF guidelines;
  • because it is an executable update;
  • because it is not available on CTAN;
  • because it is an intermediate release for testing.

In short, anything related to TeX that can not be on TeX Live but can still legally be distributed over the Internet can have a place on TLContrib.

Currently there are 52 packages in the repository, falling roughly into the following categories:

  • nosell fonts: fonts and macros with nosell licenses, e.g., garamond, garamondx, etc. These fonts are mostly those that are also available via getnonfreefonts.
  • nosell packages: packages with nosell licenses, e.g., acmtrans
  • nonfree support packages: those packages that require non-free tools or fonts, e.g., acrotex, lucida-otf, verdana, etc.

The full list of packages can be seen here.

The ultimate goal is to provide a companion to the core TeX Live (tlnet) distribution in much the same way as Debian‘s non-free tree is a companion to the normal distribution. The goal is not to replace TeX Live: packages that could go into TeX Live itself should stay (or be added) there. TLContrib is simply trying to fill in a gap in the current distribution system.

Quick Start

If you are running the current version of TeX Live, which is 2017 at the moment, the following code will suffice:

  tlmgr repository add http://contrib.texlive.info/current tlcontrib
  tlmgr pinning add tlcontrib '*'

In future there might be releases for certain years.

Verification

The packages are signed with my GnuPG RSA key: 0x6CACA448860CDC13. tlmgr will automatically verify authenticity if you add my key:

  curl -fsSL https://www.preining.info/rsa.asc | tlmgr key add -

After that, tlmgr will warn you if authentication of this repository fails.

History

Taco Hoekwater started TLContrib in 2010, but it hasn’t seen much activity in recent years. Taco agreed to hand it over to me, and I am currently maintaining the repository. Big thanks to Taco for his long support and cooperation.

In contrast to the original tlcontrib page, we don’t offer an automatic upload of packages or user registration. If you want to add packages here, see below.

Adding packages

If you want to see a package included here that cannot enter TeX Live proper, the following ways are possible (from most appreciated to least appreciated):

  • clone the tlcontrib git repo (see below), add the package, and publish the branch where I can pull from it. Then send me an email with the URL and explanation about free distributability/license;
  • send me the package in TDS format, with explanation about free distributability/license;
  • send me the package as distributed by the author (or on CTAN), with explanation about free distributability/license;
  • send me a link to the package, with an explanation about free distributability/license.

Git repository

The packages are kept in a git repository, and the tlmgr repository is rebuilt from it after changes. The location is https://git.texlive.info/tlcontrib.

Enjoy.

,

Planet DebianJose M. Calhariz: Crossgrading a more typical server in Debian 9

First you need to install a 64bits kernel and boot with it. See my previous post on how to do it.

Second you need to do a bootstrap of crossgrading:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64 mawk:amd64
 # run the install twice: the first pass can fail part-way due to
 # dependency ordering, and the second pass picks up the rest
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures

Third, do a crossgrade of the libraries:

 dpkg --list > original.dpkg
 apt-get --fix-broken --allow-remove-essential install
 # reinstall, as amd64, every installed i386 package marked Multi-Arch: same
 for pack32 in $(grep :i386 original.dpkg | awk '{print $2}' ) ; do 
   if dpkg --status $pack32 | grep -q "Multi-Arch: same" ; then 
     apt-get install --yes --allow-remove-essential ${pack32%:i386} ; 
   fi ; 
 done

Fourth, do a full crossgrade:

 # try the full crossgrade in one go; if it fails part-way, fix broken
 # packages and retry
 if ! apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//) ; then
   apt-get --fix-broken --allow-remove-essential install
   apt-get install --allow-remove-essential $(grep :i386 original.dpkg | awk '{print $2}' | sed -e s/:i386//)
 fi

Krebs on SecurityThieves Used Infrared to Pull Data from ATM ‘Insert Skimmers’

A greater number of ATM skimming incidents now involve so-called “insert skimmers,” wafer-thin fraud devices made to fit snugly and invisibly inside a cash machine’s card acceptance slot. New evidence suggests that at least some of these insert skimmers — which record card data and store it on a tiny embedded flash drive  — are equipped with technology allowing them to transmit stolen card data wirelessly via infrared, the same communications technology that powers a TV remote control.

Last month the Oklahoma City metropolitan area experienced a rash of ATM attacks involving insert skimmers. The local KFOR news channel on June 28, 2017 ran a story stating that at least four banks in the area were hit with insert skimmers.

The story quoted a local police detective saying “the skimmer contains an antenna which transmits your card information to a tiny camera hidden somewhere outside the ATM.”

Financial industry sources tell KrebsOnSecurity that preliminary analysis of the insert skimmers used in the attacks suggests they were configured to transmit stolen card data wirelessly to the hidden camera using infrared, a short-range communications technology most commonly found in television remote controls.

Here’s a look at one of the insert skimmers that Oklahoma authorities recently seized from a compromised ATM:

An insert skimmer retrieved from a compromised cash machine in Oklahoma City. Image: KrebsOnSecurity.com.

In such an attack, the hidden camera has a dual function: To record time-stamped videos of ATM users entering their PINs; and to receive card data recorded and transmitted by the insert skimmer. In this scenario, the fraudster could leave the insert skimmer embedded in the ATM’s card acceptance slot, and merely swap out the hidden camera whenever its internal battery is expected to be depleted.

Of course, the insert skimmer also operates on an embedded battery, but according to my sources the skimmer in question was designed to turn on only when someone uses the cash machine, thereby preserving the battery.

Thieves involved in skimming attacks have hidden spy cameras in some pretty ingenious places, such as a brochure rack to the side of the cash machine or a safety mirror affixed above the cash machine (some ATMs legitimately place these mirrors so that customers will be alerted if someone is standing behind them at the machine).

More often than not, however, hidden cameras are placed behind tiny pinholes cut into false fascias that thieves install directly above or beside the PIN pad. Unfortunately, I don’t have a picture of a hidden camera used in the recent Oklahoma City insert skimming attacks.

Here’s a closer look at the insert skimmer found in Oklahoma:

Image: KrebsOnSecurity.com.

A source at a financial institution in Oklahoma shared the following images of the individuals who are suspected of installing these insert skimming devices.

Individuals suspected of installing insert skimmers in a rash of skimming attacks last month in Oklahoma City. Image: KrebsOnSecurity.com.

As this skimming attack illustrates, most skimmers rely on a hidden camera to record the victim’s PIN, so it’s a good idea to cover the PIN pad with your hand, purse or wallet while you enter it.

Yes, there are skimming devices that rely on non-video methods to obtain the PIN (such as PIN pad overlays), but these devices are comparatively rare and quite a bit more expensive for fraudsters to build and/or buy.

So cover the PIN pad. It also protects you against some ne’er-do-well behind you at the ATM “shoulder surfing” you to learn your PIN (which would likely be followed by a whack on the head).

It’s an elegant and simple solution to a growing problem. But you’d be amazed at how many people fail to take this basic, hassle-free precaution.

If you’re as fascinated as I am with all these skimming devices, check out my series All About Skimmers.

CryptogramTomato-Plant Security

I have a soft spot for interesting biological security measures, especially by plants. I've used them as examples in several of my books. Here's a new one: when tomato plants are attacked by caterpillars, they release a chemical that turns the caterpillars on each other:

It's common for caterpillars to eat each other when they're stressed out by the lack of food. (We've all been there.) But why would they start eating each other when the plant food is right in front of them? Answer: because of devious behavior control by plants.

When plants are attacked (read: eaten) they make themselves more toxic by activating a chemical called methyl jasmonate. Scientists sprayed tomato plants with methyl jasmonate to kick off these responses, then unleashed caterpillars on them.

Compared to an untreated plant, a high-dose plant had five times as much plant left behind because the caterpillars were turning on each other instead. The caterpillars on a treated tomato plant ate twice as many other caterpillars than the ones on a control plant.

Planet Linux AustraliaAnthony Towns: Bitcoin: ASICBoost – Plausible or not?

So the first question: is ASICBoost use plausible in the real world?

There are plenty of claims that it’s not:

  • “Much conspiracy around today. I don’t believe SegWit non-activation has anything to do with AsicBoost!” – Timo Hanke, one of the patent applicants, on twitter
  • “there’s absolutely nothing but baseless accusations flying around” – Emin Gun Sirer’s take, linked from the Bitmain statement
  • “no company would ever produce a chip that would have a switch in to hide that it’s actually an ASICboost chip.” – Sam Cole formerly of KNCMiner which went bankrupt due to being unable to compete with Bitmain in 2016
  • “I believe their claim about not activating ASICBoost. It is very small money for them.” – Guy Corem of SpoonDoolies, who independently discovered ASICBoost
  • “No one is even using Asicboost.” – Roger Ver (/u/memorydealers) on reddit

A lot of these claims don’t actually match reality though: ASICBoost is implemented in Bitmain miners sold to the public, and since it’s disabled by default, a switch to hide it obviously already exists, contradicting Sam Cole’s take. There’s plenty of circumstantial evidence of ASICBoost-related transaction structuring in blocks, contradicting the basis on which Emin Gun Sirer dismisses the claims. The 15%-30% improvement claims that Guy Corem and Sam Cole cite are certainly large enough to be worth looking into, and Bitmain confirms having done so on testnet. Even Guy Corem’s claim that the savings only amount to $2,000,000 per year rather than $100,000,000 seems like a reason to expect ASICBoost to be in use, rather than so little that you wouldn’t bother.

If ASICBoost weren’t in use on mainnet it would probably be relatively straightforward to prove that: Bitmain could publish the benchmarks results they got when testing on testnet, and why that proved not to be worth doing on mainnet, and provide instructions for their customers on how to reproduce their results, for instance. Or Bitmain and others could support efforts to block ASICBoost from being used on mainnet, to ensure no one else uses it, for the greater good of the network — if, as they claim, they’re already not using it, this would come at no cost to them.

To me, much of the rhetoric that’s being passed around seems to be a much better match for what you would expect if ASICBoost were in use, than if it was not. In detail:

  • If ASICBoost were in use, and no one had any reason to hide it being used, then people would admit to using it, and would do so by using bits in the block version.
  • If ASICBoost were in use, but people had strong reasons to hide that fact, then people would claim not to be using it for a variety of reasons, but those explanations would not stand up to more than casual analysis.
  • If ASICBoost were not in use, and it was fairly easy to see there is no benefit to it, then people would be happy to share their reasoning for not using it in detail, and this reasoning would be able to be confirmed independently.
  • If ASICBoost were not in use, but the reasons why it is not useful require significant research efforts, then keeping the detailed reasoning private may act as a competitive advantage.

The first scenario can be easily verified, and does not match reality. Likewise the third scenario does not (at least in my opinion) match reality; as noted above, many of the explanations presented are superficial at best, contradict each other, or simply fall apart on even a cursory analysis. Unfortunately that rules out assuming good faith — either people are lying about using ASICBoost, or just dissembling about why they’re not using it. Working out which of those is most likely requires coming to our own conclusion on whether ASICBoost makes sense.

I think Jimmy Song had some good posts on that topic. His first, on Bitmain’s ASICBoost claims, finds some plausible examples of ASICBoost testing on testnet; however, this was corrected in the comments as having been performed by Timo Hanke, rather than Bitmain. Having a look at other blocks’ version fields on testnet seems to indicate that there hasn’t been much other fiddling of version fields, so presumably whatever testing of ASICBoost was done by Bitmain, fiddling with the version field was not used; but that in turn implies that Bitmain must have been testing covert ASICBoost on testnet, assuming their claim to have tested it on testnet is true in the first place (they could quite reasonably have used a private testnet instead). Two later posts, on profitability and ASICBoost and Bitmain’s profitability in particular, go into more detail, mostly supporting Guy Corem’s analysis mentioned above. Perhaps interestingly, Jimmy Song also made a proposal to the bitcoin-dev mailing list shortly after Greg’s original post revealing ASICBoost and prior to these posts; that proposal would have endorsed use of ASICBoost on mainnet, making it cheaper and compatible with segwit, but would also have made use of ASICBoost readily apparent to both other miners and patent holders.

It seems to me there are three different ways to look at the maths here, and because this is an economics question, each of them give a different result:

  • Greg’s maths splits miners into two groups each with 50% of hashpower. One group, which is unable to use ASICBoost is assumed to be operating at almost zero profit, so their costs to mine bitcoins are only barely below the revenue they get from selling the bitcoin they mine. Using this assumption, the costs of running mining equipment are calculated by taking the number of bitcoin mined per year (365*24*6*12.5=657k), multiplying that by the price at the time ($1100), and halving the costs because each group only mines half the chain. This gives a cost of mining for the non-ASICBoost group of $361M per year. The other group, which uses ASICBoost, then gains a 30% advantage in costs, so only pays 70%, or $252M, a comparative saving of approximately $100M per annum. This saving is directly proportional to hashrate and ASICBoost advantage, so using Guy Corem’s figures of 13.2% hashrate and 15% advantage, this reduces from $95M to $66M, saving about $29M per annum.
  • Guy Corem’s maths estimates Bitmain’s figures directly: looking at the AntPool hashpower share, he estimates 500PH/s in hashpower (or 13.2%); he uses the specs of the AntMiner S9 to determine power usage (0.1 J/GH); he looks at electricity prices in China and estimates $0.03 per kWh; and he estimates the ASICBoost advantage to be 15%. This gives a total cost of 500M GH/s * 0.1 J/GH / 1000 W/kW * $0.03 per kWh * 24 * 365 which is $13.14 M per annum, so a 15% saving is just under $2M per annum. If you assume that the hashpower was 50% and ASICBoost gave a 30% advantage instead, this equates to about 1900 PH/s, and gives a benefit of just under $15M per annum. In order to get the $100M figure to match Greg’s result, you would also need to increase electricity costs by a factor of six, from 3c per kWH to 20c per kWH.
  • The approach I prefer is to compare what your hashpower would be keeping costs constant and work out the difference in revenue: for example, if you’re spending $13M per annum in electricity, what is your profit with ASICBoost versus without (assuming that the difficulty retargets appropriately, but no one else changes their mining behaviour). Following this line of thought, if you have 500PH/s with ASICBoost giving you a 30% boost, then without ASICBoost, you have 384 PH/s (500/1.3). If that was 13.2% of hashpower, then the remaining 86.8% of hashpower is 3288 PH/s, so when you stop using ASICBoost and a retarget occurs, total hashpower is now 3672 PH/s (384+3288), and your percentage is now 10.5%. Because mining revenue is simply proportional to hashpower, this amounts to a loss of 2.7% of the total bitcoin reward, or just under $20M per year. If you match Greg’s assumptions (50% hashpower, 30% benefit) that leads to an estimate of $47M per annum; if you match Guy Corem’s assumptions (13.2% hashpower, 15% benefit) it leads to an estimate of just under $11M per annum.

So like I said, that’s three different answers in each of two scenarios: Guy’s low end assumption of 13.2% hashpower and a 15% advantage to ASICBoost gives figures of $29M/$2M/$11M; while Greg’s high end assumptions of 50% hashpower and 30% advantage give figures of $100M/$15M/$47M. The differences in assumptions there are obviously pretty important.
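
For concreteness, here’s a minimal Python sketch of two of those calculations, Guy’s electricity-cost method and my constant-power revenue method, using the April figures quoted above ($1100/BTC, 0.1 J/GH, $0.03/kWh):

BTC_PER_YEAR = 365 * 24 * 6 * 12.5   # ~657k BTC mined per year
PRICE = 1100.0                       # USD per BTC, April figure

def electricity_saving(ph_s, advantage, j_per_gh=0.1, usd_per_kwh=0.03):
    # Guy Corem's method: ASICBoost saves a fraction of the power bill
    watts = ph_s * 1e6 * j_per_gh             # PH/s -> GH/s, times J/GH
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh * advantage

def revenue_loss(share, advantage):
    # my method: power stays constant, so dropping ASICBoost divides your
    # hashrate by (1+advantage); after a difficulty retarget your share
    # of the block reward falls accordingly
    reduced = share / (1 + advantage)
    new_share = reduced / (reduced + (1 - share))
    return (share - new_share) * BTC_PER_YEAR * PRICE

print(electricity_saving(500, 0.15))    # ~$2.0M  (13.2% hash, 15% advantage)
print(electricity_saving(1900, 0.30))   # ~$15M   (50% hash, 30% advantage)
print(revenue_loss(0.132, 0.15))        # ~$11M
print(revenue_loss(0.5, 0.30))          # ~$47M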

I don’t find the assumptions behind Greg’s maths realistic: in essence, it assumes that mining is so competitive that it is barely profitable even in the short term. However, if that were the case, then nobody would be able to invest in new mining hardware, because they would not recoup their investment. In addition, even if at some point mining were not profitable, increases in the price of bitcoin would change that, and the price of bitcoin has been increasing over recent months. Beyond that, it also assumes electricity prices do not vary between miners — if only the marginal miner is not profitable, it may be that some miners have lower costs and therefore are profitable; and indeed this is likely the case, because electricity prices vary over time due to both seasonal and economic factors. The method Greg uses is useful for establishing an upper limit, however: the only way ASICBoost could offer more savings than Greg’s estimate would be if every block mined produced less revenue than it cost in electricity, and miners were making a loss on every block. (This doesn’t mean $100M is an upper limit however — that estimate was current in April, but the price of bitcoin has more than doubled since then, so the current upper bound via Greg’s maths would be about $236M per year.)

A downside to Guy’s method from the point of view of outside analysis is that it requires more information: you need to know the efficiency of the miners being used and the cost of electricity, and any error in those estimates will be reflected in your final figure. In particular, the cost of electricity needs to be a “whole lifecycle” cost — if it costs 3c/kWh to supply electricity, but you also need to spend an additional 5c/kWh in cooling in order to keep your data-centre operating, then you need to use a figure of 8c/kWh to get useful results. This likely provides a good lower bound estimate however: using ASICBoost will save you energy, and if you forget to account for cooling or some other important factor, then your estimate will be too low; but that will still serve as a loose lower bound. This estimate also changes over time however; while it doesn’t depend on price, it does depend on deployed hashpower — since total hashrate has risen from around 3700 PH/s in April to around 6200 PH/s today, if Bitmain’s hashrate has risen proportionally, it has gone from 500 PH/s to 837 PH/s, and an ASICBoost advantage of 15% means power cost savings have gone from $2M to $3.3M per year; or if Bitmain has instead maintained control of 50% of hashrate at 30% advantage, the savings have gone from $15M to $25M per year.

The key difference between my method and both Greg’s and Guy’s is that they implicitly assume that consuming more electricity is viable, and costs simply increase proportionally; whereas my method assumes that this is not viable, and instead that sufficient mining hardware has been deployed that power consumption is already constrained by some other factor. This might be due to reaching the limit of what the power company can supply, or the rating of the wiring in the data centre, or it might be due to the cooling capacity, or fire risk, or some other factor. For an operation spanning multiple data centres this may be the case for some locations but not others — older data centres may be maxed out, while newer data centres are still being populated and may have excess capacity, for example. If setting up new data centres is not too difficult, it might also be true in the short term, but not true in the longer term — that is, having each miner use more power due to disabling ASICBoost might require shutting some miners down initially, but they may be able to be shifted to other sites over the course of a few weeks or months, and restarted there, though this would require taking into account additional hosting costs beyond electricity and cooling. As such, I think this is a fairly reasonable way to produce a plausible estimate, and it’s the one I’ll be using. Note that it depends on the bitcoin price, so the estimates this method produces have also risen since April, going from $11M to $24M per annum (13.2% hash, 15% advantage) or from $47M to $103M (50% hash, 30% advantage).

The way ASICBoost works is by allowing you to save a few steps: normally when trying to generate a proof of work, you have to do essentially six steps:

  1. A = Expand( Chunk1 )
  2. B = Compress( A, 0 )
  3. C = Expand( Chunk2 )
  4. D = Compress( C, B )
  5. E = Expand( D )
  6. F = Compress( E )

The expected process is to do steps (1,2) once, then do steps (3,4,5,6) about four billion (or more) times, until you get a useful answer. You do this process in parallel across many different chips. ASICBoost changes this process by observing that step (3) is independent of steps (1,2), so you can find a variety of Chunk1s (call them Chunk1-A, Chunk1-B, Chunk1-C and Chunk1-D) that are each compatible with a common Chunk2. In that case, you do steps (1,2) once for each different Chunk1 (four times in total), then do step (3) four billion (or more) times, and do steps (4,5,6) 16 billion (or more) times, to get four times the work, while saving 12 billion (or more) iterations of step (3). Depending on the number of Chunk1’s you set yourself up to find, and the relative weight of the Expand versus Compress steps, the saving comes to (n-1)/n / 2 / (1+c/e), where n is the number of different Chunk1’s you have, and c/e is the cost of a Compress step relative to an Expand step. If you take the weight of Expand and Compress steps as about equal, it simplifies to 25%*(n-1)/n, and with n=4, this is 18.75%. As such, an ASICBoost advantage of about 20% seems reasonably practical to me. At 50% hash and 20% advantage, my estimates for ASICBoost’s value are $33M in April, and $72M today.
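
That saving is easy to sanity-check numerically; a small sketch, with c_over_e standing for the cost of a Compress step relative to an Expand step:

def asicboost_advantage(n, c_over_e=1.0):
    # per attempt you normally do two Expands and two Compresses
    # (steps 3 to 6); sharing a common Chunk2 across n Chunk1s saves
    # (n-1)/n of one Expand out of that work
    return (n - 1) / n / 2 / (1 + c_over_e)

print(asicboost_advantage(4))   # 0.1875, ie the 18.75% figure above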

So as to the question of whether you’d use ASICBoost, I think the answer is a clear yes: the lower end estimate has risen from $2M to $3.3M per year, and since Bitmain have acknowledged that AntMiner’s support ASICBoost in hardware already, the only additional cost is finding collisions which may not be completely trivial, but is not difficult and is easily automated.

If the benefit is only in this range, however, this does not provide a plausible explanation for opposing segwit: having the Bitcoin industry come to a consensus about how to move forward would likely increase the bitcoin price substantially, definitely increasing Bitmain’s mining revenue — even a 2% increase in price would cover their additional costs. However, as above, I believe this is only a lower bound, and a more reasonable estimate is on the order of $11M-$47M as of April or $24M-$103M as of today. This is a much more serious range, and would require an 11%-25% increase in price to not be an outright loss; and a far more attractive proposition would be to find a compromise position that both allows the industry to move forward (increasing the price) and allows ASICBoost to remain operational (maintaining the cost savings / revenue boost).

 

It’s possible to take a different approach to analysing the cost-effectiveness of mining given how much you need to pay in electricity costs. If you have access to a lot of power at a flat rate, can deal with other hosting issues, can expand (or reduce) your mining infrastructure substantially, and have some degree of influence in how much hashpower other miners can deploy, then you can derive a formula for what proportion of hashpower is most profitable for you to control.

In particular, if your costs are determined by an electricity (and cooling, etc) price, E, in dollars per kWh and performance, r, in Joules per gigahash, then given your hashrate, h in terahash/second, your power usage in watts is (h*1e3*r), and you run this for 600 seconds on average between each block (h*r*6e5 Ws), which you divide by 3.6M to convert to kWh (h*r/6), then multiply by your electricity cost to get a dollar figure (h*r*E/6). Your revenue depends on the hashrate of the everyone else, which we’ll call g, and on average you receive (p*R*h/(h+g)) every 600 seconds where p is the price of Bitcoin in dollars and R is the reward (subsidy and fees) you receive from a block. Your profit is just the difference, namely h*(p*R/(h+g) – r*E/6). Assuming you’re able to manufacture and deploy hashrate relatively easily, at least in comparison to everyone else, you can optimise your profit by varying h while the other variables (bitcoin price p, block reward R, miner performance r, electricity cost E, and external hashpower g) remain constant (ie, set the derivative of that formula with respect to h to zero and simplify) which gives a result of 6gpR/Er = (g+h)^2.

This is solvable for h (square root both sides and subtract g), but if we assume Bitmain is clever and well funded enough to have already essentially optimised their profits, we can get a better sense of what this means. Since g+h is just the total bitcoin hashrate, if we call that t, and divide both sides, we get 6gpR/Ert = t, or g/t = (Ert)/(6pR), which tells us what proportion of hashrate the rest of the network can have (g/t) if Bitmain has optimised its profits, or, alternatively, we can work out h/t = 1-g/t = 1-(Ert)/(6pR), which tells us what proportion of hashrate Bitmain will have if it has optimised its profits. Plugging in E=$0.03 per kWh, r=0.1 J/GH, t=6e6 TH/s, p=$2400/BTC, R=12.5 BTC gives a figure of 0.9 – so given the current state of the network, and Guy Corem’s cost estimate, Bitmain would optimise its day to day profits by controlling 90% of mining hashrate. I’m not convinced $0.03 is an entirely reasonable figure, though — my inclination is to suspect something like $0.08 per kWh is more reasonable; but even so, that only reduces Bitmain’s optimal control to around 73%.
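
That optimisation is simple enough to check directly; a sketch using the same variable names (E in $/kWh, r in J/GH, t in TH/s, p in $/BTC, R in BTC per block):

def optimal_share(E, r, t, p, R=12.5):
    # profit-maximising fraction of total hashrate for a miner who can
    # freely scale their own deployment: h/t = 1 - E*r*t/(6*p*R)
    return 1 - (E * r * t) / (6 * p * R)

print(optimal_share(0.03, 0.1, 6e6, 2400))   # ~0.90
print(optimal_share(0.08, 0.1, 6e6, 2400))   # ~0.73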

Because of that incentive structure, if Bitmain’s current hashrate is lower than that amount, then lowering manufacturing costs for own-use miners by 15% (per Sam Cole’s estimates) and lowering ongoing costs by 15%-30% by using ASICBoost could have a compounding effect by making it easier to quickly expand. (It’s not clear to me that manufacturing a line of ASICBoost-only miners to reduce manufacturing costs by 15% necessarily makes sense. For one thing, this would come at a cost of not being able to mine with them while they are state of the art, then sell them on to customers once a more efficient model has been developed, which seems like it might be a good way to manage inventory. For another, it vastly increases the impact of ASICBoost not being available: rather than simply increasing electricity costs by 15%-30%, it would mean reducing output to 10%-25% of what it was, likely rendering the hardware immediately obsolete)

Using the same formula, it’s possible to work out a ratio of bitcoin price (p) to hashrate (t) that makes it suboptimal for a manufacturer to control a hashrate majority (at least just due to normal mining income): h/t < 0.5, 1-Ert/6pR < 0.5, so t > 3pR/Er. Plugging in p=2400, R=12.5, E=0.08, r=0.1, this gives a total hash rate of 11.25M TH/s, almost double the current hash rate. This hashrate target would obviously increase as the bitcoin price increases, halve if the block reward halves (if a fall in the inflation subsidy is not compensated by a corresponding increase in fee income eg), increase if the efficiency of mining hardware increases, and decrease if the cost of electricity increases. For a simpler formula, assuming the best hosting price is $0.08 per kWh, and while the Antminer S9’s efficiency at 0.1 J/GH is state of the art, and the block reward is 12.5 BTC, the global hashrate in TH/s should be at least around 5000 times the price (ie 3R/Er = 4687.5, near enough to 5000).
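
Or, as a sketch with the same parameters:

def min_decentralised_hashrate(p, E=0.08, r=0.1, R=12.5):
    # total hashrate (TH/s) above which controlling more than 50% of
    # hashpower stops being profit-maximising: t > 3*p*R/(E*r)
    return 3 * p * R / (E * r)

print(min_decentralised_hashrate(2400))   # 11.25M TH/s; 3*R/(E*r) = 4687.5x price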

Note that this target also sets a limit on the range at which mining can be profitable: if it’s just barely better to allow other people to control >50% of miners when your cost of electricity is E, then for someone else whose cost of electricity is 2*E or more, optimal profit is when other people control 100% of hashrate, that is, you don’t mine at all. Thus if the best large scale hosting globally costs $0.08/kWh, then either mining is not profitable anywhere that hosting costs $0.16/kWh or more, or there’s strong centralisation pressure for a mining hardware manufacturer with access to the cheapest electricity to control more than 50% of hashrate. Likewise, if Bitmain really can do hosting at $0.03/kWh, then either they’re incentivised to try to control over 50% of hashpower, or mining is unprofitable at $0.06/kWh and above.

If Bitmain (or any mining ASIC manufacturer) is supplying the majority of new hashrate, they actually have a fairly straightforward way of achieving that goal: if they dedicate 50-70% of each batch of ASICs built for their own use, and sell the rest, with the retail price of the sold miners sufficient to cover the manufacturing cost of the entire batch, then cashflow will mostly take care of itself. At $1200 retail price and $500 manufacturing costs (per Jimmy Song’s numbers), that strategy would imply targeting control of up to about 58% of total hashpower. The above formula would imply that’s the profit-maximising target at the current total hashrate and price if your average hosting cost is about $0.13 per kWh. (Those figures obviously rely heavily on the accuracy of the estimated manufacturing costs of mining hardware; at $400 per unit and $1200 retail, that would be 67% of hashpower, and about $0.09 per kWh)

Strategies like the above are also why this analysis doesn’t apply to miners who buy their hardware from a vendor rather than building their own: because every time they increase their own hash rate (h), the external hashrate (g) also increases as a direct result, it is not valid to assume that g is constant when optimising h, so the partial derivative and optimisation is in turn invalid, and the final result is not applicable.

 

Bitmain’s mining pool, AntPool, obviously doesn’t directly account for 58% or more of total hashpower; though currently they’re the pool with the most hashpower at about 20%. As I understand it, Bitmain is also known to control at least BTC.com and ConnectBTC, which add another 7.6%. The other “Emergent Consensus” supporting pools (Bitcoin.com, BTC.top, ViaBTC) account for about 22% of hashpower, however, which brings the total to just under 50%, roughly the right ballpark — and an additional 8% or 9% could easily be pointed at other public pools like slush or f2pool. Whether the “emergent consensus” pools are aligned due to common ownership and contractual obligations or simply similar interests is debatable, though. ViaBTC is funded by Bitmain, and Canoe was built and sold by Bitmain, which means strong contractual ties might exist; however, Jihan Wu, Bitmain’s co-founder, has disclaimed equity ties to BTC.top. Bitcoin.com is owned by Roger Ver, but I haven’t come across anything implying a business relationship between Bitmain and Bitcoin.com beyond supplier and customer. However John McAfee’s apparently forthcoming MGT mining pool is both partnered with Bitmain and advised by Roger Ver, so the existence of tighter ties may be plausible.

It seems likely to me that Bitmain is actually behaving more altruistically than is economically rational according to the analysis above: while it seems likely to me that Bitcoin.com, BTC.top, ViaBTC and Canoe have strong ties to Bitmain and that Bitmain likely has a high level of influence — whether due to contracts, business relationships or simply due to loyalty and friendship — this nevertheless implies less control over the hashpower than direct ownership and management, and likely less profit. This could be due to a number of factors: perhaps Bitmain really is already sufficiently profitable from mining that they’re focusing on building their business in other ways; perhaps they feel the risks of centralised mining power are too high (and would ultimately be a risk to their long term profits) and are doing their best to ensure that mining power is decentralised while still trying to maximise their return to their investors; perhaps the rate of expansion implied by this analysis requires more investment than they can cover from their cashflow, and additional hashpower is funded by new investors who are simply assigned ownership of a new mining pool, which may help Bitmain’s investors assure themselves they aren’t being duped by a pyramid scheme and gives more of an appearance of decentralisation.

It seems to me therefore there could be a variety of ways in which Bitmain may have influence over a majority of hashpower:

  • Direct ownership and control, that is being obscured in order to avoid an economic backlash that might result from people realising over 50% of hashpower is controlled by one group
  • Contractual control despite independent ownership, such that customers of Bitmain are committed to follow Bitmain’s lead when signalling blocks in order to maintain access to their existing hardware, or to be able to purchase additional hardware (an account on reddit appearing to belong to the GBMiners pool has suggested this is the case)
  • Contractual control due to offering essential ongoing services, eg support for physical hosting, or some form of mining pool services — maintaining the infrastructure for covert ASICBoost may be technically complex enough that Bitmain’s customers cannot maintain it themselves, but that Bitmain could relatively easily supply as an ongoing service to their top customers.
  • Contractual influence via leasing arrangements rather than sale of hardware — if hardware is leased to customers, or financing is provided, Bitmain could retain some control of the hardware until the leasing or financing term is complete, despite not having ownership
  • Coordinated investment resulting in cartel-like behaviour — even if there is no contractual relationship where Bitmain controls some of its customers in some manner, it may be that forming a cartel of a few top miners allows those miners to increase profits; in that case rather than a single firm having control of over 50% of hashrate, a single cartel does. While this is technically different, it does not seem likely to be an improvement in practice. If such a cartel exists, its members will not have any reason to compete against each other until it has maximised its profits, with control of more than 70% of the hashrate.

 

So, conclusions:

  • ASICBoost is worth using if you are able to. Bitmain is able to.
  • Nothing I’ve seen suggests Bitmain is economically clueless; so since ASICBoost is worth doing, and Bitmain is able to use it on mainnet, Bitmain are using it on mainnet.
  • Independently of ASICBoost, Bitmain’s most profitable course of action seems to be to control somewhere in the range of 50%-80% of the global hashrate at current prices and overall level of mining.
  • The distribution of hashrate between mining pools aligned with Bitmain in various ways makes it plausible, though not certain, that this may already be the case in some form.
  • If all this hashrate is benefiting from ASICBoost, then my estimate is that the value of ASICBoost is currently about $72M per annum
  • Avoiding dominant mining manufacturers tending towards supermajority control of hashrate requires either a high global hashrate or a relatively low price — the hashrate in TH/s should be about 5000 times the price in dollars.
  • The current price is about $2400 USD/BTC, so the corresponding hashrate to prevent centralisation at that price point is 12M TH/s. Conversely, the current hashrate is about 6M TH/s, so the maximum price that doesn’t cause untenable centralisation pressure is $1200 USD/BTC.

Worse Than FailureCodeSOD: Changing Requirements

Requirements change all the time. A lot of the ideology and holy wars that happen in the Git processes camps arise from different ideas about how source control should be used to represent these changes. Which commit changed which line of code, and to what end? But what if your source control history is messy, unclear, or… you’re just not using source control?

For example, let’s say you’re our Anonymous submitter, and find the following block of code. Once upon a time, this block of code enforced some mildly complicated rules about what dates were valid to pick for a dashboard display.

Can you tell which line of code was a reaction to a radically changed requirement?

function validateDashboardDateRanges (dashboard){
    return true; 
    if(dashboard.currentDateRange && dashboard.currentDateRange.absoluteEndDate && dashboard.currentDateRange.absoluteStartDate) {
        if(!isDateRangeCorrectLength(dashboard.currentDateRange.absoluteStartDate, dashboard.currentDateRange.absoluteEndDate)){
            return false;
        }
    }
    if(dashboard.containers){
        for(var c = 0; c < dashboard.containers.length; c++){
            var container = dashboard.containers[c];
            for(var i = 0; i < container.widgets.length; i++){
                if(container.widgets[i].settings){
                    if(container.widgets[i].settings.customDateRange){
                        if(container.widgets[i].settings.dateRange.relativeDate){
                            if (container.widgets[i].settings.dateRange.relativeDate === '-1y'){
                                return false;
                            }
                        } else if (!isDateRangeCorrectLength(container.widgets[i].settings.dateRange.absoluteStartDate, container.widgets[i].settings.dateRange.absoluteEndDate)){
                            return false;
                        }
                    }
                }
            }
        }
    }
    return true;
}
[Advertisement] Otter allows you to easily create and configure 1,000's of servers, all while maintaining ease-of-use, and granular visibility down to a single server. Find out more and download today!

Planet DebianLars Wirzenius: Adding Yakking to Planet Debian

In a case of blatant self-promotion, I am going to add the Yakking RSS feed to the Planet Debian aggregation. (But really because I think some of the readership of Planet Debian may be interested in the content.)

Yakking is a group blog by a few friends aimed at new free software contributors. From the front page description:

Welcome to Yakking.

This is a blog for topics relevant to someone new to free software development. We assume you are already familiar with computers, and are curious about participating in the production of free software. You don't need to be a programmer: software development requires a wide variety of skills, and you can be a valued core contributor to a project without being a programmer.

If anyone objects, please let me know.

Don MartiBlind code reviews experiment

In case you missed it, here's a study that made the rounds earlier this year: Gender differences and bias in open source: Pull request acceptance of women versus men:

This paper presents the largest study to date on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. Surprisingly, our results show that women's contributions tend to be accepted more often than men's. However, women's acceptance rates are higher only when they are not identifiable as women.

A followup, from Alice Marshall, breaks out the differences between acceptance of "insider" and "outsider" contributions.

For outsiders, women coders who use gender-neutral profiles get their changes accepted 2.8% more of the time than men with gender-neutral profiles, but when their gender is obvious, they get their changes accepted 0.8% less of the time.

We decided to borrow the blind auditions concept from symphony orchestras for the open source experiments program.

The experiment, launching this month, will help reviewers who want to try breaking habits of unconscious bias (whether by gender or insider/outsider status) by concealing the name and email address of a code author during a review on Bugzilla. You'll be able to un-hide the information before submitting a review, if you want, in order to add a personal touch, such as welcoming a new contributor.

Built with the WebExtension development work of Tomislav Jovanovic ("zombie" on IRC), and the Bugzilla bugmastering of Emma Humphries. For more info, see the Bugzilla bug discussion.

Data collection

The extension will "cc" one of two special accounts on a bug, to indicate if the review was done partly or fully blind. This lets us measure its impact without having to make back-end changes to Bugzilla.

(Yes, WebExtensions let you experiment with changing a user's experience of a site without changing production web applications or content sites. Bonus link: FilterBubbler.)

Coming soon

A first release is on a.m.o., here: Blind Reviews BMO Experiment, if you want an early look. We'll send out notifications to relevant places when the "last" bugs are fixed and it's ready for daily developer use.

Planet Linux AustraliaColin Charles: CFP for Percona Live Europe Dublin 2017 closes July 17 2017!

I’ve always enjoyed the Percona Live Europe events, because I consider them to be a lot more intimate than the event in Santa Clara. It started in London, had a smashing success last year in Amsterdam (conference sold out), and by design the travelling conference is now in Dublin from September 25-27 2017.

So what are you waiting for when it comes to submitting to Percona Live Europe Dublin 2017? The call for presentations closes on July 17 2017, and the conference has a pretty diverse topic structure (MySQL [and its diverse ecosystem including MariaDB Server naturally], MongoDB and other open source databases including PostgreSQL, time series stores, and more).

And I think we also have a pretty diverse conference committee in terms of expertise. You can also register now. Early bird registration ends August 8 2017.

I look forward to seeing you in Dublin, so we can share a pint of Guinness. Sláinte.

Planet DebianLucy Wayland: Basic Chilli Template

Amounts are to taste:
[Stage one]
Chopped red onion
Chopped garlic
Chopped fresh ginger
Chopped big red chillies (mild)
Chopped birds eye chillies (red or green, quite hot)
Chopped scotch bonnets (hot)
[Fry onion in some olive oil. When getting translucent, add rest of ingredients. May need to add some more oil. When the garlic is browning, move on to stage two.]
[Stage two]
Some tins of chopped tomato
Some tomato puree
Some basil
Some thyme
Bay leaf optional
Some sliced mushroom
Some chopped capsicum pepper
Some kidney beans
Other beans optional (butter beans are nice)
Lentils optional (Pro tip: if adding lentils, especially red lentils, I recommend adding some garam masala as well. Lifts the flavour.)
Veggie mince optional
Pearled barley very optional
Stock (some reclaimed from swilling water around the tomato tins)
Water to keep topping up with if it get too sticky or dry
Dash of red wine optional
Worcester sauce optional
Any other flavouring you feel like optional (I quite often add random herbs or spices)
[[Secret ingredient: a spoonful of Marmite]]
[Cook everything up together, but wait until there is enough fluid before you add the dry/sticky ingredients in.]
[Serve with carb of choice. I'm currently fond of using Ryvita as a dipper instead of tortilla chips.]
[Also serve with a “cooler” such as natural yogurt, soured cream or something else.]

You want more than one type of chilli in there to broaden the flavour. I use all three, plus occasionally others as well. If you are feeling masochistic you can go hotter than scotch bonnets, but although you may get something of the same heat, I think you lose something in the flavour.

BTW – if you get the chance, of all the tortilla chips, I think blue corn ones are the best. I only seem to find them in health food shops.

There you go. It’s a “Zen” recipe, which is why I couldn’t give you a link. You just do it until it looks right, feels right, tastes right. And with practice you get it better and better.


,

Planet DebianJose M. Calhariz: Crossgrading a minimal install of Debian 9

While testing the previous instructions for a full crossgrade I ran into trouble. Here are the results of my tests of a full crossgrade of a minimal installation of Debian inside a VM.

First you need to install a 64bits kernel and boot with it. See my previous post on how to do it.

Second you need to do a bootstrap of crossgrading:

 apt-get clean
 apt-get upgrade
 apt-get --download-only install dpkg:amd64 tar:amd64 apt:amd64 bash:amd64 dash:amd64 init:amd64
 # run the install twice: the first pass can fail part-way due to
 # dependency ordering, and the second pass picks up the rest
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --install /var/cache/apt/archives/*_amd64.deb
 dpkg --print-architecture
 dpkg --print-foreign-architectures
 apt-get --fix-broken --allow-remove-essential install

Third do a full crossgrade:

 apt-get install --allow-remove-essential $(dpkg --list | grep :i386 | awk '{print $2}' | sed -e s/:i386// )

This procedure seems to be a little fragile, but worked most of the time for me.

Planet DebianJose M. Calhariz: Crossgrading the kernel in Debian 9

I have a very old installation of 32bits Debian running on new hardware. Until now, running a 64bits kernel was enough to use more than 4GiB of RAM efficiently. The only problems I found were the proprietary drivers from AMD/ATI and NVIDIA, which did not like this mixed environment, and some problems with openafs, easily solved with the help of the openafs package maintainers. Crossgrading Qemu/KVM to 64 bits did not pose a problem, so I have been running 64bits VMs for some time.

But now the nouveau driver does not work with my new display adapter, and I need to run tools from OpsCode that are not available as 32bits. So it is time to do a crossgrade. Having found some problems along the way, I cannot recommend it to inexperienced people. It is time to investigate the issues and file bug reports with Debian where appropriate.

If you run a 32bits Debian installation you can easily install a 64bits kernel. The procedure is simple and well tested.

dpkg --add-architecture amd64
apt-get update
apt-get install linux-image-amd64:amd64

And reboot to test the new kernel.

You can expect here more articles about crossgrading.

LongNowThe Artangel Longplayer Letters: Alan Moore writes to Stewart Lee

Alan Moore (left) chose comedian Stewart Lee as the recipient of his Longplayer letter.


In January 02017, Iain Sinclair, a writer and filmmaker whose recent work focuses on the psychogeography of London, wrote a letter to writer Alan Moore as part of the Artangel Longplayer Letters series. The series is a relay-style correspondence, with the recipient of the letter responding with a letter to a different recipient of his choosing. Iain Sinclair wrote to Alan Moore. Now, Alan Moore has written a letter to standup comedian Stewart Lee.

The first series of correspondence in the Longplayer Letters, lasting from 02013-02015 and including correspondence with Long Now board members Stewart Brand, Esther Dyson, and Brian Eno, ended when a letter from Manuel Arriga to Giles Fraser went unanswered. You can find the previous correspondences here.


From: Alan Moore, Phippsville, Northampton

To: Stewart Lee, London

2 February 2017

Dear Stewart,

I’ll hopefully have spoken to you before you receive this and filled you in on what our bleeding game is. Iain Sinclair kicked off by writing a letter to me, and the idea is that I should kind-of-reply to Iain’s letter in the form of a letter to a person of my choosing. The main criterion seems to be that this person be anybody other than Iain, so you’re probably beginning to see how perfectly you fit the bill. Also, since your comedy often consists of repeating the same phrase, potentially dozens of times in the space of a couple of minutes, I thought you’d bring a contrasting, if not jarring, point of view to the whole Longplayer process.

As you’ve no doubt realised, this is actually a chain letter. In 2016, dozens of the world’s most beloved celebrities, the pro-European British public and the population of the USA all broke the chain, as did my first choice as a recipient of this letter, Kim Jong-nam. I’m just saying.

In his letter, Iain raised the point of what a problematic concept ‘long-term’ is for those of us at this far end of our character-arc; little more than apprentice corpses, really. Mind you – with the current resident of the White House – I suppose this is currently a problem whatever age we are. In terms of existential unease, eleven is the new eighty.

Iain also talks about “having tried, for too many years, to muddy the waters with untrustworthy fictions and ‘alternative truths’, books that detoured into other books, I am now colonised by longplaying images and private obsessions, vinyl ghost in an unmapped digital multiverse”, quoting Sebald with “And now, I am living the wrong life.”

I have to admit, that resonated with me. I’ve been thinking lately about the relationship between art and the artist, and I keep coming back to that Escher image of two hands, each holding a pencil, each sketching and creating the other (EscherSketch?). Yes, on a straightforward material level we are creating our art – our writing, our music, our comedy – but at the same time, in that a creator is modified by any significant work that they bring into being, the art is also altering and creating us. And when we embark upon our projects, it’s generally on little more than an inspired whim and with no idea at all about the person that we’ll be at the end of the process. Inevitably, we fictionalise ourselves. In terms of our personal psychology, we clearly don’t have what you’d call a plan, do we? Thus we actually have little say in the person that we end up as. Nobody could be this deliberately.

Adding to the problem for some of us is that, as artists, we tend to cultivate multiplying identities. The person that I am when I’m writing an introduction to William Hope Hodgson’s The House on the Borderland is different to my everyday persona as someone who is continually worshipping a snake and being angry about Batman. My persona when writing introductions is wearing an Edwardian smoking jacket and puffing smugly on a Meerschaum. I know that my recent infatuation with David Foster Wallace stemmed from an awareness that the persona he adopted for many of his essays and the various fictionalised versions of David Foster Wallace that appear in his novels and short stories were different entities to him-in-himself. I wonder how you, and also how the Comedian Stewart Lee, feels about this? I suppose at the end of the day this applies to everybody, doesn’t it? I mean you don’t have to be an artist to present yourself differently according to who you’re presenting yourself to, and in what role. We don’t talk to our parents the way that we do to our sexual partners, and we don’t talk to our sexual partners how we do to our houseplants. With good reason. The upshot of this is that all human identity is probably consciously or unconsciously constructed, and that for this reason its default position is shifting and fluid. What I’m saying is we may be normal.

Iain Sinclair quotes the verdict on George Michael’s death, “Unexplained but not suspicious”, as a fair assessment of all human lives, before going on to mention Jeremy Prynne’s offended response to a request for an example of his thought, “Like a lump of basalt.” The idea being that any thought is really part of a field of awareness; part of a cerebral weather condition that can’t be hacked out of its context without being “as horrifying as that morning radio interlude when listeners channel-hop and make their cups of tea: Thought for the Day.” I know what he means, but of course couldn’t help thinking about your New Year’s Day curatorship of the Today program, where you impishly got me to contribute to the religiously-inclined Thought for the Day section, broadcast at an unearthly hour of the morning, with an unfathomable diatribe about my sock-puppet snake deity, Glycon. Of course, I’m not saying that this is the specific edition of the show that Iain tuned into and found horrifying, but we have no indication that it wasn’t. Thinking about it, assuming that Thought for the Day is archived, then thanks to your clever inclusion of the world’s sole Glycon worshipper in amongst a fairly uniform rota of rabbis, imams and vicars, future social historians are going to have a drastically skewed and disproportionate view regarding early 21st-century spiritual beliefs. Actually, that’s a halfway decent call back to the idea of long-term thinking.

After circling around like an unusually keen-eyed intellectual buzzard for a couple of pages, Iain alights on the ‘time as solid’ premise of Jerusalem, which he likens to “a form of discontinued public housing in which pastpresentfuture coexist” (and yes, I am going to continue quoting his letter in an effort to bring some quality prose to my stretch of this serial epistle). He talks about that central idea of a muttering community of the living, the dead and the unborn, all existing in their different reaches of an eternal Now, referencing Yeats with “the living can assist the imagination of the dead” and remarking how much these words have defined his literary project since he first started writing about London. In light of what I was saying about identity earlier, I was reading in New Scientist that what we think of as ‘our’ consciousness is actually partly infiltrated by and composed of the consciousnesses that surround us. This of course includes the consciousness of a deceased person that we may be considering, as well as that of any imaginary person we may be projecting on the present or the future. I would be a slightly different creator and a slightly different person, for example, had I never entered into a consideration of the work and the consciousness of Flann O’Brien, or Mervyn Peake, or Angela Carter, or Kathy Acker, or William Burroughs, or a thousand other creators. Looked at like that, it’s as if we take on elements of other people’s consciousness, living or dead or imaginary, almost as a way of building up our psychological immune system. This makes us all fluctuating composites, feeding into and out of each other, and perhaps suggests a long-term possibility that goes beyond our mortal lifespan or status as individuals. If this were the case, if identity were a fluid commodity and we all flowed in and out of each other, then you’d have to see someone like John Clare as an instance where the levees had been overwhelmed and he was pretty much drowning in everybody.

Mentioning Clare’s asylum-mate Lucia Joyce and my rather brave stab at approximating her dad’s language, Iain introduced a notion of William Burroughs’ manufacture that I hadn’t come across before, that of the ‘word vine’: write a word, and the next word will suggest itself, and so on. I have to say, that is how the writing process seems to work with me. Perhaps people assume that writers have an idea and then they write it down fully formed, but that isn’t how it works in practice, or at least not for me. I don’t know if it’s the same for you, but for me most of the writing is generated by the act of writing itself. An average idea, if properly examined, may turn out to have strong neural threads of association or speculation that link it to an absolutely brilliant idea. I’m sure you must have found this with comedy routines; that a minor commonplace absurdity will open up logic gates on a string of increasingly striking or funny ideas, like finding a nugget that leads to a small but profitable gold seam. I don’t think creative people have ideas like a hen lays eggs; more that they arise by a scarcely-definable action from largely involuntary mental processes.

Still talking about Burroughs, Iain then moved on to a discussion of dreams via Burroughs’ My Education – “Couldn’t find my room as usual in the Land of the Dead. Followed by bounty hunters.” On Jerusalem’s pet subject of Eternalism, Iain took a position that sees our real eternity as residing in dreams, “in residues of sleep, in the community of sleepers, between worlds, beyond mortality.” While I’m not sure about that, it does admittedly feel right, perhaps because of the classical world’s equivalence between the world of dreams and the world of the dead, dreams being the only place you reliably met dead people.

I’ve become much more interested in dreams since Steve Moore’s death in 2014. Realising how much I missed reading through the last couple of weeks of Steve’s methodically recorded dreams on visits up to Shooters Hill, I’ve even started a much sparser and more impoverished dream-record of my own. I just really love the flavour of dreams, whether mine or other people’s. Iain reports a dream where me and him were ascending the stairs of the Princelet Street synagogue, the one featured in Rodinsky’s Room, in what seemed to him almost like an out-take from Chris Petit’s The Cardinal and the Corpse. After a ritual exchange of velvet cricket caps between us, the vista outside the synagogue window began to strobe like the climactic vision in Hodgson’s The House on the Borderland. Martin Stone – Pink Fairies, Mighty Baby – was laying out a pattern of white lines on a black case, and explained that he’d recently found a first edition of Hodgson’s book in an abandoned villa, lavishly inscribed and previously owned by Aleister Crowley. Isn’t that fantastic? Knowing that I was there in the dream gives me a sort of pseudo-memory of actually having been present at this unlikely event.

While I’m not sure about the connection between dreams and Eternalism, Iain is probably right. I very recently stumbled across – in a fascinating collection of outré individuals from David Bramwell entitled The Odditorium – the peculiar scientific theories of J.W. Dunne. Dunne proposed a kind of solid time similar to the Einsteinian state of affairs posited in Jerusalem, which I found mildly pleasing just as a supporting argument from another source, but I was taken aback by Dunne’s reasoning, in which he suggests that the accreted ‘substance’ of our dreams is somehow crucial to the phenomenon. This is so like the idea in Jerusalem of the timeless higher dimension of Mansoul being made from accumulated dream-stuff and chimes so well with Iain’s comment about eternity being located “in residues of sleep” that I should probably chew these notions over a bit more before coming to any conclusions.

Dreams certainly seem to be in the air at the moment, with dreams being the theme of our next-but-one Arts Lab magazine to be released, and Iain winding up his letter by referring to Steve Moore’s dream-centred rehearsals for eternity up there on Shooters Hill. For the anniversary of Steve’s death, I’ve decided to pay a long-postponed visit to his house, or rather to the structure that’s replaced it since the place was sold and rebuilt. I’ll hopefully get a chance to visit the Shrewsbury Lane burial mound where we scattered Steve’s ashes by the light of the supermoon following tropical storm Bertha in August 2014. Around a month or so ago I noticed an article in The Guardian about the classification of various places as World Heritage sites by English Heritage, and was ridiculously pleased to find that the Shrewsbury Mound, the last surviving Bronze Age burial mound of several on Shooters Hill, the others having all been bulldozed in the early 1930s, was to be included. Steve’s instructions that his final resting place should be his favourite local landmark seem to me to be a way of fusing with the landscape and its history, its dreamtime if you like, which is perhaps as close to a genuine long-term strategy as it’s possible for a human being to get.

Anyway, it’s late – the moon tonight is a beautiful first-quarter crescent – and I should probably wrap this up. I’d like to leave you with the ‘Brexit Poem’ that I jotted down in an idle moment a month or two ago:

“I wrote this verse the moment that I heard/ the good news that we’d got our language back/ whence I, in a misjudged racial attack,/ kicked out French, German and Italian words/ and then I ”

With massive love and respect, as ever –

Alan


Alan Moore was born in Northampton in 1953 and is a writer, performer, recording artist, activist and magician.
His comic-book work includes Lost Girls with Melinda Gebbie, From Hell with Eddie Campbell and The League of Extraordinary Gentlemen with Kevin O’Neill. He has worked with director Mitch Jenkins on the Showpieces cycle of short films and on forthcoming feature film The Show, while his novels include Voice of the Fire (1996) and his current epic Jerusalem (2016). Only about half as frightening as he looks, he lives in Northampton with his wife and collaborator Melinda Gebbie.

Stewart Lee was born in 1968 in Shropshire but grew up in Solihull. He started out on the stand-up comedy circuit in London in 1989, and started to work out whatever it was he was trying to do soon after the turn of the century. He has written and performed four series of his own stand-up show on BBC2, had shows on at the Edinburgh fringe for 28 of the last 30 years, and has the last six of his full-length stand-up shows out on DVD/download/whatever. He is the writer or co-writer of five theatre pieces and two art installations, a number of radio series, three books about comedy and a bad novel. He lives in Stoke Newington, North London with his wife, also a comedian, and two children. He was enrolled in the literary society The Friends of Arthur Machen by Alan Moore, and is a regular, if disguised, presence on London’s volunteer-fronted arts radio station Resonance 104.4 fm. His favourite comic-book characters are Deathlok The Demolisher, Howard The Duck, Conan The Barbarian, Concrete and The Thing. His favourite bands/musicians are The Fall, Giant Sand, Dave Graney, John Coltrane, Miles Davis, Derek Bailey, Evan Parker, Bob Dylan, The Byrds and Shirley Collins. His favourite filmmakers are Sergio Leone, Sergio Corbucci, Andrew Kotting, Hal Hartley and Akira Kurosawa. His favourite writers are Arthur Machen, William Blake, Iain Sinclair, Alan Moore, Stan Lee, Ray Bradbury, DH Lawrence, Thomas Hardy, Philip Larkin, Richard Brautigan, Geoff Dyer, Neil M Gunn, Francis Brett Young, Eric Linklater and Robert E Howard.

Debian Administration Implementing two factor authentication with perl

Two factor authentication is a system designed to improve security by requiring a second factor, in addition to a username/password pair, to access a particular resource. Here we'll look briefly at how you add two factor support to your applications with Perl.
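For a flavour of what's involved, here is a minimal sketch of verifying a time-based one-time password (TOTP), assuming the Authen::OATH and Convert::Base32 modules from CPAN and a hard-coded placeholder secret (the full article may take a different approach):

#!/usr/bin/perl
use strict;
use warnings;
use Authen::OATH;
use Convert::Base32 qw(decode_base32);

# Placeholder shared secret; in practice it is generated per-user
# and stored server-side.
my $secret = decode_base32("JBSWY3DPEHPK3PXP");

my $given = shift // die "usage: $0 <six-digit-code>\n";

# Compute the code for the current 30-second window and compare;
# sprintf keeps any leading zeros the module might drop.
my $expected = sprintf("%06d", Authen::OATH->new()->totp($secret));
print $given eq $expected ? "OK\n" : "rejected\n";

The same module also provides hotp() for counter-based tokens.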

Planet DebianSven Hoexter: environment variable names with dots and busybox 1.26.0 ash

In case you're, for example, using Alpine Linux 3.6 based Docker images and you've been passing through environment variable names with dots, you might now miss them in your actual environment. It seems that with busybox 1.26.0 the busybox ash got a lot stricter about validating environment variable names, and you can no longer pass through variable names with dots in them. They just won't be there. Running ash interactively never let you create such names, but until now you could do something like this in your Dockerfile

ENV foo.bar=baz

and later on access a variable "foo.bar".

bash still allows those invalid variable names and is way more tolerant. So to be nice to your devs, and still bump your docker image version, you can add bash and ensure you're starting your application with /bin/bash instead of /bin/sh inside of your container.
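A minimal sketch of that workaround (untested; the image tag and start command are placeholders):

FROM alpine:3.6
# install a shell that tolerates dots in variable names
RUN apk add --no-cache bash
ENV foo.bar=baz
# launching through bash instead of busybox ash keeps "foo.bar" visible
CMD ["/bin/bash", "-c", "exec /usr/local/bin/your-app"]

An interactive check with docker run -ti <image> /bin/bash followed by env should then still show foo.bar.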

Links

Cory DoctorowTalking Walkaway with Reason Magazine


Of all the press-stops I did on my tour for my novel Walkaway, I was most excited about my discussion with Katherine Mangu-Ward, editor-in-chief of Reason Magazine, where I knew I would have a challenging and meaty conversation with someone who was fully conversant with the political, technological and social questions the book raised.

I was not disappointed.


The interview, which was published online today, is a substantial and challenging one that gets at the core of what I’d hoped to do with the book. I hope you’ll give it a read.

But I think Uber is normal and dystopian for a lot of people, too. All the dysfunctions of Uber’s reputation economics, where it’s one-sided—I can tank your business by giving you an unfair review. You have this weird, mannered kabuki in some Ubers where people are super obsequious to try and get you to five-star them. And all of that other stuff that’s actually characteristic of Down and Out in the Magic Kingdom. I probably did predict Uber pretty well with what would happen if there are these reputation economies, which is that you would quickly have a have and a have-not. And the haves would be able to, in a very one-sided way, allocate reputation to have-nots or take it away from them, without redress, without rule of law, without the ability to do any of the things we want currency to do. So it’s not a store of value, it’s not a unit of exchange, it’s not a measure of account. Instead this is just a pure system for allowing the powerful to exercise power over the powerless.

Isn’t the positive spin on that: Well, yeah, but the way we used to do that allocation was by punching each other in the face?


Well, that’s one of the ways we used to. I was really informed by a book by David Graeber called Debt: The First 5,000 Years, where he points out that the anthropological story that we all just used to punch each other in the face all the time doesn’t really match the evidence. That there’s certainly some places where they punched each other in the face and there’s other places where they just kind of got along. Including lots of places where they got along through having long arguments or guilting each other.

I don’t know. Kabuki for stars on the Uber app still seems better than the long arguments or the guilt.

That’s because you don’t drive Uber for a living and you’ve never had to worry that tomorrow you won’t be able to.


Cory Doctorow’s ‘Fully Automated Luxury Communist Civilization’
[Katherine Mangu-Ward/Reason]


(Photo: Julian Dufort)

Planet DebianReproducible builds folks: Reproducible Builds: week 115 in Stretch cycle

Here's what happened in the Reproducible Builds effort between Sunday July 2 and Saturday July 8 2017:

Reproducible work in other projects

Ed Maste pointed to a thread on the LLVM developer mailing list about container iteration being the main source of non-determinism in LLVM, together with discussion on how to solve this. Ignoring build path issues, container iteration order was also the main issue with rustc, which was fixed by using a fixed-order hash map for certain compiler structures. (It was unclear from the thread whether LLVM's builds are truly path-independent or rather that they haven't done comparisons between builds run under different paths.)

Bugs filed

Patches submitted upstream:

Reviews of unreproducible packages

52 package reviews have been added, 62 have been updated and 20 have been removed in this week, adding to our knowledge about identified issues.

No issue types were updated or added this week.

Weekly QA work

During our reproducibility testing, FTBFS bugs have been detected and reported by:

  • Adrian Bunk (143)
  • Andreas Beckmann (1)
  • Dmitry Shachnev (1)
  • Lucas Nussbaum (3)
  • Niko Tyni (3)
  • Scott Kitterman (1)
  • Sean Whitton (1)

diffoscope development

Development continued in git with contributions from:

  • Ximin Luo:
    • Add a PartialString class to help with lazily-loaded output formats such as html-dir.
    • html and html-dir output:
      • add a size-hint to the diff headers and lazy-load buttons
      • add new limit flags and deprecate old ones
    • html-dir output
      • split index pages up if they get too big
      • put css/icon data in separate files to avoid duplication
    • main: warn if loading a diff but also giving diff-calculation flags
    • Test fixes for Python 3.6 and CI environments without imagemagick (#865625).
    • Fix a performance regression (#865660) involving the Wagner-Fischer algorithm for calculating Levenshtein distance.

With these changes, we are able to generate a dynamically loaded HTML diff for GCC-6 that can be displayed in a normal web browser. For more details see this mailing list post.

Misc.

This week's edition was written by Ximin Luo, Bernhard M. Wiedemann and Chris Lamb & reviewed by a bunch of Reproducible Builds folks on IRC & the mailing lists.

Planet DebianYakking: Time - What are clocks?

System calls

You can't talk about time without clocks. A standard definition of a clock is "an instrument to measure time".

Strictly speaking the analogy isn't perfect: these clocks aren't just different ways of measuring time, they measure somewhat different notions of time. But this is a somewhat philosophical point.

The interface for this in Linux is a set of system calls taking a clockid_t parameter, such as clock_gettime(2), clock_getres(2), clock_settime(2), clock_nanosleep(2), timer_create(2), timer_settime(2), timerfd_create(2) and timerfd_settime(2), though not all of them are valid for every clock.

This is not an exhaustive list of system calls, since there are also older system calls, kept for compatibility with POSIX or other UNIX standards, that offer a subset of the functionality of the above ones.

System calls beginning clock_ are about getting the time or waiting; those beginning timer_ are for setting periodic or one-shot timers; the timerfd_ calls are for timers that can be put in event loops.

For the sake of not introducing too many concepts at once, we're going to start with the simplest clock and work our way up.

CLOCK_MONOTONIC_RAW

The first clock we care about is CLOCK_MONOTONIC_RAW.

This is the simplest clock.

It is initialised to an arbitrary value on boot and counts up at a rate of one second per second to the best of its ability.

This clock is of limited use on its own, since the only clock-related system calls that work with it are clock_gettime(2) and clock_getres(2). It can be used to determine the order of events (if the time of each event was recorded) and the relative time difference between when they happened, since we know that the clock increments at one second per second.

Example

In the program below, we time how long it takes to read 4096 bytes from /dev/zero, and print the result.

#include <stdio.h> /* fopen, fread, printf */
#include <time.h> /* clock_gettime, CLOCK_MONOTONIC_RAW, struct timespec */

int main(void) {
    FILE *devzero;
    int ret;
    struct timespec start, end;
    char buf[4096];

    devzero = fopen("/dev/zero", "r");
    if (!devzero) {
        return 1;
    }

    /* get start time */
    ret = clock_gettime(CLOCK_MONOTONIC_RAW, &start);
    if (ret < 0) {
        return 2;
    }

    if (fread(buf, sizeof *buf, sizeof buf, devzero) != sizeof buf) {
        return 3;
    }

    /* get end time */
    ret = clock_gettime(CLOCK_MONOTONIC_RAW, &end);
    if (ret < 0) {
        return 4;
    }

    /* compute end - start, borrowing one second
       if the nanosecond difference goes negative */
    end.tv_nsec -= start.tv_nsec;
    if (end.tv_nsec < 0) {
        end.tv_sec--;
        end.tv_nsec += 1000000000L;
    }
    end.tv_sec -= start.tv_sec;

    printf("Reading %zu bytes took %.0f.%09ld seconds\n",
           sizeof buf, (double)end.tv_sec, (long)end.tv_nsec);

    return 0;
}

You're possibly curious about the naming of the clock called MONOTONIC_RAW.

In the next article we will talk about CLOCK_MONOTONIC, which may help you understand why it's named the way it is.

I suggested the uses of this clock are for the sequencing of events and for calculating the relative period between them. If you can think of another use please send us a comment.

If you like to get hands-on, you may want to try reimplementing the above program in your preferred programming language, or extending it to time arbitrary events.

CryptogramMore on the NSA's Use of Traffic Shaping

"Traffic shaping" -- the practice of tricking data to flow through a particular route on the Internet so it can be more easily surveiled -- is an NSA technique that has gotten much less attention than it deserves. It's a powerful technique that allows an eavesdropper to get access to communications channels it would otherwise not be able to monitor.

There's a new paper on this technique:

This report describes a novel and more disturbing set of risks. As a technical matter, the NSA does not have to wait for domestic communications to naturally turn up abroad. In fact, the agency has technical methods that can be used to deliberately reroute Internet communications. The NSA uses the term "traffic shaping" to describe any technical means that deliberately reroutes Internet traffic to a location that is better suited, operationally, to surveillance. Since it is hard to intercept Yemen's international communications from inside Yemen itself, the agency might try to "shape" the traffic so that it passes through communications cables located on friendlier territory. Think of it as diverting part of a river to a location from which it is easier (or more legal) to catch fish.

The NSA has clandestine means of diverting portions of the river of Internet traffic that travels on global communications cables.

Could the NSA use traffic shaping to redirect domestic Internet traffic -- ­emails and chat messages sent between Americans, say­ -- to foreign soil, where its surveillance can be conducted beyond the purview of Congress and the courts? It is impossible to categorically answer this question, due to the classified nature of many national-security surveillance programs, regulations and even of the legal decisions made by the surveillance courts. Nevertheless, this report explores a legal, technical, and operational landscape that suggests that traffic shaping could be exploited to sidestep legal restrictions imposed by Congress and the surveillance courts.

News article. NSA document detailing the technique with Yemen.

This work builds on previous research that I blogged about here.

The fundamental vulnerability is that routing information isn't authenticated.

Worse Than FailureThe Defensive Contract

Working for a contractor within the defense industry can be an interesting experience. Sometimes you find yourself trying to debug an application from a stack trace which was handwritten and faxed out of a secured facility with all the relevant information redacted by overzealous security contractors who believe that you need a Secret clearance just to know that it was a System.NullReferenceException. After weeks of frustration when you are unable to solve anything from a sheet of thick black Sharpie stripes, they may bring you there for on-site debugging.

Security cameras perching on a seaside rock, like they were seagulls

Beforehand, they will lock up your cell phone, cut out the WiFi antennas from your development laptop, and background check you so thoroughly that they’ll demand explanations for the sins of your great-great-great-great grandfather’s neighbor’s cousin’s second wife’s stillborn son before letting you in the door. Once inside, they will set up temporary curtains around your environment to block off any Secret-rated workstation screens to keep you from peeking and accidentally learning what the Top Secret thread pitch is for the lug nuts of the latest black-project recon jet. Then they will set up an array of very annoying red flashing lights and constant alarm whistles to declare to all the regular staff that they need to watch their mouths because an uncleared individual is present.

Then you’ll spend several long days trying to fix code. But you’ll have no Internet connection, no documentation other than whatever three-ring binders full of possibly-relevant StackOverflow questions you had the foresight to prepare, and the critical component which reliably triggers the fault has been unplugged because it occasionally sends Secret-rated UDP packets, and, alas, you’re still uncleared.

When you finish the work, if you’re lucky they’ll even let you keep your laptop. Minus the hard drive, of course. That gets pulled, secure-erased ten times over, and used for target practice at the local Marine battalion’s next SpendEx.

Despite all the inherent difficulties though, defense work can be very financially-rewarding. If you play your cards right, your company may find itself milking a 30-year-long fighter jet development project for all it’s worth with no questions asked. That’s good for salaries, good for employee morale, and very good for job security.

That’s not what happened to Nikko, of course. No. His company didn’t play its cards right at all. In fact, they didn’t even have cards. They were the player who walked up to the poker table after the River and went all-in despite not even being dealt into the game. “Hey,” the company’s leaders said to themselves, “Yeah we’ll lose some money, but at least we get to play with the big boys. That’s worth a lot, and someday we’ll be the lead contractor for the software on the next big Fire Control Radar!”

So Nikko found himself working on a project his company was the subcontractor (a.k.a. the lowest bidder) for. But in their excited rush to take on the work, nobody read the contract before signing it as-is. The customer’s requirements for this component were vague, contradictory, at times absurd, and of course the contract offered no protection for Nikko’s company.

In fact, months later when Nikko–not yet aware of the mess he was in–met with engineers from the lead contractor–whom we’ll call Acme–for guidance on the project, one of them plainly told him in an informal context “Yeah, it’s a terrible component. We just wanted to get out from under it. It’s a shame you guys bid on it…”

The project began, using a small team of a project manager, Nikko as the experienced lead, and two junior engineers. Acme did not make things easy on them. They were expected to write all code at Acme’s facilities, on a network with no Internet access. They were asked to bring their own laptops in to develop on, but the information security guys refused and instead offered them one 15-year-old Pentium 4 that the three engineers were expected to share. Of course, using such an ancient system meant that a clean compile took 20 minutes, and the hidden background process that the security guys used to audit file access constantly brought disk I/O to a halt.

But development started anyway, despite all the red flags. They were required to use an API from another subcontractor. However, that subcontractor would only give them obfuscated JAR files with no documentation. Fortunately it was a fairly simple API and the team had some success decompiling it and figuring out how it worked.

But their next hurdle was even worse. All the JAR did was communicate with a REST interface from a server. But due to the way the Acme security guys had things set up, there was no test server on the development network. It wasn’t allowed. Period.

The actual server lived in an integration lab located several miles away, but coding was not allowed there. Access to it was tightly-controlled and scheduled. Nikko found himself writing code, checking it in, and scheduling a time slot at the lab (which often took days) to try out his changes.

The integration lab was secured. He could not bring anything in and Acme information security specialists had to sign off on strict paperwork every time he wanted to transfer the latest build there. Debuggers were forbidden due to the fears of giving an uncleared individual access to the system’s memory, and Nikko had to hand-copy any error logs using pen and paper to bring any error messages out of the facility and back to the development lab.

Three months into the project, Nikko was alone. The project manager threw some kind of temper tantrum and either quit or was fired. One of the junior engineers gave birth and quit the company during maternity leave. And the other junior engineer accepted an offer from another company and also left.

Nikko, feeling burned out and unable to sleep one night, then remembered his father’s story of punchcard programming in computing’s early days. Back then, your program was a stack of punchcards, with each card performing a single machine instruction. You had to schedule a 15-minute timeslot with the computer to run through your stack. And sometimes the operator accidentally dropped your box of punchcards on the way to the machine but made no effort to ensure they were executed in the correct order, ruining the job.

The day after that revelation, Nikko met with his bosses. He was upset, and flatly told them that the project could not succeed, they were following 1970’s punchcard programming methodologies in the year 2016, and that he would have no part in it anymore.

He then took on a job at a different defense contractor. And then found himself working again as a subcontractor on an Acme component. He decided to stick with it for a while since his new company actually read contracts before signing, so maybe it would be better this time? Still, in the back of his mind he started to wonder if he had died and Acme was his purgatory.

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

Planet DebianFrancois Marier: Toggling Between Pulseaudio Outputs when Docking a Laptop

In addition to selecting the right monitor after docking my ThinkPad, I wanted to set the correct sound output since I have headphones connected to my Ultra Dock. This can be done fairly easily using Pulseaudio.

Switching to a different pulseaudio output

To find the device name and the output name I need to provide to pacmd, I ran pacmd list-sinks:

2 sink(s) available.
...
  * index: 1
    name: <alsa_output.pci-0000_00_1b.0.analog-stereo>
    driver: <module-alsa-card.c>
...
    ports:
        analog-output: Analog Output (priority 9900, latency offset 0 usec, available: unknown)
            properties:

        analog-output-speaker: Speakers (priority 10000, latency offset 0 usec, available: unknown)
            properties:
                device.icon_name = "audio-speakers"

From there, I extracted the soundcard name (alsa_output.pci-0000_00_1b.0.analog-stereo) and the names of the two output ports (analog-output and analog-output-speaker).

To switch between the headphones and the speakers, I can therefore run the following commands:

pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker

Listening for headphone events

Then I looked for the ACPI event triggered when my headphones are detected by the laptop after docking.

After looking at the output of acpi_listen, I found jack/headphone HEADPHONE plug.

Combining this with the above pulseaudio names, I put the following in /etc/acpi/events/thinkpad-dock-headphones:

event=jack/headphone HEADPHONE plug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output"

to automatically switch to the headphones when I dock my laptop.
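A matching rule should presumably handle undocking; assuming acpi_listen reports a corresponding unplug event, something like this untested sketch in /etc/acpi/events/thinkpad-undock-headphones would switch back to the speakers:

event=jack/headphone HEADPHONE unplug
action=su francois -c "pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker"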

Finding out whether or not the laptop is docked

While it is possible to hook into the docking and undocking ACPI events and run scripts, there doesn't seem to be an easy way from a shell script to tell whether or not the laptop is docked.

In the end, I settled on detecting the presence of USB devices.

I ran lsusb twice (once docked and once undocked) and then compared the output:

lsusb  > docked 
lsusb  > undocked 
colordiff -u docked undocked 

This gave me a number of differences since I have a bunch of peripherals attached to the dock:

--- docked  2017-07-07 19:10:51.875405241 -0700
+++ undocked    2017-07-07 19:11:00.511336071 -0700
@@ -1,15 +1,6 @@
 Bus 001 Device 002: ID 8087:8000 Intel Corp. 
 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
-Bus 003 Device 081: ID 0424:5534 Standard Microsystems Corp. Hub
-Bus 003 Device 080: ID 17ef:1010 Lenovo 
 Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
-Bus 002 Device 041: ID xxxx:xxxx ...
-Bus 002 Device 040: ID xxxx:xxxx ...
-Bus 002 Device 039: ID xxxx:xxxx ...
-Bus 002 Device 038: ID 17ef:100f Lenovo 
-Bus 002 Device 037: ID xxxx:xxxx ...
-Bus 002 Device 042: ID 0424:2134 Standard Microsystems Corp. Hub
-Bus 002 Device 036: ID 17ef:1010 Lenovo 
 Bus 002 Device 002: ID xxxx:xxxx ...
 Bus 002 Device 004: ID xxxx:xxxx ...
 Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

I picked 17ef:1010 as it appeared to be some internal bus on the Ultra Dock (none of my USB devices were connected to Bus 003) and then ended up with the following port toggling script:

#!/bin/bash

if /usr/bin/lsusb | grep 17ef:1010 > /dev/null ; then
    # docked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output
else
    # undocked
    pacmd set-sink-port alsa_output.pci-0000_00_1b.0.analog-stereo analog-output-speaker
fi

,

Krebs on SecurityAdobe, Microsoft Push Critical Security Fixes

It’s Patch Tuesday, again. That is, if you run Microsoft Windows or Adobe products. Microsoft issued a dozen patch bundles to fix at least 54 security flaws in Windows and associated software. Separately, Adobe’s got a new version of its Flash Player available that addresses at least three vulnerabilities.

The updates from Microsoft concern many of the usual program groups that seem to need monthly security fixes, including Windows, Internet Explorer, Edge, Office, .NET Framework and Exchange.

According to security firm Qualys, the Windows update that is most urgent for enterprises tackles a critical bug in the Windows Search Service that could be exploited remotely via the SMB file-sharing service built into both Windows workstations and servers.

Qualys says the issue affects Windows Server 2016, 2012, 2008 R2, 2008 as well as desktop systems like Windows 10, 7 and 8.1.

“While this vulnerability can leverage SMB as an attack vector, this is not a vulnerability in SMB itself, and is not related to the recent SMB vulnerabilities leveraged by EternalBlue, WannaCry, and Petya,” Qualys notes, referring to the recent rash of ransomware attacks which leveraged similar vulnerabilities.

Other critical fixes of note in this month’s release from Microsoft include at least three vulnerabilities in Microsoft’s built-in browser — Edge or Internet Explorer depending on your version of Windows. There are at least three serious flaws in these browsers that were publicly detailed prior to today’s release, suggesting that malicious hackers may have had some advance notice on figuring out how to exploit these weaknesses.

As it is accustomed to doing on Microsoft’s Patch Tuesday, Adobe released a new version of its Flash Player browser plugin that addresses a trio of flaws in that program.

The latest update brings Flash to v. 26.0.0.137 for Windows, Mac and Linux users alike. If you have Flash installed, you should update, hobble or remove Flash as soon as possible. To see which version of Flash your browser may have installed, check out this page.

The smartest option is probably to ditch the program once and for all and significantly increase the security of your system in the process. An extremely powerful and buggy program that binds itself to the browser, Flash is a favorite target of attackers and malware. For some ideas about how to hobble or do without Flash (as well as slightly less radical solutions) check out A Month Without Adobe Flash Player.

If you choose to keep Flash, please update it today. The most recent versions of Flash should be available from the Flash home page. Windows users who browse the Web with anything other than Internet Explorer may need to apply this patch twice, once with IE and again using the alternative browser (e.g., Firefox or Opera).

Chrome and IE should auto-install the latest Flash version on browser restart (users may need to manually check for updates and/or restart the browser to get the latest Flash version). A green arrow in the upper right corner of my Chrome installation today gave me the prompt I needed to update my version to the latest.

Chrome users may need to restart the browser to install or automatically download the latest version. When in doubt, click the vertical three dot icon to the right of the URL bar, select “Help,” then “About Chrome”: If there is an update available, Chrome should install it then.

As always, if you experience any issues downloading or installing any of these updates, please leave a note about it in the comments below.

Planet DebianJoey Hess: bonus project

Little bonus project after the solar upgrade was replacing the battery box's rotted roof, down to the cinderblock walls.

Except for a piece of plywood, used all scrap lumber for this project, and also scavenged a great set of hinges from a discarded cabinet. I hope the paint on all sides and an inch of shingle overhang will be enough to protect the plywood.

Bonus bonus project to use up paint. (Argh, now I want to increase the size of the overflowing grape arbor. Once you start on this kind of stuff..)

After finishing all that, it was time to think about this while enjoying this.

(Followed by taking delivery of a dumptruck full of gravel -- 23 tons -- which it turns out was enough for only half of my driveway..)

Planet DebianAndreas Bombe: PDP-8/e Replicated — Overview

This is an overview of the hardware and internals of the PDP-8/e replica I’m building.

The front panel board

If you know the original or remember the picture from the first post it is clear that this is a functional replica not aiming to be as pretty as those of the other projects I mentioned. I have reordered the switches into two rows to make the board more compact (which also means cheaper) without sacrificing usability.

There are the two rows of display lights plus the one run light the 8/e provides. The upper row is the address, made up of 12 bits of memory address and 3 bits of extended memory address or field. Below is the 12-bit indicator, which can show one data set out of six as selected by the user.

All the switches of the original are implemented as more compact buttons¹. While momentary switches are easily substituted by buttons, all buttons implementing two position switches toggle on/off with each press and they have a LED above that shows the current state. The six position rotary switch is implemented as a button cycling through all indicator displays together with six LEDs which show the active selection.

Markings show the meaning of the indicator and switches as on the original, grouped in threes as the predominant numbering system for the PDPs was octal. The upper line shows the meaning for the state indicator, the middle for the status indicator and bit numbers for the rest. Note that on the PDP-8 and opposite to modern conventions, the most significant bit was numbered 0.

I designed it as a pure front panel board without any PDP-8 simulation parts. The buttons and associated lights are controllable via SPI lines with a 3.3 V supply. The address and indicator lights have a separate common anode configuration with all cathodes individually available on a pin header without any resistors in the path, leaving voltage and current regulation up to the simulation board. This board is actually a few years old from a similar project where I emulated the PDP-8 in software on a microcontroller and the flexible design allowed me to reuse it unchanged.

The main board

This is where the magic happens. You can see three big ICs on the board: On the left is the STM32F405 microcontroller (with ARM Cortex-M4 core), the bigger one in the middle is the Altera² MAX 10 FPGA and finally to the right is the SRAM that is large enough to hold all the main memory of the 32 KW maximum expansion of the PDP-8/e. The two smaller chips to the right of that are just buffers that drive the front panel address LEDs, the small chip at the top left is a RS-232 level shifter.

The idea behind this is that the PDP-8 and peripherals that are simple to directly implement, such as GPIO or a serial port, are fully on the FPGA. Other peripherals such as paper and magnetic tape and disks, which are after all not connected to real PDP-8 drives but to disk images on a microSD, are implemented on the microcontroller interfacing with stub devices in the FPGA. Compared to implementing everything in the FPGA, the STM32F4 has the advantage of useful built-in peripherals such as two host/device capable USB ports. 5 V tolerant I/O pins are very useful and simply not available in any FPGA.

I have to admit that this board was a bit of a rush job in order to have something at all to show at the Vintage Computer Festival Europe 18.0. Given that it was my first time designing a board with a large microcontroller and the first time with an FPGA, it wasn’t exactly my fastest progressing project either and I got basic functionality (front panel allows toggling in small programs and running them) working just in time. For various reasons the project hasn’t progressed much since, so the following is still just plans, but plans for which the hardware was designed.

Since the aim is to have a cycle accurate PDP-8/e implementation, digital I/O was always planned. Rather than defining my own header I have included Arduino R3 compatible headers (for 3.3 V compatible boards only) that have become a popular choice, even outside the Arduino world, for this purpose. The digital Arduino pins are connected directly to the FPGA and will be directly controllable by PDP-8 software. The downside of choosing the Arduino headers is that the original PDP-8 digital I/O interface is not a perfect match since it naturally has 12 lines whereas the Arduino has 15. The analog inputs are not connected to the FPGA; the documentation of the MAX10’s ADC in the EQFP package is not very encouraging. They are connected to the STM32 instead³.

Another interface connected directly to the FPGA and that would be directly under PDP-8 control is a standard 9 pin RS-232 interface. RX, TX, CTS and RTS are connected and level-shifted between 3.3 V and RS-232 levels by a MAX3232.

Besides the PDP-8, I also plan to implement a full video terminal on the board. The idea is that with a power supply, keyboard and monitor this board would form a complete system without the need of connecting another computer to act as a terminal. To that end, there is a VGA port attached to the FPGA with simple resistor network DACs for 9 bits color (RGB with 3 bits each). This is another spot where I left myself room to expand, for e.g. a VT220 you really only need one color in two brightness levels. Keyboards will be connected either via the PS/2 connector on the right or the USB-A host port at the top left.

Last of the interface ports is the USB micro-AB port on the left, which for now I am using only for power supply. I mainly plan to use it to provide alternative or additional serial ports to the PDP-8 or to export the video terminal serial port for testing purposes. Other possible uses are access to the image files on the microSD and firmware updates.

This has gotten rather long again, so I’m stopping here and leave some implementation notes for another post.


  1. They are also much cheaper. Given the large number of switches, the savings are substantial. Additionally the buttons are nicer to operate than long rows of tiny switches. [return]
  2. Or rather Intel now. At least Altera’s web site, documentation and software have already been thoroughly rebranded, but the chips I got were produced prior to that. [return]
  3. That’s not to say that the analog conversions on the STM32 are necessarily better than those of the MAX10 when you can’t follow their guidelines; I have no comparisons. Certainly following the guidelines would have been prohibitive given how many pins’ usage they restrict. [return]

Rondam RamblingsThere's yer smoking gun

I predicted the existence of a Russia-gate smoking gun back in March, but I didn't expect it to actually turn up so soon.  And I certainly didn't expect it to turn up by having Donald Trump Jr. whip it out, shoot himself in the foot with it (twice!), and then loudly shout, "I told you there's nothing to see here, move along!" Here is the most damning part of the email chain released by Trump Jr.

Google AdsenseClarification around Pop-Unders

At Google, we value users, advertisers and publishers equally. We have policies in place that define where Google ads should appear and how they must be implemented. These policies help ensure a positive user experience, as well as maintain a healthy ads ecosystem that benefits both publishers and advertisers.

To get your attention, some ads pop up in front of your current browser window, obscuring the content you want to see. Pop-under ads can be annoying as well, as they will "pop under" your window, so that you don't see them until you minimize your browser. We do not believe these ads provide a good user experience, and therefore are not suitable for Google ads.

That is why we recently clarified our policies around pop-ups and pop-unders to help remove any ambiguity. To simplify our policies, we are no longer permitting the placement of Google ads on pages that are loaded as a pop-up or pop-under. Additionally, we do not permit Google ads on any site that contains or triggers pop-unders, regardless of whether Google ads are shown in the pop-unders.

We continually review and evaluate our policies to address emerging trends, and in this case we determined that a policy change was necessary.

As with all policies, publishers are ultimately responsible for ensuring that traffic to their site is compliant with Google policies. To assist publishers, we’ve provided guidance on best practices for buying traffic.

Planet DebianRaphaël Hertzog: Freexian’s report about Debian Long Term Support, June 2017

Like each month, here comes a report about the work of paid contributors to Debian LTS.

Individual reports

In June, about 161 work hours have been dispatched among 11 paid contributors. Their reports are available:

Evolution of the situation

The number of sponsored hours increased slightly with one new bronze sponsor and another silver sponsor is in the process of joining.

The security tracker currently lists 49 packages with a known CVE and the dla-needed.txt file 54. The number of open issues is close to last month's.

Thanks to our sponsors

New sponsors are in bold.


CryptogramHacking Spotify

Some of the ways artists are hacking the music-streaming service Spotify.

Planet Linux AustraliaJames Morris: Linux Security Summit 2017 Schedule Published

The schedule for the 2017 Linux Security Summit (LSS) is now published.

LSS will be held on September 14th and 15th in Los Angeles, CA, co-located with the new Open Source Summit (which includes LinuxCon, ContainerCon, and CloudCon).

The cost of LSS for attendees is $100 USD. Register here.

Highlights from the schedule include the following refereed presentations:

There’ll also be the usual Linux kernel security subsystem updates, and BoF sessions (with LSM namespacing and LSM stacking sessions already planned).

See the schedule for full details of the program, and follow the twitter feed for the event.

This year, we’ll also be co-located with the Linux Plumbers Conference, which will include a containers microconference with several security development topics, and likely also a TPMs microconference.

A good critical mass of Linux security folk should be present across all of these events!

Thanks to the LSS program committee for carefully reviewing all of the submissions, and to the event staff at Linux Foundation for expertly planning the logistics of the event.

See you in Los Angeles!

Worse Than FailureCodeSOD: Questioning Existence

Michael got a customer call about a PHP system his company had put together four years ago. He pulled up the code, which thankfully was actually up to date in source control, and tried to get a grasp of what the system does.

There, he discovered a… unique way to define functions in PHP:

if(!function_exists("GetFileAsString")){
        function GetFileAsString($FileName){
                $Contents = "";
                if(file_exists($FileName)){
                        $Contents = file_get_contents($FileName);
                }
                return $Contents;
        }
}
if(!function_exists("RemoveEmptyArrayValuesAndDuplicates")){
        function RemoveEmptyArrayValuesAndDuplicates($ArrayName){
                foreach( $ArrayName as $Key => $Value ) {
                        if( empty( $ArrayName[ $Key ] ) )
                   unset( $ArrayName[ $Key ] );
                }
                $ArrayName = array_unique($ArrayName);
                $ReIndexedArray = array();
                $ReIndexedArray = array_values($ArrayName);
                return $ReIndexedArray;
        }
}

Before we talk about the function_exists nonsense, let’s comment on the functions themselves: they’re both unnecessary. file_get_contents returns a false-y value that gets converted to an empty string if you ever treat it as a string, which is exactly the same thing that GetFileAsString does. The replacement for RemoveEmptyArrayValuesAndDuplicates could also be much simpler: array_values(array_unique(array_filter($rawArray)));. That’s still complex enough it could merit its own function, but without the loop, it’s far easier to understand, as the sketch below shows.
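To make that concrete, here's an illustrative sketch (not code from the vendor's system) of what the two helpers boil down to:

// false stringifies to "", so a missing file yields an empty string;
// the @ just silences the warning file_get_contents emits on failure
$contents = (string)@file_get_contents($fileName);

// array_filter() drops exactly the values empty() would flag;
// array_unique() and array_values() handle the rest
$cleaned = array_values(array_unique(array_filter($rawArray)));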

Neither of these functions needs to exist, which is why, perhaps, they’re conditional. I can only guess about how these came to be, but here’s my guess:

Once upon a time, in the Dark Times, many developers were working on the project. They worked with no best-practices, no project organization, no communication, and no thought. Each of them gradually built up a library of include files, that carried with it their own… unique solution to common problems. It became spaghetti, and then eventually a forest, and eventually, in the twisted morass of code, Sallybob’s GetFileAsString conflicted with Joebob’s GetFileAsString. The project teetered on the edge of failure… so someone tried to “fix” it, by decreeing every utility function needed this kind of guard.

I’ve seen PHP projects go down this path, though never quite to this conclusion.

[Advertisement] Application Release Automation for DevOps – integrating with best of breed development tools. Free for teams with up to 5 users. Download and learn more today!

,

Planet DebianSteve Kemp: bind9 considered harmful

Recently there was another bind9 security update released by the Debian Security Team. I thought that was odd, so I've scanned my mailbox:

  • 11 January 2017
    • DSA-3758 - bind9
  • 26 February 2017
    • DSA-3795-1 - bind9
  • 14 May 2017
    • DSA-3854-1 - bind9
  • 8 July 2017
    • DSA-3904-1 - bind9

So in the year to date there have been 7 months; in 3 of them nothing happened, but in 4 of them we had bind9 updates. If these trends continue we'll have another 2.5 updates before the end of the year.

I don't run a nameserver. The only reason I have bind-packages on my system is for the dig utility.

Rewriting a compatible version of dig in Perl should be trivial, thanks to the Net::DNS::Resolver module.
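Something like this rough, untested sketch (argument handling guessed at, only enough to cover the commands below) would do:

#!/usr/bin/perl
use strict;
use warnings;
use Net::DNS::Resolver;

# mini-dig TYPE NAME [SERVER], e.g. "mini-dig a steve.fi 8.8.8.8"
my ($type, $name, $server) = @ARGV;
die "usage: $0 type name [server]\n" unless defined $name;

my $res = Net::DNS::Resolver->new();
$res->nameservers($server) if defined $server;

my $reply = $res->query($name, uc $type)
    or die $res->errorstring, "\n";

# +short-style output: print just the address of each matching record.
foreach my $rr ($reply->answer) {
    print $rr->address, "\n" if $rr->can("address");
}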

These are about the only commands I ever run:

dig -t a    steve.fi +short
dig -t aaaa steve.fi +short
dig -t a    steve.fi @8.8.8.8

I should do that then. Yes.

Planet DebianMarkus Koschany: My Free Software Activities in June 2017

Welcome to gambaru.de. Here is my monthly report that covers what I have been doing for Debian. If you’re interested in  Java, Games and LTS topics, this might be interesting for you.

Debian Games

Debian Java + Android

Debian LTS

This was my sixteenth month as a paid contributor and I have been paid to work 16 hours on Debian LTS, a project started by Raphaël Hertzog. In that time I did the following:

  • I triaged mp3splt and putty and marked CVE-2017-5666 and CVE-2017-6542 as no-dsa because the impact was very low.
  • DLA-975-1. I uploaded the security update for wordpress which I prepared last month fixing 6 CVE.
  • DLA-986-1. Issued a security update for zookeeper fixing 1 CVE.
  • DLA-989-1. Issued a security update for jython fixing 1 CVE.
  • DLA-996-1. Issued a security update for tomcat7 fixing 1 CVE.
  • DLA-1002-1. Issued a security update for smb4k fixing 1 CVE.
  • DLA-1013-1. Issued a security update for graphite2 fixing 8 CVE.
  • DLA-1020-1. Issued a security update for jetty fixing 1 CVE.
  • DLA-1021-1. Issued a security update for jetty8 fixing 1 CVE.

Misc

  • I updated wbar, fixed #829981 and uploaded mediathekview and osmo to unstable. For the Buster release cycle I decided to package the fork of xarchiver‘s master branch, which receives regular updates and bug fixes. Besides the switch to GTK-3, a lot of older bugs could be fixed.

Thanks for reading and see you next time.

Sociological ImagesHow LSD opened minds and changed America

In the 1950s and ’60s, a set of social psychological experiments seemed to show that human beings were easily manipulated by low and moderate amounts of peer pressure, even to the point of violence. It was a stunning research program designed in response to the horrors of the Holocaust, which required the active participation of so many people, and the findings seemed to suggest that what happened there was part of human nature.

What we know now, though, is that this research was undertaken at an unusually conformist time. Mothers were teaching their children to be obedient, loyal, and to have good manners. Conformity was a virtue and people generally sought to blend in with their peers. It wouldn’t last.

At the same time as the conformity experiments were happening, something that would contribute to changing how Americans thought about conformity was being cooked up: the psychedelic drug, LSD.

Lysergic acid diethylamide was first synthesized in 1938 in the routine process of discovering new drugs for medical conditions. The first person to discover its psychedelic properties — its tendency to alter how we see and think — was the scientist who invented it, Albert Hofmann. He ingested it accidentally, only to discover that it induces a “dreamlike state” in which he “perceived an uninterrupted stream of fantastic pictures, extraordinary shapes with intense, kaleidoscopic play of colors.”

By the 1950s, LSD was being administered to unwitting Americans in a secret, experimental mind control program conducted by the United States Central Intelligence Agency, one that would last 14 years and occur in over 80 locations. Eventually the fact of the secret program would leak out to the public, and so would LSD.

It was the 1960s and America was going through a countercultural revolution. The Civil Rights movement was challenging persistent racial inequality, the women’s and gay liberation movements were staking claims on equality for women and sexual minorities, the sexual revolution said no to social rules surrounding sexuality and, in the second decade of an intractable war with Vietnam, Americans were losing patience with the government. Obedience had gone out of style.

LSD was the perfect drug for the era. For its proponents, there was something about the experience of being on the drug that made the whole concept of conformity seem absurd. A new breed of thinker, the “psychedelic philosopher,” argued that LSD opened one’s mind and immediately revealed the world as it was, not the world as human beings invented it. It revealed, in other words, the social constructedness of culture.

In this sense, wrote the science studies scholar Ido Hartogsohn, LSD was truly “countercultural,” not only “in the sense of being peripheral or opposed to mainstream culture [but in] rejecting the whole concept of culture.” Culture, the philosophers claimed, shut down our imagination and psychedelics were the cure. “Our normal word-conditioned consciousness,” wrote one proponent, “creates a universe of sharp distinctions, black and white, this and that, me and you and it.” But on acid, he explained, all of these rules fell away. We didn’t have to be trapped in a conformist bubble. We could be free.

The cultural influence of the psychedelic experience, in the context of radical social movements, is hard to overstate. It shaped the era’s music, art, and fashion. It gave us tie-dye, The Grateful Dead, and stuff like this:


via GIPHY

The idea that we shouldn’t be held down by cultural constrictions — that we should be able to live life as an individual as we choose — changed America.

By the 1980s, mothers were no longer teaching their children to be obedient, loyal, and to have good manners. Instead, they taught them independence and the importance of finding one’s own way. For decades now, children have been raised with slogans of individuality: “do what makes you happy,” “it doesn’t matter what other people think,” “believe in yourself,” “follow your dreams,” or the more up-to-date “you do you.”

Today, companies choose slogans that celebrate the individual, encouraging us to stand out from the crowd. In 2014, for example, Burger King abandoned its 40-year-old slogan, “Have it your way,” for a plainly individualistic one: “Be your way.” Across the consumer landscape, company slogans promise that buying their products will mark the consumer as special or unique. “Stay extraordinary,” says Coke; “Think different,” says Apple. Brands encourage people to buy their products in order to be themselves: Ray-Ban says “Never hide”; Express says “Express yourself,” and Reebok says “Let U.B.U.”

In surveys, Americans increasingly defend individuality. Millennials are twice as likely as Baby Boomers to agree with statements like “there is no right way to live.” They are half as likely to think that it’s important to teach children to obey, instead arguing that the most important thing a child can do is “think for him or herself.” Millennials are also more likely than any other living generation to consider themselves political independents and be unaffiliated with an organized religion, even if they believe in God. We say we value uniqueness and are critical of those who demand obedience to others’ visions or social norms.

Paradoxically, it’s now conformist to be an individualist and deviant to be a conformist. So much so that a subculture has emerged to promote blending in: “Normcore” makes opting into conformity a virtue. As one commentator described it, “Normcore finds liberation in being nothing special…”

Obviously LSD didn’t do this all by itself, but it was certainly in the right place at the right time. And as a symbol of the radical transition that began in the 1960s, there’s hardly one better.

Lisa Wade, PhD is a professor at Occidental College. She is the author of American Hookup, a book about college sexual culture, and a textbook about gender. You can follow her on Twitter, Facebook, and Instagram.

(View original at https://thesocietypages.org/socimages)

Planet DebianJonathan McDowell: Going to DebConf 17

Going to DebConf17

Completely forgot to mention this earlier in the year, but delighted to say that in just under 4 weeks I’ll be attending DebConf 17 in Montréal. Looking forward to seeing a bunch of fine folk there!

Outbound:

2017-08-04 11:40 DUB -> 13:40 KEF WW853
2017-08-04 15:25 KEF -> 17:00 YUL WW251

Inbound:

2017-08-12 19:50 YUL -> 05:00 KEF WW252
2017-08-13 06:20 KEF -> 09:50 DUB WW852

(Image created using GIMP, fonts-dkg-handwriting and the DebConf17 Artwork.)

CryptogramThe Future of Forgeries

This article argues that AI technologies will make image, audio, and video forgeries much easier in the future.

Combined, the trajectory of cheap, high-quality media forgeries is worrying. At the current pace of progress, it may be as little as two or three years before realistic audio forgeries are good enough to fool the untrained ear, and only five or 10 years before forgeries can fool at least some types of forensic analysis. When tools for producing fake video perform at higher quality than today's CGI and are simultaneously available to untrained amateurs, these forgeries might comprise a large part of the information ecosystem. The growth in this technology will transform the meaning of evidence and truth in domains across journalism, government communications, testimony in criminal justice, and, of course, national security.

I am less worried about fooling the “untrained ear,” and more worried about fooling forensic analysis. But there’s an arms race here. Recording technologies will get more sophisticated, too, making their outputs harder to forge. Still, I agree that the advantage will go to the forgers and not the forgery detectors.

Worse Than FailureRubbed Off

Early magnetic storage was simple in its construction. The earliest floppy disks and hard drives used an iron (III) oxide surface coating a plastic film or disk. Later media would use cobalt-based surfaces, allowing finer data resolution than iron oxide, but the basic construction wouldn’t change much.

Samuel H. never had to think much about this until he met Micah.

an 8 inch floppy disk

The Noisiest Algorithm

In the fall of 1980, Samuel was a freshman at State U. The housing department had assigned him Micah as his roommate, assuming that since both were Computer Science majors, they would get along.

On their first night together, Samuel asked why Micah kept throwing his books off the shelf onto the floor. “Oh, I just keep shuffling the books around until they’re in the right order,” Micah said.

“Have you tried, I don’t know, taking out one book at a time, starting from the left?” Samuel asked. “Or sorting the books in pairs, then sorting pairs of pairs, and so on?” He had read about sorting algorithms over the summer.

Micah shrugged, continuing to throw books on the floor.
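For readers keeping score, Micah’s method is essentially bogosort, while Samuel’s first suggestion reads as insertion sort. A minimal sketch of the contrast, in Python rather than the period-appropriate BASIC, might look like this:

import random

def bogosort(books):
    # Micah's method: shuffle until the order happens to be correct;
    # expected cost is O(n * n!), hopeless beyond a handful of items.
    while any(a > b for a, b in zip(books, books[1:])):
        random.shuffle(books)
    return books

def insertion_sort(books):
    # Samuel's first suggestion, read as insertion sort: take one book
    # at a time, starting from the left, and slot it into the sorted part.
    for i in range(1, len(books)):
        book, j = books[i], i - 1
        while j >= 0 and books[j] > book:
            books[j + 1] = books[j]
            j -= 1
        books[j + 1] = book
    return books

print(insertion_sort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]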

Divided Priorities

In one of their shared classes, Samuel and Micah were assigned as partners on a project. The assignment: write a program in Altair BASIC that analyzes rainfall measurements from the university weather station, then displays a graph and some simple statistics, including the dates with the highest and lowest values.

All students had access to Altair 8800s in the lab. They were given one 8″ floppy disk with the rainfall data, and a second for additional code. Samuel wanted to handle the data read/write code and leave the display to Micah, but Micah insisted on doing the data-handling code himself. “I’ve learned a lot,” he said. Samuel remembered the sounds of books crashing on the floor and flinched. Still, he thought the display code would be easier, so he let Micah have at it.

Samuel finished his half of the code early. Micah, though, was obsessed with Star Trek, a popular student-coded space tactics game, and waited until the day before to start work. “Okay, tonight, I promise,” he said, as Samuel left him in the computer lab at an Altair. As he left, he heard Micah close the drive, and the read-head start clacking across the disk.

Corrupted

The next morning, Samuel found Micah still in the lab at his Altair. He was in tears. “The data’s gone,” he said. “I don’t know what I did. I started getting read errors around 1AM. I think the data file got corrupted somehow.”

Samuel gasped when Micah handed him the floppy disk. Through the read window in the cover, he could see transparent stripes in the disk. The magnetic write surface had been worn away, leaving the clear plastic backing.

Micah explained. He had written his code to read the data file, find the lowest value, write it to an entirely new file, then mark the value in the original file as read. Then it would read the original file again, write another new file, and so on.

Samuel calculated that, with Micah’s “algorithm,” the original file would be read and written to n–1 times, given n entries.
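Reconstructed from that description, Micah’s data-handling loop might have looked something like this (a sketch only: the file layout, the MARK sentinel, and the single appended output file are all invented for illustration):

MARK = "X"  # stands in for "value already read"

def micah_sort(original, output):
    with open(original) as f:
        n = sum(1 for _ in f)
    for _ in range(n):
        with open(original) as f:              # full re-read, every pass
            lines = f.read().splitlines()
        live = [(float(v), i) for i, v in enumerate(lines) if v != MARK]
        if not live:
            break
        value, idx = min(live)
        with open(output, "a") as out:         # the "entirely new file"
            out.write(f"{value}\n")
        lines[idx] = MARK
        with open(original, "w") as f:         # full rewrite, every pass
            f.write("\n".join(lines) + "\n")

with open("rainfall.dat", "w") as f:
    f.write("3.2\n0.0\n1.7\n")
micah_sort("rainfall.dat", "sorted.dat")
print(open("sorted.dat").read())               # 0.0, 1.7, 3.2

One full read plus one full rewrite of the original disk per entry is exactly the access pattern that ground the oxide off the medium.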

Old Habits Die Hard

Samuel went to the professor and pleaded for a new partner, showing him the floppy with the transparent medium inside. Samuel was given full credit for his half of the program. Micah would have to write his entire program from scratch with a new copy of the data.

Samuel left Micah for another roommate that spring. He didn’t see much of him, as the latter had left Computer Science to major in Philosophy. He didn’t hear about Micah again until a spring day in 1989, after he had finished his PhD.

A grad student, who worked at the computer help desk, told Samuel about a strange man at the computer lab one night. He was despondent, trying to find someone who could help him recover his thesis from a 3 1/2" floppy disk. The student offered to help, and when the man handed him the disk, he pulled the metal tab aside to check for dust.

Most of the oxide coating had been worn away, leaving thin, transparent stripes.

[Advertisement] Release! is a light card game about software and the people who make it. Play with 2-5 people, or up to 10 with two copies - only $9.95 shipped!

Planet DebianKees Cook: security things in Linux v4.12

Previously: v4.11.

Here’s a quick summary of some of the interesting security things in last week’s v4.12 release of the Linux kernel:

x86 read-only and fixed-location GDT
With kernel memory base randomization, it was still possible to figure out the per-cpu base address via the “sgdt” instruction, since it would reveal the per-cpu GDT location. To solve this, Thomas Garnier moved the GDT to a fixed location. And to solve the risk of an attacker targeting the GDT directly with a kernel bug, he also made it read-only.

usercopy consolidation
After hardened usercopy landed, Al Viro decided to take a closer look at all the usercopy routines and then consolidated the per-architecture uaccess code into a single implementation. The per-architecture code was functionally very similar to each other, so it made sense to remove the redundancy. In the process, he uncovered a number of unhandled corner cases in various architectures (that got fixed by the consolidation), and made hardened usercopy available on all remaining architectures.

ASLR entropy sysctl on PowerPC
Continuing to expand architecture support for the ASLR entropy sysctl, Michael Ellerman implemented the calculations needed for PowerPC. This lets userspace choose to crank up the entropy used for memory layouts.

LSM structures read-only
James Morris used __ro_after_init to make the LSM structures read-only after boot. This removes them as a desirable target for attackers. Since the hooks are called from all kinds of places in the kernel this was a favorite method for attackers to use to hijack execution of the kernel. (A similar target used to be the system call table, but that has long since been made read-only.)

KASLR enabled by default on x86
With many distros already enabling KASLR on x86 with CONFIG_RANDOMIZE_BASE and CONFIG_RANDOMIZE_MEMORY, Ingo Molnar felt the feature was mature enough to be enabled by default.

Expand stack canary to 64 bits on 64-bit systems
The stack canary value used by CONFIG_CC_STACKPROTECTOR is most powerful on x86 since it is different per task. (Other architectures run with a single canary for all tasks.) While the first canary chosen on x86 (and other architectures) was a full unsigned long, the subsequent canaries chosen per-task for x86 were being truncated to 32 bits. Daniel Micay fixed this so now x86 (and future architectures that gain per-task canary support) have significantly increased entropy for stack-protector.

Expanded stack/heap gap
Hugh Dickins, with input from many other folks, improved the kernel’s mitigation against having the stack and heap crash into each other. This is a stop-gap measure to help defend against the Stack Clash attacks. Additional hardening needs to come from the compiler to produce “stack probes” when doing large stack expansions. Any Variable Length Arrays on the stack or alloca() usage needs to have machine code generated to touch each page of memory within those areas to let the kernel know that the stack is expanding, but with single-page granularity.

That’s it for now; please let me know if I missed anything. The v4.13 merge window is open!

© 2017, Kees Cook. This work is licensed under a Creative Commons Attribution-ShareAlike 3.0 License.
Creative Commons License

Planet Linux AustraliaOpenSTEM: This Week in HASS – term 3, week 1

Today marks the start of a new term in Queensland, although most states and territories have at least another week of holidays, if not more. It’s always hard to get back into the swing of things in the 3rd term, with the winter cold and the usual round of flu and sniffles. OpenSTEM’s 3rd term units branch into new areas to provide some fresh material and a new direction for the new semester. This term younger students are studying the lives of children in the past from a narrative context, whilst older students are delving into aspects of Australian history.

Foundation/Prep/Kindy to Year 3

The main resource for our youngest students for Unit F.3 is Children in the Past – a collection of stories of children from a range of different historical situations. This resource contains 6 stories of children from Aboriginal Australia more than 1,000 years ago, Ancient Egypt, Ancient Rome, Ancient China, Aztec Mexico and Zulu Southern Africa several hundred years ago. Teachers can choose one or two stories from this resource to study in depth with the students this term. The range of stories allows teachers to tailor the material to their class and ensure that there is no need to repeat the same stories in consecutive years. Students will compare the lives of children in the stories with their own lives – focusing on different aspects in different weeks of the term. In this first week teachers will read the stories to the class and help them find the places described on the OpenSTEM “Our World” map and/or a globe.

Students in integrated Foundation/Prep/Kindy and Year 1 classes (Unit F-1.3), will also be examining stories from the Children in the Past resource. Students in Years 1 (Unit 1.3), 2 (Unit 2.3) and 3 (Unit 3.3) will also be comparing their own lives with those of children in the past; however, they will use a collection of stories called Living in the Past, which covers the same areas and time periods as Children in the Past, but provides more in-depth information about a broader range of subject areas and includes the story of the young Tom Petrie, growing up in Brisbane in the 1840s. Students in Year 1 will be considering family structures and the differences and similarities between their own families and the families described in the stories. Students in Year 2 are starting to understand the differences which technology makes to peoples’ lives, especially the technology behind different modes of transport. Students in Year 3 retain a focus on local history. In fact, the Understanding Our World® units for Year 3, term 3 are tailored to match the capital city of the state or territory in which the student lives. Currently units are available for Brisbane and Perth, other capital cities are in preparation. Additional resources are available describing the foundation and growth of Brisbane and Perth, with other cities to follow. Teachers may also prefer to focus on the local community in a smaller town and substitute their own resources for those of the capital city.

Years 3 to 6

Opening of the first Australian Parliament

Older students are focusing on Australian history this term – Year 3 students (Unit 3.7) will be considering the history of their capital city (or local community) within the broader context of Australian history. Students in Year 4 (Unit 4.3) will be examining Australia in the period up to and including the first half of the 19th century. Students in Year 5 (Unit 5.3) examine the colonial period in Australian history; whilst students in Year 6 (Unit 6.3) are investigating Federation and Australia in the 20th century. In this first week of term, students in Years 3 to 6 will be compiling a timeline of Australian history and filling in important events which they already know about or have learnt about in previous units. Students will revisit this timeline in later weeks to add additional information. The main resources for this week are The History of Australia, a broad overview of Australian history from the Ice Age to the 20th century; and the History of Australian Democracy, an overview of the development of the democratic process in Australia.

Governor's House, Sydney, 1791

The rest of the 3rd term will be spent compiling a scientific report on an investigation into an aspect of Australian history. Students in Year 3 will choose a research topic from a list of themes concerning the history of their capital city. Students in Year 4 will choose from themes on Australia before 1788, the First Fleet, experiences of convicts and settlers, including children, as well as the impact of different animals brought to Australia during the colonial period. Students in Year 5 will choose from themes on the Australian colonies and people including explorers, convicts and settlers, massacres and resistance, colonial animals and industries such as sugar in Queensland. Students in Year 6 will choose from themes on Federation, including personalities such as Henry Parkes and Edmund Barton, Sport, Women’s Suffrage, Children, the Boer War and Aboriginal experiences. This research topic will be undertaken as a guided investigation throughout the term.

Planet Linux AustraliaAnthony Towns: Bitcoin: ASICBoost and segwit2x – Background

I’ve been trying to make heads or tails of what the heck is going on in Bitcoin for a while now. I’m not sure I’ve actually made that much progress, but I’ve at least got some thoughts that seem coherent now.

First, this post is background for people playing along at home who aren’t familiar with the issues or jargon: Bitcoin is a currency based on an electronic ledger that essentially tracks how much Bitcoin exists, and how someone can be authorised to transfer it to someone else; that ledger is currently about 100GB in size, growing at a rate of about a gigabyte a week. The ledger is updated by miners, who compete by doing otherwise pointless work running cryptographic hashes (and in so doing obtain a “proof of work”), and in return receive a reward (denominated in bitcoin) made up from fees by people transacting and an inflation subsidy. Different miners are competing in an essentially zero-sum game, because fees and inflation are essentially a fixed amount that is (roughly) divided up amongst miners according to how much work they do — so while you get more reward for doing more work, it comes at a cost of other miners receiving less reward.
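To make the zero-sum point concrete, here is a toy calculation (entirely illustrative, with made-up hash rates and a simplified fixed reward): each miner’s expected share of the per-block reward is proportional to its fraction of the total hash rate, so any hash rate one miner adds dilutes every other miner’s expected reward.

def expected_rewards(hashrates, block_reward=12.5 + 1.5):
    # block_reward: inflation subsidy plus fees per block, in bitcoin
    # (illustrative numbers, not live network values)
    total = sum(hashrates.values())
    return {m: round(block_reward * h / total, 2) for m, h in hashrates.items()}

print(expected_rewards({"A": 40, "B": 40, "C": 20}))  # {'A': 5.6, 'B': 5.6, 'C': 2.8}
print(expected_rewards({"A": 80, "B": 40, "C": 20}))  # A doubles its work; B and C now expect less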

Because the ledger only grows by (about) a gigabyte each week (or a megabyte per block, which is roughly every ten minutes), there is a limit on how many transactions can be included each week (ie, supply is limited), which both increases fees and limits adoption — so for quite a while now, people in the bitcoin ecosystem with a focus on growth have wanted to work out ways to increase the transaction rate. Initial proposals in mid 2015 suggested allowing miners to regularly determine the limit with no official upper bound (nominally “BIP100“, though never actually formally submitted as a proposal), or to increase by a factor of eight within six months, then double every two years after that, until reaching almost 200 times the current size by 2036 (BIP101), or to increase at a rate of about 17% per annum (suggested on the mailing list, but never formally proposed; BIP103). These proposals had two major risks: locking in a lot of growth that may turn out to be unnecessary or actively harmful, and requiring what is called a “hard fork”, which would render the existing bitcoin software unable to track the ledger after the change took effect, with the possible outcome that two ledgers would coexist and would in turn cause a range of problems. To reduce the former risk, a minimal compromise proposal was made to “kick the can down the road” and just double the ledger growth rate, then figure out a more permanent solution down the road (BIP102) (or to double it three times — to 2MB, 4MB then 8MB — over four years, per Adam Back). A few months later, some of the devs figured out a way to more or less achieve this that also doesn’t require a hard fork, and comes with a host of other benefits, and proposed an update called “segregated witness” at the December 2015 Scaling Bitcoin conference.

And shortly after that things went completely off the rails, and have stayed that way since. Ultimately there seem to be two camps: one group is happy to deploy segregated witness, and is eager to make further improvements to Bitcoin based on that (this is my take on events); while the other group is not, perhaps due to some combination of being opposed to the segregated witness changes directly, wanting a more direct upgrade immediately, being afraid deploying segregated witness will block other changes, or wanting to take control of the bitcoin codebase/roadmap from the current developers (take this with a grain of salt: these aren’t opinions I share or even find particularly reasonable, so I can’t do them justice when describing them; cf ViaBTC’s post to get that side of the argument made directly, eg)

Most recently, and presumably on the basis that the opposed group are mostly worried that deploying segregated witness will prevent or significantly delay a more direct increase in capacity, a bitcoin venture capitalist, Barry Silbert, organised an agreement amongst a number of companies including many miners, to both activate segregated witness within the next month, and to do a hard fork capacity increase by the end of the year. This is the “segwit2x” project; named because it takes segregated witness (“segwit”) and then additionally doubles its capacity increase (“2x”). This agreement is not supported by any of the existing dev team, and is being developed by Jeff Garzik (who was behind BIP100 and BIP102 mentioned above) in a forked codebase renamed “btc1“, so if successful, this may also satisfy members of the opposed group motivated by a desire to take control of the bitcoin codebase and roadmap, despite that not being an explicit part of the agreement itself.

To me, the arguments presented for opposing segwit don’t really seem plausible. As far as future development goes, a roadmap was put out in December 2015 and endorsed by many developers that explicitly included a hard fork for increased capacity (“moderate block size increase proposals (such as 2/4/8 …)”), among many other things, so the risk of no further progress happening seems contrary to the facts to me. The core bitcoin devs are extremely capable in my estimation, so replacing them seems a bad idea from the start, but even more than that, they take a notably hands off approach to dictating where Bitcoin will go in future — so, to my mind, it seems like a more sensible thing to try would be working with them to advance the bitcoin ecosystem in whatever direction you want, rather than to try to replace them outright. In that context, it seems particularly notable to me that in the eighteen months between the segregated witness proposal and the segwit2x agreement, there hasn’t been any serious attempt to propose a hard fork capacity increase that meets the core devs’ quality standards; for instance there has never been any code for BIP100, and of the various hard forking codebases that advocates of the hard fork approach have produced — Bitcoin XT, Bitcoin Classic, Bitcoin Unlimited, btc1, and Bitcoin ABC — none have been developed in a way that’s suitable for the changes to be reviewed and merged into core via a pull request in the normal fashion. Further, since one of the main criticisms of a hard fork is that deployment costs are higher when it is done in a short time frame (weeks or a few months versus a year or longer), that lack of engagement over the past 18 months followed by a desperate rush now seems particularly poor to me.

A different explanation for the opposition to segwit became public in April, however. ASICBoost is a patent-pending optimisation to the way Bitcoin miners do the work that entitles them to extend the ledger (for which they receive the rewards described earlier), and while there are a few ways of making use of ASICBoost, perhaps the most effective way turns out to be incompatible with segwit. There are three main alternatives to the covert, segwit-incompatible approach, all of which have serious downsides. The first, overt ASICBoost via modifying the block version reveals that you’re using ASICBoost, which would either (a) encourage other miners to also use the optimisation reducing your profits, (b) give the patent holder cause to charge you royalties or cause other problems (assuming the patent is eventually granted and deemed valid), or (c) encourage the bitcoin community at large to change the ecosystem rules so that the optimisation no longer works. The second, mining empty blocks via ASICBoost means you don’t gain any fee income, reducing your revenue and hence profit. And the third, rolling the extranonce to find a collision rather than combining partial transaction trees increases the preparation work by a factor of ten or so, which is probably enough to outweigh the savings from the optimisation in the first place.

If ASICBoost were being used by a significant number of miners, and segregated witness prevents its continued use in practice, then we suddenly have a very plausible explanation for much of the apparent madness: the loss of the optimisation could significantly increase some miners’ costs or reduce their revenue, reducing profit either way (a high end estimate of $100,000,000 per year was given in the original explanation), which would justify significant investment in blocking that change. Further, an honest explanation of the problem would not be feasible, because this would be just as bad as doing the optimisation overtly — it would increase competition, alert the potential patent owners, and might cause the optimisation to be deliberately disabled — all of which would also negatively affect profits. As a result, there would be substantial opposition to segwit, but the reasons presented in public for this opposition would be false, and it would not be surprising if the people presenting these reasons only put half-hearted effort into providing evidence — their purpose is simply to prevent or at least delay segwit, rather than to actually inform or build a new consensus. To this line of thinking, the emphasis on lack of communication from core devs and the desire for a hard fork block size increase aren’t the actual goals, so the lack of effort being put into resolving them over the past 18 months by the people complaining about them is no longer surprising.

With that background, I think there are two important questions remaining:

  1. Is it plausible that preventing ASICBoost would actually cost people millions in profit, or is that just an intriguing hypothetical that doesn’t turn out to have much to do with reality?
  2. If preserving ASICBoost is a plausible motivation, what will happen with segwit2x, given that by enabling segregated witness, it does nothing to preserve ASICBoost?

Well, stay tuned…

,

Planet DebianNiels Thykier: Approaching the exclusive “sub-minute” build time club

For the first time in at least two years (and probably even longer), debhelper with the 10.6.2 upload broke the 1 minute milestone for build time (by a mere 2 seconds – look for “Build needed 00:00:58, […]”).  Sadly, the result is not deterministic, and the 10.6.3 upload needed 1m + 5s to complete on the buildds.

This is not the result of any optimizations I have done in debhelper itself.  Instead, it is the result of “questionable use of developer time” for the sake of meeting an arbitrary milestone. Basically, I made it possible to parallelize more of the debhelper build (10.6.1) and finally made it possible to run the tests in parallel (10.6.2).

In 10.6.2, I also made most of the tests run against all relevant compat levels.  Previously, they would only run against one compat level (either the current one or a hard-coded older version).

Testing more than one compat turned out to be fairly simple given a proper test library (I wrote a “Test::DH” module for the occasion).  Below is an example, which is the new test case that I wrote for Debian bug #866570.

$ cat t/dh_install/03-866570-dont-install-from-host.t
#!/usr/bin/perl
use strict;
use warnings;
use Test::More;

use File::Basename qw(dirname);
use lib dirname(dirname(__FILE__));
use Test::DH;
use File::Path qw(remove_tree make_path);
use Debian::Debhelper::Dh_Lib qw(!dirname);

plan(tests => 1);

each_compat_subtest {
  my ($compat) = @_;
  # #866570 - leading slashes must *not* pull things from the root FS.
  make_path('bin');
  create_empty_file('bin/grep-i-licious');
  ok(run_dh_tool('dh_install', '/bin/grep*'));
  ok(-e "debian/debhelper/bin/grep-i-licious", "#866570 [${compat}]");
  ok(!-e "debian/debhelper/bin/grep", "#866570 [${compat}]");
  remove_tree('debian/debhelper', 'debian/tmp');
};

I have cheated a bit on the implementation; while the test runs in a temporary directory, the directory is reused between compat levels (accordingly, there is a “clean up” step at the end of the test).

If you want debhelper to maintain this exclusive (and somewhat arbitrary) property (deterministically), you are more than welcome to help me improve the Makefile. 🙂  I am not sure I can squeeze any more out of it with my (lack of) GNU make skills.


Filed under: Debhelper, Debian

Harald WelteTen years after first shipping Openmoko Neo1973

Exactly 10 years ago, on July 9th, 2007 we started to sell+ship the first Openmoko Neo1973. To be more precise, the webshop actually opened a few hours early, depending on your time zone. Sean announced the availability in this mailing list post

I don't really have to add much to my ten years [of starting to work on] Openmoko anniversary blog post a year ago, but still thought it's worth while to point out the tenth anniversary.

It was exciting times, and there was a lot of pioneering spirit: Building a Linux based smartphone with a 100% FOSS software stack on the application processor, including all drivers, userland, applications - at a time before Android was known or announced. As history shows, we'd been working in parallel with Apple on the iPhone, and Google on Android. Of course there's little chance that a small Taiwanese company can compete with the endless resources of the big industry giants, and the many Neo1973 delays meant we had missed the window of opportunity to be the first on the market.

It's sad that Openmoko (or similar projects) has not survived even as a special-interest project for FOSS enthusiasts. Today, virtually all smartphone options are encumbered with way more proprietary blobs than we could ever imagine back then.

In any case, the tenth anniversary of trying to change the amount of Free Software in the smartphone world is worth some celebration. I'm reaching out to old friends and colleagues, and I guess we'll have somewhat of a celebration party both in Germany and in Taiwan (where I'll be for my holidays from mid-September to mid-October).

Sam VargheseFrench farce spoils great Test series in New Zealand

Referees or umpires can often put paid to an excellent game of any sport by making stupid decisions. When this happens — and it does so increasingly these days — the reaction of the sporting body concerned is to try and paper over the whole thing.

Additionally, teams and their coaches/managers are told not to criticise referees or umpires and to respect them. Hence a lot tends to be covered up.

But the fact is that referees and umpires are employees who are being paid well, especially when the sports they are officiating are high-profile. Do they not need to be competent in what they do?

And you can’t get much higher profile than a deciding rugby test between the New Zealand All Blacks and the British and Irish Lions. A French referee, Romain Poite, screwed up the entire game in the last few minutes through a wrong decision.

Poite awarded a penalty to the All Blacks when a restart found one of the Lions players offside. He then changed his mind and awarded a scrum to the All Blacks instead, using mafia-like language (“we will make a deal about this”) before he mentioned the change of decision.

When he noticed the infringement initially, Poite should have held off blowing his whistle and allowed New Zealand the advantage as one of their players had gained possession of the ball and was making inroads into Lions territory. But he did not.

He blew, almost as a reflex action, and stuck his arm up to signal a penalty to New Zealand. It was in a position which was relatively easy to convert and would have given New Zealand almost certain victory as the teams were level 15-all at that time. There were just two minutes left to play when this incident happened.

The New Zealand coach Steve Hansen tried to paper over things at his post-match press conference by saying that his team should have sewn up things much earlier — they squandered a couple of easy chances and also failed to kick a penalty and convert one of their two tries — and could not blame Poite for their defeat.

This kind of talk is diplomacy of the worst kind. It encourages incompetent referees.

One can cast one’s mind back to 2007 and the quarter-finals of the World Cup rugby tournament when England’s Wayne Barnes failed to spot a forward pass and awarded France a try which gave them a 20-18 lead over New Zealand; ultimately the French won the game by this same score.

Barnes was never pulled into line and to this day he seems to be unable to spot a forward pass. He continues to referee international games and must have quite powerful sponsors to continue.

Hansen did make one valid point though: that there should be consistency in decisions. And that did not happen either over the three tests. It is funny that referees use the same rulebook and interpret things differently depending on whether they are from the southern hemisphere or northern hemisphere.

Is there no chief of referees to thrash out a common ruling for the officials? It makes rugby look very amateurish and spoils the game for the viewer.

Associations that run various sports are often heard complaining that people do not come to watch games. Put a couple more people like Poite to officiate and you will soon have empty stadiums.

Planet DebianSteinar H. Gunderson: Nageru 1.6.1 released

I've released version 1.6.1 of Nageru, my live video mixer.

Now that Solskogen is coming up, there's been a lot of activity on the Nageru front, but hopefully everything is actually coming together now. Testing has been good, but we'll see whether it stands up to the battle-hardening of the real world or not. Hopefully I won't be needing any last-minute patches. :-)

Besides the previously promised Prometheus metrics (1.6.1 ships with a rather extensive set, as well as an example Grafana dashboard) and frame queue management improvements, a surprising late addition was that of a new transcoder called Kaeru (following the naming style of Nageru itself, from the Japanese verb kaeru (換える), which means roughly to replace or exchange—iKnow! claims it can also mean “convert”, but I haven't seen support for this anywhere else).

Normally, when I do streams, I just let Nageru do its thing and send out a single 720p60 stream (occasionally 1080p), usually around 5 Mbit/sec; less than that doesn't really give good enough quality for the high-movement scenarios I'm after. But Solskogen is different in that there's a pretty diverse audience when it comes to networking conditions; even though I have a few mirrors spread around the world (and some JavaScript to automatically pick the fastest one; DNS round-robin is really quite useless here!), not all viewers can sustain such a bitrate. Thus, there's also a 480p variant with around 1 Mbit/sec or so, and it needs to come from somewhere.

Traditionally, I've been using VLC for this, but streaming is really a niche thing for VLC. I've been told it will be an increased focus for 4.0 now that 3.0 is getting out the door, but over the last few years, there's been a constant trickle of little issues that have been breaking my transcoding pipeline. My solution for this was to simply never update VLC, but now that I'm up to stretch, this didn't really work anymore, and I'd been toying around with the idea of making a standalone transcoder for a while. (You'd ask “why not the ffmpeg(1) command-line client?”, but it's a bit too centered around files and not streams; I use it for converting to HLS for iOS devices, but it has a nasty habit of I/O blocking real work, and its HTTP server really isn't meant for production work. I could survive the latter if it supported Metacube and I could feed it into Cubemap, but it doesn't.)

It turned out Nageru had already grown most of the pieces I needed; it had video decoding through FFmpeg, x264 encoding with speed control (so that it automatically picks the best preset the machine can sustain at any given time) and muxing, audio encoding, proper threading everywhere, and a usable HTTP server that could output Metacube. All that was required was to add audio decoding to the FFmpeg input, and then replace the GPU-based mixer and GUI with a very simple driver that just connects the decoders to the encoders. (This means it runs fine on a headless server with no GPU, but it also means you'll get FFmpeg's scaling, which isn't as pretty or fast as Nageru's. I think it's an okay tradeoff.) All in all, this was only about 250 lines of delta, which pales compared to the ~28000 lines of delta that are between 1.3.1 (used for last Solskogen) and 1.6.1. It only supports a rather limited set of Prometheus metrics, and it has some limitations, but it seems to be stable and deliver pretty good quality. I've denoted it experimental for now, but overall, I'm quite happy with how it turned out, and I'll be using it for Solskogen.

Nageru 1.6.1 is on its way into Debian, but it depends on a new version of Movit which needs to go through the NEW queue (a soname bump), so it might be a few days. In the meantime, I'll be busy preparing for Solskogen. :-)

,

Krebs on SecuritySelf-Service Food Kiosk Vendor Avanti Hacked

Avanti Markets, a company whose self-service payment kiosks sit beside shelves of snacks and drinks in thousands of corporate breakrooms across America, has suffered a breach of its internal networks in which hackers were able to push malicious software out to those payment devices, the company has acknowledged. The breach may have jeopardized customer credit card accounts as well as biometric data, Avanti warned.

According to Tukwila, Wash.-based Avanti’s marketing literature, some 1.6 million customers use the company’s break room self-checkout devices — which allow customers to pay for drinks, snacks and other food items with a credit card, fingerprint scan or cash.

An Avanti Markets kiosk. Image: Avanti

Sometime in the last few hours, Avanti published a “notice of data breach” on its Web site.

“On July 4, 2017, we discovered a sophisticated malware attack which affected kiosks at some Avanti Markets. Based on our investigation thus far, and although we have not yet confirmed the root cause of the intrusion, it appears the attackers utilized the malware to gain unauthorized access to customer personal information from some kiosks. Because not all of our kiosks are configured or used the same way, personal information on some kiosks may have been adversely affected, while other kiosks may not have been affected.”

Avanti said it appears the malware was designed to gather certain payment card information including the cardholder’s first and last name, credit/debit card number and expiration date.

Breaches at point-of-sale vendors have become almost regular occurrences over the past few years, but this breach is especially notable as it may also have jeopardized customer biometric data. That’s because the newer Avanti kiosk systems allow users to pay using a scan of their fingerprint.

“In addition, users of the Market Card option may have had their names and email addresses compromised, as well as their biometric information if they used the kiosk’s biometric verification functionality,” the company warned.

On Thursday, KrebsOnSecurity learned from a source at a law firm that the food vending machine in its employee lunchroom was no longer able to accept credit cards. The source said his firm’s information technology personnel told him the credit card functionality had been temporarily disabled because of a breach at Avanti.

Another source told this author that Avanti’s corporate network had been breached, and that Avanti had made the decision to turn off all self-checkouts for now — although the source said customers could still use cash at the machines.

“I was told that about half of the self-checkouts do not have P2Pe,” the source said, on condition of anonymity. P2Pe is short for “point-to-point encryption,” and it’s a technological solution that encrypts sensitive data such as credit card information at every point in the card transaction. In theory, P2Pe should be able to protect card data even if there is malicious software resident on the device or network in question.
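As a toy illustration of the P2Pe idea (my own sketch, not how any real payment terminal is implemented): the reader encrypts track data the moment it is captured, so malware sitting anywhere between the reader and the processor only ever sees ciphertext.

from cryptography.fernet import Fernet

# Hypothetical setup: a key provisioned into the reader's secure hardware;
# real schemes use hardware key management and per-transaction derived keys.
processor_key = Fernet.generate_key()
reader = Fernet(processor_key)

track_data = b"4111111111111111|12/19|DOE/JOHN"  # standard test card number
ciphertext = reader.encrypt(track_data)          # encrypted at the read head

print(ciphertext)                                # all that kiosk malware would see
print(Fernet(processor_key).decrypt(ciphertext)) # only the key holder recovers it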

Avanti said in its notice that it had shut down payment processing at some locations, and that the company was working with its operators to purge infected systems of any malware from the attack and to take steps to “substantially minimize the risk of a data compromise in the future.”

THE MALWARE

On Friday evening, security firm RiskAnalytics published a blog post detailing an experience from a customer that was remarkably similar to the one referenced by the anonymous law firm source above.

RiskAnalytics’s Noah Dunker wrote that the company’s technology on July 4 flagged suspicious behavior by a break room vending kiosk. Further inspection of the device and communications traffic emanating from it revealed it was infected with a family of point-of-sale malware known as PoSeidon (a.k.a. “FindPOS”) that siphons credit card data from point-of-sale devices.

“In our analysis of the incident, it seems most likely that the larger vendor was compromised, and some or all of the kiosks maintained by local vendors were impacted,” Dunker wrote. “We’ve been able to identify at least two smaller vendors with local operations that have been impacted in two different cities though we are not naming any impacted vendors yet, as we’ve been unable to contact them directly.”

KrebsOnSecurity reached out to RiskAnalytics to see if the vendor of the snack machine used by the victim organization he wrote about also was Avanti. Dunker confirmed that the kiosk vendor that was the subject of his post was indeed Avanti.

Dunker noted that much like point-of-sale devices at many restaurant chains, these snack machines usually are installed and managed by third-party technology companies, adding another layer of complexity to the challenge of securing these devices from hackers.

Dunker said RiskAnalytics first noticed something wasn’t right with its client’s break room snack machine after it began sending data out of the client’s network using an SSL encryption certificate that has long been associated with cybercrime activity — including ransomware activity dating back to 2015.

“This is a textbook example of an ‘Internet of Things’ (IoT) threat: A network-connected device, controlled and maintained by a third party, which cannot be easily patched, audited, or controlled by your own IT staff,” Dunker wrote.

ANALYSIS

Credit card machines and point-of-sale devices are favorite targets of malicious hackers, mainly because the data stolen from those systems is very easy to monetize. However, the point-of-sale industry has a fairly atrocious record of building insecure products and trying to tack on security only after the products have already gone to market. Given this history, it’s remarkable that some of these same vendors are now encouraging customers to entrust them with biometric data.

Credit cards can be re-issued, but biometric identifiers are for life. Companies that choose to embed biometric capabilities in their products should be held to a far higher security standard than the one used to protect card data.

For starters, any device that requests, stores or transmits biometric data should at a minimum ensure that the data remains strongly encrypted both at rest and in transit. Judging by Avanti’s warning that some customer biometric data may have been compromised in this breach, it seems this may not have been the case for at least a subset of their products.

I would like to see some industry acknowledgement of this before we start to see more stand-alone payment applications entice users to supply biometric data, but I share Dunker’s fear that we may soon see biometric components added to a whole host of Internet-connected (IoT) devices that simply were not designed with security in mind.

Also, breaches like this illustrate why it’s critically important for organizations to segment their internal networks, and to keep payment systems completely isolated from the rest of the network. However, neither of the victim organizations referenced above appear to have taken this important precaution.

To illustrate this concept a bit further, it may well be that the criminal masterminds behind this attack could have made far more money had they used the remote access they apparently had to these Avanti devices to push ransomware out to Microsoft Windows computers residing on the same internal network as the payment kiosks.

Planet Linux AustraliaLev Lafayette: One Million Jobs for Spartan

Whilst it is a loose metric, our little cluster, "Spartan", at the University of Melbourne ran its one millionth job today, almost exactly a year since launch.

The researcher in question is doing their PhD in biochemistry. The project is a childhood asthma study:

"The nasopharynx is a source of microbes associated with acute respiratory illness. Respiratory infection and/ or the asymptomatic colonisation with certain microbes during childhood predispose individuals to the development of asthma.

Using data generated from 16S rRNA sequencing and metagenomic sequencing of nasopharynx samples, we aim to identify which specific microbes and interactions are important in the development of asthma."

Moments like this are why I do HPC.

Congratulations to the rest of the team and to the user community.


Planet DebianUrvika Gola: Outreachy Progress on Lumicall

Lumicall 1.13.0 is released! 😀

Through Lumicall, you can make encrypted calls and send messages using open standards. It uses the SIP protocol to inter-operate with other apps and corporate telephone systems.

During the Outreachy internship period, I worked on the following issues:

I researched creating a white label version of Lumicall. A few ideas on how the white label build could be used:

  1. Existing SIP providers can use a white label version of Lumicall to expand their business and launch a SIP client. This would provide a one stop shop for them!!
  2. New SIP clients/developers can use the Lumicall white label version to get the underlying machinery for making encrypted phone calls using the SIP protocol, helping them focus on the additional functionality they would like to include.

Documentation for implementing white labelling – Link 1 and Link 2

 

Since Lumicall is mainly used to make encrypted calls, there was a need to designate quiet times during which the phone will not make an audible ringing tone. If the user has multiple SIP accounts, the silent mode functionality can be set on just one of them, maybe the Work account.
Documentation for adding silent mode feature  – Link 1 and Link 2

 

  • Adding 9 Patch Image 

Using Lumicall, users can send SIP messages across. Just to improve the UI a little, I added a 9 patch image to the message screen. A 9 patch image is created using the 9 patch tool and is saved as imagename.9.png. The image will resize itself according to the text length and font size.

Documentation for 9 patch image – Link


You can try the new version of Lumicall here! You can learn more about Lumicall in a blog post by Daniel Pocock.
Looking forward to your valuable feedback!! 😀


Planet DebianDaniel Silverstone: Gitano - Approaching Release - Access Control Changes

As mentioned previously I am working toward getting Gitano into Stretch. A colleague and friend of mine (Richard Maw) did a large pile of work on Lace to support what we are calling sub-defines. These let us simplify Gitano's ACL files, particularly for individual projects.

In this posting, I'd like to cover what has changed with the access control support in Gitano, so if you've never used it then some of this may make little sense. Later on, I'll be looking at some better user documentation in conjunction with another friend of mine (Lars Wirzenius) who has promised to help produce a basic administration manual before Stretch is totally frozen.

Sub-defines

With a more modern lace (version 1.3 or later) there is a mechanism we are calling 'sub-defines'. Previously if you wanted to write a ruleset which said something like "Allow Steve to read my repository" you needed:

define is_steve user exact steve
allow "Steve can read my repo" is_steve op_read

And, as you'd expect, if you also wanted to grant read access to Jeff then you'd need yet another set of defines:

define is_jeff user exact jeff
define is_steve user exact steve
define readers anyof is_jeff is_steve
allow "Steve and Jeff can read my repo" readers op_read

This, while flexible (and still entirely acceptable), is wordy for small rulesets, and so we added sub-defines to create this syntax:

allow "Steve and Jeff can read my repo" op_read [anyof [user exact jeff] [user exact steve]]

Of course, this is generally neater for simpler rules; if you wanted to add another user then it might make sense to go for:

define readers anyof [user exact jeff] [user exact steve] [user exact susan]
allow "My friends can read my repo" op_read readers

The nice thing about this sub-define syntax is that it's basically usable anywhere you'd use the name of a previously defined thing, they're compiled in much the same way, and Richard worked hard to get good error messages out from them just in case.

No more auto_user_XXX and auto_group_YYY

As a result of the above being implemented, the support Gitano previously grew for automatically defining users and groups has been removed. The approach we took was pretty inflexible and risked compilation errors if a user was deleted or renamed, and so the sub-define approach is much much better.

If you currently use auto_user_XXX or auto_group_YYY in your rulesets then your upgrade path isn't bumpless but it should be fairly simple:

  1. Upgrade your version of lace to 1.3
  2. Replace any auto_user_FOO with [user exact FOO], and similarly any auto_group_BAR with [group exact BAR].
  3. You can now upgrade Gitano safely.

No more 'basic' matches

Since Gitano first gained support for ACLs using Lace, we had a mechanism called 'simple match' for basic inputs such as groups, usernames, repo names, ref names, etc. Simple matches looked like user FOO or group !BAR. The match syntax grew more and more arcane as we added Lua pattern support (refs ~^refs/heads/${user}/). When we wanted to add proper PCRE regex support we added a syntax of the form user pcre ^/.+?..., where pcre could be any of: exact, prefix, suffix, pattern, or pcre. We had a complex set of rules for exactly what the sigils at the start of the match string might mean in what order, and it was getting unwieldy.

To simplify matters, none of the "backward compatibility" remains in Gitano. You instead MUST use the what how with match form. To make this slightly more natural to use, we have added a bunch of aliases: is for exact, starts and startswith for prefix, and ends and endswith for suffix. In addition, kind of match can be prefixed with a ! to invert it, and for natural looking rules not is an alias for !is.
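For instance, here are a few rules in the explicit form, using only the constructs described above (a sketch; the usernames are made up):

define is_steve user is steve
define is_gitano_ref ref startswith refs/gitano/
define not_steve user not is steve
allow "Steve can read my repo" is_steve op_read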

This means that your rulesets MUST be updated to support the more explicit syntax before you update Gitano, or else nothing will compile. Fortunately this form has been supported for a long time, so you can do this in three steps.

  1. Update your gitano-admin.git global ruleset. For example, the old form of the defines used to contain define is_gitano_ref ref ~^refs/gitano/ which can trivially be replaced with: define is_gitano_ref ref prefix refs/gitano/
  2. Update any non-zero rulesets your projects might have.
  3. You can now safely update Gitano

If you want a reference for making those changes, you can look at the Gitano skeleton ruleset which can be found at https://git.gitano.org.uk/gitano.git/tree/skel/gitano-admin/rules/ or in /usr/share/gitano if Gitano is installed on your local system.

Next time, I'll likely talk about the deprecated commands which are no longer in Gitano, and how you'll need to adjust your automation to use the new commands.

,

CryptogramFriday Squid Blogging: Why It's Hard to Track the Squid Population

Counting squid is not easy.

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

CryptogramAn Assassin's Teapot

This teapot has two chambers. Liquid is released from one or the other depending on whether an air hole is covered. I want one.

Krebs on SecurityB&B Theatres Hit in 2-Year Credit Card Breach

B&B Theatres, a company that owns and operates the 7th-largest theater chain in America, says it is investigating a breach of its credit card systems. The acknowledgment comes just days after KrebsOnSecurity reached out to the company for comment on reports from financial industry sources who said they suspected the cinema chain has been leaking customer credit card data to cyber thieves for the past two years.

Headquartered in Gladstone, Missouri, B&B Theatres operates approximately 400 screens across 50 locations in states including Arkansas, Arizona, Florida, Kansas, Missouri, Mississippi, Nebraska, Oklahoma and Texas.

In a written statement forwarded by B&B spokesman Paul Farnsworth, the company said B&B Theatres was made aware of a potential breach by a local banking partner in one of its communities.

“Upon being notified we immediately engaged Trustwave, a third party security firm recommended to B&B by partners at major credit card brands, to work with our internal I.T. resources to contain the breach and mitigate any further potential penetration,” the statement reads. “While some malware was identified on B&B systems that dated back to 2015, the investigation completed by Trustwave did not conclude that customer data was at risk on all B&B systems for the entirety of the breach.”

The statement continued:

“Trustwave’s investigation has since shown the breach to be contained to the satisfaction of our processing partners as well as the major credit card brands. B&B Theatres values the security of our customer’s data and will continue to implement the latest available technologies to keep our networks & systems secure into the future.”

In June, sources at two separate U.S.-based financial institutions reached out to KrebsOnSecurity about alerts they’d received privately from the credit card associations regarding lists of card numbers that were thought to have been compromised in a recent breach.

The credit card companies generally do not tell financial institutions in these alerts which merchants got breached, leaving banks and credit unions to work backwards from those lists of compromised cards to a so-called “common point-of-purchase” (CPP).

In addition to lists of potentially compromised card numbers, the card associations usually include a “window of exposure” — their best estimate of how long the breach lasted. Two financial industry sources said initial reports from the credit card companies said the window of exposure at B&B Theatres was between Sept. 1, 2015 and April 7, 2017.

However, a more recent update to this advisory shared by my sources shows that the window of exposure is currently estimated between April 2015 and April 2017, meaning cyber thieves have likely been siphoning credit and debit card data from B&B Theatres customers for nearly two years undisturbed.

Malicious hackers can steal credit card data from organizations that accept cards by hacking into point-of-sale systems remotely and seeding those systems with malicious software that can copy account data stored on a card’s magnetic stripe. Thieves can then use that data to clone the cards and use the counterfeit cards to buy high-priced merchandise from electronics stores and big box retailers.

The statement from B&B Theatres made no mention of whether their credit card systems were set up to handle transactions from more secure chip-based credit and debit cards, which are far more difficult and expensive for thieves to counterfeit.

Under credit card association rules that went into effect in 2015, merchants that do not have the ability to process transactions from chip-based cards assume full liability for all fraudulent charges on purchases involving chip-enabled cards that were instead merely swiped through a regular mag-stripe reader at the point of purchase.

If there is a silver lining in this breach of a major silver screens operator, perhaps it is this: One source in the financial industry told this author that the breach at B&B persisted for so long that a decent percentage of the cards listed in the alerts his employer received from the credit card companies had been listed as compromised in other major breaches and so had already been canceled and re-issued.

Interested in learning the back story of why the United States is embarrassingly the last of the G20 nations to move to more secure chip-based cards? Ever wondered why so many retailers have chip-enabled readers at the checkout counter but are still asking you to swipe? Curious about how cyber thieves have adapted to this bonanza of credit card booty? If you answered “yes” to any of the above questions, you may find these stories useful and/or interesting.

Why So Many Card Breaches?

The Great EMV Fake-Out: No Chip for You!

Visa Delays Chip Deadline for Pumps to 2020

Chip & PIN vs. Chip and Signature

CryptogramDNI Wants Research into Secure Multiparty Computation

The Intelligence Advanced Research Projects Activity (IARPA) is soliciting proposals for research projects in secure multiparty computation:

Specifically of interest is computing on data belonging to different -- potentially mutually distrusting -- parties, which are unwilling or unable (e.g., due to laws and regulations) to share this data with each other or with the underlying compute platform. Such computations may include oblivious verification mechanisms to prove the correctness and security of computation without revealing underlying data, sensitive computations, or both.
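
That phrase -- computing on data belonging to mutually distrusting parties -- is easier to picture with a toy example. Below is a minimal sketch in Python of additive secret sharing, one classic MPC building block: three parties learn the sum of their private inputs without revealing the inputs themselves. It is purely illustrative (the modulus, party count, and input values are all invented for the demo); real MPC protocols, including the oblivious verification mechanisms the solicitation describes, layer far more machinery on top.

import secrets

P = 2**61 - 1  # public prime modulus; assumes every private value is below P

def share(value, n_parties):
    """Split `value` into n random shares that sum to it mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

inputs = [42, 17, 99]                        # each party's private value
all_shares = [share(v, 3) for v in inputs]
# Party i receives the i-th share of every input; no single share reveals anything.
partials = [sum(col) % P for col in zip(*all_shares)]
# Publishing the partial sums reconstructs only the total, never the inputs.
assert sum(partials) % P == sum(inputs)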

My guess is that this is to perform analysis using data obtained from different surveillance authorities.

CryptogramFriday Squid Blogging: Food Supplier Passes Squid Off as Octopus

According to a lawsuit (main article behind paywall), "a Miami-based food vendor and its supplier have been misrepresenting their squid as octopus in an effort to boost profits."

As usual, you can also use this squid post to talk about the security stories in the news that I haven't covered.

Read my blog posting guidelines here.

Worse Than FailureError'd: Unmapped Potential

"As an Australian, I demand that they replace one of the two Belgiums with something to represent the quarter of the Earth they missed!" writes John A.


Andrew wrote, "{47}, {48}, and I would be {49} if {50} and {51}."


"Apparently, DoorDash is listening to their drivers about low wages and they 'fixed the glitch'," write Mark H.


"Advertising in Chicago's Ogilvie transportation center in need of recovery," writes Dave T.


"On the one hand, I'm interested in how Thunderbird plans to quaduple my drive space, but I'm kind of scared to click on the button," wrote Josh B.


"Good thing the folks at Microsoft put the little message in parenthesis," writes Bobbie, "else, you know, I might expect a touch bar to magically appear on my laptop."


[Advertisement] Scale your release pipelines, creating secure, reliable, reusable deployments with one click. Download and learn more today!

Planet Linux AustraliaDanielle Madeley: Using the Nitrokey HSM with GPG in macOS

Getting yourself set up in macOS to sign keys using a Nitrokey HSM with gpg is non-trivial. Allegedly (at least some) Nitrokeys are supported by scdaemon (GnuPG’s stand-in abstraction for cryptographic tokens) but it seems that the version of scdaemon in brew doesn’t have support.

However, there is gnupg-pkcs11-scd, a replacement for scdaemon that uses PKCS #11. Unfortunately, it’s a bit of a hassle to set up.

There’s a bunch of things you’ll want to install from brew: opensc, gnupg, gnupg-pkcs11-scd, pinentry-mac, openssl and engine_pkcs11.

brew install opensc gnupg gnupg-pkcs11-scd pinentry-mac \
    openssl engine_pkcs11

gnupg-pkcs11-scd won’t create keys, so if you’ve not made one already, you need to generate yourself a keypair. Which you can do with pkcs11-tool:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --keypairgen --key-type rsa:2048 \
    --id 10 --label 'Danielle Madeley'

The --id can be any hexadecimal id you want. It’s up to you to avoid collisions.

Then you’ll need to generate a self-signed X.509 certificate for this keypair (you’ll need both the PEM form and the DER form):

/usr/local/opt/openssl/bin/openssl << EOF
engine -t dynamic \
    -pre SO_PATH:/usr/local/lib/engines/engine_pkcs11.so \
    -pre ID:pkcs11 \
    -pre LIST_ADD:1 \
    -pre LOAD \
    -pre MODULE_PATH:/usr/local/lib/opensc-pkcs11.so
req -engine pkcs11 -new -key 0:10 -keyform engine \
    -out cert.pem -text -x509 -days 3640 -subj '/CN=Danielle Madeley/'
x509 -in cert.pem -out cert.der -outform der
EOF

The flag -key 0:10 identifies the token and key id you’re using (see above, where you created the key). If you want to refer to a different token or key id, you can change these.

And import it back into your HSM:

pkcs11-tool --module /usr/local/lib/opensc-pkcs11.so -l \
    --write-object cert.der --type cert \
    --id 10 --label 'Danielle Madeley'

You can then configure gpg-agent to use gnupg-pkcs11-scd. Edit the file ~/.gnupg/gpg-agent.conf:

scdaemon-program /usr/local/bin/gnupg-pkcs11-scd
pinentry-program /usr/local/bin/pinentry-mac

And the file ~/.gnupg/gnupg-pkcs11-scd.conf:

providers nitrokey
provider-nitrokey-library /usr/local/lib/opensc-pkcs11.so

gnupg-pkcs11-scd is pretty nifty in that it will throw up a (pin entry) dialog if your token is not available, and is capable of supporting multiple tokens and providers.

Reload gpg-agent:

gpg-agent --server gpg-connect-agent << EOF
RELOADAGENT
EOF

Check your new agent is working:

gpg --card-status

Get your key handle (grip), which is the 40-character hex string after the phrase KEY-FRIEDNLY (sic):

gpg-agent --server gpg-connect-agent << EOF
SCD LEARN
EOF

Import this key into gpg as an ‘Existing key’, giving the key grip above:

gpg --expert --full-generate-key

You can now use this key as normal, create sub-keys, etc:

gpg -K
/Users/danni/.gnupg/pubring.kbx
-------------------------------
sec> rsa2048 2017-07-07 [SCE]
 1172FC7B4B5755750C65F9A544B80C280F80807C
 Card serial no. = 4B43 53233131
uid [ultimate] Danielle Madeley <danielle@madeley.id.au>

echo -n "Hello World" | gpg --armor --clearsign --textmode

Side note: the curses-based pinentry doesn’t deal with piping content into stdin, which is why you want pinentry-mac.

[Image: Terminal console showing a gpg signing command, with a dialog box over the top prompting the user to insert her Nitrokey token.]

You can also import your certificate into gpgsm:

gpgsm --import < ca-certificate
gpgsm --learn-card

And that’s it, now you can sign your git tags with your super-secret private key, or whatever it is you do. Remember that you can’t exfiltrate the secret keys from your HSM in the clear, so if you need a backup you can create a DKEK backup (see the SmartcardHSM docs), or make sure you’ve generated that revocation certificate, or just decide that disaster recovery is for dweebs.

Don MartiTwo approaches to adfraud, and some good news

Adfraud is a big problem, and we keep seeing two basic approaches to it.

Flight to quality: Run ads only on trustworthy sites. Brands are now playing the fraud game with the "reputation coprocessors" of the audience's brains on the brand's side. (Flight to quality doesn't mean just advertise on the same major media sites as everyone else—it can scale downward with, for example, the Project Wonderful model that lets you choose sites that are "brand safe" for you.)

Increased surveillance: Try to fight adfraud by continuing to play the game of trying to get big-money impressions from the cheapest possible site, but throw more tracking at the problem. Biggest example of this is to move ad money to locked-down mobile platforms and away from the web.

The problem with the second approach is that the audience is no longer on the brand's side. Trying to beat adfraud with technological measures is just challenging hackers to a series of hacking contests. And brands keep losing those. Recent news: The Judy Malware: Possibly the largest malware campaign found on Google Play.

Anyway, I'm interested in and optimistic about the results of the recent Mozilla/Caribou Digital report. It turns out that USA-style adtech is harder to do in countries where users are (1) less accurately tracked and (2) equipped with blockers to avoid bandwidth-sucking third-party ads. That's likely to mean better prospects for ad-supported news and cultural works, not worse. This report points out the good news that the so-called adtech tax is lower in developing countries—so what kind of ad-supported businesses will be enabled by lower "taxes" and "reinvention, not reinsertion" of more magazine-like advertising?

Of course, working in those markets is going to be hard for big US or European ad agencies that are now used to solving problems by throwing creepy tracking at them. But the low rate of adtech taxation sounds like an opportunity for creative local agencies and brands. Maybe the report should have been called something like "The Global South is Shitty-Adtech-Proof, so Brands Built Online There Are Going to Come Eat Your Lunch."

,

Google AdsenseIntroducing AdSense Native ads

Today we’re introducing the new AdSense Native ads -- a suite of ad formats designed to match the look and feel of your site, providing a great user experience for your visitors. AdSense Native ads come in three categories: In-feed, In-article, and Matched content*. They can all be used at once or individually and are designed for:

  • A great user experience: they fit naturally on your site and use high quality advertiser elements, such as high resolution images, longer titles and descriptions, to provide a more attractive experience for your visitors. 
  • A great look and feel across different screen sizes: the ads are built to look great on mobile, desktop, and tablet. 
  • Ease of use: easy-to-use editing tools help you make the ads look great on your site.

Native In-feed opens up new revenue opportunity in your feeds
Available to all publishers, In-feed ads slot neatly inside your feeds, e.g. a list of articles or products on your site. These ads are highly customizable to match the look and feel of your feed content and offer new places to show ads.

Native In-article offers a better advertising experience
Available to all publishers, In-article ads are optimized by Google to help you put great-looking ads between the paragraphs of your pages. In-article ads use high-quality advertising elements and offer a great reading experience to your visitors.

Matched Content* drives more users to your content 
Available to publishers that meet the eligibility criteria, Matched content is a content recommendation tool that helps you promote your content to visitors and potentially increase revenue, page views, and time spent on site. Publishers that are eligible for the “Allow ads” feature can also show relevant ads within their Matched content units, creating an additional revenue opportunity in this placement.

Getting started with AdSense Native ads 
AdSense Native ads can be placed together, or separately, to customize your website’s ad experience. Use In-feed ads inside your feed (e.g. a list of articles, or products), In-article ads between the paragraphs of your pages, and Matched content ads directly below your articles.  When deciding your native strategy, keep the content best practices in mind.

To get started with AdSense Native ads:

  • Sign in to your AdSense account
  • In the left navigation panel, click My ads
  • Click +New ad unit
  • Select your ad category: In-article, In-feed or Matched content

We'd love to hear what you think about these ad formats in the comments section below this post.

Posted by: Violetta Kalathaki - AdSense Product Manager

*This format has already been available to eligible publishers, and is now part of AdSense Native ads.

LongNowThe Hermit Who Inadvertently Shaped Climate-Change Science

Billy Barr was just trying to get away from it all when he went to live at the base of Gothic Mountain in the Colorado wilderness in 1973. He wound up creating an invaluable historical record of climate change. His motivation for meticulously logging the changing temperatures, snow levels, weather, and wildlife sightings? Simple boredom.

[Photo: Morgan Heim / Day’s Edge Production]

Now, the Rocky Mountain Biological Laboratory is using his 44 years of data to understand how climate change affects Gothic Mountain’s ecology, and scaling those findings to high alpine environments around the world.

Read The Atlantic’s profile on Billy Barr in full (LINK). 

Cory DoctorowHow to write pulp fiction that celebrates humanity’s essential goodness


My latest Locus column is “Be the First One to Not Do Something that No One Else Has Ever Not Thought of Doing Before,” and it’s about science fiction’s addiction to certain harmful fallacies, like the idea that you can sideline the actual capabilities and constraints of computers in order to advance the plot of a thriller.


That’s the idea I turned on its head with my 2008 novel Little Brother, a book in which what computers can and can’t do were used to determine the plot, not the other way around.

With my latest novel Walkaway, I went after another convenient fiction: the idea that in disasters, people are at their very worst, neighbor turning on neighbor. In Walkaway, disaster is the moment at which the refrigerator hum of petty grievances stops and leaves behind a ringing moment of silent clarity, in which your shared destiny with other people trumps all other questions.


In this scenario — which hews much more closely to the truth of humanity under conditions of crisis — the fight isn’t between good guys and bad guys: it’s between people of goodwill who still can’t agree on what to do, and between people who are trying to help and people who are convinced that any attempt to help is a pretense covering up a bid to conquer and subjugate.


In its own way, man-vs-man-vs-nature is every bit as much a fantasy as the technothriller’s impossible, plot-expedient computers. As Rebecca Solnit documented in her must-read history book A Paradise Built in Hell, disaster is not typically attended by a breakdown in the social order that lays bare the true bestial nature of your fellow human. Instead, these are moments in which people rise brilliantly to the occasion, digging their neighbors out of the rubble, rushing to give blood, opening their homes to strangers.

Of course, it’s easy to juice a story by ignoring this fact, converting disaster to catastrophe by pitting neighbor against neighbor, and this approach has the added bonus of pandering to the reader’s lurking (or overt) racism and classism – just make the baddies poor and/or brown.

But what if a story made the fact of humanity’s essential goodness the center of its conflict? What if, after a disaster, everyone wanted to help, but no one could agree on how to do so? What if the argument was not between enemies, but friends – what if the fundamental, irreconcilable differences were with people you loved, not people you hated?

As anyone who’s ever had a difficult Christmas dinner with family can attest, fighting with people you love is a lot harder than fighting with people you hate.

Be the First One to Not Do Something that No One Else Has Ever Not Thought of Doing Before [Cory Doctorow/Locus]

Worse Than FailureAnnouncements: Build Totally Non-WTF Products at Inedo

As our friends at HIRED will attest, finding a good workplace is tough, for both the employee and the employer. Fortunately, when it comes to looking for developer talent, Inedo has a bit of an advantage: in addition to being a DevOps products company, we publish The Daily WTF.

Not too long ago, I shared a Support Analyst role here and ended up hiring fellow TDWTFer Ben Lubar to join the Inedo team. He's often on the front lines, supporting our customer base; but he's also done some interesting dev projects as well (including a SourceGear Vault to Git migration tool).

Today, we're looking for another developer to work from our Cleveland office. Our code is all in .NET, but we have a lot of integrations; so if you can write C# fairly comfortably but know Docker really well, then that's a great fit. The reason is that, as a software product company that builds tools for other developers, you'll do more than just write C# - in fact, a big part of the job will be resisting the urge to write mountains of code that don't actually solve a real problem. More often than not, a bit of support, a tutorial, an extension/plug-in, and better documentation go a heck of a lot further than new core product code.

We do have a couple of job postings for the position (one on Inedo.com, the other on Indeed), and you're welcome to read those to get a feel for the actual bullet points. But if you're reading this and are interested in learning more, you can use the VIP line and bypass the normal process: just shoot me an email directly at apapadimoulis at inedo dot com with "[TDWTF/Inedo] .NET Developer" as the subject and your resume attached.

Oh, we're also looking for a Community Manager, to help with both The Daily WTF and Inedo communities. So if you know anyone who might be interested in that, send them my way!

[Advertisement] Manage IT infrastructure as code across all environments with Puppet. Puppet Enterprise now offers more control and insight, with role-based access control, activity logging and all-new Puppet Apps. Start your free trial today!

CryptogramNow It's Easier than Ever to Steal Someone's Keys

The website key.me will make a duplicate key from a digital photo.

If a friend or coworker leaves their keys unattended for a few seconds, you know what to do.

Planet Linux AustraliaRusty Russell: Broadband Speeds, 2 Years Later

Two years ago, considering the blocksize debate, I made two attempts to measure average bandwidth growth, first using Akamai serving numbers (which gave an answer of 17% per year), and then using fixed-line broadband data from OFCOM UK, which gave an answer of 30% per annum.

We have two years more of data since then, so let’s take another look.

OFCOM (UK) Fixed Broadband Data

First, the OFCOM data:

  • Average download speed in November 2008 was 3.6Mbit
  • Average download speed in November 2014 was 22.8Mbit
  • Average download speed in November 2016 was 36.2Mbit
  • Average upload speed in November 2008 to April 2009 was 0.43Mbit/s
  • Average upload speed in November 2014 was 2.9Mbit
  • Average upload speed in November 2016 was 4.3Mbit

So in the last two years, we’ve seen a 26% increase in download speed and a 22% increase in upload, bringing us down from 36/37% to 33% per annum over the 8 years. The divergence of download and upload improvements is concerning (I previously assumed they were the same, but we have to design for the lesser of the two for a peer-to-peer system).
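
For anyone who wants to check the arithmetic, the annualised figures above are just compound annual growth rates over each measurement window. A quick sketch in Python:

def cagr(start, end, years):
    """Compound annual growth rate between two measurements."""
    return (end / start) ** (1 / years) - 1

print(f"Download 2014-2016: {cagr(22.8, 36.2, 2):.0%}")  # ~26%
print(f"Upload   2014-2016: {cagr(2.9, 4.3, 2):.0%}")    # ~22%
print(f"Download 2008-2016: {cagr(3.6, 36.2, 8):.0%}")   # ~33%
print(f"Upload   2008-2016: {cagr(0.43, 4.3, 8):.0%}")   # ~33%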

The idea that upload speed may be topping out is reflected in the Nov-2016 report, which notes only an 8% upload increase in services advertised as “30Mbit” or above.

Akamai’s State Of The Internet Reports

Now let’s look at Akamai’s Q1 2016 report and Q1-2017 report.

  • Annual global average speed in Q1 2015 – Q1 2016: 23%
  • Annual global average speed in Q1 2016 – Q1 2017: 15%

This gives an estimate of 19% per annum in the last two years. Reassuringly, the US and UK (both fairly high-bandwidth countries, considered in my previous post to be a good estimate for the future of other countries) have increased by 26% and 19% in the last two years, indicating there’s no immediate ceiling to bandwidth.

You can play with the numbers for different geographies on the Akamai site.

Conclusion: 19% Is A Conservative Estimate

17% growth now seems a little pessimistic: in the last 9 years the US Akamai numbers suggest the US has increased by 19% per annum, the UK by almost 21%.  The gloss seems to be coming off the UK fixed-broadband numbers, but they’re still 22% upload increase for the last two years.  Even Australia and the Philippines have managed almost 21%.

Worse Than FailureOpen Sources


Here's how open-source is supposed to work: A group releases a product, with the source code freely available. Someone finds a problem. They solve the problem, issue a pull request, and the creators merge that into the product, making it better for everyone.

Here's another way open-source is supposed to work: A group releases a product, with the source code freely available. Someone finds a problem, but they can't fix it themselves, so they issue a bug report. Someone else fixes the problem, issues a pull request, and the creators merge that into the product, making it better for everyone.

Here's one way open-source works: Someone creates a product. It gets popular—and I mean really, really popular, practically overnight. The creator didn't ask for this. They have no idea what to do with success. They try their best to keep up, but they can't keep on top of everything all the time. They haven't even set up a build pipeline yet. They're flying by the seat of their pants. One day, unwisely choosing to program with a fever, they commit broken code and push it up to GitHub, triggering an automatic release. Fifteen thousand downstream dependencies find their build broken and show up to scold the creator for not running tests before releasing.

Here's another way open-source works: A group creates a product. It gets popular. Some time later, there are 600 open issues and over 50 pending pull requests. The creator hasn't commented in a year, but people keep vainly trying to improve the product.

Here's another way open-source works: A group creates a product. They decide to avoid the above PR disaster by using some off-site bug tracker. Someone files a bug. Then another bug. Then 5 or 10 more. The creator goes on a rampage, insisting that everyone is using it wrong, and deletes all the bug reports, banning the users who submitted them. The product continues to gain success, and more and more people file bugs, only to find the bug reports summarily closed. Sometimes people get lucky and their reports will be closed, then re-opened when another dev decides to fix the problem.

Here's another way open-source works: Group A creates a product. Group B creates a product, and uses the first product as their support forum. That forum gets hacked. Group B files a bug to Group A, rightly worried about the security of the software they use. After all, if the forum has a remote code exploit, maybe they should move to something newer, maybe written in Ruby instead of PHP. One of the developers from Group A, today's submitter, offers to investigate.

Many forums allow admins to edit the themes for the site directly; for example, NodeBB provides admins a textbox in which they can paste CSS to tweak the theme to their liking. This forum figures that since the admins are already on the wrong side of the airtight hatchway, they can inject bits of PHP code directly into the forum header. Code like, say, saving a poison payload to disk that creates an endpoint that accepts arbitrary file uploads so they can root the box.

But how did the hacker get access to that admin panel? Surely that's a security flaw, right? Turns out they'd found a weak link in the security chain and applied just enough force to break their way in. If you work in security, I'm sure you won't be surprised to hear the flaw: one of the admins had a weak password, one right out of a classic dictionary list.

Here's another way open-source works: 15% of the NPM ecosystem was controlled by people with weak passwords. Someone hacks them all, tells NPM how they did it, and gets a mass forced-reset on everyone's passwords. Everyone is more secure.

[Advertisement] Universal Package Manager – store all your Maven, NuGet, Chocolatey, npm, Bower, TFS, TeamCity, Jenkins packages in one central location. Learn more today!

,

LongNowThe AI Cargo Cult: The Myth of a Superhuman Artificial Intelligence

In a widely-shared essay first published in Backchannel, Kevin Kelly, a Long Now co-founder and Founding Editor of Wired Magazine, argues that the inevitable rise of superhuman artificial intelligence—long predicted by leaders in science and technology—is a myth based on misperceptions without evidence.

Kevin is now Editor at Large at Wired and has spoken in the Seminar and Interval speaking series several times, including on his most recent book, The Inevitable—sharing his ideas on the future of technology and how our culture responds to it. Kevin was part of the team that developed the idea of the Manual for Civilization, making a series of selections for it, and has also put forth a similar idea called the Library of Utility.

Read Kevin Kelly’s essay in full (LINK).

CryptogramDubai Deploying Autonomous Robotic Police Cars

It's hard to tell how much of this story is real and how much is aspirational, but it really is only a matter of time:

About the size of a child's electric toy car, the driverless vehicles will patrol different areas of the city to boost security and hunt for unusual activity, all the while scanning crowds for potential persons of interest to police and known criminals.

Cory DoctorowMy presentation from ConveyUX

Last March, I traveled to Seattle to present at the ConveyUX conference, with a keynote called “Dark Patterns and Bad Business Models”, the video for which has now been posted: “The Internet’s broken and that’s bad news, because everything we do today involves the Internet and everything we’ll do tomorrow will require it. But governments and corporations see the net, variously, as a perfect surveillance tool, a perfect pornography distribution tool, or a perfect video on demand tool—not as the nervous system of the 21st century. Time’s running out. Architecture is politics. The changes we’re making to the net today will prefigure the future our children and their children will thrive in—or suffer under.”

Cory DoctorowInterview with Wired UK’s Upvote podcast

Back in May, I stopped by Wired UK while on my British tour for my novel Walkaway to talk about the novel, surveillance, elections, and, of course, DRM. (MP3)

Rondam RamblingsA brief history of political discourse in the United States

1776 When in the Course of human events, it becomes necessary for one people to dissolve the political bands which have connected them with another, and to assume among the powers of the earth, the separate and equal station to which the Laws of Nature and of Nature's God entitle them, a decent respect to the opinions of mankind requires that they should declare the causes which impel

Sociological ImagesPolitical baseball: Happiness, unhappiness, and whose team is winning

Are conservatives happier than liberals? Arthur Brooks thinks so. Brooks is president of the American Enterprise Institute, a conservative think tank. And he’s happy, or at least he comes across as happy in his monthly op-eds for the Times.  In those op-eds, he sometimes claims that conservatives are generally happier. When you’re talking about the relation between political views and happiness, though — as I do — you ought to consider who is in power. Otherwise, it’s like asking whether Yankee fans are happier than Red Sox fans without checking the AL East standings.

Now that the 2016 GSS data is out, we can compare the happiness of liberals and conservatives during the Bush years and the Obama years. The GSS happiness item asks, “Taken all together, how would you say things are these days –  would you say that you are very happy, pretty happy, or not too happy?”

On the question of who is “very happy,” it looks as though Brooks is onto something. More of the people on the right are very happy in both periods. But also note that in the Obama years about 12% of those very happy folks (five out of 40) stopped being very happy. But something else was happening during the Obama years. It wasn’t just that some of the very happy conservatives weren’t quite so happy. The opposition to Obama was not about happiness. Neither was the Trump campaign with its unrelenting negativism. What it showed was that a lot of people on the right were angry. None of that sunny Reaganesque “Morning in America” for them. That feeling is reflected in the numbers who told the GSS that they were “not too happy.”

Among extreme conservatives, the percent who were not happy doubled during the Obama years. The increase in unhappiness was about 60% for those who identified themselves as “conservative” (neither slight nor extreme).  In the last eight years, the more conservative a person’s views, the greater the likelihood of being not too happy. The pattern is reversed for liberals during the Bush years. Unhappiness rises as you move further left.

The graphs also show that for those in the middle of the spectrum – about 60% of the people – politics makes no discernible change in happiness. Their proportion of “not too happy” remained stable regardless of who was in the White House.  Those middle categories do give support to Brooks’s idea that conservatives are generally somewhat happier. But as you move further out on the political spectrum the link between political views and happiness depends much more on which side is winning. Just as at Fenway or the Stadium, the fans who are cheering – or booing – the loudest are the ones whose happiness will be most affected by their team’s won-lost record.

Jay Livingston is the chair of the Sociology Department at Montclair State University. You can follow him at Montclair SocioBlog or on Twitter.

(View original at https://thesocietypages.org/socimages)

Sociological ImagesAlgorithms replace your biases with someone else’s

Originally posted at Scatterplot.

There has been a lot of great discussion, research, and reporting on the promise and pitfalls of algorithmic decisionmaking in the past few years. As Cathy O’Neil nicely shows in her Weapons of Math Destruction (and associated columns), algorithmic decisionmaking has become increasingly important in domains as diverse as credit, insurance, education, and criminal justice. The algorithms O’Neil studies are characterized by their opacity, their scale, and their capacity to damage.

Much of the public debate has focused on a class of algorithms employed in criminal justice, especially in sentencing and parole decisions. As scholars like Bernard Harcourt and Jonathan Simon have noted, criminal justice has been a testing ground for algorithmic decisionmaking since the early 20th century. But most of these early efforts had limited reach (low scale), and they were often published in scholarly venues (low opacity). Modern algorithms are proprietary, and are increasingly employed to decide the sentences or parole decisions for entire states.

“Code of Silence,” Rebecca Wexler’s new piece in Washington Monthly, explores one such influential algorithm: COMPAS (also the subject of an extensive, if contested, ProPublica report). Like O’Neil, Wexler focuses on the problem of opacity. The COMPAS algorithm is owned by a for-profit company, Northpointe, and the details of the algorithm are protected by trade secret law. The problems here are both obvious and massive, as Wexler documents.

Beyond the issue of secrecy, though, one issue struck me in reading Wexler’s account. One of the main justifications for a tool like COMPAS is that it reduces subjectivity in decisionmaking. The problems here are real: we know that decisionmakers at every point in the criminal justice system treat white and black individuals differently, from who gets stopped and frisked to who receives the death penalty. Complex, secretive algorithms like COMPAS are supposed to help solve this problem by turning the process of making consequential decisions into a mechanically objective one – no subjectivity, no bias.

But as Wexler’s reporting shows, some of the variables that COMPAS considers (and apparently considers quite strongly) are just as subjective as the process it was designed to replace. Questions like:

Based on the screener’s observations, is this person a suspected or admitted gang member?

In your neighborhood, have some of your friends or family been crime victims?

How often do you have barely enough money to get by?

Wexler reports on the case of Glenn Rodríguez, a model inmate who was denied parole on the basis of his puzzlingly high COMPAS score:

Glenn Rodríguez had managed to work around this problem and show not only the presence of the error, but also its significance. He had been in prison so long, he later explained to me, that he knew inmates with similar backgrounds who were willing to let him see their COMPAS results. “This one guy, everything was the same except question 19,” he said. “I thought, this one answer is changing everything for me.” Then another inmate with a “yes” for that question was reassessed, and the single input switched to “no.” His final score dropped on a ten-point scale from 8 to 1. This was no red herring.

So what is question 19? The New York State version of COMPAS uses two separate inputs to evaluate prison misconduct. One is the inmate’s official disciplinary record. The other is question 19, which asks the evaluator, “Does this person appear to have notable disciplinary issues?”

Advocates of predictive models for criminal justice use often argue that computer systems can be more objective and transparent than human decisionmakers. But New York’s use of COMPAS for parole decisions shows that the opposite is also possible. An inmate’s disciplinary record can reflect past biases in the prison’s procedures, as when guards single out certain inmates or racial groups for harsh treatment. And question 19 explicitly asks for an evaluator’s opinion. The system can actually end up compounding and obscuring subjectivity.
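
To make that concrete, here is a deliberately crude toy score in Python -- an invented illustration, not Northpointe's actual (and secret) model -- showing how a single subjective yes/no input can swing an otherwise mechanical ten-point scale:

def toy_risk_score(official_infractions, evaluator_thinks_problematic):
    # Objective component: a capped count from the disciplinary record.
    score = min(official_infractions, 5)
    # Subjective component: one screener opinion worth half the scale.
    if evaluator_thinks_problematic:
        score += 5
    return min(score, 10)

print(toy_risk_score(3, True))   # 8: likely flagged as high risk
print(toy_risk_score(3, False))  # 3: same record, different opinion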

This story was all too familiar to me from Emily Bosk’s work on similar decisionmaking systems in the child welfare system where case workers must answer similarly subjective questions about parental behaviors and problems in order to produce a seemingly objective score used to make decisions about removing children from home in cases of abuse and neglect. A statistical scoring system that takes subjective inputs (and it’s hard to imagine one that doesn’t) can’t produce a perfectly objective decision. To put it differently: this sort of algorithmic decisionmaking replaces your biases with someone else’s biases.

Dan Hirschman is a professor of sociology at Brown University. He writes for scatterplot and is an editor of the ASA blog Work in Progress. You can follow him on Twitter.

(View original at https://thesocietypages.org/socimages)

CryptogramCommentary on US Election Security

Good commentaries from Ed Felten and Matt Blaze.

Both make a point that I have also been saying: hacks can undermine the legitimacy of an election, even if there is no actual voter or vote manipulation.

Felten:

The second lesson is that we should be paying more attention to attacks that aim to undermine the legitimacy of an election rather than changing the election's result. Election-stealing attacks have gotten most of the attention up to now -- and we are still vulnerable to them in some places -- but it appears that external threat actors may be more interested in attacking legitimacy.

Attacks on legitimacy could take several forms. An attacker could disrupt the operation of the election, for example, by corrupting voter registration databases so there is uncertainty about whether the correct people were allowed to vote. They could interfere with post-election tallying processes, so that incorrect results were reported -- an attack that might have the intended effect even if the results were eventually corrected. Or the attacker might fabricate evidence of an attack, and release the false evidence after the election.

Legitimacy attacks could be easier to carry out than election-stealing attacks, as well. For one thing, a legitimacy attacker will typically want the attack to be discovered, although they might want to avoid having the culprit identified. By contrast, an election-stealing attack must avoid detection in order to succeed. (If detected, it might function as a legitimacy attack.)

Blaze:

A hostile state actor who can compromise a handful of county networks might not even need to alter any actual votes to create considerable uncertainty about an election's legitimacy. It may be sufficient to simply plant some suspicious software on back end networks, create some suspicious audit files, or add some obviously bogus names to the voter rolls. If the preferred candidate wins, they can quietly do nothing (or, ideally, restore the compromised networks to their original states). If the "wrong" candidate wins, however, they could covertly reveal evidence that county election systems had been compromised, creating public doubt about whether the election had been "rigged". This could easily impair the ability of the true winner to effectively govern, at least for a while.

In other words, a hostile state actor interested in disruption may actually have an easier task than someone who wants to undetectably steal even a small local office. And a simple phishing and trojan horse email campaign like the one in the NSA report is potentially all that would be needed to carry this out.

Me:

Democratic elections serve two purposes. The first is to elect the winner. But the second is to convince the loser. After the votes are all counted, everyone needs to trust that the election was fair and the results accurate. Attacks against our election system, even if they are ultimately ineffective, undermine that trust and -- by extension -- our democracy.

And, finally, a report from the Brennan Center for Justice on how to secure elections.

Krebs on SecurityWho is the GovRAT Author and Mirai Botmaster ‘Bestbuy’?

In February 2017, authorities in the United Kingdom arrested a 29-year-old U.K. man on suspicion of knocking more than 900,000 Germans offline in an attack tied to Mirai, a malware strain that enslaves Internet of Things (IoT) devices like security cameras and Internet routers for use in large-scale cyberattacks. Investigators haven’t yet released the man’s name, but news reports suggest he may be better known by the hacker handle “Bestbuy.” This post will follow a trail of clues back to one likely real-life identity of Bestbuy.

At the end of November 2016, a modified version of Mirai began spreading across the networks of German ISP Deutsche Telekom. This version of the Mirai worm spread so quickly that the very act of scanning for new infectable hosts overwhelmed the devices doing the scanning, causing outages for more than 900,000 customers. The same botnet had previously been tied to attacks on U.K. broadband providers Post Office and Talk Talk.


Security firm Tripwire published a writeup on that failed Mirai attack, noting that the domain names tied to servers used to coordinate the activities of the botnet were registered variously to a “Peter Parker” and “Spider man,” and to a street address in Israel (27 Hofit St). We’ll come back to Spider Man in a moment.

According to multiple security firms, the Mirai botnet responsible for the Deutsche Telekom outage was controlled via servers at the Internet address 62.113.238.138. Farsight Security, a company that maps which domain names are tied to which Internet addresses over time, reports that this address has hosted just nine domains.

The only one of those domains that is not related to Mirai is dyndn-web[dot]com, which according to a 2015 report from BlueCoat (now Symantec) was a domain tied to the use and sale of a keystroke logging remote access trojan (RAT) called “GovRAT.” The trojan is documented to have been used in numerous cyber espionage campaigns against governments, financial institutions, defense contractors and more than 100 corporations.

Another report on GovRAT — this one from security firm InfoArmor — shows that the GovRAT malware was sold on Dark Web cybercrime forums by a hacker or hackers who went by the nicknames BestBuy and “Popopret” (some experts believe these were just two different identities managed by the same cybercriminal).

The hacker "bestbuy" selling his Govrat trojan on the dark web forum "Hell." Image: InfoArmor.

The hacker “bestbuy” selling his GovRAT trojan on the dark web forum “Hell.” Image: InfoArmor.

GovRAT has been for sale on various other malware and exploit-related sites since at least 2014. On oday[dot]today, for example, GovRAT was sold by a user who picked the nickname Spdr, and who used the email address spdr01@gmail.com.

Recall that the domains used to control the Mirai botnet that hit Deutsche Telekom all had some form of Spider Man in the domain registration records. Also, recall that the controller used to manage the GovRAT trojan and that Mirai botnet were both at one time hosted on the same server with just a handful of other (Mirai-related) domains.

According to a separate report (PDF) from InfoArmor, GovRAT also was sold alongside a service that allows anyone to digitally sign their malware using code-signing certificates stolen from legitimate companies. InfoArmor said the digital signature it found related to the service was issued to an open source developer named Singh Aditya, using the email address parkajackets@gmail.com.

Interestingly, both of these email addresses — parkajackets@gmail.com and spdr01@gmail.com — were connected to similarly-named user accounts at vDOS, for years the largest DDoS-for-hire service (that is, until KrebsOnSecurity last fall outed its proprietors as two 18-year-old Israeli men).

Last summer vDOS got massively hacked, and a copy of its user and payments databases was shared with this author and with U.S. federal law enforcement agencies. The leaked database shows that both of those email addresses are tied to accounts on vDOS named “bestbuy” (bestbuy and bestbuy2).


Spdr01’s sales listing for the GovRAT trojan on a malware and exploits site shows he used the email address spdr01@gmail.com

The leaked vDOS database also contained detailed records of the Internet addresses that vDOS customers used to log in to the attack-for-hire service. Those logs show that the bestbuy and bestbuy2 accounts logged in repeatedly from several different IP addresses in the United Kingdom and in Hong Kong.

The technical support logs from vDOS indicate that the reason the vDOS database shows two different accounts named “bestbuy” is the vDOS administrators banned the original “bestbuy” account after it was seen logged into the account from both the UK and Hong Kong. Bestbuy’s pleas to the vDOS administrators that he was not sharing the account and that the odd activity could be explained by his recent trip to Hong Kong did not move them to refund his money or reactivate his original account.

A number of clues in the data above suggest that the person responsible for both this Mirai botnet and GovRAT had ties to Israel. For one thing, the email address spdr01@gmail.com was used to register at least three domain names, all of which are tied back to a large family in Israel. What’s more, in several dark web postings, Bestbuy can be seen asking if anyone has any “weed for sale in Israel,” noting that he doesn’t want to risk receiving drugs in the mail.

The domains tied to spdr01@gmail.com led down a very deep rabbit hole that ultimately went nowhere useful for this investigation. But it appears the nickname “spdr01” and email spdr01@gmail.com was used as early as 2008 by a core member of the Israeli hacking forum and IRC chat room Binaryvision.co.il.

Visiting the Binaryvision archives page for this user, we can see Spdr was a highly technical user who contributed several articles on cybersecurity vulnerabilities and on mobile network security (Google Chrome or Google Translate deftly translates these articles from Hebrew to English).

I got in touch with multiple current members of Binaryvision and asked if anyone still kept in contact with Spdr from the old days. One of the members said he thought Spdr held dual Israeli and U.K. citizenship, and that he would now be approximately 30 years old. Another said Spdr had recently become engaged to be married. None of those who shared what they recalled about Spdr wished to be identified for this story.

But a bit of searching on those users’ social networking accounts showed they had a friend in common that fit the above description. The Facebook profile for one Daniel Kaye using the Facebook alias “DanielKaye.il” (.il is the top level country code domain for Israel) shows that Mr. Kaye is now 29 years old and is or was engaged to be married to a young woman named Catherine in the United Kingdom.

The background image on Kaye’s Facebook profile is a picture of Hong Kong, and Kaye’s last action on Facebook was apparently to review a sports and recreation facility in Hong Kong.


Using Domaintools.com [full disclosure: Domaintools is an advertiser on this blog], I ran a “reverse WHOIS” search on the name “Daniel Kaye,” and it came back with exactly 103 current and historic domain names with this name in their records. One of them in particular caught my eye: Cathyjewels[dot]com, which appears to be tied to a homemade jewelry store located in the U.K. that never got off the ground.

Cathyjewels[dot]com was registered in 2014 to a Daniel Kaye in Egham, U.K., using the email address danielkaye02@gmail.com. I decided to run this email address through Socialnet, a plugin for the data analysis tool Maltego that scours dozens of social networking sites for user-defined terms. Socialnet reports that this email address is tied to an account at Gravatar — a service that lets users keep the same avatar at multiple Web sites. The name on that account? You guessed it: Spdr01.


The output from the Socialnet plugin for Maltego when one searches for the email address danielkaye02@gmail.com.

Daniel Kaye did not return multiple requests for comment sent via Facebook and the various email addresses mentioned here.

In case anyone wants to follow up on this research, I highlighted the major links between the data points mentioned in this post in the following mind map (created with the excellent and indispensable MindNode Pro for Mac).

A “mind map” tracing some of the research mentioned in this post.

Worse Than FailureCodeSOD: Swap the Workaround

Blane D is responsible for loading data into a Vertica 8.1 database for analysis. Vertica is a distributed, column-oriented store, for data-warehousing applications, and its driver has certain quirks.

For example, a common task that you might need to perform is swapping storage partitions around between tables to facilitate bulk data-loading. Thus, there is a SWAP_PARTITIONS_BETWEEN_TABLES() stored procedure. Unfortunately, if you call this function from within a prepared statement, one of two things will happen: the individual node handling the request will crash, or the entire cluster will crash.

No problem, right? Just don’t use a prepared statement. Unfortunately, if you use the ODBC driver for Python, every statement is converted to a prepared statement. There’s a JDBC driver, and a bridge to enable it from within Python, but it also has that problem, and it has the added cost of requiring a JVM to be running.

So Blane did what any of us would do in this situation: he created a hacky-workaround that does the job, but requires thorough apologies.

def run_horrible_partition_swap_hack(self, horrible_src_table, horrible_src_partition,
                                     terrible_dest_table, terrible_dest_partition):
    """
    First things first - I'm sorry, I am a terrible person.

    This is a horrible horrible hack to avoid Vertica's partition swap bug and should be removed once patched!

    What does this atrocity do?
    It passes our partition swap into the ODBC connection string so that it gets executed outside of a prepared
    statement... that's it... I'm sorry.
    """
    conn = self.get_connection(getattr(self, self.conn_name_attr))

    hacky_sql = "SELECT SWAP_PARTITIONS_BETWEEN_TABLES('{src_table}',{src_part},{dest_part},'{dest_table}')"\
        .format(src_table=horrible_src_table,
                src_part=horrible_src_partition,
                dest_table=terrible_dest_table,
                dest_part=terrible_dest_partition)

    even_hackier_sql = hacky_sql.replace(' ', '+')

    conn_string = ';'.join(["DSN={}".format(conn.host),
                            "DATABASE={}".format(conn.schema) if conn.schema else '',
                            "UID={}".format(conn.uid) if conn.uid else '',
                            "PWD={}".format(conn.password) if conn.password else '',
                            "ConnSettings={}".format(even_hackier_sql)])  # :puke:

    odbc_conn = pyodbc.connect(conn_string)

    odbc_conn.close()

Specifically, Blane leverages the driver’s ConnSettings option in the connection string, which allows you to execute a command when a client connects to the database. This is specifically meant for setting up session parameters, but it also has the added “benefit” of not performing that action using a prepared statement.

If it’s stupid but it works, it’s probably the vendor’s fault, I suppose.

[Advertisement] Onsite, remote, bare-metal or cloud – create, configure and orchestrate 1,000s of servers, all from the same dashboard while continually monitoring for drift and allowing for instantaneous remediation. Download Otter today!